This paper examines the use of Web search engines by faculty and students to support learning, teaching, and research. We explore the academic tasks supported by search engine use to investigate if and how students and scholars vary in their use patterns. We also investigate satisfaction levels with search outcomes and trust in search engines in supporting specific tasks. This study is based on triangulating three data–gathering methods: a Web–based survey, interviews, and search log reviews. One of the goals of the study is to demonstrate how each methodology exhibits a unique strength in collecting information about different dimensions of search behavior and perceptions. We conclude that, although there are variations in search engine use among the faculty, graduate students, and undergraduate students surveyed, their mean levels of overall satisfaction with the outcomes of their searches, and of trust in search engines in supporting their studies and research, converge. The paper concludes with a discussion of the implications of the findings for future search engine research and for information practitioners.
Search engines have become an integral part of our information environment. Increasingly they are replacing the role of libraries in facilitating information discovery and access. Googling has become synonymous with research (Mostafa, 2005). Recent statistics indicate that Google has become the search interface of choice for many faculty and students to address their information needs, far exceeding their use of library catalogs or other online citation databases (Griffiths and Brophy, 2005). An international survey (OCLC, 2005) reports that 89 percent of information searches undertaken by college students begin with a search engine and Google is the overwhelming favorite (68 percent). The same trend is observed for faculty and researchers (Schonfeld and Guthrie, 2006).
The purpose of this research is to explore the search engine experiences of students and faculty as they seek information to support their learning, teaching, and research. The study offers a holistic perspective by examining use behaviors within the context of supported processes and expected outcomes in an academic environment. Drawing on multiple data–gathering methods, we merge qualitative and quantitative information to illustrate the situated and goal–driven nature of the information gathering and consumption process.
Search engine use is an embedded task that is determined by individuals’ specific work contexts and needs. Marchionini (1995) defines a task as the manifestation of an information seeker’s query that determines the information–seeking action. As Kim (2009) concludes in his study, in order to expand our understanding of users’ interactions with search engines, we must expand our knowledge of the search context and associated tasks. This context includes not only finding information but also utilizing the discovered information successfully to accomplish a certain task. Considering the search engine use patterns of specific user groups will facilitate more task–focused assessment and development of search engines. Although our study applies to search engines in general, we focus specifically on the Google search engine, as it is the most commonly used search environment, a fact also confirmed by the data gathered for this study.
Research goals and questions
As Jansen, et al. (2008) point out, in order for search engines to improve, we need to expand our understanding of user behavior and the underlying intent with which users conduct searches. Web searches reflect a diverse set of underlying user goals and we assume that understanding these objectives will help to improve search engines and develop applications that can enable effective use of information identified through searches. Our research goals include:
Exploring how students and faculty find information using search engines to support their research and studies and determining whether search behavior and satisfaction levels vary by academic discipline (science, humanities, engineering, social sciences) or affiliation type (undergraduate students, graduate students, faculty);
Understanding significant aspects of the experiences and perceptions of searchers, such as the extent to which they trust in search results, their satisfaction with the outcomes of search results, and problems encountered and negative consequences of search engine use; and,
Suggesting design principles based on the data gathered that will improve the functionality of search engines, leading to greater user satisfaction in search engine use in support of learning, teaching, and research.
In order to investigate these issues, we pose the following research questions:
RQ1: What are the academic goals (search intents) of students and faculty when they use search engines?
RQ2: Are there any variations in search intents based on affiliation type or academic discipline?
RQ3: Are there any variations in satisfaction levels based on affiliation type or academic discipline?
RQ4: Do students and faculty trust search engines to provide an adequate representation of the information space in support of learning, teaching, and research?
RQ5: What are the areas of improvement for increased user satisfaction for academic searches?
As described in the research methodology section, we are also interested in comparing various data–gathering methods in order to assess the unique strengths of each strategy in shedding light on diverse aspects of human-information interactions.
Internet search engines increasingly serve as the first option for people who want to find information. A diverse range of articles report the results of studies of the information–seeking and retrieval behavior observed in search engine environments (Kim, 2009; Thatcher, 2008; Jansen, et al., 2008; Griffiths and Brophy, 2005; Mostafa, 2005; Rose and Levinson, 2005; Broder, 2002; Choo, et al., 2000). However, as Hargittai (2007) argues, most of the research focuses on technical aspects of search engines without taking into consideration the sociocultural context of use or the practices of the users who rely on search engines. Marchionini points out that the initial search engine research focal point has been on increasing efficiency, precision, and the recall capacities of search engines. Such research findings have not only benefited searchers through improved search algorithms and interfaces but also have supported the commercial sector by improving the accuracy of advertisement placement and other online e–commerce activities.
Our goal is not to question search algorithms or user interface design aspects of search engines, but rather to examine the needs–based process that results in a user’s typing text into the search engine query box as well as that user’s perception of the results of such a search. In this regard, Spink (2002) explores a user–centered approach to the evaluation of Web search engines and provides a useful framework for our study. Her user–centered approach to such evaluation includes effectiveness and usability. Assessment of effectiveness is based on gauging the impact of users’ interactions with search engines on information problems at the information–seeking stage. In contrast, usability testing involves the assessment of screen layout and system capabilities for users. Our study takes search engine effectiveness–related issues into account as it investigates satisfaction with search results. We believe that in real life (work–in–practice), users base their overall assessment of search engines on how the search results support their overall goals, such as verifying a citation, rather than the specifics of the search results displayed.
The purpose of search engine use
Extant work on understanding user Web–search behavior focuses mainly on how users search but not on why they search. One of the few exceptions is the taxonomy of Web searching developed by Broder (2002), which was expanded by Rose and Levinson (2005). They differentiate between navigational searches that intend to find a specific Web site and informational searches focused on finding information about a specific topic. A third category is identified as transactional searches that aim to support such behavior as downloading software or online shopping. We implement these categories in gathering data to understand the usage patterns of students and faculty.
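This three–way taxonomy is often approximated computationally with simple surface heuristics over the query string. The sketch below is a hypothetical toy classifier offered only to illustrate the categories; the cue lists are our own illustrative assumptions, not a method used in this study or in the cited work.

```python
# Toy heuristic classifier for Broder's (2002) query-intent taxonomy.
# The cue lists are illustrative assumptions, not empirically derived.

NAVIGATIONAL_CUES = ("www.", ".com", ".edu", "homepage", "home page")
TRANSACTIONAL_CUES = ("download", "buy", "install", "shopping")

def classify_intent(query: str) -> str:
    q = query.lower()
    if any(cue in q for cue in NAVIGATIONAL_CUES):
        return "navigational"   # aims at reaching a specific Web site
    if any(cue in q for cue in TRANSACTIONAL_CUES):
        return "transactional"  # aims at performing an action
    return "informational"      # default: find information on a topic

print(classify_intent("cornell library homepage"))        # navigational
print(classify_intent("download spss"))                   # transactional
print(classify_intent("zipf principle of least effort"))  # informational
```

Real systems use far richer signals (click behavior, query logs, language models), but even this crude rule set conveys why the categories are attractive for analyzing large query sets.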
Our goal is to contribute to existing research by focusing specifically on search engine use behavior for academic use. We seek to consider various purposes behind academic uses of search engines and searchers’ perceptions and satisfaction levels with the results. The outcomes of studies such as ours can be utilized to develop design principles to improve the functionality of search engines, leading in turn to greater user satisfaction.
Underlying reasons behind search results viewing patterns
Several related studies conclude that search engine users are unwilling to invest additional effort to improve their strategies and often settle on simple keyword searches, viewing only the first results page (Haglund and Olsson, 2008; Jansen, et al., 2008; Griffiths and Brophy, 2005). This observation is often linked to Zipf’s principle of least effort, which is perceived as a natural impulse in academics (Brophy and Bawden, 2005). The principle implies that information seekers tend to use the most convenient search method and searches end as soon as minimally acceptable results are found. It emphasizes an inclination on the part of most searchers to use tools that are familiar and easy to use.
In a related study, Griffiths and Brophy (2005) conclude that students often trade the quality of the results they obtain for effort and time spent searching and are confused about how to assess the quality of search outcomes. Based on their study of academic researchers, Haglund and Olsson (2008) argue that search methodology is often based on “trial and error” and is usually undertaken with no particular strategy. Other studies examine the effect of search user interface design on user behavior, such as link following, and conclude that users often do not go beyond the top 20 results of retrieved search sets; indeed, the majority of the users view only one page of results (O’Brien and Keane, 2006; Granka, et al., 2004; Jansen and Spink, 2003).
Some studies conclude that trust and branding are significant factors in determining search engine use behavior. For instance, Pan, et al.’s (2007) eye–tracking experiment reveals that college students click on results in higher positions partly because of their substantial trust in Google’s ability to rank results by their true relevance to the query. Jansen, et al.’s (2009) laboratory study also indicates that branding affects user perception of search engine performance and that users generally view their favorite and familiar search engines in a positive manner. On the other hand, O’Brien and Keane (2006) propose that the underlying reason behind this behavior is likely that users seek satisfactory, as opposed to optimal, results. One of our research goals is to consider how our informants view search results and their overall satisfaction with the results in fulfilling their research needs.
Refining search results
The notion of context within information–seeking behavior has received a great deal of attention in the information retrieval (IR) literature over the past decade, primarily in studies of information seeking and IR interactions (Cool and Spink, 2002). Recently, consideration of context in IR has expanded to include the search engine domain as a means of addressing the unique problems faced in this new search environment. For instance, Liao (2008) argues that we are facing information deluge on the Web and spend much time and effort examining the results provided by a search engine because search engines function with limited knowledge about users’ preferences and search histories. A new research genre (structural re–ranking) now exists to examine how to re–rank search engine results, based on new contextual information, to better list search results in an order that is more desirable to a user (Joachims and Radlinski, 2007). Case studies such as ours that investigate search engine use in support of specific tasks will contribute to this realm of research by addressing the context of information–seeking behavior.
Information retrieval (IR) research has been shifting from the study of discrete elements, such as the typing in of a few keywords in a search box, towards providing an ecological account of “human–information interaction” (Marchionini, 2008). The research agenda in the human–computer interaction community is expanding to incorporate how users interact with information throughout search sessions. Search engines help to retrieve digital information represented in a wide range of formats such as text, audio, video, data, and simulations. In this new information environment, users are no longer simply reading or printing information but instead are engaged in more complex processes such as reviewing, selecting, linking, downloading, installing, annotating, interpreting, and saving in a citation database. Evolving our research perspective from human–computer interaction to human–information interaction facilitates a shift towards understanding how search engines are used to fulfill various purposes of individuals.
Figure 1: Conceptual framework used in the study.
As illustrated in Figure 1, we view search engine use as a task–driven process motivated by the learning, teaching, and research needs of our specific population. Taking into consideration search intent within the scope of general academic tasks provides a holistic research framework. Our theoretical framework presupposes the nested nature of information searching, conceiving it not as a discrete activity but as an effort that is integrated into a full academic workflow of thinking, conceptualizing, writing, reading, reviewing, and reflecting. The goal of search engine use among academic users is not merely acquiring information but putting this information into use to address a specific need such as writing a term paper.
Table 1 includes the definitions for the key constructs used in the conceptual model. Our study is an exploratory one; therefore, it does not aim to test hypotheses derived from this conceptual framework. The goal is to expand our understanding of how students and faculty interact with search engines and of variances in search patterns and consequences for different user groups. The premise that context matters broadly frames our investigation and determines the scope of our study. Search engine use behavior is seen as a communicative process determined by the tasks and search intents of a user. Within this model, our focus is on variances in search behavior and satisfaction levels among undergraduate students, graduate students, and faculty. We are also interested in exploring their perceptions, such as trust in the performance of a search engine.
Table 1: Key constructs used in the conceptual framework.

Task: The manifestation of an individual’s query that determines the information–seeking action (Marchionini, 1995).
Search intent: Navigational queries intend to find a specific Web site, such as locating a home page; informational queries focus on finding information about a specific topic; and transactional searches aim to support such behavior as downloading software or online shopping (Rose and Levinson, 2005; Broder, 2002).
Trust: Confidence in a search engine’s ability to retrieve and rank results by their true relevance to a query (Pan, et al., 2007).
Satisfaction: User’s judgment of the overall success of a search engine in providing help for a specific information need or problem (Su, 2003).
As Jansen’s (2006) analysis reveals, the primary research methodology in Web–searching studies utilizes transaction logs to analyze and compare search engine use. Studying search engine behavior based on log analysis is difficult for several reasons. First, although search engines themselves generate voluminous datasets based on the logs of users, these data are proprietary and often difficult to obtain (Hargittai, 2007). Also, such log data lack important contextual and outcomes information, including information about tasks supported and satisfaction with search results. Transaction logs allow objective, quantitative, and generalizable analysis of searching and viewing patterns. They do not, however, support the assessment of either the degree of success with which users located needed information or the level of satisfaction with search results they experience. For instance, if a searcher changes her search strategy by revising the keywords used, does that indicate that she was not able to find what she needed or might it indicate that she located the required information and is now searching for another related piece?
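To make this contrast concrete, the sketch below shows the kind of measure a transaction log does support: grouping timestamped queries into sessions and counting them. The log records and the 30–minute session boundary are hypothetical illustrations; note that nothing in such data reveals whether either session ended in success or satisfaction.

```python
from datetime import datetime, timedelta

# Hypothetical log records: (timestamp, query). A transaction log shows
# *what* was searched and *when*, but not the task behind the query or
# whether the searcher was satisfied with the results.
log = [
    (datetime(2008, 11, 3, 9, 0), "citation manager comparison"),
    (datetime(2008, 11, 3, 9, 5), "zotero vs endnote"),
    (datetime(2008, 11, 3, 14, 30), "cheap tickets to china"),
]

SESSION_GAP = timedelta(minutes=30)  # assumed session-boundary convention

def count_sessions(records):
    """Count search sessions, starting a new one after a 30-minute gap."""
    sessions = 0
    last = None
    for ts, _query in sorted(records):
        if last is None or ts - last > SESSION_GAP:
            sessions += 1
        last = ts
    return sessions

print(count_sessions(log))  # 2: the morning pair, then the afternoon query
```

Whether the second morning query ("zotero vs endnote") marks a failed first search or a successful one that prompted a follow–up is exactly the ambiguity the paragraph above describes.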
Another common research methodology utilizes laboratory studies in experimental settings (Kurland and Lee, 2005; Joachims and Radlinski, 2007). This method is successful in gathering data in regard to session length, query complexity, and content viewed; however, it may be of limited utility in helping researchers uncover the actual search and review process. Laboratory studies are often based on pre–determined and scripted search scenarios in order to test specific behavior such as preferences for navigating search results. Although they are instrumental in revealing usage patterns, such as a tendency on the part of searchers to click on a result within the first three pages of search results, they may not be as effective in illuminating the underlying reasons behind such regular search behavior or subsequent perceptions and satisfaction levels.
To overcome the limitations of transaction log analysis and laboratory studies, our study is based on triangulating three research methods to investigate different aspects of search engine use behavior. We believe that each research methodology has its own strengths in facilitating our exploration of search engine use behavior and perceptions. Together, the data–gathering strategies not only give us a holistic picture for our case study but also allow us to compare varied research approaches. Our research methodology was approved by Cornell’s Institutional Review Board in October 2008. The following section introduces our data–gathering strategies, which consist of interviews with librarians, a Web–based questionnaire, follow–up interviews, and Google History log discussions. Table 2 lists the data gathering methods and their main contributions to the overall research strategy.
Table 2: Data gathering methods used in the study.

Interviews with librarians: Gather perspectives on how faculty and students are using search engines, to inform our questionnaire design and follow–up interviews. Sample: 10 reference librarians from different disciplinary libraries.
Web–based questionnaire: Gather baseline search engine use data on search engine preferences, search intents, frequency of search, and satisfaction levels. Sample: 96 students and faculty.
Interviews: Explore perceptions about results obtained and trust in the effectiveness of search engines in supporting academic work. Sample: 32 students and faculty.
Google History logs: Confirm the information gathered via the questionnaire about search patterns such as search intent and frequency of searches. Sample: 24 students and faculty.
Interviews with librarians
As a part of our initial survey phase, we interviewed an interdisciplinary group of Cornell University Library reference librarians to gather their perspectives on how faculty and students are using search engines to support their scholarly research. We conducted 15–20 minute interviews with 10 reference librarians representing the following broadly conceived disciplines and programs: social sciences, humanities, engineering, arts, and agriculture and life sciences. We elicited their observations in regard to how students and faculty use search engines in finding information in support of their research and studies. In the first step of our research, interviews with reference librarians informed our understanding of tasks supported by search engines and common search engine use behaviors and perceptions. We used this information to structure a questionnaire in order to consider variations among distinct academic groups as well as to confirm search engine patterns and frequency of use.
Web–based questionnaire

The Web–based questionnaire was administered to gather baseline search engine usage information from a sample group of Cornell students and faculty in November and December 2008. We gathered information about demographic characteristics (age, academic status, and gender), search engine usage patterns and search types, and frequency of search engine use. We also included questions to help us understand why respondents search and how they navigate, and to gauge their general satisfaction levels. In particular, we administered the Web–based survey to provide a baseline “profile,” or approximation, of key user behavior to address our research question about potential search behavior variances. The primary multiple–choice questions from the questionnaire include:
- What is your primary search engine?
- Which rating describes how frequently you use each search engine (frequency scale)?
- Please indicate how you use search engines (see Figure 4 for a list of sample search intents)
- What is your overall satisfaction level with search engines in terms of fulfilling your research and learning needs?
We recruited informants by sending a participation invitation to approximately 200 faculty members selected randomly from the Cornell University directory. The response rate from faculty contacted was 18 percent. We also recruited undergraduate and graduate students in commonly visited cafes at the university by offering them three–dollar coupons. Ninety–six individuals completed the Web–based questionnaire, representing various university affiliations and disciplinary backgrounds. More information about the informants’ demographics is included in Table 3. There is a relatively representative distribution across university affiliation types, with 25 percent at the undergraduate level, 36 percent at the graduate level, and 39 percent consisting of faculty and researchers. The disciplinary representation is not even, however, as the majority of the respondents represent the social science (37 percent) and engineering (33 percent) domains.
Table 3: Distribution of survey respondents by affiliation and discipline.
Note: The numbers separated by plus signs indicate male vs. female (e.g., 4+6). The numbers in parentheses specify the follow–up interviews.
                Social sciences   Engineering   Humanities   Sciences   Row total   Percentage
Undergrad       4+6=10            …             …            …          23          25%
Graduate        4+6=10            …             …            …          33          36%
Faculty         10+4=14           …             …            …          36          39%
Column total    18+16=34          20+10=30      9+6=15       6+7=13     53+39=92    100%
Percentage      37%               33%           16%          14%
Interviews with students and faculty
As a follow–up to the Web–based questionnaire, we conducted 20–40 minute structured interviews with 32 students and faculty to examine their search engine usage patterns and to investigate their perceptions and satisfaction levels. The interviews were conducted during November and December 2008 and from April through June 2009. We recruited interviewees through the questionnaire, one question of which asked respondents whether they were willing to be interviewed in return for inclusion in a drawing to win dinner for two at a local restaurant. We have included the demographic data on the interviewees in Table 3.
The goal of the interviews was to investigate the social and cognitive aspects of the search process in the context of interviewees’ daily academic activity. We asked questions to understand what is involved in their research and how they go about finding and using relevant resources to support their learning and research activities. We were also interested in learning when and why they find search engines useful as well as recording their suggestions for improvement. Such questions are difficult to articulate in an online survey, as they may require probing or follow–up clarifications. The interview questions included:
- Would you show us how you search for scholarly materials using a search engine?
- When (in which cases) do you find search engines useful and what do you like about them?
- When do you find search engines insufficient or frustrating? Describe for us a case of search engine use that did not result in what you needed.
- Do you think that Google provides you with an adequate representation of the information space you are interested in?
Google history logs
Web History is a Google feature that records personal searches. It tracks Web–searching activities, such as most–visited sites and top searches, revealing trends in search behavior. Figure 2 is a screen shot of a Google History page showing search statements as well as search patterns such as frequency of searches and commonly used search terms and phrases. As we scheduled interviews, 24 informants agreed to activate the feature so that they could share their actual search behavior information with us.
During the interviews, we asked informants to retrieve their history pages and give us information about their frequency of use as indicated by Google’s heat calendar, sample search topics, their satisfaction level with the search results (as they recalled their experiences), and their use of alternative strategies (such as searching again with different keywords). To respect the privacy of personal search information, we did not directly view the Google History logs but relied on the informants’ interpretation of the information as they reported facts from their log histories. This practice inevitably allowed the informants to filter the information they disclosed to us; we do not, however, perceive this as a significant shortcoming of the study. We believe that informants provided us with sufficiently informative observations.
Figure 2: Sample Google History Page showing frequency of search and specific search history for a given date.
As described earlier, we put forth five research questions to structure our exploratory study. The first two questions investigated the search engine use intents of students and faculty and asked whether approaches differed by affiliation type (student vs. faculty) or academic discipline. We also gathered our informants’ opinions on their satisfaction levels with the results of their searches within the scope of their academic work. A final issue investigated was whether our informants trust a search engine such as Google to provide an adequate representation of the information space in support of their learning and research. The following discussion describes the results of our study in regard to these five matters.
Search intent of students and faculty
The reference librarians interviewed unanimously confirmed that both faculty and students prefer search engines over other resources to support their academic work. We asked the librarian informants to provide anecdotal observations about the types of goals search engine searches are supporting. Figure 3 summarizes our findings from the interviews. Navigational searches intend to reach a specific Web site (such as locating a publisher’s home page), whereas informational searches, which seek information on a topic or by a given data point (such as the name of an author or a publication), are more broadly focused. Transactional searches support accomplishing tasks such as connecting to a database for statistical analysis.
Figure 3: Goals behind academic searches categorized by librarian informants.
We found the information represented in Figure 3 useful in preparing our questionnaire, as we included questions to verify and quantify tasks that are supported through search engine use. However, our follow–up interviews suggested that it was not easy, and sometimes not possible, for the student and faculty informants to categorize their search goals. Often, search types were nested and involved multiple intents. For instance, a search goal that began as an informational search sometimes transitioned into a navigational search as an individual progressed within a given search and followed different links of interest. Therefore, we decided not to use search intent type categories in reporting the findings of the questionnaire and instead included actual examples of search goals as they appeared in the questionnaire (see Figure 4).
Variations in search engine use
The librarians interviewed observed some variations in the search behaviors of undergraduate students, faculty, and graduate students. For instance, they noted that there is a broader awareness of specialized Google tools such as Google Scholar and Google Book among faculty members and graduate students. However, they pointed out that, regardless of affiliation type or academic discipline there is an increasing reliance on search engines for supporting academic work. They reported an overall convergence of search techniques and a homogenization of information–seeking patterns among various groups of users. They noted that both students and faculty prefer simple keyword searches while rarely using tools that are designed to increase the precision of search results (such as qualifying keywords with brackets or quotation marks).
The key purpose of our Web–based survey was to gather data to test for differences in search engine use frequency, intent, and satisfaction levels based on affiliation type and disciplinary affiliation. The data analysis revealed negligible variations in use frequency, search intent, or satisfaction level based on disciplinary affiliation. Our findings may indicate a convergence of search engine use behavior; however, we cannot confirm such a conclusion due to the limitations of our sample size. We did find some differences among faculty, graduate students, and undergraduate students. The following section describes some of our findings from the questionnaire to illustrate search engine usage patterns and variance among distinct academic groups.
Tasks supported by search engines. As seen in Figure 4, the survey respondents use search engines for a wide range of academic purposes. The graphical display of the percentage distribution of use also indicates that there are variations in the tasks supported. The faculty and graduate student respondents display similar characteristics, whereas the undergraduate group indicated a different use pattern.

The interview process was undertaken in part to review our informants’ Google Histories. According to the Google History records of those who were interviewed, informants averaged 21 or more searches a day, often grouped into four or five sessions. When informants were asked about this without referring to their Google History results, they tended to underestimate the number of their Google searches; many were surprised to see how often they use the Google search engine. An observation we heard often during the interviews was that a search engine such as Google supports information needs for work, study, and life, with day–long use of search engines to meet everyday information needs. The Google History examples reflected a diverse range of searches, from “cheap tickets to China” (as a graduate student was preparing to go home for winter break) to “high energy physics particle accelerator.” It is difficult to examine search engine use for a single purpose because information–seeking behavior is pervasive and involves all types of information needs, including academic research, recreation, health, and hobbies.
Figure 4: Tasks supported through search engine use by academic affiliation type. For instance, 50% of the undergraduate respondents (n=23) indicated that they search for articles and books once a day or more. The Y–axis includes tasks supported by academic affiliation. The X–axis indicates the percentage of respondents within each category of academic status. The responses are summarized into two groups, ‘once a day or more’ and ‘less than once a week,’ for the purpose of sample comparison.
Figure 5: Percentage of search engine use among the groups of informants comparing their preference for three search engines by frequency of use. The charts indicate strong preference for Google as the primary search engine. The figure also demonstrates that the frequent users prefer Google as their search engine.
Satisfaction with search engines
Overall, the librarians interviewed during our study commented that they rarely hear complaints about search engines and that students and faculty appear to be satisfied, especially with Google. They stressed that there is blind trust and an increasing reliance on search results, especially on whatever appears on the first couple of screens. They expressed concerns that convenience and expediency are driving the information–seeking behavior of academics, primarily among undergraduate students.
In the Web–based survey, we asked students and faculty to rate their overall satisfaction with their chosen search engine based on their assessment of the utility of the returned results in facilitating their academic work. The survey results show that the undergraduate students surveyed are more satisfied with the performance of search engines, while the opinions of the faculty and graduate student respondents appear similar to one another. Merging satisfaction levels 4 (somewhat satisfied) and 5 (very satisfied), as shown in Figure 6, 70 percent of the faculty respondents, 78 percent of the graduate students, and 82 percent of the undergraduate respondents indicated that they were satisfied with how well search engines support their research and studies.
Our follow–up interviews with 32 students and faculty continued to confirm that Google is uniformly the search engine of choice. Sample comments from the interviews include, “Google delivers,” “I always start my research at Google,” and “Google is my personal search engine.” Google was consistently described as reliable, efficient, and fast. The adjectives used during the interviews in characterizing Google included “thorough,” “comprehensive,” “easy,” “clean,” and “accurate.”
Figure 6: Satisfaction levels of faculty, graduate students, and undergraduate students. The respondents answer the question, “Overall, how satisfied are you with search engines?” using a 5–point scale of (1) Very Dissatisfied, (2) Somewhat Dissatisfied, (3) Neutral, (4) Somewhat Satisfied, and (5) Very Satisfied.
Role of trust in reviewing search results
Similar to the findings reported in the literature review section of the paper, the review of the Google History logs and the informants’ accounts of their search behavior during the interviews indicate that most searches are conducted using one to three keywords. They reported that they often find what they need within the first two pages of results and rarely feel the need to view more than what is shown on the first two screens. Informants reported that if they cannot find what they seek within the first two pages of search results, they revise their search by adding additional keywords or using an alternative keyword.
When we asked for the reasons behind the preference for revising their search statements rather than continuing to look at the search results past the first couple of pages, most respondents stated that, based on their experience, they trust that what they need will appear within the first couple of pages. They said that it was a matter of using the right keywords and that they felt the search engines were very precise in their indexing and ranking. A communication undergraduate student said, “The reason I usually do not go past the first page is I either find what I want or figure out by looking at the titles listed that I need to revise my search.” We heard similar remarks from others indicating that searchers do not go beyond the first couple of pages of findings not only because they follow the principle of least effort but also because of previous search experiences. Several informants reported that they feel in control as searchers and believe that they have the necessary skills to successfully find what they need.
Users find Google very intuitive and easy to use and believe that it represents the information space in which they are interested with excellent breadth and depth. Our informants, especially the undergraduate and graduate students, consistently made remarks that reflected their trust in and loyalty to Google. One of our graduate student informants from information science expressed her opinion by saying, “It [Google search] is organic — so natural to interact with.” Several informants noted that they felt the search engine algorithms were consistently improving and functioning in a more sophisticated way. As a graduate student from the engineering department noted, “The addition of new Google portals such as Google Scholar is great because the company recognizes that academics have different information needs.” There is confidence in Google’s broad and diverse coverage. The informants also mentioned that they have faith in Google due to its proven track record of innovation and constant improvement of the search algorithms and crawling techniques that enable users to harvest information.
Areas for improvement
While the Web has made the communication and sharing of research ideas and results among researchers much more efficient, its dynamic, burgeoning, and unstructured nature also brings along information overload and other challenges involved in analyzing and refining search results (Chau, et al., 2006). Our interviews revealed that there were concerns about the information management challenges associated with having access to large and diverse corpuses of digital information. The problem space was described as assessing and using the information found through the search engines rather than as the actual discovery process. Based on our case study, we can make the following observations that were revealed through our interviews.
Information filtering and management. The most common suggestion for improving the search engine experience was for search engines to provide convenient and integrated access to tools for assessing and organizing information found, especially for managing citations. This comment reminds us of the integrated nature of the research process, as it involves searching, filtering, reviewing, extracting, printing, and taking notes. The conclusion section includes a design principle in support of this finding. Another challenge brought up during our interviews was the difficulty of differentiating searches that rely on the same keywords. A common example would be differentiating the task of looking for a specific full–text article written by a certain author versus that of identifying articles that cite papers by a specific author. Entering “Joy Asam” may, for example, retrieve information in both categories. It is difficult to conduct a search that does one or the other but not both. As one of the undergraduate history students said, “When you are dealing with an overflow of information, research sometimes seems more difficult than trying to deal with scarce resources.” Another undergraduate engineering student made a similar remark: “Sometimes I spend too much time trying to find exactly what I want [from a list of findings] so I keep on trying new ways of searching.” Such observations led us to postulate that the preference for viewing only the first couple of pages of search results can also be attributed to trying to overcome some of the shortcomings of search engines.
Information overload. Among some of our informants, the abundance of information that searches reveal and the ease of discovering and retrieving it pose a temptation to print or download without skimming. Several of the students interviewed expressed concerns about information overload and referenced unread articles on their desks or occupying their hard drives as saved PDF files. Such responses were also in some cases conflicting, as some observed that, although they sometimes found themselves becoming distracted by leaving the main search path, they also find this flexibility useful as it supports the discovery of unexpected information. For instance, a social scientist faculty informant explained, “I was looking for a particular article in the Rural Sociology journal when I noticed that one of the issues is relevant to one graduate student’s research area.” She went on to describe how she opened her e–mail client to send the information to her student before proceeding with her search. She described this as “not getting distracted or sidetracking but discovering something new along the way.”
Privacy of Web searches. During our interviews, the majority of the informants expressed their awareness that search engines generate advertising revenues based on searchers’ search engine use patterns. They appeared nevertheless to continue to perceive search engines as fair and unbiased sources of information. The Google History log was an unknown feature to most of our informants. We felt that seeing the information captured by Google brought a new level of awareness of the amount of personal search information captured by Google. Some of the informants questioned if and how this personal search pattern information is being used by Google and its commercial partners.
Our research was designed as a case study to investigate how search engines are being used by students and faculty. Rather than being structured as a generalizable study, it was intended to be an exploratory study to expand our understanding of how search engines support academic work, and of similarities and differences in search patterns and perceptions. We also tested the roles of various research methods in revealing information about distinct aspects of human–information interactions. We believe that using multiple data–gathering methods helped us view user–search engine interactions from complementary research angles.
The interviews with reference librarians were useful in situating the case study within the context of academic institutions. Through the use of a questionnaire, we gathered quantitative data to verify the librarians’ observations about search intents as well as gathering baseline information about usage patterns. We found the follow–up interviews instrumental in uncovering contextual issues, especially both positive and negative consequences of search engine use. Integrating the Google History log analysis with our interviews allowed us to learn about actual usage without trying to interpret anonymous log data that lack context.
As described in the Findings section, our data collectively indicate that our informants use search engines pervasively to support a variety of tasks. Although we noted some differences in search patterns based on affiliation type, we also observed considerable convergence in search behavior and perceptions regardless of academic discipline or gender. By and large, we heard positive comments about search engines, specifically Google, and concluded that users are generally satisfied and exhibit high levels of use and dependence. Although we tried to focus on specific tasks supported in academic work, we found out that it is difficult to examine search engine use for a single purpose because information–seeking behavior is pervasive and involves all types of information needs, including academic research, recreation, health, and hobbies.
Similar to the related studies cited in the literature review, we also conclude that our informants prefer to use simple keyword searches and view only the first couple of results pages. Based on our interviews with the students and faculty informants, rather than attributing this trend to a single factor, we postulate that this behavior is rooted in several factors. First, as the principle of least effort indicates, our informants appear to seek satisfactory as opposed to optimal results, especially within the context of academic life. Second, they rely on their previous search experiences, which suggest that what they need often appears within the first couple of pages due to the precision of search engine indexing and ranking algorithms. Third, as illustrated in the Areas for Improvement section, the preference for viewing only the first couple of pages of search results can also be attributed to trying to overcome some of the shortcomings of search engines by repeating the search using alternative keywords. Finally, we also observed that branding may be a factor, especially for the Google search engine users. Several of the informants during the interviews praised Google for its innovative and forward–looking information technology agenda.
We can interpret our data from both constructivist and constructionist perspectives. Constructivism as a theoretical stance proposes that each person, depending on her specific needs and local contingencies, makes various uses of information technologies such as a search engine (Leonardi and Barley, 2008). From a constructivist perspective, we can argue that the searchers’ specific needs and statuses determine how search engines are used as well as the resulting satisfaction levels. Such a constructivist analysis highlights the variances in use patterns and perceptions based on actual work supported.
On the other hand, the constructionist approach to research holds that people eventually construct and share similar perceptions and practices regardless of their differences (Leonardi and Barley, 2008). Therefore, from a constructionist perspective, we observe the homogenization of individual variations due to a common culture of search engine use and dependency. We noted, regardless of academic discipline or affiliation type, several common search behaviors and opinions. For example, findings are reviewed similarly across these categories. There appears to be a strong branding of search engines as well, especially of Google, as reliable and effective search tools that meet the needs of students and scholars. As Orlikowski (2000) suggests, asking questions from a constructionist perspective helps uncover the taken–for–granted and embedded nature of new media such as search engines as they blend into our daily workflows.
Since search engines are becoming a preferred method for discovering, retrieving, and organizing scholarly information, it is critical that we understand the emerging trends. Search engines may have a tremendous impact on how scholarly information is discovered, retrieved, and used. Evolving search trends have significant implications for the future of knowledge discovery and creation processes. We believe that this is an important and critical domain of human–computer interaction, directly affecting the quality of learning, teaching, and research. One of the goals of our case study was to recommend design principles that may improve the functionality of Web search engines. Based on our findings and observations with respect to search engine behavior and perceptions, we suggest two design principles that should enhance how search engines support the academic work of students and faculty.
Information management. Although we observed a generally positive attitude towards search engines, our findings indicate that the vast amount of digital information available at searchers’ fingertips may also cause information overload and a sense of frustration due to cognitive strain. Despite the ubiquity and seamless integration of search engines in our daily lives, many of our informants noted that they felt inundated with information resources. As described in the literature review section, some studies conclude that search engine users are unwilling to invest additional effort to improve their strategies and that is why they often settle for simple keyword searches and view only the first results page. We question whether the preference for viewing only the first couple of pages of search results can be fully attributed to the principle of least effort. We postulate that search patterns also indicate how searchers cope with large information spaces in order to avoid cognitive overload. In addressing the challenge of information overload, we recommend the development of a personal information manager to facilitate the tracking of information found, including downloaded articles. Figure 7 shows a prototype search browser to illustrate how it would be possible to accompany the search process with an optional information manager that tracks information that is identified as relevant by searchers, including their locations.
Figure 7: A prototype search engine screen illustrates how the search process could be accompanied by an optional information manager to track articles found. The Web site addresses of the articles are included if they are available online. The PDF icons next to the citations are hotlinks to the articles that are downloaded on a searcher’s computer.
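As a minimal sketch of the design principle above, the core of such a personal information manager is a simple record of each search together with the items the searcher marks as relevant and where each item lives (a Web address or a downloaded file). The class and field names below are hypothetical illustrations, not part of the prototype described in the paper.

```python
from dataclasses import dataclass, field

@dataclass
class FoundItem:
    """One search result a searcher marked as relevant."""
    title: str
    url: str = ""         # Web address of the article, if available online
    local_path: str = ""  # e.g., a downloaded PDF on the searcher's computer

@dataclass
class SearchSession:
    """A search query together with the items kept from its results."""
    query: str
    items: list = field(default_factory=list)

    def keep(self, item: FoundItem) -> None:
        """Record an item the searcher judged relevant."""
        self.items.append(item)

# Usage: track two articles found during one search session.
session = SearchSession(query="high energy physics particle accelerator")
session.keep(FoundItem("Intro to accelerators", url="http://example.org/a1"))
session.keep(FoundItem("Beam dynamics", local_path="~/papers/beam.pdf"))
print(len(session.items))  # 2
```

Keeping both the query and the item locations together is what would let the interface in Figure 7 show hotlinked citations alongside the ongoing search.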
Reflective design. Our study confirms the conclusions of Rieh (2002) and Xie (2006) that, when searchers review retrieved results, they not only make relevance judgments but also have to make authority and quality judgments. As search engine use becomes a common practice and is increasingly embedded in our daily lives, the search process turns into a routine without questioning the quality of information found. Sengers, et al. (2005) argue that if we are concerned about the social and cultural implications of technologies, new media design practices should support both designers and users in ongoing critical reflection about technology and its relationship to our daily lives. A commonly used search engine, such as Google, should also support skepticism about its own operation without assuming full authority and reliability. How, for instance, can we encourage students to question the quality of information they are finding on the Web? How can we instill the skills needed to use and cite information ethically and legally? We do not have any specific graphical interface design ideas; we are proposing instead that search engines add design features or provide dynamic feedback so that search engine users perceive the search process as one of negotiation and communication rather than as a one–way authoritative and definitive source of information. We make a case that search engines should play a role in building ‘digital literacy’ in order to help searchers more effectively find, analyze, and use information. The goal is to encourage searchers to integrate information effectively and efficiently by evaluating the credibility of a source, and using and citing information ethically and legally.
Limitations of this study
Due to our research time frame and interest in experimenting with different data–gathering strategies, we designed our study as an exploratory case study, especially aimed at revealing patterns that can be subjected to examination in future studies. We recognize that our sample size is small, which bears significant implications for the validity and generalizability of our findings. Although we intended to test for variances in use patterns and perceptions based on affiliation types and academic discipline, our data analysis revealed only minor differences, which are reported in the Findings section. The results may indicate a convergence of search engine use behavior; however, we cannot ascertain such a conclusion due to the limitations of our sample size. Another shortcoming of the study is the inconsistency in the sampling method: we recruited faculty members via e–mail requests, whereas the student participants were invited to participate in the study by approaching them at University cafes. Also, we relied on the willingness of our informants to participate in the follow–up interviews and discuss with us their Google Web History pages.
Another potential limitation of our study is that in order to accommodate the privacy of the informants, we did not directly view the Google History Logs but relied on their interpretation of the information as they reported facts from their log histories. However, as noted earlier, we do not perceive this as one of the shortcomings of the study. We believe that informants provided us with sufficiently informative observations.
As we study new media such as search engines, we need to recognize that information–seeking happens everyday. Various searching tools are becoming increasingly seamless and integrated as they support different tasks, from checking the side effects of a medication to verifying climate statistics for a scholarly paper. Our findings suggest that students and faculty increasingly rely on search engines to support their work. This use is increasingly becoming an integral part of their lives, turning search engines into taken–for–granted background tools.
Although there is a rich body of literature on search engine use, the existing studies often focus on improving search engine algorithms to enhance the precision and recall of findings. Even though such research is essential for improving the quality of findings, it primarily supports the needs of marketers to ensure that their Web sites are ranked within the first pages of search results. This trend may have critical consequences, as the commercial relevancy of information from a marketing perspective may become a key benchmark for improving search engine quality. Search engine companies are commercial entities. Like all such entities, their main business imperative is to maximize their revenues. As academics’ reliance on search engines increases, however, these tools occupy a central position on the information landscape and we need to pay attention to both their positive and unintended consequences in facilitating information discovery and use. Spink’s (2002) user–centered approach to Web search engine evaluation can be expanded to add a third dimension to the assessment process. In addition to exploring effectiveness and usability, Web search engine researchers must also address the negative and positive consequences of search engine use and potential implications for human–information interactions.
While existing search engines are instrumental in making searching for information much more efficient, our interviews revealed that there are concerns about the information management challenges associated with having access to large and diverse corpuses of digital information. Several of our informants described the problem as assessing and using the information found rather than the actual discovery process through search engines. The current search engine research agenda needs to be expanded beyond looking at the precision and viewing patterns of search engine findings to understanding how individuals are putting the information discovered into use to support their tasks. Case studies such as ours will add to our insights into users’ information–seeking contexts in order to support the development of search engines that take into consideration human–information interactions in a more holistic manner.
The primary contribution of our study to the search engine literature is in its methodological approach. It illustrates how to merge the strengths of different data gathering methods in understanding search engine use in everyday life context. Our case study offers a realistic glimpse into how students and faculty use search engines to support their learning, teaching, and research. The goal of using search engines is not only acquiring information but also utilizing the discovered information successfully to accomplish a certain task. We believe that studies such as ours will contribute to the search engine research literature by situating information–seeking behavior within the scope of searchers’ specific goals and associated tasks. Considering the search engine use patterns of specific user groups will facilitate more task–focused assessment and development of search engines. Also, such information is critical for information professionals such as librarians as they develop new services for students and researchers.
About the author
Oya Y. Rieger is associate university librarian for information technologies at Cornell University Library. She oversees the Library’s Web and repository development, digital preservation, electronic publishing, e–scholarship initiatives including the related organizational policies and business models. She is the coauthor of the award–winning Moving theory into practice: Digital imaging for libraries and archives (Research Libraries Group, 2000) and has served on several digital imaging and preservation working groups. Her 2008 publication by the Council on Library and Information Resources focuses on the digitization challenges presented by large–scale digitization projects. She has a B.S. in Economics, an M.P.A., and an M.S. in Information Systems. She is a Ph.D. candidate at Cornell’s Department of Communication, focusing on human–computer interaction. Her dissertation research involves investigating the role of information and communication technologies in supporting research and scholarly discourse by humanities scholars.
The genesis of this paper is a research project carried out in collaboration with Stephen Purpura (Ph.D. candidate at Cornell Information Science) during the Fall 2008 semester as part of a graduate course on advanced human–computer interaction taught by Professor Dan Cosley. I would like to thank Stephen for his contributions to the first phase of the research. One of our teammates, Ashim Jolly, assisted us with the administration of the survey. My appreciation also goes to Professor Dan Cosley and to Hronn Brynjarsdor from Cornell Information Science, who provided us with feedback during our research. Last but not least, I am grateful to Professor Geri Gay, Department of Communication, for her ongoing support and guidance.
1. Marchionini, 1995, p. 36.
2. Precision and recall are commonly used quantitative indicators of search quality in the information and library science literature. Precision is a measure of the usefulness of a findings list and measures how well a search engine performs in not returning non–relevant documents. Recall is a measure of the completeness of the list and of how well the search engine performs in finding relevant documents. Recall is 100 percent when every relevant document is retrieved.
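The definitions in this note can be made concrete with a small worked example (ours, not from the paper): given the set of documents an engine returns and the ground-truth set of relevant documents, precision is the fraction of retrieved documents that are relevant, and recall is the fraction of relevant documents that were retrieved. The function and document names below are illustrative only.

```python
def precision_recall(retrieved, relevant):
    """Compute (precision, recall) for a single result list.

    precision = |retrieved ∩ relevant| / |retrieved|
    recall    = |retrieved ∩ relevant| / |relevant|
    """
    retrieved_set = set(retrieved)
    relevant_set = set(relevant)
    hits = retrieved_set & relevant_set
    precision = len(hits) / len(retrieved_set) if retrieved_set else 0.0
    recall = len(hits) / len(relevant_set) if relevant_set else 0.0
    return precision, recall

# Example: the engine returns 4 documents, 3 of them relevant,
# while the collection contains 6 relevant documents in total.
p, r = precision_recall(
    ["d1", "d2", "d3", "d9"],
    ["d1", "d2", "d3", "d4", "d5", "d6"],
)
print(p, r)  # 0.75 0.5
```

As the note states, recall reaches 100 percent only when every relevant document in the collection is retrieved, regardless of how many non-relevant documents come along with it.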
3. Su’s model of user evaluation of Web search engines presents a set of measurement scales for evaluating user satisfaction with search engine performance. For our study, we use only the performance measure, “User’s judgment of overall success.” Also included in the satisfaction scale are response time, search interface, online documentation, output display, interaction, precision, and time saving.
4. Some of the interviews towards the end of the study were conducted specifically to validate some of the emerging themes, such as confidence in Google’s coverage or reactions to long retrieval lists.
6. The question about overall satisfaction was not included in the original Web–based survey. The satisfaction survey results are based on a sample size of 65 students and faculty. The data were gathered from April 2009 through June 2009.
7. The commonly used search engines provide options under “advanced search features” to limit search results to a couple of pages of the most relevant findings and also to enable the ranking of these findings by academic search criteria such as date of publication or publication type. However, we noted that these features, which facilitate distinguishing and filtering results, were seldom used by our informants.
8. For instance, the purpose of the 2006 iProspect search engine use behavior report is to advise search engine companies and their marketers on how to structure and tag their Web sites so that they will rank within the first three pages of search results.
A. Broder, 2002. “Taxonomy of Web search,” ACM Special Interest Group on Information Retrieval (SIGIR) Forum, volume 36, number 2, pp. 3–10.
J. Brophy and D. Bawden, 2005. “Is Google enough? Comparison of an Internet search engine with academic library resources,” Aslib Proceedings: New Information Perspectives, volume 57, number 6, pp. 498–512.
M. Chau, Z. Huang, J. Qin, Y. Zhou, and H. Chen, 2006. “Building a scientific knowledge Web portal: The NanoPort experience,” Decision Support Systems, volume 42, number 2, pp. 1,216–1,238.
C.W. Choo, B. Detlor, and D. Turnbull, 2000. “Information seeking on the Web: An integrated model of browsing and searching,” First Monday, volume 5, number 2, at http://firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/729/638, accessed 11 December 2008.
C. Cool and A. Spink, 2002. “Issues of context in information retrieval (IR): An introduction to the special issue,” Information Processing & Management, volume 38, number 5, pp. 605–611.
L. Granka, T. Joachims, and G. Gay, 2004. “Eye–tracking analysis of user behavior in WWW search,” Proceedings of 27th Annual ACM Conference on Research and Development in Information Retrieval, Special Interest Group on Information Retrieval (SIGIR ’04), Sheffield, U.K., pp. 478–479.
J.R. Griffiths and P. Brophy, 2005. “Student searching behavior and the Web: Use of academic resources and Google,” Library Trends, volume 53, number 4, pp. 539–554.
L. Haglund and P. Olsson, 2008. “The impact on university libraries of changes in information behaviour among academic researchers: A multiple case study,” Journal of Academic Librarianship, volume 34, number 1, pp. 52–59.
E. Hargittai, 2007. “The social, political, economic, and cultural dimensions of search engines: An introduction,” Journal of Computer–Mediated Communication, volume 12, number 3, pp. 769–777, and at http://jcmc.indiana.edu/vol12/issue3/hargittai.html, accessed 28 November 2009.
iProspect, 2006. “Search engine user behavior study,” at http://www.iprospect.com/premiumPDFs/WhitePaper_2006_SearchEngineUserBehavior.pdf, accessed 25 June 2009.
B.J. Jansen, 2006. “Search log analysis: What it is, what’s been done, how to do it,” Library & Information Science Research, volume 28, number 3, pp. 407–432.
B.J. Jansen and A. Spink, 2003. “An analysis of Web documents retrieved and viewed,” Proceedings of the Fourth International Conference on Internet Computing (Las Vegas, Nev.), pp. 65–69.
B.J. Jansen, M. Zhang, and C.D. Schultz, 2009. “Brand and its effect on user perception of search engine performance,” Journal of American Society for Information Science and Technology, volume 60, number 8, pp. 1,572–1,595.
B.J. Jansen, D.L. Booth, and A. Spink, 2008. “Determining the informational, navigational, and transactional intent of Web queries,” Information Processing & Management, volume 44, number 3, pp. 1,251–1,266.
T. Joachims and F. Radlinski, 2007. “Search engines that learn from implicit feedback,” IEEE Computer, volume 40, number 8, pp. 34–40.
J. Kim, 2009. “Describing and predicting information-seeking behavior on the Web,” Journal of the American Society for Information Science and Technology, volume 60, number 4, pp. 679–693.
O. Kurland and L. Lee, 2005. “PageRank without hyperlinks: Structural re–ranking using links induced by language models,” Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 306–313.
P.M. Leonardi and S.R. Barley, 2008. “Materiality and change: Challenges to building better theory about technology and organizing,” Information and Organization, volume 18, number 3, pp. 159–176.
Y.–C. Liao, 2008. “A weight–based approach to information retrieval and relevance feedback,” Expert Systems with Applications, volume 35, numbers 1–2, pp. 254–261.
G. Marchionini, 2008. “Human–information interaction research and development,” Library & Information Science Research, volume 30, pp. 165–174.
G. Marchionini, 1995. Information seeking in electronic environments. Cambridge: Cambridge University Press.
J. Mostafa, 2005. “Seeking better Web searches,” Scientific American, volume 292, number 2, pp. 67–73.
M. O’Brien and M.T. Keane, 2006. “Modeling result–list searching in the World Wide Web: The role of relevance topologies and trust bias,” Proceedings of the 28th Annual Conference of the Cognitive Science Society, pp. 1,881–1,886; version at http://csjarchive.cogsci.rpi.edu/proceedings/2006/docs/p1881.pdf, accessed 28 November 2009.
OCLC, 2005. “Perceptions of libraries and information resources,” Dublin, Ohio: OCLC, at http://www.oclc.org/reports/2005perceptions.htm, accessed 12 March 2009.
W.J. Orlikowski, 2000. “Using technology and constituting structures: A practice lens for studying technology in organizations,” Organization Science, volume 11, number 4, pp. 404–428.
B. Pan, H. Hembrooke, T. Joachims, L. Lorigo, G. Gay, and L. Granka, 2007. “In Google we trust: Users’ decisions on rank, position, and relevance,” Journal of Computer–Mediated Communication, volume 12, number 3, at http://jcmc.indiana.edu/vol12/issue3/pan.html, accessed 15 July 2009.
S.Y. Rieh, 2002. “Judgment of information quality and cognitive authority in the Web,” Journal of the American Society for Information Science and Technology, volume 53, number 2, pp. 145–161.
D.E. Rose and D. Levinson, 2004. “Understanding user goals in Web search,” Proceedings of the 13th International Conference on World Wide Web (New York), pp. 13–19.
R. Schonfeld and K. Guthrie, 2006. “Survey of U.S. higher education faculty attitudes and behaviors,” New York: Ithaka, at http://dx.doi.org/10.3886/ICPSR22700, accessed 1 December 2008.
P. Sengers, K. Boehner, S. David, and J. Kaye, 2005. “Reflective design,” Proceedings of the 4th Decennial Conference on Critical Computing (Aarhus, Denmark), pp. 49–58.
A. Spink, 2002. “A user–centered approach to evaluating human interaction with Web search engines: An exploratory study,” Information Processing & Management, volume 38, number 3, pp. 401–426.
L.T. Su, 2003. “A comprehensive and systematic model of user evaluation of Web search engines: I. Theory and background,” Journal of the American Society for Information Science and Technology, volume 54, number 13, pp. 1,175–1,192.
A. Thatcher, 2008. “Web search strategies: The influence of Web experience and task type,” Information Processing & Management, volume 44, number 3, pp. 1,308–1,329.
H. Xie, 2006. “Understanding human–work domain interaction: Implications for the design of a corporate digital library,” Journal of the American Society for Information Science and Technology, volume 57, number 1, pp. 128–143.
Paper received 18 September 2009; revised 1 October 2009; accepted 10 November 2009.
Copyright © 2009, First Monday.
Copyright © 2009, Oya Y. Rieger.
Search engine use behavior of students and faculty: User perceptions and implications for future research
by Oya Y. Rieger.
First Monday, Volume 14, Number 12 - 7 December 2009