Where are the ‘key’ words? Optimizing multimedia textual attributes to improve viewership
by Tatiana Pontes, Elizeu Santos-Neto, Jussara Almeida, and Matei Ripeanu
First Monday



Abstract
Multimedia content is central to our experience on the Web. Specifically, users frequently search and watch videos online. The textual features that accompany such content (e.g., title, description, and tags) can generally be optimized to attract more search traffic and ultimately to increase the advertisement-generated revenue.

This study investigates whether automating tag selection for online video content with the goal of increasing viewership is feasible. In summary, it shows that content producers can lower their operational costs for tag selection using a hybrid approach that combines dedicated personnel (often known as ‘channel managers’), crowdsourcing, and automatic tag suggestions. More concretely, this work provides the following insights: first, it offers evidence that existing tags for a sample of YouTube videos can be improved; second, this study shows that an automated tag recommendation process can be efficient in practice; and, finally, it explores the impact of using information mined from various data sources associated with content items on the quality of the resulting tags.

Contents

1. Introduction
2. A pipeline for keyword recommendation
3. The ground truth
4. Experimental setup
5. Experimental results
6. Related work
7. Summary

 


 

1. Introduction

The ever-increasing volume of multimedia content available on the Web reflects how easy it has become to create this type of content. To help users sift through multimedia content items, and in particular through online videos, such items are generally accompanied by rich and diverse contextual information, ranging from textual descriptors to online interactions and user feedback (e.g., in the form of ratings and comments).

Specialized content management companies help content owners with the publication and monetization tasks related to their online content, and the revenue is generally shared with the owner. As revenue is directly related to the number of ad impressions each piece of content receives, content managers have an incentive to increase content popularity by promoting it among the relevant audience and by optimizing its textual descriptors.

Although viewers may reach a video item starting from many ‘leads’ (e.g., an e-mail from a friend or a promotion campaign in an online social network), a large portion of viewers relies on keyword-based search and/or tag-based navigation to find videos. An argument supporting this assertion is the fact that 20.4 percent of the unique YouTube visitors come from google.com searches (http://www.alexa.com/siteinfo/youtube.com).

Consequently, the textual features of video content (e.g., its title, description, comments, and tags for YouTube) have a major impact on its popularity and, ultimately, on the advertising revenue it generates (Huang, et al., 2008; R. Zhou, et al., 2011). Experts can produce these textual features via manual inspection, a practice still used today. This solution, however, is labor intensive and limits the scale at which content managers can operate. More importantly, even for well-curated online videos, there is evidence that their textual features can be further improved to attract more search traffic (Santos-Neto, et al., 2014). Therefore, mechanisms to support this process (e.g., automating tag/title suggestions) are desirable.

The literature is rich in automatic tag recommendation strategies, which exploit various data sources to extract candidate keywords for a target content item (including, for example, other textual attributes associated with the target content, such as its title and description). For movies, for example, there is a plethora of sources from which an automated tag recommendation method could extract keyword candidates: Wikipedia (a peer-produced encyclopedia), Movie Lens and Rotten Tomatoes (social networks where movie enthusiasts collaboratively catalog, rate, and annotate movies), the New York Times movie review section, or even YouTube comments are all potential sources of candidate keywords to annotate multimedia content. Nevertheless, there has been little effort to assess the relative quality of alternative data sources for choosing tags with the ultimate goal of improving the popularity of multimedia content.

In this context, the following questions arise: To what extent are the tags currently associated with existing video content on social video-sharing Web sites, such as YouTube, optimized to attract search traffic? How do different input data sources compare with regard to their impact on the performance of existing tag recommenders for boosting content popularity? Is the quality of the candidate terms extracted from a collaborative annotation source affected by the number of peers?

We discuss an experiment to address these questions by evaluating the quality of different sources of candidate terms used as input to existing tag recommenders, aiming at raising the popularity of YouTube videos. We present how we built a ground truth, a challenge in itself, and discuss representative results as well as the main conclusions of our investigation. We provide the following insights: first, we offer evidence that existing tags for a sample of YouTube videos can be improved; second, we show that an automated tag recommendation process can be efficient in practice; and, finally, we explore the impact of using information mined from various data sources associated with content items on the quality of the resulting tags.

 

++++++++++

2. A pipeline for keyword recommendation

Annotating a video with tags that match the terms users would use to search for it increases the chance that users view the video. This is because most information services, and search in particular, rely on tags as a means to describe multimedia content. Various textual sources related to a video, whose content can be automatically retrieved, can be used as inputs for recommenders to suggest tags. Our study focuses on movies, but the approach can easily be extended to other types of content.


A recommendation pipeline that implements this idea is schematically presented in Figure 1: data sources feed the pipeline with textual input data. Next, the textual data is pre-processed by filters to both clean and augment it (e.g., remove stopwords, detect named entities). This processing step provides candidate keywords for the recommenders. The recommendation step uses the candidate keywords (and their related statistics, e.g., frequency and co-occurrence) to produce a ranked list according to a scoring function implemented by a given recommender. As the space available for tags provided by video sharing Web sites is limited, the selection of the most valuable candidate keywords is constrained by a budget, often defined by the number of words or characters. Thus, the final step consists of solving an instance of the 0-1-knapsack problem that selects a set of tags from the list produced by the recommender.

 

Figure 1: The recommendation pipeline.

 

The recommendation pipeline is composed of four main elements: data sources, filters, tag recommenders, and a knapsack solver:

  • Data sources. Provide the input textual data used by the tag recommenders. In particular, we are interested in peer-produced as well as expert-produced data sources. Details in §4.1.

  • Filters. The raw textual data extracted from a data source is filtered to minimize noise. We consider simple filters such as stopword and punctuation removal, lowercasing, and named entity detection (we leverage OpenCalais.com). The goal is to both clean and augment the input data.

  • Tag recommender. Starting from a set of candidate keywords together with relevant statistics (e.g., frequency, co-occurrence), a recommender scores the candidate keywords. There are many ways of defining scoring functions, and it is not our goal to advocate a specific scoring function or recommender. Instead, our intention is to investigate the outcomes of using different sources of information when recommending tags to boost video popularity. Discussion in §4.2.

  • Knapsack solver. After ranking candidate keywords, those that best fit the budget are selected. Here, the budget is expressed in terms of the number of characters, as on YouTube. This step is formulated as an instance of the 0-1 knapsack problem, whose objective is to select a set of tags that maximizes the recommender score function while respecting the budget constraint (a minimal sketch of this selection step follows the list).
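As an illustration of this last step, the following is a minimal sketch (in Python) of a 0-1 knapsack selection over scored candidate keywords, where item weights are character counts and the capacity is the tag budget. The function and variable names are illustrative and not taken from the authors’ implementation.

```python
def select_tags(candidates, budget):
    """Select tags maximizing total score within a character budget.

    candidates: list of (tag, score) pairs produced by a recommender.
    budget: maximum total number of characters allowed for the tag set.
    """
    # best[c] = (best total score, chosen tags) achievable with at most c characters
    best = [(0.0, [])] * (budget + 1)
    for tag, score in candidates:
        length = len(tag)
        if length > budget:
            continue
        # Iterate capacities in decreasing order so each tag is used at most once.
        for c in range(budget, length - 1, -1):
            new_score = best[c - length][0] + score
            if new_score > best[c][0]:
                best[c] = (new_score, best[c - length][1] + [tag])
    return best[budget][1]

# Hypothetical usage with made-up scores:
# select_tags([("thriller", 2.0), ("keanu reeves", 3.5), ("sci-fi", 1.8)], budget=20)
```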

Our goal is to evaluate how the tags suggested by this automated recommendation pipeline compare with a manually produced ground truth and with the ones present in YouTube in terms of raising video popularity. A second aspect to investigate is the impact of the choice of the data source used as input for the tag recommenders on the quality of the recommended tag-set. Next, we discuss how to build a ground truth that enables comparing data sources.

 

++++++++++

3. The ground truth

The ideal ground truth, which would enable measuring the impact of tags on content popularity on YouTube, would consist of experiments that vary the set of tags associated with videos and capture their impact on the number of views attracted. However, collecting this ground truth requires having the publishing rights for the videos, and implies executing experiments over a considerable duration.

We thus use a different method: we start from the observation that, intuitively, a high-quality solution for the tag recommendation problem is a good approximation of the query terms users would use to search for a video. Thus, we built our ground truth by setting up a survey using the Amazon Mechanical Turk (AMT). The survey asks turkers (i.e., AMT workers) to watch a video and answer the question: ‘What query terms would you use to search for this video?’ The rationale is that these terms would, if used as tags to annotate the video, maximize its retrieval by the YouTube search engine (and indirectly maximize viewership) while still being relevant to the video.

Content selection. We asked turkers to watch movie trailers, not the actual movies. The reason is that the trailers are generally shorter, which encourages turkers to contribute more.

Our dataset consists of 382 movies selected to meet two constraints: first, their video trailers must be available on YouTube; second, to enable comparisons, the corresponding movies had to have reviews available via the New York Times movie reviews API, and records in the Movie Lens catalog.

 

Figure 2: Histogram of the genres of the movies in the ground truth.

 

We categorized the movies into genres using a third-party API, the OMDb (Open Movie Database), which enables access to IMDb data. In total, the resulting list consists of 22 partially overlapping genres (Figure 2). Our dataset is diverse, presenting examples of all genres listed in IMDb, from science fiction (‘The Matrix’) to music (‘Moulin Rouge’). The most popular genre is drama (54 percent of the videos, e.g., ‘Dead Poets Society’), while the six least popular genres have fewer than ten videos each.

For each video, we collected answers from three turkers, who were asked to associate 3 to 10 keywords with each video (as search queries are typically of this length (He and Ounis, 2006)). Each participant was paid US$0.30 per task; we followed AMT pay guidelines: our pay rate amounts to roughly US$3/hour, which is significantly lower than the cost of hiring dedicated ‘channel managers’. We performed simple quality control by inspecting each answer to avoid accepting spam.

Ground truth characterization. Thirty-three turkers participated in our survey, of whom 19 (58 percent) evaluated more than five videos and four (12 percent) evaluated more than 100 videos (the maximum reaching 333 videos). Although the task requested turkers to associate at least three keywords with each video, 82 percent of the evaluations provided more than the required minimum, which resulted in 96 percent of the videos having 10 or more different keywords.

Regarding the total number of characters in the set of unique keywords associated with each video, the combined length of all keywords assigned to a video varies from 51 to 264 characters, and 32 percent of the videos have a total length of up to 100 characters. These values guided the choice of the budget parameter in our experiments, as we explain in §4.3.

Recall that turkers were asked to suggest the keywords they would use to search for the videos. It is reasonable to assume that the suggestions provided by the turkers are driven by their individual perceptions of the content. Moreover, as discussed above, many videos can be classified into multiple genres (Figure 2). Therefore, one might expect that the keywords selected by different turkers for the same video would differ. Yet, we found that for 20 percent of the videos, at least 17 percent of the keywords were selected by all turkers, whereas 40 percent of the keywords were selected by two or more turkers. We also note that some of the observed differences actually correspond to synonyms or keywords with similar meaning (e.g., suspense and thriller; sci-fi and science fiction; animation and cartoon). Therefore, we still observe some degree of agreement among turkers, implying that our ground truth provides relevant terms to describe the content. Finally, considering that turkers were asked to provide terms they would use to search for the video, the ground truth also reflects the terms users would actually use to search for a video.

 

Figure 3: Tag cloud of the top-20 most frequent terms in the ground truth (the keyword ‘trailer’ was omitted).

 

To gain an understanding of what types of keywords are suggested by turkers, we looked at the set of most popular terms (overall) in the ground truth. Figure 3 shows the tag cloud of the top-20 most frequent keywords, where the size of each term reflects its popularity. The tag cloud is composed of genre-describing terms, cast names, and release years of the corresponding movies.

Although words describing genres are the most frequent terms in the tag cloud shown, named entities (e.g., names of actors, directors, and producers) also have a strong presence. In fact, among the top-five percent (i.e., the top-123) most frequent terms provided by turkers, 52 percent are named entities, while about 23 percent are genre-describing. This suggests that a strategy that aims to boost the popularity of videos by optimizing the tags associated with the content should use sources that provide named entities related to the video. It is important to note that we found some evidence that this holds for videos other than movie trailers. For example, in a smaller sample of Super Bowl video ads, we observed that the keywords users said they would use to search for the ads are also dominated by named entities such as brand names.

 

++++++++++

4. Experimental setup

This section presents the instances of data sources and tag recommenders used. We also describe the success metrics selected for assessing the quality of the recommended tag-set compared with the ground truth.

4.1. Data sources

We focused on data sources grouped into two categories, peer-produced and expert-produced, to extract tags for video content promotion (Figure 4). The position of a data source in this spectrum depends on whether data is produced collaboratively or by a single expert. We describe each data source next.

Movie Lens is a Web system where users collaboratively build a catalog of movies. Users can create and update movie entries, annotate movies with tags, review and rate them. Based on previous users’ activities and ratings, Movie Lens suggests movies that a user may like to watch. For our evaluation, we used only the tags users produce while collaboratively annotating the movies. This data is a publicly available trace of tag assignments (http://www.grouplens.org/taxonomy/term/14).

Wikipedia is a free encyclopedia where users collaboratively write articles about a multitude of topics. Users in Wikipedia also edit and maintain pages related to specific movies. We leveraged these pages as the sources of candidate keywords for recommending tags for their respective movie trailers in our sample.

The New York Times is an American daily newspaper also available online. The movies section presents reviews written by critics who can be considered experts on movies. We leveraged the review page of a movie as the source of candidate keywords for the tag recommender. The reviews are collected via the New York Times API.

 

Figure 4: Space of data sources we explore according to their production mode.

 

Rotten Tomatoes is a portal devoted to movies. Users can rate and review movies, as well as access all credits information (actors, directors, synopsis, etc.). The portal also links to critics’ reviews. This information can be considered as produced by experts (the film credits are likely obtained from the producers, while the critics’ reviews are similar to those from the New York Times). This information is available via the Rotten Tomatoes API. In the experiments, we divided Rotten Tomatoes into two data sources: Rotten Tomatoes (with the credits information) and RT Reviews (with the critics’ reviews). While users can also review movies (which would qualify as peer-produced information), these reviews were not accessible via the API at the time of our investigation.

YouTube is a video-sharing Web site. We collected the tags already assigned to the YouTube videos in our sample from the HTML source of each video’s page to test whether they can be further optimized (a first step in our investigation). YouTube sits at the expert-produced end of the spectrum, as only the publisher can assign tags to a video. In this case, it is reasonable to assume that publishers aim to optimize videos’ textual features to attract more views.

Another potential dataset is IMDb (http://www.imdb.com/). However, since it did not have an API at the time we performed this study, we did not use it.

4.2. Tag recommenders

Our experiments used two tag recommendation algorithms that process the input provided by the data sources. In particular, we used frequency and random walk as they harness some fundamental aspects of the tag recommendation problem that state-of-the-art methods (Sigurbjörnsson and Zwol, 2008; Konstas, et al., 2009; Belém, et al., 2013) also use (i.e., tag frequency, and tag co-occurrence patterns).

The frequency recommender scores the candidate keywords based on how often each keyword appears in the data provided by a data source. Given a movie title, our pipeline finds the documents in the data source that match the title and extracts a list of candidate keywords. For example, in Wikipedia, the candidate keywords for a given movie are extracted from the Wikipedia page about the movie. Hence, the frequency of a keyword is the number of times it appears in that page. Similarly, in Movie Lens, the frequency is the number of times a tag is assigned to a movie.
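As an illustration, the following is a minimal sketch of such a frequency-based scorer, assuming the data-source document for a movie is available as plain text. The stopword list and tokenization are simplified stand-ins for the pipeline’s filter stage, and the names are ours, not the authors’.

```python
import re
from collections import Counter

# Abbreviated, assumed stopword list; the pipeline's filters are more complete.
STOPWORDS = {"the", "a", "an", "and", "of", "in", "is", "to", "on", "for"}

def frequency_scores(document_text):
    """Return a keyword -> frequency mapping used as the recommender score."""
    # Lowercase, keep alphanumeric tokens, and drop stopwords.
    tokens = re.findall(r"[a-z0-9']+", document_text.lower())
    tokens = [t for t in tokens if t not in STOPWORDS]
    return Counter(tokens)

# Hypothetical usage:
# ranked = frequency_scores(wikipedia_page_text).most_common()
```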

The random walk recommender harnesses both the frequency and the co-occurrence between keywords. The co-occurrence is detected differently depending on the data source. In Movie Lens, two keywords co-occur if they are assigned to the movie by the same user, while in New York Times, Rotten Tomatoes, and Wikipedia two keywords co-occur if they appear in the same page related to the movie (i.e., review, movie record, and movie page, respectively). Random walk builds a graph where each keyword is a node, and an edge connects two keywords if they co-occur. The initial node score is proportional to the individual frequency of each keyword obtained from the data source. This recommender is executed until convergence, and the final node scores are used to rank the candidate keywords.
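The sketch below illustrates one way such a random walk can be realized as a PageRank-style iteration over the co-occurrence graph, with the initial (and restart) distribution proportional to keyword frequency. The damping factor, convergence threshold, and names are assumptions for illustration, not the authors’ implementation.

```python
import numpy as np

def random_walk_scores(keywords, freq, cooc, damping=0.85, tol=1e-8):
    """keywords: list of terms; freq[k]: frequency of k; cooc: iterable of (k1, k2) pairs."""
    n = len(keywords)
    index = {k: i for i, k in enumerate(keywords)}
    # Adjacency matrix of the (undirected) co-occurrence graph.
    A = np.zeros((n, n))
    for k1, k2 in cooc:
        A[index[k1], index[k2]] = 1.0
        A[index[k2], index[k1]] = 1.0
    col_sums = A.sum(axis=0)
    col_sums[col_sums == 0] = 1.0  # avoid division by zero for isolated keywords
    A = A / col_sums               # column-stochastic transition matrix
    # Initial scores proportional to keyword frequency in the data source.
    p = np.array([freq[k] for k in keywords], dtype=float)
    p /= p.sum()
    scores = p.copy()
    while True:  # iterate until convergence
        new_scores = damping * (A @ scores) + (1 - damping) * p
        if np.abs(new_scores - scores).sum() < tol:
            break
        scores = new_scores
    return dict(zip(keywords, scores))
```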

4.3. Budget adjustment

To make the comparison fairer, we adjusted the budget used by the knapsack solver to the size of the tag set of each movie in the ground truth. The reason for using a per-video budget is that, otherwise, a uniform larger budget would bias the F3-measure we use (see definition below), as the number of recommended tags would always be larger than the ground truth size.

4.4. Success metrics

The final step is to estimate, for each combination of videos, input data sources, and recommender, the quality of the recommended tag-set compared to the ground truth. To this end, we have used multiple metrics: the F3-measure, Generalized τ Distance, and the Normalized Discounted Cumulative Gain. Since the results are similar regardless of the metric used, we present only the results using the F3-measure. It measures the quality of the recommended tag-set Sv compared with the ground truth Tv for the video v. The precise definitions are:

 

$$P(S_v, T_v) = \frac{|S_v \cap T_v|}{|S_v|}, \qquad R(S_v, T_v) = \frac{|S_v \cap T_v|}{|T_v|}$$

$$F_3(S_v, T_v) = \frac{(1 + 3^2)\, P(S_v, T_v)\, R(S_v, T_v)}{3^2\, P(S_v, T_v) + R(S_v, T_v)}$$

 
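For concreteness, a minimal sketch of the per-video F3 computation, following the standard F-beta definition with β = 3 (weighting recall over precision):

```python
def f3_measure(recommended, ground_truth, beta=3.0):
    """Compute the F-beta measure (beta = 3) between a recommended tag set and the ground truth."""
    S, T = set(recommended), set(ground_truth)
    hits = len(S & T)
    if hits == 0:
        return 0.0
    precision = hits / len(S)
    recall = hits / len(T)
    return (1 + beta**2) * precision * recall / (beta**2 * precision + recall)
```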

 

++++++++++

5. Experimental results

This section presents our experimental results and the answers to the questions that motivate this study. First, we analyze the tags currently assigned to YouTube videos aiming at evaluating if they are optimized to attract search traffic (§5.1). Then, we explore the quality of the tags produced by various data sources (individually and combined) and recommendation algorithms (§5.2). Finally, we explore how factors like the number of peers or the type of a video impact the quality of the recommended tags (§5.3).

5.1. To what extent are the tags currently associated with YouTube videos optimized to attract search traffic? Is there room for improvement?

To address these questions, we compare the YouTube tags to the ground truth for each video, and find that there is a wide gap between them.

 

Figure 5: CCDF of F3-measure for YouTube tags and recommended tags.

 

To explore whether this gap can be closed, at least partially, by automated tag recommendations, we explore the performance of the tag recommenders using as input all data sources combined (Movie Lens, Rotten Tomatoes credits and reviews, Wikipedia, and New York Times). The results are presented in Figure 5, which shows the Complementary Cumulative Distribution Function (CCDF) of the F3-measure. A point on the curve indicates the percentage of videos (on the y-axis) for which the F3-measure is larger than the corresponding value on the x-axis; thus, the closer the line is to the top-right corner, the better. The dotted (blue) line on the left represents the YouTube tags, while the solid (red) line is the result of the automated tag recommendation.

The figure shows that both tag recommenders produce higher F3-measure results compared to the tags currently assigned to the YouTube videos. We tested the statistical significance of these improvements by applying the Kolmogorov-Smirnov test, finding that the improvements of both recommenders are statistically significant at the 95 percent confidence level (frequency: D− = 0.44, p-value = 3.9×10⁻¹⁶; random walk: D− = 0.43, p-value = 5.5×10⁻¹⁵). This implies that the tags on YouTube videos can indeed be improved by automated tag recommenders to attract more search traffic and, potentially, boost video popularity.
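As an illustration, a significance check of this kind can be run with a two-sample Kolmogorov-Smirnov test; the array names below are hypothetical, and this sketch uses the default two-sided test, while the paper reports the one-sided D− statistic.

```python
from scipy.stats import ks_2samp

def compare_f3_distributions(f3_youtube, f3_recommended):
    """Compare per-video F3 values for existing YouTube tags vs. recommended tags."""
    statistic, p_value = ks_2samp(f3_youtube, f3_recommended)
    return statistic, p_value
```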

5.2. How do different input data sources impact the quality of the recommended tags? Is their production mode an important factor?

To this end, we compare the performance of different data sources of candidate tags for each recommender.

Figure 6 shows the CCDFs of the F3-measure for each individual data source. We note that Rotten Tomatoes provides significant improvements over the existing tags on YouTube. Also, Movie Lens is significantly better than the other three data sources (New York Times, RT Reviews, and Wikipedia), though it provides only minor improvements compared to YouTube. Using a Kolmogorov-Smirnov test, we confirm that the difference in recommender performance for each pair of data sources is statistically significant, with D− varying from 0.18 to 0.74 (depending on the data source, recommender, and metric).

 

Figure 6: CCDF of F3-measure for each data source.

 

To put these results in perspective, we note that Rotten Tomatoes, besides providing expert-produced information, incorporates a schema for the information provided (i.e., names of actors and directors). A possible explanation for its good performance is that searchers tend to use names of entities related to the movie being searched. Therefore, a recommender is more likely to be successful if it uses an input that is rich in highly accurate named entities.

Although it might be intuitive that accurate named entities improve recommendation, the good performance of Movie Lens compared to the others is surprising. In particular, one would expect that the candidate keywords extracted from expert-produced reviews (New York Times and RT Reviews) would match popular search keywords well, or even that peer-produced fact pages (Wikipedia) would provide more suitable tags. However, the relative superiority of Movie Lens suggests that keywords produced via collaborative annotation may be more effective than those produced by either collaborative text editing or by experts.

With this in mind, we investigate the relative performance of combinations of data sources (Figure 7). In particular, we consider two groups: peer-produced sources (Movie Lens + Wikipedia) and expert-produced sources (New York Times reviews + RT Reviews). Additionally, we include Rotten Tomatoes (which provides the best individual result) and YouTube as baselines.

We observed that combining Wikipedia and Movie Lens into the peer-produced data source leads to better quality tags than using the sources individually, as the results for the frequency recommender show (Figure 6 vs. Figure 7). On the other hand, this combination seems to dilute important co-occurrence information that can be harnessed by the random walk recommender when using only Movie Lens, as suggested by the relative performance of random walk with the peer-produced source compared to YouTube (Figure 6 vs. Figure 7).

We note that the peer-produced data source leads to significantly better tags than using the expert-produced source for both recommenders. We also find that, for the frequency recommender, the performance of the peer-produced data source is comparable to that of Rotten Tomatoes (which has the advantage of having highly accurate named entity information as discussed). Finally, we note that the peer-produced data source provides improvements relative to the tags currently assigned to the YouTube videos.

 

Figure 7: CCDF of F3-measure for grouped data sources.

 

5.3. Is the quality of the recommendation affected by the number of contributing peers or by the type of a video?

This section investigates whether the number of contributors (peers) who produce tags for a movie in Movie Lens has predictive power in terms of the quality of the recommendation produced. To this end, we compute the correlation between the number of users who annotated a movie and the recommendation performance. A Spearman’s rank correlation of 0.31 between the number of users and the F3-measure indicates a mild positive correlation between these aspects. Thus, the number of contributors partially explains the value added by Movie Lens to the recommenders’ performance.
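A minimal sketch of this correlation check, assuming per-movie arrays of annotator counts and the corresponding F3 values (the names are hypothetical):

```python
from scipy.stats import spearmanr

def contributor_correlation(num_annotators, f3_scores):
    """Spearman rank correlation between annotator counts and recommendation quality."""
    rho, p_value = spearmanr(num_annotators, f3_scores)
    return rho, p_value
```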

Recall that we categorized the movies in our ground truth into 22 partially overlapping genres (Figure 2). We calculate the F3-measure for the top-five most popular genres in our ground truth using the Rotten Tomatoes data source (the data source with the best recommendation performance) as input to the recommenders, and we observe that the results are similar and barely affected by genre.

 

++++++++++

6. Related work

The related literature falls into two broad categories: automated content annotation and tag value assessment. Most related efforts on automated content annotation focus on suggesting tags to annotate items such that they maximize the relevance of the tags given the content (Huang, et al., 2008; Wang, et al., 2006; Belém, et al., 2013), while a few authors propose to leverage other aspects such as diversity (Belém, et al., 2013). However, previous studies fail to account for the potential improvement in popularity of the annotated content, a valuable aspect for content managers and publishers, who capitalize on their audience.

This work differs from previous tag recommendation studies as it concentrates on evaluating the impact of data source choice instead of designing new recommendation algorithms (Guan, et al., 2009; Jäschke, et al., 2007; Marinho and Schmidt-Thieme, 2008; Sigurbjörnsson and Zwol, 2008).

To the best of our knowledge, the study by R. Zhou, et al. (2011) is the closest to ours. They approach the problem of boosting content popularity by leveraging related-video recommendations, connecting a video to already popular ones. Instead, we assess whether we can boost the search traffic to specific content items (videos) by improving the quality of the tags associated with them.

Along the lines of comparing peer- and expert-produced tags, Lu, et al. (2010) study the value of collaboratively produced tags for creating content indices. The authors compare tags to the Library of Congress Subject Headings with regard to their ability to index and classify content efficiently. They show that tags can help extend expert-assigned subject terms by improving the accessibility of content in library catalogs. Our study is orthogonal to this one, as we evaluate how peer- and expert-produced data sources compare as sources of candidate tags to boost content popularity.

 

++++++++++

7. Summary

A large portion of user traffic received by a video on the Web originates from keyword-based search. Consequently, the textual features of online content may directly impact the popularity of a particular item, and ultimately the ad-generated revenue. In this study, we investigate whether automatic tag recommendation can improve the tags already assigned to YouTube videos.

First, we use crowdsourcing to build a ground truth to evaluate data sources and techniques that aim to boost the popularity of multimedia content. Next, we provide evidence that tags currently assigned to a sample of YouTube videos can be further improved to increase search traffic. Moreover, our experimental results show that this requires only relatively simple recommenders and a combination of data sources. We also compare the quality of the tags extracted from data sources grouped by their production mode (peer- versus expert-produced). We find that, in the context of popularity boosting, peer-produced sources perform better. Finally, although we observed that the genre of a movie has little impact on the performance of tag recommendation, our experiments showed that the number of contributors in a peer-produced source partially explains its positive influence on the performance of tag recommendation for boosting content popularity.

 

About the authors

Tatiana Pontes received her M.Sc. in computer science from the Universidade Federal de Minas Gerais. She developed an inference model to detect the home location of users in the location-based social network Foursquare based only on publicly shared attributes. Now, she works at Zahpee, a startup specialized in monitoring and analysing social big data.
E-mail: tpontes [at] dcc [dot] ufmg [dot] br

Elizeu Santos-Neto received his Ph.D. in computer engineering from the University of British Columbia (http://blogs.ubc.ca/elizeu/about/) where he worked on the characterization and design of online peer production systems. He now works at Google.
E-mail: elizeus [at] ece [dot] ubc [dot] ca

Jussara M. Almeida (http://homepages.dcc.ufmg.br/~jussara/) holds a Ph.D. degree in computer science from the University of Wisconsin-Madison. She is currently an Associate Professor at Universidade Federal de Minas Gerais as well as Affiliated Member of the Brazilian Academy of Sciences. Her research is focused around understanding how users interact with different applications, characterizing and modeling the workload patterns that emerge from such interactions, and exploiting those patterns to enhance current applications and services on the Web. She is particularly interested in characterizing and modeling user behavior in online social networks, and social computing in general.
E-mail: jussara [at] dcc [dot] ufmg [dot] br

Matei Ripeanu (http://www.ece.ubc.ca/~matei/) is an Associate Professor at the University of British Columbia. Matei is broadly interested in experimental parallel and distributed systems research with a focus on massively parallel accelerators, data analytics, and storage systems. The Networked Systems Laboratory Web site (netsyslab.ece.ubc.ca) offers an up-to-date overview of the projects he works on together with a fantastic group of students.
E-mail: matei [at] ece [dot] ubc [dot] ca

 

References

Fabiano Belém, Eder Martins, Jussara Almeida, and Marcos Gonçalves, 2013. “Exploiting novelty and diversity in tag recommendation,” In: Pavel Serdyukov, Pavel Braslavski, Sergei O. Kuznetsov, Jaap Kamps, Stefan Rüger, Eugene Agichtein, Ilya Segalovich, and Emine Yilmaz (editors). Advances in information retrieval: 35th European Conference on IR Research, ECIR 2013, Moscow, Russia, March 24–27, 2013: Proceedings. Lecture Notes in Computer Science, volume 7814. Berlin: Springer, pp. 380–391.
doi: http://dx.doi.org/10.1007/978-3-642-36973-5_32, accessed 19 February 2015.

Ziyu Guan, Jiajun Bu, Qiaozhu Mei, Chun Chen, and Can Wang, 2009. “Personalized tag recommendation using graph-based ranking on multi-type interrelated objects,” SIGIR ’09: Proceedings of the 32nd International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 540–547.
doi: http://dx.doi.org/10.1145/1571941.1572034, accessed 19 February 2015.

Ben He and Iadh Ounis, 2006. “Query performance prediction,” Information Systems, volume 31, number 7, pp. 585–594.
doi: http://dx.doi.org/10.1016/j.is.2005.11.003, accessed 19 February 2015.

Shen Huang, Xiaoyuan Wu, and Alvaro Bolivar, 2008. “The effect of title term suggestion on e-commerce sites,” WIDM ’08: Proceedings of the 10th ACM Workshop on Web Information and Data Management, pp. 31–38.
doi: http://dx.doi.org/10.1145/1458502.1458508, accessed 19 February 2015.

Robert Jäschke, Leandro Marinho, Andreas Hotho, Lars Schmidt-Thieme, and Gerd Stumme, 2007. “Tag recommendations in folksonomies.” In: Joost Kok, Jacek Koronacki, Ramon Lopez de Mantaras, Stan Matwin, Dunja Mladenic, and Andrzej Skowron (editors). Knowledge Discovery in Databases: PKDD 2007: 11th European Conference on Principles and Practice of Knowledge Discovery in Databases, Warsaw, Poland, September 17–21, 2007: Proceedings. Lecture Notes in Computer Science, volume 4702. Berlin: Springer, pp. 506–514.
doi: http://dx.doi.org/10.1007/978-3-540-74976-9_52, accessed 19 February 2015.

Ioannis Konstas, Vassilios Stathopoulos, and Joemon M. Jose, 2009. “On social networks and collaborative recommendation,” SIGIR ’09: Proceedings of the 32nd International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 195–202.
doi: http://dx.doi.org/10.1145/1571941.1571977, accessed 19 February 2015.

Caimei Lu, Jung-ran Park, and Xiaohua Hu, 2010. “User tags versus expert-assigned subject terms: A comparison of LibraryThing tags and Library of Congress Subject Headings,” Journal of Information Science, volume 36, number 6, pp. 763–779.
doi: http://dx.doi.org/10.1177/0165551510386173, accessed 19 February 2015.

Leandro B. Marinho and Lars Schmidt-Thieme, 2008. “Collaborative tag recommendations,” In: Christine Preisach, Hans Burkhardt, Lars Schmidt-Thieme, and Reinhold Decker (editors). Data analysis, machine learning and applications. Berlin: Springer, pp. 533–540.
doi: http://dx.doi.org/10.1007/978-3-540-78246-9_63, accessed 19 February 2015.

Elizeu Santos-Neto, Tatiana Pontes, Jussara Almeida, and Matei Ripeanu, 2014. “Towards boosting video popularity via tag selection,” Proceedings of the 1st International Workshop on Social Multimedia and Storytelling (SoMuS 2014), Glasgow, Scotland, 1 April 2014, at http://ceur-ws.org/Vol-1198/santosneto.pdf, accessed 19 February 2015.

Börkur Sigurbjörnsson and Roelof van Zwol, 2008. “Flickr tag recommendation based on collective knowledge,” WWW ’08: Proceedings of the 17th International Conference on World Wide Web, pp. 327–336.
doi: http://dx.doi.org/10.1145/1367497.1367542, accessed 19 February 2015.

Changhu Wang, Feng Jing, Lei Zhang, and Hong-Jiang Zhang, 2006. “Image annotation refinement using random walk with restarts,” MULTIMEDIA ’06: Proceedings of the 14th Annual ACM International Conference on Multimedia, pp. 647–650.
doi: http://dx.doi.org/10.1145/1180639.1180774, accessed 19 February 2015.

Ding Zhou, Jiang Bian, Shuyi Zheng, Hongyuan Zha, and C. Lee Giles, 2008. “Exploring social annotations for information retrieval,” WWW ’08: Proceedings of the 17th International Conference on World Wide Web, pp. 715–724.
doi: http://dx.doi.org/10.1145/1367497.1367594, accessed 19 February 2015.

Renjie Zhou, Samamon Khemmarat, Lixin Gao, and Huiqiang Wang, 2011. “Boosting video popularity through recommendation systems,” DBSocial ’11: Databases and Social Networks, pp. 13–18.
doi: http://dx.doi.org/10.1145/1996413.1996416, accessed 19 February 2015.

 


Editorial history

Received 27 December 2014; accepted 15 February 2015.


This paper is licensed under a Creative Commons Public Domain License.

Where are the ‘key’ words? Optimizing multimedia textual attributes to improve viewership
by Tatiana Pontes, Elizeu Santos-Neto, Jussara Almeida, and Matei Ripeanu.
First Monday, Volume 20, Number 3 - 2 March 2015
http://www.firstmonday.org/ojs/index.php/fm/article/view/5628/4324
doi: http://dx.doi.org/10.5210/fm.v20i3.5628




