828 results for User generated contents (UGC)
Abstract:
This dissertation research identifies two major challenges facing current Knowledge Organization (KO) systems such as subject gateways and web directories: (1) these systems rely on traditional controlled vocabularies, which are poorly suited to web resources, and (2) information is organized by professionals rather than by users, so it does not reflect users' intuitively and instantaneously expressed current needs. To explore users' needs, I examined social tags, the uncontrolled vocabulary generated by users themselves. As investment in professionally developed subject gateways and web directories diminishes (support for both BUBL and Intute, examined in this study, is being discontinued), understanding the characteristics of social tagging becomes even more critical. Several researchers have discussed social tagging behavior and its usefulness for classification or retrieval; however, further qualitative and quantitative research is needed to verify its quality and benefit. This research examined the indexing consistency of social tagging in comparison to professional indexing in order to assess the quality and efficacy of tagging. The data analysis was divided into three phases: analysis of indexing consistency, analysis of tagging effectiveness, and analysis of tag attributes. Most indexing consistency studies have been conducted with a small number of professional indexers, have tended to exclude users, and have focused mainly on physical library collections. This dissertation research bridged these gaps by (1) extending the scope of resources to various web documents indexed by users and (2) employing an Information Retrieval (IR) Vector Space Model (VSM)-based indexing consistency method, which is suitable for dealing with a large number of indexers. In the second phase, an analysis of tagging effectiveness, in terms of tagging exhaustivity and tag specificity, was conducted to mitigate the drawbacks of a consistency analysis based only on quantitative measures of vocabulary matching. Finally, to investigate tagging patterns and behaviors, a content analysis of tag attributes was conducted based on the FRBR model. The findings revealed greater consistency across all subjects among taggers than among the two groups of professionals. Examination of the exhaustivity and specificity of social tags provided insights into particular characteristics of tagging behavior and its variation across subjects. To further investigate the quality of tags, a Latent Semantic Analysis (LSA) was conducted to determine to what extent tags are conceptually related to professionals' keywords; tags of higher specificity tended to have higher semantic relatedness to professionals' keywords, which leads to the conclusion that a term's power as a differentiator is related to its semantic relatedness to documents. The findings on tag attributes identified important bibliographic attributes of tags beyond describing the subjects or topics of a document, and showed that tags have essential attributes matching those defined in FRBR.
Furthermore, in terms of specific subject areas, the findings showed that taggers exhibited distinct tagging behaviors, with distinctive features and tendencies, on web documents representing heterogeneous digital media resources. These results lead to the conclusion that awareness of diverse user needs by subject should be increased in order to improve metadata in practical applications. This dissertation research is a first necessary step toward using social tagging in digital information organization, by verifying its quality and efficacy. It combined quantitative (statistical) and qualitative (FRBR-based content analysis) approaches to the vocabulary analysis of tags, providing a more complete examination of tag quality. Through the detailed analysis of tag properties undertaken in this dissertation, we gain a clearer understanding of the extent to which social tagging can be used to replace, and in some cases to improve upon, professional indexing.
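The VSM-based consistency measure described above lends itself to a compact illustration. The following sketch (in Python, with invented names and a simple term-frequency weighting that the dissertation may not actually use) treats each tagger's terms for a document as a sparse vector and averages the pairwise cosine similarities:

```python
from collections import Counter
from math import sqrt

def cosine(u: Counter, v: Counter) -> float:
    """Cosine similarity between two sparse term vectors."""
    dot = sum(u[t] * v[t] for t in u.keys() & v.keys())
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def indexing_consistency(indexers: dict) -> float:
    """Average pairwise cosine similarity over all indexers of one document.

    `indexers` maps an indexer (or tagger) id to the list of terms they
    assigned. Term frequency is used as a simple weight here; the actual
    study may weight terms differently.
    """
    vectors = [Counter(terms) for terms in indexers.values()]
    pairs = [(i, j) for i in range(len(vectors)) for j in range(i + 1, len(vectors))]
    if not pairs:
        return 0.0
    return sum(cosine(vectors[i], vectors[j]) for i, j in pairs) / len(pairs)

# Example: three taggers describing the same web page.
tags = {
    "tagger1": ["python", "tutorial", "programming"],
    "tagger2": ["python", "programming", "beginner"],
    "tagger3": ["code", "python"],
}
print(f"consistency = {indexing_consistency(tags):.3f}")
```

Averaging over all indexer pairs is what makes a vector-space formulation scale to large numbers of taggers, where classical two-indexer consistency measures become unwieldy.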
Abstract:
The primary goals of this study are to: embed sustainable concepts of energy consumption into certain parts of the existing Computer Science curriculum for English schools; investigate how to motivate 7-to-11-year-old kids to learn these concepts; promote responsible ICT (Information and Communications Technology) use by these kids in their daily lives; and raise their awareness of today's ecological challenges. The sustainability-related ICT lessons developed aim to stimulate computational thinking and creativity, fostering an understanding of the environmental impact of ICT and of the positive environmental impact of small changes in users' energy consumption behaviour. The importance of including sustainability in the Computer Science curriculum stems from the fact that ICT is both a solution to and one of the causes of current world ecological problems. This research follows an Agile software development methodology. In order to achieve the aforementioned goals, sustainability requirements, curriculum requirements and technical requirements are first analysed. Next, the web-based user interface is designed. In parallel, a set of three online lessons (video, slideshow and game) is created for the website GreenICTKids.com, taking into account several green design patterns. Finally, the evaluation phase involves the collection of adults' and kids' feedback on the following: user interface; contents; user interaction; and impacts on the kids' sustainability awareness and on their behaviour with technologies. In conclusion, the research outcomes are as follows: 92% of the adults learnt more about energy consumption; 80% of the kids are motivated to learn about energy consumption and found the website easy to use; 100% of the kids understood the contents and liked the website's visual appearance; and 100% of the kids will try to apply in their daily lives what they learnt through the online lessons.
Abstract:
Web 2.0 has resulted in a shift in how users consume and interact with information, and has introduced a wide range of new textual genres, such as reviews and microblogs, through which users communicate, exchange, and share opinions. The exploitation of all this user-generated content is of great value to both users and companies, assisting them in their decision-making processes. Given this context, automatic methods that can help manage online information more quickly are needed. This article therefore proposes and evaluates a novel concept-level approach to ultra-concise abstractive opinion summarization. Our approach is characterized by the integration of syntactic sentence simplification, sentence regeneration and internal concept representation into the summarization process, making it able to generate abstractive summaries, which is one of the most challenging issues for this task. In order to analyze different settings for our approach, the use of the sentence regeneration module was made optional, leading to two versions of the system (one with sentence regeneration and one without). For testing them, a corpus of 400 English texts, gathered from reviews and tweets belonging to two different domains, was used. Although both versions were shown to be reliable methods for generating this type of summary, the results obtained indicate that the version without sentence regeneration yielded better results, improving the results of a number of state-of-the-art systems by 9%, whereas the version with sentence regeneration proved more robust to noisy data.
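The two versions of the system differ only in whether the sentence regeneration module runs, which suggests a pipeline with one optional stage. The sketch below is a minimal Python illustration of that shape; the simplification and concept-weighting functions are crude placeholders standing in for the article's actual modules, not a reimplementation of them:

```python
def simplify(sentences):
    """Placeholder syntactic simplification: split coordinated clauses."""
    out = []
    for s in sentences:
        out.extend(part.strip() for part in s.split(", and ") if part.strip())
    return out

def build_concepts(sentences):
    """Placeholder internal concept representation: word -> frequency."""
    freq = {}
    for s in sentences:
        for w in s.lower().split():
            freq[w] = freq.get(w, 0) + 1
    return freq

def summarize(sentences, regenerate=None, max_words=10):
    """Ultra-concise summary: keep the highest-weight simplified sentence.

    Passing regenerate=None mirrors the system version that skips the
    sentence regeneration stage; passing a function enables it.
    """
    simplified = simplify(sentences)
    if regenerate is not None:
        simplified = regenerate(simplified)
    if not simplified:
        return ""
    concepts = build_concepts(simplified)
    best = max(simplified,
               key=lambda s: sum(concepts.get(w, 0) for w in s.lower().split()))
    return " ".join(best.split()[:max_words])

reviews = ["The battery lasts all day, and the screen is superb.",
           "Great battery life for the price."]
print(summarize(reviews))                          # version without regeneration
print(summarize(reviews, regenerate=lambda s: s))  # trivial regeneration stand-in
```

Making the optional stage a plain parameter keeps both system versions testable against the same corpus with no code duplication.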
Abstract:
Social Media Monitoring systems aim to analyze data from social media such as social networks, forums and blogs (known as User-Generated Content) in order to draw an overall picture of users' opinions on a particular topic. The goal of this thesis project is to design and build a prototype of a Social Media Monitoring system focused in particular on the analysis of content from Twitter.
Abstract:
Social plugins for sharing news through Facebook and Twitter have become increasingly salient features of news sites. Together with the user comment feature, social plugins are the most common way for users to contribute. The wide use of multiple features has opened new avenues for comprehensively studying users' participatory practices. How do these opportunities to participate vary across the participatory spaces constituted by news sites affiliated with local, national broadsheet and tabloid news? How are these opportunities appropriated by users in terms of participatory practices such as commenting and sharing news through Facebook and Twitter? And what differences are there between news sites in these respects? To answer these questions, a quantitative content analysis was conducted on 3,444 articles from nine Swedish online newspapers. Local newspapers are more likely than national newspapers to allow users to comment on articles. Tweeting news is appropriated only on news sites affiliated with evening tabloids and national morning newspapers. Sharing news through Facebook is 20 times more common than tweeting news or commenting. The majority of news items do not attract any user interaction.
Abstract:
The realization of the Semantic Web is constrained by a knowledge acquisition bottleneck, i.e. the problem of how to add RDF mark-up to the millions of ordinary web pages that already exist. Information Extraction (IE) has been proposed as a solution to this annotation bottleneck. In the task-based evaluation reported here, we compared the performance of users without access to annotations, users working with annotations produced from manually constructed knowledge bases, and users working with annotations augmented using IE. We looked at retrieval performance, overlap between retrieved items and the two sets of annotations, and usage of annotation options. Automatically generated annotations were found to add value to the browsing experience in the scenario investigated.
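For readers unfamiliar with the mark-up in question, the following small Python example uses the rdflib library to attach RDF statements to an existing page; the URI, namespace and Dublin Core properties are chosen for illustration and are not taken from the evaluation itself:

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DC, RDF

# Hypothetical annotation of an existing web page with subject metadata,
# the kind of mark-up that IE is proposed to generate automatically.
EX = Namespace("http://example.org/ontology/")

g = Graph()
page = URIRef("http://example.org/articles/42")
g.add((page, RDF.type, EX.Article))
g.add((page, DC.title, Literal("Knowledge acquisition on the Web")))
g.add((page, DC.subject, Literal("Semantic Web")))

print(g.serialize(format="turtle"))
```

The bottleneck arises because statements like these must exist for millions of pages; producing them manually from knowledge bases is exactly the cost that IE-augmented annotation is meant to reduce.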
Abstract:
Chemical cross-linking has emerged as a powerful approach for the structural characterization of proteins and protein complexes. However, the correct identification of covalently linked (cross-linked, or XL) peptides analyzed by tandem mass spectrometry is still an open challenge. Here we present SIM-XL, a software tool that can analyze data generated through commonly used cross-linkers (e.g., BS3/DSS). Our software introduces a new paradigm for search-space reduction, which ultimately accounts for its increase in speed and sensitivity. Moreover, our search engine is the first to capitalize on reporter ions for selecting tandem mass spectra derived from cross-linked peptides. It also provides a 2D interaction map and a spectrum-annotation tool unmatched by any tool of its kind. We show SIM-XL to be more sensitive and faster than a competing tool when analyzing a data set obtained from the human HSP90. The software is freely available for academic use at http://patternlabforproteomics.org/sim-xl. A video demonstrating the tool is available at http://patternlabforproteomics.org/sim-xl/video. SIM-XL is the first tool to support XL data in the mzIdentML format; all data are thus available from the ProteomeXchange consortium (identifier PXD001677).
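As an illustration of reporter-ion-based spectrum selection (not SIM-XL's actual implementation; the m/z value, tolerance and intensity threshold below are assumed for the example), a filter of this kind might look as follows in Python:

```python
def has_reporter_ion(peaks, reporter_mz, tol_ppm=20.0, min_rel_intensity=0.05):
    """Return True if a spectrum contains a peak at the reporter-ion m/z.

    `peaks` is a list of (m/z, intensity) pairs; the reporter m/z, the ppm
    tolerance and the relative-intensity threshold are illustrative values,
    not SIM-XL's defaults.
    """
    if not peaks:
        return False
    base = max(intensity for _, intensity in peaks)
    tol = reporter_mz * tol_ppm / 1e6
    return any(abs(mz - reporter_mz) <= tol and intensity / base >= min_rel_intensity
               for mz, intensity in peaks)

# Keep only spectra likely to derive from cross-linked peptides.
spectra = {"scan_1001": [(138.09, 1200.0), (350.20, 300.0)],
           "scan_1002": [(250.10, 800.0)]}
candidates = {scan: p for scan, p in spectra.items()
              if has_reporter_ion(p, reporter_mz=138.09)}
print(sorted(candidates))  # ['scan_1001']
```

Discarding spectra without the diagnostic ion before database search is one way such a filter shrinks the search space and thereby gains speed.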
Abstract:
One of the goals of an e-learning environment is to meet the individual needs of students during the learning process. The adaptation of contents, activities and tools into different visualizations or a variety of content types is an important feature of such environments, giving users the sense that the same system offers workspaces suited to their profiles. Nevertheless, investigating aspects of student behaviour, considering the context in which interaction happens, is important for an efficient personalization process. The goal of this paper is to present an approach for identifying the student's learning profile by analyzing the context of interaction. In addition, analyzing the learning profile along different dimensions allows the system to deal with different learning foci.
Abstract:
The calcium carbonate industry generates solid waste products which, because of their high alkaline content (CaO, CaCO3 and Ca(OH)2), have a substantial environmental impact. The objectives of this study are to characterize and classify the solid waste products generated during the hydration process of the calcium carbonate industry, according to ABNT's NBR 10.000 series, and to determine the potential and efficiency of using these solid residues to correct soil acidity. Initially, the studied residue was submitted to gross mass, leaching, solubility, pH, X-ray diffractometry, Inductively Coupled Plasma Atomic Emission Spectrometry (ICP-AES), granularity and humidity analyses. The potential and efficiency of the residue for correcting soil acidity were determined by analysis of the quality attributes for soil correctives (PN, PRNT, Ca and Mg contents, granularity). The results show that the studied residue may be used as a soil acidity corrective, bearing in mind that a specific corrective compound is recommended for each type of soil. Additionally, the product must be further treated (dried and ground) to suit the specific requirements of the consumer market.
Abstract:
In the context of an effort to develop methodologies to support the evaluation of interactive systems, this paper investigates an approach to detecting graphical user interface bad smells. Our approach consists of detecting user interface bad smells through model-based reverse engineering from source code. Models are used to define which widgets are present in the interface, when particular graphical user interface (GUI) events can occur, under which conditions, which system actions are executed, and which GUI state is generated next.
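As a rough illustration of what detection over such models might look like, the Python sketch below assumes a toy widget model and two invented smell rules (an overloaded window, and a button with no associated system action); the paper's actual models and smell catalogue are necessarily richer:

```python
from dataclasses import dataclass, field

@dataclass
class Widget:
    kind: str                       # e.g. "button", "textfield"
    label: str
    actions: list = field(default_factory=list)  # system actions it triggers

@dataclass
class Window:
    name: str
    widgets: list

def detect_bad_smells(window: Window, max_widgets: int = 20) -> list:
    """Flag two illustrative GUI bad smells in a reverse-engineered model:
    an overloaded window, and a button with no associated system action."""
    smells = []
    if len(window.widgets) > max_widgets:
        smells.append(f"{window.name}: overloaded ({len(window.widgets)} widgets)")
    for w in window.widgets:
        if w.kind == "button" and not w.actions:
            smells.append(f"{window.name}: dead button '{w.label}'")
    return smells

form = Window("LoginDialog", [Widget("button", "OK", ["authenticate"]),
                              Widget("button", "Help")])
print(detect_bad_smells(form))  # ["LoginDialog: dead button 'Help'"]
```

The point of working on a model rather than on raw source code is that rules like these can be written once against widgets, events and states, independently of the GUI toolkit used.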
Abstract:
Education is a very important area of human development and has been adapting to new technologies. New ways of teaching are constantly being sought in order to achieve ever better learning outcomes. With the emergence of new technologies such as computers and the Internet, the development of educational digital applications has grown, and the need to instruct students ever more effectively means that these applications require an interface capable of teaching quickly and efficiently. The combination of teaching aided by these new technologies with distance education gave rise to e-Learning (distance learning). Through distance learning, students' opportunities to expand their knowledge grew, and the necessary information became available at any time, anywhere with Internet access. However, the online courses created were expensive and took a long time to prepare, which created a problem for those who produced them. To recover the investment made, it was decided to divide the contents into modules that could be reused in different contexts and for different types of users. These modular contents were called Learning Objects. This thesis addresses the study of Learning Objects and their evolution over time in terms of user interface. Designing an interface that is natural and simple to use is not always easy and, regardless of the context in which it is embedded, requires some knowledge of the rules that enable the user of a given application to work with a minimum level of performance. In the design of Learning Objects, highly complex areas such as Medicine mean that teachers or doctors have some difficulty in creating an interface with educational content capable of teaching students effectively, because most of them are unaware of the techniques and rules involved in developing an application interface. Through the study of these rules and interaction styles, the creation of a good interface becomes easier, and throughout this thesis a tool is studied and proposed that helps both in the creation of Learning Objects and in the design of their interfaces.
Abstract:
This paper presents a tool for the analysis and regeneration of Web contents, implemented using XML and Java. At present, Web content delivery from server to clients is carried out without taking clients' characteristics into account. Heterogeneous and diverse characteristics, such as users' preferences, the different capabilities of client devices, different types of access, the state of the network and the current load on the server, directly affect the behavior of Web services. Moreover, the growing use of multimedia objects in the design of Web contents takes no account of this diversity and heterogeneity, affecting appropriate content delivery even further. Thus, the objective of the presented tool is the processing of Web pages in a way that takes the aforementioned heterogeneity into account, adapting contents in order to improve performance on the Web.
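A minimal sketch of the kind of adaptation decision described, written in Python rather than the tool's Java, with invented device classes and thresholds standing in for the tool's actual rules:

```python
from dataclasses import dataclass

@dataclass
class ClientContext:
    device: str          # e.g. "desktop", "mobile"
    bandwidth_kbps: int  # estimated downstream bandwidth
    server_load: float   # 0.0 (idle) .. 1.0 (saturated)

def adapt_content(ctx: ClientContext) -> dict:
    """Pick delivery parameters for multimedia-heavy pages from the client
    context; the classes and thresholds are illustrative assumptions."""
    if ctx.device == "mobile" or ctx.bandwidth_kbps < 512:
        images = "low-resolution"
    elif ctx.server_load > 0.8:
        images = "medium-resolution"
    else:
        images = "full-resolution"
    return {"images": images,
            "video": "disabled" if ctx.bandwidth_kbps < 256 else "streaming"}

print(adapt_content(ClientContext(device="mobile", bandwidth_kbps=300, server_load=0.4)))
```

Centralizing the decision in one function of the client context makes it straightforward to extend with further characteristics (access type, user preferences) as the paper describes.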
Abstract:
In 2006 the Library of the Andalusian Public Health System (BVSSPA) was constituted as a virtual library providing resources and services accessed through the inter-hospital local area network (corporate intranet) and the Internet. At the time, however, the Hospital de la Axarquia still had no institutional presence of its own on the Internet, and the librarian identified the need to create a space for communication with the "digital users" of the area's library through a website. MATERIALS AND METHODS. The reasons we opted for a blog were: it required no financial outlay to set up; it allowed great versatility, both in its administration and in its management by users; and it made it possible to compile different Web 2.0 communication tools on the same platform. Among the different options available, we chose Google's Blogger. The blog brought 2.0 services, or the Social Web, into the library. The benefits offered were many, especially the visibility of the service and communication with the user. The 2.0 tools that have been incorporated into the library are: content syndication (RSS), which allowed users to stay informed about updates to the blog; sharing documents and other multimedia, such as presentations through SlideShare, images through Flickr or Picasa, and videos through YouTube; and a presence on social networks such as Facebook and Twitter. RESULTS. Activity has been tracked with the Google Analytics tool, which helps determine the number of blog visits. From its establishment on November 17th, 2006 until November 29th, 2010, the blog received 15,787 visitors and 38,422 page views; each visit consulted 2.4 pages on average and lasted an average of 4'31''. DISCUSSION. The blog has served as a communication and information tool with the user. Since the creation of the blog we have incorporated technologies and tools to interact with the user. With all the tools used we have applied the concept of "open source", and the contents were generated from the activities organized in the Knowledge Management Unit: the anatomo-clinical sessions, the training activities, dissemination events, etc. The result has been the customization of library services, contextualized in the Knowledge Management Unit - Axarquia. On social networks we have shared information and files with professionals and the community. CONCLUSIONS. The blog has allowed us to explore technologies that let us communicate with the user and the community, disseminate information and documents with the participation of users, and become the "Interactive Library" we aspire to be.
Abstract:
The explosive growth of the Internet in recent years has been reflected in the ever-increasing diversity and heterogeneity of user preferences and of the types and features of devices and access networks. The heterogeneity of the context of the users who request Web contents is usually not taken into account by the servers that deliver them, meaning that these contents will not always suit users' needs. In the particular case of e-learning platforms this issue is especially critical, because it puts at stake the knowledge acquired by their users. In this paper we present a system that aims to provide the dotLRN e-learning platform with the capability to adapt to its users' context. By integrating dotLRN with a multi-agent hypermedia system, the online courses being undertaken by students, as well as their learning environment, are adapted in real time.