883 results for User-generated content
Abstract:
Traditional content-based filtering methods usually rely on text extraction and classification techniques for building user profiles as well as representations of contents, i.e. item profiles. These methods have some disadvantages, e.g., a mismatch between user-profile terms and item-profile terms, leading to low performance. Some of these disadvantages can be overcome by incorporating a common ontology that enables representing both the users' and the items' profiles with concepts taken from the same vocabulary. We propose a new content-based method for filtering and ranking the relevancy of items for users, which utilizes a hierarchical ontology. The method measures the similarity of the user's profile to the items' profiles, considering the existence of mutual concepts in the two profiles, as well as the existence of "related" concepts, according to their position in the ontology. The proposed filtering algorithm computes the similarity between the users' profiles and the items' profiles, and rank-orders the relevant items according to their relevancy to each user. The method is being implemented in ePaper, a personalized electronic newspaper project, utilizing a hierarchical ontology designed specifically for classification of news items. It can, however, be utilized in other domains and extended to other ontologies.
Abstract:
This paper describes the methodology followed to automatically generate titles for a corpus of questions belonging to sociological opinion polls. Titles for questions have a twofold function: (1) they are the input of user searches, and (2) they inform about the whole contents of the question and its possible answer options. Thus, title generation can be considered a case of automatic summarization. However, the fact that summarization had to be performed over very short texts, together with the aforementioned quality conditions imposed on newly generated titles, led the authors to follow knowledge-rich and domain-dependent strategies for summarization, disregarding the more frequent extractive techniques.
Abstract:
Content creation and presentation are key activities in a multimedia digital library (MDL). The proper design and intelligent implementation of these services provide a stable base for overall MDL functionality. This paper presents the framework and the implementation of these services in the latest version of the “Virtual Encyclopaedia of Bulgarian Iconography” multimedia digital library. For the semantic description of the iconographical objects, a tree-based annotation template is implemented. It provides options for autocompletion, reuse of values, bilingual data entry, and automated media watermarking, resizing and conversion. The paper describes in detail the algorithm for the automated appearance of dependent values for different characteristics of an iconographical object. An algorithm for avoiding duplicate image objects is also included. The service for the automated appearance of new objects in a collection after their entry is included as an important part of the content presentation. The paper also presents the overall service-based architecture of the library, covering its main service panels, repositories and their relationships. The presented vision is based on a long-term observation of the users’ preferences, cognitive goals, and needs, aiming to find an optimal functionality solution for the end users.
Abstract:
The paper presents a different vision for personalizing the user’s stay in a cultural heritage digital library: it models services for personalized content marking, commenting and analyzing that do not require a strict user profile, but instead aim to adapt to the user’s individual needs. The solution is borrowed from the real-world study of traditional written content sources (incl. books and manuals), where the user mainly performs activities such as underlining the important parts of the content, writing notes and inferences, selecting and marking zones of interest in pictures, etc. Special attention is paid to the ability to execute learning analysis, allowing different ways for the user to experience the digital library content with more creative settings.
Abstract:
One of the greatest concerns related to the popularity of GPS-enabled devices and applications is the increasing availability of the personal location information generated by them and shared with application and service providers. Moreover, people tend to have regular routines and be characterized by a set of “significant places”, thus making it possible to identify a user from his/her mobility data. In this paper we present a series of techniques for identifying individuals from their GPS movements. More specifically, we study the uniqueness of GPS information for three popular datasets, and we provide a detailed analysis of the discriminatory power of speed, direction and distance of travel. Most importantly, we present a simple yet effective technique for the identification of users from location information that are not included in the original dataset used for training, thus raising important privacy concerns for the management of location datasets.
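The discriminatory power of speed, direction and distance can be illustrated with a nearest-profile matcher: summarize each user's traces by mean speed, heading and step distance, then assign an unseen trace to the closest profile. This is a deliberately simplified sketch under assumed synthetic traces, not the paper's actual identification technique.

```python
import math

def features(trace):
    # trace: list of (x, y, t) fixes; derive (speed, heading, step distance)
    feats = []
    for (x0, y0, t0), (x1, y1, t1) in zip(trace, trace[1:]):
        d = math.hypot(x1 - x0, y1 - y0)
        dt = (t1 - t0) or 1
        feats.append((d / dt, math.atan2(y1 - y0, x1 - x0), d))
    return feats

def profile(traces):
    # mean feature vector over all movements of one user
    fs = [f for t in traces for f in features(t)]
    n = len(fs)
    return tuple(sum(f[i] for f in fs) / n for i in range(3))

def identify(profiles, trace):
    # assign an unseen trace to the user with the nearest mean-feature profile
    p = profile([trace])
    return min(profiles, key=lambda u: math.dist(profiles[u], p))
```

Even this crude summary separates users whose habitual speeds differ; the paper's point is that real mobility features are discriminative enough to re-identify people.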
Abstract:
We propose a fibre-based approach for the generation of optical frequency combs (OFCs) aimed at the calibration of astronomical spectrographs in the low- and medium-resolution range. This approach includes two steps: in the first step, an appropriate state of optical pulses is generated, which is subsequently moulded in the second step to deliver the desired OFC. More precisely, the first step is realised by injection of two continuous-wave (CW) lasers into a conventional single-mode fibre, whereas the second step generates a broad OFC by using the optical solitons generated in the first step as the initial condition. We investigate the conversion of a bichromatic input wave produced by two initial CW lasers into a train of optical solitons, which occurs in the fibre used in the first step. In particular, we are interested in the soliton content of the pulses created in this fibre. For that, we study different initial conditions (a single cosine-hump, an Akhmediev breather, and a deeply modulated bichromatic wave) by means of soliton radiation beat analysis and compare the results to draw conclusions about the soliton content of the state generated in the first step. In the case of a deeply modulated bichromatic wave, we observed the formation of a collective soliton crystal at low input powers and the appearance of separated solitons at high input powers. An intermediate state, showing the features of both the soliton crystal and the separated solitons, turned out to be most suitable for the generation of OFCs for the purpose of calibrating astronomical spectrographs.
Abstract:
A framework that aims to best utilize mobile network resources for video applications is presented in this paper. The main contribution of the proposed work is a QoE-driven optimization method that can maintain a desired trade-off between fairness and efficiency in allocating resources, in terms of data rates, to video streaming users in LTE networks. This method controls the user satisfaction level from the service continuity's point of view and applies appropriate QoE metrics (Pause Intensity and variations) to determine the scheduling strategies, in combination with the mechanisms used for adaptive video streaming such as 3GP/MPEG-DASH. The superiority of the proposed algorithms is demonstrated, showing how the resources of a mobile network can be optimally utilized by using quantifiable QoE measurements. This approach can also find the best match between demand and supply in the process of network resource distribution.
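One classical reference point on the fairness-efficiency spectrum the abstract mentions is max-min fair allocation via progressive filling: repeatedly split the remaining capacity equally among users whose demand is not yet met. This generic sketch is not the paper's QoE-driven scheduler, and the demand and capacity figures are assumptions.

```python
def allocate_rates(demands, capacity):
    # max-min fair (progressive filling) allocation of a shared link:
    # each pass gives every unsatisfied user an equal share of what is left,
    # capped at that user's remaining demand.
    alloc = {u: 0.0 for u in demands}
    active = set(demands)
    remaining = capacity
    while active and remaining > 1e-9:
        share = remaining / len(active)
        for u in sorted(active):
            give = min(share, demands[u] - alloc[u])
            alloc[u] += give
            remaining -= give
        active = {u for u in active if demands[u] - alloc[u] > 1e-9}
    return alloc
```

A user with a small demand (e.g. a low-bitrate stream) is fully served, while heavy streams split the rest equally; a QoE-driven scheduler would instead weight the split by metrics such as Pause Intensity.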
Abstract:
This thesis addressed the problem of risk analysis in mental healthcare, with respect to the GRiST project at Aston University. That project provides a risk-screening tool based on the knowledge of 46 experts, captured as mind maps that describe relationships between risks and patterns of behavioural cues. Mind mapping, though, fails to impose control over content, and is not considered to formally represent knowledge. In contrast, this thesis treated GRiST's mind maps as a rich knowledge base in need of refinement; that process drew on existing techniques for designing databases and knowledge bases. Identifying well-defined mind map concepts, though, was hindered by spelling mistakes, and by ambiguity and lack of coverage in the tools used for researching words. A novel use of the Edit Distance overcame those problems, by assessing similarities between mind map texts, and between spelling mistakes and suggested corrections. That algorithm further identified stems, the shortest text string found in related word-forms. As opposed to existing approaches' reliance on built-in linguistic knowledge, this thesis devised a novel, more flexible text-based technique. An additional tool, Correspondence Analysis, found patterns in word usage that allowed machines to determine likely intended meanings for ambiguous words. Correspondence Analysis further produced clusters of related concepts, which in turn drove the automatic generation of novel mind maps. Such maps underpinned adjuncts to the mind mapping software used by GRiST; one such new facility generated novel mind maps to reflect the collected expert knowledge on any specified concept. Mind maps from GRiST are stored as XML, which suggested storing them in an XML database. In fact, the entire approach here is "XML-centric", in that all stages rely on XML as far as possible. An XML-based query language allows users to retrieve information from the mind map knowledge base.
The approach, it was concluded, will prove valuable to mind mapping in general, and to detecting patterns in any type of digital information.
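The two uses of the Edit Distance described above, suggesting corrections for spelling mistakes and identifying stems across related word-forms, can be sketched as follows. The standard Levenshtein formulation and a longest-common-prefix stem are assumptions standing in for the thesis's exact variants, and the vocabulary is illustrative.

```python
def edit_distance(a, b):
    # classic Levenshtein distance via a rolling dynamic-programming row
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,        # deletion
                           cur[j - 1] + 1,     # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def best_correction(word, vocabulary):
    # suggest the vocabulary entry closest to a misspelled mind map term
    return min(vocabulary, key=lambda v: edit_distance(word, v))

def stem(word_forms):
    # crude stem: the longest prefix shared by all related word-forms
    s = min(word_forms, key=len)
    while s and not all(w.startswith(s) for w in word_forms):
        s = s[:-1]
    return s
```

Ranking candidate corrections by distance is what lets the approach work from the texts alone, without built-in linguistic knowledge.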
Abstract:
The purpose of this study was to determine whether an experimental context-based delivery format for mathematics would be more effective than a traditional model for increasing the performance in mathematics of at-risk students in a public high school of choice, as evidenced by significant gains in achievement on the standards-based Mathematics subtest of the FCAT and final academic grades in Algebra I. The guiding rationale for this approach is captured in the Secretary's Commission on Achieving Necessary Skills (SCANS) report of 1992 that resulted in school-to-work initiatives (United States Department of Labor). Also, the charge for educational reform has been codified at the state level in the Educational Accountability Act of 1971 (Florida Statutes, 1995) and at the national level as embodied in the No Child Left Behind Act of 2001. A particular focus of educational reform is low-performing, at-risk students. This dissertation explored the effects of a context-based curricular reform designed to enhance the content of Algebra I, utilizing a research design consisting of two delivery models: a traditional content-based course; and a thematically structured, content-based course. In this case, the thematic element was business education, as many advocates in career education assert that this format engages students who are often otherwise disinterested in mathematics in a relevant, SCANS-skills setting. The subjects in each supplementary course were ninth-grade students who were both low performers in eighth-grade mathematics and who had not passed the eighth-grade administration of the standards-based FCAT Mathematics subtest. The sample size was limited to two groups of 25 students and two teachers. The site for this study was a public charter school. Student-generated performance data were analyzed using descriptive statistics.
Results indicated that, contrary to the beliefs held by many, contextual presentation of content did not cause significant gains in either academic performance or test performance for those in the experimental treatment group. Further, results indicated that there was no meaningful difference in performance between the two groups.
Abstract:
The rapid growth of the Internet and the advancement of Web technologies have made it possible for users to access large amounts of online music data, including music acoustic signals, lyrics, style/mood labels, and user-assigned tags. This progress has made music listening more fun, but has raised the issue of how to organize this data and, more generally, how computer programs can assist users in their music experience. An important subject in computer-aided music listening is music retrieval, i.e., the issue of efficiently helping users locate the music they are looking for. Traditionally, songs were organized in a hierarchical structure such as genre->artist->album->track to facilitate the users’ navigation. However, the intentions of the users are often hard to capture in such a simply organized structure. The users may want to listen to music of a particular mood, style or topic, and/or any songs similar to some given music samples. This motivated us to work on a user-centric music retrieval system to improve users’ satisfaction with the system. Traditional music information retrieval research was mainly concerned with classification, clustering, identification, and similarity search of acoustic music data by way of feature extraction algorithms and machine learning techniques. More recently, music information retrieval research has focused on utilizing other types of data, such as lyrics, user-access patterns, and user-defined tags, and on targeting non-genre categories for classification, such as mood labels and styles. This dissertation focused on investigating and developing effective data mining techniques for (1) organizing and annotating music data with styles, moods and user-assigned tags; (2) performing effective analysis of music data with features from diverse information sources; and (3) recommending songs to users utilizing both content features and user-access patterns.
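The third goal, recommending songs from both content features and access patterns, can be sketched as a blended scorer: cosine similarity between a user's aggregated tag profile and each song's tag vector, mixed with a popularity term from play counts. The weighting scheme, tag vocabulary and `alpha` blend are illustrative assumptions, not the dissertation's actual models.

```python
import math

def cosine(u, v):
    # cosine similarity between two sparse tag-weight dictionaries
    dot = sum(u.get(k, 0.0) * v.get(k, 0.0) for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def user_vector(history, song_tags):
    # aggregate the tag vectors of every song the user has listened to
    prof = {}
    for song in history:
        for tag, w in song_tags[song].items():
            prof[tag] = prof.get(tag, 0.0) + w
    return prof

def recommend(history, song_tags, play_counts, alpha=0.7, k=3):
    # blend content similarity with normalized play counts (access patterns)
    prof = user_vector(history, song_tags)
    total = sum(play_counts.values()) or 1
    scores = {}
    for song, tags in song_tags.items():
        if song in history:
            continue
        scores[song] = (alpha * cosine(prof, tags)
                        + (1 - alpha) * play_counts.get(song, 0) / total)
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

With `alpha` near 1 the recommender is purely content-driven; lowering it lets globally popular songs surface even when their tags differ from the user's profile.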
Abstract:
Elemental and isotopic composition of leaves of the seagrass Thalassia testudinum was highly variable across the 10,000 km2 and 8 years of this study. The data reported herein expand the ranges reported for this species worldwide: 13.2–38.6 for carbon:nitrogen (C:N) and 411–2,041 for carbon:phosphorus (C:P). The 981 determinations in this study generated a range of −13.5‰ to −5.2‰ for δ13C and −4.3‰ to 9.4‰ for δ15N. The elemental and isotope ratios displayed marked seasonality, and the seasonal patterns could be described with a simple sine wave model. C:N, C:P, δ13C, and δ15N values all had maxima in the summer and minima in the winter. Spatial patterns in the summer maxima of these quantities suggest there are large differences in the relative availability of N and P across the study area, and that there are differences in the processing and the isotopic composition of C and N. This work calls into question the interpretation of studies about nutrient cycling and food webs in estuaries based on few samples collected at one time, since we document natural variability greater than the signal often used to imply changes in the structure or function of ecosystems. The data and patterns presented in this paper make it clear that there is no threshold δ15N value for marine plants that can be used as an unambiguous indicator of human sewage pollution without a thorough understanding of local temporal and spatial variability.
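The simple sine wave seasonal model mentioned above can be fit by ordinary least squares, since A·sin(ωt + φ) rewrites as b·sin(ωt) + c·cos(ωt), which is linear in (b, c). The function names and synthetic data below are illustrative; the paper's fitting procedure may differ in detail.

```python
import math

def solve3(A, y):
    # Gauss-Jordan elimination with partial pivoting for a 3x3 linear system
    m = [row[:] + [b] for row, b in zip(A, y)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[col])]
    return [m[i][3] / m[i][i] for i in range(3)]

def fit_seasonal_sine(days, values, period=365.0):
    # least-squares fit of y ~ a + b*sin(wt) + c*cos(wt) via normal equations,
    # then convert (b, c) back to amplitude and phase of A*sin(wt + phi)
    w = 2 * math.pi / period
    X = [(1.0, math.sin(w * t), math.cos(w * t)) for t in days]
    ata = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
    aty = [sum(r[i] * v for r, v in zip(X, values)) for i in range(3)]
    a, b, c = solve3(ata, aty)
    return a, math.hypot(b, c), math.atan2(c, b)
```

The fitted mean, amplitude and phase directly express the summer maxima and winter minima the abstract reports for C:N, C:P, δ13C and δ15N.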
Abstract:
The design of interfaces that facilitate user search has become critical for search engines, e-commerce sites, and intranets. This study investigated the use of targeted instructional hints to improve search, measuring quantitative effects on users' performance and satisfaction. The effects of syntactic, semantic and exemplar search hints on user behavior were evaluated in an empirical investigation using naturalistic scenarios. Combining the three search hint components, each with two levels of intensity, in a factorial design generated eight search engine interfaces. Eighty participants took part in the study, each completing six realistic search tasks. Results revealed that the inclusion of search hints improved user effectiveness, efficiency and confidence when using the search interfaces, but with complex interactions that require specific guidelines for search interface designers. These design guidelines will allow search designers to create more effective interfaces for a variety of search applications.
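The 2x2x2 factorial design that yields the eight interfaces can be enumerated mechanically. The level labels ("low"/"high") are assumed names for the two intensity levels; the three hint types come from the abstract.

```python
from itertools import product

HINT_TYPES = ("syntactic", "semantic", "exemplar")
LEVELS = ("low", "high")  # assumed labels for the two intensity levels

def factorial_interfaces():
    # full factorial: one search interface per combination of hint levels,
    # 2 * 2 * 2 = 8 experimental cells
    return [dict(zip(HINT_TYPES, combo)) for combo in product(LEVELS, repeat=3)]
```

Crossing all factors, rather than varying one hint at a time, is what allows the study to detect the interaction effects it reports.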
Abstract:
Drilling a transect of holes across the Costa Rica forearc during ODP Leg 170 demonstrated the margin wedge to be of continental, non-accretionary origin, intersected by permeable thrust faults. Pore waters from four drillholes, two of which penetrated the décollement zone and reached the underthrust lower-plate sedimentary sequence of the Cocos Plate, were examined for boron contents and boron isotopic signatures. The combined results show dilution of the uppermost sedimentary cover of the forearc, with boron contents lower than half of present-day seawater values. Pore fluid "refreshening" suggests that gas hydrate water has been mixed with the sediment interstitial water, without profoundly affecting the δ11B values. Fault-related flux of a deeply generated fluid is inferred from high B concentration in the interval beneath the décollement, being released from the underthrust sequence with incipient burial. First-order fluid budget calculations over a cross-section of the Costa Rica forearc, based on the boron fluid profiles, indicate no significant fluid transfer from the lower to the upper plate, at least within the frontal 40 km studied. Expelled lower-plate pore water, estimated at 0.26-0.44 km3 per km of trench, is conducted efficiently along and just beneath the décollement zone, indicating effective shear-enhanced compaction. In the upper-plate forearc wedge, dewatering occurs as diffuse transport as well as channelled flow. A volume of approximately 2 km3 per km of trench is expelled due to compaction and, to a lesser extent, lateral shortening. Pore water chemistry is influenced by gas hydrate instability, so it remains unknown whether deep processes like mineral dehydration or hydrocarbon formation may play a considerable role towards the hinterland.
Abstract:
Today's environment is one in which the media undergo structural and functional changes every day, forcing them to rethink their actions and reinvent their uses and communication schemes. Hence the importance of this mixed quantitative and qualitative investigation, which relied on social media monitoring and content analysis. It detected how the media respond to being immersed in a world that has moved from a print culture to a screen culture, where new social practices emerge that create devices tailored to each individual, elements that combine text, audio and video to configure new media in which other forms of interaction are generated and the professional work of the social communicator and journalist is reconsidered.
Abstract:
Advertising investment and audience figures indicate that television continues to lead as a mass advertising medium. However, its effectiveness is questioned due to problems such as zapping, saturation and audience fragmentation. This has favoured the development of non-conventional advertising formats. This study provides empirical evidence for this theoretical development. The investigation analyzes the recall generated by four non-conventional advertising formats in a real environment: short programme (branded content), television sponsorship, and internal and external telepromotion, versus the more conventional spot. The methodology integrated secondary data with primary data from computer-assisted telephone interviews (CATI) performed ad hoc on a sample of 2,000 individuals, aged 16 to 65, representative of the total television audience. Our findings show that non-conventional advertising formats are more effective at a cognitive level, as all the analyzed formats generate higher levels of both unaided and aided recall when compared to the spot.