949 results for user generated content


Relevance:

30.00%

Publisher:

Abstract:

The Cenomanian/Turonian (C/T) intervals at DSDP Sites 105 and 603B from the northern part of the proto-North Atlantic show high-amplitude, short-term cyclic variations in total organic carbon (TOC) content. The more pronounced changes in TOC are also reflected by changes in lithology from green claystones (TOC<1%) to black claystones (TOC>1%). Although their depositional history was different, the individual TOC cycles at Sites 105 and 603B can be correlated using stable carbon isotope stratigraphy. Sedimentation rates obtained from the isotope stratigraphy and spectral analyses indicate that these cycles were predominantly precession controlled. The coinciding variations in HI, OI, δ13Corg and the abundance of marine relative to terrestrial biomarkers, as well as the low abundance of lignin pyrolysis products generated from the kerogen of the black claystones, indicate that these cyclic variations reflect changes in the contribution of marine organic matter (OM). The co-occurrence of lamination, enrichment of redox-sensitive trace metals and the presence of molecular fossils of pigments from green sulfur bacteria indicates that the northern proto-North Atlantic Ocean water column was periodically euxinic from the bottom to at least the base of the photic zone (<150 m) during the deposition of the black claystones. In contrast, the green claystones are bioturbated, are enriched in Mn, do not show enrichments in redox-sensitive trace metals and show biomarker distributions indicative of long oxygen exposure times, indicating more oxic water conditions. At the same time, there is evidence (e.g., the abundance of biogenic silica and the significant 13C-enrichment of OC of phytoplanktic origin) for enhanced primary productivity during the deposition of the black claystones. We propose that increased primary productivity periodically overwhelmed the oxic OM remineralisation potential of the bottom waters, resulting in the deposition of OM-rich black claystones. Because the amount of oxygen used for OM remineralisation exceeded the amount supplied by diffusion and deep-water circulation, the northern proto-North Atlantic became euxinic during these periods. Both Sites 105 and 603B show trends of continually increasing TOC contents and HI values of the black claystones up section, which most likely resulted from both enhanced preservation due to increased anoxia and increased production of marine OM during oceanic anoxic event 2 (OAE2).

Relevance:

30.00%

Publisher:

Abstract:

Collaborative recommendation is one of the most widely used recommendation approaches; it recommends items to a visitor by referring to the preferences of other users who are similar to the current user. User profiling based on Web transaction data can capture such informative knowledge of user tasks and interests. With the discovered usage pattern information, it becomes possible to recommend more relevant content to Web users, or to customize the Web presentation for visitors, via collaborative recommendation. It also helps to identify the underlying relationships among Web users, items and latent tasks during Web mining. In this paper, we propose a Web recommendation framework based on user profiling. In this approach, we employ Probabilistic Latent Semantic Analysis (PLSA) to model the co-occurrence activities and develop a modified k-means clustering algorithm to build user profiles as representatives of usage patterns. Moreover, a hidden task model is derived by characterizing the meaningful latent factor space. Given the discovered user profiles, we then choose the best-matched profile, i.e. the one whose preferences are most similar to those of the current user, and make collaborative recommendations based on the page weights appearing in the selected profile. Preliminary experimental results on real-world data sets show that the proposed approach is capable of making recommendations accurately and efficiently.
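
To make the final recommendation step more concrete, here is a minimal sketch (not the paper's implementation) of how a visitor's session might be matched against pre-built usage-pattern profiles, with unseen pages then recommended by their weights in the best-matched profile. The cosine-similarity matching, the `recommend` helper and the toy data are all assumptions for illustration.

```python
import numpy as np

def recommend(session_vec, profiles, top_n=5):
    """Match the current session against usage-pattern profiles (cosine
    similarity) and recommend unseen pages by their weight in the
    best-matched profile."""
    sims = profiles @ session_vec / (
        np.linalg.norm(profiles, axis=1) * np.linalg.norm(session_vec) + 1e-12)
    best = profiles[np.argmax(sims)]        # closest usage pattern
    ranked = np.argsort(best)[::-1]         # pages ordered by profile weight
    unseen = [int(p) for p in ranked if session_vec[p] == 0]
    return unseen[:top_n]

# toy data: 3 profiles over 6 pages; the visitor has viewed pages 0 and 2
profiles = np.array([[0.4, 0.3, 0.2, 0.1, 0.0, 0.0],
                     [0.0, 0.1, 0.1, 0.3, 0.3, 0.2],
                     [0.3, 0.0, 0.4, 0.0, 0.2, 0.1]])
session = np.array([1.0, 0.0, 1.0, 0.0, 0.0, 0.0])
print(recommend(session, profiles))         # unseen pages ranked by the matched profile
```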

Relevance:

30.00%

Publisher:

Abstract:

The contributions of this research fall into three distinct, but related, areas. The focus of the work is on improving the efficiency of video content distribution over networks that are liable to packet loss, such as the Internet. Initially, the benefits and limitations of content distribution using Forward Error Correction (FEC) in conjunction with the Transmission Control Protocol (TCP) are presented. Since added FEC can be used to reduce the number of retransmissions, the requirement for TCP to deal with any losses is greatly reduced. For real-time applications, delay must be kept to a minimum and retransmissions are undesirable; a balance must therefore be struck between additional bandwidth and delays due to retransmissions. This is followed by the proposal of a hybrid transport, specifically for H.264 encoded video, as a compromise between the delay-prone TCP and the loss-prone UDP. It is argued that the playback quality at the receiver often need not be 100% perfect, provided a certain level is assured. Reliable TCP is used to transmit and guarantee delivery of the most important packets. The delay associated with the proposal is measured, and its potential as an alternative to the conventional methods of transporting video by either TCP or UDP alone is demonstrated. Finally, a new objective measurement is investigated for assessing the playback quality of video transported using TCP. A new metric is defined to characterise the quality of playback in terms of its continuity. Using packet traces generated from real TCP connections in a lossy environment, the playback of a video can be simulated whilst monitoring buffer behaviour to calculate pause intensity values. Subjective tests are conducted to verify the effectiveness of the introduced metric and show that the objective and subjective scores are closely correlated.
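
As an illustration of the continuity idea behind the final contribution, the sketch below simulates playback from a trace of packet arrival times and records how often and for how long the playout buffer stalls. The simulation parameters and the way pause duration and frequency are combined at the end are assumptions for illustration, not the paper's actual definition of pause intensity.

```python
def playback_pauses(arrival_times, frame_interval=0.04, startup=1.0):
    """Toy playback simulation: each packet carries one frame; frames are
    consumed every `frame_interval` seconds after `startup` seconds of
    initial buffering. Returns (total_pause_time, pause_count,
    pause_time * pause_count / playback_duration); the last value is only
    an illustrative stand-in for a pause intensity score."""
    buffered, next_pkt = 0, 0
    clock, pauses, pause_time, stalled = startup, 0, 0.0, False
    while next_pkt < len(arrival_times) or buffered > 0:
        # admit every packet that has arrived by the current clock
        while next_pkt < len(arrival_times) and arrival_times[next_pkt] <= clock:
            buffered += 1
            next_pkt += 1
        if buffered > 0:
            buffered -= 1                 # play one frame
            clock += frame_interval
            stalled = False
        else:
            if not stalled:               # buffer underrun starts a pause
                pauses += 1
                stalled = True
            wait = arrival_times[next_pkt] - clock
            pause_time += wait
            clock += wait
    duration = clock - startup
    return pause_time, pauses, pause_time * pauses / duration

# trace with a gap between 0.5 s and 2.0 s that forces a playback stall
print(playback_pauses([0.0, 0.05, 0.10, 0.5, 2.0, 2.01, 2.02]))
```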

Relevance:

30.00%

Publisher:

Abstract:

A variety of content-based image retrieval systems exist that enable users to perform image retrieval based on colour content, i.e., colour-based image retrieval. For the production of media for use in television and film, colour-based image retrieval is useful for retrieving specifically coloured animations, graphics or videos from large databases (by comparing user queries to the colour content of extracted key frames). It is also useful to graphic artists creating realistic computer-generated imagery (CGI). Unfortunately, current methods for evaluating colour-based image retrieval systems have two major drawbacks. Firstly, the relevance of images retrieved during the task cannot be measured reliably. Secondly, existing methods do not account for the creative design activity known as reflection-in-action. Consequently, the development and application of novel and potentially more effective colour-based image retrieval approaches, better supporting the large number of users creating media for television and film productions, is not possible, as their efficacy cannot be reliably measured and compared with existing technologies. As a solution to this problem, this paper introduces the Mosaic Test. The Mosaic Test is a user-based evaluation approach in which participants complete an image mosaic of a predetermined target image using the colour-based image retrieval system that is being evaluated. In this paper, we introduce the Mosaic Test and report on a user evaluation. The findings of the study reveal that the Mosaic Test overcomes the two major drawbacks associated with existing evaluation methods and does not require expert participants. © 2012 Springer Science+Business Media, LLC.

Relevance:

30.00%

Publisher:

Abstract:

The projected decline in fossil fuel availability, environmental concerns, and security of supply attract increased interest in renewable energy derived from biomass. Fast pyrolysis is a possible thermochemical conversion route for the production of bio-oil, with promising advantages. The purpose of the experiments reported in this thesis was to extend our understanding of the fast pyrolysis process for straw, perennial grasses and hardwoods, and the implications of selective pyrolysis, crop harvest and storage on the thermal decomposition products. To this end, characterisation and laboratory-scale fast pyrolysis were conducted on the available feedstocks, and their products were compared. The variation in light and medium volatile decomposition products was investigated at different pyrolysis temperatures and heating rates, and a comparison of fast and slow pyrolysis products was conducted. Feedstocks from different harvests, storage durations and locations were characterised and compared in terms of their fuel and chemical properties. A range of analytical (e.g. Py-GC-MS and TGA) and processing equipment (0.3 kg/h and 1.0 kg/h fast pyrolysis reactors and 0.15 kg slow pyrolysis reactor) was used. Findings show that the high bio-oil and char heating value, and low water content of willow short rotation coppice (SRC) make this crop attractive for fast pyrolysis processing compared to the other investigated feedstocks in this project. From the analytical sequential investigation of willow SRC, it was found that the volatile product distribution can be tailored to achieve a better final product, by a variation of the heating rate and temperature. Time of harvest was most influential on the fuel properties of miscanthus; overall the late harvest produced the best fuel properties (high HHV, low moisture content, high volatile content, low ash content), and storage of the feedstock reduced the moisture and acid content.

Relevance:

30.00%

Publisher:

Abstract:

A large number of studies have been devoted to modeling the content of, and interactions between, users on Twitter. In this paper, we propose a method inspired by Social Role Theory (SRT), which assumes that a user behaves differently in different roles during the generation of Twitter content. We consider the two most distinctive social roles on Twitter: the originator, who posts original messages, and the propagator, who retweets or forwards messages from others. In addition, we also consider role-specific social interactions, especially implicit interactions between users who share some common interests. All of the above elements are integrated into a novel regularized topic model. We evaluate the proposed method on real Twitter data. The results show that our method is more effective than existing methods that do not distinguish social roles. Copyright 2013 ACM.

Relevance:

30.00%

Publisher:

Abstract:

The Teallach project has adapted model-based user-interface development techniques to the systematic creation of user-interfaces for object-oriented database applications. Model-based approaches aim to provide designers with a more principled approach to user-interface development using a variety of underlying models, and tools which manipulate these models. Here we present the results of the Teallach project, describing the tools developed and the flexible design method supported. Distinctive features of the Teallach system include provision of database-specific constructs, comprehensive facilities for relating the different models, and support for a flexible design method in which models can be constructed and related by designers in different orders and in different ways, to suit their particular design rationales. The system then creates the desired user-interface as an independent, fully functional Java application, with automatically generated help facilities.

Relevance:

30.00%

Publisher:

Abstract:

Learning user interests from online social networks helps to better understand user behaviors and provides useful guidance for designing user-centric applications. Apart from analyzing users' online content, it is also important to consider users' social connections in the social Web. Graph regularization methods have been widely used in various text mining tasks, as they can leverage the graph structure information extracted from data. Previously, graph regularization methods have operated under the cluster assumption: nearby nodes are more similar, and nodes on the same structure (typically referred to as a cluster or a manifold) are likely to be similar. We argue that learning user interests from complex, sparse, and dynamic social networks should instead be based on the link structure assumption, under which node similarities are evaluated based on local link structures rather than on explicit links between two nodes. We propose a regularization framework based on the relation bipartite graph, which can be constructed from any type of relation. Using Twitter as our case study, we evaluate the proposed framework on social networks built from retweet relations. Both quantitative and qualitative experiments show that our proposed method outperforms a number of competitive baselines in learning user interests over a set of predefined topics. It also gives superior results compared to the baselines on retweet prediction and topical authority identification. © 2014 ACM.
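
A minimal sketch of the link structure idea, under the assumption that retweeted authors act as the relation nodes of a bipartite graph: two users are scored as similar when their local neighbourhoods in that graph overlap, even if no explicit edge connects them. The function names and toy data are illustrative, not the paper's construction.

```python
from collections import defaultdict
import math

def build_bipartite(retweets):
    """retweets: iterable of (retweeter, original_author) pairs. Each
    original author is treated as a relation node; users who retweet them
    are linked to that node (a simplified stand-in for the relation
    bipartite graph)."""
    user_to_relations = defaultdict(set)
    for retweeter, author in retweets:
        user_to_relations[retweeter].add(author)
    return user_to_relations

def link_structure_similarity(u, v, graph):
    """Score similarity from shared local link structure (common relation
    nodes) rather than from an explicit edge between u and v."""
    a, b = graph.get(u, set()), graph.get(v, set())
    if not a or not b:
        return 0.0
    return len(a & b) / math.sqrt(len(a) * len(b))

retweets = [("alice", "bbc"), ("alice", "nasa"),
            ("bob", "nasa"), ("bob", "bbc"), ("carol", "fifa")]
g = build_bipartite(retweets)
print(link_structure_similarity("alice", "bob", g))    # 1.0: identical neighbourhoods
print(link_structure_similarity("alice", "carol", g))  # 0.0: no shared relation nodes
```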

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we explore the idea of social role theory (SRT) and propose a novel regularized topic model which incorporates SRT into the generative process of social media content. We assume that a user can play multiple social roles, and that each social role serves to fulfil different duties and is associated with a role-driven distribution over latent topics. In particular, we focus on social roles corresponding to the most common social activities on social networks. Our model is instantiated on microblogs, i.e., Twitter, and community question-answering (cQA), i.e., Yahoo! Answers, where the social roles on Twitter are "originators" and "propagators", and the roles on cQA are "askers" and "answerers". Both explicit and implicit interactions between users are taken into account and modeled as regularization factors. To evaluate the performance of our proposed method, we have conducted extensive experiments on two Twitter datasets and two cQA datasets. Furthermore, we also consider multi-role modeling for scientific papers, where an author's research expertise area is treated as a social role. A novel application of detecting users' research interests through topical keyword labeling, based on the results of our multi-role model, is also presented. The evaluation results show the feasibility and effectiveness of our model.
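
The following toy fragment illustrates the general shape of such a regularization: a topic model's data log-likelihood is penalised by a term that pulls the role-specific topic distributions of interacting users towards each other. The quadratic penalty, the `lam` weight and the data layout are assumptions for illustration only, not the paper's actual objective.

```python
import numpy as np

def regularized_objective(theta, data_log_lik, interactions, lam=0.1):
    """Topic-model objective with a social-interaction regularizer: the data
    log-likelihood minus a penalty pulling the role-specific topic
    distributions of interacting users towards each other.

    theta        : dict mapping (user, role) -> topic-distribution vector
    data_log_lik : log-likelihood of the observed content (scalar)
    interactions : iterable of ((user, role), (user, role)) pairs, covering
                   explicit links (e.g. retweets, answers) and implicit ones
                   (e.g. shared interests)
    """
    penalty = sum(float(np.sum((theta[a] - theta[b]) ** 2))
                  for a, b in interactions)
    return data_log_lik - lam * penalty

# toy example: an originator and the propagator who retweets them
theta = {("u1", "originator"): np.array([0.7, 0.2, 0.1]),
         ("u2", "propagator"): np.array([0.5, 0.3, 0.2])}
links = [(("u1", "originator"), ("u2", "propagator"))]
print(regularized_objective(theta, data_log_lik=-120.0, interactions=links))
```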

Relevance:

30.00%

Publisher:

Abstract:

Recent UK government initiatives aim to increase user involvement in the National Health Service (NHS) in two ways: by encouraging service users to take an active role in making decisions about their own care; and by establishing opportunities for wider public participation in service development. The purpose of this study was to examine how UK cancer service users understand and relate to the concept of user involvement. The data were collected through in-depth interviews, which were analysed for content according to the principles of grounded theory. The results highlight the role of information and communication in effective user involvement. Perhaps more importantly, this study suggests that the concept of user involvement is unclear to many cancer service users. This paper argues the need for increased awareness and understanding of what user involvement is and how it can work.

Relevance:

30.00%

Publisher:

Abstract:

Traditional content-based filtering methods usually utilize text extraction and classification techniques for building user profiles as well as representations of contents, i.e. item profiles. These methods have some disadvantages, e.g. a mismatch between user profile terms and item profile terms, leading to low performance. Some of these disadvantages can be overcome by incorporating a common ontology, which enables both the users' and the items' profiles to be represented with concepts taken from the same vocabulary. We propose a new content-based method for filtering and ranking the relevancy of items for users, which utilizes a hierarchical ontology. The method measures the similarity of the user's profile to the items' profiles, considering the existence of mutual concepts in the two profiles, as well as the existence of "related" concepts, according to their position in the ontology. The proposed filtering algorithm computes the similarity between the users' profiles and the items' profiles, and rank-orders the relevant items according to their relevancy to each user. The method is being implemented in ePaper, a personalized electronic newspaper project, utilizing a hierarchical ontology designed specifically for the classification of news items. It can, however, be utilized in other domains and extended to other ontologies.
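
To illustrate the kind of ontology-aware matching described above, here is a minimal sketch in which exact concept matches between a user profile and an item profile score fully, while concepts related through the hierarchy (here, only direct parent/child links) score a discounted weight. The weights, the one-level notion of "related" and the toy ontology are assumptions, not the ePaper implementation.

```python
def ontology_similarity(user_concepts, item_concepts, parent, related_weight=0.5):
    """Toy similarity between a user profile and an item profile over a
    hierarchical ontology. An exact concept match scores 1.0; a concept whose
    parent (or child) appears in the other profile scores `related_weight`.
    `parent` maps each concept to its parent in the hierarchy."""
    score = 0.0
    for c in user_concepts:
        if c in item_concepts:
            score += 1.0
        elif parent.get(c) in item_concepts or any(parent.get(i) == c for i in item_concepts):
            score += related_weight
    # normalise so longer profiles do not automatically score higher
    return score / max(len(user_concepts), 1)

parent = {"football": "sport", "tennis": "sport", "sport": "news", "politics": "news"}
user = {"football", "politics"}
item = {"sport", "politics"}                     # item tagged with the broader concept
print(ontology_similarity(user, item, parent))   # 0.75: one exact + one related match
```

Ranking then amounts to computing this score for every candidate item and ordering the items by it for each user.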

Relevance:

30.00%

Publisher:

Abstract:

This paper describes the methodology followed to automatically generate titles for a corpus of questions that belong to sociological opinion polls. Titles for questions have a twofold function: (1) they are the input of user searches, and (2) they inform about the whole contents of the question and its possible answer options. Thus, the generation of titles can be considered a case of automatic summarization. However, the fact that summarization had to be performed over very short texts, together with the aforementioned quality conditions imposed on the newly generated titles, led the authors to follow knowledge-rich and domain-dependent strategies for summarization, disregarding the more common extractive techniques.

Relevance:

30.00%

Publisher:

Abstract:

Content creation and presentation are key activities in a multimedia digital library (MDL). The proper design and intelligent implementation of these services provide a stable base for overall MDL functionality. This paper presents the framework and the implementation of these services in the latest version of the “Virtual Encyclopaedia of Bulgarian Iconography” multimedia digital library. For the semantic description of the iconographical objects, a tree-based annotation template is implemented. It provides options for autocompletion, reuse of values, bilingual data entry, and automated media watermarking, resizing and conversion. The paper describes in detail the algorithm for the automated appearance of dependent values for different characteristics of an iconographical object. An algorithm for avoiding duplicate image objects is also included. The service for the automated appearance of new objects in a collection after they are entered is included as an important part of the content presentation. The paper also presents the overall service-based architecture of the library, covering its main service panels and repositories and their relationships. The presented vision is based on a long-term observation of the users’ preferences, cognitive goals, and needs, aiming to find an optimal functionality solution for the end users.

Relevance:

30.00%

Publisher:

Abstract:

The paper presents a different vision for the personalization of a user’s stay in a cultural heritage digital library: it models services for personalized content marking, commenting and analysis that do not require a strict user profile but instead aim to adjust to the user’s individual needs. The solution is borrowed from the way people work with and study traditional written content sources (including books and manuals), where the user mainly performs activities such as underlining the important parts of the content, writing notes and inferences, and selecting and marking zones of interest in pictures. Special attention is paid to the ability to perform learning analysis, allowing different ways for the user to experience the digital library content with more creative settings.

Relevance:

30.00%

Publisher:

Abstract:

One of the greatest concerns related to the popularity of GPS-enabled devices and applications is the increasing availability of the personal location information generated by them and shared with application and service providers. Moreover, people tend to have regular routines and to be characterized by a set of “significant places”, thus making it possible to identify a user from his/her mobility data. In this paper we present a series of techniques for identifying individuals from their GPS movements. More specifically, we study the uniqueness of GPS information for three popular datasets, and we provide a detailed analysis of the discriminatory power of speed, direction and distance of travel. Most importantly, we present a simple yet effective technique for the identification of users from location information that is not included in the original dataset used for training, thus raising important privacy concerns for the management of location datasets.
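
As an illustration of the kind of features analysed, the sketch below derives per-segment distance, speed and bearing from a sequence of GPS fixes using the haversine formula; the exact feature definitions and the toy trace are assumptions, and the paper's own feature extraction may differ.

```python
import math

def haversine_km(p, q):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def trajectory_features(fixes):
    """fixes: list of (lat, lon, unix_time) sorted by time. Returns one
    (distance_km, speed_kmh, bearing_deg) tuple per segment -- the kinds of
    features whose discriminatory power is analysed for identification."""
    feats = []
    for (lat1, lon1, t1), (lat2, lon2, t2) in zip(fixes, fixes[1:]):
        d = haversine_km((lat1, lon1), (lat2, lon2))
        hours = max((t2 - t1) / 3600.0, 1e-9)
        # initial bearing of the segment, in degrees clockwise from north
        y = math.sin(math.radians(lon2 - lon1)) * math.cos(math.radians(lat2))
        x = (math.cos(math.radians(lat1)) * math.sin(math.radians(lat2))
             - math.sin(math.radians(lat1)) * math.cos(math.radians(lat2))
             * math.cos(math.radians(lon2 - lon1)))
        bearing = (math.degrees(math.atan2(y, x)) + 360) % 360
        feats.append((d, d / hours, bearing))
    return feats

# toy trace: three fixes ten and twenty minutes apart
fixes = [(52.48, -1.89, 0), (52.49, -1.88, 600), (52.52, -1.85, 1800)]
print(trajectory_features(fixes))
```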