828 results for User generated contents (UGC)
Abstract:
This paper describes the methodology followed to automatically generate titles for a corpus of questions belonging to sociological opinion polls. Titles for questions have a twofold function: (1) they are the input of user searches, and (2) they summarize the whole content of the question and its possible answer options. Title generation can therefore be considered a case of automatic summarization. However, because summarization had to be performed over very short texts, and because of the quality conditions imposed on the newly generated titles, the authors followed knowledge-rich, domain-dependent strategies for summarization rather than the more common extractive techniques.
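A knowledge-rich, domain-dependent strategy of the kind described can be pictured as template filling over structured question metadata. The sketch below is purely illustrative: the template set, field names, and answer-type categories are assumptions, not the authors' system.

```python
# Hypothetical sketch of template-based title generation for poll questions.
# Templates and answer-type categories are illustrative assumptions only.

def generate_title(topic, answer_type):
    """Build a short, search-friendly title from structured question metadata."""
    templates = {
        "frequency": "How often: {topic}",
        "opinion": "Opinion on {topic}",
        "yes_no": "Agreement with {topic}",
    }
    # Fall back to a generic template when the answer type is unknown.
    template = templates.get(answer_type, "Question about {topic}")
    return template.format(topic=topic)

print(generate_title("government spending", "opinion"))
# -> Opinion on government spending
```

Because such a generator works from domain knowledge (the question's topic and answer scheme) rather than extracting sentences, it remains usable on the very short texts that defeat extractive summarizers.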
Abstract:
One of the greatest concerns related to the popularity of GPS-enabled devices and applications is the increasing availability of the personal location information they generate and share with application and service providers. Moreover, people tend to have regular routines and to be characterized by a set of "significant places", making it possible to identify a user from his or her mobility data. In this paper we present a series of techniques for identifying individuals from their GPS movements. More specifically, we study the uniqueness of GPS information in three popular datasets and provide a detailed analysis of the discriminatory power of speed, direction, and distance of travel. Most importantly, we present a simple yet effective technique for identifying users from location information that is not included in the original dataset used for training, raising important privacy concerns for the management of location datasets.
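The three discriminatory features the abstract analyzes (speed, direction, and distance of travel) can be derived from consecutive GPS fixes. The sketch below is a minimal assumption-laden illustration, not the paper's code: it uses the haversine formula for distance and a flat-earth bearing approximation.

```python
import math

# Illustrative derivation of speed, direction, and distance-of-travel
# features from consecutive GPS fixes. Feature names are assumptions.

EARTH_RADIUS_M = 6_371_000

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def movement_features(fixes):
    """fixes: list of (timestamp_s, lat, lon); returns one feature dict per segment."""
    feats = []
    for (t1, la1, lo1), (t2, la2, lo2) in zip(fixes, fixes[1:]):
        d = haversine_m(la1, lo1, la2, lo2)
        dt = max(t2 - t1, 1e-9)  # guard against zero time deltas
        # Flat-earth bearing approximation; adequate for short segments.
        bearing = math.degrees(math.atan2(lo2 - lo1, la2 - la1)) % 360
        feats.append({"distance_m": d, "speed_mps": d / dt, "direction_deg": bearing})
    return feats

# Example: one minute travelling 0.01 degrees of longitude along the equator.
feats = movement_features([(0, 0.0, 0.0), (60, 0.0, 0.01)])
```

Per-segment feature vectors like these can then be aggregated per user and compared across traces to measure how discriminative each dimension is.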
Abstract:
Providing different levels of customization of the information is a desirable goal, because some Web users are able to design a visualization template from scratch, while others need visualizations generated automatically by changing a few parameters. Our system supports both the automatic generation of visualizations from the semantics of the data and static, pre-specified visualizations through an interface language. We address information visualization on the Web, where the presentation of retrieved information is a challenge.

We provide a model that narrows the gap between the user's way of expressing queries and database manipulation languages (SQL) without changing the system itself, thus improving the query-specification process. We develop a Web interface model integrated with HTML to create a powerful language that facilitates the construction of Web-based database reports.

Unlike other approaches, this model offers a new way of exploring databases, providing Web connectivity with minimal or no result buffering, formatting, or extra programming. We describe how to connect a database to the Web easily. In addition, we offer an enhanced way of viewing and exploring the contents of a database, allowing users to customize their views depending on the contents and structure of the data. Current database front ends typically display database objects in a flat view, making it difficult for users to grasp the contents and structure of their results. Our model narrows the gap between databases and the Web.

The overall objective of this research is to construct a model that accesses different databases easily across the net and generates SQL, forms, and reports across all platforms without requiring the developer to code a complex application, which increases the speed of development. In addition, using only a Web browser, the end user can remotely retrieve data from databases and make the necessary modifications and manipulations using Web-formatted forms and reports, independent of platform, without having to open different applications or learn to use anything but the browser. We introduce a strategic method to generate and construct SQL queries, enabling inexperienced users with little exposure to SQL to build syntactically and semantically valid SQL queries and to understand the retrieved data. Each generated SQL query is validated against the database schema to ensure harmless and efficient SQL execution. (Abstract shortened by UMI.)
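Validating a generated query against the database schema, as the abstract describes, amounts to checking every referenced table and column before execution. The sketch below is a deliberately naive illustration under assumed names; a real system would work from the live catalog and a proper SQL parser.

```python
# Hypothetical sketch of schema validation for generated SQL: before a
# generated SELECT is executed, its table and column names are checked
# against the known schema. The schema and query builder are made up.

SCHEMA = {
    "employees": {"id", "name", "salary"},
    "depts": {"id", "label"},
}

def validate_select(columns, table):
    """Return True if the table exists and every column is defined on it."""
    return table in SCHEMA and all(c in SCHEMA[table] for c in columns)

def build_select(columns, table):
    """Construct a SELECT statement only after schema validation passes."""
    if not validate_select(columns, table):
        raise ValueError(f"unknown table or column in {table}: {columns}")
    return f"SELECT {', '.join(columns)} FROM {table}"

q = build_select(["name", "salary"], "employees")
```

Rejecting invalid references up front is what makes machine-generated queries safe to hand to the database: the user never sees a runtime SQL error for a column that does not exist.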
Abstract:
The design of interfaces to facilitate user search has become critical for search engines, e-commerce sites, and intranets. This study investigated the use of targeted instructional hints to improve search, measuring their quantitative effects on users' performance and satisfaction. The effects of syntactic, semantic, and exemplar search hints on user behavior were evaluated in an empirical investigation using naturalistic scenarios. Combining the three search-hint components, each with two levels of intensity, in a factorial design generated eight search engine interfaces. Eighty participants took part in the study, and each completed six realistic search tasks. Results revealed that the inclusion of search hints improved user effectiveness, efficiency, and confidence when using the search interfaces, but with complex interactions that require specific guidelines for search interface designers. These design guidelines will allow search designers to create more effective interfaces for a variety of search applications.
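The 2 × 2 × 2 factorial design described above can be enumerated directly: three hint components at two intensity levels each yield the eight interface conditions. The level labels below are assumed for illustration.

```python
from itertools import product

# Sketch of the factorial design: three hint components (syntactic,
# semantic, exemplar), each at two intensity levels, yield 2**3 = 8
# interface conditions. The "low"/"high" labels are assumptions.

components = ["syntactic", "semantic", "exemplar"]
levels = ["low", "high"]

interfaces = [dict(zip(components, combo)) for combo in product(levels, repeat=3)]
print(len(interfaces))  # -> 8
```

A full factorial crossing like this is what lets the study separate each component's main effect from the interactions the results revealed.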
Abstract:
We have generated preliminary downcore records of total organic carbon (TOC) content, total alkenone concentration, the alkenone unsaturation index, and the estimated sea-surface temperature (SST) at the northern three sites (Sites 1175, 1176, and 1178) of the Muroto Transect, Nankai Trough. The TOC content will be used to evaluate the burial of organic matter, which plays a role in the generation of natural gas and the formation of gas hydrate in this region. The downcore records of alkenone SST will benefit studies of the paleoceanography of the northwestern Pacific. Because these sites are located in the main path of the Kuroshio Current, the records capture the temperature change of the Kuroshio water, which is an end-member water mass in the northwestern Pacific.
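Estimating SST from the alkenone unsaturation index is conventionally done with a linear calibration; a widely cited form is Uk'37 = 0.034 · SST + 0.039. The snippet below inverts that relation as an illustration only — the study may use a different calibration, and the coefficients are parameterized for that reason.

```python
# Illustrative conversion from the alkenone unsaturation index (Uk'37)
# to sea-surface temperature, using a widely cited linear calibration of
# the form Uk'37 = slope * SST + intercept. The default coefficients are
# a common published calibration, not necessarily the one used here.

def sst_from_uk37(uk37, slope=0.034, intercept=0.039):
    """Estimate SST (deg C) from the Uk'37 unsaturation index."""
    return (uk37 - intercept) / slope
```

With these defaults, an index of 0.719 corresponds to roughly 20 °C, in the range typical of Kuroshio surface water.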
Abstract:
Advertising investment and audience figures indicate that television continues to lead as a mass advertising medium. However, its effectiveness is questioned due to problems such as zapping, saturation, and audience fragmentation, which has favoured the development of non-conventional advertising formats. This study provides empirical evidence for that theoretical development, analyzing the recall generated by four non-conventional advertising formats in a real environment: short programme (branded content), television sponsorship, and internal and external telepromotion, versus the more conventional spot. The methodology integrated secondary data with primary data from computer-assisted telephone interviews (CATI) performed ad hoc on a sample of 2,000 individuals aged 16 to 65, representative of the total television audience. Our findings show that non-conventional advertising formats are more effective at the cognitive level: all the analyzed formats generate higher levels of both unaided and aided recall than the spot.
Abstract:
Authentication plays an important role in how we interact with computers, mobile devices, the web, etc. The idea of authentication is to uniquely identify a user before granting access to system privileges. For example, in recent years more corporate information and applications have become accessible via the Internet and intranets. Many employees work from remote locations and need access to secure corporate files; during this time, it is possible for malicious or unauthorized users to gain access to the system. For this reason, it is logical to have some mechanism in place to detect whether the logged-in user is the same user in control of the session, and therefore highly secure authentication methods must be used. We posit that each of us is unique in our use of computer systems, and it is this uniqueness that is leveraged to "continuously authenticate users" while they use web software. To monitor user behavior, n-gram models are used to capture user interactions with web-based software. This statistical language model captures sequences and sub-sequences of user actions, their orderings, and the temporal relationships that make them unique, providing a model of how each user typically behaves. Users are then continuously monitored during software operation, and large deviations from "normal behavior" can indicate malicious or unintended behavior. This approach is implemented in a system called Intruder Detector (ID) that models user actions as embodied in the web logs generated in response to those actions. User identification through web logs is cost-effective and non-intrusive. We perform experiments on a large fielded system with web logs of approximately 4000 users. For these experiments, we use two classification techniques: binary and multi-class classification. We evaluate model-specific differences in user behavior based on coarse-grain (i.e., role) and fine-grain (i.e., individual) analysis.
A specific set of metrics is used to provide valuable insight into how each model performs. Intruder Detector achieves accurate results when identifying legitimate users and user types, and it is also able to detect outliers in role-based user behavior with optimal performance. In addition to web applications, this continuous-monitoring technique can be used with other user-based systems, such as mobile devices and the analysis of network traffic.
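The n-gram idea behind this kind of continuous authentication can be sketched with bigrams: build frequency counts from a user's action history, then score how probable a new session is under that model. Everything below (action names, add-alpha smoothing, the scoring function) is an assumed illustration, not the Intruder Detector implementation.

```python
from collections import Counter
import math

# Minimal sketch of n-gram behavioral modeling for continuous
# authentication: bigram counts over a user's action history, plus an
# add-alpha smoothed log-probability score for new sessions.

def bigrams(actions):
    return list(zip(actions, actions[1:]))

def train(actions):
    """Return bigram counts and unigram counts for one user's history."""
    return Counter(bigrams(actions)), Counter(actions)

def session_log_prob(actions, bigram_counts, unigram_counts, vocab_size, alpha=1.0):
    """Add-alpha smoothed log-probability of a session under the user model."""
    lp = 0.0
    for a, b in bigrams(actions):
        num = bigram_counts[(a, b)] + alpha
        den = unigram_counts[a] + alpha * vocab_size
        lp += math.log(num / den)
    return lp

# Hypothetical history: a user who habitually logs in, searches, views.
history = ["login", "search", "view", "search", "view", "logout"] * 20
bg, ug = train(history)
vocab = len(set(history))

typical = session_log_prob(["login", "search", "view", "logout"], bg, ug, vocab)
unusual = session_log_prob(["login", "edit", "delete", "logout"], bg, ug, vocab)
```

A session full of never-seen transitions scores far lower than a habitual one, and a threshold on that score is what would flag a possible intruder.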
Abstract:
Modern software application testing, such as the testing of software driven by graphical user interfaces (GUIs) or leveraging event-driven architectures in general, requires paying careful attention to context. Model-based testing (MBT) approaches first acquire a model of an application, then use the model to construct test cases covering relevant contexts. A major shortcoming of state-of-the-art automated model-based testing is that many test cases proposed by the model are not actually executable. These infeasible test cases threaten the integrity of the entire model-based suite, and any coverage of contexts the suite aims to provide. In this research, I develop and evaluate a novel approach for classifying the feasibility of test cases. I identify a set of pertinent features for the classifier, and develop novel methods for extracting these features from the outputs of MBT tools. I use a supervised logistic regression approach to obtain a model of test case feasibility from a randomly selected training suite of test cases. I evaluate this approach with a set of experiments. The outcomes of this investigation are as follows: I confirm that infeasibility is prevalent in MBT, even for test suites designed to cover a relatively small number of unique contexts. I confirm that the frequency of infeasibility varies widely across applications. I develop and train a binary classifier for feasibility with average overall error, false positive, and false negative rates under 5%. I find that unique event IDs are key features of the feasibility classifier, while model-specific event types are not. I construct three types of features from the event IDs associated with test cases, and evaluate the relative effectiveness of each within the classifier.
To support this study, I also develop a number of tools and infrastructure components for scalable execution of automated jobs, which use state-of-the-art container and continuous integration technologies to enable parallel test execution and the persistence of all experimental artifacts.
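A binary feasibility classifier built on logistic regression can be sketched in a few lines. The toy below is an assumed illustration, not the dissertation's model: the single feature (a count derived from event IDs that previously appeared only in infeasible cases) and the training data are invented, and the optimizer is plain batch gradient descent.

```python
import math

# Toy sketch of a logistic-regression feasibility classifier over a
# hypothetical event-ID feature. Feature, data, and hyperparameters
# are all assumptions for illustration.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z)) if z > -60 else 0.0  # overflow guard

def train_logreg(xs, ys, lr=0.5, epochs=2000):
    """Fit weight w and bias b by batch gradient descent on logistic loss."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            err = sigmoid(w * x + b) - y
            gw += err * x
            gb += err
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

def predict(w, b, x):
    """1 = feasible, 0 = infeasible."""
    return 1 if sigmoid(w * x + b) >= 0.5 else 0

# Hypothetical feature: count of event IDs seen only in infeasible cases.
xs = [0, 0, 0, 1, 2, 3]
ys = [1, 1, 1, 0, 0, 0]  # feasible when the count is zero
w, b = train_logreg(xs, ys)
```

In the dissertation's setting, the feature vector would instead be built from the event IDs of each model-generated test case, but the decision rule has the same shape.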
Abstract:
Rural electrification is characterized by geographical dispersion of the population, low consumption, high investment per consumer, and high cost. Solar radiation, meanwhile, constitutes an inexhaustible source of energy, and photovoltaic panels are used to convert it into electricity. In this study, the equations presented by the manufacturer for the current and power of small photovoltaic systems were adjusted to field conditions. The mathematical analysis was performed on the I-100 photovoltaic rural system from ISOFOTON, with a power of 300 Wp, located at the Lageado Experimental Farm of FCA/UNESP. To develop these equations, the circuitry of photovoltaic cells was studied and iterative numerical methods were applied to determine the electrical parameters and to identify possible errors in adapting the equations in the literature to real conditions. A simulation of a photovoltaic panel was then built from mathematical equations adjusted to local radiation data. The resulting equations provide realistic answers to the user and may assist in the design of these systems, since the calculated maximum-power limit ensures the supply of the energy generated. This realistic sizing helps establish the possible applications of solar energy for rural producers and informs them of the real possibilities of generating electricity from the sun.
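The iterative determination of electrical parameters mentioned above typically starts from the single-diode model of a photovoltaic module, whose current equation is implicit and must be solved numerically. The sketch below applies Newton's method to that equation; every parameter value is assumed for illustration and is not taken from the I-100/ISOFOTON datasheet.

```python
import math

# Illustrative single-diode PV model solved with Newton's method:
#   I = Iph - I0*(exp((V + I*Rs)/a) - 1) - (V + I*Rs)/Rsh
# All parameter values below are assumptions, not datasheet values.

I_PH = 6.0    # photogenerated current, A (assumed)
I_0 = 2e-7    # diode saturation current, A (assumed)
A_MOD = 1.21  # modified ideality factor n*Ns*Vt, V (assumed, ~36 cells)
R_S = 0.2     # series resistance, ohm (assumed)
R_SH = 300.0  # shunt resistance, ohm (assumed)

def module_current(v, tol=1e-9, max_iter=100):
    """Solve the implicit single-diode equation for I at terminal voltage v."""
    i = I_PH  # start from the short-circuit estimate
    for _ in range(max_iter):
        e = math.exp((v + i * R_S) / A_MOD)
        f = I_PH - I_0 * (e - 1.0) - (v + i * R_S) / R_SH - i
        df = -I_0 * R_S / A_MOD * e - R_S / R_SH - 1.0
        step = f / df
        i -= step
        if abs(step) < tol:
            break
    return i
```

Sweeping `module_current` over voltage traces out the I-V curve, from which the maximum power point used for sizing can be located.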
Abstract:
This clinical study investigated the antigenic activity of bacterial contents from exudates of acute apical abscesses (AAAs) and their paired root canal contents, measured as the capacity to stimulate macrophage cells to produce interleukin (IL)-1 beta and tumor necrosis factor alpha (TNF-α) throughout root canal treatment. Paired samples of infected root canals and AAA exudates were collected from 10 subjects. Endodontic contents were sampled before (root canal sample [RCS] 1) and after chemomechanical preparation (RCS2) and after 30 days of intracanal medication with calcium hydroxide + chlorhexidine gel (Ca[OH]2 + CHX gel) (RCS3). Polymerase chain reaction (16S rDNA) was used for detection of the target bacteria, whereas the limulus amebocyte lysate assay was used to measure endotoxin levels. RAW 264.7 macrophages were stimulated with AAA exudates and endodontic contents sampled at the different stages of root canal treatment. Enzyme-linked immunosorbent assays were used to measure the levels of TNF-α and IL-1 beta. Parvimonas micra, Porphyromonas endodontalis, Dialister pneumosintes, and Prevotella nigrescens were the most frequently detected species. Higher levels of endotoxins were found in samples from periapical exudates at RCS1 (P < .005). Indeed, samples collected from periapical exudates showed a higher stimulation capacity at RCS1 (P < .05). A positive correlation was found between endotoxins from exudates and IL-1 beta (r = 0.97) and TNF-α (r = 0.88) production (P < .01). The significant reduction of endotoxins and bacterial species achieved by chemomechanical procedures (RCS2) resulted in a lower capacity of root canal contents to stimulate the cells compared with RCS1 (P < .05). The use of Ca(OH)2 + CHX gel as an intracanal medication (RCS3) improved the removal of endotoxins and bacteria from infected root canals (P < .05), whose contents induced a lower stimulation capacity of macrophage cells at RCS1, RCS2, and RCS3 (P < .05).
AAA exudates showed higher levels of endotoxins and showed a greater capacity of macrophage stimulation than the paired root canal samples. Moreover, the use of intracanal medication improved the removal of bacteria and endotoxins from infected root canals, which may have resulted in the reduction of the inflammatory potential of the root canal content.
Abstract:
Silver nanoparticles have attracted considerable attention due to their beneficial properties, but toxicity issues associated with them are also rising. Past reports have suggested health hazards of silver nanoparticles at the cellular, molecular, or whole-organism level in eukaryotes. However, there is also a need to examine the effects of silver nanoparticle exposure on microbes that are beneficial to humans and the environment. The available literature documents the harmful effects of physically and chemically synthesized silver nanoparticles, whereas the toxicity of biogenically synthesized nanoparticles has been studied far less. Hence, there is a greater need to study the toxic effects of biologically synthesized silver nanoparticles in general and mycosynthesized nanoparticles in particular. In the present study, attempts have been made to assess the risk associated with exposure of the beneficial soil microbe Pseudomonas putida KT2440 to mycosynthesized silver nanoparticles. The study demonstrates the mycosynthesis of silver nanoparticles and their characterization by UV-vis spectrophotometry, FTIR, X-ray diffraction, a NanoSight LM20 particle-size-distribution analyzer, and TEM. The silver nanoparticles obtained here were found to exert a hazardous effect at a concentration of 0.4 μg/ml, which warrants further detailed investigation of their toxicity.
Abstract:
Solar radiation, especially ultraviolet A (UVA) and ultraviolet B (UVB), can cause damage to the human body, and exposure to the radiation may vary according to the geographical location, time of year and other factors. The effects of UVA and UVB radiation on organisms range from erythema formation, through tanning and reduced synthesis of macromolecules such as collagen and elastin, to carcinogenic DNA mutations. Some studies suggest that, in addition to the radiation emitted by the sun, artificial sources of radiation, such as commercial lamps, can also generate small amounts of UVA and UVB radiation. Depending on the source intensity and on the distance from the source, this radiation can be harmful to photosensitive individuals. In healthy subjects, the evidence on the danger of this radiation is still far from conclusive.
The everyday knowledge of students in physical education classes: implications for pedagogical practice
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física