896 results for Web log analysis
Abstract:
An analysis of print Holocaust denial literature compared with internet Holocaust denial, focusing on how the transition from print to the internet has affected Holocaust denial.
Abstract:
There is significant concern about the relevance and difficulty of learning certain topics in Strength of Materials and Structural Analysis. Most students of Continuum Mechanics and Structural Analysis in Civil Engineering identify some key concepts as especially difficult when acquiring specific skills. These concepts entail comprehension difficulties, but mastering them eases access to structural analysis in more advanced subjects. Likewise, some elusive but basic structural concepts, such as flexibility, stiffness or influence lines, are paramount for developing the further skills required for advanced structural design: tall buildings, arch-type structures and bridges. As new curricular itineraries are currently being implemented, it appears appropriate to devise a repository of interactive web-based applications for training in those basic concepts. The aim is to help students understand the complexity of such concepts, develop intuitive knowledge of actual structural response and improve their preparation for exams. In this work, a web-based learning assistant system for influence lines on continuous beams is presented. It consists of a collection of interactive, user-friendly applications accessible via the Web, available in both Spanish and English. Rather than a "black box" system, the procedure involves open interaction with the student, who can simulate and virtually envisage the structural response. The student can set the geometric, topological and mechanical layout of a continuous beam and change or shift the loading and the support conditions. The changes in the beam response are displayed on screen simultaneously, so that the effects of the several factors involved in structural analysis become apparent.
The system is implemented as a set of web pages comprising interactive exercises and problems, written in JavaScript using the jQuery and Dygraphs frameworks, chosen for their efficiency and graphics capabilities. Students can use it freely to reinforce their self-study of this subject and face their exams more confidently. This collection is also expected to be added to the "Virtual Lab of Continuum Mechanics" of the UPM, launched in 2013 (http://serviciosgate.upm.es/laboratoriosvirtuales/laboratorios/medios-continuos-en-construcci%C3%B3n).
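To make the influence-line concept concrete, a minimal sketch follows. It is not the UPM system described above (which handles continuous beams interactively); it only computes the textbook influence line for the left support reaction of a simply supported beam, where statics gives R_A(x) = 1 - x/L for a unit load at position x. The function name and spans are illustrative.

```python
def influence_line_reaction(span, n_points=11):
    """Influence line for the left support reaction R_A of a simply
    supported beam: for a unit load at position x, R_A(x) = 1 - x/L.
    Returns a list of (position, ordinate) pairs along the span."""
    step = span / (n_points - 1)
    return [(i * step, 1 - (i * step) / span) for i in range(n_points)]

line = influence_line_reaction(span=10.0)
# The ordinate is 1.0 under the support itself and decays linearly
# to 0.0 at the far end of the beam.
```

Sampling this curve at the load positions of interest and scaling by the load magnitudes reproduces the superposition idea the interactive applications visualise.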
Abstract:
This thesis is the result of a project whose objective was to develop and deploy a dashboard for sentiment analysis of football on Twitter, based on web components and D3.js. To this end, a visualisation server was developed to present the data obtained from Twitter and analysed with Senpy. This visualisation server was built with Polymer web components and D3.js. Data mining was done with a pipeline between Twitter, Senpy and ElasticSearch. Luigi was used in this process because it helps build complex pipelines of batch jobs; it analysed all tweets and stored them in ElasticSearch. D3.js was then used to create interactive widgets that make the data easily accessible, allowing the user to interact with them and filter the data of most interest. Polymer web components were used to build the dashboard according to Google's Material Design and to show dynamic data in the widgets. As a result, this project allows an extensive analysis of the social network, pointing out the influence of players and teams and the emotions and sentiments that emerge over a period of time.
Abstract:
Arabidopsis thaliana, a small annual plant belonging to the mustard family, is the subject of study by an estimated 7000 researchers around the world. In addition to the large body of genetic, physiological and biochemical data gathered for this plant, it will be the first higher plant genome to be completely sequenced, with completion expected at the end of the year 2000. The sequencing effort has been coordinated by an international collaboration, the Arabidopsis Genome Initiative (AGI). The rationale for intensive investigation of Arabidopsis is that it is an excellent model for higher plants. In order to maximize use of the knowledge gained about this plant, there is a need for a comprehensive database and information retrieval and analysis system that will provide user-friendly access to Arabidopsis information. This paper describes the initial steps we have taken toward realizing these goals in a project called The Arabidopsis Information Resource (TAIR) (www.arabidopsis.org).
Abstract:
The discovery that the epsilon 4 allele of the apolipoprotein E (apoE) gene is a putative risk factor for Alzheimer disease (AD) in the general population has highlighted the role of genetic influences in this extremely common and disabling illness. It has long been recognized that another genetic abnormality, trisomy 21 (Down syndrome), is associated with early and severe development of AD neuropathological lesions. It remains a challenge, however, to understand how these facts relate to the pathological changes in the brains of AD patients. We used computerized image analysis to examine the size distribution of one of the characteristic neuropathological lesions in AD, deposits of A beta peptide in senile plaques (SPs). Surprisingly, we find that a log-normal distribution fits the SP size distribution quite well, motivating a porous model of SP morphogenesis. We then analyzed SP size distribution curves in genotypically defined subgroups of AD patients. The data demonstrate that both apoE epsilon 4/AD and trisomy 21/AD lead to increased amyloid deposition, but by apparently different mechanisms. The size distribution curve is shifted toward larger plaques in trisomy 21/AD, probably reflecting increased A beta production. In apoE epsilon 4/AD, the size distribution is unchanged but the number of SP is increased compared to apoE epsilon 3, suggesting increased probability of SP initiation. These results demonstrate that subgroups of AD patients defined on the basis of molecular characteristics have quantitatively different neuropathological phenotypes.
Abstract:
At head of title: Antenna Laboratory.
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
Web transaction data between Web visitors and Web functionalities usually convey users' task-oriented behaviour patterns. Mining this type of click-stream data captures usage-pattern information. Web usage mining has become one of the most widely used methods for Web recommendation, which customises Web content to a user's preferred style. Traditional Web usage mining techniques, such as Web user session or Web page clustering, association rules and frequent navigational path mining, can only discover usage patterns explicitly. They cannot, however, reveal the underlying navigational activities or identify the latent relationships associated with the patterns among Web users and Web pages. In this work, we propose a Web recommendation framework incorporating a Web usage mining technique based on the Probabilistic Latent Semantic Analysis (PLSA) model. The main advantage of this method is that it discovers not only usage-based access patterns but also the underlying latent factors. With the discovered user access patterns, we then present the user with content of greater interest via collaborative recommendation. To validate the effectiveness of the proposed approach, we conduct experiments on real-world datasets and make comparisons with some existing traditional techniques. The preliminary experimental results demonstrate the usability of the proposed approach.
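The PLSA model underlying this kind of framework factorises the session-page co-occurrence probability as P(s,p) = Σ_z P(z) P(s|z) P(p|z) and fits the parameters by EM. A minimal sketch on a hypothetical toy count matrix follows; the data and variable names are illustrative, not the paper's datasets or implementation.

```python
import numpy as np

# Toy session-by-page click-count matrix: two visible usage "blocks"
# (sessions 0-1 visit pages 0-1, sessions 2-3 visit pages 2-3).
rng = np.random.default_rng(0)
n = np.array([[4, 3, 0, 0],
              [5, 2, 0, 1],
              [0, 0, 6, 4],
              [1, 0, 3, 5]], dtype=float)

S, P, Z = n.shape[0], n.shape[1], 2    # sessions, pages, latent factors
pz  = np.full(Z, 1.0 / Z)              # P(z)
psz = rng.dirichlet(np.ones(S), size=Z)  # P(s|z), shape (Z, S)
ppz = rng.dirichlet(np.ones(P), size=Z)  # P(p|z), shape (Z, P)

for _ in range(100):
    # E-step: responsibilities P(z|s,p), shape (Z, S, P)
    joint = pz[:, None, None] * psz[:, :, None] * ppz[:, None, :]
    resp = joint / joint.sum(axis=0, keepdims=True)
    # M-step: re-estimate parameters from expected counts
    weighted = resp * n[None, :, :]
    totals = weighted.sum(axis=(1, 2))
    pz  = totals / n.sum()
    psz = weighted.sum(axis=2) / totals[:, None]
    ppz = weighted.sum(axis=1) / totals[:, None]

# After fitting, each latent factor should concentrate its P(s|z)
# mass on one of the two session blocks, exposing the hidden
# usage patterns that explicit clustering would not label.
```

A recommender built on top of this would score unvisited pages for a session s via Σ_z P(z|s) P(p|z), which is the collaborative step the abstract alludes to.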
Abstract:
This article explores consumer Web-search satisfaction. It commences with a brief overview of the concepts consumer information search and consumer satisfaction. Consumer Web adoption issues are then briefly discussed and the importance of consumer search satisfaction is highlighted in relation to the adoption of the Web as an additional source of consumer information. Research hypotheses are developed and the methodology of a large scale consumer experiment to record consumer Web search behaviour is described. The hypotheses are tested and the data explored in relation to post-Web-search satisfaction. The results suggest that consumer post-Web-search satisfaction judgments may be derived from subconscious judgments of Web search efficiency, an empirical calculation of which is problematic in unlimited information environments such as the Web. The results are discussed and a future research agenda is briefly outlined.
Abstract:
This thesis provides a set of tools for managing uncertainty in Web-based models and workflows. To support the use of these tools, this thesis firstly provides a framework for exposing models through Web services. An introduction to uncertainty management, Web service interfaces, and workflow standards and technologies is given, with a particular focus on the geospatial domain. An existing specification for exposing geospatial models and processes, the Web Processing Service (WPS), is critically reviewed. A processing service framework is presented as a solution to usability issues with the WPS standard. The framework implements support for Simple Object Access Protocol (SOAP), Web Service Description Language (WSDL) and JavaScript Object Notation (JSON), allowing models to be consumed by a variety of tools and software. Strategies for communicating with models from Web service interfaces are discussed, demonstrating the difficulty of exposing existing models on the Web. This thesis then reviews existing mechanisms for uncertainty management, with an emphasis on emulator methods for building efficient statistical surrogate models. A tool is developed to solve accessibility issues with such methods, by providing a Web-based user interface and backend to ease the process of building and integrating emulators. These tools, plus the processing service framework, are applied to a real case study as part of the UncertWeb project. The usability of the framework is proved with the implementation of a Web-based workflow for predicting future crop yields in the UK, also demonstrating the abilities of the tools for emulator building and integration. Future directions for the development of the tools are discussed.
Abstract:
We estimated trophic position and carbon source for three consumers (Florida gar, Lepisosteus platyrhincus; eastern mosquitofish, Gambusia holbrooki; and riverine grass shrimp, Palaemonetes paludosus) from 20 sites representing gradients of productivity and hydrological disturbance in the southern Florida Everglades, U.S.A. We characterized gross primary productivity at each site using light/dark bottle incubation and stem density of emergent vascular plants. We also documented nutrient availability as total phosphorus (TP) in floc and periphyton, and the density of small fishes. Hydrological disturbance was characterized as the time since a site was last dried and the average number of days per year the sites were inundated for the previous 10 years. Food-web attributes were estimated in both the wet and dry seasons by analysis of δ15N (trophic position) and δ13C (food-web carbon source) from 702 samples of aquatic consumers. An index of carbon source was derived from a two-member mixing model with Seminole ramshorn snails (Planorbella duryi) as a basal grazing consumer and scuds (amphipods Hyallela azteca) as a basal detritivore. Snails yielded carbon isotopic values similar to green algae and diatoms, while carbon values of scuds were similar to bulk periphyton and floc; carbon isotopic values of cyanobacteria were enriched in 13C compared to all consumers examined. A carbon source similar to scuds dominated at all but one study site, and though the relative contribution of scud-like and snail-like carbon sources was variable, there was no evidence that these contributions were a function of abiotic factors or season. Gar consistently displayed the highest estimated trophic position of the consumers studied, with mosquitofish feeding at a slightly lower level, and grass shrimp feeding at the lowest level.
Trophic position was not correlated with any nutrient or productivity parameter, but did increase for grass shrimp and mosquitofish as the time following droughts increased. Trophic position of Florida gar was positively correlated with emergent plant stem density.
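A two-member mixing model of the kind described reduces to a single linear interpolation between the two end members' δ13C values. A minimal sketch follows; the end-member values used in the example are hypothetical, not the study's measurements.

```python
def carbon_source_index(d13c_consumer, d13c_snail, d13c_scud):
    """Fraction of a consumer's carbon attributable to the snail-like
    (grazing) end member in a two-member mixing model; 0 means fully
    scud-like (detrital), 1 means fully snail-like (grazing)."""
    return (d13c_consumer - d13c_scud) / (d13c_snail - d13c_scud)

# A consumer whose d13C lies exactly midway between the two end
# members draws equally on both carbon sources (hypothetical values).
f = carbon_source_index(d13c_consumer=-27.0, d13c_snail=-30.0, d13c_scud=-24.0)
# f == 0.5
```

In practice such models also propagate trophic fractionation corrections per trophic level, which the sketch omits.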
Abstract:
Hydrophobicity as measured by Log P is an important molecular property related to toxicity and carcinogenicity. With increasing public health concerns for the effects of Disinfection By-Products (DBPs), there are considerable benefits in developing Quantitative Structure and Activity Relationship (QSAR) models capable of accurately predicting Log P. In this research, Log P values of 173 DBP compounds in 6 functional classes were used to develop QSAR models, by applying 3 molecular descriptors, namely, Energy of the Lowest Unoccupied Molecular Orbital (ELUMO), Number of Chlorine (NCl) and Number of Carbon (NC) by Multiple Linear Regression (MLR) analysis. The QSAR models developed were validated based on the Organization for Economic Co-operation and Development (OECD) principles. The model Applicability Domain (AD) and mechanistic interpretation were explored. Considering the very complex nature of DBPs, the established QSAR models performed very well with respect to goodness-of-fit, robustness and predictability. The predicted values of Log P of DBPs by the QSAR models were found to be significant with a correlation coefficient R2 from 81% to 98%. The Leverage Approach by Williams Plot was applied to detect and remove outliers, consequently increasing R2 by approximately 2% to 13% for different DBP classes. The developed QSAR models were statistically validated for their predictive power by the Leave-One-Out (LOO) and Leave-Many-Out (LMO) cross validation methods. Finally, Monte Carlo simulation was used to assess the variations and inherent uncertainties in the QSAR models of Log P and determine the most influential parameters in connection with Log P prediction.
The developed QSAR models in this dissertation will have a broad applicability domain because the research data set covered six out of eight common DBP classes, including halogenated alkane, halogenated alkene, halogenated aromatic, halogenated aldehyde, halogenated ketone, and halogenated carboxylic acid, which have been brought to the attention of regulatory agencies in recent years. Furthermore, the QSAR models are suitable to be used for prediction of similar DBP compounds within the same applicability domain. The selection and integration of various methodologies developed in this research may also benefit future research in similar fields.
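The core of an MLR-based QSAR of this form is an ordinary least-squares fit of Log P against the three descriptors. A minimal sketch follows; the data below are synthetic, generated from arbitrary coefficients for illustration, and are not the dissertation's 173-compound DBP set.

```python
import numpy as np

# Synthetic "descriptor" data standing in for ELUMO, NCl and NC;
# the ranges and generating coefficients are hypothetical.
rng = np.random.default_rng(1)
m = 40
elumo = rng.uniform(-2.0, 0.5, m)            # eV, hypothetical range
ncl   = rng.integers(0, 4, m).astype(float)  # number of chlorine atoms
nc    = rng.integers(1, 7, m).astype(float)  # number of carbon atoms
logp  = 0.3 - 0.8 * elumo + 0.5 * ncl + 0.4 * nc + rng.normal(0, 0.1, m)

# MLR: Log P = b0 + b1*ELUMO + b2*NCl + b3*NC, fitted by least squares.
X = np.column_stack([np.ones(m), elumo, ncl, nc])
coef, *_ = np.linalg.lstsq(X, logp, rcond=None)

# Goodness-of-fit: coefficient of determination R2.
pred = X @ coef
r2 = 1 - np.sum((logp - pred) ** 2) / np.sum((logp - logp.mean()) ** 2)
# With low noise, the fit should recover the generating coefficients
# and yield an R2 close to 1.
```

The leverage/Williams-plot outlier screening and LOO/LMO cross-validation steps the abstract mentions operate on top of exactly this kind of fit, refitting after removing high-leverage points or held-out compounds.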
Abstract:
The Internet and the Web have changed the way companies communicate with their publics, improving relations between them and providing substantial benefits for organizations. This has led small and medium-sized enterprises (SMEs) to develop corporate sites to establish relationships with their audiences. Applying the methodology of content analysis, this paper analyzes the main factors and tools that make Websites usable and intuitive and that promote better relations between SMEs and their audiences. It also develops an index to measure the effectiveness of Websites from the perspective of usability. The results indicate that the Websites studied have, in general, appropriate levels of usability.