917 results for Web Log Data
Abstract:
The applications market is booming and has grown in recent years: every day thousands of apps are downloaded, many of which are electronic games intended for smartphones and tablets. These games are often integrated with an online system whose function is to provide extra features to the players. This project proposes the development of an online support system for a game designed for mobile devices, consisting of a website, a database responsible for storing data online, and a system called the back end, whose function is to integrate all the modules mentioned previously.
Abstract:
With the rapid growth of Web applications in various fields of knowledge, the term Web service has come into prominence, referring to services of different origins and purposes offered through local networks and, in some cases, also available on the Internet. Because the architecture of this type of application performs data processing on the server side, it is well suited to running complex and slow processes, which is the case for most visualization algorithms. VTK is a library intended for visualization that offers a large variety of methods and algorithms for this purpose, but its graphics engine requires considerable processing capacity. Combining these two resources can bring interesting results and contribute to performance improvements in the use of the VTK library. This combination is investigated in this project through testing and communication overhead analysis.
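A minimal sketch of the kind of server-side arrangement described above, assuming a Python web framework (Flask) and a toy VTK pipeline; the abstract does not specify the actual service interface, so the endpoint, parameters, and pipeline here are illustrative only:

# Minimal sketch of server-side VTK processing behind a web service.
# Flask and the specific pipeline below are illustrative assumptions.
import time

import vtk
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/isosurface", methods=["GET"])
def isosurface():
    resolution = int(request.args.get("resolution", 50))

    start = time.perf_counter()
    # A synthetic volume plus a contour filter stand in for a real visualization job.
    source = vtk.vtkRTAnalyticSource()
    source.SetWholeExtent(0, resolution, 0, resolution, 0, resolution)
    contour = vtk.vtkContourFilter()
    contour.SetInputConnection(source.GetOutputPort())
    contour.SetValue(0, 150.0)
    contour.Update()
    elapsed = time.perf_counter() - start

    mesh = contour.GetOutput()
    # Returning only statistics keeps the response small; shipping the full
    # mesh back would add exactly the communication overhead under study.
    return jsonify({
        "points": mesh.GetNumberOfPoints(),
        "cells": mesh.GetNumberOfCells(),
        "server_seconds": elapsed,
    })

if __name__ == "__main__":
    app.run()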
Abstract:
Graduate Program in Education - FCT
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Given the importance of the concept of productive efficiency in analyzing the human development process, which is complex and multidimensional, this study conducts a literature review of research that has used data envelopment analysis (DEA) to measure and analyze the development process. To this end, we searched the Scopus and Web of Science databases and considered the following analysis dimensions: bibliometrics, scope, DEA models and extensions used, interfaces with other techniques, units analyzed, and depth of analysis. In addition to a brief summary, the main gaps in each analysis dimension were assessed, which may serve to guide future research.
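For readers unfamiliar with the technique, the input-oriented CCR envelopment model, the most basic DEA formulation covered by such reviews, can be written as follows; this is a standard textbook form, not a model taken from any specific surveyed paper:

% Efficiency of decision-making unit (DMU) 0, with inputs x_{ij} and outputs y_{rj}
\begin{aligned}
\min_{\theta,\,\lambda}\ & \theta \\
\text{s.t.}\ & \sum_{j=1}^{n} \lambda_j x_{ij} \le \theta\, x_{i0}, \quad i = 1,\dots,m, \\
 & \sum_{j=1}^{n} \lambda_j y_{rj} \ge y_{r0}, \quad r = 1,\dots,s, \\
 & \lambda_j \ge 0, \quad j = 1,\dots,n.
\end{aligned}
% A DMU is rated efficient when the optimal \theta equals 1 (with zero slacks).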
Abstract:
Graduate Program in Information Science - FFC
Abstract:
End-user programmers are increasingly relying on web authoring environments to create web applications. Although often consisting primarily of web pages, such applications increasingly go further, harnessing the content available on the web through “programs” that query other web applications for information to drive other tasks. Unfortunately, errors can be pervasive in web applications, impacting their dependability. This paper reports the results of an exploratory study of end-user web application developers, performed with the aim of exposing prevalent classes of errors. The results suggest that end users struggle the most with the identification and manipulation of variables when structuring requests to obtain data from other web sites. To address this problem, we present a family of techniques that help end-user programmers perform this task, reducing possible sources of error. The techniques focus on simplification and characterization of the data that end users must analyze while developing their web applications. We report the results of an empirical study in which these techniques are applied to several popular web sites. Our results reveal several potential benefits for end users who wish to “engineer” dependable web applications.
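As a concrete (and hypothetical) illustration of the request-structuring task where these errors arise, the sketch below parameterizes a query to another site by keeping the variable parts explicit; the endpoint and parameter names are invented for illustration:

# Illustrative sketch of the request-structuring task studied above: the
# end-user programmer must identify which variables parameterize a query
# to another web site. The URL and parameter names are hypothetical.
import requests

def fetch_listings(city, max_price):
    # Keeping the variable parts in an explicit params dict, rather than
    # pasting them into the URL string, is one way to reduce this class of error.
    response = requests.get(
        "https://example.org/api/listings",   # hypothetical endpoint
        params={"city": city, "max_price": max_price, "format": "json"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(fetch_listings("Lincoln", 1200))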
Abstract:
Mashups are becoming increasingly popular as end users are able to easily access, manipulate, and compose data from several web sources. To support end users, communities are forming around mashup development environments that facilitate sharing code and knowledge. We have observed, however, that end user mashups tend to suffer from several deficiencies, such as inoperable components or references to invalid data sources, and that those deficiencies are often propagated through the rampant reuse in these end user communities. In this work, we identify and specify ten code smells indicative of deficiencies we observed in a sample of 8,051 pipe-like web mashups developed by thousands of end users in the popular Yahoo! Pipes environment. We show through an empirical study that end users generally prefer pipes that lack those smells, and then present eleven specialized refactorings that we designed to target and remove the smells. Our refactorings reduce the complexity of pipes, increase their abstraction, update broken sources of data and dated components, and standardize pipes to fit the community development patterns. Our assessment on the sample of mashups shows that smells are present in 81% of the pipes, and that the proposed refactorings can reduce that number to 16%, illustrating the potential of refactoring to support thousands of end users developing pipe-like mashups.
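A simplified sketch of what detecting such smells can look like, assuming a pipe is represented as a small dictionary of modules and wires; the representation and the two smells checked here are illustrative, not the paper's actual catalog of ten smells:

# Detects two mashup "smells" of the kind described above: a broken
# (unreachable) data source and a disconnected module.
import urllib.error
import urllib.request

def source_is_broken(url, timeout=5):
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return False
    except (urllib.error.URLError, ValueError):
        return True

def find_smells(pipe):
    """pipe = {"modules": {id: {"type": ..., "url": ...}}, "wires": [(src, dst), ...]}"""
    smells = []
    wired = {module for wire in pipe["wires"] for module in wire}
    for mod_id, mod in pipe["modules"].items():
        if mod["type"] == "fetch" and source_is_broken(mod.get("url", "")):
            smells.append((mod_id, "broken data source"))
        if mod_id not in wired:
            smells.append((mod_id, "disconnected module"))
    return smells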
Abstract:
Hundreds of terabytes of CMS (Compact Muon Solenoid) data are being accumulated for storage day by day at the University of Nebraska-Lincoln, one of the eight US CMS Tier-2 sites. Managing this data includes retaining useful CMS data sets and clearing storage space for newly arriving data by deleting less useful data sets. This is an important task that is currently done manually and requires a large amount of time. The overall objective of this study was to develop a methodology to help identify the data sets to be deleted when storage space is needed. CMS data is stored using HDFS (Hadoop Distributed File System), and HDFS logs give information regarding file access operations. Hadoop MapReduce was used to feed the information in these logs to Support Vector Machines (SVMs), a machine learning algorithm applicable to classification and regression, which is used in this thesis to develop a classifier. The time needed to classify data sets with this method depends on the size of the input HDFS log file, since the MapReduce algorithms used here have O(n) complexity. The SVM methodology produces a list of data sets for deletion along with their respective sizes. This methodology was also compared with a heuristic called Retention Cost, which is calculated from the size of a data set and the time since its last access to help decide how useful the data set is. The accuracy of both approaches was compared by calculating the percentage of data sets predicted for deletion that were accessed at a later point in time. Our methodology using SVMs proved to be more accurate than the Retention Cost heuristic. This methodology could be used to solve similar problems involving other large data sets.
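A rough sketch of the two approaches being compared, assuming simple per-data-set features aggregated from the HDFS logs and a plausible form for the Retention Cost heuristic; the thesis's exact features and formula are not given in the abstract:

# Sketch of the comparison above: an SVM trained on per-data-set features
# aggregated from HDFS access logs versus a retention-cost heuristic.
# Feature names, example values, and the heuristic's form are assumptions.
import numpy as np
from sklearn.svm import SVC

# Hypothetical per-data-set features: [size_gb, days_since_last_access,
# accesses_last_30_days]; label 1 = candidate for deletion, 0 = keep.
X_train = np.array([[800.0, 200.0,  0.0],
                    [120.0,   3.0, 42.0],
                    [500.0,  90.0,  1.0],
                    [ 60.0,   1.0, 77.0]])
y_train = np.array([1, 0, 1, 0])

svm = SVC(kernel="rbf", C=1.0, gamma="scale")
svm.fit(X_train, y_train)

def retention_cost(size_gb, days_since_last_access):
    # One plausible form: larger, colder data sets cost more to retain,
    # so they become the first deletion candidates.
    return size_gb * days_since_last_access

candidates = np.array([[700.0, 150.0, 2.0], [90.0, 5.0, 30.0]])
print("SVM deletion flags:", svm.predict(candidates))
print("Retention costs:", [retention_cost(s, d) for s, d, _ in candidates])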
Abstract:
Graduate Program in Information Science - FFC
Abstract:
The objective of the present study was to evaluate the plasticity of the hunting behavior of the spider Nephilengys cruentata (Araneae: Nephilidae) when facing different species of social wasps. Considering that wasps can consume various species of spiders and that their venom can be used as a defense against many predators, the effect of prey body size on the behavior of N. cruentata was evaluated. Predation experiments were conducted using three species of social wasps of different sizes, and the data recorded in this research were compiled through notes and video recordings of the hunting behavior of each spider toward the offered prey. The results revealed that the size of the wasp and the sequential offer of prey change the hunting behavior of the spider, and that large prey strongly influence this behavior.
Abstract:
The present study raised the hypothesis that the trophic status in a tropical coastal food web from southeastern Brazil can be measured by the relation between total mercury (THg) and the nitrogen isotope ratio (δ15N) in its components. The analysed species were grouped into six trophic positions: primary producer (phytoplankton), primary consumer (zooplankton), consumer 1 (omnivore shrimp), consumer 2 (pelagic carnivores represented by squid and fish species), consumer 3 (demersal carnivores represented by fish species) and consumer 4 (pelagic-demersal top carnivore represented by the fish Trichiurus lepturus). The values of THg, δ15N, and trophic level (TLv) increased significantly from primary producer toward top carnivore. Our data regarding trophic magnification (6.84) and biomagnification powers (0.25 for δ15N and 0.83 for TLv) indicated that Hg biomagnification throughout trophic positions is high in this tropical food web, which could be primarily related to the quality of the local water.
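For reference, the quantities reported above typically come from a log-linear regression framework (assumed here, since the abstract does not spell it out): the biomagnification power is the regression slope, and the trophic magnification factor follows from the slope per trophic level.

\log_{10}(\mathrm{THg}) = a + b\,\delta^{15}\mathrm{N}
\qquad \text{or} \qquad
\log_{10}(\mathrm{THg}) = a' + b'\,\mathrm{TLv},
\qquad \mathrm{TMF} = 10^{\,b'}.
% Here b and b' are the "biomagnification powers" reported above, and
% 10^{0.83} \approx 6.8 is roughly consistent with the trophic magnification of 6.84.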
Abstract:
This paper introduces a skewed log-Birnbaum-Saunders regression model based on the skewed sinh-normal distribution proposed by Leiva et al. [A skewed sinh-normal distribution and its properties and application to air pollution, Comm. Statist. Theory Methods 39 (2010), pp. 426-443]. Some influence methods, such as local influence and generalized leverage, are presented. Additionally, we derive the normal curvatures of local influence under some perturbation schemes. An empirical application to a real data set is presented in order to illustrate the usefulness of the proposed model.
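For orientation, the structure this model builds on can be sketched as follows; these are the standard sinh-normal and log-Birnbaum-Saunders regression components from the cited literature with an Azzalini-type skewing factor, given here as an assumed sketch rather than a restatement of the paper's exact notation.

% With \xi(y) = (2/\alpha)\sinh\!\big((y-\mu)/\sigma\big), an Azzalini-type
% skewed sinh-normal density has the form (assumed here)
f(y) = 2\,\phi\big(\xi(y)\big)\,\Phi\big(\lambda\,\xi(y)\big)\,
       \frac{2}{\alpha\sigma}\cosh\!\Big(\frac{y-\mu}{\sigma}\Big),
\qquad y \in \mathbb{R},
% and the regression structure is
y_i = \mathbf{x}_i^{\top}\boldsymbol{\beta} + \varepsilon_i,
\qquad i = 1,\dots,n,
% with \varepsilon_i following this distribution with location zero;
% \lambda = 0 recovers the symmetric log-Birnbaum-Saunders regression model.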
Abstract:
Patterns of species interactions affect the dynamics of food webs. An important component of species interactions that is rarely considered with respect to food webs is the strength of interactions, which may affect both structure and dynamics. In natural systems, these strengths are variable, and can be quantified as probability distributions. We examined how variation in strengths of interactions can be described hierarchically, and how this variation impacts the structure of species interactions in predator-prey networks, both of which are important components of ecological food webs. The stable isotope ratios of predator and prey species may be particularly useful for quantifying this variability, and we show how these data can be used to build probabilistic predator-prey networks. Moreover, the distribution of variation in strengths among interactions can be estimated from a limited number of observations. This distribution informs network structure, especially the key role of dietary specialization, which may be useful for predicting structural properties in systems that are difficult to observe. Finally, using three mammalian predator-prey networks (two African and one Canadian) quantified from stable isotope data, we show that exclusion of link-strength variability results in biased estimates of nestedness and modularity within food webs, whereas the inclusion of body size constraints only marginally increases the predictive accuracy of the isotope-based network. We find that modularity is the consequence of strong link-strengths in both African systems, while nestedness is not significantly present in any of the three predator-prey networks.
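A short sketch of the bias being described, assuming a hypothetical matrix of link-strength probabilities (for example, diet proportions from isotope mixing models): modularity computed from a single thresholded network is compared against the average over networks sampled according to those probabilities.

# Compares modularity from one thresholded predator-prey network against the
# mean over networks sampled from link-strength probabilities. The probability
# matrix and the use of greedy modularity maximization are illustrative choices.
import networkx as nx
import numpy as np
from networkx.algorithms.community import greedy_modularity_communities, modularity

rng = np.random.default_rng(0)
predators, prey = 4, 6
# Hypothetical link-strength probabilities (rows: predators, columns: prey).
P = rng.uniform(0.05, 0.9, size=(predators, prey))

def bipartite_graph(adj):
    # Build an undirected predator-prey graph from a binary adjacency matrix.
    G = nx.Graph()
    for i in range(predators):
        for j in range(prey):
            if adj[i, j]:
                G.add_edge(i, predators + j)
    return G

def graph_modularity(G):
    if G.number_of_edges() == 0:
        return 0.0
    return modularity(G, greedy_modularity_communities(G))

# Ignoring link-strength variability: keep every link above a fixed threshold.
fixed = graph_modularity(bipartite_graph(P > 0.5))
# Accounting for it: average over networks sampled from the probabilities.
sampled = np.mean([graph_modularity(bipartite_graph(rng.random(P.shape) < P))
                   for _ in range(200)])
print(f"thresholded: {fixed:.3f}  sampled mean: {sampled:.3f}")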
Abstract:
Traditional supervised data classification considers only physical features (e.g., distance or similarity) of the input data. Here, this type of learning is called low level classification. On the other hand, the human (animal) brain performs both low and high orders of learning, and it readily identifies patterns according to the semantic meaning of the input data. Data classification that considers not only physical attributes but also the pattern formation is, here, referred to as high level classification. In this paper, we propose a hybrid classification technique that combines both types of learning. The low level term can be implemented by any classification technique, while the high level term is realized by extracting features of the underlying network constructed from the input data. Thus, the former classifies the test instances by their physical features or class topologies, while the latter measures the compliance of the test instances with the pattern formation of the data. Our study shows that the proposed technique not only can realize classification according to the pattern formation, but is also able to improve the performance of traditional classification techniques. Furthermore, as the complexity of the class configuration increases, such as the degree of mixing among different classes, a larger portion of the high level term is required to obtain correct classification. This feature confirms that high level classification has a special importance in complex classification situations. Finally, we show how the proposed technique can be employed in a real-world application, where it is capable of identifying variations and distortions of handwritten digit images. As a result, it improves the overall pattern recognition rate.
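A simplified sketch of the hybrid scheme, assuming a logistic regression for the low level term and, as a stand-in for the paper's network measures, the change in average clustering of a per-class kNN graph as the high level conformance score; the convex mixing of the two terms follows the description above.

# Hybrid low/high level classification sketch. X_train and x are numpy arrays.
# The conformance measure (clustering change of a per-class kNN graph) is a
# simplified stand-in assumption, not the paper's exact formulation.
import networkx as nx
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def knn_graph(points, k=3):
    nn = NearestNeighbors(n_neighbors=min(k + 1, len(points))).fit(points)
    _, idx = nn.kneighbors(points)
    G = nx.Graph()
    G.add_nodes_from(range(len(points)))
    for i, neighbors in enumerate(idx):
        for j in neighbors[1:]:              # skip the point itself
            G.add_edge(i, int(j))
    return G

def conformance(class_points, x, k=3):
    # The less a test point disturbs the class network, the higher its score.
    before = nx.average_clustering(knn_graph(class_points, k))
    after = nx.average_clustering(knn_graph(np.vstack([class_points, x]), k))
    return 1.0 / (1.0 + abs(after - before))

def hybrid_predict(X_train, y_train, x, lam=0.3):
    low = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    low_scores = low.predict_proba([x])[0]
    classes = low.classes_
    high = np.array([conformance(X_train[y_train == c], x) for c in classes])
    high_scores = high / high.sum()
    mixed = (1 - lam) * low_scores + lam * high_scores   # convex combination
    return classes[int(np.argmax(mixed))]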