921 results for Web data


Relevance:

30.00%

Publisher:

Abstract:

The tool proposed, known as WSPControl, enables remote monitoring of computers across the Internet using distributed applications. A Web Services architecture allows these distributed applications to communicate across heterogeneous platforms and eliminates the need for additional network configuration, such as opening ports or setting up proxies. The tool is divided into three modules:

• Client Interface: developed in C#, it captures performance data from the monitored computer and connects to the Web Services to report that data.
• Web Services Interface: developed in PHP with the PHP SOAP library, it mediates the communication between the Internet application and the client.
• Internet Interface: developed in PHP, it reads and interprets the captured information and makes it available on the Internet.
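As a rough illustration of the client-side reporting such an architecture implies, the sketch below captures basic performance metrics and submits them to a monitoring endpoint. The endpoint URL, the JSON payload, and the use of plain HTTP instead of SOAP are assumptions for illustration only; the actual WSPControl client is written in C# and talks to a PHP SOAP service.

```python
# Minimal sketch of a monitoring client. Assumptions: the endpoint URL and JSON payload
# are hypothetical; WSPControl itself uses a C# client and a PHP SOAP web service.
import json
import time
import urllib.request

import psutil  # third-party library for reading CPU/memory statistics

ENDPOINT = "http://example.com/wspcontrol/report"  # hypothetical web service endpoint

def collect_metrics() -> dict:
    """Capture a small snapshot of the machine's performance."""
    return {
        "timestamp": time.time(),
        "cpu_percent": psutil.cpu_percent(interval=1),
        "memory_percent": psutil.virtual_memory().percent,
    }

def report(metrics: dict) -> None:
    """Send the snapshot to the monitoring web service as JSON."""
    request = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(metrics).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request, timeout=10)

if __name__ == "__main__":
    report(collect_metrics())
```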

Relevance:

30.00%

Publisher:

Abstract:

This work, entitled Websislapam: People Rating System Based on Web Technologies, allows the creation of questionnaires and the organization of the entities and people who take part in evaluations. Entities collect data from people with the help of features that reduce typing mistakes. Websislapam maintains a database and provides graphical reports, which enable the analysis of the people evaluated. The system was developed with Web technologies such as PHP, JavaScript, and CSS, following the object-oriented programming paradigm and using the MySQL DBMS. As a theoretical basis, research was carried out in the areas of database systems, Web technologies, and Web engineering, covering the evaluation process, Web-based systems and applications, Web engineering, and database systems. The technologies applied in the implementation of Websislapam are described, and a separate chapter presents the main features and artifacts used in its development. A case study demonstrates the practical use of the system.
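The abstract does not detail Websislapam's schema; purely as an illustration of the kind of entities such a rating system manages (questionnaires, entities, people, evaluations), a hypothetical model might look like the sketch below. All names and the aggregation function are assumptions; the real system uses PHP and MySQL.

```python
# Hypothetical data model for a questionnaire-based rating system.
# Names are illustrative only; the real Websislapam schema is not described in the abstract.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Question:
    text: str
    options: List[str]          # fixed options help reduce typing mistakes

@dataclass
class Questionnaire:
    title: str
    questions: List[Question] = field(default_factory=list)

@dataclass
class Evaluation:
    person_name: str            # the person being rated
    entity_name: str            # the entity collecting the data
    answers: Dict[int, str] = field(default_factory=dict)  # question index -> chosen option

def average_score(evaluations: List[Evaluation], scores: Dict[str, int]) -> float:
    """Toy aggregation of the kind a graphical report might use: map options to scores."""
    values = [scores[a] for e in evaluations for a in e.answers.values()]
    return sum(values) / len(values) if values else 0.0
```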

Relevance:

30.00%

Publisher:

Abstract:

The applications market is heated and has grown in recent years: every day thousands of apps are downloaded, and many of them are electronic games intended for smartphones and tablets. Electronic games are often integrated with an online system whose function is to provide extra functionality to the players. This project proposes the development of an online support system for a game designed for mobile devices, consisting of a website, a database responsible for storing data online, and a system called the back end that integrates all the modules mentioned previously.
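The abstract does not specify the back end's technology or routes. As a sketch only, a minimal score-reporting endpoint backed by a database could look like the following; the Flask framework, the /scores route, and the SQLite storage are assumptions, not the project's actual stack.

```python
# Minimal sketch of a game back end that stores player data online.
# Framework (Flask), route names, and SQLite storage are illustrative assumptions.
import sqlite3
from flask import Flask, jsonify, request

app = Flask(__name__)
DB = "game.db"

def init_db() -> None:
    with sqlite3.connect(DB) as conn:
        conn.execute("CREATE TABLE IF NOT EXISTS scores (player TEXT, points INTEGER)")

@app.route("/scores", methods=["POST"])
def submit_score():
    """Receive a score from the mobile game and persist it."""
    payload = request.get_json()
    with sqlite3.connect(DB) as conn:
        conn.execute("INSERT INTO scores VALUES (?, ?)", (payload["player"], payload["points"]))
    return jsonify({"status": "ok"})

@app.route("/scores", methods=["GET"])
def ranking():
    """Return the top scores for display on the website."""
    with sqlite3.connect(DB) as conn:
        rows = conn.execute("SELECT player, MAX(points) FROM scores GROUP BY player "
                            "ORDER BY 2 DESC LIMIT 10").fetchall()
    return jsonify([{"player": p, "points": pts} for p, pts in rows])

if __name__ == "__main__":
    init_db()
    app.run()
```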

Relevance:

30.00%

Publisher:

Abstract:

With the rapid growth of the use of Web applications in various fields of knowledge, the term Web service has come into evidence; it refers to services of different origins and purposes, offered through local networks and, in some cases, also available on the Internet. The architecture of this type of application performs data processing on the server side, which makes it very attractive for running complex and slow applications and processes, as is the case with most visualization algorithms. VTK is a library intended for visualization, and it features a large variety of methods and algorithms for this purpose, but its graphics engine demands processing capacity. The union of these two resources can bring interesting results and contribute to performance improvements when using the VTK library. This is what is investigated in this project, through tests and an analysis of the communication overhead.
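A server-side VTK pipeline of the kind such a web service could expose might look like the sketch below, which renders a scene off screen and writes a PNG that could then be returned to the client. The specific scene, image size, and timing of the rendering step are assumptions for illustration, not the project's actual service or overhead measurements.

```python
# Sketch of server-side visualization with VTK: render off screen and save a PNG
# that a web service could send back to the client. Scene and sizes are illustrative.
import time
import vtk

def render_scene(filename: str = "scene.png") -> float:
    """Render a simple scene off screen and return the server-side processing time."""
    start = time.perf_counter()

    sphere = vtk.vtkSphereSource()
    sphere.SetThetaResolution(64)
    sphere.SetPhiResolution(64)

    mapper = vtk.vtkPolyDataMapper()
    mapper.SetInputConnection(sphere.GetOutputPort())
    actor = vtk.vtkActor()
    actor.SetMapper(mapper)

    renderer = vtk.vtkRenderer()
    renderer.AddActor(actor)

    window = vtk.vtkRenderWindow()
    window.SetOffScreenRendering(1)     # no display needed on the server
    window.AddRenderer(renderer)
    window.SetSize(800, 600)
    window.Render()

    to_image = vtk.vtkWindowToImageFilter()
    to_image.SetInput(window)
    to_image.Update()

    writer = vtk.vtkPNGWriter()
    writer.SetFileName(filename)
    writer.SetInputConnection(to_image.GetOutputPort())
    writer.Write()

    return time.perf_counter() - start  # compare against the time spent transferring the image

if __name__ == "__main__":
    print(f"server-side rendering took {render_scene():.3f} s")
```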

Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

30.00%

Publisher:

Abstract:

Given the importance of the concept of productive efficiency in analyzing the human development process, which is complex and multidimensional, this study conducts a literature review of the research works that have used data envelopment analysis (DEA) to measure and analyze the development process. To that end, we searched the Scopus and Web of Science databases and considered the following analysis dimensions: bibliometrics, scope, DEA models and extensions used, interfaces with other techniques, units analyzed, and depth of analysis. In addition to a brief summary, the main gaps in each analysis dimension were assessed, which may serve to guide future research. (C) 2015 Elsevier Ltd. All rights reserved.
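For reference, DEA measures the relative efficiency of decision-making units (DMUs) by solving one linear program per unit. A standard formulation, the input-oriented CCR multiplier model (not necessarily the variant used by every study reviewed), is:

```latex
% Input-oriented CCR multiplier model for decision-making unit o,
% with outputs y_{rj}, inputs x_{ij}, and weights u_r, v_i.
\begin{aligned}
\max_{u,v}\;& \theta_o = \sum_{r} u_r\, y_{ro} \\
\text{s.t. }& \sum_{i} v_i\, x_{io} = 1, \\
& \sum_{r} u_r\, y_{rj} - \sum_{i} v_i\, x_{ij} \le 0 \quad \forall j, \\
& u_r \ge 0,\; v_i \ge 0 .
\end{aligned}
```

A DMU is rated efficient when the optimal value equals 1; the reviewed extensions modify this basic program (returns to scale, slacks, and so on).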

Relevance:

30.00%

Publisher:

Abstract:

End-user programmers are increasingly relying on web authoring environments to create web applications. Although often consisting primarily of web pages, such applications are increasingly going further, harnessing the content available on the web through “programs” that query other web applications for information to drive other tasks. Unfortunately, errors can be pervasive in web applications, impacting their dependability. This paper reports the results of an exploratory study of end-user web application developers, performed with the aim of exposing prevalent classes of errors. The results suggest that end-users struggle the most with the identification and manipulation of variables when structuring requests to obtain data from other web sites. To address this problem, we present a family of techniques that help end user programmers perform this task, reducing possible sources of error. The techniques focus on simplification and characterization of the data that end-users must analyze while developing their web applications. We report the results of an empirical study in which these techniques are applied to several popular web sites. Our results reveal several potential benefits for end-users who wish to “engineer” dependable web applications.
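As a concrete, purely illustrative example of the task the study found most error-prone, structuring a parameterized request against another web site, the sketch below keeps the variable parts of the query separate from the fixed URL. The endpoint and parameter names are hypothetical, not from the paper.

```python
# Illustrative only: the variable parts of a web request are kept explicit and separate
# from the fixed URL, the step end users reportedly struggle with.
# The endpoint and parameter names are hypothetical.
import urllib.parse
import urllib.request

BASE_URL = "http://example.com/weather"   # hypothetical data-providing web site

def fetch_forecast(city: str, units: str = "metric", days: int = 3) -> str:
    """Build the query string from named variables instead of concatenating them by hand."""
    params = {"city": city, "units": units, "days": days}
    url = BASE_URL + "?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url, timeout=10) as response:
        return response.read().decode("utf-8")

if __name__ == "__main__":
    print(fetch_forecast("Sao Paulo", days=5))
```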

Relevance:

30.00%

Publisher:

Abstract:

Mashups are becoming increasingly popular as end users are able to easily access, manipulate, and compose data from several web sources. To support end users, communities are forming around mashup development environments that facilitate sharing code and knowledge. We have observed, however, that end user mashups tend to suffer from several deficiencies, such as inoperable components or references to invalid data sources, and that those deficiencies are often propagated through the rampant reuse in these end user communities. In this work, we identify and specify ten code smells indicative of deficiencies we observed in a sample of 8,051 pipe-like web mashups developed by thousands of end users in the popular Yahoo! Pipes environment. We show through an empirical study that end users generally prefer pipes that lack those smells, and then present eleven specialized refactorings that we designed to target and remove the smells. Our refactorings reduce the complexity of pipes, increase their abstraction, update broken sources of data and dated components, and standardize pipes to fit the community development patterns. Our assessment on the sample of mashups shows that smells are present in 81% of the pipes, and that the proposed refactorings can reduce that number to 16%, illustrating the potential of refactoring to support thousands of end users developing pipe-like mashups.
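The paper's smell catalogue and refactorings target Yahoo! Pipes specifically; the sketch below only illustrates the general idea of scanning a pipe-like mashup for one deficiency class (references to invalid data sources), over a deliberately simplified, hypothetical pipe representation.

```python
# Hypothetical, simplified representation of a pipe-like mashup: a list of modules,
# some of which fetch external feeds. This is NOT the Yahoo! Pipes format; it only
# illustrates detecting one smell class (broken/unreachable data sources).
import urllib.error
import urllib.request

pipe = {
    "name": "news-aggregator",
    "modules": [
        {"type": "fetch", "url": "http://example.com/feed.xml"},
        {"type": "fetch", "url": "http://old-site.example.org/rss"},  # possibly dead source
        {"type": "sort", "by": "pubDate"},
    ],
}

def source_is_reachable(url: str, timeout: float = 5.0) -> bool:
    """Best-effort reachability check for an external data source."""
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return True
    except (urllib.error.URLError, ValueError):
        return False

def broken_source_smells(pipe: dict) -> list:
    """Return the URLs of fetch modules whose data source cannot be reached."""
    return [m["url"] for m in pipe["modules"]
            if m["type"] == "fetch" and not source_is_reachable(m["url"])]

if __name__ == "__main__":
    for url in broken_source_smells(pipe):
        print("smell: unreachable data source ->", url)
```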

Relevance:

30.00%

Publisher:

Abstract:

The objective of the present study was to evaluate the plasticity of the hunting behavior of the spider Nephilengys cruentata (Araneae: Nephilidae) when facing different species of social wasps. Considering that wasps can consume various species of spiders and that their venom can be used as a defense against many predators, the effect of prey body size on the behavior of N. cruentata was evaluated. Predation experiments were conducted using three species of social wasps of different sizes, and the data registered in this research were compiled through notes and filming of the hunting behavior of each spider in relation to the offered prey. The results revealed that the size of the wasp and the sequential offer of prey change the hunting behavior of the spider, and that large prey have a strong influence on this behavior.

Relevance:

30.00%

Publisher:

Abstract:

The present study raised the hypothesis that the trophic status in a tropical coastal food web from southeastern Brazil can be measured by the relation between total mercury (THg) and the nitrogen stable isotope ratio (δ15N) in its components. The analysed species were grouped into six trophic positions: primary producer (phytoplankton), primary consumer (zooplankton), consumer 1 (omnivorous shrimp), consumer 2 (pelagic carnivores represented by squid and fish species), consumer 3 (demersal carnivores represented by fish species) and consumer 4 (pelagic-demersal top carnivore represented by the fish Trichiurus lepturus). The values of THg, δ15N, and trophic level (TLv) increased significantly from primary producer toward top carnivore. Our data regarding trophic magnification (6.84) and biomagnification powers (0.25 for δ15N and 0.83 for TLv) indicated that Hg biomagnification throughout trophic positions is high in this tropical food web, which could be primarily related to the quality of the local water.
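Trophic magnification and biomagnification powers are typically estimated from log-linear regressions of contaminant concentration on δ15N or trophic level. A common formulation from the biomagnification literature (an assumption here, not necessarily the exact model fitted in this study) is:

```latex
% Common biomagnification regressions: the slope is the biomagnification power,
% and the trophic magnification factor (TMF) follows from the slope on trophic level.
\log_{10}[\mathrm{THg}] = a + b\,\delta^{15}\mathrm{N},
\qquad
\log_{10}[\mathrm{THg}] = a' + b'\,\mathrm{TLv},
\qquad
\mathrm{TMF} = 10^{\,b'} .
```

Under this convention, a slope of 0.83 on trophic level corresponds to a trophic magnification factor of about 10^0.83 ≈ 6.8, consistent with the value reported above.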

Relevance:

30.00%

Publisher:

Abstract:

Patterns of species interactions affect the dynamics of food webs. An important component of species interactions that is rarely considered with respect to food webs is the strengths of interactions, which may affect both structure and dynamics. In natural systems, these strengths are variable, and can be quantified as probability distributions. We examined how variation in strengths of interactions can be described hierarchically, and how this variation impacts the structure of species interactions in predator-prey networks, both of which are important components of ecological food webs. The stable isotope ratios of predator and prey species may be particularly useful for quantifying this variability, and we show how these data can be used to build probabilistic predator-prey networks. Moreover, the distribution of variation in strengths among interactions can be estimated from a limited number of observations. This distribution informs network structure, especially the key role of dietary specialization, which may be useful for predicting structural properties in systems that are difficult to observe. Finally, using three mammalian predator-prey networks (two African and one Canadian) quantified from stable isotope data, we show that exclusion of link-strength variability results in biased estimates of nestedness and modularity within food webs, whereas the inclusion of body size constraints only marginally increases the predictive accuracy of the isotope-based network. We find that modularity is the consequence of strong link-strengths in both African systems, while nestedness is not significantly present in any of the three predator-prey networks.
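To make the network measures concrete, the sketch below builds a small weighted predator-prey network, with link weights standing in for interaction strengths such as diet proportions that could be estimated from stable isotope data, and computes its weighted modularity. The species names and weights are invented for illustration and are not the study's data.

```python
# Toy weighted predator-prey network: edge weights stand in for interaction strengths
# (e.g., diet proportions estimated from stable isotopes). Species and weights are invented;
# this only illustrates the modularity measure discussed in the abstract.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

G = nx.Graph()
G.add_edge("lion",    "zebra",      weight=0.6)
G.add_edge("lion",    "wildebeest", weight=0.4)
G.add_edge("cheetah", "gazelle",    weight=0.8)
G.add_edge("cheetah", "zebra",      weight=0.2)
G.add_edge("lynx",    "hare",       weight=0.9)
G.add_edge("lynx",    "grouse",     weight=0.1)

# Detect communities and score how modular the weighted network is.
communities = greedy_modularity_communities(G, weight="weight")
Q = modularity(G, communities, weight="weight")
print("communities:", [sorted(c) for c in communities])
print("weighted modularity Q =", round(Q, 3))
```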

Relevance:

30.00%

Publisher:

Abstract:

Traditional supervised data classification considers only physical features (e.g., distance or similarity) of the input data. Here, this type of learning is called low level classification. On the other hand, the human (animal) brain performs both low and high orders of learning, and it readily identifies patterns according to the semantic meaning of the input data. Data classification that considers not only physical attributes but also the pattern formation is, here, referred to as high level classification. In this paper, we propose a hybrid classification technique that combines both types of learning. The low level term can be implemented by any classification technique, while the high level term is realized by the extraction of features of the underlying network constructed from the input data. Thus, the former classifies the test instances by their physical features or class topologies, while the latter measures the compliance of the test instances to the pattern formation of the data. Our study shows that the proposed technique not only can realize classification according to the pattern formation, but also is able to improve the performance of traditional classification techniques. Furthermore, as the class configuration's complexity increases, such as the mixture among different classes, a larger portion of the high level term is required to get correct classification. This feature confirms that the high level classification has a special importance in complex situations of classification. Finally, we show how the proposed technique can be employed in a real-world application, where it is capable of identifying variations and distortions of handwritten digit images. As a result, it yields an improvement in the overall pattern recognition rate.
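The abstract does not fix a particular low level classifier or network measure. The sketch below is one rough instantiation of the idea, combining a k-NN classifier (low level term) with a simple network-compliance term (high level term) based on how little a class's k-NN graph changes when the test instance is inserted; the choice of measure, the mixing weight, and the dataset are all assumptions rather than the paper's method.

```python
# Rough sketch of hybrid low/high level classification. Assumptions: k-NN posterior as the
# low-level term, and change in average clustering of a per-class k-NN graph as the
# high-level (pattern-compliance) term; the paper's actual network measures may differ.
import numpy as np
import networkx as nx
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier, NearestNeighbors

K_GRAPH = 3   # neighbours used to build each class network
LAMBDA = 0.4  # weight of the high-level term

def class_graph(X_cls):
    """k-NN graph over one class's training points."""
    nn = NearestNeighbors(n_neighbors=K_GRAPH + 1).fit(X_cls)
    _, idx = nn.kneighbors(X_cls)
    G = nx.Graph()
    G.add_nodes_from(range(len(X_cls)))
    for i, neighbours in enumerate(idx):
        for j in neighbours[1:]:            # skip the point itself
            G.add_edge(i, int(j))
    return G, nn

def compliance(x, G, nn, n_points):
    """High-level term: the less the class network changes when x joins it, the better."""
    before = nx.average_clustering(G)
    H = G.copy()
    _, idx = nn.kneighbors(x.reshape(1, -1), n_neighbors=K_GRAPH)
    for j in idx[0]:
        H.add_edge(n_points, int(j))        # insert the test point as a new node
    after = nx.average_clustering(H)
    return 1.0 / (1.0 + abs(after - before))

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
p_low = knn.predict_proba(X_te)             # low-level term (physical features)

graphs = {c: class_graph(X_tr[y_tr == c]) for c in knn.classes_}
sizes = {c: int((y_tr == c).sum()) for c in knn.classes_}

predictions = []
for x, low in zip(X_te, p_low):
    high = np.array([compliance(x, *graphs[c], sizes[c]) for c in knn.classes_])
    high = high / high.sum()                # normalise compliance into a distribution
    predictions.append(knn.classes_[np.argmax((1 - LAMBDA) * low + LAMBDA * high)])

print("hybrid accuracy:", np.mean(np.array(predictions) == y_te))
```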

Relevance:

30.00%

Publisher:

Abstract:

Background: The search for enriched (aka over-represented or enhanced) ontology terms in a list of genes obtained from microarray experiments is becoming a standard procedure for a system-level analysis. This procedure tries to summarize the information focussing on classification designs such as Gene Ontology, KEGG pathways, and so on, instead of focussing on individual genes. Although it is well known in statistics that association and significance are distinct concepts, only the latter approach has been used to deal with the ontology term enrichment problem.

Results: BayGO implements a Bayesian approach to search for enriched terms from microarray data. The R source code is freely available at http://blasto.iq.usp.br/~tkoide/BayGO in three versions: Linux, which can be easily incorporated into pre-existing pipelines; Windows, to be controlled interactively; and a web tool. The software was validated using a bacterial heat shock response dataset, since this stress triggers known system-level responses.

Conclusion: The Bayesian model accounts for the fact that, eventually, not all the genes from a given category are observable in microarray data due to low intensity signal, quality filters, genes that were not spotted, and so on. Moreover, BayGO allows one to measure the statistical association between generic ontology terms and differential expression, instead of working only with the common significance analysis.
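To make the association-versus-significance distinction concrete, the sketch below computes both a significance measure (Fisher's exact p-value) and an association measure (the odds ratio) for a single ontology term on a 2x2 enrichment table. This illustrates the conventional setup that the abstract contrasts with BayGO's Bayesian model; it is not BayGO's method, and the counts are invented.

```python
# Illustration of the association-vs-significance distinction on a 2x2 enrichment table.
# This is NOT BayGO's Bayesian model; it only shows the two quantities being contrasted.
# Counts are invented for the example.
from scipy.stats import fisher_exact

# Rows: gene annotated with the ontology term / not annotated.
# Columns: gene differentially expressed / not differentially expressed.
table = [[30, 70],     # 30 of the 100 term members are differentially expressed
         [170, 1730]]  # background: 170 of the remaining 1900 genes are

odds_ratio, p_value = fisher_exact(table, alternative="greater")

print(f"association (odds ratio): {odds_ratio:.2f}")  # strength of the relationship
print(f"significance (p-value):   {p_value:.2e}")     # evidence against independence
```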

Relevance:

30.00%

Publisher:

Abstract:

Background: Transcript enumeration methods such as SAGE, MPSS, and sequencing-by-synthesis EST "digital northern" are important high-throughput techniques for digital gene expression measurement. Like other counting or voting processes, these measurements constitute compositional data, exhibiting properties particular to the simplex space, where the summation of the components is constrained. These properties are not present in regular Euclidean spaces, on which hybridization-based microarray data are often modeled. Therefore, pattern recognition methods commonly used for microarray data analysis may be non-informative for the data generated by transcript enumeration techniques, since they ignore certain fundamental properties of this space.

Results: Here we present a software tool, Simcluster, designed to perform clustering analysis for data on the simplex space. We present Simcluster as a stand-alone command-line C package and as a user-friendly on-line tool. Both versions are available at: http://xerad.systemsbiology.net/simcluster.

Conclusion: Simcluster is designed in accordance with a well-established mathematical framework for compositional data analysis, which provides principled procedures for dealing with the simplex space, and is thus applicable in a number of contexts, including enumeration-based gene expression data.
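Simcluster itself implements a principled compositional framework in C. Purely as a rough, simplified analogue of clustering in the simplex space, the sketch below maps count vectors onto compositions, applies the centred log-ratio (clr) transform from compositional data analysis, and clusters the result hierarchically; the pseudocount, linkage method, and counts are illustrative assumptions, not Simcluster's exact procedure.

```python
# Rough analogue of simplex-space clustering (NOT Simcluster's exact procedure):
# convert counts to compositions, apply the centred log-ratio (clr) transform,
# then cluster with standard hierarchical clustering.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

counts = np.array([   # rows: libraries/samples, columns: transcript tag counts (invented)
    [120,  30,  50,   5],
    [110,  35,  48,   7],
    [ 10, 200,  15,  90],
    [ 12, 190,  20,  85],
])

def clr(x, pseudocount=0.5):
    """Centred log-ratio transform of count vectors treated as compositions."""
    comp = x + pseudocount                       # avoid log(0) for unobserved tags
    comp = comp / comp.sum(axis=1, keepdims=True)
    logs = np.log(comp)
    return logs - logs.mean(axis=1, keepdims=True)

Z = linkage(clr(counts), method="average")       # Euclidean distance in clr space
labels = fcluster(Z, t=2, criterion="maxclust")  # cut the tree into two clusters
print("cluster labels:", labels)
```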