836 results for Data fusion applications


Relevance: 30.00%

Abstract:

The bare-nucleus S(E) factors for the ²H(d,p)³H and ²H(d,n)³He reactions have been measured for the first time via the Trojan Horse Method applied off the proton in ³He, from 1.5 MeV down to 2 keV. This range overlaps with the region relevant for Standard Big Bang Nucleosynthesis as well as with the thermal energies of future fusion reactors and of deuterium burning in the pre-main-sequence phase of stellar evolution. This is the first experiment in the quasi-free regime in which the charged spectator is detected. Both the energy dependence and the absolute value of the S(E) factors deviate by more than 15% from the available direct data, with new S(0) values of 57.4 ± 1.8 keV b for the ³H + p channel and 60.1 ± 1.9 keV b for the ³He + n channel. None of the existing fitting curves is able to reproduce the slope of the new data over the full range, calling for a revision of the theoretical description. This has consequences for the calculated reaction rates, with an increase of more than 25% at the temperatures of future fusion reactors. (C) 2011 Elsevier B.V. All rights reserved.
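For reference, the astrophysical S(E) factor quoted above is conventionally defined by factoring the steep Coulomb-barrier dependence out of the cross section; this is the standard definition, not specific to this paper:

    S(E) = \sigma(E)\, E\, e^{2\pi\eta(E)}, \qquad \eta(E) = \frac{Z_1 Z_2 e^2}{\hbar v},

where \eta is the Sommerfeld parameter and v the relative velocity of the colliding nuclei; the S(0) values reported above are zero-energy extrapolations of this quantity.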

Relevance: 30.00%

Abstract:

Specific choices about how to represent complex networks can have a substantial impact on the execution time required to construct and analyze those structures. In this work we report a comparison of the effects of representing complex networks statically, by adjacency matrices, or dynamically, by adjacency lists. Three theoretical models of complex networks are considered: two variants of the Erdős–Rényi model as well as the Barabási–Albert model. We investigated the effect of the different representations on the construction and measurement of several topological properties (i.e., degree, clustering coefficient, shortest path length, and betweenness centrality). We found that the form of representation generally has a substantial effect on the execution time, with the sparse representation frequently resulting in remarkably superior performance. (C) 2011 Elsevier B.V. All rights reserved.
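As a minimal illustration of the two representations compared above (our sketch, not the authors' code), the following Python snippet builds the same Erdős–Rényi graph both ways; with the adjacency list, iterating over neighbours costs time proportional to the degree rather than to the number of nodes, which is the source of the sparse representation's advantage:

    import random

    n, p = 1000, 0.01  # G(n, p) parameters, chosen for illustration

    # Static representation: n x n adjacency matrix (dense).
    matrix = [[0] * n for _ in range(n)]
    # Dynamic representation: one neighbour list per node.
    adj = [[] for _ in range(n)]

    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p:
                matrix[i][j] = matrix[j][i] = 1
                adj[i].append(j)
                adj[j].append(i)

    # Degree of node 0: O(n) scan in the matrix vs O(degree) in the list.
    assert sum(matrix[0]) == len(adj[0])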

Relevance: 30.00%

Abstract:

This work presents a novel approach to increasing the recognition power of Multiscale Fractal Dimension (MFD) techniques applied to image classification. The proposal uses Functional Data Analysis (FDA) to enhance the precision of the MFD technique, yielding a more representative descriptor vector capable of recognizing and characterizing objects in an image more precisely. FDA is applied to signatures extracted with the Bouligand-Minkowski MFD technique in order to generate a descriptor vector from them. To evaluate the improvement obtained, an experiment using two datasets of objects was carried out: a dataset of character shapes (26 characters of the Latin alphabet) carrying different levels of controlled noise, and a dataset of fish image contours. A comparison with the well-known Fourier and wavelet descriptor methods was performed to verify the performance of the FDA method. The descriptor vectors were submitted to the Linear Discriminant Analysis (LDA) classification method, and the correct-classification rates of the descriptor methods were compared. The results demonstrate that FDA outperforms the literature methods (Fourier and wavelets) in processing the information extracted from the MFD signature. The proposed method can therefore be considered an interesting choice for pattern recognition and image classification using fractal analysis.
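The abstract does not include the authors' implementation; the following Python sketch shows how a Bouligand-Minkowski signature is commonly computed via the exact Euclidean distance transform (the function name, parameters and toy shape are ours):

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def bouligand_minkowski_signature(shape_mask, max_radius=10):
        """Multiscale fractal signature of a binary shape: the log-log
        curve of dilated area versus dilation radius. This curve is the
        kind of signature to which FDA would then be applied."""
        # Distance from every pixel to the nearest shape pixel.
        dist = distance_transform_edt(~shape_mask)
        radii = np.arange(1, max_radius + 1)
        # Area of the shape dilated by a disc of radius r.
        areas = np.array([(dist <= r).sum() for r in radii])
        return np.log(radii), np.log(areas)

    # Toy example: a filled square on a small grid (illustrative only).
    mask = np.zeros((64, 64), dtype=bool)
    mask[24:40, 24:40] = True
    log_r, log_area = bouligand_minkowski_signature(mask)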

Relevance: 30.00%

Abstract:

This paper considers an extension of the skew-normal model through the inclusion of an additional parameter which can lead to both unimodal and bimodal distributions. The paper presents various basic properties of this family of distributions and provides a stochastic representation that is useful for obtaining theoretical properties and for simulating from the distribution. Moreover, the singularity of the Fisher information matrix is investigated, and maximum likelihood estimation for a random sample with no covariates is considered. The main motivation is thus to avoid using mixtures in fitting bimodal data, as these are well known to be complicated to deal with, particularly because of identifiability problems. Data-based illustrations show that such a model can be useful. Copyright (C) 2009 John Wiley & Sons, Ltd.
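For context (the abstract does not spell out the added parameter), the baseline skew-normal density being extended is Azzalini's

    f(x; \lambda) = 2\,\phi(x)\,\Phi(\lambda x), \qquad x \in \mathbb{R},

where \phi and \Phi are the standard normal density and distribution function and \lambda controls the skewness; this family is always unimodal, which is precisely the limitation the extra parameter discussed above removes.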

Relevance: 30.00%

Abstract:

In [H. Brezis, A. Friedman, Nonlinear parabolic equations involving measures as initial conditions, J. Math. Pures Appl. (9) (1983) 73-97], Brezis and Friedman prove that certain nonlinear parabolic equations with the delta measure as initial data have no solution. However, in [J.F. Colombeau, M. Langlais, Generalized solutions of nonlinear parabolic equations with distributions as initial conditions, J. Math. Anal. Appl. (1990) 186-196], Colombeau and Langlais prove that these equations have a unique solution even if the delta measure is replaced by any Colombeau generalized function of compact support. Here we generalize Colombeau and Langlais' result, proving that any generalized function may be taken as the initial data. Our approach relies on recent algebraic and topological developments of the theory of Colombeau generalized functions and on results from [J. Aragona, Colombeau generalized functions on quasi-regular sets, Publ. Math. Debrecen (2006) 371-399]. (C) 2009 Elsevier Ltd. All rights reserved.
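For orientation, the model problem treated by Brezis and Friedman, reconstructed here from the cited paper rather than from this abstract, is the semilinear heat equation

    u_t - \Delta u + |u|^{p-1}u = 0 \quad \text{in } \mathbb{R}^n \times (0,T), \qquad u(\cdot, 0) = \delta,

for which no distributional solution exists when p \ge (n+2)/n; the Colombeau framework used above is what circumvents this non-existence result.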

Relevance: 30.00%

Abstract:

Measurement error models often arise in epidemiological and clinical research. Usually, in this setup, it is assumed that the latent variable has a normal distribution. However, the normality assumption may not always be correct. The skew-normal/independent distributions are a class of asymmetric, thick-tailed distributions that includes the skew-normal distribution as a special case. In this paper, we explore the use of the skew-normal/independent distribution as a robust alternative for the null-intercept measurement error model under a Bayesian paradigm. We assume that the random errors and the unobserved value of the covariate (the latent variable) jointly follow a skew-normal/independent distribution, providing an appealing robust alternative to the routine use of the symmetric normal distribution in this type of model. Specific distributions examined include univariate and multivariate versions of the skew-normal, skew-t, skew-slash, and skew-contaminated-normal distributions. The methods developed are illustrated using a real data set from a dental clinical trial. (C) 2008 Elsevier B.V. All rights reserved.
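As rough orientation (the abstract does not write the model out, so the structure and notation below are our assumption of a generic null-intercept measurement error model):

    y_i = \beta\, x_i + e_i, \qquad X_i = x_i + u_i,

where the latent covariate x_i is observed only through X_i, and where (x_i, e_i, u_i) would here be assigned a joint skew-normal/independent distribution in place of the usual normal.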

Relevance: 30.00%

Abstract:

Ordered intermetallic phases of Pt with several transition metals have been prepared and their electrocatalytic properties studied. In light of these tests it is proposed that these catalysts could be used as electrodes in fuel cells, as they combine an excellent capacity to adsorb organic fuels at the Pt sites with low susceptibility to being poisoned by intermediates and reaction products at the transition-metal sites. An experimental procedure used to obtain the four intermetallic phases Pt-M (M = Mn, Pb, Sb and Sn) is described. The phases thus produced were characterized by X-ray diffraction, scanning electron microscopy with surface analysis by energy-dispersive X-ray spectrometry, scanning tunneling microscopy and X-ray photoelectron spectroscopy. The data thus obtained support the conclusion that the method described here is highly effective for the preparation of Pt-M phases featuring a range of structural and electronic modifications that will allow a useful relation to be established between their physicochemical properties and predicted electrocatalytic activity. (C) 2007 Elsevier Ltd. All rights reserved.

Relevance: 30.00%

Abstract:

Cytochrome P450 (CYP450) is a class of enzymes for which substrate identification is particularly important: it would help medicinal chemists design drugs with fewer side effects due to drug-drug interactions and to extensive genetic polymorphism. Herein, we discuss the application of 2D and 3D similarity searches in identifying reference structures with a higher capacity to retrieve substrates of three important CYP enzymes (CYP2C9, CYP2D6, and CYP3A4). On the basis of the complementarity of multiple reference structures selected by different similarity search methods, we proposed the fusion of their individual Tanimoto scores into a consensus Tanimoto score (T(consensus)). Using this new score, true positive rates of 63% (CYP2C9) and 81% (CYP2D6) were achieved with false positive rates of 4% for the CYP2C9-CYP2D6 data set. Extended similarity searches were carried out on a validation data set, and the results showed that by using the T(consensus) score, not only did the area under the ROC curve increase, but more substrates were also recovered at the beginning of a ranked list.
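A minimal Python sketch of the score fusion described above (the abstract does not specify the fusion rule, so the averaging used here is an assumption; fingerprints are represented as sets of "on" bits, and all values are toy data):

    def tanimoto(fp_a: set, fp_b: set) -> float:
        """Tanimoto coefficient between two bit-set fingerprints."""
        union = len(fp_a | fp_b)
        return len(fp_a & fp_b) / union if union else 0.0

    def consensus_tanimoto(candidate: set, references: list) -> float:
        """Fuse similarities to several reference structures into one
        score; max fusion would be an equally plausible choice."""
        scores = [tanimoto(candidate, ref) for ref in references]
        return sum(scores) / len(scores)

    refs = [{1, 4, 9, 12}, {1, 2, 9, 30}]   # hypothetical references
    query = {1, 9, 12, 30}                  # hypothetical candidate
    score = consensus_tanimoto(query, refs)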

Relevance: 30.00%

Abstract:

Data mining is a relatively new field of research whose objective is to acquire knowledge from large amounts of data. In medical and health care areas, due to regulations and to the availability of computers, a large amount of data is becoming available [27]. Practitioners are expected to use all these data in their work; at the same time, such a large amount of data cannot be processed by humans in a short time to make diagnoses, prognoses and treatment schedules. A major objective of this thesis is to evaluate data mining tools in medical and health care applications in order to develop a tool that can help make reasonably accurate decisions. The goal of this thesis is to find a pattern among patients who contracted pneumonia by clustering lab values that were recorded every day. This pattern can then be generalized to patients who have not been diagnosed with the disease but whose lab values show the same trend as those of pneumonia patients. Ten tables were extracted from a large database of a hospital in Jena for this work. In the ICU (intensive care unit), COPRA, a patient management system, is used. All tables and data are stored in a German-language database.
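The thesis pipeline is not reproduced here; as a minimal sketch of the clustering step it describes, the following Python snippet groups daily lab-value trajectories with k-means (array shapes, the number of clusters and the random data are all illustrative):

    import numpy as np
    from sklearn.cluster import KMeans

    # Hypothetical data: one row per patient, one column per day.
    rng = np.random.default_rng(0)
    lab_values = rng.normal(size=(200, 14))

    # Normalize each trajectory so clusters reflect trends, not levels.
    trends = (lab_values - lab_values.mean(axis=1, keepdims=True)) / (
        lab_values.std(axis=1, keepdims=True) + 1e-9
    )

    kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(trends)
    labels = kmeans.labels_  # cluster assignment per patient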

Relevance: 30.00%

Abstract:

The advancement of GPS technology enables GPS devices to be used not only as orientation and navigation tools but also to track travelled routes. GPS tracking data provide essential information for a broad range of urban planning applications such as transportation routing and planning, traffic management, and environmental control. This paper describes the processing of data collected by tracking the cars of 316 volunteers over a seven-week period. Detailed information is extracted, and the processed data are connected to the underlying road network by means of maps. Geographical maps are applied to check how the car movements match the road network and capture the complexity of the car movements in the urban area. The results show that 90% of the trips in the plane match the road network within a tolerance.
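A minimal planar sketch of the kind of network-matching check described above (not the paper's algorithm; the tolerance value and coordinate units are illustrative):

    import math

    def point_segment_distance(p, a, b):
        """Shortest distance from point p to the segment ab."""
        (px, py), (ax, ay), (bx, by) = p, a, b
        dx, dy = bx - ax, by - ay
        seg_len2 = dx * dx + dy * dy
        if seg_len2 == 0:
            return math.hypot(px - ax, py - ay)
        # Clamp the projection of p onto line ab to the segment.
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
        return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

    def fraction_matching(track, segments, tolerance=15.0):
        """Fraction of GPS points within `tolerance` of some road segment."""
        hits = sum(1 for p in track
                   if any(point_segment_distance(p, a, b) <= tolerance
                          for a, b in segments))
        return hits / len(track)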

Relevance: 30.00%

Abstract:

GPS tracking of mobile objects provides spatial and temporal data for a broad range of applications, including traffic management and control and transportation routing and planning. Previous transport research has focused on GPS tracking data as an appealing alternative to travel diaries. Moreover, GPS-based data are gradually becoming a cornerstone of real-time traffic management. Vehicle tracking data from GPS devices are, however, susceptible to measurement errors, a neglected issue in transport research. By conducting a randomized experiment, we assess the reliability of GPS-based traffic data on geographical position, velocity, and altitude for three types of vehicles: bike, car, and bus. We find the geographical positioning reliable, but with an error greater than that postulated by the manufacturer and a non-negligible risk of aberrant positioning. Velocity is slightly underestimated, whereas altitude measurements are unreliable.

Relevance: 30.00%

Abstract:

Single-page applications have historically been subject to strong market forces driving fast development and deployment at the expense of quality control and changeable code, which are important factors for maintainability. In this report we develop two functionally equivalent applications using AngularJS and React and compare their maintainability as defined by ISO/IEC 9126. AngularJS and React represent two distinct approaches to web development: AngularJS is a general framework providing rich base functionality, while React is a small, specialized library for efficient view rendering. The quality comparison was accomplished by calculating the Maintainability Index for each application. Version control analysis was used to determine quality indicators during development and during subsequent maintenance, in which new functionality was added in two steps. The results show no major differences in maintainability between the initial applications. As more functionality is added, the Maintainability Index decreases faster in the AngularJS application, indicating a steeper increase in complexity compared to the React application. Source code analysis reveals that changes in data flow require significantly larger modifications of the AngularJS application due to its inherent architecture for data flow. We conclude that frameworks are useful when they facilitate development of known requirements, but less so when applications and systems grow in size.
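For reference, a commonly used formulation of the Maintainability Index is

    MI = 171 - 5.2\,\ln(V) - 0.23\,G - 16.2\,\ln(\mathit{LOC}),

where V is the Halstead Volume, G the cyclomatic complexity and LOC the number of lines of code; lower values indicate code that is harder to maintain. Normalized variants exist, and the report's exact variant may differ.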

Relevance: 30.00%

Abstract:

The practitioners of bioinformatics require increasing sophistication from their software tools to take into account the particular characteristics that make their domain complex. For example, there is great variation in the experience of researchers, from novices who would like guidance from experts on the best resources to use, to experts who wish to take greater management control of the tools used in their experiments. Also, the range of available, and conflicting, data formats is growing, and there is a desire to automate the many trivial manual stages of in-silico experiments. Agent-oriented software development is one approach to tackling the design of complex applications. In this paper we argue that agent-oriented development is, in fact, a particularly well-suited approach to developing bioinformatics tools that take the wider domain characteristics into account. To illustrate this, we design a data curation tool that manages the format of experimental data, extend it to better account for the extra requirements imposed by the domain characteristics, and show how those characteristics lead to a system well suited to an agent-oriented view.
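As a purely illustrative Python sketch of the agent-oriented style argued for above (all names and the record structure are ours, not the paper's design):

    from dataclasses import dataclass, field

    @dataclass
    class CurationAgent:
        """Toy agent that watches for experimental data records and
        converts them to its preferred format."""
        preferred_format: str
        converters: dict = field(default_factory=dict)  # (src, dst) -> fn

        def perceive(self, record):
            """Decide whether an incoming record needs curation."""
            return record["format"] != self.preferred_format

        def act(self, record):
            """Convert the record, fulfilling the curation goal."""
            fn = self.converters[(record["format"], self.preferred_format)]
            return {"format": self.preferred_format, "data": fn(record["data"])}

    # Toy usage: an agent that upper-cases "raw" data into "clean" format.
    agent = CurationAgent("clean", {("raw", "clean"): str.upper})
    rec = {"format": "raw", "data": "actgtt"}
    if agent.perceive(rec):
        rec = agent.act(rec)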

Relevance: 30.00%

Abstract:

Determining the provenance of data, i.e. the process that led to that data, is vital in many disciplines. For example, in science, the process that produced a given result must be demonstrably rigorous for the result to be deemed reliable. A provenance system supports applications in recording adequate documentation about process executions to answer queries regarding provenance, and provides the functionality to perform those queries. Several provenance systems are being developed, but all focus on systems whose components are reactive, for example Web Services that act on the basis of a request, job submission systems, etc. This limitation means that questions regarding the motives of autonomous actors, or agents, in such systems remain unanswerable in the general case. Such questions include: who was ultimately responsible for a given effect, what was their reason for initiating the process, and does the effect of a process match what those initiating the process intended to occur? In this paper, we address this limitation by integrating two solutions: a generic, reusable framework for representing the provenance of data in service-oriented architectures, and a model for describing the goal-oriented delegation and engagement of agents in multi-agent systems. Using these solutions, we present algorithms to answer common questions regarding the responsibility for and success of a process, and we evaluate the approach with a simulated healthcare example.
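A minimal Python sketch of the kind of responsibility query discussed above (the paper's provenance and delegation models are far richer; the mappings and names here are illustrative only):

    def ultimately_responsible(effect, caused_by, delegated_by):
        """Walk back from an effect to the agent ultimately responsible.

        `caused_by` maps an effect to the agent that directly produced it;
        `delegated_by` maps an agent to the agent that delegated the goal
        to it (absent for autonomous principals)."""
        agent = caused_by[effect]
        while agent in delegated_by:
            agent = delegated_by[agent]
        return agent

    # Toy delegation chain: doctor -> assistant -> booking service.
    caused_by = {"appointment_made": "booking_service"}
    delegated_by = {"booking_service": "assistant", "assistant": "doctor"}
    assert ultimately_responsible("appointment_made",
                                  caused_by, delegated_by) == "doctor"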