950 results for Model accuracy
Abstract:
We analyze the dynamical behavior of a quantum system under the action of two counteracting baths: the inevitable energy-draining reservoir and, counteracting it by exciting the system, an engineered Glauber amplifier. We follow the system dynamics towards equilibrium to map the distinctive behavior arising from the interplay of attenuation and amplification. Such a mapping, with the corresponding parameter regimes, is achieved by calculating the evolution of both the excitation and the Glauber-Sudarshan P function. Techniques to compute the decoherence and the fidelity of quantum states under the action of both counteracting baths, based on the Wigner function rather than the density matrix, are also presented. They enable us to analyze the similarity of the system's evolved state to the original one, for all parameter regimes. Applications of this attenuation-amplification interplay are discussed.
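For context, a standard phase-space identity of quantum optics (a general result, not specific to this paper's techniques) shows why the Wigner function can stand in for the density matrix in fidelity calculations: the overlap of two states is proportional to the phase-space integral of the product of their Wigner functions,

    \operatorname{Tr}(\hat{\rho}_1 \hat{\rho}_2) = \pi \int W_1(\alpha)\, W_2(\alpha)\, d^2\alpha ,

so, for a pure initial state, the fidelity between the original and the evolved state can be evaluated directly in phase space.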
Abstract:
The efficacy of fluorescence spectroscopy in detecting squamous cell carcinoma is evaluated in an animal model following laser excitation at 442 and 532 nm. Lesions are chemically induced with a topical DMBA application on the left lateral tongue of Golden Syrian hamsters. The animals are investigated every 2 weeks from the 4th week of induction up to a total of 26 weeks. The right lateral tongue of each animal is taken as a control site (normal contralateral tissue), and the induced lesions are analyzed as a set of points covering the entire clinically detectable area. Based on fluorescence spectral differences, four indices are determined to discriminate normal and carcinoma tissues through intraspectral analysis. The spectral data are also analyzed using multivariate data analysis, and the results are compared with histology as the diagnostic gold standard. The best result is achieved for blue excitation using the KNN (K-nearest neighbor, an interspectral analysis) algorithm, with a sensitivity of 95.7% and a specificity of 91.6%. These high indices indicate that fluorescence spectroscopy may constitute a fast, noninvasive auxiliary tool for the diagnosis of cancer within the oral cavity. (C) 2008 Society of Photo-Optical Instrumentation Engineers.
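As a pointer to how such a KNN classifier operates on spectra, here is a minimal Python sketch with scikit-learn; the data shapes, synthetic labels, and k = 5 are illustrative assumptions, not the study's settings.

    # Minimal KNN sketch: rows of X are spectra, y holds binary labels
    # (0 = normal, 1 = carcinoma); all data below are synthetic.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.metrics import confusion_matrix

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 512))       # 200 spectra, 512 wavelength bins
    y = rng.integers(0, 2, size=200)      # synthetic tissue labels

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)

    tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
    sensitivity = tp / (tp + fn)          # fraction of carcinomas detected
    specificity = tn / (tn + fp)          # fraction of normal sites cleared
    print(f"sensitivity={sensitivity:.3f}, specificity={specificity:.3f}")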
Abstract:
The existence of juxtaposed regions of distinct cultures, in spite of the fact that people's beliefs tend to become more similar as individuals interact repeatedly, is a puzzling phenomenon in the social sciences. Here we study an extreme version of the frequency-dependent-bias model of social influence, in which an individual adopts the opinion shared by the majority of the members of its extended neighborhood, which includes the individual itself. This is a variant of the majority-vote model in which the individual retains its opinion in case of a tie among the neighbors' opinions. We assume that the individuals are fixed at the sites of a square lattice of linear size L and interact with their nearest neighbors only. Within a mean-field framework, we derive the equations of motion for the density of individuals adopting a particular opinion in the single-site and pair approximations. Although the single-site approximation predicts a single opinion domain that takes over the entire lattice, the pair approximation yields a qualitatively correct picture, with the coexistence of different opinion domains and a strong dependence on the initial conditions. Extensive Monte Carlo simulations indicate a rich distribution of opinion domains or clusters, whose number grows with L^2 whereas the size of the largest cluster grows with ln L^2. The analysis of the sizes of the opinion domains shows that they obey a power-law distribution for not-too-large sizes but are exponentially distributed in the limit of very large clusters. In addition, similarly to another well-known social influence model, Axelrod's model, we find that these opinion domains are unstable to the effect of a thermal-like noise.
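A toy rendering of the update rule described above (majority of the extended neighborhood including the agent itself, on a square lattice with nearest-neighbor interactions); the lattice size, synchronous updating, and periodic boundaries are illustrative choices, not necessarily the paper's:

    import numpy as np

    L = 64
    rng = np.random.default_rng(1)
    s = rng.choice([-1, 1], size=(L, L))   # random initial opinions

    def step(s):
        # opinions of the 4 nearest neighbors (periodic boundaries)
        nbr = (np.roll(s, 1, axis=0) + np.roll(s, -1, axis=0)
               + np.roll(s, 1, axis=1) + np.roll(s, -1, axis=1))
        # including the agent itself makes the 5-member count odd, so the
        # majority is always defined; on a 2-2 neighbor tie the agent's own
        # opinion decides, i.e. it retains its opinion
        return np.sign(s + nbr)

    for _ in range(500):
        s = step(s)
    print("fraction holding opinion +1:", np.mean(s == 1))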
Abstract:
Thanks to recent advances in molecular biology, allied to an ever-increasing amount of experimental data, the functional states of thousands of genes can now be extracted simultaneously using methods such as cDNA microarrays and RNA-Seq. Particularly important related investigations are the modeling and identification of gene regulatory networks from expression data sets. Such knowledge is fundamental for many applications, such as disease treatment, therapeutic intervention strategies and drug design, as well as for planning new high-throughput experiments. Methods have been developed for gene network modeling and identification from expression profiles. However, an important open problem regards how to validate such approaches and their results. This work presents an objective approach for validating gene network modeling and identification, comprising three main aspects: (1) Artificial Gene Network (AGN) model generation through theoretical models of complex networks, which are used to simulate temporal expression data; (2) a computational method for gene network identification from the simulated data, founded on a feature selection approach in which a target gene is fixed and the expression profiles of all other genes are examined in order to identify a relevant subset of predictors; and (3) validation of the identified AGN-based network through comparison with the original network. The proposed framework allows several types of AGNs to be generated and used to simulate temporal expression data. The results of the network identification method can then be compared with the original network in order to estimate its properties and accuracy. Some of the most important theoretical models of complex networks have been assessed: the uniformly random Erdos-Renyi (ER), the small-world Watts-Strogatz (WS), the scale-free Barabasi-Albert (BA), and geographical (GG) networks. The experimental results indicate that the inference method was sensitive to variations in the average degree k, its network recovery rate decreasing as k increases. The signal size was important for the accuracy of the network identification, with very good results obtained even from small expression profiles. However, the adopted inference method was not able to recognize distinct structures of interaction among genes, presenting a similar behavior when applied to different network topologies. In summary, the proposed framework, though simple, was adequate for the validation of the inferred networks, identifying properties of the evaluated method, and can be extended to other inference methods.
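Step (2) can be pictured with a small sketch: fix a target gene and rank every other gene as a predictor of its next-state expression. The mutual-information score and the synthetic binary data below are placeholders for the paper's actual criterion function and simulated AGN data.

    # Illustrative feature-selection step: rank candidate predictors of a
    # fixed target gene by I(X_j(t); X_target(t+1)) on binarized time series.
    import numpy as np
    from sklearn.metrics import mutual_info_score

    rng = np.random.default_rng(2)
    n_genes, T = 20, 50
    X = rng.integers(0, 2, size=(T, n_genes))   # synthetic binarized profiles

    target = 0
    scores = {
        j: mutual_info_score(X[:-1, j], X[1:, target])
        for j in range(n_genes) if j != target
    }
    predictors = sorted(scores, key=scores.get, reverse=True)[:3]
    print("top predictor candidates for gene 0:", predictors)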
Abstract:
Background: The inference of gene regulatory networks (GRNs) from large-scale expression profiles is one of the most challenging problems in Systems Biology today. Many techniques and models have been proposed for this task. However, it is not generally possible to recover the original topology with great accuracy, mainly due to the short time series available in the face of the high complexity of the networks and the intrinsic noise of the expression measurements. In order to improve the accuracy of GRN inference methods based on entropy (mutual information), a new criterion function is proposed here. Results: In this paper we introduce the use of the generalized entropy proposed by Tsallis for the inference of GRNs from time series expression profiles. The inference process is based on a feature selection approach, with the conditional entropy applied as criterion function. In order to assess the proposed methodology, the algorithm is applied to recover the network topology from temporal expressions generated by an artificial gene network (AGN) model as well as from the DREAM challenge. The adopted AGN is based on theoretical models of complex networks, and its gene transfer functions are obtained by random draws from the set of possible Boolean functions, thus creating its dynamics. The DREAM time series data, on the other hand, vary in network size, and their topologies are based on real networks; the dynamics are generated by continuous differential equations with noise and perturbations. By adopting both data sources, it is possible to estimate the average quality of the inference with respect to different network topologies, transfer functions and network sizes. Conclusions: A remarkable improvement in accuracy was observed in the experimental results, with the non-Shannon entropy reducing the number of false connections in the inferred topology. The best value of the free parameter of the Tsallis entropy was on average in the range 2.5 <= q <= 3.5 (hence, subextensive entropy), which opens new perspectives for GRN inference methods based on information theory and for investigating the nonextensivity of such networks. The inference algorithm and criterion function proposed here were implemented and included in the DimReduction software, which is freely available at http://sourceforge.net/projects/dimreduction and http://code.google.com/p/dimreduction/.
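For reference, the generalized entropy in question is S_q = (1 - sum_i p_i^q) / (q - 1), which recovers the Shannon entropy as q -> 1; a minimal implementation (natural-log units, Boltzmann constant set to 1):

    import numpy as np

    def tsallis_entropy(p, q):
        """Tsallis entropy of a discrete distribution p for parameter q."""
        p = np.asarray(p, dtype=float)
        p = p[p > 0]
        if np.isclose(q, 1.0):
            return -np.sum(p * np.log(p))          # Shannon limit q -> 1
        return (1.0 - np.sum(p ** q)) / (q - 1.0)

    print(tsallis_entropy([0.5, 0.5], q=3.0))      # subextensive regime cited above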
Abstract:
We study a general stochastic rumour model in which an ignorant individual has a certain probability of becoming a stifler immediately upon hearing the rumour. We refer to this special kind of stifler as an uninterested individual. Our model also includes distinct rates for meetings between two spreaders in which both become stiflers or only one does, so that the classical Daley-Kendall and Maki-Thompson models are particular cases. We prove a Law of Large Numbers and a Central Limit Theorem for the proportions of those who ultimately remain ignorant and those who have heard the rumour but become uninterested in it.
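A Monte Carlo sketch of this dynamics, reduced to the classical Maki-Thompson meeting rule plus the "uninterested" twist (an ignorant turns stifler with probability p upon first hearing the rumour); the population size, p, and the uniform-mixing assumption are illustrative:

    import random

    def simulate(N=10_000, p=0.2, seed=3):
        random.seed(seed)
        state = ["I"] * N              # I = ignorant, S = spreader, R = stifler
        state[0] = "S"
        spreaders = {0}
        while spreaders:
            i = random.choice(tuple(spreaders))   # a spreader initiates contact
            j = random.randrange(N)               # with a uniformly random individual
            if j == i:
                continue
            if state[j] == "I":
                if random.random() < p:           # hears the rumour, loses interest
                    state[j] = "R"
                else:
                    state[j] = "S"
                    spreaders.add(j)
            else:                                 # meets a spreader or stifler:
                state[i] = "R"                    # the initiator is stifled
                spreaders.discard(i)
        return state.count("I") / N               # proportion ultimately ignorant

    print("final ignorant fraction:", simulate())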
Abstract:
A mechanism for the kinetic instabilities observed in the galvanostatic electro-oxidation of methanol is suggested and a model developed. The model is investigated using stoichiometric network analysis as well as concepts from algebraic geometry (polynomial rings and ideal theory), revealing the occurrence of a Hopf and a saddle-node bifurcation. These analytical solutions are confirmed by numerical integration of the system of differential equations. (C) 2010 American Institute of Physics.
Abstract:
Currently there is a trend towards expansion of the area cropped with sugarcane (Saccharum officinarum L.), driven by an increase in the world demand for biofuels due to economic, environmental, and geopolitical issues. Although sugarcane is traditionally harvested by burning dried leaves and tops, the unburned, mechanized harvest has been progressively adopted. Process-based models are useful in understanding the effects of plant litter on soil C dynamics. The objective of this work was to use the CENTURY model to evaluate the effect of sugarcane residue management on the temporal dynamics of soil C. The approach taken was to parameterize the CENTURY model for the sugarcane crop, to simulate the temporal dynamics of soil C, validating the model with field experiment data, and finally to make long-term predictions regarding soil C. The main focus of this work was the comparison of soil C stocks between the burned and unburned litter management systems, but the effects of mineral fertilizer and organic residue applications were also evaluated. The simulations were performed with data from experiments of different durations, from 1 to 60 yr, in Goiana and Timbauba, Pernambuco, and Pradopolis, Sao Paulo, all in Brazil, and in Mount Edgecombe, Kwazulu-Natal, South Africa. It was possible to simulate the temporal dynamics of soil C (R^2 = 0.89). The predictions made with the model revealed a long-term trend towards higher soil C stocks under the unburned management. This increase is conditioned by factors such as climate, soil texture, time of adoption of the unburned system, and N fertilizer management.
Abstract:
The application of laser-induced breakdown spectrometry (LIBS) aimed at the direct analysis of plant materials is a great challenge that still requires efforts for its development and validation. To this end, a series of experimental approaches has been carried out in order to show that LIBS can be used as an alternative to methods based on wet acid digestion for the analysis of agricultural and environmental samples. The large amount of information provided by LIBS spectra for these complex samples increases the difficulty of selecting the most appropriate wavelengths for each analyte. Some applications have suggested that improvements in both accuracy and precision can be achieved by applying multivariate calibration to LIBS data, compared with univariate regression on line emission intensities. In the present work, the performance of univariate and multivariate calibration, the latter based on partial least squares regression (PLSR), was compared for the analysis of pellets of plant materials made from an appropriate mixture of cryogenically ground samples with cellulose as the binding agent. The development of a specific PLSR model for each analyte and the selection of spectral regions containing only lines of the analyte of interest provided the best conditions for the analysis. In this particular application, the models showed a similar performance, but PLSR seemed to be more robust due to a lower occurrence of outliers in comparison with the univariate method. The data suggest that efforts on sample presentation and on the fitness of standards for LIBS analysis are needed in order to fulfill the boundary conditions for matrix-independent development and validation. (C) 2009 Elsevier B.V. All rights reserved.
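The univariate-versus-PLSR comparison can be sketched as follows; the spectra, the chosen emission line, the spectral window, and the number of PLS components are all synthetic placeholders, not the paper's data or settings.

    # Toy comparison: straight-line calibration on one line intensity versus
    # PLS regression over a spectral window; X and y are synthetic.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(4)
    X = rng.normal(size=(60, 300))           # 60 pellets x 300 spectral variables
    y = X[:, 42] * 2.0 + rng.normal(scale=0.1, size=60)   # toy line response

    # univariate: in-sample straight-line fit on one line (kept simple here)
    coef = np.polyfit(X[:, 42], y, 1)
    y_uni = np.polyval(coef, X[:, 42])

    # multivariate: cross-validated PLS over a window around the line
    pls = PLSRegression(n_components=3)
    y_pls = cross_val_predict(pls, X[:, 30:60], y, cv=5).ravel()

    rms = lambda yhat: np.sqrt(np.mean((y - yhat) ** 2))
    print(f"univariate RMS={rms(y_uni):.3f}, PLS RMSEP={rms(y_pls):.3f}")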
Abstract:
Single interface flow analysis (SIFA) systems present some noteworthy advantages when compared to other flow systems, such as a simpler configuration, more straightforward operation and control, and an undemanding optimisation routine. Moreover, the straightforward establishment of the reaction zone, which relies strictly on the mutual inter-dispersion of the adjoining solutions, can be exploited to set up multiple sequential reaction schemes providing supplementary information about the species under determination. In this context, strategies for accuracy assessment can be favourably implemented: the sample can be processed by two quasi-independent analytical methods and the final result calculated after considering both, yielding intrinsically more precise and accurate results. In order to demonstrate the feasibility of the approach, a SIFA system with spectrophotometric detection was designed for the determination of lansoprazole in pharmaceutical formulations. Two reaction interfaces with two distinct pi-acceptors, chloranilic acid (CIA) and 2,3-dichloro-5,6-dicyano-p-benzoquinone (DDQ), were implemented. Linear working concentration ranges of 2.71 x 10^-4 to 8.12 x 10^-4 mol L^-1 and 2.17 x 10^-4 to 8.12 x 10^-4 mol L^-1 were obtained for the DDQ and CIA methods, respectively. When compared with the results furnished by the reference procedure, the results showed relative deviations lower than 2.7%. Furthermore, the repeatability was good, with r.s.d. lower than 3.8% and 4.7% for the DDQ and CIA methods, respectively. The determination rate was about 30 h^-1. (C) 2009 Elsevier B.V. All rights reserved.
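The abstract does not state how the two quasi-independent results are merged; one common choice, shown here purely as an assumption with made-up numbers, is the inverse-variance weighted mean:

    # Hypothetical combination of the two methods' results; all values and
    # the weighting rule are assumptions for illustration only.
    c_ddq, s_ddq = 5.01e-4, 0.12e-4   # concentration, std. dev. (DDQ method)
    c_cla, s_cla = 4.93e-4, 0.18e-4   # concentration, std. dev. (CIA method)

    w_ddq, w_cla = 1 / s_ddq**2, 1 / s_cla**2
    c = (w_ddq * c_ddq + w_cla * c_cla) / (w_ddq + w_cla)
    s = (w_ddq + w_cla) ** -0.5       # std. dev. of the weighted mean
    print(f"combined result: ({c:.3e} +/- {s:.1e}) mol L^-1")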
Abstract:
Inductively coupled plasma optical emission spectrometers (ICP OES) allow fast simultaneous measurements of several spectral lines for multiple elements. Combining the signal intensities of two or more emission lines for each element may bring advantages such as improved precision and the minimization of systematic errors caused by spectral interferences and matrix effects. In this work, signal intensities of several spectral lines were combined for the determination of Al, Cd, Co, Cr, Mn, Pb, and Zn in water. Afterwards, parameters for the evaluation of the calibration model were calculated to select the combination of emission lines leading to the best accuracy (lowest values of PRESS, the predicted residual error sum of squares, and RMSEP, the root mean square error of prediction). Limits of detection (LOD) obtained using multiple lines were 7.1, 0.5, 4.4, 0.042, 3.3, 28 and 6.7 µg L^-1 (n = 10) for Al, Cd, Co, Cr, Mn, Pb and Zn, respectively, in the presence of concomitants. On the other hand, the LODs established for the most intense emission line were 16, 0.7, 8.4, 0.074, 23, 26 and 9.6 µg L^-1 (n = 10) for the same elements in the presence of concomitants. The accuracy of the developed procedure was demonstrated using a water certified reference material. The use of multiple lines improved the sensitivity, making feasible the determination of these analytes at the target values required by current environmental legislation for water samples; it was also demonstrated that measurements on multiple lines can be employed as a tool to verify the accuracy of an analytical procedure in ICP OES. (C) 2009 Elsevier B.V. All rights reserved.
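A minimal sketch of the multiple-line idea: calibrate each emission line separately, average the predicted concentrations, and compare the error against the single most sensitive line. All intensities and sensitivities below are synthetic stand-ins.

    import numpy as np

    rng = np.random.default_rng(5)
    c_true = np.linspace(0.1, 2.0, 20)         # calibration standards
    sens = [1.0, 0.8, 0.6]                     # relative line sensitivities
    I = np.array([s * c_true + rng.normal(scale=0.02, size=20) for s in sens])

    # per-line straight-line calibration, then back-prediction
    preds = []
    for intens in I:
        a, b = np.polyfit(c_true, intens, 1)
        preds.append((intens - b) / a)
    preds = np.array(preds)

    rms = lambda yhat: np.sqrt(np.mean((c_true - yhat) ** 2))
    print("single-line RMS error:  ", rms(preds[0]))
    print("3-line average RMS error:", rms(preds.mean(axis=0)))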
Abstract:
The search for more realistic modeling of financial time series reveals several stylized facts of real markets. In this work we focus on the multifractal properties found in price and index signals. Although the usual minority game (MG) models do not exhibit multifractality, we study here one of its variants that does. We show that the nonsynchronous MG model in the nonergodic phase is multifractal and, in this sense, together with other stylized facts, constitutes a better modeling tool. Using the structure function (SF) approach we detected the stationary and scaling ranges of the time series generated by the MG model and, from the linear (non-linear) behavior of the SF, we identified the fractal (multifractal) regimes. Finally, using the wavelet transform modulus maxima (WTMM) technique, we obtained the multifractal spectrum width for different dynamical regimes. (C) 2009 Elsevier Ltd. All rights reserved.
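The structure-function approach mentioned above computes S_q(tau) = <|x(t + tau) - x(t)|^q> and reads off a scaling exponent zeta(q) for each moment q; non-linearity of zeta(q) in q signals multifractality. A sketch on a heavy-tailed toy series (a stand-in for the MG output):

    import numpy as np

    rng = np.random.default_rng(6)
    x = np.cumsum(rng.standard_t(df=5, size=2**14))   # heavy-tailed toy "price"

    taus = np.unique(np.logspace(0, 3, 12).astype(int))
    for q in (1, 2, 3, 4):
        Sq = [np.mean(np.abs(x[t:] - x[:-t]) ** q) for t in taus]
        zeta = np.polyfit(np.log(taus), np.log(Sq), 1)[0]   # scaling exponent
        print(f"q={q}: zeta(q)={zeta:.2f}  (q/2 for a monofractal random walk)")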
Abstract:
We study the dynamics of the adoption of new products by agents with continuous opinions and discrete actions (CODA). The model is such that the refusal to adopt a new idea or product is increasingly weighted by neighboring agents as evidence against the product. Under these rules, we study the distribution of adoption times and the final proportion of adopters in the population. We compare the case where initial adopters are clustered with the case where they are randomly scattered around the social network, and investigate small-world effects on the final proportion of adopters. The model predicts a fat-tailed distribution of late adopters, which is verified by empirical data. (C) 2009 Elsevier B.V. All rights reserved.
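A stripped-down CODA-style update for intuition: each agent keeps a continuous log-odds opinion, acts on its sign, and treats each observed neighbor action as a fixed increment of evidence. The ring topology, step size, and seeding are illustrative simplifications, not this paper's exact rules (in particular, the asymmetric weighting of refusals is omitted):

    import numpy as np

    rng = np.random.default_rng(7)
    N, steps, a = 200, 20_000, 0.3
    nu = rng.normal(scale=0.5, size=N)      # continuous opinions (log-odds)
    nu[:10] = 3.0                           # a clustered seed of initial adopters

    for _ in range(steps):
        i = rng.integers(N)
        j = (i + rng.choice([-1, 1])) % N   # a ring-lattice neighbor
        nu[i] += a if nu[j] > 0 else -a     # neighbor's action as evidence

    print("final proportion of adopters:", np.mean(nu > 0))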
Abstract:
The Random Parameter model was proposed to explain the structure of the covariance matrix in problems where most, but not all, of the eigenvalues of the covariance matrix can be explained by Random Matrix Theory. In this article, we explore the scaling properties of the model, as observed in the multifractal structure of the simulated time series. We use the Wavelet Transform Modulus Maxima technique to obtain the dependence of the multifractal spectrum on the parameters of the model. The model shows a scaling structure compatible with the stylized facts for a reasonable choice of the parameter values. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
In this work we study an agent-based model to investigate the role of the degree of information asymmetry in market evolution. The model is quite simple and may be treated analytically, since each consumer evaluates the quality of a given good taking into account only the quality of the last good purchased plus her perceptive capacity beta. As a consequence, the system evolves according to a stationary Markov chain. The value of a good offered by the firms increases with quality according to an exponent alpha, which is a measure of the technology; it incorporates all the technological capacity of the production systems, such as education, scientific development and techniques that change productivity rates. The technological level plays an important role in explaining how information asymmetry may affect market evolution in this model. We observe that, for high technological levels, the market can detect adverse selection. The model allows us to compute the maximum degree of information asymmetry before the market collapses. Below this critical point the market evolves during a limited period of time and then dies out completely. When beta is close to 1 (symmetric information), the market becomes more profitable for high-quality goods, although high- and low-quality markets coexist. The maximum level of information asymmetry is a consequence of an ergodicity breakdown in the process of quality evaluation. (C) 2011 Elsevier B.V. All rights reserved.