922 results for "convergence of numerical methods"
Abstract:
The level of ab initio theory necessary to compute reliable values for the static and dynamic (hyper)polarizabilities of three medium-size π-conjugated organic nonlinear optical (NLO) molecules is investigated. The calculations were made feasible by employing field-induced coordinates in combination with a finite field procedure. To obtain reasonable values for the various individual contributions to the (hyper)polarizability, it is necessary to include electron correlation. On the basis of these results, the convergence of the usual perturbation treatment for vibrational anharmonicity was examined.
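A finite field procedure of this kind amounts to numerical differentiation of the molecular energy with respect to an applied electric field. Below is a minimal Python sketch for a one-dimensional field, with a closed-form placeholder standing in for the ab initio energy; the function `energy` and its response values are hypothetical, not taken from the paper.

```python
def energy(F):
    """Placeholder for an ab initio energy E(F) at field strength F (a.u.).
    A real calculation would call a quantum-chemistry package here."""
    mu0, alpha, beta = 0.5, 10.0, 25.0  # hypothetical response values
    return -mu0 * F - 0.5 * alpha * F**2 - beta * F**3 / 6.0

F = 0.001  # field step; must balance truncation error against numerical noise
E = {k: energy(k * F) for k in (-2, -1, 0, 1, 2)}

# Central-difference estimates of the field derivatives of the energy:
mu    = -(E[1] - E[-1]) / (2 * F)                            # dipole moment, -dE/dF
alpha = -(E[1] - 2 * E[0] + E[-1]) / F**2                    # polarizability, -d2E/dF2
beta  = -(E[2] - 2 * E[1] + 2 * E[-1] - E[-2]) / (2 * F**3)  # 1st hyperpolarizability, -d3E/dF3

print(mu, alpha, beta)  # recovers ~0.5, ~10.0, ~25.0 for the placeholder
```

The choice of field step illustrates why the underlying level of theory matters: noisy energies amplify the round-off error in these differences.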
Abstract:
This study investigates the harmonisation of analytical results as an alternative to the restrictive approach of harmonising analytical methods, which is currently recommended to enable the exchange of information in support of the fight against illicit drug trafficking. The main goal is to demonstrate that a common database can be fed by a range of different analytical methods, regardless of differences in their analytical parameters. For this purpose, a methodology was developed for estimating, and even optimising, the similarity of results produced by different analytical methods. In particular, the possibility of introducing chemical profiles obtained with Fast GC-FID into a GC-MS database is studied in this paper. Using this methodology, the similarity of results from different analytical methods can be objectively assessed, and the practical utility of database sharing by these methods can be evaluated according to the profiling purpose (evidential vs. operational perspective). The methodology is a relevant approach to feeding a database from different analytical methods, and it calls into question the necessity of analysing all illicit drug seizures in a single laboratory or of implementing analytical methods harmonisation in each participating laboratory.
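The abstract does not reproduce the similarity metric, but a common choice in chemical drug profiling is a correlation between normalised peak-area profiles; below is a minimal sketch with hypothetical data (the peak areas are illustrative only, not from the study).

```python
import numpy as np

# Hypothetical peak-area profiles of one seizure, measured on two
# instruments (e.g. GC-MS vs. Fast GC-FID); entries = target compounds.
profile_a = np.array([12.1, 3.4, 0.8, 44.0, 7.7, 1.2])
profile_b = np.array([11.5, 3.9, 0.7, 46.2, 7.1, 1.0])

def similarity(p, q):
    """Pearson correlation between normalised profiles, a common
    similarity measure in drug-profiling databases."""
    p = p / p.sum()  # normalise to relative peak areas
    q = q / q.sum()
    return np.corrcoef(p, q)[0, 1]

print(f"similarity = {similarity(profile_a, profile_b):.3f}")
```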
Abstract:
Kernel-based machine learning methods have recently gained great popularity in many data analysis and data mining fields: pattern recognition, biocomputing, speech and vision, engineering, and remote sensing, among others. This paper describes the use of kernel methods for processing large datasets from environmental monitoring networks. Several typical problems in the environmental sciences, and their solutions provided by kernel-based methods, are considered: classification of categorical data (soil type classification), mapping of continuous environmental and pollution information (soil pollution by radionuclides), and mapping with auxiliary information (climatic data from the Aral Sea region). Promising developments, such as automatic emergency hot-spot detection and monitoring network optimization, are also discussed.
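As an illustration of the kind of kernel method used for such mapping tasks, here is a minimal kernel ridge regression sketch on synthetic monitoring data; the length scale, regularisation, and data are placeholder assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical monitoring data: station coordinates (km) and a measured level.
X = rng.uniform(0, 100, size=(200, 2))
y = np.sin(X[:, 0] / 15) + 0.1 * rng.standard_normal(200)  # synthetic measurements

def rbf_kernel(A, B, length_scale=10.0):
    """Gaussian (RBF) kernel matrix between two sets of locations."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * length_scale**2))

# Kernel ridge regression: solve (K + lam*I) alpha = y for the dual weights.
lam = 1e-2
K = rbf_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

# Predict on a regular grid to produce a continuous map.
grid = np.stack(np.meshgrid(np.linspace(0, 100, 50),
                            np.linspace(0, 100, 50)), -1).reshape(-1, 2)
y_map = rbf_kernel(grid, X) @ alpha  # y_map.reshape(50, 50) is the mapped field
```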
Abstract:
The present paper proposes a model for the persistence of abnormal returns at both the firm and industry levels, when longitudinal data on the profits of firms classified into industries are available. The model produces a two-way variance decomposition of abnormal returns: (a) at the firm versus the industry level, and (b) into permanent versus transitory components. This variance decomposition supplies information on the relative importance of the fundamental components of abnormal returns that have been discussed in the literature. The model is applied to a Spanish sample of firms, with results including: (a) there are significant and permanent differences between profit rates at both the industry and firm levels; (b) the variation of abnormal returns at the firm level is greater than at the industry level; and (c) the firm and industry levels do not differ significantly in their rates of convergence of abnormal returns.
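The abstract does not spell out the model, but a schematic version of such a two-way, permanent/transitory decomposition (an illustration under assumed AR(1) transitory dynamics, not the paper's exact specification) is:

```latex
% Abnormal return of firm j in industry i at time t:
\pi_{ijt} = \mu + a_i + b_{ij} + u_{it} + v_{ijt},
\qquad
u_{it} = \lambda_I\, u_{i,t-1} + \varepsilon_{it},
\qquad
v_{ijt} = \lambda_F\, v_{ij,t-1} + \eta_{ijt},
% where a_i and b_{ij} are the permanent industry- and firm-level components,
% u_{it} and v_{ijt} the transitory ones, and \lambda_I, \lambda_F govern the
% rates of convergence at each level, yielding the variance decomposition
\operatorname{Var}(\pi_{ijt})
 = \sigma_a^2 + \sigma_b^2
 + \frac{\sigma_\varepsilon^2}{1-\lambda_I^2}
 + \frac{\sigma_\eta^2}{1-\lambda_F^2}.
```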
Abstract:
A new debate is under way over the speed of convergence in per capita income across economies. Cross-sectional estimates support the idea of slow convergence, at about two percent per year. Panel data estimates support the idea of fast convergence, at five, ten, or even twenty percent per year. This paper shows that, if you "do it right", even the panel data estimation method yields slow convergence of about two percent per year.
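For context, the textbook convergence regression underlying both kinds of estimate can be written as follows (a standard form, not necessarily the paper's exact specification): for per capita income y_{it} observed at intervals of length T,

```latex
\frac{1}{T}\,\ln\!\frac{y_{it}}{y_{i,t-T}}
  \;=\; a_i \;-\; \frac{1 - e^{-\lambda T}}{T}\,\ln y_{i,t-T} \;+\; u_{it},
% where \lambda is the speed of convergence; \lambda \approx 0.02 is the
% "two percent per year" rate, and the debate concerns how the individual
% effects a_i are treated in cross-sectional versus panel estimation.
```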
Abstract:
Genome-wide association studies (GWAS) and genomic selection (GS) methods, which use genome-wide marker data for phenotype prediction, are of great potential interest in plant breeding. However, to our knowledge, no studies have yet assessed the predictive ability of these methods for structured traits when using training populations with high levels of genetic diversity. One example of a highly heterozygous, perennial species is grapevine. The present study compares the accuracy of models based on GWAS or GS alone, or in combination, for predicting simple or complex traits, linked or not to population structure. To explore the relevance of these methods in this context, we performed simulations using approximately 90,000 SNPs on a population of 3,000 individuals structured into three groups, corresponding to published grapevine diversity data. To estimate the parameters of the prediction models, we defined four training populations of 1,000 individuals, corresponding to these three groups and a core collection. Finally, to estimate the accuracy of the models, we also simulated four breeding populations of 200 individuals. Although prediction accuracy was low when the breeding populations were too distant from the training populations, high accuracy was obtained using the core collection alone as the training population. The highest prediction accuracy (up to 0.9) was obtained using the combined GWAS-GS model. We thus recommend using the combined prediction model and a core collection as the training population for grapevine breeding, or for other economically important crops with the same characteristics.
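As an illustration of the GS component, here is a minimal ridge-regression (RR-BLUP-style) sketch on simulated genotypes; the dimensions and shrinkage parameter are placeholder assumptions, scaled down from the study's ~90,000 SNPs.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training data: SNP genotypes coded 0/1/2 and phenotypes.
n_train, n_snp = 1000, 2000
X = rng.integers(0, 3, size=(n_train, n_snp)).astype(float)
true_effects = rng.standard_normal(n_snp) * 0.05
y = X @ true_effects + rng.standard_normal(n_train)

# Ridge-regression marker effects: beta = (Xc'Xc + lam*I)^(-1) Xc'y
Xc = X - X.mean(0)   # centre genotypes
lam = 100.0          # shrinkage parameter; normally tuned by cross-validation
beta = np.linalg.solve(Xc.T @ Xc + lam * np.eye(n_snp), Xc.T @ (y - y.mean()))

# Predict phenotypes of a hypothetical breeding population of 200 individuals.
X_new = rng.integers(0, 3, size=(200, n_snp)).astype(float)
y_pred = y.mean() + (X_new - X.mean(0)) @ beta
```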
Abstract:
BACKGROUND: Protein-energy malnutrition is highly prevalent in aged populations, and the associated clinical, economic, and social burden is substantial. A valid screening method that is robust and precise, but also easy, simple, and rapid to apply, is essential for adequate therapeutic management. OBJECTIVES: To compare the interobserver variability of 2 methods of measuring food intake: semiquantitative visual estimations made by nurses versus calorie measurements performed by dieticians on the basis of standardized color digital photographs of servings before and after consumption. DESIGN: Observational monocentric pilot study. SETTING/PARTICIPANTS: A geriatric ward. The meals were randomly chosen from the meal tray; the choice was anonymous with respect to the patients who consumed them. MEASUREMENTS: The test method consisted of the estimation of calorie consumption by dieticians on the basis of standardized color digital photographs of servings before and after consumption. The reference method was based on direct visual estimation of the meals by nurses. Food intake was expressed as a percentage of the serving consumed, and calorie intake was then calculated by a dietician based on these percentages. The methods were applied with no previous training of the observers. Analysis of variance was performed to compare their interobserver variability. RESULTS: Of 15 meals consumed and initially examined, 6 were assessed with each method. Servings not consumed at all (0% consumption) or entirely consumed by the patient (100% consumption) were not included in the analysis, so as to avoid systematic error. The digital photography method showed higher interobserver variability in calorie intake estimations, and the difference between the methods was statistically significant (P < .03). CONCLUSIONS: Calorie intake measures for geriatric patients are more concordant when estimated in a semiquantitative way. Digital photography for food intake estimation without previous specific training of dieticians should not be considered a reference method in geriatric settings, as it shows no advantage in terms of interobserver variability.
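A minimal sketch of how the interobserver variability of the two methods might be compared numerically (illustrative data and a paired test, as a crude stand-in for the paper's analysis of variance):

```python
import numpy as np
from scipy import stats

# Hypothetical calorie estimates (kcal) for 6 meals by 3 observers per method;
# illustrative numbers only, not the study's data.
nurses = np.array([[420, 450, 410], [300, 310, 320], [510, 500, 530],
                   [250, 260, 240], [380, 400, 390], [460, 440, 470]])
photo  = np.array([[430, 520, 390], [280, 350, 300], [490, 580, 510],
                   [230, 300, 260], [360, 450, 380], [440, 540, 450]])

# Interobserver variability per meal: standard deviation across observers.
sd_nurses = nurses.std(axis=1, ddof=1)
sd_photo  = photo.std(axis=1, ddof=1)

# Compare the two methods' per-meal spreads with a paired test.
t, p = stats.ttest_rel(sd_photo, sd_nurses)
print(f"mean SD nurses={sd_nurses.mean():.1f}, photo={sd_photo.mean():.1f}, p={p:.3f}")
```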
Abstract:
BACKGROUND: Finding genes that are differentially expressed between conditions is an integral part of understanding the molecular basis of phenotypic variation. In past decades, DNA microarrays have been used extensively to quantify the abundance of mRNA corresponding to different genes, and more recently high-throughput sequencing of cDNA (RNA-seq) has emerged as a powerful competitor. As the cost of sequencing decreases, it is conceivable that the use of RNA-seq for differential expression analysis will increase rapidly. To exploit the possibilities and address the challenges posed by this relatively new type of data, a number of software packages have been developed specifically for differential expression analysis of RNA-seq data. RESULTS: We conducted an extensive comparison of eleven methods for differential expression analysis of RNA-seq data. All methods are freely available within the R framework and take as input a matrix of counts, i.e., the number of reads mapping to each genomic feature of interest in each of a number of samples. We evaluated the methods on both simulated data and real RNA-seq data. CONCLUSIONS: Very small sample sizes, which are still common in RNA-seq experiments, pose problems for all evaluated methods, and any results obtained under such conditions should be interpreted with caution. For larger sample sizes, the methods combining a variance-stabilizing transformation with the 'limma' method for differential expression analysis perform well under many different conditions, as does the nonparametric SAMseq method.
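As a toy illustration of the transformation-based approach, the sketch below applies a simple variance-stabilising transformation (log counts per million) followed by an ordinary per-gene t-test on simulated counts; this is a crude stand-in for the moderated statistics of packages such as limma, not a reimplementation of any evaluated method.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical count matrix: 1000 genes x 10 samples, two groups of 5.
counts = rng.negative_binomial(n=5, p=0.1, size=(1000, 10))
group = np.array([0] * 5 + [1] * 5)

# Simple variance-stabilising transformation: log2 counts per million.
lib_size = counts.sum(axis=0)
logcpm = np.log2((counts + 0.5) / (lib_size + 1.0) * 1e6)

# Ordinary per-gene two-sample t-test on the transformed values.
t, p = stats.ttest_ind(logcpm[:, group == 0], logcpm[:, group == 1], axis=1)
print(f"{(p < 0.05).sum()} genes nominally significant at p < 0.05")
```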
Abstract:
Functionally relevant large-scale brain dynamics operates within the framework imposed by anatomical connectivity and by time delays due to finite transmission speeds. To gain insight into the reliability and comparability of large-scale brain network simulations, we investigate the effects of variations in the anatomical connectivity. Two different sets of detailed global connectivity structures are explored: the first extracted from the CoCoMac database and rescaled to the spatial extent of the human brain, the second derived from white-matter tractography applied to diffusion spectrum imaging (DSI) of a human subject. We use a combination of graph-theoretical measures of the connection matrices and numerical simulations to explicate the importance of both connectivity strength and delays in shaping dynamic behaviour. Our results demonstrate that the brain dynamics derived from the CoCoMac database are more complex and biologically more realistic than those based on the DSI database. We propose that the reason for this difference is the absence of directed weights in the DSI connectivity matrix.
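A minimal sketch of the kind of delay-coupled network simulation described, using Kuramoto phase oscillators as a simple node model and random placeholder matrices in place of the CoCoMac- or DSI-derived connectomes:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical connectivity: weights C and transmission delays D (ms).
N = 32
C = rng.random((N, N)) * (rng.random((N, N)) < 0.3)  # sparse weighted coupling
D = rng.uniform(5, 30, size=(N, N))                  # delays in ms

dt, T = 0.1, 2000                      # time step (ms) and number of steps
lag = np.round(D / dt).astype(int)     # delays expressed in time steps
omega = 2 * np.pi * 0.04               # ~40 Hz natural frequency (rad/ms)
k = 0.05                               # global coupling strength

theta = np.zeros((T, N))
theta[0] = rng.uniform(0, 2 * np.pi, N)
for t in range(1, T):
    # Delayed phase of source j as seen by target i: theta[t-1-lag[i,j], j].
    past = theta[np.maximum(t - 1 - lag, 0), np.arange(N)]
    coupling = (C * np.sin(past - theta[t - 1][:, None])).sum(axis=1)
    theta[t] = theta[t - 1] + dt * (omega + k * coupling)

# Global synchrony over time (Kuramoto order parameter).
R = np.abs(np.exp(1j * theta).mean(axis=1))
```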
Abstract:
Many traits and/or strategies expressed by organisms are quantitative phenotypes. Because populations are of finite size and genomes are subject to mutation, these continuously varying phenotypes are under the joint pressure of mutation, natural selection, and random genetic drift. This article derives the stationary distribution of such a phenotype under mutation-selection-drift balance in a class-structured population, allowing for demographically varying class sizes and/or changing environmental conditions. The salient feature of the stationary distribution is that it can be entirely characterized in terms of the average size of the gene pool and Hamilton's inclusive fitness effect. The exploration of the phenotypic space varies exponentially with the cumulative inclusive fitness effect over state space, which defines an adaptive landscape. The peaks of the landscape are the phenotypes that are candidate evolutionarily stable strategies and can be determined by standard phenotypic selection gradient methods (e.g. evolutionary game theory, kin selection theory, adaptive dynamics). The curvature of the stationary distribution provides a measure of the convergence stability of candidate evolutionarily stable strategies, and it is evaluated explicitly for two biological scenarios: first, a coordination game, which illustrates that, for a multipeaked adaptive landscape, stochastically stable strategies can be singled out by letting the size of the gene pool grow large; second, a sex-allocation game for diploids and haplo-diploids, which suggests that the equilibrium sex ratio follows a Beta distribution with parameters depending on the features of the genetic system.
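Following the abstract's description, the stationary distribution can be written schematically as follows (an illustrative one-dimensional form with assumed notation, not the paper's exact derivation): for a phenotype z, gene-pool size N, and inclusive fitness effect (selection gradient) S(z),

```latex
\pi(z) \;\propto\; \exp\!\left( 2N \int^{z} S(x)\,\mathrm{d}x \right),
% so the exponent is the cumulative inclusive fitness effect over state
% space; local maxima of the exponent (S = 0, S' < 0) are the candidate
% evolutionarily stable strategies, and as N grows large the distribution
% concentrates on the highest peak (the stochastically stable strategy).
```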
Abstract:
22q11.2 deletion syndrome (22q11DS) is associated with an increased susceptibility to developing schizophrenia. Despite a large body of literature documenting abnormal brain structure in 22q11DS, the cerebral changes associated with brain maturation in 22q11DS have remained largely unexplored. To map cortical maturation from childhood to adulthood in 22q11.2 deletion syndrome, we used cerebral MRI from 59 patients with 22q11DS, aged 6 to 40, and 80 typically developing controls; three-year follow-up assessments were also available for 32 patients and 31 matched controls. Cross-sectional cortical thickness trajectories during childhood and adolescence were approximated in age bins. Repeated-measures analyses were also conducted with the longitudinal data. Within the group of patients with 22q11DS, exploratory analyses of cortical thickness differences related to COMT polymorphism, IQ, and schizophrenia were also conducted. We observed deviant trajectories of cortical thickness change with age in patients with 22q11DS. In affected preadolescents, greater prefrontal thickness was observed compared to age-matched controls. Thereafter, we observed greater cortical loss in 22q11DS, with a convergence of cortical thickness values by the end of adolescence. No compelling evidence for an effect of COMT polymorphism on cortical maturation was observed. Within 22q11DS, significant differences in cortical thickness were related to cognitive level in children and adolescents, and to schizophrenia in adults. These deviant trajectories of cortical thickness from childhood to adulthood provide strong in vivo cues for a defect in programmed synaptic elimination, which in turn may explain the susceptibility of patients with 22q11DS to developing psychosis.
Abstract:
OBJECTIVES: Elevated plasma levels of the elastase-alpha 1-proteinase inhibitor complex (E-alpha 1 PI) have been proposed as a marker of bacterial infection and neutrophil activation. Liberation of elastase from neutrophils after blood collection may cause falsely elevated results. Collection methods have not been validated for critically ill neonates and children. We evaluated the influence of preanalytical methods on E-alpha 1 PI results, including the recommended collection into EDTA tubes. DESIGN AND METHODS: First, we compared varying acceleration speeds and centrifugation times. Centrifugation at 1550 g for 3 min resulted in reliable preparation of leukocyte-free plasma. Second, we evaluated all collection tubes under consideration for absorption of E-alpha 1 PI. Finally, 12 sets of samples from healthy adults and 42 sets obtained from critically ill neonates and children were distributed into the various sampling tubes. Samples were centrifuged within 15 min of collection and analyzed with a new turbidimetric assay adapted to routine laboratory analyzers. RESULTS: One of the two tubes containing a plasma-cell separation gel absorbed 22.1% of the E-alpha 1 PI content. In the remaining tubes, which showed no absorption of E-alpha 1 PI, no differences were observed for samples from healthy adults. However, in samples from critically ill neonates or children, significantly higher results were obtained for plain Li-heparin tubes (mean = 183 micrograms/L), EDTA tubes (mean = 93 micrograms/L), and citrate tubes (mean = 88.5 micrograms/L) than for the Li-heparin tube with cell-plasma separation gel and no absorption of E-alpha 1 PI (mean = 62.4 micrograms/L, p < 0.01). CONCLUSION: In contrast to healthy adults, E-alpha 1 PI results in plasma samples from critically ill neonates and children depend on the type of collection tube.
Abstract:
Ground-penetrating radar (GPR) and microgravimetric surveys have been conducted in the southern Jura mountains of western Switzerland in order to map subsurface karstic features. The study site, La Grande Rolaz cave, is an extensive system, many portions of which have been mapped. By using small station spacing and careful processing of the geophysical data, and by modeling these data with topographic information from within the cave, accurate interpretations have been achieved. The constraints on the interpreted geologic models are better when the geophysical methods are combined than when only one of them is used, despite the general limitations of two-dimensional (2D) profiling. For example, microgravimetry can complement GPR methods in accurately delineating a shallow cave section approximately 10 × 10 m in size. Conversely, GPR methods can be complementary in determining cavity depths and in verifying the presence of off-line features and numerous areas of small cavities and fractures, which may be difficult to resolve in microgravimetric data.
Abstract:
We present a weakly nonlinear analysis of the interface dynamics in a radial Hele-Shaw cell driven by both injection and rotation. We extend the systematic expansion introduced in [E. Alvarez-Lacalle et al., Phys. Rev. E 64, 016302 (2001)] to the radial geometry and compute explicitly the first nonlinear contributions. We also find the necessary and sufficient condition for the uniform convergence of the nonlinear expansion. Within this region of convergence, the analytical predictions at low orders compare satisfactorily with exact solutions and numerical integration of the problem. This is particularly remarkable in configurations (with no counterpart in the channel geometry) for which the interplay between injection and rotation allows that condition to be satisfied at all times. In the case of purely centrifugal forcing, we demonstrate that nonlinear couplings make the interface more unstable for lower viscosity contrast between the fluids.
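For orientation, the linear (lowest-order) growth rate of a mode-n perturbation of a circular interface of radius R in this geometry has the schematic one-sided form below (gap b, surface tension γ, injection rate Q, angular velocity Ω, density difference Δρ, viscosity μ); this is textbook background under strong simplifying assumptions, not the paper's weakly nonlinear result:

```latex
\lambda_n \;\simeq\;
\underbrace{\frac{Q}{2\pi R^{2}}\,(n-1)}_{\text{injection}}
\;+\;
\underbrace{\frac{b^{2}\,\Delta\rho\,\Omega^{2}}{12\,\mu}\,n}_{\text{centrifugal}}
\;-\;
\underbrace{\frac{b^{2}\,\gamma}{12\,\mu R^{3}}\,n\,(n^{2}-1)}_{\text{surface tension}}
```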
Abstract:
We present a phase-field model for the dynamics of the interface between two immiscible fluids with arbitrary viscosity contrast in a rectangular Hele-Shaw cell. Using asymptotic matching techniques, we verify that the model yields the correct Hele-Shaw equations in the sharp-interface limit, and we compute the corrections to these equations to first order in the interface thickness. We also compute the effect of such corrections on the linear dispersion relation of the planar interface. We discuss in detail the conditions on the interface thickness needed to control the accuracy and convergence of the phase-field model to the limiting Hele-Shaw dynamics. In particular, convergence appears to be slower for high viscosity contrasts.
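Schematically, the convergence statement can be summarised as follows (notation assumed for illustration):

```latex
% With interface thickness \epsilon, the phase-field interface dynamics
% reproduces the sharp-interface Hele-Shaw dynamics up to first-order
% corrections,
v_{\mathrm{PF}} \;=\; v_{\mathrm{HS}} \;+\; \epsilon\, v_{1} \;+\; O(\epsilon^{2}),
% so controlling accuracy means choosing \epsilon small enough that the
% \epsilon v_{1} term is negligible; per the abstract, the required
% \epsilon shrinks (convergence slows) as the viscosity contrast grows.
```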