957 results for Machine Approach
Abstract:
Aims. We study an analytical solution to the discrepancy between the observed core-like density profiles and the predicted cusp profiles in dark matter halos. Methods. We calculate the distribution function of Navarro-Frenk-White halos and extract energy from it, taking into account the effects of baryonic physics processes. Results. We show with a simple argument that the evolution from a cusp to a flat density profile can be reproduced by a decrease of the initial potential energy.
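For orientation, the standard Navarro-Frenk-White profile and its characteristic inner cusp, contrasted with a cored profile, can be written as follows (textbook forms, not taken from this paper's derivation):

\[
\rho_{\mathrm{NFW}}(r) = \frac{\rho_s}{(r/r_s)\left(1 + r/r_s\right)^{2}}
\;\xrightarrow{\; r \ll r_s \;}\; \rho(r) \propto r^{-1},
\qquad
\rho_{\mathrm{core}}(r \to 0) \approx \rho_0 ,
\]

so the transition referred to in the abstract is the flattening of the inner slope from r^-1 to a roughly constant central density.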
Abstract:
The dynamic polarizability and optical absorption spectrum of liquid water in the 6-15 eV energy range are investigated by a sequential molecular dynamics (MD)/quantum mechanical approach. The MD simulations are based on a polarizable model for liquid water. The calculation of electronic properties relies on time-dependent density functional theory and equation-of-motion coupled-cluster theory. Results for the dynamic polarizability, the Cauchy moments S(-2), S(-4), S(-6), and the dielectric properties of liquid water are reported. The theoretical predictions for the optical absorption spectrum of liquid water are in good agreement with experimental data.
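The Cauchy moments mentioned above enter the standard low-frequency expansion of the dynamic polarizability, valid below the first absorption threshold (a textbook relation, not a result specific to this work):

\[
\alpha(\omega) \;=\; \sum_{k=0}^{\infty} S(-2k-2)\,\omega^{2k}
\;=\; S(-2) + S(-4)\,\omega^{2} + S(-6)\,\omega^{4} + \cdots,
\qquad
S(k) \;=\; \sum_{n} f_{n}\,\omega_{n}^{k},
\]

where f_n and omega_n are dipole oscillator strengths and excitation energies, and S(-2) equals the static polarizability.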
Abstract:
The electronic properties of liquid ammonia are investigated by a sequential molecular dynamics/quantum mechanics approach. Quantum mechanics calculations for the liquid phase are based on a reparametrized hybrid exchange-correlation functional that reproduces the electronic properties of ammonia clusters [(NH3)n, n = 1-5]. For these small clusters, electron binding energies based on Green's function or electron propagator theory, coupled cluster with single, double, and perturbative triple excitations, and density functional theory (DFT) are compared. Reparametrized DFT results for the dipole moment, electron binding energies, and electronic density of states of liquid ammonia are reported. The calculated average dipole moment of liquid ammonia (2.05 +/- 0.09 D) corresponds to an increase of 27% compared to the gas phase value and is 0.23 D above a prediction based on a polarizable model of liquid ammonia [Deng, J. Chem. Phys. 100, 7590 (1994)]. Our estimate for the ionization potential of liquid ammonia is 9.74 +/- 0.73 eV, which is approximately 1.0 eV below the gas phase value for the isolated molecule. The theoretical vertical electron affinity of liquid ammonia is predicted to be 0.16 +/- 0.22 eV, in good agreement with the experimental result for the location of the bottom of the conduction band (-V0 = 0.2 eV). Vertical ionization potentials and electron affinities correlate with the total dipole moment of the ammonia aggregates. (c) 2008 American Institute of Physics.
Abstract:
We develop a combined hydro-kinetic approach which incorporates the hydrodynamical expansion of the systems formed in A + A collisions and their dynamical decoupling described by escape probabilities. The method corresponds to a generalized relaxation-time (tau_rel) approximation for the Boltzmann equation applied to inhomogeneous expanding systems; at small tau_rel it also allows one to capture viscous effects in the hadronic component (hadron-resonance gas). We demonstrate how the approximation of sudden freeze-out can be obtained within this dynamical picture of continuous emission and find that the hypersurfaces corresponding to a sharp freeze-out limit are momentum dependent. The pion m_T spectra are computed in the developed hydro-kinetic model and compared with those obtained from ideal hydrodynamics with the Cooper-Frye isothermal prescription. Our results indicate that there is no universal freeze-out temperature for pions with different momenta, and they support an earlier decoupling of higher-p_T particles. By performing numerical simulations for various initial conditions and equations of state, we identify several characteristic features of the bulk QCD matter evolution preferred in view of the current analysis of heavy-ion collisions at RHIC energies.
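Schematically, the two ingredients contrasted above are the relaxation-time form of the Boltzmann equation and the Cooper-Frye freeze-out prescription (standard textbook forms with notation chosen here, not reproduced from the paper):

\[
p^{\mu}\,\partial_{\mu} f(x,p)
  = -\,\frac{p^{\mu}u_{\mu}(x)}{\tau_{\mathrm{rel}}}
    \left[f(x,p) - f_{\mathrm{eq}}(x,p)\right],
\qquad
E\,\frac{dN}{d^{3}p} = \int_{\sigma} f(x,p)\, p^{\mu}\, d\sigma_{\mu},
\]

where u_mu is the flow four-velocity and sigma the freeze-out hypersurface; in the hydro-kinetic picture the effective emission hypersurface becomes momentum dependent rather than a single isotherm.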
Abstract:
In this paper, we estimate the losses during teleportation processes requiring either two high-Q cavities or a single bimodal cavity. The estimates were carried out using the phenomenological operator approach introduced by de Almeida et al. [Phys. Rev. A 62, 033815 (2000)].
Abstract:
We present parameter-free calculations of electronic properties of InGaN, InAlN, and AlGaN alloys. The calculations are based on a generalized quasichemical approach, to account for disorder and composition effects, and first-principles calculations within the density functional theory with the LDA-1/2 approach, to accurately determine the band gaps. We provide precise results for AlGaN, InGaN, and AlInN band gaps for the entire range of compositions, and their respective bowing parameters. (C) 2011 American Institute of Physics. [doi:10.1063/1.3576570]
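The bowing parameter b reported for each ternary alloy is defined through the usual quadratic interpolation of the band gap between the binary endpoints (a standard definition, independent of the specific values obtained in this work):

\[
E_{g}\!\left(\mathrm{A}_{x}\mathrm{B}_{1-x}\mathrm{N}\right)
  = x\,E_{g}(\mathrm{AN}) + (1-x)\,E_{g}(\mathrm{BN}) - b\,x(1-x).
\]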
Abstract:
Thanks to recent advances in molecular biology, allied to an ever increasing amount of experimental data, the functional state of thousands of genes can now be extracted simultaneously by using methods such as cDNA microarrays and RNA-Seq. Particularly important related investigations are the modeling and identification of gene regulatory networks from expression data sets. Such knowledge is fundamental for many applications, such as disease treatment, therapeutic intervention strategies and drug design, as well as for planning high-throughput new experiments. Methods have been developed for gene network modeling and identification from expression profiles. However, an important open problem regards how to validate such approaches and their results. This work presents an objective approach for the validation of gene network modeling and identification which comprises three main aspects: (1) Artificial Gene Network (AGN) model generation through theoretical models of complex networks, which is used to simulate temporal expression data; (2) a computational method for gene network identification from the simulated data, founded on a feature selection approach in which a target gene is fixed and the expression profiles of all other genes are observed in order to identify a relevant subset of predictors; and (3) validation of the identified AGN-based network through comparison with the original network. The proposed framework allows several types of AGNs to be generated and used to simulate temporal expression data. The results of the network identification method can then be compared to the original network in order to estimate its properties and accuracy. Some of the most important theoretical models of complex networks were assessed: the uniformly random Erdos-Renyi (ER), the small-world Watts-Strogatz (WS), the scale-free Barabasi-Albert (BA), and geographical networks (GG). The experimental results indicate that the inference method was sensitive to variation of the average degree k, its network recovery rate decreasing as k increases. The signal size was important for the accuracy of the network identification, with very good results obtained from small expression profiles. However, the adopted inference method was not able to recognize distinct structures of interaction among genes, presenting a similar behavior when applied to different network topologies. In summary, the proposed framework, though simple, was adequate for the validation of the inferred networks by identifying some properties of the evaluated method, and it can be extended to other inference methods.
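A minimal sketch of the feature-selection step described in item (2), on toy binarized expression data (not the authors' code; the mean-conditional-entropy criterion and the exhaustive search below are illustrative assumptions):

```python
# Toy network inference by feature selection: for a fixed target gene, score
# candidate predictor sets by the mean conditional entropy of the target's
# next state given the predictors' current states.
import itertools
import numpy as np

def conditional_entropy(target_next, predictors_now):
    """Mean conditional entropy H(target_next | predictors_now), in bits."""
    joint = {}
    for t in range(len(target_next)):
        key = tuple(predictors_now[:, t])
        joint.setdefault(key, []).append(target_next[t])
    n = len(target_next)
    h = 0.0
    for outcomes in joint.values():
        p_key = len(outcomes) / n
        counts = np.bincount(outcomes, minlength=2)
        probs = counts[counts > 0] / counts.sum()
        h += p_key * -(probs * np.log2(probs)).sum()
    return h

def best_predictors(expr, target, max_set_size=2):
    """Exhaustive search for the predictor set minimizing conditional entropy."""
    genes = [g for g in range(expr.shape[0]) if g != target]
    target_next = expr[target, 1:]
    best = (np.inf, ())
    for k in range(1, max_set_size + 1):
        for subset in itertools.combinations(genes, k):
            h = conditional_entropy(target_next, expr[list(subset), :-1])
            best = min(best, (h, subset))
    return best

# Toy example: 5 genes, 30 time points, binarized expression (0/1).
rng = np.random.default_rng(0)
expr = rng.integers(0, 2, size=(5, 30))
expr[0, 1:] = expr[1, :-1] ^ expr[2, :-1]   # gene 0 driven by genes 1 and 2
print(best_predictors(expr, target=0))
```

For this toy series, the target is fully determined by genes 1 and 2, so the search returns that pair with zero conditional entropy.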
Abstract:
We consider the problem of interaction neighborhood estimation from the partial observation of a finite number of realizations of a random field. We introduce a model selection rule to choose estimators of conditional probabilities among natural candidates. Our main result is an oracle inequality satisfied by the resulting estimator. We then use this selection rule in a two-step procedure to estimate the interaction neighborhoods: the selection rule picks a small prior set of possible interacting points, and a cutting step removes the irrelevant points from this prior set. We also prove that Ising models satisfy the assumptions of the main theorems, without restrictions on the temperature, on the structure of the interaction graph, or on the range of the interactions; this therefore provides a large class of applications for our results. We give a computationally efficient procedure for these models and finally show the practical efficiency of our approach in a simulation study.
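For the Ising case, the object being estimated is the interaction neighborhood that determines the conditional probabilities; with pairwise couplings J_ij and no external field (a standard parametrization, not necessarily the paper's notation), these take the form

\[
P\!\left(\sigma_i = +1 \,\middle|\, \sigma_{V\setminus\{i\}}\right)
  = \frac{1}{1 + \exp\!\left(-2\beta \sum_{j \neq i} J_{ij}\,\sigma_j\right)},
\]

so that a site j belongs to the interaction neighborhood of i exactly when J_ij is nonzero.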
Abstract:
Obesity has been recognized as a worldwide public health problem. It significantly increases the chances of developing several diseases, including Type II diabetes. The roles of insulin and leptin in obesity involve reactions that can be better understood when they are presented step by step. The aim of this work was to design software with data from some of the most recent publications on obesity, especially those concerning the roles of insulin and leptin in this metabolic disturbance. The most notable characteristic of this software is the use of animations representing the cellular response together with the presentation of recently discovered mechanisms on the participation of insulin and leptin in processes leading to obesity. The software was field tested in the Biochemistry of Nutrition web-based course. After using the software and discussing its contents in chatrooms, students were asked to answer an evaluation survey about the whole activity and the usefulness of the software within the learning process. The teaching assistants (TA) evaluated the software as a tool to help in the teaching process. The students' and TAs' satisfaction was very evident and encouraged us to move forward with the software development and to improve the use of this kind of educational tool in biochemistry classes.
Abstract:
The aim of this paper was to study a method based on the gas production technique to measure the biological effects of tannins on rumen fermentation. Six feeds were used as fermentation substrates in a semi-automated gas method: feed A - aroeira (Astronium urundeuva); feed B - jurema preta (Mimosa hostilis); feed C - sorghum grains (Sorghum bicolor); feed D - Tifton-85 (Cynodon sp.); and two others prepared by mixing 450 g sorghum leaves, 450 g concentrate (maize and soybean meal) and 100 g of either acacia (Acacia mearnsii) tannin extract (feed E) or quebracho (Schinopsis lorentzii) tannin extract (feed F) per kg (w:w). Three assays were carried out to standardize the bioassay for tannins. The first assay compared two binding agents (polyethylene glycol - PEG - and polyvinyl polypyrrolidone - PVPP) to attenuate the tannin effects. The complex formed by PEG and tannins was shown to be more stable than that formed by PVPP and tannins. In the second assay, PEG was therefore used as the binding agent, and levels of PEG (0, 500, 750, 1000 and 1250 mg/g DM) were evaluated to minimize the tannin effect. All tested levels of PEG produced a response suitable for evaluating tannin effects, but the best response was obtained for the dose of 1000 mg/g DM. Using this dose of PEG, the final assay tested three compounds (tannic acid, quebracho extract and acacia extract) to establish a curve of biologically equivalent effect of tannins. For this, five levels of each compound were added to 1 g of a standard feed (Lucerne hay). The equivalent effect was shown not to be directly related to the chemical analysis for tannins, indicating that different sources of tannins have different activities or reactivities. The curves of biological equivalence can provide information about tannin reactivity, and their use seems to be important as an additional factor alongside chemical analysis. (C) 2007 Elsevier B.V. All rights reserved.
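As an illustration of how a biological-equivalence curve can be used (hypothetical numbers, not data from this study): the tannin effect is expressed as the percentage increase in gas production upon PEG addition, and a sample's response is converted to tannic-acid equivalents by interpolation on a standard curve.

```python
# Sketch of reading a tannic-acid equivalence off a standard curve.
import numpy as np

# Hypothetical standard curve: tannic acid added to 1 g of standard feed (mg)
# versus % increase in gas production after PEG addition.
tannic_acid_dose_mg = np.array([0.0, 25.0, 50.0, 75.0, 100.0])
gas_increase_pct = np.array([2.0, 14.0, 27.0, 38.0, 46.0])

def tannic_acid_equivalent(sample_increase_pct):
    """Map an observed % gas increase with PEG to tannic-acid equivalents (mg)."""
    return np.interp(sample_increase_pct, gas_increase_pct, tannic_acid_dose_mg)

# e.g. a tannin-containing substrate whose gas production rises 30% with PEG
print(tannic_acid_equivalent(30.0))
```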
Abstract:
A simultaneous optimization strategy based on a neuro-genetic approach is proposed for the selection of laser-induced breakdown spectroscopy operational conditions for the simultaneous determination of macronutrients (Ca, Mg and P), micronutrients (B, Cu, Fe, Mn and Zn), Al and Si in plant samples. A laser-induced breakdown spectroscopy system equipped with a 10 Hz Q-switched Nd:YAG laser (12 ns, 532 nm, 140 mJ) and an Echelle spectrometer with an intensified charge-coupled device was used. Integration time gate, delay time, amplification gain and number of pulses were optimized. Pellets of spinach leaves (NIST 1570a) were employed as laboratory samples. In order to find a model that could correlate laser-induced breakdown spectroscopy operational conditions with simultaneously high peak areas for all elements (a compromise condition), a Bayesian Regularized Artificial Neural Network approach was employed. Subsequently, a genetic algorithm was applied to find optimal conditions for the neural network model, in an approach called neuro-genetic. A single laser-induced breakdown spectroscopy working condition that maximizes the peak areas of all elements simultaneously was obtained with the following optimized parameters: 9.0 µs integration time gate, 1.1 µs delay time, 225 (a.u.) amplification gain and 30 accumulated laser pulses. The proposed approach is a useful and suitable tool for the optimization of such a complex analytical problem. (C) 2009 Elsevier B.V. All rights reserved.
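A minimal sketch of the neuro-genetic idea (not the authors' implementation): a surrogate response function stands in for the trained Bayesian regularized neural network, and a simple genetic algorithm searches the operating-parameter space; the parameter bounds and the surrogate below are hypothetical.

```python
# Genetic search over LIBS operating parameters that maximizes a surrogate
# "compromise response" (e.g. summed, normalized peak areas of all elements).
import numpy as np

rng = np.random.default_rng(1)

# Parameter bounds: delay time (us), integration gate (us), gain (a.u.), pulses.
BOUNDS = np.array([[0.5, 5.0], [1.0, 10.0], [100.0, 255.0], [5.0, 50.0]])

def surrogate_response(x):
    """Hypothetical smooth stand-in for the trained neural-network response."""
    d, g, a, n = x
    return (np.exp(-((d - 1.1) / 1.5) ** 2) + np.exp(-((g - 9.0) / 3.0) ** 2)
            + 0.004 * a + 0.01 * n)

def random_population(size):
    return BOUNDS[:, 0] + rng.random((size, 4)) * (BOUNDS[:, 1] - BOUNDS[:, 0])

def genetic_search(generations=100, pop_size=40, mutation_scale=0.1):
    pop = random_population(pop_size)
    for _ in range(generations):
        fitness = np.array([surrogate_response(ind) for ind in pop])
        # Tournament selection of parents.
        idx = rng.integers(0, pop_size, size=(pop_size, 2))
        parents = pop[np.where(fitness[idx[:, 0]] > fitness[idx[:, 1]],
                               idx[:, 0], idx[:, 1])]
        # Arithmetic crossover between shuffled parent pairs.
        mates = parents[rng.permutation(pop_size)]
        w = rng.random((pop_size, 1))
        children = w * parents + (1.0 - w) * mates
        # Gaussian mutation, clipped to the parameter bounds.
        children += rng.normal(0.0, mutation_scale, children.shape) * \
                    (BOUNDS[:, 1] - BOUNDS[:, 0])
        pop = np.clip(children, BOUNDS[:, 0], BOUNDS[:, 1])
    best = pop[np.argmax([surrogate_response(ind) for ind in pop])]
    return best

print(genetic_search())  # delay, gate, gain, pulses maximizing the surrogate
```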
Abstract:
A novel strategy for accomplishing zone trapping in flow analysis is proposed. The sample and the reagent solutions are simultaneously inserted into convergent carrier streams and the established zones merge together before reaching the detector, where the most concentrated portion of the entire sample zone is trapped. The main characteristics, potentialities and limitations of the strategy were critically evaluated in relation to an analogous flow system with zone stopping. When applied to the spectrophotometric determination of nitrite in river waters, the main figures of merit were maintained, except for the sampling frequency, which was calculated as 189 h^-1, about 32% higher than that of the analogous system with zone stopping. The inserted sample volume can be increased up to 1.0 mL without affecting the sampling frequency, and no problems with pump heating or malfunction were noted after 8 h of operation of the system. In contrast to zone stopping, only a small portion of the sample zone is halted with zone trapping, which leads to these beneficial effects. (C) 2011 Elsevier B.V. All rights reserved.
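For orientation, the quoted 32% gain implies a throughput of roughly 189 h^-1 / 1.32 ≈ 143 h^-1 for the analogous zone-stopping system (an inference from the stated figures, not a value reported in the abstract).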
Abstract:
This article intends to contribute to the reflection on Educational Statistics as a source for research on the History of Education. The main concern was to reveal how the Educational Statistics for the period from 1871 to 1931 were produced within the central government. Official reports from the General Statistics Directory and the statistical yearbooks released by that department were analyzed, and in this analysis the recommendations and definitions guiding the production of these statistics were sought. By problematizing the documental issues surrounding Educational Statistics and their usual interpretations, the intention was to reduce the ignorance about the origin of the school numbers, which are occasionally used in current research without the appropriate critical examination.
Abstract:
Age-related changes in running kinematics have been reported in the literature using classical inferential statistics. However, this approach has been hampered by the increasing number of biomechanical gait variables reported and, consequently, the lack of differences presented in these studies. Data mining techniques have been applied in recent biomedical studies to solve this problem using a more general approach. In the present work, we re-analyzed lower extremity running kinematic data of 17 young and 17 elderly male runners using the Support Vector Machine (SVM) classification approach. In total, 31 kinematic variables were extracted to train the classification algorithm and test the generalized performance. The results revealed different accuracy rates across the three kernel methods adopted in the classifier, with the linear kernel performing best. A subsequent forward feature selection algorithm demonstrated that, with only six features, the linear-kernel SVM achieved a 100% classification rate, showing that these features provide powerful combined information to distinguish the age groups. The results of the present work demonstrate the potential of applying this approach to improve knowledge about age-related differences in running gait biomechanics and encourage the use of the SVM in other clinical contexts. (C) 2010 Elsevier Ltd. All rights reserved.
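A minimal sketch of the classification pipeline described above, using scikit-learn on synthetic data (the kinematic dataset, the exact SVM settings and the feature-selection implementation used in the study are not reproduced here):

```python
# Linear-kernel SVM with forward sequential feature selection on a synthetic
# stand-in for the 34-runner, 31-variable kinematic dataset.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic data: 34 runners (17 per group), 31 kinematic variables,
# with a few variables carrying the group difference.
X = rng.normal(size=(34, 31))
y = np.repeat([0, 1], 17)            # 0 = young, 1 = elderly
X[y == 1, :6] += 1.5                 # informative features

svm = make_pipeline(StandardScaler(), SVC(kernel="linear"))

# Forward feature selection down to six variables, mirroring the study's setup.
selector = SequentialFeatureSelector(svm, n_features_to_select=6,
                                     direction="forward", cv=5)
selector.fit(X, y)
selected = np.flatnonzero(selector.get_support())

score = cross_val_score(svm, X[:, selected], y, cv=5).mean()
print("selected features:", selected, "cv accuracy:", score)
```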
Abstract:
Electrodeposition of a thin copper layer was carried out on titanium wires in an acidic sulphate bath. The influence of titanium surface preparation, cathodic current density, copper sulphate and sulphuric acid concentrations, electrical charge density and stirring of the solution on the adhesion of the electrodeposits was studied using the Taguchi statistical method. An L16 orthogonal array with six control factors at two levels each and three interactions was employed. The analysis of variance of the mean adhesion response and of the signal-to-noise ratio showed the great influence of cathodic current density on adhesion. On the contrary, the other factors, as well as the three investigated interactions, revealed low or no significant effect. From this study, optimized electrolysis conditions were defined. The copper electrocoating improved the electrical conductivity of the titanium wire, showing that copper-electrocoated titanium wires could be employed for both electrical purposes and mechanical reinforcement in superconducting magnets. (C) 2008 Elsevier B.V. All rights reserved.
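A minimal sketch of the Taguchi-style analysis outlined above, with hypothetical adhesion values and a generic 16-run two-level layout (not the study's actual L16 column assignment or data):

```python
# Larger-is-better signal-to-noise analysis for a 16-run, two-level design:
# factor main effects are taken as the difference in mean S/N between levels.
import itertools
import numpy as np

rng = np.random.default_rng(42)

# 16-run two-level design: full factorial in four base columns plus two
# interaction-derived columns, giving six balanced factor assignments.
base = np.array(list(itertools.product([0, 1], repeat=4)))          # (16, 4)
design = np.column_stack([base,
                          base[:, 0] ^ base[:, 1],
                          base[:, 0] ^ base[:, 2]])                  # (16, 6)

# Hypothetical adhesion responses (three replicates per run); factor 1 is made
# dominant to mimic the strong effect reported for cathodic current density.
adhesion = rng.normal(10.0, 1.0, size=(16, 3)) + 4.0 * design[:, [1]]

def sn_larger_is_better(y):
    """Taguchi larger-is-better signal-to-noise ratio per run, in dB."""
    return -10.0 * np.log10(np.mean(1.0 / y**2, axis=1))

sn = sn_larger_is_better(adhesion)

for factor in range(design.shape[1]):
    effect = sn[design[:, factor] == 1].mean() - sn[design[:, factor] == 0].mean()
    print(f"factor {factor}: S/N difference between levels = {effect:+.2f} dB")
```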