954 results for Computational Simulation
Abstract:
Context. Fossil systems are defined as X-ray bright galaxy groups (or clusters) with a two-magnitude difference between their two brightest galaxies within half the projected virial radius, and represent an interesting extreme of the population of galaxy agglomerations. However, the physical conditions and processes leading to their formation are still poorly constrained. Aims. We compare the outskirts of fossil systems with those of normal groups to understand whether environmental conditions play a significant role in their formation. We study groups of galaxies in both numerical simulations and observations. Methods. We use a variety of statistical tools, including the spatial cross-correlation function and the local density parameter Delta(5), to probe differences in the density and structure of the environments of "normal" and "fossil" systems in the Millennium simulation. Results. We find that the number density of galaxies surrounding fossil systems evolves from being higher than that around normal systems at z = 0.69 to lower by z = 0. Both fossil and normal systems exhibit an increment in their otherwise radially declining local density measure (Delta(5)) at distances of order 2.5 r(vir) from the system centre. We show that this increment is more noticeable for fossil systems than for normal systems and demonstrate that this difference is linked to the earlier formation epoch of fossil groups. Despite the importance of the assembly time, we show that the environment differs between fossil and non-fossil systems of similar masses and formation times throughout their evolution. We also confirm that the physical characteristics identified in the Millennium simulation can be detected in SDSS observations. Conclusions.
Our results confirm the commonly held belief that fossil systems assembled earlier than normal systems but also show that the surroundings of fossil groups could be responsible for the formation of their large magnitude gap.
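The local density parameter Delta(5) used above is built from the distance to the fifth nearest neighbour. A minimal sketch in Python, assuming the common convention Delta(5) = 5 / ((4/3) π d₅³), i.e. the number density inside the sphere reaching the fifth neighbour (the paper's exact normalization and projection may differ):

```python
import math

def delta5(center, neighbours):
    """Local density from the distance to the 5th nearest neighbour.

    Assumed convention: Delta_5 = 5 / ((4/3) * pi * d5^3), the number
    density within the sphere that just encloses the 5th neighbour.
    """
    dists = sorted(math.dist(center, p) for p in neighbours)
    d5 = dists[4]  # distance to the 5th nearest neighbour
    return 5.0 / ((4.0 / 3.0) * math.pi * d5 ** 3)

# Toy example: galaxies on a unit grid around the origin; the six
# face-adjacent points at distance 1 fix d5 = 1.
galaxies = [(x, y, z) for x in (-1, 0, 1) for y in (-1, 0, 1)
            for z in (-1, 0, 1) if (x, y, z) != (0, 0, 0)]
rho = delta5((0.0, 0.0, 0.0), galaxies)
```

In practice this would be evaluated on radial shells around each system centre to trace the radially declining profile described in the abstract.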
Abstract:
Spectral changes of Na₂ in liquid helium were studied using the sequential Monte Carlo-quantum mechanics method. Configurations composed of Na₂ surrounded by explicit helium atoms, sampled from the Monte Carlo simulation, were subjected to time-dependent density-functional theory calculations of the electronic absorption spectrum using different functionals. Attention is given to both the line shift and the line broadening. The Perdew, Burke, and Ernzerhof functional (PBE1PBE, also known as PBE0), with the PBE1PBE/6-311++G(2d,2p) basis set, gives a spectral shift, relative to the gas phase, of 500 cm⁻¹ for the allowed X ¹Σg⁺ → B ¹Πu transition, in very good agreement with the experimental value (700 cm⁻¹). For comparison, cluster calculations were also performed, and the first X ¹Σg⁺ → A ¹Σu⁺ transition was also considered.
Abstract:
The solvent effects on the low-lying absorption spectrum and on the ¹⁵N chemical shielding of pyrimidine in water are calculated using combined and sequential Monte Carlo simulation and quantum mechanical calculations. Special attention is devoted to the solute polarization. This is included by a previously developed iterative procedure in which the solute is electrostatically equilibrated with the solvent. In addition, we examine the simple yet unexplored alternative of combining the polarizable continuum model (PCM) with the hybrid QM/MM method: PCM is used to obtain the average solute polarization, which is then included in the MM part of the sequential QM/MM methodology, PCM-MM/QM. These procedures are compared and further used in the discrete and the explicit solvent models. The PCM polarization implemented in the MM part seems to give a very good description of the average solute polarization, leading to very good results for the n→π* excitation energy and the ¹⁵N nuclear chemical shielding of pyrimidine in aqueous environment. The best results obtained here, using the solute pyrimidine surrounded by 28 explicit water molecules embedded in the electrostatic field of the remaining 472 molecules, give statistically converged values for the low-lying n→π* absorption transition in water of 36 900 +/- 100 (PCM polarization) and 36 950 +/- 100 cm⁻¹ (iterative polarization), in excellent agreement with each other and with the experimental band maximum at 36 900 cm⁻¹. For the ¹⁵N nuclear shielding, the corresponding gas-to-water chemical shifts, obtained with the solute pyrimidine surrounded by 9 explicit water molecules embedded in the electrostatic field of the remaining 491 molecules, give statistically converged values of 24.4 +/- 0.8 and 28.5 +/- 0.8 ppm, compared with the inferred experimental value of 19 +/- 2 ppm.
Given the simplicity of the PCM relative to the iterative polarization, this is an important result, and the computational savings point to the possibility of dealing with larger solute molecules. This PCM-MM/QM approach reconciles the simplicity of the PCM model with the reliability of the combined QM/MM approaches.
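The "+/-" values quoted above are statistical uncertainties from averaging a property over uncorrelated Monte Carlo configurations. A minimal sketch of that averaging step (the synthetic energies below are illustrative, not the paper's data):

```python
import math
import random

def mean_and_error(samples):
    """Average and standard error of the mean for statistically
    uncorrelated samples (the convergence measure quoted as +/-)."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return mean, math.sqrt(var / n)

# Hypothetical excitation energies (cm^-1), one per uncorrelated
# MC configuration submitted to the QM calculation
random.seed(1)
energies = [random.gauss(36900.0, 500.0) for _ in range(100)]
mu, err = mean_and_error(energies)
```

The "statistically converged" language in the abstract corresponds to this error bar no longer shrinking appreciably as more configurations are added.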
Abstract:
We study the electronic transport properties of a dual-gated bilayer graphene nanodevice via first-principles calculations. We investigate the electric current as a function of gate length and temperature. We show that, under an external electric field, a nonzero current persists even for gate lengths up to 100 Å. This result can be explained by a tunneling regime due to the remnant states in the gap. We also discuss the conditions for reaching the charge neutrality point in a system free of defects and extrinsic carrier doping.
Abstract:
Thanks to recent advances in molecular biology, allied to an ever increasing amount of experimental data, the functional state of thousands of genes can now be extracted simultaneously by methods such as cDNA microarrays and RNA-Seq. Particularly important related investigations are the modeling and identification of gene regulatory networks from expression data sets. Such knowledge is fundamental for many applications, such as disease treatment, therapeutic intervention strategies and drug design, as well as for planning new high-throughput experiments. Methods have been developed for gene network modeling and identification from expression profiles. However, an important open problem is how to validate such approaches and their results. This work presents an objective approach for validation of gene network modeling and identification which comprises three main aspects: (1) Artificial Gene Network (AGN) model generation through theoretical models of complex networks, which is used to simulate temporal expression data; (2) a computational method for gene network identification from the simulated data, founded on a feature selection approach in which a target gene is fixed and the expression profiles of all other genes are examined in order to identify a relevant subset of predictors; and (3) validation of the identified AGN-based network through comparison with the original network. The proposed framework allows several types of AGNs to be generated and used to simulate temporal expression data. The results of the network identification method can then be compared to the original network in order to estimate its properties and accuracy. Some of the most important theoretical models of complex networks have been assessed: the uniformly random Erdos-Renyi (ER), the small-world Watts-Strogatz (WS), the scale-free Barabasi-Albert (BA), and geographical networks (GG).
The experimental results indicate that the inference method was sensitive to variation of the average degree k, its network recovery rate decreasing as k increases. Signal size was important for the accuracy of network identification, with very good results obtained even for small expression profiles. However, the adopted inference method was not able to distinguish different structures of interaction among genes, behaving similarly when applied to different network topologies. In summary, the proposed framework, though simple, was adequate for validating the inferred networks by identifying some properties of the evaluated method, and it can be extended to other inference methods.
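The validation step described above — comparing an inferred network against the generative AGN — reduces to set operations on edges. A sketch assuming undirected edges and an Erdos-Renyi generator; the mock "inference" here merely degrades the ground truth and stands in for the feature-selection method:

```python
import random

def erdos_renyi(n, p, rng):
    """Uniformly random (Erdos-Renyi) gene network as a set of
    undirected edges (i, j) with i < j."""
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p}

def recovery_metrics(true_edges, inferred_edges):
    """Precision and recall of an inferred network against the
    ground-truth AGN it was simulated from."""
    tp = len(true_edges & inferred_edges)
    precision = tp / len(inferred_edges) if inferred_edges else 0.0
    recall = tp / len(true_edges) if true_edges else 0.0
    return precision, recall

rng = random.Random(42)
truth = erdos_renyi(20, 0.2, rng)
# Mock "inference": keeps ~80% of true edges and adds two spurious ones
inferred = {e for e in truth if rng.random() < 0.8}
inferred |= {(0, 1), (0, 2)}
prec, rec = recovery_metrics(truth, inferred)
```

The framework's sensitivity analysis then amounts to repeating this over generators (ER, WS, BA, GG) and average degrees k, tracking how the recovery rate changes.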
Abstract:
Background: There are several studies in the literature describing measurement error in gene expression data, and several others about regulatory network models. However, only a small fraction combine measurement error with mathematical regulatory networks and show how to identify these networks under different noise levels. Results: This article investigates the effects of measurement error on the estimation of the parameters of regulatory networks. Simulation studies indicate that, in both time series (dependent) and non-time-series (independent) data, measurement error strongly affects the estimated parameters of regulatory network models, biasing them as predicted by theory. Moreover, when testing the parameters of regulatory network models, p-values computed while ignoring the measurement error are not reliable, since the rate of false positives is not controlled under the null hypothesis. To overcome these problems, we present an improved version of the ordinary least squares estimator for independent (regression models) and dependent (autoregressive models) data when the variables are subject to noise. Measurement-error estimation procedures for microarrays are also described. Simulation results show that both corrected methods perform better than the standard ones (i.e., ignoring measurement error). The proposed methodologies are illustrated using microarray data from lung cancer patients and mouse liver time series data. Conclusions: Measurement error dangerously affects the identification of regulatory network models; thus, it must be reduced or taken into account to avoid erroneous conclusions. This could be one of the reasons for the high biological false-positive rates observed in actual regulatory network models.
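The bias described in the Results, and the flavour of its correction for the independent-data (regression) case, can be sketched with the classical errors-in-variables attenuation adjustment. This illustrates the general idea, not the authors' exact estimator; the noise variance is assumed known here, whereas the paper estimates it from the microarray data:

```python
import random

def ols_slope(x, y):
    """Ordinary least squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

def corrected_slope(x, y, noise_var):
    """Undo the attenuation bias of OLS when x carries additive
    measurement noise of known variance `noise_var`."""
    n = len(x)
    mx = sum(x) / n
    sxx = sum((a - mx) ** 2 for a in x) / n
    return ols_slope(x, y) * sxx / (sxx - noise_var)

rng = random.Random(0)
true_x = [rng.gauss(0.0, 1.0) for _ in range(5000)]
y = [2.0 * a + rng.gauss(0.0, 0.1) for a in true_x]   # true slope = 2
x_obs = [a + rng.gauss(0.0, 0.5) for a in true_x]     # noisy measurement
naive = ols_slope(x_obs, y)                  # biased towards zero (~1.6)
fixed = corrected_slope(x_obs, y, noise_var=0.25)
```

The same attenuation mechanism biases autoregressive coefficients in the time-series case, which is why the uncorrected p-values lose their false-positive control.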
Abstract:
Structural and dynamical properties of liquid trimethylphosphine (TMP), (CH₃)₃P, as a function of temperature are investigated by molecular dynamics (MD) simulations. The force field used in the MD simulations, proposed from molecular mechanics and quantum chemistry calculations, is able to reproduce the experimental density of liquid TMP at room temperature. The equilibrium structure is investigated by the usual radial distribution function, g(r), and also in reciprocal space by the static structure factor, S(k). On the basis of centre-of-mass distances, liquid TMP behaves like a simple liquid of almost spherical particles, but orientational correlation due to dipole-dipole interactions is revealed at short range. Single-particle and collective dynamics are investigated by several time correlation functions. At high temperatures, diffusion and reorientation occur on the same time scale as relaxation of the liquid structure. Decoupling of these dynamic properties starts below ca. 220 K, when the rattling dynamics of a given TMP molecule, due to the cage effect of neighbouring molecules, becomes important. (C) 2011 American Institute of Physics. [doi: 10.1063/1.3624408]
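The radial distribution function g(r) mentioned above is computed from particle coordinates via a pair-distance histogram normalized by the ideal-gas shell population. A minimal single-configuration sketch with minimum-image periodic boundaries (illustrative only; production MD analyses average over many frames):

```python
import math
import random

def radial_distribution(positions, box, dr, r_max):
    """g(r) from one configuration in a cubic box of side `box`,
    using minimum-image periodic boundary conditions."""
    n = len(positions)
    nbins = int(r_max / dr)
    hist = [0] * nbins
    for i in range(n):
        for j in range(i + 1, n):
            d2 = 0.0
            for a, b in zip(positions[i], positions[j]):
                delta = b - a
                delta -= box * round(delta / box)  # minimum image
                d2 += delta * delta
            r = math.sqrt(d2)
            if r < r_max:
                hist[int(r / dr)] += 2  # count the pair for both particles
    rho = n / box ** 3
    g = []
    for k, h in enumerate(hist):
        # Volume of the spherical shell [k*dr, (k+1)*dr)
        shell = 4.0 * math.pi * ((k + 1) ** 3 - k ** 3) * dr ** 3 / 3.0
        g.append(h / (n * rho * shell))
    return g

# Sanity check: an uncorrelated (ideal-gas) configuration gives g(r) ~ 1
rng = random.Random(3)
pts = [tuple(rng.uniform(0.0, 10.0) for _ in range(3)) for _ in range(200)]
g_of_r = radial_distribution(pts, box=10.0, dr=0.5, r_max=5.0)
```

For a real liquid such as TMP, the centre-of-mass g(r) instead shows the first-neighbour peak and decaying oscillations characteristic of short-range order.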
Abstract:
Due to the worldwide increase in demand for biofuels, the area cultivated with sugarcane is expected to increase. For environmental and economic reasons, an increasing proportion of these areas is being harvested without burning, leaving the residues on the soil surface. This periodic input of residues affects soil physical, chemical and biological properties, as well as plant growth and nutrition. Modeling can be a useful tool in the study of the complex interactions between the climate, residue quality, and the biological factors controlling plant growth and residue decomposition. The approach taken in this work was to parameterize the CENTURY model for the sugarcane crop, to simulate the temporal dynamics of aboveground phytomass and litter decomposition, and to validate the model with field experiment data. For aboveground growth, burned and unburned harvest systems were compared, as well as the effect of mineral fertilizer and organic residue applications. The simulations were performed with data from experiments of different durations, from 12 months to 60 years, in Goiana, Timbaúba and Pradópolis, Brazil; Harwood, Mackay and Tully, Australia; and Mount Edgecombe, South Africa. The differentiation of two litter pools with different decomposition rates was found to be a relevant factor in the simulations. Originally, the model had an essentially unlimited layer of mulch directly available for decomposition, 5,000 g m⁻². Through a parameter optimization process, the thickness of the mulch layer closer to the soil, more vulnerable to decomposition, was set to 110 g m⁻². By changing the layer of mulch available for decomposition at any given time, the sugarcane residue decomposition simulations were close to measured values (R² = 0.93), contributing to making the CENTURY model a tool for the study of sugarcane litter decomposition patterns.
The CENTURY model accurately simulated aboveground carbon stalk values (R² = 0.76), considering burned and unburned harvest systems, plots with and without nitrogen fertilizer and organic amendment applications, in different climates and soil conditions.
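The layer-limitation idea above — only about 110 g m⁻² of mulch closest to the soil being exposed to decomposition at any time — can be sketched as a two-pool first-order decay with a capped accessible mass. Parameter values below are illustrative assumptions, not CENTURY's calibrated values:

```python
def decompose(mulch, k_fast, k_slow, fast_frac, access_limit, months):
    """Monthly first-order decay of surface residue split into a fast
    (metabolic) and a slow (structural) pool; only the bottom
    `access_limit` g m^-2 of the mulch is exposed to decomposition."""
    fast = mulch * fast_frac
    slow = mulch * (1.0 - fast_frac)
    history = []
    for _ in range(months):
        total = fast + slow
        # Fraction of each pool sitting in the decomposable layer
        exposed = min(total, access_limit) / total if total > 0 else 0.0
        fast -= k_fast * fast * exposed
        slow -= k_slow * slow * exposed
        history.append(fast + slow)  # residue remaining (g m^-2)
    return history

# Illustrative run: ~14 t ha^-1 of post-harvest residue over two years
remaining = decompose(mulch=1400.0, k_fast=0.25, k_slow=0.05,
                      fast_frac=0.2, access_limit=110.0, months=24)
```

Capping the exposed layer slows the early decay of a thick residue blanket relative to an unlimited-access model, which is the behaviour the parameter optimization recovered.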
Abstract:
Currently there is a trend toward expansion of the area cropped with sugarcane (Saccharum officinarum L.), driven by an increase in the world demand for biofuels due to economic, environmental, and geopolitical issues. Although sugarcane is traditionally harvested by burning dried leaves and tops, unburned, mechanized harvest has been progressively adopted. Process-based models are useful in understanding the effects of plant litter on soil C dynamics. The objective of this work was to use the CENTURY model to evaluate the effect of sugarcane residue management on the temporal dynamics of soil C. The approach taken was to parameterize the CENTURY model for the sugarcane crop, to simulate the temporal dynamics of soil C, validating the model with field experiment data, and finally to make long-term predictions of soil C. The main focus was the comparison of soil C stocks between the burned and unburned litter management systems, but the effects of mineral fertilizer and organic residue applications were also evaluated. The simulations were performed with data from experiments of different durations, from 1 to 60 yr, in Goiana and Timbaúba, Pernambuco, and Pradópolis, São Paulo, all in Brazil; and Mount Edgecombe, KwaZulu-Natal, South Africa. It was possible to simulate the temporal dynamics of soil C (R² = 0.89). The predictions made with the model revealed a long-term trend toward higher soil C stocks under the unburned management. This increase is conditioned by factors such as climate, soil texture, time since adoption of the unburned system, and N fertilizer management.
Abstract:
This study investigated the energy system contributions of rowers in three different conditions: rowing on an ergometer without and with the slide, and rowing on the water. For this purpose, eight rowers performed 2,000 m race simulations in each of the situations defined above. The fractions of the aerobic (W(AER)), anaerobic alactic (W(PCR)) and anaerobic lactic (W([La-])) systems were calculated from the oxygen uptake, the fast component of excess post-exercise oxygen uptake and the change in net blood lactate, respectively. On the water, the metabolic work was significantly higher [851 (82) kJ] than on both the ergometer [674 (60) kJ] and the ergometer with slide [663 (65) kJ] (P <= 0.05). The time on the water [515 (11) s] was longer (P < 0.001) than on the ergometers with [398 (10) s] and without the slide [402 (15) s], resulting in no difference when relative energy expenditure was considered: water [99 (9) kJ min⁻¹], ergometer without the slide [99.6 (9) kJ min⁻¹] and ergometer with the slide [100.2 (9.6) kJ min⁻¹]. The respective contributions of the W(AER), W(PCR) and W([La-]) systems were: water = 87 (2), 7 (2) and 6 (2)%; ergometer = 84 (2), 7 (2) and 9 (2)%; and ergometer with the slide = 84 (2), 7 (2) and 9 (1)%. V̇O₂, HR and lactate did not differ among conditions. These results seem to indicate that the ergometer braking system simulates the conditions of a bigger and faster boat, not a single scull. A 2,500 m test should probably be used to properly simulate an on-water single-scull race.
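The three-way partitioning described above follows standard conventions: aerobic work from the net oxygen uptake, alactic work from the fast EPOC component, and lactic work from the net lactate rise (an O₂ equivalent of 3 ml O₂ kg⁻¹ per mM is commonly assumed, along with ~20.9 kJ per litre of O₂). A sketch with illustrative numbers, not the paper's data:

```python
def energy_fractions(vo2_net_l, epoc_fast_l, delta_lactate_mm, mass_kg,
                     kj_per_litre_o2=20.9):
    """Energy system contributions (kJ) and their percentages.

    Assumed conventions: net VO2 (litres) -> aerobic; fast EPOC
    (litres) -> alactic (PCr); net lactate rise at 3 ml O2 kg^-1
    per mM -> lactic.
    """
    w_aer = vo2_net_l * kj_per_litre_o2
    w_pcr = epoc_fast_l * kj_per_litre_o2
    w_la = delta_lactate_mm * 3.0 * mass_kg / 1000.0 * kj_per_litre_o2
    total = w_aer + w_pcr + w_la
    return {"W_AER": w_aer, "W_PCR": w_pcr, "W_La": w_la, "total": total,
            "fractions": tuple(round(100 * w / total)
                               for w in (w_aer, w_pcr, w_la))}

# Illustrative 2,000 m effort for an 80 kg rower (hypothetical inputs)
out = energy_fractions(vo2_net_l=35.0, epoc_fast_l=3.0,
                       delta_lactate_mm=8.0, mass_kg=80.0)
```

With these inputs the aerobic fraction dominates, consistent with the ~84-87% aerobic contributions reported in the abstract.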
Abstract:
The main purpose of this paper is to present the architecture of an automated system that allows monitoring and tracking, in real time (online), the possible occurrence of faults and electromagnetic transients observed in primary power distribution networks. Through the interconnection of this automated system with the utility operation center, it will be possible to provide an efficient tool to assist decision-making by the operation center. In short, the aim is to have all the tools necessary to identify, almost instantaneously, the occurrence of faults and transient disturbances in the primary power distribution system, as well as to determine their respective origin and probable location. The compiled results from the application of this automated system show that the developed techniques provide accurate results, identifying and locating several occurrences of faults observed in the distribution system.
Abstract:
This paper proposes an optimal-sensitivity approach applied to the tertiary loop of automatic generation control. The approach is based on the non-linear perturbation theorem. From an optimal operating point obtained by an optimal power flow, a new optimal operating point is determined directly after a perturbation, i.e., without the need for an iterative process. This new optimal operating point satisfies the constraints of the problem for small perturbations in the loads. The participation factors and the voltage set points of the automatic voltage regulators (AVR) of the generators are determined by the optimal-sensitivity technique, considering the effects of active power loss minimization and the network constraints. The participation factors and voltage set points of the generators are supplied directly to a computational program for dynamic simulation of automatic generation control, in what is named the power sensitivity mode. Test results are presented to show the good performance of this approach. (C) 2008 Elsevier B.V. All rights reserved.
Abstract:
The purpose of this paper is to propose a multiobjective optimization approach for solving the manufacturing cell formation problem, explicitly considering the performance of the manufacturing system. Cells are formed so as to simultaneously minimize three conflicting objectives, namely the level of work-in-process, the intercell moves and the total machinery investment. A genetic algorithm performs a search in the design space in order to approximate the Pareto-optimal set. The objective values of each candidate solution in a population are assigned by running a discrete-event simulation, in which the model is automatically generated according to the number of machines and their distribution among cells implied by the particular solution. The potential of this approach is evaluated through its application to an illustrative example and a case from the relevant literature. The results obtained are analyzed and reviewed. It is concluded that this approach is capable of generating a set of alternative manufacturing cell configurations considering the optimization of multiple performance measures, greatly improving the decision-making process involved in planning and designing cellular systems. (C) 2010 Elsevier Ltd. All rights reserved.
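The core of the multiobjective selection — keeping only non-dominated cell configurations under the three minimized objectives — can be sketched as a Pareto filter (the candidate tuples below are hypothetical; in the paper each would come from a discrete-event simulation run):

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective (all
    minimized) and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b)) and
            any(x < y for x, y in zip(a, b)))

def pareto_front(solutions):
    """Non-dominated subset of the candidate solutions."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# (work-in-process, intercell moves, machinery investment) per candidate
candidates = [(10, 4, 300), (12, 3, 280), (11, 5, 350), (10, 4, 310)]
front = pareto_front(candidates)
```

In the proposed approach the genetic algorithm evolves the population toward this front, so the decision maker ends up with a set of trade-off cell configurations rather than a single solution.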
Abstract:
Confined flows in tubes with permeable surfaces are associated with tangential filtration processes (microfiltration or ultrafiltration). The complexity of the phenomena does not allow for the development of exact analytical solutions; however, approximate solutions are of great interest for calculating the transmembrane outflow and estimating the concentration polarization phenomenon. In the present work, the generalized integral transform technique (GITT) was employed to solve the laminar, steady flow of a Newtonian, incompressible fluid in permeable tubes. The mathematical formulation employed the parabolic differential equation of chemical species conservation (the convective-diffusive equation). The velocity profiles for the entrance-region flow, which appear in the convective terms of the equation, were taken from solutions available in the literature. The velocity at the permeable wall was considered uniform, with the concentration at the tube wall regarded as variable with axial position. A computational methodology using global error control was applied to determine the wall concentration and the concentration boundary layer thickness. The results obtained for the local transmembrane flux and the concentration boundary layer thickness were compared against others in the literature. (C) 2007 Elsevier B.V. All rights reserved.