865 results for Time equivalent approach
Abstract:
The dynamic polarizability and optical absorption spectrum of liquid water in the 6-15 eV energy range are investigated by a sequential molecular dynamics (MD)/quantum mechanical approach. The MD simulations are based on a polarizable model for liquid water. Electronic properties are calculated with time-dependent density functional theory and equation-of-motion coupled-cluster theory. Results for the dynamic polarizability, the Cauchy moments S(-2), S(-4), S(-6), and the dielectric properties of liquid water are reported. The theoretical predictions for the optical absorption spectrum of liquid water are in good agreement with experimental data.
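The relation between the dynamic polarizability and the Cauchy moments mentioned above can be sketched numerically. In a sum-over-states picture, alpha(omega) = sum_n f_n/(E_n^2 - omega^2) and its low-frequency Cauchy expansion is alpha(omega) = S(-2) + S(-4) omega^2 + S(-6) omega^4 + ..., with S(-k) = sum_n f_n/E_n^k. The excitation energies and oscillator strengths below are hypothetical placeholders, not the liquid-water values of the paper.

```python
# Sum-over-states dynamic polarizability and its Cauchy-moment expansion
# (atomic units). The (E_n, f_n) pairs are hypothetical, illustration only.

def polarizability(omega, states):
    """alpha(omega) = sum_n f_n / (E_n**2 - omega**2)."""
    return sum(f / (E**2 - omega**2) for E, f in states)

def cauchy_moment(k, states):
    """S(-k) = sum_n f_n / E_n**k."""
    return sum(f / E**k for E, f in states)

states = [(0.3, 0.1), (0.5, 0.4), (0.8, 0.2)]  # hypothetical (E_n, f_n)
w = 0.05  # frequency well below the first excitation energy
# alpha(w) = S(-2) + S(-4)*w**2 + S(-6)*w**4 + ...  (truncated after 5 terms)
series = sum(cauchy_moment(2 * j + 2, states) * w**(2 * j) for j in range(5))
print(polarizability(w, states), series)
```

For frequencies below the first excitation energy the truncated series reproduces the exact sum-over-states value to high accuracy, which is why the S(-2k) moments summarize the dispersion of the polarizability.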
Abstract:
We develop a combined hydro-kinetic approach that incorporates the hydrodynamical expansion of the systems formed in A + A collisions and their dynamical decoupling described by escape probabilities. The method corresponds to a generalized relaxation time (tau(rel)) approximation for the Boltzmann equation applied to inhomogeneous expanding systems; at small tau(rel) it also allows one to capture the viscous effects in the hadronic component (hadron-resonance gas). We demonstrate how the approximation of sudden freeze-out can be obtained within this dynamical picture of continuous emission and find that the hypersurfaces corresponding to a sharp freeze-out limit are momentum dependent. The pion m(T) spectra are computed in the developed hydro-kinetic model and compared with those obtained from ideal hydrodynamics with the Cooper-Frye isothermal prescription. Our results indicate that there is no universal freeze-out temperature for pions with different momenta, and they support an earlier decoupling of higher p(T) particles. By performing numerical simulations for various initial conditions and equations of state, we identify several characteristic features of the bulk QCD matter evolution preferred in view of the current analysis of heavy ion collisions at RHIC energies.
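The relaxation-time approximation underlying this approach can be illustrated in its simplest form: the distribution relaxes as df/dt = -(f - f_eq)/tau, and the probability of escaping without further collisions up to time t is exp(-t/tau). The constant tau and f_eq below are toy assumptions; the paper's model uses inhomogeneous, expanding systems with a momentum-dependent relaxation time.

```python
# Minimal sketch of the relaxation-time approximation for the Boltzmann
# equation: df/dt = -(f - f_eq)/tau, with escape probability exp(-t/tau).
# Constant tau and f_eq are toy assumptions, not the hydro-kinetic model.
import math

def relax(f0, f_eq, tau, t, steps=10000):
    """Forward-Euler integration of df/dt = -(f - f_eq)/tau."""
    dt = t / steps
    f = f0
    for _ in range(steps):
        f += -(f - f_eq) / tau * dt
    return f

tau, f0, f_eq, t = 2.0, 1.0, 0.2, 3.0
exact = f_eq + (f0 - f_eq) * math.exp(-t / tau)   # analytic solution
escape = math.exp(-t / tau)  # probability of no collision up to time t
print(relax(f0, f_eq, tau, t), exact, escape)
```

At small tau the distribution is driven rapidly toward f_eq (the hydrodynamic limit), while a large tau leaves particles free-streaming, which is the interpolation the escape probabilities formalize.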
Abstract:
We show that measurements of finite duration performed on an open two-state system can protect the initial state from a phase-noisy environment, provided the measured observable does not commute with the perturbing interaction. When the measured observable commutes with the environmental interaction, the finite-duration measurement accelerates the rate of decoherence induced by the phase noise. For the description of the measurement of an observable that is incompatible with the interaction between system and environment, we have found an approximate analytical expression, valid at zero temperature and weak coupling with the measuring device. We have tested the validity of the analytical predictions against an exact numerical approach, based on the superoperator-splitting method, that confirms the protection of the initial state of the system. When the coupling between the system and the measuring apparatus increases beyond the range of validity of the analytical approximation, the initial state is still protected by the finite-time measurement, in accordance with the exact numerical calculations.
Abstract:
Background: The inference of gene regulatory networks (GRNs) from large-scale expression profiles is one of the most challenging problems in Systems Biology today. Many techniques and models have been proposed for this task. However, it is generally not possible to recover the original topology with great accuracy, mainly because of the short time series available in the face of the high complexity of the networks and the intrinsic noise of the expression measurements. In order to improve the accuracy of entropy-based (mutual information) GRN inference methods, a new criterion function is proposed here. Results: In this paper we introduce the use of the generalized entropy proposed by Tsallis for the inference of GRNs from time series expression profiles. The inference process is based on a feature selection approach, with conditional entropy applied as the criterion function. To assess the proposed methodology, the algorithm is applied to recover the network topology from temporal expressions generated by an artificial gene network (AGN) model as well as from the DREAM challenge. The adopted AGN is based on theoretical models of complex networks, and its gene transfer function is obtained by random drawing from the set of possible Boolean functions, thus creating its dynamics. The DREAM time series data, in turn, present varying network sizes, and their topologies are based on real networks; the dynamics are generated by continuous differential equations with noise and perturbation. By adopting both data sources, it is possible to estimate the average quality of the inference with respect to different network topologies, transfer functions and network sizes. Conclusions: A remarkable improvement in accuracy was observed in the experimental results, with the non-Shannon entropy reducing the number of false connections in the inferred topology.
The best value obtained for the free parameter of the Tsallis entropy was on average in the range 2.5 <= q <= 3.5 (hence, subextensive entropy), which opens new perspectives for information-theoretic GRN inference methods and for investigating the nonextensivity of such networks. The inference algorithm and criterion function proposed here were implemented and included in the DimReduction software, which is freely available at http://sourceforge.net/projects/dimreduction and http://code.google.com/p/dimreduction/.
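The criterion described above can be sketched concretely: the Tsallis entropy S_q = (1 - sum_i p_i^q)/(q - 1) replaces the Shannon entropy, and the predictor gene that leaves the target with the lowest mean conditional entropy is selected. The binary time series below are toy data, not real expression profiles, and q = 2.5 is taken from the reported best range.

```python
# Tsallis conditional entropy as a feature-selection criterion (toy sketch).
# S_q = (1 - sum_i p_i**q) / (q - 1); q = 2.5 is within the paper's best range.
from collections import Counter

def tsallis(probs, q=2.5):
    return (1.0 - sum(p**q for p in probs)) / (q - 1.0)

def cond_entropy(target, feature, q=2.5):
    """Mean Tsallis entropy of the target given each state of the feature."""
    n = len(target)
    h = 0.0
    for v in set(feature):
        sub = [t for t, f in zip(target, feature) if f == v]
        probs = [c / len(sub) for c in Counter(sub).values()]
        h += (len(sub) / n) * tsallis(probs, q)
    return h

target = [0, 1, 1, 0, 1, 0, 0, 1]
gene_a = [0, 1, 1, 0, 1, 0, 0, 1]   # perfectly predicts the target
gene_b = [0, 0, 1, 1, 0, 1, 0, 1]   # uninformative
best = min([gene_a, gene_b], key=lambda g: cond_entropy(target, g))
print(best is gene_a)  # the informative gene minimises conditional entropy
```

A full inference pass would rank every candidate predictor (or predictor set) of each target gene this way; the q > 1 regime penalizes spread-out conditional distributions more sharply, which is one intuition for the reduction in false connections.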
Abstract:
The aim of this paper was to study a method based on the gas production technique to measure the biological effects of tannins on rumen fermentation. Six feeds were used as fermentation substrates in a semi-automated gas method: feed A - aroeira (Astronium urundeuva); feed B - jurema preta (Mimosa hostilis); feed C - sorghum grains (Sorghum bicolor); feed D - Tifton-85 (Cynodon sp.); and two others prepared by mixing 450 g sorghum leaves, 450 g concentrate (maize and soybean meal) and 100 g of either acacia (Acacia mearnsii) tannin extract (feed E) or quebracho (Schinopsis lorentzii) tannin extract (feed F) per kg (w:w). Three assays were carried out to standardize the bioassay for tannins. The first assay compared two binding agents (polyethylene glycol - PEG - and polyvinylpolypyrrolidone - PVPP) for attenuating the tannin effects. The complex formed by PEG and tannins proved more stable than that formed by PVPP and tannins. In the second assay, PEG was therefore used as the binding agent, at levels of 0, 500, 750, 1000 and 1250 mg/g DM, to minimize the tannin effect. All tested levels of PEG produced a response suitable for evaluating tannin effects, but the best response was obtained at 1000 mg/g DM. Using this dose of PEG, the final assay tested three compounds (tannic acid, quebracho extract and acacia extract) to establish a curve of biologically equivalent tannin effect. For this, five levels of each compound were added to 1 g of a standard feed (lucerne hay). The equivalent effect was shown not to be directly related to the chemical analysis for tannins, demonstrating that different sources of tannins have different activities or reactivities. The curves of biological equivalence can provide information about tannin reactivity, and their use seems to be an important complement to chemical analysis. (C) 2007 Elsevier B.V. All rights reserved.
Abstract:
A simultaneous optimization strategy based on a neuro-genetic approach is proposed for the selection of laser-induced breakdown spectroscopy operational conditions for the simultaneous determination of macronutrients (Ca, Mg and P), micronutrients (B, Cu, Fe, Mn and Zn), Al and Si in plant samples. A laser-induced breakdown spectroscopy system equipped with a 10 Hz Q-switched Nd:YAG laser (12 ns, 532 nm, 140 mJ) and an Echelle spectrometer with an intensified charge-coupled device was used. Integration time gate, delay time, amplification gain and number of pulses were optimized. Pellets of spinach leaves (NIST 1570a) were employed as laboratory samples. In order to find a model correlating the operational conditions with compromise conditions giving high peak areas for all elements simultaneously, a Bayesian Regularized Artificial Neural Network approach was employed. Subsequently, a genetic algorithm was applied to find optimal conditions for the neural network model, in an approach called neuro-genetic. A single working condition that maximizes the peak areas of all elements simultaneously was obtained with the following optimized parameters: 9.0 mu s integration time gate, 1.1 mu s delay time, 225 (a.u.) amplification gain and 30 accumulated laser pulses. The proposed approach is a useful and suitable tool for the optimization of such a complex analytical problem. (C) 2009 Elsevier B.V. All rights reserved.
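The neuro-genetic idea can be sketched as a genetic algorithm searching the input space of a trained surrogate model for the setting that maximizes the predicted response. The quadratic surrogate below is a made-up stand-in for the Bayesian regularized neural network, and the two parameters (delay time, integration gate) and their ranges are illustrative assumptions.

```python
# Sketch of the neuro-genetic loop: a GA searches the inputs of a surrogate
# model. The surrogate is a hypothetical stand-in for the trained network,
# chosen to peak at delay = 1.1 us and gate = 9.0 us for illustration.
import random

random.seed(0)

def surrogate(delay, gate):
    """Stand-in for the trained network's predicted compromise signal."""
    return -((delay - 1.1) ** 2) - 0.1 * (gate - 9.0) ** 2

def genetic_search(fitness, bounds, pop=30, gens=60, mut=0.1):
    popn = [[random.uniform(l, h) for l, h in bounds] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda ind: fitness(*ind), reverse=True)
        parents = popn[: pop // 2]          # elitist selection
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2 + random.gauss(0, mut) for x, y in zip(a, b)]
            child = [min(max(c, l), h) for c, (l, h) in zip(child, bounds)]
            children.append(child)
        popn = parents + children
    return max(popn, key=lambda ind: fitness(*ind))

best = genetic_search(surrogate, [(0.0, 5.0), (1.0, 20.0)])
print(best)  # approaches the surrogate's optimum (1.1, 9.0)
```

In the paper's setting, the fitness evaluations are cheap because they query the trained network rather than the spectrometer, which is what makes the GA search practical.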
Abstract:
This paper deals with the H(infinity) recursive estimation problem for general rectangular time-variant descriptor systems in discrete time. Riccati-equation-based recursions for filtered and predicted estimates are developed from a data-fitting approach and game theory. In this approach, nature determines a state sequence seeking to maximize the estimation cost, whereas the estimator tries to find an estimate that brings the estimation cost to a minimum. A solution exists for a specified gamma-level if the resulting cost is positive. In order to provide computational alternatives to the H(infinity) filters developed, they are rewritten in information form along with the respective array algorithms. (C) 2009 Elsevier Ltd. All rights reserved.
Abstract:
This paper presents a new approach, the predictor-corrector modified barrier approach (PCMBA), to minimize active losses in power system planning studies. In the PCMBA, the inequality constraints are transformed into equalities by introducing positive auxiliary variables, which are perturbed by the barrier parameter and treated by the modified barrier method. The first-order necessary conditions of the Lagrangian function are solved by a predictor-corrector Newton's method. The perturbation of the auxiliary variables results in an expansion of the feasible set of the original problem, allowing the limits of the inequality constraints to be reached. The feasibility of the proposed approach is demonstrated on various IEEE test systems and on a realistic 2256-bus power system corresponding to the Brazilian South-Southeastern interconnected system. The results show that combining the predictor-corrector method with the pure modified barrier approach accelerates the convergence of the problem in terms of both the number of iterations and the computational time. (C) 2008 Elsevier B.V. All rights reserved.
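The modified-barrier mechanism itself can be shown on a toy one-dimensional problem (this is Polyak's modified barrier with damped Newton steps and multiplier updates, not the full predictor-corrector PCMBA): minimize f(x) = (x - 3)^2 subject to x <= 1, whose solution has the constraint active with KKT multiplier 4. The problem data are invented for illustration.

```python
# Toy modified-barrier iteration: minimise (x-3)**2 s.t. x <= 1 by adding
# Polyak's barrier -mu*lam*log(1 + (1 - x)/mu); Newton steps on the
# subproblem, then a multiplier update. Unlike a classical log barrier,
# the constraint limit x = 1 can be reached with mu held fixed.

def solve(mu=1.0, lam=1.0, outer=50, inner=20):
    x = 0.0
    for _ in range(outer):
        for _ in range(inner):                    # Newton on the subproblem
            t = 1.0 + (1.0 - x) / mu
            d1 = 2.0 * (x - 3.0) + lam / t        # gradient of barrier fn
            d2 = 2.0 + lam / (mu * t * t)         # Hessian (always > 0)
            step = d1 / d2
            while 1.0 + (1.0 - (x - step)) / mu <= 0.0:
                step /= 2.0                       # damp to stay in the domain
            x -= step
        lam = lam / (1.0 + (1.0 - x) / mu)        # multiplier update
    return x, lam

x, lam = solve()
print(x, lam)  # x -> 1 (active limit reached), lam -> 4 (KKT multiplier)
```

The multiplier update contracts the error by a constant factor per outer iteration at fixed mu, which mirrors the convergence-acceleration argument made for the pure modified barrier in the abstract.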
Abstract:
Active control solutions appear to be a feasible approach to cope with the steadily increasing requirements for noise reduction in the transportation industry. Active controllers tend to be designed with a target on sound pressure level reduction. However, the control efficiency perceived by the occupants can be assessed more accurately if psychoacoustic metrics are taken into account. Therefore, this paper evaluates, numerically and experimentally, the effect of a feedback controller on the sound quality of a vehicle mockup excited with engine noise. The proposed simulation scheme is described and experimentally validated. The engine excitation is provided by a sound-quality-equivalent engine simulator, running on a real-time platform that delivers harmonic excitation as a function of the driving condition. The controller performance is evaluated in terms of specific loudness and roughness. It is shown that even a quite simple control strategy, such as velocity feedback, can yield a satisfactory loudness reduction with slightly spread roughness, improving the overall perception of the engine sound. (C) 2008 Elsevier Ltd. All rights reserved.
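Why velocity feedback reduces loudness can be seen on a single-degree-of-freedom model: feeding back -g times the velocity adds effective damping, which caps the resonant response to a harmonic (engine-order) tone. The mass, stiffness and damping values below are illustrative, not identified from the vehicle mockup.

```python
# Velocity feedback as added damping on a 1-DOF "panel" model driven by a
# harmonic engine-order tone. Parameters are illustrative assumptions.
import math

def steady_amplitude(gain, m=1.0, c=0.05, k=100.0, F=1.0, w=10.0):
    """Steady-state amplitude of m*x'' + (c + gain)*x' + k*x = F*sin(w*t)."""
    ceff = c + gain                       # feedback -gain*x' adds damping
    return F / math.sqrt((k - m * w * w) ** 2 + (ceff * w) ** 2)

open_loop = steady_amplitude(0.0)         # driven exactly at resonance here
closed_loop = steady_amplitude(2.0)
print(open_loop, closed_loop)  # feedback strongly lowers the resonant peak
```

At resonance the amplitude is F/(c_eff * w), so the reduction is largest precisely where the tonal loudness contribution peaks; away from resonance the effect is milder, consistent with a loudness reduction accompanied by only slight changes in roughness.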
Abstract:
The demands for improved sound quality and reduced vehicle noise are constantly increasing, as are the penalties on the space and weight of the control solutions. A promising approach to cope with this challenge is active structural-acoustic control. Low frequency noise is usually transmitted into the vehicle's cabin through structural paths, which raises the need for vibro-acoustic models. Such models should allow the inclusion of sensor and actuator models if accurate performance indexes are to be assessed. The challenge thus resides in deriving reasonably sized models that integrate the structural, acoustic and electrical components together with the controller algorithm. The advantage of an adequate active-control simulation strategy lies in the cost and time reduction achieved in the development phase. Therefore, the aim of this paper is to present a methodology for simulating vibro-acoustic systems, including this coupled model in a closed-loop control simulation framework that also accounts for the interaction between the system and the control sensors/actuators. It is shown that neglecting the sensor/actuator dynamics can lead to inaccurate performance predictions.
Abstract:
The selection criteria for Euler-Bernoulli or Timoshenko beam theories are generally given by some deterministic rule involving beam dimensions. The Euler-Bernoulli beam theory is used to model the behavior of flexure-dominated (or "long") beams. The Timoshenko theory applies to shear-dominated (or "short") beams. In the mid-length range, both theories should be equivalent, and some agreement between them would be expected. Indeed, it is shown in the paper that, for some mid-length beams, the deterministic displacement responses of the two theories agree very well. However, the article points out that the behavior of the two beam models is radically different in terms of uncertainty propagation. In the paper, some beam parameters are modeled as parameterized stochastic processes. The two formulations are implemented and solved via a Monte Carlo-Galerkin scheme. It is shown that, for an uncertain elasticity modulus, propagation of uncertainty to the displacement response is much larger for Timoshenko beams than for Euler-Bernoulli beams. On the other hand, propagation of uncertainty in a random beam height is much larger for Euler-Bernoulli beam displacements. Hence, any reliability or risk analysis becomes completely dependent on the beam theory employed. The authors believe this is not widely acknowledged by the structural safety or stochastic mechanics communities. (C) 2010 Elsevier Ltd. All rights reserved.
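The sensitivity of Euler-Bernoulli displacements to the beam height can be illustrated with a closed-form cantilever rather than the paper's Monte Carlo-Galerkin scheme: the tip deflection is delta = P L^3/(3 E I) with I = b h^3/12, so delta scales as h^-3 but only as E^-1, and an equally uncertain height spreads the response roughly three times more than an uncertain modulus. Loads, dimensions and the 5% coefficient of variation below are invented for illustration.

```python
# Monte Carlo sketch: uncertainty propagation to the tip deflection of an
# Euler-Bernoulli cantilever, delta = P*L**3 / (3*E*I), I = b*h**3/12.
# Illustrative stand-in for the paper's Monte Carlo-Galerkin scheme.
import random
import statistics

random.seed(1)
P, L, b = 1.0e3, 2.0, 0.1          # load (N), length (m), width (m)
E0, h0, cov = 210e9, 0.2, 0.05     # nominal modulus, height, 5% C.O.V.

def deflection(E, h):
    I = b * h**3 / 12.0
    return P * L**3 / (3.0 * E * I)

def cov_of(samples):
    return statistics.stdev(samples) / statistics.mean(samples)

n = 20000
rand_E = [deflection(random.gauss(E0, cov * E0), h0) for _ in range(n)]
rand_h = [deflection(E0, random.gauss(h0, cov * h0)) for _ in range(n)]
print(cov_of(rand_E), cov_of(rand_h))  # height uncertainty spreads ~3x more
```

The first-order rule is visible in the output: a 5% input C.O.V. propagates to about 5% through E (delta ~ E^-1) but to roughly 15% through h (delta ~ h^-3), which is the Euler-beam half of the comparison made in the abstract.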
Abstract:
This paper presents an Adaptive Maximum Entropy (AME) approach for modeling biological species. The Maximum Entropy algorithm (MaxEnt) is one of the most widely used methods for modeling the geographical distribution of biological species. The approach presented here is an alternative to the classical algorithm: instead of using the same set of features throughout training, AME tries to insert or to remove a single feature at each iteration. The aim is to reach convergence faster without affecting the performance of the generated models. Preliminary experiments performed well, showing gains in both accuracy and execution time. Comparisons with other algorithms are beyond the scope of this paper. Several lines of research are proposed as future work.
Abstract:
This paper studies a simplified methodology for integrating the real-time optimization (RTO) of a continuous system into the model predictive controller in the one-layer strategy. The gradient of the economic objective function is included in the cost function of the controller. Optimal steady-state conditions of the process are sought through a rigorous non-linear process model, while the trajectory to be followed is predicted with a linear dynamic model obtained through a plant step test. The main advantage of the proposed strategy is that the resulting control/optimization problem can still be solved with a quadratic programming routine at each sampling step. Simulation results show that the proposed approach may be comparable to the strategy that solves the full economic optimization problem inside the MPC controller, where the resulting control problem becomes a non-linear programming problem with a much higher computational load. (C) 2010 Elsevier Ltd. All rights reserved.
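The mechanism can be reduced to one dimension: the gradient of a (here toy) economic objective enters the controller's quadratic cost as a linear term, so each sampling step solves a small quadratic problem instead of a full NLP, and repeated steps drift the input toward the economic optimum. The economic function, move-suppression weight and optimum below are invented stand-ins for the rigorous model.

```python
# One-dimensional sketch of the one-layer RTO/MPC idea: each step minimises
# a quadratic move cost plus the linear term grad(f_eco)(u_prev) * u.
# f_eco(u) = (u - 4)**2 is a toy economic objective with optimum u = 4.

def f_eco_grad(u):
    return 2.0 * (u - 4.0)  # gradient of the toy economic objective

def control_step(u_prev, w_move=2.0):
    # minimise w_move*(u - u_prev)**2 + f_eco_grad(u_prev)*u  (closed form)
    return u_prev - f_eco_grad(u_prev) / (2.0 * w_move)

u = 0.0
for _ in range(30):      # one QP-like solve per sampling step
    u = control_step(u)
print(u)  # the input converges to the economic optimum u = 4
```

Each step is the closed-form solution of a strictly convex quadratic, which is why the one-layer problem remains solvable by a QP routine while still steering the plant toward the steady-state economic optimum between samples.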
Abstract:
Pipeline systems play a key role in the petroleum business. These operational systems provide connections between ports and/or oil fields and refineries (upstream), as well as between these and consumer markets (downstream). The purpose of this work is to propose a novel MINLP formulation, based on a continuous time representation, for the scheduling of multiproduct pipeline systems that must supply multiple consumer markets. It also considers that the pipeline operates intermittently and that the pumping costs depend on the booster stations' yield rates, which in turn may generate different flow rates. The proposed continuous time representation is compared with a previously developed discrete time representation [Rejowski, R., Jr., & Pinto, J. M. (2004). Efficient MILP formulations and valid cuts for multiproduct pipeline scheduling. Computers and Chemical Engineering, 28, 1511] in terms of solution quality and computational performance. The influence of the number of time intervals representing the transfer operation is studied, and several configurations for the booster stations are tested. Finally, the proposed formulation is applied to a larger case, in which several booster configurations with different numbers of stages are tested. (C) 2007 Elsevier Ltd. All rights reserved.
Abstract:
In this paper, we consider a real-life heterogeneous fleet vehicle routing problem with time windows and split deliveries that occurs in a major Brazilian retail group. A single depot serves 519 stores of the group, distributed across 11 Brazilian states. To find good solutions to this problem, we propose heuristics for constructing initial solutions and a scatter search (SS) approach. The produced solutions are then compared with the routes actually covered by the company. Our results show that the total distribution cost can be reduced significantly when such methods are used. Experimental testing with benchmark instances is used to assess the merit of the proposed procedure. (C) 2008 Published by Elsevier B.V.
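The scatter-search skeleton can be sketched on a tiny single-vehicle (TSP-like) instance: a reference set of locally improved tours is combined pairwise, each child is improved by a simple swap-descent, and the best tours survive. The coordinates are made up, and the paper's actual method additionally handles a heterogeneous fleet, time windows and split deliveries.

```python
# Toy scatter-search loop on a 6-customer single-vehicle routing instance:
# build a reference set from improved random tours, combine pairs (order
# crossover), improve children by pairwise-swap descent, keep the best.
import itertools
import math
import random

random.seed(7)
pts = [(0, 0), (2, 6), (5, 1), (6, 5), (1, 3), (4, 4)]  # made-up locations

def length(tour):
    return sum(math.dist(pts[a], pts[b])
               for a, b in zip(tour, list(tour[1:]) + [tour[0]]))

def improve(tour):
    """Local search: swap two positions while any swap shortens the tour."""
    tour, better = list(tour), True
    while better:
        better = False
        for i, j in itertools.combinations(range(len(tour)), 2):
            cand = tour[:]
            cand[i], cand[j] = cand[j], cand[i]
            if length(cand) < length(tour):
                tour, better = cand, True
    return tour

def combine(a, b):
    """Order-crossover style combination of two parent tours."""
    head = a[: len(a) // 2]
    return head + [c for c in b if c not in head]

refset = [improve(random.sample(range(6), 6)) for _ in range(5)]
for _ in range(10):
    children = [improve(combine(x, y))
                for x, y in itertools.combinations(refset, 2)]
    refset = sorted(refset + children, key=length)[:5]
best = refset[0]
print(length(best))
```

The real algorithm keeps the reference set both high-quality and diverse and uses routing-specific combination and improvement moves; the skeleton above only shows the generate-combine-improve-update cycle that defines scatter search.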