960 results for Equilibrium calculation
Abstract:
A model for binary mixture adsorption accounting for energetic heterogeneity and intermolecular interactions is proposed in this paper. The model is based on statistical thermodynamics and is able to describe molecular rearrangement of a mixture in a nonuniform adsorption field inside a cavity. The Helmholtz free energy obtained in the framework of this approach has upper and lower limits, which define a permissible range in which all possible solutions will be found. One limit corresponds to a completely chaotic distribution of molecules within a cavity, while the other corresponds to a maximally ordered molecular structure. Comparison of the nearly ideal O2-N2-zeolite NaX system at ambient temperature with the O2-N2-zeolite CaX system at 144 K has shown that a decrease of temperature leads to a molecular rearrangement in the cavity volume, which results from the difference in the fluid-solid interactions. The model is able to describe this behavior and therefore allows predicting mixture adsorption more accurately than models assuming energetic uniformity of the adsorption volume. Another feature of the model is its ability to correctly describe the negative deviations from Raoult's law exhibited by the O2-N2-CaX system at 144 K. Analysis of the highly nonideal CO2-C2H6-zeolite NaX system has shown that the spatial molecular rearrangement in separate cavities is induced not only by the ion-quadrupole interaction of the CO2 molecule but also by the significant difference in molecular size and the difference between the intermolecular interactions of molecules of the same species and those of molecules of different species. This leads to the highly ordered structure of this system.
Abstract:
The beta-strand conformation is unknown for short peptides in aqueous solution, yet it is a fundamental building block in proteins and the crucial recognition motif for proteolytic enzymes that enable formation and turnover of all proteins. To create a generalized scaffold as a peptidomimetic that is preorganized in a beta-strand, we individually synthesized a series of 15-22-membered macrocyclic analogues of tripeptides and analyzed their structures. Each cycle is highly constrained by two trans amide bonds and a planar aromatic ring with a short nonpeptidic linker between them. A measure of this ring strain is the restricted rotation of the component tyrosinyl aromatic ring (ΔG(rot) 76.7 kJ mol⁻¹ (16-membered ring), 46.1 kJ mol⁻¹ (17-membered ring)) evidenced by variable-temperature proton NMR spectra (DMF-d7, 200-400 K). Unusually large amide coupling constants (³J(NH-CHα) 9-10 Hz) corresponding to large dihedral angles were detected in both protic and aprotic solvents for these macrocycles, consistent with a high degree of structure in solution. The temperature dependence of all amide NH chemical shifts (Δδ/ΔT 7-12 ppb/deg) precluded the presence of transannular hydrogen bonds that define alternative turn structures. Whereas similar-sized conventional cyclic peptides usually exist in solution as an equilibrium mixture of multiple conformers, these macrocycles adopt a well-defined beta-strand structure even in water, as revealed by 2-D NMR spectral data and by a structure calculation for the smallest (15-membered) and most constrained macrocycle. Macrocycles that are sufficiently constrained to exclusively adopt a beta-strand-mimicking structure in water may be useful pre-organized and generic templates for the design of compounds that interfere with beta-strand recognition in biology.
Abstract:
Many large-scale stochastic systems, such as telecommunications networks, can be modelled using a continuous-time Markov chain. However, it is frequently the case that a satisfactory analysis of their time-dependent, or even equilibrium, behaviour is impossible. In this paper, we propose a new method of analysing Markovian models, whereby the existing transition structure is replaced by a more amenable one. Using rates of transition given by the equilibrium expected rates of the corresponding transitions of the original chain, we are able to approximate its behaviour. We present two formulations of the idea of expected rates. The first provides a method for analysing time-dependent behaviour, while the second provides a highly accurate means of analysing equilibrium behaviour. We shall illustrate our approach with reference to a variety of models, giving particular attention to queueing and loss networks. (C) 2003 Elsevier Ltd. All rights reserved.
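The equilibrium expected rates the authors build on are straightforward to compute once the stationary distribution of the original chain is known. A minimal sketch (the 3-state generator below is an invented example, not a model from the paper):

```python
# Sketch: equilibrium distribution of a small continuous-time Markov chain,
# obtained by solving pi @ Q = 0 with sum(pi) = 1, and the equilibrium
# expected transition rates pi[i] * Q[i, j] derived from it.
import numpy as np

def equilibrium(Q):
    """Stationary distribution pi of generator Q: pi @ Q = 0, pi.sum() == 1."""
    n = Q.shape[0]
    # Replace one balance equation with the normalization constraint.
    A = np.vstack([Q.T[:-1], np.ones(n)])
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

# Illustrative 3-state generator (rows sum to zero).
Q = np.array([[-2.0,  1.0,  1.0],
              [ 1.0, -3.0,  2.0],
              [ 0.5,  0.5, -1.0]])
pi = equilibrium(Q)
# Equilibrium expected rate of each transition i -> j.
expected_rates = pi[:, None] * Q
```

These expected rates are what would replace the original transition structure in the approximating chain.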
Abstract:
Let X and Y be Hausdorff topological vector spaces, K a nonempty, closed, and convex subset of X, and C : K → 2^Y a point-to-set mapping such that for any x ∈ K, C(x) is a pointed, closed, and convex cone in Y with int C(x) ≠ ∅. Given a mapping g : K → K and a vector-valued bifunction f : K × K → Y, we consider the implicit vector equilibrium problem (IVEP) of finding x* ∈ K such that f(g(x*), y) ∉ −int C(x*) for all y ∈ K. This problem generalizes the (scalar) implicit equilibrium problem and the implicit variational inequality problem. We propose the dual of the implicit vector equilibrium problem (DIVEP) and establish the equivalence between (IVEP) and (DIVEP) under certain assumptions. Also, we give characterizations of the set of solutions for (IVEP) in the cases of nonmonotonicity, weak C-pseudomonotonicity, C-pseudomonotonicity, and strict C-pseudomonotonicity, respectively. Under these assumptions, we conclude that the sets of solutions are nonempty, closed, and convex. Finally, we give some applications of (IVEP) to vector variational inequality problems and vector optimization problems. (C) 2003 Elsevier Science Ltd. All rights reserved.
Abstract:
There has been a resurgence of interest in the mean trace length estimator of Pahl for window sampling of traces. The estimator has been dealt with by Mauldon and by Zhang and Einstein in recent publications. The estimator is a very useful one in that it is non-parametric. However, despite some discussion regarding the statistical distribution of the estimator, none of the recent works, nor the original work by Pahl, provides a rigorous basis for the determination of a confidence interval for the estimator, or of a confidence region for the estimator together with the corresponding estimator of trace spatial intensity in the sampling window. This paper shows, by consideration of a simplified version of the problem but without loss of generality, that the estimator is in fact the maximum likelihood estimator (MLE) and that it can be considered essentially unbiased. As the MLE, it possesses the least variance of all estimators, and confidence intervals or regions should therefore be available through application of classical ML theory. It is shown that valid confidence intervals can in fact be determined. The results of the work and the calculations of the confidence intervals are illustrated by example. (C) 2003 Elsevier Science Ltd. All rights reserved.
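As an illustration of the classical ML theory invoked above, the sketch below computes an MLE and its asymptotic confidence interval for a simple exponential model; this is a generic stand-in for the idea, not Pahl's trace length estimator:

```python
# Sketch: for an Exp(mean=mu) sample, the MLE of mu is the sample mean and
# its asymptotic standard error (from the Fisher information) is mu_hat/sqrt(n),
# giving a normal-approximation confidence interval.
import math
import random

def mean_mle_ci(xs, z=1.96):
    """MLE of the exponential mean plus an asymptotic ~95% confidence interval."""
    n = len(xs)
    mu_hat = sum(xs) / n              # maximum likelihood estimate
    se = mu_hat / math.sqrt(n)        # asymptotic standard error
    return mu_hat, (mu_hat - z * se, mu_hat + z * se)

random.seed(0)
sample = [random.expovariate(1 / 5.0) for _ in range(2000)]  # true mean 5.0
mu_hat, (lo, hi) = mean_mle_ci(sample)
```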
Abstract:
Bound and resonance states of HO2 have been calculated quantum mechanically by the Lanczos homogeneous filter diagonalization method [Zhang and Smith, Phys. Chem. Chem. Phys. 3, 2282 (2001); J. Chem. Phys. 115, 5751 (2001)] for nonzero total angular momentum J = 1, 2, 3. For lower bound states, agreement between the results in this paper and previous work is quite satisfactory, while for high-lying bound states and resonances these are the first reported results. A helicity quantum number V assignment (within the helicity conserving approximation) is performed, and the results indicate that for lower bound states it is possible to assign the V quantum numbers unambiguously, but for resonances it is impossible to assign the V helicity quantum numbers due to strong mixing. In fact, for the high-lying bound states, the mixing has already appeared. These results indicate that the helicity conserving approximation is not good for resonance state calculations and that exact quantum calculations are needed to accurately describe the reaction dynamics of the HO2 system. Analysis of the resonance widths shows that most of the resonances are overlapping and the interferences between them lead to large fluctuations from one resonance to another. In accord with the conclusions from earlier J = 0 calculations, this indicates that the dissociation of HO2 is essentially irregular. (C) 2003 American Institute of Physics.
Abstract:
In modern magnetic resonance imaging (MRI), patients are exposed to strong, nonuniform static magnetic fields outside the central imaging region, in which movement of the body may induce electric currents in tissues that could possibly be harmful. This paper presents theoretical investigations into the spatial distribution of induced electric fields and currents in the patient when moving into the MRI scanner, and also for head motion at various positions in the magnet. The numerical calculations are based on an efficient, quasi-static, finite-difference scheme and an anatomically realistic, full-body, male model. 3D field profiles from an actively shielded 4 T magnet system are used, and the body model is projected through the field profile with a range of velocities. The simulation shows that it is possible to induce electric fields/currents near the level of physiological significance under some circumstances, and provides insight into the spatial characteristics of the induced fields. The results are extrapolated to very high field strengths, and tabulated data show the expected induced currents and fields as functions of both movement velocity and field strength. (C) 2003 Elsevier Science (USA). All rights reserved.
Abstract:
The aim of this study was to compare accumulated oxygen deficit data derived using two different exercise protocols, with a view to producing a less time-consuming test specifically for use with athletes. Six road and four track male endurance cyclists performed two series of cycle ergometer tests. The first series involved five 10 min submaximal cycle exercise bouts, a V̇O2peak test and a 115% V̇O2peak test. Data from these tests were used to estimate the accumulated oxygen deficit according to the calculations of Medbo et al. (1988). In the second series of tests, participants performed a 15 min incremental cycle ergometer test followed, 2 min later, by a 2 min variable resistance test in which they completed as much work as possible while pedalling at a constant rate. Analysis revealed that the accumulated oxygen deficit calculated from the first series of tests was higher (P < 0.02) than that calculated from the second series: 52.3 ± 11.7 and 43.9 ± 6.4 ml·kg⁻¹, respectively (mean ± s). Other significant differences between the two protocols were observed for V̇O2peak, total work and maximal heart rate; all were higher during the modified protocol (P
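The accumulated oxygen deficit arithmetic of Medbo et al. (1988) referred to above can be sketched as follows: fit a linear V̇O2-versus-power relation from the submaximal bouts, extrapolate it to the supramaximal power, and subtract the measured uptake. All numbers below are invented for illustration:

```python
# Sketch of the accumulated O2 deficit (AOD) calculation:
# AOD = extrapolated O2 demand * duration - accumulated measured O2 uptake.
def accumulated_o2_deficit(powers, vo2s, supra_power, duration_min, measured_o2):
    """Least-squares line through (power, VO2) points, extrapolated to supra_power."""
    n = len(powers)
    mean_p = sum(powers) / n
    mean_v = sum(vo2s) / n
    slope = (sum((p - mean_p) * (v - mean_v) for p, v in zip(powers, vo2s))
             / sum((p - mean_p) ** 2 for p in powers))
    intercept = mean_v - slope * mean_p
    demand = intercept + slope * supra_power      # ml/kg/min at supramaximal power
    return demand * duration_min - measured_o2    # ml/kg

# Five submaximal bouts: power (W) and steady-state VO2 (ml/kg/min), invented.
powers = [100, 150, 200, 250, 300]
vo2s = [20.0, 28.0, 36.0, 44.0, 52.0]
aod = accumulated_o2_deficit(powers, vo2s, supra_power=400,
                             duration_min=2.5, measured_o2=120.0)
```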
Abstract:
Program slicing is a well-known family of techniques used to identify code fragments which depend on, or are depended upon by, specific program entities. They are particularly useful in the areas of reverse engineering, program understanding, testing and software maintenance. Most slicing methods, usually oriented towards the imperative or object paradigms, are based on some sort of graph structure representing program dependencies. Slicing techniques amount, therefore, to (sophisticated) graph traversal algorithms. This paper proposes a completely different approach to the slicing problem for functional programs. Instead of extracting program information to build an underlying dependency structure, we resort to standard program calculation strategies, based on the so-called Bird-Meertens formalism. The slicing criterion is specified either as a projection or a hiding function which, once composed with the original program, leads to the identification of the intended slice. Going through a number of examples, the paper suggests this approach may be an interesting, even if not completely general, alternative to slicing functional programs.
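In any language with first-class functions, the core idea can be sketched directly: composing a projection (the slicing criterion) with the original program identifies the slice. A toy Python illustration, outside the Bird-Meertens calculus itself; the `stats` program and `fst` projection are invented examples, not taken from the paper:

```python
# Sketch: a slicing criterion as a projection composed with the program.
def stats(xs):
    """Original program: returns a pair of independently computed results."""
    total = sum(xs)
    biggest = max(xs)
    return (total, biggest)

def fst(pair):
    """Projection acting as the slicing criterion: keep the first component."""
    return pair[0]

# The composite fst . stats only depends on the `total` computation,
# identifying the slice of `stats` with respect to `fst`:
sliced = lambda xs: fst(stats(xs))

# An equivalent reduced program (the intended slice) computes only the sum:
slice_of_stats = lambda xs: sum(xs)
```

The calculational approach derives `slice_of_stats` from `fst . stats` by equational reasoning rather than by graph traversal.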
Abstract:
The main intent of this work is to determine the Specific Absorption Rate (SAR) in human head tissues exposed to radiation from 900 and 1800 MHz sources, since those are the typical frequencies for mobile communication systems nowadays. To determine the SAR, the FDTD (Finite-Difference Time-Domain) method was used, a numerical method in the time domain derived from the Maxwell equations in differential form. To this end, a two-dimensional computational model of the human head was implemented, with cells of the smallest size that the available computational processing allowed. It was possible to verify the very good efficiency of the FDTD method in the resolution of this type of problem.
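A minimal one-dimensional sketch of the FDTD update equations (free space, normalized units, no tissue model or SAR evaluation, both of which the paper's 2-D head model adds):

```python
# Sketch: 1-D FDTD leapfrog updates for Ez and Hy in free space, with a
# Gaussian soft source. Units are normalized so the Courant factor is 0.5,
# which keeps the scheme stable.
import math

def fdtd_1d(steps=200, size=200, src=100):
    ez = [0.0] * size
    hy = [0.0] * size
    for t in range(steps):
        # Update E from the curl of H.
        for k in range(1, size):
            ez[k] += 0.5 * (hy[k - 1] - hy[k])
        # Inject a Gaussian pulse at the source cell (soft source).
        ez[src] += math.exp(-((t - 30) ** 2) / 100.0)
        # Update H from the curl of E.
        for k in range(size - 1):
            hy[k] += 0.5 * (ez[k] - ez[k + 1])
    return ez

field = fdtd_1d()
```

A SAR computation would add per-cell permittivity and conductivity for each tissue and accumulate sigma*|E|^2/(2*rho) over the head model.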
Abstract:
We calculate the equilibrium thermodynamic properties, percolation threshold, and cluster distribution functions for a model of associating colloids, which consists of hard spherical particles having on their surfaces three short-ranged attractive sites (sticky spots) of two different types, A and B. The thermodynamic properties are calculated using Wertheim's perturbation theory of associating fluids. This also allows us to find the onset of self-assembly, which can be quantified by the maxima of the specific heat at constant volume. The percolation threshold is derived, under the no-loop assumption, for the correlated bond model: in all cases it is two percolated phases that become identical at a critical point, when one exists. Finally, the cluster size distributions are calculated by mapping the model onto an effective model, characterized by a state-dependent functionality f̄ and unique bonding probability p̄. The mapping is based on the asymptotic limit of the cluster distribution functions of the generic model, and the effective parameters are defined through the requirement that the equilibrium cluster distributions of the true and effective models have the same number-averaged and weight-averaged sizes at all densities and temperatures. We also study the model numerically in the case where BB interactions are missing. In this limit, AB bonds either provide branching between A-chains (Y-junctions) if εAB/εAA is small, or drive the formation of a hyperbranched polymer if εAB/εAA is large. We find that the theoretical predictions describe quite accurately the numerical data, especially in the region where Y-junctions are present. There is fairly good agreement between theoretical and numerical results both for the thermodynamic (number of bonds and phase coexistence) and the connectivity properties of the model (cluster size distributions and percolation locus).
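Under the no-loop assumption mentioned above, tree-like (Flory-Stockmayer) statistics give closed forms for the percolation threshold and average cluster sizes of an f-functional fluid; the sketch below uses these textbook formulas as an illustration, not the paper's correlated bond model:

```python
# Sketch: Flory-Stockmayer (loopless) results for particles of functionality f
# with bond probability p: percolation at p = 1/(f-1), and number-/weight-
# averaged cluster sizes below the threshold.
def percolation_threshold(f):
    """Bond probability at which a tree-like f-functional fluid percolates."""
    return 1.0 / (f - 1)

def number_average_size(f, p):
    """Number-averaged cluster size: each bond merges two clusters."""
    return 1.0 / (1.0 - f * p / 2.0)

def weight_average_size(f, p):
    """Weight-averaged cluster size; diverges as p approaches 1/(f-1)."""
    return (1.0 + p) / (1.0 - (f - 1.0) * p)
```

For the three-site particles of the abstract (f = 3 when all sites are equivalent), the threshold is p = 1/2, and the divergence of the weight-averaged size signals the percolation locus.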
Abstract:
Reinforcement Learning is an area of Machine Learning that deals with how an agent should take actions in an environment such as to maximize the notion of accumulated reward. This type of learning is inspired by the way humans learn and has led to the creation of various algorithms for reinforcement learning. These algorithms focus on the way in which an agent’s behaviour can be improved, assuming independence as to their surroundings. The current work studies the application of reinforcement learning methods to solve the inverted pendulum problem. The importance of the variability of the environment (factors that are external to the agent) on the execution of reinforcement learning agents is studied by using a model that seeks to obtain equilibrium (stability) through dynamism – a Cart-Pole system or inverted pendulum. We sought to improve the behaviour of the autonomous agents by changing the information passed to them, while maintaining the agent’s internal parameters constant (learning rate, discount factors, decay rate, etc.), instead of the classical approach of tuning the agent’s internal parameters. The influence of changes on the state set and the action set on an agent’s capability to solve the Cart-pole problem was studied. We have studied typical behaviour of reinforcement learning agents applied to the classic BOXES model and a new form of characterizing the environment was proposed using the notion of convergence towards a reference value. We demonstrate the gain in performance of this new method applied to a Q-Learning agent.
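The tabular Q-Learning update underlying such agents can be sketched on a toy problem; the 5-state corridor below is an invented stand-in for the Cart-Pole task, with the agent's internal parameters held constant during learning, as in the study's approach:

```python
# Sketch: tabular Q-Learning on a 5-state corridor. The agent starts at
# state 0 and is rewarded on reaching state 4; alpha, gamma, and epsilon
# are fixed throughout.
import random

random.seed(1)
N_STATES = 5
ACTIONS = (-1, +1)                       # step left / step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.3        # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy action selection.
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # Q-Learning update: bootstrap on the greedy next-state value.
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# After learning, the greedy policy moves right from every interior state.
greedy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)}
```

Changing the state set (e.g. the BOXES discretization) or the action set changes what the table above ranges over, which is exactly the experimental axis the study explores.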
Abstract:
OBJECTIVE: Describe the overall transmission of malaria through a compartmental model, considering the human host and mosquito vector. METHODS: A mathematical model was developed based on the following parameters: human host immunity, assuming the existence of acquired immunity and immunological memory, which boosts the protective response upon reinfection; mosquito vector, taking into account that the average period of development from egg to adult mosquito and the extrinsic incubation period of parasites (transformation of infected but non-infectious mosquitoes into infectious mosquitoes) are dependent on the ambient temperature. RESULTS: The steady state equilibrium values obtained with the model allowed the calculation of the basic reproduction ratio in terms of the model's parameters. CONCLUSIONS: The model allowed the calculation of the basic reproduction ratio, one of the most important epidemiological variables.
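The basic reproduction ratio of a host-vector model can be illustrated with the classical Ross-Macdonald formula; the model in the abstract adds acquired immunity and temperature-dependent mosquito development, which are omitted here, and all parameter values below are invented:

```python
# Sketch: basic reproduction ratio R0 of a Ross-Macdonald-style malaria model.
import math

def r0(m, a, b, c, r, g, tau):
    """R0 = (m a^2 b c / (r g)) * exp(-g tau).

    m:   mosquitoes per human            a: bites per mosquito per day
    b:   mosquito-to-human transmission  c: human-to-mosquito transmission
    r:   human recovery rate             g: mosquito death rate
    tau: extrinsic incubation period (days); exp(-g*tau) is the probability
         that an infected mosquito survives incubation and becomes infectious.
    """
    return (m * a ** 2 * b * c / (r * g)) * math.exp(-g * tau)

value = r0(m=2.0, a=0.3, b=0.5, c=0.5, r=0.01, g=0.1, tau=10.0)
```

An epidemic can take off when R0 > 1; the steady-state analysis in the paper expresses this threshold in terms of its model's parameters.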