914 results for Feynman-Kac formula Markov semigroups principal eigenvalue
Abstract:
A quantitative structure-activity relationship (QSAR) study of 19 quinone compounds with trypanocidal activity was performed using the Partial Least Squares (PLS) and Principal Component Regression (PCR) methods, with a leave-one-out cross-validation procedure to build the regression models. The trypanocidal activity of the compounds is related to their first cathodic potential (Ep(c1)). The PLS and PCR regression models built in this study were also used to predict the Ep(c1) of six new quinone compounds. The PLS model was built with three principal components that described 96.50% of the total variance and presented Q² = 0.83 and R² = 0.90. The results obtained with the PCR model were similar: it was also built with three principal components, describing 96.67% of the total variance, with Q² = 0.83 and R² = 0.90. The most important descriptors for the PLS and PCR models were HOMO-1 (energy of the molecular orbital below the HOMO), Q4 (atomic charge at position 4), MAXDN (maximal electrotopological negative difference), and HYF (hydrophilicity index).
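As a rough illustration of the modelling workflow described in this abstract (not the authors' data: the descriptor matrix, targets, and dimensions below are hypothetical placeholders), the following Python sketch builds three-component PLS and PCR models, scores them by leave-one-out cross-validation, and reports Q² and R²:

```python
# Minimal sketch of a PLS / PCR workflow with leave-one-out cross-validation.
# X and y are random placeholders standing in for the 19 compounds' descriptors
# (HOMO-1, Q4, MAXDN, HYF, ...) and their measured Ep(c1) values.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(19, 8))   # 19 compounds, hypothetical descriptor block
y = rng.normal(size=19)        # stand-in for the measured Ep(c1) values

models = {
    "PLS": PLSRegression(n_components=3),
    "PCR": make_pipeline(PCA(n_components=3), LinearRegression()),
}
for name, model in models.items():
    y_cv = cross_val_predict(model, X, y, cv=LeaveOneOut()).ravel()
    press = np.sum((y - y_cv) ** 2)                   # predictive residual sum of squares
    q2 = 1.0 - press / np.sum((y - y.mean()) ** 2)    # leave-one-out Q^2
    r2 = model.fit(X, y).score(X, y)                  # fitted R^2
    print(f"{name}: Q2 = {q2:.2f}, R2 = {r2:.2f}")
```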
Abstract:
When the X̄ chart is in use, samples are regularly taken from the process and their means are plotted on the chart. In some cases it is too expensive to obtain the X values, but not the values of a correlated variable Y. This paper presents a model for the economic design of a two-stage control chart, that is, a control chart based on both performance (X) and surrogate (Y) variables. The process is monitored by the surrogate variable until it signals an out-of-control behavior, and then a switch is made to the X̄ chart. The X̄ chart is built with central, warning, and action regions. If an X sample mean falls in the central region, the process surveillance returns to the Ȳ chart; otherwise, the process remains under the X̄ chart's surveillance until an X̄ sample mean falls outside the control limits. The search for an assignable cause is undertaken when the performance variable signals an out-of-control behavior. In this way, the two variables are used in an alternating fashion. The assumption of an exponential distribution to describe the length of time the process remains in control allows the application of the Markov chain approach for developing the cost function. A study is performed to examine the economic advantages of using performance and surrogate variables. © 2003 Elsevier B.V. All rights reserved.
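A minimal simulation sketch of the alternating surveillance policy just described (all limits, the sample size, and the shift magnitude are illustrative assumptions; in the paper these come out of the economic Markov-chain design, and Y is a distinct correlated variable rather than a copy of X):

```python
# Toy simulation of the two-stage (surrogate Y / performance X) policy.
# For simplicity the surrogate samples are drawn from the same shifted
# normal distribution as X; in the model Y is only correlated with X.
import numpy as np

rng = np.random.default_rng(1)
K_Y, W_X, K_X = 3.0, 1.0, 3.0   # Y action limit; X-bar warning and action limits
n = 4                            # sample size
shift = 1.0                      # sustained mean shift (out-of-control state)

def std_mean(mu):
    """Standardized sample mean against the in-control target 0."""
    return rng.normal(mu, 1.0, size=n).mean() * np.sqrt(n)

def samples_to_signal(mu):
    count, stage = 0, "Y"
    while True:
        count += 1
        z = std_mean(mu)
        if stage == "Y":
            if abs(z) > K_Y:       # surrogate signals: switch to the X-bar chart
                stage = "X"
        elif abs(z) <= W_X:        # central region: surveillance returns to Y
            stage = "Y"
        elif abs(z) > K_X:         # action region: search for an assignable cause
            return count
        # warning region: stay on the X-bar chart and sample again

runs = [samples_to_signal(shift) for _ in range(2000)]
print("mean samples to signal under a 1-sigma shift:", np.mean(runs))
```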
Abstract:
Ultrastructural observations of the principal cells of the epithelium lining the proximal caput epididymis in albino rats after 180 days of experimental alcohol treatment showed pyknotic nuclei, ill-defined cellular organelles, and clusters of electron-dense bodies, possibly lysosomes. A progressive accumulation of lipid droplets was also verified, initially in the basal and perinuclear cytoplasm and finally in the apical cytoplasm of the principal cells, at 60, 120, and 180 days of experimentation, respectively. The clear cells of alcoholic rats at 180 days showed cytoplasm totally filled with lipid droplets. These findings were compared with the morphological features of the same epididymal cells in control (normal) rats.
Abstract:
This paper presents an economic design of X̄ control charts with variable sample sizes, variable sampling intervals, and variable control limits. The sample size n, the sampling interval h, and the control limit coefficient k vary between minimum and maximum values, tightening or relaxing the control. The control is relaxed when an X̄ value falls close to the target and is tightened when an X̄ value falls far from the target. A cost model is constructed that involves the cost of false alarms, the cost of finding and eliminating the assignable cause, the cost associated with production in an out-of-control state, and the cost of sampling and testing. The assumption of an exponential distribution to describe the length of time the process remains in control allows the application of the Markov chain approach for developing the cost function. A comprehensive study is performed to examine the economic advantages of varying the X̄ chart parameters.
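A minimal sketch of the adaptive rule described above, with two illustrative parameter sets (the paper optimizes n, h, and k economically; the values below are assumptions for demonstration only):

```python
# Sketch of a variable-parameters X-bar scheme: the next sample's size n,
# interval h, and limit coefficient k depend on where the last mean fell.
import numpy as np

rng = np.random.default_rng(2)
RELAXED = dict(n=3, h=2.0, k=3.2)   # small sample, long interval, wide limits
TIGHT   = dict(n=8, h=0.5, k=2.8)   # large sample, short interval, narrow limits
w = 1.0                              # warning limit separating central/warning zones

params, t = RELAXED, 0.0
for _ in range(20):
    t += params["h"]                                  # wait the current interval
    z = rng.normal(0.0, 1.0, params["n"]).mean() * np.sqrt(params["n"])
    if abs(z) > params["k"]:
        print(f"t={t:.1f}: signal, search for the assignable cause")
        break
    # relax control after a central-region mean, tighten after a warning-region one
    params = RELAXED if abs(z) <= w else TIGHT
else:
    print("no signal in 20 samples")
```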
Abstract:
The nutritional management of infants admitted with diarrhoea to the University Hospital of Botucatu includes a change from bolus feeding of a modulated minced-chicken formula to continuous nasogastric drip (NGD) feeding whenever the required calorie intake is not achieved or the diarrhoea does not subside. To evaluate this approach, the clinical course and weight changes of 63 children, aged 1-20 months, were reviewed; most (81 per cent) were below the third percentile for weight at admission, and 76 per cent had a total duration of diarrhoea ≥10 days. Associated infections, mainly systemic, were present at or after admission in 70 per cent of them. Twenty-five survivors needed nutritional support (NS), predominantly NGD, for a median duration of 30 per cent of their admission time, and were compared with 31 survivors managed without NS. Those who required NS lost weight for a significantly longer median time (12 vs. 4 days, p < 0.005), but their total weight loss was similar (5 vs. 4 per cent), as was the duration of diarrhoea (8 vs. 7 days). There was a tendency toward longer hospitalization (21 vs. 16 days, 0.05
Abstract:
Using the coadjoint orbit method, we derive a geometric WZWN action based on the extended two-loop Kac-Moody algebra. We show that under a Hamiltonian reduction procedure which respects conformal invariance, we obtain a hierarchy of Toda-type field theories, which contain as submodels the Toda molecule and periodic Toda lattice theories. We also discuss the classical r-matrix and integrability properties.
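For orientation, the affine (one-loop) Kac-Moody algebra on which such constructions are built has the standard centrally extended commutation relations below; the two-loop algebra of the abstract is a further extension of this structure (this is a textbook formula, not the paper's specific algebra):

```latex
% Standard centrally extended affine Kac-Moody commutation relations
% (common normalization); f^{ab}{}_c are the structure constants of the
% underlying finite-dimensional Lie algebra and \hat{c} is central.
\begin{equation}
  [T^{a}_{m}, T^{b}_{n}] \;=\; f^{ab}{}_{c}\, T^{c}_{m+n}
  \;+\; \hat{c}\, m\, \delta^{ab}\, \delta_{m+n,0}
\end{equation}
```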
Abstract:
We investigate higher-grading integrable generalizations of the affine Toda systems, where the flat connections defining the models take values in eigensubspaces of an integral gradation of an affine Kac-Moody algebra, with grades varying from l to -l (l > 1). The corresponding target space possesses nontrivial vacua and soliton configurations, which can be interpreted as particles of the theory, on the same footing as those associated with fundamental fields. The models can also be formulated by a Hamiltonian reduction procedure from the so-called two-loop WZNW models. We construct the general solution and show the classes corresponding to the solitons. Some of the particles and solitons become massive when the conformal symmetry is spontaneously broken by a mechanism with an intriguing topological character, leading to a very simple mass formula. The massive fields associated with nonzero-grade generators obey field equations of the Dirac type and may be regarded as matter fields. A special class of models is remarkable: these theories possess a U(1) Noether current which, after a special gauge fixing of the conformal symmetry, is proportional to a topological current. This leads to the confinement of the matter field inside the solitons, which can be regarded as a one-dimensional bag model for QCD. These models are also relevant to the study of electron self-localization in (quasi-)one-dimensional electron-phonon systems.
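Schematically, models of this type are specified by a zero-curvature condition on light-cone connections valued in the graded subspaces mentioned in the abstract (a generic formulation of integrable field theories, sketched here under the grade assignment the abstract suggests):

```latex
% Zero-curvature (flatness) condition defining the model; schematically,
% A_+ and A_- take values in the non-negative and non-positive grade
% subspaces of the graded affine algebra, with grades |i| <= l.
\begin{equation}
  \partial_{+} A_{-} \;-\; \partial_{-} A_{+} \;+\; [A_{+}, A_{-}] \;=\; 0,
  \qquad
  A_{\pm} \in \bigoplus_{i=0}^{l} \hat{\mathcal{G}}_{\pm i}
\end{equation}
```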
Abstract:
The Weyl-Wigner correspondence prescription, which makes great use of Fourier duality, is reexamined from the point of view of Kac algebras, the most general background for noncommutative Fourier analysis allowing for that property. It is shown how the standard Kac structure has to be extended in order to accommodate the physical requirements. Both an Abelian and a symmetric projective Kac algebra are shown to provide, in close parallel to the standard case, a new dual framework and a well-defined notion of projective Fourier duality for the group of translations on the plane. The Weyl formula arises naturally as an irreducible component of the duality mapping between these projective algebras.
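For reference, the Weyl formula mentioned at the end of the abstract is the standard quantization map below (quoted here in one common normalization; the paper's contribution is to recover it as a component of a projective Kac-algebra duality):

```latex
% Standard Weyl quantization map: f is a function on phase space,
% \tilde{f} its Fourier transform, and \hat{q}, \hat{p} the canonical
% position and momentum operators.
\begin{equation}
  W(f) \;=\; \int \frac{d\xi\, d\eta}{(2\pi)^{2}}\;
  \tilde{f}(\xi,\eta)\; e^{\,i(\xi \hat{q} + \eta \hat{p})},
  \qquad
  \tilde{f}(\xi,\eta) \;=\; \int dq\, dp\; f(q,p)\; e^{-i(\xi q + \eta p)}
\end{equation}
```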
Abstract:
A methodology to define favorable areas in petroleum and mineral exploration is applied, which consists of weighting the exploratory variables in order to characterize their importance as exploration guides. The exploration data are spatially integrated in the selected area to establish the association between variables and deposits, and the relationships among the distribution, topology, and indicator pattern of all variables. Two methods of statistical analysis were compared. The first is Weights of Evidence Modeling, a conditional probability approach (Agterberg, 1989a); the second is Principal Components Analysis (Pan, 1993). In the conditional method, the favorability estimation is based on the probability of joint occurrence of deposit and variable, with the weights defined as natural logarithms of likelihood ratios. In the multivariate analysis, the cells which contain deposits are selected as control cells, and the weights are determined by eigendecomposition, being represented by the coefficients of the eigenvector related to the system's largest eigenvalue. The two weighting techniques and complementary procedures were tested on two case studies: 1. the Recôncavo Basin, Northeast Brazil (for petroleum), and 2. the Itaiacoca Formation of the Ribeira Belt, Southeast Brazil (for Pb-Zn Mississippi Valley Type deposits). The applied methodology proved easy to use and of great assistance in predicting favorability over large areas, particularly in the initial phase of exploration programs. © 1998 International Association for Mathematical Geology.
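A minimal sketch of the weights-of-evidence calculation referred to above, using made-up cell counts (the log-likelihood-ratio form of the weights is standard; the numbers are not from the case studies):

```python
# Weights of evidence for one binary exploration variable B and deposits D:
# W+ = ln[ P(B|D) / P(B|~D) ],  W- = ln[ P(~B|D) / P(~B|~D) ].
import math

N    = 10_000   # total cells in the study area (hypothetical)
n_D  = 40       # cells containing a deposit
n_B  = 1_200    # cells where the evidence pattern B is present
n_BD = 25       # cells with both the pattern and a deposit

p_B_given_D    = n_BD / n_D                   # P(B | D)
p_B_given_notD = (n_B - n_BD) / (N - n_D)     # P(B | ~D)

w_plus  = math.log(p_B_given_D / p_B_given_notD)
w_minus = math.log((1 - p_B_given_D) / (1 - p_B_given_notD))
print(f"W+ = {w_plus:.2f}  W- = {w_minus:.2f}  contrast C = {w_plus - w_minus:.2f}")
```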
Abstract:
The negative-dimensional integration method (NDIM) seems to be a very promising technique for evaluating massless and/or massive Feynman diagrams. It is unique in the sense that the method gives solutions in different regions of external momenta simultaneously. Moreover, it is a technique whereby the difficulties associated with performing parametric integrals in the standard approach are transferred to the simpler task of solving a system of linear algebraic equations, thanks to the polynomial character of the relevant integrands. We employ this method to evaluate a scalar integral for a massless two-loop three-point vertex with all the external legs off-shell, and consider several special cases of it, yielding results even for distinct simpler diagrams. We also consider the possibility of NDIM in noncovariant gauges such as the light-cone gauge and do some illustrative calculations, showing that for one-degree violation of covariance (i.e. one external, gauge-breaking, light-like vector n_μ) the ensuing results are concordant with the ones obtained via either the usual dimensional regularization technique or the use of the principal value prescription for the gauge-dependent pole, while for two-degree violation of covariance (i.e. two external, light-like vectors n_μ, the gauge-breaking one, and its dual n*_μ) the ensuing results are concordant with the ones obtained via causal constraints or the use of the so-called generalized Mandelstam-Leibbrandt prescription. © 1999 Elsevier Science B.V.
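The engine behind NDIM is a Gaussian generating integral: expanding it in powers of α and matching term by term ties positive integer powers of q² to negative dimensions, which is what makes the integrands polynomial and reduces the problem to linear algebra. A schematic statement of this seed identity, as commonly quoted in the NDIM literature (presented here for orientation, in one common convention):

```latex
% NDIM seed identity (schematic): the Gaussian integral generates the
% negative-dimensional rule for integer powers of q^2 in D = 2*omega dims.
\begin{equation}
  \int d^{2\omega}q\; e^{-\alpha q^{2}}
  = \left(\frac{\pi}{\alpha}\right)^{\!\omega}
  \quad\Longrightarrow\quad
  \int d^{2\omega}q\, (q^{2})^{n}
  = (-1)^{n}\, n!\; \pi^{\omega}\, \delta_{n+\omega,0}
\end{equation}
```

The physical result for a given Feynman integral is then recovered by analytic continuation back to positive dimension and negative powers of the propagators.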
Abstract:
The negative-dimensional integration method (NDIM) is revealing itself as a very useful technique for computing massless and/or massive Feynman integrals, covariant and noncovariant alike. Up until now, however, the illustrative calculations done using this method have mostly been covariant scalar integrals without numerator factors. We show here how integrals with tensorial structures can also be handled straightforwardly and easily. However, contrary to the absence of significant features in the usual approach, here NDIM also yields surprising, unsuspected bonuses. To this end, we present two alternative ways of working out the integrals and illustrate them by taking the easiest Feynman integrals in this category, which emerge in the computation of a standard one-loop self-energy diagram. One of the novel and heretofore unsuspected bonuses is that there are degeneracies in the way one can express the final result for the Feynman integral in question.
Abstract:
We apply the negative-dimensional integration method (NDIM) to three outstanding gauges: the Feynman, light-cone, and Coulomb gauges. Our aim is to show that NDIM is a very suitable technique to deal with loop integrals, regardless of the gauge choice that originated them. In the Feynman gauge we perform scalar two-loop four-point massless integrals; in the light-cone gauge we calculate scalar two-loop integrals contributing to two-point functions without any kind of prescription, since NDIM can dispense with such devices; this calculation is the first test of our prescriptionless method beyond one-loop order. Finally, for the Coulomb gauge we consider a four-propagator massless loop integral in the split-dimensional regularization context. © 2001 Academic Press.