978 results for Exponential Sum
Abstract:
A significant problem in collecting responses to potentially sensitive questions, such as those relating to illegal, immoral or embarrassing activities, is non-sampling error due to refusal to respond or false responses. Eichhorn & Hayre (1983) suggested the use of scrambled responses to reduce this form of bias. This paper considers a linear regression model in which the dependent variable is unobserved, but its sum or product with a scrambling random variable of known distribution is observed. The performance of two likelihood-based estimators is investigated: a Bayesian estimator obtained through a Markov chain Monte Carlo (MCMC) sampling scheme, and a classical maximum-likelihood estimator. These two estimators are then compared with an estimator suggested by Singh, Joarder & King (1996). Monte Carlo results show that the Bayesian estimator outperforms the classical estimators in almost all cases, and that its relative performance improves as the responses become more scrambled.
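For intuition, a minimal sketch of the multiplicative scrambling setup (all values hypothetical): the respondent reports only the product of the true response with a scrambling variable whose distribution is known, and even a naive moment-based estimator recovers the regression coefficients, though less efficiently than the likelihood-based estimators studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta = np.array([2.0, 1.5])
y = X @ beta + rng.normal(scale=0.5, size=n)   # true (sensitive) responses, never seen

# Multiplicative scrambling: respondent reports z = y * s, with s drawn from a
# distribution known by design; here a gamma with E[s] = 1.
s = rng.gamma(shape=25.0, scale=1 / 25.0, size=n)
z = y * s

# Naive moment-based estimator: since E[z | X] = E[s] * (X @ beta),
# OLS of z / E[s] on X is consistent, though noisier than likelihood methods.
Es = 1.0                                        # E[s], known to the analyst
beta_hat = np.linalg.lstsq(X, z / Es, rcond=None)[0]
print(beta_hat)                                 # close to [2.0, 1.5]
```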
Abstract:
Expokit provides a set of routines for computing matrix exponentials. More precisely, it computes either a small matrix exponential in full, the action of a large sparse matrix exponential on an operand vector, or the solution of a system of linear ODEs with constant inhomogeneity. The backbone of the sparse routines consists of matrix-free Krylov subspace projection methods (the Arnoldi and Lanczos processes), which is why the toolkit can cope with sparse matrices of large dimension. The software handles real and complex matrices and provides specific routines for symmetric and Hermitian matrices. The computation of matrix exponentials is a numerical issue of critical importance in the area of Markov chains, where the computed solution is additionally subject to probabilistic constraints. Beyond general matrix exponentials, particular attention is therefore given to the computation of transient states of Markov chains.
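As an illustration of the sparse "expv" task, a minimal sketch in Python: SciPy's expm_multiply computes the action exp(tA)v without ever forming exp(tA). (SciPy uses a Taylor-based Al-Mohy-Higham scheme rather than Expokit's Krylov projection, but it addresses the same problem.)

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import expm_multiply

# A large sparse matrix whose full exponential would be dense and intractable.
n = 10_000
A = sp.random(n, n, density=1e-4, format="csr", random_state=0)
v = np.ones(n)

# Compute w = exp(t*A) @ v matrix-free; this is the operation Expokit's
# sparse routines (e.g. its expv drivers) provide via Krylov projection.
t = 0.1
w = expm_multiply(t * A, v)
print(w[:5])
```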
Abstract:
Krylov subspace techniques have been shown to yield robust methods for the numerical computation of large sparse matrix exponentials, and especially for the transient solutions of Markov chains. The attractiveness of these methods stems from the fact that they allow us to compute the action of a matrix exponential operator on an operand vector without having to compute the matrix exponential itself explicitly. In this paper we compare a Krylov-based method with some of the current approaches used for computing transient solutions of Markov chains. After a brief synthesis of the features of the methods used, wide-ranging numerical comparisons are performed on a Power Challenge Array supercomputer on three different models. (C) 1999 Elsevier Science B.V. All rights reserved. AMS Classification: 65F99; 65L05; 65U05.
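A minimal sketch of the transient-solution task on a toy three-state chain (generator values hypothetical): pi(t) = pi(0) exp(Qt). For large sparse Q this dense exponential is exactly what a Krylov-based expv routine replaces.

```python
import numpy as np
from scipy.linalg import expm

# Infinitesimal generator of a small CTMC (rows sum to zero).
Q = np.array([[-0.5,  0.5,  0.0],
              [ 0.2, -0.7,  0.5],
              [ 0.0,  0.4, -0.4]])
pi0 = np.array([1.0, 0.0, 0.0])   # start in state 0 with certainty

for t in (0.0, 1.0, 10.0):
    pi_t = pi0 @ expm(Q * t)      # transient distribution at time t
    # The probabilistic constraint: entries stay nonnegative and sum to 1.
    print(t, pi_t, pi_t.sum())
```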
Abstract:
Fed-batch fermentation is used to prevent or reduce substrate-associated growth inhibition by controlling nutrient supply. Here we review the advances in control of fed-batch fermentations. Simple exponential feeding and inferential methods are examined, as are newer methods based on fuzzy control and neural networks. Considerable interest has developed in these more advanced methods that hold promise for optimizing fed-batch techniques for complex fermentation systems. (C) 1999 Elsevier Science Inc. All rights reserved.
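A minimal sketch of the simple exponential-feeding law mentioned above (all parameter values hypothetical): to hold the specific growth rate at a setpoint mu_set, the feed rate must grow exponentially in time.

```python
import numpy as np

# Open-loop exponential feeding: F(t) = (mu_set * X0 * V0) / (Y_xs * S_f) * exp(mu_set * t).
# Hypothetical example values: X0 initial biomass (g/L), V0 initial volume (L),
# Y_xs biomass yield on substrate (g/g), S_f substrate concentration in the feed (g/L).
mu_set, X0, V0, Y_xs, S_f = 0.15, 5.0, 1.0, 0.5, 400.0

def feed_rate(t_h):
    """Feed rate in L/h at culture time t_h (hours)."""
    return (mu_set * X0 * V0) / (Y_xs * S_f) * np.exp(mu_set * t_h)

for t in (0, 4, 8, 12):
    print(f"t = {t:2d} h, F = {feed_rate(t) * 1000:.2f} mL/h")
```

The more advanced fuzzy and neural-network controllers reviewed in the paper close the loop around such a feed law instead of running it open-loop.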
Abstract:
Polyamine-induced inward rectification of cyclic nucleotide-gated channels was studied in inside-out patches from rat olfactory neurons. The polyamines spermine, spermidine and putrescine induced an 'instantaneous' voltage-dependent inhibition, with K_d values at 0 mV of 39 μM, 121 μM and 2.7 mM, respectively. Hill coefficients for inhibition were significantly < 1, suggesting an allosteric inhibitory mechanism. The Woodhull model for voltage-dependent block predicted that all three polyamines bound to a site 1/3 of the electrical distance through the membrane from the internal side. Instantaneous inhibition was relieved at positive potentials, implying significant polyamine permeation. Spermine also induced exponential current relaxations to a 'steady-state' impermeant level. This inhibition was also mediated by a binding site 1/3 of the electrical distance through the pore, but with a K_d of 2.6 mM. Spermine inhibition was explained by postulating two spermine binding sites at a similar depth. Occupation of the first site occurs rapidly and with high affinity, but once a spermine molecule has bound, it inhibits spermine occupation of the second binding site via electrostatic repulsion. This repulsion is overcome at higher membrane potentials, but results in a lower apparent binding affinity for the second spermine molecule. The on-rate constant for the second spermine binding saturated at a low rate (~200 sec^-1 at +120 mV), providing further evidence for an allosteric mechanism. Polyamine-induced inward rectification was significant at physiological concentrations.
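A sketch of the Woodhull relation underlying the voltage-dependent K_d values quoted above, using the reported spermine numbers and assuming the full valence z = 4 (a simplification: the Hill coefficients < 1 reported here argue against a simple one-site block).

```python
import numpy as np

# Woodhull model: a blocker of valence z binding at fractional electrical depth
# delta senses that fraction of the membrane field, so for block by an internal
# cation Kd(V) = Kd(0) * exp(-z * delta * F * V / (R * T)).
F, R, T = 96485.0, 8.314, 295.0     # C/mol, J/(mol K), K (room temperature assumed)
z, delta = 4, 1 / 3                 # spermine valence; depth from the abstract
Kd0 = 39e-6                         # reported spermine Kd at 0 mV (39 uM)

def Kd(V_mV):
    return Kd0 * np.exp(-z * delta * F * (V_mV * 1e-3) / (R * T))

for V in (-60, 0, +60):
    print(f"V = {V:+4d} mV, Kd = {Kd(V) * 1e6:7.1f} uM")
# Binding tightens steeply with depolarization, producing inward rectification.
```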
Abstract:
We review recent developments in quantum and classical soliton theory, leading to the possibility of observing both classical and quantum parametric solitons in higher-dimensional environments. In particular, we consider the theory of three bosonic fields interacting via both parametric (cubic) and quartic couplings. In the case of photonic fields in a nonlinear optical medium this corresponds to the process of sum-frequency generation (via the χ^(2) nonlinearity) modified by the χ^(3) nonlinearity. Potential applications include an ultrafast photonic AND gate. The simplest quantum solitons or energy eigenstates (bound-state solutions) of the interacting field Hamiltonian are obtained exactly in three space dimensions. They have a point-like structure, even though the corresponding classical theory is nonsingular. We show that the solutions can be regularized with the imposition of a momentum cut-off on the nonlinear couplings. The case of three-dimensional matter-wave solitons in coupled atomic/molecular Bose-Einstein condensates is discussed.
Abstract:
A mixture model for long-term survivors has been adopted in various fields such as biostatistics and criminology where some individuals may never experience the type of failure under study. It is directly applicable in situations where the only information available from follow-up on individuals who will never experience this type of failure is in the form of censored observations. In this paper, we consider a modification to the model so that it still applies in the case where during the follow-up period it becomes known that an individual will never experience failure from the cause of interest. Unless a model allows for this additional information, a consistent survival analysis will not be obtained. A partial maximum likelihood (ML) approach is proposed that preserves the simplicity of the long-term survival mixture model and provides consistent estimators of the quantities of interest. Some simulation experiments are performed to assess the efficiency of the partial ML approach relative to the full ML approach for survival in the presence of competing risks.
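For concreteness, the standard cure-model formulation behind this abstract, sketched under the usual right-censoring assumptions (the third likelihood contribution is where the paper's extra follow-up information enters):

```latex
% Mixture model for long-term survivors: a proportion \pi never fails from the
% cause of interest, so the population survivor function is
\[
  S(t) \;=\; \pi \;+\; (1 - \pi)\, S_u(t),
\]
% where S_u(t) is the proper survivor function of the susceptibles.
% Likelihood contributions for individual i:
\[
  (1-\pi)\, f_u(t_i) \ \text{(failure at } t_i\text{)}, \qquad
  \pi + (1-\pi)\, S_u(t_i) \ \text{(censored at } t_i\text{)}, \qquad
  \pi \ \text{(known never to fail)}.
\]
```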
Abstract:
Cell-wall mechanical properties play an integral part in the growth and form of Saccharomyces cerevisiae. In contrast to the tremendous knowledge on the genetics of S. cerevisiae, almost nothing is known about its mechanical properties. We have developed a micromanipulation technique to measure the force required to burst single cells and have recently established a mathematical model to extract the mechanical properties of the cell wall from such data. Here we determine the average surface modulus of the S. cerevisiae cell wall to be 11.1 ± 0.6 N/m and 12.9 ± 0.7 N/m in exponential and stationary phases, respectively, giving corresponding Young's moduli of 112 ± 6 MPa and 107 ± 6 MPa. This result demonstrates that yeast cell populations strengthen as they enter stationary phase by increasing wall thickness, and hence the surface modulus, without altering the average elastic properties of the cell-wall material. We also determined the average breaking strain of the cell wall to be 82% ± 3% in exponential phase and 80% ± 3% in stationary phase. This finding provides a failure criterion that can be used to predict when applied stresses (e.g., because of fluid flow) will lead to wall rupture. This work analyzes yeast compression experiments in different growth phases by using engineering methodology.
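A quick consistency check on these numbers, assuming the thin-shell relation surface modulus = Young's modulus x wall thickness (an assumption of this sketch; the paper's own model may differ in detail):

```python
# Implied wall thickness h = Ks / E: N/m divided by N/m^2 gives metres.
for phase, Ks, E in [("exponential", 11.1, 112e6), ("stationary", 12.9, 107e6)]:
    h = Ks / E
    print(f"{phase:12s}: implied wall thickness ~ {h * 1e9:.0f} nm")
# ~99 nm vs ~121 nm: a thicker wall in stationary phase at nearly unchanged E,
# consistent with the abstract's strengthening-by-thickening interpretation.
```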
Abstract:
Objective: The purpose of mammographic screening is to reduce mortality from breast cancer. This study describes a method for projecting the number of screens to be performed by a mammographic screening programme, and applies this method in the context of New South Wales, Australia. Method: The total number of mammographic screens was projected as the sum of initial screens and re-screens, based on projections of the population, rates of new recruitment, rates of attrition within the programme, and the mix of screening intervals. The baseline scenario involved 70% participation of women aged 50-69 years, a 90% return rate for the second and subsequent re-screens, 5% annual screens (95% biennial screens), and a specified population projection. The results were assessed with respect to variations in these assumptions. Results: The projections were strongly influenced by the rate of screening of the target age group, the proportion of women re-screened annually, and the rates of attrition within the programme. Although demographic change had a notable effect, there was little difference between different population projections. Standard assumptions about attrition within the programme suggest that the current target participation rates in NSW may not be achieved in the long term. Conclusions: A practical model for projecting mammographic screens for populations is described which is capable of forecasting the number of screens under different scenarios. Implications: Projections of mammographic screens provide important information for the planning and financing of equipment and personnel, and for testing the effects of variations in important operational parameters. Re-screening attrition is an important contributor to screening viability.
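A minimal sketch of the projection recurrence described above, applying the baseline-scenario rates to a hypothetical static recruitment stream (the actual model also feeds in population projections and demographic change):

```python
# Total screens in a year = initial screens (new recruits) + re-screens,
# where re-screens are fed by the return rate applied to earlier cohorts
# according to the annual/biennial interval mix.
new_recruits = 20_000                       # hypothetical initial screens per year
return_rate = 0.90                          # baseline re-screen return rate
annual_frac, biennial_frac = 0.05, 0.95     # mix of screening intervals

screens = {0: new_recruits}                 # total screens performed per year
for year in range(1, 11):
    due = (annual_frac * screens.get(year - 1, 0) +    # due back after 1 year
           biennial_frac * screens.get(year - 2, 0))   # due back after 2 years
    screens[year] = new_recruits + return_rate * due
    print(year, round(screens[year]))
```

Lowering return_rate (attrition) visibly depresses the long-run total, which is the viability effect the abstract highlights.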
Abstract:
The present paper proposes an approach to obtaining the activation energy distribution for chemisorption of oxygen onto carbon surfaces, while simultaneously allowing for the activation energy dependence of the pre-exponential factor of the rate constant. Prior studies in this area have considered this factor to be uniform, thereby biasing the estimated distributions. The results show that the derived activation energy distribution is not sensitive to the chemisorption mechanism because of the step-function-like property of the coverage. The activation energy distribution is essentially uniform for some carbons, while for others it has two or possibly more discrete stages, suggestive of at least two types of sites, each with its own uniform distribution. The pre-exponential factors of the reactions are determined directly from the experimental data and are found not to be constant, as assumed in earlier work, but correlated with the activation energy. The latter result empirically follows an exponential function, supporting some earlier statistical and experimental work. The activation energy distribution obtained in the present paper permits improved correlation of chemisorption data in comparison to earlier studies. (C) 2000 Elsevier Science Ltd. All rights reserved.
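A sketch of the rate law with an energy-dependent pre-exponential factor of the exponential (compensation-type) form described above; A0 and alpha here are hypothetical values, not the paper's fitted constants.

```python
import numpy as np

# Arrhenius rate with compensation: k(E) = A(E) * exp(-E / (R*T)),
# where A(E) = A0 * exp(alpha * E) instead of a constant A.
R = 8.314                  # J/(mol K)
T = 600.0                  # K, hypothetical chemisorption temperature
A0, alpha = 1e5, 4e-5      # hypothetical: s^-1 and mol/J

for E_kJ in (80, 120, 160):
    E = E_kJ * 1e3
    A = A0 * np.exp(alpha * E)           # pre-exponential grows with E ...
    k = A * np.exp(-E / (R * T))         # ... partially offsetting the Boltzmann factor
    print(f"E = {E_kJ:3d} kJ/mol: A = {A:.2e} s^-1, k = {k:.3e} s^-1")
```

Assuming a constant A instead would misattribute this compensation to the energy distribution itself, which is the bias the paper corrects.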
Abstract:
Small area health statistics has assumed increasing importance as the focus of population and public health moves to a more individualised approach for smaller area populations. Small populations and low event occurrence produce difficulties in interpretation and require appropriate statistical methods, including methods for age adjustment. There are also statistical questions related to multiple comparisons. Privacy and confidentiality issues include the possibility of revealing information on individuals or health care providers through fine cross-tabulations. Interpretation of small area population differences in health status requires consideration of migrant and Indigenous composition, socio-economic status and rural-urban geography before assessment of the effects of physical environmental exposure and of services and interventions. Burden of disease studies produce a single measure for morbidity and mortality, the disability-adjusted life year (DALY), which is the sum of the years of life lost (YLL) from premature mortality and the years lived with disability (YLD) for particular diseases (or all conditions). Calculation of YLD requires estimates of disease incidence (and complications) and duration, and weighting by severity. These procedures often entail problematic assumptions, as do future discounting and age weighting of both YLL and YLD. Evaluation of the Victorian small area population disease burden study presents important cross-disciplinary challenges, as it relies heavily on the synthetic approaches of demography and economics rather than on the empirical methods of epidemiology. Both empirical and synthetic methods are used to compute small area mortality and morbidity, disease burden, and then attribution to risk factors. Readers need to examine the methodology and assumptions carefully before accepting the results.
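The DALY arithmetic in miniature, with hypothetical numbers for a single condition in one small area (the Victorian study additionally applies the discounting and age weighting discussed above):

```python
# DALY = YLL + YLD, as defined in the text.
deaths, years_lost_per_death = 12, 18.0                 # premature deaths x years lost each
incident_cases, duration_yrs, disability_weight = 300, 6.0, 0.25

YLL = deaths * years_lost_per_death                     # years of life lost = 216.0
YLD = incident_cases * duration_yrs * disability_weight # years lived with disability = 450.0
print("DALY =", YLL + YLD)                              # 666.0
```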
Abstract:
Summary: Plant biologists in the fields of ecology, evolution, genetics and breeding frequently use multivariate methods. This paper illustrates Principal Component Analysis (PCA) and Gabriel's biplot as applied to microarray expression data from plant pathology experiments. Availability: An example program in the publicly distributed statistical language R is available from the web site (www.tpp.uq.edu.au) and by e-mail from the contact. Contact: scott.chapman@csiro.au.
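The paper's own example program is in R; as a language-neutral illustration, a minimal PCA-plus-biplot sketch in Python on random stand-in data (shapes and treatment names are hypothetical, not the paper's dataset):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))          # stand-in: 50 genes x 4 treatments
Xc = X - X.mean(axis=0)               # column-centre before PCA

# PCA via SVD: scores place genes, loadings place treatments (Gabriel's biplot).
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U[:, :2] * s[:2]             # gene coordinates on PC1/PC2
loadings = Vt[:2].T                   # treatment directions

plt.scatter(scores[:, 0], scores[:, 1], s=10)
for j, (x, y) in enumerate(loadings * s[:2]):
    plt.arrow(0, 0, x, y, head_width=0.1)
    plt.text(x, y, f"trt{j + 1}")
plt.xlabel("PC1"); plt.ylabel("PC2"); plt.show()
```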
Abstract:
The explosive growth in biotechnology combined with major advances in information technology has the potential to radically transform immunology in the postgenomics era. Not only do we now have ready access to vast quantities of existing data, but new data with relevance to immunology are being accumulated at an exponential rate. Resources for computational immunology include biological databases and methods for data extraction, comparison, analysis and interpretation. Publicly accessible biological databases of relevance to immunologists number in the hundreds and are growing daily. The ability to efficiently extract and analyse information from these databases is vital for efficient immunology research. Most importantly, a new generation of computational immunology tools enables modelling of peptide transport by the transporter associated with antigen processing (TAP), modelling of antibody binding sites, identification of allergenic motifs and modelling of T-cell receptor serial triggering.
Abstract:
The assumption in analytical solutions for flow from surface and buried point sources of an average water content, θ̄, behind the wetting front is examined. Some recent work has shown that this assumption fitted some field data well. Here we calculated θ̄ using a steady-state solution based on the work of Raats [1971] and an exponential dependence of the diffusivity on the water content. This is compared with a constant value of θ̄ calculated from an assumption of a hydraulic conductivity at the wetting front of 1 mm day^-1 and the water content at saturation. This comparison was made for a wide range of soils. The constant value generally underestimated θ̄ at small wetted radii and overestimated it at large radii. The crossover point between under- and overestimation changed with both soil properties and flow rate. The largest variance occurred for coarser-textured soils at low flow rates. At high flow rates in finer-textured soils the use of a constant θ̄ results in underestimation of the time for the wetting front to reach a particular radius. The value of θ̄ is related to the time at which the wetting front reaches a given radius. In coarse-textured soils the use of a constant value of θ̄ can result in an error in the time at which the wetting front reaches a particular radius as large as 80% at low flow rates and large radii.
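A sketch of why θ̄ controls the timing: by simple volume balance for a buried point source emitting at rate Q, the front reaches radius r when Q·t = (4/3)πr³·θ̄, so any error in θ̄ maps directly into the arrival time. The radius-dependent θ̄ below is a hypothetical decreasing function chosen only to mimic the under/overestimation crossover described above, not the paper's Raats-based solution.

```python
import numpy as np

Q = 2.0e-6                                  # m^3/day, hypothetical low flow rate
theta_const = 0.20                          # constant assumed average water content
theta_true = lambda r: 0.30 - 0.6 * r       # hypothetical "actual" theta_bar(r)

# Front arrival time from volume balance: t(r) = (4/3) * pi * r^3 * theta_bar / Q.
t_front = lambda r, th: (4 / 3) * np.pi * r**3 * th / Q

for r in (0.05, 0.15, 0.30):
    t_c, t_t = t_front(r, theta_const), t_front(r, theta_true(r))
    print(f"r = {r:.2f} m: t = {t_t:9.1f} d actual vs {t_c:9.1f} d assumed "
          f"({100 * (t_c - t_t) / t_t:+.0f}%)")
# The sign of the timing error flips with radius, reproducing the crossover.
```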
Abstract:
As in the standard land assembly problem, a developer wants to buy two adjacent blocks of land belonging to two different owners. The value of the two blocks of land to the developer is greater than the sum of the individual values of the blocks for each owner. Unlike the land assembly literature, however, our focus is on the incentive that each lot owner has to delay the start of negotiations, rather than on the public goods nature of the problem. An incentive for delay exists, for example, when owners perceive that being last to sell will allow them to capture a larger share of the joint surplus from the development. We show that competition at point of sale can cause equilibrium delay, and that cooperation at point of sale will eliminate delay. This suggests that strategic delay is another source for the inefficient allocation of land, in addition to the public-good type externality pointed out by Grossman and Hart [Bell Journal of Economics 11 (1980) 42] and O'Flaherty [Regional Science and Urban Economics 24 (1994) 287]. (C) 2004 Elsevier B.V. All rights reserved.