992 results for Quadratic error gradient
Abstract:
In the commercial food industry, demonstration of microbiological safety and thermal process equivalence often involves a mathematical framework that assumes log-linear inactivation kinetics and invokes concepts of decimal reduction time (DT), z values, and accumulated lethality. However, many microbes, particularly spores, exhibit inactivation kinetics that are not log linear. This has led to alternative modeling approaches, such as the biphasic and Weibull models, which relax strong log-linear assumptions. Using a statistical framework, we developed a novel log-quadratic model, which approximates the biphasic and Weibull models and provides additional physiological interpretability. As a statistical linear model, the log-quadratic model is relatively simple to fit and straightforwardly provides confidence intervals for its fitted values. It allows a DT-like value to be derived, even from data that exhibit obvious "tailing." We also showed how existing models of non-log-linear microbial inactivation, such as the Weibull model, can fit into a statistical linear model framework that dramatically simplifies their solution. We applied the log-quadratic model to thermal inactivation data for the spore-forming bacterium Clostridium botulinum and evaluated its merits compared with those of popular, previously described approaches. The log-quadratic model was used as the basis of a secondary model that can capture the dependence of microbial inactivation kinetics on temperature. This model, in turn, was linked to models of spore inactivation of Sapru et al. and Rodriguez et al. that posit different physiological states for spores within a population. We believe that the log-quadratic model provides a useful framework in which to test vitalistic and mechanistic hypotheses of inactivation by thermal and other processes. Copyright © 2009, American Society for Microbiology. All Rights Reserved.
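Because the log-quadratic model is a statistical linear model, ordinary least squares suffices to fit it. A minimal sketch, using invented survivor counts rather than the paper's C. botulinum data, and taking one plausible reading of the DT-like value as the time to the first decimal (1-log10) reduction:

```python
import numpy as np

# Hypothetical survival data: heating time (min) and log10 survivor counts.
# Values are illustrative only, not from the paper's dataset.
t = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
log_n = np.array([6.0, 5.1, 4.4, 3.9, 3.6, 3.4])   # exhibits "tailing"

# Log-quadratic model: log10 N(t) = b0 + b1*t + b2*t**2.
# Linear in the coefficients, so ordinary least squares applies directly.
X = np.column_stack([np.ones_like(t), t, t**2])
(b0, b1, b2), *_ = np.linalg.lstsq(X, log_n, rcond=None)

# A DT-like value: the smallest positive root of b1*t + b2*t**2 = -1,
# i.e. the time needed for the first decimal reduction.
roots = np.roots([b2, b1, 1.0])
d_like = min(r.real for r in roots if r.real > 0 and abs(r.imag) < 1e-9)
print(f"fit: log10 N = {b0:.2f} + {b1:.3f} t + {b2:.4f} t^2; D-like = {d_like:.2f} min")
```

The positive quadratic coefficient captures the tailing; with purely log-linear data it vanishes and the D-like value reduces to the usual -1/b1.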
Abstract:
Bounds on the expectation and variance of errors at the output of a multilayer feedforward neural network with perturbed weights and inputs are derived. It is assumed that errors in weights and inputs to the network are statistically independent and small. The bounds obtained are applicable to both digital and analogue network implementations and are shown to be of practical value.
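A sketch of the setting, assuming an invented 3-5-1 tanh network and perturbation level: it compares a Monte Carlo estimate of the output-error variance against the first-order (delta-method) variance expression that underlies such bounds. Bias perturbations are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small feedforward net with invented dimensions: 3 inputs -> 5 tanh
# hidden units -> 1 linear output.
W1 = rng.normal(0, 1, (5, 3)); b1 = rng.normal(0, 1, 5)
W2 = rng.normal(0, 1, (1, 5)); b2 = rng.normal(0, 1, 1)

def forward(x, dW1=0.0, dW2=0.0):
    h = np.tanh((W1 + dW1) @ x + b1)
    return float((W2 + dW2) @ h + b2)

x0 = rng.normal(0, 1, 3)
y0 = forward(x0)
sigma = 1e-3                  # std of the small, independent perturbations

# Monte Carlo estimate of the output-error statistics under independent
# perturbations of inputs and weights.
errs = np.array([
    forward(x0 + rng.normal(0, sigma, 3),
            rng.normal(0, sigma, W1.shape),
            rng.normal(0, sigma, W2.shape)) - y0
    for _ in range(20000)
])

# First-order (delta-method) variance: sigma^2 times the sum of squared
# sensitivities to every input and weight, via finite differences.
eps, grad2 = 1e-6, 0.0
for i in range(3):
    e = np.zeros(3); e[i] = eps
    grad2 += ((forward(x0 + e) - y0) / eps) ** 2
for which, W in [("W1", W1), ("W2", W2)]:
    for idx in np.ndindex(W.shape):
        dW = np.zeros(W.shape); dW[idx] = eps
        y = forward(x0, dW1=dW) if which == "W1" else forward(x0, dW2=dW)
        grad2 += ((y - y0) / eps) ** 2

print(f"MC error variance: {errs.var():.3e}  first-order variance: {grad2 * sigma**2:.3e}")
print(f"MC error mean: {errs.mean():.3e}  (a second-order curvature effect)")
```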
Abstract:
We study the rates of growth of the regret in online convex optimization. First, we show that a simple extension of the algorithm of Hazan et al. eliminates the need for a priori knowledge of the lower bound on the second derivatives of the observed functions. We then provide an algorithm, Adaptive Online Gradient Descent, which interpolates between the results of Zinkevich for linear functions and of Hazan et al. for strongly convex functions, achieving intermediate rates between √T and log T. Furthermore, we show strong optimality of the algorithm. Finally, we provide an extension of our results to general norms.
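A simplified sketch of the adaptive step-size idea (the paper's additional regularization terms and projection step are omitted, and a fixed proxy comparator stands in for the best point in hindsight):

```python
import numpy as np

rng = np.random.default_rng(1)

# The step size adapts to the curvature observed so far: with zero
# curvature (linear losses) it falls back to ~1/sqrt(t) steps and sqrt(T)
# regret; with strong convexity lambda it gives ~1/(lambda*t) steps and
# log(T) regret. All numbers below are invented for the demo.
T, dim, lam = 1000, 2, 0.5
x, H, regret = np.zeros(dim), 0.0, 0.0
z_star = np.array([0.3, -0.2])                 # proxy comparator

for t in range(1, T + 1):
    z_t = z_star + rng.normal(0, 0.1, dim)     # round-t loss: 0.5*lam*|u - z_t|^2
    g = lam * (x - z_t)                        # gradient at the played point
    regret += 0.5 * lam * (np.sum((x - z_t)**2) - np.sum((z_star - z_t)**2))
    H += lam                                   # curvature observed this round
    eta = 1.0 / H if H > 0 else 1.0 / np.sqrt(t)
    x = x - eta * g                            # gradient step

print(f"average regret over {T} rounds: {regret / T:.4f}")
```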
Abstract:
The positive relationship between household income and child health is well documented in the child health literature but the precise mechanisms via which income generates better health and whether the income gradient is increasing in child age are not well understood. This paper presents new Australian evidence on the child health–income gradient. We use data from the Longitudinal Study of Australian Children (LSAC), which involved two waves of data collection for children born between March 2003 and February 2004 (B-Cohort: 0–3 years), and between March 1999 and February 2000 (K-Cohort: 4–7 years). This data set allows us to test the robustness of some of the findings of the influential studies of Case et al. [Case, A., Lubotsky, D., Paxson, C., 2002. Economic status and health in childhood: the origins of the gradient. The American Economic Review 92 (5) 1308–1344] and Currie and Stabile [Currie, J., Stabile, M., 2003. Socioeconomic status and child health: why is the relationship stronger for older children. The American Economic Review 93 (5) 1813–1823], and a recent study by Currie et al. [Currie, A., Shields, M.A., Price, S.W., 2007. The child health/family income gradient: evidence from England. Journal of Health Economics 26 (2) 213–232]. The richness of the LSAC data set also allows us to conduct further exploration of the determinants of child health. Our results reveal an increasing income gradient by child age using similar covariates to Case et al. [Case, A., Lubotsky, D., Paxson, C., 2002. Economic status and health in childhood: the origins of the gradient. The American Economic Review 92 (5) 1308–1344]. However, the income gradient disappears if we include a rich set of controls. Our results indicate that parental health and, in particular, the mother's health plays a significant role, reducing the income coefficient to zero, suggesting an underlying mechanism that can explain the observed relationship between child health and family income. Overall, our results for Australian children are similar to those produced by Propper et al. [Propper, C., Rigg, J., Burgess, S., 2007. Child health: evidence on the roles of family income and maternal mental health from a UK birth cohort. Health Economics 16 (11) 1245–1269] on their British child cohort.
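To make the gradient specification concrete, here is a schematic sketch on simulated data (not LSAC; all covariates invented) reproducing both patterns the paper reports: an income gradient that rises with age, and an income coefficient that collapses once maternal health, constructed here as the transmission channel, is controlled for.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated data wired so that income affects child health only through
# maternal health plus a small age-increasing direct term.
n = 5000
age = rng.integers(0, 8, n).astype(float)        # child age in years
log_inc = rng.normal(10.8, 0.6, n)               # log household income
mother_health = 0.8 * (log_inc - 10.8) + rng.normal(0, 0.5, n)
health = 0.25 * mother_health + 0.01 * age * (log_inc - 10.8) \
         + rng.normal(0, 1, n)

def ols(y, cols):
    X = np.column_stack([np.ones(len(y))] + cols)
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Basic specification: income gradient plus an income-age interaction.
b = ols(health, [log_inc, age, log_inc * age])
print(f"income gradient: {b[1]:.3f}, interaction with age: {b[3]:.3f}")

# Adding the maternal-health control absorbs the gradient when it is the
# channel linking income to child health, mirroring the paper's finding.
b2 = ols(health, [log_inc, age, log_inc * age, mother_health])
print(f"income gradient with maternal-health control: {b2[1]:.3f}")
```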
Abstract:
The literature to date shows that children from poorer households tend to have worse health than their peers, and the gap between them grows with age. We investigate whether and how health shocks (as measured by the onset of chronic conditions) contribute to the income–child health gradient, and whether the contemporaneous or cumulative effects of income play important mitigating roles. We exploit a rich three-wave panel dataset, the Longitudinal Study of Australian Children. Given the availability of three waves of data, we are able to apply a range of econometric techniques (e.g. fixed and random effects) to control for unobserved heterogeneity. The paper makes several contributions to the extant literature. First, it shows that an apparent income gradient becomes relatively attenuated in our dataset when the cumulative and contemporaneous effects of household income are distinguished econometrically. Second, it demonstrates that the income–child health gradient becomes statistically insignificant when controlling for parental health and health-related behaviours or unobserved heterogeneity.
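A schematic sketch of a within (fixed effects) estimator on a simulated three-wave panel (again not LSAC), separating contemporaneous from cumulative income while sweeping out child-level unobserved heterogeneity that would otherwise bias plain OLS:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated panel: income is correlated with a child-level effect, so
# pooled OLS is biased; demeaning within each child removes the effect.
n_kids, waves = 2000, 3
kid_effect = rng.normal(0, 1, n_kids)            # unobserved heterogeneity
inc = rng.normal(10.8, 0.5, (n_kids, waves)) + 0.4 * kid_effect[:, None]
cum_inc = np.cumsum(inc, axis=1) / np.arange(1, waves + 1)  # average to date
health = 0.05 * inc + 0.15 * cum_inc + kid_effect[:, None] \
         + rng.normal(0, 0.5, (n_kids, waves))

def within(a):                                   # demean within each child
    return a - a.mean(axis=1, keepdims=True)

y = within(health).ravel()
X = np.column_stack([within(inc).ravel(), within(cum_inc).ravel()])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"contemporaneous effect: {beta[0]:.3f}, cumulative effect: {beta[1]:.3f}")
```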
Abstract:
Integration of biometrics is considered an attractive solution to the issues associated with password-based human authentication, as well as to the secure storage and release of cryptographic keys, which is one of the critical issues in modern cryptography. However, the widespread adoption of bio-cryptographic solutions is somewhat restricted by the fuzziness associated with biometric measurements. Therefore, error control mechanisms must be adopted to ensure that the fuzziness of biometric inputs is sufficiently countered. In this paper, we outline the existing techniques used in bio-cryptography and explain how they are deployed in different types of solutions. Finally, we elaborate on the important factors to be considered when choosing appropriate error correction mechanisms for a particular biometric-based solution.
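One representative technique in this area is the fuzzy commitment scheme of Juels and Wattenberg, in which an error-correcting code absorbs the bit differences between two readings of the same biometric. A toy sketch with a 5x repetition code; real deployments use stronger codes (e.g. BCH) matched to the measured biometric error rate:

```python
import hashlib
import secrets
import numpy as np

R = 5                                     # repetition factor

def encode(bits):                         # key bits -> codeword
    return np.repeat(bits, R)

def decode(codeword):                     # majority vote per R-bit block
    return (codeword.reshape(-1, R).sum(axis=1) > R // 2).astype(np.uint8)

# Enrolment: bind a random key to the biometric template.
key_bits = np.unpackbits(np.frombuffer(secrets.token_bytes(2), dtype=np.uint8))
enrol = np.random.default_rng(4).integers(0, 2, key_bits.size * R).astype(np.uint8)
commitment = encode(key_bits) ^ enrol     # stored alongside hash(key)
key_hash = hashlib.sha256(key_bits.tobytes()).hexdigest()

# Verification: a later reading differs from enrolment in a few bits
# (fewer than R/2 per block), which the repetition code corrects.
probe = enrol.copy()
probe[[3, 17, 42]] ^= 1                   # simulated biometric noise
recovered = decode(commitment ^ probe)
ok = hashlib.sha256(recovered.tobytes()).hexdigest() == key_hash
print("key released" if ok else "authentication failed")
```

Only the hash of the key and the commitment are stored, so the key is released exactly when decoding succeeds, i.e. when the probe is close enough to the enrolled reading.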
Abstract:
A description of a computer program that analyses cine angiograms of the heart and pressure waveforms to calculate valve gradients.
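A hypothetical sketch of the pressure-waveform side of such a calculation, computing peak and mean systolic gradient from simultaneous left-ventricular and aortic traces; the waveforms are synthetic and the original program's angiogram analysis and ejection-period detection are not reproduced:

```python
import numpy as np

# Synthetic LV and aortic pressures (mmHg) over one ejection period,
# shaped to mimic a stenotic valve; values are illustrative only.
t = np.linspace(0, 0.35, 200)                     # time, s
p_lv = 160 * np.sin(np.pi * t / 0.35)             # left-ventricular pressure
p_ao = 80 + 40 * np.sin(np.pi * t / 0.35)         # aortic pressure

gradient = p_lv - p_ao
ejection = gradient > 0                           # LV pressure exceeds aortic
peak = gradient[ejection].max()
mean = gradient[ejection].mean()                  # uniform sampling assumed
print(f"peak gradient: {peak:.1f} mmHg, mean gradient: {mean:.1f} mmHg")
```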
Abstract:
Index tracking is an investment approach whose primary objective is to keep the portfolio return as close as possible to a target index without purchasing all index components. The main purpose is to minimize the tracking error between the returns of the selected portfolio and the benchmark. In this paper, quadratic as well as linear models are presented for minimizing the tracking error. Uncertainty in the input data is handled using a tractable robust framework that controls the level of conservatism while maintaining linearity. The linearity of the proposed robust optimization models allows a simple implementation in an ordinary optimization software package to find the optimal robust solution. The proposed model employs the Morgan Stanley Capital International index as the target index, and results are reported for six national indices: Japan, the USA, the UK, Germany, Switzerland and France. The performance of the proposed models is evaluated using several financial criteria, e.g. the information ratio, market ratio, Sharpe ratio and Treynor ratio. The preliminary results demonstrate that the proposed model lowers the tracking error while raising the values of the portfolio performance measures.
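A minimal sketch of the nominal quadratic tracking-error problem on simulated returns; the paper's linearization and robust uncertainty set are not reproduced here:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

# Simulated daily returns: an index and a small set of candidate assets
# that co-move with it. Dimensions and noise levels are invented.
T, n = 250, 8
index_r = rng.normal(0.0004, 0.01, T)       # stand-in for the target index
asset_r = index_r[:, None] + rng.normal(0, 0.005, (T, n))

def tracking_error_var(w):
    """Variance of (portfolio return - index return)."""
    diff = asset_r @ w - index_r
    return diff.var()

# Long-only, fully invested portfolio minimizing the tracking error.
w0 = np.full(n, 1.0 / n)
res = minimize(tracking_error_var, w0, method="SLSQP",
               bounds=[(0.0, 1.0)] * n,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
print("weights:", np.round(res.x, 3))
print(f"tracking-error std: {np.sqrt(res.fun):.5f}")
```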
Abstract:
Melanopsin-containing intrinsically photosensitive retinal ganglion cells (ipRGCs) mediate the pupil light reflex (PLR) during light onset and at light offset (the post-illumination pupil response, PIPR). Recent evidence shows that the PLR and PIPR can provide non-invasive, objective markers of age-related retinal and optic nerve disease; however, there is no consensus on the effects of healthy ageing or refractive error on ipRGC-mediated pupil function. Here we isolated melanopsin contributions to the pupil control pathway in 59 human participants with no ocular pathology, across a range of ages and refractive errors. We show that there is no effect of age or refractive error on ipRGC inputs to the human pupil control pathway. The stability of the ipRGC-mediated pupil response across the human lifespan provides a functional correlate of the robustness these cells show during ageing in rodent models.
Abstract:
The aim of this study was to evaluate the mechanical triggers that may cause plaque rupture. Wall shear stress (WSS) and pressure gradient are the direct mechanical forces acting on the plaque in a stenotic artery. Their influence on plaque stability remains controversial. This study used a physiologically realistic pulsatile flow derived from a two-dimensional cine phase-contrast MRI sequence in a patient with a 70% carotid stenosis. Instead of considering the full patient-specific carotid bifurcation derived from MRI, only the plaque region was modelled by means of an idealised flow model. WSS reached a local maximum just distal to the stenosis, followed by a negative local minimum. A pressure drop across the stenosis was found which varied significantly between systole and diastole. The ratio of the relative importance of WSS and pressure was assessed and found to be less than 0.07% for all time phases, even at the throat of the stenosis. In conclusion, although the local high WSS at the stenosis may damage the endothelium and fissure the plaque, the magnitude of WSS is small compared with the overall loading on the plaque. Therefore, pressure may be the main mechanical trigger for plaque rupture, and risk stratification using stress analysis of plaque stability may only need to consider the pressure effect.
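A back-of-envelope check of these magnitudes, using an idealised Poiseuille estimate with typical literature values rather than the paper's MRI-based model, shows why the WSS-to-pressure ratio stays far below 1%:

```python
import numpy as np

# Illustrative physiological values, not the patient-specific ones.
mu = 3.5e-3            # blood viscosity, Pa*s
q = 6.0e-6             # flow rate, m^3/s (~360 mL/min)
r = 1.0e-3             # stenotic lumen radius, m (tight stenosis)

wss = 4 * mu * q / (np.pi * r**3)      # Poiseuille wall shear stress, Pa
pressure = 100 * 133.322               # ~100 mmHg arterial pressure, Pa

print(f"WSS ~ {wss:.1f} Pa, pressure ~ {pressure:.0f} Pa, "
      f"ratio ~ {100 * wss / pressure:.3f}%")
```

Even with this deliberately severe stenosis the shear stress is tens of pascals against a pressure load of order 10^4 Pa, consistent with the paper's conclusion that pressure dominates the mechanical loading.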
Abstract:
Results are reported from an extensive series of experiments on boundary layers in which the location of pressure gradient and transition onset could be varied almost independently, by judicious use of tunnel wall liners and transition-fixing devices. The experiments show that the transition zone is sensitive to the pressure gradient, especially near onset, and can be significantly asymmetric; no universal similarity appears valid in general. Observed intermittency distributions cannot be explained on the basis of the hypothesis, often made, that the spot propagates at speeds proportional to the local free-stream velocity but is otherwise unaffected by the pressure gradient.
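The "universal similarity" at issue is Narasimha's intermittency distribution; a short sketch of that classical curve, against which the measured distributions would be compared:

```python
import numpy as np

# Narasimha's universal intermittency: gamma(xi) = 1 - exp(-0.412*xi^2),
# with similarity variable xi = (x - x_t) / lam, where x_t is the onset
# location and lam = x(gamma=0.75) - x(gamma=0.25). The constant 0.412
# makes that 25%-75% span equal to one unit of xi.
def narasimha_gamma(x, x_t, lam):
    xi = np.maximum(x - x_t, 0.0) / lam           # zero upstream of onset
    return 1.0 - np.exp(-0.412 * xi**2)

x = np.linspace(0.0, 1.0, 6)                      # streamwise positions
print(np.round(narasimha_gamma(x, x_t=0.2, lam=0.3), 3))
```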
Abstract:
So far, most Phase II trials have been designed and analysed under a frequentist framework. Under this framework, a trial is designed so that the overall Type I and Type II errors of the trial are controlled at some desired levels. Recently, a number of articles have advocated the use of Bayesian designs in practice. Under a Bayesian framework, a trial is designed so that it stops when the posterior probability of treatment efficacy is within certain prespecified thresholds. In this article, we argue that trials under a Bayesian framework can also be designed to control frequentist error rates. We introduce a Bayesian version of Simon's well-known two-stage design to achieve this goal. We also consider two other errors, which are called Bayesian errors in this article because of their similarities to posterior probabilities. We show that our method can also control these Bayesian-type errors. We compare our method with other recent Bayesian designs in a numerical study and discuss the implications of different designs on error rates. An example of a clinical trial for patients with nasopharyngeal carcinoma is used to illustrate the differences between the designs.
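The frequentist operating characteristics of a Simon-type two-stage design follow from binomial tail sums. A sketch with illustrative design parameters (not those of the nasopharyngeal carcinoma trial):

```python
from scipy.stats import binom

# Two-stage design: stop for futility after stage 1 unless responses
# exceed r1; declare the treatment promising if total responses exceed r.
n1, r1 = 13, 3          # stage-1 sample size and futility cutoff
n, r = 43, 12           # total sample size and final cutoff
p0, p1 = 0.20, 0.40     # null and alternative response rates

def prob_declare(p):
    """P(pass stage 1 and exceed r responses overall) at true rate p."""
    total = 0.0
    for x in range(r1 + 1, n1 + 1):                 # stage-1 responses
        tail = 1.0 - binom.cdf(r - x, n - n1, p)    # need > r - x more
        total += binom.pmf(x, n1, p) * tail
    return total

alpha = prob_declare(p0)          # frequentist Type I error
power = prob_declare(p1)          # 1 - Type II error
print(f"Type I error: {alpha:.3f}, power: {power:.3f}")
```

These particular cutoffs give roughly 5% Type I error and 90% power; a Bayesian version replaces the cutoffs with posterior-probability thresholds but can be calibrated so the same frequentist sums stay controlled.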
Abstract:
Error estimates for the error reproducing kernel method (ERKM) are provided. The ERKM is a mesh-free functional approximation scheme [A. Shaw, D. Roy, A NURBS-based error reproducing kernel method with applications in solid mechanics, Computational Mechanics (2006), to appear (available online)], wherein a targeted function and its derivatives are first approximated via non-uniform rational B-spline (NURBS) basis functions. Errors in the NURBS approximation are then reproduced via a family of non-NURBS basis functions, constructed using a polynomial reproduction condition, and added to the NURBS approximation of the function obtained in the first step. In addition to the derivation of error estimates, convergence studies are undertaken for a couple of test boundary value problems with known exact solutions. The ERKM is next applied to a one-dimensional Burgers equation where time evolution leads to a breakdown of the continuous solution and the appearance of a shock. Many available mesh-free schemes appear to be unable to capture this shock without numerical instability. However, given that any desired order of continuity is achievable through NURBS approximations, the ERKM can even accurately approximate functions with discontinuous derivatives. Moreover, due to the variation diminishing property of NURBS, the ERKM has advantages in representing sharp changes in gradients. This paper is focused on demonstrating this ability of the ERKM via some numerical examples. Comparisons of some of the results with those via the standard form of the reproducing kernel particle method (RKPM) demonstrate the relative numerical advantages and accuracy of the ERKM.
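A one-dimensional sketch of the two-step idea, with a smoothed B-spline standing in for the NURBS approximation and a locally weighted linear fit standing in for the polynomial-reproducing (non-NURBS) basis family:

```python
import numpy as np
from scipy.interpolate import splrep, splev

# Target function with a sharp gradient; all parameters are illustrative.
x = np.linspace(0, 1, 41)
f = np.tanh(20 * (x - 0.5))

# Step 1: a deliberately smoothed spline fit, leaving a residual error.
tck = splrep(x, f, k=3, s=0.05)
f_spline = splev(x, tck)
err = f - f_spline

# Step 2: reproduce the error with locally weighted linear fits and add
# it back as a correction (a simple stand-in for the kernel construction).
def mls(xe, xs, ys, h=0.08):
    w = np.exp(-((xs - xe) / h) ** 2)       # Gaussian kernel weights
    A = np.column_stack([np.ones_like(xs), xs - xe])
    beta = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * ys))
    return beta[0]                          # fitted value at xe

err_rep = np.array([mls(xe, x, err) for xe in x])
f_corrected = f_spline + err_rep

print(f"spline max error: {np.abs(err).max():.4f}, "
      f"corrected max error: {np.abs(f - f_corrected).max():.4f}")
```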
Abstract:
Topology-based methods have been successfully used for the analysis and visualization of piecewise-linear functions defined on triangle meshes. This paper describes a mechanism for extending these methods to piecewise-quadratic functions defined on triangulations of surfaces. Each triangular patch is tessellated into monotone regions, so that existing algorithms for computing topological representations of piecewise-linear functions may be applied directly to the piecewise-quadratic function. In particular, the tessellation is used for computing the Reeb graph, a topological data structure that provides a succinct representation of level sets of the function.
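The tessellation into monotone regions hinges on where the gradient of the quadratic patch vanishes. A minimal sketch with illustrative coefficients locates that critical point, classifies it, and tests whether it falls inside the triangle:

```python
import numpy as np

# Quadratic patch f(x,y) = a*x^2 + b*x*y + c*y^2 + d*x + e*y + g over a
# triangle; coefficients and vertices are invented for the demo.
a, b, c, d, e, g = 1.0, 0.5, 2.0, -1.0, -2.0, 0.0
tri = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])

# grad f = (2a*x + b*y + d, b*x + 2c*y + e) = 0 is a 2x2 linear system.
H = np.array([[2 * a, b], [b, 2 * c]])
crit = np.linalg.solve(H, -np.array([d, e]))

# Barycentric test: an interior critical point forces a split of the
# patch into monotone pieces around it.
T = np.column_stack([tri[0] - tri[2], tri[1] - tri[2]])
lam = np.linalg.solve(T, crit - tri[2])
bary = np.array([lam[0], lam[1], 1 - lam.sum()])
kind = "minimum" if bool(np.all(np.linalg.eigvals(H) > 0)) else "saddle/maximum"
print(f"critical point {np.round(crit, 3)} ({kind}), inside: {bool(np.all(bary >= 0))}")
```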