95 results for Quadratic error gradient
Abstract:
In this paper, a method of thrust allocation based on a linearly constrained quadratic cost function capable of handling rotating azimuths is presented. The problem formulation accounts for magnitude and rate constraints on both thruster forces and azimuth angles. The advantage of this formulation is that the solution can be found with a finite number of iterations for each time step. Experiments with a model ship are used to validate the thrust allocation system.
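The core of such a formulation is a quadratic cost in the thruster forces subject to a linear force-balance constraint and magnitude bounds. The sketch below is illustrative only (it omits the paper's azimuth rotation and rate constraints): the configuration matrix `B`, demand `tau`, and bound `f_max` are hypothetical, and a general-purpose solver stands in for the paper's finite-iteration scheme.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical 3-DOF (surge, sway, yaw) allocation over 4 fixed thrusters.
# Columns of B map each thruster force to (Fx, Fy, Mz).
B = np.array([
    [1.0, 1.0, 0.0, 0.0],    # surge contributions
    [0.0, 0.0, 1.0, 1.0],    # sway contributions
    [0.4, -0.4, 1.2, -1.2],  # yaw moment arms (m)
])
tau = np.array([2.0, 0.5, 0.3])  # demanded generalized force for this time step
f_max = 3.0                      # magnitude constraint per thruster

def power(f):
    return f @ f  # quadratic cost: minimize total squared thrust

res = minimize(
    power, x0=np.zeros(4), method="SLSQP",
    constraints=[{"type": "eq", "fun": lambda f: B @ f - tau}],
    bounds=[(-f_max, f_max)] * 4,
)
f = res.x
print(np.round(f, 4))
```

In a real allocator this problem is re-solved at every controller time step, with the previous solution warm-starting the next one.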
Abstract:
Bayesian networks (BNs) are graphical probabilistic models used for reasoning under uncertainty. These models are becoming increasingly popular in a range of fields including ecology, computational biology, medical diagnosis, and forensics. In most of these cases, the BNs are quantified using information from experts or from user opinions. An interest therefore lies in the way in which multiple opinions can be represented and used in a BN. This paper proposes the use of a measurement error model to combine opinions for use in the quantification of a BN. The multiple opinions are treated as a realisation of measurement error, and the model uses the posterior probabilities ascribed to each node in the BN, which are computed from the prior information given by each expert. The proposed model addresses the issues associated with current methods of combining opinions, such as the absence of a coherent probability model, the failure to maintain the conditional independence structure of the BN, and the provision of only a point estimate for the consensus. The proposed model is applied to an existing Bayesian network and performs well when compared to existing methods of combining opinions.
Abstract:
In the commercial food industry, demonstration of microbiological safety and thermal process equivalence often involves a mathematical framework that assumes log-linear inactivation kinetics and invokes concepts of decimal reduction time (DT), z values, and accumulated lethality. However, many microbes, particularly spores, exhibit inactivation kinetics that are not log linear. This has led to alternative modeling approaches, such as the biphasic and Weibull models, that relax strong log-linear assumptions. Using a statistical framework, we developed a novel log-quadratic model, which approximates the biphasic and Weibull models and provides additional physiological interpretability. As a statistical linear model, the log-quadratic model is relatively simple to fit and straightforwardly provides confidence intervals for its fitted values. It allows a DT-like value to be derived, even from data that exhibit obvious "tailing." We also showed how existing models of non-log-linear microbial inactivation, such as the Weibull model, can fit into a statistical linear model framework that dramatically simplifies their solution. We applied the log-quadratic model to thermal inactivation data for the spore-forming bacterium Clostridium botulinum and evaluated its merits compared with those of popular previously described approaches. The log-quadratic model was used as the basis of a secondary model that can capture the dependence of microbial inactivation kinetics on temperature. This model, in turn, was linked to models of spore inactivation of Sapru et al. and Rodriguez et al. that posit different physiological states for spores within a population. We believe that the log-quadratic model provides a useful framework in which to test vitalistic and mechanistic hypotheses of inactivation by thermal and other processes. Copyright © 2009, American Society for Microbiology. All Rights Reserved.
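The key computational point of the abstract is that a log-quadratic model, log10 N(t) = b0 + b1·t + b2·t², is an ordinary linear least-squares problem. A minimal sketch, using entirely synthetic data (not the paper's C. botulinum measurements) and a hypothetical DT-like value taken from the initial slope:

```python
import numpy as np

# Synthetic thermal-inactivation data (hypothetical): time (min) vs.
# log10 survivor counts showing the upward curvature ("tailing") that
# a log-quadratic model can capture and a log-linear model cannot.
t = np.array([0, 1, 2, 3, 4, 5, 6], dtype=float)
logN = np.array([7.0, 5.6, 4.5, 3.7, 3.1, 2.7, 2.5])

# Log-quadratic model: log10 N(t) = b0 + b1*t + b2*t^2.
# As a statistical linear model, it is fit by least squares on the
# design matrix [1, t, t^2].
X = np.column_stack([np.ones_like(t), t, t**2])
beta, *_ = np.linalg.lstsq(X, logN, rcond=None)
b0, b1, b2 = beta

# A DT-like value from the initial slope: minutes per log10 reduction at t=0.
D_initial = -1.0 / b1
print(np.round(beta, 3), round(D_initial, 3))
```

Because the model is linear in its coefficients, standard regression machinery also yields confidence intervals for the fitted values, as the abstract notes.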
Abstract:
Bounds on the expectation and variance of errors at the output of a multilayer feedforward neural network with perturbed weights and inputs are derived. It is assumed that errors in weights and inputs to the network are statistically independent and small. The bounds obtained are applicable to both digital and analogue network implementations and are shown to be of practical value.
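The setting can be illustrated with a small Monte Carlo check (this is not the paper's analytic bounds, only the scenario they describe): a tiny feedforward network with small, statistically independent perturbations applied to its weights and inputs, and the resulting output-error mean and variance estimated empirically. All weights here are random and hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny feedforward net (hypothetical weights) with a tanh hidden layer.
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(1, 4))
x = rng.normal(size=3)

def net(W1, W2, x):
    return float(W2 @ np.tanh(W1 @ x))

y0 = net(W1, W2, x)
sigma = 1e-3  # small, independent perturbations, as the paper assumes

# Monte Carlo estimate of the output error under perturbed weights and inputs.
errs = np.array([
    net(W1 + sigma * rng.normal(size=W1.shape),
        W2 + sigma * rng.normal(size=W2.shape),
        x + sigma * rng.normal(size=3)) - y0
    for _ in range(2000)
])
print(errs.mean(), errs.var())
```

For perturbations this small the error is nearly linear in the perturbations, so its mean is close to zero and its variance scales with sigma squared, which is the regime in which such bounds are useful.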
Abstract:
We study the rates of growth of the regret in online convex optimization. First, we show that a simple extension of the algorithm of Hazan et al. eliminates the need for a priori knowledge of the lower bound on the second derivatives of the observed functions. We then provide an algorithm, Adaptive Online Gradient Descent, which interpolates between the results of Zinkevich for linear functions and of Hazan et al. for strongly convex functions, achieving intermediate rates between √T and log T. Furthermore, we show strong optimality of the algorithm. Finally, we provide an extension of our results to general norms.
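The strongly convex end of this spectrum can be sketched concretely. The toy experiment below (entirely illustrative, not the paper's adaptive algorithm) runs online gradient descent with step size 1/(H·t) on H-strongly convex quadratic losses, the regime where regret grows like log T rather than √T:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 5000
z = rng.uniform(-1, 1, size=T)  # targets of the losses f_t(x) = (x - z_t)^2

# Online gradient descent with step size 1/(H t) for H-strongly convex
# losses (here H = 2), projected back onto the feasible set [-1, 1].
x, losses = 0.0, []
for t in range(1, T + 1):
    losses.append((x - z[t - 1]) ** 2)   # suffer loss, then update
    grad = 2.0 * (x - z[t - 1])
    x = np.clip(x - grad / (2.0 * t), -1.0, 1.0)

# Regret against the best fixed point in hindsight (grid search).
best = min(np.sum((u - z) ** 2) for u in np.linspace(-1, 1, 2001))
regret = np.sum(losses) - best
print(round(regret, 3))
```

With this step-size schedule the iterate is exactly the running mean of the observed targets, and the total regret over 5000 rounds stays a small constant multiple of log T, far below the √T growth that a 1/√t schedule would give on the same losses.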
Abstract:
The positive relationship between household income and child health is well documented in the child health literature, but the precise mechanisms via which income generates better health, and whether the income gradient increases with child age, are not well understood. This paper presents new Australian evidence on the child health–income gradient. We use data from the Longitudinal Study of Australian Children (LSAC), which involved two waves of data collection for children born between March 2003 and February 2004 (B-Cohort: 0–3 years), and between March 1999 and February 2000 (K-Cohort: 4–7 years). This data set allows us to test the robustness of some of the findings of the influential studies of Case et al. [Case, A., Lubotsky, D., Paxson, C., 2002. Economic status and health in childhood: the origins of the gradient. The American Economic Review 92 (5) 1308–1344] and Currie and Stabile [Currie, J., Stabile, M., 2003. Socioeconomic status and child health: why is the relationship stronger for older children? The American Economic Review 93 (5) 1813–1823], and a recent study by Currie et al. [Currie, A., Shields, M.A., Price, S.W., 2007. The child health/family income gradient: evidence from England. Journal of Health Economics 26 (2) 213–232]. The richness of the LSAC data set also allows us to conduct further exploration of the determinants of child health. Our results reveal an increasing income gradient by child age using covariates similar to those of Case et al. However, the income gradient disappears if we include a rich set of controls. Our results indicate that parental health and, in particular, the mother's health play a significant role, reducing the income coefficient to zero, suggesting an underlying mechanism that can explain the observed relationship between child health and family income.
Overall, our results for Australian children are similar to those produced by Propper et al. [Propper, C., Rigg, J., Burgess, S., 2007. Child health: evidence on the roles of family income and maternal mental health from a UK birth cohort. Health Economics 16 (11) 1245–1269] on their British child cohort.
Abstract:
The literature to date shows that children from poorer households tend to have worse health than their peers, and the gap between them grows with age. We investigate whether and how health shocks (as measured by the onset of chronic conditions) contribute to the income–child health gradient, and whether the contemporaneous or cumulative effects of income play important mitigating roles. We exploit a rich panel dataset with three waves, the Longitudinal Study of Australian Children. Given the availability of three waves of data, we are able to apply a range of econometric techniques (e.g. fixed and random effects) to control for unobserved heterogeneity. The paper makes several contributions to the extant literature. First, it shows that an apparent income gradient becomes relatively attenuated in our dataset when the cumulative and contemporaneous effects of household income are distinguished econometrically. Second, it demonstrates that the income–child health gradient becomes statistically insignificant when controlling for parental health and health-related behaviours or unobserved heterogeneity.
Abstract:
Integration of biometrics is considered an attractive solution to the issues associated with password-based human authentication, as well as to the secure storage and release of cryptographic keys, one of the critical issues in modern cryptography. However, the widespread adoption of bio-cryptographic solutions is somewhat restricted by the fuzziness associated with biometric measurements. Therefore, error control mechanisms must be adopted to ensure that the fuzziness of biometric inputs can be sufficiently countered. In this paper, we outline the existing techniques used in bio-cryptography and explain how they are deployed in different types of solutions. Finally, we elaborate on the important factors to be considered when choosing appropriate error correction mechanisms for a particular biometric-based solution.
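One classical way error correction counters biometric fuzziness is the fuzzy-commitment construction (Juels and Wattenberg): a random key is encoded with an error-correcting code, XORed with the biometric bits at enrollment, and recovered at verification as long as the fresh reading differs in few enough bits. The sketch below is a minimal illustration using a simple 5× repetition code and random stand-in "biometric" bits, not any specific scheme surveyed in the paper:

```python
import hashlib
import secrets

REP = 5  # repetition factor: majority vote corrects up to 2 errors per key bit

def encode(key_bits):
    return [b for b in key_bits for _ in range(REP)]

def decode(code_bits):
    return [int(sum(code_bits[i:i + REP]) > REP // 2)
            for i in range(0, len(code_bits), REP)]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

# Enrollment: bind a random key to the biometric template. Only the
# commitment and a hash of the key are stored, never the template itself.
key = [secrets.randbelow(2) for _ in range(16)]
template = [secrets.randbelow(2) for _ in range(16 * REP)]
commitment = xor(encode(key), template)
key_hash = hashlib.sha256(bytes(key)).hexdigest()

# Verification: a fresh, noisy reading (here: one flipped bit per group).
noisy = template[:]
for g in range(16):
    noisy[g * REP] ^= 1
recovered = decode(xor(commitment, noisy))
print(hashlib.sha256(bytes(recovered)).hexdigest() == key_hash)
```

Real deployments replace the repetition code with stronger codes (e.g. BCH or Reed–Solomon) matched to the error statistics of the particular biometric modality, which is exactly the selection problem the paper discusses.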
Abstract:
This paper describes a computer program that analyses cine angiograms of the heart and pressure waveforms to calculate valve gradients.
Abstract:
Index tracking is an investment approach whose primary objective is to keep the portfolio return as close as possible to that of a target index without purchasing all index components. The main purpose is to minimize the tracking error between the returns of the selected portfolio and the benchmark. In this paper, quadratic as well as linear models are presented for minimizing the tracking error. Uncertainty in the input data is handled using a tractable robust framework that controls the level of conservatism while maintaining linearity. The linearity of the proposed robust optimization models allows a simple implementation with an ordinary optimization software package to find the optimal robust solution. The proposed model employs the Morgan Stanley Capital International Index as the target index, and results are reported for six national indices: Japan, the USA, the UK, Germany, Switzerland and France. The performance of the proposed models is evaluated using several financial criteria, e.g. the information ratio, market ratio, Sharpe ratio and Treynor ratio. The preliminary results demonstrate that the proposed model lowers the tracking error while raising the values of the portfolio performance measures.
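The basic (non-robust) quadratic version of this problem is a constrained least-squares fit of portfolio returns to benchmark returns. A minimal sketch with synthetic data (the assets, benchmark, and constraints are all hypothetical; the paper's robust, linearized formulation is not reproduced here):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Hypothetical data: 60 periods of returns for 5 candidate assets, and a
# benchmark that is (unknown to the optimizer) a noisy mix of them.
R = rng.normal(0.001, 0.02, size=(60, 5))
w_true = np.array([0.4, 0.3, 0.2, 0.1, 0.0])
r_bench = R @ w_true + rng.normal(0, 0.002, size=60)

# Quadratic tracking-error model: minimize ||R w - r_bench||^2
# subject to fully-invested weights and no short selling.
def tracking_error(w):
    d = R @ w - r_bench
    return d @ d

res = minimize(tracking_error, x0=np.full(5, 0.2), method="SLSQP",
               bounds=[(0.0, 1.0)] * 5,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
w = res.x
print(np.round(w, 3))
```

Replacing the squared norm with an absolute-deviation objective yields the linear variant, which is what makes the robust counterpart in the paper solvable with ordinary linear-programming software.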
Abstract:
Melanopsin-containing intrinsically photosensitive retinal ganglion cells (ipRGCs) mediate the pupil light reflex (PLR) during light onset and at light offset (the post-illumination pupil response, PIPR). Recent evidence shows that the PLR and PIPR can provide non-invasive, objective markers of age-related retinal and optic nerve disease; however, there is no consensus on the effects of healthy ageing or refractive error on ipRGC-mediated pupil function. Here we isolated melanopsin contributions to the pupil control pathway in 59 human participants with no ocular pathology across a range of ages and refractive errors. We show that there is no effect of age or refractive error on ipRGC inputs to the human pupil control pathway. The stability of the ipRGC-mediated pupil response across the human lifespan provides a functional correlate of their robustness observed during ageing in rodent models.
Abstract:
The aim of this study was to evaluate the mechanical triggers that may cause plaque rupture. Wall shear stress (WSS) and pressure gradient are the direct mechanical forces acting on the plaque in a stenotic artery. Their influence on plaque stability remains controversial. This study used a physiologically realistic, pulsatile flow, two-dimensional, cine phase-contrast MRI sequence in a patient with a 70% carotid stenosis. Instead of considering the full patient-specific carotid bifurcation derived from MRI, only the plaque region was modelled by means of the idealised flow model. WSS reached a local maximum just distal to the stenosis followed by a negative local minimum. A pressure drop across the stenosis was found which varied significantly during systole and diastole. The ratio of the relative importance of WSS and pressure was assessed and was found to be less than 0.07% for all time phases, even at the throat of the stenosis. In conclusion, although the local high WSS at the stenosis may damage the endothelium and fissure plaque, the magnitude of WSS is small compared with the overall loading on plaque. Therefore, pressure may be the main mechanical trigger for plaque rupture, and risk stratification using stress analysis of plaque stability may only need to consider the pressure effect.
Abstract:
So far, most Phase II trials have been designed and analysed under a frequentist framework. Under this framework, a trial is designed so that the overall Type I and Type II errors of the trial are controlled at some desired levels. Recently, a number of articles have advocated the use of Bayesian designs in practice. Under a Bayesian framework, a trial is designed so that the trial stops when the posterior probability of treatment is within certain prespecified thresholds. In this article, we argue that trials under a Bayesian framework can also be designed to control frequentist error rates. We introduce a Bayesian version of Simon's well-known two-stage design to achieve this goal. We also consider two other errors, which are called Bayesian errors in this article because of their similarities to posterior probabilities. We show that our method can also control these Bayesian-type errors. We compare our method with other recent Bayesian designs in a numerical study and discuss the implications of different designs on error rates. An example of a clinical trial for patients with nasopharyngeal carcinoma is used to illustrate the differences between the designs.
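For a two-stage design of Simon's type, the frequentist operating characteristics can be computed exactly from binomial probabilities. The sketch below uses illustrative design parameters (stop for futility after 10 patients if at most 1 responds; otherwise enroll 19 more and declare the treatment promising if more than 5 respond in total), with hypothetical null and alternative response rates p0 = 0.10 and p1 = 0.30; it is not the Bayesian design proposed in the paper.

```python
from scipy.stats import binom

# Hypothetical two-stage design parameters.
n1, r1, n2, r = 10, 1, 19, 5   # stage-1 size, futility cutoff, stage-2 size, final cutoff

def prob_declare_promising(p):
    """P(pass stage 1 AND total responses > r) at true response rate p."""
    total = 0.0
    for x1 in range(r1 + 1, n1 + 1):               # stage-1 counts that continue
        total += binom.pmf(x1, n1, p) * binom.sf(r - x1, n2, p)
    return total

alpha = prob_declare_promising(0.10)   # Type I error at the null rate p0
power = prob_declare_promising(0.30)   # power at the alternative rate p1
print(round(alpha, 4), round(power, 4))
```

The same two numbers are what a frequentist design constrains directly, and what the article shows a Bayesian stopping rule can also be calibrated to control.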
Abstract:
For a wide class of semi-Markov decision processes the optimal policies are expressible in terms of the Gittins indices, which have been found useful in sequential clinical trials and pharmaceutical research planning. In general, the indices can be approximated via calibration based on dynamic programming of finite horizon. This paper provides some results on the accuracy of such approximations, and, in particular, gives the error bounds for some well known processes (Bernoulli reward processes, normal reward processes and exponential target processes).
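The calibration idea can be sketched for the simplest case, a Bernoulli reward process with a Beta posterior: the Gittins index is the retirement reward at which pulling the arm and retiring are equally attractive, and a finite-horizon dynamic program approximates the infinite-horizon value. Everything below is illustrative (discount factor 0.9 and a depth-30 truncation are assumptions, and the truncation introduces exactly the kind of approximation error the paper bounds):

```python
from functools import lru_cache

GAMMA = 0.9  # discount factor (assumed for illustration)

@lru_cache(maxsize=None)
def value(a, b, lam, depth):
    """Optimal value of choosing, at each step, between retiring on a fixed
    reward lam forever and pulling a Bernoulli arm with Beta(a, b) posterior.
    The horizon is truncated at `depth` pulls (finite-horizon DP)."""
    retire = lam / (1 - GAMMA)
    if depth == 0:
        return retire
    p = a / (a + b)  # posterior mean success probability
    pull = (p * (1 + GAMMA * value(a + 1, b, lam, depth - 1))
            + (1 - p) * GAMMA * value(a, b + 1, lam, depth - 1))
    return max(retire, pull)

def gittins_index(a, b, depth=30, tol=1e-4):
    # Calibration: bisect on the retirement reward to find the indifference point.
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if value(a, b, mid, depth) > mid / (1 - GAMMA) + 1e-9:
            lo = mid   # pulling still beats retiring, so the index is higher
        else:
            hi = mid
    return 0.5 * (lo + hi)

g_uniform = gittins_index(1, 1)
print(round(g_uniform, 3))
```

Note that the index for a Beta(1, 1) arm exceeds the posterior mean of 0.5: the surplus is the value of exploration, and how much the finite-horizon truncation distorts it is precisely the accuracy question the paper addresses.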