134 results for Error estimate.
Abstract:
We derive a new method for determining size-transition matrices (STMs) that eliminates probabilities of negative growth and accounts for individual variability. STMs are an important part of size-structured models, which are used in the stock assessment of aquatic species. The elements of STMs represent the probability of growth from one size class to another, given a time step. The growth increment over this time step can be modelled with a variety of methods, but when a population construct is assumed for the underlying growth model, the resulting STM may contain entries that predict negative growth. To solve this problem, we use a maximum likelihood method that incorporates individual variability in the asymptotic length, relative age at tagging, and measurement error to obtain von Bertalanffy growth model parameter estimates. The statistical moments for the future length given an individual's previous length measurement and time at liberty are then derived. We moment match the true conditional distributions with skewed-normal distributions and use these to accurately estimate the elements of the STMs. The method is investigated with simulated tag-recapture data and tag-recapture data gathered from the Australian eastern king prawn (Melicertus plebejus).
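A minimal sketch of the final step, assuming the conditional distribution of future length for a given starting size class has already been moment-matched to a skew-normal: each STM row is then obtained by integrating the fitted density over the destination size classes. The size-class boundaries and skew-normal parameters below are illustrative placeholders, not values from the paper.

```python
import numpy as np
from scipy.stats import skewnorm

# Hypothetical size-class boundaries (mm); placeholders, not the paper's classes.
bin_edges = np.arange(10, 61, 5)            # classes 10-15, 15-20, ..., 55-60 mm

def stm_row(a, loc, scale, edges):
    """Probabilities of moving into each destination size class, given the
    skew-normal (shape a, location loc, scale scale) that approximates the
    conditional distribution of future length for one starting size class."""
    cdf = skewnorm.cdf(edges, a, loc=loc, scale=scale)
    row = np.diff(cdf)
    return row / row.sum()                  # renormalise mass truncated outside the range

# Build the STM one row (starting size class) at a time.  The skew-normal
# parameters would come from moment matching the derived conditional moments;
# the values used here are placeholders for illustration only.
stm = np.vstack([
    stm_row(a=2.0, loc=0.5 * (lo + hi) + 3.0, scale=2.5, edges=bin_edges)
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:])
])
print(stm.shape)                            # (n_classes, n_classes); each row sums to 1
```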
Abstract:
We consider estimating the total load from frequent flow data but less frequent concentration data. There are numerous load estimation methods available, some of which are captured in various online tools. However, most estimators are subject to large statistical biases, and their associated uncertainties are often not reported. This makes interpretation difficult and makes it impossible to assess trends or determine optimal sampling regimes. In this paper, we first propose two indices for measuring the extent of sampling bias, and then provide steps for obtaining reliable load estimates that minimize the biases and make use of informative predictive variables. The key step in this approach is the development of an appropriate predictive model for concentration. This is achieved using a generalized rating-curve approach with additional predictors that capture unique features in the flow data, such as the concept of the first flush, the location of the event on the hydrograph (e.g. rise or fall) and the discounted flow. The latter may be thought of as a measure of constituent exhaustion occurring during flood events. Incorporating this additional information can significantly improve the predictability of concentration, and ultimately the precision with which the pollutant load is estimated. We also provide a measure of the standard error of the load estimate which incorporates model, spatial and/or temporal errors. This method also has the capacity to incorporate measurement error incurred through the sampling of flow. We illustrate this approach for two rivers delivering to the Great Barrier Reef, Queensland, Australia. One is a data set from the Burdekin River, consisting of total suspended sediment (TSS) and nitrogen oxide (NOx) concentrations and gauged flow for 1997. The other dataset is from the Tully River, for the period of July 2000 to June 2008. For NOx in the Burdekin, the new estimates are very similar to the ratio estimates even when there is no relationship between the concentration and the flow. However, for the Tully dataset, by incorporating the additional predictive variables, namely the discounted flow and flow phases (rising or recessing), we substantially improved the model fit, and thus the certainty with which the load is estimated.
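A hedged sketch of how a "discounted flow" covariate and a rating-curve style regression for log-concentration might be assembled; the discount rate, predictors and synthetic data are assumptions for illustration, not the paper's fitted model.

```python
import numpy as np

# Synthetic flow series at regular time steps (m^3/s), for illustration only.
rng = np.random.default_rng(0)
flow = np.exp(rng.normal(2.0, 1.0, size=500))

def discounted_flow(q, discount=0.95):
    """'Discounted flow': an exponentially weighted sum of past flows, a possible
    proxy for constituent exhaustion during an event (discount rate is assumed)."""
    d = np.zeros_like(q)
    for t in range(1, len(q)):
        d[t] = discount * d[t - 1] + q[t - 1]
    return d

dflow = discounted_flow(flow)
rising = np.r_[False, np.diff(flow) > 0]     # crude rise/fall indicator for the hydrograph

# Rating-curve style regression for log-concentration:
# log C = b0 + b1*log Q + b2*log(1 + discounted flow) + b3*rising + error
X = np.column_stack([np.ones_like(flow), np.log(flow),
                     np.log1p(dflow), rising.astype(float)])
log_conc = 0.5 + 0.4 * np.log(flow) - 0.1 * np.log1p(dflow) + rng.normal(0, 0.3, 500)
beta, *_ = np.linalg.lstsq(X, log_conc, rcond=None)
print(beta)                                  # fitted rating-curve coefficients
```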
Abstract:
There are numerous load estimation methods available, some of which are captured in various online tools. However, most estimators are subject to large statistical biases, and their associated uncertainties are often not reported. This makes interpretation difficult and makes it impossible to assess trends or determine optimal sampling regimes. In this paper, we first propose two indices for measuring the extent of sampling bias, and then provide steps for obtaining reliable load estimates by minimizing the biases and making use of possible predictive variables. The load estimation procedure can be summarized by the following four steps:
- (i) output the flow rates at regular time intervals (e.g. 10 minutes) using a time series model that captures all the peak flows;
- (ii) output the predicted flow rates as in (i) at the concentration sampling times, if the corresponding flow rates are not collected;
- (iii) establish a predictive model for the concentration data, which incorporates all possible predictor variables, and output the predicted concentrations at the regular time intervals as in (i); and
- (iv) obtain the sum of all the products of the predicted flow and the predicted concentration over the regular time intervals to represent an estimate of the load.
The key step in this approach is the development of an appropriate predictive model for concentration. This is achieved using a generalized regression (rating-curve) approach with additional predictors that capture unique features in the flow data, namely the concept of the first flush, the location of the event on the hydrograph (e.g. rise or fall) and cumulative discounted flow. The latter may be thought of as a measure of constituent exhaustion occurring during flood events. The model also has the capacity to accommodate autocorrelation in model errors which result from intensive sampling during floods. Incorporating this additional information can significantly improve the predictability of concentration, and ultimately the precision with which the pollutant load is estimated. We also provide a measure of the standard error of the load estimate which incorporates model, spatial and/or temporal errors. This method also has the capacity to incorporate measurement error incurred through the sampling of flow. We illustrate this approach using the concentrations of total suspended sediment (TSS) and nitrogen oxide (NOx) and gauged flow data from the Burdekin River, a catchment delivering to the Great Barrier Reef. The sampling biases for NOx concentrations range from 2 to 10 times, indicating severe bias. As expected, the traditional averaging and extrapolation methods produce much higher estimates than those obtained when sampling bias is taken into account.
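A minimal sketch of step (iv), assuming predicted flow and concentration series at regular intervals are already available; the unit handling (m³/s and mg/L to tonnes) and the synthetic predictions are assumptions for illustration.

```python
import numpy as np

def load_estimate(pred_flow, pred_conc, dt_seconds):
    """Step (iv): sum of predicted flow (m^3/s) times predicted concentration
    (mg/L) over regular intervals of dt_seconds, returned in tonnes.
    The unit conversion here is an assumption for illustration."""
    # mg/L equals g/m^3, so flow * conc = g/s; multiply by the interval and convert g -> t.
    return np.sum(pred_flow * pred_conc * dt_seconds) / 1e6

# Example with placeholder predictions at 10-minute intervals over one month.
rng = np.random.default_rng(1)
flow_pred = np.exp(rng.normal(2.0, 1.0, size=6 * 24 * 30))
conc_pred = np.exp(0.5 + 0.4 * np.log(flow_pred) + rng.normal(0, 0.3, flow_pred.size))
print(f"Estimated load: {load_estimate(flow_pred, conc_pred, 600):.1f} t")
```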
Abstract:
The Fabens method is commonly used to estimate growth parameters k and L∞ in the von Bertalanffy model from tag-recapture data. However, the Fabens method of estimation has an inherent bias when individual growth is variable. This paper presents an asymptotically unbiased method using a maximum likelihood approach that takes account of individual variability in both maximum length and age-at-tagging. It is assumed that each individual's growth follows a von Bertalanffy curve with its own maximum length and age-at-tagging. The parameter k is assumed to be a constant to ensure that the mean growth follows a von Bertalanffy curve and to avoid overparameterization. Our method also makes more efficient use of the measurements at tagging and recapture and includes diagnostic techniques for checking distributional assumptions. The method is reasonably robust and performs better than the Fabens method when individual growth differs from the von Bertalanffy relationship. When measurement error is negligible, the estimation involves maximizing the profile likelihood of one parameter only. The method is applied to tag-recapture data for the grooved tiger prawn (Penaeus semisulcatus) from the Gulf of Carpentaria, Australia.
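For orientation, a simplified sketch of a growth-increment likelihood that allows individual variability in the asymptotic length, in the spirit of the approach described above; it ignores the dependence of L∞ on length at tagging, relative age at tagging and measurement error, all of which the paper's full method accounts for. Parameter values and data are simulated placeholders.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, l1, dl, dt):
    """Simplified increment likelihood with L_inf ~ Normal(mu_inf, sd_inf^2):
    dl | l1, dt ~ Normal((mu_inf - l1)*g, (sd_inf*g)^2 + sd_eps^2), g = 1 - exp(-k*dt).
    This is a stand-in, not the paper's full likelihood."""
    k, mu_inf, sd_inf, sd_eps = params
    if min(k, sd_inf, sd_eps) <= 0:
        return np.inf
    g = 1.0 - np.exp(-k * dt)
    mean = (mu_inf - l1) * g
    var = (sd_inf * g) ** 2 + sd_eps ** 2
    return 0.5 * np.sum(np.log(2 * np.pi * var) + (dl - mean) ** 2 / var)

# Simulated tag-recapture data (placeholder parameter values).
rng = np.random.default_rng(2)
n = 300
l_inf = rng.normal(38.0, 3.0, n)
l1 = rng.uniform(15, 30, n)
dt = rng.uniform(0.1, 1.5, n)
dl = (l_inf - l1) * (1 - np.exp(-1.2 * dt)) + rng.normal(0, 0.8, n)

fit = minimize(neg_log_lik, x0=[1.0, 35.0, 2.0, 1.0], args=(l1, dl, dt),
               method="Nelder-Mead")
print(fit.x)    # estimates of k, mu_inf, sd_inf, sd_eps
```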
Abstract:
So far, most Phase II trials have been designed and analysed under a frequentist framework. Under this framework, a trial is designed so that the overall Type I and Type II errors of the trial are controlled at some desired levels. Recently, a number of articles have advocated the use of Bayesian designs in practice. Under a Bayesian framework, a trial is designed so that the trial stops when the posterior probability of treatment is within certain prespecified thresholds. In this article, we argue that trials under a Bayesian framework can also be designed to control frequentist error rates. We introduce a Bayesian version of Simon's well-known two-stage design to achieve this goal. We also consider two other errors, which we call Bayesian errors in this article because of their similarities to posterior probabilities. We show that our method can also control these Bayesian-type errors. We compare our method with other recent Bayesian designs in a numerical study and discuss the implications of the different designs for error rates. An example of a clinical trial for patients with nasopharyngeal carcinoma is used to illustrate the differences between the designs.
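To make the frequentist side concrete, a sketch of how the Type I error and power of a Simon-style two-stage design can be computed exactly from binomial probabilities; the design parameters used are a commonly quoted example, and the article's Bayesian stopping rules are not reproduced here.

```python
from scipy.stats import binom

def reject_prob(p, r1, n1, r, n):
    """Probability of declaring the treatment promising in a Simon-style
    two-stage design: continue past stage 1 if responses > r1 out of n1,
    then reject H0 if total responses > r out of n."""
    total = 0.0
    for x1 in range(r1 + 1, n1 + 1):
        total += binom.pmf(x1, n1, p) * binom.sf(r - x1, n - n1, p)
    return total

# Simon's optimal design for p0 = 0.1 vs p1 = 0.3 (alpha = beta = 0.1) is often
# quoted as r1/n1 = 1/12 and r/n = 5/35; treat these values as an example.
p0, p1 = 0.1, 0.3
r1, n1, r, n = 1, 12, 5, 35
print("Type I error:", reject_prob(p0, r1, n1, r, n))
print("Power:       ", reject_prob(p1, r1, n1, r, n))
```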
Abstract:
For a wide class of semi-Markov decision processes, the optimal policies are expressible in terms of the Gittins indices, which have been found useful in sequential clinical trials and pharmaceutical research planning. In general, the indices can be approximated via calibration based on finite-horizon dynamic programming. This paper provides some results on the accuracy of such approximations and, in particular, gives error bounds for some well-known processes (Bernoulli reward processes, normal reward processes and exponential target processes).
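A rough sketch of the calibration idea for a Bernoulli reward process: compare the Beta-state arm against a standard arm paying a known rate λ under a truncated horizon, and search for the λ of indifference. The discount factor, horizon and implementation details are assumptions for illustration, not the processes or bounds analysed in the paper.

```python
from functools import lru_cache

GAMMA = 0.9       # discount factor (assumed)
HORIZON = 60      # finite-horizon truncation used for the calibration

@lru_cache(maxsize=None)
def value(a, b, lam, n):
    """Finite-horizon value of a Beta(a, b) Bernoulli arm when a standard arm
    paying lam per step can be retired to at any time."""
    if n == 0:
        return 0.0
    retire = lam * (1 - GAMMA ** n) / (1 - GAMMA)
    p = a / (a + b)
    pull = p * (1 + GAMMA * value(a + 1, b, lam, n - 1)) \
           + (1 - p) * GAMMA * value(a, b + 1, lam, n - 1)
    return max(retire, pull)

def gittins_index(a, b, tol=1e-4):
    """Bisection on lam: the index is the lam at which pulling the risky arm
    and retiring to the standard arm are equally attractive."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        retire = mid * (1 - GAMMA ** HORIZON) / (1 - GAMMA)
        if value(a, b, mid, HORIZON) > retire + 1e-12:
            lo = mid          # risky arm still preferred, so the index is higher
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(gittins_index(1, 1))    # approximate index of an uninformative Beta(1, 1) arm
```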
Abstract:
Recovering the motion of a non-rigid body from a set of monocular images permits the analysis of dynamic scenes in uncontrolled environments. However, the extension of factorisation algorithms for rigid structure from motion to the low-rank non-rigid case has proved challenging. This stems from the comparatively hard problem of finding a linear “corrective transform” which recovers the projection and structure matrices from an ambiguous factorisation. We show that this greater difficulty is due to the need to find multiple solutions to a non-trivial problem, and we cast a number of previous approaches as alleviating this issue by either a) introducing constraints on the basis, making the problems non-identical, or b) incorporating heuristics to encourage a diverse set of solutions, making the problems inter-dependent. While it has previously been recognised that finding a single solution to this problem is sufficient to estimate cameras, we show that it is possible to bootstrap this partial solution to find the complete transform in closed form. However, we acknowledge that our method minimises an algebraic error and is thus inherently sensitive to deviation from the low-rank model. We compare our closed-form solution for non-rigid structure with known cameras to the closed-form solution of Dai et al. [1], which we find to produce only coplanar reconstructions. We therefore recommend that 3D reconstruction error always be measured relative to a trivial reconstruction such as a planar one.
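For context, a minimal numpy sketch of the ambiguous rank-3K factorisation referred to above; the corrective transform G that the paper recovers is exactly the unresolved ambiguity noted in the comments. This is not the paper's closed-form solution, and the toy data are placeholders.

```python
import numpy as np

def ambiguous_factorisation(W, K):
    """Rank-3K factorisation of a 2F x P measurement matrix W into
    M_hat (2F x 3K) and B_hat (3K x P).  Any invertible 3K x 3K matrix G gives
    another valid pair (M_hat @ G, inv(G) @ B_hat); recovering that 'corrective
    transform' G is the hard step discussed in the abstract."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    r = 3 * K
    M_hat = U[:, :r] * np.sqrt(s[:r])
    B_hat = np.sqrt(s[:r])[:, None] * Vt[:r]
    return M_hat, B_hat

# Toy example: random low-rank measurements for F frames, P points, K basis shapes.
F, P, K = 10, 40, 2
rng = np.random.default_rng(3)
W = rng.normal(size=(2 * F, 3 * K)) @ rng.normal(size=(3 * K, P))
M_hat, B_hat = ambiguous_factorisation(W, K)
print(np.allclose(M_hat @ B_hat, W))    # True up to numerical precision
```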
Abstract:
The decision of Henry J in Ginn & Anor v Ginn; ex parte Absolute Law Lawyers & Attorneys [2015] QSC 49 provides clarification of the approach to be taken on a default costs assessment under r 708 of the Uniform Civil Procedure Rules 1999.
Abstract:
A 59-year-old man was mistakenly prescribed Slow-Na instead of Slow-K due to incorrect selection from a drop-down list in the prescribing software. This error was identified by a pharmacist during a home medicine review (HMR) before the patient began taking the supplement. The reported error emphasizes the need for vigilance due to the emergence of novel look-alike, sound-alike (LASA) drug pairings. This case highlights the important role of pharmacists in medication safety.
Abstract:
Age estimation from facial images is increasingly receiving attention as a way to address age-based access control, age-adaptive targeted marketing, and other applications. Since even humans can make errors due to the complex biological processes involved, finding a robust method remains a research challenge today. In this paper, we propose a new framework for the integration of Active Appearance Models (AAM), Local Binary Patterns (LBP), Gabor wavelets (GW) and Local Phase Quantization (LPQ) in order to obtain a highly discriminative feature representation which is able to model shape, appearance, wrinkles and skin spots. In addition, this paper proposes a novel flexible hierarchical age estimation approach consisting of a multi-class Support Vector Machine (SVM) to classify a subject into an age group, followed by a Support Vector Regression (SVR) to estimate a specific age. The errors that may occur in the classification step, caused by the hard boundaries between age classes, are compensated for in the specific age estimation by a flexible overlapping of the age ranges. The performance of the proposed approach was evaluated on the FG-NET Aging and MORPH Album 2 datasets, and mean absolute errors (MAE) of 4.50 and 5.86 years were achieved, respectively. The robustness of the proposed approach was also evaluated on a merge of both datasets, and a MAE of 5.20 years was achieved. Furthermore, we compared age estimation by humans with the proposed approach and found that the machine outperforms humans. The proposed approach is competitive with the current state of the art and provides additional robustness to blur, lighting and expression variation brought about by the local phase features.
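A hedged scikit-learn sketch of the hierarchical idea (group classification followed by within-group regression trained on overlapping age ranges); the features, group boundaries and overlap width are placeholders, and the AAM/LBP/Gabor/LPQ feature extraction is not reproduced.

```python
import numpy as np
from sklearn.svm import SVC, SVR

rng = np.random.default_rng(4)
X = rng.normal(size=(600, 50))                      # stand-in feature vectors
age = rng.uniform(0, 70, size=600)
groups = np.digitize(age, bins=[20, 40])            # 3 assumed age groups: <20, 20-40, >40

clf = SVC(kernel="rbf").fit(X, groups)              # step 1: classify into an age group

# Step 2: one SVR per group, trained on a widened (overlapping) age range so that
# classification errors near group boundaries can still be compensated for.
bounds = [(-np.inf, 20), (20, 40), (40, np.inf)]
overlap = 5.0                                       # overlap width is an assumption
regs = []
for lo, hi in bounds:
    mask = (age >= lo - overlap) & (age <= hi + overlap)
    regs.append(SVR(kernel="rbf").fit(X[mask], age[mask]))

def predict_age(x):
    g = clf.predict(x.reshape(1, -1))[0]
    return regs[g].predict(x.reshape(1, -1))[0]

print(predict_age(X[0]))
```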
Abstract:
We present substantial evidence for the existence of a bias in the distribution of births of leading US politicians in favour of those who were the eldest in their cohort at school. This result adds to the research on the long-term effects of relative age among peers at school. We discuss parametric and non-parametric tests to identify this effect, and we show that it is not driven by measurement error, redshirting or a sorting effect of highly educated parents. The magnitude of the effect that we estimate is larger than what other studies on ‘relative age effects’ have found for broader populations but is in general consistent with research that looks at professional sportsmen. We also find that relative age does not seem to correlate with the quality of elected politicians.
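One simple test in this spirit, sketched with placeholder numbers: a chi-square goodness-of-fit comparison of politicians' birth quarters (relative to the school-entry cutoff) against population birth shares. The counts and shares below are invented for illustration and are not the paper's data.

```python
from scipy.stats import chisquare

# Hypothetical counts of politicians born in each quarter after the school-entry
# cutoff (Q1 = oldest in cohort), and expected shares from population birth
# statistics; both sets of numbers are placeholders.
observed = [160, 130, 120, 110]
expected_share = [0.24, 0.25, 0.25, 0.26]
n = sum(observed)
expected = [s * n for s in expected_share]

stat, p_value = chisquare(observed, f_exp=expected)
print(f"chi-square = {stat:.2f}, p = {p_value:.4f}")
```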
Abstract:
Thickness measurements derived from optical coherence tomography (OCT) images of the eye are a fundamental clinical and research metric, since they provide valuable information regarding the eye’s anatomical and physiological characteristics and can assist in the diagnosis and monitoring of numerous ocular conditions. Despite the importance of these measurements, limited attention has been given to the methods used to estimate thickness in OCT images of the eye. Most current studies employing OCT use an axial thickness metric, but there is evidence that axial thickness measures may be biased by tilt and curvature of the image. In this paper, standard axial thickness calculations are compared with a variety of alternative metrics for estimating tissue thickness. These methods were tested on a data set of wide-field chorio-retinal OCT scans (field of view (FOV) 60° × 25°) to examine their performance across a wide region of interest and to demonstrate the potential effect of curvature of the posterior segment of the eye on the thickness estimates. Similarly, the effect of image tilt was systematically examined with the same range of proposed metrics. The results demonstrate that image tilt and curvature of the posterior segment can affect axial tissue thickness calculations, and that alternative metrics that are not biased by these effects should be considered. This study demonstrates the need to consider alternative methods of calculating tissue thickness in order to avoid measurement error due to image tilt and curvature.
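A small sketch of why tilt biases axial thickness: on a synthetic tilted layer, the axial measure (depth difference at the same lateral position) is inflated, while a measure taken along the local boundary normal recovers the true separation. This illustrates the general effect only; the specific alternative metrics evaluated in the paper are not reproduced, and the geometry below is a placeholder.

```python
import numpy as np

# Synthetic tilted layer: two parallel boundaries separated by 0.1 mm,
# tilted by 20 degrees, sampled across a 3 mm wide scan.
tilt = np.deg2rad(20)
x = np.linspace(0, 3, 301)                    # lateral position (mm)
top = np.tan(tilt) * x                        # anterior boundary depth (mm)
true_thickness = 0.1
bottom = top + true_thickness / np.cos(tilt)  # posterior boundary along the axial direction

# Axial thickness: depth difference at the same lateral position.
axial = bottom - top

# Normal thickness: distance along the local normal of the anterior boundary
# (for a planar boundary this equals axial thickness * cos(tilt)).
slope = np.gradient(top, x)
normal = axial * np.cos(np.arctan(slope))

print(f"axial mean: {axial.mean():.3f} mm, normal mean: {normal.mean():.3f} mm")
# The axial measure is inflated by the tilt; the normal measure recovers ~0.1 mm.
```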
Abstract:
The paper presents an innovative approach to modelling the causal relationships of human errors in rail crack incidents (RCI) from a managerial perspective. A Bayesian belief network is developed to model RCI by considering the human errors of designers, manufacturers, operators and maintainers (DMOM) and the causal relationships involved. A set of dependent variables whose combinations express the relevant functions performed by each DMOM participant is used to model the causal relationships. A total of 14 RCI on Hong Kong’s mass transit railway (MTR) from 2008 to 2011 are used to illustrate the application of the model. Bayesian inference is used to conduct an importance analysis to assess the impact of the participants’ errors. Sensitivity analysis is then employed to gauge the effect of an increased probability of occurrence of human errors on RCI. Finally, strategies for human error identification and mitigation of RCI are proposed. The identification of the maintainer’s ability in the case study as the most important factor influencing the probability of RCI implies a priority need to strengthen the maintenance management of the MTR system, and suggests that improving the inspection ability of the maintainer is likely to be an effective strategy for RCI risk mitigation.
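A toy sketch of the sensitivity-analysis step using a simple noisy-OR combination of DMOM error probabilities; the probabilities and the combination rule are assumptions for illustration, not the paper's belief-network structure or conditional probability tables.

```python
# Placeholder baseline probabilities of error by designer, manufacturer, operator
# and maintainer (DMOM), and the probability that each error, if it occurs,
# leads to a rail crack incident (combined noisy-OR style).
p_error = {"designer": 0.05, "manufacturer": 0.08, "operator": 0.10, "maintainer": 0.15}
p_causes_rci = {"designer": 0.20, "manufacturer": 0.25, "operator": 0.15, "maintainer": 0.40}

def p_rci(p_error):
    """P(RCI) assuming independent error sources combined with a noisy-OR:
    an RCI fails to occur only if every source fails to trigger it."""
    no_rci = 1.0
    for k, pe in p_error.items():
        no_rci *= 1.0 - pe * p_causes_rci[k]
    return 1.0 - no_rci

baseline = p_rci(p_error)
print(f"baseline P(RCI) = {baseline:.4f}")

# Sensitivity analysis: increase each participant's error probability by 50%
# and record the change in P(RCI).
for k in p_error:
    perturbed = dict(p_error, **{k: min(1.0, 1.5 * p_error[k])})
    print(f"{k:>12}: P(RCI) = {p_rci(perturbed):.4f}")
```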
Genetic analysis of structural brain connectivity using DICCCOL models of diffusion MRI in 522 twins
Abstract:
Genetic and environmental factors affect white matter connectivity in the normal brain, and they also influence diseases in which brain connectivity is altered. Little is known about genetic influences on brain connectivity, despite wide variations in the brain's neural pathways. Here we applied the 'DICCCOL' framework to analyze structural connectivity in 261 twin pairs (522 participants; mean age 21.8 ± 2.7 (SD) years). We encoded connectivity patterns by projecting the white matter (WM) bundles of all 'DICCCOLs' as a tracemap (TM). Next we fitted an A/C/E structural equation model to estimate additive genetic (A), common environmental (C), and unique environmental/error (E) components of the observed variations in brain connectivity. We found 44 'heritable DICCCOLs' whose connectivity was genetically influenced (a² > 1%); half of them showed significant heritability (a² > 20%). Our analysis of genetic influences on WM structural connectivity suggests high heritability for some WM projection patterns, yielding new targets for genome-wide association studies.
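For reference, a minimal sketch of the A/C/E decomposition using Falconer-style estimates from MZ and DZ twin correlations; the paper fits a full structural equation model per DICCCOL, which this closed-form approximation only illustrates, and the correlations below are placeholders.

```python
def falconer_ace(r_mz, r_dz):
    """Rough A/C/E variance components from MZ and DZ twin correlations:
    a2 = 2*(r_MZ - r_DZ), c2 = 2*r_DZ - r_MZ, e2 = 1 - r_MZ.  A full SEM fit,
    as used in the paper, constrains the components and provides standard errors."""
    a2 = 2.0 * (r_mz - r_dz)
    c2 = 2.0 * r_dz - r_mz
    e2 = 1.0 - r_mz
    return a2, c2, e2

# Example with placeholder correlations for one connectivity measure.
a2, c2, e2 = falconer_ace(r_mz=0.55, r_dz=0.30)
print(f"A = {a2:.2f}, C = {c2:.2f}, E = {e2:.2f}")
```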