985 results for "Error estimate".


Relevance: 20.00%

Abstract:

We present substantial evidence for the existence of a bias in the distribution of births of leading US politicians in favour of those who were the eldest in their cohort at school. This result adds to the research on the long-term effects of relative age among peers at school. We discuss parametric and non-parametric tests to identify this effect, and we show that it is not driven by measurement error, redshirting or a sorting effect of highly educated parents. The magnitude of the effect that we estimate is larger than what other studies on ‘relative age effects’ have found for broader populations but is in general consistent with research that looks at professional sportsmen. We also find that relative age does not seem to correlate with the quality of elected politicians.

Relevance: 20.00%

Abstract:

Digital elevation models (DEMs) have been an important topic in geography and surveying sciences for decades due to their geomorphological importance as the reference surface for gravitation-driven material flow, as well as their wide range of uses and applications. When a DEM is used in terrain analysis, for example in automatic drainage basin delineation, errors of the model accumulate in the analysis results. Investigation of this phenomenon is known as error propagation analysis, which has a direct influence on decision-making based on interpretations and applications of terrain analysis. Additionally, it may have an indirect influence on data acquisition and DEM generation. The focus of the thesis was on fine toposcale DEMs, which are typically represented in a 5–50 m grid and used at application scales of 1:10 000–1:50 000. The thesis presents a three-step framework for investigating error propagation in DEM-based terrain analysis. The framework includes methods for visualising the morphological gross errors of DEMs, exploring the statistical and spatial characteristics of the DEM error, performing analytical and simulation-based error propagation analysis, and interpreting the error propagation analysis results. The DEM error model was built using geostatistical methods. The results show that appropriate and exhaustive reporting of the various aspects of fine toposcale DEM error is a complex task. This is due to the high number of outliers in the error distribution and to morphological gross errors, which are detectable with the presented visualisation methods. In addition, the use of a global characterisation of DEM error is a gross generalisation of reality due to the small extent of the areas in which the assumption of stationarity is not violated. This was shown using an exhaustive high-quality reference DEM based on airborne laser scanning and local semivariogram analysis. The error propagation analysis revealed that, as expected, an increase in the DEM vertical error increases the error in surface derivatives. However, contrary to expectations, the spatial autocorrelation of the model appears to have varying effects on the error propagation analysis depending on the application. The use of a spatially uncorrelated DEM error model has been considered a 'worst-case scenario', but this view is now challenged because none of the DEM derivatives investigated in the study had maximum variation with spatially uncorrelated random error. Significant performance improvement was achieved in simulation-based error propagation analysis by applying process convolution in generating realisations of the DEM error model. In addition, a typology of uncertainty in drainage basin delineations is presented.
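
As an illustration of the simulation-based approach described above, the following sketch propagates a spatially autocorrelated DEM error model into a slope calculation by Monte Carlo simulation, generating error realisations by process convolution (Gaussian smoothing of white noise). The surface, grid size, error standard deviation and correlation range are all hypothetical stand-ins, not the thesis's data or exact method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# Synthetic 5 m grid DEM (hypothetical surface standing in for a real fine toposcale DEM)
nx, ny, cell = 200, 200, 5.0
x, y = np.meshgrid(np.arange(nx) * cell, np.arange(ny) * cell)
dem = 50.0 * np.sin(x / 300.0) + 30.0 * np.cos(y / 400.0)

def slope_deg(z, cell):
    """Slope in degrees from finite differences."""
    dzdy, dzdx = np.gradient(z, cell)
    return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

def correlated_error(shape, sigma_z, range_cells, rng):
    """Process-convolution realisation: white noise smoothed with a Gaussian
    kernel, rescaled to the target error standard deviation sigma_z."""
    field = gaussian_filter(rng.standard_normal(shape), sigma=range_cells)
    return field * (sigma_z / field.std())

n_sim, sigma_z = 200, 1.0                      # assumed 1 m vertical error
slopes_corr, slopes_uncorr = [], []
for _ in range(n_sim):
    slopes_corr.append(slope_deg(dem + correlated_error(dem.shape, sigma_z, 4.0, rng), cell))
    slopes_uncorr.append(slope_deg(dem + sigma_z * rng.standard_normal(dem.shape), cell))

# Per-cell standard deviation of the derived slope, averaged over the grid
print("mean slope s.d. (correlated error):   %.3f deg" % np.stack(slopes_corr).std(axis=0).mean())
print("mean slope s.d. (uncorrelated error): %.3f deg" % np.stack(slopes_uncorr).std(axis=0).mean())
```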

Relevance: 20.00%

Abstract:

This thesis addresses the modeling of financial time series, especially stock market returns and daily price ranges. Modeling data of this kind can be approached with so-called multiplicative error models (MEM). These models nest several well-known time series models such as GARCH, ACD and CARR models. They are able to capture many well-established features of financial time series, including volatility clustering and leptokurtosis. In contrast to these phenomena, different kinds of asymmetries have received relatively little attention in the existing literature. In this thesis asymmetries arise from various sources. They are observed in both conditional and unconditional distributions, for variables with non-negative values and for variables that take values on the real line. In the multivariate context asymmetries can be observed in the marginal distributions as well as in the relationships between the variables modeled. New methods for all these cases are proposed. Chapter 2 considers GARCH models and the modeling of returns of two stock market indices. The chapter introduces the so-called generalized hyperbolic (GH) GARCH model to account for asymmetries in both the conditional and unconditional distribution. In particular, two special cases of the GARCH-GH model which describe the data most accurately are proposed. They are found to improve the fit of the model when compared to symmetric GARCH models. The advantages of accounting for asymmetries are also observed through Value-at-Risk applications. Both theoretical and empirical contributions are provided in Chapter 3 of the thesis. In this chapter the so-called mixture conditional autoregressive range (MCARR) model is introduced, examined and applied to daily price ranges of the Hang Seng Index. The conditions for the strict and weak stationarity of the model, as well as an expression for the autocorrelation function, are obtained by writing the MCARR model as a first-order autoregressive process with random coefficients. The chapter also introduces the inverse gamma (IG) distribution to CARR models. The advantages of the CARR-IG and MCARR-IG specifications over conventional CARR models are found in the empirical application both in- and out-of-sample. Chapter 4 discusses the simultaneous modeling of absolute returns and daily price ranges. In this part of the thesis a vector multiplicative error model (VMEM) with an asymmetric Gumbel copula is found to provide substantial benefits over the existing VMEM models based on elliptical copulas. The proposed specification is able to capture the highly asymmetric dependence of the modeled variables, thereby improving the performance of the model considerably. The economic significance of the results is established by examining the information content of the derived volatility forecasts.
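
For intuition about the multiplicative error models discussed above, the following sketch simulates a basic CARR(1,1) recursion with unit-mean exponential innovations. This is only the plain special case; the mixture (MCARR) and inverse gamma variants proposed in the thesis are not reproduced here, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# CARR(1,1): x_t = lam_t * eps_t with E[eps_t] = 1, and
# lam_t = omega + alpha * x_{t-1} + beta * lam_{t-1}
omega, alpha, beta = 0.05, 0.15, 0.80   # illustrative parameter values
T = 1000
lam = np.empty(T)
x = np.empty(T)
lam[0] = omega / (1.0 - alpha - beta)   # unconditional mean as the start value
x[0] = lam[0] * rng.exponential(1.0)

for t in range(1, T):
    lam[t] = omega + alpha * x[t - 1] + beta * lam[t - 1]
    x[t] = lam[t] * rng.exponential(1.0)  # unit-mean exponential innovation

print("sample mean of simulated ranges: %.3f" % x.mean())
print("implied unconditional mean:      %.3f" % (omega / (1.0 - alpha - beta)))
```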

Relevance: 20.00%

Abstract:

Fisheries management agencies around the world collect age data for the purpose of assessing the status of natural resources in their jurisdiction. Estimates of mortality rates are key information for assessing the sustainability of fish stock exploitation. Unlike in medical research or manufacturing, where survival analysis is routinely applied to estimate failure rates, survival analysis has seldom been applied in fisheries stock assessment, despite the similar purposes of these fields of applied statistics. In this paper, we developed hazard functions to model the dynamics of an exploited fish population. These functions were used to estimate all parameters necessary for stock assessment (including natural and fishing mortality rates as well as gear selectivity) by maximum likelihood using age data from a sample of the catch. This novel application of survival analysis to fisheries stock assessment was tested by Monte Carlo simulation to verify that it provides unbiased estimates of the relevant quantities. The method was applied to data from the Queensland (Australia) sea mullet (Mugil cephalus) commercial fishery collected between 2007 and 2014. It provided, for the first time, an estimate of the natural mortality affecting this stock: 0.22 ± 0.08 year⁻¹.
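
As a rough sketch of the survival-analysis idea, the example below estimates a single constant total mortality rate from simulated ages in a catch sample by maximum likelihood, assuming an exponential survival model with full selectivity above the age at recruitment. The paper's hazard-function model additionally separates natural and fishing mortality and estimates gear selectivity, which this simplified sketch does not attempt.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)

# Simulate ages (years beyond recruitment) under constant total mortality
true_Z = 0.55                            # hypothetical total mortality (year^-1)
ages = rng.exponential(1.0 / true_Z, size=500)

def negloglik(Z, ages):
    """Negative log-likelihood of an exponential survival model with a
    constant hazard Z (fishing plus natural mortality combined)."""
    if Z <= 0.0:
        return np.inf
    return -np.sum(np.log(Z) - Z * ages)

res = minimize_scalar(negloglik, args=(ages,), bounds=(1e-3, 5.0), method="bounded")
Z_hat = res.x
se = Z_hat / np.sqrt(len(ages))          # exponential-rate MLE standard error: Z / sqrt(n)
print(f"estimated Z = {Z_hat:.3f} +/- {se:.3f} year^-1 (true value {true_Z})")
```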

Relevance: 20.00%

Abstract:

Objective To identify measures that most closely relate to hydration in healthy Brahman-cross neonatal calves that experience milk deprivation. Methods In a dry tropical environment, eight neonatal Brahman-cross calves were prevented from suckling for 2–3 days, during which measurements were performed twice daily. Results Mean body water, as estimated by the mean urea space, was 74 ± 3% of body weight at full hydration. The mean decrease in hydration was 7.3 ± 1.1% per day. The rate of decrease was more than three-fold higher during the day than at night. At an ambient temperature of 39°C, the decrease in hydration averaged 1.1% hourly. The measures most useful in predicting the degree of hydration in both simple and multiple-regression prediction models were body weight, hindleg length, girth, ambient and oral temperatures, eyelid tenting, alertness score and plasma sodium. These parameters are different from those recommended for assessing calves with diarrhoea. Single-measure predictions had a standard error of at least 5%, which reduced to 3–4% if multiple measures were used. Conclusion We conclude that simple assessment of non-suckling Brahman-cross neonatal calves can estimate the severity of dehydration, but the estimates are imprecise. Dehydration in healthy neonatal calves that do not have access to milk can exceed 20% (>15% weight loss) in 1–3 days under tropical conditions, and at this point some are unable to recover without clinical intervention.
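
A minimal sketch of a multiple-regression prediction model of the kind referred to above is given below, fitted by ordinary least squares on synthetic data. The predictors, coefficients and noise level are hypothetical and are not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in data: hydration (% of full) and three candidate predictors
n = 80
body_weight = rng.normal(30.0, 3.0, n)     # kg (hypothetical)
plasma_na = rng.normal(145.0, 5.0, n)      # mmol/L (hypothetical)
eyelid_tent = rng.normal(2.0, 0.5, n)      # s (hypothetical)
hydration = (100.0
             - 0.8 * (plasma_na - 140.0)
             - 3.0 * (eyelid_tent - 2.0)
             + 0.3 * (body_weight - 30.0)
             + rng.normal(0.0, 2.0, n))

# Ordinary least squares with an intercept column
X = np.column_stack([np.ones(n), body_weight, plasma_na, eyelid_tent])
coef, *_ = np.linalg.lstsq(X, hydration, rcond=None)
resid_sd = np.std(hydration - X @ coef, ddof=X.shape[1])

print("coefficients (intercept, weight, Na, tenting):", np.round(coef, 2))
print("residual s.d. of the prediction: %.2f %%" % resid_sd)
```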

Relevance: 20.00%

Abstract:

Thickness measurements derived from optical coherence tomography (OCT) images of the eye are a fundamental clinical and research metric, since they provide valuable information regarding the eye's anatomical and physiological characteristics and can assist in the diagnosis and monitoring of numerous ocular conditions. Despite the importance of these measurements, limited attention has been given to the methods used to estimate thickness in OCT images of the eye. Most current studies employing OCT use an axial thickness metric, but there is evidence that axial thickness measures may be biased by tilt and curvature of the image. In this paper, standard axial thickness calculations are compared with a variety of alternative metrics for estimating tissue thickness. These methods were tested on a data set of wide-field chorio-retinal OCT scans (field of view (FOV) 60° x 25°) to examine their performance across a wide region of interest and to demonstrate the potential effect of curvature of the posterior segment of the eye on the thickness estimates. Similarly, the effect of image tilt was systematically examined with the same range of proposed metrics. The results demonstrate that image tilt and curvature of the posterior segment can affect axial tissue thickness calculations, and that alternative metrics that are not biased by these effects should be considered. This study demonstrates the need to consider alternative methods of calculating tissue thickness in order to avoid measurement error due to image tilt and curvature.
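
To illustrate why image tilt biases an axial thickness metric, the following sketch compares the axial (vertical) boundary separation with a thickness projected onto the local surface normal, for a synthetic tilted layer of known thickness. This is just one possible alternative metric chosen for illustration; it is not necessarily one of the metrics evaluated in the paper.

```python
import numpy as np

# Synthetic B-scan boundaries: a layer of constant perpendicular thickness 0.30 mm,
# imaged with ~10 degrees of tilt, so the axial (vertical) separation is inflated.
x = np.linspace(0.0, 6.0, 601)                 # lateral position (mm, hypothetical)
theta = np.radians(10.0)                       # image tilt
t_true = 0.30                                  # true tissue thickness (mm)
upper = 0.5 + np.tan(theta) * x                # anterior boundary depth (mm)
lower = upper + t_true / np.cos(theta)         # posterior boundary depth (mm)

# Axial metric: vertical distance between boundaries at each A-scan
t_axial = lower - upper

# Tilt-corrected metric: project the axial distance onto the local surface normal,
# estimated from the slope of the anterior boundary
slope = np.gradient(upper, x)
t_normal = t_axial * np.cos(np.arctan(slope))

print("mean axial thickness:          %.4f mm" % t_axial.mean())
print("mean tilt-corrected thickness: %.4f mm" % t_normal.mean())
print("true thickness:                %.4f mm" % t_true)
```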

Relevance: 20.00%

Abstract:

The paper presents an innovative approach to modelling the causal relationships of human errors in rail crack incidents (RCI) from a managerial perspective. A Bayesian belief network is developed to model RCI by considering the human errors of designers, manufacturers, operators and maintainers (DMOM) and the causal relationships involved. A set of dependent variables whose combinations express the relevant functions performed by each DMOM participant is used to model the causal relationships. A total of 14 RCI on Hong Kong's mass transit railway (MTR) from 2008 to 2011 are used to illustrate the application of the model. Bayesian inference is used to conduct an importance analysis to assess the impact of the participants' errors. Sensitivity analysis is then employed to gauge the effect of an increased probability of occurrence of human errors on RCI. Finally, strategies for human error identification and mitigation of RCI are proposed. The identification of the maintainer's ability as the most important factor influencing the probability of RCI in the case study implies a priority need to strengthen the maintenance management of the MTR system, and suggests that improving the inspection ability of maintainers is likely to be an effective strategy for RCI risk mitigation.

Relevance: 20.00%

Abstract:

In this article, we propose a denoising algorithm to denoise a time series y(i) = x(i) + e(i), where {x(i)} is a time series obtained from a time-T map of a uniformly hyperbolic or Anosov flow, and {e(i)} is a uniformly bounded sequence of independent and identically distributed (i.i.d.) random variables. Making use of observations up to time n, we create an estimate of x(i) for i < n. We show that, under typical limiting behaviour of the orbit and the recurrence properties of x(i), the estimation error converges to zero as n tends to infinity with probability 1.

Relevance: 20.00%

Abstract:

Visual acuities at the time of referral and on the day before surgery were compared in 124 patients operated on for cataract in Vaasa Central Hospital, Finland. Preoperative visual acuity and the occurrence of ocular and general disease were compared in samples of consecutive cataract extractions performed in 1982, 1985, 1990, 1995 and 2000 in two hospitals in the Vaasa region in Finland. The repeatability and the standard deviation of random measurement error in visual acuity and refractive error determination in a clinical environment in cataractous, pseudophakic and healthy eyes were estimated by re-examining the visual acuity and refractive error of patients referred to cataract surgery or consultation by ophthalmic professionals. Altogether 99 eyes of 99 persons (41 cataractous, 36 pseudophakic and 22 healthy eyes) with a visual acuity range of Snellen 0.3 to 1.3 (0.52 to -0.11 logMAR) were examined. During an average waiting time of 13 months, visual acuity in the study eye decreased from 0.68 logMAR to 0.96 logMAR (from 0.2 to 0.1 in Snellen decimal values). The average decrease in vision was 0.27 logMAR per year. In the fastest quartile, the visual acuity change per year was 0.75 logMAR, and in the second fastest 0.29 logMAR; the third and fourth quartiles were virtually unaffected. From 1982 to 2000, the incidence of cataract surgery increased from 1.0 to 7.2 operations per 1000 inhabitants per year in the Vaasa region. The average preoperative visual acuity in the operated eye improved by 0.85 logMAR (in decimal values from 0.03 to 0.2) and in the better eye by 0.27 logMAR (in decimal values from 0.23 to 0.43) over this period. The proportion of patients profoundly visually handicapped (VA in the better eye <0.1) before the operation fell from 15% to 4%, and that of patients less profoundly visually handicapped (VA in the better eye 0.1 to <0.3) from 47% to 15%. The repeatability of visual acuity measurement, estimated as a coefficient of repeatability for all 99 eyes, was ±0.18 logMAR, and the standard deviation of measurement error was 0.06 logMAR. Eyes with the lowest visual acuity (0.3-0.45) had the largest variability, with a coefficient of repeatability of ±0.24 logMAR, and eyes with a visual acuity of 0.7 or better had the smallest, ±0.12 logMAR. The repeatability of refractive error measurement was studied in the same patient material as the repeatability of visual acuity. Differences between measurements 1 and 2 were calculated as three-dimensional vector values and spherical equivalents and expressed as coefficients of repeatability. The coefficients of repeatability for all eyes for the vertical, torsional and horizontal vectors were ±0.74 D, ±0.34 D and ±0.93 D, respectively, and for the spherical equivalent for all eyes ±0.74 D. Eyes with lower visual acuity (0.3-0.45) had larger variability in vector and spherical equivalent values (±1.14 D), but the difference between visual acuity groups was not statistically significant. The difference in the mean defocus equivalent between measurements 1 and 2 was, however, significantly greater in the lower visual acuity group. If a change of ±0.5 D (measured in defocus equivalents) is accepted as a basis for a change of spectacles for eyes with good vision, the corresponding basis for eyes in the visual acuity range of 0.3-0.65 would be ±1 D. Differences in repeated visual acuity measurements are partly explained by errors in refractive error measurements.
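
For readers unfamiliar with the coefficient of repeatability, the sketch below computes it Bland-Altman style (1.96 times the standard deviation of test-retest differences) on synthetic logMAR data. The within-subject error level is assumed for illustration, and the thesis's exact computation may differ.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic test-retest logMAR acuities for 99 eyes with measurement noise
n = 99
true_va = rng.uniform(-0.1, 0.5, n)      # underlying logMAR acuity (hypothetical)
meas_sd = 0.06                           # assumed within-subject error s.d. (logMAR)
va_1 = true_va + rng.normal(0.0, meas_sd, n)
va_2 = true_va + rng.normal(0.0, meas_sd, n)

diff = va_1 - va_2
bias = diff.mean()
sd_diff = diff.std(ddof=1)
cor = 1.96 * sd_diff                     # coefficient of repeatability

print(f"mean difference (bias):         {bias:+.3f} logMAR")
print(f"s.d. of differences:             {sd_diff:.3f} logMAR")
print(f"coefficient of repeatability:  +/-{cor:.3f} logMAR")
```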

Relevance: 20.00%

Abstract:

A method for determining the electron/hole transport length scale of model semiconducting polymer systems by scanning a narrow-light probe beam over the nonoverlapping anode/cathode region in asymmetric sandwich device structures is presented (see figure). Electron versus hole collection efficacy, and disorder and spatial anisotropy in the electrical transport parameters can be estimated.

Relevance: 20.00%

Abstract:

Genetic and environmental factors affect white matter connectivity in the normal brain, and they also influence diseases in which brain connectivity is altered. Little is known about genetic influences on brain connectivity, despite wide variations in the brain's neural pathways. Here we applied the 'DICCCOL' framework to analyze structural connectivity in 261 twin pairs (522 participants, mean age 21.8 ± 2.7 years). We encoded connectivity patterns by projecting the white matter (WM) bundles of all 'DICCCOLs' as a tracemap (TM). Next we fitted an A/C/E structural equation model to estimate the additive genetic (A), common environmental (C), and unique environmental/error (E) components of the observed variations in brain connectivity. We found 44 'heritable DICCCOLs' whose connectivity was genetically influenced (a² > 1%); half of them showed significant heritability (a² > 20%). Our analysis of genetic influences on WM structural connectivity suggests high heritability for some WM projection patterns, yielding new targets for genome-wide association studies.
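
The study estimates the A, C and E components with a structural equation model; as a simpler intuition for what those components mean, the sketch below applies the classical Falconer decomposition to illustrative monozygotic and dizygotic twin correlations (not the paper's values).

```python
def falconer_ace(r_mz: float, r_dz: float) -> tuple[float, float, float]:
    """Falconer's approximation to the A/C/E variance components from
    MZ and DZ twin correlations (the paper fits a full SEM instead)."""
    a2 = 2.0 * (r_mz - r_dz)      # additive genetic variance (heritability)
    c2 = 2.0 * r_dz - r_mz        # common (shared) environment
    e2 = 1.0 - r_mz               # unique environment plus measurement error
    return a2, c2, e2

# Illustrative correlations for one connectivity measure (hypothetical values)
a2, c2, e2 = falconer_ace(r_mz=0.55, r_dz=0.35)
print(f"A = {a2:.2f}, C = {c2:.2f}, E = {e2:.2f}")
```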

Relevance: 20.00%

Abstract:

A reduction in the natural frequencies of a civil engineering structure, however small, is the first and easiest indicator for estimating impending damage. As a first level of screening for health monitoring, information on the frequency reduction of a few fundamental modes can be used to estimate the positions and magnitudes of damage in a smeared fashion. The paper presents eigenvalue sensitivity equations, derived from a first-order perturbation technique, for typical infrastructural systems such as a simply supported bridge girder modelled as a beam, an end-bearing pile modelled as an axial rod, and a simply supported plate as a continuum dynamic system. A discrete structure, such as a building frame, is solved for damage using eigen-sensitivities derived from a computational model. Lastly, neural-network-based damage identification is also demonstrated for a simply supported bridge beam, where known pairs of damage and frequency vectors are used to train the network. The performance of these methods under the influence of measurement error is outlined. It is hoped that the developed method could be integrated into a typical infrastructure management program, such that the magnitudes of damage and their positions can be obtained from natural frequencies extracted from excited or ambient vibration signatures.
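
A minimal numerical sketch of the first-order eigenvalue sensitivity idea is given below for a small spring-mass chain: with mass-normalised modes, the eigenvalue shift from a local stiffness change is approximately φᵢᵀ ΔK φᵢ, which predicts the frequency drop without a full re-analysis. The structure, stiffness loss and parameter values are hypothetical; the paper derives the analogous expressions for beams, rods and plates.

```python
import numpy as np
from scipy.linalg import eigh

def chain_K(k):
    """Stiffness matrix of a fixed-free chain: spring j joins mass j-1 to mass j
    (mass -1 is the ground)."""
    n = len(k)
    K = np.zeros((n, n))
    K[0, 0] += k[0]
    for j in range(1, n):
        K[j - 1, j - 1] += k[j]
        K[j, j] += k[j]
        K[j - 1, j] -= k[j]
        K[j, j - 1] -= k[j]
    return K

n = 5
k = np.full(n, 1.0e6)            # spring stiffnesses, N/m (illustrative)
m = np.full(n, 100.0)            # masses, kg (illustrative)
K, M = chain_K(k), np.diag(m)

# Undamaged eigensolution; eigh(K, M) returns modes mass-normalised to phi.T @ M @ phi = I
lam, phi = eigh(K, M)

# "Damage": 20 % stiffness loss in the third spring (hypothetical location)
k_dam = k.copy()
k_dam[2] *= 0.8
dK = chain_K(k_dam) - K

# First-order perturbation prediction of the eigenvalue shift vs exact re-analysis
dlam_pred = np.array([phi[:, i] @ dK @ phi[:, i] for i in range(n)])
lam_dam, _ = eigh(chain_K(k_dam), M)

f0 = np.sqrt(lam) / (2 * np.pi)
f_pred = np.sqrt(lam + dlam_pred) / (2 * np.pi)
f_exact = np.sqrt(lam_dam) / (2 * np.pi)
for i in range(n):
    print(f"mode {i + 1}: f0 = {f0[i]:6.2f} Hz, predicted = {f_pred[i]:6.2f} Hz, exact = {f_exact[i]:6.2f} Hz")
```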

Relevance: 20.00%

Abstract:

With technology scaling, vulnerability to soft errors in random logic is increasing. There is a need for on-line error detection and protection for logic gates even at sea level. The error checker is the key element of an on-line detection mechanism. We compare three different checkers for error detection from the point of view of area, power and false error detection rates. We find that the double sampling checker (used in Razor) is the simplest and the most area- and power-efficient, but suffers from very high false detection rates of 1.15 times the actual error rates. We also find that the alternative approaches of triple sampling and the integrate-and-sample (I&S) method can be designed to have zero false detection rates, but at increased area, power and implementation complexity. The triple sampling method has about 1.74 times the area and twice the power of the double sampling method and also needs a complex clock generation scheme. The I&S method needs about 16% more power, with 0.58 times the area of double sampling, but comes with more stringent implementation constraints as it requires detection of small voltage swings.

Relevance: 20.00%

Abstract:

Foliage density and leaf area index are important vegetation structure variables. They can be measured by several methods, but few have been tested in tropical forests, which have high structural heterogeneity. In this study, foliage density estimates by two indirect methods, the point quadrat and photographic methods, were compared with those obtained by direct leaf counts in the understorey of a wet evergreen forest in southern India. The point quadrat method has a tendency to overestimate, whereas the photographic method consistently and significantly underestimates foliage density. There was stratification within the understorey, with areas close to the ground having higher foliage densities.

Relevance: 20.00%

Abstract:

The problem of constructing space-time (ST) block codes over a fixed, desired signal constellation is considered. In this situation, there is a tradeoff between the transmission rate, as measured in constellation symbols per channel use, and the transmit diversity gain achieved by the code. The transmit diversity is a measure of the rate of polynomial decay of the pairwise error probability of the code with increasing signal-to-noise ratio (SNR). In the setting of a quasi-static channel model, let n_t denote the number of transmit antennas and T the block interval. For any n_t ≤ T, a unified construction of (n_t × T) ST codes is provided here, for a class of signal constellations that includes the familiar pulse-amplitude (PAM), quadrature-amplitude (QAM), and 2^K-ary phase-shift-keying (PSK) modulations as special cases. The construction is optimal as measured by the rate-diversity tradeoff and can achieve any given integer point on the rate-diversity tradeoff curve. An estimate of the coding gain realized is given. Other results presented here include i) an extension of the optimal unified construction to the multiple-fading-block case, ii) a version of the optimal unified construction in which the underlying binary block codes are replaced by trellis codes, iii) a linear dispersion form for the underlying binary block codes, iv) a Gray-mapped version of the unified construction, and v) a generalization of the construction to the S-ary case, corresponding to constellations of size S^K. Items ii) and iii) are aimed at simplifying the decoding of this class of ST codes.