125 results for Error estimate.
Abstract:
A 94 GHz waveguide Rotman lens is described which can be used to implement an amplitude comparison monopulse radar. In transmit mode, adjacent dual beam ports are excited with equal amplitude and phase to form a sum radiation pattern, and in receive mode, the outputs of the beam port pairs are combined using magic tees to provide a sum and a difference signal which can be used to calculate an angular error estimate for target acquisition and tracking. This approach provides an amplitude comparison monopulse system which can be scanned in azimuth and which has a low component count, with no requirement for phase shift circuitry in the array feed lines, making it suitable for mm-wave frequencies. A 12-input (beam port), 12-output (array port) lens is designed using CST Microwave Studio, and the predicted results are presented.
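The sum/difference processing described in this abstract reduces, in its simplest amplitude-only form, to the classic delta-over-sigma ratio. A minimal sketch (the function name is illustrative, not from the paper; real-valued beam amplitudes assumed, whereas a real magic-tee implementation operates on complex signals):

```python
def monopulse_error(a, b):
    """Amplitude-comparison monopulse from two adjacent beams.
    Sigma = a + b (sum channel) and Delta = a - b (difference channel)
    are formed -- in the lens, by a magic tee -- and their ratio gives a
    bearing-error estimate that is zero on boresight and grows as the
    target moves off axis between the two beams."""
    sigma = a + b          # sum channel
    delta = a - b          # difference channel
    return delta / sigma   # normalised angular-error estimate
```

On boresight the two beams receive equal amplitude and the error is zero; an off-axis target unbalances the beams and the sign of the ratio indicates the direction of the offset.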
Abstract:
To estimate the prevalence of refractive error in adults across Europe. Refractive data (mean spherical equivalent) collected between 1990 and 2013 from fifteen population-based cohort and cross-sectional studies of the European Eye Epidemiology (E3) Consortium were combined in a random effects meta-analysis stratified by 5-year age intervals and gender. Participants were excluded if they were identified as having had cataract surgery, retinal detachment, refractive surgery or other factors that might influence refraction. Estimates of refractive error prevalence were obtained using the following classifications: myopia ≤−0.75 diopters (D), high myopia ≤−6D, hyperopia ≥1D and astigmatism ≥1D. Meta-analysis of refractive error was performed for 61,946 individuals from fifteen studies with median age ranging from 44 to 81 and minimal ethnic variation (98 % European ancestry). The age-standardised prevalences (using the 2010 European Standard Population, limited to those ≥25 and <90 years old) were: myopia 30.6 % [95 % confidence interval (CI) 30.4–30.9], high myopia 2.7 % (95 % CI 2.69–2.73), hyperopia 25.2 % (95 % CI 25.0–25.4) and astigmatism 23.9 % (95 % CI 23.7–24.1). Age-specific estimates revealed a high prevalence of myopia in younger participants [47.2 % (CI 41.8–52.5) in 25–29 year-olds]. Refractive error affects just over half of European adults. The greatest burden of refractive error is due to myopia, with high prevalence rates in young adults. Using the 2010 European population estimates, we estimate there are 227.2 million people with myopia across Europe.
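A random effects meta-analysis of this kind rests on pooling study-level estimates with weights that account for both within-study and between-study variance. A minimal sketch of the standard DerSimonian-Laird method (illustrative only; this is not the E3 Consortium's code, and the numbers below are made up):

```python
import math

def dersimonian_laird(estimates, variances):
    """Random-effects pooling of study-level estimates (e.g. prevalences).
    Returns the pooled estimate, the method-of-moments between-study
    variance tau^2, and the standard error of the pooled estimate."""
    w = [1.0 / v for v in variances]                   # fixed-effect weights
    ybar = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, estimates))
    k = len(estimates)
    denom = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / denom)             # heterogeneity tau^2
    w_star = [1.0 / (v + tau2) for v in variances]     # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, estimates)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, tau2, se
```

Each age/gender stratum would be pooled separately in this way before age-standardisation against the reference population.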
Abstract:
Diagnostic test sensitivity and specificity are probabilistic estimates with far reaching implications for disease control, management and genetic studies. In the absence of 'gold standard' tests, traditional Bayesian latent class models may be used to assess diagnostic test accuracies through the comparison of two or more tests performed on the same groups of individuals. The aim of this study was to extend such models to estimate diagnostic test parameters and true cohort-specific prevalence, using disease surveillance data. The traditional Hui-Walter latent class methodology was extended to allow for features seen in such data, including (i) unrecorded data (i.e. data for a second test available only on a subset of the sampled population) and (ii) cohort-specific sensitivities and specificities. The model was applied with and without the modelling of conditional dependence between tests. The utility of the extended model was demonstrated through application to bovine tuberculosis surveillance data from Northern Ireland and the Republic of Ireland. Simulation coupled with re-sampling techniques demonstrated that the extended model has good predictive power to estimate the diagnostic parameters and true herd-level prevalence from surveillance data. Our methodology can aid in the interpretation of disease surveillance data, and the results can potentially refine disease control strategies.
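The basic relation such latent class models build on links apparent (test-positive) prevalence to true prevalence through sensitivity and specificity. A sketch of the Rogan-Gladen estimator, the simple special case where Se and Sp are treated as known (not the paper's extended Hui-Walter model, which estimates all three jointly):

```python
def rogan_gladen(apparent_prev, sensitivity, specificity):
    """True prevalence backed out of the apparent prevalence, given known
    test sensitivity and specificity.  Follows from
    apparent = Se * p + (1 - Sp) * (1 - p), solved for p.
    Latent class models generalise this by treating Se, Sp and p as
    jointly unknown across multiple tests and populations."""
    return (apparent_prev + specificity - 1.0) / (sensitivity + specificity - 1.0)
```

For example, a 20 % test-positive rate with Se = 0.90 and Sp = 0.95 corresponds to a true prevalence of about 17.6 %.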
Abstract:
PURPOSE: To determine the heritability of refractive error and the familial aggregation of myopia in an older population. METHODS: Seven hundred fifty-nine siblings (mean age, 73.4 years) in 241 families were recruited from the Salisbury Eye Evaluation (SEE) Study in eastern Maryland. Refractive error was determined by noncycloplegic subjective refraction (if presenting distance visual acuity was ≤20/40) or lensometry (if best corrected visual acuity was >20/40 with spectacles). Participants were considered plano (refractive error of zero) if uncorrected visual acuity was >20/40. Preoperative refraction from medical records was used for pseudophakic subjects. Heritability of refractive error was calculated with multivariate linear regression and was estimated as twice the residual between-sibling correlation after adjusting for age, gender, and race. Logistic regression models were used to estimate the odds ratio (OR) of myopia given a myopic sibling, relative to having a nonmyopic sibling. RESULTS: The estimated heritability of refractive error was 61% (95% confidence interval [CI]: 34%-88%) in this population. The age-, race-, and sex-adjusted ORs of myopia were 2.65 (95% CI: 1.67-4.19), 2.25 (95% CI: 1.31-3.87), 3.00 (95% CI: 1.56-5.79), and 2.98 (95% CI: 1.51-5.87) for myopia thresholds of -0.50, -1.00, -1.50, and -2.00 D, respectively. Neither race nor gender was significantly associated with an increased risk of myopia. CONCLUSIONS: Refractive error and myopia are highly heritable in this elderly population.
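The final estimation step described here (heritability as twice the residual between-sibling correlation) can be sketched directly. This assumes the covariate adjustment for age, gender, and race has already been done, so the inputs are regression residuals per sibling pair; the function name is illustrative:

```python
from statistics import mean

def sib_heritability(pairs):
    """Heritability estimated as twice the Pearson correlation of
    covariate-adjusted refractive-error residuals between siblings.
    `pairs` is a list of (sib1_residual, sib2_residual) tuples."""
    x = [p[0] for p in pairs]
    y = [p[1] for p in pairs]
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in pairs)
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    r = cov / (sx * sy)        # residual between-sibling correlation
    return 2.0 * r             # h^2 = 2 * sibling correlation
```

The factor of two reflects that full siblings share, on average, half their segregating genes.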
Abstract:
Reliability has emerged as a critical design constraint, especially in memories. Designers go to great lengths to guarantee fault-free operation of the underlying silicon by adopting redundancy-based techniques, which essentially try to detect and correct every single error. However, such techniques come at the cost of large area, power and performance overheads, leading many researchers to doubt their efficiency, especially for error-resilient systems where 100% accuracy is not always required. In this paper, we present an alternative method focusing on the confinement of the resulting output error induced by any reliability issues. Focusing on memory faults, the proposed method, rather than correcting every single error, exploits the statistical characteristics of the target application and replaces any erroneous data with the best available estimate of that data. To realize the proposed method, a RISC processor is augmented with custom instructions and special-purpose functional units. We apply the method on the enhanced processor by studying the statistical characteristics of the various algorithms involved in a popular multimedia application. Our experimental results show that, in contrast to state-of-the-art fault-tolerance approaches, we are able to reduce runtime and area overhead by 71.3% and 83.3% respectively.
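The confinement idea (replace, don't correct) can be reduced to a very small sketch. In the paper this is realized in hardware via custom instructions; here the "best available estimate" is simplified to a per-stream prior mean, and all names are illustrative:

```python
def confine_errors(samples, parity_ok, prior_mean):
    """Error confinement instead of error correction: any sample flagged
    as corrupted (e.g. by a failed parity check) is replaced with the best
    available statistical estimate for that data stream, bounding the
    output error rather than restoring the exact original value."""
    return [s if ok else prior_mean for s, ok in zip(samples, parity_ok)]
```

For error-resilient workloads such as multimedia, a statistically plausible substitute value keeps the output perceptually acceptable without paying the full cost of detection-and-correction hardware.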
Abstract:
Estimating a time interval and temporally coordinating movements in space are fundamental skills, but the relationships between these different forms of timing, and the neural processes that they incur, are not well understood. While different theories have been proposed to account for time perception, time estimation, and the temporal patterns of coordination, there are no general mechanisms which unify these various timing skills. This study considers whether a model of perceptuo-motor timing, the tau(GUIDE), can also describe how certain judgements of elapsed time are made. To evaluate this, an equation for determining interval estimates was derived from the tau(GUIDE) model and tested in a task where participants had to throw a ball and estimate when it would hit the floor. The results showed that in accordance with the model, very accurate judgements could be made without vision (mean timing error -19.24 msec), and the model was a good predictor of skilled participants' timing estimates. It was concluded that since the tau(GUIDE) principle provides temporal information in a generic form, it could be a unitary process that links different forms of timing.
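The tau(GUIDE) model builds on general tau theory, in which timing is guided by tau, the ratio of a gap to its rate of closure. The abstract does not give the derived interval-estimate equation, so the sketch below shows only the underlying first-order time-to-arrival quantity, not the authors' model:

```python
def first_order_tta(gap, closing_rate):
    """First-order time-to-arrival estimate tau = x / x-dot: the current
    size of a gap (spatial or otherwise) divided by its current rate of
    closure.  Tau-based accounts of timing take this ratio as the generic
    perceptual quantity guiding both movements and interval judgements."""
    return gap / closing_rate
```

For example, a gap of 2 m closing at 4 m/s yields a first-order arrival estimate of 0.5 s.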
Abstract:
In 1999 Stephen Gorard published an article in this journal in which he provided a trenchant critique of what he termed the 'politician's error' in analysing differences in educational attainment. The main consequence of this error, he argued, has been the production of misleading findings in relation to trends in educational performance over time that have, in turn, led to misguided and potentially damaging policy interventions. By using gender differences in educational attainment as a case study, this article begins by showing how Gorard's notion of the politician's error has been largely embraced and adopted uncritically by those within the field. However, the article goes on to demonstrate how Gorard's own preferred way of analysing such differences – by calculating and comparing proportionate changes in performance between groups – is also inherently problematic and can lead to the production of equally misleading findings. The article will argue that there is a need to develop a more reliable and valid way of measuring trends in educational performance over time and will show that one of the simplest ways of doing this is to make use of existing, and widely accepted, measures of effect size.
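One of the existing, widely accepted effect-size measures for differences between two proportions is Cohen's h. A sketch (the abstract does not name a specific measure, so Cohen's h is offered here as a representative example, not as the article's own choice):

```python
import math

def cohens_h(p1, p2):
    """Effect size for the difference between two proportions (Cohen's h).
    Each proportion is mapped through the arcsine transform, which
    stabilises its variance, so h behaves consistently across the whole
    0-1 range -- unlike raw percentage-point gaps or proportionate
    changes, whose meaning shifts with the baseline."""
    phi1 = 2.0 * math.asin(math.sqrt(p1))   # arcsine transform of p1
    phi2 = 2.0 * math.asin(math.sqrt(p2))   # arcsine transform of p2
    return phi1 - phi2
```

Comparing, say, attainment rates of 60 % and 40 % gives a positive h, and identical rates give exactly zero, regardless of where on the scale they sit.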
Abstract:
In a model commonly used in dynamic traffic assignment the link travel time for a vehicle entering a link at time t is taken as a function of the number of vehicles on the link at time t. In an alternative recently introduced model, the travel time for a vehicle entering a link at time t is taken as a function of an estimate of the flow in the immediate neighbourhood of the vehicle, averaged over the time the vehicle is traversing the link. Here we compare the solutions obtained from these two models when applied to various inflow profiles. We also divide the link into segments, apply each model sequentially to the segments and again compare the results. As the number of segments is increased and the discretisation is refined towards the continuous limit, the solutions from the two models converge to the same solution, which is the solution of the Lighthill, Whitham, Richards (LWR) model for traffic flow. We illustrate the results for different travel time functions and patterns of inflows to the link. In the numerical examples the solutions from the second of the two models are closer to the limit solutions. We also show that the models converge even when the link segments are not homogeneous, and introduce a correction scheme in the second model to compensate for an approximation error, hence improving the approximation to the LWR model.
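The first model described (travel time as a function of current link occupancy) can be sketched as a simple discrete-time simulation. This is an illustrative toy, not the paper's implementation; the segment-wise variant would run the same loop per segment, feeding each segment's exits into the next:

```python
def whole_link_model(inflow, f, dt=1.0):
    """Whole-link travel-time model: a packet of vehicles entering at
    step t experiences travel time f(n(t)), where n(t) is the number of
    vehicles already on the link at entry.  `f` is the link travel-time
    function, e.g. free-flow time plus an occupancy-dependent term.
    Returns the travel time seen by each entering packet."""
    T = len(inflow)
    horizon = 4 * T                       # buffer for exits after the inflow ends
    exits = [0.0] * horizon
    n = 0.0
    travel_times = []
    for t in range(T):
        n -= exits[t]                     # vehicles leaving at this step
        tau = f(n)                        # travel time at entry occupancy
        travel_times.append(tau)
        n += inflow[t]
        k = min(horizon - 1, t + max(1, round(tau / dt)))
        exits[k] += inflow[t]             # schedule this packet's exit
    return travel_times
```

With a linear travel-time function such as f(n) = 5 + n, a lone packet on an empty link sees exactly the free-flow time of 5 steps, and later packets see longer times as occupancy builds.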
Abstract:
Historical GIS has the potential to re-invigorate our use of statistics from historical censuses and related sources. In particular, areal interpolation can be used to create long-run time-series of spatially detailed data that will enable us to enhance significantly our understanding of geographical change over periods of a century or more. The difficulty with areal interpolation, however, is that the data that it generates are estimates which will inevitably contain some error. This paper describes a technique that allows the automated identification of possible errors at the level of the individual data values.
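The areal interpolation step referred to here is, in its simplest area-weighted form, a matrix of overlap fractions applied to the source-zone values. A minimal sketch (illustrative only; real historical-GIS work uses more sophisticated dasymetric weighting, and the error-detection technique the paper describes operates on the resulting estimates):

```python
def areal_interpolate(source_values, overlap_fractions):
    """Area-weighted areal interpolation: each target zone's estimate is
    the sum over source zones of the source value times the fraction of
    that source zone's area falling inside the target zone.
    `overlap_fractions[j][i]` is the fraction of source zone i that lies
    within target zone j."""
    return [sum(v * frac for v, frac in zip(source_values, row))
            for row in overlap_fractions]
```

When each source zone's fractions sum to 1 across the target zones, totals are conserved, so any residual discrepancy in a long-run time-series flags a candidate error in the interpolated values.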
Abstract:
We present a fast and efficient hybrid algorithm for selecting exoplanetary candidates from wide-field transit surveys. Our method is based on the widely used SysRem and Box Least-Squares (BLS) algorithms. Patterns of systematic error that are common to all stars on the frame are mapped and eliminated using the SysRem algorithm. The remaining systematic errors, caused by spatially localized flat-fielding defects and other sources, are quantified using a boxcar-smoothing method. We show that the dimensions of the search-parameter space can be reduced greatly by carrying out an initial BLS search on a coarse grid of reduced dimensions, followed by Newton-Raphson refinement of the transit parameters in the vicinity of the most significant solutions. We illustrate the method's operation by applying it to data from one field of the SuperWASP survey, comprising 2300 observations of 7840 stars brighter than V = 13.0. We identify 11 likely transit candidates. We reject stars that exhibit significant ellipsoidal variations indicative of a stellar-mass companion. We use colours and proper motions from the Two Micron All Sky Survey and USNO-B1.0 surveys to estimate the stellar parameters and the companion radius. We find that two stars showing unambiguous transit signals pass all these tests, and so qualify for detailed high-resolution spectroscopic follow-up.
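The coarse-grid BLS stage can be illustrated with a deliberately crude single-period statistic: fold the light curve at a trial period, bin in phase, and measure the depth of the lowest bin. This is a stand-in sketch, not the authors' algorithm, which also fits transit epoch and duration and applies Newton-Raphson refinement around the strongest solutions:

```python
def bls_score(times, flux, period, n_bins=50):
    """Coarse box-search statistic for one trial period: phase-fold the
    light curve, bin it, and return the depth of the lowest phase bin
    relative to the median bin level (a proxy for the out-of-transit
    baseline).  Scanning this score over a coarse period grid picks out
    candidate transit periods for refinement."""
    bins = [[] for _ in range(n_bins)]
    for t, f in zip(times, flux):
        phase = (t % period) / period
        bins[min(n_bins - 1, int(phase * n_bins))].append(f)
    means = [sum(b) / len(b) for b in bins if b]   # mean flux per phase bin
    base = sorted(means)[len(means) // 2]          # median bin ~ baseline
    return base - min(means)                       # estimated transit depth
```

A periodic 1 % dip folded at the correct trial period concentrates in one phase bin and scores near 0.01, while wrong trial periods smear the dip across bins and score lower.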