909 results for "data accuracy"


Relevance: 40.00%

Abstract:

BACKGROUND: Chest pain is a common complaint in primary care, with coronary heart disease (CHD) being the most concerning of many potential causes. Systematic reviews of the sensitivity and specificity of symptoms and signs summarize the evidence about which of them are most useful in making a diagnosis. Previous meta-analyses are dominated by studies of patients referred to specialists; moreover, as the analysis is typically based on study-level data, the statistical analyses in these reviews are limited, whereas meta-analyses based on individual patient data (IPD) can provide additional information. Our patient-level meta-analysis has three aims. First, we strive to determine the diagnostic accuracy of symptoms and signs for myocardial ischemia in primary care. Second, we investigate associations between study- or patient-level characteristics and measures of diagnostic accuracy. Third, we aim to validate existing clinical prediction rules for diagnosing myocardial ischemia in primary care. This article describes the methods of our study and the six prospective studies of primary care patients with chest pain; later articles will describe the main results. METHODS/DESIGN: We will conduct a systematic review and IPD meta-analysis of studies evaluating the diagnostic accuracy of symptoms and signs for diagnosing CHD in primary care. We will perform bivariate analyses to determine the sensitivity, specificity and likelihood ratios of individual symptoms and signs, and multivariate analyses to explore the diagnostic value of an optimal combination of all symptoms and signs based on the pooled data of all studies. We will validate the existing clinical prediction rules from each of the included studies by calculating measures of diagnostic accuracy separately by study. DISCUSSION: Our study will face several methodological challenges. First, the number of studies will be limited. Second, the investigators of the original studies defined some outcomes and predictors differently. Third, the studies did not collect the same standard clinical data set. Fourth, missing data, varying from partly missing to fully missing, will have to be dealt with. Despite these limitations, we aim to summarize the available evidence regarding the diagnostic accuracy of symptoms and signs for diagnosing CHD in patients presenting with chest pain in primary care. REVIEW REGISTRATION: Centre for Reviews and Dissemination (University of York): CRD42011001170.
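The per-study quantities named in the methods reduce, for each symptom or sign, to simple functions of a 2×2 table. The sketch below shows those definitions only; the counts are hypothetical, and the paper's bivariate random-effects pooling across studies is not reproduced.

```python
# Per-study diagnostic accuracy measures for one symptom or sign,
# computed from a 2x2 table of sign presence vs. CHD status.
# Counts are hypothetical; the bivariate random-effects pooling
# used in the IPD meta-analysis is not shown.

def diagnostic_accuracy(tp: int, fp: int, fn: int, tn: int) -> dict:
    sensitivity = tp / (tp + fn)               # P(sign positive | CHD present)
    specificity = tn / (tn + fp)               # P(sign negative | CHD absent)
    lr_pos = sensitivity / (1 - specificity)   # positive likelihood ratio
    lr_neg = (1 - sensitivity) / specificity   # negative likelihood ratio
    return {"sensitivity": sensitivity, "specificity": specificity,
            "LR+": lr_pos, "LR-": lr_neg}

# Example: one symptom in one (hypothetical) primary care study.
print(diagnostic_accuracy(tp=40, fp=110, fn=10, tn=340))
```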

Relevance: 40.00%

Abstract:

Foreign trade statistics are the main data source for the study of international trade. However, their accuracy has been under suspicion since Morgenstern published his famous work in 1963. Federico and Tena (1991) revisited the question, arguing that the statistics can be useful at an adequate level of aggregation, but the geographical assignment problem remains unsolved. This article focuses on the spatial variable through an analysis of the reliability of international textile trade data for 1913. A geographical bias arises between export and import series, but, given its limited quantitative importance, it can be considered negligible on an international scale.
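The reliability check involved, comparing a flow as reported by the exporter against the same flow as recorded by the importer, can be sketched in a few lines. Country pairs and values below are hypothetical, not the 1913 textile figures.

```python
# Mirror-statistics check: the same trade flow as reported by the
# exporting and the importing country. Discrepancies indicate valuation
# and geographical-assignment bias. All figures are hypothetical.

flows = {
    # (exporter, importer): (exports per exporter's records,
    #                        imports per importer's records)
    ("UK", "Argentina"): (120.0, 131.0),
    ("Germany", "Brazil"): (45.0, 41.5),
}

for (src, dst), (exp_reported, imp_reported) in flows.items():
    bias = (imp_reported - exp_reported) / exp_reported
    print(f"{src} -> {dst}: mirror discrepancy {bias:+.1%}")
```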

Relevance: 40.00%

Abstract:

This study aims to improve the accuracy and usability of Iowa Falling Weight Deflectometer (FWD) data by incorporating significant enhancements into the fully automated software system for rapid processing of FWD data. These enhancements include: (1) refined prediction of backcalculated pavement layer moduli through deflection basin matching/optimization; (2) temperature correction of the backcalculated Hot-Mix Asphalt (HMA) layer modulus; (3) computation of the 1993 AASHTO design guide effective structural number (SNeff) and effective k-value (keff); (4) computation of the Iowa DOT asphalt concrete (AC) overlay design Structural Rating (SR) and k-value (k); and (5) improved user-friendliness of the software tool's input and output. A high-quality, easy-to-use backcalculation software package, I-BACK (the Iowa Pavement Backcalculation Software), was developed to achieve the project goals and requirements. This report presents the theoretical background behind the incorporated enhancements as well as guidance on the use of I-BACK. The tool provides more finely tuned artificial neural network (ANN) pavement backcalculation results by implementing a deflection basin matching optimizer for conventional flexible, full-depth, rigid, and composite pavements. Implementation of this tool within the Iowa DOT will facilitate accurate pavement structural evaluation and rehabilitation design for pavement/asset management purposes. This research has also set the framework for the development of a simplified FWD deflection-based HMA overlay design procedure, which is a recommended area for future research.
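Deflection basin matching is, at its core, an optimization over layer moduli: adjust the moduli until a forward model reproduces the measured sensor deflections. The sketch below uses an invented closed-form forward model as a placeholder for a layered-elastic engine, so it shows only the shape of the procedure, not I-BACK's ANN-based implementation; all numbers are illustrative.

```python
# Deflection basin matching as least-squares optimization over layer
# moduli. `forward_deflections` is a toy placeholder: a real backcalculation
# would call a layered-elastic (or ANN) forward model, whose offset-dependent
# response is what makes the individual layer moduli identifiable.
import numpy as np
from scipy.optimize import minimize

OFFSETS_MM = np.array([0., 203., 305., 457., 610., 914., 1524.])  # FWD sensor offsets

def forward_deflections(moduli):
    e_hma, e_base, e_sub = moduli  # layer moduli, MPa
    stiffness = 0.002 * e_hma + 0.01 * e_base + 0.05 * e_sub
    return 70.0 / (1.0 + OFFSETS_MM / 300.0) / stiffness

# Stand-in "measured" basin, generated from assumed true moduli.
measured = forward_deflections([3500., 250., 80.])

def basin_rmse(moduli):
    # Objective: RMS mismatch between computed and measured deflections.
    return np.sqrt(np.mean((forward_deflections(moduli) - measured) ** 2))

fit = minimize(basin_rmse, x0=[2000., 150., 50.], method="Nelder-Mead")
print("backcalculated moduli (MPa):", fit.x, "RMSE:", basin_rmse(fit.x))
```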

Relevance: 40.00%

Abstract:

Although the accuracy of digital elevation models (DEMs) can be quantified and measured in different ways, each measure is influenced by three main factors: terrain character, sampling strategy and interpolation method. These parameters, and their interactions, are discussed. The generation of DEMs from digitised contours is emphasised because this is the major source of DEMs, particularly within the member countries of the OEEPE. Such DEMs often exhibit unwelcome artifacts, depending on the interpolation method employed. The origin and magnitude of these effects, and how they can be reduced to improve the accuracy of DEMs, are also discussed.
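As one concrete route from digitised contours to a gridded DEM, the sketch below interpolates scattered contour vertices onto a regular grid with two different methods; comparing the resulting surfaces is a quick way to expose interpolation-dependent artifacts. The terrain, contour interval and grid are invented for illustration.

```python
# Interpolating digitised contour points onto a regular DEM grid with two
# methods; differences between the surfaces expose method-dependent
# artifacts (e.g. terracing along contour lines). Synthetic data throughout.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)

# Hypothetical digitised contour vertices: (x, y) with contour-quantised heights.
xy = rng.uniform(0, 1000, size=(500, 2))
elev = 10.0 * np.round((0.05 * xy[:, 0] + 0.03 * xy[:, 1]) / 10.0)  # 10 m contour interval

# Target DEM grid.
gx, gy = np.meshgrid(np.linspace(0, 1000, 101), np.linspace(0, 1000, 101))

dem_linear = griddata(xy, elev, (gx, gy), method="linear")
dem_cubic = griddata(xy, elev, (gx, gy), method="cubic")

# Largest disagreement between the two methods inside the data's convex hull.
print(np.nanmax(np.abs(dem_linear - dem_cubic)))
```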

Relevance: 40.00%

Abstract:

Stephens and Donnelly introduced a simple yet powerful importance sampling (IS) scheme for computing the likelihood in population genetic models. Fundamental to the method is an approximation to the conditional probability of the allelic type of an additional gene, given those currently in the sample. As noted by Li and Stephens, the product of these conditional probabilities for a sequence of draws that gives the frequency of allelic types in a sample is an approximation to the likelihood, and can be used directly in inference. The aim of this note is to demonstrate the high level of accuracy of the "product of approximate conditionals" (PAC) likelihood when used with microsatellite data. Results obtained on simulated microsatellite data show that this strategy leads to negligible bias over a wide range of the scaled mutation parameter theta. Furthermore, both the sampling variance of the likelihood estimates and the computation time are lower than those obtained with importance sampling over the whole range of theta. It follows that this approach is an efficient substitute for IS algorithms in computer-intensive (e.g. MCMC) inference methods in population genetics.
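Structurally, a PAC likelihood is just a product of approximate conditionals along an ordering of the sample. As a minimal sketch, the version below uses the simple urn-style conditional (an additional gene copies an existing allele with probability proportional to its count, or is novel with probability proportional to theta); Stephens and Donnelly's actual conditional for microsatellites also models stepwise mutation between allele sizes, which is deliberately omitted here.

```python
# PAC likelihood skeleton: multiply approximate conditionals pi_hat along
# one ordering of the sample. The pi_hat here is the simple urn form, NOT
# Stephens & Donnelly's microsatellite conditional (which adds a mutation
# model over allele sizes); only the product structure is illustrated.
import math

def pac_log_likelihood(sample, theta):
    counts = {}   # allelic type -> copies seen so far
    loglik = 0.0
    for k, allele in enumerate(sample):
        if allele in counts:
            p = counts[allele] / (k + theta)   # copy an existing type
        else:
            p = theta / (k + theta)            # draw a novel type
        loglik += math.log(p)
        counts[allele] = counts.get(allele, 0) + 1
    return loglik

# Hypothetical microsatellite sample (repeat lengths); in practice the PAC
# likelihood is often averaged over several random orderings of the sample.
sample = [12, 12, 13, 12, 15, 13, 12]
for theta in (0.5, 1.0, 2.0, 5.0):
    print(f"theta={theta}: log L ~ {pac_log_likelihood(sample, theta):.3f}")
```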

Relevance: 40.00%

Abstract:

Meteorological (met) station data is used as the basis for a number of influential studies into the impacts of the variability of renewable resources. Real turbine output data is often hard to acquire, whereas meteorological wind data, recorded at a standardised height of 10 m, is widely available. This data can be extrapolated to a standard turbine hub height using the wind profile power law and used to simulate the hypothetical power output of a turbine; utilising a number of met sites in this manner can yield a model of future wind generation output. However, the accuracy of this extrapolation is strongly dependent on the choice of the wind shear exponent alpha. This paper investigates the accuracy of the simulated generation output by comparison with reality, using a wind farm in North Rhins, Scotland and a nearby met station at West Freugh. The results show that while a single annual average value of alpha may be selected to accurately represent the long-term energy generation of a simulated wind farm, there are significant differences between simulation and reality on an hourly power generation basis. This has implications for understanding the impact of the variability of renewables on short timescales, particularly for system balancing and for the way conventional generation may be asked to respond to a high level of variable renewable generation on the grid in the future.
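The extrapolation step is compact enough to show directly. The sketch below applies the wind profile power law to a 10 m met measurement and runs the result through an idealised power curve; the hub height, turbine parameters and alpha values are illustrative, not those of the North Rhins turbines.

```python
# 10 m met-station wind extrapolated to hub height via the wind profile
# power law, then converted to power with an idealised turbine curve.
# Hub height, turbine ratings and alpha values are illustrative only.

def wind_at_height(u_ref: float, z: float, alpha: float, z_ref: float = 10.0) -> float:
    """Wind profile power law: u(z) = u_ref * (z / z_ref) ** alpha."""
    return u_ref * (z / z_ref) ** alpha

def turbine_power_kw(u: float, rated_kw: float = 2000.0, cut_in: float = 3.5,
                     rated_speed: float = 13.0, cut_out: float = 25.0) -> float:
    """Idealised power curve: cubic rise from cut-in to rated, zero past cut-out."""
    if u < cut_in or u >= cut_out:
        return 0.0
    if u >= rated_speed:
        return rated_kw
    return rated_kw * (u**3 - cut_in**3) / (rated_speed**3 - cut_in**3)

u10 = 6.2  # 10 m met-station measurement, m/s
for alpha in (0.10, 0.14, 0.20):  # the sensitive choice discussed above
    u_hub = wind_at_height(u10, z=80.0, alpha=alpha)
    print(f"alpha={alpha:.2f}: {u_hub:.1f} m/s at hub -> {turbine_power_kw(u_hub):.0f} kW")
```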

Relevance: 40.00%

Abstract:

The Advanced Along-Track Scanning Radiometer (AATSR) was launched on Envisat in March 2002. The AATSR instrument is designed to retrieve precise and accurate global sea surface temperature (SST) which, combined with the large data set collected by its predecessors, ATSR and ATSR-2, will provide a long-term record of SST data spanning more than 15 years. This record can be used for independent monitoring and detection of climate change. The AATSR validation programme has successfully completed its initial phase. The programme involves validation of the AATSR-derived SST values against in situ radiometers, in situ buoys and global SST fields from other data sets. The results of the initial programme presented here demonstrate that the AATSR instrument is currently close to meeting its scientific objective of determining global SST to an accuracy of 0.3 K (one sigma). For night-time data, the analysis gives a warm bias of between +0.04 K (0.28 K) for buoys and +0.06 K (0.20 K) for radiometers, with slightly higher errors observed for daytime data, which shows warm biases of between +0.02 K (0.39 K) for buoys and +0.11 K (0.33 K) for radiometers. These results show that the ATSR series of instruments continues to be the world leader in delivering accurate space-based observations of SST, a key climate parameter.
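The quoted figures have the form "bias (sigma)", i.e. the mean and standard deviation of satellite-minus-in-situ differences over a set of matchups. A minimal sketch, with invented matchup values:

```python
# AATSR-style validation statistic: mean bias and one-sigma scatter of
# satellite-minus-in-situ SST differences. Matchup values are invented.
import statistics

satellite_sst = [288.96, 290.11, 285.43, 293.08, 287.65]  # K
in_situ_sst   = [288.90, 290.02, 285.41, 293.01, 287.60]  # K (buoy or radiometer)

diffs = [s - i for s, i in zip(satellite_sst, in_situ_sst)]
bias = statistics.mean(diffs)
sigma = statistics.stdev(diffs)
print(f"warm bias {bias:+.2f} K ({sigma:.2f} K)")  # same form as '+0.04 K (0.28 K)'
```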

Relevance: 40.00%

Abstract:

Met Office station data from 1980 to 2012 has been used to characterise the interannual variability of incident solar irradiance across the UK. The same data are used to evaluate four popular historical irradiance products to determine which are most suitable for use by the UK PV industry for site selection and system design. The study confirmed previous findings that interannual variability is typically 3–6%, and a weighted average probability of a particular percentage deviation from the mean at an average UK site was calculated. This weighted average showed that fewer than 2% of site-years could be expected to fall below 90% of the long-term site mean. The historical irradiance products were compared against Met Office station data from the input years of each product. This investigation found that all products perform well and that none has a strong spatial trend. Meteonorm 7 is the most conservative (mean bias error, MBE = −2.5%), CMSAF is the most optimistic (MBE = +3.4%), and an average of all four products performs better than any individual product (MBE = 0.3%).
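The product comparison rests on the mean bias error. A minimal sketch, assuming the common convention of normalising the mean model-minus-measurement difference by the measured mean; station values are invented:

```python
# Mean bias error (MBE) of an irradiance product against station data,
# as a percentage of the measured mean (one common convention; the paper's
# exact normalisation may differ). All values are invented.

measured = [980.0, 1010.0, 945.0, 1002.0]  # station annual irradiation, kWh/m^2
modelled = [955.0,  985.0, 930.0,  976.0]  # product estimates, same sites

mbe = 100.0 * sum(m - o for m, o in zip(modelled, measured)) / sum(measured)
print(f"MBE = {mbe:+.1f}%")  # negative -> the product is conservative
```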

Relevance: 40.00%

Abstract:

Bloom filters are a data structure for storing data in compressed form. They offer excellent space and time efficiency at the cost of some loss of accuracy (so-called lossy compression). This work presents the yes-no Bloom filter, a data structure consisting of two parts: the yes-filter, which is a standard Bloom filter, and the no-filter, which is another Bloom filter whose purpose is to represent those objects that were recognised incorrectly by the yes-filter (that is, to recognise the false positives of the yes-filter). By querying the no-filter after an object has been recognised by the yes-filter, we get a chance of rejecting it, which improves the accuracy of data recognition in comparison with a standard Bloom filter of the same total length. A further increase in accuracy is possible if one chooses the objects to include in the no-filter so that it recognises as many false positives as possible but no true positives, thus producing the most accurate yes-no Bloom filter among all yes-no Bloom filters. This paper studies how optimization techniques can be used to maximize the number of false positives recognised by the no-filter, under the constraint that it recognise no true positives. To achieve this aim, an Integer Linear Program (ILP) is proposed for the optimal selection of false positives. In practice the problem size is normally large, making the optimal solution intractable. Given the similarity of the ILP to the Multidimensional Knapsack Problem, an Approximate Dynamic Programming (ADP) model is developed, using a reduced ILP for the value function approximation. Numerical results show that the ADP model performs best in comparison with a number of heuristics as well as the CPLEX built-in branch-and-bound (B&B) solver, and it is therefore recommended for use in yes-no Bloom filters. In the wider context of the study of lossy compression algorithms, our research is an example of how the arsenal of optimization methods can be applied to improving the accuracy of compressed data.
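A minimal sketch of the data structure itself, leaving aside the ILP/ADP machinery for selecting which false positives to store: a yes-filter holding the set, and a no-filter holding known false positives of the yes-filter. The filter sizes and the double-hashing scheme below are arbitrary choices for illustration.

```python
# Yes-no Bloom filter sketch: report "present" only if the yes-filter
# accepts the item AND the no-filter (trained on known false positives
# of the yes-filter) does not. In a real deployment the no-filter must
# also be checked against true positives, since its own false positives
# would wrongly reject set members -- that is the constraint the paper's
# ILP enforces when selecting which false positives to store.
import hashlib

class BloomFilter:
    def __init__(self, m: int, k: int):
        self.m, self.k = m, k
        self.bits = bytearray(m)   # one byte per bit position, for simplicity

    def _indexes(self, item: str):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item: str):
        for idx in self._indexes(item):
            self.bits[idx] = 1

    def __contains__(self, item: str) -> bool:
        return all(self.bits[idx] for idx in self._indexes(item))

class YesNoBloomFilter:
    def __init__(self, m_yes: int, m_no: int, k: int = 4):
        self.yes = BloomFilter(m_yes, k)  # stores the actual set
        self.no = BloomFilter(m_no, k)    # stores selected false positives

    def add(self, item: str):
        self.yes.add(item)

    def add_false_positive(self, item: str):
        self.no.add(item)

    def __contains__(self, item: str) -> bool:
        return item in self.yes and item not in self.no

# Usage: build the set, then record observed false positives in the no-filter.
f = YesNoBloomFilter(m_yes=256, m_no=64)
for word in ["alpha", "beta", "gamma"]:
    f.add(word)
for candidate in ["delta", "epsilon"]:   # known NOT to be in the set
    if candidate in f.yes:               # a false positive of the yes-filter
        f.add_false_positive(candidate)
print("alpha" in f, "delta" in f)        # expected: True False
```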