995 results for Statistical bias
Abstract:
Projections of Arctic sea ice thickness (SIT) have the potential to inform stakeholders about accessibility to the region, but are currently rather uncertain. The latest suite of CMIP5 Global Climate Models (GCMs) produce a wide range of simulated SIT in the historical period (1979–2014) and exhibit various biases when compared with the Pan-Arctic Ice Ocean Modelling and Assimilation System (PIOMAS) sea ice reanalysis. We present a new method to constrain such GCM simulations of SIT via a statistical bias correction technique. The bias correction successfully constrains the spatial SIT distribution and temporal variability in the CMIP5 projections whilst retaining the climatic fluctuations from individual ensemble members. The bias correction acts to reduce the spread in projections of SIT and reveals the significant contributions of climate internal variability in the first half of the century and of scenario uncertainty from mid-century onwards. The projected date of ice-free conditions in the Arctic under the RCP8.5 high emission scenario occurs in the 2050s, which is a decade earlier than without the bias correction, with potentially significant implications for stakeholders in the Arctic such as the shipping industry. The bias correction methodology developed could be similarly applied to other variables to reduce spread in climate projections more generally.
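The statistical bias correction described above can be illustrated with a minimal sketch. This is not the paper's actual CMIP5/PIOMAS procedure, which is more elaborate; it assumes a simple mean-and-variance scaling of a model time series against a reference reanalysis over a calibration period, with all data and names hypothetical:

```python
import numpy as np

def bias_correct(model, reference, calib_slice):
    """Mean-and-variance scaling of a model time series against a
    reference (e.g. a reanalysis) over a calibration period.

    Illustrative stand-in only; the paper's CMIP5/PIOMAS correction
    is more sophisticated.
    """
    m_cal = model[calib_slice]
    r_cal = reference[calib_slice]
    # Rescale anomalies so the calibration-period mean and standard
    # deviation match those of the reference.
    return r_cal.mean() + (model - m_cal.mean()) * (r_cal.std() / m_cal.std())

# Toy example: a biased, over-dispersed "model" SIT series corrected
# against a synthetic "reanalysis" (values in metres, invented).
rng = np.random.default_rng(0)
ref = 2.0 + 0.3 * rng.standard_normal(36)
mod = 3.5 + 0.9 * rng.standard_normal(36)
corrected = bias_correct(mod, ref, slice(0, 36))
```

Because the anomalies of each ensemble member are preserved and only rescaled, the member's own climatic fluctuations survive the correction, which is the property the abstract emphasises.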
Abstract:
The study is intended to estimate the existing rate of participation of women beneficiaries in the development programmes of different organisations in Kerala. It would enable one to understand whether participation is at a satisfactory level or not. Given the rate of participation, the major thrust of the analysis is on the impact of governmental and non-governmental organisations on the rate of participation. This is undertaken under the assumption that NGOs, owing to their proximity to people and their needs, ensure better participation rates. Besides the organisational differences, the other major determinants of women's participation, such as their socio-economic characteristics, psychological make-up and the nature of the programme, are also highlighted. Since the ascribed status of women in society is inferior, the role of organisers, development personnel and local leaders is also pointed out. Thus the basic objective of the study is to examine women's participation and its determinants in development programmes.
Abstract:
Background It can be argued that adaptive designs are underused in clinical research. We have explored concerns related to inadequate reporting of such trials, which may influence their uptake. Through a careful examination of the literature, we evaluated the standards of reporting of group sequential (GS) randomised controlled trials, one form of confirmatory adaptive design. Methods We undertook a systematic review, searching Ovid MEDLINE from 1 January 2001 to 23 September 2014, supplemented with trials from an audit study. We included parallel group, confirmatory GS trials that were prospectively designed using a frequentist approach. Eligible trials were examined for compliance in their reporting against the CONSORT 2010 checklist. In addition, as part of our evaluation, we developed a supplementary checklist to explicitly capture GS-specific reporting aspects, and investigated how these are currently being reported. Results Of the 284 screened trials, 68 (24%) were eligible. Most trials were published in “high impact” peer-reviewed journals. Examination of the trials established that 46 (68%) were stopped early, predominantly for either futility or efficacy. Suboptimal reporting compliance was found for general items relating to: access to full trial protocols; methods to generate randomisation list(s); and details of randomisation concealment and its implementation. Benchmarked against the supplementary checklist, GS aspects were largely inadequately reported. Only 3 (7%) of the trials that stopped early reported the use of statistical bias correction. Moreover, 52 (76%) trials failed to disclose the methods used to minimise the risk of operational bias due to knowledge or leakage of interim results. The occurrence of changes to trial methods and outcomes could not be determined in most trials, owing to inaccessible protocols and amendments.
Discussion and Conclusions There are issues with the reporting of GS trials, particularly those specific to the conduct of interim analyses. Suboptimal reporting of bias-correction methods could imply that most GS trials stopping early are reporting biased estimates of treatment effects. As a result, research consumers may question the credibility of findings intended to change practice when trials are stopped early. These issues could be alleviated through a CONSORT extension. Assurance of scientific rigour through transparent, adequate reporting is paramount to the credibility of findings from adaptive trials. Our systematic literature search was restricted to one database due to resource constraints.
Abstract:
The development of new statistical and computational methods is increasingly making it possible to bridge the gap between hard sciences and humanities. In this study, we propose an approach based on a quantitative evaluation of attributes of objects in fields of the humanities, from which concepts such as dialectics and opposition are formally defined mathematically. As case studies, we analyzed the temporal evolution of classical music and philosophy by obtaining data for 8 features characterizing the corresponding fields for 7 well-known composers and philosophers, which were treated with multivariate statistics and pattern recognition methods. A bootstrap method was applied to avoid statistical bias caused by the small sample data set, with which hundreds of artificial composers and philosophers were generated, influenced by the 7 names originally chosen. Upon defining indices for opposition, skewness and counter-dialectics, we confirmed the intuitive analysis of historians in that classical music evolved according to a master-apprentice tradition, while in philosophy changes were driven by opposition. Though these case studies were meant only to show the possibility of treating phenomena in humanities quantitatively, including a quantitative measure of concepts such as dialectics and opposition, the results are encouraging for further application of the approach presented here to many other areas, since it is entirely generic.
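The bootstrap step mentioned above can be sketched simply: resampling the small sample with replacement generates many "artificial" individuals whose statistics approximate the sampling distribution. This is a generic illustration with random stand-in data, not the study's actual feature set:

```python
import numpy as np

def bootstrap_means(data, n_boot=500, seed=0):
    """Generate bootstrap replicates of the feature-mean vector by
    resampling rows with replacement.

    Illustrative only: this mimics how artificial composers/philosophers
    can be generated from a small sample to reduce small-sample bias.
    """
    rng = np.random.default_rng(seed)
    n = len(data)
    # Each row of idx selects one resampled "cohort" of n individuals.
    idx = rng.integers(0, n, size=(n_boot, n))
    return data[idx].mean(axis=1)  # shape: (n_boot, n_features)

# 7 hypothetical "composers" described by 8 numeric features.
sample = np.random.default_rng(1).standard_normal((7, 8))
reps = bootstrap_means(sample)
```

The spread of `reps` across replicates then gives an honest uncertainty estimate for any index (opposition, skewness, etc.) computed from the original 7 names.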
Abstract:
Distributed Brillouin sensing of strain and temperature works by making spatially resolved measurements of the position of the measurand-dependent extremum of the resonance curve associated with the scattering process in the weakly nonlinear regime. Typically, measurements of backscattered Stokes intensity (the dependent variable) are made at a number of predetermined fixed frequencies covering the design measurand range of the apparatus and combined to yield an estimate of the position of the extremum. The measurand can then be found because its relationship to the position of the extremum is assumed known. We present analytical expressions relating the relative error in the extremum position to experimental errors in the dependent variable. This is done for two cases: (i) a simple non-parametric estimate of the mean based on moments and (ii) the case in which a least squares technique is used to fit a Lorentzian to the data. The question of statistical bias in the estimates is discussed and in the second case we go further and present for the first time a general method by which the probability density function (PDF) of errors in the fitted parameters can be obtained in closed form in terms of the PDFs of the errors in the noisy data.
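The two extremum estimators contrasted above, (i) a moments-based centroid and (ii) a least-squares Lorentzian fit, can be sketched as follows. This is a simplified illustration on synthetic data: the grid search fixes the width and amplitude, unlike the full nonlinear fit analysed in the paper, and the frequency range is invented:

```python
import numpy as np

def lorentzian(f, f0, w, a):
    """Lorentzian resonance curve centred at f0 with half-width w."""
    return a / (1.0 + ((f - f0) / w) ** 2)

def moment_estimate(f, y):
    """Case (i): non-parametric centroid (first moment) of the spectrum."""
    return np.sum(f * y) / np.sum(y)

def ls_peak_estimate(f, y, w, a, centers):
    """Case (ii), simplified: least-squares fit of a Lorentzian via a
    grid search over the centre frequency only (width and amplitude
    assumed known here, unlike a full nonlinear fit)."""
    sse = [np.sum((y - lorentzian(f, c, w, a)) ** 2) for c in centers]
    return centers[int(np.argmin(sse))]

# Noisy Brillouin-like gain curve (synthetic; GHz values hypothetical).
f = np.linspace(10.6, 11.0, 81)
rng = np.random.default_rng(2)
y = lorentzian(f, 10.8, 0.03, 1.0) + 0.01 * rng.standard_normal(f.size)
est = ls_peak_estimate(f, y, 0.03, 1.0, np.linspace(10.7, 10.9, 401))
```

Comparing the scatter of `moment_estimate` and `ls_peak_estimate` over many noise realisations is one way to see the bias and error behaviour the abstract derives analytically.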
Abstract:
Aims. A model-independent reconstruction of the cosmic expansion rate is essential to a robust analysis of cosmological observations. Our goal is to demonstrate that current data are able to provide reasonable constraints on the behavior of the Hubble parameter with redshift, independently of any cosmological model or underlying gravity theory. Methods. Using type Ia supernova data, we show that it is possible to analytically calculate the Fisher matrix components in a Hubble parameter analysis without assumptions about the energy content of the Universe. We used a principal component analysis to reconstruct the Hubble parameter as a linear combination of the Fisher matrix eigenvectors (principal components). To suppress the bias introduced by the high-redshift behavior of the components, we considered the value of the Hubble parameter at high redshift as a free parameter. We first tested our procedure using a mock sample of type Ia supernova observations; we then applied it to the real data compiled by the Sloan Digital Sky Survey (SDSS) group. Results. In the mock sample analysis, we demonstrate that it is possible to drastically suppress the bias introduced by the high-redshift behavior of the principal components. Applying our procedure to the real data, we show that it allows us to determine the behavior of the Hubble parameter with reasonable uncertainty, without introducing any ad hoc parameterizations. Beyond that, our reconstruction agrees with completely independent measurements of the Hubble parameter obtained from red-envelope galaxies.
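The principal-component step can be sketched in a few lines: eigendecompose the Fisher matrix for binned H(z) values and reconstruct from the leading (best-constrained) eigenvectors. The Fisher matrix below is a random symmetric stand-in, not the analytic SN Ia matrix derived in the paper, and the binned H(z) values are invented:

```python
import numpy as np

def pca_reconstruct(fisher, h_binned, n_components):
    """Project binned H(z) values onto the leading eigenvectors of a
    Fisher matrix and reconstruct from those components only."""
    vals, vecs = np.linalg.eigh(fisher)   # eigenvalues in ascending order
    order = np.argsort(vals)[::-1]        # best-constrained modes first
    basis = vecs[:, order[:n_components]]
    coeffs = basis.T @ h_binned           # expansion coefficients
    return basis @ coeffs

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 6))
F = A @ A.T + 6 * np.eye(6)               # symmetric positive-definite stand-in
h = 70.0 + 10.0 * rng.random(6)           # hypothetical H(z) in 6 redshift bins
h_full = pca_reconstruct(F, h, 6)         # all components: exact recovery
h_trunc = pca_reconstruct(F, h, 3)        # truncated, smoothed reconstruction
```

Truncating to the leading components discards the noisiest modes, which is what makes the reconstruction stable; the paper's extra step of freeing the high-redshift value then controls the bias that truncation can introduce.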
Abstract:
Objective: To compare measurements of sleeping metabolic rate (SMR) in infancy with predicted basal metabolic rate (BMR) estimated by the equations of Schofield. Methods: A total of 104 serial measurements of SMR by indirect calorimetry were performed in 43 healthy infants at 1.5, 3, 6, 9 and 12 months of age. Predicted BMR was calculated using the weight-only (BMR-wo) and weight-and-height (BMR-wh) equations of Schofield for 0-3-y-olds. Measured SMR values were compared with both predicted values by means of the Bland-Altman statistical test. Results: The mean measured SMR was 1.48 MJ/day. The mean predicted BMR values were 1.66 and 1.47 MJ/day for the weight-only and weight-and-height equations, respectively. The Bland-Altman analysis showed that the BMR-wo equation on average overestimated SMR by 0.18 MJ/day (11%) and the BMR-wh equation underestimated SMR by 0.01 MJ/day (1%). However, the 95% limits of agreement were wide: -0.64 to +0.28 MJ/day (28%) for the former equation and -0.39 to +0.41 MJ/day (27%) for the latter. Moreover, there was a significant correlation between the mean of the measured and predicted metabolic rates and the difference between them. Conclusions: The wide variation in the difference between measured and predicted metabolic rate, and the probable bias with age, indicate a need to measure actual metabolic rate for individual clinical care in this age group.
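A Bland-Altman comparison of the kind used above reduces to two quantities: the mean difference (bias) between methods and the 95% limits of agreement. A minimal sketch, using invented paired values rather than the study's data:

```python
import numpy as np

def bland_altman(measured, predicted):
    """Return the mean difference (bias) and the 95% limits of
    agreement between two measurement methods."""
    diff = predicted - measured
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired SMR/predicted-BMR values in MJ/day (not study data).
smr = np.array([1.40, 1.52, 1.45, 1.60, 1.38, 1.55])
bmr = np.array([1.58, 1.65, 1.60, 1.75, 1.50, 1.70])
bias, (lo, hi) = bland_altman(smr, bmr)
```

Wide limits of agreement, as reported in the abstract, mean that even an unbiased prediction equation can miss badly for an individual infant, which is why individual measurement is recommended.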
Abstract:
Background: Mortality from invasive meningococcal disease (IMD) has remained stable over the last thirty years and it is unclear whether pre-hospital antibiotherapy actually reduces this mortality. Our aim was to examine whether pre-hospital oral antibiotherapy reduces mortality from IMD, adjusting for indication bias. Methods: A retrospective analysis was made of the clinical reports of all patients (n = 848) diagnosed with IMD from 1995 to 2000 in Andalusia and the Canary Islands, Spain, and of the relationship between the use of pre-hospital oral antibiotherapy and mortality. Indication bias was controlled for by the propensity score technique, and a multivariate analysis was performed to determine the probability of each patient receiving antibiotics, according to the symptoms identified before admission. Data on in-hospital death, use of antibiotics and demographic variables were collected. A logistic regression analysis was then carried out, using death as the dependent variable, and pre-hospital antibiotic use, age, time from onset of symptoms to parenteral antibiotics and the propensity score as independent variables. Results: Data were recorded on 848 patients, 49 (5.72%) of whom died. Of the total number of patients, 226 had received oral antibiotics before admission, mainly beta-lactams, during the previous 48 hours. After adjusting the association between antibiotic use and death for age and the time between onset of symptoms and in-hospital antibiotic treatment, pre-hospital oral antibiotherapy remained a significant protective factor (odds ratio for death 0.37, 95% confidence interval 0.15–0.93). Conclusion: Pre-hospital oral antibiotherapy appears to reduce IMD mortality.
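The propensity-score step described above, modelling each patient's probability of receiving treatment from pre-admission covariates, can be sketched as follows. The logistic regression is implemented with plain gradient descent for self-containment, and all covariates and the treatment rule are invented stand-ins, not the study's variables:

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, n_iter=2000):
    """Plain gradient-descent logistic regression (a stand-in for the
    multivariate model used to build the propensity score)."""
    Xb = np.column_stack([np.ones(len(X)), X])  # add intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (y - p) / len(y)       # average log-likelihood gradient
    return w

def propensity_scores(X, treated):
    """Estimated probability of treatment given covariates X."""
    w = fit_logistic(X, treated)
    Xb = np.column_stack([np.ones(len(X)), X])
    return 1.0 / (1.0 + np.exp(-Xb @ w))

# Hypothetical pre-admission "symptom" covariates and treatment indicator.
rng = np.random.default_rng(4)
X = rng.standard_normal((200, 3))
treated = (X[:, 0] + 0.5 * rng.standard_normal(200) > 0).astype(float)
ps = propensity_scores(X, treated)
```

Including `ps` alongside the other covariates in the outcome model, as the study does, adjusts the treatment-death association for the fact that sicker (or less sick) patients may be more likely to receive antibiotics in the first place.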
Abstract:
The work presented evaluates the statistical characteristics of regional bias and expected error in reconstructions of real positron emission tomography (PET) data of human brain fluorodeoxyglucose (FDG) studies carried out by the maximum likelihood estimator (MLE) method with a robust stopping rule, and compares them with the results of filtered backprojection (FBP) reconstructions and with the method of sieves. The task of evaluating radioisotope uptake in regions-of-interest (ROIs) is investigated. An assessment of bias and variance in uptake measurements is carried out with simulated data. Then, by using three different transition matrices with different degrees of accuracy and a components-of-variance model for statistical analysis, it is shown that the characteristics obtained from real human FDG brain data are consistent with the results of the simulation studies.
Abstract:
We report the results of Monte Carlo simulations aimed at clarifying the microscopic origin of exchange bias in the magnetization hysteresis loops of a model of individual core/shell nanoparticles. Increasing the exchange coupling across the core/shell interface leads to an enhancement of exchange bias and to an increasing asymmetry between the two branches of the loops, which is due to different reversal mechanisms. A detailed study of the magnetic order of the interfacial spins shows compelling evidence that the existence of a net magnetization due to uncompensated spins at the shell interface is responsible for both phenomena and allows us to quantify the loop shifts directly in terms of microscopic parameters, in striking agreement with the macroscopically observed values.
Abstract:
As the list of states adopting the Hamburg Wheel Tracking Device (HWTD) continues to grow, there is a need to evaluate how its results are utilized. AASHTO T 324 does not standardize the analysis and reporting of test results. Furthermore, the processing and reporting of results among manufacturers is not uniform. This is partly due to variation among agency reporting requirements: some include only the midpoint rut depth, while others include the average across the entire length of the wheel track. To eliminate bias in reporting, statistical analysis was performed on over 150 test runs on gyratory specimens. Measurement location was found to be a source of significant variation in the HWTD, likely due to the nonuniform wheel speed across the specimen, the geometry of the specimen, and the air void profile. Eliminating this source of bias when reporting results is feasible, though it depends on the average rut depth at the final pass. When reporting rut depth at the final pass, it is suggested that for poor-performing samples the measurement locations near the interface of the adjoining gyratory specimens be averaged; this is necessary because of wheel lipping on the mold. For all other samples it is reasonable to eliminate only the 3 locations furthest from the gear house. For multi-wheel units, wheel side was also found to be significant for poor- and good-performing samples. After eliminating the suggested measurements from the analysis, the wheel was no longer a significant source of variation.
Abstract:
EEG recordings are usually corrupted by spurious extra-cerebral artifacts, which should be rejected or cleaned up by the practitioner. Since manual screening of human EEGs is inherently error prone and might induce experimental bias, automatic artifact detection is an issue of importance. Automatic artifact detection is the best guarantee for objective and clean results. We present a new approach, based on the time-frequency shape of muscular artifacts, to achieve reliable and automatic scoring. The impact of muscular activity on the signal can be evaluated using this methodology by placing emphasis on the analysis of EEG activity. The method is used to discriminate evoked potentials from several types of recorded muscular artifacts, with a sensitivity of 98.8% and a specificity of 92.2%. Automatic cleaning of EEG data is then successfully realized using this method, combined with independent component analysis. The outcome of the automatic cleaning is then compared with the Slepian multitaper spectrum based technique introduced by Delorme et al (2007 Neuroimage 34 1443–9).
Abstract:
Microsatellite loci mutate at an extremely high rate and are generally thought to evolve through a stepwise mutation model. Several differentiation statistics taking into account the particular mutation scheme of the microsatellite have been proposed. The most commonly used is R(ST), which is independent of the mutation rate under a generalized stepwise mutation model. F(ST) and R(ST) are commonly reported in the literature, but often differ widely. Here we compare their statistical performances using individual-based simulations of a finite island model. The simulations were run under different levels of gene flow, mutation rates, and population numbers and sizes. In addition to the per-locus statistical properties, we compare two ways of combining R(ST) over loci. Our simulations show that even under a strict stepwise mutation model, no statistic is best overall. All estimators suffer to different extents from large bias and variance. While R(ST) better reflects population differentiation in populations characterized by very low gene exchange, F(ST) gives better estimates in cases of high levels of gene flow. The number of loci sampled (12, 24, or 96) has only a minor effect on the relative performance of the estimators under study. For all estimators there is a striking effect of the number of samples, with the differentiation estimates showing very odd distributions for two samples.
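The kind of differentiation statistic being compared above can be illustrated with the textbook heterozygosity-based form, F_ST = (H_T - H_S) / H_T, for a single biallelic locus. This is a deliberately simple sketch with invented allele frequencies; it is not the stepwise-mutation-aware R(ST) estimator or the weighted multi-locus estimators evaluated in the paper:

```python
import numpy as np

def fst_from_freqs(p):
    """Heterozygosity-based F_ST = (H_T - H_S) / H_T for one biallelic
    locus, given per-population allele frequencies p."""
    hs = np.mean(2 * p * (1 - p))  # mean within-population heterozygosity
    pbar = p.mean()
    ht = 2 * pbar * (1 - pbar)     # heterozygosity of the pooled population
    return (ht - hs) / ht

# Strong differentiation vs. near-panmixia (hypothetical frequencies).
high = fst_from_freqs(np.array([0.9, 0.1, 0.8, 0.2]))
low = fst_from_freqs(np.array([0.48, 0.52, 0.50, 0.50]))
```

Running such an estimator on frequencies drawn from simulated island-model populations, across many replicates, is the basic design behind the bias and variance comparisons reported in the abstract.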
Abstract:
BACKGROUND: Health professionals and policymakers aspire to make healthcare decisions based on the entire relevant research evidence. This, however, can rarely be achieved because a considerable amount of research findings are not published, especially in the case of 'negative' results - a phenomenon widely recognized as publication bias. Different methods of detecting, quantifying and adjusting for publication bias in meta-analyses have been described in the literature, such as graphical approaches and formal statistical tests to detect publication bias, and statistical approaches to modify effect sizes to adjust a pooled estimate when the presence of publication bias is suspected. An up-to-date systematic review of the existing methods is lacking. METHODS/DESIGN: The objectives of this systematic review are as follows:
• To systematically review methodological articles which focus on non-publication of studies and to describe methods of detecting and/or quantifying and/or adjusting for publication bias in meta-analyses.
• To appraise the strengths and weaknesses of methods, the resources they require, and the conditions under which each method could be used, based on findings of the included studies.
We will systematically search Web of Science, Medline, and the Cochrane Library for methodological articles that describe at least one method of detecting and/or quantifying and/or adjusting for publication bias in meta-analyses. A dedicated data extraction form has been developed and pilot-tested. Working in teams of two, we will independently extract relevant information from each eligible article. As this will be a qualitative systematic review, data reporting will involve a descriptive summary. DISCUSSION: Results are expected to be publicly available in mid-2013.
This systematic review together with the results of other systematic reviews of the OPEN project (To Overcome Failure to Publish Negative Findings) will serve as a basis for the development of future policies and guidelines regarding the assessment and handling of publication bias in meta-analyses.
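One of the formal statistical tests such a review would cover is Egger-style regression for funnel-plot asymmetry: regress the standardized effect sizes on their precisions, and read a non-zero intercept as evidence of small-study (possibly publication) bias. A minimal sketch on constructed numbers, not data from any real meta-analysis:

```python
import numpy as np

def egger_intercept(effects, ses):
    """Egger-style regression: standardized effects (effect/SE) on
    precision (1/SE). An intercept far from zero suggests funnel-plot
    asymmetry, one possible sign of publication bias."""
    z = effects / ses
    prec = 1.0 / ses
    A = np.column_stack([np.ones_like(prec), prec])
    (intercept, slope), *_ = np.linalg.lstsq(A, z, rcond=None)
    return intercept, slope

# Constructed example: a common effect of 0.4 plus asymmetry b = 1.5,
# so z = 0.4 / se + 1.5 holds exactly and the regression recovers both.
ses = np.array([0.1, 0.2, 0.3, 0.4, 0.5])
effects = 0.4 + 1.5 * ses
intercept, slope = egger_intercept(effects, ses)
```

In this noise-free construction the slope recovers the common effect (0.4) and the intercept recovers the injected asymmetry (1.5); with real meta-analytic data the intercept would of course carry sampling error and a significance test.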