88 results for BIAS CORRECTION
Abstract:
Background: Selection bias in HIV prevalence estimates occurs if non-participation in testing is correlated with HIV status. Longitudinal data suggest that individuals who know or suspect they are HIV positive are less likely to participate in testing in HIV surveys; in that case, missing-data corrections based on imputation and observed characteristics will produce biased results.
Methods: The identity of the HIV survey interviewer is typically associated with HIV testing participation but is unlikely to be correlated with HIV status. Interviewer identity can thus be used as a selection variable, allowing estimation of Heckman-type selection models. These models produce asymptotically unbiased HIV prevalence estimates even when non-participation is correlated with unobserved characteristics, such as knowledge of HIV status. We introduce a new random-effects method for these selection models that overcomes the non-convergence caused by collinearity, the small-sample bias, and the incorrect inference of existing approaches. Our method is easy to implement in standard statistical software and allows the construction of bootstrapped standard errors that adjust for the fact that the relationship between testing and HIV status is uncertain and must be estimated.
Results: Using nationally representative data from the Demographic and Health Surveys, we illustrate our approach with new point estimates and confidence intervals (CI) for HIV prevalence among men in Ghana (2003) and Zambia (2007). In Ghana, we find little evidence of selection bias: our selection model gives an HIV prevalence estimate of 1.4% (95% CI 1.2%–1.6%), compared with 1.6% among those with a valid HIV test. In Zambia, our selection model gives an HIV prevalence estimate of 16.3% (95% CI 11.0%–18.4%), compared with 12.1% among those with a valid HIV test; those who decline to test in Zambia are therefore more likely to be HIV positive.
Conclusions: Our approach corrects for selection bias in HIV prevalence estimates, is possible to implement even when HIV prevalence or non-participation is very high or very low, and provides a practical solution to account for both sampling and parameter uncertainty in the estimation of confidence intervals. The wide confidence intervals estimated in an example with high HIV prevalence indicate that it is difficult to correct statistically for the bias that may occur when a large proportion of people refuse to test.
Abstract:
The adaptor protein-2 sigma subunit (AP2σ2) is pivotal for clathrin-mediated endocytosis of plasma membrane constituents such as the calcium-sensing receptor (CaSR). Mutations of the AP2σ2 Arg15 residue result in familial hypocalciuric hypercalcaemia type 3 (FHH3), a disorder of extracellular calcium (Caₒ²⁺) homeostasis. To elucidate the role of AP2σ2 in Caₒ²⁺ regulation, we investigated 65 FHH probands, without other FHH-associated mutations, for AP2σ2 mutations, characterized their functional consequences and investigated the genetic mechanisms leading to FHH3. AP2σ2 mutations were identified in 17 probands, comprising 5 Arg15Cys, 4 Arg15His and 8 Arg15Leu mutations. A genotype-phenotype correlation was observed with the Arg15Leu mutation leading to marked hypercalcaemia. FHH3 probands harboured additional phenotypes such as cognitive dysfunction. All three FHH3-causing AP2σ2 mutations impaired CaSR signal transduction in a dominant-negative manner. Mutational bias was observed at the AP2σ2 Arg15 residue as other predicted missense substitutions (Arg15Gly, Arg15Pro and Arg15Ser), which also caused CaSR loss-of-function, were not detected in FHH probands, and these mutations were found to reduce the numbers of CaSR-expressing cells. FHH3 probands had significantly greater serum calcium (sCa) and magnesium (sMg) concentrations with reduced urinary calcium to creatinine clearance ratios (CCCR) in comparison with FHH1 probands with CaSR mutations, and a calculated index of sCa × sMg/100 × CCCR, which was ≥ 5.0, had a diagnostic sensitivity and specificity of 83% and 86%, respectively, for FHH3. Thus, our studies demonstrate AP2σ2 mutations to result in a more severe FHH phenotype with genotype-phenotype correlations, and a dominant-negative mechanism of action with mutational bias at the Arg15 residue.
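The diagnostic index quoted above can be written as a small helper. The parenthesization, (sCa × sMg) / (100 × CCCR), and the example values below are assumptions for illustration; units and the 5.0 cut-off should be taken from the original study.

```python
def fhh3_index(s_ca, s_mg, cccr):
    """Index from the abstract; values >= 5.0 were reported to favour FHH3 over FHH1.

    Assumed parenthesization: (sCa * sMg) / (100 * CCCR).
    s_ca, s_mg: serum calcium and magnesium; cccr: urinary calcium to
    creatinine clearance ratio (dimensionless). Example values are hypothetical.
    """
    return (s_ca * s_mg) / (100.0 * cccr)
```

For hypothetical values sCa = 2.9, sMg = 1.1 and CCCR = 0.005, the index is 6.38, above the reported 5.0 threshold; a low CCCR (as in FHH3) drives the index up.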
Abstract:
Background and objectives: Cognitive models suggest that attentional biases are integral to the maintenance of obsessive-compulsive symptoms (OCS). Such biases have been established experimentally in anxiety disorders; however, the evidence is unclear in obsessive-compulsive disorder (OCD). In the present study, an eye-tracking methodology was employed to explore attentional biases in relation to OCS.
Methods: A convenience sample of 85 community volunteers was assessed on OCS using the Yale-Brown Obsessive Compulsive Scale-self report. Participants completed an eye-tracking paradigm where they were exposed to OCD, Aversive and Neutral visual stimuli. Indices of attentional bias were derived from the eye-tracking data.
Results: Simple linear regressions were performed with OCS severity as the predictor and the eye-tracking measures of the different attentional biases for each of the three stimulus types as the criterion variables. Findings revealed that OCS severity moderately predicted greater frequency and duration of fixations on OCD stimuli, reflecting a maintenance attentional bias. No significant results were found in support of other biases.
Limitations: Interpretations based on a non-clinical sample limit the generalisability of the conclusions, although findings from such samples in OCD research have been shown to be comparable to those from clinical populations. Future research should include both clinical and sub-clinical participants.
Conclusions: Results provide some support for the theory of maintained attention in OCD attentional biases, as opposed to vigilance theory. Individuals with greater OCS do not orient to OCD stimuli any faster than individuals with lower OCS, but once a threat is identified, these individuals allocate more attention to OCS-relevant stimuli.
Abstract:
Both polygenicity (many small genetic effects) and confounding biases, such as cryptic relatedness and population stratification, can yield an inflated distribution of test statistics in genome-wide association studies (GWAS). However, current methods cannot distinguish between inflation from a true polygenic signal and bias. We have developed an approach, LD Score regression, that quantifies the contribution of each by examining the relationship between test statistics and linkage disequilibrium (LD). The LD Score regression intercept can be used to estimate a more powerful and accurate correction factor than genomic control. We find strong evidence that polygenicity accounts for the majority of the inflation in test statistics in many GWAS of large sample size.
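The core idea, regressing per-SNP χ² statistics on LD scores, where the intercept captures confounding inflation and the slope reflects polygenic signal (slope ≈ N·h²/M), can be sketched on synthetic data. The numbers below (sample size, heritability, LD score distribution) are invented, and the additive Gaussian noise is a simplification of the real χ² sampling distribution.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 20_000, 50_000            # number of SNPs, GWAS sample size (synthetic)
ld = rng.gamma(5.0, 20.0, M)     # synthetic LD scores, mean ~100
h2, inflation = 0.4, 1.05        # true polygenic heritability and confounding inflation

# Under the polygenic model, E[chi2_j] ~ (N * h2 / M) * ld_j + intercept,
# where the intercept exceeds 1 only in the presence of confounding bias.
chi2 = inflation + (N * h2 / M) * ld + rng.normal(0.0, 1.0, M)

# Ordinary least squares of the test statistics on the LD scores.
X = np.column_stack([np.ones(M), ld])
intercept, slope = np.linalg.lstsq(X, chi2, rcond=None)[0]
h2_hat = slope * M / N           # heritability recovered from the slope
```

The fitted intercept recovers the confounding component (here ~1.05) separately from the LD-dependent polygenic component, which is the separation the abstract describes; the real method additionally uses heteroskedasticity weights that this sketch omits.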
Abstract:
In this paper, we introduce a statistical data-correction framework that aims to improve DSP system performance in the presence of unreliable memories. The proposed signal-processing framework implements best-effort mitigation of errors in signals corrupted by defects in unreliable storage arrays, using a statistical correction function derived from the signal statistics, a data-corruption model, and an application-specific cost function. An application example from communication systems demonstrates the efficacy of the proposed approach.
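As a hedged illustration of the three ingredients named above (signal statistics, corruption model, cost function), the toy sketch below builds a minimum-mean-squared-error correction function for a 3-bit signal stored in a memory with independent bit flips. The prior, flip probability, and squared-error cost are all invented for the example; this is not the paper's framework.

```python
import numpy as np

rng = np.random.default_rng(2)
levels = np.arange(8)                              # 3-bit stored codewords
prior = np.array([1, 2, 4, 8, 8, 4, 2, 1], float)  # assumed signal statistics
prior /= prior.sum()
p = 0.05                                           # assumed per-bit flip probability

def channel(a, b):
    """Data-corruption model: P(read b | stored a) under independent bit flips."""
    d = bin(a ^ b).count("1")
    return p ** d * (1 - p) ** (3 - d)

P = np.array([[channel(a, b) for b in levels] for a in levels])
joint = prior[:, None] * P                         # P(stored, read)
post = joint / joint.sum(axis=0, keepdims=True)    # P(stored | read)
# Squared-error cost -> the optimal correction function is the posterior mean.
correct = post.T @ levels                          # E[stored | read], per read value

# Monte Carlo check: applying the correction should reduce mean squared error.
stored = rng.choice(levels, 100_000, p=prior)
read = stored.copy()
for bit in range(3):
    flip = (rng.random(100_000) < p).astype(np.int64)
    read = read ^ (flip << bit)
mse_raw = np.mean((read - stored) ** 2)
mse_cor = np.mean((correct[read] - stored) ** 2)
```

The correction table `correct` is precomputed offline from the statistics and the corruption model, so the run-time cost is a single lookup per read, which matches the "best-effort" flavour described in the abstract.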
Abstract:
PURPOSE: To quantify the impact on self-reported visual functioning of spectacle provision for school-aged children in Oaxaca, Mexico. METHODS: The Refractive Status Vision Profile (RSVP), a previously validated tool to measure the impact of refractive correction on visual functioning, was adapted for use in rural children and administered at baseline and 4 weeks (27.3 +/- 4.4 days) after the provision of free spectacles. Visual acuity with and without correction, age, sex, and spherical equivalent refraction were recorded at the time of follow-up. RESULTS: Among 88 children (mean age, 12 years; 55.7% girls), the median presenting acuity (uncorrected or with original spectacles), tested 4 weeks after the provision of free spectacles, was 6/9 (range, 6/6-6/120). Significant improvements in the following subscales of the RSVP were seen for the group as a whole after the provision of free spectacles: function, 11.2 points (P = 0.0001); symptoms, 14.3 points (P < 0.0001); total score, 10.3 points (P = 0.0001). After stratification by presenting vision in the better-seeing eye, children with 6/6 acuity (n = 22) did not have significant improvement in any subscale; those with acuity of 6/7.5 to 6/9 (n = 34) improved only on function (P = 0.02), symptoms (P = 0.005), and total score (P = 0.003); and those with acuity of 6/12 or worse improved on total score (P < 0.0001) and all subscales. Subjects (n = 31) with uncorrected myopia of -1.25 D or more had a mean improvement in total score of 15.9 points (P < 0.0001), whereas those with uncorrected myopia between -0.50 and -1.00 D inclusive (n = 53) had a mean improvement of 8 points (P = 0.01). CONCLUSIONS: Provision of spectacles to children in this setting had a significant impact on self-reported function, even at modest levels of baseline visual disability. The correlation between presenting vision/refraction and improvement, and the failure of children with 6/6 vision at baseline to improve, offer evidence for a real effect.
Abstract:
OBJECTIVE: To compare outcomes between adjustable spectacles and conventional methods for refraction in young people. DESIGN: Cross sectional study. SETTING: Rural southern China. PARTICIPANTS: 648 young people aged 12-18 (mean 14.9 (SD 0.98)), with uncorrected visual acuity ≤ 6/12 in either eye. INTERVENTIONS: All participants underwent self refraction without cycloplegia (paralysis of near focusing ability with topical eye drops), automated refraction without cycloplegia, and subjective refraction by an ophthalmologist with cycloplegia. MAIN OUTCOME MEASURES: Uncorrected and corrected vision, improvement of vision (lines on a chart), and refractive error. RESULTS: Among the participants, 59% (384) were girls, 44% (288) wore spectacles, and 61% (393/648) had 2.00 dioptres or more of myopia in the right eye. All completed self refraction. The proportion with visual acuity ≥ 6/7.5 in the better eye was 5.2% (95% confidence interval 3.6% to 6.9%) for uncorrected vision, 30.2% (25.7% to 34.8%) for currently worn spectacles, 96.9% (95.5% to 98.3%) for self refraction, 98.4% (97.4% to 99.5%) for automated refraction, and 99.1% (98.3% to 99.9%) for subjective refraction (P = 0.033 for self refraction v automated refraction, P = 0.001 for self refraction v subjective refraction). Improvements over uncorrected vision in the better eye with self refraction and subjective refraction were within one line on the eye chart in 98% of participants. In logistic regression models, failure to achieve maximum recorded visual acuity of 6/7.5 in right eyes with self refraction was associated with greater absolute value of myopia/hyperopia (P<0.001), greater astigmatism (P = 0.001), and not having previously worn spectacles (P = 0.002), but not age or sex. Significant inaccuracies in power (≥ 1.00 dioptre) were less common in right eyes with self refraction than with automated refraction (5% v 11%, P<0.001). 
CONCLUSIONS: Though visual acuity was slightly worse with self refraction than automated or subjective refraction, acuity was excellent in nearly all these young people with inadequately corrected refractive error at baseline. Inaccurate power was less common with self refraction than automated refraction. Self refraction could decrease the requirement for scarce trained personnel, expensive devices, and cycloplegia in children's vision programmes in rural China.
Abstract:
The World Health Organization estimates that 13 million children aged 5-15 years worldwide are visually impaired from uncorrected refractive error. School vision screening programs can identify and treat or refer children with refractive error. We concentrate on the findings of various screening studies and attempt to identify key factors in the success and sustainability of such programs in the developing world. We reviewed original and review articles describing children's vision and refractive error screening programs published in English and listed in PubMed, Medline OVID, Google Scholar, and Oxford University Electronic Resources databases. Data were abstracted on study objective, design, setting, participants, and outcomes, including accuracy of screening, quality of refractive services, barriers to uptake, impact on quality of life, and cost-effectiveness of programs. Inadequately corrected refractive error is an important global cause of visual impairment in childhood. School-based vision screening carried out by teachers and other ancillary personnel may be an effective means of detecting affected children and improving their visual function with spectacles. The need for services and potential impact of school-based programs varies widely between areas, depending on prevalence of refractive error and competing conditions and rates of school attendance. Barriers to acceptance of services include the cost and quality of available refractive care and mistaken beliefs that glasses will harm children's eyes. Further research is needed in areas such as the cost-effectiveness of different screening approaches and impact of education to promote acceptance of spectacle-wear. School vision programs should be integrated into comprehensive efforts to promote healthy children and their families.
Abstract:
The Kong™ ball test has been used extensively to assess lateral bias in the domestic dog. Implicit in this challenge is the assumption that dogs use their dominant paw to stabilise the ball. This study examined whether or not this is the case. A comparative approach was adopted, exploring limb use in dogs and humans. In Experiment 1, the paw preference of 48 dogs was assessed on the Kong™ ball test. Analysis revealed an equal distribution of paw use, although significantly more dogs were paw-preferent than ambilateral. Significantly more male dogs were classified as right-pawed, while more females were ambilateral. There was no significant effect of canine sex or castration status on the dogs' paw preferences. In Experiment 2, 94 adult humans were assessed on their ability to remove a piece of paper from a Kong™ ball with their mouth, using their left, right or both hands to stabilise the ball. 76% of the right-handed people used their left hand, and 82% of the left-handed participants used their right hand, to hold the Kong™ steady. It is concluded that dogs, like humans, are most likely using their non-dominant limb to stabilise the Kong™ ball and their dominant side for postural support. This has potential applied implications from an animal welfare perspective.
Abstract:
Research points to a relationship between lateralization and emotional functioning in humans and many species of animal. The present study explored the association between paw preferences and emotional functioning, specifically temperament, in a species thus far overlooked in this area, the domestic cat. Thirty left-pawed, 30 right-pawed, and 30 ambilateral pet cats were recruited following an assessment of their paw preferences using a food-reaching challenge. The animals' temperament was subsequently assessed using the Feline Temperament Profile (FTP). Cats' owners also completed a purpose-designed cat temperament (CAT) scale. Analysis revealed a significant relationship between lateral bias and FTP and CAT scale scores. Ambilateral cats had lower positive (FTP+) scores, and were perceived as less affectionate, obedient, friendly, and more aggressive, than left- or right-pawed animals. Left- and right-pawed cats differed significantly on one trait on the CAT scale, namely playfulness. The strength of the cats' paw preferences was related to the animals' FTP and CAT scores. Cats with a greater strength of paw preference had higher FTP+ scores than those with a weaker strength of paw preference. Animals with stronger paw preferences were perceived as more confident, affectionate, active, and friendly than those with weaker paw preferences. Results suggest that motor laterality in the cat is strongly related to temperament and that the presence or absence of lateralization has greater implications for the expression of emotion in this species than the direction of the lateralized bias.