973 results for: score test information matrix artificial regression
Abstract:
The main information sources for studying a particular piece of music are symbolic scores and audio recordings. These are complementary representations of the piece, and it is very useful to have a proper linking of the musically meaningful events between the two. For the case of makam music of Turkey, linking the available scores with the corresponding audio recordings requires taking the specificities of this music into account, such as the particular tunings, the extensive usage of non-notated expressive elements, and the way in which the performer repeats fragments of the score. Moreover, for most of the pieces of the classical repertoire, there is no score written by the original composer. In this paper, we propose a methodology to pair sections of a score with the corresponding fragments of audio recording performances. The pitch information obtained from both sources is used as the common representation to be paired. From an audio recording, fundamental frequency estimation and tuning analysis are performed to compute a pitch contour. From the corresponding score, symbolic note names and durations are converted to a synthetic pitch contour. Then, a linking operation is performed between these pitch contours in order to find the best correspondences. The method is tested on a dataset of 11 compositions spanning 44 audio recordings, which are mostly monophonic. F3-scores of 82% and 89% are obtained with automatic and semi-automatic karar detection, respectively, showing that the methodology may give us a needed tool for further computational tasks such as form analysis, audio-score alignment, and makam recognition.
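Below is a minimal sketch of the core linking idea this abstract describes: the score's note names and durations are rendered as a synthetic pitch contour, and a subsequence dynamic time warping pass finds where it best matches the audio pitch contour. This is not the authors' implementation; the note list, hop size, and toy audio contour are illustrative assumptions.

```python
# A sketch of score-to-audio linking via pitch contours (illustrative only).
import numpy as np

def score_to_contour(notes, hop=0.01):
    """notes: list of (pitch_in_cents_above_karar, duration_in_seconds)."""
    frames = []
    for cents, dur in notes:
        frames.extend([cents] * int(round(dur / hop)))
    return np.array(frames, dtype=float)

def subsequence_dtw(score, audio):
    """Find the audio subsequence best matching the score contour."""
    n, m = len(score), len(audio)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, :] = 0.0                      # free start: match may begin anywhere
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(score[i - 1] - audio[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    end = int(np.argmin(D[n, 1:])) + 1  # free end: best column in last row
    return D[n, end], end

# Toy example: a three-note score fragment against a longer audio contour.
rng = np.random.default_rng(0)
score = score_to_contour([(0, 0.2), (200, 0.2), (500, 0.2)])
audio = np.concatenate([np.full(30, -300.0),
                        score + rng.normal(0, 5, len(score)),
                        np.full(30, 700.0)])
dist, end_frame = subsequence_dtw(score, audio)
print(f"best match ends at frame {end_frame}, cost {dist:.1f}")
```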
Abstract:
BACKGROUND: Chest pain raises concern for the possibility of coronary heart disease. Scoring methods have been developed to identify coronary heart disease in emergency settings, but not in primary care. METHODS: Data were collected from a multicenter Swiss clinical cohort study including 672 consecutive patients with chest pain who had visited one of 59 family practitioners' offices. Using delayed diagnosis, we derived a prediction rule to rule out coronary heart disease by means of a logistic regression model. Known cardiovascular risk factors, pain characteristics, and physical signs associated with coronary heart disease were explored to develop a clinical score. Patients diagnosed with angina or acute myocardial infarction within the year following their initial visit comprised the coronary heart disease group. RESULTS: The coronary heart disease score was derived from eight variables: age, gender, duration of chest pain from 1 to 60 minutes, substernal chest pain location, pain increasing with exertion, absence of tenderness point at palpation, cardiovascular risk factors, and personal history of cardiovascular disease. The area under the receiver operating characteristic curve was 0.95 (95% confidence interval, 0.92 to 0.97). Using this score, 413 patients were classified as low risk, based on the 5th-percentile score value among the coronary heart disease patients. Internal validity was confirmed by bootstrapping. External validation using data from a German cohort (Marburg, n = 774) yielded an area under the receiver operating characteristic curve of 0.75 (95% confidence interval, 0.72 to 0.81), with a sensitivity of 85.6% and a specificity of 47.2%. CONCLUSIONS: This score, based only on history and physical examination, is a complementary tool for ruling out coronary heart disease in primary care patients complaining of chest pain.
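As a hedged illustration of the generic workflow this abstract describes (not the study's actual model or data), the sketch below fits a logistic regression on binary predictors, converts the coefficients into integer score points, and checks discrimination with the area under the ROC curve; all variables and data are synthetic.

```python
# Deriving a clinical point score from a logistic regression (sketch).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 672
X = rng.integers(0, 2, size=(n, 8)).astype(float)   # 8 binary predictors
true_beta = np.array([1.2, 0.8, 0.9, 1.1, 0.7, -0.6, 0.5, 1.0])
p = 1 / (1 + np.exp(-(X @ true_beta - 3.0)))        # synthetic risk model
y = rng.binomial(1, p)                               # simulated CHD outcome

model = LogisticRegression().fit(X, y)
# Scale coefficients so the smallest becomes one point, then round.
points = np.round(model.coef_[0] / np.abs(model.coef_[0]).min()).astype(int)
score = X @ points                                   # simple additive score
print("points per predictor:", points)
print("AUC of the integer score:", round(roc_auc_score(y, score), 3))
```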
Abstract:
This paper provides regression discontinuity evidence on the long-run and intergenerational education impacts of a temporary increase in federal transfers to local governments in Brazil. Revenues and expenditures of the communities benefiting from extra transfers temporarily increased by about 20% during the 4-year period from 1982 to the end of 1985. Schooling and literacy gains for directly exposed cohorts, established in previous work that used the 1991 census, are attenuated but persist in the 2000 and 2010 censuses. Children and adolescents of the next generation, born after the extra funding had disappeared, show gains of about 0.08 standard deviations across the entire score distribution of two nationwide exams at the end of the 2000s. While we find no evidence of persistent improvements in school resources, we document discontinuities in education levels, literacy rates, and incomes of test takers' parents that are consistent with intergenerational human capital spillovers.
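The sketch below illustrates the basic regression discontinuity estimator underlying this design, on synthetic data of my own: a local linear regression within a bandwidth of the cutoff recovers the jump at the threshold. The cutoff rule, bandwidth, and data are placeholder assumptions, not the paper's.

```python
# Local linear regression discontinuity estimate at a cutoff (sketch).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
running = rng.uniform(-1, 1, 5000)          # running variable, cutoff at 0
treated = (running >= 0).astype(float)      # extra transfers above cutoff
outcome = 0.3 * treated + 0.5 * running + rng.normal(0, 1, 5000)

h = 0.25                                    # bandwidth around the cutoff
m = np.abs(running) <= h
# Separate slopes on each side via the interaction term.
X = np.column_stack([treated, running, treated * running])[m]
fit = sm.OLS(outcome[m], sm.add_constant(X)).fit(cov_type="HC1")
print(f"RD estimate at cutoff: {fit.params[1]:.3f} (se {fit.bse[1]:.3f})")
```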
Abstract:
BACKGROUND: Obesity is associated with vitamin D deficiency, and both are areas of active public health concern. We explored the causality and direction of the relationship between body mass index (BMI) and 25-hydroxyvitamin D [25(OH)D] using genetic markers as instrumental variables (IVs) in bi-directional Mendelian randomization (MR) analysis. METHODS AND FINDINGS: We used information from 21 adult cohorts (up to 42,024 participants) with 12 BMI-related SNPs (combined in an allelic score) to produce an instrument for BMI and four SNPs associated with 25(OH)D (combined in two allelic scores, separately for genes encoding its synthesis or metabolism) as an instrument for vitamin D. Regression estimates for the IVs (allele scores) were generated within-study and pooled by meta-analysis to generate summary effects. Associations between vitamin D scores and BMI were confirmed in the Genetic Investigation of Anthropometric Traits (GIANT) consortium (n = 123,864). Each 1 kg/m² higher BMI was associated with 1.15% lower 25(OH)D (p = 6.52×10⁻²⁷). The BMI allele score was associated both with BMI (p = 6.30×10⁻⁶²) and 25(OH)D (-0.06% [95% CI -0.10 to -0.02], p = 0.004) in the cohorts that underwent meta-analysis. The two vitamin D allele scores were strongly associated with 25(OH)D (p≤8.07×10⁻⁵⁷ for both scores) but not with BMI (synthesis score, p = 0.88; metabolism score, p = 0.08) in the meta-analysis. A 10% higher genetically instrumented BMI was associated with 4.2% lower 25(OH)D concentrations (IV ratio: -4.2 [95% CI -7.1 to -1.3], p = 0.005). No association was seen for genetically instrumented 25(OH)D with BMI, a finding that was confirmed using data from the GIANT consortium (p≥0.57 for both vitamin D scores). CONCLUSIONS: On the basis of a bi-directional genetic approach that limits confounding, our study suggests that a higher BMI leads to lower 25(OH)D, while any effects of lower 25(OH)D increasing BMI are likely to be small. Population-level interventions to reduce BMI are expected to decrease the prevalence of vitamin D deficiency.
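A short sketch of the Wald (IV ratio) estimator that underlies this kind of Mendelian randomization analysis, with a first-order delta-method standard error. The input coefficients are placeholders chosen to echo the reported magnitudes, not the study's pooled estimates.

```python
# Wald / IV ratio estimator for Mendelian randomization (sketch).
import numpy as np

def wald_ratio(beta_gx, se_gx, beta_gy, se_gy):
    """Causal effect of exposure X on outcome Y via instrument G:
    the ratio of the gene-outcome to the gene-exposure association."""
    est = beta_gy / beta_gx
    # First-order delta method, treating the two estimates as independent.
    se = np.sqrt(se_gy**2 / beta_gx**2 + beta_gy**2 * se_gx**2 / beta_gx**4)
    return est, se

# Illustrative numbers only: allele score -> BMI, allele score -> 25(OH)D.
est, se = wald_ratio(beta_gx=0.10, se_gx=0.01, beta_gy=-0.42, se_gy=0.15)
print(f"IV ratio: {est:.1f} (95% CI {est - 1.96*se:.1f} to {est + 1.96*se:.1f})")
```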
Abstract:
Following the decoding of the genome, patients can now benefit from a preventive, predictive, even personalized approach to their care. If personalized medicine becomes routine, the "generalization" of genetic information will probably lead us to establish new social and ethical standards, because while such information allows greater efficiency in care by aiding decision-making, it brings unprecedented knowledge of human beings in terms of risk of and susceptibility to disease, as well as information specific to the individual that could lead to discrimination. Are we ready to manage this information? In this work, we examine the handling of information in genetic testing in research. The working hypothesis is that genetic information is a new type of individual biological data that must be taken into account in the care of research participants. To begin the reflection, a literature review was conducted to highlight the specificities of genetic research. In a second step, we performed a comparative analysis of the information sheets and consent forms given to participants in seventeen research protocols involving genetic testing at the CHUV in Lausanne, Switzerland. This analysis provides an overview of current practices in the region and serves as the starting point for putting the related ethical issues into perspective. The results show inequalities among the information sheets and consent forms in our Lausanne sample with respect to the return of results, data confidentiality, and the possibility of participating in future research. We conclude that it would be worthwhile to work toward standardizing these documents and toward broader education on the subject.
Abstract:
BACKGROUND: In many countries, primary care physicians determine whether or not older drivers are fit to drive. Little, however, is known regarding the effects of cognitive decline on driving performance and the means to detect it. This study explores to what extent the trail making test (TMT) can provide clinicians with indications about their older patients' on-road driving performance in the context of cognitive decline. METHODS: This translational study was nested within a cohort study and an exploratory psychophysics study. The target population of interest consisted of older drivers without important cognitive or physical disorders. We therefore recruited and tested 404 home-dwelling drivers, aged 70 years or more and in possession of valid drivers' licenses, who volunteered to participate in a driving refresher course. Forty-five drivers also agreed to undergo further testing at our lab. On-road driving performance was evaluated by instructors during a 45-minute validated open-road circuit. Drivers were classified as excellent, good, moderate, or poor depending on their score on a standardized evaluation of on-road driving performance. RESULTS: The area under the receiver operating characteristic curve for detecting poorly performing drivers was 0.668 (CI95% 0.558 to 0.778) for the TMT-A, and 0.662 (CI95% 0.542 to 0.783) for the TMT-B. TMT performance was related to contrast sensitivity, motion direction, orientation discrimination, working memory, verbal fluency, and literacy. Older patients with a TMT-A ≥ 54 seconds or a TMT-B ≥ 150 seconds have a threefold (CI95% 1.3 to 7.0) increased risk of performing poorly during the on-road evaluation. The TMT had a sensitivity of 63.6%, a specificity of 64.9%, a positive predictive value of 9.5%, and a negative predictive value of 96.9%. CONCLUSION: In screening settings, the TMT would lead clinicians to needlessly consider driving cessation in nine out of ten drivers who screen positive. Given the important negative impact this could have on older drivers, this study confirms that the TMT is not specific enough for clinicians to justify driving cessation without complementary investigations of driving behavior.
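The sketch below shows how the reported screening metrics hang together: with the paper's sensitivity and specificity and a low prevalence of poor drivers, the positive predictive value collapses, which is precisely the abstract's point. The prevalence value is my back-calculated assumption, not a figure from the paper.

```python
# Predictive values from sensitivity, specificity, and prevalence (sketch).
def screening_metrics(sens, spec, prevalence):
    ppv = sens * prevalence / (sens * prevalence
                               + (1 - spec) * (1 - prevalence))
    npv = spec * (1 - prevalence) / ((1 - sens) * prevalence
                                     + spec * (1 - prevalence))
    return ppv, npv

# Prevalence ~5.5% is an assumed value chosen to match the reported figures.
ppv, npv = screening_metrics(sens=0.636, spec=0.649, prevalence=0.055)
print(f"PPV {ppv:.1%}, NPV {npv:.1%}")  # roughly the reported 9.5% and 96.9%
```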
Abstract:
BACKGROUND AND PURPOSE: Beyond the Framingham Stroke Risk Score, prediction of future stroke may improve with a genetic risk score (GRS) based on single-nucleotide polymorphisms associated with stroke and its risk factors. METHODS: The study includes 4 population-based cohorts with 2047 first incident strokes from 22,720 initially stroke-free European origin participants aged ≥55 years, who were followed for up to 20 years. GRSs were constructed with 324 single-nucleotide polymorphisms implicated in stroke and 9 risk factors. The association of the GRS with first incident stroke was tested using Cox regression; the GRS predictive properties were assessed with area under the curve statistics comparing the GRS with age and sex and Framingham Stroke Risk Score models, and with reclassification statistics. These analyses were performed per cohort and in a meta-analysis of pooled data. Replication was sought in a case-control study of ischemic stroke. RESULTS: In the meta-analysis, adding the GRS to the Framingham Stroke Risk Score, age and sex model resulted in a significant improvement in discrimination (all stroke: Δjoint area under the curve=0.016, P=2.3×10⁻⁶; ischemic stroke: Δjoint area under the curve=0.021, P=3.7×10⁻⁷), although the overall area under the curve remained low. In all the studies, there was a highly significant improvement in the net reclassification index (P<10⁻⁴). CONCLUSIONS: The single-nucleotide polymorphisms associated with stroke and its risk factors result only in a small improvement in prediction of future stroke compared with the classical epidemiological risk factors for stroke.
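As an illustrative sketch of the approach (not the consortium's code), the snippet below builds a weighted genetic risk score from SNP dosages and per-allele weights and relates it to time-to-event data with a Cox model. lifelines is my library choice, and all dosages, weights, and survival times are simulated.

```python
# Weighted GRS construction and a Cox regression on simulated data (sketch).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 2000
dosages = rng.integers(0, 3, size=(n, 324)).astype(float)  # allele counts
weights = rng.normal(0.02, 0.01, 324)        # placeholder per-allele effects
grs = dosages @ weights                      # weighted genetic risk score
age = rng.uniform(55, 85, n)
sex = rng.integers(0, 2, n)

# Simulate survival times whose hazard depends on the GRS and age.
hazard = np.exp(0.2 * (grs - grs.mean()) / grs.std() + 0.03 * (age - 70))
time = rng.exponential(20 / hazard)
event = (time < 20).astype(int)              # administrative censoring at 20 y

df = pd.DataFrame({"T": np.minimum(time, 20.0), "E": event,
                   "grs": grs, "age": age, "sex": sex})
CoxPHFitter().fit(df, duration_col="T", event_col="E").print_summary()
```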
Abstract:
BACKGROUND/AIMS: Cannabis use is a growing challenge for public health, calling for adequate instruments to identify problematic consumption patterns. The Cannabis Use Disorders Identification Test (CUDIT) is a 10-item questionnaire used for screening for cannabis abuse and dependency. The present study evaluated this screening instrument. METHODS: In a representative population sample of 5,025 Swiss adolescents and young adults, 593 current cannabis users replied to the CUDIT. Internal consistency was examined by means of Cronbach's alpha and confirmatory factor analysis. In addition, the CUDIT was compared to accepted concepts of problematic cannabis use (e.g., using cannabis and driving). ROC analyses were used to test the CUDIT's discriminative ability and to determine an appropriate cut-off. RESULTS: Two items ('injuries' and 'hours being stoned') had loadings below 0.5 on the unidimensional construct and correlated lower than 0.4 with the total CUDIT score. All concepts of problematic cannabis use were related to CUDIT scores. An ideal cut-off between six and eight points was found. CONCLUSIONS: Although the CUDIT seems to be a promising instrument for identifying problematic cannabis use, there is a need to revise some of its items.
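A minimal sketch of the internal-consistency check mentioned above: Cronbach's alpha computed directly from an items matrix, with rows as respondents and columns as the 10 items. The simulated responses are illustrative, not CUDIT data.

```python
# Cronbach's alpha from an items matrix (sketch on simulated data).
import numpy as np

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total score
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(3)
latent = rng.normal(size=(593, 1))                # one underlying construct
items = latent + rng.normal(scale=1.0, size=(593, 10))  # 10 noisy items
print(f"alpha = {cronbach_alpha(items):.2f}")
```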
Abstract:
Investigations of solute transport in fractured rock aquifers often rely on tracer test data acquired at a limited number of observation points. Such data do not, by themselves, allow detailed assessments of the spreading of the injected tracer plume. To better understand the transport behavior in a granitic aquifer, we combine tracer test data with single-hole ground-penetrating radar (GPR) reflection monitoring data. Five successful tracer tests were performed under various experimental conditions between two boreholes 6 m apart. For each experiment, saline tracer was injected into a previously identified packed-off transmissive fracture while repeatedly acquiring single-hole GPR reflection profiles together with electrical conductivity logs in the pumping borehole. By analyzing depth-migrated GPR difference images together with tracer breakthrough curves and associated simplified flow and transport modeling, we estimate (1) the number, the connectivity, and the geometry of fractures that contribute to tracer transport, (2) the velocity and the mass of tracer that was carried along each flow path, and (3) the effective transport parameters of the identified flow paths. We find a qualitative agreement when comparing the time evolution of GPR reflectivity strengths at strategic locations in the formation with those arising from simulated transport. The discrepancies are on the same order as those between observed and simulated breakthrough curves at the outflow locations. The rather subtle and repeatable GPR signals provide useful and complementary information to tracer test data acquired at the outflow locations and may help us to characterize transport phenomena in fractured rock aquifers.
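A schematic sketch of the time-lapse differencing idea behind the GPR analysis: subtracting a pre-injection profile from each repeat acquisition leaves only the reflectivity changes caused by the saline tracer. Real processing would also involve trace alignment and depth migration, which are omitted here; the arrays are synthetic.

```python
# Time-lapse GPR difference imaging (schematic sketch, synthetic data).
import numpy as np

def difference_image(baseline, monitor):
    """baseline, monitor: 2-D arrays (travel time x trace position)."""
    # Normalize each profile by its RMS amplitude before differencing so
    # acquisition-to-acquisition gain changes do not dominate the result.
    b = baseline / np.sqrt(np.mean(baseline**2))
    m = monitor / np.sqrt(np.mean(monitor**2))
    return m - b

rng = np.random.default_rng(4)
baseline = rng.normal(size=(512, 200))
monitor = baseline.copy()
monitor[200:220, 80:120] += 0.5      # reflectivity change along a "fracture"
diff = difference_image(baseline, monitor)
print("largest change at:", np.unravel_index(np.abs(diff).argmax(), diff.shape))
```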
Batch effect confounding leads to strong bias in performance estimates obtained by cross-validation.
Abstract:
BACKGROUND: With the large amount of biological data that is currently publicly available, many investigators combine multiple data sets to increase the sample size and potentially also the power of their analyses. However, technical differences ("batch effects") as well as differences in sample composition between the data sets may significantly affect the ability to draw generalizable conclusions from such studies. FOCUS: The current study focuses on the construction of classifiers and the use of cross-validation to estimate their performance. In particular, we investigate the impact of batch effects and of differences in sample composition between batches on the accuracy of the classification performance estimate obtained via cross-validation. The focus on estimation bias is a main difference from previous studies, which have mostly focused on predictive performance and how it relates to the presence of batch effects. DATA: We work on simulated data sets. To have realistic intensity distributions, we use real gene expression data as the basis for our simulation. Random samples from this expression matrix are selected and assigned to group 1 (e.g., 'control') or group 2 (e.g., 'treated'). We introduce batch effects and select some features to be differentially expressed between the two groups. We consider several scenarios for our study, most importantly different levels of confounding between groups and batch effects. METHODS: We focus on well-known classifiers: logistic regression, Support Vector Machines (SVM), k-nearest neighbors (kNN), and Random Forests (RF). Feature selection is performed with the Wilcoxon test or the lasso. Parameter tuning and feature selection, as well as the estimation of the prediction performance of each classifier, are performed within a nested cross-validation scheme. The estimated classification performance is then compared to what is obtained when applying the classifier to independent data.
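The sketch below illustrates the nested cross-validation scheme the authors describe: feature selection and parameter tuning are confined to the inner loop so the outer-loop estimate is not optimistically biased. It uses an F-test filter in place of the paper's Wilcoxon test, and the simulated data and parameter grid are illustrative assumptions.

```python
# Nested cross-validation with in-fold feature selection (sketch).
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(5)
X = rng.normal(size=(100, 1000))          # 100 samples, 1000 features
y = rng.integers(0, 2, 100)
X[y == 1, :10] += 0.8                     # 10 truly informative features

# Selection and tuning both live inside the pipeline, so they are redone
# within every inner fold and never see the outer test data.
pipe = Pipeline([("select", SelectKBest(f_classif)), ("clf", SVC())])
grid = GridSearchCV(pipe, {"select__k": [10, 50],
                           "clf__C": [0.1, 1, 10]}, cv=3)
outer = cross_val_score(grid, X, y, cv=5)  # outer loop: performance estimate
print("nested CV accuracy: %.2f +/- %.2f" % (outer.mean(), outer.std()))
```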
Abstract:
Postoperative delirium after cardiac surgery is associated with increased morbidity and mortality as well as prolonged stay in both the intensive care unit and the hospital. The authors sought to identify modifiable risk factors associated with the development of postoperative delirium in elderly patients after elective cardiac surgery, in order to be able to design follow-up studies aimed at the prevention of delirium by optimizing perioperative management. DESIGN: A post hoc analysis of data from patients enrolled in a randomized controlled trial was performed. SETTING: A single university hospital. PARTICIPANTS: One hundred thirteen patients aged 65 or older undergoing elective cardiac surgery with cardiopulmonary bypass. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Screening for delirium was performed using the Confusion Assessment Method (CAM) on the first 6 postoperative days. A multivariable logistic regression model was developed to identify significant risk factors and to control for confounders. Delirium developed in 35 of 113 patients (30%). The multivariable model showed the maximum value of C-reactive protein measured postoperatively, the dose of fentanyl per kilogram of body weight administered intraoperatively, and the duration of mechanical ventilation to be independently associated with delirium. In this post hoc analysis, larger doses of fentanyl administered intraoperatively and longer duration of mechanical ventilation were associated with postoperative delirium in the elderly after cardiac surgery. Prospective randomized trials should be performed to test the hypotheses that a reduced dose of fentanyl administered intraoperatively, the use of a different opioid, or weaning protocols aimed at early extubation prevent delirium in these patients.
Abstract:
Using Monte Carlo simulations and reanalyzing the data of a validation study of the AEIM emotional intelligence test, we demonstrated that an atheoretical approach and the use of weak statistical procedures can result in biased validity estimates. These procedures included stepwise regression (and, more generally, failing to include important theoretical controls), extreme-scores analysis, and ignoring heteroscedasticity as well as measurement error. The authors of the AEIM test responded by offering more complete information about their analyses, allowing us to further examine the perils of ignoring theory and correct statistical procedures. In this paper we show with extended analyses that the AEIM test is invalid.
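As a toy version of the Monte Carlo argument (under my own simulation assumptions, not the authors' setup), the sketch below shows the bias mechanism: picking the best-looking predictor out of pure noise, as stepwise procedures effectively do, yields inflated in-sample validity estimates.

```python
# Post-selection inflation of validity coefficients (Monte Carlo sketch).
import numpy as np

rng = np.random.default_rng(6)
selected_r = []
for _ in range(1000):
    X = rng.normal(size=(100, 20))        # 20 candidate noise predictors
    y = rng.normal(size=100)              # outcome unrelated to all of them
    r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(20)])
    selected_r.append(np.abs(r).max())    # keep the best-looking predictor

# Expected |r| for a single noise predictor here is about 0.08; after
# selecting the best of 20, it is roughly 0.25 - spurious "validity".
print("mean |r| of the selected predictor:", round(np.mean(selected_r), 2))
```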
Abstract:
OBJECTIVE: Best long-term practice in primary HIV-1 infection (PHI) remains unknown for the individual patient. A risk-based scoring system associated with surrogate markers of HIV-1 disease progression could help stratify patients with PHI at highest risk for HIV-1 disease progression. METHODS: We prospectively enrolled 290 individuals with well-documented PHI in the Zurich Primary HIV-1 Infection Study, an open-label, non-randomized, observational, single-center study. Patients could choose to undergo early antiretroviral treatment (eART) and stop it after one year of undetectable viremia, to continue treatment indefinitely, or to defer treatment. For each patient we calculated an a priori defined "Acute Retroviral Syndrome Severity Score" (ARSSS), consisting of clinical and basic laboratory variables and ranging from zero to ten points. We used linear regression models to assess the association between ARSSS and log baseline viral load (VL), baseline CD4+ cell count, and log viral setpoint (sVL) (i.e., VL measured ≥90 days after infection or treatment interruption). RESULTS: Mean ARSSS was 2.89. CD4+ cell count at baseline was negatively correlated with ARSSS (p = 0.03, n = 289), whereas HIV-RNA levels at baseline showed a strong positive correlation with ARSSS (p < 0.001, n = 290). In the regression models, a 1-point increase in the score corresponded to a 0.10 log increase in baseline VL and a CD4+ cell count decline of 12 cells/µl, respectively. In patients with PHI not undergoing eART, higher ARSSS values were significantly associated with higher sVL (p = 0.029, n = 64). In contrast, in patients undergoing eART with subsequent structured treatment interruption, no correlation was found between sVL and ARSSS (p = 0.28, n = 40). CONCLUSION: The ARSSS is a simple clinical score that correlates with the best-validated surrogate markers of HIV-1 disease progression. In regions where ART is not universally available and eART is not standard, this score may help identify the patients who would profit most from early antiretroviral therapy.
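A minimal sketch of the kind of linear model reported here: regressing log baseline viral load on the severity score with ordinary least squares. The simulated slope of 0.10 log per score point mirrors the reported coefficient; everything else is made up.

```python
# OLS regression of log viral load on a clinical score (sketch).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
arsss = rng.integers(0, 11, 290).astype(float)       # score, 0-10 points
log_vl = 4.5 + 0.10 * arsss + rng.normal(0, 0.8, 290)  # simulated outcome

fit = sm.OLS(log_vl, sm.add_constant(arsss)).fit()
print(f"slope per score point: {fit.params[1]:.3f} (p={fit.pvalues[1]:.3g})")
```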
Abstract:
BACKGROUND: Serologic testing algorithms for recent HIV seroconversion (STARHS) provide important information for HIV surveillance. We have shown that a patient's antibody reaction in a confirmatory line immunoassay (INNO-LIA™ HIV I/II Score, Innogenetics) provides information on the duration of infection. Here, we sought to further investigate the diagnostic specificity of various Inno-Lia algorithms and to identify factors affecting it. METHODS: Plasma samples of 714 selected patients of the Swiss HIV Cohort Study, infected for longer than 12 months and representing all viral clades and stages of chronic HIV-1 infection, were tested blindly by Inno-Lia and classified as either incident (up to 12 months) or older infection by 24 different algorithms. Of the total, 524 patients received HAART, 308 had HIV-1 RNA below 50 copies/mL, and 620 were infected by an HIV-1 non-B clade. Using logistic regression analysis, we evaluated factors that might affect the specificity of these algorithms. RESULTS: HIV-1 RNA <50 copies/mL was associated with significantly lower reactivity to all five HIV-1 antigens of the Inno-Lia and impaired the specificity of most algorithms. Among 412 patients either untreated or with HIV-1 RNA ≥50 copies/mL despite HAART, the median specificity of the algorithms was 96.5% (range 92.0-100%). The only factor that significantly promoted false-incident results in this group was age, with false-incident results increasing by a few percent per additional year. HIV-1 clade, HIV-1 RNA, CD4 percentage, sex, disease stage, and testing modalities had no significant effect. Results were similar among the 190 untreated patients. CONCLUSIONS: The specificity of most Inno-Lia algorithms was high and not affected by HIV-1 variability, advanced disease, or other factors promoting false-recent results in other STARHS. Specificity should be good in any group of untreated HIV-1 patients.
Abstract:
Developing a novel technique for the efficient, noninvasive clinical evaluation of bone microarchitecture remains both crucial and challenging. The trabecular bone score (TBS) is a new gray-level texture measurement that is applicable to dual-energy X-ray absorptiometry (DXA) images. Significant correlations between TBS and standard 3-dimensional (3D) parameters of bone microarchitecture have been obtained using a numerical simulation approach. The main objective of this study was to empirically evaluate such correlations in anteroposterior spine DXA images. Thirty dried human cadaver vertebrae were evaluated. Micro-computed tomography acquisitions of the bone pieces were obtained at an isotropic resolution of 93 μm. Standard parameters of bone microarchitecture were evaluated in a defined region within the vertebral body, excluding cortical bone. The bone pieces were measured on a Prodigy DXA system (GE Medical-Lunar, Madison, WI), using a custom-made positioning device and experimental setup. Significant correlations were detected between TBS and 3D parameters of bone microarchitecture, mostly independent of any correlation between TBS and bone mineral density (BMD). The strongest correlation was between TBS and connectivity density, with TBS explaining roughly 67.2% of the variance. Based on multivariate linear regression modeling, we established a model that allows interpretation of the relationship between TBS and 3D bone microarchitecture parameters. This model indicates that TBS adds value and discriminating power between samples with similar BMDs but different bone microarchitectures. These results show that it is possible to estimate bone microarchitecture status from DXA imaging using TBS.