916 results for Informative Censoring


Abstract:

Bargaining is the building block of many economic interactions, ranging from bilateral to multilateral encounters and from situations in which the actors are individuals to negotiations between firms or countries. In all these settings, economists have long been intrigued by the fact that some projects, trades or agreements are not realized even though they are mutually beneficial. On the one hand, this has been explained by incomplete information: a firm may be unwilling to offer a wage acceptable to a qualified worker because it knows that there are also unqualified workers and cannot distinguish between the two types. This phenomenon is known as adverse selection. On the other hand, it has been argued that even under complete information, the presence of externalities may impede efficient outcomes. Consider the example of climate change: if a subset of countries agrees to curb emissions, non-participating regions benefit from the signatories' efforts without incurring costs. These free-riding opportunities create incentives to strategically improve one's bargaining power, and these incentives work against the formation of a global agreement.

This thesis extends our understanding of both factors, adverse selection and externalities. The findings are based on empirical evidence from original laboratory experiments as well as game-theoretic modeling. At a general level, it is demonstrated that the institutions through which agents interact matter to a large extent, and insights are provided about which institutions we should expect to perform better than others, at least in terms of aggregate welfare.

Chapters 1 and 2 focus on the problem of adverse selection. Effective operation of markets and other institutions often depends on good information-transmission properties. In terms of the example introduced above, a firm is only willing to offer high wages if it receives enough positive signals about the worker's quality during the application and wage-bargaining process. Chapter 1 shows that repeated interaction coupled with time costs facilitates information transmission: by making the wage-bargaining process costly for the worker, the firm is able to obtain more accurate information about the worker's type. The cost could be a pure time cost from delaying agreement or a cost of effort arising from a multi-step interviewing process. Chapter 2 abstracts from time costs and shows that communication can play a similar role; the simple fact that a worker claims to be of high quality may be informative.

Chapter 3 focuses on a different source of inefficiency: agents strive for bargaining power and thus may be motivated by incentives that are at odds with the socially efficient outcome. Beyond climate change, examples include coalitions within committees that are formed to secure voting power to block outcomes, or groups that commit to different technological standards although a single standard would be optimal (e.g., the format war between HD DVD and Blu-ray). Such inefficiencies are shown to be directly linked to the presence of externalities and a certain degree of irreversibility in actions. I now discuss the three articles in more detail.

In Chapter 1, Olivier Bochet and I study a simple bilateral bargaining institution that eliminates trade failures arising from incomplete information. In this setting, a buyer makes offers to a seller in order to acquire a good; whenever an offer is rejected, the buyer may submit a further offer. Bargaining is costly because both parties suffer a (small) time cost after any rejection. The difficulty is that the good can be of low or high quality, and the quality is known only to the seller. Without the possibility of repeated offers, it is too risky for the buyer to offer prices that allow for trade of high-quality goods; when repeated offers are allowed, however, both types of goods trade with probability one at equilibrium. We provide an experimental test of these predictions. Buyers gather information about sellers using specific price offers, and rates of trade are high, in line with the model's qualitative predictions. We also observe a persistent over-delay before trade occurs, which substantially reduces efficiency. We identify possible channels for over-delay in the form of two behavioral assumptions missing from the standard model, loss aversion (buyers) and haggling (sellers), which reconcile the data with the theoretical predictions.

Chapter 2 also studies adverse selection, but interaction between buyers and sellers now takes place within a market rather than in isolated pairs. Remarkably, in a market it suffices to let agents communicate in a very simple manner to mitigate trade failures. The key insight is that better-informed agents (sellers) are willing to truthfully reveal their private information because doing so reduces search frictions and attracts more buyers. Behavior observed in the experimental sessions closely follows the theoretical predictions; as a consequence, costless and non-binding communication (cheap talk) significantly raises rates of trade and welfare. Previous experiments have documented that cheap talk alleviates inefficiencies due to asymmetric information, with the findings explained by pro-social preferences and lie aversion. I use appropriate control treatments to show that such considerations play only a minor role in our market. Instead, the experiment highlights the ability to organize markets as a new channel through which communication can facilitate trade in the presence of private information.

In Chapter 3, I theoretically explore coalition formation via multilateral bargaining under complete information. The environment studied is extremely rich in the sense that the model allows for all kinds of externalities. This is achieved by using so-called partition functions, which pin down a coalitional worth for each possible coalition in each possible coalition structure. It is found that although binding agreements can be written, efficiency is not guaranteed, because the negotiation process is inherently non-cooperative. The prospects of cooperation are shown to crucially depend on (i) the degree to which players can renegotiate and gradually build up agreements and (ii) the absence of a certain type of externalities that can loosely be described as incentives to free-ride. Moreover, the willingness to concede bargaining power is identified as a novel reason for gradualism. Another key contribution of the study is the identification of a strong connection between the Core, one of the most important concepts in cooperative game theory, and the set of environments for which efficiency is attained even without renegotiation.
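Since the partition-function representation may be unfamiliar, here is a minimal sketch (my illustration, not the thesis's model) of a three-player partition function in Python, with a free-riding externality of the kind described above: a singleton's worth rises when the other two players cooperate.

```python
def partitions(players):
    """Enumerate all partitions (coalition structures) of a set of players."""
    players = list(players)
    if not players:
        yield []
        return
    first, rest = players[0], players[1:]
    for smaller in partitions(rest):
        for i, block in enumerate(smaller):
            yield smaller[:i] + [block | {first}] + smaller[i + 1:]
        yield smaller + [{first}]

def worth(coalition, structure):
    """Toy partition function with a free-riding externality: a singleton
    earns more when the remaining two players merge into one coalition."""
    if len(coalition) == 3:
        return 9.0                        # grand coalition
    if len(coalition) == 2:
        return 5.0
    others_merged = any(len(c) == 2 for c in structure)
    return 3.0 if others_merged else 1.0  # singleton free-rides on the pair

for structure in partitions({1, 2, 3}):
    total = sum(worth(c, structure) for c in structure)
    print([sorted(c) for c in structure], "total worth =", total)
# The grand coalition maximizes total worth (9), yet a singleton earns 3 by
# free-riding on a pair: exactly the externality that impedes efficiency.
```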

Abstract:

OBJECTIVES To assess the prevalence of within-group comparisons with baseline in a subset of leading dental journals and to explore possible associations with a range of study characteristics, including journal and study design. STUDY DESIGN AND SETTING Thirty consecutive issues of five leading dental journals were electronically searched. The conduct and reporting of statistical analyses involving comparisons against baseline, along with the manner in which the results were interpreted, were assessed. Descriptive statistics were obtained, and chi-square and Fisher's exact tests were undertaken to test the association between trial characteristics and overall study interpretation. RESULTS A total of 184 studies were included, with the highest proportion published in the Journal of Endodontics (n = 84, 46%) and most involving a single center (n = 157, 85%). Overall, 43 studies (23%) based the interpretation of their outcomes solely on comparisons against baseline. Inappropriate use of baseline testing was less likely in interventional studies (P < 0.001). CONCLUSION Comparisons with baseline appear to be common among both observational and interventional research studies in dentistry. Enhanced conduct and reporting of statistical tests are required to ensure that inferences from research studies are appropriate and informative.
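For illustration, the kind of association test described can be run in a few lines; the 2x2 counts below are hypothetical, not the study's data.

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# Rows: interventional vs. observational studies.
# Columns: interpretation based solely on baseline comparisons (yes / no).
# Counts are made up for illustration only.
table = np.array([[8, 92],
                  [35, 49]])

chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)

print(f"chi-square: X2={chi2:.2f}, df={dof}, p={p_chi2:.4f}")
print(f"Fisher's exact: OR={odds_ratio:.2f}, p={p_fisher:.4f}")
```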

Abstract:

This article examines the relations between figures of speech and discursive registers, defined as writing matrices that condition the tonality of discourse. We first show how, through their salience and informational density, figures give added reach to the textual realization of registers. We then consider the motivations that lead figures to become integrated into registers: while some figures show a strong affinity with particular registers (such as hyperbole with the epideictic register), others, such as metaphor, owe their presence in registers only to contextual factors. We next examine how figures participate in the construction or deconstruction of register effects as texts unfold, before analyzing the influence of genre on the interaction between figures and registers. Ultimately, the article confirms the pragmatic nature of figures of speech, insofar as their functionality derives in part from the relations they maintain with the tonal and illocutionary frame constituted by registers.

Abstract:

Purpose: Selective retina laser treatment (SRT) is a sub-threshold therapy that avoids widespread damage to all retinal layers by targeting only a few. While this facilitates faster healing, the lack of visual feedback during treatment is a considerable shortcoming: induced lesions remain invisible with conventional imaging, which makes clinical use challenging. To overcome this, we present a new strategy that provides location-specific, contact-free, automatic feedback on SRT laser applications. Methods: We leverage time-resolved optical coherence tomography (OCT) to give clinicians informative feedback on the outcome of location-specific treatment. By coupling an OCT system to the SRT treatment laser, we visualize structural changes in the retinal layers as they occur via time-resolved depth images. We then propose a novel strategy for automatic assessment of such time-resolved OCT images, introducing novel image features that, when combined with standard machine learning classifiers, yield excellent treatment-outcome classification. Results: Our approach was evaluated on both ex vivo porcine eyes and human patients in a clinical setting, reaching over 95% accuracy in predicting patient treatment outcomes. In addition, we show that accurate outcomes for human patients can be estimated even when our method is trained using only ex vivo porcine data. Conclusion: The proposed technique is a much-needed step toward noninvasive, safe, reliable, and repeatable SRT. These results are encouraging for the broader use of new treatment options for neovascularization-based retinal pathologies.
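The assessment step can be sketched as a standard feature-based classification pipeline. Everything below is a placeholder (random features stand in for the OCT-derived features, and an SVM stands in for whichever "standard machine learning classifier" was used); it shows the shape of the approach, not the authors' implementation.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Placeholder feature matrix: one row per laser application, columns are
# features computed from the time-resolved OCT depth images (hypothetical).
X = rng.normal(size=(200, 16))
y = rng.integers(0, 2, size=200)   # treatment outcome label (0/1)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")

# Training on one domain and evaluating on another (fit on porcine data,
# score on human data) mirrors the cross-domain experiment in the abstract:
# clf.fit(X_porcine, y_porcine); clf.score(X_human, y_human)
```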

Abstract:

Up to 10% of all breast and ovarian cancers are attributable to mutations in cancer susceptibility genes. Clinical genetic testing for deleterious mutations that predispose to hereditary breast and ovarian cancer (HBOC) syndrome is available. Mutation carriers may benefit from following high-risk guidelines for cancer prevention and early detection; however, few studies have reported the uptake of clinical genetic testing for HBOC. This study identified predictors of HBOC genetic testing uptake among a case series of 268 women who underwent genetic counseling at The University of Texas M. D. Anderson Cancer Center from October 1996 through July 2000. Women completed a baseline questionnaire that measured psychosocial and demographic variables; additional medical characteristics were obtained from the medical charts. Logistic regression modeling identified predictors of participation in HBOC genetic testing. Psychological variables were hypothesized to be the strongest predictors of testing uptake, in particular one's readiness (intention) to have testing. Testing uptake among all women in this study was 37% (n = 99). Contrary to the hypotheses, a woman's actual risk of carrying a BRCA1 or BRCA2 gene mutation was the strongest predictor of testing participation (OR = 15.37, 95% CI = 5.15-45.86). Other predictors included religious background, greater readiness to have testing, knowledge about HBOC and genetic testing, not having female children, and adherence to breast self-examination. Among the subgroup of women at ≥10% risk of carrying a mutation, 51% (n = 90) had genetic testing. Consistent with the hypotheses, predictors of testing participation in this high-risk subgroup included greater readiness to have testing, knowledge, and greater self-efficacy regarding one's ability to cope with test results. Women with CES-D scores ≥16, indicating the presence of depressive symptoms, were less likely to have genetic testing. The results indicate that among women with a wide range of risk for HBOC, actual risk of carrying an HBOC-predisposing mutation may be the strongest predictor of the decision to have genetic testing. Psychological variables (e.g., distress and self-efficacy) may influence testing participation only among women at highest risk of carrying a mutation, for whom genetic testing is most likely to be informative.
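A minimal sketch of the kind of logistic regression used to identify uptake predictors; the variable names and simulated data below are hypothetical, not the study's.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 268  # one row per counseled woman (simulated)

df = pd.DataFrame({
    "mutation_risk": rng.uniform(0, 0.6, n),   # prior probability of BRCA1/2 mutation
    "readiness": rng.integers(1, 6, n),        # intention to test (1-5 Likert)
    "knowledge": rng.normal(0, 1, n),          # HBOC/testing knowledge score
    "depressive": rng.integers(0, 2, n),       # CES-D >= 16 indicator
})
# Simulated outcome: risk raises, depressive symptoms lower, testing uptake.
logit = 0.5 + 6 * df["mutation_risk"] - 1.0 * df["depressive"]
df["tested"] = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(df[["mutation_risk", "readiness", "knowledge", "depressive"]])
fit = sm.Logit(df["tested"], X).fit(disp=0)
print(np.exp(fit.params))   # odds ratios for each predictor
print(fit.conf_int())       # confidence intervals (log-odds scale)
```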

Abstract:

With hundreds of single nucleotide polymorphisms (SNPs) in a candidate gene and millions of SNPs across the genome, selecting an informative subset of SNPs to maximize the ability to detect genotype-phenotype association is of great interest and importance. In addition, with a large number of SNPs, analytic methods are needed that allow investigators to control the false positive rate resulting from large numbers of SNP genotype-phenotype analyses. This dissertation uses simulated data to explore methods for selecting SNPs for genotype-phenotype association studies. I examined the pattern of linkage disequilibrium (LD) across a candidate gene region and used this pattern to aid in localizing a disease-influencing mutation. The results indicate that the r2 measure of linkage disequilibrium is preferable to the common D′ measure for use in genotype-phenotype association studies. Using stepwise linear regression, the best predictor of the quantitative trait was usually not the single functional mutation but rather a SNP in high linkage disequilibrium with it. Next, I compared three strategies for selecting SNPs for phenotype association studies: selection based on measures of linkage disequilibrium, selection based on a measure of haplotype diversity, and random selection. The results demonstrate that SNPs selected for maximum haplotype diversity are more informative and yield higher power than randomly selected SNPs or SNPs selected for low pair-wise LD. The data also indicate that for genes with a small contribution to the phenotype, it is more prudent for investigators to increase their sample size than to continually increase the number of SNPs in order to improve statistical power. When typing large numbers of SNPs, researchers face the challenge of using a statistical method that controls the type I error rate while maintaining adequate power. We show that an empirical genotype-based multi-locus global test, which uses permutation testing to investigate the null distribution of the maximum test statistic, maintains the desired overall type I error rate without overly sacrificing statistical power. The results also show that when the penetrance model is simple, the multi-locus global test does as well as or better than haplotype analysis; for more complex models, however, haplotype analyses offer advantages. The results of this dissertation will be of utility to human geneticists designing large-scale multi-locus genotype-phenotype association studies.
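For reference, the two LD measures compared here are computed from haplotype and allele frequencies as follows (a textbook formula with made-up frequencies, not the dissertation's simulated data):

```python
def ld_measures(p_ab, p_a, p_b):
    """Compute D, |D'|, and r^2 for two biallelic loci from the haplotype
    frequency p_ab = P(A and B) and the allele frequencies p_a and p_b."""
    d = p_ab - p_a * p_b
    if d >= 0:
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    d_prime = abs(d) / d_max if d_max > 0 else 0.0
    r2 = d**2 / (p_a * (1 - p_a) * p_b * (1 - p_b))
    return d, d_prime, r2

d, d_prime, r2 = ld_measures(p_ab=0.45, p_a=0.6, p_b=0.7)
print(f"D={d:.3f}, |D'|={d_prime:.3f}, r^2={r2:.3f}")
# r^2 (unlike D') directly reflects the power loss from testing a marker SNP
# in place of the functional mutation, which is why it is preferred here.
```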

Abstract:

Economists and other social scientists often face situations where they have access to two datasets that they can use but one set of data suffers from censoring or truncation. If the censored sample is much bigger than the uncensored sample, it is common for researchers to use the censored sample alone and attempt to deal with the problem of partial observation in some manner. Alternatively, they simply use only the uncensored sample and ignore the censored one so as to avoid biases. It is rarely the case that researchers use both datasets together, mainly because they lack guidance about how to combine them. In this paper, we develop a tractable semiparametric framework for combining the censored and uncensored datasets so that the resulting estimators are consistent, asymptotically normal, and use all information optimally. When the censored sample, which we refer to as the master sample, is much bigger than the uncensored sample (which we call the refreshment sample), the latter can be thought of as providing identification where it is otherwise absent. In contrast, when the refreshment sample is large and could typically be used alone, our methodology can be interpreted as using information from the censored sample to increase efficiency. To illustrate our results in an empirical setting, we show how to estimate the effect of changes in compulsory schooling laws on age at first marriage, a variable that is censored for younger individuals. We also demonstrate how refreshment samples for this application can be created by matching cohort information across census datasets.
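The data configuration is easy to simulate. The toy sketch below is my illustration of the setting, not the paper's semiparametric estimator: it shows the downward bias from treating censored observations as complete in a large master sample, and why even a small uncensored refreshment sample is informative about the censored portion of the distribution.

```python
import numpy as np

rng = np.random.default_rng(2)

# True outcome: age at first marriage, right-censored at each person's
# current age in the master sample (hypothetical parameters).
true_mean = 24.0
marriage_age = rng.normal(true_mean, 4.0, size=100_000)   # large master sample
current_age = rng.uniform(18, 30, size=marriage_age.size) # censoring ages
observed = np.minimum(marriage_age, current_age)          # right-censored values
censored = marriage_age > current_age

refreshment = rng.normal(true_mean, 4.0, size=500)        # small uncensored sample

print("share censored in master sample:", censored.mean().round(2))
print("naive mean, censored values treated as complete:",
      observed.mean().round(2))                           # biased downward
print("mean of refreshment sample:", refreshment.mean().round(2))  # unbiased, noisy
# The paper's estimator combines both samples: the refreshment data identify
# the censored part of the distribution, the master sample adds precision.
```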

Abstract:

Many studies have shown relationships between air pollution and the rate of hospital admissions for asthma. A few studies have controlled for age-specific effects by adding separate smoothing functions for each age group. However, it has not yet been reported whether air pollution effects differ significantly across age groups. This gap motivates the present study, which tests the hypothesis that air pollution effects on asthmatic hospital admissions differ significantly by age group, estimating each pollutant's effect separately for each group. Daily time-series data on hospital admission rates from seven cities in Korea from June 1999 through 2003 were analyzed. The outcome variable, the daily hospital admission rate for asthma, was related to five air pollutants used as independent variables: particulate matter <10 micrometers (μm) in aerodynamic diameter (PM10), carbon monoxide (CO), ozone (O3), nitrogen dioxide (NO2), and sulfur dioxide (SO2). Meteorological variables were considered as confounders. Admission data were divided into three age groups: children (<15 years of age), adults (15-64 years), and the elderly (≥65 years), with the adult group serving as the reference group for each city. To estimate age-specific air pollution effects, the analysis proceeded in two stages. In the first stage, generalized additive models (GAMs) with cubic spline smoothing were applied to estimate age- and city-specific air pollution effects on asthmatic hospital admission rates. In the second stage, a Bayesian hierarchical model with a non-informative (large-variance) prior was used to combine city-specific effects within each age group. The hypothesis test showed that the effects of PM10, CO and NO2 differed significantly by age group. Taking the air pollution effect for adults as the zero reference, age-specific effects were: -0.00154 (95% confidence interval (CI) = -0.0030, -0.0001) for children and 0.00126 (95% CI = 0.0006, 0.0019) for the elderly for PM10; -0.0195 (95% CI = -0.0386, -0.0004) for children for CO; and 0.00494 (95% CI = 0.0028, 0.0071) for the elderly for NO2. Relative rates (RRs) were 1.008 (95% CI = 1.000-1.017) in adults and 1.021 (95% CI = 1.012-1.030) in the elderly for every 10 μg/m3 increase of PM10; 1.019 (95% CI = 1.005-1.033) in adults and 1.022 (95% CI = 1.012-1.033) in the elderly for every 0.1 part per million (ppm) increase of CO; and 1.006 (95% CI = 1.002-1.009) and 1.019 (95% CI = 1.007-1.032) in the elderly for every 1 part per billion (ppb) increase of NO2 and SO2, respectively. Asthma hospital admissions were significantly increased with PM10 and CO in adults, and with PM10, CO, NO2 and SO2 in the elderly.
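The second-stage pooling can be illustrated with simple inverse-variance weighting, which approximates the hierarchical model with a flat prior when between-city variance is negligible; the city-specific estimates below are hypothetical.

```python
import numpy as np

# Hypothetical city-specific log-relative-rate estimates and standard errors
# for one pollutant and one age group (seven cities, as in the study design).
beta = np.array([0.0011, 0.0015, 0.0008, 0.0019, 0.0010, 0.0013, 0.0016])
se   = np.array([0.0006, 0.0007, 0.0005, 0.0009, 0.0006, 0.0008, 0.0007])

w = 1.0 / se**2                               # inverse-variance weights
beta_pooled = np.sum(w * beta) / np.sum(w)
se_pooled = np.sqrt(1.0 / np.sum(w))

# Report as a relative rate per 10-unit increase in the pollutant.
rr = np.exp(10 * beta_pooled)
ci = np.exp(10 * (beta_pooled + np.array([-1.96, 1.96]) * se_pooled))
print(f"pooled RR per 10-unit increase: {rr:.3f} (95% CI {ci[0]:.3f}-{ci[1]:.3f})")
```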

Abstract:

Research provides evidence of the positive health effects associated with regular physical activity in all populations. Activity may be especially beneficial for those with chronic conditions such as cancer. However, the majority of cancer patients and survivors do not participate in the recommended amount of physical activity. The purpose of this dissertation was to identify factors associated with physical activity participation, describe how these factors change as a result of a diet and exercise intervention, and evaluate correlates of long-term physical activity maintenance.

For this dissertation, I analyzed data from the FRESH START trial, a randomized, single-blind, phase II clinical trial focused on improving diet and physical activity among recently diagnosed breast and prostate cancer survivors. Analyses included both parametric and non-parametric statistical tests. Three separate studies were conducted, with sample sizes ranging from 400 to 486.

Common barriers to exercise, such as "no willpower," "too busy," and "I have pain," were reported among breast and prostate cancer survivors; however, these barriers were not significantly associated with minutes of physical activity. Breast cancer survivors reported a greater number of total barriers to exercise, as well as higher proportions reporting individual barriers, compared to prostate cancer survivors. Just under half of participants reduced their total number of barriers to exercise from baseline to 1-year follow-up, and those who did reported greater increases in minutes of physical activity compared to those who reported no change in barriers. Participants in both the tailored and standardized intervention groups reported greater minutes of physical activity at 2-year follow-up compared to baseline. Overall, twelve percent of participants reached recommended levels of physical activity at both 1- and 2-year follow-up. Self-efficacy was positively associated with physical activity maintenance, and the total number of barriers to exercise was inversely associated with maintenance.

Results from this dissertation are novel and informative and will help guide future physical activity interventions among cancer survivors. Thoughtfully designed interventions may encourage greater participation in physical activity and ultimately improve overall quality of life in this population.

Abstract:

Among Mexican Americans, the second largest minority group in the United States, the prevalence of gallbladder disease is markedly elevated. Previous data from both genetic admixture and family studies indicate that there is a genetic component to the occurrence of gallbladder disease in Mexican Americans. However, prior to this thesis no formal genetic analysis of gallbladder disease had been carried out, nor had any contributing genes been identified.

The results of complex segregation analysis in a sample of 232 Mexican American pedigrees documented the existence of a major gene with two alleles having age- and gender-specific effects on the occurrence of gallbladder disease. The estimated frequency of the allele increasing susceptibility was 0.39. The lifetime probabilities of being affected by gallbladder disease were 1.0, 0.54, and 0.00 for females of genotypes "AA", "Aa", and "aa", respectively, and 0.68, 0.30, and 0.00 for males. This analysis provided the first conclusive evidence for the existence of a common single gene having a large effect on the occurrence of gallbladder disease.

Human cholesterol 7α-hydroxylase is the rate-limiting enzyme in bile acid synthesis. The results of an association study in both a random sample and a matched case/control sample showed a significant association between cholesterol 7α-hydroxylase gene variation and the occurrence of gallbladder disease in Mexican American males but not in females. These data implicate a specific gene, cholesterol 7α-hydroxylase, in the etiology of gallbladder disease in this population.

Finally, I asked whether the major gene inferred from complex segregation analysis is genetically linked to the cholesterol 7α-hydroxylase gene. Three pedigrees predicted to be informative for linkage analysis, by virtue of supporting the major-gene hypothesis and having parents with informative genotypes and multiple offspring, were selected. In each of these pedigrees, the likelihood was maximized at a recombination fraction of 0 with a positive, albeit low, LOD score. These results provide preliminary and suggestive evidence that the cholesterol 7α-hydroxylase gene and the inferred gallbladder disease susceptibility gene are genetically linked.
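For context, a LOD score is the log10 likelihood ratio of linkage at a candidate recombination fraction against free recombination (theta = 0.5). A minimal sketch for phase-known meioses, with hypothetical counts rather than the thesis pedigrees:

```python
import numpy as np

def lod(recombinants, non_recombinants, theta):
    """LOD score for phase-known meioses: log10 likelihood ratio comparing
    recombination fraction theta against free recombination (theta = 0.5)."""
    r, n = recombinants, non_recombinants
    if theta == 0.0:
        loglik = 0.0 if r == 0 else -np.inf   # log10(1) per non-recombinant
    else:
        loglik = r * np.log10(theta) + n * np.log10(1 - theta)
    return loglik - (r + n) * np.log10(0.5)

# Hypothetical: 10 informative meioses, none recombinant, between the
# 7alpha-hydroxylase locus and the inferred susceptibility gene.
for theta in (0.0, 0.05, 0.1, 0.2, 0.3, 0.4):
    print(f"theta={theta:.2f}: LOD={lod(0, 10, theta):.2f}")
# The LOD is maximized at theta=0 at 10*log10(2), about 3.01; fewer
# informative meioses yield positive but low LODs, as in the abstract.
```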

Abstract:

Sizes and powers of selected two-sample tests of the equality of survival distributions are compared by simulation for small samples from unequally, randomly censored exponential distributions. The tests investigated include parametric tests (F, score, likelihood, asymptotic), logrank tests (Mantel, Peto-Peto), and Wilcoxon-type tests (Gehan, Prentice). Equal-sized samples, n = 8, 16, 32, with 1000 (size) and 500 (power) simulation trials, are compared for 16 combinations of the censoring proportions 0%, 20%, 40%, and 60%. For n = 8 and 16, the asymptotic, Peto-Peto, and Wilcoxon tests perform at nominal 5% size expectations, but the F, score and Mantel tests exceeded 5% size confidence limits for one third of the censoring combinations. For n = 32, all tests showed proper size, with the Peto-Peto test most conservative in the presence of unequal censoring. Powers of all tests are compared for exponential hazard ratios of 1.4 and 2.0. There is little difference in the power characteristics of the tests within the classes considered. The Mantel test showed 90% to 95% power efficiency relative to the parametric tests. Wilcoxon-type tests have the lowest relative power but are robust to differential censoring patterns. A modified Peto-Peto test shows power comparable to the Mantel test. For n = 32, a specific Weibull-exponential comparison of crossing survival curves suggests that the relative powers of logrank and Wilcoxon-type tests depend on the scale parameter of the Weibull distribution: Wilcoxon-type tests appear more powerful than logrank tests for late-crossing survival curves and less powerful for early-crossing curves. Guidelines for the appropriate selection of two-sample tests are given.
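A minimal version of this kind of size simulation, using the logrank test from the lifelines package on unequally censored exponential samples (my sketch; the parametric and Peto-Peto variants studied here are not reproduced):

```python
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(3)

def simulate_size(n=16, cens_a=0.20, cens_b=0.60, trials=1000, alpha=0.05):
    """Empirical size of the logrank test for equal exponential survival
    under unequal random (exponential) censoring in the two groups."""
    rejections = 0
    for _ in range(trials):
        t_a, t_b = rng.exponential(1.0, n), rng.exponential(1.0, n)
        # Exponential censoring with rate chosen to hit the target proportion:
        # P(censored) = rate_c / (1 + rate_c) when the event rate is 1.
        c_a = rng.exponential((1 - cens_a) / cens_a, n)
        c_b = rng.exponential((1 - cens_b) / cens_b, n)
        obs_a, ev_a = np.minimum(t_a, c_a), t_a <= c_a
        obs_b, ev_b = np.minimum(t_b, c_b), t_b <= c_b
        res = logrank_test(obs_a, obs_b,
                           event_observed_A=ev_a, event_observed_B=ev_b)
        rejections += res.p_value < alpha
    return rejections / trials

print("empirical size at nominal 5%:", simulate_size())
```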

Abstract:

The Obama administration's recurring policy emphasis on high-performing charter schools raises the obvious question: how do you identify a high-performing charter school? That is a crucially important policy question, because any evaluation strategy that incorrectly identifies charter school performance could have negative effects on the economically and/or academically disadvantaged students who frequently attend charter schools. If low-performing schools are mislabeled and allowed to persist or encouraged to expand, then students may be harmed directly. If high-performing schools are driven from the market by misinformation, then students will lose access to programs and services that can make a difference in their lives. Most of the scholarly analysis to date has focused on comparing the performance of students in charter schools to that of similar students in traditional public schools (TPS). By design, that research measures charter school performance only in relative terms: charter schools that outperform similarly situated but low-performing TPSs register positive effects, even if the charter schools are mediocre in an absolute sense. This analysis describes strategies for identifying high-performing charter schools by comparing charter schools with one another. We begin by describing salient characteristics of Texas charter schools. We follow that discussion with a look at how other researchers across the country have compared charter school effectiveness with TPS effectiveness. We then present several metrics that can be used to identify high-performing charter schools. Those metrics are not mutually exclusive (one could easily justify using multiple measures to evaluate school effectiveness), but they are also not equally informative. If the goal is to measure the contributions that schools make to student knowledge and skills, then a value-added approach like the ones highlighted in this report is clearly superior to a levels-based approach like that taken under the current accountability system.
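The levels-versus-value-added distinction can be made concrete with a stylized model (my illustration, not the report's metric): value-added is the school effect on current scores conditional on prior achievement, rather than the raw score level.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)

# Stylized student-level data for three schools with different intakes.
n = 900
school = rng.integers(0, 3, n)
intake = np.array([0.5, -0.5, 0.0])[school]       # mean prior achievement by school
value_added = np.array([0.0, 0.3, -0.2])[school]  # true school effect on growth

prior = intake + rng.normal(0, 1, n)
current = 0.7 * prior + value_added + rng.normal(0, 0.5, n)
df = pd.DataFrame({"school": school.astype(str), "prior": prior, "current": current})

# Levels-based ranking: raw mean scores confound intake with school impact.
print(df.groupby("school")["current"].mean())

# Value-added ranking: school effects conditional on prior achievement.
fit = smf.ols("current ~ prior + C(school)", data=df).fit()
print(fit.params.filter(like="school"))  # school 1 ranks highest despite lower levels
```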

Abstract:

This study is a secondary analysis merging emergency room visit records with daily ozone and PM2.5 measurements. Although the adverse health effects of ozone and fine particulate matter are documented in the literature, evidence regarding the health risks of these two pollutants in Harris County, Texas, is limited. Harris County (Houston) has sufficiently distinctive characteristics, including its ozone levels and industrial activity, that analysis of these relationships in this setting is informative. The objective of this study was to investigate the association between joint exposure to ozone and fine particulate matter and emergency room diagnoses of chronic obstructive pulmonary disease (COPD) and cardiovascular disease (CVD) in Harris County, Texas, from 2004 to 2009, with same-day and one-day lags.

The study variables were daily emergency room visits for Harris County, Texas, from 2004 to 2009, temperature, relative humidity, east wind component, north wind component, ozone, and fine particulate matter. Information about each patient's age, race, and gender was also included. The two dichotomous outcomes were emergency room diagnoses of chronic obstructive pulmonary disease and cardiovascular disease. Estimates of ozone and PM2.5 were interpolated using kriging, in which estimates of the two pollutants were predicted from monitoring data for every case residence zip code for every day of the six years: over 3 million estimates (one of each pollutant for each case in the database).

Logistic regressions were conducted to estimate odds ratios for the two outcomes. Three analyses were conducted: one for all records, another for visits during the four months of April and September of 2005 and 2009, and a third for visits from zip codes close to PM2.5 monitoring stations (the eastern area of Harris County). The last two analyses were designed to investigate temporal and spatial characteristics of the associations.

The dataset included all ER visits surveyed by Safety Net from 2004 to 2009, exceeding 3 million visits for all causes; there were 95,765 COPD and 96,596 CVD cases during this six-year period. A 1-μg/m3 increase in PM2.5 on the same day was associated with a 1.0% increase in the odds of a COPD emergency room diagnosis, a 0.4% increase in the odds of a CVD diagnosis, and a 0.2% increase in the odds of a CVD diagnosis on the following day. A 1-ppb increase in ozone was associated with a 0.1% increase in the odds of a COPD diagnosis on the same day. These four percentages add up to 1.7% of ER visits; that is, over the six-year period, a joint one-unit increase in both ozone and PM2.5 resulted in about 55,286 (3,252,102 × 0.017) extra ER visits for CVD or COPD, or 9,214 extra ER visits per year.

After adjustment for age, race, gender, day of the week, temperature, relative humidity, east wind component, north wind component, and wind speed, there were statistically significant associations between emergency room COPD diagnoses in Harris County, Texas, and joint same-day exposure to ozone and fine particulate matter, and between emergency room CVD diagnoses and exposure to PM2.5 on the same day and the previous day.

Despite the small association between the two air pollutants and the health outcomes, this study points to important findings: the need to identify reasons for the increase in CVD and COPD ER visits over the course of the project, the statistical association between humidity (or whatever other variables it may serve as a surrogate for) and CVD and COPD cases, and the confirmatory finding that males and blacks have higher odds for the two outcomes, consistent with other studies. An important finding of this research is that the number and distribution of PM2.5 monitors in Harris County, although not evenly spaced geographically, are adequate to detect significant associations between exposure and the two outcomes. In addition, this study points to other potential factors that contribute to the rising incidence of CVD and COPD ER visits in Harris County, such as population increases, patient history, lifestyle, and other pollutants. Finally, validation results using a subset of the data demonstrate the robustness of the models.
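The kriging step can be sketched with the pykrige package; the monitor coordinates and readings below are hypothetical, and the study's actual variogram choices are not known to me.

```python
import numpy as np
from pykrige.ok import OrdinaryKriging

rng = np.random.default_rng(5)

# Hypothetical daily PM2.5 readings (ug/m3) at monitor locations (lon, lat).
lon = rng.uniform(-95.8, -94.9, 12)
lat = rng.uniform(29.5, 30.1, 12)
pm25 = 10 + 3 * rng.standard_normal(12)

ok = OrdinaryKriging(lon, lat, pm25, variogram_model="exponential")

# Predict exposure at case-residence zip-code centroids (hypothetical points);
# repeating this per day and per pollutant yields the millions of estimates.
zip_lon = np.array([-95.4, -95.2, -95.6])
zip_lat = np.array([29.7, 29.9, 29.8])
estimates, variances = ok.execute("points", zip_lon, zip_lat)
print(estimates)  # interpolated PM2.5 for each zip centroid that day
```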

Abstract:

Radiomics is the high-throughput extraction and analysis of quantitative image features. For non-small cell lung cancer (NSCLC) patients, radiomics can be applied to standard-of-care computed tomography (CT) images to improve tumor diagnosis, staging, and response assessment. The first objective of this work was to show that CT image features extracted from pre-treatment NSCLC tumors can be used to predict tumor shrinkage in response to therapy. This is important because tumor shrinkage is a key cancer treatment endpoint, correlated with probability of disease progression and overall survival, and accurate prediction of shrinkage could also lead to individually customized treatment plans. To accomplish this objective, 64 NSCLC patients of similar stage and treatment were all imaged using the same CT scanner and protocol. Quantitative image features were extracted, and principal component regression with simulated annealing subset selection was used to predict shrinkage. Cross-validation and permutation tests were used to validate the results; the optimal model gave a strong correlation between observed and predicted shrinkage. The second objective was to identify sets of NSCLC CT image features that are reproducible, non-redundant, and informative across multiple machines. Feature sets with these qualities are needed for NSCLC radiomics models to be robust to machine variation and spurious correlation. To accomplish this objective, test-retest CT image pairs were obtained from 56 NSCLC patients imaged on three CT machines at two institutions. For each machine, quantitative image features with concordance correlation coefficient values greater than 0.90 were considered reproducible. Multi-machine reproducible feature sets were created by taking the intersection of the individual machines' reproducible feature sets, and redundant features were removed through hierarchical clustering. The findings showed that image feature reproducibility and redundancy depend on both the CT machine and the CT image type (average cine 4D-CT vs. end-exhale cine 4D-CT vs. helical inspiratory breath-hold 3D-CT). For each image type, a set of cross-machine reproducible, non-redundant, and informative image features was identified. Compared to end-exhale 4D-CT and breath-hold 3D-CT, image features derived from average 4D-CT showed superior multi-machine reproducibility and are the best candidates for clinical correlation.
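The reproducibility screen relies on Lin's concordance correlation coefficient (CCC) between test and retest feature values. A minimal implementation with simulated feature vectors (the 0.90 threshold is the one used in the abstract):

```python
import numpy as np

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between paired measurements."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                  # population variances
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

rng = np.random.default_rng(6)
test = rng.normal(0, 1, 56)                    # one feature's values, scan 1
retest = test + rng.normal(0, 0.15, 56)        # same feature, repeat scan

ccc = concordance_ccc(test, retest)
verdict = "reproducible" if ccc > 0.90 else "not reproducible"
print(f"CCC = {ccc:.3f} -> {verdict}")
# Features with CCC > 0.90 on every machine would enter the intersection set;
# hierarchical clustering then prunes redundant features.
```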

Abstract:

Maximizing data quality may be especially difficult in trauma-related clinical research. Strategies are needed to improve data quality and to assess the impact of data quality on clinical prediction models. This study had two objectives. The first was to compare missing data between two multi-center trauma transfusion studies: a retrospective study (RS) using medical chart data with minimal data quality review, and the PRospective Observational Multi-center Major Trauma Transfusion (PROMMTT) study with standardized quality assurance. The second was to assess the impact of missing data on clinical prediction algorithms by evaluating blood transfusion prediction models using PROMMTT data. RS (2005-06) and PROMMTT (2009-10) investigated trauma patients receiving ≥1 unit of red blood cells (RBC) at ten Level I trauma centers. Missing data were compared for 33 variables collected in both studies using mixed effects logistic regression (including random intercepts for study site). Massive transfusion (MT) patients received ≥10 RBC units within 24 hours of admission. Correct classification percentages for three MT prediction models were evaluated using complete case analysis and multiple imputation based on the multivariate normal distribution. A sensitivity analysis for missing data estimated the upper and lower bounds of correct classification under best- and worst-case assumptions about the missing data. Most variables (17/33 = 52%) had <1% missing data in both RS and PROMMTT. Of the remaining variables, 50% showed less missingness in PROMMTT, 25% showed less missingness in RS, and 25% were similar between studies. Missing percentages for MT prediction variables in PROMMTT ranged from 2.2% (heart rate) to 45% (respiratory rate). For variables with >1% missing, study site was associated with missingness (all p ≤ 0.021). Survival time predicted missingness for 50% of RS and 60% of PROMMTT variables. Complete case proportions for the MT models ranged from 41% to 88%, and complete case analysis and multiple imputation gave similar correct classification results. The sensitivity analysis lower-upper bound ranges for the three MT models were 59-63%, 36-46%, and 46-58%. Prospective collection of ten-fold more variables with data quality assurance reduced overall missing data. Study site and patient survival were associated with missingness, suggesting that data were not missing completely at random and that complete case analysis may lead to biased results. Evaluating clinical prediction model accuracy may be misleading in the presence of missing data, especially with many predictor variables. The proposed sensitivity analysis, estimating correct classification under upper (best-case) and lower (worst-case) bounds, may be more informative than multiple imputation, which gave results similar to complete case analysis.
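The best/worst-case bounds described here can be computed directly: cases lost to missing predictors are assumed all correctly classified (best case) or all misclassified (worst case). A sketch with hypothetical counts, reflecting my reading of the described sensitivity analysis:

```python
def classification_bounds(n_total, n_complete, n_correct_complete):
    """Upper/lower bounds on correct classification when predictions are only
    available for complete cases: cases with missing predictors are assumed
    all correct (best case) or all incorrect (worst case)."""
    n_missing = n_total - n_complete
    lower = n_correct_complete / n_total
    upper = (n_correct_complete + n_missing) / n_total
    return lower, upper

# Hypothetical MT prediction model: 41% complete cases, 88% correct among them.
n_total, n_complete = 1000, 410
n_correct = int(0.88 * n_complete)
lo, hi = classification_bounds(n_total, n_complete, n_correct)
print(f"correct classification between {lo:.0%} (worst) and {hi:.0%} (best)")
```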