16 results for positive predictive values
in DigitalCommons@The Texas Medical Center
Abstract:
The use of exercise electrocardiography (ECG) to detect latent coronary heart disease (CHD) is discouraged in apparently healthy populations because of low sensitivity. These recommendations, however, are based on the efficacy of evaluating ischemia (ST segment changes) with little regard for other measures of cardiac function that are available during exertion. The purpose of this investigation was to determine the association of maximal exercise hemodynamic responses with risk of mortality due to all causes, cardiovascular disease (CVD), and CHD in apparently healthy individuals. Study participants were 20,387 men (mean age = 42.2 years) and 6,234 women (mean age = 41.9 years), patients of a preventive medicine center in Dallas, TX, examined between 1971 and 1989. During an average of 8.1 years of follow-up, there were 348 deaths in men and 66 deaths in women. In men, age-adjusted all-cause death rates (per 10,000 person-years) across quartiles of maximal systolic blood pressure (SBP) (low to high) were 18.2, 16.2, 23.8, and 24.6 (p for trend <0.001). Corresponding rates for maximal heart rate were 28.9, 15.9, 18.4, and 15.1 (p for trend <0.001). After adjustment for confounding variables including age, resting systolic pressure, serum cholesterol and glucose, body mass index, smoking status, physical fitness, and family history of CVD, risks (with 95% confidence intervals (CI)) of all-cause mortality for quartiles of maximal SBP, relative to the lowest quartile, were 0.96 (0.70-1.33), 1.36 (1.01-1.85), and 1.37 (0.98-1.92) for quartiles 2-4, respectively. Corresponding risks for maximal heart rate were 0.61 (0.44-0.85), 0.69 (0.51-0.93), and 0.60 (0.41-0.87). No associations were noted between maximal exercise rate-pressure product and mortality. Similar results were seen for risk of CVD and CHD death. In women, similar trends in age-adjusted all-cause and CVD death rates across maximal SBP and heart rate categories were observed. Sensitivity of the exercise test in predicting mortality was enhanced when ECG results were evaluated together with maximal exercise SBP or heart rate, with a concomitant decrease in specificity. Positive predictive values were not improved. The efficacy of the exercise test in predicting mortality in apparently healthy men and women was not enhanced by using maximal exercise hemodynamic responses. These results suggest that an exaggerated systolic blood pressure response or an attenuated heart rate response to maximal exercise is a risk factor for mortality in apparently healthy individuals.
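The sensitivity/specificity trade-off described above (combining the ECG criterion with a maximal-exercise hemodynamic criterion under an "either positive" rule) can be illustrated with a minimal sketch; the simulated outcome rates and test characteristics below are hypothetical, not the study's data.

```python
import numpy as np

# Hypothetical illustration: combining two binary test criteria under an
# "either positive" rule raises sensitivity and lowers specificity relative
# to the ECG criterion alone, as the abstract describes.
rng = np.random.default_rng(0)
n = 10_000
died = rng.random(n) < 0.02                            # simulated outcome
ecg_pos = rng.random(n) < np.where(died, 0.25, 0.05)   # ischemic ST changes
sbp_pos = rng.random(n) < np.where(died, 0.30, 0.25)   # exaggerated maximal SBP

def se_sp_ppv(test, outcome):
    tp = np.sum(test & outcome)
    fp = np.sum(test & ~outcome)
    fn = np.sum(~test & outcome)
    tn = np.sum(~test & ~outcome)
    return tp / (tp + fn), tn / (tn + fp), tp / (tp + fp)

for name, test in [("ECG alone", ecg_pos),
                   ("ECG or high maximal SBP", ecg_pos | sbp_pos)]:
    se, sp, ppv = se_sp_ppv(test, died)
    print(f"{name}: Se={se:.2f}, Sp={sp:.2f}, PPV={ppv:.3f}")
```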
Abstract:
Nutrient intake and specific food item data from 24-hour dietary recalls were used to study the relationship between measures of diet diversity and dietary adequacy in a population of white females of child-bearing age and in socioeconomic subgroups of that population. As the basis of the diet diversity measures, twelve food groups were constructed from the 24-hour recall data, and the number of unique foods per food group was counted and weighted according to specified weighting schemes. Using these food groups, nine diet diversity indices were developed. Sensitivity/specificity analysis was used to determine the ability of varying levels of selected diet diversity indices to identify individuals above and below preselected intakes of different nutrients. The true prevalence proportions, sensitivity and specificity, false positive and false negative rates, and positive predictive values observed at the selected levels of the diet diversity indices were examined in relation to the objectives and resources of a variety of nutrition improvement programs. Diet diversity indices constructed from the total population data were also evaluated as screening tools for respondent nutrient intakes in each of the socioeconomic subgroups. The results of the sensitivity/specificity analysis demonstrated that the false positive rate, the false negative rate, or both were too high at each diversity cut-off level to justify the widespread use of any of the diversity indices in the dietary assessment of the study population. Although diet diversity has been shown to be highly correlated with the intakes of a number of nutrients, the diet diversity indices constructed in this study did not adequately represent nutrient intakes as reported in the 24-hour dietary recall. Specific cut-off levels of selected diversity indices might have limited application in some nutrition programs. These findings held for the sensitivity/specificity analyses in the socioeconomic subgroups as well as in the total population.
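A minimal sketch of the kind of sensitivity/specificity analysis described above, evaluating a diversity index at several cut-off levels against a preselected nutrient-intake threshold; all variable names and values are hypothetical.

```python
import numpy as np

# Illustrative only: screen for inadequate nutrient intake using a weighted
# food-group diversity score at several cut-offs, reporting Se, Sp, false
# positive/negative rates, and PPV.
rng = np.random.default_rng(1)
n = 500
diversity_index = rng.poisson(8, n)                       # weighted food-group score
nutrient_intake = 40 + 5 * diversity_index + rng.normal(0, 25, n)
inadequate = nutrient_intake < 70                         # condition to be detected

for cutoff in (5, 7, 9):
    screen_pos = diversity_index < cutoff                 # flagged as at risk
    tp = np.sum(screen_pos & inadequate)
    fp = np.sum(screen_pos & ~inadequate)
    fn = np.sum(~screen_pos & inadequate)
    tn = np.sum(~screen_pos & ~inadequate)
    se, sp = tp / (tp + fn), tn / (tn + fp)
    fpr, fnr = fp / (fp + tn), fn / (fn + tp)
    ppv = tp / (tp + fp) if (tp + fp) else float("nan")
    print(f"cutoff<{cutoff}: Se={se:.2f} Sp={sp:.2f} FPR={fpr:.2f} "
          f"FNR={fnr:.2f} PPV={ppv:.2f}")
```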
Abstract:
Background. Cardiac tamponade can occur when a large amount of fluid or gas, singly or in combination, accumulates within the pericardium and compresses the heart, causing circulatory compromise. Although previous investigators have found the 12-lead ECG to have a poor predictive value in diagnosing cardiac tamponade, very few studies have evaluated it as a follow-up tool for ruling in or ruling out tamponade in patients with previously diagnosed malignant pericardial effusions. Methods. A total of 127 patients with malignant pericardial effusions at the MD Anderson Cancer Center were included in this retrospective study. While 83 of these patients had cardiac tamponade diagnosed by echocardiographic criteria (the gold standard), 44 did not. We computed the sensitivity (Se), specificity (Sp), and positive (PPV) and negative predictive values (NPV) for individual ECG abnormalities and for combinations of them. Individual ECG abnormalities were also entered singly into a univariate logistic regression model to predict tamponade. Results. For patients with effusions of all sizes, electrical alternans had a Se, Sp, PPV, and NPV of 22.61%, 97.61%, 95%, and 39.25%, respectively. The corresponding values for low voltage complexes were 55.95%, 74.44%, 81.03%, and 46.37%. The presence of all three ECG abnormalities had a Se = 8.33%, Sp = 100%, PPV = 100%, and NPV = 35.83%, while the presence of at least one of the three ECG abnormalities had a Se = 89.28%, Sp = 46.51%, PPV = 76.53%, and NPV = 68.96%. For patients with effusions of all sizes, electrical alternans had an OR of 12.28 (1.58–95.17, p = 0.016), while the presence of at least one ECG abnormality had an OR of 7.25 (2.9–18.1, p < 0.001) in predicting tamponade. Conclusions. Although individual ECG abnormalities, with the exception of electrical alternans, had low sensitivities, specificities, NPVs, and PPVs, the presence of at least one of the three ECG abnormalities had a high sensitivity in diagnosing cardiac tamponade. This points to its potential use as a screening test, with a correspondingly high NPV to rule out a diagnosis of tamponade in patients with malignant pericardial effusions, which could reduce the need for expensive echocardiographic assessments in patients with previously diagnosed pericardial effusions.
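A minimal sketch of the test-characteristic calculations described above, using hypothetical 2x2 counts (not the study's data); for a single binary predictor, the Woolf odds ratio shown equals the univariate logistic-regression odds ratio.

```python
import math

# Hypothetical counts: one ECG abnormality cross-tabulated against the
# echocardiographic gold standard for tamponade.
tp, fn = 20, 63   # abnormality present / absent among tamponade cases
fp, tn = 1, 43    # abnormality present / absent among patients without tamponade

se, sp = tp / (tp + fn), tn / (tn + fp)
ppv, npv = tp / (tp + fp), tn / (tn + fn)
or_ = (tp * tn) / (fp * fn)
log_se = math.sqrt(1/tp + 1/fp + 1/fn + 1/tn)     # Woolf log-OR standard error
ci = (math.exp(math.log(or_) - 1.96 * log_se),
      math.exp(math.log(or_) + 1.96 * log_se))
print(f"Se={se:.2%} Sp={sp:.2%} PPV={ppv:.2%} NPV={npv:.2%}")
print(f"OR={or_:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```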
Abstract:
Alcohol consumption has a long-standing tradition in the United States Air Force (USAF). From squadron bars to officers' and enlisted clubs, alcohol has been used in social settings to increase morale and as a way to help decrease the stress of military operations. Surveys have demonstrated that the USAF has more than double the percentage of heavy drinkers found in the US population. More than one-third of the Air Force reports binge drinking in the last month, while only six percent of the nation reports the same consumption pattern. However, alcohol has significant harmful health effects if consumed in excess. As part of an overall prevention and treatment program aimed at curbing the harmful effects of alcohol consumption, the USAF uses the Alcohol Use Disorders Identification Test (AUDIT) to screen for high-risk alcohol consumption patterns before alcohol disorder and disability occur. All Air Force active-duty members are required to complete a yearly Preventive Health Assessment questionnaire. Various health topics are included in this questionnaire, including nutrition, exercise, tobacco use, family history, mental health, and alcohol use. While this questionnaire has been available in a web-based format for several years, mandatory use was not implemented until 2009. Although the AUDIT was selected because of its effectiveness in assessing high-risk alcohol consumption in other populations, its effectiveness in the Air Force population had not been studied previously. In order to assess the sensitivity, specificity, and positive predictive value of this screening tool, the Air Force Web-based Preventive Health Assessment alcohol screening results were compared with alcohol-related diagnoses made from January 1, 2009 to March 31, 2010. While the AUDIT has previously been shown to have high sensitivity and specificity, the Air Force screening values were 27.9% and 93.0%, respectively, and the positive predictive value was only 4.9%. Given these screening statistics, fewer than one-third of those with an alcohol disorder will be identified by this screening tool, and only 1 out of 20 Airmen flagged for further evaluation actually has an alcohol-related diagnosis.
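The low positive predictive value reported above is largely a consequence of low prevalence. Below is a back-of-the-envelope sketch using Bayes' rule with the reported sensitivity and specificity; the implied prevalence is a derived illustration, not a figure taken from the study.

```python
# Reported screening characteristics from the abstract.
se, sp, ppv_reported = 0.279, 0.930, 0.049

def ppv(prevalence: float) -> float:
    """Positive predictive value from Se, Sp, and prevalence (Bayes' rule)."""
    return se * prevalence / (se * prevalence + (1 - sp) * (1 - prevalence))

for p in (0.01, 0.02, 0.05, 0.10):
    print(f"prevalence {p:.0%}: PPV = {ppv(p):.1%}")

# Prevalence consistent with the reported PPV (algebraic rearrangement of Bayes' rule).
implied = (1 - sp) * ppv_reported / (se * (1 - ppv_reported) + (1 - sp) * ppv_reported)
print(f"implied prevalence of alcohol-related diagnoses ~ {implied:.1%}")
```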
Abstract:
Quantitative imaging with 18F-FDG PET/CT has the potential to provide an in vivo assessment of response to radiotherapy (RT). However, comparing tissue tracer uptake in longitudinal studies is often confounded by variations in patient setup and potential treatment-induced gross anatomic changes. These variations make true response monitoring for the same anatomic volume a challenge, not only for tumors but also for normal organs-at-risk (OAR). The central hypothesis of this study is that more accurate image registration will lead to improved quantitation of tissue response to RT with 18F-FDG PET/CT. Employing an in-house developed “demons” based deformable image registration algorithm, pre-RT tumor and parotid gland volumes can be more accurately mapped to serial functional images. To test the hypothesis, specific aim 1 was designed to analyze whether deformably mapping tumor volumes, rather than aligning to bony structures, leads to superior tumor response assessment. We found that deformable mapping of the most metabolically avid regions improved response prediction (P<0.05); the positive predictive value for residual disease was 63%, compared with 50% for contrast-enhanced post-RT CT. Specific aim 2 was designed to use parotid gland standardized uptake value (SUV) as an objective imaging biomarker for salivary toxicity. We found that the relative change in parotid gland SUV correlated strongly with salivary toxicity as defined by the RTOG/EORTC late effects analytic scale (Spearman’s ρ = -0.96, P<0.01). Finally, the goal of specific aim 3 was to create a phenomenological dose-SUV response model for the human parotid glands. Using only baseline metabolic function and the planned dose distribution, it became possible to predict parotid SUV change and, via the relationship established in specific aim 2, salivary toxicity. We found that the predicted and observed parotid SUV relative changes were significantly correlated (Spearman’s ρ = 0.94, P<0.01). The application of deformable image registration to quantitative treatment response monitoring with 18F-FDG PET/CT could have a profound impact on patient management. Accurate and early identification of residual disease may allow for more timely intervention, while the ability to quantify and predict toxicity of normal OAR might permit individualized refinement of radiation treatment plan designs.
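A small sketch of the rank-correlation check underlying specific aim 3, comparing model-predicted and observed relative changes in parotid SUV; the values below are made up for illustration.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical predicted vs. observed relative changes in parotid-gland SUV.
predicted_change = np.array([-0.35, -0.22, -0.10, -0.28, -0.05, -0.18])
observed_change  = np.array([-0.40, -0.20, -0.08, -0.30, -0.02, -0.15])

rho, p_value = spearmanr(predicted_change, observed_change)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```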
Abstract:
In order to identify optimal therapy for children with bacterial pneumonia, Pakistan's ARI Program, in collaboration with the National Institute of Health (NIH), Islamabad, undertook a national surveillance of antimicrobial resistance in S. pneumoniae and H. influenzae. The project was carried out at selected urban and peripheral sites in 6 different regions of Pakistan in 1991–92. Nasopharyngeal (NP) specimens and blood cultures were obtained from children with pneumonia diagnosed in the outpatient clinics of participating facilities. Organisms were isolated by local hospital laboratories and sent to NIH for confirmation, serotyping, and antimicrobial susceptibility testing. The aims of the study were: (i) to determine the antimicrobial resistance patterns of S. pneumoniae and H. influenzae in children aged 2–59 months; (ii) to determine the ability of selected laboratories to identify and effectively transport isolates of S. pneumoniae and H. influenzae cultured from nasopharyngeal and blood specimens; (iii) to validate the comparability of resistance patterns for nasopharyngeal and blood isolates of S. pneumoniae and H. influenzae from children with pneumonia; and (iv) to examine the effect of drug resistance and laboratory error on the cost of effectively treating children with ARI. A total of 1293 children with ARI were included in the study: 969 (75%) from urban areas and 324 (25%) from rural parts of the country. Of the 1293, 786 (61%) were male and 507 (39%) were female. The resistance rates of S. pneumoniae to various antibiotics among the urban children with ARI were: TMP/SMX (62%); chloramphenicol (23%); penicillin (5%); tetracycline (16%); and ampicillin/amoxicillin (0%). The resistance rates of H. influenzae were higher than those of S. pneumoniae: TMP/SMX (85%); chloramphenicol (62%); penicillin (59%); ampicillin/amoxicillin (46%); and tetracycline (100%). Rates of resistance to each antimicrobial agent were similar among isolates from the rural children. Of a total of 614 specimens tested for antimicrobial susceptibility, 432 (70.4%) were resistant to TMP/SMX and 93 (15.2%) were resistant to antimicrobial agents other than TMP/SMX, viz. ampicillin/amoxicillin, chloramphenicol, penicillin, and tetracycline. The sensitivity and positive predictive value of the peripheral laboratories for H. influenzae were 99% and 65%, respectively. Similarly, the sensitivity and positive predictive value of the peripheral laboratory tests for S. pneumoniae, compared with the gold standard (the NIH laboratory), were 99% and 54%, respectively. The sensitivity and positive predictive value of nasopharyngeal specimens compared with blood cultures (gold standard), as isolated by the peripheral laboratories, were 88% and 11% for H. influenzae, and 92% and 39% for S. pneumoniae, respectively. (Abstract shortened by UMI.)
Abstract:
This study compared four alternative approaches (Taylor, Fieller, percentile bootstrap, and bias-corrected bootstrap methods) to estimating confidence intervals (CIs) around the cost-effectiveness (CE) ratio. The study consisted of two components: (1) a Monte Carlo simulation was conducted to identify characteristics of hypothetical cost-effectiveness data sets that might lead one CI estimation technique to outperform another, and these results were matched to the characteristics of (2) an extant data set derived from the National AIDS Demonstration Research (NADR) project. The four methods were then used to calculate CIs for this data set, and the results were compared. The main performance criterion in the simulation study was the percentage of times the estimated CIs contained the "true" CE ratio. A secondary criterion was the average width of the confidence intervals. For the bootstrap methods, bias was estimated. Simulation results for the Taylor and Fieller methods indicated that the CIs estimated using the Taylor series method contained the true CE ratio more often than did those obtained using the Fieller method, but the opposite was true when the correlation was positive and the CV of effectiveness was high for each value of the CV of costs. Similarly, the CIs obtained by applying the Taylor series method to the NADR data set were wider than those obtained using the Fieller method for positive correlation values and for values for which the CV of effectiveness was not equal to 30% for each value of the CV of costs. The general trend for the bootstrap methods was that the percentage of times the true CE ratio was contained in the CIs was higher for the percentile method at higher values of the CV of effectiveness, given the correlation between average costs and effects and the CV of effectiveness. The results for the data set indicated that the bias-corrected CIs were wider than the percentile method CIs, in accordance with the prediction derived from the simulation experiment. Generally, the bootstrap methods are more favorable for the parameter specifications investigated in this study; however, the Taylor method is preferred for a low CV of effectiveness, and the percentile method is more favorable for a higher CV of effectiveness.
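A minimal sketch of a percentile-bootstrap confidence interval for a cost-effectiveness ratio (mean incremental cost over mean incremental effect); the synthetic costs and effects stand in for the NADR data, which are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)
costs = rng.gamma(shape=2.0, scale=500.0, size=200)    # per-person incremental costs
effects = rng.normal(loc=0.10, scale=0.05, size=200)   # per-person incremental effects

def ce_ratio(c, e):
    return c.mean() / e.mean()

# Resample cost-effect pairs with replacement and take percentiles of the
# bootstrap distribution of the CE ratio.
boot = np.empty(5000)
for b in range(boot.size):
    idx = rng.integers(0, costs.size, costs.size)
    boot[b] = ce_ratio(costs[idx], effects[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"CE ratio = {ce_ratio(costs, effects):.0f}, "
      f"95% percentile CI ({lo:.0f}, {hi:.0f})")
```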
Abstract:
Sexually transmitted infections (STIs) are a major public health problem, and controlling their spread is a priority. According to the World Health Organization (WHO), 340 million new cases of treatable STIs occur yearly around the world among 15–49 year olds (1). Infection with STIs can lead to several complications such as pelvic inflammatory disease (PID), cervical cancer, infertility, ectopic pregnancy, and even death (1). Additionally, STIs and associated complications are among the top disease types for which healthcare is sought in developing nations (1), and according to the UNAIDS report, there is a strong connection between STIs and the sexual spread of HIV infection (2). In fact, it is estimated that the presence of an untreated STI can increase the likelihood of contracting and spreading HIV by a factor of up to 10 (2). In addition, developing countries are poorer in resources and lack inexpensive and precise diagnostic laboratory tests for STIs, thereby exacerbating the problem. Thus, the WHO recommends syndromic management of STIs for delivering care where lab testing is scarce or unattainable (1). This approach relies on an easy-to-use algorithm to help healthcare workers recognize symptoms and signs so as to provide treatment for the likely cause of the syndrome. Furthermore, according to the WHO, syndromic management offers immediate and valid treatment compared to clinical diagnosis, and it is also more cost-effective for some syndromes than laboratory testing (1). Even though the vaginal discharge syndrome has been shown to have low specificity for gonorrhea and Chlamydia and can lead to over-treatment (1), this is the recommended way to manage STIs in developing nations. Thus, the purpose of this paper is to specifically address the following questions: is syndromic management working to lower the STI burden in developing nations? How effective is it, and should it still be recommended? To answer these questions, a systematic literature review was conducted to evaluate the current effectiveness of syndromic management in developing nations. This review examined articles published over the past 5 years that compared syndromic management to laboratory testing and reported sensitivity, specificity, and positive predictive value data. Focusing mainly on the vaginal discharge, urethral discharge, and genital ulcer algorithms, it was seen that though syndromic management is more effective in diagnosing and treating urethral and genital ulcer syndromes in men, there remains an urgent need to revise the WHO recommendations for managing STIs in developing nations. Current studies have continued to show decreased specificity, sensitivity, and positive predictive values for the vaginal discharge syndrome, and high rates of asymptomatic infections and healthcare workers neglecting to follow guidelines limit the usefulness of syndromic management. Furthermore, though advocated as cost-effective by the WHO, there is a cost incurred from treating uninfected people. Instead of improving this system, it is recommended that the development of better, less expensive point-of-care and rapid diagnostic test kits be the focus of STI diagnosis and treatment in developing nations.
Abstract:
The U.S. Air Force assesses Active Duty Air Force (ADAF) health annually using the Air Force Web-based Preventive Health Assessment (AF WebPHA). The assessment is based on a self-administered survey used to determine overall Air Force health and readiness, as well as the individual health of each airman. Individual survey responses, as well as groups of responses, generate further computer-generated assessment and result in a classification of 'Critical', 'Priority', or 'Routine', depending on the need and urgency for further evaluation by a health care provider. The importance of the 'Priority' and 'Critical' classifications is to provide timely intervention to prevent or limit unfavorable outcomes that may threaten an airman. Though the USAF had been transitioning from a paper form to the online WebPHA survey for the preceding three years, it was not made mandatory for all airmen until 2009. The survey covers many health aspects including family history, tobacco use, exercise, alcohol use, and mental health. Military stressors such as deployment, change of station, and the trauma of war can aggravate and intensify the common baseline worries experienced by the general population and place airmen at additional risk for mental health concerns and illness. This study assesses the effectiveness of the AF WebPHA mental health screening questions in predicting a mental health disorder diagnosis according to International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM) codes generated by physicians or their surrogates. In order to assess the sensitivity, specificity, and positive predictive value of the AF WebPHA as a screening tool for mental health, survey results were compared with mental health disorder-related diagnoses made during the period from January 1, 2009 to March 31, 2010. Statistical analysis of the AF WebPHA mental health responses, when compared with matching ICD-9-CM codes, found that the sensitivity of 'Critical' or 'Priority' responses was only 3.4% and that such responses correctly predicted the selected mental health diagnoses only 9% of the time.
Abstract:
Introduction and objective. A number of prognostic factors have been reported for predicting survival in patients with renal cell carcinoma, yet few studies have analyzed the effects of those factors at different stages of the disease process. In this study, different stages of disease progression were evaluated: from nephrectomy to metastasis, from metastasis to death, and from evaluation to death. Methods. In this retrospective follow-up study, records of 97 deceased renal cell carcinoma (RCC) patients were reviewed between September 2006 and October 2006. Patients with TNM stage IV disease before nephrectomy or with cancer diagnoses other than RCC were excluded, leaving 64 records for analysis. TNM stage, Fuhrman grade, age, tumor size, tumor volume, histology, and patient gender were analyzed in relation to time to metastasis. Time from nephrectomy to metastasis, TNM stage, Fuhrman grade, age, tumor size, tumor volume, histology, and patient gender were tested for significance in relation to time from metastasis to death. Finally, laboratory values at the time of evaluation, Eastern Cooperative Oncology Group (ECOG) performance status, the UCLA Integrated Staging System (UISS), time from nephrectomy to metastasis, TNM stage, Fuhrman grade, age, tumor size, tumor volume, histology, and patient gender were tested for significance in relation to time from evaluation to death. Linear regression and Cox proportional hazards models (univariate and multivariate) were used to test significance. The Kaplan-Meier method with the log-rank test was used to detect significant differences between groups at various endpoints. Results. Compared with negative lymph nodes at the time of nephrectomy, a single positive lymph node was associated with a significantly shorter time to metastasis (p<0.0001). Compared with other histological types, clear cell histology was significantly associated with metastasis-free survival (p=0.003). Clear cell histology compared with other types (p=0.0002 univariate, p=0.038 multivariate) and log-transformed time to metastasis (p=0.028) significantly affected time from metastasis to death. Metastasis-free intervals of greater than one year and greater than two years, compared with metastasis before one and two years respectively, conferred a statistically significant survival benefit (p=0.004 and p=0.0318). Time from evaluation to death was affected by a greater than one year metastasis-free interval (p=0.0459), alcohol consumption (p=0.044), LDH (p=0.006), ECOG performance status (p<0.001), and hemoglobin level (p=0.0092). The UISS risk-stratified the patient population for survival in a statistically significant manner (p=0.001). No other factors were found to be significant. Conclusion. Clear cell histology is predictive both of time to metastasis and of time from metastasis to death. Nodal status at the time of nephrectomy may predict risk of metastasis. The time interval to metastasis significantly predicts time from metastasis to death and time from evaluation to death. ECOG performance status and hemoglobin level predict survival outcome at evaluation. Finally, the UISS appropriately stratifies risk in our population.
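A sketch of the survival analyses named above (Cox proportional hazards and the log-rank test), written with the lifelines package on a handful of synthetic records rather than the 64 reviewed charts.

```python
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

# Synthetic records: time from metastasis to death (months), death indicator,
# histology, and whether the metastasis-free interval exceeded one year.
df = pd.DataFrame({
    "months":          [6, 14, 30, 9, 22, 40, 5, 18, 27, 11, 16, 35],
    "died":            [1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0],
    "clear_cell":      [0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1],
    "mets_free_gt1yr": [0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="died")
print(cph.summary[["exp(coef)", "p"]])        # hazard ratios and p-values

cc, other = df[df.clear_cell == 1], df[df.clear_cell == 0]
lr = logrank_test(cc.months, other.months, cc.died, other.died)
print(f"log-rank p = {lr.p_value:.3f}")
```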
Abstract:
Background. The population-based Houston Tuberculosis Initiative (HTI) study has enrolled and gathered demographic, social, behavioral, and disease-related data on more than 80% of all reported Mycobacterium tuberculosis (MTB) cases and 90% of all culture-positive patients in Houston/Harris County over a 9-year period (October 1995 through September 2004). During this period, 33% (n=1210) of HTI MTB cases reported a history of drug use, and a majority of those (73.6%) were non-injection drug users (NIDUs). Other than HIV, drug use is the single most important risk factor for progression from latent to infectious tuberculosis (TB). In addition, drug use is associated with increased transmission of active TB, as seen in the increased number of clonally related strains, or clusters, found in this population. The deregulatory effects of drug use on immune function are well documented, and associations between drug use and increased morbidity have been reported since the late 1970s. However, limited research is available on the immunological consequences of non-injection drug use and its relation to tuberculosis infection among TB patients. Methods. TB transmission patterns, symptoms, and the prevalence of co-morbidities were the focus of this project. Smoking is known to suppress nitric oxide (NO) production and interfere with immune function; in order to limit any possible confounding due to smoking, two separate analyses were done. Non-injection drug user smokers (NIDU-S) were compared with non-drug user smokers (NDU-S), and non-injection drug user non-smokers (NIDU-NS) were compared with non-drug user non-smokers (NDU-NS). Specifically, proportions, chi-square p-values, and (where appropriate) odds ratios with 95% confidence intervals were calculated to assess characteristics and potential associations of co-morbidities and symptoms of TB among NIDU HTI TB cases. Results. Significant differences in demographic characteristics and risk factors were found. In addition, drug users were found to have a decreased risk for cancer, diabetes mellitus, and chronic pulmonary disease, and an increased risk of HIV/AIDS diagnosis, liver disease, and trauma-related morbidities. Drug users were more likely to have pulmonary TB disease, and a significantly increased number of clonally related strains of TB, or "clusters," was seen in both smoker and non-smoker drug users compared with their non-drug user counterparts. Drug users were more likely to belong to print groups (clonally related TB strains with matching spoligotypes), including print one, print three, and the Beijing family group, s1. Drug users were found to be no more likely to experience drug resistance to TB therapy and were likely to be cured of disease upon completion of therapy. Conclusion. Drug users' demographic and behavioral risk factors put them at an increased risk of contracting and spreading TB throughout the community. Their increased levels of clustering are evidence of recent transmission, and the significance of certain print groups among this population indicates that transmission occurs within the social family. For these reasons, a focus on this at-risk population is critical to the success of future public health interventions.
Successful completion of directly observed therapy (DOT), the tracking of TB outbreaks and incidence through molecular characterization, and improved diagnostic strategies have led to the stabilization of TB incidence in Houston/Harris County over the past 9 years and show that the Houston Tuberculosis Initiative has played a critical role in the control and prevention of TB transmission.
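A minimal sketch of the chi-square and odds-ratio comparisons described in the methods above, using hypothetical counts of clustered (clonally related) TB strains among drug users and non-users.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows are drug users / non-users, columns are
# clustered / non-clustered TB isolates.
table = np.array([[120, 180],
                  [200, 500]])

chi2, p, dof, _ = chi2_contingency(table)
a, b, c, d = table.ravel()
or_ = (a * d) / (b * c)
se = np.sqrt(1/a + 1/b + 1/c + 1/d)               # Woolf log-OR standard error
ci = np.exp(np.log(or_) + np.array([-1.96, 1.96]) * se)
print(f"chi2={chi2:.1f}, p={p:.4f}, OR={or_:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```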
Abstract:
The use of feminine products such as vaginal douches, tampons, and sanitary napkins is common among women. Despite the results of some studies that suggest an association between douching and bacterial vaginosis, douching remains an understudied topic. The possibility of an association between tampon use and infection has not been significantly investigated since the toxic shock syndrome outbreak in the 1980s. The first objective of our study was to evaluate demographic, reproductive health, and sexual behavior variables to establish an epidemiologic profile of menstruating women who reported douching and women who reported using sanitary napkins only. The second objective was to evaluate whether the behaviors of douching and using tampons were associated with an increased risk of bacterial vaginosis or trichomonas. We analyzed these factors using logistic regression among the 3,174 women from the 2001-2004 NHANES cross-sectional data who met the inclusion criteria for our study. The epidemiologic profile of women with the highest reported frequency of douching comprised women aged 36-49 years, with a high school education or GED, of black race, not taking oral contraceptives, reporting vaginal symptoms in the last month or two or more sexual partners in the last year, or testing positive for bacterial vaginosis or trichomonas. The profile of those with the highest frequency of exclusive sanitary napkin use included women with less than a high school education, married women, women classified as black or "other" in race, and women who were not on oral contraceptives. While we were able to establish a significant increase in the odds of douching among women who tested positive for bacterial vaginosis or trichomonas, we did not find a significant association between exclusive napkin use and testing negative for bacterial vaginosis or trichomonas.
Abstract:
HANES I detailed sample data were used to operationalize a definition of health in the absence of disease and to describe and compare the characteristics of the normal (healthy) group versus an abnormal (unhealthy) group. Parallel screening gave a 3.8 percent prevalence proportion of physical health, with a female:male ratio of 2:1 and younger ages in the healthy group. Statistically significant Mantel-Haenszel gender- and age-adjusted odds ratios (MHOR) for abnormality were estimated for non-migrants (1.53), skilled workers/unemployed (1.76), annual family incomes of less than $10,000 (1.56), having ever smoked (1.58), and having started smoking before 18 years of age (1.58). Significant MHOR for abnormals were also found for health-promoting measures: non-iodized salt use (1.94) and needed dental care (1.91); and for fair to poor perceived health (4.28), perceiving health problems (2.52), and low energy level (1.68). Significant protective effects for much to moderate recreational exercise (MHOR 0.42) and very active to moderate non-recreational activity (MHOR 0.49) were also obtained. Analysis of covariance additive models detected statistically significantly higher mean values for abnormals than normals for serum magnesium, hemoglobin, hematocrit, urinary creatinine, and systolic and diastolic blood pressures, and lower values for abnormals than normals for serum iron. No difference was detected for serum cholesterol. Significant non-additive joint effects were found for body mass index. The results suggest that positive physical health can be measured with cross-sectional survey data. Gender differentials and the associations between ecologic, socioeconomic, and hazardous risk factors, health-promoting activities, and physical health are in general agreement with published findings from studies of morbidity. Longitudinal prospective studies are suggested to establish the direction of the associations and to enhance present knowledge of health and its promoting factors.
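A minimal sketch of a Mantel-Haenszel pooled odds ratio across gender-age strata, of the kind used for the MHOR estimates above; the stratum tables are hypothetical, not HANES I counts.

```python
import numpy as np
from statsmodels.stats.contingency_tables import StratifiedTable

# One 2x2 table per gender-age stratum: rows are exposed / unexposed,
# columns are abnormal / normal.
strata = [
    np.array([[30, 70], [20, 90]]),   # e.g., younger women
    np.array([[45, 55], [35, 75]]),   # e.g., older women
    np.array([[25, 80], [15, 95]]),   # e.g., younger men
    np.array([[40, 60], [30, 80]]),   # e.g., older men
]

st = StratifiedTable(strata)
lo, hi = st.oddsratio_pooled_confint()
print(f"MH pooled OR = {st.oddsratio_pooled:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```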
Abstract:
Multiple Endocrine Neoplasia type 1 (MEN1) is a hereditary cancer syndrome characterized by tumors of the endocrine system. Tumors most commonly develop in the parathyroid glands, the pituitary gland, and the gastro-entero-pancreatic tract. MEN1 is a highly penetrant condition, and the age of onset is variable. Most patients are diagnosed in early adulthood; however, rare cases of MEN1 present in early childhood. Expert consensus opinion is that predictive genetic testing should be offered at age 5 years; however, there are no evidence-based studies that clearly establish that predictive genetic testing at this age is beneficial, since most symptoms do not present until later in life. This study was designed to explore attitudes about the most appropriate age for predictive genetic testing among individuals at risk of having a child with MEN1. Participants who had an MEN1 mutation were invited to complete a survey and were asked to invite their spouses to participate as well. The survey included several validated measures designed to assess participants' attitudes about predictive testing in minors. Fifty-eight affected participants and twenty-two spouses/partners completed the survey. Most participants felt that MEN1 genetic testing was appropriate in healthy minors. Younger participant age and greater knowledge of MEN1 genetics and inheritance predicted support for genetic testing at a younger age. Additionally, participants who saw more positive than negative general outcomes from genetic testing were more likely to favor genetic testing at younger ages. Overall, participants felt genetic testing should be offered at a younger age than for most adult-onset conditions, and most felt the appropriate time for testing was when a child could understand and participate in the testing process. Psychological concerns were the primary focus of participants who favored later ages for genetic testing, while medical benefits were more commonly cited in favor of younger ages. This exploratory study has implications for counseling patients whose children are at risk of developing MEN1 and illustrates issues that are important to patients and their spouses when considering testing in children.
Abstract:
The first manuscript, entitled "Time-Series Analysis as Input for Clinical Predictive Modeling: Modeling Cardiac Arrest in a Pediatric ICU," lays out the theoretical background for the project. There are several core concepts presented in this paper. First, traditional multivariate models (where each variable is represented by only one value) provide single point-in-time snapshots of patient status: they are incapable of characterizing deterioration. Since deterioration is consistently identified as a precursor to cardiac arrests, we maintain that the traditional multivariate paradigm is insufficient for predicting arrests. We identify time series analysis as a method capable of characterizing deterioration in an objective, mathematical fashion, and describe how to build a general foundation for predictive modeling using time series analysis results as latent variables. Building a solid foundation for any given modeling task involves addressing a number of issues during the design phase. These include selecting the proper candidate features on which to base the model, and selecting the most appropriate tool to measure them. We also identified several unique design issues that are introduced when time series data elements are added to the set of candidate features. One such issue is defining the duration and resolution of time series elements required to sufficiently characterize the time series phenomena being considered as candidate features for the predictive model. Once the duration and resolution are established, there must also be explicit mathematical or statistical operations that produce the time series analysis result to be used as a latent candidate feature. In synthesizing the comprehensive framework for building a predictive model based on time series data elements, we identified at least four classes of data that can be used in the model design. The first two classes are shared with traditional multivariate models: multivariate data and clinical latent features. Multivariate data is represented by the standard one value per variable paradigm and is widely employed in a host of clinical models and tools; it is often represented by a number present in a given cell of a table. Clinical latent features are derived, rather than directly measured, data elements that more accurately represent a particular clinical phenomenon than any of the directly measured data elements in isolation. The second two classes are unique to the time series data elements. The first of these is the raw data elements. These are represented by multiple values per variable, and constitute the measured observations that are typically available to end users when they review time series data; they are often represented as dots on a graph. The final class of data results from performing time series analysis. This class of data represents the fundamental concept on which our hypothesis is based. The specific statistical or mathematical operations are up to the modeler to determine, but we generally recommend that a variety of analyses be performed in order to maximize the likelihood that a representation of the time series data elements is produced that is able to distinguish between two or more classes of outcomes.
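A minimal sketch of the core idea in the first manuscript: a raw time series element is resampled to a chosen duration and resolution, and a single derived value (here, a least-squares trend) becomes the latent candidate feature. The window length, resolution, and choice of a linear trend are illustrative design decisions, not prescriptions from the manuscript.

```python
import numpy as np

def trend_feature(times_min, values, duration_min=30, step_min=5):
    """Least-squares slope over the trailing `duration_min` minutes,
    after resampling the series onto a regular `step_min`-minute grid."""
    t_end = times_min[-1]
    grid = np.arange(t_end - duration_min, t_end + step_min, step_min)
    resampled = np.interp(grid, times_min, values)
    slope, _intercept = np.polyfit(grid, resampled, deg=1)
    return slope    # units per minute; a negative slope suggests deterioration

# Hypothetical systolic blood pressure observations (minute, mmHg).
t = np.array([0, 4, 9, 15, 21, 26, 30], dtype=float)
sbp = np.array([112, 110, 108, 104, 101, 97, 92], dtype=float)
print(f"SBP trend over the last 30 min: {trend_feature(t, sbp):.2f} mmHg/min")
```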
The second manuscript, entitled "Building Clinical Prediction Models Using Time Series Data: Modeling Cardiac Arrest in a Pediatric ICU," provides a detailed description, start to finish, of the methods required to prepare the data, build, and validate a predictive model that uses the time series data elements determined in the first paper. One of the fundamental tenets of the second paper is that manual implementations of time series based models are infeasible due to the relatively large number of data elements and the complexity of preprocessing that must occur before data can be presented to the model. Each of the seventeen steps is analyzed from the perspective of how it may be automated, when necessary. We identify the general objectives and available strategies of each of the steps, and we present our rationale for choosing a specific strategy for each step in the case of predicting cardiac arrest in a pediatric intensive care unit. Another issue brought to light by the second paper is that the individual steps required to use time series data for predictive modeling are more numerous and more complex than those used for modeling with traditional multivariate data. Even after complexities attributable to the design phase (addressed in our first paper) have been accounted for, the management and manipulation of the time series elements (the preprocessing steps in particular) are issues that are not present in a traditional multivariate modeling paradigm. In our methods, we present the issues that arise from the time series data elements: defining a reference time; imputing and reducing time series data in order to conform to a predefined structure that was specified during the design phase; and normalizing variable families rather than individual variable instances.
The final manuscript, entitled "Using Time-Series Analysis to Predict Cardiac Arrest in a Pediatric Intensive Care Unit," presents the results that were obtained by applying the theoretical construct and its associated methods (detailed in the first two papers) to the case of cardiac arrest prediction in a pediatric intensive care unit. Our results showed that utilizing the trend analysis from the time series data elements reduced the number of classification errors by 73%. The area under the Receiver Operating Characteristic curve increased from a baseline of 87% to 98% when the trend analysis was included. In addition to the performance measures, we were also able to demonstrate that adding raw time series data elements without their associated trend analyses improved classification accuracy as compared to the baseline multivariate model, but diminished classification accuracy as compared to when just the trend analysis features were added (i.e., without adding the raw time series data elements). We believe this phenomenon was largely attributable to overfitting, which is known to increase as the ratio of candidate features to class examples rises. Furthermore, although we employed several feature reduction strategies to counteract the overfitting problem, they failed to improve the performance beyond that which was achieved by exclusion of the raw time series elements. Finally, our data demonstrated that pulse oximetry and systolic blood pressure readings tend to start diminishing about 10-20 minutes before an arrest, whereas heart rates tend to diminish rapidly less than 5 minutes before an arrest.
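An illustrative comparison, on synthetic data rather than the PICU cohort, of the modeling contrast reported in the third manuscript: a baseline multivariate (one value per variable) model versus the same model augmented with a trend feature, scored by area under the ROC curve. The injected signal in the trend feature is made up, so the printed AUCs only demonstrate the mechanics, not the published results.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 2000
arrest = rng.random(n) < 0.1
hr_now = rng.normal(120, 15, n)                            # snapshot heart rate
sbp_now = rng.normal(95, 10, n)                            # snapshot systolic BP
sbp_trend = rng.normal(np.where(arrest, -0.6, 0.0), 0.3)   # slope feature (synthetic signal)

X_base = np.column_stack([hr_now, sbp_now])
X_aug = np.column_stack([hr_now, sbp_now, sbp_trend])

for name, X in [("baseline multivariate", X_base), ("with trend feature", X_aug)]:
    Xtr, Xte, ytr, yte = train_test_split(X, arrest, test_size=0.3, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
    auc = roc_auc_score(yte, model.predict_proba(Xte)[:, 1])
    print(f"{name}: AUC = {auc:.2f}")
```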