801 results for exclusion criteria


Relevance: 60.00%

Abstract:

Introduction: Although it seems plausible that sports performance relies on high-acuity foveal vision, it has been shown empirically that myopic blur (up to +2 diopters) does not harm performance in sport tasks that require foveal information pick-up, such as golf putting (Bulson, Ciuffreda, & Hung, 2008). How myopic blur affects peripheral performance is as yet unknown. With reduced foveal vision, less attention might be needed for processing visual cues foveally, so that peripheral cues are processed better and performance improves; this hypothesis was tested in the current experiment. Methods: 18 sport science students with self-reported myopia volunteered as participants, all of them regularly wearing contact lenses. Exclusion criteria comprised visual correction other than myopic, correction of astigmatism, and use of contact lenses from outside the Swiss delivery area. For each participant, three pairs of additional contact lenses (besides their regular lenses, used in the "plano" condition) were manufactured with an individual overcorrection to a retinal defocus of +1 to +3 diopters (referred to as the "+1.00 D", "+2.00 D", and "+3.00 D" conditions, respectively). Gaze data were acquired while participants performed a multiple object tracking (MOT) task that required tracking 4 of 10 moving stimuli. In addition, in 66.7% of all trials, one of the 4 targets suddenly stopped during the motion phase for a period of 0.5 s. Stimuli moved in front of a picture of a sports hall to allow for foveal processing. Due to the directional hypotheses, the level of significance for one-tailed tests on differences was set at α = .05, and a posteriori effect sizes were computed as partial eta squared (ηp²). Results: Due to problems with the gaze-data collection, 3 participants had to be excluded from further analyses. The expectation of a centroid strategy was confirmed, as gaze was closer to the centroid than to the targets (all p < .01). In comparison to the plano baseline, participants more often recalled all 4 targets under defocus conditions, F(1,14) = 26.13, p < .01, ηp² = .65. The three defocus conditions differed significantly, F(2,28) = 2.56, p = .05, ηp² = .16, with higher accuracy as a function of increasing defocus and significant contrasts between conditions +1.00 D and +2.00 D (p = .03) and between +1.00 D and +3.00 D (p = .03). For stop trials, no significant differences were found between the plano baseline and the defocus conditions, F(1,14) = .19, p = .67, ηp² = .01, or between the three defocus conditions, F(2,28) = 1.09, p = .18, ηp² = .07. Participants reacted faster in "4 correct+button" trials under defocus than under plano-baseline conditions, F(1,14) = 10.77, p < .01, ηp² = .44. The defocus conditions differed significantly, F(2,28) = 6.16, p < .01, ηp² = .31, with shorter response times as a function of increasing defocus and significant contrasts between +1.00 D and +2.00 D (p = .01) and between +1.00 D and +3.00 D (p < .01). Discussion: The results show that gaze behaviour in MOT is not affected to a relevant degree by a visual overcorrection of up to +3 diopters. Hence, it can be assumed that the present study did indeed probe peripheral event detection. This overcorrection does not harm the capability to track objects peripherally; moreover, if an event has to be detected peripherally, neither response accuracy nor response time is negatively affected.
The findings are potentially relevant to all sport situations in which peripheral vision is required, a claim that should now be tested in applied studies. References: Bulson, R. C., Ciuffreda, K. J., & Hung, G. K. (2008). The effect of retinal defocus on golf putting. Ophthalmic and Physiological Optics, 28, 334-344.
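As a reading aid, the reported partial eta squared values can be reproduced from each F statistic and its degrees of freedom with the standard identity below; for the first reported effect, F(1,14) = 26.13, it returns the stated .65, and it likewise recovers the others (e.g., F(2,28) = 6.16 gives 12.32/40.32 ≈ .31).

```latex
\eta_p^2 \;=\; \frac{F \cdot \mathit{df}_{\text{effect}}}{F \cdot \mathit{df}_{\text{effect}} + \mathit{df}_{\text{error}}}
\qquad\Longrightarrow\qquad
\frac{26.13 \cdot 1}{26.13 \cdot 1 + 14} \approx .65
```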

Relevance: 60.00%

Abstract:

BACKGROUND No data are available on the long-term performance of ultrathin strut biodegradable polymer sirolimus-eluting stents (BP-SES). Here we report 2-year clinical outcomes of the BIOSCIENCE (Ultrathin Strut Biodegradable Polymer Sirolimus-Eluting Stent Versus Durable Polymer Everolimus-Eluting Stent for Percutaneous Coronary Revascularisation) trial, which compared BP-SES with durable polymer everolimus-eluting stents (DP-EES) in patients undergoing percutaneous coronary intervention. METHODS AND RESULTS A total of 2119 patients with minimal exclusion criteria were assigned to treatment with BP-SES (n=1063) or DP-EES (n=1056). Follow-up at 2 years was available for 2048 patients (97%). The primary end point was target-lesion failure, a composite of cardiac death, target-vessel myocardial infarction, or clinically indicated target-lesion revascularization. At 2 years, target-lesion failure occurred in 107 patients (10.5%) in the BP-SES arm and 107 patients (10.4%) in the DP-EES arm (risk ratio [RR] 1.00, 95% CI 0.77-1.31, P=0.979). There were no significant differences between BP-SES and DP-EES with respect to cardiac death (RR 1.01, 95% CI 0.62-1.63, P=0.984), target-vessel myocardial infarction (RR 0.91, 95% CI 0.60-1.39, P=0.669), target-lesion revascularization (RR 1.17, 95% CI 0.81-1.71, P=0.403), and definite stent thrombosis (RR 1.38, 95% CI 0.56-3.44, P=0.485). There were 2 cases (0.2%) of definite very late stent thrombosis in the BP-SES arm and 4 cases (0.4%) in the DP-EES arm (P=0.423). In the prespecified subgroup of patients with ST-segment elevation myocardial infarction, BP-SES was associated with a lower risk of target-lesion failure compared with DP-EES (RR 0.48, 95% CI 0.23-0.99, P=0.043, P for interaction=0.026). CONCLUSIONS Comparable safety and efficacy profiles of BP-SES and DP-EES were maintained throughout 2 years of follow-up. CLINICAL TRIAL REGISTRATION URL: https://www.clinicaltrials.gov. Unique identifier: NCT01443104.
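A minimal sketch of the risk-ratio arithmetic behind results of this kind, assuming the randomized counts (1063 vs 1056) as denominators for illustration, since the abstract does not give the exact numbers at risk at 2 years:

```python
import math

def risk_ratio(events_a, n_a, events_b, n_b, z=1.96):
    """Risk ratio with a Wald 95% CI computed on the log scale."""
    rr = (events_a / n_a) / (events_b / n_b)
    se = math.sqrt(1/events_a - 1/n_a + 1/events_b - 1/n_b)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Target-lesion failure at 2 years: 107 events per arm, denominators assumed
# equal to the randomized counts (BP-SES n=1063, DP-EES n=1056).
print(risk_ratio(107, 1063, 107, 1056))  # ~ (0.99, 0.77, 1.28)
```

The output is close to the reported RR 1.00 (95% CI 0.77-1.31); the small gap reflects the assumed denominators.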

Relevance: 60.00%

Abstract:

Background: The efficacy of cognitive behavioral therapy (CBT) for the treatment of depressive disorders has been demonstrated in many randomized controlled trials (RCTs). This study investigated whether similar effects can be expected for CBT under routine care conditions when the patients are comparable to those examined in RCTs. Method: N=574 CBT patients from an outpatient clinic were stepwise matched to the patients undergoing CBT in the National Institute of Mental Health Treatment of Depression Collaborative Research Program (TDCRP). First, the exclusion criteria of the RCT were applied to the naturalistic sample of the outpatient clinic. Second, propensity score matching (PSM) was used to adjust the remaining naturalistic sample on the basis of baseline covariate distributions. The matched samples were then compared with regard to treatment effects using effect sizes, the average treatment effect on the treated (ATT), and recovery rates. Results: CBT in the adjusted naturalistic subsample was as effective as in the RCT. However, treatments lasted significantly longer under routine care conditions. Limitations: The samples included only a limited number of common predictor variables and stemmed from different countries. There might be additional covariates that could further improve the matching between the samples. Conclusions: CBT for depression in clinical practice may be as effective as manual-based treatments in RCTs when applied to comparable patients. The fact that similar effects were reached under routine conditions with more sessions, however, points to the potential to optimize treatments in clinical practice with respect to their efficiency.
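A minimal sketch of the two-step adjustment described above (RCT-style exclusion, then propensity score matching), using hypothetical covariates and 1:1 nearest-neighbour matching; the study's actual covariate set and matching algorithm are not specified in the abstract:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Hypothetical baseline covariates (e.g., age, symptom severity) for a
# naturalistic sample (group=0) and an RCT sample (group=1).
X = rng.normal(size=(600, 2))
group = rng.integers(0, 2, size=600)

# 1. Propensity score: modeled probability of RCT membership given covariates.
ps = LogisticRegression().fit(X, group).predict_proba(X)[:, 1]

# 2. 1:1 nearest-neighbour matching on the propensity score.
rct_idx = np.flatnonzero(group == 1)
nat_idx = np.flatnonzero(group == 0)
nn = NearestNeighbors(n_neighbors=1).fit(ps[nat_idx].reshape(-1, 1))
_, match = nn.kneighbors(ps[rct_idx].reshape(-1, 1))
matched_nat = nat_idx[match.ravel()]
# matched_nat[i] is the naturalistic patient matched to the i-th RCT patient;
# outcomes of the matched samples can then be compared (effect sizes, ATT).
```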

Relevance: 60.00%

Abstract:

INTRODUCTION Optic neuritis leads to degeneration of retinal ganglion cells, whose axons form the optic nerve. The standard treatment is methylprednisolone pulse therapy. This treatment slightly shortens the time to recovery but does not prevent neurodegeneration and persistent visual impairment. In a phase II trial performed in preparation for this study, we showed that erythropoietin protects global retinal nerve fibre layer thickness (RNFLT-G) in acute optic neuritis; however, the preparatory trial was not powered to show effects on visual function. METHODS AND ANALYSIS Treatment of Optic Neuritis with Erythropoietin (TONE) is a national, randomised, double-blind, placebo-controlled, multicentre trial with two parallel arms. The primary objective is to determine the efficacy of erythropoietin compared with placebo, given as an add-on to methylprednisolone, as assessed by measurements of RNFLT-G and low-contrast visual acuity in the affected eye 6 months after randomisation. Inclusion criteria are a first episode of optic neuritis with visual acuity decreased to ≤0.5 (decimal system) and an onset of symptoms within 10 days prior to inclusion. The most important exclusion criteria are a history of optic neuritis or multiple sclerosis; any ocular disease (affected or non-affected eye); significant hyperopia, myopia or astigmatism; elevated blood pressure; thrombotic events; or malignancy. After randomisation, patients receive either 33 000 international units of human recombinant erythropoietin or placebo (0.9% saline) intravenously for 3 consecutive days. With an estimated power of 80%, the calculated sample size is 100 patients. The trial started in September 2014 with a planned recruitment period of 30 months. ETHICS AND DISSEMINATION TONE has been approved by the Central Ethics Commission in Freiburg (194/14) and the German Federal Institute for Drugs and Medical Devices (61-3910-4039831). It complies with the Declaration of Helsinki, local laws and ICH-GCP. TRIAL REGISTRATION NUMBER NCT01962571.
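For orientation only: two-arm sample sizes with 80% power at two-sided α = .05 are conventionally obtained from the standard formula below. The effect size Δ and standard deviation σ that TONE actually assumed are not stated in the abstract, so the inputs are placeholders, not the trial's calculation.

```latex
n_{\text{per group}} \;=\; \frac{2\,\sigma^{2}\,\bigl(z_{1-\alpha/2} + z_{1-\beta}\bigr)^{2}}{\Delta^{2}}
\;=\; \frac{2\,\sigma^{2}\,(1.96 + 0.84)^{2}}{\Delta^{2}}
```

Here Δ would be the minimally relevant between-group difference in the primary outcome (RNFLT-G or low-contrast visual acuity) and σ its standard deviation.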

Relevance: 60.00%

Abstract:

BACKGROUND CONTEXT Several randomized controlled trials (RCTs) have compared patient outcomes of anterior cervical interbody fusion (AIF) with those of total disc arthroplasty (TDA). Because RCTs have known limitations with regard to external validity, the comparative effectiveness of the two therapies in daily practice remains unknown. PURPOSE This study aimed to compare patient-reported outcomes after TDA versus AIF based on data from an international spine registry. STUDY DESIGN AND SETTING A retrospective analysis of registry data was carried out. PATIENT SAMPLE Inclusion criteria were degenerative disc disease or disc herniation of the cervical spine treated by single-level TDA or AIF, no previous surgery, and a Core Outcome Measures Index (COMI) completed at baseline and at least 3 months' follow-up. Overall, 987 patients were identified. OUTCOME MEASURES Neck and arm pain relief and COMI score improvement were the outcome measures. METHODS Three separate analyses were performed to compare the surgical outcomes of TDA and AIF: (1) mimicking an RCT setting, a 1:1 matched analysis with admission criteria typical of published RCTs was carried out in 739 patients; (2) an analysis of 248 patients outside the classic RCT spectrum, that is, with one or more typical RCT exclusion criteria; and (3) a subgroup analysis of all patients with follow-up longer than 2 years (n=149). RESULTS Matching resulted in 190 pairs with an average follow-up of 17 months and no residual significant differences in any patient characteristics. Small but statistically significant differences in outcome, potentially clinically relevant, were observed in favor of TDA. Subgroup analyses of atypical patients and of patients with longer-term follow-up showed no significant differences in outcome between the treatments. CONCLUSIONS The results of this observational study were in accordance with those of the published RCTs, suggesting substantial pain reduction after both AIF and TDA, with a slightly greater benefit after arthroplasty. The analysis of atypical patients suggests that, in patients outside the spectrum of clinical trials, both surgical interventions work to a similar extent as shown for the matched cohort. Over the longer term as well, both therapies resulted in similar benefits to patients.

Relevance: 60.00%

Abstract:

Coronary artery bypass graft (CABG) surgery is among the most common operations performed in the United States and accounts for more resources expended in cardiovascular medicine than any other single procedure. CABG surgery patients initially recover in the Cardiovascular Intensive Care Unit (CVICU). The post-procedure CVICU length of stay (LOS) goal is two days or less. A longer ICU LOS is associated with a prolonged hospital LOS, poor health outcomes, greater use of limited resources, and increased medical costs. Research has shown that experienced clinicians can predict LOS no better than chance, and current CABG surgery LOS risk models differ greatly in generalizability and ease of use in the clinical setting. A predictive model that identified modifiable pre- and intra-operative risk factors for a CVICU LOS greater than two days could have major public health implications, as modification of these factors could decrease CVICU LOS and potentially minimize morbidity and mortality, optimize the use of limited health care resources, and decrease medical costs. The primary aim of this study was to identify modifiable pre- and intra-operative predictors of a CVICU LOS greater than two days for CABG surgery patients with cardiopulmonary bypass (CPB). A secondary aim was to build a probability equation for a CVICU LOS greater than two days. Data were extracted from 416 medical records of CABG surgery patients with CPB, 50 to 80 years of age, recovered in the CVICU of a large teaching and referral hospital in southeastern Texas during the calendar year 2004 and the first quarter of 2005. Exclusion criteria included Diagnosis Related Group (DRG) 106, CABG surgery without CPB, CABG surgery with other procedures, and operative deaths. The data were analyzed using multivariate logistic regression for alpha=0.05, power=0.80, and correlation=0.26. This study found age, history of peripheral arterial disease, and total operative time of four hours or more to be independent predictors of a CVICU LOS greater than two days. The probability of a CVICU LOS greater than two days can be calculated from the fitted log-odds: logit(p) = -2.872941 + 0.0323081 (age in years) + 0.8177223 (history of peripheral arterial disease) + 0.70379 (operative time ≥ 4 hours).
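A minimal sketch converting the reported log-odds into a probability via the logistic function; the 0/1 coding of the two indicator terms is an assumption for illustration (the dissertation itself would define the exact coding):

```python
import math

def p_long_stay(age_years, pad, op_time_ge_4h):
    """Probability of CVICU LOS > 2 days from the reported logistic model.
    pad and op_time_ge_4h are assumed 0/1 indicators."""
    logit = (-2.872941 + 0.0323081 * age_years
             + 0.8177223 * pad + 0.70379 * op_time_ge_4h)
    return 1 / (1 + math.exp(-logit))

# e.g., a 65-year-old with peripheral arterial disease and a >= 4 h operation:
print(round(p_long_stay(65, 1, 1), 2))  # ~0.68
```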

Relevance: 60.00%

Abstract:

Background. Risk factors underlying the development of Barrett's esophagus (BE) are poorly understood. Recent studies have examined the association between elevated body mass index (BMI) and BE, with conflicting results. A systematic review of the literature was performed to study this association. Methods. Cross-sectional, case-control and cohort studies published through April 2007 and meeting strict inclusion and exclusion criteria were included. A thorough data abstraction, including that of reported crude or adjusted odds ratios or mean BMI, was performed. Crude odds ratios were estimated from available information in 3 studies. Results. Of 630 publications identified by our search terms, 59 were reviewed in detail and 12 were included in the final analyses. Three studies showed a statistically significant association between obesity and BE (30-32), while 2 studies found a statistically significant association between overweight and BE (31, 32). Two studies that reported BMI as a continuous variable found BMI in cases to be significantly higher than in the comparison group (30, 32). The other studies failed to show a significant association between elevated BMI and BE. Conclusions. The data regarding the association between elevated BMI and BE are conflicting. It is important to identify other risk factors that, in combination with elevated BMI, may lead to BE. Further studies are needed to evaluate whether the presence of reflux symptoms, or any particular pattern of obesity, is independently associated with BE. Keywords: Barrett's esophagus, obesity, body mass index, gastroesophageal reflux disease, meta-analysis.
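For the step of estimating crude odds ratios "from available information": given the four cell counts of a 2×2 exposure-by-status table, the crude OR is a one-line computation. The counts below are hypothetical, not taken from any of the reviewed studies.

```python
def crude_odds_ratio(exp_cases, unexp_cases, exp_controls, unexp_controls):
    """Crude odds ratio from a 2x2 table: (a*d) / (b*c)."""
    return (exp_cases * unexp_controls) / (exp_controls * unexp_cases)

# Hypothetical counts: 40/60 BE cases with/without elevated BMI,
# 30/70 controls with/without elevated BMI.
print(crude_odds_ratio(40, 60, 30, 70))  # ~1.56
```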

Relevance: 60.00%

Abstract:

Introduction. Traveler's diarrhea is an important public health problem in travelers from industrialized nations to the developing world, with a prevalence of between 13% and 60%. Although studies exist on the etiology of traveler's diarrhea, they have not described the etiology across different regions of the world. The objective of this study was to identify the frequency of specific etiologies of traveler's diarrhea by geographic area of the world. It was also examined whether there are regional differences in the isolation rates of ETEC and conventional pathogens and variation, if any, in the frequency of these pathogens in different regions over time. Material and methods. This is a systematic review of studies on the etiology of traveler's diarrhea by geographic region. The search databases used were Medline PubMed and Medline Ovid, and the key words used for the search were etiology of traveler's diarrhea, travelers' diarrhea, and acute diarrhea of travelers. Articles were selected according to the inclusion and exclusion criteria, and the relevant data were extracted and statistically analyzed. Results. Of 110 studies from 1970 to 2004, 52 were included and 58 were excluded from the review. The 52 included studies were grouped according to the geographic regions of interest: Latin America (25 studies), Asia (7 studies), Africa (9 studies), and others/mixed (11 studies). The most common pathogen overall was ETEC (29.10%); other common pathogens were EAEC (14.42%), norovirus (10.95%), EPEC (6%) and rotavirus (5.23%). ETEC and Shigella show a decreasing trend in Latin America & the Caribbean but an increasing trend in Asia. Conclusion. ETEC is the single most common cause of travelers' diarrhea in the world. Potent vaccines against ETEC are required to prevent travelers' diarrhea and thus reduce the attack rate. PCR-based studies are also required to identify the causes of pathogen-negative diarrhea.

Relevance: 60.00%

Abstract:

Background. Several studies have proposed a link between type 2 diabetes mellitus (DM2) and hepatitis C virus (HCV) infection, with conflicting results. Since DM2 and HCV both have high prevalence, establishing a link between the two may guide further studies aimed at DM2 prevention. A systematic review was conducted to estimate the magnitude and direction of the association between DM2 and HCV. Temporality was assessed from cohort studies, and from case-control studies where such information was available. Methods. MEDLINE searches were conducted for studies that provided risk estimates and fulfilled criteria regarding the definition of exposure (HCV) and outcome (DM2). HCV was defined in terms of method of diagnosis, laboratory technique and method of data collection; DM2 was defined in terms of the classification [World Health Organization (WHO) and American Diabetes Association (ADA)] used for diagnosis, laboratory technique and method of data collection. Standardized searches and data abstraction for the construction of tables were performed, using a template designed by Dr. David Ramsey. Unadjusted or adjusted measures of association for individual studies were obtained or calculated from the full text of the studies. Results. Forty-six of one hundred and nine potentially eligible articles met the inclusion and exclusion criteria and were classified separately by study design as cross-sectional (twenty-four), case-control (fifteen) or cohort studies (seven). The cohort studies showed a threefold higher occurrence of DM2 (confidence interval 1.66-6.29) in individuals with HCV compared with those unexposed to HCV, and the cross-sectional studies had a summary odds ratio of 2.53 (1.96, 3.25). In case-control studies, the summary odds ratio for studies done in subjects with DM2 was 3.61 (1.93, 6.74); in subjects with HCV, it was 2.30 (1.56, 3.38); and all fifteen studies together yielded an odds ratio of 2.60 (1.82, 3.73). Conclusion. The above results support the hypothesis that there is an association between DM2 and HCV. The temporal relationship evident from cohort studies and the proposed pathogenic mechanisms also suggest that HCV predisposes patients to the development of DM2. Further cohort or prospective studies are needed, however, to determine whether treatment of HCV infection prevents the development of DM2.
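A minimal sketch of how summary odds ratios of this kind are commonly pooled: inverse-variance weighting of log odds ratios, with standard errors recovered from the reported confidence intervals. The inputs are hypothetical, and the review's actual pooling model (fixed vs random effects) is not stated in the abstract.

```python
import math

def pool_odds_ratios(ors, ci_los, ci_his, z=1.96):
    """Fixed-effect inverse-variance pooling of ORs reported with CIs."""
    weights, weighted_logs = [], []
    for or_, lo, hi in zip(ors, ci_los, ci_his):
        se = (math.log(hi) - math.log(lo)) / (2 * z)  # SE from CI width
        w = 1 / se**2
        weights.append(w)
        weighted_logs.append(w * math.log(or_))
    log_pooled = sum(weighted_logs) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    return (math.exp(log_pooled),
            math.exp(log_pooled - z * se_pooled),
            math.exp(log_pooled + z * se_pooled))

# Three hypothetical case-control ORs with their CIs:
print(pool_odds_ratios([2.1, 3.0, 2.6], [1.2, 1.5, 1.1], [3.7, 6.0, 6.1]))
```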

Relevance: 60.00%

Abstract:

Context. Despite the rapid growth of disease management programs, there are still questions about their efficacy and effectiveness in improving patient outcomes and their ability to reduce the costs associated with chronic disease. Objective. To determine the effectiveness of disease management programs in improving the results of HbA1c tests, lipid profiles and systolic blood pressure (SBP) readings among diabetics. These three quantitative measures are widely accepted methods of determining the quality of a patient's diabetes management and the potential for future complications. Data Sources. MEDLINE and CINAHL were searched from 1950 to June 2008 using MeSH terms designed to capture all relevant studies. Scopus pearling and hand searching were also done. Only English-language articles were selected. Study Selection. Titles and abstracts of the 2347 articles were screened against predetermined inclusion and exclusion criteria, yielding 217 articles for full screening. After full-article screening, 29 studies were selected for inclusion in the review. Data Extraction. From the selected studies, data extraction included sample size, mean change from baseline, and standard deviation for each control and experimental arm. Results. The pooled results show a mean HbA1c reduction of 0.64% (95% CI, -0.83 to -0.44), a mean SBP reduction of 7.39 mmHg (95% CI, -11.58 to -3.2), a mean total cholesterol reduction of 5.74 mg/dL (95% CI, -10.01 to -1.43), and a mean LDL cholesterol reduction of 3.74 mg/dL (95% CI, -8.34 to 0.87). The results for HbA1c, SBP and total cholesterol were statistically significant, while the results for LDL cholesterol were not. Conclusions. The findings suggest that disease management programs utilizing five hallmarks of care can be effective at improving intermediate outcomes among diabetics. However, given the significant heterogeneity present, there may be fundamental differences with respect to study-specific interventions and populations that render them inappropriate for meta-analysis.
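A minimal sketch of the pooling and heterogeneity checks behind such results: inverse-variance pooling of per-study mean differences, plus Cochran's Q and I², the usual quantities behind a "significant heterogeneity" statement. The per-study values below are hypothetical.

```python
import math

def pool_mean_differences(diffs, ses, z=1.96):
    """Inverse-variance pooled mean difference with 95% CI, Q, and I^2."""
    w = [1 / se**2 for se in ses]
    pooled = sum(wi * d for wi, d in zip(w, diffs)) / sum(w)
    q = sum(wi * (d - pooled)**2 for wi, d in zip(w, diffs))  # Cochran's Q
    df = len(diffs) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0      # I^2 in percent
    se_pooled = math.sqrt(1 / sum(w))
    return pooled, pooled - z * se_pooled, pooled + z * se_pooled, q, i2

# Hypothetical per-study HbA1c mean changes (%) and standard errors:
print(pool_mean_differences([-0.5, -0.9, -0.4], [0.15, 0.20, 0.10]))
```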

Relevance: 60.00%

Abstract:

Objective. To determine whether the use of a triage team would reduce the average time-in-department in a pediatric emergency department by 25%. Methods. A triage team consisting of a physician, a nurse, and a nurse's assistant initiated work-ups and saw patients who required minimal laboratory work-up and were likely to be discharged. Study days were randomized. The inclusion criteria were all children seen in the emergency center between 6 pm and 2 am, Monday through Friday. The exclusion criteria were resuscitations, inpatient-to-inpatient transfers, patients who left without being seen or left against medical advice, and any child seen outside 6 pm-2 am Monday through Friday, including weekends. A Pearson chi-square test was used to check the two groups for heterogeneity. For the time-in-department analysis, we performed two-sided Mann-Whitney U tests at α = 0.05, looking for differences in time-in-department by acuity level, disposition, and acuity level stratified by disposition. Results. Among urgent and non-urgent patients, we found a statistically significant decrease in time-in-department. Urgent patients seen on triage team days had a time-in-department 51 minutes shorter than that of patients seen on non-triage-team days (p=0.007), a 14% decrease. Non-urgent patients seen on triage team days had a time-in-department 24 minutes shorter than that of non-urgent patients seen on non-triage-team days (p=0.009). From the disposition perspective, discharged patients seen on triage team days had a time-in-department 28 minutes shorter than that of those seen on non-triage-team days (p=0.012). Conclusion. Overall, there was a trend toward a decreased time-in-department of 19 minutes (a 5.9% decrease) during triage team times. There was a statistically significant decrease in time-in-department among urgent patients of 51 minutes (a 13.9% decrease) and among discharged patients of 28 minutes (an 8.4% decrease). Urgent care patients make up nearly a quarter of the emergency patient population, and decreasing their time-in-department would likely make a significant impact on overall emergency flow.
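A minimal sketch of the two-sided Mann-Whitney U comparison described above, with hypothetical time-in-department samples (the study's raw data are not in the abstract):

```python
from scipy.stats import mannwhitneyu

# Hypothetical time-in-department values (minutes) on triage-team days
# versus non-triage-team days.
triage_days = [180, 210, 195, 250, 160, 205, 230, 175]
control_days = [240, 260, 225, 300, 210, 255, 280, 235]

stat, p = mannwhitneyu(triage_days, control_days, alternative='two-sided')
print(stat, p)  # reject at alpha = 0.05 if p < 0.05
```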

Relevance: 60.00%

Abstract:

Background. Cardiovascular disease (CVD) is of striking public health significance owing to its high prevalence and mortality and to the huge economic burden it imposes all over the world, especially in industrialized countries. The major risk factors for CVD have been the targets of population-wide prevention in the United States. Economic evaluations provide structured information regarding the efficiency of resource utilization, which can inform resource-allocation decisions. The main purpose of this review is to investigate the pattern of study design in economic evaluations of interventions for CVD. Methods. Primary journal articles published during 2003-2008 were systematically retrieved via relevant keywords from Medline, the NHS Economic Evaluation Database (NHS EED) and EBSCO Academic Search Complete. Only full economic evaluations of narrowly defined CVD interventions were included in this review. The methodological data of interest were extracted from the eligible articles and reorganized in a Microsoft Access database. Chi-square tests in SPSS were used to analyze the associations between pairs of categorical variables. Results. One hundred and twenty eligible articles were reviewed after two steps of literature selection with explicit inclusion and exclusion criteria. Descriptive statistics are reported for the evaluated interventions, outcome measures, unit costing and cost reports. The chi-square test of the association between the prevention level of the intervention and the category of time horizon showed no statistical significance. The chi-square test showed that sponsor type was significantly associated with whether the new or the standard intervention was concluded to be more cost-effective. Conclusions. Tertiary prevention and medication interventions are the major interests of economic evaluators. The majority of the evaluations were claimed from either a provider's or a payer's perspective. Almost all evaluations adopted a gross-costing strategy for unit cost data rather than micro-costing. The EQ-5D is the most commonly used instrument for subjective outcome measurement. More than half of the evaluations used decision-analytic modeling techniques. A lack of consistency in study design standards appears in several aspects of the published evaluations. The prevention level of an intervention does not appear to influence whether evaluators design an evaluation over a lifetime horizon. Published evaluations sponsored by industry are more likely to conclude that the new intervention is more cost-effective than the standard intervention.
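A minimal sketch of a chi-square test of association between sponsor type and the cost-effectiveness conclusion, as described above; the contingency table is hypothetical, since the abstract reports only that the association was significant.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = sponsor type, columns = conclusion.
#                 new favored   standard favored
table = [[40, 10],   # industry-sponsored
         [35, 35]]   # non-industry

chi2, p, dof, expected = chi2_contingency(table)
print(chi2, p)  # small p suggests conclusion depends on sponsor type
```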

Relevance: 60.00%

Abstract:

African Americans make up 12.3% of the U.S. population but accounted for over half of new HIV cases and 39% of AIDS cases in 2003 (Centers for Disease Control and Prevention [CDC], 2003). African American women in particular accounted for 64% of these HIV cases and 60% of the AIDS cases (Leigh & Huff, 2003). This study contributed to knowledge about the disclosure process of women living with HIV/AIDS by documenting the relationship between social support and disclosure in the African American HIV/AIDS population. The study aims were to: (1) discuss the participants' self-concept of support; (2) describe the common characteristics of the disclosure process; and (3) evaluate the common characteristics of support sought in a potential disclosure source. An ethnographic qualitative methodology was used to elicit participant narratives of HIV disclosure and social support. The researcher used a key-informant interview methodology building on existing social and organizational relationships (Krueger, 1994) to gain access to the population. Semi-structured interviews are a widely used and accepted qualitative research method for hard-to-reach populations and sensitive topics. Ten participants completed a 45- to 60-minute, one-on-one semi-structured interview covering social support and disclosure variables. Inclusion and exclusion criteria were: (1) self-identified as a person living with HIV/AIDS; (2) African American; (3) female; (4) age 18-64 years; and (5) residence in Houston or surrounding counties. Themes generated from the interviews were (1) nondisclosure, (2) experiences with disclosure, (3) timing, (4) disclosure sources, and (5) coping. The themes suggest that African American women living with HIV/AIDS come from different lifestyles but share similar experiences. The women used different strategies, such as deciphering whom to trust and determining how much information to divulge, in order to protect themselves or others. Although the sample for this study was small, the results inform us about the experiences each woman goes through with respect to social support and disclosure, and show that each woman has to customize her response to the type of support she is receiving and her personal attitude about her disease.

Relevance: 60.00%

Abstract:

Studies suggest that depression affects glucose metabolism and is therefore a risk factor for insulin resistance. The association between depression and insulin resistance has been investigated in a number of studies, but there is no agreement on the results. The objective of this study was to survey the epidemiological studies, identify those that measured the association of depression (as exposure) with insulin resistance (as outcome), and perform a systematic review to assess the reliability and strength of the association. For high-quality reporting and assessment, this systematic review used the procedures, guidelines and recommendations for reviews in health care outlined by the Centre for Reviews and Dissemination, along with recommendations from the STROBE group (Strengthening the Reporting of Observational Studies in Epidemiology). Ovid MEDLINE, 1996 to April Week 1 2010, was used to identify the relevant epidemiological studies. To identify the most relevant set of articles for this systematic review, a set of inclusion and exclusion criteria was applied, and six studies that met the specific criteria were selected. Key information from the identified studies was tabulated, and the methodological quality, internal and external validity, and strength of evidence of the selected studies were assessed. The tabulated data indicate that the reviewed studies either did not apply a case definition for insulin resistance in their investigation or did not state a specific value for the index used to define insulin resistance. The quality assessment indicates that, to assess the association between insulin resistance and depression, specifying a case definition for insulin resistance is important. The World Health Organization and the European Group for the Study of Insulin Resistance define insulin resistance as an insulin sensitivity index in the lowest quartile or the lowest decile of a general population, respectively. Three studies defined a percentile cut-off point for insulin resistance but did not give the corresponding insulin sensitivity index value; in these cases, it is not possible to compare the results. Three other studies did not define a cut-off point at all; in these cases, it is hard to confirm the existence of insulin resistance. In conclusion, to convincingly answer our question, future studies need to adopt a clear case definition, define a percentile cut-off point and reference population, and give the value of the insulin resistance measure at the specified percentile.
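A minimal sketch of the WHO-style case definition discussed above, i.e., flagging insulin resistance as an insulin sensitivity index (ISI) in the lowest quartile of a reference population; the index values are simulated for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated insulin sensitivity index values for a reference population.
isi = rng.lognormal(mean=1.0, sigma=0.4, size=1000)

# WHO-style case definition: insulin resistance = ISI in the lowest quartile.
cutoff = np.percentile(isi, 25)
is_resistant = isi < cutoff
print(cutoff, is_resistant.mean())  # ~25% of the population by construction
```

Reporting the numeric cutoff alongside the percentile, as the review recommends, is what makes results comparable across studies.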

Relevance: 60.00%

Abstract:

An investigation of (a) month/season of birth as a risk factor and (b) month/season of treatment initiation as a prognostic factor in acute lymphoblastic leukemia (ALL) was conducted in children 0-15 years of age. The study population was that of the Surveillance, Epidemiology, and End Results (SEER) program of the National Cancer Institute and included children diagnosed with and treated for ALL from 1973-1986. Two separate sets of analyses using different exclusion criteria led to similar results: specifically, an inability to reject the null hypothesis of no significant variation in monthly/seasonal incidence rates among children residing within the 10 SEER sites, using either cosinor analysis or one-way analysis of variance. No association was established between month/season of treatment initiation and survival in childhood ALL using either Kaplan-Meier or cosinor analysis. In separate Kaplan-Meier analyses, however, age, gender, and treatment type were each found to be significant univariate prognostic factors for survival.
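A minimal sketch of a single-component cosinor analysis as applied to monthly incidence data: the seasonal model y(t) = M + A·cos(2πt/12 + φ) is fitted by ordinary least squares on cosine/sine regressors, and the amplitude A is then tested against zero. The monthly counts below are hypothetical.

```python
import numpy as np

months = np.arange(1, 13)
cases = np.array([30, 28, 33, 35, 38, 40, 37, 36, 31, 29, 27, 30])

# Design matrix: intercept (MESOR), cosine term, sine term.
X = np.column_stack([np.ones(12),
                     np.cos(2 * np.pi * months / 12),
                     np.sin(2 * np.pi * months / 12)])
beta, *_ = np.linalg.lstsq(X, cases, rcond=None)

mesor = beta[0]                          # rhythm-adjusted mean
amplitude = np.hypot(beta[1], beta[2])   # A = sqrt(b1^2 + b2^2)
acrophase = np.arctan2(-beta[2], beta[1])  # phi, phase of the peak (radians)
print(mesor, amplitude, acrophase)
```

Failing to reject "amplitude = 0" corresponds to the study's null result of no significant monthly/seasonal variation.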