950 results for Risk Indicators
Abstract:
Quality of life (QL) is an important consideration when comparing adjuvant therapies for early breast cancer, especially if they differ substantially in toxicity. We evaluated QL and Q-TWiST among patients randomised to adjuvant dose-intensive epirubicin and cyclophosphamide administered with filgrastim and progenitor cell support (DI-EC) or standard-dose anthracycline-based chemotherapy (SD-CT). We estimated the duration of chemotherapy toxicity (TOX), time without disease symptoms and toxicity (TWiST), and time following relapse (REL). Patients scored QL indicators. Mean durations for the three transition times were weighted with patient-reported utilities to obtain the mean Q-TWiST. Patients receiving DI-EC reported worse QL during TOX, especially treatment burden (month 3: P<0.01), but a faster recovery 3 months after chemotherapy than patients receiving SD-CT, for example, less coping effort (P<0.01). Average Q-TWiST was 1.8 months longer for patients receiving DI-EC (95% CI, -2.5 to 6.1). Q-TWiST favoured DI-EC for most values of the utilities attached to TOX and REL. Despite greater initial toxicity, quality-adjusted survival was similar or better with dose-intensive treatment compared with standard treatment. Thus, QL considerations should not be prohibitive if future intensive therapies show superior efficacy.
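For readers unfamiliar with the method, Q-TWiST weights the mean time spent in each health state by a utility coefficient (TOX and REL down-weighted by utilities u_TOX and u_REL, TWiST weighted 1). The sketch below illustrates that standard weighting; all durations and utilities are made-up placeholders, not values from the trial.

```python
# Minimal Q-TWiST sketch: utility-weighted sum of mean state durations (months).
# Utilities and durations below are illustrative placeholders, not trial data.

def q_twist(mean_tox, mean_twist, mean_rel, u_tox, u_rel):
    return u_tox * mean_tox + mean_twist + u_rel * mean_rel

di_ec = q_twist(mean_tox=6.0, mean_twist=48.0, mean_rel=10.0, u_tox=0.5, u_rel=0.5)
sd_ct = q_twist(mean_tox=4.0, mean_twist=47.0, mean_rel=11.0, u_tox=0.5, u_rel=0.5)
print(f"Q-TWiST difference (DI-EC minus SD-CT): {di_ec - sd_ct:.1f} months")
```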
Abstract:
PURPOSE: There is no general agreement on the best indication and timing of vitrectomy in patients suffering from Terson syndrome. We therefore reviewed our cases to assess factors affecting functional outcome and complication rates after vitrectomy. METHODS: In this retrospective consecutive case series, the records of all patients undergoing vitrectomy for Terson syndrome between 1975 and 2005 were evaluated. RESULTS: Thirty-seven patients (45 eyes) were identified, 36 of whom (44 corresponding eyes) were eligible. The best-corrected visual acuity (BCVA) at first and last presentation was 0.07 ± 0.12 and 0.72 ± 0.31, respectively. Thirty-five eyes (79.5%) achieved a postoperative BCVA of ≥ 0.5; 26 eyes (59.1%) achieved a postoperative BCVA of ≥ 0.8. Patients operated on within 90 days of vitreous haemorrhage achieved a better final BCVA than those with a longer latency (BCVA of 0.87 ± 0.27 compared with 0.66 ± 0.31; P = 0.03). Patients younger than 45 years of age achieved a better final BCVA than older patients (0.85 ± 0.24 compared with 0.60 ± 0.33; P = 0.006). Retinal detachment developed in four patients between 6 and 27 months after surgery. Seven patients (16%) required epiretinal membrane peeling and seven required cataract surgery. CONCLUSION: Ninety-eight per cent of our patients experienced a rapid and persisting visual recovery after removal of a vitreous haemorrhage caused by Terson syndrome. A shorter interval between occurrence of the vitreous haemorrhage and surgery, as well as younger patient age, were predictive of a better outcome. The surgical risk is generally low, but complications (notably retinal detachment) may occur late after surgery.
Abstract:
The diagnosis of the obliterative bronchiolitis syndrome in lung transplantation is presently best established by evaluation of postoperative lung function tests. Unfortunately, the decline in lung function occurs only when obliteration has progressed significantly and is therefore not an early predictive indicator. To distinguish patients at increased risk for the development of obliterative bronchiolitis, we regularly assessed the chemiluminescence response of polymorphonuclear leukocytes, opsonic capacity, and plasma elastase/beta-N-acetylglucosaminidase in 52 outpatients (25 women and 27 men; mean age 45 ± 12 years) who underwent transplantation between January 1991 and January 1992. Recent-onset bronchiolitis occurred in 16 patients during the observation period (obliterative bronchiolitis group). A matched control cohort of 16 patients was formed from the remaining 36 patients according to type of procedure, age and follow-up. Data obtained from the period 6 months before clinical onset of the syndrome showed a significant drop in opsonic capacity (obliterative bronchiolitis group = 87% ± 7%; control = 100% ± 9%; p < 0.023) and a rise in beta-N-acetylglucosaminidase (obliterative bronchiolitis group = 7.5 ± 2 U/L; control = 5.8 ± 1.8 U/L; p < 0.04). No correlation was found between the number of infectious events or rejection episodes and the incidence of obliterative bronchiolitis. These results suggest that a decrease in plasma opsonic capacity and a rise in beta-N-acetylglucosaminidase may be early markers preceding the clinical onset of obliterative bronchiolitis. The nonspecific immune system may therefore play an important role in the development of obliterative bronchiolitis.
Abstract:
BACKGROUND Empirical research has illustrated an association between study size and relative treatment effects, but conclusions have been inconsistent about the association of study size with risk of bias items. Small studies generally give imprecisely estimated treatment effects, and study variance can serve as a surrogate for study size. METHODS We conducted a network meta-epidemiological study analyzing 32 networks including 613 randomized controlled trials, and used Bayesian network meta-analysis and meta-regression models to evaluate the impact of trial characteristics and study variance on the results of network meta-analysis. We examined changes in relative effects and between-study variation in network meta-regression models as a function of the variance of the observed effect size and indicators of the adequacy of each risk of bias item. Adjustment was performed both within and across networks, allowing for between-network variability. RESULTS Imprecise studies with large variances tended to exaggerate the effects of the active or new intervention in the majority of networks, with a ratio of odds ratios of 1.83 (95% CI: 1.09, 3.32). Inappropriate or unclear conduct of random sequence generation and allocation concealment, as well as lack of blinding of patients and outcome assessors, did not materially affect the summary results. Imprecise studies also appeared to be more prone to inadequate conduct. CONCLUSIONS Compared with more precise studies, studies with large variance may give substantially different answers that alter the results of network meta-analyses for dichotomous outcomes.
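The study's own model is a Bayesian network meta-regression; as a much simpler stand-in, the idea of using study variance as a covariate can be illustrated with a single pairwise comparison: regress observed log odds ratios on their variances with inverse-variance weights and look at the slope. The sketch below uses synthetic data and hypothetical variable names.

```python
# Simplified small-study-effect check for one pairwise comparison
# (not the Bayesian network meta-regression used in the study):
# regress observed log odds ratios on their variances via weighted least squares.
import numpy as np

rng = np.random.default_rng(0)
k = 30
v = rng.uniform(0.02, 0.5, size=k)                        # within-study variances (surrogate for size)
log_or = rng.normal(-0.2 + 0.8 * v, np.sqrt(v))           # synthetic exaggeration in imprecise trials

w = 1.0 / v
X = np.column_stack([np.ones(k), v])                      # intercept + variance as covariate
W = np.diag(w)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ log_or)     # weighted least squares
print(f"intercept (precise-study effect): {beta[0]:.3f}, slope on variance: {beta[1]:.3f}")
```

A positive slope would indicate that less precise studies report systematically larger effects, the pattern the network analysis quantified as a ratio of odds ratios.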
Abstract:
Over the last forty years, applying dendrogeomorphology to palaeoflood analysis has improved estimates of the frequency and magnitude of past floods worldwide. This paper reviews the main results obtained by applying dendrogeomorphology to flood research in several case studies in Central Spain. These dendrogeomorphological studies focused on the following topics: (1) anatomical analysis to understand the physiological response of trees to flood damage and improve sampling efficiency; (2) compiling robust flood chronologies in ungauged mountain streams; (3) determining flow depth and estimating flood discharge using two-dimensional hydraulic modelling, and comparing them with other palaeostage indicators; (4) calibrating hydraulic model parameters (i.e. Manning roughness); and (5) implementing stochastic-based cost–benefit analysis to select optimal mitigation measures. The progress made in these areas is presented with suggestions for further research to improve the applicability of dendrogeomorphology to palaeoflood studies. Further developments will include new methods for better identification of the causes of specific types of flood damage to trees (e.g. tilted trees) or stable isotope analysis of tree rings to identify the climatic conditions associated with periods of increasing flood magnitude or frequency.
Abstract:
Dendrogeomorphology uses information recorded in the roots, trunks and branches of trees and bushes located in the fluvial system to complement (or sometimes even replace) systematic and palaeohydrological records of past floods. The application of dendrogeomorphic data sources and methods to palaeoflood analysis over nearly 40 years has improved frequency and magnitude estimates of past floods. Nevertheless, research carried out so far has shown that the dendrogeomorphic indicators traditionally used (mainly scar evidence), and their use to infer frequency and magnitude, have been restricted to a small, limited set of applications. New possibilities with enormous potential remain unexplored. Future research on palaeoflood frequency and magnitude using dendrogeomorphic data sources should: (1) test the application of isotopic indicators (¹⁶O/¹⁸O ratio) to discover the meteorological origin of past floods; (2) use different dendrogeomorphic indicators to estimate peak flows with 2D (and 3D) hydraulic models and study how they relate to other palaeostage indicators; (3) investigate improved calibration of 2D hydraulic model parameters (roughness); and (4) apply statistics-based cost–benefit analysis to select optimal mitigation measures. This paper presents an overview of these innovative methodologies, focusing on their capabilities and limitations in the reconstruction of recent floods and palaeofloods.
Abstract:
Objectives. Cardiovascular disease (CVD), including CVD secondary to type II diabetes, a significant health problem among Mexican American populations, originates in early childhood. This study seeks to determine risk factors available to the health practitioner that can identify the child at potential risk of developing CVD, thereby enabling early intervention. Design. This is a secondary analysis of cross-sectional data on matched Mexican American parents and children selected from the HHANES, 1982–1984. Methods. Parents at high risk for CVD were identified based on medical history and clinical and physical findings. Factor analysis was performed on children's skinfold thicknesses, height, weight, and systolic and diastolic blood pressures in order to produce a limited number of uncorrelated child CVD risk factors. Multiple regression analyses were then performed, separately for mothers and fathers, to determine other CVD markers associated with these factors. Results. Factor analysis of children's measurements revealed three uncorrelated latent variables summarizing the children's CVD risk: Factor 1, 'Fatness'; Factor 2, 'Size and Maturity'; and Factor 3, 'Blood Pressure', together accounting for the bulk of the variation in children's measurements (86–89%). Univariate analyses showed that children from high CVD risk families did not differ from children of low-risk families in occurrence of high blood pressure, overweight, biological maturity, acculturation score, or social and economic indicators. However, multiple regression using the factor scores (from factor analysis) as dependent variables revealed that higher CVD risk in parents was significantly associated with increased fatness and increased blood pressure in the children. Father's CVD risk status was associated with higher levels of body fat in his children and higher levels of blood pressure in sons. Mother's CVD risk status was associated with higher blood pressure levels in children, and obesity in the mother was associated with higher fatness levels in her children. Conclusion. Occurrence of cardiovascular disease and its risk factors in parents of Mexican American children may be used to identify children at potentially higher risk of developing CVD in the future. Obesity in mothers appears to be an important marker for the development of higher levels of body fatness in children.
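The two-stage workflow described here (factor analysis of children's measurements, then regression of factor scores on parental risk status) can be sketched as follows. The sketch uses synthetic data, not the HHANES sample, and the variable names and effect sizes are hypothetical.

```python
# Illustrative two-stage analysis on synthetic data (not the HHANES sample):
# (1) factor analysis of children's measurements, (2) regression of factor
# scores on a hypothetical parental CVD-risk indicator.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n = 500
parent_high_risk = rng.integers(0, 2, n)                  # hypothetical parental CVD-risk flag
fatness = 0.3 * parent_high_risk + rng.normal(0, 1, n)    # latent "fatness", shifted by parental risk
measures = np.column_stack([
    fatness + rng.normal(0, 0.5, n),                      # skinfold 1
    fatness + rng.normal(0, 0.5, n),                      # skinfold 2
    fatness + rng.normal(0, 0.5, n),                      # weight
    rng.normal(0, 1, n),                                  # height
    rng.normal(0, 1, n),                                  # systolic blood pressure
    rng.normal(0, 1, n),                                  # diastolic blood pressure
])

scores = FactorAnalysis(n_components=3, random_state=0).fit_transform(measures)

# Ordinary least squares of each factor score on parental risk status.
X = np.column_stack([np.ones(n), parent_high_risk])
coefs = [np.linalg.lstsq(X, scores[:, j], rcond=None)[0][1] for j in range(3)]
print("difference in mean factor score, high- vs low-risk parents:", np.round(coefs, 3))
```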
Abstract:
Subclinical thyroid dysfunction has been associated with coronary heart disease, but the risk of stroke is unclear. Our aim is to combine the evidence on the association between subclinical thyroid dysfunction and the risk of stroke in prospective cohort studies. We searched Medline (OvidSP), Embase, Web-of-Science, Pubmed Publisher, Cochrane and Google Scholar from inception to November 2013 using a cohort filter, but without language restriction or other limitations. Reference lists of articles were searched. Two independent reviewers screened articles according to pre-specified criteria and selected prospective cohort studies with baseline thyroid function measurements and assessment of stroke outcomes. Data were derived using a standardized data extraction form. Quality was assessed according to previously defined quality indicators by two independent reviewers. We pooled the outcomes using a random-effects model. Of 2,274 articles screened, six cohort studies, including 11,309 participants with 665 stroke events, met the criteria. Four of the six studies provided information on subclinical hyperthyroidism, including a total of 6,029 participants, and five on subclinical hypothyroidism (n = 10,118). The pooled hazard ratio (HR) was 1.08 (95% CI 0.87-1.34) for subclinical hypothyroidism (I² = 0%) and 1.17 (95% CI 0.54-2.56) for subclinical hyperthyroidism (I² = 67%) compared with euthyroidism. Subgroup analyses yielded similar results. Our systematic review provides no evidence supporting an increased risk of stroke associated with subclinical thyroid dysfunction. However, the available literature is insufficient and larger datasets are needed to perform extended analyses. Also, there were insufficient events to exclude a clinically significant risk from subclinical hyperthyroidism, and more data are required for subgroup analyses.
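The random-effects pooling used here can be illustrated with a minimal DerSimonian-Laird calculation on log hazard ratios. The input values below are placeholders, not the six cohorts from the review.

```python
# Minimal DerSimonian-Laird random-effects pooling of log hazard ratios.
# The hazard ratios and standard errors are placeholders, not the reviewed cohorts.
import numpy as np

hr = np.array([1.20, 0.95, 1.10, 1.05, 0.90])     # per-study hazard ratios (placeholders)
se = np.array([0.15, 0.20, 0.25, 0.18, 0.30])     # standard errors of log(HR)

y, v = np.log(hr), se**2
w = 1 / v
y_fixed = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fixed) ** 2)
df = len(y) - 1
tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))   # between-study variance

w_star = 1 / (v + tau2)
pooled = np.sum(w_star * y) / np.sum(w_star)
se_pooled = np.sqrt(1 / np.sum(w_star))
ci = np.exp([pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled])
i2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0
print(f"pooled HR = {np.exp(pooled):.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f}), I² = {i2:.0f}%")
```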
Abstract:
BACKGROUND: Little is known about the role of time-related variables in the prognosis of acute and subacute low back pain (LBP). OBJECTIVE: The aim of this study was to estimate the relationship between time-related LBP characteristics and prognostic factors for acute/subacute LBP. METHODS: We performed a prospective inception cohort study of 315 patients attending a health practitioner for acute/subacute LBP or recurrent LBP. One-tailed correlations were computed between patient characteristics and time-related variables. RESULTS: The pattern of correlation between risk factors for, and resources against, persistent LBP differed across the three time-related variables. 'Subacute LBP' and 'delayed presentation' were positively associated with psychological factors, and both indicators were negatively correlated with resources against the development of persistent LBP. Moreover, 'delayed presentation' was positively related to occupational stressors. In contrast, 'recurrent LBP' was related only to more impaired health-related factors. CONCLUSIONS: Patients with current LBP who wait longer before seeking help in primary care have a more disadvantageous profile of occupational and psychological risk factors and lower resource levels. A similar but less pronounced pattern occurred in those with subacute LBP compared with acute LBP. Considering the time characteristics of LBP may help to better understand LBP.
Abstract:
BACKGROUND Associations between social status and health behaviours are well documented, but the mechanisms involved are less well understood. Cultural capital theory may contribute to a better understanding by expanding the scope of inequality indicators to include individuals' knowledge, skills, beliefs and material goods, and by examining how these indicators shape individuals' health lifestyles. We explore the structure and applicability of a set of cultural capital indicators in an empirical analysis of smoking behaviour among young male adults. METHODS We analysed data from the Swiss Federal Survey of Adolescents (CH-X) 2010-11 panel of young Swiss males (n = 10 736). A set of nine theoretically relevant variables (covering incorporated, institutionalized and objectified cultural capital) was investigated using exploratory factor analysis. Regression models were run to estimate the association between factor scores and smoking outcomes. Outcome measures were daily smoking status and the number of cigarettes smoked by daily smokers. RESULTS Cultural capital indicators aggregated into a three-factor solution representing 'health values', 'education and knowledge' and 'family resources'. Each factor score predicted the smoking outcomes: in young males, scoring low on health values, education and knowledge, and family resources was associated with a higher risk of being a daily smoker and of smoking more cigarettes daily. CONCLUSION Cultural capital measures that include, but go beyond, educational attainment can improve prediction models of smoking in young male adults. New measures of cultural capital may thus contribute to our understanding of the social status-based resources that individuals can use towards health behaviours.
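The second analysis stage (regressing a smoking outcome on the three factor scores) can be sketched as a logistic regression on synthetic data. Factor names follow the abstract; the data, coefficients and sample size below are illustrative assumptions.

```python
# Sketch of the second analysis stage on synthetic data: logistic regression of
# daily-smoking status on three cultural-capital factor scores ("health values",
# "education and knowledge", "family resources"). All values are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 5000
scores = rng.normal(size=(n, 3))                  # stand-ins for the three factor scores
logit = -1.0 - 0.4 * scores[:, 0] - 0.3 * scores[:, 1] - 0.2 * scores[:, 2]
daily_smoker = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = LogisticRegression().fit(scores, daily_smoker)
print("odds ratios per 1-SD increase in each factor:", np.round(np.exp(model.coef_[0]), 2))
```

Odds ratios below 1 here correspond to the abstract's finding that low scores go with higher smoking risk.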
Abstract:
Background Several indicators of heightened vulnerability to psychosis, and relevant stressors, have been identified. However, it has rarely been studied prospectively to what extent these vulnerability factors are in fact more frequently present in individuals with an at-risk mental state for psychosis. Moreover, it remains unknown whether any of them contribute to the prediction of psychosis onset in at-risk mental state individuals. Methods We recruited 28 healthy controls, 86 first-episode psychosis patients and 127 at-risk mental state individuals within the Basel "Früherkennung von Psychosen" (early detection of psychoses) project. Relative frequencies of selected vulnerability factors for psychosis were compared between healthy controls, psychosis patients, at-risk mental state individuals with subsequent psychosis onset (n = 31) and those without subsequent psychosis onset (n = 55). Survival analyses were applied to determine associations between time to transition to psychosis and vulnerability factors in all 127 at-risk mental state individuals. Results Vulnerability factors such as "difficulties during school education or vocational training", "difficulties during employment", "being single", "difficulties with intimate relationships" and "being burdened with specific stressful situations" were more common in the at-risk mental state and first-episode psychosis groups than in healthy controls. Conclusions At-risk mental state and first-episode psychosis individuals more frequently present with vulnerability factors. Individual vulnerability factors, however, do not appear to be predictive of psychosis onset.
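A survival analysis of time to transition, as described here, is commonly run as a Cox proportional-hazards regression. The sketch below uses made-up data and one hypothetical binary vulnerability indicator, and assumes the third-party lifelines package is available; it is not the authors' actual model specification.

```python
# Sketch of a time-to-transition survival analysis on made-up data:
# Cox regression of time to psychosis onset on one binary vulnerability indicator.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 127
vulnerable = rng.integers(0, 2, n)                                 # hypothetical vulnerability factor
time_months = rng.exponential(scale=36 / (1 + 0.5 * vulnerable))   # follow-up / time to event
transition = rng.binomial(1, 0.3, n)                               # 1 = transition to psychosis observed

df = pd.DataFrame({"time": time_months, "event": transition, "vulnerable": vulnerable})
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(cph.summary[["exp(coef)", "p"]])   # hazard ratio and p-value for the indicator
```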
Abstract:
We used meat-inspection data collected over a period of three years in Switzerland to evaluate slaughterhouse-level, farm-level and animal-level factors that may be associated with whole carcass condemnation (WCC) in cattle after slaughter. The objective of this study was to identify WCC risk factors so that they can be communicated to, and managed by, the slaughter industry and veterinary services. At meat inspection, there were three main predictors of the risk of WCC: the slaughtered animal's sex, its age, and the size of the slaughterhouse in which it was processed. WCC for injuries and significant weight loss (visible welfare indicators) was almost exclusive to smaller slaughterhouses, whereas cattle exhibiting clinical syndromes that were not externally visible (e.g. pneumonia lesions) and that are associated with the fattening of cattle ended up in larger slaughterhouses. For this reason, it is important for animal health surveillance to collect data from both types of slaughterhouse. Other important risk factors for WCC were the on-farm mortality rate and the number of cattle on the farm of origin. This study highlights the fact that the risk factors for WCC are as complex as the production system itself, with risk factors interacting with one another in ways that are sometimes difficult to interpret biologically. Risk-based surveillance aimed at farms with recurring health problems (e.g. a history of above-average condemnation rates) may be more appropriate than the selection of higher-risk animals arriving at slaughter. In Switzerland, the introduction of a benchmarking system that gives farmers feedback on condemnation reasons and on their performance relative to the national/regional average could be a first step towards improving herd management and financial returns for producers.
Abstract:
To take better advantage of the abundant results from large-scale genomic association studies, investigators are turning to genetic risk score (GRS) methods to combine the information from common, modest-effect risk alleles into an efficient risk assessment statistic. The statistical properties of these GRSs are poorly understood. As a first step toward a better understanding of GRSs, a systematic analysis of recent investigations using a GRS was undertaken. GRS studies in the areas of coronary heart disease (CHD), cancer, and other common diseases were identified using bibliographic databases and by hand-searching reference lists and journals. Twenty-one independent case-control studies, cohort studies, and simulation studies (12 in CHD, 9 in other diseases) were identified. The underlying statistical assumptions of the GRS were investigated using the experience of the Framingham risk score. Improvements in the construction of a GRS, guided by the concept of composite indicators, are discussed. The GRS is a promising risk assessment tool to improve prediction and diagnosis of common diseases.
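One commonly used GRS construction, not necessarily the one favoured by the review, is a weighted allele count: each person's count of risk alleles at each variant is weighted by that variant's log odds ratio and summed. A minimal sketch with made-up inputs:

```python
# Commonly used weighted GRS: sum of risk-allele counts weighted by per-variant
# log odds ratios. Effect sizes and genotypes below are made up for illustration.
import numpy as np

rng = np.random.default_rng(4)
n_people, n_snps = 1000, 8
odds_ratios = rng.uniform(1.05, 1.30, n_snps)                 # modest per-allele effects (placeholders)
weights = np.log(odds_ratios)
allele_counts = rng.integers(0, 3, size=(n_people, n_snps))   # 0, 1 or 2 risk alleles per SNP

grs = allele_counts @ weights                                 # weighted genetic risk score per person
print("mean GRS:", grs.mean().round(3), "| top-decile cut-off:", np.quantile(grs, 0.9).round(3))
```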
Abstract:
This study reviews the current alcoholism planning process of the Houston-Galveston Area Council, an agency carrying out planning for a thirteen-county region surrounding Houston, Texas. The four central groups involved in this planning are identified, and the role that each plays and how it affects the planning outcomes are discussed. The most substantive outcome of the Houston-Galveston Area Council's alcoholism planning, the Regional Alcoholism/Alcohol Abuse Plan, is examined. Shortcomings in the data provided, and the lack of other data necessary for planning, are noted. A problem-oriented planning model is presented as an alternative to the Houston-Galveston Area Council's current service-oriented approach to alcoholism planning. The five primary phases of the model (identification of the problem, statement of objectives, selection of alternative programs, implementation, and evaluation) are presented, together with an overview of the tasks involved in applying this model to alcoholism planning. A specific aspect of the model, the use of problem status indicators, is explored using cirrhosis and suicide mortality data. A review of the literature suggests that, based on five criteria (availability, subgroup identification, validity, reliability, and sensitivity), both suicide and cirrhosis are suitable as indicators of the alcohol problem when combined with other indicators. Cirrhosis and suicide mortality data are examined for the thirteen-county Houston-Galveston Region for the years 1969 through 1976. Data limitations preclude definite conclusions concerning the alcohol problem in the region. Three hypotheses about the nature of the regional alcohol problem are presented. First, there appears to be no linear trend in the number of alcoholics at risk of suicide and cirrhosis mortality. Second, the number of alcoholics in the metropolitan areas appears to be greater than the number in rural areas. Third, the number of male alcoholics at risk of cirrhosis and suicide mortality is greater than the number of female alcoholics.
Abstract:
Injection drug use is the third most frequent risk factor for new HIV infections in the United States. A dual mode of exposure, unsafe drug-using practices and risky sexual behaviors, underlies injection drug users' (IDUs') risk for HIV infection. This study aims to characterize patterns of drug use and sexual behaviors and to examine the social contexts associated with risk behaviors among a sample of injection drug users. This cross-sectional study includes 523 eligible injection drug users from Houston, Texas, recruited into the 2009 National HIV Behavioral Surveillance project. Three separate sets of analyses were carried out. First, using latent class analysis (LCA) and maximum likelihood, we identified classes of behavior describing levels of HIV risk from nine drug and sexual behaviors. Second, eight separate multivariable regression models were built to examine the odds of reporting a given risk behavior; we constructed the most parsimonious multivariable model using a manual backward stepwise process. Third, we examined whether knowledge of HIV serostatus (self-reported positive, negative, or unknown serostatus) is associated with drug use and sexual HIV risk behaviors. Participants were mostly male, older, and non-Hispanic Black. Forty-two percent of our sample had behaviors putting them at high risk, 25% at moderate risk, and 33% at low risk for HIV infection. Individuals in the high-risk group had the highest probabilities of risky behaviors, characterized as almost always sharing needles (0.93), seldom using condoms (0.10), reporting recent exchange sex partners (0.90), and practicing anal sex (0.34). We observed that unsafe injecting practices were associated with high-risk sexual behaviors: IDUs who shared needles had higher odds of having anal sex (OR = 2.89, 95% CI: 1.69-4.92) and unprotected sex (OR = 2.66, 95% CI: 1.38-5.10) at last sex. Additionally, homelessness was associated with needle sharing (OR = 2.24, 95% CI: 1.34-3.76) and cocaine use was associated with multiple sex partners (OR = 1.82, 95% CI: 1.07-3.11). Furthermore, twenty-one percent of the sample were unaware of their HIV serostatus. The three serostatus groups did not differ in drug-use behaviors: always using a new sterile needle, or sharing needles or drug preparation equipment. However, IDUs unaware of their HIV serostatus were 33% more likely to report having more than three sexual partners in the past 12 months, 45% more likely to report unprotected sex, and 85% more likely to have used drugs and/or alcohol before or during last sex, compared with HIV-positive IDUs. This analysis underscores the merit of the LCA approach for empirically categorizing injection drug users into distinct classes and identifying their risk patterns using multiple indicators, and our results show considerable overlap of high-risk sexual and drug-use behaviors among the high-risk class members. The observed clustering of drug and sexual risk behaviors in this population confirms that injection drug users are not a homogeneous population in terms of HIV risk. These findings will help develop tailored prevention programs.
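Latent class analysis on binary risk indicators amounts to fitting a mixture of independent Bernoulli variables by maximum likelihood, typically via EM. A minimal sketch on simulated data follows; the sample is synthetic (not the NHBS data), the number of classes is fixed at three by assumption, and no model-selection step (e.g. BIC over class counts) is shown.

```python
# Minimal latent class analysis on binary indicators, written as EM for a mixture
# of independent Bernoullis (simulated data, not the NHBS sample; 3 classes assumed).
import numpy as np

rng = np.random.default_rng(5)
n, d, k = 523, 9, 3
true_theta = rng.uniform(0.1, 0.9, size=(k, d))
z = rng.integers(0, k, n)
X = rng.binomial(1, true_theta[z])                    # n x d binary risk indicators

pi = np.full(k, 1 / k)                                # class proportions
theta = rng.uniform(0.25, 0.75, size=(k, d))          # item-response probabilities
for _ in range(200):                                  # EM iterations
    log_lik = X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T + np.log(pi)
    log_lik -= log_lik.max(axis=1, keepdims=True)
    resp = np.exp(log_lik)
    resp /= resp.sum(axis=1, keepdims=True)           # E-step: class responsibilities
    pi = resp.mean(axis=0)                            # M-step: class sizes
    theta = np.clip(resp.T @ X / resp.sum(axis=0)[:, None], 1e-6, 1 - 1e-6)

print("estimated class proportions:", np.round(np.sort(pi), 2))
```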