Abstract:
The factorial validity of the SF-36 was evaluated using confirmatory factor analysis (CFA) methods, structural equation modeling (SEM), and multigroup structural equation modeling (MSEM). First, the measurement and structural model of the hypothesized SF-36 was explicated. Second, the model was tested for the validity of a second-order factorial structure; upon evidence of model misfit, the best-fitting model was determined and its validity was tested on a second random sample from the same population. Third, the best-fitting model was tested for invariance of the factorial structure across race, age, and educational subgroups using MSEM. The findings support the second-order factorial structure of the SF-36 as proposed by Ware and Sherbourne (1992). However, the results suggest that: (a) Mental Health and Physical Health covary; (b) general mental health cross-loads onto Physical Health; (c) general health perception loads onto Mental Health instead of Physical Health; (d) many of the error terms are correlated; and (e) the physical function scale is not reliable across these two samples. This hierarchical factor pattern was replicated across both samples of health care workers, suggesting that the post hoc model fitting was not data specific. Subgroup analysis suggests that the physical function scale is not reliable across the "age" or "education" subgroups and that the general mental health scale path from Mental Health is not reliable across the "white/nonwhite" or "education" subgroups. The importance of this study lies in the use of SEM and MSEM in evaluating sample data from the use of the SF-36. These methods are uniquely suited to the analysis of latent variable structures and are widely used in other fields. The use of latent variable models for self-reported outcome measures has become widespread and should now be applied to medical outcomes research.
Invariance testing is superior to mean scores or summary scores when evaluating differences between groups. From a practical as well as psychometric perspective, it seems imperative that construct validity research related to the SF-36 establish whether this same hierarchical structure and invariance holds for other populations. This project is presented as three articles to be submitted for publication.
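The invariance tests described above are usually carried out as chi-square difference tests between nested models (a configural model versus one with parameters constrained equal across subgroups). A minimal sketch with illustrative fit statistics (not values from the study), using scipy:

```python
from scipy import stats

# Hypothetical fit statistics for two nested SEM models: a configural
# (unconstrained) model and a model with loadings constrained equal
# across subgroups. Values are illustrative, not from the study.
chi2_configural, df_configural = 812.4, 231
chi2_constrained, df_constrained = 845.9, 246

# Chi-square difference test: a significant result indicates the
# constrained (invariant) model fits significantly worse, i.e.
# invariance does not hold.
delta_chi2 = chi2_constrained - chi2_configural
delta_df = df_constrained - df_configural
p_value = stats.chi2.sf(delta_chi2, delta_df)
print(f"delta chi2 = {delta_chi2:.1f} on {delta_df} df, p = {p_value:.4f}")
```

With these illustrative numbers the constrained model fits significantly worse, so equality of the constrained parameters across groups would be rejected.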
Abstract:
Background. Increased incidence of cancer is documented in immunosuppressed transplant patients. Likewise, as survival increases for persons infected with the Human Immunodeficiency Virus (HIV), we expect their incidence of cancer to increase. The objective of this study was to examine the current gender-specific spectrum of cancer in an HIV-infected cohort (especially malignancies not currently associated with Acquired Immunodeficiency Syndrome (AIDS)) in relation to the general population. Methods. Cancer incidence data were collected for residents of Harris County, Texas who were diagnosed with a malignancy between 1975 and 1994. These data were linked to HIV/AIDS registry data to identify malignancies in an HIV-infected cohort of 14,986 persons. A standardized incidence ratio (SIR) analysis was used to compare the incidence of cancer in this cohort to that in the general population. Risk factors such as mode of HIV infection, age, race and gender were evaluated for contribution to the development of cancer within the HIV cohort, using Cox regression techniques. Findings. Of those in the HIV-infected cohort, 2289 persons (15%) were identified as having one or more malignancies. The linkage identified 29.5% of these malignancies (males 28.7%, females 60.9%). HIV-infected men and women had incidences of cancer that were 16.7 (16.1, 17.3) and 2.9 (2.3, 3.7) times those expected for the general population of Harris County, Texas, adjusting for age. Significant SIRs were observed for the AIDS-defining malignancies of Kaposi's sarcoma, non-Hodgkin's lymphoma, primary lymphoma of the brain and cancer of the cervix. Additionally, significant SIRs for non-melanotic skin cancer in males, 6.9 (4.8, 9.5), and colon cancer in females, 4.0 (1.1, 10.2), were detected.
Among the HIV-infected cohort, race/ethnicity of White (relative risk 2.4, 95% confidence interval 2.0, 2.8) or Spanish Surname, 2.2 (1.9, 2.7), and an infection route of male-to-male sex with, 3.0 (1.9, 4.9), or without, 3.4 (2.1, 5.5), intravenous drug use increased the risk of having a diagnosis of an incident cancer. Interpretation. There appears to be an increased risk of developing cancer if infected with HIV. In addition to the malignancies routinely associated with HIV infection, there appears to be an increased risk of being diagnosed with non-melanotic skin cancer in males and colon cancer in females.
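The SIR analysis reported above divides the observed case count in the cohort by the count expected from general-population rates. A sketch with hypothetical counts, using the exact Poisson (Byar chi-square) interval:

```python
from scipy import stats

def sir_exact_ci(observed, expected, alpha=0.05):
    """Standardized incidence ratio with an exact Poisson confidence
    interval (chi-square method), as used to compare cohort cancer
    incidence against general-population expected counts."""
    sir = observed / expected
    lower = stats.chi2.ppf(alpha / 2, 2 * observed) / (2 * expected)
    upper = stats.chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / (2 * expected)
    return sir, lower, upper

# Hypothetical counts (not from the study): 40 observed cancers where
# 10 were expected from age-adjusted general-population rates.
sir, lo, hi = sir_exact_ci(40, 10)
```

An interval whose lower bound exceeds 1 would be reported as a significant excess, as for the SIRs quoted above.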
Abstract:
Nuclear morphometry (NM) uses image analysis to measure features of the cell nucleus, which are classified as: bulk properties, shape or form, and DNA distribution. Studies have used these measurements as diagnostic and prognostic indicators of disease, with inconclusive results. The distributional properties of these variables have not been systematically investigated, although much medical data exhibit nonnormal distributions. Measurements are made on several hundred cells per patient, so summary measures reflecting the underlying distribution are needed. Distributional characteristics of 34 NM variables from prostate cancer cells were investigated using graphical and analytical techniques. Cells per sample ranged from 52 to 458. A small sample of patients with benign prostatic hyperplasia (BPH), representing non-cancer cells, was used for general comparison with the cancer cells. Data transformations such as log, square root and 1/x did not yield normality as measured by the Shapiro-Wilk test. A modulus transformation, used for distributions having abnormal kurtosis values, also did not produce normality. Kernel density histograms of the 34 variables exhibited non-normality, and 18 variables also exhibited bimodality. A bimodality coefficient was calculated, and 3 variables (DNA concentration, shape and elongation) showed the strongest evidence of bimodality and were studied further. Two analytical approaches were used to obtain a summary measure for each variable for each patient: cluster analysis to determine significant clusters, and a mixture model analysis using a two-component model having a Gaussian distribution with equal variances. The mixture component parameters were used to bootstrap the log likelihood ratio to determine the significant number of components, 1 or 2. These summary measures were used as predictors of disease severity in several proportional odds logistic regression models.
The disease severity scale had 5 levels and was constructed from 3 components: extracapsular penetration (ECP), lymph node involvement (LN+) and seminal vesicle involvement (SV+), which represent surrogate measures of prognosis. The summary measures were not strong predictors of disease severity. There was some indication from the mixture model results that there were changes in mean levels and proportions of the components in the lower severity levels.
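The bimodality coefficient used above can be computed from sample skewness and kurtosis; a sketch on simulated data (the 0.555 threshold is the conventional benchmark for a uniform distribution, not a value from the study):

```python
import numpy as np
from scipy.stats import skew, kurtosis

def bimodality_coefficient(x):
    """Sample bimodality coefficient; values above ~0.555 (the value
    for a uniform distribution) are commonly taken to suggest
    bimodality."""
    n = len(x)
    g = skew(x)
    k = kurtosis(x)  # excess kurtosis
    return (g**2 + 1) / (k + 3 * (n - 1)**2 / ((n - 2) * (n - 3)))

rng = np.random.default_rng(0)
# Unimodal sample vs. a clearly bimodal mixture of two normals.
unimodal = rng.normal(0.0, 1.0, 1000)
bimodal = np.concatenate([rng.normal(-3.0, 1.0, 500),
                          rng.normal(3.0, 1.0, 500)])
bc_uni = bimodality_coefficient(unimodal)
bc_bi = bimodality_coefficient(bimodal)
```

The bimodal mixture has low kurtosis, which drives the coefficient above the 0.555 benchmark, while the normal sample stays near 1/3.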
Abstract:
This research examines the prevalence of alcohol and illicit substance use in the United States and Mexico and associated socio-demographic characteristics. The sources of data for this study are public domain data from the U.S. National Household Survey of Drug Abuse, 1988 (n = 8814), and the Mexican National Survey of Addictions, 1988 (n = 12,579). In addition, this study discusses methodologic issues in cross-cultural and cross-national comparison of behavioral and epidemiologic data from population-based samples. The extent to which patterns of substance abuse vary among subgroups of the U.S. and Mexican populations is assessed, as well as the comparability and equivalence of measures of alcohol and drug use in these national samples. The prevalence of alcohol use was somewhat similar in the two countries for all three measures of use, lifetime, past year and past year heavy use (85.0%, 68.1% and 39.6% for the U.S.; 72.6%, 47.7% and 45.8% for Mexico). The use of illegal substances varied widely between countries, with U.S. respondents reporting significantly higher levels of use than their Mexican counterparts. For example, reported use of any illicit substance in lifetime and past year was 34.2% and 11.6% for the U.S., and 3.3% and 0.6% for Mexico. Despite these differences in prevalence, two demographic characteristics, gender and age, were important correlates of use in both countries. Men in both countries were more likely to report use of alcohol and illicit substances than women.
Generally speaking, a greater proportion of respondents in both countries 18 years of age or older reported use of alcohol for all three measures than younger respondents, and a greater proportion of respondents between the ages of 18 and 34 years reported use of illicit substances during lifetime and past year than any other age group. Additional substantive research investigating population-based samples and at-risk subgroups is needed to understand the underlying mechanisms of these associations. Further development of cross-culturally meaningful survey methods is warranted to validate comparisons of substance use across countries and societies.
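A quick way to check that a between-country prevalence difference of this size is statistically significant is a chi-square test on the 2x2 table; the counts below are rough reconstructions from the reported rates and sample sizes, for illustration only:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Approximate 2x2 table reconstructed from the reported lifetime
# prevalence of any illicit substance use (34.2% of n=8814 in the US,
# 3.3% of n=12,579 in Mexico); counts are rounded illustrations.
us_users, us_n = round(0.342 * 8814), 8814
mx_users, mx_n = round(0.033 * 12579), 12579
table = np.array([[us_users, us_n - us_users],
                  [mx_users, mx_n - mx_users]])

chi2, p, dof, expected = chi2_contingency(table)
```

With samples this large, the chi-square statistic is enormous and the difference is significant at any conventional level.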
Abstract:
Most statistical analysis, in both theory and practice, is concerned with static models: models with a proposed set of parameters whose values are fixed across observational units. Static models implicitly assume that the quantified relationships remain the same across the design space of the data. While this is reasonable under many circumstances, it can be a dangerous assumption when dealing with sequentially ordered data. The mere passage of time always brings fresh considerations, and the interrelationships among parameters, or subsets of parameters, may need to be continually revised. When data are gathered sequentially, dynamic interim monitoring may be useful as new subject-specific parameters are introduced with each new observational unit. Sequential imputation via dynamic hierarchical models is an efficient strategy for handling missing data and analyzing longitudinal studies. Dynamic conditional independence models offer a flexible framework that exploits the Bayesian updating scheme to capture the evolution of both population and individual effects over time. While static models often describe aggregate information well, they often do not reflect conflicts in the information at the individual level. Dynamic models prove advantageous over static models in capturing both individual and aggregate trends. Computations for such models can be carried out via the Gibbs sampler. An application using small-sample, repeated-measures, normally distributed growth curve data is presented.
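The Gibbs sampler referred to above alternates draws from each parameter's full conditional distribution. A generic sketch for the simplest conjugate case (normal data with unknown mean and precision), not the dissertation's hierarchical growth-curve model:

```python
import numpy as np

rng = np.random.default_rng(42)
y = rng.normal(5.0, 2.0, size=50)           # simulated data
n, ybar = len(y), y.mean()

mu, tau = 0.0, 1.0                          # initial values
a0, b0, m0, p0 = 0.01, 0.01, 0.0, 1e-4      # weak conjugate priors
draws_mu = []

for it in range(3000):
    # mu | tau, y is normal (conjugate update)
    prec = n * tau + p0
    mean = (n * tau * ybar + p0 * m0) / prec
    mu = rng.normal(mean, 1.0 / np.sqrt(prec))
    # tau | mu, y is gamma (conjugate update); numpy uses shape, scale
    ss = np.sum((y - mu) ** 2)
    tau = rng.gamma(a0 + n / 2.0, 1.0 / (b0 + ss / 2.0))
    if it >= 500:                           # discard burn-in
        draws_mu.append(mu)

posterior_mean = np.mean(draws_mu)
```

With weak priors the posterior mean of mu sits essentially at the sample mean; in a dynamic hierarchical model the same alternating-draw scheme is applied to the full set of population and subject-specific parameters.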
Abstract:
Many studies in biostatistics deal with binary data. Some of these studies involve correlated observations, which can complicate the analysis of the resulting data. Studies of this kind typically arise when a high degree of commonality exists between test subjects; two examples are measurements on identical twins, and the study of symmetrical organs or appendages, as in ophthalmic studies. If there exists a natural hierarchy in the data, multilevel analysis is an appropriate tool for the analysis. Although this type of matching appears ideal for the purposes of comparison, analysis of the resulting data while ignoring the effect of intra-cluster correlation has been shown to produce biased results. This paper will explore the use of multilevel modeling of simulated binary data with predetermined levels of correlation. Data will be generated using the beta-binomial method with varying degrees of correlation between the lower-level observations. The data will be analyzed using the multilevel software package MLwiN (Woodhouse et al., 1995). Comparisons between the specified intra-cluster correlation of these data and the correlations estimated by multilevel analysis will be used to examine the accuracy of this technique in analyzing this type of data.
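The beta-binomial construction above fixes the intra-cluster correlation through the Beta mixing distribution, since for a Beta(a, b) mixing density rho = 1 / (a + b + 1). A sketch of the data-generation step:

```python
import numpy as np

def simulate_beta_binomial(n_clusters, cluster_size, p, rho, rng):
    """Generate clustered binary counts with marginal mean p and
    intra-cluster correlation rho via the beta-binomial construction:
    choose Beta(a, b) so that a/(a+b) = p and 1/(a+b+1) = rho, draw a
    per-cluster success probability, then draw binomial counts."""
    a = p * (1.0 - rho) / rho
    b = (1.0 - p) * (1.0 - rho) / rho
    cluster_probs = rng.beta(a, b, size=n_clusters)
    return rng.binomial(cluster_size, cluster_probs)

rng = np.random.default_rng(1)
counts = simulate_beta_binomial(2000, 10, p=0.3, rho=0.1, rng=rng)
mean_prop = counts.mean() / 10
prop_var = np.var(counts / 10)
```

The cluster-level variance is inflated above the plain binomial value p(1-p)/m by the factor 1 + (m-1)rho, which is what the multilevel estimates are compared against.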
Abstract:
The Food and Drug Administration (FDA) is responsible for risk assessment and risk management in the post-market surveillance of the U.S. medical device industry. One of the FDA's regulatory mechanisms, the Medical Device Reporting System (MDR), is an adverse event reporting system intended to provide the FDA with advance warning of device problems. It includes voluntary reporting for individuals and mandatory reporting for device manufacturers. In a study of alleged breast implant safety problems, this research examines the organizational processes by which the FDA gathers data on adverse events and uses adverse event reporting systems to assess and manage risk. The research reviews the literature on problem recognition, risk perception, and organizational learning to understand the influence highly publicized events may have on adverse event reporting. Understanding the influence of an environmental factor, such as publicity, on adverse event reporting can provide insight into whether the FDA's adverse event reporting system operates as an early warning system for medical device problems. The research focuses on two main questions. The first addresses the relationship between publicity and the voluntary and mandatory reporting of adverse events. The second examines whether government agencies make use of these adverse event reports. Using quantitative and qualitative methods, a longitudinal study was conducted of the number and content of adverse event reports regarding breast implants filed with the FDA's medical device reporting system during 1985–1991. To assess variation in publicity over time, the print media were analyzed to identify articles related to breast implant failures. The exploratory findings suggest that an increase in media activity is related to an increase in voluntary reporting, especially following periods of intense media coverage of the FDA.
However, a similar relationship was not found between media activity and manufacturers' mandatory adverse event reporting. A review of government committee and agency reports on the FDA published during 1976–1996 produced little evidence to suggest that publicity or MDR information contributed to problem recognition, agenda setting, or the formulation of policy recommendations. The research findings suggest that the reporting of breast implant problems to the FDA may reflect the perceptions and concerns of the reporting groups, serving as a barometer of the volume and content of media attention.
Abstract:
(1) A mathematical theory for computing the probabilities of various nucleotide configurations is developed, and the probability of obtaining the correct phylogenetic tree (model tree) from sequence data is evaluated for six phylogenetic tree-making methods (UPGMA, the distance Wagner method, the transformed distance method, the Fitch-Margoliash method, the maximum parsimony method, and the compatibility method). The number of nucleotides (m*) necessary to obtain the correct tree with a probability of 95% is estimated with special reference to the human, chimpanzee, and gorilla divergence. m* is at least 4,200, but the availability of outgroup species greatly reduces m* for all methods except UPGMA. m* increases if transitions occur more frequently than transversions, as in the case of mitochondrial DNA. (2) A new tree-making method called the neighbor-joining method is proposed. This method is applicable to either distance data or character state data. Computer simulation has shown that the neighbor-joining method is generally better than UPGMA, Farris' method, Li's method, and the modified Farris method at recovering the true topology when distance data are used. A related method, the simultaneous partitioning method, is also discussed. (3) The maximum likelihood (ML) method for phylogeny reconstruction under the assumption of both constant and varying evolutionary rates is studied, and a new algorithm for obtaining the ML tree is presented. This method gives a tree similar to that obtained by UPGMA when a constant evolutionary rate is assumed, whereas it gives a tree similar to those obtained by the maximum parsimony method and the neighbor-joining method when varying evolutionary rates are assumed.
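The pair-selection step at the heart of the neighbor-joining method can be sketched as follows; the 4-taxon distance matrix is an illustrative additive example, not data from the study:

```python
import numpy as np

def nj_select_pair(D):
    """One neighbor-joining pair-selection step: build the Q matrix
    Q(i, j) = (n - 2) * d(i, j) - r_i - r_j, where r_i is the i-th row
    sum of the distance matrix, and return the pair minimizing Q."""
    n = D.shape[0]
    r = D.sum(axis=1)
    Q = (n - 2) * D - r[:, None] - r[None, :]
    np.fill_diagonal(Q, np.inf)          # never join a taxon to itself
    i, j = np.unravel_index(np.argmin(Q), Q.shape)
    return tuple(sorted((i, j)))

# Additive distances on the tree ((A,B),(C,D)): the true neighbors
# A,B (or the symmetric pair C,D) should be selected for joining.
D = np.array([[0, 2, 3, 4],
              [2, 0, 3, 4],
              [3, 3, 0, 3],
              [4, 4, 3, 0]], dtype=float)
pair = nj_select_pair(D)
```

The full algorithm repeats this step, replacing the joined pair with a new internal node, until the tree is resolved; unlike UPGMA, the Q criterion does not assume a constant evolutionary rate.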
Abstract:
Background: Erythropoiesis-stimulating agents (ESAs) reduce the need for red blood cell transfusions; however, they increase the risk of thromboembolic events and mortality. The impact of ESAs on quality of life (QoL) is controversial and has led to different recommendations by medical societies and authorities in the USA and Europe. We aimed to critically evaluate and quantify the effects of ESAs on QoL in cancer patients. Methods: We included data from randomised controlled trials (RCTs) on the effects of ESAs on QoL in cancer patients. Randomised controlled trials were identified by searching electronic databases and other sources up to January 2011. To reduce publication and outcome reporting biases, we included unreported results from clinical study reports. We conducted meta-analyses on fatigue- and anaemia-related symptoms measured with the Functional Assessment of Cancer Therapy-Fatigue (FACT-F) and FACT-Anaemia (FACT-An) subscales (primary outcomes) or other validated instruments. Results: We identified 58 eligible RCTs. Clinical study reports were available for 27% (4 out of 15) of the investigator-initiated trials and 95% (41 out of 43) of the industry-initiated trials. We excluded 21 RCTs as we could not use their QoL data for meta-analyses, either because of incomplete reporting (17 RCTs) or because of premature closure of the trial (4 RCTs). We included 37 RCTs with 10 581 patients; 21 RCTs were placebo controlled. Chemotherapy was given in 27 of the 37 RCTs. The median baseline haemoglobin (Hb) level was 10.1 g dl(-1); in 8 studies ESAs were stopped at Hb levels below 13 g dl(-1) and in 27 above 13 g dl(-1). For FACT-F, the mean difference (MD) was 2.41 (95% confidence interval (95% CI) 1.39-3.43; P<0.0001; 23 studies, n=6108) in all cancer patients and 2.81 (95% CI 1.73-3.90; P<0.0001; 19 RCTs, n=4697) in patients receiving chemotherapy, which was below the threshold (⩾3) for a clinically important difference (CID).
Erythropoiesis-stimulating agents had a positive effect on anaemia-related symptoms (MD 4.09; 95% CI 2.37-5.80; P=0.001; 14 studies, n=2765) in all cancer patients and 4.50 (95% CI 2.55-6.45; P<0.0001; 11 RCTs, n=2436) in patients receiving chemotherapy, which was above the threshold (⩾4) for a CID. Of note, this effect persisted when we restricted the analysis to placebo-controlled RCTs in patients receiving chemotherapy. There was some evidence that the MDs for FACT-F were above the threshold for a CID in RCTs including cancer patients receiving chemotherapy with Hb levels below 12 g dl(-1) at baseline and in RCTs stopping ESAs at Hb levels above 13 g dl(-1). However, these findings for FACT-F were not confirmed when we restricted the analysis to placebo-controlled RCTs in patients receiving chemotherapy. Conclusions: In cancer patients, particularly those receiving chemotherapy, we found that ESAs provide a small but clinically important improvement in anaemia-related symptoms (FACT-An). For fatigue-related symptoms (FACT-F), the overall effect did not reach the threshold for a CID. British Journal of Cancer advance online publication, 17 April 2014; doi:10.1038/bjc.2014.171 www.bjcancer.com.
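Pooled mean differences like those above come from inverse-variance weighting of the per-trial estimates. A minimal fixed-effect sketch with hypothetical per-trial values (the actual analysis may have used a random-effects model):

```python
import numpy as np

def pool_mean_differences(md, se):
    """Fixed-effect inverse-variance meta-analysis of mean differences,
    returning the pooled estimate with a 95% confidence interval."""
    md, se = np.asarray(md, float), np.asarray(se, float)
    w = 1.0 / se**2                      # weight = 1 / variance
    pooled = np.sum(w * md) / np.sum(w)
    se_pooled = 1.0 / np.sqrt(np.sum(w))
    return pooled, pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled

# Hypothetical per-trial FACT-F mean differences and standard errors.
md, lo, hi = pool_mean_differences([2.0, 3.1, 1.8], [0.9, 1.2, 0.7])
```

A pooled MD below 3 here would be interpreted, as in the abstract, as failing to reach the clinically important difference for FACT-F even if statistically significant.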
Abstract:
BACKGROUND AND AIMS: Internet-based surveys provide a potentially important tool for Inflammatory Bowel Disease (IBD) research. The advantages include low cost, large numbers of participants, rapid study completion and less extensive infrastructure than traditional methods. The aim was to determine the accuracy of patient self-reporting in internet-based IBD research and to identify predictors of greater reliability. METHODS: 197 patients from a tertiary care center answered an online survey concerning personal medical history and an evaluation of disease-specific knowledge. Self-reported medical details were compared with data abstracted from medical records. Agreement was assessed by kappa (κ) statistics. RESULTS: Participants responded correctly with excellent agreement (κ=0.96-0.97) on subtype of IBD and history of surgery. The agreement was also excellent for colectomy (κ=0.88) and small bowel resection (κ=0.91), moderate for abscesses and fistulas (κ=0.60 and 0.63), but poor regarding partial colectomy (κ=0.39). Time since last colonoscopy was self-reported with better agreement (κ=0.84) than disease activity. For disease location/extent, moderate agreement (κ=0.69 and 0.64) was observed for patients with Crohn's disease and ulcerative colitis, respectively. Subjects who scored higher than average in the IBD knowledge assessment were significantly more accurate about disease location than those who scored lower (74% vs. 59%, p=0.02). CONCLUSION: This study demonstrates that IBD patients accurately report their medical history regarding type of disease and surgical procedures. More detailed medical information is less reliably reported. Disease knowledge assessment may help in identifying the most accurate individuals and could therefore serve as a validity criterion. Internet-based surveys are feasible with high reliability for basic disease features only.
However, the participants in this study were recruited at a tertiary care center, which potentially introduces bias and limits generalization to an unselected patient group.
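The kappa statistic used throughout the study corrects raw agreement for the agreement expected by chance. A minimal sketch with toy ratings:

```python
import numpy as np

def cohen_kappa(a, b):
    """Unweighted Cohen's kappa: chance-corrected agreement between two
    categorical ratings of the same subjects,
    kappa = (p_observed - p_expected) / (1 - p_expected)."""
    a, b = np.asarray(a), np.asarray(b)
    cats = np.union1d(a, b)
    po = np.mean(a == b)                          # observed agreement
    pe = sum(np.mean(a == c) * np.mean(b == c)    # expected by chance
             for c in cats)
    return (po - pe) / (1.0 - pe)

# Toy data: survey self-report vs. chart abstraction
# (1 = surgery reported, 0 = none).
kappa = cohen_kappa([1, 1, 0, 0], [1, 0, 0, 0])
```

Here the raters agree on 3 of 4 subjects (75%), but half of that agreement is expected by chance, giving kappa = 0.5 rather than 0.75.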
Abstract:
Background: Patients presenting to the emergency department (ED) currently face unacceptable delays in initial treatment, and long, costly hospital stays due to suboptimal initial triage and site-of-care decisions. Accurate ED triage should focus not only on initial treatment priority, but also on prediction of medical risk and nursing needs to improve site-of-care decisions and to simplify early discharge management. Different triage scores have been proposed, such as the Manchester triage system (MTS). Yet these scores focus only on treatment priority, have suboptimal performance and lack validation in the Swiss health care system. Because the MTS will be introduced into clinical routine at the Kantonsspital Aarau, we propose a large prospective cohort study to optimize initial patient triage. Specifically, the aim of this trial is to derive a three-part triage algorithm to better predict (a) treatment priority; (b) medical risk and thus need for in-hospital treatment; and (c) post-acute care needs of patients at the most proximal time point of ED admission. Methods/design: Prospective, observational, multicenter, multi-national cohort study. We will include all consecutive medical patients seeking ED care in this observational registry. There will be no exclusions except for non-adult and non-medical patients. Vital signs will be recorded, and leftover blood samples will be stored for later batch analysis of blood markers. Upon ED admission, the post-acute care discharge score (PACD) will be recorded. Attending ED physicians will adjudicate triage priority based on all available results at the time of ED discharge to the medical ward. Patients will be reassessed daily during the hospital course for medical stability and readiness for discharge from the nurses' and, if involved, social workers' perspectives.
To assess outcomes, data from electronic medical records will be used, and all patients will be contacted 30 days after hospital admission to assess vital and functional status, re-hospitalization, satisfaction with care and quality of life measures. We aim to include between 5000 and 7000 patients over one year of recruitment to derive the three-part triage algorithm. The respective main endpoints are defined as (a) initial triage priority (high vs. low priority) adjudicated by the attending ED physician at ED discharge; (b) adverse 30-day outcome (death or intensive care unit admission) within 30 days following ED admission, to assess patients' risk and thus need for in-hospital treatment; and (c) post-acute care needs after hospital discharge, defined as transfer of patients to a post-acute care institution, for early recognition and planning of post-acute care needs. Other outcomes are time to first physician contact, time to initiation of adequate medical therapy, time to social worker involvement, length of hospital stay, reasons for discharge delays, patients' satisfaction with care, overall hospital costs and patients' care needs after returning home. Discussion: Using a reliable initial triage system to estimate initial treatment priority, need for in-hospital treatment and post-acute care needs is an innovative and persuasive approach for a more targeted and efficient management of medical patients in the ED. The proposed interdisciplinary, multi-national project has unprecedented potential to improve initial triage decisions and optimize resource allocation to the sickest patients from admission to discharge. The algorithms derived in this study will be compared in a later randomized controlled trial against a usual care control group in terms of resource use, length of hospital stay, overall costs and patient outcomes in terms of mortality, re-hospitalization, quality of life and satisfaction with care.
Abstract:
OBJECTIVE To investigate whether revascularisation improves prognosis compared with medical treatment among patients with stable coronary artery disease. DESIGN Bayesian network meta-analyses to combine direct within trial comparisons between treatments with indirect evidence from other trials while maintaining randomisation. ELIGIBILITY CRITERIA FOR SELECTING STUDIES A strategy of initial medical treatment compared with revascularisation by coronary artery bypass grafting or Food and Drug Administration approved techniques for percutaneous revascularisation: balloon angioplasty, bare metal stent, early generation paclitaxel eluting stent, sirolimus eluting stent, and zotarolimus eluting (Endeavor) stent, and new generation everolimus eluting stent and zotarolimus eluting (Resolute) stent among patients with stable coronary artery disease. DATA SOURCES Medline and Embase from 1980 to 2013 for randomised trials comparing medical treatment with revascularisation. MAIN OUTCOME MEASURE All cause mortality. RESULTS 100 trials in 93 553 patients with 262 090 patient years of follow-up were included. Coronary artery bypass grafting was associated with a survival benefit (rate ratio 0.80, 95% credibility interval 0.70 to 0.91) compared with medical treatment. New generation drug eluting stents (everolimus: 0.75, 0.59 to 0.96; zotarolimus (Resolute): 0.65, 0.42 to 1.00) but not balloon angioplasty (0.85, 0.68 to 1.04), bare metal stents (0.92, 0.79 to 1.05), or early generation drug eluting stents (paclitaxel: 0.92, 0.75 to 1.12; sirolimus: 0.91, 0.75 to 1.10; zotarolimus (Endeavor): 0.88, 0.69 to 1.10) were associated with improved survival compared with medical treatment. Coronary artery bypass grafting reduced the risk of myocardial infarction compared with medical treatment (0.79, 0.63 to 0.99), and everolimus eluting stents showed a trend towards a reduced risk of myocardial infarction (0.75, 0.55 to 1.01).
The risk of subsequent revascularisation was noticeably reduced by coronary artery bypass grafting (0.16, 0.13 to 0.20) followed by new generation drug eluting stents (zotarolimus (Resolute): 0.26, 0.17 to 0.40; everolimus: 0.27, 0.21 to 0.35), early generation drug eluting stents (zotarolimus (Endeavor): 0.37, 0.28 to 0.50; sirolimus: 0.29, 0.24 to 0.36; paclitaxel: 0.44, 0.35 to 0.54), and bare metal stents (0.69, 0.59 to 0.81) compared with medical treatment. CONCLUSION Among patients with stable coronary artery disease, coronary artery bypass grafting reduces the risk of death, myocardial infarction, and subsequent revascularisation compared with medical treatment. All stent based coronary revascularisation technologies reduce the need for revascularisation to a variable degree. Our results provide evidence for improved survival with new generation drug eluting stents but no other percutaneous revascularisation technology compared with medical treatment.
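Given only the reported rate ratios and intervals versus medical treatment, two revascularisation strategies can be compared indirectly on the log scale. This is a rough frequentist, Bucher-style approximation, not the Bayesian network meta-analysis actually used:

```python
import numpy as np

def se_from_ci(lower, upper, z=1.96):
    """Back out the standard error of a log rate ratio from a reported
    95% interval."""
    return (np.log(upper) - np.log(lower)) / (2 * z)

def indirect_rr(rr_a, ci_a, rr_b, ci_b):
    """Bucher-style indirect comparison of treatments A and B through a
    common comparator (here, medical treatment): the log rate ratios
    subtract and their variances add."""
    log_rr = np.log(rr_a) - np.log(rr_b)
    se = np.hypot(se_from_ci(*ci_a), se_from_ci(*ci_b))
    return (np.exp(log_rr),
            np.exp(log_rr - 1.96 * se),
            np.exp(log_rr + 1.96 * se))

# Everolimus eluting stents (0.75, 0.59 to 0.96) vs. CABG
# (0.80, 0.70 to 0.91), each versus medical treatment, using the
# mortality rate ratios reported above.
rr, lo, hi = indirect_rr(0.75, (0.59, 0.96), 0.80, (0.70, 0.91))
```

The wide resulting interval crosses 1, illustrating why head-to-head conclusions between active treatments need the full network rather than pairwise arithmetic.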
Abstract:
BACKGROUND Cardiac events (CEs) are among the most serious late effects following childhood cancer treatment. To establish accurate risk estimates for the occurrence of CEs, it is essential that they are graded in a valid and consistent manner, especially for international studies. We therefore developed a data-extraction form and a set of flowcharts to grade CEs and tested the validity and consistency of this approach in a series of patients. METHODS The Common Terminology Criteria for Adverse Events versions 3.0 and 4.0 were used to define the CEs. Forty patients were randomly selected from a cohort of 72 subjects with known CEs that had been graded by a physician for an earlier study. To establish whether the new method was valid for appropriate grading, a non-physician graded the CEs using the new method. To evaluate consistency of the grading, the same charts were graded again by two other non-physicians, one receiving a brief introduction and one receiving extensive training on the new method. We calculated weighted kappa statistics to quantify inter-observer agreement. RESULTS The inter-observer agreement was 0.92 (95% CI 0.80-1.00) for validity, and 0.88 (0.79-0.98) and 0.99 (0.96-1.00) for consistency with the outcome assessors who had the brief introduction and the extensive training, respectively. CONCLUSIONS The newly developed standardized method to grade CEs using data from medical records has shown excellent validity and consistency. The study showed that the method can be correctly applied by researchers without a medical background, provided that they receive adequate training.
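The weighted kappa reported above penalizes disagreements by their distance on the ordinal grading scale. A sketch with quadratic weights and hypothetical grades (the study does not specify its weighting scheme, so quadratic weights are an assumption here):

```python
import numpy as np

def weighted_kappa(a, b, n_cat):
    """Weighted Cohen's kappa with quadratic disagreement weights, a
    common choice for ordinal grades such as adverse event severity."""
    O = np.zeros((n_cat, n_cat))
    for x, y in zip(a, b):
        O[x, y] += 1
    O /= O.sum()                                  # observed proportions
    E = np.outer(O.sum(axis=1), O.sum(axis=0))    # chance-expected table
    i, j = np.indices((n_cat, n_cat))
    W = ((i - j) / (n_cat - 1)) ** 2              # quadratic weights
    return 1.0 - (W * O).sum() / (W * E).sum()

# Hypothetical severity grades (0-4) from two raters on ten patients;
# they disagree by one grade on a single patient.
rater1 = [0, 1, 2, 3, 4, 2, 1, 0, 3, 2]
rater2 = [0, 1, 2, 3, 4, 2, 1, 0, 2, 2]
kappa_w = weighted_kappa(rater1, rater2, n_cat=5)
```

Because the single disagreement is only one grade apart, the quadratic weighting keeps the agreement near 1, whereas an unweighted kappa would penalize it as a full disagreement.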
Abstract:
QUESTIONS UNDER STUDY To improve the response to deteriorating patients during their hospital stay, the University Hospital Bern has introduced a Medical Emergency Team (MET). The aim of this retrospective cohort study is to review the preceding factors, patient characteristics, process parameters and their correlation with patient outcomes of MET calls since the introduction of the team. METHODS Data on patient characteristics, parameters related to MET activation and intervention, and patient outcomes were evaluated. A Vital Sign Score (VSS), defined as the sum of the occurrences of each vital sign abnormality, was calculated from all physiological parameters before and during the MET event and correlated with hospital outcomes. RESULTS A total of 1,628 MET calls in 1,317 patients occurred; 262 (19.9%) of patients with MET calls during their hospital stay died. The VSS before the MET event (odds ratio [OR] 1.78, 95% confidence interval [CI] 1.50-2.13; AUROC 0.63; all p <0.0001) and during the MET call (OR 1.60, 95% CI 1.41-1.83; AUROC 0.62; all p <0.0001) was significantly correlated with patient outcomes. A significant increase in MET calls from 5.2 to 16.5 per 1000 hospital admissions (p <0.0001) and a decrease in cardiac arrest calls in the MET perimeter from 1.6 in 2008 to 0.8 per 1000 admissions (p = 0.014) were observed during the study period. CONCLUSIONS The VSS is a significant predictor of mortality in patients assessed by the MET. Increasing MET utilisation coincided with a decrease in cardiac arrest calls in the MET perimeter.
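AUROC values like those reported above can be computed directly from the Mann-Whitney formulation. A sketch with hypothetical scores and outcomes:

```python
import numpy as np

def auroc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney formulation: the
    probability that a randomly chosen case with the outcome has a
    higher score than a randomly chosen case without it, counting ties
    as one half."""
    scores, labels = np.asarray(scores, float), np.asarray(labels)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

# Hypothetical pre-event Vital Sign Scores with in-hospital death flags
# (1 = died); values are illustrative, not study data.
vss =  [0, 1, 1, 2, 3, 3, 4, 5]
died = [0, 0, 0, 0, 1, 0, 1, 1]
auc = auroc(vss, died)
```

An AUROC of 0.5 indicates no discrimination and 1.0 perfect discrimination; the study's values around 0.62-0.63 indicate modest discriminative ability.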
Abstract:
Background: According to the World Health Organization, stroke is the 'incoming epidemic of the 21st century'. In light of recent data suggesting that 85% of all strokes may be preventable, strategies for prevention are moving to the forefront of stroke management. Summary: This review discusses the risk factors and provides evidence on effective medical interventions and lifestyle modifications for optimal stroke prevention. Key Messages: Stroke risk can be substantially reduced using the medical measures that have been proven in many randomized trials, in combination with effective lifestyle modifications. Global modification of health and lifestyle is more beneficial than the treatment of individual risk factors. Clinical Implications: Hypertension is the most important modifiable risk factor for stroke. Efficacious reduction of blood pressure is essential for stroke prevention, even more so than the choice of antihypertensive drugs. Indications for the use of antihypertensive drugs depend on blood pressure values and vascular risk profile; thus, treatment should be initiated earlier in patients with diabetes mellitus or in those with a high vascular risk profile. Treatment of dyslipidemia with statins, anticoagulation therapy in atrial fibrillation, and carotid endarterectomy in symptomatic high-grade carotid stenosis are also effective for stroke prevention. Lifestyle factors that have been proven to reduce stroke risk include reducing salt intake, eliminating smoking, performing regular physical activity, and maintaining a normal body weight. © 2015 S. Karger AG, Basel.