917 results for Adult intensive care
Abstract:
PURPOSE To develop internationally harmonised standards for programmes of training in intensive care medicine (ICM). METHODS Standards were developed by using consensus techniques. A nine-member nominal group of European intensive care experts developed a preliminary set of standards. These were revised and refined through a modified Delphi process involving 28 European national coordinators representing national training organisations using a combination of moderated discussion meetings, email, and a Web-based tool for determining the level of agreement with each proposed standard, and whether the standard could be achieved in the respondent's country. RESULTS The nominal group developed an initial set of 52 possible standards which underwent four iterations to achieve maximal consensus. All national coordinators approved a final set of 29 standards in four domains: training centres, training programmes, selection of trainees, and trainers' profiles. Only three standards were considered immediately achievable by all countries, demonstrating a willingness to aspire to quality rather than merely setting a minimum level. Nine proposed standards which did not achieve full consensus were identified as potential candidates for future review. CONCLUSIONS This preliminary set of clearly defined and agreed standards provides a transparent framework for assuring the quality of training programmes, and a foundation for international harmonisation and quality improvement of training in ICM.
Abstract:
INTRODUCTION Community acquired pneumonia (CAP) is the most common infectious reason for admission to the Intensive Care Unit (ICU). The GenOSept study was designed to determine genetic influences on sepsis outcome. Phenotypic data was recorded using a robust clinical database allowing a contemporary analysis of the clinical characteristics, microbiology, outcomes and independent risk factors in patients with severe CAP admitted to ICUs across Europe. METHODS Kaplan-Meier analysis was used to determine mortality rates. A Cox Proportional Hazards (PH) model was used to identify variables independently associated with 28-day and six-month mortality. RESULTS Data from 1166 patients admitted to 102 centres across 17 countries was extracted. Median age was 64 years, 62% were male. Mortality rate at 28 days was 17%, rising to 27% at six months. Streptococcus pneumoniae was the commonest organism isolated (28% of cases) with no organism identified in 36%. Independent risk factors associated with an increased risk of death at six months included APACHE II score (hazard ratio, HR, 1.03; confidence interval, CI, 1.01-1.05), bilateral pulmonary infiltrates (HR 1.44; CI 1.11-1.87) and ventilator support (HR 3.04; CI 1.64-5.62). Haematocrit, pH and urine volume on day one were all associated with a worse outcome. CONCLUSIONS The mortality rate in patients with severe CAP admitted to European ICUs was 27% at six months. Streptococcus pneumoniae was the commonest organism isolated. In many cases the infecting organism was not identified. Ventilator support, the presence of diffuse pulmonary infiltrates, lower haematocrit, urine volume and pH on admission were independent predictors of a worse outcome.
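The survival analyses named above (Kaplan-Meier mortality estimates and a Cox proportional hazards model for independent predictors of six-month death) can be sketched as follows. The abstract does not specify the software or variable coding used, so the `lifelines` package, the synthetic data frame, and the column names below are illustrative assumptions only.

```python
# Hedged sketch of Kaplan-Meier and Cox PH analyses, assuming per-patient
# follow-up censored at 180 days; all data below are simulated placeholders.
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "apache_ii": rng.integers(5, 35, n),            # hypothetical covariates
    "bilateral_infiltrates": rng.integers(0, 2, n),
    "ventilated": rng.integers(0, 2, n),
})
time = rng.exponential(200, n)                      # simulated survival times (days)
df["died"] = (time <= 180).astype(int)              # death observed within 6 months
df["time_days"] = np.minimum(time, 180)             # censor follow-up at 180 days

# Kaplan-Meier estimate of survival over the 6-month follow-up.
kmf = KaplanMeierFitter()
kmf.fit(durations=df["time_days"], event_observed=df["died"])
print(kmf.median_survival_time_)

# Cox PH model for independent associations with 6-month mortality.
cph = CoxPHFitter()
cph.fit(df, duration_col="time_days", event_col="died")
cph.print_summary()  # hazard ratios with 95% confidence intervals
```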
Abstract:
INTRODUCTION Faecal peritonitis (FP) is a common cause of sepsis and admission to the intensive care unit (ICU). The Genetics of Sepsis and Septic Shock in Europe (GenOSept) project is investigating the influence of genetic variation on the host response and outcomes in a large cohort of patients with sepsis admitted to ICUs across Europe. Here we report an epidemiological survey of the subset of patients with FP. OBJECTIVES To define the clinical characteristics, outcomes and risk factors for mortality in patients with FP admitted to ICUs across Europe. METHODS Data was extracted from electronic case report forms. Phenotypic data was recorded using a detailed, quality-assured clinical database. The primary outcome measure was 6-month mortality. Patients were followed for 6 months. Kaplan-Meier analysis was used to determine mortality rates. Cox proportional hazards regression analysis was employed to identify independent risk factors for mortality. RESULTS Data for 977 FP patients admitted to 102 centres across 16 countries between 29 September 2005 and 5 January 2011 was extracted. The median age was 69.2 years (IQR 58.3-77.1), with a male preponderance (54.3%). The most common causes of FP were perforated diverticular disease (32.1%) and surgical anastomotic breakdown (31.1%). The ICU mortality rate at 28 days was 19.1%, increasing to 31.6% at 6 months. The cause of FP, pre-existing co-morbidities and time from estimated onset of symptoms to surgery did not impact on survival. The strongest independent risk factors associated with an increased rate of death at 6 months included age, higher APACHE II score, acute renal and cardiovascular dysfunction within 1 week of admission to ICU, hypothermia, lower haematocrit and bradycardia on day 1 of ICU stay. CONCLUSIONS In this large cohort of patients admitted to European ICUs with FP the 6 month mortality was 31.6%. The most consistent predictors of mortality across all time points were increased age, development of acute renal dysfunction during the first week of admission, lower haematocrit and hypothermia on day 1 of ICU admission.
Abstract:
INTRODUCTION Patients admitted to intensive care following surgery for faecal peritonitis present particular challenges in terms of clinical management and risk assessment. Collaborating surgical and intensive care teams need shared perspectives on prognosis. We aimed to determine the relationship between dynamic assessment of trends in selected variables and outcomes. METHODS We analysed trends in physiological and laboratory variables during the first week of intensive care unit (ICU) stay in 977 patients at 102 centres across 16 European countries. The primary outcome was 6-month mortality. Secondary endpoints were ICU, hospital and 28-day mortality. For each trend, Cox proportional hazards (PH) regression analyses, adjusted for age and sex, were performed for each endpoint. RESULTS Trends over the first 7 days of the ICU stay independently associated with 6-month mortality were worsening thrombocytopaenia (mortality: hazard ratio (HR) = 1.02; 95% confidence interval (CI), 1.01 to 1.03; P < 0.001) and renal function (total daily urine output: HR = 1.02; 95% CI, 1.01 to 1.03; P < 0.001; Sequential Organ Failure Assessment (SOFA) renal subscore: HR = 0.87; 95% CI, 0.75 to 0.99; P = 0.047), maximum bilirubin level (HR = 0.99; 95% CI, 0.99 to 0.99; P = 0.02) and Glasgow Coma Scale (GCS) SOFA subscore (HR = 0.81; 95% CI, 0.68 to 0.98; P = 0.028). Changes in renal function (total daily urine output and renal component of the SOFA score), GCS component of the SOFA score, total SOFA score and worsening thrombocytopaenia were also independently associated with secondary outcomes (ICU, hospital and 28-day mortality). We detected the same pattern when we analysed trends on days 2, 3 and 5. Dynamic trends in all other measured laboratory and physiological variables, and in radiological findings, changes in respiratory support, renal replacement therapy and inotrope and/or vasopressor requirements failed to be retained as independently associated with outcome in multivariate analysis. CONCLUSIONS Only deterioration in renal function, thrombocytopaenia and SOFA score over the first 2, 3, 5 and 7 days of the ICU stay were consistently associated with mortality at all endpoints. These findings may help to inform clinical decision making in patients with this common cause of critical illness.
Abstract:
OBJECTIVES Human studies on the role of mannose-binding lectin (MBL) in patients with invasive candidiasis have yielded conflicting results. We investigated the influence of MBL and other lectin pathway proteins on Candida colonization and intra-abdominal candidiasis (IAC) in a cohort of high-risk patients. METHODS Prospective observational cohort study of 89 high-risk intensive-care unit (ICU) patients. Levels of lectin pathway proteins at study entry and six MBL2 single-nucleotide polymorphisms were analyzed by sandwich-type immunoassays and genotyping, respectively, and correlated with development of heavy Candida colonization (corrected colonization index (CCI) ≥0.4) and occurrence of IAC during a 4-week period. RESULTS Within 4 weeks after inclusion, a CCI ≥0.4 and IAC were observed in 47% and 38% of patients, respectively. Neither serum levels of MBL, ficolin-1, -2, -3, MASP-2 or collectin liver 1 nor MBL2 genotypes were associated with a CCI ≥0.4. Similarly, none of the analyzed proteins was found to be associated with IAC, with the exception of lower MBL levels (HR 0.74, p = 0.02) at study entry. However, there was no association of MBL deficiency (<0.5 μg/ml), MBL2 haplo- or genotypes with IAC. CONCLUSION Lectin pathway protein levels and MBL2 genotype investigated in this study were not associated with heavy Candida colonization or IAC in a cohort of high-risk ICU patients.
Abstract:
INTRODUCTION Dexmedetomidine was shown in two European randomized double-blind double-dummy trials (PRODEX and MIDEX) to be non-inferior to propofol and midazolam in maintaining target sedation levels in mechanically ventilated intensive care unit (ICU) patients. Additionally, dexmedetomidine shortened the time to extubation versus both standard sedatives, suggesting that it may reduce ICU resource needs and thus lower ICU costs. Considering resource utilization data from these two trials, we performed a secondary, cost-minimization analysis assessing the economics of dexmedetomidine versus standard care sedation. METHODS The total ICU costs associated with each study sedative were calculated on the basis of total study sedative consumption and the number of days patients remained intubated, required non-invasive ventilation, or required ICU care without mechanical ventilation. The daily unit costs for these three consecutive ICU periods were set to decline toward discharge, reflecting the observed reduction in mean daily Therapeutic Intervention Scoring System (TISS) points between the periods. A number of additional sensitivity analyses were performed, including one in which the total ICU costs were based on the cumulative sum of daily TISS points over the ICU period, and two further scenarios, with declining direct variable daily costs only. RESULTS Based on pooled data from both trials, sedation with dexmedetomidine resulted in lower total ICU costs than using the standard sedatives, with a difference of €2,656 in the median (interquartile range) total ICU costs, €11,864 (€7,070 to €23,457) versus €14,520 (€7,871 to €26,254), and €1,649 in the mean total ICU costs. The median (mean) total ICU costs with dexmedetomidine compared with those of propofol or midazolam were €1,292 (€747) and €3,573 (€2,536) lower, respectively. The result was robust, indicating lower costs with dexmedetomidine in all sensitivity analyses, including those in which only direct variable ICU costs were considered. The likelihood of dexmedetomidine resulting in lower total ICU costs compared with pooled standard care was 91.0% (72.4% versus propofol and 98.0% versus midazolam). CONCLUSIONS From an economic point of view, dexmedetomidine appears to be a preferable option compared with standard sedatives for providing light to moderate ICU sedation exceeding 24 hours. The savings potential results primarily from shorter time to extubation. TRIAL REGISTRATION ClinicalTrials.gov NCT00479661 (PRODEX), NCT00481312 (MIDEX).
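The cost bookkeeping described in the methods, total ICU cost as the sum of sedative cost and days spent in three consecutive ICU periods multiplied by declining daily unit costs, can be illustrated with a minimal sketch. The daily unit costs and example durations below are placeholders, not the TISS-derived values used in the published analysis.

```python
# Illustrative cost-minimization arithmetic: sedative cost plus
# (days in period x daily unit cost) over three consecutive ICU periods,
# with unit costs declining toward discharge. Values are hypothetical.
DAILY_COST_EUR = {
    "intubated": 2000.0,
    "noninvasive_ventilation": 1500.0,
    "icu_no_ventilation": 1000.0,
}

def total_icu_cost(sedative_cost_eur, days_by_period):
    """Total ICU cost for one patient: sedative cost + per-period stay costs."""
    period_cost = sum(DAILY_COST_EUR[p] * d for p, d in days_by_period.items())
    return sedative_cost_eur + period_cost

# Example: EUR 400 of study sedative, 3 intubated days, 1 day of NIV,
# then 2 further ICU days without mechanical ventilation.
print(total_icu_cost(400.0, {"intubated": 3,
                             "noninvasive_ventilation": 1,
                             "icu_no_ventilation": 2}))
```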
Abstract:
Coronary artery bypass graft (CABG) surgery is among the most common operations performed in the United States and accounts for more resources expended in cardiovascular medicine than any other single procedure. CABG surgery patients initially recover in the Cardiovascular Intensive Care Unit (CVICU). The post-procedure CVICU length of stay (LOS) goal is two days or less. A longer ICU LOS is associated with a prolonged hospital LOS, poor health outcomes, greater use of limited resources, and increased medical costs. Research has shown that experienced clinicians can predict LOS no better than chance. Current CABG surgery LOS risk models differ greatly in generalizability and ease of use in the clinical setting. A predictive model that identified modifiable pre- and intra-operative risk factors for CVICU LOS greater than two days could have major public health implications, as modification of these identified factors could decrease CVICU LOS and potentially minimize morbidity and mortality, optimize use of limited health care resources, and decrease medical costs. The primary aim of this study was to identify modifiable pre- and intra-operative predictors of CVICU LOS greater than two days for CABG surgery patients with cardiopulmonary bypass (CPB). A secondary aim was to build a probability equation for CVICU LOS greater than two days. Data were extracted from 416 medical records of CABG surgery patients with CPB, 50 to 80 years of age, recovered in the CVICU of a large teaching, referral hospital in southeastern Texas, during the calendar year 2004 and the first quarter of 2005. Exclusion criteria included Diagnosis Related Group (DRG) 106, CABG surgery without CPB, CABG surgery with other procedures, and operative deaths. The data were analyzed using multivariate logistic regression for an alpha = 0.05, power = 0.80, and correlation = 0.26. This study found age, history of peripheral arterial disease, and total operative time equal to or greater than four hours to be independent predictors of CVICU LOS greater than two days. The probability of CVICU LOS greater than two days can be calculated by the following equation: -2.872941 + 0.0323081 × (age in years) + 0.8177223 × (history of peripheral arterial disease) + 0.70379 × (operative time).
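A short worked example of applying the reported equation follows. It assumes the printed expression is the logistic-regression linear predictor (log-odds), with peripheral arterial disease and operative time of four hours or more coded as 0/1 indicators; the function name and example patient are hypothetical.

```python
# Sketch: convert the reported linear predictor to a probability via the
# standard logistic transform. Coefficient values are taken from the abstract;
# the 0/1 coding of the two indicator variables is an assumption.
import math

def prob_cvicu_los_gt_2_days(age_years, has_pad, op_time_ge_4h):
    """Estimated probability of CVICU length of stay > 2 days after CABG with CPB."""
    logit = (-2.872941
             + 0.0323081 * age_years
             + 0.8177223 * (1 if has_pad else 0)
             + 0.70379 * (1 if op_time_ge_4h else 0))
    return 1.0 / (1.0 + math.exp(-logit))

# Example: a 70-year-old with peripheral arterial disease and a 4.5-hour operation.
print(round(prob_cvicu_los_gt_2_days(70, True, True), 3))
```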
Abstract:
Objective. Loud noises in neonatal intensive care units (NICUs) may impede growth and development for extremely low birthweight (ELBW, < 1000 grams) newborns. The objective of this study was to measure the association between NICU sound levels and ELBW neonates' arterial blood pressure to determine whether these newborns experience noise-induced stress. Methods. Noise and arterial blood pressure recordings were collected for 9 ELBW neonates during the first week of life. Sound levels were measured inside the incubator, and each subject's arterial blood pressures were simultaneously recorded for 15 minutes (at 1 sec intervals). Time series cross-correlation functions were calculated for NICU noise and mean arterial blood pressure (MABP) recordings for each subject. The grand mean noise-MABP cross-correlation was calculated for all subjects and for lower and higher birthweight groups for comparison. Results. The grand mean noise-MABP cross-correlation for all subjects was mostly negative (through 300 sec lag time) and nearly reached significance at the 95% level at 111 sec lag (mean r = -0.062). Lower birthweight newborns (454-709 g) experienced significant decreases in blood pressure with increasing NICU noise after 145 sec lag (peak r = -0.074). Higher birthweight newborns had an immediate negative correlation with NICU sound levels (at 3 sec lag, r = -0.071), but arterial blood pressures increased to a positive correlation with noise levels at 197 sec lag (r = 0.075). Conclusions. ELBW newborns' arterial blood pressure was influenced by NICU noise levels during the first week of life. Lower birthweight newborns may have experienced an orienting reflex to NICU sounds. Higher birthweight newborns experienced an immediate orienting reflex to increasing sound levels, but arterial blood pressure increased approximately 3 minutes after increases in noise levels. Increases in arterial blood pressure following increased NICU sound levels may result from a stress response to noise.
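The lagged cross-correlation analysis described above can be sketched as lagged Pearson correlations between two 1 Hz series. The simulated signals, function name, and 300-second lag window below are illustrative assumptions; the study's actual preprocessing and software are not specified.

```python
# Hedged sketch: correlate incubator sound level at time t with MABP at t + lag
# for lags 0..300 s, mirroring the cross-correlation functions described above.
import numpy as np

def lagged_cross_correlation(noise, mabp, max_lag_s=300):
    """Pearson r of noise(t) with MABP(t + lag) for lag = 0..max_lag_s seconds."""
    out = {}
    for lag in range(max_lag_s + 1):
        x = noise[: len(noise) - lag]
        y = mabp[lag:]
        out[lag] = np.corrcoef(x, y)[0, 1]
    return out

rng = np.random.default_rng(0)
noise = rng.normal(60, 5, 900)   # 15 min of sound levels (dB) at 1-second intervals
mabp = rng.normal(35, 3, 900)    # 15 min of mean arterial blood pressure (mmHg)

cc = lagged_cross_correlation(noise, mabp)
worst_lag = min(cc, key=cc.get)  # lag with the most negative correlation
print(worst_lag, round(cc[worst_lag], 3))
```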
Abstract:
Background. Nosocomial infections are a source of concern for many hospitals in the United States and worldwide. These infections are associated with increased morbidity, mortality and hospital costs. Nosocomial infections occur in ICUs at a rate five times greater than in general wards. Understanding the reasons for the higher rates can ultimately help reduce these infections. The literature has been weak in documenting a direct relationship between nosocomial infections and non-traditional risk factors, such as unit staffing or patient acuity. Objective. To examine the relationship, if any, between nosocomial infections and non-traditional risk factors. The potential non-traditional risk factors we studied were patient acuity (comprising the mortality and illness rating of the patient), patient days for patients hospitalized in the ICU, and the patient to nurse ratio. Method. We conducted a secondary data analysis on patients hospitalized in the Medical Intensive Care Unit (MICU) of the Memorial Hermann-Texas Medical Center in Houston from March 2008 to May 2009. The average monthly values for patient acuity (mortality and illness Diagnostic Related Group (DRG) scores), patient days for patients hospitalized in the ICU and the average patient to nurse ratio were calculated for this time period. Active surveillance of Bloodstream Infections (BSIs), Urinary Tract Infections (UTIs) and Ventilator Associated Pneumonias (VAPs) was performed by Infection Control practitioners, who visited the MICU and completed a personal infection record for each patient. Spearman's rank correlation was performed to determine the relationship between these nosocomial infections and the non-traditional risk factors. Results. We found weak negative correlations between BSIs and two measures (illness and mortality DRG). We also found a weak negative correlation between UTIs and unit staffing (patient to nurse ratio). The strongest positive correlation was found between illness DRG and mortality DRG, validating our methodology. Conclusion. From this analysis, we were able to infer that non-traditional risk factors do not appear to play a significant role in transmission of infection in the units we evaluated.
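The Spearman rank correlation used to relate monthly infection counts to the non-traditional risk factors can be sketched as below. The fifteen monthly values are made-up placeholders covering a March 2008 to May 2009 style window, not the Memorial Hermann MICU data.

```python
# Hedged sketch: Spearman rank correlation between monthly BSI counts and the
# monthly average patient-to-nurse ratio. All values are synthetic.
from scipy.stats import spearmanr

bsi_per_month          = [2, 1, 0, 3, 1, 2, 0, 1, 1, 4, 2, 0, 1, 3, 2]
patient_to_nurse_ratio = [1.8, 2.0, 1.9, 2.2, 1.7, 2.1, 1.8, 2.0,
                          1.9, 2.3, 2.0, 1.8, 1.9, 2.2, 2.1]

rho, p_value = spearmanr(bsi_per_month, patient_to_nurse_ratio)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```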
Abstract:
A number of medical and social developments have had an impact on neonatal mortality over the past ten to 15 years in the United States. The purpose of this study was to examine one of these developments, Newborn Intensive Care Units (NICUs), and evaluate their impact on neonatal mortality in Houston, Texas. This study was unique in that it used as its data base matched birth and infant death records from two periods of time: 1958-1960 (before NICUs) and 1974-1976 (after NICUs). The neonatal mortality of single, live infants born to Houston resident mothers was compared for two groups: infants born in hospitals which developed NICUs and infants born in all other Houston hospitals. Neonatal mortality comparisons were made using the following birth-characteristic variables: birthweight, gestation, race, sex, maternal age, legitimacy, birth order and prenatal care. The results of the study showed that hospitals which developed NICUs had a higher percentage of their population with high risk characteristics. In spite of this, they had lower neonatal mortality rates in two categories: (1) white 3.5-5.5 pounds birthweight infants, (2) low birthweight infants whose mothers received no prenatal care. Black 3.5-5.5 pounds birthweight infants did equally well in either hospital group. While the differences between the two hospital groups for these categories were not statistically significant at the p < 0.05 level, data from the 1958-1960 period substantiate that a marked change occurred in the 3.5-5.5 pounds birthweight category for those infants born in hospitals which developed NICUs. Early data were not available for prenatal care. These findings support the conclusion that, in Houston, NICUs had some impact on neonatal mortality among moderately underweight infants.
Abstract:
Risk factors for Multi-Drug Resistant Acinetobacter (MDRA) acquisition were studied in patients in a burn intensive care unit (ICU) where there was an outbreak of MDRA. Forty cases were matched with eighty controls based on length of stay in the Burn ICU and statistical analysis was performed on data for several different variables. Matched analysis showed that mechanical ventilation, transport ventilation, number of intubations, number of bronchoscopy procedures, total body surface area burn, and prior Methicillin Resistant Staphylococcus aureus colonization were all significant risk factors for MDRA acquisition. MDRA remains a significant threat to the burn population. Treatment for burn patients with MDRA is challenging as resistance to antibiotics continues to increase. This study underlined the need to closely monitor the most critically ill ventilated patients during an outbreak of MDRA as they are the most at risk for MDRA acquisition.
Abstract:
Sepsis is a significant cause of multiple organ failure and death in the burn patient, yet identification in this population is confounded by chronic hypermetabolism and impaired immune function. The purpose of this study was twofold: 1) determine the ability of the systemic inflammatory response syndrome (SIRS) and American Burn Association (ABA) criteria to predict sepsis in the burn patient; and 2) develop a model representing the best combination of clinical predictors associated with sepsis in the same population. A retrospective, case-control, within-patient comparison of burn patients admitted to a single intensive care unit (ICU) was conducted for the period January 2005 to September 2010. Blood culture results were paired with clinical condition: "positive-sick"; "negative-sick", and "screening-not sick". Data were collected for the 72 hours prior to each blood culture. The most significant predictors were evaluated using logistic regression, Generalized Estimating Equations (GEE) and ROC area under the curve (AUC) analyses to assess model predictive ability. Bootstrapping methods were employed to evaluate potential model over-fitting. Fifty-nine subjects were included, representing 177 culture periods. SIRS criteria were not found to be associated with culture type, with an average of 98% of subjects meeting criteria in the 3 days prior. ABA sepsis criteria were significantly different among culture types only on the day prior (p = 0.004). The variables identified for the model included: heart rate > 130 beats/min, mean blood pressure < 60 mmHg, base deficit < -6 mEq/L, temperature > 36°C, use of vasoactive medications, and glucose > 150 mg/dL. The model was significant in predicting "positive culture-sick" and sepsis state, with AUCs of 0.775 (p < 0.001) and 0.714 (p < 0.001), respectively; comparatively, the ABA criteria AUCs were 0.619 (p = 0.028) and 0.597 (p = 0.035), respectively. SIRS criteria are not appropriate for identifying sepsis in the burn population. The ABA criteria perform better, but only for the day prior to positive blood culture results. The time period useful for diagnosing sepsis using clinical criteria may be limited to 24 hours. A combination of predictors is superior to individual variable trends, yet algorithms or computer support will be necessary for the clinician to find such models useful.
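A minimal sketch of the modeling workflow described above follows: logistic regression on the six clinical predictors, ROC AUC to assess discrimination, and a bootstrap to gauge stability. The data frame is synthetic, and the GEE adjustment for repeated culture periods within patients is not reproduced here.

```python
# Hedged sketch, not the authors' code: fit the six reported predictors,
# report an apparent ROC AUC, and bootstrap it. All data are simulated.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.utils import resample

rng = np.random.default_rng(1)
n = 177  # culture periods, as in the abstract
X = pd.DataFrame({
    "hr_gt_130": rng.integers(0, 2, n),
    "map_lt_60": rng.integers(0, 2, n),
    "base_deficit_lt_minus_6": rng.integers(0, 2, n),
    "temp_gt_36": rng.integers(0, 2, n),
    "on_vasoactives": rng.integers(0, 2, n),
    "glucose_gt_150": rng.integers(0, 2, n),
})
y = rng.integers(0, 2, n)  # 1 = "positive culture-sick" period (simulated)

model = LogisticRegression().fit(X, y)
print("apparent AUC:", roc_auc_score(y, model.predict_proba(X)[:, 1]))

# Simple bootstrap of the apparent AUC to check how stable the estimate is.
aucs = []
for _ in range(200):
    Xb, yb = resample(X, y)
    m = LogisticRegression().fit(Xb, yb)
    aucs.append(roc_auc_score(y, m.predict_proba(X)[:, 1]))
print("bootstrap AUC 2.5-97.5 percentiles:", np.percentile(aucs, [2.5, 97.5]))
```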
Abstract:
Background: Poor communication among health care providers is cited as the most common cause of sentinel events involving patients. Sign-out of patient data at the change of clinician shifts is a component of communication that is especially vulnerable to errors. Sign-outs are particularly extensive and complex in intensive care units (ICUs). There is a paucity of validated tools to assess ICU sign-outs. Objective: To design a valid and reliable survey tool to assess the perceptions of Pediatric ICU (PICU) clinicians about sign-out. Design: Cross-sectional, web-based survey. Setting: Academic hospital, 31-bed PICU. Subjects: Attending faculty, fellows, nurse practitioners and physician assistants. Interventions: A survey was designed with input from a focus group and administered to PICU clinicians. Test-retest reliability, internal consistency and validity of the survey tool were assessed. Measurements and Main Results: Forty-eight PICU clinicians agreed to participate. We had 42 (88%) and 40 (83%) responses in the test and retest phases. The mean scores for the ten survey items ranged from 2.79 to 3.67 on a five-point Likert scale, with no significant test-retest difference and a Pearson correlation between pre and post answers of 0.65. The survey item scores showed internal consistency, with a Cronbach's alpha of 0.85. Exploratory factor analysis revealed three constructs: efficacy of the sign-out process, recipient satisfaction and content applicability. Seventy-eight percent of clinicians affirmed the need for improvement of the sign-out process and 83% confirmed the need for face-to-face verbal sign-out. A system-based sign-out format was favored by fellows and advanced level practitioners, while attendings preferred a problem-based format (p = 0.003). Conclusions: We developed a valid and reliable survey to assess clinician perceptions of the ICU sign-out process. These results can be used to design a verbal template to improve and standardize the sign-out process.
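The reliability statistics reported above (Cronbach's alpha for internal consistency across the ten items and a test-retest correlation) can be sketched as follows. The responses are synthetic placeholders, and the test-retest correlation here is computed on mean item scores, which is a simplifying assumption rather than the authors' exact procedure.

```python
# Hedged sketch: Cronbach's alpha and a test-retest Pearson correlation
# for a ten-item, 5-point Likert survey. All responses are simulated.
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = survey items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(2)
base = rng.integers(1, 6, size=(42, 1))                            # shared attitude per respondent
test = np.clip(base + rng.integers(-1, 2, size=(42, 10)), 1, 5)    # 42 respondents, 10 items
retest = np.clip(test + rng.integers(-1, 2, size=test.shape), 1, 5)

print("Cronbach's alpha:", round(cronbach_alpha(test), 2))
print("test-retest r:",
      round(np.corrcoef(test.mean(axis=1), retest.mean(axis=1))[0, 1], 2))
```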
Abstract:
The first manuscript, entitled "Time-Series Analysis as Input for Clinical Predictive Modeling: Modeling Cardiac Arrest in a Pediatric ICU" lays out the theoretical background for the project. There are several core concepts presented in this paper. First, traditional multivariate models (where each variable is represented by only one value) provide single point-in-time snapshots of patient status: they are incapable of characterizing deterioration. Since deterioration is consistently identified as a precursor to cardiac arrests, we maintain that the traditional multivariate paradigm is insufficient for predicting arrests. We identify time series analysis as a method capable of characterizing deterioration in an objective, mathematical fashion, and describe how to build a general foundation for predictive modeling using time series analysis results as latent variables. Building a solid foundation for any given modeling task involves addressing a number of issues during the design phase. These include selecting the proper candidate features on which to base the model, and selecting the most appropriate tool to measure them. We also identified several unique design issues that are introduced when time series data elements are added to the set of candidate features. One such issue is in defining the duration and resolution of time series elements required to sufficiently characterize the time series phenomena being considered as candidate features for the predictive model. Once the duration and resolution are established, there must also be explicit mathematical or statistical operations that produce the time series analysis result to be used as a latent candidate feature. In synthesizing the comprehensive framework for building a predictive model based on time series data elements, we identified at least four classes of data that can be used in the model design. The first two classes are shared with traditional multivariate models: multivariate data and clinical latent features. Multivariate data is represented by the standard one value per variable paradigm and is widely employed in a host of clinical models and tools. These are often represented by a number present in a given cell of a table. Clinical latent features are derived, rather than directly measured, data elements that more accurately represent a particular clinical phenomenon than any of the directly measured data elements in isolation. The second two classes are unique to the time series data elements. The first of these is the raw data elements. These are represented by multiple values per variable, and constitute the measured observations that are typically available to end users when they review time series data. These are often represented as dots on a graph. The final class of data results from performing time series analysis. This class of data represents the fundamental concept on which our hypothesis is based. The specific statistical or mathematical operations are up to the modeler to determine, but we generally recommend that a variety of analyses be performed in order to maximize the likelihood that a representation of the time series data elements is produced that is able to distinguish between two or more classes of outcomes.
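The core idea above, reducing a raw time series to a time-series-analysis result that serves as a latent candidate feature, can be illustrated with a simple example: a least-squares slope computed over a fixed window. The 60-minute window, 1-minute resolution, and heart-rate variable below are illustrative choices of mine, not the duration, resolution, or operations specified in the dissertation.

```python
# Hedged sketch: summarize a multi-valued time series as a single slope
# (units per minute) that can sit alongside one-value-per-variable features.
import numpy as np

def trend_feature(values, minutes_per_sample=1):
    """Least-squares slope over the supplied window, in units per minute."""
    t = np.arange(len(values)) * minutes_per_sample
    slope, _intercept = np.polyfit(t, values, deg=1)
    return slope

# Hypothetical heart-rate samples over the 60 minutes preceding the prediction
# time: a gradual deterioration shows up as a negative slope.
heart_rate = np.linspace(150, 120, 60) + np.random.default_rng(3).normal(0, 2, 60)
print("HR trend (beats/min per minute):", round(trend_feature(heart_rate), 2))
```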
The second manuscript, entitled "Building Clinical Prediction Models Using Time Series Data: Modeling Cardiac Arrest in a Pediatric ICU" provides a detailed description, start to finish, of the methods required to prepare the data, build, and validate a predictive model that uses the time series data elements determined in the first paper. One of the fundamental tenets of the second paper is that manual implementations of time series based models are infeasible due to the relatively large number of data elements and the complexity of preprocessing that must occur before data can be presented to the model. Each of the seventeen steps is analyzed from the perspective of how it may be automated, when necessary. We identify the general objectives and available strategies of each of the steps, and we present our rationale for choosing a specific strategy for each step in the case of predicting cardiac arrest in a pediatric intensive care unit. Another issue brought to light by the second paper is that the individual steps required to use time series data for predictive modeling are more numerous and more complex than those used for modeling with traditional multivariate data. Even after complexities attributable to the design phase (addressed in our first paper) have been accounted for, the management and manipulation of the time series elements (the preprocessing steps in particular) are issues that are not present in a traditional multivariate modeling paradigm. In our methods, we present the issues that arise from the time series data elements: defining a reference time; imputing and reducing time series data in order to conform to a predefined structure that was specified during the design phase; and normalizing variable families rather than individual variable instances. The final manuscript, entitled "Using Time-Series Analysis to Predict Cardiac Arrest in a Pediatric Intensive Care Unit" presents the results that were obtained by applying the theoretical construct and its associated methods (detailed in the first two papers) to the case of cardiac arrest prediction in a pediatric intensive care unit. Our results showed that utilizing the trend analysis from the time series data elements reduced the number of classification errors by 73%. The area under the Receiver Operating Characteristic curve increased from a baseline of 87% to 98% by including the trend analysis. In addition to the performance measures, we were also able to demonstrate that adding raw time series data elements without their associated trend analyses improved classification accuracy as compared to the baseline multivariate model, but diminished classification accuracy as compared to when just the trend analysis features were added (i.e., without adding the raw time series data elements). We believe this phenomenon was largely attributable to overfitting, which is known to increase as the ratio of candidate features to class examples rises. Furthermore, although we employed several feature reduction strategies to counteract the overfitting problem, they failed to improve the performance beyond that which was achieved by exclusion of the raw time series elements. Finally, our data demonstrated that pulse oximetry and systolic blood pressure readings tend to start diminishing about 10-20 minutes before an arrest, whereas heart rates tend to diminish rapidly less than 5 minutes before an arrest.
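The preprocessing issues enumerated above (defining a reference time, then imputing and reducing each series onto a predefined structure) can be sketched for a single monitored variable as follows. The 5-minute grid, 1-hour window, forward-fill imputation, and SpO2 example are illustrative choices, not the methods prescribed in the manuscript.

```python
# Hedged sketch of one preprocessing path: anchor observations to a reference
# time, then impute and reduce them onto a fixed-length grid for the model.
import pandas as pd

def preprocess_series(timestamps, values, reference_time, window="1h", grid="5min"):
    s = pd.Series(values, index=pd.to_datetime(timestamps)).sort_index()
    window_start = reference_time - pd.Timedelta(window)
    s = s[(s.index > window_start) & (s.index <= reference_time)]
    grid_index = pd.date_range(window_start + pd.Timedelta(grid),
                               reference_time, freq=grid)
    # Impute onto the grid: carry the last observation forward, then back-fill
    # any leading gap so every variable yields a vector of the same length.
    return s.reindex(s.index.union(grid_index)).ffill().bfill().reindex(grid_index)

ref = pd.Timestamp("2024-01-01 12:00")                      # hypothetical reference time
obs_times = ["2024-01-01 11:03", "2024-01-01 11:20", "2024-01-01 11:48"]
spo2 = [97, 94, 89]                                         # hypothetical SpO2 readings
print(preprocess_series(obs_times, spo2, ref))
```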
Abstract:
Over the last 2 decades, survival rates in critically ill cancer patients have improved. Despite the increase in survival, the intensive care unit (ICU) continues to be a location where end-of-life care takes place. More than 20% of deaths in the United States occur after admission to an ICU, and as baby boomers reach the seventh and eighth decades of their lives, the volume of patients in the ICU is predicted to rise. The aim of this study was to evaluate intensive care unit utilization among patients with cancer who were at the end of life. End of life was defined using decedent and high-risk cohort study designs. The decedent study evaluated characteristics and ICU utilization during the terminal hospital stay among patients who died at The University of Texas MD Anderson Cancer Center during 2003-2007. The high-risk cohort study evaluated characteristics and ICU utilization during the index hospital stay among patients admitted to MD Anderson during 2003-2007 with a high risk of in-hospital mortality. Factors associated with higher ICU utilization in the decedent study included non-local residence, hematologic and non-metastatic solid tumor malignancies, malignancy diagnosed within 2 months, and elective admission to surgical or pediatric services. Having a palliative care consultation on admission was associated with dying in the hospital without ICU services. In the cohort of patients with high risk of in-hospital mortality, patients who went to the ICU were more likely to be younger, male, with newly diagnosed non-metastatic solid tumor or hematologic malignancy, and admitted from the emergency center to one of the surgical services. A palliative care consultation on admission was associated with a decreased likelihood of having an ICU stay. There were no differences in ethnicity, marital status, comorbidities, or insurance status between patients who did and did not utilize ICU services. Inpatient mortality probability models developed for the general population are inadequate in predicting in-hospital mortality for patients with cancer. The following characteristics that differed between the decedent study and high-risk cohort study can be considered in future research to predict risk of in-hospital mortality for patients with cancer: ethnicity, type and stage of malignancy, time since diagnosis, and having advance directives. Identifying those at risk can precipitate discussions in advance to ensure care remains appropriate and in accordance with the wishes of the patient and family.