11 results for performance measures
in DigitalCommons@The Texas Medical Center
Abstract:
Family preservation service agencies in the State of Kansas have undergone major changes since the implementation of a managed care model of service delivery in 1996. This qualitative study examines the successes and barriers experienced by agency directors in utilizing a managed care system. Outcome/performance measures used by the State of Kansas are reviewed, and factors contributing to the successes and limitations of the program are discussed. These reviews include an analysis and presentation of the literature and research that have been used to support the current program structure. Recommendations for further evolution of practice are proposed.
Abstract:
Neuropsychological impairment occurs in 20%-40% of childhood acute lymphoblastic leukemia (ALL) survivors, possibly mediated by folate depletion following methotrexate chemotherapy. We evaluated the relationship between two folate pathway polymorphisms and neuropsychological impairment after childhood ALL chemotherapy. Eighty-six childhood ALL survivors were recruited between 2004 and 2007 at Texas Children's Hospital after excluding patients with central nervous system leukemia, cranial irradiation, or age < 1 year at diagnosis. Neuropsychological evaluation at a median of 5.3 years off therapy included a parental questionnaire and the following child performance measures: Trail Making Tests A and B, Grooved Pegboard Test Dominant-Hand and Nondominant-Hand, and the Digit Span subtest. We performed genotyping for polymorphisms in two folate pathway genes: reduced folate carrier (RFC1 80G>A, rs1051266) and dihydrofolate reductase (DHFR Intron-1 19 bp deletion). Fisher exact test, logistic regression, Student's t-test, and ANOVA were used to compare neuropsychological test scores by genotype, using a dominant model to group genotypes. In univariate analysis, survivors with cumulative methotrexate exposure ≥ 9000 mg/m² had an increased risk of attention disorder (OR = 6.2, 95% CI 1.2-31.3) compared with survivors with methotrexate exposure < 9000 mg/m². On average, female survivors scored 8.5 points higher than males on the Digit Span subtest, a test of working memory (p = 0.02). The RFC1 80G>A and DHFR Intron-1 deletion polymorphisms were not related to attention disorder or to impairment on tests of attention, processing speed, fine motor speed, or memory. These data imply a strong relationship between methotrexate dose intensity and impairment in attention after childhood ALL therapy. We did not find an association between the RFC1 80G>A or DHFR Intron-1 deletion polymorphisms and long-term neuropsychological impairment in childhood ALL survivors.
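As an editorial illustration of the type of univariate analysis reported above (not the study's code), the sketch below fits a logistic regression of a binary attention-disorder outcome on a dichotomized methotrexate exposure and converts the coefficient into an odds ratio with a 95% confidence interval. The data and variable names are hypothetical.

```python
# Sketch (hypothetical data): odds ratio and 95% CI from a logistic regression
# of attention disorder on dichotomized cumulative methotrexate exposure.
import numpy as np
import statsmodels.api as sm

# 1 = cumulative methotrexate >= 9000 mg/m2, 0 = below (invented values).
high_mtx = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1])
attention_disorder = np.array([1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1])

X = sm.add_constant(high_mtx)                   # intercept + exposure indicator
fit = sm.Logit(attention_disorder, X).fit(disp=False)

odds_ratio = np.exp(fit.params[1])              # exponentiated coefficient
ci_low, ci_high = np.exp(fit.conf_int()[1])     # 95% CI on the odds-ratio scale
print(f"OR = {odds_ratio:.1f}, 95% CI {ci_low:.1f}-{ci_high:.1f}")
```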
Abstract:
Objectives. The central objective of this study was to systematically examine the internal structure of multihospital systems, determining the management principles used and the performance levels achieved in medical care and administrative areas. The Universe. The study universe consisted of short-term general American hospitals owned and operated by multihospital corporations. The corporations compared were investor-owned (for-profit) and voluntary multihospital systems. The individual hospital was the unit of analysis for the study. Theoretical Considerations. Contingency theory, drawing on selected aspects of the classical and human relations schools of thought, seemed well suited to describing multihospital organization and was used in this research. The Study Hypotheses. The main null hypotheses were that there are no significant differences between the voluntary and the investor-owned multihospital sectors in their (1) hospital structures and (2) patient care and administrative performance levels. The Sample. A stratified random sample of 212 hospitals owned by multihospital systems was selected to represent the two study sectors equally. Of the sampled hospitals approached, 90.1% responded. The Analysis. Sixteen scales were constructed in conjunction with 16 structural variables developed from the major questions and sub-items of the questionnaire. This was followed by analysis of an additional 7 structural and 24 effectiveness (performance) measures, using frequency distributions. Finally, summary statistics and statistical tests for each variable and its sub-items were completed and recorded in 38 tables. Study Findings. While it has been argued that there are great differences between the two sectors, this study found that, with a few exceptions, the null hypotheses of no difference in the organizational and operational characteristics of non-profit and for-profit hospitals were accepted. However, several significant differences were found in the structural variables: functional specialization and autonomy were significantly higher in the voluntary sector, and only centralization was significantly different in the investor-owned sector. Among the effectiveness measures, occupancy rate, cost of data processing, total man-hours worked, F.T.E. ratios, and personnel per occupied bed were significantly higher in the voluntary sector. The findings indicated that both voluntary and for-profit systems were converging toward a common hierarchical corporate management approach. Factors of size and management style may be better descriptors for characterizing a specific multihospital group than its profit or nonprofit status. (Abstract shortened with permission of author.)
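A brief, hypothetical sketch of the analytic pattern described above: a structural scale is built by averaging questionnaire sub-items for each hospital, and the voluntary and investor-owned sectors are then compared. The sub-item scores, scale, and choice of test here are illustrative, not the dissertation's instrument.

```python
# Sketch (invented sub-item scores): construct a structural scale from
# questionnaire sub-items and compare the two multihospital sectors.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Rows = hospitals, columns = sub-item scores (1-5) for one structural variable.
voluntary = rng.integers(2, 6, size=(96, 4))
investor_owned = rng.integers(1, 5, size=(95, 4))

vol_scale = voluntary.mean(axis=1)       # composite scale per hospital
inv_scale = investor_owned.mean(axis=1)

t, p = stats.ttest_ind(vol_scale, inv_scale, equal_var=False)
print(f"voluntary {vol_scale.mean():.2f} vs investor-owned {inv_scale.mean():.2f}: "
      f"t = {t:.2f}, p = {p:.3g}")
```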
Abstract:
The first manuscript, entitled "Time-Series Analysis as Input for Clinical Predictive Modeling: Modeling Cardiac Arrest in a Pediatric ICU," lays out the theoretical background for the project. There are several core concepts presented in this paper. First, traditional multivariate models (where each variable is represented by only one value) provide single point-in-time snapshots of patient status: they are incapable of characterizing deterioration. Since deterioration is consistently identified as a precursor to cardiac arrests, we maintain that the traditional multivariate paradigm is insufficient for predicting arrests. We identify time series analysis as a method capable of characterizing deterioration in an objective, mathematical fashion, and describe how to build a general foundation for predictive modeling using time series analysis results as latent variables. Building a solid foundation for any given modeling task involves addressing a number of issues during the design phase. These include selecting the proper candidate features on which to base the model, and selecting the most appropriate tool to measure them. We also identified several unique design issues that are introduced when time series data elements are added to the set of candidate features. One such issue is defining the duration and resolution of time series elements required to sufficiently characterize the time series phenomena being considered as candidate features for the predictive model. Once the duration and resolution are established, there must also be explicit mathematical or statistical operations that produce the time series analysis result to be used as a latent candidate feature. In synthesizing the comprehensive framework for building a predictive model based on time series data elements, we identified at least four classes of data that can be used in the model design. The first two classes are shared with traditional multivariate models: multivariate data and clinical latent features. Multivariate data is represented by the standard one-value-per-variable paradigm and is widely employed in a host of clinical models and tools; these values are often represented by a number in a given cell of a table. Clinical latent features are derived, rather than directly measured, data elements that more accurately represent a particular clinical phenomenon than any of the directly measured data elements in isolation. The second two classes are unique to the time series data elements. The first of these consists of the raw data elements. These are represented by multiple values per variable and constitute the measured observations that are typically available to end users when they review time series data; they are often represented as dots on a graph. The final class of data results from performing time series analysis. This class of data represents the fundamental concept on which our hypothesis is based. The specific statistical or mathematical operations are up to the modeler to determine, but we generally recommend that a variety of analyses be performed in order to maximize the likelihood of producing a representation of the time series data elements that is able to distinguish between two or more classes of outcomes.
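As a minimal sketch of the framework's final class of data, the following turns a raw vital-sign series into a latent candidate feature of fixed duration and resolution. The specific operation (a least-squares trend slope), the window length, and the example values are illustrative choices, since the paper leaves the analysis operation to the modeler.

```python
# Sketch: derive a latent "trend" feature from a raw time series by resampling
# to a fixed duration/resolution and fitting a least-squares slope.
import numpy as np

def trend_feature(timestamps_min, values, duration_min=60, resolution_min=5):
    """Slope (units per minute) over the most recent `duration_min` minutes,
    resampled every `resolution_min` minutes."""
    t_end = timestamps_min[-1]
    grid = np.arange(t_end - duration_min, t_end + resolution_min, resolution_min)
    resampled = np.interp(grid, timestamps_min, values)  # fixed-resolution series
    slope, _intercept = np.polyfit(grid, resampled, 1)   # linear trend
    return slope

# Hypothetical heart-rate observations: minutes elapsed, beats per minute.
t = np.array([0.0, 7, 15, 22, 30, 41, 50, 58, 60])
hr = np.array([118.0, 120, 121, 119, 117, 112, 108, 101, 97])
print(f"trend = {trend_feature(t, hr):.2f} bpm/min")
```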
The second manuscript, entitled "Building Clinical Prediction Models Using Time Series Data: Modeling Cardiac Arrest in a Pediatric ICU," provides a detailed description, start to finish, of the methods required to prepare the data, build, and validate a predictive model that uses the time series data elements determined in the first paper. One of the fundamental tenets of the second paper is that manual implementations of time series-based models are infeasible due to the relatively large number of data elements and the complexity of preprocessing that must occur before data can be presented to the model. Each of the seventeen steps is analyzed from the perspective of how it may be automated, when necessary. We identify the general objectives and available strategies of each of the steps, and we present our rationale for choosing a specific strategy for each step in the case of predicting cardiac arrest in a pediatric intensive care unit. Another issue brought to light by the second paper is that the individual steps required to use time series data for predictive modeling are more numerous and more complex than those used for modeling with traditional multivariate data. Even after complexities attributable to the design phase (addressed in our first paper) have been accounted for, the management and manipulation of the time series elements (the preprocessing steps in particular) are issues that are not present in a traditional multivariate modeling paradigm. In our methods, we present the issues that arise from the time series data elements: defining a reference time; imputing and reducing time series data in order to conform to a predefined structure that was specified during the design phase; and normalizing variable families rather than individual variable instances. The final manuscript, entitled "Using Time-Series Analysis to Predict Cardiac Arrest in a Pediatric Intensive Care Unit," presents the results that were obtained by applying the theoretical construct and its associated methods (detailed in the first two papers) to the case of cardiac arrest prediction in a pediatric intensive care unit. Our results showed that utilizing the trend analysis from the time series data elements reduced the number of classification errors by 73%. The area under the receiver operating characteristic curve increased from a baseline of 87% to 98% when the trend analysis was included. In addition to the performance measures, we were also able to demonstrate that adding raw time series data elements without their associated trend analyses improved classification accuracy compared to the baseline multivariate model, but diminished classification accuracy compared to adding the trend analysis features alone (i.e., without the raw time series data elements). We believe this phenomenon was largely attributable to overfitting, which is known to increase as the ratio of candidate features to class examples rises. Furthermore, although we employed several feature reduction strategies to counteract the overfitting problem, they failed to improve performance beyond that which was achieved by excluding the raw time series elements. Finally, our data demonstrated that pulse oximetry and systolic blood pressure readings tend to start diminishing about 10-20 minutes before an arrest, whereas heart rates tend to diminish rapidly less than 5 minutes before an arrest.
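The performance comparison described above can be illustrated with a small, self-contained experiment: the same classifier is fit with and without a trend feature and the ROC AUC is compared. The data here are synthetic and the classifier is an arbitrary choice; this is not the study's model or its reported result.

```python
# Sketch (synthetic data): ROC AUC of a baseline multivariate model vs. the
# same model augmented with a trend-analysis feature.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 1000
y = rng.integers(0, 2, n)                                 # arrest vs. control
snapshot = rng.normal(0, 1, (n, 3)) + 0.3 * y[:, None]    # point-in-time vitals
trend = rng.normal(0, 1, n) - 1.2 * y                     # deterioration slope

for name, X in [("baseline", snapshot),
                ("with trend feature", np.column_stack([snapshot, trend]))]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    proba = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    print(f"{name}: AUC = {roc_auc_score(y_te, proba):.3f}")
```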
Abstract:
Purpose. No Child Left Behind aimed to "improve the academic achievement of the disadvantaged." The primary research question asked how the academic achievement of students from economic disadvantage compared with that of students not from disadvantage. Economically disadvantaged students can face an added academic disadvantage, and research shows that low academic achievement can contribute to drug abuse, youth violence, and teen pregnancy. Methods. To compare the student populations, measures included TAKS results and academic indicator data collected by the Texas Education Agency. Results. T-test analyses showed a significant difference between the economically and non-economically disadvantaged student populations in meeting the TAKS passing standard, graduation, and preparation for higher education. Conclusions. The achievement gap between the two student populations remained, as indicated by the Texas testing program. More research and time are needed to observe whether the desired impact on students from economic disadvantage will be reflected in academic achievement data.
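A minimal sketch of the comparison described, assuming campus-level TAKS passing rates for the two groups (the numbers are invented, not TEA data):

```python
# Sketch (invented passing rates): two-sample t-test comparing TAKS passing
# rates for economically disadvantaged vs. non-disadvantaged students.
from scipy import stats

econ_disadv = [58.2, 61.5, 55.0, 63.4, 60.1, 57.8, 59.9, 62.0]   # percent passing
non_disadv = [78.4, 81.0, 75.6, 83.2, 79.9, 77.1, 80.5, 82.3]

t, p = stats.ttest_ind(econ_disadv, non_disadv)
print(f"t = {t:.2f}, p = {p:.4f}")
```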
Abstract:
Objective. The study reviewed one year of Texas hospital discharge data and Trauma Registry data for the 22 trauma service regions in Texas to identify regional variations in capacity, process of care, and clinical outcomes for trauma patients, and to analyze the statistical associations among capacity, process of care, and outcomes. Methods. A cross-sectional study design covering one year of statewide Texas data was used. Indicators of trauma capacity, trauma care processes, and clinical outcomes were defined, and data were collected on each indicator. Descriptive analyses were conducted of regional variations in trauma capacity, process of care, and clinical outcomes at all trauma centers, at Level I and II trauma centers, and at Level III and IV trauma centers. Multilevel regression models were used to test the relations among trauma capacity, process of care, and outcome measures at all trauma centers, at Level I and II trauma centers, and at Level III and IV trauma centers while controlling for confounders such as age, gender, race/ethnicity, injury severity, level of trauma center, and urbanization. Results. Significant regional variation was found among the 22 trauma service regions across Texas in trauma capacity, process of care, and clinical outcomes. The regional trauma bed rate (average staffed beds per 100,000) varied significantly by trauma service region. Pre-hospital trauma care processes (EMS time, transfer time, and triage) also varied significantly by region. Clinical outcomes, including mortality, hospital and intensive care unit length of stay, and hospital charges, varied significantly by region as well. In multilevel regression analysis, the average trauma bed rate was significantly related to trauma care processes, including ambulance delivery time, transfer time, and triage, after controlling for age, gender, race/ethnicity, injury severity, level of trauma center, and urbanization at all trauma centers. Among processes of care, only transfer time was significantly associated with the regional average trauma bed rate at Level III and IV trauma centers. Among outcome measures, only trauma mortality was significantly associated with the regional average trauma bed rate at all trauma centers, and only hospital charges were statistically related to the trauma bed rate at Level I and II trauma centers. The effects of confounders such as age, gender, race/ethnicity, injury severity, and urbanization on processes and outcomes were found to vary significantly by level of trauma center. Conclusions. Regional variation in trauma capacity, process, and outcomes in Texas was extensive. Trauma capacity, age, gender, race/ethnicity, injury severity, level of trauma center, and urbanization were significantly associated with trauma process and clinical outcomes, depending on the level of trauma center. Key words: regionalized trauma systems, trauma capacity, pre-hospital trauma care, process, trauma outcomes, trauma performance, evaluation measures, regional variations
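The multilevel models described above might look roughly like the sketch below, with patients nested in trauma service regions, a region-level trauma bed rate predictor, and patient-level confounders as fixed effects. The file name and column names (transfer_time, bed_rate, region, and so on) are hypothetical.

```python
# Sketch (hypothetical columns): mixed-effects model relating a trauma care
# process measure to regional trauma bed rate, with a random intercept per
# trauma service region and confounders as fixed effects.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("trauma_records.csv")     # hypothetical patient-level file

model = smf.mixedlm(
    "transfer_time ~ bed_rate + age + female + injury_severity"
    " + C(trauma_level) + urban",
    data=df,
    groups=df["region"],                   # patients nested in regions
)
print(model.fit().summary())
```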
Abstract:
This dissertation develops and tests a comparative effectiveness methodology utilizing a novel application of Data Envelopment Analysis (DEA) in health studies. The concept of performance tiers (PerT) is introduced as terminology to express a relative risk class for individuals within a peer group, and the PerT calculation is implemented with operations research (DEA) and spatial algorithms. The analysis discriminates the individual data observations into relative risk classes via the DEA-PerT methodology. The performance of two distance-based approaches, kNN (k-nearest neighbor) and Mahalanobis distance, was subsequently tested for classifying new entrants into the appropriate tier. The methods were applied to subject data for the 14-year-old cohort in the Project HeartBeat! study. The concepts presented herein represent a paradigm shift in the potential for public health applications to identify and respond to individual health status. The resulting classification scheme provides descriptive, and potentially prescriptive, guidance for assessing and implementing treatments and strategies to improve the delivery and performance of health systems.
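A compact sketch of the tier-assignment step described above: a new entrant is classified into a performance tier by (a) k-nearest neighbors and (b) Mahalanobis distance to each tier's centroid. The tier labels and measurements are synthetic, and the DEA step that produces the tiers is not reproduced here.

```python
# Sketch (synthetic data): classify a new subject into a DEA-derived
# performance tier by kNN and by Mahalanobis distance to tier centroids.
import numpy as np
from scipy.spatial.distance import mahalanobis
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (50, 3)),    # tier 1 subjects (3 measures each)
               rng.normal(2, 1, (50, 3))])   # tier 2 subjects
tiers = np.array([1] * 50 + [2] * 50)
new_subject = np.array([1.6, 1.9, 2.1])

# (a) k-nearest neighbors assignment.
knn = KNeighborsClassifier(n_neighbors=5).fit(X, tiers)
print("kNN tier:", knn.predict(new_subject[None, :])[0])

# (b) Mahalanobis assignment: nearest tier centroid under the pooled covariance.
vi = np.linalg.inv(np.cov(X, rowvar=False))
dist = {t: mahalanobis(new_subject, X[tiers == t].mean(axis=0), vi) for t in (1, 2)}
print("Mahalanobis tier:", min(dist, key=dist.get))
```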
Abstract:
This study developed proxy measures to test the independent effects of medical specialty, institutional ethics committee (IEC), and the interaction between the two upon a proxy for the dependent variable of the medical decision to withhold/withdraw care for the dying: the resuscitation index (R-index). Five clinical vignettes were constructed and validated to convey the realism and contextual factors implicit in the decision to withhold/withdraw care. A scale was developed to determine the range of contact with an IEC in terms of physician knowledge and use of IEC policy. The study sample comprised 215 physicians in a teaching hospital in the Southwest, in which proxy measures were tested for two competing influences, medical specialty and IEC, which alternately oppose and support the decision to withhold/withdraw care for the dying. A sub-sample of surgeons supported the hypothesis that an IEC is influential in opposing the medical-training imperative to prolong life. Surgeons with a low IEC score were 326 percent more likely to continue care than surgeons with a high IEC score when compared to all other specialties. IEC alone was also found to significantly predict the decision to withhold/withdraw care. The interaction of IEC with the specialty of surgery was the best predictor of a decision to withhold/withdraw care for the dying.
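As an illustrative sketch of the interaction analysis described (hypothetical data and variable names, not the study's vignette scores), a logistic regression with an IEC-by-surgeon interaction term could be specified as follows:

```python
# Sketch (hypothetical data): logistic regression testing whether the effect
# of high IEC contact on the withhold/withdraw decision differs for surgeons.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    # 1 = decision to withhold/withdraw on the vignette-based index
    "withdraw": [0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1],
    "iec_high": [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1],
    "surgeon":  [0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1],
})

fit = smf.logit("withdraw ~ iec_high * surgeon", data=df).fit(disp=False)
print(fit.summary())
```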
Abstract:
The research project is an extension of a series of administrative science and health care research projects evaluating the influence of external context, organizational strategy, and organizational structure upon organizational success or performance. The research relies on the assumption that there is no single best approach to the management of organizations (contingency theory). As organizational effectiveness depends on an appropriate mix of factors, organizations may be equally effective based on differing combinations of factors. The external context of the organization is expected to influence internal organizational strategy and structure, and in turn the internal measures affect performance (discriminant theory). The research considers the relationship between external context and organizational performance. The unit of study for the research is the health maintenance organization (HMO): an organization that accepts, in exchange for a fixed, advance capitation payment, contractual responsibility to assure the delivery of a stated range of health services to a voluntarily enrolled population. With the current Federal resurgence of interest in the HMO as a major component of the health care system, attention must be directed at maximizing HMO development from the limited resources available. Increased skills are needed in both Federal and private evaluation of HMO feasibility in order to prevent resource investment in projects that will fail while concurrently identifying potentially successful projects that would not be considered under current standards. The research considers 192 factors measuring the contextual milieu (social, educational, economic, legal, demographic, health, and technological factors). Through intercorrelation and principal components data reduction techniques, these were reduced to 12 variables. Two measures of HMO performance were identified: (1) HMO status (operational or defunct), and (2) a principal components factor score considering eight measures of performance. The relationship between HMO context and performance was analyzed using correlation and stepwise multiple regression methods. In each case it was concluded that the external contextual variables are not predictive of the success or failure of the study Health Maintenance Organizations. This suggests that the performance of an HMO may rely on internal organizational factors. These findings have policy implications, as contextual measures are used as a major determinant in HMO feasibility analysis and as a factor in the allocation of limited Federal funds.
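A hedged sketch of the data-reduction and regression pattern described above: many intercorrelated contextual factors are reduced by principal components, and a performance factor score is regressed on the retained component scores. The data are synthetic, and plain OLS stands in for the stepwise procedure used in the study.

```python
# Sketch (synthetic data): principal components reduction of contextual factors
# followed by regression of an HMO performance score on the component scores.
import numpy as np
import statsmodels.api as sm
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
context = rng.normal(size=(75, 192))       # 75 HMOs x 192 contextual factors
performance = rng.normal(size=75)          # composite performance factor score

scores = PCA(n_components=12).fit_transform(StandardScaler().fit_transform(context))
fit = sm.OLS(performance, sm.add_constant(scores)).fit()
print(f"R^2 = {fit.rsquared:.3f}")
print(fit.pvalues.round(3))
```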
Abstract:
The study analyzed Hospital Compare data for Medicare fee-for-service patients at least 65 years of age to determine whether hospital performance on AMI outcome and process-of-care measures differs among Texas hospitals with respect to ownership status (for-profit vs. not-for-profit), academic status (teaching vs. non-teaching), and geographic setting (rural vs. urban). The study found a statistically significant difference between for-profit and not-for-profit hospitals in four process-of-care measures (aspirin at discharge, P = 0.028; ACE inhibitor or ARB for LVSD, P = 0.048; smoking cessation advice, P = 0.034; outpatients who got aspirin within 24 hours of arrival in the ED, P = 0.044). No significant difference in performance was found between COTH-member teaching and non-teaching hospitals for any of the eight process-of-care measures or the two outcome measures for AMI. The study was unable to compare performance based on the geographic setting of hospitals because of a lack of sufficient data for rural hospitals. The results of the study suggest that for-profit Texas hospitals might be slightly better than not-for-profit hospitals at providing possible heart attack patients with certain processes of care.
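A minimal sketch of the kind of comparison reported above, using invented counts for one process-of-care measure (aspirin at discharge) rather than the Hospital Compare data:

```python
# Sketch (invented counts): chi-square test comparing one process-of-care
# measure between for-profit and not-for-profit hospitals.
from scipy.stats import chi2_contingency

#                  met measure, did not meet
for_profit     = [940, 60]
not_for_profit = [1780, 170]

chi2, p, dof, _expected = chi2_contingency([for_profit, not_for_profit])
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```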
Abstract:
Objective. In 2009, the International Expert Committee recommended the use of the HbA1c test for the diagnosis of diabetes. Although it has been recommended for the diagnosis of diabetes, its precise test performance among Mexican Americans is uncertain. A strong "gold standard" would rely on repeated blood glucose measurements on different days, which is the recommended method for diagnosing diabetes in clinical practice. Our objective was to assess the test performance of HbA1c in detecting diabetes and pre-diabetes against repeated fasting blood glucose measurement in the Mexican American population living on the United States-Mexico border. Moreover, we wanted to identify a specific and precise threshold value of HbA1c for diabetes mellitus (DM) and pre-diabetes in this high-risk population, which might assist in better diagnosis and better management of diabetes. Research design and methods. We used the CCHC dataset for our study. In 2004, the Cameron County Hispanic Cohort (CCHC), now numbering 2,574, was established, drawn from randomly selected households on the basis of 2000 Census tract data. The CCHC study randomly selected a subset of people (aged 18-64 years) in CCHC cohort households to determine the influence of SES on diabetes and obesity. Among the participants in Cohort-2000, 67.15% are female; all are Hispanic. Individuals were defined as having diabetes mellitus (fasting plasma glucose [FPG] ≥ 126 mg/dL) or pre-diabetes (100 ≤ FPG < 126 mg/dL). HbA1c test performance was evaluated using receiver operating characteristic (ROC) curves. Moreover, change-point models were used to determine HbA1c thresholds compatible with FPG thresholds for diabetes and pre-diabetes. Results. When FPG was used to detect diabetes, the sensitivity and specificity of HbA1c ≥ 6.5% were 75% and 87%, respectively (area under the curve 0.895). Additionally, when FPG was used to detect pre-diabetes, the sensitivity and specificity of HbA1c ≥ 6.0% (the ADA-recommended threshold) were 18% and 90%, respectively. The sensitivity and specificity of HbA1c ≥ 5.7% (the International Expert Committee-recommended threshold) for detecting pre-diabetes were 31% and 78%, respectively. ROC analyses suggest HbA1c is a sound predictor of diabetes mellitus (area under the curve 0.895) but a poorer predictor of pre-diabetes (area under the curve 0.632). Conclusions. Our data support the current recommendations for the use of HbA1c in the diagnosis of diabetes for the Mexican American population, as it has shown reasonable sensitivity, specificity, and accuracy against repeated FPG measures. However, the use of HbA1c may be premature for detecting pre-diabetes in this specific population because of its poor sensitivity against FPG. It may be that HbA1c more effectively identifies those at risk of developing diabetes. Following these pre-diabetic individuals over the longer term for the detection of incident diabetes may yield a more confirmatory result.
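The threshold evaluation described above can be sketched as follows: sensitivity and specificity of an HbA1c cutoff against an FPG-defined diabetes status, plus the ROC area under the curve. The values are synthetic, not CCHC data.

```python
# Sketch (synthetic values): sensitivity, specificity, and ROC AUC of an
# HbA1c >= 6.5% cutoff against FPG-defined diabetes status.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
diabetes = rng.integers(0, 2, 500)                    # FPG-defined status (0/1)
hba1c = rng.normal(5.6, 0.5, 500) + 1.2 * diabetes    # synthetic HbA1c (%)

positive = hba1c >= 6.5
sens = (positive & (diabetes == 1)).sum() / (diabetes == 1).sum()
spec = (~positive & (diabetes == 0)).sum() / (diabetes == 0).sum()
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}, "
      f"AUC = {roc_auc_score(diabetes, hba1c):.3f}")
```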