9 results for allocation of risks
in DigitalCommons@The Texas Medical Center
Abstract:
A patient classification system was developed that integrates a patient acuity instrument with a computerized nursing distribution method based on a linear programming model. The system was designed for real-time measurement of patient acuity (workload) and allocation of nursing personnel to optimize the utilization of resources.

The acuity instrument was a prototype tool with eight categories of patients defined by patient severity and nursing intensity parameters. From this tool, the demand for nursing care was defined in patient points, with one point equal to one hour of RN time. Validity and reliability of the instrument were determined as follows: (1) content validity, by a panel of expert nurses; (2) predictive validity, through a paired t-test analysis of preshift and postshift categorization of patients; (3) initial reliability, by a one-month pilot of the instrument in a practice setting; and (4) interrater reliability, by the Kappa statistic.

The nursing distribution system was a linear programming model using a branch and bound technique for obtaining integer solutions. The objective function was to minimize the total number of nursing personnel used by optimally assigning the staff to meet the acuity needs of the units. A penalty weight was used as a coefficient of the objective function variables to define priorities for allocation of staff.

The demand constraints were requirements to meet the total acuity points needed for each unit and to have a minimum number of RNs on each unit. Supply constraints were: (1) the total availability of each type of staff and the value of that staff member, determined relative to that type of staff's ability to perform the job function of an RN (e.g., value for eight hours of RN time = 8 points, LVN = 6 points); and (2) the number of personnel available for floating between units.

The capability of the model to assign staff quantitatively and qualitatively equal to the manual method was established by a thirty-day comparison.
Sensitivity testing demonstrated appropriate adjustment of the optimal solution to changes in penalty coefficients in the objective function and to acuity totals in the demand constraints. Further investigation of the model documented correct adjustment of assignments in response to staff value changes, and cost minimization through the addition of a dollar coefficient to the objective function.
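As an illustration, the integer-programming formulation described above can be sketched in Python. The unit demands, penalty weights, and two staff types below are invented for the example, and an exhaustive search stands in for the dissertation's branch and bound solver; only the point values (eight hours of RN time = 8 points, LVN = 6 points) come from the abstract.

```python
from itertools import product

# Hypothetical data. Staff point values follow the abstract's scheme
# (eight hours of RN time = 8 points, LVN = 6 points); the penalty
# weights and unit demands are invented for illustration.
STAFF_VALUE = {"RN": 8, "LVN": 6}
PENALTY = {"RN": 1.0, "LVN": 1.2}      # objective-function penalty weights
DEMAND = {"unit_A": 20, "unit_B": 14}  # acuity points required per unit
MIN_RN = 1                             # minimum RNs required on each unit

def assign_staff(demand, max_per_type=5):
    """Find the integer staffing plan that meets each unit's acuity demand
    while minimizing the penalty-weighted headcount (an exhaustive search
    standing in for the branch-and-bound integer solver)."""
    best = None
    counts = range(max_per_type + 1)
    for rn_a, lvn_a, rn_b, lvn_b in product(counts, repeat=4):
        plan = {"unit_A": {"RN": rn_a, "LVN": lvn_a},
                "unit_B": {"RN": rn_b, "LVN": lvn_b}}
        feasible = all(
            sum(STAFF_VALUE[s] * n for s, n in plan[u].items()) >= demand[u]
            and plan[u]["RN"] >= MIN_RN
            for u in demand
        )
        if not feasible:
            continue
        cost = sum(PENALTY[s] * n for u in plan for s, n in plan[u].items())
        if best is None or cost < best[0]:
            best = (cost, plan)
    return best

cost, plan = assign_staff(DEMAND)
```

The penalty weights play the role described in the abstract: they make one staff type preferable to another per acuity point delivered, so changing them shifts the optimal assignment, which is exactly what the sensitivity testing above examined.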
Abstract:
Background. Over half of children in the United States under age five spend 32 hours a week in child care facilities, where they consume approximately 33-50% of their food intake.

Objectives. The aim of this research was to identify the effects of state nutrition policies on the provision of food in child care centers.

Subjects. Eleven directors or their designees from ten randomly selected licensed child care centers in Travis County, Texas were interviewed. Centers included both nonprofit and for-profit centers, with enrollments ranging from 19 to 82.

Methods. Centers were selected using a web-based list of licensed child care providers in the Austin area. One-on-one interviews were conducted in person with center directors using a standard set of questions developed from previous pilot work. Interview items included demographic data, questions about state policies regarding provision of foods in centers, effects of policies on child care center budgets and foods offered, and changes in the provision of food. All interviews were audiotaped and transcribed, and themes were identified using standard qualitative techniques.

Results. Four of the centers provided both meals and snacks, four provided snacks only, and two did not provide any food. Directors of centers that provided food were more likely to report adherence to the Minimum Standards than directors of centers that did not. In general, center directors reported that the regulations were loosely enforced. In contrast, center directors were more concerned about a local city-county regulation that required food permits and new standards for kitchens. Most of these local regulations were cost prohibitive and, as a result, centers had changed the types of foods provided, including providing less fresh produce and more prepackaged items.
Although implementation of local regulations had reduced the provision of fruits and vegetables to children, no adjustments were reported for allocation of resources, tuition costs, or care of the children.

Conclusions. Qualitative data from a small sample of child care directors indicate that the implementation and accountability of food- and nutrition-related guidelines for centers are sporadic and uncoordinated and can have unforeseen effects on the provision of food. A quantitative survey and dietary assessment should be conducted to verify these findings in a larger and more representative sample.
Abstract:
Background. At present, prostate cancer screening (PCS) guidelines require a discussion of risks, benefits, alternatives, and personal values, making decision aids an important tool to help convey information and to help clarify values. Objective. The overall goal of this study is to provide evidence of the reliability and validity of a PCS anxiety measure and the Decisional Conflict Scale (DCS). Methods. Using data from a randomized, controlled PCS decision aid trial that measured PCS anxiety at baseline and DCS at baseline (T0) and at two weeks (T2), four psychometric properties were assessed: (1) internal consistency reliability, indicated by factor analysis, intraclass correlations, and Cronbach's α; (2) construct validity, indicated by patterns of Pearson correlations among subscales; (3) discriminant validity, indicated by the measure's ability to discriminate between undecided men and those with a definite screening intention; and (4) factor validity and invariance, using confirmatory factor analyses (CFA). Results. The PCS anxiety measure had adequate internal consistency reliability and good construct and discriminant validity. CFAs indicated that the 3-factor model did not have adequate fit. CFAs for a general PCS anxiety measure and a PSA anxiety measure indicated adequate fit. The general PCS anxiety measure was invariant across clinics. The DCS had adequate internal consistency reliability, except for the support subscale, and had adequate discriminant validity. Good construct validity was found at the private clinic, but was only found for the feeling informed subscale at the public clinic. The traditional DCS did not have adequate fit at T0 or at T2. The alternative DCS had adequate fit at T0 but was not identified at T2. Factor loadings indicated that two subscales, feeling informed and feeling clear about values, were not distinct factors. Conclusions. Our general PCS anxiety measure can be used in PCS decision aid studies.
The alternative DCS may be appropriate for men eligible for PCS. Implications: More emphasis needs to be placed on the development of PCS anxiety items relating to testing procedures. We recommend that the two DCS versions be validated in other samples of men eligible for PCS and in other health care decisions that involve uncertainty.
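Cronbach's α, the internal consistency statistic used throughout the abstract above, can be computed directly from item-level responses as alpha = k/(k-1) * (1 - (sum of item variances)/(variance of total scores)). The sketch below uses invented Likert-style data, not the study's:

```python
from statistics import pvariance

def cronbach_alpha(rows):
    """Cronbach's alpha: k/(k-1) * (1 - sum(item variances)/variance(totals))."""
    items = list(zip(*rows))                      # columns = items
    k = len(items)
    item_var = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical 4-item Likert responses (1-5) from five respondents.
responses = [
    [4, 4, 3, 4],
    [2, 2, 2, 3],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [1, 2, 1, 2],
]
alpha = cronbach_alpha(responses)   # high alpha: items track each other
```

Because the invented items move together across respondents, α here comes out above 0.9; a subscale with weak item agreement, like the support subscale discussed above, would score much lower.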
Abstract:
Public health efforts were initiated in the United States with legislative actions for enhancing food safety and ensuring pure drinking water. Additional policy initiatives during the early 20th century helped organize and coordinate relief efforts for victims of natural disasters. By the 1950s the federal government had expanded its role in providing better health and safety to communities, and its disaster relief activities became more structured. A rise in terrorism-related incidents during the late 1990s prompted new proactive policy directions. The traditional policy and program efforts for rescue, recovery, and relief measures changed focus to include disaster preparedness and countermeasures against terrorism.

The study took a holistic approach by analyzing all major disaster-related policies and programs with regard to their structure, process, and outcome. The study determined that the United States has a strong disaster preparedness agenda, that appropriate programs are in place with adequate policy support, and that the country is prepared to meet all possible security challenges that may arise in the future. The man-made disaster of September 11th gave a major thrust to improving security and enhancing the preparedness of the country. These new efforts required large additional funding from the federal government. Most existing preparedness programs at the local and national levels are run with federal funds, which are insufficient in some cases. This discrepancy arises from the fact that federal funding for disaster preparedness programs is not currently allocated by the level of risk to individual states or according to the risks that can be assigned to critical infrastructures across the country. However, the increased role of the federal government in the public health affairs of the states is unusual, and opposed to the spirit of the Constitution, under which sovereignty is divided between the federal government and the states.
There is also a shortage of manpower in public health to engage in disaster preparedness activities, despite some remarkable progress following the September 11th disaster.

The study found a significant improvement in knowledge, and a limited number of studies showed improvement in skills, increases in confidence, and improvement in message-mapping. Among healthcare and allied healthcare professionals, short-term training on disaster preparedness increased knowledge and improved personal protective equipment use, with some limited improvement in confidence and skills. However, due to the heterogeneity of these studies, the results and interpretation of this systematic review should be interpreted with caution.
Abstract:
This study demonstrated that accurate, short-term forecasts of Veterans Affairs (VA) hospital utilization can be made using the Patient Treatment File (PTF), the inpatient discharge database of the VA. Accurate, short-term forecasts of two years or less can reduce required inventory levels, improve allocation of resources, and are essential for better financial management. These are all necessary achievements in an era of cost containment.

Six years of non-psychiatric discharge records were extracted from the PTF and used to calculate four indicators of VA hospital utilization: average length of stay, discharge rate, multi-stay rate (a measure of readmissions), and days of care provided. National and regional levels of these indicators were described and compared for fiscal year 1984 (FY84) through FY89 inclusive.

Using the observed levels of utilization for the 48 months between FY84 and FY87, five techniques were used to forecast monthly levels of utilization for FY88 and FY89. Forecasts were compared to the observed levels of utilization for these years. Monthly forecasts were also produced for FY90 and FY91.

Forecasts for days of care provided were not produced: current inpatients with very long lengths of stay contribute substantially to this indicator, and it cannot be accurately calculated.

During the six-year period between FY84 and FY89, average length of stay declined substantially, nationally and regionally. The discharge rate was relatively stable, while the multi-stay rate increased slightly during this period. FY90 and FY91 forecasts show a continued decline in the average length of stay, while the discharge rate is forecast to decline slightly and the multi-stay rate to increase very slightly.

Over a 24-month-ahead period, all three indicators were forecast within a 10 percent average monthly error. The 12-month-ahead forecast errors were slightly lower.
Average length of stay was the least easily forecast, while the multi-stay rate was the easiest indicator to forecast. No single technique performed significantly better than the others, as determined by the Mean Absolute Percent Error, a standard measure of error. However, Autoregressive Integrated Moving Average (ARIMA) models performed well overall and are recommended for short-term forecasting of VA hospital utilization.
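The Mean Absolute Percent Error used above to compare forecasting techniques is simple to compute. The monthly values below are invented for illustration and are not drawn from the PTF data:

```python
def mape(observed, forecast):
    """Mean Absolute Percent Error, in percent, over paired monthly values."""
    errors = [abs((o - f) / o) for o, f in zip(observed, forecast)]
    return 100 * sum(errors) / len(errors)

# Hypothetical monthly average-length-of-stay values (days), not PTF data.
observed = [10.0, 9.5, 9.0, 8.8]
forecast = [9.5, 9.7, 8.5, 9.0]
err = mape(observed, forecast)   # roughly 3.7 percent
```

A forecast within the study's 10 percent average monthly error would have a MAPE below 10 on this scale.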
Abstract:
The research project is an extension of a series of administrative science and health care research projects evaluating the influence of external context, organizational strategy, and organizational structure on organizational success or performance. The research relies on the assumption that there is no single best approach to the management of organizations (contingency theory). As organizational effectiveness depends on an appropriate mix of factors, organizations may be equally effective based on differing combinations of factors. The external context of the organization is expected to influence internal organizational strategy and structure, and in turn the internal measures affect performance (discriminant theory). The research considers the relationship between external context and organizational performance.

The unit of study for the research is the health maintenance organization (HMO): an organization that accepts, in exchange for a fixed, advance capitation payment, contractual responsibility to assure the delivery of a stated range of health services to a voluntarily enrolled population. With the current Federal resurgence of interest in the HMO as a major component in the health care system, attention must be directed at maximizing development of HMOs from the limited resources available. Increased skills are needed in both Federal and private evaluation of HMO feasibility in order to prevent resource investment in projects that will fail while concurrently identifying potentially successful projects that would not be considered using current standards.

The research considers 192 factors measuring the contextual milieu (social, educational, economic, legal, demographic, health, and technological factors). Through intercorrelation and principal components data reduction techniques, these were reduced to 12 variables.
Two measures of HMO performance were identified: (1) HMO status (operational or defunct), and (2) a principal components factor score considering eight measures of performance. The relationship between HMO context and performance was analyzed using correlation and stepwise multiple regression methods. In each case it was concluded that the external contextual variables are not predictive of the success or failure of the study Health Maintenance Organizations. This suggests that the performance of an HMO may rely on internal organizational factors. These findings have policy implications, as contextual measures are used as a major determinant in HMO feasibility analysis and as a factor in the allocation of limited Federal funds.
Abstract:
Groundwater constitutes approximately 30% of freshwater globally and serves as a source of drinking water in many regions. Groundwater sources are subject to contamination with human pathogens (viruses, bacteria, and protozoa) from a variety of sources that can cause diarrhea and contribute to the devastating global burden of this disease. To describe the extent of this public health concern in developing countries, a systematic review was conducted of the evidence that groundwater microbially contaminated at its source is a risk factor for enteric illness under endemic (non-outbreak) conditions in these countries. Epidemiologic studies published in English-language journals between January 2000 and January 2011, and meeting certain other criteria, were selected, resulting in eleven studies reviewed. Data were extracted on the microbes detected (and their concentrations, if reported) and on associations measured between the microbial quality of, or consumption of, groundwater and enteric illness; other relevant findings are also reported. In groundwater samples, several studies found bacterial indicators of fecal contamination (total coliforms, fecal coliforms, fecal streptococci, enterococci, and E. coli), all in a wide range of concentrations. Rotavirus and a number of enteropathogenic bacteria and parasites were found in stool samples from study subjects who had consumed groundwater, but no concentrations were reported. Consumption of groundwater was associated with an increased risk of diarrhea, with odds ratios ranging from 1.9 to 6.1. However, limitations of the selected studies, especially potential confounding factors, limited the conclusions that could be drawn from them. These results support the contention that microbial contamination of groundwater reservoirs—including with human enteropathogens and from a variety of sources—is a reality in developing countries.
While microbially contaminated groundwater poses a risk for diarrhea, other factors are also important, including water treatment, water storage practices, consumption of other water sources, water quantity and access to it, sanitation and hygiene, housing conditions, and socio-economic status. Further understanding of the interrelationships between, and the relative contributions to disease risk of, the various sources of microbial contamination of groundwater can guide the allocation of resources to the interventions with the greatest public health benefit. Several recommendations for future research, and for practitioners and policymakers, are presented.
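The odds ratios reported above come from 2x2 tables of exposure (groundwater consumption) against outcome (diarrhea). A minimal sketch with invented counts:

```python
def odds_ratio(exp_cases, exp_controls, unexp_cases, unexp_controls):
    """Cross-product odds ratio from a 2x2 exposure-by-outcome table."""
    return (exp_cases * unexp_controls) / (exp_controls * unexp_cases)

# Hypothetical counts: groundwater consumers vs. non-consumers, with and
# without diarrhea, chosen to land inside the 1.9-6.1 range reported above.
or_est = odds_ratio(30, 70, 10, 90)
```

An odds ratio above 1 indicates that the exposed group has higher odds of illness; the confounding concerns noted above are exactly why such a crude ratio, without adjustment, can overstate or understate the true association.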
Abstract:
Purpose. The measurement of quality of life has become an important topic in healthcare and in the allocation of limited healthcare resources. Improving the quality of life (QOL) of cancer patients is paramount. Cataract removal and lens implantation appear to improve the well-being of cancer patients, though a formal measurement has never been published in the US literature. In the current study, the National Eye Institute Visual Functioning Questionnaire (NEI-VFQ-25), a validated vision quality of life metric, was used to study the change in vision-related quality of life in cancer patients who underwent cataract extraction with intraocular lens implantation.

Methods. Under an IRB-approved protocol, cancer patients who underwent cataract surgery with intraocular lens implantation (by a single surgeon) from December 2008 to March 2011, and who had completed a pre- and postoperative NEI-VFQ-25, were retrospectively reviewed. Postoperative data were collected at the routine 4-6 week post-op visit. Patients' demographics, cancer history, pre- and postoperative ocular examinations, visual acuities, and the NEI-VFQ-25 with its twelve components were included in the evaluation. The responses were evaluated using the Student t test, Spearman correlation, and Wilcoxon signed rank test.

Results. 63 cases of cataract surgery (from 54 patients) at the MD Anderson Cancer Center were included in the study. Cancer patients had a significant improvement in visual acuity (P<0.0001) postoperatively, along with a significant increase in vision-related quality of life (P<0.0001). Patients also had a statistically significant improvement in ten of the twelve subcategories addressed by the NEI-VFQ-25.

Conclusions. In our study, cataract extraction and intraocular lens implantation had a significant impact on vision-related quality of life in cancer patients.
Although this study has a small sample size, it serves as a positive pilot study to evaluate and quantify the impact of a surgical intervention on QOL in cancer patients, and it may help in designing a larger study to measure vision-related QOL per health care dollar spent in cancer patients.
Abstract:
Existing data collected from 1st-year students enrolled in a major Health Science Community College in the south central United States for the Fall 2010, Spring 2011, Fall 2011, and Spring 2012 semesters, as part of the "Online Navigational Assessment Vehicle, Intervention Guidance, and Targeting of Risks (NAVIGATOR) for Undergraduate Minority Student Success" (CPHS approval number HSC-GEN-07-0158), were used for this thesis. The Personal Background and Preparation Survey (PBPS) and a two-question risk self-assessment subscale were administered to students during their 1st-year orientation. The PBPS total risk score, risk self-assessment total and overall scores, and Underrepresented Minority Student (URMS) status were recorded. The purpose of this study is to evaluate and report the predictive validity of the indicators identified above for Adverse Academic Status Events (AASE) and Nonadvancement Adverse Academic Status Events (NAASE), as well as the effectiveness of interventions targeted using the PBPS, among a diverse population of health science community college students. The predictive validity of the PBPS for AASE has previously been demonstrated among health science professions and graduate students (Johnson, Johnson, Kim, & McKee, 2009a; Johnson, Johnson, McKee, & Kim, 2009b). Data were analyzed using binary logistic regression and correlation with the SPSS 19 statistical package. Independent variables included baseline- versus intervention-year treatments, the PBPS, risk self-assessment, and URMS status. The dependent variables were binary AASE and NAASE status.

The PBPS was the first reliable diagnostic and prescriptive instrument to establish documented predictive validity for student Adverse Academic Status Events among students attending health science professional schools. These results extend the documented validity of the PBPS in predicting AASE to a health science community college student population.
Results further demonstrated that interventions introduced using the PBPS were followed by an approximately one-third reduction in the odds of Nonadvancement Adverse Academic Status Events (NAASE), controlling for URMS status and risk self-assessment scores. These results indicate that interventions introduced using the PBPS may have the potential to reduce AASE and attrition among URMS and non-URMS students attending health science community colleges on a broader scale, positively impacting the costs, shortages, and diversity of health science professionals.
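In a binary logistic regression such as the one described above, a "one-third reduction in the odds" corresponds to an odds ratio of roughly 2/3 for the intervention term. A small sketch of that conversion, with a hypothetical coefficient value rather than the study's estimate:

```python
from math import exp, log

# Hypothetical logistic-regression coefficient for the intervention
# indicator; ln(2/3) corresponds to the reported one-third odds reduction.
beta = log(2 / 3)
odds_ratio = exp(beta)       # about 0.667
reduction = 1 - odds_ratio   # about 0.333, i.e. a one-third reduction
```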