8 results for Weighted average power tests
in DigitalCommons@The Texas Medical Center

Abstract:
Objective. Long Term Acute Care Hospitals (LTACs) are subject to Medicare rules because they accept Medicare and Medicaid patients. In October 2002, Medicare changed the LTAC reimbursement formulas from a cost-based system to a Prospective Payment System (PPS). This study examines whether the PPS negatively affected the financial performance of LTAC hospitals in the period following the reimbursement change (2003–2006), as compared to the period prior to the change (1999–2002), and if so, to what extent. The study also examines whether the PPS resulted in a decreased average patient length of stay (LOS) in the LTAC hospitals over the same periods, and if so, to what extent.

Methods. The study group consists of two large LTAC hospital systems, Kindred Healthcare Inc. and Select Specialty Hospitals of Select Medical Corporation. Financial data and operational indicators were reviewed, tabulated, and dichotomized into two groups covering the two periods: 1999–2002 and 2003–2006. The financial data included net annual revenues, net income, revenue per patient per day, and profit margins. It was hypothesized that profit margins for the LTAC hospitals were reduced because of the new PPS. Operational indicators, such as annual admissions, annual patient days, and average LOS, were analyzed. It was hypothesized that LOS for the LTAC hospitals would have decreased. Case mix index, defined as the weighted average of patients' DRGs for each hospital system, was not available to cast more light on the direction of LOS.

Results. This assessment found that the anticipated negative financial impacts did not materialize; instead, financial performance improved during the PPS period (2003–2006). The income margin percentage under the PPS increased by 24% for Kindred and by 77% for Select. Thus, the study's working hypothesis of reduced income margins for the LTACs under the PPS was contradicted. As to average patient length of stay, LOS decreased from 34.7 days to 29.4 days for Kindred and from 30.5 days to 25.3 days for Select. Thus, on the issue of shorter LTAC length of stay, the study's working hypothesis was confirmed.

Conclusion. Overall, there was no negative financial effect on the LTAC hospitals during 2003–2006 following Medicare's implementation of the PPS in October 2002. On the contrary, income margins improved significantly. During the same period, LOS decreased following implementation of the PPS, consistent with the LTAC hospitals' pursuit of financial incentives.
Abstract:
An analysis of variation in hospital inpatient charges in the greater Houston area was conducted to determine whether there are consistent differences among payers. Differences in charges were examined for 59 Composite Diagnosis Related Groups (CDRGs), and two regression equations estimating charges were specified. Simple comparisons of mean charges by diagnostic category showed significant differences for 42 (71 percent) of the 59 categories examined. In 41 of the 42 significant categories, charges to Medicaid were less than charges to private insurers. Meta-analytic statistical techniques yielded a weighted average effect size of −0.7198 for the 59 diagnostic categories, indicating an overall effect that Medicaid charges were less than private insurance charges. Results of a multiple regression estimating charges showed that private insurance was a significant independent variable, along with age, length of stay, and hospital variables. Results indicated consistent differential charges in the present analysis.
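The abstract does not state the weighting scheme behind its weighted average effect size; the standard fixed-effect meta-analytic summary is an inverse-variance weighted mean of the per-category effects. A minimal sketch under that assumption (function name and inputs are illustrative, not from the study):

```python
# Fixed-effect meta-analytic summary: inverse-variance weighted mean of
# per-category effect sizes. Inputs are illustrative, not study data.
def weighted_average_effect(effects, variances):
    """Return the inverse-variance weighted average of effect sizes."""
    weights = [1.0 / v for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Example: two categories with equal variance average to the midpoint.
print(weighted_average_effect([1.0, 3.0], [1.0, 1.0]))  # -> 2.0
```

Categories with smaller sampling variance (more precise estimates) thus pull the summary effect toward themselves, which is why a simple unweighted mean of the 59 category effects would generally give a different answer.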
Abstract:
Sizes and power of selected two-sample tests of the equality of survival distributions are compared by simulation for small samples from unequally, randomly censored exponential distributions. The tests investigated include parametric tests (F, Score, Likelihood, Asymptotic), logrank tests (Mantel, Peto-Peto), and Wilcoxon-type tests (Gehan, Prentice). Equal-sized samples, n = 8, 16, 32, with 1000 (size) and 500 (power) simulation trials, are compared for 16 combinations of the censoring proportions 0%, 20%, 40%, and 60%. For n = 8 and 16, the Asymptotic, Peto-Peto, and Wilcoxon tests perform at nominal 5% size expectations, but the F, Score, and Mantel tests exceeded 5% size confidence limits for 1/3 of the censoring combinations. For n = 32, all tests showed proper size, with the Peto-Peto test most conservative in the presence of unequal censoring. Powers of all tests are compared for exponential hazard ratios of 1.4 and 2.0. There is little difference in power characteristics within the classes of tests considered. The Mantel test showed 90% to 95% power efficiency relative to the parametric tests. Wilcoxon-type tests have the lowest relative power but are robust to differential censoring patterns. A modified Peto-Peto test shows power comparable to the Mantel test. For n = 32, a specific Weibull-exponential comparison of crossing survival curves suggests that the relative powers of logrank and Wilcoxon-type tests depend on the scale parameter of the Weibull distribution. Wilcoxon-type tests appear more powerful than logrank tests for late-crossing survival curves and less powerful for early-crossing ones. Guidelines for the appropriate selection of two-sample tests are given.
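The Mantel (logrank) statistic compared above can be sketched in a few lines of pure Python: at each distinct event time, observed events in group 1 are compared with their hypergeometric expectation under the null of equal survival. This is a minimal illustration, not the study's simulation code:

```python
def logrank_statistic(times1, events1, times2, events2):
    """Two-sample Mantel (logrank) chi-square statistic.

    times*: observed times; events*: 1 = event observed, 0 = censored.
    """
    data = ([(t, e, 0) for t, e in zip(times1, events1)] +
            [(t, e, 1) for t, e in zip(times2, events2)])
    event_times = sorted({t for t, e, _ in data if e == 1})
    obs1 = exp1 = var = 0.0
    for t in event_times:
        # Numbers at risk just before t, and events at t, in each group.
        n1 = sum(1 for tt, _, g in data if tt >= t and g == 0)
        n2 = sum(1 for tt, _, g in data if tt >= t and g == 1)
        d1 = sum(1 for tt, e, g in data if tt == t and e == 1 and g == 0)
        d2 = sum(1 for tt, e, g in data if tt == t and e == 1 and g == 1)
        n, d = n1 + n2, d1 + d2
        obs1 += d1
        exp1 += d * n1 / n                      # hypergeometric expectation
        if n > 1:                               # hypergeometric variance
            var += d * (n - d) * n1 * n2 / (n * n * (n - 1))
    return (obs1 - exp1) ** 2 / var             # ~ chi-square with 1 df
```

Because every event contributes with equal weight regardless of how many subjects remain at risk, this statistic emphasizes late differences more than the Gehan/Wilcoxon family, which weights each term by the number at risk; that weighting difference drives the early- versus late-crossing behavior described above.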
Abstract:
Linkage disequilibrium methods can be used to find genes influencing quantitative trait variation in humans. Linkage disequilibrium methods can require smaller sample sizes than linkage equilibrium methods, such as the variance component approach, to find loci with a specific effect size. The increase in power comes at the expense of requiring more markers to be typed to scan the entire genome. This thesis compares different linkage disequilibrium methods to determine which factors influence the power to detect disequilibrium. The costs of disequilibrium and equilibrium tests were compared to determine whether the savings in phenotyping costs when using disequilibrium methods outweigh the additional genotyping costs.

Nine linkage disequilibrium tests were examined by simulation. Five tests involve selecting isolated unrelated individuals, while four involve the selection of parent-child trios (TDT). All nine tests were found to identify disequilibrium at the correct significance level in Hardy-Weinberg populations. Increasing linked genetic variance and trait allele frequency increased the power to detect disequilibrium, while increasing the number of generations and the distance between marker and trait loci decreased it. Discordant sampling was used for several of the tests; the more stringent the sampling, the greater the power to detect disequilibrium in a sample of given size. The power to detect disequilibrium was not affected by the presence of polygenic effects.

When the trait locus had more than two trait alleles, the power of the tests maximized to less than one. For the simulation methods used here, when there were more than two trait alleles there was a probability equal to 1 − heterozygosity of the marker locus that both trait alleles were in disequilibrium with the same marker allele, making the marker uninformative for disequilibrium.

The five tests using isolated unrelated individuals were found to have excess error rates when there was disequilibrium due to population admixture. Increased error rates also resulted from increased unlinked major gene effects, discordant trait allele frequency, and increased disequilibrium. Polygenic effects did not affect the error rates. The TDT (Transmission Disequilibrium Test)-based tests were not liable to any increase in error rates.

For all sample ascertainment costs, for recent mutations (<100 generations), linkage disequilibrium tests were less expensive to carry out than the variance component test. Candidate gene scans saved even more money. The use of recently admixed populations also decreased the cost of performing a linkage disequilibrium test.
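The robustness of the trio-based tests to admixture comes from the TDT's within-family design. For a biallelic marker, the classic TDT is a McNemar-type comparison of how often heterozygous parents transmit versus fail to transmit the candidate allele; a minimal sketch (the thesis's quantitative-trait TDT variants are more elaborate):

```python
def tdt_statistic(transmitted, untransmitted):
    """Classic Transmission Disequilibrium Test chi-square (1 df).

    transmitted / untransmitted: counts of heterozygous parents who did /
    did not pass the candidate allele to the affected (or selected) child.
    Under no linkage or no association, each heterozygous parent transmits
    the allele with probability 1/2, so the two counts should be similar.
    """
    b, c = transmitted, untransmitted
    return (b - c) ** 2 / (b + c)

# Illustrative counts (hypothetical): 30 transmissions vs. 10 non-transmissions.
print(tdt_statistic(30, 10))  # -> 10.0, well beyond the 3.84 cut-off at 5%
```

Because only transmissions within heterozygous parents are scored, population stratification shifts allele frequencies between subpopulations but not the 1/2 transmission probability, which is why the TDT-based tests above show no inflation of error rates under admixture.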
Abstract:
Linkage and association studies are major analytical tools in the search for susceptibility genes for complex diseases. With the availability of large collections of single nucleotide polymorphisms (SNPs), the rapid progress of high-throughput genotyping technologies, and the ambitious goals of the International HapMap Project, genetic markers covering the whole genome will be available for genome-wide linkage and association studies. To avoid inflating the type I error rate in genome-wide linkage and association studies, the significance level must be adjusted for each independent linkage and/or association test, which has led to suggested genome-wide significance cut-offs as low as 5 × 10^−7. Almost no linkage and/or association study can meet such a stringent threshold with standard statistical methods, so developing new statistics with higher power is urgently needed. This dissertation proposes and explores a class of novel test statistics that can be used on both population-based and family-based genetic data by employing a completely new strategy: nonlinear transformations of the sample means are used to construct test statistics for linkage and association studies. Extensive simulation studies illustrate the properties of the nonlinear test statistics. Power calculations are performed using both analytical and empirical methods. Finally, real data sets are analyzed with the nonlinear test statistics. Results show that the nonlinear test statistics have correct type I error rates, and most of the studied nonlinear test statistics have higher power than the standard chi-square test. This dissertation introduces a new idea for designing novel test statistics with high power and may open new ways of mapping susceptibility genes for complex diseases.
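The baseline the dissertation benchmarks against, the standard Pearson chi-square test of association, reduces for a 2×2 allele-by-status table to a one-line formula. A minimal sketch with illustrative counts (the dissertation's nonlinear statistics themselves are not reproduced here):

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square (1 df) for a 2x2 table laid out as
    [[a, b], [c, d]] -- e.g. allele counts in cases (a, b) and
    controls (c, d). The standard association-test baseline."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Balanced table: no association, statistic is 0.
print(chi_square_2x2(10, 10, 10, 10))  # -> 0.0
```

At a genome-wide threshold of 5 × 10^−7 this 1-df statistic must exceed roughly 25 to reach significance, which illustrates why higher-power alternatives are sought.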
Abstract:
Background. EAP programs for airline pilots in companies with a well-developed recovery management program are known to reduce pilot absenteeism following treatment. Given the costs and safety consequences to society, it is important to identify pilots who may be experiencing an alcohol or drug (AOD) disorder and get them into treatment.

Hypotheses. This study investigated the predictive power of workplace absenteeism in identifying AOD disorders. The first hypothesis was that higher absenteeism in a 12-month period is associated with a higher risk that an employee is experiencing an AOD disorder. The second hypothesis was that AOD treatment would reduce subsequent absence rates and the costs of replacing pilots on missed flights.

Methods. A case-control design using eight years of monthly archival absence data (53,000 pay records) was conducted with a sample of N = 76 employees having an AOD diagnosis (cases) matched 1:4 with N = 304 non-diagnosed employees (controls) of the same profession and company (male commercial airline pilots). Cases and controls were matched on age, rank, and date of hire. Absence rate was defined as sick time hours used over the sum of the minimum guarantee pay hours, annualized using the months the pilot worked for the year. Conditional logistic regression was used to determine whether absence predicts employees experiencing an AOD disorder, starting 3 years prior to the cases receiving the AOD diagnosis. A repeated measures ANOVA, t tests, and rate ratios (with 95% confidence intervals) were used to determine differences between cases and controls in absence usage for 3 years pre- and 5 years post-treatment. Mean replacement costs were calculated for sick leave usage 3 years pre- and 5 years post-treatment to estimate the cost of sick leave from the perspective of the company.

Results.
Sick leave, as measured by absence rate, predicted the risk of being diagnosed with an AOD disorder (OR 1.10, 95% CI = 1.06, 1.15) during the 12 months prior to receiving the diagnosis. Mean absence rates for diagnosed employees increased over the three years before treatment, particularly in the year before treatment, whereas the controls' did not (three years, x̄ = 6.80 vs. 5.52; two years, x̄ = 7.81 vs. 6.30; one year, x̄ = 11.00 for cases vs. 5.51 for controls). In the first year post-treatment compared to the year prior to treatment, rate ratios indicated a significant (60%) post-treatment reduction in absence rates (OR = 0.40, CI = 0.28, 0.57). Absence rates for cases remained lower than controls' for the first three years after completion of treatment. Upon discharge from the FAA and company's three-year AOD monitoring program, cases' absence rates increased slightly during the fourth year (controls, x̄ = 0.09, SD = 0.14; cases, x̄ = 0.12, SD = 0.21). However, in the following year their mean absence rates were again below those of the controls (controls, x̄ = 0.08, SD = 0.12; cases, x̄ = 0.06, SD = 0.07). The costs associated with replacing pilots calling in sick fell significantly, by 60%, between the year of diagnosis for the cases and the first year after returning to work, and replacement costs continued to fall over the next two years for the treated employees.

Conclusions. This research demonstrates the potential of workplace absences as an active organizational surveillance mechanism to assist managers and supervisors in identifying employees who may be experiencing, or at risk of experiencing, an alcohol/drug disorder. Currently, many workplaces use only performance problems and ignore the employee's absence record.
A referral to an EAP or alcohol/drug evaluation based on the employee's absence/sick leave record, as incorporated into company policy, can provide another useful indicator, one that may also carry less stigma and thus reduce barriers to seeking help. This research also confirms two conclusions heretofore based only on cross-sectional studies: (1) higher absence rates are associated with employees experiencing an AOD disorder; (2) treatment is associated with lower costs for replacing absent pilots. Due to the uniqueness of the employee population studied (commercial airline pilots) and the organizational documentation of absence, the generalizability of this study to other professions and occupations should be considered limited.

Transition to Practice. The odds ratios for the relationship between absence rates and an AOD diagnosis are precise; the OR for the year of diagnosis indicates that the likelihood of being diagnosed increases 10% for every hour change in sick leave taken. In practice, however, a pilot uses approximately 20 hours of sick leave for one trip, because the replacement will have to be paid the guaranteed minimum of 20 hours. Thus, the rate based on hourly changes is precise but not practical.

To provide the organization with practical recommendations, the yearly mean absence rates were used. A pilot flies on average 90 hours a month, 1080 annually. Cases used almost twice the mean rate of sick time in the year prior to diagnosis (T−1) compared to controls (cases, x̄ = .11; controls, x̄ = .06). Cases are expected to use on average 119 hours annually (total annual hours × mean annual absence rate), while controls will use 60 hours. The controls' 60 hours could translate to 3 trips of 20 hours each. Management could use a standard of 80 hours or more of sick time claimed in a year as the threshold for unacceptable absence, a 25% increase over the controls (a cost to the company of approximately $4,000). At the 80-hour mark, the Chief Pilot would be able to call the pilot in for a routine check into the nature of the pilot's excessive absence. This management action would be based on a company standard rather than a behavioral or performance issue. Using absence data in this fashion would make it an active surveillance mechanism.
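The rate ratios with 95% confidence intervals reported above are commonly computed with the log-normal approximation for a ratio of two incidence rates. A minimal sketch under that assumption (not necessarily the authors' exact computation; counts are illustrative):

```python
import math

def rate_ratio_ci(events1, time1, events2, time2, z=1.96):
    """Incidence rate ratio for group 1 vs. group 2 with an approximate
    95% CI (log-normal method: SE of log RR = sqrt(1/e1 + 1/e2)).

    events*: event counts (e.g. sick-leave episodes); time*: person-time.
    """
    rr = (events1 / time1) / (events2 / time2)
    se_log_rr = math.sqrt(1.0 / events1 + 1.0 / events2)
    lower = math.exp(math.log(rr) - z * se_log_rr)
    upper = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lower, upper

# Hypothetical example: post-treatment rate half the pre-treatment rate.
print(rate_ratio_ci(10, 100.0, 20, 100.0))
```

A ratio below 1 with an upper confidence limit below 1, like the 0.40 (0.28, 0.57) reported above, indicates a statistically significant reduction.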
Abstract:
Although the area under the receiver operating characteristic curve (AUC) is the most popular measure of the performance of prediction models, it has limitations, especially when used to evaluate the added discrimination of a new biomarker in a model. Pencina et al. (2008) proposed two indices, the net reclassification improvement (NRI) and the integrated discrimination improvement (IDI), to supplement the improvement in the AUC (IAUC). Their NRI and IDI are based on binary outcomes in case-control settings, which do not involve time-to-event outcomes. However, many disease outcomes are time-dependent, and the onset time can be censored. Measuring the discrimination potential of a prognostic marker without considering time to event can lead to biased estimates. In this dissertation, we have extended the NRI and IDI to survival analysis settings and derived the corresponding sample estimators and asymptotic tests. Simulation studies were conducted to compare the performance of the time-dependent NRI and IDI with Pencina's NRI and IDI. For illustration, we have applied the proposed method to a breast cancer study.

Key words: Prognostic model, Discrimination, Time-dependent NRI and IDI
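Pencina's original categorical NRI, the binary-outcome baseline this dissertation extends, scores reclassification after a marker is added: upward risk-category movement counts as correct for subjects with events, downward movement as correct for those without. A minimal sketch with illustrative counts (the time-dependent extension derived in the dissertation is not reproduced here):

```python
def net_reclassification_improvement(events_up, events_down, n_events,
                                     nonevents_up, nonevents_down,
                                     n_nonevents):
    """Categorical NRI (Pencina et al., 2008) for binary outcomes.

    events_up / events_down: subjects WITH the event moved to a higher /
    lower risk category by the new model; similarly for non-events.
    NRI = P(up|event) - P(down|event) + P(down|nonevent) - P(up|nonevent).
    """
    nri_events = (events_up - events_down) / n_events
    nri_nonevents = (nonevents_down - nonevents_up) / n_nonevents
    return nri_events + nri_nonevents

# Hypothetical example: among 100 events, 30 move up and 10 move down;
# among 100 non-events, 25 move down and 5 move up.
print(net_reclassification_improvement(30, 10, 100, 5, 25, 100))  # -> 0.4
```

With censored time-to-event data, "event" status by a given time is not observed for everyone, which is the bias problem motivating the time-dependent estimators described above.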