6 results for Mean Absolute Scaled Error (MASE)

in DigitalCommons@The Texas Medical Center


Relevance:

100.00%

Abstract:

This study demonstrated that accurate, short-term forecasts of Veterans Affairs (VA) hospital utilization can be made using the Patient Treatment File (PTF), the inpatient discharge database of the VA. Accurate, short-term forecasts of two years or less can reduce required inventory levels and improve allocation of resources, and they are essential for better financial management. These are all necessary achievements in an era of cost containment.

Six years of non-psychiatric discharge records were extracted from the PTF and used to calculate four indicators of VA hospital utilization: average length of stay, discharge rate, multi-stay rate (a measure of readmissions), and days of care provided. National and regional levels of these indicators were described and compared for fiscal year 1984 (FY84) through FY89 inclusive.

Using the observed levels of utilization for the 48 months from FY84 through FY87, five techniques were used to forecast monthly levels of utilization for FY88 and FY89. Forecasts were compared to the observed levels of utilization for these years. Monthly forecasts were also produced for FY90 and FY91.

Forecasts for days of care provided were not produced: current inpatients with very long lengths of stay contribute a substantial amount to this indicator, so it cannot be accurately calculated.

During the six-year period from FY84 to FY89, average length of stay declined substantially, both nationally and regionally. The discharge rate was relatively stable, while the multi-stay rate increased slightly during this period. FY90 and FY91 forecasts show a continued decline in the average length of stay, while the discharge rate is forecast to decline slightly and the multi-stay rate to increase very slightly.

Over a 24-month-ahead period, all three indicators were forecast within a 10 percent average monthly error. The 12-month-ahead forecast errors were slightly lower. Average length of stay was the least easily forecast indicator, while the multi-stay rate was the easiest to forecast.

No single technique performed significantly better than the others as determined by the Mean Absolute Percent Error, a standard measure of error. However, Autoregressive Integrated Moving Average (ARIMA) models performed well overall and are recommended for short-term forecasting of VA hospital utilization.
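As an illustration of the error measure used above, the sketch below computes the Mean Absolute Percent Error over a 24-month-ahead horizon in Python. The monthly values, the declining trend, and the simple trend forecast are invented for the example and are not data from the PTF study.

```python
import numpy as np

rng = np.random.default_rng(0)

def mape(actual, forecast):
    """Mean Absolute Percent Error: average monthly percent error over the horizon."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

# Invented 24-month horizon: a slowly declining average length of stay plus noise.
months = np.arange(24)
observed = 20.0 - 0.1 * months + rng.normal(0.0, 0.4, size=24)
forecast = 20.0 - 0.1 * months   # a simple linear-trend forecast; an ARIMA(p, d, q)
                                 # fit could stand in here instead

print(f"24-month-ahead MAPE: {mape(observed, forecast):.1f}%")
```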

Relevance:

30.00%

Abstract:

Intensity modulated radiation therapy (IMRT) is a technique that delivers a highly conformal dose distribution to a target volume while attempting to maximally spare the surrounding normal tissues. IMRT is a common treatment modality for head and neck (H&N) cancers, and the presence of many critical structures in this region requires accurate treatment delivery. The Radiological Physics Center (RPC) acts as both a remote and an on-site quality assurance agency that credentials institutions participating in clinical trials. To date, about 30% of all IMRT participants have failed the RPC’s remote audit using the IMRT H&N phantom. The purpose of this project is to evaluate possible causes of the H&N IMRT delivery errors observed by the RPC, specifically IMRT treatment plan complexity and the use of improper dosimetry data from machines that were thought to be matched but in reality were not.

Eight H&N IMRT plans with a range of complexity defined by total MU (1460-3466), number of segments (54-225), and modulation complexity score (MCS) (0.181-0.609) were created in Pinnacle v.8m. These plans were delivered to the RPC’s H&N phantom on a single Varian Clinac. One of the IMRT plans (1851 MU, 88 segments, and MCS = 0.469) was equivalent to the median H&N plan from 130 previous RPC H&N phantom irradiations. This average IMRT plan was also delivered on four matched Varian Clinac machines, and its dose distribution was calculated using a different 6 MV beam model. Radiochromic film and TLD within the phantom were used to analyze the dose profiles and absolute doses, respectively. The measured and calculated doses were compared to evaluate the dosimetric accuracy.

All deliveries met the RPC acceptance criteria of ±7% absolute dose difference and 4 mm distance-to-agreement (DTA). Additionally, gamma index analysis was performed for all deliveries using ±7%/4 mm and ±5%/3 mm criteria. Increasing the treatment plan complexity by varying the MU, the number of segments, or the MCS resulted in no clear trend toward an increase in dosimetric error as determined by the absolute dose difference, DTA, or gamma index. Varying the delivery machine as well as the beam model (a Clinac 6EX 6 MV beam model vs. a Clinac 21EX 6 MV model) also did not show any clear trend toward increased dosimetric error using the same criteria indicated above.
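The gamma-index comparison described above can be sketched in one dimension as follows. The Gaussian toy profiles, the 1 mm grid, and the function name gamma_1d are assumptions made for illustration rather than the RPC’s analysis code; only the ±7%/4 mm tolerances come from the abstract.

```python
import numpy as np

def gamma_1d(x_meas, d_meas, x_calc, d_calc, dd_percent=7.0, dta_mm=4.0):
    """1-D gamma index (global normalization) of a measured profile vs. a
    calculated one; gamma <= 1 at a point counts as a pass."""
    d_tol = dd_percent / 100.0 * d_meas.max()   # global dose-difference tolerance
    gammas = np.empty(len(d_meas), dtype=float)
    for i, (xm, dm) in enumerate(zip(x_meas, d_meas)):
        dist2 = ((x_calc - xm) / dta_mm) ** 2   # squared spatial term (mm)
        dose2 = ((d_calc - dm) / d_tol) ** 2    # squared dose-difference term
        gammas[i] = np.sqrt(np.min(dist2 + dose2))
    return gammas

# Toy 1 mm-grid profiles: a calculated Gaussian and a measurement shifted by 1 mm.
x = np.arange(-50.0, 50.0, 1.0)
calc = 100.0 * np.exp(-x ** 2 / (2 * 15.0 ** 2))
meas = 100.0 * np.exp(-(x - 1.0) ** 2 / (2 * 15.0 ** 2))

g = gamma_1d(x, meas, x, calc)
print(f"gamma pass rate at 7%/4 mm: {100.0 * np.mean(g <= 1.0):.1f}%")
```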

Relevance:

30.00%

Abstract:

Environmental data sets of pollutant concentrations in air, water, and soil frequently include unquantified sample values reported only as being below the analytical method detection limit. These values, referred to as censored values, should be considered in the estimation of distribution parameters, as each represents some value of pollutant concentration between zero and the detection limit. Most of the currently accepted methods for estimating the population parameters of environmental data sets containing censored values rely upon the assumption of an underlying normal (or transformed normal) distribution. This assumption can result in unacceptable levels of error in parameter estimation due to the unbounded left tail of the normal distribution. With the beta distribution, which is bounded over the same range as a distribution of concentrations, $[0 \le x \le 1]$, parameter estimation errors resulting from improper distribution bounds are avoided.

This work developed a method that uses the beta distribution to estimate population parameters from censored environmental data sets and evaluated its performance in comparison to currently accepted methods that rely upon an underlying normal (or transformed normal) distribution. Data sets were generated assuming typical values encountered in environmental pollutant evaluation for the mean, standard deviation, and number of variates. For each set of model values, data sets were generated assuming that the data were distributed either normally, lognormally, or according to a beta distribution. For varying levels of censoring, two established methods of parameter estimation, regression on normal order statistics and regression on lognormal order statistics, were used to estimate the known mean and standard deviation of each data set. The method developed for this study, employing a beta distribution assumption, was also used to estimate parameters, and the relative accuracy of all three methods was compared.

For data sets of all three distribution types, and for censoring levels up to 50%, the performance of the new method equaled, if not exceeded, that of the two established methods. Because of its robustness in parameter estimation regardless of distribution type or censoring level, the method employing the beta distribution should be considered for full development in estimating parameters for censored environmental data sets.
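A minimal sketch of the censored-likelihood idea behind a beta-distribution fit follows, assuming concentrations already scaled to $[0, 1]$, a single detection limit dl, and shape parameters estimated by maximum likelihood with SciPy. It illustrates the general technique only and is not the estimation procedure developed in the study.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(1)

# Toy data: concentrations scaled to [0, 1], left-censored at a detection limit.
true_a, true_b, dl = 2.0, 8.0, 0.10
x = stats.beta.rvs(true_a, true_b, size=200, random_state=rng)
censored = x < dl                      # values reported only as "< DL"
observed = x[~censored]

def neg_log_lik(params):
    a, b = np.exp(params)              # optimize on the log scale to keep a, b > 0
    ll = stats.beta.logpdf(observed, a, b).sum()          # detected values
    ll += censored.sum() * stats.beta.logcdf(dl, a, b)    # each censored value: P(X < DL)
    return -ll

res = optimize.minimize(neg_log_lik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
a_hat, b_hat = np.exp(res.x)
mean_hat = a_hat / (a_hat + b_hat)     # beta mean from the fitted shape parameters
print(f"fitted a={a_hat:.2f}, b={b_hat:.2f}, mean={mean_hat:.3f} "
      f"(true mean {true_a / (true_a + true_b):.3f})")
```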

Relevance:

30.00%

Abstract:

The purpose of this study is to investigate the effects of predictor variable correlations and patterns of missingness with dichotomous and/or continuous data in small samples when missing data are multiply imputed. Missing values of the predictor variables are multiply imputed under three different multivariate models: the multivariate normal model for continuous data, the multinomial model for dichotomous data, and the general location model for mixed dichotomous and continuous data. Following the multiple imputation process, Type I error rates of the regression coefficients obtained with logistic regression analysis are estimated under various conditions of correlation structure, sample size, type of data, and pattern of missing data. The distributional properties of the average mean, variance, and correlations among the predictor variables are assessed after the multiple imputation process.

For continuous predictor data under the multivariate normal model, Type I error rates are generally within the nominal values with samples of size n = 100. Smaller samples of size n = 50 resulted in more conservative estimates (i.e., lower than the nominal value). Correlation and variance estimates of the original data are retained after multiple imputation with less than 50% missing continuous predictor data. For dichotomous predictor data under the multinomial model, Type I error rates are generally conservative, which in part is due to the sparseness of the data. The correlation structure of the predictor variables is not well retained in multiply imputed data from small samples with more than 50% missing data under this model. For mixed continuous and dichotomous predictor data, the results are similar to those found under the multivariate normal model for continuous data and under the multinomial model for dichotomous data.

With all data types, a fully observed variable included alongside the variables subject to missingness in the multiple imputation process and the subsequent statistical analysis produced liberal (larger than nominal) Type I error rates under a specific pattern of missing data. It is suggested that future studies focus on the effects of multiple imputation in multivariate settings with more realistic data characteristics and a variety of multivariate analyses, assessing both Type I error and power.
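The overall workflow above — impute the missing predictor several times, fit the logistic model to each completed data set, and pool with Rubin's rules — can be sketched as below. The normal-theory conditional draw, the 30% missingness rate, and the n = 100 null design are simplifying assumptions for illustration, not the imputation models used in the study.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Toy data: n = 100, two correlated continuous predictors, a null logistic model
# (true coefficients zero), with x2 missing completely at random for ~30% of cases.
n, rho, miss = 100, 0.5, 0.30
X = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
y = rng.binomial(1, 0.5, size=n)            # outcome unrelated to X under H0
x2_missing = rng.random(n) < miss

M = 5                                        # number of imputations
betas, variances = [], []
for m in range(M):
    Xm = X.copy()
    # Impute x2 from its conditional normal distribution given x1, using
    # complete-case estimates (a simplified stand-in for proper MI draws).
    cc = ~x2_missing
    b = np.cov(X[cc, 0], X[cc, 1])[0, 1] / np.var(X[cc, 0], ddof=1)
    a = X[cc, 1].mean() - b * X[cc, 0].mean()
    resid_sd = np.std(X[cc, 1] - (a + b * X[cc, 0]))
    Xm[x2_missing, 1] = (a + b * X[x2_missing, 0]
                         + rng.normal(0.0, resid_sd, size=x2_missing.sum()))
    fit = sm.Logit(y, sm.add_constant(Xm)).fit(disp=0)
    betas.append(fit.params[2])              # coefficient of the imputed predictor
    variances.append(fit.cov_params()[2, 2])

# Rubin's rules: pooled estimate and total variance across the M imputations.
qbar = np.mean(betas)
total_var = np.mean(variances) + (1 + 1 / M) * np.var(betas, ddof=1)
z = qbar / np.sqrt(total_var)
print(f"pooled beta = {qbar:.3f}, z = {z:.2f}")  # |z| > 1.96 would be a Type I error here
```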

Relevance:

30.00%

Abstract:

Background. Over 39.9% of the adult population aged forty or older in the United States has refractive error, yet little is known about the etiology of this condition, its associated risk factors, and the mechanisms involved, owing to the paucity of data on changes in refractive error in the adult population over time.

Aim. To evaluate risk factors for refractive error change over a long-term, 5-year period among persons 43 or older, by testing the hypothesis that age, gender, systemic diseases, nuclear sclerosis, and baseline refractive error are all significantly associated with refractive error changes in patients at a private optometric office in Dallas, Texas.

Methods. A retrospective chart review of subjective refraction, eye health, and self-reported health history was done on patients at a private optometric office who were 43 or older in 2000 and who had eye examinations in both 2000 and 2005. Aphakic and pseudophakic eyes were excluded, as were eyes with best corrected Snellen visual acuity of 20/40 or worse. After exclusions, refractions were obtained for 114 right eyes and 114 left eyes. Spherical equivalent (sphere plus half the cylinder power) was used as the measure of refractive error.

Results. Similar changes in refractive error were observed for the two eyes. The 5-year change in spherical power was in a hyperopic direction for younger age groups and in a myopic direction for older subjects, P < 0.0001. The gender-adjusted mean change in refractive error in right eyes of persons aged 43 to 54, 55 to 64, 65 to 74, and 75 or older at baseline was +0.43 D, +0.46 D, -0.09 D, and -0.23 D, respectively. Refractive change was strongly related to baseline nuclear cataract severity; grades 4 to 5 were associated with a myopic shift (-0.38 D, P < 0.0001). The mean age-adjusted change in refraction was +0.27 D for hyperopic eyes, +0.56 D for emmetropic eyes, and +0.26 D for myopic eyes.

Conclusions. This report has documented refractive error changes in an older population and confirmed reported trends of a hyperopic shift before age 65 and a myopic shift thereafter, associated with the development of nuclear cataract.
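A small worked example of the spherical-equivalent measure used above (sphere plus half the cylinder power) is sketched below; the refraction values are hypothetical and are not patient data from the chart review.

```python
def spherical_equivalent(sphere_d, cylinder_d):
    """Spherical equivalent in diopters: sphere plus half the cylinder power."""
    return sphere_d + 0.5 * cylinder_d

# Hypothetical right-eye refractions for one patient in 2000 and in 2005.
se_2000 = spherical_equivalent(+1.00, -0.50)   # +0.75 D
se_2005 = spherical_equivalent(+1.50, -0.50)   # +1.25 D
print(f"5-year change: {se_2005 - se_2000:+.2f} D")  # +0.50 D, a hyperopic shift
```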

Relevance:

30.00%

Abstract:

A large number of ridge regression estimators have been proposed and used with little knowledge of their true distributions. Because of this lack of knowledge, these estimators cannot be used to test hypotheses or to form confidence intervals.

This paper presents a basic technique for deriving the exact distribution functions for a class of generalized ridge estimators. The technique is applied to five prominent generalized ridge estimators, and graphs of the resulting distribution functions are presented. The actual behavior of these estimators is found to be considerably different from the behavior generally assumed for ridge estimators.

This paper also uses the derived distributions to examine the mean squared error properties of the estimators. A technique for developing confidence intervals based on the generalized ridge estimators is also presented.
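For context, a generalized ridge estimator takes the form (X'X + K)^(-1) X'y with K a diagonal matrix of shrinkage constants. The short simulation below compares its mean squared error with ordinary least squares (K = 0) on a collinear design; the design, the single shrinkage value k = 1, and the function name are illustrative assumptions and do not reproduce the five estimators or the exact distributions examined in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def generalized_ridge(X, y, k):
    """Generalized ridge estimate (X'X + K)^{-1} X'y with K = diag(k)."""
    p = X.shape[1]
    K = np.diag(np.full(p, k) if np.isscalar(k) else np.asarray(k, dtype=float))
    return np.linalg.solve(X.T @ X + K, X.T @ y)

# Small simulation: compare mean squared error of OLS (k = 0) and ridge (k = 1)
# under a collinear design, illustrating the bias-variance trade-off of shrinkage.
n, beta_true = 50, np.array([1.0, 1.0, 0.0])
reps = 2000
mse = {0.0: 0.0, 1.0: 0.0}
for _ in range(reps):
    z = rng.normal(size=(n, 1))
    X = np.hstack([z + 0.05 * rng.normal(size=(n, 2)),   # two nearly collinear columns
                   rng.normal(size=(n, 1))])              # one independent column
    y = X @ beta_true + rng.normal(size=n)
    for k in mse:
        b = generalized_ridge(X, y, k)
        mse[k] += np.sum((b - beta_true) ** 2) / reps

print({k: round(v, 3) for k, v in mse.items()})  # k = 0 is OLS; k = 1 shrinks the estimates
```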