920 results for Multivariate curve resolution-alternating least squares
Abstract:
The purpose of this research is to examine the relative profitability of firms within the nursing facility industry in Texas. An examination is made of the variables expected to affect profitability and of importance to the design and implementation of regulatory policy. To facilitate this inquiry, the specific questions addressed are: (1) Do differences in ownership form affect profitability (defined as operating income before fixed costs)? (2) What impact does regional location have on profitability? (3) Do patient case-mix and access to care by Medicaid patients differ between proprietary and non-profit firms and between facilities located in urban versus rural regions, and what association exists between these variables and profitability? (4) Are economies of scale present in the nursing home industry? (5) Do nursing facilities operate in a competitive output market, characterized by the inability of a single firm to exert influence over market price? Prior studies have principally employed a cost function to assess efficiency differences between classifications of nursing facilities. The inherent weakness in this approach is that it considers only technical efficiency, not both technical and price efficiency, which are the two components of overall economic efficiency. One firm is more technically efficient than another if it is able to produce a given quantity of output at the least possible cost. Price efficiency means that scarce resources are being directed toward their most valued use. Assuming similar prices in both input and output markets, differences in overall economic efficiency between firm classes can be assessed through profitability, hence a profit function. Using the framework of the profit function, data from 1990 Medicaid Cost Reports for Texas, and the analytic technique of ordinary least squares regression, the findings of the study indicated (1) similar profitability between nursing facilities organized as for-profit versus non-profit and located in urban versus rural regions, (2) an inverse association of both payor mix and patient case mix with profitability, (3) strong evidence for the presence of scale economies, and (4) the existence of a competitive market structure. The paper concludes with implications regarding reimbursement methodology and construction moratorium policies in Texas.
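As an illustration only: the profit-function analysis described above amounts to regressing operating income on facility characteristics by ordinary least squares. The sketch below uses simulated data and hypothetical variable names (for_profit, urban, medicaid_share, case_mix_index, beds), not the study's actual specification or Medicaid Cost Report variables.

```python
# Minimal OLS profit-function sketch with simulated data (illustrative only).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
data = {
    "for_profit": rng.integers(0, 2, n),         # 1 = proprietary, 0 = non-profit
    "urban": rng.integers(0, 2, n),              # 1 = urban, 0 = rural
    "medicaid_share": rng.uniform(0.2, 0.9, n),  # payor-mix proxy
    "case_mix_index": rng.uniform(0.8, 1.6, n),  # patient-acuity proxy
    "beds": rng.integers(40, 240, n),            # facility size
}
X = sm.add_constant(np.column_stack(list(data.values())))
# Simulated operating income before fixed costs (made up, not real data)
y = 100 + 30 * np.log(data["beds"]) - 40 * data["medicaid_share"] + rng.normal(0, 10, n)

model = sm.OLS(y, X).fit()
print(model.summary(xname=["const"] + list(data.keys())))
```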
Abstract:
The desire to promote efficient allocation of health resources and effective patient care has focused attention on home care as an alternative to acute hospital service. In particular, clinical home care is suggested as a substitute for the final days of hospital stay. This dissertation evaluates the relationship between hospital and home care services for residents of British Columbia, Canada, beginning in 1993/94, using data from the British Columbia Linked Health Database. Lengths of stay for patients referred to home care following hospital discharge are compared to those for patients not referred to home care. Ordinary least squares regression analysis adjusts for age, gender, admission severity, comorbidity, complications, income, and other patient, physician, and hospital characteristics. Home care clients tend to have longer stays in hospital than patients not referred to home care (β = 2.54, p = 0.0001). Longer hospital stays are evident for all home care client groups as well as for both older and younger patients. Sensitivity analyses for referral time to direct care and for extreme lengths of stay are consistent with these findings. Two-stage regression analysis indicates that selection bias is not significant. Patients referred to clinical home care also have different health service utilization following discharge compared to patients not referred to home care. Home care nursing clients use more medical services to complement home care. Rehabilitation clients initially substitute home care for physiotherapy services but later are more likely to be admitted to residential care. All home care clients are more likely to be readmitted to hospital during the one-year follow-up period. There is also a strong complementary association between direct care referral and homemaker support. Rehabilitation clients have a greater risk of dying during the year following discharge. These results suggest that home care is currently used as a complement to, rather than a substitute for, some acute health services. Organizational and resource issues may contribute to the longer stays of home care clients. Program planning and policies are required if home care is to provide an effective substitute for acute hospital days.
Abstract:
Application of biogeochemical models to the study of marine ecosystems is pervasive, yet objective quantification of these models' performance is rare. Here, 12 lower trophic level models of varying complexity are objectively assessed in two distinct regions (equatorial Pacific and Arabian Sea). Each model was run within an identical one-dimensional physical framework. A consistent variational adjoint implementation assimilating chlorophyll-a, nitrate, export, and primary productivity was applied and the same metrics were used to assess model skill. Experiments were performed in which data were assimilated from each site individually and from both sites simultaneously. A cross-validation experiment was also conducted whereby data were assimilated from one site and the resulting optimal parameters were used to generate a simulation for the second site. When a single pelagic regime is considered, the simplest models fit the data as well as those with multiple phytoplankton functional groups. However, those with multiple phytoplankton functional groups produced lower misfits when the models are required to simulate both regimes using identical parameter values. The cross-validation experiments revealed that as long as only a few key biogeochemical parameters were optimized, the models with greater phytoplankton complexity were generally more portable. Furthermore, models with multiple zooplankton compartments did not necessarily outperform models with single zooplankton compartments, even when zooplankton biomass data are assimilated. Finally, even when different models produced similar least squares model-data misfits, they often did so via very different element flow pathways, highlighting the need for more comprehensive data sets that uniquely constrain these pathways.
Abstract:
16S rRNA genes and transcripts of Acidobacteria were investigated in 57 grassland and forest soils of three different geographic regions. Acidobacteria contributed 9-31% of bacterial 16S rRNA genes whereas the relative abundances of the respective transcripts were 4-16%. The specific cellular 16S rRNA content (determined as molar ratio of rRNA:rRNA genes) ranged between 3 and 80, indicating a low in situ growth rate. Correlations with flagellate numbers, vascular plant diversity and soil respiration suggest that biotic interactions are important determinants of Acidobacteria 16S rRNA transcript abundances in soils. While the phylogenetic composition of Acidobacteria differed significantly between grassland and forest soils, high throughput denaturing gradient gel electrophoresis and terminal restriction fragment length polymorphism fingerprinting detected 16S rRNA transcripts of most phylotypes in situ. Partial least squares regression suggested that chemical soil conditions such as pH, total nitrogen, C:N ratio, ammonia concentrations and total phosphorus affect the composition of this active fraction of Acidobacteria. Transcript abundance for individual Acidobacteria phylotypes was found to correlate with particular physicochemical (pH, temperature, nitrogen or phosphorus) and, most notably, biological parameters (respiration rates, abundances of ciliates or amoebae, vascular plant diversity), providing culture-independent evidence for a distinct niche specialization of different Acidobacteria even from the same subdivision.
Abstract:
We present an independent calibration model for the determination of biogenic silica (BSi) in sediments, developed from analysis of synthetic sediment mixtures and application of Fourier transform infrared spectroscopy (FTIRS) and partial least squares regression (PLSR) modeling. In contrast to current FTIRS applications for quantifying BSi, this new calibration is independent of conventional wet-chemical techniques and their associated measurement uncertainties. This approach also removes the need to develop internal calibrations between the two methods for individual sediment records. For the independent calibration, we produced six series of different synthetic sediment mixtures using two purified diatom extracts: one extract was mixed with quartz sand, calcite, 60/40 quartz/calcite, and two different natural sediments, and a second extract was mixed with one of the natural sediments. A total of 306 samples (51 samples per series) yielded BSi contents ranging from 0 to 100 %. The resulting PLSR calibration model between the FTIR spectral information and the defined BSi concentrations of the synthetic sediment mixtures exhibits a strong cross-validated correlation (R²cv = 0.97) and a low root-mean-square error of cross-validation (RMSECV = 4.7 %). Application of the independent calibration to natural lacustrine and marine sediments yields robust BSi reconstructions. At present, the synthetic mixtures do not include the variation in organic matter that occurs in natural samples, which may explain the somewhat lower prediction accuracy of the calibration model for organic-rich samples.
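For readers unfamiliar with the workflow, a cross-validated PLSR calibration of the kind described above can be sketched as follows; the spectra and BSi values here are simulated placeholders, and the number of PLS components is an arbitrary assumption rather than the study's choice.

```python
# Sketch of a PLSR calibration with 10-fold cross-validation (simulated data).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(1)
n_samples, n_wavenumbers = 306, 500
bsi = rng.uniform(0, 100, n_samples)                  # BSi content in %
# Fake spectra: one BSi-dependent absorption band plus noise
band = np.exp(-0.5 * ((np.arange(n_wavenumbers) - 250) / 30.0) ** 2)
spectra = np.outer(bsi, band) + rng.normal(0, 1.0, (n_samples, n_wavenumbers))

pls = PLSRegression(n_components=8)                   # component count is arbitrary here
pred_cv = cross_val_predict(pls, spectra, bsi, cv=10).ravel()
print("R2cv   =", round(r2_score(bsi, pred_cv), 2))
print("RMSECV =", round(float(np.sqrt(mean_squared_error(bsi, pred_cv))), 1), "%")
```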
Abstract:
The direct Bayesian admissible region approach is an a priori state free measurement association and initial orbit determination technique for optical tracks. In this paper, we test a hybrid approach that appends a least squares estimator to the direct Bayesian method on measurements taken at the Zimmerwald Observatory of the Astronomical Institute at the University of Bern. Over half of the association pairs agreed with conventional geometric track correlation and least squares techniques. The remaining pairs cast light on the fundamental limits of conducting tracklet association based solely on dynamical and geometrical information.
Abstract:
In this work we devise two novel algorithms for blind deconvolution based on a family of logarithmic image priors. In contrast to recent approaches, we consider a minimalistic formulation of the blind deconvolution problem with only two energy terms: a least-squares term for the data fidelity and an image prior based on a lower-bounded logarithm of the norm of the image gradients. We show that this energy formulation is sufficient to achieve the state of the art in blind deconvolution, with a good margin over previous methods. Much of the performance is due to the chosen prior. On the one hand, this prior is very effective in favoring sparsity of the image gradients. On the other hand, it is non-convex. Therefore, solutions that can deal effectively with local minima of the energy become necessary. We devise two iterative minimization algorithms that at each iteration solve convex problems: one obtained via the primal-dual approach and one via majorization-minimization. While the former is computationally efficient, the latter achieves state-of-the-art performance on a public dataset.
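In our own notation (not verbatim from the paper), with f the blurry image, u the latent sharp image, k the blur kernel, and * denoting convolution, a two-term energy of the kind described above can be written as:

```latex
% Illustrative form only; symbols and weighting are our assumptions.
E(u, k) \;=\; \underbrace{\tfrac{1}{2}\,\lVert k \ast u - f \rVert_2^2}_{\text{data fidelity}}
\;+\; \lambda \sum_{i} \log\bigl(\lVert \nabla u_i \rVert + \epsilon\bigr)
```

Here ε > 0 is what bounds the logarithm from below, and λ balances the sparsity-promoting gradient prior against the least-squares data term.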
Abstract:
Surface sediments from 68 small lakes in the Alps and 9 well-dated sediment core samples, covering a gradient of total phosphorus (TP) concentrations from 6 to 520 μg TP l⁻¹, were studied for diatom, chrysophyte cyst, cladocera, and chironomid assemblages. Inference models for mean circulation log10 TP were developed for diatoms, chironomids, and benthic cladocera using weighted-averaging partial least squares (WA-PLS). After screening for outliers, the final transfer functions have coefficients of determination (r², as assessed by cross-validation) of 0.79 (diatoms), 0.68 (chironomids), and 0.49 (benthic cladocera). Planktonic cladocera and chrysophytes show very weak relationships to TP, and no TP inference models were developed for these biota. Diatoms showed the best relationship with TP, whereas the other biota all have large secondary gradients, suggesting that variables other than TP have a strong influence on their composition and abundance. Comparison with other diatom–TP inference models shows that our model has high predictive power and a low root mean squared error of prediction, as assessed by cross-validation.
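To illustrate the flavor of such transfer functions, the toy sketch below implements plain weighted averaging (WA), the simpler relative of the WA-PLS models actually used; taxon abundances and log10 TP values are simulated, and the usual deshrinking and outlier-screening steps are omitted.

```python
# Toy weighted-averaging (WA) transfer function on simulated training data.
import numpy as np

rng = np.random.default_rng(2)
n_lakes, n_taxa = 68, 40
log_tp = rng.uniform(np.log10(6), np.log10(520), n_lakes)
# Fake abundances with unimodal responses centred on taxon-specific optima
optima_true = rng.uniform(np.log10(6), np.log10(520), n_taxa)
abund = np.exp(-(log_tp[:, None] - optima_true[None, :]) ** 2 / 0.2)

# WA estimation: taxon optimum = abundance-weighted mean of log10 TP
optima_wa = (abund * log_tp[:, None]).sum(axis=0) / abund.sum(axis=0)
# WA inference for a sample: abundance-weighted mean of taxon optima
inferred = (abund * optima_wa[None, :]).sum(axis=1) / abund.sum(axis=1)

print("r2 (apparent):", round(np.corrcoef(log_tp, inferred)[0, 1] ** 2, 2))
```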
Abstract:
Chironomid-temperature inference models based on North American, European, and combined surface sediment training sets were compared to assess the overall reliability of their predictions. Between 67 and 76 of the major chironomid taxa in each data set showed a unimodal response to July temperature, whereas between 5 and 22 of the common taxa showed a sigmoidal response. July temperature optima were highly correlated among the training sets, but the correlations for other taxon parameters, such as tolerances and weighted averaging partial least squares (WA-PLS) and partial least squares (PLS) regression coefficients, were much weaker. PLS, weighted averaging, WA-PLS, and the Modern Analogue Technique all provided useful and reliable temperature inferences. Although jack-knifed error statistics suggested that two-component WA-PLS models had the highest predictive power, intercontinental tests suggested that other inference models performed better. The various models were able to provide good July temperature inferences even where no good or close modern analogues for the fossil chironomid assemblages existed. When the models were applied to fossil Lateglacial assemblages from North America and Europe, the inferred rates and magnitude of July temperature change varied among models. All models, however, revealed similar patterns of Lateglacial temperature change. Depending on the model used, the inferred Younger Dryas July temperature decrease ranged between 2.5 and 6°C.
Abstract:
Comparing published NAVD 88 Helmert orthometric heights of First-Order bench marks against GPS-determined orthometric heights showed that GEOID03 and GEOID09 perform at their reported accuracy in Connecticut. GPS-determined orthometric heights were obtained by subtracting geoid undulations from ellipsoid heights derived from a network least-squares adjustment of GPS occupations in 2007 and 2008. A total of 73 markers were occupied, in these stability classes: 25 class A, 11 class B, 12 class C, and 2 class D bench marks, plus 23 temporary marks with transferred elevations. Adjusted ellipsoid heights were compared against OPUS as a check. We found that the GPS-determined orthometric heights of stability class A markers and the transfers are statistically lower than their published values, but only marginally; stability class B, C, and D markers are also statistically lower, in a manner consistent with subsidence or settling; GEOID09 does not exhibit a statistically significant residual trend across Connecticut; and GEOID09 outperformed GEOID03. A "correction surface" is not recommended, in spite of the geoid models being statistically different from the NAVD 88 heights, because the uncertainties involved dominate the discrepancies. Instead, it is recommended that the vertical control network be re-observed.
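The GPS-derived heights rest on the standard relation between ellipsoid height h, geoid undulation N (here taken from GEOID03 or GEOID09), and orthometric height H:

```latex
% Subtracting the geoid undulation from the ellipsoid height gives the
% orthometric height (to within the approximations of the geoid model).
H \;\approx\; h - N
```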
Abstract:
Kriging is a widely employed method for interpolating and estimating elevations from digital elevation data. Its place of prominence is due to its elegant theoretical foundation and its convenient practical implementation. From an interpolation point of view, kriging is equivalent to a thin-plate spline and is one species among the many in the genus of weighted inverse distance methods, albeit with attractive properties. However, from a statistical point of view, kriging is a best linear unbiased estimator and, consequently, has a place of distinction among all spatial estimators because any other linear estimator that performs as well as kriging (in the least squares sense) must be equivalent to kriging, assuming that the parameters of the semivariogram are known. Therefore, kriging is often held to be the gold standard of digital terrain model elevation estimation. However, I prove that, when used with local support, kriging creates discontinuous digital terrain models, which is to say, surfaces with “rips” and “tears” throughout them. This result is general; it is true for ordinary kriging, kriging with a trend, and other forms. A U.S. Geological Survey (USGS) digital elevation model was analyzed to characterize the distribution of the discontinuities. I show that the magnitude of the discontinuity does not depend on surface gradient but is strongly dependent on the size of the kriging neighborhood.
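A minimal sketch of ordinary kriging with local support, using an assumed exponential semivariogram rather than one fitted to the USGS data, shows where the discontinuities come from: as the prediction point moves and its set of k nearest neighbors changes, the kriging weights jump.

```python
# Ordinary kriging at a point from its k nearest neighbors (local support).
# Semivariogram model and parameters are placeholders, not fitted values.
import numpy as np

def gamma(h, sill=1.0, rng_par=50.0):
    """Exponential semivariogram (assumed model, no nugget)."""
    return sill * (1.0 - np.exp(-h / rng_par))

def krige_point(x0, pts, z, k=12):
    """Ordinary kriging estimate at x0 using only the k nearest data points."""
    d_to_x0 = np.linalg.norm(pts - x0, axis=1)
    idx = np.argsort(d_to_x0)[:k]            # local support: k nearest only
    p, zz = pts[idx], z[idx]
    d = np.linalg.norm(p[:, None, :] - p[None, :, :], axis=2)
    # Kriging system with a Lagrange multiplier for the unbiasedness constraint
    A = np.ones((k + 1, k + 1))
    A[:k, :k] = gamma(d)
    A[k, k] = 0.0
    b = np.ones(k + 1)
    b[:k] = gamma(d_to_x0[idx])
    w = np.linalg.solve(A, b)
    return float(w[:k] @ zz)

# Toy usage: scattered elevations on a tilted plane plus noise
rng = np.random.default_rng(3)
pts = rng.uniform(0, 100, (500, 2))
z = 0.5 * pts[:, 0] + 0.2 * pts[:, 1] + rng.normal(0, 1, 500)
print(krige_point(np.array([50.0, 50.0]), pts, z))
```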
Abstract:
It is widely acknowledged in the theoretical and empirical literature that social relationships, comprising structural measures (social networks) and functional measures (perceived social support), have an undeniable effect on health outcomes. However, the actual mechanism of this effect has yet to be clearly understood or explicated. In addition, comorbidity is found to adversely affect social relationships and health-related quality of life (HRQoL; a valued outcome measure in cancer patients and survivors). This cross-sectional study uses selected baseline data (N = 3088) from the Women's Healthy Eating and Living (WHEL) study. Lisrel 8.72 was used for the latent variable structural equation modeling (LVSEM). Due to the ordinal nature of the data, the weighted least squares (WLS) method of estimation using asymptotic distribution-free covariance matrices was chosen for this analysis. The primary exogenous predictor variables are social networks and comorbidity; perceived social support is the endogenous predictor variable. Three dimensions of HRQoL (physical, mental, and satisfaction with current quality of life) were the outcome variables. This study hypothesizes and tests the mechanism and pathways between comorbidity, social relationships, and HRQoL using latent variable structural equation modeling. After testing the measurement models of social networks and perceived social support, a structural model hypothesizing associations between the latent exogenous and endogenous variables was tested. The results of the study after listwise deletion (N = 2131) mostly confirmed the hypothesized relationships (TLI, CFI > 0.95; RMSEA = 0.05, p = 0.15). Comorbidity was adversely associated with all three HRQoL outcomes. Strong ties were negatively associated with perceived social support; social networks had a strong positive association with perceived social support, which served as a mediator between social networks and HRQoL. Mental health quality of life was the most adversely affected by the predictor variables. This study is a preliminary look at the integration of structural and functional measures of social relationships, comorbidity, and three HRQoL indicators using LVSEM. Developing stronger social networks and forming supportive relationships is beneficial for health outcomes such as the HRQoL of cancer survivors. Thus, the medical community treating cancer survivors, as well as the survivors' social networks, needs to be informed and cognizant of these possible relationships.
Abstract:
Anticancer drugs typically are administered in the clinic in the form of mixtures, sometimes called combinations. Only in rare cases, however, are mixtures approved as drugs. Rather, research on mixtures tends to occur after single drugs have been approved. The goal of this research project was to develop modeling approaches that would encourage rational preclinical mixture design. To this end, a series of models were developed. First, several QSAR classification models were constructed to predict the cytotoxicity, oral clearance, and acute systemic toxicity of drugs. The QSAR models were applied to a set of over 115,000 natural compounds in order to identify promising ones for testing in mixtures. Second, an improved method was developed to assess synergistic, antagonistic, and additive effects between drugs in a mixture. This method, dubbed the MixLow method, is similar to the Median-Effect method, the de facto standard for assessing drug interactions. The primary difference between the two is that the MixLow method uses a nonlinear mixed-effects model to estimate the parameters of concentration-effect curves, rather than an ordinary least squares procedure. Parameter estimators produced by the MixLow method were more precise than those produced by the Median-Effect method, and coverage of Loewe index confidence intervals was superior. Third, a model was developed to predict drug interactions based on scores obtained from virtual docking experiments. This represents a novel approach for modeling drug mixtures and was more useful for the data modeled here than competing approaches. The model was applied to cytotoxicity data for 45 mixtures, each composed of up to 10 selected drugs. One drug, doxorubicin, was a standard chemotherapy agent; the others were well-known natural compounds including curcumin, EGCG, quercetin, and rhein. Predictions of synergism/antagonism were made for all possible fixed-ratio mixtures, the cytotoxicities of the 10 best-scoring mixtures were tested, and drug interactions were assessed. Predicted and observed responses were highly correlated (r² = 0.83). The results suggested that some mixtures allowed up to an 11-fold reduction of doxorubicin concentrations without sacrificing efficacy. Taken together, the models developed in this project present a general approach to the rational design of mixtures during preclinical drug development.
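For context, a bare-bones version of the Median-Effect/combination-index workflow that MixLow is compared against might look like the following; the doses, effect fractions, and chosen effect level are made up for illustration, and this is not the MixLow mixed-effects estimator itself.

```python
# Median-effect fits by OLS on the linearised equation, then a Loewe
# combination index (CI < 1 synergy, CI = 1 additive, CI > 1 antagonism).
import numpy as np

def fit_median_effect(dose, fa):
    """OLS fit of log(fa/(1-fa)) = m*log(dose) - m*log(Dm); returns (m, Dm)."""
    y = np.log(fa / (1.0 - fa))
    x = np.log(dose)
    m, intercept = np.polyfit(x, y, 1)
    Dm = np.exp(-intercept / m)
    return m, Dm

def loewe_ci(d1, d2, fa, params1, params2):
    """Combination index for doses (d1, d2) producing effect level fa."""
    def dose_for_effect(m, Dm):
        return Dm * (fa / (1.0 - fa)) ** (1.0 / m)
    return d1 / dose_for_effect(*params1) + d2 / dose_for_effect(*params2)

# Toy single-agent concentration-effect data (fraction affected)
dose = np.array([0.1, 0.3, 1.0, 3.0, 10.0])
fa_drug1 = np.array([0.05, 0.15, 0.45, 0.75, 0.93])
fa_drug2 = np.array([0.10, 0.25, 0.55, 0.82, 0.95])
p1, p2 = fit_median_effect(dose, fa_drug1), fit_median_effect(dose, fa_drug2)
print("CI at fa = 0.5:", round(loewe_ci(0.5, 0.5, 0.5, p1, p2), 2))
```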
The determinants of improvements in health outcomes and of cost reduction in hospital inpatient care
Abstract:
This study aims to address two research questions. First, ‘Can we identify factors that are determinants both of improved health outcomes and of reduced costs for hospitalized patients with one of six common diagnoses?’ Second, ‘Can we identify other factors that are determinants of improved health outcomes for such hospitalized patients but which are not associated with costs?’ The Healthcare Cost and Utilization Project (HCUP) Nationwide Inpatient Sample (NIS) database from 2003 to 2006 was employed in this study. The total study sample consisted of hospitals that had at least 30 patients each year for the given diagnosis: 954 hospitals for acute myocardial infarction (AMI), 1552 hospitals for congestive heart failure (CHF), 1120 hospitals for stroke (STR), 1283 hospitals for gastrointestinal hemorrhage (GIH), 979 hospitals for hip fracture (HIP), and 1716 hospitals for pneumonia (PNE). This study used simultaneous equations models to investigate the determinants of improvement in health outcomes and of cost reduction in hospital inpatient care for these six common diagnoses. In addition, the study used instrumental variables and a two-stage least squares random-effects model for unbalanced panel data estimation. The study concluded that a few factors were determinants of both high quality and low cost. Specifically, high specialty was a determinant of high quality and low costs for CHF patients; small hospital size was a determinant of high quality and low costs for AMI patients. Furthermore, CHF patients who were treated in Midwest, South, and West region hospitals had better health outcomes and lower hospital costs than patients who were treated in Northeast region hospitals. Gastrointestinal hemorrhage and pneumonia patients who were treated in South region hospitals also had better health outcomes and lower hospital costs than patients who were treated in Northeast region hospitals. This study found that six non-cost factors were related to health outcomes for a few diagnoses: hospital volume, percentage of emergency room admissions for a given diagnosis, hospital competition, specialty, bed size, and hospital region.
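As a hedged illustration of the two-stage least squares strategy mentioned above (not the study's actual model or variables): the manual two-step below uses simulated, hypothetical variables, and its second-stage standard errors are not the corrected 2SLS ones.

```python
# Two-stage least squares by hand: instrument a simulated endogenous
# "quality" regressor and estimate its effect on a simulated "cost" outcome.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 1000
instrument = rng.normal(size=n)                   # excluded instrument
confounder = rng.normal(size=n)                   # unobserved confounder
quality = 0.8 * instrument + 0.5 * confounder + rng.normal(size=n)   # endogenous
cost = 2.0 * quality - 0.7 * confounder + rng.normal(size=n)         # outcome

# Stage 1: regress the endogenous regressor on the instrument
Z = sm.add_constant(instrument)
quality_hat = sm.OLS(quality, Z).fit().predict(Z)

# Stage 2: regress the outcome on the stage-1 fitted values
X2 = sm.add_constant(quality_hat)
print(sm.OLS(cost, X2).fit().params)   # slope should be near the true 2.0
```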
Abstract:
Interaction effects are an important scientific interest in many areas of research. A common approach for investigating the interaction effect of two continuous covariates on a response variable is through a cross-product term in multiple linear regression. In epidemiological studies, the two-way analysis of variance (ANOVA) type of method has also been utilized to examine the interaction effect by replacing the continuous covariates with their discretized levels. However, the implications of the model assumptions of either approach have not been examined, and statistical validation has focused only on the general method, not specifically on the interaction effect. In this dissertation, we investigated the validity of both approaches based on their mathematical assumptions for non-skewed data. We showed that linear regression may not be an appropriate model when the interaction effect exists, because it implies a highly skewed distribution for the response variable. We also showed that the normality and constant variance assumptions required by ANOVA are not satisfied in the model where the continuous covariates are replaced with their discretized levels. Therefore, naïve application of the ANOVA method may lead to an incorrect conclusion. Given the problems identified above, we proposed a novel method, modified from the traditional ANOVA approach, to rigorously evaluate the interaction effect. The analytical expression of the interaction effect was derived based on the conditional distribution of the response variable given the discretized continuous covariates. A testing procedure that combines the p-values from each level of the discretized covariates was developed to test the overall significance of the interaction effect. According to the simulation study, the proposed method is more powerful than least squares regression and the ANOVA method in detecting the interaction effect when the data come from a trivariate normal distribution. The proposed method was applied to a dataset from the National Institute of Neurological Disorders and Stroke (NINDS) tissue plasminogen activator (t-PA) stroke trial, and a baseline age-by-weight interaction effect was found to be significant in predicting the change from baseline in NIHSS at Month 3 among patients who received t-PA therapy.
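A minimal sketch of the cross-product-term approach that the dissertation critiques, using simulated data and hypothetical variable names (age, weight, nihss_change):

```python
# Interaction between two continuous covariates via a cross-product term.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 500
df = pd.DataFrame({
    "age": rng.normal(65, 10, n),
    "weight": rng.normal(80, 15, n),
})
# Simulated response with a true age-by-weight interaction
df["nihss_change"] = (
    -0.1 * df["age"] + 0.05 * df["weight"]
    + 0.002 * df["age"] * df["weight"]
    + rng.normal(0, 1, n)
)

# 'age * weight' expands to age + weight + age:weight (the cross-product term)
fit = smf.ols("nihss_change ~ age * weight", data=df).fit()
print(fit.params)
print("interaction p-value:", fit.pvalues["age:weight"])
```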