868 results for Multivariate data analysis
Abstract:
In interval-censored survival data, the event of interest is not observed exactly but is only known to occur within some time interval. Such data appear frequently in practice. In this paper, we are concerned only with parametric forms, and so a location-scale regression model based on the exponentiated Weibull distribution is proposed for modeling interval-censored data. We show that the proposed log-exponentiated Weibull regression model for interval-censored data represents a parametric family that includes other regression models broadly used in lifetime data analysis. Assuming interval-censored data, we employ a frequentist analysis, a jackknife estimator, a parametric bootstrap and a Bayesian analysis for the parameters of the proposed model. We derive the appropriate matrices for assessing local influence on the parameter estimates under different perturbation schemes and present some ways to assess global influence. Furthermore, for different parameter settings, sample sizes and censoring percentages, various simulations are performed; in addition, the empirical distributions of some modified residuals are displayed and compared with the standard normal distribution. These studies suggest that the residual analysis usually performed in normal linear regression models can be straightforwardly extended to a modified deviance residual in log-exponentiated Weibull regression models for interval-censored data. (C) 2009 Elsevier B.V. All rights reserved.
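For reference, a standard parameterization of the exponentiated Weibull distribution, and the log-location-scale regression form it induces, can be sketched as follows; the paper's exact parameterization may differ:

```latex
% Exponentiated Weibull cdf with scale \alpha and shapes \gamma, \theta:
F(t) = \left[1 - \exp\{-(t/\alpha)^{\gamma}\}\right]^{\theta}, \qquad t > 0.
% Location-scale regression on the log-lifetime y_i = \log T_i:
y_i = \mathbf{x}_i^{\top}\boldsymbol{\beta} + \sigma z_i,
% where z_i has the error distribution implied by the log-exponentiated
% Weibull; \theta = 1 recovers the Weibull case, and hence the usual
% log-Weibull (extreme-value) regression model arises as a submodel.
```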
Abstract:
The use of inter-laboratory test comparisons to determine the performance of individual laboratories for specific tests (or for calibration) [ISO/IEC Guide 43-1, 1997. Proficiency testing by interlaboratory comparisons - Part 1: Development and operation of proficiency testing schemes] is called Proficiency Testing (PT). In this paper we propose the use of the generalized likelihood ratio test to compare the performance of a group of laboratories for specific tests relative to the assigned value, and we illustrate the procedure using actual data from a PT program in the area of volume. The proposed test extends the criteria currently in use by allowing one to test for the consistency of the group of laboratories. Moreover, the class of elliptical distributions is considered for the obtained measurements. (C) 2008 Elsevier B.V. All rights reserved.
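As a reminder of the machinery involved (the concrete statistic for the consensus hypothesis in the paper may differ), the generalized likelihood ratio test compares constrained and unconstrained maxima of the likelihood:

```latex
\lambda = \frac{\sup_{\theta \in \Theta_0} L(\theta)}{\sup_{\theta \in \Theta} L(\theta)},
\qquad -2\log\lambda \;\xrightarrow{d}\; \chi^{2}_{r},
% where r is the number of restrictions imposed by \Theta_0, e.g. the
% hypothesis that all laboratory means equal the assigned value.
```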
Abstract:
In survival analysis applications, the failure rate function frequently presents a unimodal shape, in which case the log-normal or log-logistic distributions are commonly used. In this paper, we are concerned only with parametric forms, so a location-scale regression model based on the Burr XII distribution is proposed for modeling data with a unimodal failure rate function as an alternative to the log-logistic regression model. Assuming censored data, we consider a classic analysis, a Bayesian analysis and a jackknife estimator for the parameters of the proposed model. For different parameter settings, sample sizes and censoring percentages, various simulation studies are performed, and the performance of the log-logistic and log-Burr XII regression models is compared. In addition, we use sensitivity analysis to detect influential or outlying observations, and residual analysis to check the model assumptions. Finally, we analyze a real data set under log-Burr XII regression models. (C) 2008 Published by Elsevier B.V.
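For context, the Burr XII distribution has the closed-form survival structure below; the log-logistic model arises as the special case k = 1, which is what makes the proposed model a natural extension:

```latex
% Burr XII cdf with scale s and shape parameters c and k:
F(t) = 1 - \left[1 + (t/s)^{c}\right]^{-k}, \qquad t > 0.
% For k = 1 this reduces to the log-logistic cdf
% F(t) = (t/s)^{c} / \{1 + (t/s)^{c}\},
% whose failure rate function is unimodal when c > 1.
```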
Abstract:
This work describes two similar methods for calculating gamma transition intensities from multidetector coincidence measurements. In the first, applicable to experiments where the angular correlation function is explicitly fitted, the normalization parameter from this fit is used to determine the gamma transition intensities. In the second, which can be used in both angular correlation and DCO measurements, the spectra obtained for all detector pairs are summed in order to obtain the best detection statistics possible, and the analysis of the resulting bidimensional spectrum is used to calculate the transition intensities; in this method, the summation of data corresponding to different angles minimizes the influence of the angular correlation coefficients. Both methods are then tested in the calculation of intensities for well-known transitions from a (152)Eu standard source, as well as of intensities obtained in beta-decay experiments with (193)Os and (155)Sm sources, yielding excellent results in all cases. (C) 2009 Elsevier B.V. All rights reserved.
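A common form of the fitted angular correlation function in such measurements (assumed here; the actual fit may also include solid-angle attenuation factors) is:

```latex
W(\theta) = N\left[1 + a_{22}\,P_2(\cos\theta) + a_{44}\,P_4(\cos\theta)\right],
% where P_2 and P_4 are Legendre polynomials and the normalization N is
% proportional to the coincidence intensity of the cascade, so that the
% transition intensity follows from N after detection-efficiency corrections.
```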
Abstract:
When missing data occur in studies designed to compare the accuracy of diagnostic tests, a common, though naive, practice is to base the comparison of sensitivity and specificity, as well as of positive and negative predictive values, on some subset of the data that fits into methods implemented in standard statistical packages. Such methods are usually valid only under the strong missing completely at random (MCAR) assumption and may generate biased and less precise estimates. We review some models that use the dependence structure of the completely observed cases to incorporate the information from the partially categorized observations into the analysis and show how they may be fitted via a two-stage hybrid process involving maximum likelihood in the first stage and weighted least squares in the second. We indicate how computational subroutines written in R may be used to fit the proposed models and illustrate the different analysis strategies with observational data collected to compare the accuracy of three distinct non-invasive diagnostic methods for endometriosis. The results indicate that even when the MCAR assumption is plausible, the naive partial analyses should be avoided.
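For completeness, the accuracy measures being compared are the usual ones, computed from true/false positive and negative counts:

```latex
\mathrm{Se} = \frac{TP}{TP + FN}, \quad
\mathrm{Sp} = \frac{TN}{TN + FP}, \quad
\mathrm{PPV} = \frac{TP}{TP + FP}, \quad
\mathrm{NPV} = \frac{TN}{TN + FN}.
```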
Abstract:
The main purpose of this work is to study the behaviour of Skovgaard's [Skovgaard, I.M., 2001. Likelihood asymptotics. Scandinavian Journal of Statistics 28, 3-32] adjusted likelihood ratio statistic in testing simple hypotheses in a new class of regression models proposed here. The proposed class considers Dirichlet distributed observations, with the parameters that index the Dirichlet distributions related to covariates and unknown regression coefficients. This class is useful for modelling data consisting of multivariate positive observations summing to one and generalizes the beta regression model described in Vasconcellos and Cribari-Neto [Vasconcellos, K.L.P., Cribari-Neto, F., 2005. Improved maximum likelihood estimation in a new class of beta regression models. Brazilian Journal of Probability and Statistics 19, 13-31]. We show that, for our model, Skovgaard's adjusted likelihood ratio statistic has a simple compact form that can be easily implemented in standard statistical software. The adjusted statistic is approximately chi-squared distributed with a high degree of accuracy. Numerical simulations show that the modified test is more reliable in finite samples than the usual likelihood ratio procedure. An empirical application is also presented and discussed. (C) 2009 Elsevier B.V. All rights reserved.
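Two ingredients of the paper, sketched in standard notation (the exact regression structure linking the Dirichlet parameters to covariates is the paper's own): the Dirichlet density for a composition y, and the general form of Skovgaard's adjustment to the likelihood ratio statistic w:

```latex
f(\mathbf{y};\boldsymbol{\alpha}) =
\frac{\Gamma\!\left(\sum_{j=1}^{k}\alpha_j\right)}{\prod_{j=1}^{k}\Gamma(\alpha_j)}
\prod_{j=1}^{k} y_j^{\alpha_j - 1},
\qquad y_j > 0,\ \ \sum_{j=1}^{k} y_j = 1.
% Skovgaard's adjusted statistic:
w^{*} = w\left(1 - \frac{\log \xi}{w}\right)^{2},
% where \xi is built from observed and expected information quantities;
% w^{*} retains the chi-squared limit with improved finite-sample accuracy.
```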
Abstract:
Thermal analysis has been extensively used to obtain information about drug-polymer interactions and to perform pre-formulation studies of pharmaceutical dosage forms. In this work, biodegradable microparticles of poly(D,L-lactide-co-glycolide) (PLGA) containing ciprofloxacin hydrochloride (CP) in various drug:polymer ratios were obtained by spray drying. The main purpose of this study was to investigate the effect of the spray drying process on the drug-polymer interactions and on the stability of the microparticles using differential scanning calorimetry (DSC), thermogravimetry (TG), derivative thermogravimetry (DTG) and infrared spectroscopy (IR). The results showed that the high levels of encapsulation efficiency were dependent on the drug:polymer ratio. DSC and TG/DTG analyses showed that the thermal profiles of physical mixtures of the microparticle components differed from the signals obtained with the pure substances. Thermal analysis data disclosed that physical interaction between CP and PLGA occurred at high temperatures. The DSC and TG profiles for drug-loaded microparticles were very similar to those of the physical mixtures, and it was possible to characterize the thermal properties of the microparticles according to drug content. These data indicated that the spray drying technique does not affect the physicochemical properties of the microparticles. In addition, the results agree with the IR data, demonstrating that no significant chemical interaction occurs between CP and PLGA in either physical mixtures or microparticles. In conclusion, we have found that the spray drying procedure used in this work is a reliable method to produce CP-loaded microparticles. (C) 2007 Elsevier B.V. All rights reserved.
Abstract:
The cost of a road construction over its service life is a function of the design, quality of construction, maintenance strategy and maintenance operations. Unfortunately, designers often neglect a very important aspect: the possibility of performing future maintenance activities. The focus is mainly on other aspects, such as investment costs, traffic safety, aesthetic appearance, regional development and environmental effects. This licentiate thesis is part of a Ph.D. project entitled “Road Design for lower maintenance costs” that aims to examine how life-cycle costs can be optimized by selecting appropriate geometrical designs for roads and their components. The result is expected to give a basis for a new method, used in the road planning and design process, based on life-cycle cost analysis with particular emphasis on road maintenance. The project started with a literature review intended to study the conditions causing increased needs for road maintenance, the efforts made by road authorities to satisfy those needs, and the improvement potential of considering maintenance aspects during planning and design. An investigation was carried out to identify the problems that obstruct due consideration of maintenance aspects during the road planning and design process. This investigation focused mainly on the road planning and design process at the Swedish Road Administration; however, the corresponding processes in Denmark, Finland and Norway were also roughly evaluated to gain broader knowledge about the research subject. The investigation was carried out in two phases: data collection and data analysis. Data were collected through semi-structured interviews with expert actors involved in planning, design and maintenance, and through a review of design-related documents. Data analyses were carried out using a method called “Change Analysis”. This investigation revealed a complex combination of problems resulting in inadequate consideration of maintenance aspects, and several urgent needs for changes to eliminate these problems were identified. Another study was carried out to develop a model for calculating the repair costs of damage to different road barrier types and to analyse how factors such as road type, speed limit, barrier type, barrier placement, type of road section, alignment and seasonal effects affect barrier damage and the associated repair costs. This study used the “Case Study Research Method”. Data were collected from 1087 barrier repairs in two regional offices of the Swedish Road Administration, the Central Region and the Western Region. For both regions, a table was established containing the repair cost per vehicle-kilometre for different combinations of barrier type, road type and speed limit; designers can use this table to calculate the life-cycle costs of different road barrier types.
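A minimal present-value formulation of the life-cycle cost trade-off discussed in the thesis (the thesis's own models are more detailed) is:

```latex
\mathrm{LCC} = C_{\mathrm{inv}} +
\sum_{t=1}^{T} \frac{C_{\mathrm{maint}}(t) + C_{\mathrm{oper}}(t)}{(1+r)^{t}},
% where r is the discount rate over service life T; a design choice that
% lowers C_maint, such as a barrier type with a lower repair cost per
% vehicle-kilometre, can justify a higher initial investment C_inv.
```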
Abstract:
Accurate measurement of a vehicle's velocity is an essential feature of adaptive vehicle-activated sign systems. Since vehicle velocities are acquired from a continuous-wave Doppler radar, data collection is challenging: data accuracy is sensitive to the calibration of the radar on the road, yet clear methodologies for in-field calibration have not been carefully established, and the signs are often installed by subjective judgment, which results in measurement errors. This paper develops a calibration method based on mining the collected data and matching individual vehicles travelling between two radars. The data were prepared in two ways: by cleaning and by reconstruction. The results showed that the correction factor derived from the cleaned data corresponded well with the experimental factor obtained on site; in addition, it showed superior performance to the factor derived from the reconstructed data.
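A minimal sketch of the matching idea, assuming hypothetical column names ('time' in seconds, 'speed' in m/s) and a known fixed distance between the two radars; the paper's actual cleaning and reconstruction steps are not reproduced here:

```python
import pandas as pd

def correction_factor(upstream: pd.DataFrame, downstream: pd.DataFrame,
                      distance_m: float, tol_s: float = 2.0) -> float:
    """Estimate a speed-correction factor for the upstream radar by
    matching individual vehicles recorded at two radars a known
    distance apart (hypothetical schema: columns 'time' and 'speed')."""
    factors = []
    for _, veh in upstream.iterrows():
        # Predicted arrival time at the downstream radar based on the
        # (possibly biased) upstream speed reading.
        expected_arrival = veh["time"] + distance_m / veh["speed"]
        window = downstream[
            (downstream["time"] - expected_arrival).abs() < tol_s
        ]
        if len(window) == 1:  # keep only unambiguous one-to-one matches
            match = window.iloc[0]
            # Section mean speed from travel time vs. the radar reading.
            true_speed = distance_m / (match["time"] - veh["time"])
            factors.append(true_speed / veh["speed"])
    if not factors:
        raise ValueError("no unambiguous vehicle matches found")
    return sum(factors) / len(factors)
```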
Abstract:
Background. Through a national policy agreement, over 167 million Euros will be invested in the Swedish National Quality Registries (NQRs) between 2012 and 2016. One of the policy agreement's intentions is to increase the use of NQR data for quality improvement (QI). However, the evidence is fragmented as to how the use of medical registries and the like leads to quality improvement, and little is known about non-clinical use. The aim was therefore to investigate the perspectives of Swedish politicians and administrators on quality improvement based on national registry data. Methods. Politicians and administrators from four county councils were interviewed. A qualitative content analysis guided by the Consolidated Framework for Implementation Research (CFIR) was performed. Results. The politicians' and administrators' perspectives on the use of NQR data for quality improvement were mainly assigned to three of the five CFIR domains. In the domain of intervention characteristics, data reliability and access within reasonable time were not considered entirely satisfactory, making it difficult for the politico-administrative leadership to initiate, monitor, and support timely QI efforts. Still, politicians and administrators trusted the idea of using the NQRs as a basis for quality improvement. In the domain of inner setting, the organizational structures were not sufficiently developed to utilize the advantages of the NQRs, and readiness for implementation appeared inadequate for two reasons. Firstly, the resources for data analysis and quality improvement were not considered sufficient at the politico-administrative or clinical level. Secondly, deficiencies in leadership engagement at multiple levels were described, and there was a lack of consensus on the politicians' role and level of involvement. Regarding the domain of outer setting, there was a lack of communication and cooperation between the county councils and the national NQR organizations. Conclusions. The Swedish experience shows that a government-supported national system of well-funded, well-managed, and reputable national quality registries needs favorable local politico-administrative conditions to be used for quality improvement; such conditions are not yet in place according to local politicians and administrators.
Abstract:
Purpose: This paper aims to extend and contribute to prior research on the association between company characteristics and the choice of capital budgeting methods (CBMs). Design/methodology/approach: A multivariate regression analysis on questionnaire data from 2005 and 2008 is used to study which factors determine the choice of CBMs in Swedish listed companies. Findings: Our results support the hypotheses that Swedish listed companies have become more sophisticated over the years (or at least less unsophisticated), indicating a closing of the theory-practice gap; that companies with greater leverage used payback more often; and that companies with stricter debt targets and less management ownership employed the accounting rate of return more frequently. Moreover, larger companies used CBMs more often. Originality/value: The paper contributes to prior research within this field by being the first Swedish study to examine the association between the use of CBMs and as many as twelve independent variables, including changes over time, using multivariate regression analysis. The results are compared to a US study and a continental European study.
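A hedged sketch of the kind of model implied, with hypothetical variable names and data (the paper's twelve covariates and their coding are not reproduced): a logistic regression of whether a company uses a given CBM on selected firm characteristics:

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical questionnaire data: one row per company.
df = pd.DataFrame({
    "uses_payback": [1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0],
    "leverage":     [0.6, 0.5, 0.3, 0.7, 0.2, 0.8, 0.4, 0.1, 0.5, 0.6, 0.2, 0.3],
    "log_size":     [8.0, 7.5, 6.8, 9.1, 6.2, 9.4, 7.0, 6.5, 8.3, 8.8, 7.2, 6.9],
    "mgmt_owned":   [0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0],
})

# Regress the binary CBM-choice indicator on company characteristics.
X = sm.add_constant(df[["leverage", "log_size", "mgmt_owned"]])
model = sm.Logit(df["uses_payback"], X).fit(disp=0)
print(model.summary())  # sign and significance of each characteristic
```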
Abstract:
Instrumentation and automation play a vital role in managing the water industry. These systems generate vast amounts of data that must be managed effectively to enable intelligent decision making. Time-series data management software, commonly known as a data historian, is used for collecting and managing real-time (time-series) information. More advanced software solutions provide a data infrastructure, or utility-wide Operations Data Management System (ODMS), that stores, manages, calculates, displays, shares, and integrates data from the multiple disparate automation and business systems used daily in water utilities. These ODMS solutions are proven and can manage data from smart water meters through to the sharing of data across third-party corporations. This paper focuses on practical utility successes in the water industry where utility managers are leveraging instantaneous access to data from proven, commercial off-the-shelf ODMS solutions to enable better real-time decision making. Successes include saving $650,000 per year in water loss control, safeguarding water quality, and saving millions of dollars in energy management and asset management. Immediate opportunities exist to integrate the research being done in academia with these ODMS solutions in the field and to leverage these successes for utilities around the world.
Abstract:
Existing distributed hydrologic models are complex and computationally demanding to use as rapid-forecasting policy-decision tools, or even as classroom educational tools. In addition, platform dependence, rigid input/output data structures and the lack of dynamic data interaction with pluggable software components inside existing proprietary frameworks restrict these models to specialized user groups. RWater is a web-based hydrologic analysis and modeling framework that utilizes the widely used R software within the HUBzero cyberinfrastructure of Purdue University. RWater is designed as an integrated framework for distributed hydrologic simulation, along with subsequent parameter optimization and visualization. RWater provides a platform-independent web-based interface, flexible data integration capacity, grid-based simulations, and user extensibility. RWater uses RStudio to simulate hydrologic processes on raster-based data obtained through conventional GIS pre-processing, and it integrates the Shuffled Complex Evolution (SCE) algorithm for parameter optimization. Moreover, RWater enables users to produce descriptive statistics and visualizations of the outputs at different temporal resolutions. The applicability of RWater will be demonstrated by application to two watersheds in Indiana for multiple rainfall events.
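As an illustration of the calibration step, a minimal parameter-optimization sketch: SciPy does not ship SCE itself, so differential evolution is used here as a stand-in global optimizer, and the two-parameter rainfall-runoff model is a hypothetical placeholder, not RWater's own:

```python
import numpy as np
from scipy.optimize import differential_evolution

def simulate_runoff(params, rainfall):
    """Hypothetical rainfall-runoff model: a runoff coefficient
    feeding a linear reservoir with a recession constant."""
    coeff, recession = params
    runoff = np.zeros_like(rainfall)
    storage = 0.0
    for i, rain in enumerate(rainfall):
        storage += coeff * rain
        runoff[i] = recession * storage
        storage -= runoff[i]
    return runoff

def objective(params, rainfall, observed):
    # Sum of squared errors between simulated and observed hydrographs.
    return np.sum((simulate_runoff(params, rainfall) - observed) ** 2)

rainfall = np.array([0.0, 5.0, 12.0, 3.0, 0.0, 0.0, 1.0, 0.0])
observed = np.array([0.0, 1.2, 4.1, 3.0, 1.4, 0.7, 0.6, 0.3])

result = differential_evolution(
    objective, bounds=[(0.0, 1.0), (0.0, 1.0)],
    args=(rainfall, observed), seed=42,
)
print(result.x)  # calibrated (coefficient, recession) pair
```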
Abstract:
This article presents the data-rich findings of an experiment with enlisting patron-driven/demand-driven acquisitions (DDA) of ebooks in two ways. The first experiment compared DDA ebook usage against the circulation of newly ordered hardcopy materials, both overall and for ebook vs. print usage within the same subject areas. Secondly, this study experimented with DDA ebooks as a backup plan for unfunded requests left over at the end of the fiscal year.