844 results for Failure time data analysis
Abstract:
In survival analysis applications, the failure rate function may frequently present a unimodal shape. In such cases, the log-normal or log-logistic distributions are used. In this paper, we shall be concerned only with parametric forms, so a location-scale regression model based on the Burr XII distribution is proposed for modeling data with a unimodal failure rate function, as an alternative to the log-logistic regression model. Assuming censored data, we consider a classical analysis, a Bayesian analysis and a jackknife estimator for the parameters of the proposed model. For different parameter settings, sample sizes and censoring percentages, various simulation studies are performed to compare the performance of the log-Burr XII and log-logistic regression models. In addition, sensitivity analysis is used to detect influential or outlying observations, and residual analysis is used to check the model assumptions. Finally, we analyze a real data set under log-Burr XII regression models.
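As a hedged illustration of the kind of parametric fit discussed above, the sketch below maximizes a Burr XII likelihood for right-censored failure times with SciPy. The parameterization (shape parameters c and k, scale s) and the simulated data are assumptions for illustration, not the paper's model or data.

```python
# Minimal sketch: maximum-likelihood fit of a Burr XII model to
# right-censored failure times (illustrative, not the paper's code).
import numpy as np
from scipy.optimize import minimize

def burr12_logpdf(t, c, k, s):
    z = t / s
    return (np.log(c) + np.log(k) - np.log(s)
            + (c - 1) * np.log(z) - (k + 1) * np.log1p(z**c))

def burr12_logsf(t, c, k, s):
    return -k * np.log1p((t / s)**c)  # log survival function

def neg_loglik(params, t, event):
    c, k, s = np.exp(params)          # log-parameterization keeps c, k, s > 0
    ll = np.where(event,
                  burr12_logpdf(t, c, k, s),   # observed failures
                  burr12_logsf(t, c, k, s))    # right-censored times
    return -ll.sum()

# Hypothetical data: failure times and censoring indicators (True = failure).
rng = np.random.default_rng(0)
t = rng.weibull(1.5, size=200) * 10
event = rng.random(200) > 0.3
fit = minimize(neg_loglik, x0=np.zeros(3), args=(t, event), method="Nelder-Mead")
c_hat, k_hat, s_hat = np.exp(fit.x)
print(c_hat, k_hat, s_hat)
```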
Abstract:
In this paper, a new parametric method to deal with discrepant experimental results is developed. The method is based on the fit of a probability density function to the data. This paper also compares the characteristics of different methods used to deduce recommended values and uncertainties from a discrepant set of experimental data. The methods are applied to the published ¹³⁷Cs and ⁹⁰Sr half-lives, and special emphasis is given to the deduced confidence intervals. The results are analyzed considering two fundamental properties expected from an experimental result: the probability content of confidence intervals and the statistical consistency between different recommended values. The recommended values and uncertainties for the ¹³⁷Cs and ⁹⁰Sr half-lives are 10,984 (24) days and 10,523 (70) days, respectively.
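For context, the sketch below implements the standard weighted-mean baseline with Birge-ratio uncertainty inflation that pdf-fit methods of this kind are usually compared against; the measurement values are hypothetical, and this is not the paper's method.

```python
# Minimal sketch: weighted mean of discrepant measurements with
# Birge-ratio uncertainty inflation (a standard baseline, not the
# paper's pdf-fit method). Values below are hypothetical.
import numpy as np

x = np.array([10990.0, 10965.0, 11020.0, 10950.0])  # half-lives, days
u = np.array([15.0, 20.0, 25.0, 30.0])              # 1-sigma uncertainties

w = 1.0 / u**2
mean = np.sum(w * x) / np.sum(w)
u_int = np.sqrt(1.0 / np.sum(w))                    # internal uncertainty

chi2 = np.sum(w * (x - mean)**2)
birge = np.sqrt(chi2 / (len(x) - 1))                # Birge ratio
u_rec = u_int * max(1.0, birge)                     # inflate if discrepant

print(f"{mean:.0f} +/- {u_rec:.0f} days (Birge ratio {birge:.2f})")
```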
Abstract:
This paper presents an experimental characterization of the behavior of an analog version of Chua's circuit. The electronic circuit signals are captured using a data acquisition board (DAQ) and processed in the LabVIEW environment. The following aspects of the time series are analyzed: time waveforms, phase portraits, frequency spectra, Poincaré sections, and the bifurcation diagram. The circuit behavior is experimentally mapped under parameter variations, identifying equilibrium points, periodic and chaotic attractors, and bifurcations. These analysis techniques are performed in real time and can be applied to characterize many nonlinear systems with precision.
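For readers who want to reproduce qualitatively similar phase portraits in software, the following sketch integrates the dimensionless Chua equations with the classic double-scroll parameter set; these values are illustrative and not necessarily those of the circuit studied here.

```python
# Minimal sketch: numerical integration of the dimensionless Chua equations,
# for comparison with measured phase portraits (classic double-scroll
# parameters, not the specific circuit values of the paper).
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta = 15.6, 28.0
m0, m1 = -8 / 7, -5 / 7

def chua(t, state):
    x, y, z = state
    fx = m1 * x + 0.5 * (m0 - m1) * (abs(x + 1) - abs(x - 1))  # piecewise-linear diode
    return [alpha * (y - x - fx), x - y + z, -beta * y]

sol = solve_ivp(chua, (0, 100), [0.1, 0.0, 0.0], dense_output=True, rtol=1e-8)
x, y, z = sol.sol(np.linspace(20, 100, 20000))  # discard transient, sample attractor
# Plotting x against z gives the familiar double-scroll phase portrait.
```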
Abstract:
Birnbaum-Saunders (BS) models have been widely applied in material fatigue studies and reliability analyses to relate the total time until failure to some type of cumulative damage. In many problems in the medical field, such as chronic cardiac diseases and different types of cancer, cumulative damage caused by several risk factors might cause some degradation that leads to a fatigue process. In these cases, BS models can be suitable for describing the propagation lifetime. However, since the cumulative damage is assumed to be normally distributed in the BS distribution, the parameter estimates from this model can be sensitive to outlying observations. In order to attenuate this influence, we present BS models in which a Student-t distribution is assumed to describe the cumulative damage. In particular, we show that the maximum likelihood estimates of the Student-t log-BS models attribute smaller weights to outlying observations, which produces robust parameter estimates. Some inferential results are also presented. In addition, a diagnostics analysis is derived based on local influence and on deviance component and martingale-type residuals. Finally, a motivating example from the medical field is analyzed using log-BS regression models. Since the parameter estimates appear to be very sensitive to outlying and influential observations, the Student-t log-BS regression model should attenuate such influences. The model checking methodologies developed in this paper are used to compare the fitted models.
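The robustness mechanism can be made concrete with a small sketch: under a Student-t kernel with nu degrees of freedom, the maximum likelihood estimating equations weight each observation by (nu + 1)/(nu + u), where u is the squared standardized residual, so outliers are downweighted; the normal kernel weights every observation equally. The numbers below are illustrative.

```python
# Minimal sketch: the reweighting idea behind Student-t (log-)BS models.
# The t kernel gives observation weight (nu + 1) / (nu + u); the normal
# kernel gives constant weight 1.
import numpy as np

def t_weight(u, nu):
    return (nu + 1.0) / (nu + u)

u = np.array([0.1, 1.0, 4.0, 25.0])   # hypothetical squared standardized residuals
print(t_weight(u, nu=4))               # outliers (large u) get small weights
print(np.ones_like(u))                 # normal case: every observation weighs 1
```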
Abstract:
Birnbaum and Saunders (1969a) introduced a probability distribution that is commonly used in reliability studies. For the first time, based on this distribution, the so-called beta-Birnbaum-Saunders distribution is proposed for fatigue life modeling. Various properties of the new model are derived, including expansions for the moments, the moment generating function, the mean deviations, and the density function of the order statistics and their moments. We discuss maximum likelihood estimation of the model's parameters. The superiority of the new model is illustrated by means of three real failure data sets.
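The beta-G construction underlying the new model composes a beta(a, b) cdf with the baseline cdf: F(t) = I_{G(t)}(a, b), the regularized incomplete beta function evaluated at the Birnbaum-Saunders cdf G. A minimal sketch, assuming illustrative parameter values and using SciPy's 'fatiguelife' implementation of the BS baseline:

```python
# Minimal sketch of the beta-G construction with a Birnbaum-Saunders
# baseline: F_bBS(t) = I_{G(t)}(a, b). SciPy calls the BS distribution
# 'fatiguelife'; all parameter values here are illustrative.
import numpy as np
from scipy.stats import beta, fatiguelife

a, b = 2.0, 3.0           # beta shape parameters
alpha_, beta_ = 0.5, 1.0  # BS shape and scale

t = np.linspace(0.01, 5, 500)
G = fatiguelife.cdf(t, alpha_, scale=beta_)                      # baseline BS cdf
F = beta.cdf(G, a, b)                                            # beta-BS cdf
f = beta.pdf(G, a, b) * fatiguelife.pdf(t, alpha_, scale=beta_)  # beta-BS pdf
```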
Abstract:
Objective: To develop a method for objective quantification of PD motor symptoms related to Off episodes and peak-dose dyskinesias, using spiral data gathered by a touch screen telemetry device. The aim was to objectively characterize predominant motor phenotypes (bradykinesia and dyskinesia), to help automate the process of visual interpretation of movement anomalies in spirals as rated by movement disorder specialists. Background: A retrospective analysis was conducted on recordings from 65 patients with advanced idiopathic PD from nine different clinics in Sweden, recruited from January 2006 until August 2010. In addition to the patient group, 10 healthy elderly subjects were recruited. Upper limb movement data were collected using a touch screen telemetry device in the home environments of the subjects. Measurements with the device were performed four times per day during week-long test periods. On each test occasion, the subjects were asked to trace pre-drawn Archimedean spirals using the dominant hand. The pre-drawn spiral was shown on the screen of the device. The spiral test was repeated three times per test occasion, and the subjects were instructed to complete it within 10 seconds. The device had a sampling rate of 10 Hz and measured both the position and the time-stamps (in milliseconds) of the pen tip. Methods: Four independent raters (FB, DH, AJ and DN) used a web interface that animated the spiral drawings, allowed them to observe different kinematic features during the drawing process, and let them rate task performance. Initially, a number of kinematic features were assessed, including ‘impairment’, ‘speed’, ‘irregularity’ and ‘hesitation’, followed by marking the predominant motor phenotype on a 3-category scale: tremor, bradykinesia and/or choreatic dyskinesia. There were only 2 test occasions for which all four raters either classified the phenotype as tremor or could not identify it. Therefore, the two main motor phenotype categories were bradykinesia and dyskinesia. ‘Impairment’ was rated on a scale from 0 (no impairment) to 10 (extremely severe), whereas ‘speed’, ‘irregularity’ and ‘hesitation’ were rated on a scale from 0 (normal) to 4 (extremely severe). The proposed data-driven method consisted of the following steps. Initially, 28 spatiotemporal features were extracted from the time series signals before being presented to a Multilayer Perceptron (MLP) classifier. The features were based on different kinematic quantities of the spirals, including radius, angle, speed and velocity, with the aim of measuring the severity of involuntary symptoms and discriminating between PD-specific symptoms (bradykinesia) and/or treatment-induced symptoms (dyskinesia). A Principal Component Analysis was applied to the features to reduce their dimensionality, and 4 relevant principal components (PCs) were retained and used as inputs to the MLP classifier. Finally, the MLP classifier mapped these components to the corresponding visually assessed motor phenotype scores, automating the scoring of bradykinesia and dyskinesia in PD patients while they draw spirals using the touch screen device. For motor phenotype (bradykinesia vs. dyskinesia) classification, stratified 10-fold cross validation was employed.
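A minimal sketch of this processing chain (standardize, PCA to 4 components, MLP classifier, stratified 10-fold cross validation) using scikit-learn; the feature matrix and labels below are random placeholders, and the network architecture is an assumption rather than the one used in the study:

```python
# Minimal sketch of the described pipeline: 28 spiral features -> PCA(4)
# -> MLP, evaluated with stratified 10-fold CV. X and y are placeholders
# for the spatiotemporal features and phenotype labels (0 = bradykinesia,
# 1 = dyskinesia); the hidden layer size is an assumption.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 28))        # placeholder spatiotemporal features
y = rng.integers(0, 2, size=200)      # placeholder phenotype labels

clf = make_pipeline(StandardScaler(),
                    PCA(n_components=4),
                    MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(scores.mean())
```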
Results: There was good agreement among the four raters when rating the individual kinematic features, with intra-class correlation coefficients (ICC) of 0.88 for ‘impairment’, 0.74 for ‘speed’ and 0.70 for ‘irregularity’, and moderate agreement when rating ‘hesitation’, with an ICC of 0.49. When assessing the two main motor phenotype categories (bradykinesia or dyskinesia) in animated spirals, the agreement among the four raters ranged from fair to moderate. There were good correlations between the mean ratings of the four raters on individual kinematic features and the computed scores. The MLP classifier classified the motor phenotype (bradykinesia or dyskinesia) with an accuracy of 85% in relation to the visual classifications of the four movement disorder specialists. The test-retest reliability of the four PCs across the three spiral test trials was good, with Cronbach’s Alpha coefficients of 0.80, 0.82, 0.54 and 0.49, respectively. These results indicate that the computed scores are stable and consistent over time. Significant differences were found between the two groups (patients and healthy elderly subjects) in all the PCs except PC3. Conclusions: The proposed method automatically assessed the severity of unwanted symptoms and could discriminate reasonably well between PD-specific and/or treatment-induced motor symptoms, in relation to visual assessments of movement disorder specialists. The objective assessments could provide a time-effect summary score that could be useful for improving decision-making during symptom evaluation of individualized treatment, when the goal is to maximize functional On time for patients while minimizing their Off episodes and troublesome dyskinesias.
Abstract:
A challenge for the clinical management of advanced Parkinson’s disease (PD) patients is the emergence of fluctuations in motor performance, which represent a significant source of disability during activities of daily living. There is a lack of objective measurements of treatment effects, for in-clinic and at-home use, that can provide an overview of the treatment response. The objective of this paper was to develop a method for objective quantification of advanced PD motor symptoms related to Off episodes and peak-dose dyskinesia, using spiral data gathered by a touch screen telemetry device. More specifically, the aim was to objectively characterize motor symptoms (bradykinesia and dyskinesia), to help automate the process of visual interpretation of movement anomalies in spirals as rated by movement disorder specialists. Digitized upper limb movement data of 65 advanced PD patients and 10 healthy elderly (HE) subjects were recorded as they performed spiral drawing tasks on a touch screen device in their home environments. Several spatiotemporal features were extracted from the time series and used as inputs to machine learning methods. The methods were validated against ratings on animated spirals scored by four movement disorder specialists who visually assessed a set of kinematic features and the motor symptom. The ability of the method to discriminate between PD patients and HE subjects and the test-retest reliability of the computed scores were also evaluated. Computed scores correlated well with the mean visual ratings of individual kinematic features. The best performing classifier (a Multilayer Perceptron) classified the motor symptom (bradykinesia or dyskinesia) with an accuracy of 84% and an area under the receiver operating characteristic curve of 0.86 in relation to the visual classifications of the raters. In addition, the method provided high discriminating power when distinguishing between PD patients and HE subjects and had good test-retest reliability. This study demonstrated the potential of using digital spiral analysis for objective quantification of PD-specific and/or treatment-induced motor symptoms.
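As a hedged illustration of the kind of spatiotemporal features described (the abstract does not enumerate them), the sketch below computes radius, cumulative angle, and drawing speed from hypothetical pen-tip samples, plus a simple deviation-from-ideal-spiral measure:

```python
# Minimal sketch of plausible spatiotemporal spiral features (radius,
# cumulative angle, drawing speed); x, y are pen-tip coordinates relative
# to the spiral center and t timestamps in seconds (10 Hz), all hypothetical.
import numpy as np

def spiral_features(x, y, t):
    r = np.hypot(x, y)                        # radius (origin assumed at center)
    theta = np.unwrap(np.arctan2(y, x))       # cumulative drawing angle
    speed = np.hypot(np.diff(x), np.diff(y)) / np.diff(t)
    # An Archimedean spiral satisfies r = a + b * theta, so the residual of
    # a linear fit of r on theta measures deviation from the ideal spiral.
    resid = r - np.polyval(np.polyfit(theta, r, 1), theta)
    return {
        "mean_speed": speed.mean(),
        "speed_cv": speed.std() / speed.mean(),   # variability -> irregularity
        "radial_rmse": np.sqrt(np.mean(resid**2)),
    }
```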
Abstract:
Background. Through a national policy agreement, over 167 million Euros will be invested in the Swedish National Quality Registries (NQRs) between 2012 and 2016. One of the policy agreement's intentions is to increase the use of NQR data for quality improvement (QI). However, the evidence is fragmented as to how the use of medical registries and the like leads to quality improvement, and little is known about non-clinical use. The aim was therefore to investigate the perspectives of Swedish politicians and administrators on quality improvement based on national registry data. Methods. Politicians and administrators from four county councils were interviewed. A qualitative content analysis guided by the Consolidated Framework for Implementation Research (CFIR) was performed. Results. The politicians' and administrators' perspectives on the use of NQR data for quality improvement were mainly assigned to three of the five CFIR domains. In the domain of intervention characteristics, data reliability and access within reasonable time were not considered entirely satisfactory, making it difficult for the politico-administrative leaderships to initiate, monitor, and support timely QI efforts. Still, politicians and administrators trusted the idea of using the NQRs as a basis for quality improvement. In the domain of inner setting, the organizational structures were not sufficiently developed to utilize the advantages of the NQRs, and readiness for implementation appeared to be inadequate for two reasons. Firstly, the resources for data analysis and quality improvement were not considered sufficient at the politico-administrative or clinical level. Secondly, deficiencies in leadership engagement at multiple levels were described, and there was a lack of consensus on the politicians' role and level of involvement. Regarding the domain of outer setting, there was a lack of communication and cooperation between the county councils and the national NQR organizations. Conclusions. The Swedish experience shows that a government-supported national system of well-funded, well-managed, and reputable national quality registries needs favorable local politico-administrative conditions to be used for quality improvement; such conditions are not yet in place, according to local politicians and administrators.
Abstract:
Researchers analyzing spatiotemporal or panel data, which vary both across locations and over time, often find that their data have holes or gaps. This thesis explores alternative methods for filling those gaps and suggests a set of techniques for evaluating those gap-filling methods to determine which works best.
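One common evaluation design consistent with this idea is to hide a random subset of observed values, fill the gaps with a candidate method, and score the fill against the held-out truth. A minimal sketch with hypothetical data and simple linear interpolation as the candidate method:

```python
# Minimal sketch of one way to evaluate gap-filling methods: hide a random
# subset of observed values, fill with a candidate method, and score the
# fill against the held-out truth (all data hypothetical).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
s = pd.Series(np.sin(np.linspace(0, 20, 500)) + rng.normal(0, 0.1, 500))

mask = rng.random(len(s)) < 0.1               # hide 10% of observed points
held_out = s[mask]
gappy = s.copy()
gappy[mask] = np.nan

filled = gappy.interpolate(method="linear")   # candidate gap-filling method
rmse = np.sqrt(((filled[mask] - held_out) ** 2).mean())
print(f"RMSE on held-out points: {rmse:.3f}")
```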
Abstract:
Instrumentation and automation play a vital role in managing the water industry. These systems generate vast amounts of data that must be effectively managed in order to enable intelligent decision making. Time series data management software, commonly known as data historians, is used for collecting and managing real-time (time series) information. More advanced software solutions provide a data infrastructure or utility-wide Operations Data Management System (ODMS) that stores, manages, calculates, displays, shares, and integrates data from the multiple disparate automation and business systems used daily in water utilities. These ODMS solutions are proven and can manage everything from smart water meter data to data shared across third-party corporations. This paper focuses on practical utility successes in the water industry where utility managers are leveraging instantaneous access to data from proven, commercial off-the-shelf ODMS solutions to enable better real-time decision making. Successes include saving $650,000 per year in water loss control, safeguarding water quality, and saving millions of dollars in energy management and asset management. Immediate opportunities exist to integrate the research being done in academia with these ODMS solutions in the field and to leverage these successes for utilities around the world.
Abstract:
We discuss the development and performance of a low-power sensor node (hardware, software and algorithms) that autonomously controls the sampling interval of a suite of sensors based on local state estimates and future predictions of water flow. The problem is motivated by the need to accurately reconstruct abrupt state changes in urban watersheds and stormwater systems. Presently, the detection of these events is limited by the temporal resolution of sensor data. It is often infeasible, however, to increase measurement frequency due to energy and sampling constraints. This is particularly true for real-time water quality measurements, where sampling frequency is limited by reagent availability, sensor power consumption, and, in the case of automated samplers, the number of available sample containers. These constraints pose a significant barrier to the ubiquitous and cost-effective instrumentation of large hydraulic and hydrologic systems. Each of our sensor nodes is equipped with a low-power microcontroller and a wireless module to take advantage of urban cellular coverage. The node persistently updates a local, embedded model of flow conditions, while IP-connectivity permits each node to continually query public weather servers for hourly precipitation forecasts. The sampling frequency is then adjusted to increase the likelihood of capturing abrupt changes in a sensor signal, such as the rise of the hydrograph, an event that is often difficult to capture through traditional sampling techniques. Our architecture forms an embedded processing chain, leveraging local computational resources to assess uncertainty by analyzing data as it is collected. A network is presently being deployed in an urban watershed in Michigan, and initial results indicate that the system accurately reconstructs signals of interest while significantly reducing energy consumption and the use of sampling resources. We also expand our analysis by discussing the role of this approach in the efficient real-time measurement of stormwater systems.
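A minimal sketch of the adaptive-sampling logic described above: shorten the sampling interval when the forecast rain probability or the local flow-change estimate is high, lengthen it otherwise. The thresholds and both inputs are hypothetical placeholders, not the deployed node's algorithm:

```python
# Minimal sketch of an adaptive sampling-interval policy of the kind
# described: fast sampling around likely abrupt changes, slow sampling
# otherwise. All thresholds are hypothetical.
MIN_INTERVAL_S = 60     # fastest allowed sampling (energy/reagent limited)
MAX_INTERVAL_S = 3600   # slowest allowed sampling

def next_interval(flow_rate_change, rain_prob):
    """flow_rate_change: recent |d(flow)/dt| estimate; rain_prob: 0..1 forecast."""
    if rain_prob > 0.7 or flow_rate_change > 0.5:
        return MIN_INTERVAL_S          # likely hydrograph rise: sample fast
    if rain_prob > 0.3:
        return 5 * MIN_INTERVAL_S      # elevated risk: moderate rate
    return MAX_INTERVAL_S              # dry, steady conditions: conserve energy
```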
Abstract:
Libraries seek active ways to innovate amid macroeconomic shifts, the growth of online education to help alleviate ever-growing schedule conflicts as students juggle jobs and course schedules, changing business models in publishing, and evolving information technologies. Patron-driven acquisition (PDA), also known as demand-driven acquisition (DDA), offers numerous strengths in supporting university curricula in the context of these significant shifts. PDA is a business model centered on short-term loans and subsequent purchases of ebooks resulting directly from patrons' natural use, stemming from their discovery of the ebooks in library catalogs where the ebooks' bibliographic records are loaded at regular intervals established between the library and the ebook supplier. Winthrop University's PDA plan went live in October 2011, and this article chronicles the philosophical and operational considerations, the in-library collaboration, and the technical preparations in concert with the library system vendor and ebook supplier. A short-term loan is invoked after a threshold is crossed, typically a number of pages viewed or time spent in the ebook. After a certain number of short-term loans negotiated between the library and the ebook supplier, the next short-term loan becomes an automatic purchase, after which the library owns the ebook in perpetuity. Purchasing options include single-user and multi-user licenses. Owing to high levels of need in college and university environments, Winthrop chose the multi-user license as the preferred default purchase. Only where multi-user licenses are unavailable does the automatic purchase occur with a single-user license. Data on initial use between October 2011 and February 2013 reveal that of all PDA ebooks viewed, only 30% crossed the threshold into short-term loans. Of all triggered short-term loans, Psychology was the highest-using discipline. Of all ebook views too brief to trigger short-term loans, Business was the highest-using area. Although the data are still too young to draw conclusions after only a few months, thought-provoking usage differences between academic disciplines have begun to emerge. These differences should be considered in library plans for the best possible curricular support for each academic program. As higher education struggles with costs and course-delivery methods, libraries have an enduring lead role.
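The trigger logic described above can be summarized in a short sketch; the page threshold and loan count are negotiated per contract, so the numbers here are purely illustrative:

```python
# Minimal sketch of the PDA trigger workflow described above: crossing a
# use threshold invokes a short-term loan (STL), and the STL after a
# negotiated count becomes an automatic purchase. Thresholds illustrative.
STL_PAGE_THRESHOLD = 10    # pages viewed that trigger an STL
STL_BEFORE_PURCHASE = 3    # negotiated STLs before auto-purchase

def pda_event(pages_viewed, prior_stls, multi_user_available=True):
    if pages_viewed < STL_PAGE_THRESHOLD:
        return "free browse"
    if prior_stls < STL_BEFORE_PURCHASE:
        return "short-term loan"
    # Multi-user license preferred as the default purchase, per the plan.
    return "purchase (multi-user)" if multi_user_available else "purchase (single-user)"
```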
Abstract:
Excessive labor turnover may be considered, to a great extent, an undesirable feature of a given economy. This follows from considerations such as underinvestment in human capital by firms. Understanding the determinants and the evolution of turnover in a particular labor market is therefore of paramount importance, including for policy considerations. The present paper proposes an econometric analysis of turnover in the Brazilian labor market, based on a partial observability bivariate probit model. This model considers the interdependence of decisions taken by workers and firms, helping to elucidate the causes that lead each of them to end an employment relationship. The Employment and Unemployment Survey (PED) conducted by the State System of Data Analysis (SEADE) and the Inter-Union Department of Statistics and Socioeconomic Studies (DIEESE) provides data at the individual worker level, allowing for the estimation of the joint probabilities of the decision to quit or stay on the job on the worker's side, and the decision to retain or fire the employee on the firm's side, during a given time period. The estimated parameters relate these probabilities to the characteristics of workers and job contracts, and to potential macroeconomic determinants in different time periods. The results confirm the theoretical prediction that the probability of termination of an employment relationship tends to be smaller as the worker acquires specific skills. The results also show that the establishment of a formal employment relationship reduces the probability of a quit decision by the worker, and of a firing decision by the firm in non-industrial sectors. With regard to the evolution of quit probability over time, the results show that an increase in the unemployment rate inhibits quitting, although this effect tends to wane as the unemployment rate rises.
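A minimal sketch of a partial-observability bivariate probit likelihood of the kind described (in the spirit of Poirier's model): the employment relationship continues (y = 1) only if the worker stays and the firm retains, so P(y = 1) is a bivariate normal cdf evaluated at the two index functions. The data, covariates, and starting values are hypothetical:

```python
# Minimal sketch: partial-observability bivariate probit log-likelihood,
# where only y = y1 * y2 (the match continuing) is observed and
# P(y = 1) = Phi2(x'b1, x'b2; rho). Illustrative, not the paper's code.
import numpy as np
from scipy.stats import multivariate_normal
from scipy.optimize import minimize

def neg_loglik(params, X, y):
    k = X.shape[1]
    b1, b2 = params[:k], params[k:2 * k]
    rho = np.tanh(params[-1])                  # keeps rho in (-1, 1)
    cov = [[1.0, rho], [rho, 1.0]]
    xb = np.column_stack([X @ b1, X @ b2])
    p = np.array([multivariate_normal.cdf(row, mean=[0, 0], cov=cov)
                  for row in xb])              # slow but explicit
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p)).sum()

# Hypothetical data: X covariates, y = 1 if the job match continued.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(300), rng.normal(size=(300, 2))])
y = rng.integers(0, 2, size=300)
fit = minimize(neg_loglik, x0=np.zeros(2 * X.shape[1] + 1),
               args=(X, y), method="Nelder-Mead")
```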
Abstract:
Taken from Marc Pickren's blog post of June 13, 2014.
Abstract:
Sound the vuvuzelas: the World Cup is officially here. The biggest sporting event in the world is set to break all kinds of viewing records. Sport in the digital world is just as much about stats as it is about the game itself. Enter Brandwatch. The social media analytics company has taken it upon itself to track social media statistics for the entire run of the World Cup with its new real-time data visualization tool.