844 results for Failure time data analysis
Abstract:
A time series is a sequence of observations made over time. Examples in public health include daily ozone concentrations, weekly admissions to an emergency department or annual expenditures on health care in the United States. Time series models are used to describe the dependence of the response at each time on predictor variables including covariates and possibly previous values in the series. Time series methods are necessary to account for the correlation among repeated responses over time. This paper gives an overview of time series ideas and methods used in public health research.
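The serial correlation that these methods must account for can be shown with a short simulation (a minimal sketch on made-up data, not drawn from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated series with AR(1) dependence: y_t = 0.7 * y_{t-1} + noise.
# The values are invented purely to illustrate serial correlation.
n = 500
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.7 * y[t - 1] + rng.normal()

def acf(x, lag):
    """Sample autocorrelation of x at the given lag."""
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

# Nearby observations are strongly correlated (near the true 0.7), decaying
# with lag; this is exactly the dependence that i.i.d. methods ignore.
print(round(acf(y, 1), 2), round(acf(y, 10), 2))
```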
Abstract:
In this dissertation, we propose a continuous-time Markov chain model to examine longitudinal data whose outcome variable has three categories. The advantage of this model is that it permits a different number of measurements for each subject, and the duration between two consecutive measurement times can be irregular. Using the maximum likelihood principle, we can estimate the transition probabilities between two time points. By using the information provided by the independent variables, the model can also estimate subject-specific transition probabilities. Monte Carlo simulation will be used to compare the model's goodness of fit with that obtained from other models. A public health example will be used to demonstrate the application of this method.
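The transition probabilities such a model rests on can be sketched as follows. For a continuous-time Markov chain with generator matrix Q, the transition probabilities over an interval of length t are P(t) = expm(Qt), which is precisely what accommodates irregular gaps between visits; the rates below are illustrative, not estimates from the dissertation:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical generator matrix Q for a 3-category outcome.
# Off-diagonal entries are transition rates; each row sums to 0.
Q = np.array([
    [-0.4,  0.3,  0.1],
    [ 0.2, -0.5,  0.3],
    [ 0.1,  0.2, -0.3],
])

def transition_matrix(Q, t):
    """P(t) = expm(Q t): transition probabilities over an interval of length t.
    Irregular gaps between visits are handled by plugging in each gap's length."""
    return expm(Q * t)

P_short = transition_matrix(Q, 0.5)  # a short gap between measurements
P_long = transition_matrix(Q, 2.0)   # a longer, irregular gap

# Each row of P(t) is a probability distribution over the three categories.
print(P_short.sum(axis=1))
```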
Abstract:
A good understanding of land surface processes is considered a key subject in environmental sciences. The spatio-temporal coverage of remote sensing data provides continuous observations with high temporal frequency, allowing the assessment of ecosystem evolution at different temporal and spatial scales. Although the value of remote sensing time series has been firmly established, few methods have been developed for analysing these data in a quantitative and continuous manner. In the present dissertation, a working framework for exploiting remote sensing time series is proposed, based on the combination of time series analysis and a phenometric approach. The main goal is to demonstrate the use of remote sensing time series to analyse environmental variable dynamics quantitatively. The specific objectives are (1) to assess environmental variables based on remote sensing time series and (2) to develop empirical models to forecast those variables. 
These objectives have been achieved in four applications whose specific aims are (1) assessing and mapping cotton crop phenological stages using spectral and phenometric analyses, (2) assessing and modelling fire seasonality in two different ecoregions with dynamic models, (3) forecasting forest fire risk on a pixel basis with dynamic models, and (4) assessing vegetation functioning based on temporal autocorrelation and phenometric analysis. The results of this dissertation show the usefulness of function-fitting procedures for modelling the spectral indices AS1 and AS2. Phenometrics derived from the fitted functions make it possible to identify cotton crop phenological stages. Spectral analysis demonstrated quantitatively the presence of one cycle in AS2 and two in AS1, as well as the unimodal and bimodal behaviour of fire seasonality in the Mediterranean and temperate ecoregions, respectively. Autoregressive models were used to characterise the dynamics of fire seasonality in the two ecoregions and to forecast fire risk accurately on a pixel basis. The usefulness of temporal autocorrelation for defining and characterising land surface functioning was also demonstrated. Finally, the “Optical Functional Type” concept was proposed, in which pixels are treated as temporal units and analysed according to their temporal dynamics or functioning.
Abstract:
The aim of this study was to apply multifailure survival methods to analyze time to multiple occurrences of basal cell carcinoma (BCC). Data from 4.5 years of follow-up in a randomized controlled trial, the Nambour Skin Cancer Prevention Trial (1992-1996), to evaluate skin cancer prevention were used to assess the influence of sunscreen application on the time to first BCC and the time to subsequent BCCs. Three different approaches of time to ordered multiple events were applied and compared: the Andersen-Gill, Wei-Lin-Weissfeld, and Prentice-Williams-Peterson models. Robust variance estimation approaches were used for all multifailure survival models. Sunscreen treatment was not associated with time to first occurrence of a BCC (hazard ratio = 1.04, 95% confidence interval: 0.79, 1.45). Time to subsequent BCC tumors using the Andersen-Gill model resulted in a lower estimated hazard among the daily sunscreen application group, although statistical significance was not reached (hazard ratio = 0.82, 95% confidence interval: 0.59, 1.15). Similarly, both the Wei-Lin-Weissfeld marginal-hazards and the Prentice-Williams-Peterson gap-time models revealed trends toward a lower risk of subsequent BCC tumors among the sunscreen intervention group. These results demonstrate the importance of conducting multiple-event analysis for recurring events, as risk factors for a single event may differ from those where repeated events are considered.
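The three models differ mainly in how the recurrent-event data are laid out on the time axis. A minimal sketch of that layout (subject IDs and event times are invented; follow-up length matches the trial's 4.5 years): Andersen-Gill uses counting-process (start, stop, event) rows on the total-time scale, while the Prentice-Williams-Peterson gap-time model resets the clock after each event.

```python
# Hypothetical BCC occurrence times (years) per subject; follow-up ends at 4.5.
events = {"s1": [1.2, 3.0], "s2": [2.5]}
follow_up = 4.5

def counting_process_rows(event_times, end):
    """Andersen-Gill layout: (start, stop, event) rows on the total-time scale."""
    rows, start = [], 0.0
    for t in event_times:
        rows.append((start, t, 1))
        start = t
    if start < end:
        rows.append((start, end, 0))  # censored final interval
    return rows

def gap_times(event_times, end):
    """Prentice-Williams-Peterson gap-time scale: clock resets after each event."""
    rows = counting_process_rows(event_times, end)
    return [(stop - start, event) for start, stop, event in rows]

print(counting_process_rows(events["s1"], follow_up))
# [(0.0, 1.2, 1), (1.2, 3.0, 1), (3.0, 4.5, 0)]
print(gap_times(events["s1"], follow_up))
```

A Cox-type model fitted to the first layout gives the Andersen-Gill intensity estimates; fitted to the second, stratified by event number, it gives the PWP gap-time estimates.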
Abstract:
In this work, we further extend the recently developed Sparse Time-Frequency Representation (STFR) method for adaptive data analysis. This method is based on the assumption that many physical signals inherently contain AM-FM representations. We propose a sparse optimization method to extract the AM-FM representations of such signals. We prove the convergence of the method for periodic signals under certain assumptions and provide practical algorithms specifically for the non-periodic STFR, which extends the method to problems that former STFR methods could not handle, including stability to noise and non-periodic data analysis. This is a significant improvement, since many adaptive and non-adaptive signal processing methods are not fully capable of handling non-periodic signals. Moreover, we propose a new STFR algorithm to study intrawave signals with strong frequency modulation and analyze the convergence of this new algorithm for periodic signals. Such signals have previously remained a bottleneck for all signal processing methods. Furthermore, we propose a modified version of STFR that facilitates the extraction of intrawaves with overlapping frequency content. We show that the STFR methods can be applied to the realm of dynamical systems and cardiovascular signals. In particular, we present a simplified and modified version of the STFR algorithm that is potentially useful for the diagnosis of some cardiovascular diseases. We further explain some preliminary work on the nature of Intrinsic Mode Functions (IMFs) and how they can have different representations in different phase coordinates. This analysis shows that the uncertainty principle is fundamental to all oscillating signals.
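The STFR optimization itself is not reproduced here, but the AM-FM representation it targets, a signal of the form a(t)·cos(θ(t)), can be illustrated with the standard analytic-signal (Hilbert transform) approach on a synthetic signal; this is a common baseline, not the authors' method:

```python
import numpy as np
from scipy.signal import hilbert

# Synthetic AM-FM signal: slowly varying envelope times a frequency-modulated
# carrier near 50 Hz. All parameters are illustrative.
fs = 1000.0
t = np.arange(0, 1, 1 / fs)
envelope = 1.0 + 0.3 * np.cos(2 * np.pi * 1 * t)
phase = 2 * np.pi * 50 * t + 2 * np.sin(2 * np.pi * 2 * t)
x = envelope * np.cos(phase)

# Analytic signal: magnitude recovers a(t), phase derivative recovers
# the instantaneous frequency theta'(t) / (2 pi).
analytic = hilbert(x)
inst_amp = np.abs(analytic)
inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)

# The mean instantaneous frequency should sit near the 50 Hz carrier.
print(round(inst_freq.mean(), 1))
```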
Abstract:
With increasingly complex engineering assets and tight economic requirements, asset reliability becomes more crucial in Engineering Asset Management (EAM). Improving the reliability of systems has always been a major aim of EAM. Reliability assessment using degradation data has become a significant approach for evaluating the reliability and safety of critical systems. Degradation data often provide more information than failure time data for assessing reliability and predicting the remnant life of systems. In general, degradation is the reduction in performance, reliability, and life span of assets, and many failure mechanisms can be traced to an underlying degradation process. Degradation is a stochastic process and can therefore be modelled in several ways. Degradation modelling techniques have generated a great amount of research in the reliability field, yet although degradation models play a significant role in reliability analysis, few review papers address them. This paper presents a review of the existing literature on commonly used degradation models in reliability analysis. Current research and developments in degradation models are reviewed and summarised, and the models are synthesised and classified into groups. Additionally, the paper identifies the merits, limitations, and applications of each model, and points to potential applications of these degradation models in asset health and reliability prediction.
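One concrete instance of the degradation-to-failure-time link described above is the Wiener (Brownian-motion-with-drift) degradation model, under which a unit fails when its degradation path first crosses a threshold. A minimal simulation sketch, with illustrative parameters rather than values from the reviewed literature:

```python
import numpy as np

rng = np.random.default_rng(1)

# Wiener degradation model: X(t) = mu*t + sigma*B(t); failure occurs at the
# first crossing of threshold D. Parameters are illustrative.
mu, sigma, D = 1.0, 0.5, 10.0
dt, horizon, n_units = 0.01, 30.0, 2000
n_steps = int(horizon / dt)

increments = mu * dt + sigma * np.sqrt(dt) * rng.normal(size=(n_units, n_steps))
paths = np.cumsum(increments, axis=1)   # simulated degradation paths
crossed = paths >= D
first_idx = crossed.argmax(axis=1)      # first step at or over the threshold
failure_times = (first_idx + 1) * dt    # failure time data derived from degradation

# For a Wiener process the first passage time is inverse-Gaussian
# distributed with mean D/mu = 10, so the sample mean should be close to 10.
print(round(failure_times.mean(), 1))
```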
Abstract:
In this paper, we use time series analysis to evaluate predictive scenarios using search engine transactional logs. Our goal is to develop models for the analysis of searchers' behaviors over time and to investigate whether time series analysis is a valid method for predicting relationships between searcher actions. Time series analysis is a method often used to understand the underlying characteristics of temporal data in order to make forecasts. In this study, we used a Web search engine transactional log and time series analysis to investigate users' actions. We conducted our analysis in two phases. In the initial phase, we employed a basic analysis and found that 10% of searchers clicked on sponsored links; however, from 22:00 to 24:00, searchers almost exclusively clicked on the organic links, with almost no clicks on sponsored links. In the second, more extensive phase, we used a one-step prediction time series analysis method along with a transfer function method. The time period rarely affects navigational queries, while rates for transactional queries vary across periods. Our results show that the average length of a searcher session is approximately 2.9 interactions and that this average is consistent across time periods. Most importantly, our findings show that searchers who submit the shortest queries (i.e., in number of terms) click on the highest-ranked results. We discuss implications, including predictive value, and future research.
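One-step-ahead prediction can be sketched on a simulated series; the series and its AR(1) structure below are stand-ins, not the paper's log data or its transfer-function model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated hourly rate series with AR(1) dependence (illustrative only).
n = 200
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.6 * y[t - 1] + rng.normal(scale=0.5)

# One-step prediction: regress y_t on y_{t-1} by least squares, then use the
# fitted coefficient to forecast the next, unseen observation.
X, Y = y[:-1], y[1:]
phi = np.dot(X, Y) / np.dot(X, X)   # estimated AR(1) coefficient, near 0.6
one_step_forecast = phi * y[-1]     # forecast for time n+1

print(round(phi, 2))
```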
Abstract:
Catheter associated urinary tract infections (CAUTI) are a worldwide problem that may lead to increased patient morbidity, cost and mortality.1-3 The literature is divided on whether there are real effects from CAUTI on length of stay or mortality. Platt4 found the costs and mortality risks to be large, yet Graves et al found the opposite.5 A review of the published estimates of the extra length of stay showed results between zero and 30 days.6 The differences in estimates may have been caused by the different epidemiological methods applied. Accurately estimating the effects of CAUTI is difficult because it is a time-dependent exposure. This means that standard statistical techniques, such as matched case-control studies, tend to overestimate the increased hospital stay and mortality risk due to infection. The aim of the study was to estimate excess length of stay and mortality in an intensive care unit (ICU) due to a CAUTI, using a statistical model that accounts for the timing of infection. Data collected from ICU units in lower and middle income countries were used for this analysis.7,8 There has been little research for these settings, hence the need for this paper.
Abstract:
Monitoring and assessing environmental health is becoming increasingly important as human activity and climate change place greater pressure on global biodiversity. Acoustic sensors provide the ability to collect data passively, objectively and continuously across large areas for extended periods of time. While these factors make acoustic sensors attractive as autonomous data collectors, there are significant issues associated with large-scale data manipulation and analysis. We present our current research into techniques for analysing large volumes of acoustic data effectively and efficiently. We provide an overview of a novel online acoustic environmental workbench and discuss a number of approaches to scaling the analysis of acoustic data: collaborative, manual, automatic and human-in-the-loop analysis.
Abstract:
Data analysis sessions are a common feature of discourse analytic communities, often involving participants whose expertise ranges from novice to highly experienced. Learning how to do data analysis and how to work with transcripts, however, are often new experiences for doctoral candidates within the social sciences. While many guides to doctoral education focus on procedures associated with data analysis (Heath, Hindmarsh, & Luff, 2010; McHoul & Rapley, 2001; Silverman, 2011; Wetherall, Taylor, & Yates, 2001), the in situ practices of doing data analysis are relatively undocumented. This chapter has been collaboratively written by members of a special interest research group, the Transcript Analysis Group (TAG), who meet regularly to examine transcripts representing audio- and video-recorded interactional data. Here, we investigate our own interactional practices and participation in this group, where each member is both analyst and participant. We particularly focus on the pedagogic practices enacted in the group by investigating how members engage in the scholarly practice of data analysis. A key feature of talk within the data sessions is that members work collaboratively to identify and discuss ‘noticings’ from the audio-recorded and transcribed talk being examined, produce candidate analytic observations based on these discussions, and evaluate these observations. Our investigation of how talk constructs social practices in these sessions shows that participants move fluidly between actions that demonstrate pedagogic practices and expertise. Within any one session, members can display their expertise as analysts and, at the same time, display that they have gained an understanding they did not have before. We take an ethnomethodological position that asks, ‘what’s going on here?’ in the data analysis session. 
By observing the in situ practices in fine-grained detail, we show how members participate in the data analysis sessions and make sense of a transcript.
Abstract:
Objective: To determine whether primary care management of chronic heart failure (CHF) differed between rural and urban areas in Australia. Design: A cross-sectional survey stratified by Rural, Remote and Metropolitan Areas (RRMA) classification. The primary source of data was the Cardiac Awareness Survey and Evaluation (CASE) study. Setting: Secondary analysis of data obtained from 341 Australian general practitioners and 23 845 adults aged 60 years or more in 1998. Main outcome measures: CHF determined by criteria recommended by the World Health Organization, diagnostic practices, use of pharmacotherapy, and CHF-related hospital admissions in the 12 months before the study. Results: There was a significantly higher prevalence of CHF among general practice patients in large and small rural towns (16.1%) compared with capital city and metropolitan areas (12.4%) (P < 0.001). Echocardiography was used less often for diagnosis in rural towns compared with metropolitan areas (52.0% v 67.3%, P < 0.001). Rates of specialist referral were also significantly lower in rural towns than in metropolitan areas (59.1% v 69.6%, P < 0.001), as were prescribing rates of angiotensin-converting enzyme inhibitors (51.4% v 60.1%, P < 0.001). There was no geographical variation in prescribing rates of β-blockers (12.6% [rural] v 11.8% [metropolitan], P = 0.32). Overall, few survey participants received recommended “evidence-based practice” diagnosis and management for CHF (metropolitan, 4.6%; rural, 3.9%; and remote areas, 3.7%). Conclusions: This study found a higher prevalence of CHF, and significantly lower use of recommended diagnostic methods and pharmacological treatment among patients in rural areas.
Abstract:
This report is an update of an earlier version produced in January 2010 (see Carrington et al. 2010), which remains available as an ePrint through the project’s home page. The report provides an introduction to our analyses of extant secondary data on violent acts and incidents relating to males living in rural settings in Australia, using data that were available in public databases at the time of production. It clarifies important aspects of our overall approach, primarily by concentrating on three elements that required early scoping and resolution.
Abstract:
Citizen Science projects are initiatives in which members of the general public participate in scientific research projects and perform or manage research-related tasks such as data collection and/or data annotation. Citizen Science is technologically possible and scientifically significant. However, as the gathered information comes from the crowd, data quality is always hard to manage. There are many ways to manage data quality, and reputation management is one of the common approaches. In recent years, many research teams have deployed audio or image sensors in natural environments in order to monitor the status of animals or plants, with the collected data analysed by ecologists. However, as the amount of collected data is extremely large and the number of ecologists is very limited, it is impossible for scientists to analyse all these data manually. Existing automated tools for processing the data are still limited in function and accuracy. Therefore, researchers have turned to recruiting general citizens who are interested in helping scientific research to do pre-processing tasks such as species tagging. Although research teams can save time and money by recruiting general citizens to volunteer their time and skills to help with data analysis, the reliability of contributed data varies considerably. Therefore, this research aims to investigate techniques for enhancing the reliability of data contributed by general citizens in scientific research projects, especially acoustic sensing projects. In particular, we aim to investigate how to use reputation management to enhance data reliability. Reputation systems have been used to resolve uncertainty and improve data quality in many marketing and E-Commerce domains, and the commercial organizations that have chosen to embrace reputation management and implement the technology have gained many benefits. 
Data quality issues are significant in the domain of Citizen Science due to the quantity and diversity of the people and devices involved. However, research on reputation management in this area is relatively new. We therefore start our investigation by examining existing reputation systems in different domains. We then design novel reputation management approaches for Citizen Science projects to categorise participants and data. We have investigated several critical elements which may influence data reliability in Citizen Science projects, including personal information such as location and education, and performance information such as the ability to recognise certain bird calls. The designed reputation framework is evaluated by a series of experiments involving many participants collecting and interpreting data, in particular environmental acoustic data. Our research in exploring the advantages of reputation management in Citizen Science (or crowdsourcing in general) will help increase awareness among organizations that are unacquainted with its potential benefits.
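A minimal sketch of the kind of reputation scoring such a framework might use: each contributor's verified tags update a Beta(1, 1) prior, and tags are then weighted by the contributor's posterior reputation. The beta-posterior scheme and all names and numbers here are illustrative assumptions, not the thesis's actual design.

```python
# Beta-reputation sketch: a contributor's past tags are verified as correct
# or incorrect, and reputation is the posterior mean of a Beta(1, 1) prior
# updated with those outcomes.
def reputation(correct, incorrect, prior_a=1.0, prior_b=1.0):
    """Posterior mean probability that this contributor tags correctly."""
    return (prior_a + correct) / (prior_a + prior_b + correct + incorrect)

def weighted_label(votes):
    """Combine (label, reputation) votes; higher-reputation tags weigh more."""
    scores = {}
    for label, rep in votes:
        scores[label] = scores.get(label, 0.0) + rep
    return max(scores, key=scores.get)

novice = reputation(correct=2, incorrect=2)    # 0.5: little evidence yet
expert = reputation(correct=48, incorrect=2)   # ~0.94: consistently reliable

# The experienced contributor's tag outweighs the disagreeing novice's.
print(weighted_label([("koala", expert), ("cat", novice)]))  # koala
```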
Abstract:
Background: Malaria is a major public health burden in the tropics, with the potential to increase significantly in response to climate change. Analyses of data from the recent past can elucidate how short-term variations in weather factors affect malaria transmission. This study explored the impact of climate variability on the transmission of malaria in the tropical rain forest area of Mengla County, south-west China. Methods: Ecological time-series analysis was performed on data collected between 1971 and 1999. Auto-regressive integrated moving average (ARIMA) models were used to evaluate the relationship between weather factors and malaria incidence. Results: At the time scale of months, the predictors of malaria incidence included minimum temperature, maximum temperature, and fog day frequency. The effect of minimum temperature on malaria incidence was greater in the cool months than in the hot months. The fog day frequency in October had a positive effect on malaria incidence in May of the following year. At the time scale of years, the annual fog day frequency was the only weather predictor of the annual incidence of malaria. Conclusion: Fog day frequency was found, for the first time, to be a predictor of malaria incidence in a rain forest area. The one-year delayed effect of fog on malaria transmission may involve providing water input and maintaining aquatic breeding sites for mosquitoes at vulnerable times when there is little rainfall during the six-month dry season. These findings should be considered in the prediction of future patterns of malaria for similar tropical rain forest areas worldwide.
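The lagged fog-to-incidence relationship can be sketched with a lagged-predictor regression on simulated monthly data; this is a simplified stand-in for the ARIMA-with-covariates analysis, and the series and coefficients are invented, not the Mengla County data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated monthly data: incidence depends on fog-day frequency 7 months
# earlier (echoing the October -> following May lag). 29 years, 1971-1999.
n, lag = 348, 7
fog = rng.poisson(8, size=n).astype(float)
incidence = 2.0 + 0.5 * np.roll(fog, lag) + rng.normal(scale=1.0, size=n)

# Align the series: drop months without a full lag history.
incidence = incidence[lag:]
fog_lagged = fog[:-lag]

# Ordinary least squares for intercept and lag coefficient.
A = np.column_stack([np.ones_like(fog_lagged), fog_lagged])
coef, *_ = np.linalg.lstsq(A, incidence, rcond=None)
print(round(coef[1], 2))  # recovers a positive lagged fog effect near 0.5
```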