21 results for Failure time data analysis
Abstract:
Objective: This research aims to assess apprentices' and trainees' working conditions, psychosocial factors at work, and health symptoms after joining the labor force. Background: Although there are over 3.5 million young working students in Brazil, this growing contingent faces difficult working conditions such as work pressure, heavy workloads, and a lack of safety training. Method: This study was carried out in a nongovernmental organization (NGO) with 40 young members of a first-job program in the city of Sao Paulo, Brazil. They filled out a comprehensive questionnaire covering sociodemographic variables, working conditions, and health symptoms. Individual and collective semi-structured interviews were also conducted. Empirical data were analyzed using content analysis. Results: The majority of participants reported difficulties in dealing with the pressure and their share of responsibilities at work. Body pains, headaches, sleep deprivation during the workweek, and frequent colds were mentioned. Lack of appropriate task and safety training contributed to the occurrence of work injuries. Conclusion: Holding a full-time job during the day coupled with evening high school attendance may jeopardize these young people's health and future. Application: This study can contribute to the revision and implementation of work training programs for adolescents and to the creation of more sensible policies regarding youth employment.
Abstract:
In this paper, we present approximate distributions for the ratio of the cumulative wavelet periodograms of stationary and non-stationary time series generated from independent Gaussian processes. We also adapt an existing procedure to use this statistic and its approximate distribution to test whether two regularly or irregularly spaced time series are realizations of the same generating process. Simulation studies show good size and power properties for the test statistic. An application with financial microdata illustrates the test's usefulness. We conclude by advocating the use of these approximate distributions instead of the ones obtained through randomizations, mainly in the case of irregular time series.
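Purely as an illustration of the kind of statistic described above, and not the authors' exact construction, the sketch below computes a normalised cumulative wavelet periodogram for each series (cumulative energy of DWT detail coefficients, using PyWavelets with a Haar wavelet as an assumed choice) and a simple supremum-type deviation of their ratio from one; in practice the critical value would come from the approximate distribution derived in the paper or from randomizations.

```python
# Minimal sketch, assuming a Haar DWT and a sup-type ratio statistic;
# not the paper's exact test.
import numpy as np
import pywt

def cumulative_wavelet_periodogram(x, wavelet="haar", level=4):
    """Normalised cumulative energy of the DWT detail coefficients."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    energy = np.concatenate([c**2 for c in coeffs[1:]])  # detail coefficients only
    cwp = np.cumsum(energy)
    return cwp / cwp[-1]                                 # normalise to [0, 1]

def ratio_statistic(x, y, **kw):
    """Largest deviation of the ratio of cumulative periodograms from 1."""
    cx = cumulative_wavelet_periodogram(x, **kw)
    cy = cumulative_wavelet_periodogram(y, **kw)
    m = min(len(cx), len(cy))
    return np.max(np.abs(cx[:m] / cy[:m] - 1.0))

rng = np.random.default_rng(0)
x, y = rng.normal(size=512), rng.normal(size=512)
# Compare against a critical value from the approximate distribution
# (or from randomizations) to decide whether x and y share a generating process.
print(ratio_statistic(x, y))
```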
Abstract:
Dimensionality reduction is employed for visual data analysis as a way to obtaining reduced spaces for high dimensional data or to mapping data directly into 2D or 3D spaces. Although techniques have evolved to improve data segregation on reduced or visual spaces, they have limited capabilities for adjusting the results according to user's knowledge. In this paper, we propose a novel approach to handling both dimensionality reduction and visualization of high dimensional data, taking into account user's input. It employs Partial Least Squares (PLS), a statistical tool to perform retrieval of latent spaces focusing on the discriminability of the data. The method employs a training set for building a highly precise model that can then be applied to a much larger data set very effectively. The reduced data set can be exhibited using various existing visualization techniques. The training data is important to code user's knowledge into the loop. However, this work also devises a strategy for calculating PLS reduced spaces when no training data is available. The approach produces increasingly precise visual mappings as the user feeds back his or her knowledge and is capable of working with small and unbalanced training sets.
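A minimal sketch of the train-then-project idea described above, not the authors' implementation: fit a PLS model on a small labelled training set and use it to map a much larger set into a 2-D visual space. The one-hot encoding of class labels (PLS-DA style) and the synthetic data are assumptions for illustration.

```python
# Sketch only: PLS fitted on a small labelled set, then applied to a large set.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 50))     # small labelled training set
y_train = rng.integers(0, 3, size=100)   # user-provided class knowledge
X_large = rng.normal(size=(10_000, 50))  # large data set to be visualised

Y_train = np.eye(3)[y_train]             # one-hot targets (PLS-DA style, assumed)
pls = PLSRegression(n_components=2).fit(X_train, Y_train)

scores_2d = pls.transform(X_large)       # latent 2-D coordinates for plotting
print(scores_2d.shape)                   # (10000, 2)
```

Refitting the model as the user relabels or adds training samples is how feedback would sharpen the visual mapping in this sketch.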
Abstract:
Background: Atrial fibrillation is a serious public health problem, posing a considerable burden not only to patients but also to the healthcare environment due to high rates of morbidity, mortality, and medical resource utilization. There are limited data on the variation in treatment practice patterns across different countries and healthcare settings and on the associated health outcomes. Methods/design: RHYTHM-AF was a prospective observational multinational study of the management of recent-onset atrial fibrillation patients considered for cardioversion, designed to collect data on international treatment patterns and short-term outcomes related to cardioversion. We present data collected in 10 countries between May 2010 and June 2011. Enrollment was ongoing in Italy and Brazil at the time of data analysis. Data were collected at the time of the atrial fibrillation episode in all countries (Australia, Brazil, France, Germany, Italy, Netherlands, Poland, Spain, Sweden, United Kingdom), and cumulative follow-up data were collected at day 60 (+/- 10) in all but Spain. Information was collected on center characteristics, enrollment data, patient demographics, details of the atrial fibrillation episode, medical history, diagnostic procedures, acute treatment of atrial fibrillation, discharge information, and follow-up data on major events and rehospitalizations up to day 60. Discussion: A total of 3940 patients were enrolled from 175 acute care centers. Of these centers, 70.5% were either academic (44%) or teaching (26%) hospitals, with an overall median capacity of 510 beds. The sites were mostly specialized, with anticoagulation clinics (65.9%), heart failure clinics (75.1%), and hypertension clinics (60.1%) available. The RHYTHM-AF registry will provide insight into regional variability in the antiarrhythmic and antithrombotic treatment of atrial fibrillation, the appropriateness of such treatments with respect to outcomes, and their cost-efficacy. These observations will help inform strategies to improve cardiovascular outcomes in patients with atrial fibrillation.
Abstract:
Current scientific applications produce large amounts of data. Processing, handling, and analyzing such data require large-scale computing infrastructures such as clusters and grids. In this area, studies aim at improving the performance of data-intensive applications by optimizing data accesses. To achieve this goal, distributed storage systems have adopted techniques of data replication, migration, distribution, and access parallelism. However, the main drawback of those studies is that they do not take application behavior into account when performing data access optimization. This limitation motivated this paper, which applies strategies to support the online prediction of application behavior in order to optimize data access operations on distributed systems, without requiring any information on past executions. To accomplish this goal, the approach organizes application behaviors as time series and then analyzes and classifies those series according to their properties. Based on these properties, the approach selects modeling techniques to represent the series and perform predictions, which are later used to optimize data access operations. This new approach was implemented and evaluated using the OptorSim simulator, sponsored by the LHC-CERN project and widely employed by the scientific community. Experiments confirm that this new approach reduces application execution time by about 50 percent, especially when handling large amounts of data.
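A rough sketch of the general strategy described above, not the paper's actual system: treat per-interval data-access counts as a time series, choose a simple model based on a basic property check, and forecast the next access volume so a storage layer could prefetch or replicate ahead of time. The crude trend test and both forecasting rules are simplifying assumptions.

```python
# Sketch only: classify a short access-count series and forecast the next value.
import numpy as np

def forecast_next(accesses):
    x = np.asarray(accesses, dtype=float)
    t = np.arange(len(x))
    slope, intercept = np.polyfit(t, x, 1)
    if abs(slope) > 0.1 * (x.std() + 1e-9):   # crude trend check (assumed threshold)
        return slope * len(x) + intercept     # linear-trend extrapolation
    # otherwise treat the series as roughly stationary: AR(1)-style forecast
    x0, x1 = x[:-1] - x.mean(), x[1:] - x.mean()
    phi = (x0 @ x1) / (x0 @ x0 + 1e-9)
    return x.mean() + phi * (x[-1] - x.mean())

history = [120, 135, 150, 170, 185, 210]      # blocks read per time window
print(forecast_next(history))                 # could drive prefetch/replication decisions
```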
Abstract:
In this article, we propose a new Bayesian flexible cure rate survival model, which generalises the stochastic model of Klebanov et al. [Klebanov LB, Rachev ST and Yakovlev AY. A stochastic model of radiation carcinogenesis: latent time distributions and their properties. Math Biosci 1993; 113: 51-75] and has much in common with the destructive model formulated by Rodrigues et al. [Rodrigues J, de Castro M, Balakrishnan N and Cancho VG. Destructive weighted Poisson cure rate models. Technical Report, Universidade Federal de Sao Carlos, Sao Carlos-SP, Brazil, 2009 (accepted in Lifetime Data Analysis)]. In our approach, the accumulated number of lesions or altered cells follows a compound weighted Poisson distribution. This model is more flexible than the promotion time cure model in terms of dispersion. Moreover, it possesses an interesting and realistic interpretation of the biological mechanism of the occurrence of the event of interest, as it includes a destructive process of tumour cells after an initial treatment or the capacity of an individual exposed to irradiation to repair altered cells that result in cancer induction. In other words, what is recorded is only the damaged portion of the original number of altered cells not eliminated by the treatment or repaired by the repair system of the individual. Markov chain Monte Carlo (MCMC) methods are then used to develop Bayesian inference for the proposed model. We also present some discussion of model selection and an illustration with a cutaneous melanoma data set analysed by Rodrigues et al. [Rodrigues J, de Castro M, Balakrishnan N and Cancho VG. Destructive weighted Poisson cure rate models. Technical Report, Universidade Federal de Sao Carlos, Sao Carlos-SP, Brazil, 2009 (accepted in Lifetime Data Analysis)].
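A minimal simulation sketch of the destructive cure-rate mechanism described above, using a plain Poisson count and Weibull promotion times as simplifying assumptions (the paper works with a more general compound weighted Poisson formulation and develops full Bayesian MCMC inference, which is not shown here).

```python
# Sketch only: simulate event times from a simplified destructive Poisson
# cure rate model; subjects whose damaged-cell count is zero are cured.
import numpy as np

def simulate_destructive_cure(n, eta=2.0, p=0.4, shape=1.5, scale=3.0, seed=0):
    rng = np.random.default_rng(seed)
    N = rng.poisson(eta, size=n)            # initiated lesions / altered cells
    D = rng.binomial(N, p)                  # cells surviving treatment or repair
    times = np.full(n, np.inf)              # D == 0 -> cured, event never occurs
    for i in range(n):
        if D[i] > 0:
            # event occurs at the first surviving cell's promotion time
            times[i] = scale * rng.weibull(shape, size=D[i]).min()
    return times

t = simulate_destructive_cure(10_000)
print("cure fraction:", np.isinf(t).mean())  # close to exp(-eta * p) under this sketch
```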