957 results for Statistical approach
Acceptance of relapse fears in breast cancer patients: effects of an ACT-based abridged intervention
Abstract:
Objective: Fear of relapse is a common psychological scar in cancer survivors. The aim of this study was to assess the effects of an abridged version of Acceptance and Commitment Therapy (ACT) in breast cancer patients. Method: An open trial was conducted with 12 non-metastatic breast cancer patients assigned to two conditions, ACT and waiting list. Interventions were delivered in a single session and focused on the acceptance of relapse fears through a 'defusion' exercise. Interference and intensity of fear, measured on subjective scales, were collected after each intervention and again 3 months later. Distress, hypochondria and 'anxious preoccupation' were also evaluated with standardized questionnaires. Results: The analysis revealed that 'defusion' helped decrease the interference of the fear of recurrence, and these changes were maintained three months after the intervention in most participants. At follow-up, 87% of participants showed clinically significant decreases in interference, whereas no patient in the waiting-list group showed such changes. Statistical analysis revealed that the changes in interference were significant when comparing pre-treatment, post-treatment and follow-up, and also when comparing the ACT and waiting-list groups. Changes in intensity of fear, distress, anxious preoccupation and hypochondria were also observed. Conclusions: Exposure through 'defusion' techniques may be a useful option for the treatment of persistent fears in cancer patients. This study provides evidence for therapies focusing on psychological acceptance in cancer patients through short, simple and feasible therapeutic methods.
Abstract:
Several authors have developed an interest in understanding the abilities associated with time management. Accordingly, multiple theoretical definitions have been proposed to delineate this concept, and a variety of questionnaires have been developed to measure it. The aim of this study was to validate the French translation of one of these tools, the Time Personality Indicator (TPI). Exploratory and confirmatory factor analyses were performed on data collected from 1,267 students and employees of Université Laval who completed the French version of the TPI along with other personality measures. Results revealed that an eight-factor solution provided the best fit to the sample data. The discussion presents arguments supporting the validity of the French version of the TPI, identifies some limitations of the present study, and underlines the usefulness of this tool for research on time management.
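For the exploratory step described above, a minimal sketch in Python using the factor_analyzer package might look as follows; the data file and item columns are hypothetical, and an eight-factor solution is assumed as in the abstract.

```python
# Hedged sketch: an exploratory factor analysis along the lines described in the
# abstract (eight-factor solution). File and column names are hypothetical.
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Hypothetical item-level responses to the French TPI (one column per item).
items = pd.read_csv("tpi_french_items.csv")

# Fit an eight-factor model with an oblique rotation, as is common for
# correlated personality facets.
efa = FactorAnalyzer(n_factors=8, rotation="oblimin")
efa.fit(items)

loadings = pd.DataFrame(efa.loadings_, index=items.columns)
variance_explained = efa.get_factor_variance()  # variance, proportion, cumulative
print(loadings.round(2))
print(variance_explained)
```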
Abstract:
Background: Agro-wastes were used for the production of a fibrinolytic enzyme by solid-state fermentation. The process parameters were optimized by a statistical approach to enhance the production of the fibrinolytic enzyme from Bacillus halodurans IND18. The fibrinolytic enzyme was purified and its properties were studied. Results: A two-level full factorial design was used to screen the significant factors. Moisture, pH, and peptone significantly affected enzyme production, and these three factors were selected for further optimization using a central composite design. The optimum medium for fibrinolytic enzyme production was wheat bran medium containing 1% peptone and 80% moisture at pH 8.32. Under these optimized conditions, the production of the fibrinolytic enzyme was 6851 U/g. The fibrinolytic enzyme was purified 3.6-fold with a specific activity of 1275 U/mg. The molecular mass of the fibrinolytic enzyme, determined by sodium dodecyl sulphate polyacrylamide gel electrophoresis, was 29 kDa. The enzyme had an optimal pH of 9.0, was stable over the pH range 8.0 to 10.0, had an optimal temperature of 60°C, and was stable up to 50°C. The enzyme activated plasminogen and also degraded the fibrin net of a blood clot, which suggests its potential as an effective thrombolytic agent. Conclusions: Wheat bran was found to be an effective substrate for the production of the fibrinolytic enzyme. The purified fibrinolytic enzyme degraded fibrin clots and could be developed into an effective thrombolytic agent.
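As an illustration of the response-surface optimization described above, a minimal sketch assuming hypothetical central composite design (CCD) run data (coded factor levels and measured activity) could fit a full quadratic model and locate its predicted optimum:

```python
# Hedged sketch: fitting a quadratic response-surface model to CCD data and
# locating the predicted optimum. The data file and column names are hypothetical.
import pandas as pd
import numpy as np
import statsmodels.formula.api as smf
from scipy.optimize import minimize

# Hypothetical CCD runs: coded levels of moisture, pH, peptone and the measured
# enzyme activity (U/g).
ccd = pd.read_csv("ccd_runs.csv")  # columns: moisture, pH, peptone, activity

# Full second-order model: linear, quadratic and interaction terms.
model = smf.ols(
    "activity ~ moisture + pH + peptone"
    " + I(moisture**2) + I(pH**2) + I(peptone**2)"
    " + moisture:pH + moisture:peptone + pH:peptone",
    data=ccd,
).fit()
print(model.summary())

# Maximize the fitted surface over the coded design region [-1, 1]^3.
def neg_predicted(x):
    point = pd.DataFrame({"moisture": [x[0]], "pH": [x[1]], "peptone": [x[2]]})
    return -model.predict(point).iloc[0]

opt = minimize(neg_predicted, x0=np.zeros(3), bounds=[(-1, 1)] * 3)
print("Predicted optimum (coded units):", opt.x)
```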
Abstract:
Optimization of Carnobacterium divergens V41 growth and bacteriocin activity in a culture medium deprived of animal protein, as needed for food bioprotection, was performed using a statistical approach. In a screening experiment, twelve factors (pH, temperature, carbohydrates, NaCl, yeast extract, soy peptone, sodium acetate, ammonium citrate, magnesium sulphate, manganese sulphate, ascorbic acid and thiamine) were tested for their influence on maximal growth and bacteriocin activity using a two-level incomplete factorial design with 192 experiments performed in microtiter plate wells. Based on these results, a basic medium was developed and three variables (pH, temperature and carbohydrate concentration) were selected for a scale-up study in a bioreactor. A 2³ complete factorial design was performed, allowing the estimation of the linear effects of the factors and all first-order interactions. The best conditions for cell production were obtained at a temperature of 15°C and a carbohydrate concentration of 20 g/l regardless of the pH (in the range 6.5-8), and the best conditions for bacteriocin activity were obtained at 15°C and pH 6.5 regardless of the carbohydrate concentration (in the range 2-20 g/l). The predicted final count of C. divergens V41 and the bacteriocin activity under the optimized conditions (15°C, pH 6.5, 20 g/l carbohydrates) were 2.4 × 10¹⁰ CFU/ml and 819,200 AU/ml, respectively. C. divergens V41 cells cultivated under the optimized conditions were able to grow in cold-smoked salmon and totally inhibited the growth of Listeria monocytogenes (< 50 CFU g⁻¹) during five weeks of vacuum storage at 4°C and 8°C.
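A minimal sketch of a 2³ full factorial analysis of this kind, with coded factor levels and placeholder responses, might look as follows:

```python
# Hedged sketch: a 2^3 full factorial analysis of the scale-up study described
# above (pH, temperature, carbohydrate concentration), estimating linear effects
# and first-order interactions. Response values below are placeholders.
import itertools
import pandas as pd
import statsmodels.formula.api as smf

# Coded design matrix: every combination of low (-1) and high (+1) levels.
design = pd.DataFrame(
    list(itertools.product([-1, 1], repeat=3)),
    columns=["pH", "temperature", "carbs"],
)

# Hypothetical measured responses (e.g. maximal cell count) for the 8 runs.
design["response"] = [3.1, 2.8, 9.5, 8.9, 3.4, 3.0, 10.2, 9.7]

# Model with main effects and all two-factor interactions.
fit = smf.ols(
    "response ~ pH + temperature + carbs"
    " + pH:temperature + pH:carbs + temperature:carbs",
    data=design,
).fit()
print(fit.params.round(3))
```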
Abstract:
The aim of this project is to develop a methodology for the diagnosis and prediction of the state of charge and state of health of lithium-ion batteries for automotive applications. For lithium-ion batteries, residual functionality is expressed in terms of state of health; this quantity cannot be measured directly, so it must be estimated. The development of the algorithms is based on identifying the causes of battery degradation in order to model and predict its trend. Models were therefore developed that are able to predict the electrical, thermal and aging behavior. In addition to the models, it was necessary to develop algorithms capable of monitoring the state of the battery, both online and offline. This was achieved with algorithms based on Kalman filters, which allow the state of the system to be estimated in real time. Machine learning algorithms, which allow offline analysis of battery deterioration using a statistical approach, make it possible to analyze information from an entire fleet of vehicles. The two systems work in synergy to achieve the best performance. Validation was performed with laboratory tests on different batteries under different conditions. The use of the models reduced the time required for the experimental tests: some specific phenomena were tested in the laboratory, while the remaining cases were generated artificially.
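A minimal sketch of the kind of Kalman-filter state estimation mentioned above is shown below; the battery parameters, noise levels and open-circuit-voltage model are simplified assumptions, not the thesis's actual implementation.

```python
# Hedged sketch: a minimal linear Kalman filter for state-of-charge estimation.
import numpy as np

dt = 1.0            # time step [s]
capacity = 3600.0   # cell capacity [As] (1 Ah), assumed
q = 1e-7            # process noise variance
r = 1e-3            # measurement noise variance

soc, p = 0.9, 1e-2  # initial SoC estimate and its variance

def ocv(soc):
    # Crude, assumed open-circuit-voltage curve (V) as a function of SoC.
    return 3.0 + 1.2 * soc

def kalman_step(soc, p, current, v_meas):
    # Predict: coulomb counting (discharge current positive).
    soc_pred = soc - current * dt / capacity
    p_pred = p + q
    # Update: linearize the OCV curve around the predicted SoC.
    h = 1.2                      # d(ocv)/d(soc) for the assumed curve
    y = v_meas - ocv(soc_pred)   # innovation
    s = h * p_pred * h + r
    k = p_pred * h / s           # Kalman gain
    return soc_pred + k * y, (1 - k * h) * p_pred

# Example: one step with a hypothetical current and voltage measurement.
soc, p = kalman_step(soc, p, current=2.0, v_meas=4.05)
print(round(soc, 4))
```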
Abstract:
This thesis analyzes the impact of heat extremes in urban and rural environments, considering processes related to severely high temperatures and unusual dryness. The first part deals with the influence of large-scale heatwave events on the local-scale urban heat island (UHI) effect. The temperatures recorded over a 20-year summer period by meteorological stations in 37 European cities are examined to evaluate the variations of the UHI during heatwaves with respect to non-heatwave days. A statistical analysis reveals a negligible impact of large-scale extreme temperatures on the local daytime urban climate, but a notable exacerbation of the UHI effect at night. A comparison with UrbClim model outputs confirms the UHI strengthening during heatwave episodes, with an intensity independent of the climate zone. The investigation of the relationship between large-scale temperature anomalies and the UHI highlights a smooth and continuous dependence, albeit with strong variability. The lack of a threshold behavior in this relationship suggests that large-scale temperature variability can affect the local-scale UHI even in conditions other than extreme events. The second part examines the transition from meteorological to agricultural drought, the first stage of the drought propagation process. A multi-year reanalysis dataset covering numerous drought events over the Iberian Peninsula is considered, and the behavior of different non-parametric standardized drought indices in drought detection is evaluated. A statistical approach based on run theory is employed to analyze the main characteristics of drought propagation. The propagation from meteorological to agricultural drought events is found to develop in about 1-2 months. The duration of agricultural drought appears shorter than that of meteorological drought, but its onset is delayed. The propagation probability increases with the severity of the originating meteorological drought. Finally, a new combined agricultural drought index is developed as a tool to balance the characteristics of the other indices adopted.
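A minimal sketch of a run-theory event extraction of the kind described above, applied to a synthetic standardized index series with an assumed threshold of -1, might look as follows:

```python
# Hedged sketch: run-theory extraction of drought events from a standardized
# drought index series (e.g. SPI). The threshold and the series are assumptions.
import numpy as np

def drought_runs(index, threshold=-1.0):
    """Return (onset, duration, severity) for each run below the threshold."""
    events = []
    below = index < threshold
    start = None
    for t, flag in enumerate(below):
        if flag and start is None:
            start = t
        elif not flag and start is not None:
            run = index[start:t]
            events.append((start, t - start, float(-run.sum())))
            start = None
    if start is not None:
        run = index[start:]
        events.append((start, len(index) - start, float(-run.sum())))
    return events

# Example with a synthetic monthly index series (20 years of monthly values).
rng = np.random.default_rng(0)
spi = rng.normal(size=240)
for onset, duration, severity in drought_runs(spi):
    print(f"onset month {onset}, duration {duration}, severity {severity:.2f}")
```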
Abstract:
The aim of the present study was to develop a statistical approach to define the best cut-off for copy number alteration (CNA) calling from genomic data provided by high-throughput experiments, able to predict a specific clinical end-point (early relapse, ER, within 18 months) in the context of multiple myeloma (MM). 743 newly diagnosed MM patients with SNP array-derived genomic data and clinical data were included in the study. CNAs were called both by a conventional (classic, CL) and an outcome-oriented (OO) method, and the progression-free survival (PFS) hazard ratios of CNAs called by the two approaches were compared. The OO approach successfully identified patients at higher risk of relapse, and the univariate survival analysis showed stronger prognostic effects for OO-defined high-risk alterations than for those defined by the CL approach, statistically significant for 12 CNAs. Overall, 155/743 patients relapsed within 18 months of the start of therapy. A small number of OO-defined CNAs were significantly recurrent in early-relapsed patients (ER-CNAs): amp1q, amp2p, del2p, del12p, del17p and del19p. Two groups of patients were identified, carrying or not carrying ≥1 ER-CNA (249 vs. 494, respectively); the first group had significantly shorter PFS and overall survival (OS) (PFS HR 2.15, p<0.0001; OS HR 2.37, p<0.0001). The risk of relapse defined by the presence of ≥1 ER-CNA was independent of that conferred by R-ISS stage 3 (HR 1.51; p=0.01) and by a poor (less than stable disease) clinical response (HR 2.59; p=0.004). Notably, the type of induction therapy was not informative, suggesting that ER is strongly related to patients' baseline genomic architecture. In conclusion, the OO approach employed allowed the definition of CNA-specific dynamic clonality cut-offs, improving the accuracy of CNA calls to identify MM patients with the highest probability of ER. Being outcome-dependent, the OO approach is dynamic and can be adjusted according to the selected outcome variable of interest.
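As an illustration of an outcome-oriented cut-off search of this kind, a minimal sketch (hypothetical data and column names, using the lifelines package for the univariate Cox model) is shown below:

```python
# Hedged sketch: for one CNA, scan the subclonal-fraction threshold used to call
# the alteration and keep the cut-off giving the strongest univariate PFS effect
# (Cox hazard ratio). The data file and column names are hypothetical.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical per-patient table: subclonal fraction of the CNA, PFS in months,
# and relapse indicator (1 = progressed/relapsed).
df = pd.read_csv("mm_patients.csv")  # columns: fraction, pfs_months, relapse

best = None
for cutoff in np.arange(0.05, 0.95, 0.05):
    tmp = pd.DataFrame({
        "carrier": (df["fraction"] >= cutoff).astype(int),
        "pfs_months": df["pfs_months"],
        "relapse": df["relapse"],
    })
    if tmp["carrier"].nunique() < 2:
        continue  # this cut-off leaves only a single group
    cph = CoxPHFitter().fit(tmp, duration_col="pfs_months", event_col="relapse")
    hr = cph.hazard_ratios_["carrier"]
    p = cph.summary.loc["carrier", "p"]
    if best is None or hr > best[1]:
        best = (cutoff, hr, p)

print("Outcome-oriented cut-off (cutoff, HR, p):", best)
```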
Abstract:
This article intends to contribute to the reflection on Educational Statistics as a source for research on the history of education. The main concern was to reveal how the Educational Statistics for the period from 1871 to 1931 were produced by the central government. Official reports from the General Statistics Directory and the statistical yearbooks released by that department were analyzed, and in this analysis the recommendations and definitions guiding the work were sought. By problematizing the documental issues surrounding Educational Statistics and their usual interpretations, the intention was to reduce the ignorance about the origin of school numbers, which are occasionally used in current research without proper critical examination.
Abstract:
In this work, a previously developed statistics-based damage-detection approach was validated for its ability to autonomously detect damage in bridges. The damage-detection approach uses statistical differences between the actual and predicted behavior of the bridge under a subset of ambient truck traffic. The predicted behavior is derived from a statistics-based model trained with field data from the undamaged bridge (not a finite element model). The differences between actual and predicted responses, called residuals, are then used to construct control charts, which compare undamaged and damaged structure data. Validation of the damage-detection approach was achieved by using sacrificial specimens that were mounted to the bridge, exposed to ambient traffic loads, and which simulated actual damage-sensitive locations. Different damage types and levels were introduced to the sacrificial specimens to study the sensitivity and applicability of the approach. The damage-detection algorithm was able to identify damage, but it also had a high false-positive rate. An evaluation of the sub-components of the damage-detection methodology was completed with the aim of improving the approach. Several of the underlying assumptions within the algorithm were being violated, which was the source of the false positives. Furthermore, the lack of an automatic evaluation process was thought to be a potential impediment to widespread use. Recommendations for the improvement of the methodology were developed and preliminarily evaluated; these recommendations are believed to improve the efficacy of the damage-detection approach.
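A minimal sketch of the residual-based control-chart idea described above, with hypothetical sensor names and event files, might look as follows:

```python
# Hedged sketch: a regression model trained on undamaged-bridge data predicts
# one strain response from others; residuals on new monitoring data are checked
# against Shewhart control limits. Sensor names and data files are hypothetical.
import pandas as pd
from sklearn.linear_model import LinearRegression

train = pd.read_csv("undamaged_events.csv")   # one row per truck event
new = pd.read_csv("monitoring_events.csv")

predictors = ["strain_girder_1", "strain_girder_2", "strain_girder_3"]
target = "strain_girder_4"

model = LinearRegression().fit(train[predictors], train[target])

# Control limits from the residuals of the training (undamaged) data.
train_resid = train[target] - model.predict(train[predictors])
center, sigma = train_resid.mean(), train_resid.std(ddof=1)
ucl, lcl = center + 3 * sigma, center - 3 * sigma

# Flag monitoring events whose residual falls outside the control limits.
new_resid = new[target] - model.predict(new[predictors])
out_of_control = (new_resid > ucl) | (new_resid < lcl)
print(f"{out_of_control.sum()} of {len(new)} events outside control limits")
```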
Abstract:
The climate belongs to the class of non-equilibrium, forced and dissipative systems, for which most results of quasi-equilibrium statistical mechanics, including the fluctuation-dissipation theorem, do not apply. In this paper we show for the first time how the Ruelle linear response theory, developed for studying rigorously the impact of perturbations on general observables of non-equilibrium statistical mechanical systems, can be applied with great success to analyze the climatic response to general forcings. The crucial value of the Ruelle theory lies in the fact that it allows one to compute the response of the system in terms of expectation values of explicit and computable functions of the phase space, averaged over the invariant measure of the unperturbed state. We choose as a test bed a classical version of the Lorenz 96 model, which, in spite of its simplicity, has a well-recognized prototypical value: it is a spatially extended one-dimensional model and presents the basic ingredients, such as dissipation, advection and the presence of an external forcing, of the actual atmosphere. We recapitulate the main aspects of the general response theory and propose some new general results. We then analyze the frequency dependence of the response of both local and global observables to perturbations having localized as well as global spatial patterns. We derive analytically several properties of the corresponding susceptibilities, such as asymptotic behavior, validity of Kramers-Kronig relations, and sum rules, whose main ingredient is the causality principle. We show that all the coefficients of the leading asymptotic expansions as well as the integral constraints can be written as linear functions of parameters that describe the unperturbed properties of the system, such as its average energy. Some newly obtained empirical closure equations for such parameters allow these properties to be expressed as explicit functions of the unperturbed forcing parameter alone for a general class of chaotic Lorenz 96 models. We then verify the theoretical predictions from the outputs of the simulations to a high degree of precision. The theory is used to explain differences in the response of local and global observables, to define the intensive properties of the system, which do not depend on the spatial resolution of the Lorenz 96 model, and to generalize the concept of climate sensitivity to all time scales. We also show how to reconstruct the linear Green function, which maps perturbations of general time patterns into changes in the expectation value of the considered observable for finite as well as infinite time. Finally, we propose a simple yet general methodology for studying general climate change problems on virtually any time scale by resorting to only well-selected simulations and by taking full advantage of ensemble methods. The specific case of the globally averaged surface temperature response to a general pattern of change of the CO2 concentration is discussed. We believe that the proposed approach may constitute a mathematically rigorous and practically very effective way to approach the problem of climate sensitivity, climate prediction, and climate change from a radically new perspective.
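As a purely illustrative companion to this abstract, the sketch below integrates the Lorenz 96 model and estimates the sensitivity of the average energy to the forcing by brute-force finite differences; it is not the Ruelle response formula itself, and all parameter values are assumptions.

```python
# Hedged sketch: Lorenz 96 integration and a finite-difference estimate of the
# response of the mean energy to a small change in the forcing F.
import numpy as np

N, F, dF = 40, 8.0, 0.5
dt, spinup, steps = 0.01, 2000, 50000

def l96_tendency(x, forcing):
    # dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F  (cyclic indices)
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def rk4_step(x, forcing):
    k1 = l96_tendency(x, forcing)
    k2 = l96_tendency(x + 0.5 * dt * k1, forcing)
    k3 = l96_tendency(x + 0.5 * dt * k2, forcing)
    k4 = l96_tendency(x + dt * k3, forcing)
    return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def mean_energy(forcing, seed=0):
    rng = np.random.default_rng(seed)
    x = forcing + 0.01 * rng.standard_normal(N)
    energy = 0.0
    for step in range(spinup + steps):
        x = rk4_step(x, forcing)
        if step >= spinup:
            energy += 0.5 * np.sum(x**2)
    return energy / steps

# Finite-difference estimate of d<E>/dF around the unperturbed forcing.
sensitivity = (mean_energy(F + dF) - mean_energy(F)) / dF
print(f"estimated d<E>/dF ~ {sensitivity:.2f}")
```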
Abstract:
We explore the potential for making statistical decadal predictions of sea surface temperatures (SSTs) in a perfect model analysis, with a focus on the Atlantic basin. Various statistical methods (lagged correlations, linear inverse modelling and constructed analogue) are found to have significant skill in predicting the internal variability of Atlantic SSTs for up to a decade ahead in control integrations of two different global climate models (GCMs), namely HadCM3 and HadGEM1. Statistical methods which consider non-local information tend to perform best, but which statistical method is most successful depends on the region considered, the GCM data used and the prediction lead time. However, the constructed analogue method tends to have the highest skill at longer lead times. Importantly, the regions of greatest prediction skill can be very different from the regions identified as potentially predictable from variance-explained arguments. This finding suggests that significant local decadal variability is not necessarily a prerequisite for skillful decadal predictions, and that the statistical methods are capturing some of the dynamics of low-frequency SST evolution. In particular, using data from HadGEM1, significant skill at lead times of 6–10 years is found in the tropical North Atlantic, a region with relatively little decadal variability compared to interannual variability. This skill appears to come from reconstructing the SSTs in the far North Atlantic, suggesting that the more northern latitudes are optimal for SST observations to improve predictions. We additionally explore whether adding sub-surface temperature data improves these decadal statistical predictions and find that, again, it depends on the region, the prediction lead time and the GCM data used. Overall, we argue that the estimated prediction skill motivates the further development of statistical decadal predictions of SSTs as a benchmark for current and future GCM-based decadal climate predictions.
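A minimal sketch of the constructed-analogue method mentioned above, using synthetic fields in place of model SST data, might look as follows:

```python
# Hedged sketch: the current SST anomaly field is expressed as a least-squares
# linear combination of historical "library" fields, and the same weights are
# applied to the fields observed a lead time later. Shapes and data are synthetic.
import numpy as np

def constructed_analogue(library, current, lead):
    """
    library : array (n_times, n_points) of historical SST anomaly fields
    current : array (n_points,) field to be predicted forward
    lead    : prediction lead in time steps
    """
    # Usable analogue states are those with a known evolution `lead` steps ahead.
    states = library[:-lead]           # (n_times - lead, n_points)
    futures = library[lead:]           # matching fields `lead` steps later
    # Least-squares weights such that weights @ states ~= current.
    weights, *_ = np.linalg.lstsq(states.T, current, rcond=None)
    return weights @ futures

# Example with synthetic data: 500 historical states on a 200-point grid.
rng = np.random.default_rng(1)
library = rng.standard_normal((500, 200))
current = rng.standard_normal(200)
forecast = constructed_analogue(library, current, lead=10)
print(forecast.shape)  # (200,)
```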
Abstract:
The occurrence of mid-latitude windstorms is associated with severe socio-economic impacts. For detailed and reliable regional impact studies, large datasets of high-resolution wind fields are required. In this study, a statistical downscaling approach in combination with dynamical downscaling is introduced to derive storm-related gust speeds on a high-resolution grid over Europe. Multiple linear regression models are trained using reanalysis data and wind gusts from regional climate model simulations for a sample of the 100 top-ranking windstorm events. The method is computationally inexpensive and reproduces individual windstorm footprints adequately. Compared to observations, the results for Germany are at least as good as those from pure dynamical downscaling. This new tool can easily be applied to large ensembles of general circulation model simulations and thus contribute to a better understanding of the regional impact of windstorms based on decadal and climate change projections.
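A minimal sketch of this kind of multiple-linear-regression downscaling, with synthetic data standing in for the reanalysis predictors and regional-model gusts, might look as follows:

```python
# Hedged sketch: for each high-resolution grid point, gust speeds from
# regional-model simulations of past storm events are regressed onto
# coarse-scale predictors; the fitted model then maps new coarse fields to
# high-resolution gust footprints. Shapes and variables are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression

n_events, n_predictors, n_hires_points = 100, 12, 5000

rng = np.random.default_rng(2)
# Coarse-scale predictors per storm event (e.g. pressure gradients, 850-hPa wind).
X_train = rng.standard_normal((n_events, n_predictors))
# High-resolution gust speeds per event from regional climate model runs.
y_train = rng.standard_normal((n_events, n_hires_points))

# One multivariate regression: each output column is an independent MLR,
# so every high-resolution grid point gets its own set of coefficients.
model = LinearRegression().fit(X_train, y_train)

# Downscale the coarse predictors of a new (e.g. GCM-simulated) storm.
x_new = rng.standard_normal((1, n_predictors))
gust_footprint = model.predict(x_new)   # shape (1, n_hires_points)
print(gust_footprint.shape)
```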