925 results for training model
Abstract:
This paper addressed the problem of water-demand forecasting for real-time operation of water supply systems. The present study was conducted to identify the best-fit model using hourly consumption data from the water supply system of Araraquara, São Paulo, Brazil. Artificial neural networks (ANNs) were used in view of their enhanced capability to match or even improve on regression model forecasts. The ANNs used were the multilayer perceptron with the back-propagation algorithm (MLP-BP), the dynamic neural network (DAN2), and two hybrid ANNs. The hybrid models used the error produced by the Fourier series forecasting as input to the MLP-BP and DAN2, called ANN-H and DAN2-H, respectively. The tested inputs for the neural networks were selected based on the literature and on correlation analysis. The results from the hybrid models were promising, with DAN2 performing better than the tested MLP-BP models. DAN2-H, identified as the best model, produced a mean absolute error (MAE) of 3.3 L/s and 2.8 L/s for the training and test sets, respectively, for the prediction of the next hour, which represented about 12% of the average consumption. The best forecasting model for the next 24 hours was again DAN2-H, which outperformed the other compared models and produced an MAE of 3.1 L/s and 3.0 L/s for the training and test sets, respectively, which represented about 12% of the average consumption. DOI: 10.1061/(ASCE)WR.1943-5452.0000177. (C) 2012 American Society of Civil Engineers.
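The hybrid scheme described in this abstract (the residual of a Fourier-series forecast fed as an additional input to a neural network, with accuracy reported as MAE) can be illustrated with a minimal sketch. Everything below is hypothetical stand-in material, not the authors' setup: the demand series is synthetic, the number of harmonics, lags and network size are arbitrary, and scikit-learn's MLPRegressor is used in place of the MLP-BP/DAN2 models described in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical hourly demand series (L/s); stands in for the real consumption data.
rng = np.random.default_rng(0)
t = np.arange(24 * 60)                      # 60 days of hourly samples
demand = 25 + 5 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 1.5, t.size)

# 1) Fit a daily-cycle Fourier series by least squares and keep its residuals.
K = 3                                        # assumed number of harmonics
X_fourier = np.column_stack(
    [f(2 * np.pi * k * t / 24) for k in range(1, K + 1) for f in (np.sin, np.cos)]
)
X_fourier = np.column_stack([np.ones_like(t, dtype=float), X_fourier])
coef, *_ = np.linalg.lstsq(X_fourier, demand, rcond=None)
residual = demand - X_fourier @ coef         # Fourier forecasting error

# 2) Use lagged demand plus the previous hour's Fourier error as ANN inputs
#    (the ANN-H / DAN2-H idea, here with a generic MLP).
lags = 24
X = np.column_stack(
    [demand[i : i + len(t) - lags] for i in range(lags)]
    + [residual[lags - 1 : -1]]
)
y = demand[lags:]
split = int(0.8 * len(y))
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])

# 3) Evaluate with the same metric used in the paper.
mae = np.mean(np.abs(model.predict(X[split:]) - y[split:]))
print(f"test MAE: {mae:.2f} L/s")
```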
Abstract:
The purpose of this study was to assess the effect of regulating the training load, using the countermovement jump (CMJ) at the beginning of each session, on the total plyometric training load and on vertical jump performance. Forty-four males were divided into 4 groups: No Regulation Group (nRG), Regulation Group (RG), Yoked Group (YG) and Control Group (CG). The nRG received 6 weeks of plyometric training with no adjustment in training load. The RG underwent the same training; however, the training load was adjusted according to CMJ performance at the beginning of each session. The adjustment made in the RG was replicated for the volunteers from the corresponding quartile in the YG, with no consideration given to the YG participant's condition at the beginning of their session. At the end of the training, the CMJ and squat jump (SJ) performance of all participants was reassessed. The total training load was significantly lower (p=0.036; ES=0.82) in the RG and the YG (1905 +/- 37 jumps) compared to the nRG (1926 +/- 0 jumps). The enhancement in vertical jump performance was significant for the groups that underwent the training (p<0.001). Using vertical jump performance at the beginning of the session as a tool to regulate the training load reduced the total training load without diminishing the long-term effects on vertical jump performance.
Abstract:
Objective. - The aim of this study was to identify the effects of strength training on plasma parameters, body composition and the liver of ovariectomized rats. Methods. - Sedentary (SHAM), ovariectomized (OVX), and ovariectomized trained (OVX-EXE) Wistar rats were used in this study; the OVX-EXE group performed strength training at 85% of one-repetition maximum (1 RM), three times per week, for 10 weeks. We monitored body weight, visceral (uterine, mesenteric and retroperitoneal) and subcutaneous adiposity, total cholesterol, triglycerides, HDL, blood glucose and liver morphology to identify the presence of macrovesicular steatosis (haematoxylin and eosin staining). Results. - We observed that strength training changed body weight (SHAM 293.0 +/- 14.5 g; OVX 342.6 +/- 10.8 g; OVX-EXE 317.7 +/- 11.9 g, P < 0.05), visceral and subcutaneous adiposity, and glucose (SHAM 111.2 +/- 10.0 mg/dL; OVX 147.4 +/- 18.8 mg/dL; OVX-EXE 118.5 +/- 2.2 mg/dL, P < 0.05), increased HDL (SHAM 82.7 +/- 1.4 mg/dL; OVX 64.6 +/- 2.8 mg/dL; OVX-EXE 91.4 +/- 2.6 mg/dL, P < 0.05) and reduced macrovesicular steatosis in liver tissue. Conclusions. - Considering the data obtained in this research, we emphasise the use of strength exercise training as a therapeutic means to combat or control the metabolic disturbances associated with menopause, including adiposity and adverse changes in blood glucose, blood HDL and macrovesicular steatosis. (C) 2011 Elsevier Masson SAS. All rights reserved.
Abstract:
Exercise training is a well-known coadjuvant in heart failure treatment; however, the molecular mechanisms underlying its beneficial effects remain elusive. Regardless of the primary cause, heart failure is often preceded by two distinct phenomena: mitochondrial dysfunction and disruption of cytosolic protein quality control. The objective of the study was to determine the contribution of exercise training to regulating cardiac mitochondrial metabolism and cytosolic protein quality control in a post-myocardial infarction-induced heart failure (MI-HF) animal model. Our data demonstrated that isolated cardiac mitochondria from MI-HF rats displayed decreased oxygen consumption, reduced maximum calcium uptake and elevated H2O2 release. These changes were accompanied by exacerbated cardiac oxidative stress and proteasomal insufficiency. This decline in proteasomal activity contributes to cardiac protein quality control disruption in our MI-HF model. Using cultured neonatal cardiomyocytes, we showed that either antimycin A or H2O2 resulted in inactivation of proteasomal peptidase activity, accumulation of oxidized proteins and cell death, recapitulating our in vivo model. Of interest, eight weeks of exercise training improved cardiac function, peak oxygen uptake and exercise tolerance in MI-HF rats. Moreover, exercise training restored mitochondrial oxygen consumption, increased Ca2+-induced permeability transition and reduced H2O2 release in MI-HF rats. These changes were followed by reduced oxidative stress and better cardiac protein quality control. Taken together, our findings uncover the potential contribution of mitochondrial dysfunction and cytosolic protein quality control disruption to heart failure and highlight the positive effects of exercise training in re-establishing cardiac mitochondrial physiology and protein quality control, reinforcing the importance of this intervention as a nonpharmacological tool for heart failure therapy.
Abstract:
We aimed to investigate the effects of creatine (Cr) supplementation on the plasma lipid profile in sedentary male subjects undergoing aerobic training. Methods: Subjects (n = 22) were randomly divided into two groups and were allocated to receive treatment with either creatine monohydrate (CR) (~20 g·day-1 for one week followed by ~10 g·day-1 for a further eleven weeks) or placebo (PL) (dextrose) in a double-blind fashion. All subjects undertook moderate-intensity aerobic training during three 40-minute sessions per week, over 3 months. High-density lipoprotein cholesterol (HDL), low-density lipoprotein cholesterol (LDL), very low-density lipoprotein cholesterol (VLDL), total cholesterol (TC), triglycerides (TAG), fasting insulin and fasting glycemia were analyzed in plasma. Thereafter, the homeostasis model assessment (HOMA) was calculated. Tests were performed at baseline (Pre) and after four (Post 4), eight (Post 8) and twelve (Post 12) weeks. Results: We observed main time effects in both groups for HDL (Post 4 versus Post 8; P = 0.01), and for TAG and VLDL (Pre versus Post 4 and Post 8; P = 0.02 and P = 0.01, respectively). However, no between-group differences were noted in HDL, LDL, TC, VLDL and TAG. Additionally, fasting insulin, fasting glycemia and HOMA did not change significantly. Conclusion: These findings suggest that Cr supplementation does not exert any additional effect on the improvement in the plasma lipid profile beyond that of aerobic training alone.
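For reference, the homeostasis model assessment of insulin resistance is commonly computed as HOMA-IR = [fasting insulin (µU/mL) × fasting glucose (mmol/L)] / 22.5; the abstract does not state which HOMA variant or constant was actually used in this study.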
Abstract:
Background: Spain has gone from a surplus to a shortage of medical doctors in very few years. Medium- and long-term planning for health professionals has become a high priority for health authorities. Methods: We created a supply and demand/need simulation model for 43 medical specialties using system dynamics. The model includes demographic, education and labour market variables. Several scenarios were defined. Variables controllable by health planners can be set as parameters to simulate different scenarios. The model calculates the supply and the deficit or surplus. Experts set the ratio of specialists needed per 1000 inhabitants using a Delphi method. Results: In the baseline scenario with moderate population growth, the deficit of medical specialists will grow from 2% at present (2800 specialists) to 14.3% in 2025 (almost 21 000). The specialties with the greatest medium-term shortages are Anesthesiology, Orthopedic and Traumatic Surgery, Pediatric Surgery, Plastic, Aesthetic and Reparatory Surgery, Family and Community Medicine, Pediatrics, Radiology, and Urology. Conclusions: The model suggests the need to increase the number of students admitted to medical school. Training itineraries should be redesigned to facilitate mobility among specialties. In the meantime, the short-term need for a more flexible supply is being met by the immigration of physicians from the new member states of the European Union and from Latin America.
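The supply-versus-need logic of such a system dynamics model reduces to a stock-and-flow simulation: a stock of specialists fed by residency graduates and drained by retirements, compared each year against a needed ratio per 1000 inhabitants. The sketch below illustrates only that structure; every number in it (graduation and retirement rates, population growth, the needed ratio, initial stocks) is a hypothetical placeholder, not a parameter from the study.

```python
# Minimal stock-and-flow sketch of a specialist supply vs. need projection.
# All parameter values are hypothetical placeholders.
specialists = 140_000              # current stock of medical specialists
population = 46_000_000            # current population
new_specialists_per_year = 6_000   # residency graduates entering the workforce
retirement_rate = 0.025            # fraction of specialists retiring per year
population_growth = 0.005          # "moderate population growth" scenario
needed_per_1000 = 3.2              # ratio set by experts (e.g., via a Delphi panel)

for year in range(2010, 2026):
    need = population / 1000 * needed_per_1000
    gap = (need - specialists) / need * 100   # deficit (+) or surplus (-), in %
    print(f"{year}: supply={specialists:,.0f}  need={need:,.0f}  gap={gap:+.1f}%")
    # Flows: inflow of new graduates, outflow of retirements; population grows.
    specialists += new_specialists_per_year - retirement_rate * specialists
    population *= 1 + population_growth
```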
Abstract:
CP-ESFR is a European collaborative integrated project on sodium-cooled fast reactors (SFRs) carried out under the EURATOM 7th Framework Programme, bringing together the contributions of twenty-five European partners. CP-ESFR aims to contribute to establishing a "solid scientific and technical basis for the sodium-cooled fast reactor, in order to accelerate practical developments for the safe management of long-lived radioactive waste, to enhance the safety performance, resource efficiency and cost-effectiveness of nuclear energy, and to ensure a robust and socially acceptable system of protection of the population and the environment against the effects of ionizing radiation." This thesis is a contribution to the development of models and methods, based on system thermal-hydraulic codes, for the safety analysis of liquid-metal-cooled Generation IV reactors. The activity was carried out within the FP-7 PELGRIMM project and in synergy with the MSE-ENEA Programme Agreement (PAR-2013). The FP7 PELGRIMM project aims at developing minor-actinide-bearing fuels by (1) studying two different fuel forms, pellet (the subject of this thesis) and sphere-pac, and (2) assessing their impact on the CP-ESFR reactor design. The thesis presents the development of a system thermal-hydraulic model of the primary and intermediate circuits of the reactor with the RELAP5-3D© code (INL, US). This code, qualified for the licensing of water-cooled nuclear reactors, was used to evaluate how the safety-relevant core parameters (e.g., cladding and fuel centreline temperatures, coolant temperature, etc.) change when the fuel is used to "burn" minor actinides (long-lived radioactive isotopes contained in nuclear waste). This required a training phase on the code, its models and its capabilities, followed by the development of the nodalization of the CP-ESFR plant, its qualification, and the analysis of the results obtained for different core configurations, burn-ups and fuel types (i.e., different minor-actinide enrichments). The text is divided into six sections. The first provides an introduction to the technological development of fast reactors, outlines the context in which this thesis was carried out, and defines its objectives and structure. The second section describes the CP-ESFR plant, with attention to the core configuration and the primary system. The third section introduces the system thermal-hydraulic code used for the analyses and the model developed to reproduce the plant. Section four describes the tests and verifications carried out to assess the performance of the model, the qualification of the nodalization, the main models and correlations most relevant to the simulation, and the core configurations considered for the analysis of the results. The results obtained for the core safety parameters under normal operating conditions and for a selected transient are described in the fifth section. Finally, the conclusions of the work are reported.
Abstract:
Seventeen bones (sixteen cadaveric bones and one plastic bone) were used to validate a method for reconstructing a surface model of the proximal femur from 2D X-ray radiographs and a statistical shape model that was constructed from thirty training surface models. Unlike previously introduced validation studies, where surface-based distance errors were used to evaluate the reconstruction accuracy, here we propose to use errors measured on clinically relevant morphometric parameters. For this purpose, a program was developed to robustly extract those morphometric parameters from the thirty training surface models (the training population), from the seventeen surface models reconstructed from X-ray radiographs, and from the seventeen ground-truth surface models obtained either by a CT-scan reconstruction method or by a laser-scan reconstruction method. A statistical analysis was then performed to classify the seventeen test bones into two categories: normal cases and outliers. This classification step depends on the measured parameters of the particular test bone: if all parameters of a test bone were covered by the training population's parameter ranges, the bone was classified as a normal bone; otherwise, as an outlier bone. Our experimental results showed that statistically there was no significant difference between the morphometric parameters extracted from the reconstructed surface models of the normal cases and those extracted from the reconstructed surface models of the outliers. Therefore, our statistical shape model based reconstruction technique can be used to reconstruct not only the surface model of a normal bone but also that of an outlier bone.
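The normal/outlier classification described above amounts to a range check: a test bone is "normal" if every morphometric parameter lies within the range spanned by the training population, and an "outlier" otherwise. The sketch below illustrates that check only; the parameter names and values are made up for illustration and are not the study's data.

```python
import numpy as np

# Rows = training surface models, columns = morphometric parameters
# (e.g., neck-shaft angle, femoral head radius, neck length) -- illustrative only.
training_params = np.array([
    [125.0, 23.5, 48.0],
    [131.0, 26.1, 52.5],
    [128.4, 24.8, 50.2],
    # ... remaining training bones ...
])
lo, hi = training_params.min(axis=0), training_params.max(axis=0)

def classify(test_params):
    """Return 'normal' if every parameter lies inside the training ranges."""
    inside = (test_params >= lo) & (test_params <= hi)
    return "normal" if inside.all() else "outlier"

print(classify(np.array([127.0, 25.0, 49.5])))   # -> normal
print(classify(np.array([140.0, 25.0, 49.5])))   # -> outlier (first parameter out of range)
```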
Abstract:
Training can change the functional and structural organization of the brain, and animal models demonstrate that the hippocampal formation is particularly susceptible to training-related neuroplasticity. In humans, however, direct evidence for functional plasticity of the adult hippocampus induced by training is still missing. Here, we used musicians' brains as a model to test for plastic capabilities of the adult human hippocampus. By using functional magnetic resonance imaging optimized for the investigation of auditory processing, we examined brain responses induced by temporal novelty in otherwise isochronous sound patterns in musicians and musical laypersons, since the hippocampus has previously been suggested to be crucially involved in various forms of novelty detection. In the first, cross-sectional experiment, we identified enhanced neural responses to temporal novelty in the anterior left hippocampus of professional musicians, pointing to expertise-related differences in hippocampal processing. In the second experiment, we evaluated neural responses to acoustic temporal novelty in a longitudinal approach to disentangle training-related changes from predispositional factors. For this purpose, we examined an independent sample of music academy students before and after two semesters of intensive aural skills training. After this training period, hippocampal responses to temporal novelty in sounds were enhanced in the music students, and statistical interaction analysis of brain activity changes over time suggests training rather than predisposition effects. Thus, our results provide direct evidence for functional changes of the adult hippocampus in humans related to musical training.
Abstract:
Statistical models have recently been introduced in computational orthopaedics to investigate bone mechanical properties across several populations. A fundamental aspect of the construction of statistical models concerns the establishment of accurate anatomical correspondences among the objects of the training dataset. Various methods have been proposed to solve this problem, such as mesh morphing or image registration algorithms. The objective of this study is to compare mesh-based and image-based statistical appearance model approaches for the creation of finite element (FE) meshes. A computed tomography (CT) dataset of 157 human left femurs was used for the comparison. For each approach, 30 finite element meshes were generated with the models. The quality of the obtained FE meshes was evaluated in terms of volume, size and shape of the elements. Results showed that the quality of the meshes obtained with the image-based approach was higher than that obtained with the mesh-based approach. Future studies are required to evaluate the impact of this finding on the final mechanical simulations.
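Element quality of the kind compared here ("volume, size and shape of the elements") is typically quantified with per-element metrics. The abstract does not state which metrics were used, so the sketch below simply shows two plausible ones for a tetrahedral element: its volume and a crude edge-based aspect ratio.

```python
import numpy as np
from itertools import combinations

def tet_volume(p0, p1, p2, p3):
    """Signed volume of a tetrahedron from its four vertices."""
    return np.linalg.det(np.column_stack([p1 - p0, p2 - p0, p3 - p0])) / 6.0

def tet_aspect_ratio(*vertices):
    """Longest edge divided by shortest edge -- a simple shape-quality measure."""
    edges = [np.linalg.norm(a - b) for a, b in combinations(vertices, 2)]
    return max(edges) / min(edges)

# Example element (unit right tetrahedron); coordinates are illustrative only.
p = [np.array(v, dtype=float) for v in [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]]
print(f"volume = {abs(tet_volume(*p)):.4f}, aspect ratio = {tet_aspect_ratio(*p):.2f}")
```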
Abstract:
This is the first part of a study investigating a model-based transient calibration process for diesel engines. The motivation is to populate hundreds of calibratable parameters in a methodical and optimal manner by using model-based optimization in conjunction with the manual process so that, relative to the manual process used by itself, a significant improvement in transient emissions and fuel consumption and a sizable reduction in calibration time and test cell requirements are achieved. Empirical transient modelling and optimization are addressed in the second part of this work, while the data required for model training and generalization are the focus of the current work. Transient and steady-state data from a turbocharged multicylinder diesel engine have been examined from a model training perspective. A single-cylinder engine with external air handling has been used to expand the steady-state data to encompass the transient parameter space. Based on comparative model performance and differences in the non-parametric space, primarily driven by a high difference between exhaust and intake manifold pressures (engine ΔP) during transients, it has been recommended that transient emission models should be trained with transient training data. It has been shown that electronic control module (ECM) estimates of transient charge flow and the exhaust gas recirculation (EGR) fraction cannot be accurate at the high engine ΔP frequently encountered during transient operation, and that such estimates do not account for cylinder-to-cylinder variation. The effects of high engine ΔP must therefore be incorporated empirically by using transient data generated from a spectrum of transient calibrations. Specific recommendations have been made on how to choose such calibrations, how much data to acquire, and how to specify transient segments for data acquisition. Methods to process transient data to account for transport delays and sensor lags have been developed. The processed data have then been visualized using statistical means to understand transient emission formation. Two modes of transient opacity formation have been observed and described. The first mode is driven by high engine ΔP and low fresh air flowrates, while the second mode is driven by high engine ΔP and high EGR flowrates. The EGR fraction is inaccurately estimated in both modes, while uneven EGR distribution has been shown to be present but unaccounted for by the ECM. The two modes and associated phenomena are essential to understanding why transient emission models are calibration dependent and, furthermore, how to choose training data that will result in good model generalization.
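One common way to compensate for transport delays and sensor lags of the kind mentioned above, before pairing emissions signals with engine commands for model training, is to estimate the lag by cross-correlation and shift the signal back accordingly. The sketch below does this for synthetic signals; it is an illustration of the general idea, not the processing method developed by the authors.

```python
import numpy as np

def estimate_lag(reference, delayed, max_lag):
    """Return the lag (in samples) that best aligns `delayed` with `reference`."""
    lags = range(-max_lag, max_lag + 1)
    scores = [np.corrcoef(reference[max_lag:-max_lag],
                          np.roll(delayed, -k)[max_lag:-max_lag])[0, 1] for k in lags]
    return list(lags)[int(np.argmax(scores))]

# Synthetic example: an opacity-like signal lagging a fuel command by 7 samples.
rng = np.random.default_rng(1)
t = np.linspace(0, 20, 2000)
command = np.clip(np.sin(0.7 * t), 0, None)
opacity = np.roll(command, 7) + rng.normal(0, 0.02, t.size)

lag = estimate_lag(command, opacity, max_lag=50)
aligned_opacity = np.roll(opacity, -lag)   # shift back before pairing with model inputs
print(f"estimated lag: {lag} samples")
```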
Abstract:
Model-based calibration of steady-state engine operation is commonly performed with highly parameterized empirical models that are accurate but not very robust, particularly when predicting highly nonlinear responses such as diesel smoke emissions. To address this problem, and to boost the accuracy of more robust non-parametric methods to the same level, GT-Power was used to transform the empirical model input space into multiple input spaces that simplified the input-output relationship and improved the accuracy and robustness of smoke predictions made by three commonly used empirical modeling methods: Multivariate Regression, Neural Networks and the k-Nearest Neighbor method. The availability of multiple input spaces allowed the development of two committee techniques: a 'Simple Committee' technique that used averaged predictions from a set of 10 input spaces pre-selected using the training data, and a 'Minimum Variance Committee' technique in which the input space for each prediction was chosen on the basis of disagreement between the three modeling methods. The latter technique equalized the performance of the three modeling methods. The successively increasing improvements resulting from the use of a single best transformed input space ('Best Combination' technique), the Simple Committee technique and the Minimum Variance Committee technique were verified with hypothesis testing. The transformed input spaces were also shown to improve outlier detection and to improve k-Nearest Neighbor performance when predicting dynamic emissions with steady-state training data. An unexpected finding was that the benefits of input space transformation were unaffected by changes in the hardware or in the calibration of the underlying GT-Power model.
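The two committee schemes described above can be written down compactly: the Simple Committee averages one model's predictions over a fixed set of transformed input spaces, while the Minimum Variance Committee picks, for each prediction, the input space on which the three modelling methods disagree least. The sketch below assumes the models and transformed input spaces are already available as Python callables; it illustrates the selection logic only and is not the authors' implementation.

```python
import numpy as np

def simple_committee(model, input_spaces, x):
    """Average one model's predictions over a pre-selected set of input spaces."""
    return np.mean([model(space(x)) for space in input_spaces])

def minimum_variance_committee(models, input_spaces, x):
    """For each prediction, use the input space where the models agree most."""
    best_space = min(
        input_spaces,
        key=lambda space: np.var([m(space(x)) for m in models]),
    )
    return np.mean([m(best_space(x)) for m in models])

# Toy stand-ins: three 'modelling methods' and two 'transformed input spaces'.
models = [lambda z: 2.0 * z + 1.0, lambda z: 2.1 * z + 0.8, lambda z: 1.9 * z + 1.2]
input_spaces = [lambda x: x, lambda x: np.log1p(x)]

x = 3.0
print(simple_committee(models[0], input_spaces, x))
print(minimum_variance_committee(models, input_spaces, x))
```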
Abstract:
This study investigates whether a 6-wk intermittent hypoxia training (IHT) program, designed to avoid reductions in training loads and intensities, improves the endurance performance capacity of competitive distance runners. Eighteen athletes were randomly assigned to train in normoxia [Nor group; n = 9; maximal oxygen uptake (VO2 max) = 61.5 +/- 1.1 ml x kg(-1) x min(-1)] or intermittently in hypoxia [Hyp group; n = 9; VO2 max = 64.2 +/- 1.2 ml x kg(-1) x min(-1)]. Into their usual normoxic training schedule, athletes included two weekly high-intensity (second ventilatory threshold), moderate-duration (24-40 min) training sessions, performed either in normoxia [inspired O2 fraction (FiO2) = 20.9%] or in normobaric hypoxia (FiO2 = 14.5%). Before and after training, all athletes performed 1) normoxic and hypoxic incremental tests to determine VO2 max and the ventilatory thresholds (first and second ventilatory thresholds), and 2) an all-out test at the pretraining minimal velocity eliciting VO2 max to determine their time to exhaustion (T(lim)) and the parameters of O2 uptake (VO2) kinetics. Only the Hyp group significantly improved VO2 max (+5% at both FiO2, P < 0.05), without changes in blood O2-carrying capacity. Moreover, T(lim) lengthened in the Hyp group only (+35%, P < 0.001), without significant modifications of VO2 kinetics. Despite a similar training load, the Nor group displayed no such improvements, with unchanged VO2 max (+1%, nonsignificant), T(lim) (+10%, nonsignificant), and VO2 kinetics. In addition, the T(lim) improvements in the Hyp group were not correlated with concomitant modifications of other parameters, including VO2 max or VO2 kinetics. The present IHT model, involving specific high-intensity, moderate-duration hypoxic sessions, may potentiate the metabolic stimuli of training in already trained athletes and elicit peripheral muscle adaptations, resulting in increased endurance performance capacity.
Abstract:
Submicroscopic changes in chromosomal DNA copy number dosage are common and have been implicated in many heritable diseases and cancers. Recent high-throughput technologies have a resolution that permits the detection of segmental changes in DNA copy number that span thousands of base pairs across the genome. Genome-wide association studies (GWAS) may simultaneously screen for copy number-phenotype and SNP-phenotype associations as part of the analytic strategy. However, genome-wide array analyses are particularly susceptible to batch effects, as the logistics of preparing DNA and processing thousands of arrays often involve multiple laboratories and technicians, or changes over calendar time to the reagents and laboratory equipment. Failure to adjust for batch effects can lead to incorrect inference and requires inefficient post-hoc quality control procedures that exclude regions that are associated with batch. Our work extends previous model-based approaches for copy number estimation by explicitly modeling batch effects and using shrinkage to improve locus-specific estimates of copy number uncertainty. Key features of this approach include the use of diallelic genotype calls from experimental data to estimate batch- and locus-specific parameters of background and signal without the requirement of training data. We illustrate these ideas using a study of bipolar disease and a study of chromosome 21 trisomy. The former has batch effects that dominate much of the observed variation in quantile-normalized intensities, while the latter illustrates the robustness of our approach to datasets where as many as 25% of the samples have altered copy number. Locus-specific estimates of copy number can be plotted on the copy-number scale to investigate mosaicism and guide the choice of appropriate downstream approaches for smoothing the copy number as a function of physical position. The software is open source and implemented in the R package CRLMM available at Bioconductor (http://www.bioconductor.org).
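As a much-simplified illustration of why batch matters for copy number estimation (and not a reimplementation of the CRLMM approach), the toy sketch below estimates, for a single locus, a batch-specific intensity baseline from samples presumed diploid and rescales each sample's intensity by its own batch's baseline before converting it to a copy number estimate. All data and variable names are invented.

```python
import numpy as np
import pandas as pd

# Toy data for one locus: raw intensity, processing batch, and true copy number;
# 'cn == 2' is used here as a stand-in for 'called diploid from genotype data'.
rng = np.random.default_rng(2)
batches = np.repeat(["plate_A", "plate_B"], 50)
batch_shift = np.where(batches == "plate_A", 1.0, 1.3)   # simulated batch effect
true_cn = np.r_[np.full(95, 2), np.full(5, 3)]           # 5 samples carry a gain
intensity = batch_shift * true_cn / 2 + rng.normal(0, 0.03, 100)

df = pd.DataFrame({"batch": batches, "intensity": intensity, "cn": true_cn})

# Batch-specific baseline: median intensity of the presumed-diploid samples.
baseline = df[df.cn == 2].groupby("batch")["intensity"].median()
df["cn_estimate"] = 2 * df.intensity / df.batch.map(baseline)

print(df.groupby("cn")["cn_estimate"].mean().round(2))   # ~2.0 and ~3.0
```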
Abstract:
BACKGROUND: Faculties face the permanent challenge of designing training programs with well-balanced educational outcomes and of offering various organised and individual learning opportunities. AIM: To apply our original model to a postgraduate training program in rheumatology in general, and to various learning experiences in particular, in order to analyse the balance between different educational objectives. METHODS: Learning times for the various educational activities were reported by the junior staff as the targeted learners. The suitability of different learning experiences for achieving cognitive, affective and psychomotor learning objectives was estimated. Learning points with respect to efficacy were calculated by multiplying the estimated learning times by the perceived appropriateness of the educational strategies. RESULTS: Of 780 hours of professional learning per year (17.7 hours/week), 37.7% of the time was spent under individual supervision of senior staff, 24.4% in organised structured learning, 22.6% in self-study, and 15.3% in organised patient-oriented learning. The balance between the different types of learning objectives was appropriate for the overall program, but not for each particular learning experience. Acquisition of factual knowledge and problem solving was readily aimed for during organised teaching sessions of different formats and through personal targeted reading. Attitudes, skills and competencies, as well as behavioural and performance changes, were mostly learned while caring for patients under interactive supervision by experts. CONCLUSION: We encourage other faculties to apply this approach to any other curriculum of undergraduate education, postgraduate training or continuing professional development in order to foster the development of well-balanced learning experiences.
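The learning-points measure described above is a simple product of estimated learning time and perceived appropriateness. The sketch below works through that arithmetic; the hours are derived from the percentages reported in the abstract (of 780 h/year), while the appropriateness scores are invented for illustration.

```python
# Learning points = estimated learning time x perceived appropriateness.
# Hours follow the reported shares of 780 h/year; appropriateness scores are hypothetical.
activities = {
    # activity: (hours per year, appropriateness for, say, psychomotor objectives)
    "individual supervision by senior staff": (294, 0.9),
    "organised structured learning": (190, 0.4),
    "self-study": (176, 0.2),
    "organised patient-oriented learning": (119, 0.8),
}

for name, (hours, appropriateness) in activities.items():
    print(f"{name}: {hours * appropriateness:.0f} learning points")
```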