41 results for Training Models
at Université de Lausanne, Switzerland
Abstract:
The classical approach to predicting the geographical extent of species invasions consists of training models in the native range and projecting them in distinct, potentially invasible areas. However, recent studies have demonstrated that this approach can be hampered by a change in the realized climatic niche, allowing invasive species to spread into habitats in the invaded ranges that are climatically distinct from those occupied in the native range. We propose an alternative approach that involves fitting models with pooled data from all ranges. We show that this pooled approach improves prediction of the extent of invasion of spotted knapweed (Centaurea maculosa) in North America over models based solely on the European native range. Furthermore, it performs as well as models based on the invaded range, while ensuring the inclusion of areas with climate similar to the European niche, where the species is likely to spread further. We then compare projections from these models for 2080 under a severe climate-warming scenario. Projections from the pooled models show fewer areas of intermediate climatic suitability than projections from the native- or invaded-range models, suggesting a better consensus among modelling techniques and reduced uncertainty.
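The pooled-range idea can be sketched in a few lines. The climate values below are hypothetical, and the rectilinear "envelope" is a deliberately minimal stand-in for the niche models actually used in the study; it only illustrates why pooling native and invaded occurrences widens the predicted suitable area.

```python
# Minimal sketch of a pooled-range climate-envelope model.
# Occurrence values (a single hypothetical climate variable,
# e.g. mean annual temperature) are invented for illustration.

def envelope(values):
    """Fit a rectilinear envelope: the min/max of the observed variable."""
    return min(values), max(values)

def suitable(model, value):
    """A site is suitable if it falls inside the fitted envelope."""
    lo, hi = model
    return lo <= value <= hi

native = [4.0, 5.5, 6.0, 7.2]    # native-range occurrences
invaded = [6.5, 8.0, 9.1]        # invaded-range occurrences at warmer sites

native_model = envelope(native)
pooled_model = envelope(native + invaded)

# A site at 8.5 degrees lies outside the native-range envelope but inside
# the pooled one, mirroring the niche shift described in the abstract.
print(suitable(native_model, 8.5))  # False
print(suitable(pooled_model, 8.5))  # True
```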
Abstract:
Introduction: As part of the MicroArray Quality Control (MAQC)-II project, this analysis examines how the choice of univariate feature-selection methods and classification algorithms may influence the performance of genomic predictors under varying degrees of prediction difficulty represented by three clinically relevant endpoints. Methods: We used gene-expression data from 230 breast cancers (grouped into training and independent validation sets), and we examined 40 predictors (five univariate feature-selection methods combined with eight different classifiers) for each of the three endpoints. Their classification performance was estimated on the training set by using two different resampling methods and compared with the accuracy observed in the independent validation set. Results: A ranking of the three classification problems was obtained, and the performance of 120 models was estimated and assessed on an independent validation set. The bootstrapping estimates were closer to the validation performance than were the cross-validation estimates. The required sample size for each endpoint was estimated, and both gene-level and pathway-level analyses were performed on the obtained models. Conclusions: We showed that genomic predictor accuracy is determined largely by an interplay between sample size and classification difficulty. Variations in univariate feature-selection methods and choice of classification algorithm have only a modest impact on predictor performance, and several statistically equally good predictors can be developed for any given classification problem.
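The contrast between resampling estimators can be sketched with a toy 1-D nearest-centroid classifier; the data, classifier, and resampling counts below are illustrative stand-ins, not the MAQC-II pipeline itself.

```python
import random

# Sketch: a cross-validation estimate vs. an out-of-bag bootstrap
# estimate for a toy 1-D nearest-centroid classifier on invented data.

def fit(xs, ys):
    """Return one centroid per class."""
    return {c: sum(x for x, y in zip(xs, ys) if y == c) /
               sum(1 for y in ys if y == c)
            for c in set(ys)}

def accuracy(model, xs, ys):
    preds = [min(model, key=lambda c: abs(model[c] - x)) for x in xs]
    return sum(p == y for p, y in zip(preds, ys)) / len(ys)

xs = [0.1, 0.3, 0.2, 0.9, 1.1, 1.0, 0.15, 0.95]
ys = [0, 0, 0, 1, 1, 1, 0, 1]

# 2-fold cross-validation estimate
half = len(xs) // 2
cv = (accuracy(fit(xs[:half], ys[:half]), xs[half:], ys[half:]) +
      accuracy(fit(xs[half:], ys[half:]), xs[:half], ys[:half])) / 2

# Out-of-bag bootstrap estimate: train on a resample drawn with
# replacement, test on the samples the resample missed.
rng = random.Random(0)
oob_accs = []
for _ in range(200):
    idx = [rng.randrange(len(xs)) for _ in xs]
    oob = [i for i in range(len(xs)) if i not in idx]
    if oob:
        model = fit([xs[i] for i in idx], [ys[i] for i in idx])
        oob_accs.append(accuracy(model, [xs[i] for i in oob],
                                 [ys[i] for i in oob]))
boot = sum(oob_accs) / len(oob_accs)
```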
Abstract:
Predictive species distribution modelling (SDM) has become an essential tool in biodiversity conservation and management. The choice of grain size (resolution) of environmental layers used in modelling is one important factor that may affect predictions. We applied 10 distinct modelling techniques to presence-only data for 50 species in five different regions, to test whether: (1) a 10-fold coarsening of resolution affects predictive performance of SDMs, and (2) any observed effects are dependent on the type of region, modelling technique, or species considered. Results show that a 10-fold change in grain size does not severely affect predictions from species distribution models. The overall trend is towards degradation of model performance, but improvement can also be observed. Changing grain size does not equally affect models across regions, techniques, and species types. The strongest effect is on regions and species types, with tree species in the data sets (regions) with highest locational accuracy being most affected. Changing grain size had little influence on the ranking of techniques: boosted regression trees remain best at both resolutions. The number of occurrences used for model training had an important effect, with larger sample sizes resulting in better models, which tended to be more sensitive to grain. The effect of grain change was only noticeable for models reaching sufficient performance and/or with initial data whose intrinsic error is smaller than the coarser grain size.
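A 10-fold coarsening of an environmental layer amounts to block aggregation; the sketch below shows the operation on a synthetic 10x10 grid (real layers would come from raster files, which this example does not attempt to read).

```python
# Sketch: coarsen an environmental layer by averaging factor x factor
# blocks of cells. The 10x10 grid of values is synthetic.

def coarsen(grid, factor):
    """Aggregate a 2-D grid (list of lists) by block-averaging."""
    out = []
    for i in range(0, len(grid), factor):
        row = []
        for j in range(0, len(grid[0]), factor):
            block = [grid[a][b]
                     for a in range(i, i + factor)
                     for b in range(j, j + factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

fine = [[float(10 * r + c) for c in range(10)] for r in range(10)]
coarse = coarsen(fine, 10)   # 10-fold coarsening collapses it to one cell
print(coarse)                # [[49.5]]
```

Block averaging is only one possible aggregation rule; majority or nearest-neighbour resampling would give different coarse layers and potentially different model behaviour.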
Abstract:
PURPOSE: Not in Education, Employment, or Training (NEET) youth are youth disengaged from major social institutions and are a serious concern. However, little is known about this subgroup of vulnerable youth. This study aimed to examine whether NEET youth differ from their contemporaries in terms of personality, mental health, and substance use, and to provide a longitudinal examination of NEET status, testing its stability and its prospective pathways with mental health and substance use. METHODS: As part of the Cohort Study on Substance Use Risk Factors, 4,758 young Swiss men in their early 20s answered questions concerning their current professional and educational status, personality, substance use, and symptomatology related to mental health. Descriptive statistics, generalized linear models for cross-sectional comparisons, and cross-lagged panel models for longitudinal associations were computed. RESULTS: NEET youth accounted for 6.1% of the sample at baseline and 7.4% at follow-up, with 1.4% being NEET at both time points. Comparisons between NEET and non-NEET youth showed significant differences in substance use and depressive symptoms only. Longitudinal associations showed that previous mental health, cannabis use, and daily smoking increased the likelihood of being NEET. Reverse causal paths were nonsignificant. CONCLUSIONS: NEET status appeared rare and transient among young Swiss men, associated with differences in mental health and substance use but not in personality. Causal paths presented NEET status as a consequence of mental health and substance use rather than a cause. Additionally, this study confirmed that cannabis use and daily smoking are public health problems. Prevention programs need to focus on these vulnerable youth to prevent their disengagement.
Abstract:
There is enormous interest in designing training methods for reducing cognitive decline in healthy older adults. Because it is impaired with aging, multitasking has often been targeted and has been shown to be malleable with appropriate training. Investigating the effects of cognitive training on functional brain activation can provide critical indications of the mechanisms that underlie those positive effects, as well as provide models for selecting appropriate training methods. The few studies that have looked at brain correlates of cognitive training indicate a variable pattern and location of brain changes - a result that might relate to differences in training formats. The goal of this study was to measure the neural substrates as a function of whether divided attentional training programs induced the use of alternative processes or relied on repeated practice. Forty-eight older adults were randomly allocated to one of three training programs. In the SINGLE REPEATED training, participants practiced an alphanumeric equation and a visual detection task, each under focused attention. In the DIVIDED FIXED training, participants practiced combining verification and detection under divided attention, with equal attention allocated to both tasks. In the DIVIDED VARIABLE training, participants completed the tasks under divided attention, but were taught to vary the attentional priority allocated to each task. Brain activation was measured with fMRI pre- and post-training while completing each task individually and the two tasks combined. The three training programs resulted in markedly different brain changes. Practice on individual tasks in the SINGLE REPEATED training resulted in reduced brain activation, whereas DIVIDED VARIABLE training resulted in a larger recruitment of the right superior and middle frontal gyrus, a region implicated in multitasking. The type of training is a critical factor in determining the pattern of brain activation.
Abstract:
The purpose of this study was to test the hypothesis that athletes with slower oxygen uptake (VO2) kinetics would benefit more, in terms of time spent near VO2max, from an increase in the intensity of intermittent running training (IT). After determination of VO2max, vVO2max (i.e. the minimal velocity associated with VO2max in an incremental test) and the time to exhaustion sustained at vVO2max (Tlim), seven well-trained triathletes performed two IT sessions in random order. The two IT sessions comprised 30-s work intervals at either 100% (IT100%) or 105% (IT105%) of vVO2max, with 30-s recovery intervals at 50% of vVO2max between repeats. The parameters of the VO2 kinetics (td1, tau1, A1, td2, tau2, A2, i.e. time delay, time constant and amplitude of the primary phase and of the slow component, respectively) during the Tlim test were modelled with two exponential functions. The highest VO2 reached was significantly lower (P<0.01) in IT100%, run at 19.8 (0.9) km·h⁻¹ [66.2 (4.6) ml·min⁻¹·kg⁻¹], than in IT105%, run at 20.8 (1.0) km·h⁻¹ [71.1 (4.9) ml·min⁻¹·kg⁻¹], or in the incremental test [71.2 (4.2) ml·min⁻¹·kg⁻¹]. The time sustained above 90% of VO2max in IT105% [338 (149) s] was significantly higher (P<0.05) than in IT100% [168 (131) s]. The average Tlim was 244 (39) s, tau1 was 15.8 (5.9) s and td2 was 96 (13) s. tau1 was correlated with the difference in time spent above 90% of VO2max between IT105% and IT100% (r=0.91; P<0.01). In conclusion, athletes with slower VO2 kinetics in a vVO2max constant-velocity test benefited more from the 5% rise in IT work intensity, exercising for longer above 90% of VO2max when the IT intensity was increased from 100% to 105% of vVO2max.
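The two-exponential response described above can be written out directly. The baseline and amplitude values below are illustrative placeholders; only tau1 = 15.8 s and td2 = 96 s echo the group means reported in the abstract.

```python
import math

# Sketch: two-exponential VO2 response (primary phase + slow component),
# as used to model the Tlim test. Baseline and amplitudes are invented.

def vo2(t, base=10.0, td1=10.0, tau1=15.8, a1=45.0,
        td2=96.0, tau2=120.0, a2=8.0):
    """VO2 (ml/min/kg) at time t (s).

    Each exponential term switches on after its own time delay and
    rises toward its amplitude with its own time constant.
    """
    v = base
    if t > td1:
        v += a1 * (1 - math.exp(-(t - td1) / tau1))   # primary phase
    if t > td2:
        v += a2 * (1 - math.exp(-(t - td2) / tau2))   # slow component
    return v
```

A slower primary phase (larger tau1) means the modelled VO2 takes longer to approach VO2max, which is why tau1 correlated with the extra time gained above 90% of VO2max at the higher intensity.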
Abstract:
Trans-apical aortic valve replacement (AVR) is a new and rapidly growing therapy. However, there are only a few training opportunities. The objective of our work is to build an appropriate artificial model of the heart that can replace the use of animals for surgical training in trans-apical AVR procedures. To reduce the necessity for fluoroscopy, we pursued the goal of building a translucent model of the heart with lifelike dimensions. A simplified 3D model of a human heart with its aortic root was created in silico using the SolidWorks Computer-Aided Design (CAD) program. This heart model was printed using a rapid prototyping system developed by the Fab@Home project and dip-coated twice with dispersion silicone. The translucency of the heart model allows the deployment area of the valved stent to be seen without heavy imaging support. The final model was then placed in a human manikin for surgical training on the trans-apical AVR procedure. Trans-apical AVR with all the necessary steps (puncture, wiring, catheterization, ballooning etc.) can be performed repeatedly in this setting.
Abstract:
Gene expression data from microarrays are being applied to predict preclinical and clinical endpoints, but the reliability of these predictions has not been established. In the MAQC-II project, 36 independent teams analyzed six microarray data sets to generate predictive models for classifying a sample with respect to one of 13 endpoints indicative of lung or liver toxicity in rodents, or of breast cancer, multiple myeloma or neuroblastoma in humans. In total, >30,000 models were built using many combinations of analytical methods. The teams generated predictive models without knowing the biological meaning of some of the endpoints and, to mimic clinical reality, tested the models on data that had not been used for training. We found that model performance depended largely on the endpoint and team proficiency and that different approaches generated models of similar performance. The conclusions and recommendations from MAQC-II should be useful for regulatory agencies, study committees and independent investigators that evaluate methods for global gene expression analysis.
Abstract:
BACKGROUND: Physician training in smoking cessation counseling has been shown to be effective as a means to increase quit success. We assessed the cost-effectiveness ratio of a smoking cessation counseling training programme. Its effectiveness was previously demonstrated in a cluster-randomized controlled trial performed in two Swiss university outpatient clinics, in which residents were randomized to receive training in smoking interventions or a control educational intervention. DESIGN AND METHODS: We used a Markov simulation model for effectiveness analysis. This model incorporates the intervention efficacy, the natural quit rate, and the lifetime probability of relapse after 1-year abstinence. We used previously published results in addition to hospital service and outpatient clinic cost data. The time horizon was 1 year, and we opted for a third-party payer perspective. RESULTS: The incremental cost of the intervention amounted to US$2.58 per consultation by a smoker, translating into a cost per life-year saved of US$25.4 for men and US$35.2 for women. One-way sensitivity analyses yielded a range of US$4.0-107.1 in men and US$9.7-148.6 in women. Variations in the quit rate of the control intervention, the length of training effectiveness, and the discount rate yielded moderately large effects on the outcome. Variations in the natural cessation rate, the lifetime probability of relapse, the cost of physician training, the counseling time, the cost per hour of physician time, and the cost of the booklets had little effect on the cost-effectiveness ratio. CONCLUSIONS: Training residents in smoking cessation counseling is a very cost-effective intervention and may be more efficient than currently accepted tobacco control interventions.
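The structure of such an analysis can be sketched with a two-state Markov cycle and an incremental cost-effectiveness ratio. All probabilities, cycle counts, and costs below are made-up placeholders, not the values estimated in the study.

```python
# Sketch: a minimal two-state (smoker / abstinent) Markov model plus an
# incremental cost-effectiveness ratio. All inputs are illustrative.

def run_markov(p_quit, p_relapse, cycles):
    """Evolve the cohort distribution over a number of monthly cycles."""
    smoker, quit = 1.0, 0.0
    for _ in range(cycles):
        smoker, quit = (smoker * (1 - p_quit) + quit * p_relapse,
                        smoker * p_quit + quit * (1 - p_relapse))
    return quit   # fraction abstinent at the end of the time horizon

def icer(delta_cost, delta_effect):
    """Incremental cost per unit of incremental effect (e.g. life-year)."""
    return delta_cost / delta_effect

# Hypothetical monthly quit probabilities with and without the training.
quit_control = run_markov(p_quit=0.03, p_relapse=0.10, cycles=12)
quit_trained = run_markov(p_quit=0.06, p_relapse=0.10, cycles=12)
extra_quitters = quit_trained - quit_control
```

In the real model the extra quitters would then be converted into discounted life-years gained, and the US$2.58 incremental cost per consultation divided by that gain gives the cost per life-year saved.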
Abstract:
Continuous field mapping has to address two conflicting remote sensing requirements when collecting training data. On one hand, continuous field mapping trains fractional land cover and thus favours mixed training pixels. On the other hand, the spectral signature has to be preferably distinct and thus favours pure training pixels. The aim of this study was to evaluate the sensitivity of training data distribution along fractional and spectral gradients on the resulting mapping performance. We derived four continuous fields (tree, shrub/herb, bare, water) from aerial photographs as response variables and processed corresponding spectral signatures from multitemporal Landsat 5 TM data as explanatory variables. Subsequent controlled experiments along fractional cover gradients were then based on generalised linear models. The resulting fractional and spectral distributions differed between single continuous fields, but could be satisfactorily trained and mapped. Pixels with partial cover, or with none of the respective cover, were much more critical than pure full-cover pixels. Error distribution of continuous field models was non-uniform with respect to the horizontal and vertical spatial distribution of target fields. We conclude that a sampling for continuous field training data should be based on extent and densities in the fractional and spectral space, rather than in real (geographic) space. Consequently, adequate training plots are most probably not systematically distributed in geographic space, but cover the gradient and covariate structure of the fractional and spectral space well.
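The sampling recommendation can be sketched as stratification along the fractional-cover gradient rather than in space; the candidate plots, their cover fractions, and the bin counts below are all invented for illustration.

```python
# Sketch: pick training plots so the 0-1 fractional-cover gradient is
# covered evenly, instead of sampling locations uniformly in space.
# Plot identifiers and cover fractions are hypothetical.

def gradient_sample(plots, bins=5, per_bin=2):
    """plots: list of (plot_id, cover_fraction) pairs.

    Select up to per_bin plots from each equal-width fraction bin,
    so every part of the fractional gradient is represented.
    """
    chosen = []
    for b in range(bins):
        lo, hi = b / bins, (b + 1) / bins
        in_bin = [p for p in plots
                  if lo <= p[1] < hi or (b == bins - 1 and p[1] == 1.0)]
        chosen.extend(in_bin[:per_bin])
    return chosen

plots = [("p%d" % i, i / 10) for i in range(11)]   # fractions 0.0 .. 1.0
sample = gradient_sample(plots)
print(len(sample))   # 10 plots, two per fraction bin
```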
Abstract:
Risk maps summarizing landscape suitability of novel areas for invading species can be valuable tools for preventing species' invasions or controlling their spread, but methods employed for development of such maps remain variable and unstandardized. We discuss several considerations in development of such models, including types of distributional information that should be used, the nature of explanatory variables that should be incorporated, and caveats regarding model testing and evaluation. We highlight that, in the case of invasive species, such distributional predictions should aim to derive the best hypothesis of the potential distribution of the species by using (1) all distributional information available, including information from both the native range and other invaded regions; (2) predictors linked as directly as is feasible to the physiological requirements of the species; and (3) modelling procedures that carefully avoid overfitting to the training data. Finally, model testing and evaluation should focus on well-predicted presences, and less on efficient prediction of absences; a k-fold regional cross-validation test is discussed.
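The k-fold regional cross-validation mentioned above can be sketched as a leave-one-region-out split, with k equal to the number of regions; the records and region names below are hypothetical.

```python
# Sketch: leave-one-region-out ("k-fold regional") cross-validation
# folds. Records are (region, occurrence_id) pairs; the region labels
# are invented for illustration.

def regional_folds(records):
    """Yield (held_out_region, train, test) for each region in turn."""
    regions = sorted({region for region, _ in records})
    for held_out in regions:
        train = [rec for rec in records if rec[0] != held_out]
        test = [rec for rec in records if rec[0] == held_out]
        yield held_out, train, test

records = [("native", 1), ("native", 2), ("invaded_A", 3), ("invaded_B", 4)]
folds = list(regional_folds(records))
print(len(folds))   # 3 folds, one per region
```

Splitting by region rather than at random is what makes the test meaningful for invasions: the model is always evaluated on an area whose records it never saw, mimicking projection into a new range.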
Abstract:
OBJECTIVE: The aim of this study was to evaluate the impact of communication skills training (CST) on working alliance and to identify specific communicational elements related to working alliance. METHODS: Pre- and post-training simulated patient interviews (6-month interval) of oncology physicians and nurses (N=56) who received CST were compared with two simulated patient interviews, also 6 months apart, of oncology physicians and nurses (N=57) who did not receive CST. The patient-clinician interaction was analyzed by means of the Roter Interaction Analysis System (RIAS). Alliance was measured by the Working Alliance Inventory - Short Revised Form. RESULTS: While working alliance did not improve with CST, generalized linear mixed effect models demonstrated that the quality of verbal communication was related to alliance. Positive talk and psychosocial counseling fostered alliance, whereas negative talk, biomedical information and patient questions diminished alliance. CONCLUSION: Patient-clinician alliance is related to specific verbal communication behaviors. PRACTICE IMPLICATIONS: Working alliance is a key element of patient-physician communication which deserves further investigation as a new marker and efficacy criterion of CST outcome.
Abstract:
We all make decisions of varying levels of importance every day. Because making a decision implies that there are alternative choices to be considered, almost every decision involves some conflict or dissatisfaction. Traditional economic models assume that a person weighs the positive and negative outcomes of each option and, based on these inferences, determines which option is best for that particular situation. However, individuals often act as irrational agents and tend to deviate from these rational choices. They instead evaluate the outcomes' subjective value: when facing a risky choice leading to losses, people are inclined to prefer risk over certainty, whereas when facing a risky choice leading to gains, people often avoid taking risks and choose the most certain option. Decision making is now thought to be balanced between deliberative and emotional components. Distinct neural regions underpin these factors: the deliberative pathway, which corresponds to executive functions, implies activation of the prefrontal cortex, while the emotional pathway tends to activate the limbic system. These circuits appear to be altered in individuals with ADHD, resulting, among other things, in impaired decision-making capacities. Their impulsive and inattentive behaviors are likely the cause of their irrational attitude towards risk taking. One option is to administer drug treatment to these individuals, with the knowledge that it may have several side effects; an alternative treatment relying on cognitive rehabilitation might, however, be appropriate. This project was therefore aimed at investigating whether an intensive working memory training could have a spillover effect on decision making in adults with ADHD and in age-matched healthy controls.
We designed a decision-making task in which participants selected an amount to gamble, with a one-in-three chance of winning four times the chosen amount and otherwise losing their investment. Performance was recorded with electroencephalography before and after a one-month Dual N-Back training, and possible near- and far-transfer effects were investigated. Overall, we found that performance on the gambling task was modulated by personality factors and by symptom severity at the pretest session. At posttest, all individuals demonstrated an improvement on the Dual N-Back and on similar untrained dimensions. In addition, we found that not only did the adults with ADHD show a stable decrease in symptomatology, as evaluated by the CAARS inventory, but this reduction was also detected in the control sample. Event-Related Potential (ERP) data point to a change within prefrontal and parietal cortices. These results suggest that cognitive remediation can be effective in adults with ADHD, and in healthy controls. An important complement to this work would be an examination of the data with regard to the attentional networks, which could strengthen the conclusion that complex programs covering several dimensions of executive function are not required; working memory training alone can be sufficient.
Abstract:
Abiotic factors are considered strong drivers of species distribution and assemblages. Yet these spatial patterns are also influenced by biotic interactions. Accounting for competitors or facilitators may improve both the fit and the predictive power of species distribution models (SDMs). We investigated the influence of a dominant species, Empetrum nigrum ssp. hermaphroditum, on the distribution of 34 subordinate species in the tundra of northern Norway. We related SDM parameters of those subordinate species to their functional traits and their co-occurrence patterns with E. hermaphroditum across three spatial scales. By combining both approaches, we sought to understand whether these species may be limited by competitive interactions and/or benefit from habitat conditions created by the dominant species. The model fit and predictive power increased for most species when the frequency of occurrence of E. hermaphroditum was included in the SDMs as a predictor. The largest increase was found for species that 1) co-occur most of the time with E. hermaphroditum, both at large (i.e. 750 m) and small spatial scale (i.e. 2 m) or co-occur with E. hermaphroditum at large scale but not at small scale and 2) have particularly low or high leaf dry matter content (LDMC). Species that do not co-occur with E. hermaphroditum at the smallest scale are generally palatable herbaceous species with low LDMC, thus showing a weak ability to tolerate resource depletion that is directly or indirectly induced by E. hermaphroditum. Species with high LDMC, showing a better aptitude to face resource depletion and grazing, are often found in the proximity of E. hermaphroditum. Our results are consistent with previous findings that both competition and facilitation structure plant distribution and assemblages in the Arctic tundra. 
The functional and co-occurrence approaches used were complementary and provided a deeper understanding of the observed patterns by refining the pool of potential direct and indirect ecological effects of E. hermaphroditum on the distribution of subordinate species. Our correlative study would benefit from being complemented by experimental approaches.