213 results for precision metrology


Relevance: 10.00%

Abstract:

Occupational exposure modeling is widely used in the context of the E.U. regulation on the registration, evaluation, authorization, and restriction of chemicals (REACH). First-tier tools, such as the European Centre for Ecotoxicology and Toxicology of Chemicals (ECETOC) targeted risk assessment (TRA) or Stoffenmanager, are used to screen a wide range of substances. Those of concern are investigated further using second-tier tools, e.g., the Advanced REACH Tool (ART). Local sensitivity analysis (SA) methods are used here to determine dominant factors for three models commonly used within the REACH framework: ECETOC TRA v3, Stoffenmanager 4.5, and ART 1.5. Based on the results of the SA, the robustness of the models is assessed. For ECETOC, the process category (PROC) is the most important factor, and a failure to identify the correct PROC has severe consequences for the exposure estimate. Stoffenmanager is the most balanced model, and decision-making uncertainties in a single modifying factor are less severe in Stoffenmanager. ART requires a careful evaluation of the decisions in the source compartment since it constitutes ∼75% of the total exposure range, which corresponds to an exposure estimate of 20-22 orders of magnitude. Our results indicate that there is a trade-off between accuracy and precision of the models. Previous studies suggested that ART may lead to more accurate results in well-documented exposure situations. However, the choice of the adequate model should ultimately be determined by the quality of the available exposure data: if the practitioner is uncertain concerning two or more decisions in the entry parameters, Stoffenmanager may be more robust than ART.
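As a minimal sketch of the kind of local, one-at-a-time sensitivity analysis described above, the Python snippet below varies each input of a toy multiplicative exposure model over its discrete choices; the factor names, multipliers and baseline exposure are hypothetical placeholders, not parameters of ECETOC TRA, Stoffenmanager or ART.

    import math

    # Hypothetical multiplicative exposure model: E = base * product(modifying factors).
    # Factor names and multiplier choices are illustrative only.
    factors = {
        "substance_emission": [0.03, 0.1, 0.3, 1.0],
        "local_controls":     [0.1, 0.3, 1.0],
        "duration":           [0.2, 0.6, 1.0],
        "dilution":           [0.3, 1.0, 3.0],
    }
    baseline = {name: choices[-1] for name, choices in factors.items()}
    base_exposure = 5.0  # mg/m^3, arbitrary

    def exposure(settings):
        e = base_exposure
        for value in settings.values():
            e *= value
        return e

    # One-at-a-time local SA: vary one factor over its choices, keep the rest at baseline,
    # and express the induced exposure range in orders of magnitude (log10 units).
    for name, choices in factors.items():
        values = []
        for c in choices:
            s = dict(baseline)
            s[name] = c
            values.append(exposure(s))
        span = math.log10(max(values)) - math.log10(min(values))
        print(f"{name}: {span:.2f} orders of magnitude")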

Relevance: 10.00%

Abstract:

BACKGROUND: Estimation of glomerular filtration rate (eGFR) using a common formula for both adult and pediatric populations is challenging. Using inulin clearances (iGFRs), this study aims to investigate the existence of a precise age cutoff beyond which the Modification of Diet in Renal Disease (MDRD), the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI), or the Cockcroft-Gault (CG) formulas can be applied with acceptable precision. Performance of the new Schwartz formula according to age is also evaluated. METHOD: We compared 503 iGFRs for 503 children aged between 33 months and 18 years to eGFRs. To define the most precise age cutoff value for each formula, a circular binary segmentation method analyzing the formulas' bias values according to the children's ages was performed. Bias was defined as the difference between iGFRs and eGFRs. To validate the identified cutoffs, the percentage of estimates within 30% of iGFR (30% accuracy) was calculated. RESULTS: For MDRD, CKD-EPI and CG, the best age cutoff was ≥14.3, ≥14.2 and ≤10.8 years, respectively. The lowest mean bias and highest accuracy were -17.11 and 64.7% for MDRD, 27.4 and 51% for CKD-EPI, and 8.31 and 77.2% for CG. The Schwartz formula showed the best performance below the age of 10.9 years. CONCLUSION: For the MDRD and CKD-EPI formulas, the mean bias values decreased with increasing child age, and these formulas were more accurate beyond an age cutoff of 14.3 and 14.2 years, respectively. For the CG and Schwartz formulas, the lowest mean bias values and the best accuracies were below an age cutoff of 10.8 and 10.9 years, respectively. Nevertheless, the accuracies of the formulas were still below the National Kidney Foundation Kidney Disease Outcomes Quality Initiative target required for validation in these age groups and, therefore, none of these formulas can be used to estimate GFR in children and adolescent populations.
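As a rough illustration of the bias and 30% accuracy statistics used above, the sketch below compares hypothetical inulin clearances with estimates from the widely published Cockcroft-Gault and bedside Schwartz formulas; the patient values are invented, and the exact formula variants and units used in the study may differ.

    # Hypothetical patients: (age [y], weight [kg], height [cm], serum creatinine [mg/dL],
    # sex, measured inulin clearance iGFR); values are invented for illustration.
    patients = [
        (15, 55, 165, 0.9, "M", 95.0),
        (16, 60, 170, 1.1, "F", 78.0),
        (9,  30, 135, 0.5, "M", 110.0),
    ]

    def cockcroft_gault(age, weight, scr, sex):
        # Cockcroft-Gault creatinine clearance, mL/min (note: not indexed to body surface area).
        crcl = (140 - age) * weight / (72 * scr)
        return crcl * 0.85 if sex == "F" else crcl

    def schwartz(height_cm, scr):
        # Bedside Schwartz estimate, mL/min/1.73 m^2 (k = 0.413).
        return 0.413 * height_cm / scr

    def bias_and_p30(pairs):
        # pairs: list of (iGFR, eGFR); bias defined as iGFR - eGFR, as in the study.
        biases = [i - e for i, e in pairs]
        p30 = sum(abs(e - i) <= 0.3 * i for i, e in pairs) / len(pairs) * 100
        return sum(biases) / len(biases), p30

    cg_pairs = [(igfr, cockcroft_gault(a, w, scr, sex)) for a, w, h, scr, sex, igfr in patients]
    sw_pairs = [(igfr, schwartz(h, scr)) for a, w, h, scr, sex, igfr in patients]
    print("CG mean bias, 30% accuracy:", bias_and_p30(cg_pairs))
    print("Schwartz mean bias, 30% accuracy:", bias_and_p30(sw_pairs))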

Relevance: 10.00%

Abstract:

Target identification for tractography studies requires solid anatomical knowledge validated by an extensive literature review across species for each seed structure to be studied. Manual literature review to identify targets for a given seed region is tedious and potentially subjective. Therefore, complementary approaches would be useful. We propose to use text-mining models to automatically suggest potential targets from the neuroscientific literature, full-text articles and abstracts, so that they can be used for anatomical connection studies and more specifically for tractography. We applied text-mining models to three structures: two well-studied structures that are validated deep brain stimulation targets, the internal globus pallidus and the subthalamic nucleus, and the nucleus accumbens, an exploratory target for treating psychiatric disorders. We performed a systematic review of the literature to document the projections of the three selected structures and compared it with the targets proposed by text-mining models, both in rat and primate (including human). We ran probabilistic tractography on the nucleus accumbens and compared the output with the results of the text-mining models and literature review. Overall, text-mining the literature could find three times as many targets as two man-weeks of curation could. The overall efficiency of text-mining against literature review in our study was 98% recall (at 36% precision), meaning that over all the targets for the three selected seeds, only one target was missed by text-mining. We demonstrate that connectivity for a structure of interest can be extracted from a very large amount of publications and abstracts. We believe this tool will help the neuroscience community to facilitate connectivity studies of particular brain regions. The text-mining tools used for the study are part of the HBP Neuroinformatics Platform, publicly available at http://connectivity-brainer.rhcloud.com/.
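The recall and precision figures quoted above reduce to simple set arithmetic over curated versus text-mined target lists; the target names below are invented placeholders.

    # Hypothetical gold-standard targets from manual literature review (for one seed region).
    curated = {"target_A", "target_B", "target_C", "target_D"}
    # Hypothetical targets proposed by the text-mining model.
    mined = {"target_A", "target_B", "target_C", "target_E", "target_F", "target_G"}

    true_positives = curated & mined
    recall = len(true_positives) / len(curated)      # fraction of curated targets recovered
    precision = len(true_positives) / len(mined)     # fraction of mined targets that are correct
    print(f"recall = {recall:.2f}, precision = {precision:.2f}")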

Relevance: 10.00%

Abstract:

BACKGROUND: After cardiac surgery with cardiopulmonary bypass (CPB), acquired coagulopathy often leads to post-CPB bleeding. Though multifactorial in origin, this coagulopathy is often aggravated by deficient fibrinogen levels. OBJECTIVE: To assess whether laboratory and thrombelastometric testing on CPB can predict plasma fibrinogen immediately after CPB weaning. PATIENTS/METHODS: This prospective study in 110 patients undergoing major cardiovascular surgery at risk of post-CPB bleeding compares fibrinogen level (Clauss method) and function (fibrin-specific thrombelastometry) in order to study the predictability of their course early after termination of CPB. Linear regression analysis and receiver operating characteristics were used to determine correlations and predictive accuracy. RESULTS: Quantitative estimation of post-CPB Clauss fibrinogen from on-CPB fibrinogen was feasible with small bias (+0.19 g/l), but with poor precision and a percentage of error >30%. A clinically useful alternative approach was developed by using on-CPB A10 to predict a Clauss fibrinogen range of interest instead of a discrete level. An on-CPB FIBTEM A10 ≤10 mm identified patients with a post-CPB Clauss fibrinogen of ≤1.5 g/l with a sensitivity of 0.99 and a positive predictive value of 0.60; it also identified patients with a post-CPB Clauss fibrinogen of ≥2.0 g/l with a specificity of 0.83. CONCLUSIONS: When measured on CPB prior to weaning, a FIBTEM A10 ≤10 mm is an early alert for post-CPB fibrinogen levels below or within the substitution range (1.5-2.0 g/l) recommended in case of post-CPB coagulopathic bleeding. This helps to minimize the delay to data-based hemostatic management after weaning from CPB.
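The sensitivity, positive predictive value and specificity reported for the A10 ≤10 mm alert can be reproduced from a simple 2×2 classification, as in the sketch below; the paired A10/fibrinogen values are invented, not the study data.

    # Hypothetical (on-CPB FIBTEM A10 [mm], post-CPB Clauss fibrinogen [g/L]) pairs.
    data = [(6, 1.2), (8, 1.4), (9, 1.6), (11, 1.8), (12, 2.1), (14, 2.4), (7, 1.3), (13, 2.2)]

    cutoff_a10, cutoff_fib = 10, 1.5
    tp = sum(a10 <= cutoff_a10 and fib <= cutoff_fib for a10, fib in data)
    fp = sum(a10 <= cutoff_a10 and fib > cutoff_fib for a10, fib in data)
    fn = sum(a10 > cutoff_a10 and fib <= cutoff_fib for a10, fib in data)
    tn = sum(a10 > cutoff_a10 and fib > cutoff_fib for a10, fib in data)

    sensitivity = tp / (tp + fn)   # low A10 flags low fibrinogen
    ppv = tp / (tp + fp)           # positive predictive value of the alert
    specificity = tn / (tn + fp)   # high A10 argues against low fibrinogen
    print(sensitivity, ppv, specificity)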

Relevance: 10.00%

Abstract:

BACKGROUND: Deep brain stimulation (DBS) is recognized as an effective treatment for movement disorders. We recently changed our technique, limiting the number of brain penetrations to three per side. OBJECTIVES: The first aim was to evaluate electrode targeting precision on both sides of surgery since implementing this technique. The second aim was to analyse whether or not electrode placement was improved with microrecording and macrostimulation. METHODS: We retrospectively reviewed operation protocols and MRIs of 30 patients who underwent bilateral DBS. For microrecording and macrostimulation, we used three parallel channels of the 'Ben Gun' centred on the MRI-planned target. Pre- and post-operative MRIs were merged. The distance between the planned target and the centre of the implanted electrode artefact was measured. RESULTS: There was no significant difference in targeting precision between the two sides of surgery. There was more intra-operative adjustment of the second electrode positioning based on microrecording and macrostimulation, which brought the electrode significantly closer to the MRI-planned target on the medial-lateral axis. CONCLUSION: More electrode adjustment was needed on the second side, possibly related to brain shift. We thus suggest performing a single central track with electrophysiological and clinical assessment, with multidirectional exploration on demand for suboptimal clinical responses.
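Targeting precision of the kind measured here comes down to the Euclidean distance between the MRI-planned target and the centre of the electrode artefact in co-registered images; a minimal sketch with invented coordinates follows.

    import math

    # Hypothetical coordinates (mm) in a common, co-registered image space.
    planned_target   = (12.4, -3.1, -4.0)   # MRI-planned target
    electrode_centre = (13.0, -2.5, -3.2)   # centre of the post-operative electrode artefact

    error_vector = [e - p for p, e in zip(planned_target, electrode_centre)]
    distance = math.sqrt(sum(d * d for d in error_vector))
    print(f"targeting error: {distance:.2f} mm; medial-lateral component: {error_vector[0]:.2f} mm")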

Relevance: 10.00%

Abstract:

Huntington's disease is an incurable neurodegenerative disease caused by inheritance of an expanded cytosine-adenine-guanine (CAG) trinucleotide repeat within the Huntingtin gene. Extensive volume loss and altered diffusion metrics in the basal ganglia, cortex and white matter are seen when patients with Huntington's disease (HD) undergo structural imaging, suggesting that changes in basal ganglia-cortical structural connectivity occur. The aims of this study were to characterise altered patterns of basal ganglia-cortical structural connectivity with high anatomical precision in premanifest and early manifest HD, and to identify associations between structural connectivity and genetic or clinical markers of HD. 3-Tesla diffusion tensor magnetic resonance images were acquired from 14 early manifest HD subjects, 17 premanifest HD subjects and 18 controls. Voxel-based analyses of probabilistic tractography were used to quantify basal ganglia-cortical structural connections. Canonical variate analysis was used to demonstrate disease-associated patterns of altered connectivity and to test for associations between connectivity and genetic and clinical markers of HD; this is the first study in which such analyses have been used. Widespread changes were seen in basal ganglia-cortical structural connectivity in early manifest HD subjects; this has relevance for development of therapies targeting the striatum. Premanifest HD subjects had a pattern of connectivity more similar to that of controls, suggesting progressive change in connections over time. Associations between structural connectivity patterns and motor and cognitive markers of disease severity were present in early manifest subjects. Our data suggest the clinical phenotype in manifest HD may be at least partly a result of altered connectivity.

Relevance: 10.00%

Abstract:

The objective of this work was to combine the advantages of the dried blood spot (DBS) sampling process with the highly sensitive and selective negative-ion chemical ionization tandem mass spectrometry (NICI-MS-MS) to analyze for recent antidepressants including fluoxetine, norfluoxetine, reboxetine, and paroxetine from micro whole blood samples (i.e., 10 µL). Before analysis, DBS samples were punched out, and antidepressants were simultaneously extracted and derivatized in a single step by use of pentafluoropropionic acid anhydride and 0.02% triethylamine in butyl chloride for 30 min at 60 °C under ultrasonication. Derivatives were then separated on a gas chromatograph coupled with a triple-quadrupole mass spectrometer operating in negative selected reaction monitoring mode for a total run time of 5 min. To establish the validity of the method, trueness, precision, and selectivity were determined on the basis of the guidelines of the "Société Française des Sciences et des Techniques Pharmaceutiques" (SFSTP). The assay was found to be linear in the concentration ranges 1 to 500 ng mL⁻¹ for fluoxetine and norfluoxetine and 20 to 500 ng mL⁻¹ for reboxetine and paroxetine. Despite the small sampling volume, the limit of detection was estimated at 20 pg mL⁻¹ for all the analytes. The stability of DBS was also evaluated at −20 °C, 4 °C, 25 °C, and 40 °C for up to 30 days. Furthermore, the method was successfully applied to a pharmacokinetic investigation performed on a healthy volunteer after oral administration of a single 40-mg dose of fluoxetine. Thus, this validated DBS method combines an extractive-derivative single step with a fast and sensitive GC-NICI-MS-MS technique. Using microliter blood samples, this procedure offers a patient-friendly tool in many biomedical fields such as checking treatment adherence, therapeutic drug monitoring, toxicological analyses, or pharmacokinetic studies.
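For the linearity ranges reported above, a common validation step is to fit a calibration curve of peak-area ratio against nominal concentration and back-calculate the standards; the sketch below uses invented numbers and a plain unweighted fit, whereas the SFSTP-based validation in the study may have used weighting and accuracy profiles.

    import numpy as np

    # Hypothetical calibration standards for one analyte: nominal concentration (ng/mL)
    # versus analyte/internal-standard peak-area ratio.
    conc  = np.array([1, 5, 20, 50, 100, 250, 500], dtype=float)
    ratio = np.array([0.011, 0.052, 0.21, 0.49, 1.02, 2.48, 5.05])

    slope, intercept = np.polyfit(conc, ratio, 1)      # unweighted linear fit (1/x weighting is also common)
    back_calc = (ratio - intercept) / slope             # back-calculated concentrations
    bias_pct = 100 * (back_calc - conc) / conc          # trueness of each standard, in %
    print("slope:", slope, "intercept:", intercept)
    print("back-calculated bias (%):", np.round(bias_pct, 1))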

Relevance: 10.00%

Abstract:

Aim: Modelling species at the assemblage level is required to make effective forecasts of global change impacts on diversity and ecosystem functioning. Community predictions may be achieved using macroecological models of community properties (MEM), or by stacking individual species distribution models (S-SDMs). To obtain more realistic predictions of species assemblages, the SESAM framework suggests applying successive filters to the initial species source pool, by combining different modelling approaches and rules. Here we provide a first test of this framework in mountain grassland communities. Location: The western Swiss Alps. Methods: Two implementations of the SESAM framework were tested: a "Probability ranking" rule based on species richness predictions and raw probabilities from SDMs, and a "Trait range" rule that uses the predicted upper and lower bounds of the community-level distribution of three different functional traits (vegetative height, specific leaf area and seed mass) to constrain a pool of environmentally filtered species from binary SDM predictions. Results: We showed that, as expected, all independent constraints contributed to reducing species richness overprediction. Only the "Probability ranking" rule produced a slight but significant improvement in predictions of community composition. Main conclusion: We tested various ways to implement the SESAM framework by integrating macroecological constraints into S-SDM predictions, and report one that is able to improve compositional predictions. We discuss possible improvements, such as further improving the causality and precision of environmental predictors, using other assembly rules and testing other types of ecological or functional constraints.
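A minimal sketch of the "Probability ranking" rule, assuming a single site, a set of SDM occurrence probabilities and a macroecological richness prediction (all values below are invented):

    # Hypothetical SDM occurrence probabilities for one site, and a richness prediction
    # from a macroecological model (MEM).
    sdm_probs = {
        "Poa alpina": 0.82, "Carex curvula": 0.64, "Festuca rubra": 0.55,
        "Nardus stricta": 0.41, "Trifolium alpinum": 0.33, "Geum montanum": 0.12,
    }
    predicted_richness = 3

    # Probability ranking rule: retain the top-N species by SDM probability,
    # where N is the MEM-predicted species richness for the site.
    ranked = sorted(sdm_probs, key=sdm_probs.get, reverse=True)
    predicted_assemblage = ranked[:predicted_richness]
    print(predicted_assemblage)   # ['Poa alpina', 'Carex curvula', 'Festuca rubra']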

Relevance: 10.00%

Abstract:

The present study was initiated with the aim to assess the in vivo electrochemical corrosion behaviour of CoCrMo biomedical alloys in human synovial fluids in an attempt to identify possible patient or pathology specific effects. For this, electrochemical measurements (open circuit potential, OCP; polarization resistance, Rp; potentiodynamic polarization curves; electrochemical impedance spectroscopy, EIS) were carried out on fluids extracted from patients with different articular pathologies and prosthesis revisions. Those electrochemical measurements could be carried out with outstanding precision and signal stability. The results show that the corrosion behaviour of CoCrMo alloy in synovial fluids not only depends on material reactivity but also on the specific reactions of synovial fluid components, most likely involving reactive oxygen species. In some patients the latter were found to determine the whole cathodic and anodic electrochemical response. Depending on patients, corrosion rates varied significantly between 50 and 750 mg dm⁻² year⁻¹.
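One common way to convert a measured polarization resistance into a corrosion rate of the order reported here is the Stern-Geary relation combined with Faraday's law; in the sketch below the Tafel slopes, electrode area and alloy equivalent weight are assumed values, not those of the study.

    # Assumed inputs (illustrative only).
    Rp_ohm = 5.0e4          # measured polarization resistance, ohm
    area_cm2 = 2.0          # exposed electrode area, cm^2
    b_a, b_c = 0.12, 0.12   # assumed anodic/cathodic Tafel slopes, V/decade
    EW_g_per_eq = 24.0      # assumed equivalent weight of the CoCrMo alloy, g/equivalent
    F = 96485.0             # Faraday constant, C/mol of electrons

    # Stern-Geary: corrosion current density from the area-normalized polarization resistance.
    B = (b_a * b_c) / (2.303 * (b_a + b_c))        # V
    i_corr = B / (Rp_ohm * area_cm2)               # A/cm^2

    # Faraday's law: mass loss rate per unit area, converted to mg dm^-2 year^-1.
    g_per_cm2_s = i_corr * EW_g_per_eq / F
    mg_per_dm2_year = g_per_cm2_s * 1000 * 100 * 365.25 * 24 * 3600
    print(f"i_corr = {i_corr:.2e} A/cm2, corrosion rate ~ {mg_per_dm2_year:.0f} mg dm^-2 year^-1")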

Relevance: 10.00%

Abstract:

This study investigated fingermark residues using Fourier transform infrared microscopy (μ-FTIR) in order to obtain fundamental information about the marks' initial composition and aging kinetics. This knowledge would be an asset for fundamental research on fingermarks, such as for dating purposes. Attenuated Total Reflection (ATR) and single-point reflection modes were tested on fresh fingermarks. ATR proved to be better suited and this mode was subsequently selected for further aging studies. Eccrine and sebaceous material was found in fresh and aged fingermarks and the spectral regions 1000-1850 cm⁻¹ and 2700-3600 cm⁻¹ were identified as the most informative. The impact of substrates (aluminium and glass slides) and storage conditions (storage in the light and in the dark) on fingermark aging was also studied. Chemometric analyses showed that fingermarks could be grouped according to their age regardless of the substrate when they were stored in an open box kept in an air-conditioned laboratory at around 20°C next to a window. On the contrary, when fingermarks were stored in the dark, only specimens deposited on the same substrate could be grouped by age. Thus, the substrate appeared to influence aging of fingermarks in the dark. Furthermore, PLS regression analyses were conducted in order to study the possibility of modelling fingermark aging for potential fingermark dating applications. The resulting models showed an overall precision of ±3 days and clearly demonstrated their capability to differentiate older fingermarks (20 and 34 days old) from newer ones (1, 3, 7 and 9 days old) regardless of the substrate and lighting conditions. These results are promising from a fingermark dating perspective. Further research is required to fully validate such models and assess their robustness and limitations in uncontrolled casework conditions.
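A minimal sketch of the kind of PLS regression used for age modelling, here with synthetic stand-ins for the ATR-FTIR spectra (scikit-learn's PLSRegression; all numbers are illustrative only):

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(0)

    # Synthetic stand-in for ATR-FTIR spectra: 60 fingermarks, 500 wavenumber channels,
    # with an age-dependent trend plus noise (the real data would be measured spectra).
    ages = rng.choice([1, 3, 7, 9, 20, 34], size=60).astype(float)
    trend = np.linspace(0, 1, 500)
    spectra = np.outer(ages, trend) + rng.normal(scale=2.0, size=(60, 500))

    pls = PLSRegression(n_components=5)
    predicted = cross_val_predict(pls, spectra, ages, cv=5).ravel()

    errors = predicted - ages
    print("cross-validated RMSE (days):", np.sqrt(np.mean(errors ** 2)))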

Relevance: 10.00%

Abstract:

Coronary artery magnetic resonance imaging (MRI) has the potential to provide the cardiologist with relevant diagnostic information on coronary artery disease in patients. The major challenge of cardiac MRI, though, is dealing with all sources of motion that can corrupt the images and degrade the diagnostic information provided. The current thesis therefore focused on the development of new MRI techniques that change the standard approach to cardiac motion compensation in order to increase the efficiency of cardiovascular MRI, to provide more flexibility and robustness, new temporal information and new tissue information. The proposed approaches help advance coronary magnetic resonance angiography (MRA) in the direction of an easy-to-use and multipurpose tool that can be translated to the clinical environment. The first part of the thesis focused on the study of coronary artery motion through gold-standard imaging techniques (x-ray angiography) in patients, in order to measure the precision with which the coronary arteries assume the same position beat after beat (coronary artery repositioning). We learned that intervals with minimal coronary artery repositioning occur in peak systole and in mid-diastole, and we responded with a new pulse sequence (T2-post) that is able to provide peak-systolic imaging. Such a sequence was tested in healthy volunteers and, from the image quality comparison, we learned that the proposed approach provides coronary artery visualization and contrast-to-noise ratio (CNR) comparable with the standard acquisition approach, but with increased signal-to-noise ratio (SNR). The second part of the thesis explored a completely new paradigm for whole-heart cardiovascular MRI.
The proposed technique acquires the data continuously (free-running) instead of being triggered, thus increasing the efficiency of the acquisition and providing four-dimensional images of the whole heart, while respiratory self-navigation allows the scan to be performed in free breathing. This enabling technology allows for anatomical and functional evaluation in four dimensions, with high spatial and temporal resolution and without the need for contrast agent injection. The enabling step is the use of a golden-angle based 3D radial trajectory, which allows for a continuous sampling of k-space and a retrospective selection of the timing parameters of the reconstructed dataset. The free-running 4D acquisition was then combined with a compressed sensing reconstruction algorithm that further increases the temporal resolution of the 4D dataset, while at the same time increasing the overall image quality by removing undersampling artifacts. The obtained 4D images provide visualization of the whole coronary artery tree in each phase of the cardiac cycle and, at the same time, allow for the assessment of cardiac function with a single free-breathing scan. The quality of the coronary arteries in the frames of the free-running 4D acquisition is in line with that obtained with the standard ECG-triggered acquisition, and the cardiac function evaluation matched that measured with gold-standard stacks of 2D cine images. Finally, the last part of the thesis focused on the development of an ultrashort echo time (UTE) acquisition scheme for in vivo detection of calcification in the coronary arteries. Recent studies showed that UTE imaging allows for the detection of coronary artery plaque calcification ex vivo, since it is able to detect the short-T2 components of the calcification. The heart motion, though, prevented this technique from being applied in vivo. An ECG-triggered, self-navigated, 3D radial, triple-echo UTE acquisition was therefore developed and tested in healthy volunteers. The proposed sequence combines a 3D self-navigation approach with a 3D radial UTE acquisition, enabling data collection during free breathing. Three echoes are simultaneously acquired to extract the short-T2 components of the calcification, while a water and fat separation technique allows for proper visualization of the coronary arteries. Even though the results are still preliminary, the proposed sequence showed great potential for the in vivo visualization of coronary artery calcification. In conclusion, the thesis presents three novel MRI approaches aimed at improved characterization and assessment of atherosclerotic coronary artery disease. These approaches provide new anatomical and functional information in four dimensions, and support tissue characterization for coronary artery plaques.
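One common construction of a golden-angle-based 3D radial trajectory distributes successive spoke directions using the 3D golden means (≈0.4656 and ≈0.6823), so that any contiguous subset of readouts samples k-space nearly uniformly and the timing of the reconstruction can be chosen retrospectively; the sketch below is a generic version of this idea, not necessarily the exact trajectory implemented in the thesis.

    import numpy as np

    PHI_1, PHI_2 = 0.4656, 0.6823   # approximate 3D golden means

    def golden_angle_3d_spokes(n_spokes):
        """Unit direction vectors for a golden-angle-ordered 3D radial acquisition."""
        n = np.arange(n_spokes)
        z = np.mod(n * PHI_1, 1.0)                  # polar coordinate, spread over a hemisphere
        phi = 2 * np.pi * np.mod(n * PHI_2, 1.0)    # azimuthal angle
        sin_theta = np.sqrt(1.0 - z ** 2)
        return np.stack([sin_theta * np.cos(phi), sin_theta * np.sin(phi), z], axis=1)

    spokes = golden_angle_3d_spokes(1000)
    # Any contiguous window of spokes (e.g. one retrospectively chosen cardiac phase)
    # still covers directions roughly uniformly, which is what allows the timing
    # parameters of the 4D reconstruction to be selected after the scan.
    print(spokes.shape, np.allclose(np.linalg.norm(spokes, axis=1), 1.0))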

Relevance: 10.00%

Abstract:

This study shows how a new generation of terrestrial laser scanners can be used to investigate glacier surface ablation and other elements of glacial hydrodynamics at exceptionally high spatial and temporal resolution. The study area is an Alpine valley glacier, Haut Glacier d'Arolla, Switzerland. Here we use an ultra-long-range RIEGL VZ-6000 lidar scanner, whose laser is specifically designed for measuring snow- and ice-covered surfaces. We focus on two timescales: seasonal and daily. Our results show that a near-infrared scanning laser system can provide high-precision elevation change and ablation data from long ranges, and over relatively large sections of the glacier surface. We use it to quantify spatial variations in the patterns of surface melt at the seasonal scale, as controlled by both aspect and differential debris cover. At the daily scale, we quantify the effects of ogive-related differences in ice surface debris content on spatial patterns of ablation. Daily-scale measurements point to possible hydraulic jacking of the glacier associated with short-term water pressure rises. This latter demonstration shows that this type of lidar may be used to address subglacial hydrologic questions, in addition to motion and ablation measurements.
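Surface lowering between two scan epochs is typically quantified by differencing gridded elevation models of the ice surface; the small arrays below are invented stand-ins for the lidar-derived grids.

    import numpy as np

    # Hypothetical 1 m gridded ice-surface elevations (m a.s.l.) from two scan epochs.
    dem_day1 = np.array([[2840.2, 2840.5, 2840.9],
                         [2841.1, 2841.4, 2841.8],
                         [2842.0, 2842.3, 2842.7]])
    dem_day2 = dem_day1 - np.array([[0.04, 0.05, 0.03],
                                    [0.06, 0.05, 0.04],
                                    [0.05, 0.06, 0.05]])

    elevation_change = dem_day2 - dem_day1          # negative values = surface lowering
    mean_ablation_m = -elevation_change.mean()
    print(f"mean daily surface lowering: {mean_ablation_m * 100:.1f} cm")
    # Converting to water equivalent would further require an assumed ice density (e.g. ~900 kg m^-3).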

Relevance: 10.00%

Abstract:

The present study proposes a method based on ski-fixed inertial sensors to automatically compute spatio-temporal parameters (phase durations, cycle speed and cycle length) for the diagonal stride in classical cross-country skiing. The proposed system was validated against a marker-based motion capture system during indoor treadmill skiing. The skiing movement of 10 junior to world-cup athletes was measured under four different conditions. The accuracy (i.e. median error) and precision (i.e. interquartile range of error) of the system were below 6 ms for cycle duration and ski thrust duration and below 35 ms for pole push duration. Cycle speed precision (accuracy) was below 0.1 m/s (0.005 m/s) and cycle length precision (accuracy) was below 0.15 m (0.005 m). The system was sensitive to changes in conditions and was accurate enough to detect significant differences reported in previous studies. Since the capture volume is not limited and the setup is simple, the system would be well suited for outdoor measurements on snow.
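The accuracy and precision definitions used above (median error and interquartile range of the error against the reference system) are straightforward to reproduce; the error values below are invented.

    import numpy as np

    # Hypothetical cycle-duration errors (sensor minus motion capture), in milliseconds.
    errors_ms = np.array([2.1, -1.4, 0.8, 3.5, -0.2, 1.9, -2.7, 0.5, 4.1, -1.1])

    accuracy = np.median(errors_ms)                   # accuracy = median error
    q1, q3 = np.percentile(errors_ms, [25, 75])
    precision = q3 - q1                               # precision = interquartile range of error
    print(f"accuracy = {accuracy:.1f} ms, precision (IQR) = {precision:.1f} ms")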

Relevance: 10.00%

Abstract:

Nowadays, species distribution models (SDMs) are a widely used tool. Using different statistical approaches, these models reconstruct the realized niche of a species using presence data and a set of variables, often topoclimatic. Their range of use is broad, from understanding the requirements of single species, to designing nature reserves based on species hotspots, to modelling climate change impacts, etc. Most of the time these models use variables at a resolution of 50 km x 50 km or 1 km x 1 km. In some cases, however, these models are used with resolutions below the kilometre scale and are then called high resolution models (100 m x 100 m or 25 m x 25 m). Quite recently a new kind of data has emerged enabling precision up to 1 m x 1 m and thus allowing very high resolution modelling. However, these new variables are very costly and require a considerable amount of time to be processed. This is especially the case when these variables are used in complex calculations such as model projections over large areas. Moreover, the importance of very high resolution data in SDMs has not yet been assessed and is not well understood. Some basic knowledge on what drives species presences and absences is still missing. Indeed, it is not clear whether, in mountain areas like the Alps, coarse topoclimatic gradients drive species distributions, whether fine-scale temperature or topography are more important, or whether their importance can be neglected when balanced against competition or stochasticity. In this thesis I investigated the importance of very high resolution data (2-5 m) in species distribution models using very high resolution topographic, climatic and edaphic variables over a 2000 m elevation gradient in the Western Swiss Alps. I also investigated the more local responses to these variables for a subset of species living in this area at two specific elevation belts. During this thesis I showed that high resolution data necessitate very good datasets (species and variables for the models) to produce satisfactory results. Indeed, in mountain areas, temperature is the most important factor driving species distributions and needs to be modelled at very fine resolution, instead of being interpolated over large surfaces, to produce satisfactory results. Despite the intuitive idea that topography should be very important at high resolution, the results are mixed. Looking at the importance of variables over a large gradient, however, buffers the importance of the variables: topographic factors were shown to be highly important at the subalpine level, but their importance decreases at lower elevations. Whereas at the montane level edaphic and land use factors are more important, high resolution topographic data are more important at the subalpine level. Finally, the biggest improvement in the models happens when edaphic variables are added. Indeed, adding soil variables is of high importance, and variables such as pH surpass the usual topographic variables in SDMs in terms of importance in the models. To conclude, high resolution is very important in modelling but necessitates very good datasets. Merely increasing the resolution of the usual topoclimatic predictors is not sufficient, and the use of edaphic predictors was highlighted as fundamental to produce significantly better models. This is of primary importance, especially if these models are used to reconstruct communities or as a basis for biodiversity assessments.
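The kind of gain reported when edaphic predictors are added can be illustrated by comparing two simple presence-absence models, one with topoclimatic predictors only and one that also includes soil pH; the data below are simulated, so the AUC values are purely illustrative and the model is a plain logistic regression rather than the ensemble techniques typically used for SDMs.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    n = 500

    # Simulated predictors: temperature and slope (topoclimatic) and soil pH (edaphic).
    temperature = rng.normal(5, 3, n)
    slope = rng.uniform(0, 40, n)
    ph = rng.normal(6, 1, n)

    # Simulated presence-absence driven by temperature and pH.
    logit = 0.8 * (temperature - 5) - 1.2 * (ph - 6)
    presence = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

    X_topo = np.column_stack([temperature, slope])
    X_full = np.column_stack([temperature, slope, ph])

    for name, X in [("topoclimatic only", X_topo), ("topoclimatic + pH", X_full)]:
        X_tr, X_te, y_tr, y_te = train_test_split(X, presence, test_size=0.3, random_state=0)
        model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
        print(f"{name}: AUC = {auc:.2f}")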

Relevance: 10.00%

Abstract:

Owing to recent advances in genomic technologies, personalized oncology is poised to fundamentally alter cancer therapy. In this paradigm, the mutational and transcriptional profiles of tumors are assessed, and personalized treatments are designed based on the specific molecular abnormalities relevant to each patient's cancer. To date, such approaches have yielded impressive clinical responses in some patients. However, a major limitation of this strategy has also been revealed: the vast majority of tumor mutations are not targetable by current pharmacological approaches. Immunotherapy offers a promising alternative to exploit tumor mutations as targets for clinical intervention. Mutated proteins can give rise to novel antigens (called neoantigens) that are recognized with high specificity by patient T cells. Indeed, neoantigen-specific T cells have been shown to underlie clinical responses to many standard treatments and immunotherapeutic interventions. Moreover, studies in mouse models targeting neoantigens, and early results from clinical trials, have established proof of concept for personalized immunotherapies targeting next-generation sequencing identified neoantigens. Here, we review basic immunological principles related to T-cell recognition of neoantigens, and we examine recent studies that use genomic data to design personalized immunotherapies. We discuss the opportunities and challenges that lie ahead on the road to improving patient outcomes by incorporating immunotherapy into the paradigm of personalized oncology.