250 results for Improved processes


Relevance: 20.00%

Abstract:

Engineered nanomaterials (ENMs) exhibit special physicochemical properties and are thus finding their way into an increasing number of industries, enabling products with improved properties. Their increased use brings a greater likelihood of exposure to the nanoparticles (NPs) that could be released during the life cycle of nano-enabled products. The field of nanotoxicology has emerged as a consequence of the development of these novel materials, and it has gained ever more attention due to the urgent need to gather information on exposure to them and to understand the potential hazards they engender. However, current studies on nanotoxicity tend to focus on pristine ENMs and use these toxicity results to generalize risk assessments of human exposure to NPs. ENMs released into the environment can interact with their surroundings, change their characteristics and exhibit toxicity effects distinct from those of pristine ENMs. Furthermore, NPs' large surface areas provide extra-large potential interfaces, promoting more significant interactions between NPs and other co-existing species. In such processes, other species can attach to a NP's surface and modify its surface functionality and, in turn, the toxicity it normally exhibits. One particular occupational health scenario involves NPs and low-volatility organic compounds (LVOC), a common type of pollutant present around many potential sources of NPs. LVOC can coat a NP's surface and then dominate its toxicity. One important mechanism in nanotoxicology is the creation of reactive oxygen species (ROS) on a NP's surface; LVOC can modify the production of these ROS. In summary, nanotoxicity research should not be limited to the toxicity of pristine NPs, nor use their toxicity to evaluate the health effects of exposure to environmental NPs. Instead, the interactions which NPs have with other environmental species should also be considered and researched. The potential health effects of exposure to NPs should be derived from these real-world NPs, with their environmentally modified characteristics and distinct toxicity. Failure to suitably address toxicity results could lead to inappropriate treatment of nano-release, affect the environment and public health, and tarnish the development of sustainable nanotechnologies as a whole. The main objective of this thesis is to demonstrate a process for coating NP surfaces with LVOC using a well-controlled laboratory design, and to explore the consequences for particle toxicity with regard to the NPs' capacity to generate ROS. The dynamic coating system developed yielded stable and replicable coating performance, simulating an important realistic scenario. Clear changes in the size distribution of airborne NPs were observed using a scanning mobility particle sizer, were confirmed using both liquid nanotracking analyses and transmission electron microscopy (TEM) imaging, and were verified to originate from the LVOC coating. Coating thicknesses corresponded to the amount of coating material used and could be controlled via the parameters of the LVOC generator. The capacity of pristine silver NPs (Ag NPs) to generate ROS was reduced by a passive coating of inert paraffin, which blocked the reactive zones on the particle surfaces. In contrast, a coating of active reduced anthraquinone contributed to redox reactions and generated ROS itself, even though ROS generation due to oxidation by the Ag NPs themselves was quenched.
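As a rough illustration of how such size-distribution shifts translate into coating thicknesses, a minimal Python sketch assuming a spherical core-shell geometry; all mode diameters below are invented, not measured values from the thesis:

```python
import numpy as np

# Hypothetical SMPS mode diameters measured before and after LVOC coating.
# Under a spherical core-shell assumption, the coating thickness is half
# the increase in mode diameter.
d_pristine = np.array([42.0, 60.0, 81.0])   # nm, pristine Ag NP modes
d_coated = np.array([55.0, 74.0, 96.0])     # nm, after LVOC coating

thickness = (d_coated - d_pristine) / 2.0   # nm of LVOC shell
print(thickness)                            # -> [6.5 7.  7.5]
```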
Further objectives of this thesis included the development of ROS methodology and the analysis of ROS case studies. Since the capacity of NPs to create ROS is an important effect in nanotoxicity, we attempted to refine and standardize the use of 2',7'-dichlorodihydrofluorescein (DCFH) as a chemical tailored to characterizing NPs' capacity for ROS generation. Previous studies had reported a wide variety of results, owing to a number of insufficiently controlled factors. We therefore cross-compared chemicals and concentrations, explored ways of dispersing NP samples in liquid solutions, identified sources of contradiction in the literature and investigated ways of reducing artifactual results. The most robust results were obtained by sonicating an optimal amount of NP sample in a DCFH-HRP solution made of 5 µM DCFH and 0.5 unit/ml horseradish peroxidase (HRP). Our findings showed that the major reasons for previously conflicting results were the different experimental approaches used and the potential artifacts arising at high sample concentrations. Applying our refined DCFH protocol together with other physicochemical characterizations and biological analyses, we conducted several case studies characterizing aerosols and NP samples. Exposure to aged brake-wear dust entails a risk of deleterious health effects in occupational scenarios. We performed microscopy and elemental analyses, as well as ROS measurements with acellular and cellular DCFH assays. TEM images revealed the samples to be heterogeneous mixtures with few particles at the nano-scale. Metallic and non-metallic elements were identified, primarily iron, carbon and oxygen. Moderate amounts of ROS were detected in the cell-free fluorescence tests; however, exposed cells were not dramatically activated. Besides their highly aged, oxidized state, aged brake-wear samples may have caused less oxidative stress than fresh brake-wear samples because of their larger size and thus relatively smaller reactive surface area. Other case studies, involving welding fumes and differently charged NPs, confirmed the performance of our DCFH assay and linked ROS generation to varying characteristics, especially the surface functionality of the samples.
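A minimal sketch of how acellular DCFH-assay readings are commonly quantified: calibrate blank-corrected fluorescence against H2O2 standards and express sample ROS as H2O2 equivalents. The working-solution concentrations follow the abstract; every fluorescence reading below is an invented placeholder, not data from the thesis:

```python
import numpy as np

# Hypothetical calibration of the DCFH-HRP assay (5 uM DCFH, 0.5 unit/ml
# HRP): fluorescence of H2O2 standards, blank-corrected, arbitrary units.
std_conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0])           # uM H2O2
std_fluo = np.array([0.0, 118.0, 240.0, 487.0, 955.0])   # a.u. (invented)

# Least-squares slope through the origin: fluorescence = k * concentration.
k = std_fluo @ std_conc / (std_conc @ std_conc)

def ros_equivalents(sample_fluo, blank_fluo=0.0):
    """Blank-corrected fluorescence -> uM H2O2 equivalents."""
    return (np.asarray(sample_fluo, dtype=float) - blank_fluo) / k

print(ros_equivalents([310.0, 602.0], blank_fluo=12.0))
```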

Relevance: 20.00%

Abstract:

Atherosclerosis is a chronic cardiovascular disease that involves the thickening of the artery walls as well as the formation of plaques (lesions) causing the narrowing of the lumens in vessels such as the aorta, the coronary and the carotid arteries. Magnetic resonance imaging (MRI) is a promising modality for the assessment of atherosclerosis, as it is a non-invasive and patient-friendly procedure that does not use ionizing radiation. MRI offers high soft-tissue contrast without the need for intravenous contrast media, while modification of the MR pulse sequences allows further adjustment of the contrast for specific diagnostic needs. As such, MRI can create angiographic images of the vessel lumens to assess stenoses at the late stage of the disease, as well as blood-flow-suppressed images for the early investigation of the vessel wall and the characterization of atherosclerotic plaques. However, despite the great technical progress of the past two decades, MRI is an intrinsically low-sensitivity technique, and limitations remain in terms of accuracy and performance. A major challenge for coronary artery imaging is respiratory motion. State-of-the-art diaphragmatic navigators rely on an indirect measure of motion, perform a 1D correction, and have long and unpredictable scan times. In response, self-navigation (SN) strategies have recently been introduced that offer 100% scan efficiency and increased ease of use. SN detects respiratory motion directly from the image data obtained at the level of the heart, and retrospectively corrects the same data before final image reconstruction. Thus, SN holds potential for multi-dimensional motion compensation. To this end, this thesis presents novel SN methods that estimate 2D and 3D motion parameters from aliased sub-images obtained from the same raw data composing the final image. Combining all corrected sub-images produces a final image with reduced motion artifacts for the visualization of the coronaries. The first study (section 2.2, 2D Self-Navigation with Compressed Sensing) consists of a method for 2D translational motion compensation. Here, the use of compressed sensing (CS) reconstruction is proposed and investigated to support motion detection by reducing aliasing artifacts. In healthy human subjects, CS demonstrated improved motion-detection accuracy in simulations on in vivo data, while improved coronary artery visualization was demonstrated on in vivo free-breathing acquisitions. However, the motion of the heart induced by respiration has been shown to occur in three dimensions and to be more complex than a simple translation. Therefore, the second study (section 2.3, 3D Self-Navigation) consists of a method for 3D affine motion correction rather than 2D only. Here, different techniques were adopted to reduce the background signal contribution in respiratory motion tracking, which can be adversely affected by the static tissue surrounding the heart. The proposed method demonstrated improved conspicuity and visualization of coronary arteries in healthy and cardiovascular disease patient cohorts in comparison to a conventional 1D SN method. In the third study (section 2.4, 3D Self-Navigation with Compressed Sensing), the same tracking methods were used to obtain sub-images sorted according to respiratory position. Then, instead of motion correction, a compressed sensing reconstruction was performed on all sorted sub-image data.
This process exploits the consistency of the sorted data to reduce aliasing artifacts, such that the sub-image corresponding to the end-expiratory phase can directly be used to visualize the coronaries. In a healthy volunteer cohort, this strategy improved conspicuity and visualization of the coronary arteries when compared to a conventional 1D SN method. For the visualization of the vessel wall and atherosclerotic plaques, the state-of-the-art dual inversion recovery (DIR) technique is able to suppress the signal coming from flowing blood and provide positive wall-lumen contrast. However, optimal contrast may be difficult to obtain and is subject to RR variability. Furthermore, DIR imaging is time-inefficient, and multislice acquisitions may lead to prolonged scan times. In response, and as the fourth study of this thesis (chapter 3, Vessel Wall MRI of the Carotid Arteries), a phase-sensitive DIR method has been implemented and tested in the carotid arteries of a healthy volunteer cohort. By exploiting the phase information of images acquired after DIR, the proposed phase-sensitive method enhances wall-lumen contrast while widening the window of opportunity for image acquisition. As a result, a 3-fold increase in volumetric coverage is obtained at no extra cost in scan time, while image quality is improved. In conclusion, this thesis presented novel methods to address some of the main challenges of MRI of atherosclerosis: the suppression of motion and flow artifacts for improved visualization of vessel lumens, walls and plaques. These methods were shown to significantly improve image quality in healthy human subjects, as well as the scan efficiency and ease of use of MRI. Extensive validation in patient populations is now warranted to ascertain their diagnostic performance. Eventually, these methods may bring the use of atherosclerosis MRI closer to clinical practice.
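To give the flavour of the compressed-sensing step, a minimal sketch: recover a sparse signal from randomly undersampled Fourier data via iterative soft thresholding (ISTA). This is a 1D toy under stated assumptions, not the thesis' reconstruction, which would involve wavelets, 2D/3D k-space and multicoil data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
x_true = np.zeros(n)                          # sparse ground-truth signal
x_true[rng.choice(n, size=8, replace=False)] = rng.normal(0, 1, 8)

mask = rng.random(n) < 0.3                    # keep ~30% of k-space
y = np.fft.fft(x_true, norm="ortho")[mask]    # undersampled measurements

def A(x):                                     # forward model: FFT + subsample
    return np.fft.fft(x, norm="ortho")[mask]

def At(k):                                    # adjoint: zero-filled inverse FFT
    z = np.zeros(n, dtype=complex)
    z[mask] = k
    return np.fft.ifft(z, norm="ortho")

x = np.zeros(n, dtype=complex)
lam = 0.02                                    # sparsity weight (assumed)
for _ in range(300):                          # ISTA iterations
    x = x - At(A(x) - y)                      # gradient step (step size 1)
    mag = np.abs(x)                           # complex soft thresholding
    x *= np.maximum(mag - lam, 0) / np.maximum(mag, 1e-12)

# Relative reconstruction error; small values mean aliasing was removed.
print(np.linalg.norm(np.abs(x) - np.abs(x_true)) / np.linalg.norm(x_true))
```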

Relevance: 20.00%

Abstract:

This paper investigates the impacts of globalization processes on the Swiss business elite community during the 1980-2010 period. Switzerland was characterized in the 20th century by its extraordinary stability and by the strong cohesion of its elite community. To study recent changes, we focus on Switzerland's 110 largest firms, adopting a diachronic perspective based on three elite cohorts (1980, 2000 and 2010). An analysis of interlocking directorates allows us to describe the decline of the Swiss corporate network. The second analysis focuses on top managers' profiles in terms of education, nationality and participation in national community networks that used to reinforce the cultural cohesion of the Swiss elite community, especially the militia army. Our results highlight a slow but profound transformation of top-management profiles, characterized by a decline of traditional national elements of legitimacy and the emergence of new "global" elements. The diachronic and combined analysis brings to light the strong cultural changes experienced by the national business elite community.
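As a toy illustration of the interlocking-directorates measure, two firms are tied when they share at least one board member, and network density is the share of realized ties; all firm and director names below are invented:

```python
from itertools import combinations

# Invented board memberships: firm -> set of directors.
boards = {
    "FirmA": {"Muller", "Weber", "Rossi"},
    "FirmB": {"Weber", "Favre"},
    "FirmC": {"Rossi", "Keller"},
    "FirmD": {"Blanc"},
}

# An interlock exists when two boards share at least one person.
ties = [(f, g) for f, g in combinations(sorted(boards), 2)
        if boards[f] & boards[g]]

n = len(boards)
density = len(ties) / (n * (n - 1) / 2)
print(ties)                         # [('FirmA', 'FirmB'), ('FirmA', 'FirmC')]
print(f"density = {density:.2f}")   # 2 of 6 possible ties -> 0.33
```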

Relevance: 20.00%

Abstract:

Recognition of environmental sounds is believed to proceed through discrimination steps from broad to more narrow categories. Very little is known about the neural processes that underlie fine-grained discrimination within narrow categories, or about their plasticity in relation to newly acquired expertise. We investigated how the cortical representation of birdsongs is modulated by brief training to recognize individual species. During a 60-minute session, participants learned to recognize a set of birdsongs; they significantly improved their performance for trained (T) but not control (C) species, which were counterbalanced across participants. Auditory evoked potentials (AEPs) were recorded during pre- and post-training sessions. Pre- vs. post-training changes in AEPs differed significantly between T and C species: i) at 206-232 ms post-stimulus onset within a cluster on the anterior part of the left superior temporal gyrus; ii) at 246-291 ms in the left middle frontal gyrus; and iii) at 512-545 ms in the left middle temporal gyrus as well as bilaterally in the cingulate cortex. All effects were driven by weaker activity for T than for C species. Thus, expertise in discriminating T species modulated early stages of semantic processing, during and immediately after the time window that sustains the discrimination between human and animal vocalizations. Moreover, the training-induced plasticity is reflected in the sharpening of a left-lateralized semantic network, including the anterior part of the temporal convexity and the frontal cortex. Training to identify birdsongs, however, also influenced the processing of C species, but at a much later stage. Correct discrimination of untrained sounds seems to require an additional step resulting from lower-level feature analysis, such as apperception. We therefore suggest that access to objects within an auditory semantic category differs and depends on the subject's level of expertise. More specifically, correct intra-categorical auditory discrimination of untrained items follows the temporal hierarchy and occurs at a late stage of semantic processing. In contrast, correct categorization of individually trained stimuli occurs earlier, during a period contemporaneous with the discrimination between human and animal vocalizations, and involves a parallel semantic pathway requiring expertise.
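A minimal sketch of the kind of window-based AEP comparison reported above, on synthetic data: compute each subject's mean amplitude in one latency window (206-232 ms), then test whether the pre-to-post change differs between trained and control species. Sampling rate, subject count and effect size are assumptions, not the study's values:

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(1)
n_sub, n_t, fs = 16, 600, 1000            # subjects, samples, Hz (assumed)
t_ms = np.arange(n_t) / fs * 1000         # latency axis in ms

def mean_amp(x, lo, hi):
    """Mean amplitude per subject within a latency window (ms)."""
    win = (t_ms >= lo) & (t_ms <= hi)
    return x[:, win].mean(axis=1)

# Synthetic AEPs (subjects x time) for each condition.
aep = {c: rng.normal(0.0, 1.0, (n_sub, n_t))
       for c in ("T_pre", "T_post", "C_pre", "C_post")}
aep["T_post"][:, (t_ms >= 206) & (t_ms <= 232)] -= 0.4  # weaker T activity

diff_T = mean_amp(aep["T_post"], 206, 232) - mean_amp(aep["T_pre"], 206, 232)
diff_C = mean_amp(aep["C_post"], 206, 232) - mean_amp(aep["C_pre"], 206, 232)
print(ttest_rel(diff_T, diff_C))          # paired test of the training effect
```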

Relevance: 20.00%

Abstract:

PURPOSE: According to estimates, around 230 people die as a result of radon exposure in Switzerland. This public health concern makes reliable indoor radon prediction and mapping methods necessary in order to improve risk communication to the public. The aim of this study was to develop an automated method to classify lithological units according to their radon characteristics, and to develop mapping and predictive tools in order to improve local radon prediction. METHOD: About 240,000 indoor radon concentration (IRC) measurements in about 150,000 buildings were available for our analysis. The automated classification of lithological units was based on k-medoids clustering via pairwise Kolmogorov distances between the IRC distributions of lithological units. For IRC mapping and prediction we used random forests and Bayesian additive regression trees (BART). RESULTS: The automated classification groups lithological units well in terms of their IRC characteristics. In particular, the IRC differences in metamorphic rocks such as gneiss are well revealed by this method. The maps produced by random forests soundly represent the regional differences in IRCs in Switzerland and improve the spatial detail compared to existing approaches. We could explain 33% of the variation in IRC data with random forests. Additionally, the variable importances evaluated by random forests show that building characteristics are less important predictors of IRC than spatial/geological influences. BART could explain 29% of the IRC variability and produced maps that indicate the prediction uncertainty. CONCLUSION: Ensemble regression trees are a powerful tool to model and understand the multidimensional influences on IRCs. Automatic clustering of lithological units complements this method by facilitating the interpretation of the radon properties of rock types. This study provides an important element for radon risk communication. Future approaches should consider further variables, such as soil-gas radon measurements, as well as more detailed geological information.
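A minimal sketch of the classification step: compute pairwise Kolmogorov-Smirnov distances between the IRC distributions of lithological units and cluster the distance matrix with a plain PAM-style k-medoids. The IRC samples are synthetic lognormals, and the cluster count is an assumption:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)
# Synthetic IRC samples per lithological unit (lognormal, invented params).
units = {f"unit{i}": rng.lognormal(mean=m, sigma=0.8, size=300)
         for i, m in enumerate([4.0, 4.1, 5.0, 5.1, 6.0])}
names = list(units)
n = len(names)

# Pairwise Kolmogorov-Smirnov distances between IRC distributions.
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = ks_2samp(units[names[i]], units[names[j]]).statistic

def k_medoids(D, k, iters=100):
    medoids = list(range(k))                       # naive initialization
    for _ in range(iters):
        labels = np.argmin(D[:, medoids], axis=1)  # nearest-medoid labels
        new = []
        for c in range(k):
            members = np.where(labels == c)[0]
            if members.size == 0:                  # keep medoid of empty cluster
                new.append(medoids[c])
                continue
            costs = D[np.ix_(members, members)].sum(axis=1)
            new.append(int(members[np.argmin(costs)]))
        if new == medoids:                         # converged
            break
        medoids = new
    return medoids, np.argmin(D[:, medoids], axis=1)

medoids, labels = k_medoids(D, k=3)
print({name: int(lab) for name, lab in zip(names, labels)})
```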

Relevance: 20.00%

Abstract:

Ample evidence indicates that inhibitory control (IC), a key executive component referring to the ability to suppress cognitive or motor processes, relies on a right-lateralized fronto-basal brain network. However, whether and how IC can be improved with training, and the underlying neuroplastic mechanisms, remain largely unresolved. We used functional and structural magnetic resonance imaging to measure the effects of 2 weeks of training with a Go/NoGo task specifically designed to improve frontal top-down IC mechanisms. The training-induced behavioral improvements were accompanied by a decrease in neural activity to inhibition trials within the right pars opercularis and triangularis, and in the left pars orbitalis of the inferior frontal gyri. Analyses of changes in brain anatomy induced by the IC training revealed increases in grey matter volume in the right pars orbitalis and modulations of white matter microstructure in the right pars triangularis. The task-specificity of the training effects was confirmed by an absence of change in neural activity in a control working memory task. Our combined anatomical and functional findings indicate that differential patterns of functional and structural plasticity between and within the inferior frontal gyri enhanced the speed of top-down inhibition processes and, in turn, IC proficiency. The results suggest that training-based interventions might help overcome the anatomical and functional deficits of the inferior frontal gyri manifesting in inhibition-related clinical conditions. More generally, we demonstrate how multimodal neuroimaging investigations of training-induced neuroplasticity can reveal novel anatomo-functional dissociations within frontal executive brain networks. Hum Brain Mapp 36:2527-2543, 2015. © 2015 Wiley Periodicals, Inc.
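To make the task logic concrete, a toy sketch of a Go/NoGo paradigm: frequent Go trials require a speeded response, rare NoGo trials require withholding it, and commission errors on NoGo trials index failures of inhibitory control. Trial counts, proportions and the error probability are all invented, not the study's parameters:

```python
import random

random.seed(0)
# 75% Go / 25% NoGo trial sequence (proportions assumed for illustration).
trials = random.choices(["go", "nogo"], weights=[0.75, 0.25], k=200)

def simulated_response(trial, p_commission=0.2):
    """True if the simulated participant presses the button."""
    if trial == "go":
        return True                      # assume all Go responses are made
    return random.random() < p_commission  # failed inhibition on NoGo

responses = [simulated_response(t) for t in trials]
nogo_presses = [r for t, r in zip(trials, responses) if t == "nogo"]
print(f"commission error rate: {sum(nogo_presses) / len(nogo_presses):.2f}")
```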

Relevance: 20.00%

Abstract:

Currently available molecular biology tools allow forensic scientists to characterize DNA evidence found at crime scenes for a large variety of samples, including those of limited quantity and quality, and to achieve high levels of individualization. Yet standard forensic markers provide limited or no results when applied to mixed DNA samples in which the contributors are present in very different proportions (unbalanced DNA mixtures). This becomes an issue mostly for the analysis of trace samples collected on the victim or from touched objects. To this end, we recently proposed an innovative type of genetic marker, named DIP-STR, that relies on pairing deletion/insertion polymorphisms (DIP) with standard short tandem repeats (STR). This novel compound marker allows detection of the minor DNA contributor in a DNA mixture of any gender and cellular origin with unprecedented resolution (beyond a DNA ratio of 1:1000). To provide a novel analytical tool useful in practice to routine forensic laboratories, this article describes the first set of 10 DIP-STR markers selected according to forensic technical standards. The novel DIP-STR regions are short (between 146 and 271 bp), include only highly polymorphic tri-, tetra- and pentanucleotide tandem repeats, and are located on different chromosomes or chromosomal arms to provide statistically independent results. This novel set of DIP-STRs can target the amplification of 0.03-0.1 ng of DNA mixed with a 1000-fold excess of major DNA. DIP-STR relative allele frequencies are estimated based on a survey of 103 Swiss individuals. Finally, this study provides an estimate of the occurrence of informative alleles and a calculation of the corresponding random match probability of the detected minor DIP-STR genotype, assessed across 10,506 pairwise conceptual mixtures.
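A minimal sketch of a random-match-probability calculation of the general kind mentioned above: under Hardy-Weinberg equilibrium a homozygous genotype has frequency p², a heterozygous one 2pq, and statistical independence across markers lets the frequencies multiply. Marker names and allele frequencies are invented; the article's actual computation for DIP-STR genotypes may differ in detail:

```python
# Invented minor-contributor profile: marker -> allele frequencies
# (two frequencies = heterozygote, one frequency = homozygote).
profile = {
    "DIP-STR_1": [0.12, 0.08],
    "DIP-STR_2": [0.20],
    "DIP-STR_3": [0.05, 0.15],
}

def genotype_frequency(allele_freqs):
    if len(allele_freqs) == 1:          # homozygote: p^2
        return allele_freqs[0] ** 2
    p, q = allele_freqs                 # heterozygote: 2pq
    return 2 * p * q

rmp = 1.0
for marker, allele_freqs in profile.items():
    rmp *= genotype_frequency(allele_freqs)   # independent markers multiply

print(f"RMP = {rmp:.3e}")   # probability a random person matches the profile
```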

Relevance: 20.00%

Abstract:

Characterizing the geological features and structures of inaccessible rock cliffs in three dimensions is needed to assess natural hazards such as rockfalls and rockslides, and also to perform investigations aimed at mapping geological contacts and building stratigraphic and fold models. Indeed, detailed 3D data such as LiDAR point clouds allow accurate study of hazard processes and of the structure of geological features, in particular in vertical and overhanging rock slopes. Thus, 3D geological models have great potential to be applied to a wide range of geological investigations, both in research and in applied geology projects such as mines, tunnels and reservoirs. Recent developments in ground-based remote sensing techniques (LiDAR, photogrammetry and multispectral/hyperspectral imaging) are revolutionizing the acquisition of morphological and geological information. As a consequence, there is great potential for improving the modeling of geological bodies, as well as of failure mechanisms and stability conditions, by integrating detailed remotely sensed data. During the past ten years, several large rockfall events occurred along important transportation corridors where millions of people travel every year (Switzerland: Gotthard motorway and railway; Canada: Sea-to-Sky highway between Vancouver and Whistler). These events show that there is still a lack of knowledge concerning the detection of potential rockfalls, leaving mountain residential settlements and roads at high risk. It is necessary to understand the main factors that destabilize rocky outcrops, even when inventories are lacking and no clear morphological evidence of rockfall activity is observed. In order to improve the forecasting of potential future landslides, it is crucial to understand the evolution of rock slope stability. Defining the areas theoretically most prone to rockfalls can be particularly useful for simulating trajectory profiles and generating hazard maps, which are the basis for land-use planning in mountainous regions. The most important questions to address in order to assess rockfall hazard are: Where are the most probable sources of future rockfalls located? What are the frequencies of occurrence of these rockfalls? I characterized the fracturing patterns in the field and with LiDAR point clouds. Afterwards, I developed a model to compute failure mechanisms on terrestrial point clouds in order to assess the susceptibility to rockfalls at the cliff scale. Similar procedures were already available to evaluate rockfall susceptibility based on aerial digital elevation models. This new model makes it possible to detect the most susceptible rockfall sources with unprecedented detail in vertical and overhanging areas. The computed most probable rockfall source areas in the granitic cliffs of Yosemite Valley and the Mont-Blanc massif were then compared to inventoried rockfall events to validate the calculation methods. Yosemite Valley was chosen as a test area because it has particularly strong rockfall activity (about one rockfall every week), which leads to a high rockfall hazard. The west face of the Dru was also chosen for its relevant rockfall activity, and especially because it was affected by some of the largest rockfalls that occurred in the Alps during the last 10 years. Moreover, both areas were suitable because of their huge vertical and overhanging cliffs, which are difficult to study with classical methods.
Limit equilibrium models were applied to several case studies to evaluate the effects of different parameters on the stability of rock slope areas. The impact of the degradation of rock bridges on the stability of large compartments in the west face of the Dru was assessed using finite element modeling. In particular, I conducted a back-analysis of the large rockfall event of 2005 (265'000 m3), which removed the entire southwest pillar, by integrating field observations of joint conditions, the characteristics of the fracturing pattern and the results of geomechanical tests on the intact rock. These analyses improved our understanding of the factors that influence the stability of rock compartments and were used to define the most probable future rockfall volumes at the Dru. Terrestrial laser scanning point clouds were also successfully employed to perform geological mapping in 3D, using the intensity of the backscattered signal. Another technique to obtain vertical geological maps is to combine a triangulated TLS mesh with 2D geological maps. At El Capitan (Yosemite Valley) we built a georeferenced vertical map of the main plutonic rocks, which was used to investigate the reasons for preferential rockwall retreat. Additional efforts to characterize the erosion rate were made at Monte Generoso (Ticino, southern Switzerland), where I attempted to improve the estimation of long-term erosion by also taking into account the volumes of unstable rock compartments. The following points summarize the main outputs of my research: The new model to compute failure mechanisms and rockfall susceptibility with 3D point clouds allows accurate definition of the most probable rockfall source areas at the cliff scale. The analysis of the rock bridges at the Dru shows the potential of integrating detailed measurements of the fractures into geomechanical models of rock mass stability. The correction of the LiDAR intensity signal makes it possible to classify a point cloud according to rock type and then use this information to model complex geological structures. The integration of these results on rock mass fracturing and composition with existing methods can improve rockfall hazard assessments and enhance the interpretation of the evolution of steep rock slopes.
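A minimal sketch of one building block of such a point-cloud susceptibility workflow: estimate the local plane orientation of a LiDAR neighborhood by PCA/SVD, then apply a classical planar-sliding kinematic test (the discontinuity must dip less steeply than the slope but more steeply than the friction angle). The synthetic neighborhood and all angles are invented, and the test ignores dip-direction mismatch:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic neighborhood: points on a plane dipping ~55 degrees, plus noise.
dip = np.radians(55.0)
u = rng.uniform(-1.0, 1.0, (100, 2))
pts = np.c_[u[:, 0], u[:, 1] * np.cos(dip), u[:, 1] * np.sin(dip)]
pts += rng.normal(0.0, 0.01, pts.shape)

# Plane normal = direction of least variance (last right-singular vector).
centered = pts - pts.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
normal = vt[-1]
if normal[2] < 0:
    normal = -normal                          # orient the normal upward
plane_dip = np.degrees(np.arccos(normal[2]))  # dip = normal's tilt from vertical

def planar_sliding_possible(plane_dip, slope_dip=70.0, friction_angle=35.0):
    """Simplified daylighting condition for planar sliding."""
    return friction_angle < plane_dip < slope_dip

print(round(float(plane_dip), 1), planar_sliding_possible(plane_dip))
```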

Relevance: 20.00%

Abstract:

INTRODUCTION: Occupational exposure to grain dust causes respiratory symptoms and pathologies. To decrease these effects, major changes have occurred in the grain processing industry over the last twenty years. However, there are no data on the effects of these changes on workers' respiratory health. OBJECTIVES: The aim of this study was to evaluate the respiratory health of grain workers and farmers involved in different steps of the processing of wheat, the most frequently used cereal in Europe, fifteen years after major improvements in collective protective equipment due to mechanisation. MATERIALS AND METHOD: Information on estimated personal exposure to wheat dust was collected from 87 workers exposed to wheat dust and from 62 controls. Lung function (FEV1, FVC and PEF), exhaled nitric oxide (FENO) and respiratory symptoms were assessed after the period of highest exposure to wheat during the year. Linear regression models were used to explore the associations between exposure indices and respiratory effects. RESULTS: Acute symptoms (cough, sneezing, runny nose, scratchy throat) were significantly more frequent in exposed workers than in controls. Higher mean exposure level, higher cumulative exposure and chronic exposure to more than 6 mg.m-3 of inhaled wheat dust were significantly associated with decreased spirometric parameters: FEV1 and PEF decreased by 40 ml and 123 ml.s-1 with increased mean exposure level, FEV1 and FVC by 0.4 ml and 0.5 ml per 100 h.mg.m-3 of cumulative exposure, and FEV1 and FVC by 20 ml and 20 ml per 100 h of exposure above 6 mg.m-3. However, no increase in FENO was associated with increased exposure indices. CONCLUSIONS: The lung function of wheat-related workers is still affected by cumulative exposure to wheat dust, despite improvements in the use of collective protective equipment.
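An illustrative sketch of the exposure-response modeling: regress FEV1 on cumulative wheat-dust exposure by ordinary least squares, so the slope estimates the decrement per unit of exposure. The simulated slope reuses the reported 0.4 ml per 100 h.mg.m-3 figure; everything else (sample values, noise level) is invented:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 149                                     # 87 exposed + 62 controls
cum_exposure = rng.uniform(0.0, 50.0, n)    # cumulative index, 100 h.mg.m-3
fev1 = 3800.0 - 0.4 * cum_exposure + rng.normal(0.0, 5.0, n)   # ml (synthetic)

# Ordinary least squares with an intercept column.
X = np.c_[np.ones(n), cum_exposure]
beta, *_ = np.linalg.lstsq(X, fev1, rcond=None)
print(f"estimated decrement: {-beta[1]:.2f} ml per 100 h.mg.m-3")
```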
