108 results for High-Order Accurate Scheme
at Université de Lausanne, Switzerland
The Mixture Transition Distribution Model for High-Order Markov Chains and Non-Gaussian Time Series.
Abstract:
Phosphorylation of transcription factors is a rapid and reversible process linking cell signaling and control of gene expression, and understanding how it controls transcription factor function is therefore one of the challenges of functional genomics. We performed such an analysis for the forkhead transcription factor FOXC2, which is mutated in the human hereditary disease lymphedema-distichiasis and important for the development of venous and lymphatic valves and lymphatic collecting vessels. We found that FOXC2 is phosphorylated in a cell-cycle dependent manner on eight evolutionarily conserved serine/threonine residues, seven of which are clustered within a 70 amino acid domain. Surprisingly, the mutation of phosphorylation sites or a complete deletion of the domain did not affect the transcriptional activity of FOXC2 in a synthetic reporter assay. However, overexpression of the wild type or phosphorylation-deficient mutant resulted in overlapping but distinct gene expression profiles, suggesting that binding of FOXC2 to individual sites under physiological conditions is affected by phosphorylation. To gain direct insight into the role of FOXC2 phosphorylation, we performed comparative genome-wide location analysis (ChIP-chip) of wild type and phosphorylation-deficient FOXC2 in primary lymphatic endothelial cells. The effect of loss of phosphorylation on FOXC2 binding to genomic sites ranged from no effect to nearly complete inhibition of binding, suggesting a mechanism by which the FOXC2 transcriptional program can be differentially regulated depending on FOXC2 phosphorylation status. Based on these results, we propose an extension to the enhanceosome model, where a network of genomic context-dependent DNA-protein and protein-protein interactions not only distinguishes a functional site from a nonphysiological site, but also determines whether binding to the functional site can be regulated by phosphorylation. Moreover, our results indicate that FOXC2 may have different roles in quiescent versus proliferating lymphatic endothelial cells in vivo.
Abstract:
Introduction: Rotenone is a botanical pesticide derived from extracts of Derris roots, traditionally used as a piscicide but also as an industrial insecticide for home gardens. Its mechanism of action is potent inhibition of the mitochondrial respiratory chain: by blocking electron transport at complex I, it uncouples oxidative phosphorylation. Despite its classification as mildly to moderately toxic to humans (estimated LD50, 300-500 mg/kg), the acute toxicity of rotenone varies strikingly with the formulation (solvents). Human fatalities with rotenone-containing insecticides have rarely been reported, and a rapid deterioration within a few hours of ingestion has been described previously in one case. Case report: A 49-year-old Tamil man with a history of asthma ingested 250 mL of an insecticide containing 1.24% rotenone (3.125 g, 52.1-62.5 mg/kg) in a suicide attempt at home. The product was not labeled as toxic. One hour later, he vomited repeatedly and emergency services were alerted. He was found unconscious with irregular respiration and was intubated. On arrival at the emergency department, he was comatose (GCS 3) with fixed and dilated pupils and absent corneal reflexes. Physical examination revealed hemodynamic instability with hypotension (55/30 mmHg) and bradycardia (52 bpm). Significant laboratory findings were lactic acidosis (pH 6.97, lactate 17 mmol/L) and hypokalemia (2 mmol/L). Cranial computed tomography (CT) showed early cerebral edema. A single dose of activated charcoal was given. Intravenous hydration, ephedrine, repeated boluses of dobutamine, and a continuous infusion of 90 micrograms/h norepinephrine stabilized blood pressure temporarily. Atropine had a minimal effect on heart rate (58 bpm). Intravenous lipid emulsion was considered (log Pow 4.1), but there was a rapid deterioration with refractory hypotension and acute circulatory failure. The patient died 5 h after ingestion of the insecticide. No autopsy was performed. Quantitative analysis of serum by liquid chromatography coupled to high-resolution/accurate-mass mass spectrometry (LC-HR/AM-MS) found 560 ng/mL rotenone. Other substances were excluded by gas chromatography-mass spectrometry (GC-MS) and liquid chromatography-tandem mass spectrometry (LC-MS/MS). Conclusion: The clinical course was characterized by early severe symptoms and a rapidly fatal evolution, compatible with inhibition of mitochondrial energy supply. Although rotenone is classified as mildly to moderately toxic, physicians must be aware that suicidal ingestion of emulsified concentrates may be rapidly fatal.

(n=3): stridor, cyanosis, cough (one each). Local swelling after chewing or swallowing soap developed at the earliest after 20 minutes and persisted beyond 24 hours in some cases. Treatment with antihistamines and/or steroids relieved the symptoms in 9 cases. Conclusion: Bar soap ingestion by seniors carries a risk of severe local reactions. Half of the patients developed symptoms, predominantly swelling of the tongue and/or lips (38%). Cognitive impairment, particularly dementia (37%), may increase the risk of unintentional ingestion. Chewing and intraoral retention of soap lead to prolonged contact with the mucous membranes. Age-associated physiological changes of the oral mucosa probably promote the irritant effects of the surfactants. Medical treatment with antihistamines and corticosteroids usually leads to rapid resolution of symptoms. Without treatment, there may be a risk of airway obstruction.
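As a check on the rotenone dose range reported above, assuming the 1.24% concentration is weight per volume and that the 52.1-62.5 mg/kg span reflects an estimated body weight of 50-60 kg:

$$ 250\,\mathrm{mL}\times\frac{1.24\,\mathrm{g}}{100\,\mathrm{mL}}\approx 3.1\,\mathrm{g},\qquad \frac{3.125\,\mathrm{g}}{60\,\mathrm{kg}}\approx 52.1\,\mathrm{mg/kg},\qquad \frac{3.125\,\mathrm{g}}{50\,\mathrm{kg}}=62.5\,\mathrm{mg/kg}. $$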
Abstract:
Objectives: Our aim was to study the brain regions involved in a divided-attention tracking task related to driving in occasional cannabis smokers. In addition, we assessed the relationship between THC levels in whole blood and changes in brain activity and behavioural and psychomotor performance. Methods: Twenty-one smokers participated in two independent cross-over fMRI experiments before and after smoking cannabis and a placebo. The paradigm was based on a visuo-motor tracking task, alternating active tracking blocks with passive tracking viewing and a rest condition. Half of the active tracking conditions included randomly presented traffic lights as distractors. Blood samples were taken at regular intervals to determine the time-profiles of the major cannabinoids. Their levels during the fMRI experiments were interpolated from concentrations measured by GC-MS/MS just before and after brain imaging. Results: Behavioural data, such as the distance between target and cursor, the time of correct tracking and the reaction time to traffic-light appearance, showed a statistically significant impairment of the subjects' skills due to THC intoxication. The highest THC blood concentrations were measured soon after smoking and ranged between 28.8 and 167.9 ng/mL. These concentrations fell to values of a few ng/mL during the fMRI. The fMRI results showed that, under the effect of THC, high-order visual areas (V3d) and the intraparietal sulcus (IPS) exhibited a higher activation compared to the control condition. The opposite comparison showed a decrease of activation during the THC condition in the anterior cingulate gyrus and orbitofrontal areas. In these locations, the BOLD signal showed a negative correlation with the THC level. Conclusion: Acute cannabis smoking significantly impairs performance and brain activity during active tracking tasks, partly reorganizing the recruitment of brain areas of the attention network. Neural activity in the anterior cingulate might be responsible for the changes in the cognitive control required in our divided-attention task.
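The abstract does not state the interpolation rule; a minimal sketch, assuming log-linear interpolation between the pre- and post-imaging GC-MS/MS measurements (blood THC decays roughly exponentially over such intervals); the function name is illustrative:

import numpy as np

def interpolate_thc(t, t0, c0, t1, c1):
    """Estimate blood THC (ng/mL) at time t between two measured samples.

    Assumes exponential decay between the sample at t0 (c0 ng/mL) and the
    sample at t1 (c1 ng/mL), i.e. linear interpolation in log-concentration.
    """
    frac = (t - t0) / (t1 - t0)
    return float(np.exp(np.log(c0) + frac * (np.log(c1) - np.log(c0))))

# Example: 60 ng/mL just before the scan, 5 ng/mL just after a 40-min session
print(interpolate_thc(20.0, 0.0, 60.0, 40.0, 5.0))  # ~17.3 ng/mL mid-scan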
Abstract:
Inhibitory control refers to the ability to suppress planned or ongoing cognitive or motor processes. Electrophysiological indices of inhibitory control failure have been found to manifest even before the presentation of the stimuli triggering the inhibition, suggesting that pre-stimulus brain-states modulate inhibition performance. However, previous electrophysiological investigations of the state-dependency of inhibitory control were based on averaged event-related potentials (ERPs), a method eliminating the variability in the ongoing brain activity not time-locked to the event of interest. These studies thus left unresolved whether spontaneous variations in the brain-state immediately preceding unpredictable inhibition-triggering stimuli also influence inhibitory control performance. To address this question, we applied single-trial EEG topographic analyses to the time interval immediately preceding NoGo stimuli in conditions where the responses to NoGo trials were correctly inhibited [correct rejection (CR)] vs. committed [false alarms (FAs)] during an auditory spatial Go/NoGo task. We found a specific configuration of the EEG voltage field manifesting more frequently before correctly inhibited responses to NoGo stimuli than before FAs. There was no evidence for an EEG topography occurring more frequently before FAs than before CR. The visualization of distributed electrical source estimations of the EEG topography preceding successful response inhibition suggested that it resulted from the activity of a right fronto-parietal brain network. Our results suggest that the fluctuations in the ongoing brain activity immediately preceding stimulus presentation contribute to the behavioral outcomes during an inhibitory control task. Our results further suggest that the state-dependency of sensory-cognitive processing might not only concern perceptual processes, but also high-order, top-down inhibitory control mechanisms.
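As a rough illustration of the single-trial topographic logic (not the authors' exact pipeline), each trial's pre-stimulus voltage map can be strength-normalized and clustered, and the frequency of each cluster map compared between correct rejections and false alarms; a sketch with illustrative names:

import numpy as np
from sklearn.cluster import KMeans

def topographic_clusters(maps_cr, maps_fa, n_maps=4, seed=0):
    """maps_*: (trials, electrodes) pre-stimulus voltage maps, one per trial.

    Each map is average-referenced and divided by its global field power
    (GFP) so that clustering reflects topography, not overall strength.
    Returns per-cluster occurrence frequencies for CR and FA trials.
    """
    X = np.vstack([maps_cr, maps_fa])
    X = X - X.mean(axis=1, keepdims=True)    # average reference
    X = X / X.std(axis=1, keepdims=True)     # GFP normalization
    labels = KMeans(n_clusters=n_maps, random_state=seed, n_init=10).fit_predict(X)
    cr = np.bincount(labels[: len(maps_cr)], minlength=n_maps) / len(maps_cr)
    fa = np.bincount(labels[len(maps_cr):], minlength=n_maps) / len(maps_fa)
    return cr, fa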
Abstract:
Abstract: The occupational health risk involved with handling nanoparticles is the probability that a worker will experience an adverse health effect: this is calculated as a function of the worker's exposure relative to the potential biological hazard of the material. Addressing the risks of nanoparticles therefore requires knowledge of occupational exposure and of the release of nanoparticles into the environment, as well as toxicological data. However, information on exposure is currently not systematically collected, so this risk assessment lacks quantitative data. This thesis aimed, first, at creating the fundamental data necessary for a quantitative assessment and, second, at evaluating methods to measure occupational nanoparticle exposure. The first goal was to determine what is being used, and where, in Swiss industry. This was followed by an evaluation of the adequacy of existing measurement methods for assessing workplace nanoparticle exposure to complex size distributions and concentration gradients. The study was conceived as a series of methodological evaluations aimed at better understanding nanoparticle measurement devices and methods. It focused on inhalation exposure to airborne particles, as respiration is considered to be the most important entry pathway for nanoparticles into the body in terms of risk. The targeted survey (pilot study) was conducted as a feasibility study for a later nationwide survey on the handling of nanoparticles and the use of specific protective measures in industry. The study consisted of targeted phone interviews with health and safety officers of Swiss companies that were believed to use or produce nanoparticles. This was followed by a representative survey on the level of nanoparticle usage in Switzerland, designed based on the results of the pilot study. The survey was conducted among a representative selection of clients of the Swiss National Accident Insurance Fund (SUVA), covering about 85% of Swiss production companies. The third part of this thesis focused on the methods used to measure nanoparticles. Several pre-studies were conducted to study the limits of commonly used measurement devices in the presence of nanoparticle agglomerates. This focus was chosen because several discussions with users and producers of the measurement devices raised questions about their accuracy when measuring nanoparticle agglomerates, and because, at the same time, the two survey studies revealed that such powders are frequently used in industry. The first preparatory experiment focused on the accuracy of the scanning mobility particle sizer (SMPS), which showed an improbable size distribution when measuring powders of nanoparticle agglomerates. Furthermore, the thesis includes a series of smaller experiments that took a closer look at problems encountered with other measurement devices in the presence of nanoparticle agglomerates: condensation particle counters (CPC), the portable aerosol spectrometer (PAS, a device to estimate the aerodynamic diameter), and diffusion size classifiers. Some initial feasibility tests of the efficiency of filter-based sampling and subsequent counting of carbon nanotubes (CNT) were conducted last. The pilot study provided a detailed picture of the types and amounts of nanoparticles used and of the knowledge of the health and safety experts in the companies.
Considerable maximal quantities (> 1'000 kg/year per company) of Ag, Al-Ox, Fe-Ox, SiO2, TiO2, and ZnO (mainly first-generation particles) were declared by the contacted Swiss companies. The median quantity of handled nanoparticles, however, was 100 kg/year. The representative survey was conducted by contacting by post a representative selection of 1'626 SUVA clients (Swiss National Accident Insurance Fund). It allowed estimation of the number of companies and workers dealing with nanoparticles in Switzerland. The extrapolation from the surveyed companies to all companies of the Swiss production sector suggested that 1'309 workers (95% confidence interval: 1'073 to 1'545) of the Swiss production sector are potentially exposed to nanoparticles in 586 companies (145 to 1'027). These numbers correspond to 0.08% (0.06% to 0.09%) of all workers and to 0.6% (0.2% to 1.1%) of companies in the Swiss production sector. To measure airborne concentrations of sub-micrometre-sized particles, a few well-known methods exist. However, it was unclear how well the different instruments perform in the presence of the often quite large agglomerates of nanostructured materials. The evaluation of devices and methods focused on nanoparticle agglomerate powders. It allowed the identification of the following potential sources of inaccurate measurements at workplaces with considerably high concentrations of airborne agglomerates:
- A standard SMPS showed bi-modal particle size distributions when measuring large nanoparticle agglomerates.
- Differences in the range of a factor of a thousand were shown between diffusion size classifiers and CPC/SMPS.
- The comparison between CPC/SMPS and the portable aerosol spectrometer (PAS) was much better, but depending on the concentration, size or type of the powders measured, the differences could still reach an order of magnitude.
- Specific difficulties and uncertainties in the assessment of workplaces were identified: background particles can interact with particles created by a process, which makes accounting for the background concentration difficult.
- Electric motors produce high numbers of nanoparticles and confound the measurement of process-related exposure.
Conclusion: The surveys showed that nanoparticle applications exist in many industrial sectors in Switzerland and that some companies already use large quantities of them. The representative survey demonstrated a low prevalence of nanoparticle usage in most branches of Swiss industry and led to the conclusion that the introduction of applications using nanoparticles (especially outside industrial chemistry) is only beginning. Even though the number of potentially exposed workers was reportedly rather small, it nevertheless underscores the need for exposure assessments. Understanding exposure and how to measure it correctly is very important because the potential health effects of nanomaterials are not yet fully understood. The evaluation showed that many devices and methods for measuring nanoparticles need to be validated for nanoparticle agglomerates before large exposure assessment studies can begin.
Summary: The health risk of nanoparticles at the workplace is the probability that a worker suffers an adverse health effect when exposed to this material; it is usually calculated as the product of hazard and exposure. A thorough appraisal of the possible risks of nanomaterials therefore requires information on the release of such materials into the environment on the one hand, and on the exposure of workers on the other. Much of this information is still not collected systematically and is therefore lacking for risk analyses. The aim of this doctoral thesis was to lay the groundwork for a quantitative estimate of workplace exposure to nanoparticles and to evaluate the methods needed to measure such exposure. The study was to investigate to what extent nanoparticles are already used in Swiss industry, how many workers potentially come into contact with them, and whether the available measurement technology is adequate for the necessary workplace exposure measurements. It focused on exposure to airborne particles, because respiration is regarded as the main entry route for particles into the body. The thesis is built on three phases: a qualitative survey (pilot study), a representative Swiss survey, and several technical studies serving the specific understanding of the possibilities and limits of individual measurement devices and techniques. The qualitative telephone survey was conducted as a preliminary study for a national, representative survey of Swiss industry. It sought information on the occurrence of nanoparticles and on the protective measures applied. The study consisted of targeted telephone interviews with occupational health and safety specialists of Swiss companies. The companies were selected on the basis of publicly available information indicating that they handle nanoparticles. The second part of the thesis was the representative study evaluating the prevalence of nanoparticle applications in Swiss industry. It built on information from the pilot study and was carried out with a representative selection of companies insured by the Swiss National Accident Insurance Fund (SUVA), thereby covering the majority of Swiss companies in the industrial sector. The third part of the thesis focused on the methodology for measuring nanoparticles. Several preliminary studies were carried out to probe the limits of commonly used nanoparticle measurement devices when they have to measure larger quantities of nanoparticle agglomerates. This focus was chosen for two reasons: because several discussions with users and a producer of the measurement devices pointed to a weak spot there, raising doubts about the accuracy of the devices, and because the two surveys had shown that such nanoparticle agglomerates are frequently handled. A first preliminary study addressed the accuracy of the scanning mobility particle sizer (SMPS); in the presence of nanoparticle agglomerates, this instrument displayed an implausible bimodal particle size distribution. A series of short experiments followed, concentrating on other measurement devices and their problems when measuring nanoparticle agglomerates: the condensation particle counter (CPC), the portable aerosol spectrometer (PAS), a device for estimating the aerodynamic diameter of particles, and the diffusion size classifier were tested. Finally, some first feasibility tests were carried out to determine the efficiency of filter-based sampling of airborne carbon nanotubes (CNT). The pilot study provided a detailed picture of the types and quantities of nanoparticles used in Swiss companies and documented the state of knowledge of the interviewed health and safety specialists. The following types of nanoparticles were declared by the contacted companies as maximal quantities (> 1'000 kg per year and company): Ag, Al-Ox, Fe-Ox, SiO2, TiO2, and ZnO (mainly first-generation nanoparticles). The quantities of nanoparticles used varied widely, with a median of 100 kg per year. In the quantitative questionnaire study, 1'626 companies, all clients of the Swiss National Accident Insurance Fund (SUVA), were contacted by post. The results allowed an estimate of the number of companies and workers using nanoparticles in Switzerland. Extrapolation to the Swiss industrial sector gave the following picture: in 586 companies (95% confidence interval: 145 to 1'027 companies), 1'309 workers are potentially exposed to nanoparticles (95% CI: 1'073 to 1'545). These figures correspond to 0.6% of Swiss companies (95% CI: 0.2% to 1.1%) and 0.08% of the workforce (95% CI: 0.06% to 0.09%). A few well-established technologies exist for measuring the airborne concentration of sub-micrometre particles. It is doubtful, however, to what extent these technologies are also suitable for measuring engineered nanoparticles. For this reason, the studies preparing the workplace assessments focused on the measurement of powders containing nanoparticle agglomerates. They allowed the identification of the following possible sources of measurement error at workplaces with elevated airborne concentrations of nanoparticle agglomerates:
- A standard SMPS showed an implausible bimodal particle size distribution when measuring larger nanoparticle agglomerates.
- Large differences, in the range of a factor of a thousand, were found between a diffusion size classifier and several CPCs (and the SMPS, respectively).
- The differences between CPC/SMPS and the PAS were smaller but, depending on the size or type of the powder measured, could still amount to an order of magnitude.
- Specific difficulties and uncertainties in workplace measurements were identified: background particles can interact with particles released during a work process, and such interactions make it difficult to account correctly for the background particle concentration in the measurement data.
- Electric motors produce large quantities of nanoparticles and can thus interfere with the measurement of process-related exposure.
Conclusion: The surveys showed that nanoparticles are already a reality in Swiss industry and that some companies already use large quantities of them. The representative survey put this striking finding into perspective by showing that the number of such companies across Swiss industry as a whole is relatively small. In most branches (especially outside the chemical industry), few or no applications were found, suggesting that the introduction of this new technology is only at the beginning of its development. Even if the number of potentially exposed workers is still relatively small, the study nevertheless underscores the need for exposure measurements at these workplaces. Knowing the exposure, and knowing how to measure it correctly, is very important, above all because the possible health effects are not yet fully understood. The evaluation of several devices and methods showed, however, that work remains to be done here: before larger measurement studies can be carried out, the devices and methods must be validated for use with nanoparticle agglomerates.
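The extrapolation reported above (586 companies, 1'309 workers, with 95% confidence intervals) is in essence a design-weighted estimate. A simplified sketch, assuming a simple random sample and a normal-approximation interval rather than the survey's actual design; all numbers in the example call are illustrative:

import math

def extrapolate(n_sampled, n_positive, population, z=1.96):
    """Scale a sample proportion up to a population total, with a
    normal-approximation confidence interval (z = 1.96 for 95%)."""
    p = n_positive / n_sampled
    se = math.sqrt(p * (1.0 - p) / n_sampled)
    return p * population, ((p - z * se) * population, (p + z * se) * population)

# Purely illustrative counts; the thesis contacted 1'626 SUVA clients.
print(extrapolate(n_sampled=1000, n_positive=10, population=60000))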
Abstract:
INTRODUCTION: Inhibitory control refers to our ability to suppress ongoing motor, affective or cognitive processes and mostly depends on a fronto-basal brain network. Inhibitory control deficits participate in the emergence of several prominent psychiatric conditions, including attention deficit/hyperactivity disorder and addiction. The rehabilitation of these pathologies might therefore benefit from training-based behavioral interventions aiming at improving inhibitory control proficiency and normalizing the underlying neurophysiological mechanisms. The development of an efficient inhibitory control training regimen first requires determining the effects of practicing inhibition tasks. METHODS: We addressed this question by contrasting behavioral performance and electrical neuroimaging analyses of event-related potentials (ERPs) recorded from humans at the beginning versus the end of 1 h of practice on a stop-signal task (SST) involving the withholding of responses when a stop signal was presented during a speeded auditory discrimination task. RESULTS: Practicing a short SST improved behavioral performance. Electrophysiologically, ERPs differed topographically at 200 msec post-stimulus onset, indicative of the engagement of a distinct brain network with learning. Source estimations localized this effect within the inferior frontal gyrus, the pre-supplementary motor area and the basal ganglia. CONCLUSION: Our collective results indicate that behavioral and brain responses during an inhibitory control task are subject to fast plastic changes and provide evidence that high-order fronto-basal executive networks can be modified by practicing an SST.
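The abstract reports improved behavioral performance without naming the index; the standard behavioral measure for an SST is the stop-signal reaction time (SSRT), commonly estimated with the integration method. A minimal sketch, assuming that measure (the abstract does not specify it):

import numpy as np

def ssrt_integration(go_rts, p_respond_stop, mean_ssd):
    """Stop-signal reaction time (SSRT) via the integration method.

    go_rts: go-trial reaction times (s); p_respond_stop: proportion of
    stop trials with a (failed-inhibition) response; mean_ssd: mean
    stop-signal delay (s). The go-RT distribution is integrated up to
    the stop-trial response rate; SSRT is that quantile minus the SSD.
    """
    nth_rt = np.percentile(go_rts, 100.0 * p_respond_stop)
    return nth_rt - mean_ssd

rng = np.random.default_rng(0)
print(ssrt_integration(rng.normal(0.5, 0.1, 200), 0.45, 0.25))  # ~0.24 s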
Abstract:
Steep mountain catchments typically experience large sediment pulses from hillslopes, which are stored in headwater channels and remobilized by debris flows or bedload transport. Event-based sediment budget monitoring in the active Manival debris-flow torrent in the French Alps during a two-year period gave insights into catchment-scale sediment routing during moderate-intensity rainfall events, which occur several times each year. The monitoring was based on intensive topographic resurveys of low- and high-order channels using different techniques (cross-section surveys with a total station and high-resolution channel surveys with terrestrial and airborne laser scanning). Data on sediment output volumes from the main channel were obtained with a sediment trap. Two debris flows were observed, as well as several bedload transport flow events. Sediment budget analysis of the two debris flows revealed that most of the debris-flow volume was supplied by channel scouring (more than 92%). Bedload transport during autumn contributed to the sediment recharge of high-order channels through the deposition of large gravel wedges. This process is recognized as being fundamental for debris-flow occurrence during the subsequent spring and summer. A time shift of scour-and-fill sequences was observed between low- and high-order channels, revealing the discontinuous sediment transfer in the catchment during common flow events. A conceptual model of sediment routing for different event magnitudes is proposed.
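Budgets of this kind are commonly computed by differencing successive digital elevation models of the channel. A minimal sketch, assuming two co-registered DEM grids, a uniform cell size, and a simple detection threshold in place of a full survey-uncertainty analysis:

import numpy as np

def scour_fill_volumes(dem_before, dem_after, cell_area, min_change=0.05):
    """Compute scour and fill volumes from a DEM of difference (DoD).

    dem_*: 2-D arrays of elevations (m); cell_area in m^2; elevation
    changes smaller than min_change (m) are treated as survey noise.
    """
    dz = dem_after - dem_before
    dz[np.abs(dz) < min_change] = 0.0
    fill = dz[dz > 0].sum() * cell_area      # deposition (m^3)
    scour = -dz[dz < 0].sum() * cell_area    # erosion (m^3)
    return scour, fill, fill - scour         # net budget (m^3)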
Abstract:
The large spatial inhomogeneity in the transmit B1 field (B1+) observable in human MR images at high static magnetic fields (B0) severely impairs image quality. To overcome this effect in brain T1-weighted images, the MPRAGE sequence was modified to generate two different images at different inversion times (MP2RAGE). By combining the two images in a novel fashion, it was possible to create T1-weighted images in which the resulting image was free of proton density contrast, T2* contrast, reception bias field and, to first order, transmit field inhomogeneity. MP2RAGE sequence parameters were optimized using Bloch equations to maximize the contrast-to-noise ratio per unit of time between brain tissues and to minimize the effect of B1+ variations through space. Images of high anatomical quality and excellent brain tissue differentiation, suitable for applications such as segmentation and voxel-based morphometry, were obtained at 3 and 7 T. From such T1-weighted images, acquired within 12 min, high-resolution 3D T1 maps were routinely calculated at 7 T with sub-millimeter voxel resolution (0.65-0.85 mm isotropic). The T1 maps were validated in phantom experiments. In humans, the T1 values obtained at 7 T were 1.15 +/- 0.06 s for white matter (WM) and 1.92 +/- 0.16 s for grey matter (GM), in good agreement with literature values obtained at lower spatial resolution. At 3 T, where whole-brain acquisitions with 1 mm isotropic voxels were acquired in 8 min, the T1 values obtained (0.81 +/- 0.03 s for WM and 1.35 +/- 0.05 s for GM) were once again found to be in very good agreement with values in the literature. (C) 2009 Elsevier Inc. All rights reserved.
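The "novel fashion" of combining the two images is, in the published MP2RAGE method, a ratio of the two complex gradient-echo images S1 and S2 (acquired at the first and second inversion time) in which proton density, T2* weighting and the reception field cancel:

$$ S_{\mathrm{MP2RAGE}} = \mathrm{Re}\!\left(\frac{S_{1}^{*}\,S_{2}}{\lvert S_{1}\rvert^{2}+\lvert S_{2}\rvert^{2}}\right) \in [-0.5,\ 0.5]. $$

To first order the result depends only on T1 and the sequence timing, so a lookup table built from the same Bloch-equation simulations used for parameter optimization converts the combined signal into a quantitative T1 map.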
Abstract:
To enhance the clinical value of coronary magnetic resonance angiography (MRA), high-relaxivity contrast agents have recently been used at 3T. Here we examine a uniform bilateral shadowing artifact observed along the coronary arteries in MRA images collected using such a contrast agent. Simulations were performed to characterize this artifact, including its origin, to determine how best to mitigate this effect, and to optimize a data acquisition/injection scheme. An intraluminal contrast agent concentration model was used to simulate various acquisition strategies with two profile orders for a slow-infusion of a high-relaxivity contrast agent. Filtering effects from temporally variable weighting in k-space are prominent when a centric, radial (CR) profile order is applied during contrast infusion, resulting in decreased signal enhancement and underestimation of vessel width, while both pre- and postinfusion steady-state acquisitions result in overestimation of the vessel width. Acquisition during the brief postinfusion steady-state produces the greatest signal enhancement and minimizes k-space filtering artifacts.
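The k-space filtering mechanism can be illustrated with a toy one-dimensional simulation (a deliberate simplification of the intraluminal concentration model used in the article): a box "vessel" is imaged while the signal varies over the scan, and a centric profile order maps early readouts to the k-space centre.

import numpy as np

def apparent_width(infusion_rising=True, width_px=8, n=256):
    """Toy 1-D model of k-space filtering with a centric profile order.

    A box 'vessel' of width_px pixels is imaged while the intraluminal
    signal changes over the scan; centric ordering acquires the k-space
    centre first, so low spatial frequencies get the early signal.
    """
    vessel = np.zeros(n)
    vessel[n // 2 - width_px // 2 : n // 2 + width_px // 2] = 1.0
    k = np.fft.fftshift(np.fft.fft(vessel))
    order = np.argsort(np.abs(np.arange(n) - n // 2))  # centre-out ordering
    w = np.empty(n)
    w[order] = np.linspace(0.4, 1.0, n) if infusion_rising else np.linspace(1.0, 0.4, n)
    img = np.abs(np.fft.ifft(np.fft.ifftshift(k * w)))
    return int(np.sum(img > 0.5 * img.max()))          # apparent width (pixels)

# Rising enhancement during infusion under-weights the k-space centre and
# tends to narrow the apparent vessel; compare against the decaying case.
print(apparent_width(True), apparent_width(False))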
Abstract:
Abstract: For a wide range of environmental, hydrological, and engineering applications there is a fast-growing need for high-resolution imaging. In this context, waveform tomographic imaging of crosshole georadar data is a powerful method able to provide images of pertinent electrical properties in near-surface environments with unprecedented spatial resolution. In contrast, conventional ray-based tomographic methods, which consider only a very limited part of the recorded signal (first-arrival traveltimes and maximum first-cycle amplitudes), suffer from inherent limitations in resolution and may prove to be inadequate in complex environments. For a typical crosshole georadar survey, the potential improvement in resolution when using waveform-based approaches instead of ray-based approaches is in the range of one order of magnitude. Moreover, the spatial resolution of waveform-based inversions is comparable to that of common logging methods. While in exploration seismology waveform tomographic imaging has become well established over the past two decades, it remains comparatively underdeveloped in the georadar domain despite corresponding needs. Recently, different groups have presented finite-difference time-domain waveform inversion schemes for crosshole georadar data, which are adaptations and extensions of Tarantola's seminal nonlinear generalized least-squares approach developed for the seismic case. First applications of these new crosshole georadar waveform inversion schemes to synthetic and field data have shown promising results. However, little is known about the limits and performance of such schemes in complex environments. To this end, the general motivation of my thesis is the evaluation of the robustness and limitations of waveform inversion algorithms for crosshole georadar data, in order to apply such schemes to a wide range of real-world problems. One crucial issue in making any waveform scheme applicable and effective for real-world crosshole georadar problems is the accurate estimation of the source wavelet, which is unknown in reality. Waveform inversion schemes for crosshole georadar data require forward simulations of the wavefield in order to iteratively solve the inverse problem. Therefore, accurate knowledge of the source wavelet is critically important for the successful application of such schemes. Relatively small differences in the estimated source wavelet shape can lead to large differences in the resulting tomograms. In the first part of my thesis, I explore the viability and robustness of a relatively simple iterative deconvolution technique that incorporates the estimation of the source wavelet into the waveform inversion procedure rather than adding additional model parameters to the inversion problem. Extensive tests indicate that this source wavelet estimation technique is simple yet effective, and is able to provide remarkably accurate and robust estimates of the source wavelet in the presence of strong heterogeneity in both the dielectric permittivity and electrical conductivity, as well as significant ambient noise in the recorded data.
Furthermore, our tests also indicate that the approach is insensitive to the phase characteristics of the starting wavelet, which is not the case when directly incorporating the wavelet estimation into the inverse problem. Another critical issue with crosshole georadar waveform inversion schemes, which clearly needs to be investigated, is the consequence of the common assumption of frequency-independent electromagnetic constitutive parameters. This is crucial since, in reality, these parameters are known to be frequency-dependent and complex, and thus recorded georadar data may show significant dispersive behaviour. In particular, in the presence of water, there is a wide body of evidence showing that the dielectric permittivity can be significantly frequency-dependent over the GPR frequency range, due to a variety of relaxation processes. The second part of my thesis is therefore dedicated to the evaluation of the reconstruction limits of a non-dispersive crosshole georadar waveform inversion scheme in the presence of varying degrees of dielectric dispersion. I show that the inversion algorithm, combined with the iterative deconvolution-based source wavelet estimation procedure, which is partially able to account for the frequency-dependent effects through an "effective" wavelet, performs remarkably well in weakly to moderately dispersive environments and has the ability to provide adequate tomographic reconstructions.
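The iterative deconvolution idea can be sketched compactly: simulate impulse responses for the current model, deconvolve the observed traces by them in the frequency domain (with a water level for stability), and average over traces; the result becomes the wavelet for the next forward simulation. A schematic numpy-only sketch under those assumptions (the actual scheme operates on full FDTD wavefields):

import numpy as np

def update_wavelet(observed, greens, water_level=1e-3):
    """One deconvolution step of a source-wavelet estimate.

    observed: (traces, samples) recorded georadar traces.
    greens:   (traces, samples) simulated impulse responses for the
              current model (synthetics computed with a delta source).
    Returns the trace-averaged wavelet estimate in the time domain.
    """
    O = np.fft.rfft(observed, axis=1)
    G = np.fft.rfft(greens, axis=1)
    eps = water_level * np.max(np.abs(G) ** 2)        # water level
    W = np.mean(O * np.conj(G) / (np.abs(G) ** 2 + eps), axis=0)
    return np.fft.irfft(W, n=observed.shape[1])

# Alternate this update with model updates of the waveform inversion
# until both the wavelet and the tomogram stabilize.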
Abstract:
Abstract: Accurate characterization of the spatial distribution of hydrological properties in heterogeneous aquifers at a range of scales is a key prerequisite for reliable modeling of subsurface contaminant transport, and is essential for designing effective and cost-efficient groundwater management and remediation strategies. To this end, high-resolution geophysical methods have shown significant potential to bridge a critical gap in subsurface resolution and coverage between traditional hydrological measurement techniques such as borehole log/core analyses and tracer or pumping tests. An important and still largely unresolved issue, however, is how to best quantitatively integrate geophysical data into a characterization study in order to estimate the spatial distribution of one or more pertinent hydrological parameters, thus improving hydrological predictions. Recognizing the importance of this issue, the aim of the research presented in this thesis was to first develop a strategy for the assimilation of several types of hydrogeophysical data having varying degrees of resolution, subsurface coverage, and sensitivity to the hydrologic parameter of interest. In this regard, a novel simulated annealing (SA)-based conditional simulation approach was developed and then tested for its ability to generate realizations of porosity given crosshole ground-penetrating radar (GPR) and neutron porosity log data. This was done successfully for both synthetic and field data sets. A subsequent issue that needed to be addressed involved assessing the potential benefits and implications of the resulting porosity realizations in terms of groundwater flow and contaminant transport. This was investigated synthetically, assuming first that the relationship between porosity and hydraulic conductivity was well-defined. Then, the relationship was itself investigated in the context of a calibration procedure using hypothetical tracer test data. Essentially, the relationship best predicting the observed tracer test measurements was determined given the geophysically derived porosity structure. Both of these investigations showed that the SA-based approach, in general, allows much more reliable hydrological predictions than other more elementary techniques considered. Further, the developed calibration procedure was seen to be very effective, even at the scale of tomographic resolution, for predictions of transport. This also held true at locations within the aquifer where only geophysical data were available. This is significant because the acquisition of hydrological tracer test measurements is clearly more complicated and expensive than the acquisition of geophysical measurements. Although the above methodologies were tested using porosity logs and GPR data, the findings are expected to remain valid for a large number of pertinent combinations of geophysical and borehole log data of comparable resolution and sensitivity to the hydrological target parameter. Moreover, the obtained results allow us to have confidence in future developments in integration methodologies for geophysical and hydrological data to improve the 3-D estimation of hydrological properties.
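As a rough caricature of the SA-based conditional simulation (not the thesis's exact algorithm), one can start from a field that honors the neutron-log porosities at the boreholes and swap values between random non-borehole cells, accepting swaps by the Metropolis rule whenever they bring a geophysical summary statistic, standing in here for the GPR-derived constraint, closer to its target; swapping preserves the porosity histogram by construction:

import numpy as np

def sa_condition(field, fixed_mask, target_stat, stat_fn, n_iter=50000,
                 t0=1.0, cooling=0.9995, rng=None):
    """Simulated-annealing conditional simulation (toy version).

    field: initial porosity grid; fixed_mask marks borehole cells that
    must keep their measured (neutron-log) values; stat_fn maps a field
    to the statistic that the geophysics constrains (e.g. a smoothed,
    tomography-resolution image of the field).
    """
    rng = rng or np.random.default_rng(0)
    free = np.argwhere(~fixed_mask)
    energy = np.sum((stat_fn(field) - target_stat) ** 2)
    temp = t0
    for _ in range(n_iter):
        (i1, j1), (i2, j2) = free[rng.integers(len(free), size=2)]
        field[i1, j1], field[i2, j2] = field[i2, j2], field[i1, j1]
        new_energy = np.sum((stat_fn(field) - target_stat) ** 2)
        if new_energy > energy and rng.random() >= np.exp((energy - new_energy) / temp):
            field[i1, j1], field[i2, j2] = field[i2, j2], field[i1, j1]  # reject: undo
        else:
            energy = new_energy                                          # accept
        temp *= cooling
    return field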
Abstract:
Objectives The objective of this article is to describe the development of an anatomically accurate simulator to aid the training of a perinatal team in the insertion and removal of a fetal endoscopic tracheal occlusion (FETO) balloon in the management of prenatally diagnosed congenital diaphragmatic hernia. Methods An experienced perinatal team collaborated with a medical sculptor to design a fetal model for the FETO procedure. Measurements derived from 28-week fetal magnetic resonance imaging were used in the development of an anatomically precise simulated airway within a silicone rubber preterm fetal model. Clinician feedback was then used to guide multiple iterations of the model, with serial improvements in the anatomic accuracy of the simulator airway. Results An appropriately sized preterm fetal mannequin with a high-fidelity airway was developed. The team used this model to develop surgical skills in balloon insertion and removal, and to prepare for an integrated response to an unanticipated delivery with the FETO balloon still in situ. Conclusions This fetal mannequin aided the ability of a fetal therapy unit to offer the FETO procedure at their center for the first time. This model may be of benefit to other perinatal centers planning to offer this procedure.