54 results for 180123 Litigation Adjudication and Dispute Resolution

at Université de Lausanne, Switzerland


Relevance: 100.00%

Abstract:

RATIONALE: The aim of this work was to develop and validate a method for the quantification of vitamin D metabolites in serum using ultra-high-pressure liquid chromatography coupled to mass spectrometry (LC/MS), and to validate a high-resolution mass spectrometry (LC/HRMS) approach against a tandem mass spectrometry (LC/MS/MS) approach using a large clinical sample set. METHODS: A fast, accurate and reliable method for the quantification of the vitamin D metabolites 25-hydroxyvitamin D2 (25OH-D2) and 25-hydroxyvitamin D3 (25OH-D3) in human serum was developed and validated. The C3 epimer of 25OH-D3 (3-epi-25OH-D3) was also separated from 25OH-D3. The samples were rapidly prepared via a protein precipitation step followed by solid-phase extraction (SPE) using an HLB µElution plate. Quantification was performed on both LC/MS/MS and LC/HRMS systems. RESULTS: Recovery, matrix effect, and inter- and intra-day reproducibility were assessed. Lower limits of quantification (LLOQs) were determined for 25OH-D2 and 25OH-D3 for the LC/MS/MS approach (6.2 and 3.4 µg/L, respectively) and the LC/HRMS approach (2.1 and 1.7 µg/L, respectively). A Passing & Bablok fit between the two approaches was determined for 25OH-D3 on 662 clinical samples (1.11 + 1.06x). It was also shown that results can be affected by the inclusion of the epimer 3-epi-25OH-D3. CONCLUSIONS: Quantification of the relevant vitamin D metabolites was successfully developed and validated. LC/HRMS was shown to be an accurate, powerful and easy-to-use approach for quantification within clinical laboratories. Finally, the results suggest that it is important to separate 3-epi-25OH-D3 from 25OH-D3. Copyright © 2012 John Wiley & Sons, Ltd.
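The method comparison above relies on Passing & Bablok regression, which estimates the slope as a shifted median of all pairwise slopes and is therefore robust to outliers in both methods. Below is a minimal sketch of that procedure with illustrative synthetic data, not the study's 662 samples, and without the confidence intervals a full implementation would report:

```python
import numpy as np

def passing_bablok(x, y):
    """Simplified Passing & Bablok regression: the slope is the shifted
    median of all pairwise slopes, the intercept the median residual."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    slopes = []
    n = len(x)
    for i in range(n - 1):
        for j in range(i + 1, n):
            dx, dy = x[j] - x[i], y[j] - y[i]
            if dx != 0:
                s = dy / dx
                if s != -1.0:          # slopes of exactly -1 are discarded
                    slopes.append(s)
    slopes = np.sort(slopes)
    k = np.sum(slopes < -1)            # offset that makes the estimate scale-invariant
    m = len(slopes)
    if m % 2:
        b = slopes[(m + 2 * k) // 2]
    else:
        b = 0.5 * (slopes[m // 2 + k - 1] + slopes[m // 2 + k])
    a = np.median(y - b * x)
    return a, b

# Illustrative paired 25OH-D3 results (ug/L) -- synthetic, not the study data
rng = np.random.default_rng(0)
msms = rng.uniform(10, 60, 50)
hrms = 1.11 + 1.06 * msms + rng.normal(0, 1.5, 50)
a, b = passing_bablok(msms, hrms)
print(f"25OH-D3: LC/HRMS = {a:.2f} + {b:.2f} x LC/MS/MS")
```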

Relevance: 100.00%

Abstract:

Coronary artery magnetic resonance imaging (MRI) has the potential to provide the cardiologist with relevant diagnostic information on the coronary artery disease status of patients. The major challenge of cardiac MRI, though, is dealing with all the sources of motion that can corrupt the images and degrade the diagnostic information they provide. This thesis therefore focused on the development of new MRI techniques that change the standard approach to cardiac motion compensation in order to increase the efficiency of cardiovascular MRI, to provide more flexibility and robustness, and to deliver new temporal and tissue information. The proposed approaches help advance coronary magnetic resonance angiography (MRA) toward an easy-to-use, multipurpose tool that can be translated to the clinical environment. The first part of the thesis focused on the study of coronary artery motion in patients through gold-standard imaging techniques (x-ray angiography), in order to measure the precision with which the coronary arteries return to the same position beat after beat (coronary artery repositioning). We found that intervals with minimal coronary artery repositioning occur in peak systole and in mid-diastole, and we responded with a new pulse sequence (T2-post) able to provide peak-systolic imaging. This sequence was tested in healthy volunteers and, from the image quality comparison, we found that the proposed approach provides coronary artery visualization and contrast-to-noise ratio (CNR) comparable with the standard acquisition approach, but with increased signal-to-noise ratio (SNR). The second part of the thesis explored a completely new paradigm for whole-heart cardiovascular MRI.
The proposed technique acquires the data continuously (free-running) instead of being ECG-triggered, thus increasing the efficiency of the acquisition and providing four-dimensional (4D) images of the whole heart, while respiratory self-navigation allows the scan to be performed during free breathing. This enabling technology allows anatomical and functional evaluation in four dimensions, with high spatial and temporal resolution and without the need for contrast agent injection. The enabling step is the use of a golden-angle-based 3D radial trajectory, which allows continuous sampling of k-space and retrospective selection of the timing parameters of the reconstructed dataset. The free-running 4D acquisition was then combined with a compressed sensing reconstruction algorithm that further increases the temporal resolution of the 4D dataset while improving overall image quality by removing undersampling artifacts. The resulting 4D images provide visualization of the whole coronary artery tree in each phase of the cardiac cycle and, at the same time, allow assessment of cardiac function from a single free-breathing scan. The quality of the coronary arteries in the frames of the free-running 4D acquisition is in line with that obtained with the standard ECG-triggered acquisition, and the cardiac function evaluation matched that measured with gold-standard stacks of 2D cine acquisitions. Finally, the last part of the thesis focused on the development of an ultrashort echo time (UTE) acquisition scheme for in vivo detection of calcification in the coronary arteries. Recent studies showed that UTE imaging allows the visualization of coronary artery plaque calcification ex vivo, since it is able to detect the short-T2 components of the calcification; heart motion, though, has so far prevented this technique from being applied in vivo. An ECG-triggered, self-navigated 3D radial triple-echo UTE acquisition was therefore developed and tested in healthy volunteers. The proposed sequence combines a 3D self-navigation approach with a 3D radial UTE acquisition, enabling data collection during free breathing. Three echoes are acquired simultaneously to extract the short-T2 components of the calcification, while a water-fat separation technique allows proper visualization of the coronary arteries. Even though the results are still preliminary, the proposed sequence shows great potential for the in vivo visualization of coronary artery calcification. In conclusion, this thesis presents three novel MRI approaches aimed at improved characterization and assessment of atherosclerotic coronary artery disease. These approaches provide new anatomical and functional information in four dimensions and support tissue characterization of coronary artery plaques.
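The free-running scheme rests on two simple ingredients: a golden-angle increment between successive radial spokes, so that any contiguous subset of spokes covers k-space near-uniformly, and retrospective binning of the continuously acquired data into cardiac phases. The sketch below is a toy illustration of both ideas in 2D, with assumed, illustrative timing and trigger values; the actual sequence uses a 3D trajectory and self-navigated respiratory binning not shown here:

```python
import numpy as np

GOLDEN_ANGLE = 180.0 * (np.sqrt(5.0) - 1.0) / 2.0   # ~111.246 deg between spokes

def spoke_angles(n_spokes):
    """Continuous golden-angle increments give near-uniform k-space
    coverage for any contiguous subset of spokes."""
    return np.mod(np.arange(n_spokes) * GOLDEN_ANGLE, 360.0)

def bin_spokes(timestamps, ecg_triggers, n_phases):
    """Retrospectively assign each spoke to a cardiac phase from its
    relative position within the enclosing R-R interval."""
    phase = np.full(len(timestamps), -1)
    for k in range(len(ecg_triggers) - 1):
        t0, t1 = ecg_triggers[k], ecg_triggers[k + 1]
        inside = (timestamps >= t0) & (timestamps < t1)
        phase[inside] = ((timestamps[inside] - t0) / (t1 - t0) * n_phases).astype(int)
    return phase

# Illustrative: 5000 spokes, one every 5 ms, heart rate ~60 bpm
t = np.arange(5000) * 0.005
triggers = np.arange(0, t[-1] + 1.0, 1.0)
phases = bin_spokes(t, triggers, n_phases=20)
angles = spoke_angles(len(t))
print("first spokes (deg):", np.round(angles[:4], 2))
print("spokes in phase 0:", np.sum(phases == 0))
```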

Relevance: 100.00%

Abstract:

PURPOSE: Atherosclerosis has a considerable medical and socioeconomic impact on society. We sought to evaluate novel magnetic resonance imaging (MRI) angiography and vessel wall sequences to visualize and quantify different morphologic stages of atherosclerosis in a Watanabe hereditary hyperlipidemic (WHHL) rabbit model. MATERIAL AND METHODS: Aortic 3D steady-state free precession angiography and subrenal aortic 3D black-blood fast spin-echo vessel wall imaging pre- and post-gadolinium (Gd) were performed in 14 WHHL rabbits (3 normal, 6 high-cholesterol diet, and 5 high-cholesterol diet plus endothelial denudation) on a commercial 1.5 T MR system. Angiographic lumen diameter, vessel wall thickness, signal- and contrast-to-noise, total vessel area, lumen area, and vessel wall area were analyzed semiautomatically. RESULTS: Pre-Gd, both lumen and wall dimensions (total vessel area, lumen area, vessel wall area) of groups 2 and 3 were significantly increased when compared with those of group 1 (all P < 0.01). Group 3 animals had significantly thicker vessel walls than groups 1 and 2 (P < 0.01), whereas angiographic lumen diameter was comparable among all groups. Post-Gd, only the diseased animals of groups 2 and 3 showed a significant (>100%) signal-to-noise and contrast-to-noise ratio increase. CONCLUSIONS: A combination of novel 3D magnetic resonance angiography and high-resolution 3D vessel wall MRI enabled quantitative characterization of various atherosclerotic stages, including positive arterial remodeling and Gd uptake, in a WHHL rabbit model using a commercially available 1.5 T MRI system.
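The signal- and contrast-to-noise analysis reported here reduces, in its simplest form, to ROI statistics. A minimal sketch using one common convention (noise taken as the standard deviation of a signal-free background ROI); the pixel values are illustrative, not study data:

```python
import numpy as np

def snr_cnr(wall_roi, lumen_roi, background_roi):
    """SNR/CNR from ROI pixel values, assuming noise is estimated as
    the SD of a signal-free background ROI (one common convention)."""
    noise = np.std(background_roi)
    snr_wall = np.mean(wall_roi) / noise
    cnr = (np.mean(wall_roi) - np.mean(lumen_roi)) / noise
    return snr_wall, cnr

# Illustrative pixel samples, not measurements from the study
rng = np.random.default_rng(1)
wall = rng.normal(120, 10, 200)     # vessel wall ROI (e.g., post-Gd enhanced)
lumen = rng.normal(60, 10, 200)     # lumen ROI
bg = rng.normal(0, 8, 500)          # air/background ROI
print("SNR %.1f, CNR %.1f" % snr_cnr(wall, lumen, bg))
```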

Relevance: 100.00%

Abstract:

PURPOSE: To introduce a new k-space traversal strategy for segmented three-dimensional echo planar imaging (3D EPI) that encodes two partitions per radiofrequency excitation, effectively halving the number of excitations used to acquire a 3D EPI dataset. METHODS: The strategy was evaluated in the context of functional MRI applications for: image quality (compared with segmented 3D EPI); temporal signal-to-noise ratio (tSNR) and the ability to detect resting state networks (compared with multislice two-dimensional (2D) EPI and segmented 3D EPI); and temporal resolution (the ability to separate cardiac- and respiration-related fluctuations from the desired blood oxygen level-dependent signal of interest). RESULTS: Whole brain images with a nominal voxel size of 2 mm isotropic could be acquired with a temporal resolution under half a second using traditional parallel imaging acceleration up to 4× in the partition-encode direction and the novel data acquisition speed-up of 2× with a 32-channel coil. With 8× data acquisition speed-up in the partition-encode direction, 3D reduced excitations (RE)-EPI produced acceptable image quality without introducing noticeable additional artifacts. Due to increased tSNR and better characterization of physiological fluctuations, the new strategy allowed detection of more resting state networks compared with multislice 2D EPI and segmented 3D EPI. CONCLUSION: 3D RE-EPI resulted in significant increases in temporal resolution for whole brain acquisitions and in improved physiological noise characterization compared with 2D EPI and segmented 3D EPI. Magn Reson Med 72:786-792, 2014. © 2013 Wiley Periodicals, Inc.
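The acquisition-time gain here is simple arithmetic: the time per volume scales with the number of RF excitations, which is the number of partition-encode steps divided by the parallel-imaging factor and by the number of partitions encoded per excitation. A worked sketch with assumed, illustrative protocol numbers (64 partition-encode steps, 60 ms per excitation), not the paper's exact parameters:

```python
def volume_time_s(n_partitions, tr_per_excitation_ms, r_parallel=1, partitions_per_rf=1):
    """Time to acquire one 3D EPI volume when each RF excitation covers
    one EPI readout train for `partitions_per_rf` partition-encode steps."""
    excitations = n_partitions / (r_parallel * partitions_per_rf)
    return excitations * tr_per_excitation_ms / 1000.0

# Illustrative numbers, not the published protocol
n_kz, tr_ms = 64, 60
print(volume_time_s(n_kz, tr_ms))                                    # segmented 3D EPI: 3.84 s
print(volume_time_s(n_kz, tr_ms, r_parallel=4))                      # 4x partition acceleration: 0.96 s
print(volume_time_s(n_kz, tr_ms, r_parallel=4, partitions_per_rf=2)) # + 2 partitions/RF: 0.48 s
```

With both speed-ups combined, the illustrative volume time drops below half a second, which is the regime the abstract describes.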

Relevance: 100.00%

Abstract:

Introduction: The general strategy for anti-doping analysis starts with a screening step, followed by a confirmatory step when a sample is suspected to be positive. The screening step should be fast, generic and able to highlight any sample that may contain a prohibited substance, avoiding false negative and reducing false positive results. The confirmatory step is a dedicated procedure comprising a selective sample preparation and detection mode. Aim: The purpose of the study was to develop a rapid screening strategy and a selective confirmatory strategy to detect and identify 103 doping agents in urine. Methods: For the screening, urine samples were simply diluted by a factor of 2 with ultra-pure water and directly injected ("dilute and shoot") into the ultra-high-pressure liquid chromatography (UHPLC) system. The UHPLC separation was performed with two gradients (ESI positive and negative) from 5/95 to 95/5% MeCN/water containing 0.1% formic acid. The gradient analysis time was 9 min, including 3 min of re-equilibration. Analyte detection was performed in full scan mode on a quadrupole time-of-flight (QTOF) mass spectrometer by acquiring the exact mass of the protonated (ESI positive) or deprotonated (ESI negative) molecular ion. For the confirmatory analysis, urine samples were extracted on a 96-well SPE plate with mixed-mode cation-exchange (MCX) sorbents for basic and neutral compounds or mixed-mode anion-exchange (MAX) sorbents for acidic molecules. The analytes were eluted in 3 min (including 1.5 min of re-equilibration) with a gradient from 5/95 to 95/5% MeCN/water containing 0.1% formic acid. Analyte confirmation was performed in MS and MS/MS mode on a QTOF mass spectrometer. Results: In both the screening and the confirmatory analysis, basic and neutral analytes were analysed in positive ESI mode, whereas acidic compounds were analysed in negative mode. Analyte identification was based on retention time (tR) and exact mass measurement. "Dilute and shoot" was used as a generic sample treatment in the screening procedure, but matrix effects (e.g., ion suppression) cannot be avoided. However, the sensitivity was sufficient for all analytes to reach the minimum required performance limit (MRPL) set by the World Anti-Doping Agency (WADA). To avoid time-consuming confirmatory analysis of false positive samples, a pre-confirmatory step was added: the sample is re-injected, MS/MS spectra are acquired and compared to reference material. For the confirmatory analysis, urine samples were extracted by SPE, allowing pre-concentration of the analyte. A fast chromatographic separation was developed, since only a single analyte has to be confirmed. A dedicated QTOF-MS and MS/MS acquisition was performed, scanning two functions in parallel within the same run. Low collision energy was applied in the first channel to obtain the protonated molecular ion (QTOF-MS), while a dedicated collision energy was set in the second channel to obtain fragment ions (QTOF-MS/MS). Enough identification points were obtained to compare the spectra with reference material and a negative urine sample. Finally, the entire process was validated and matrix effects were quantified. Conclusion: Thanks to the coupling of UHPLC with the QTOF mass spectrometer, high tR repeatability, sensitivity, mass accuracy and mass resolution over a broad mass range were obtained. The method was sensitive, robust and reliable enough to detect and identify doping agents in urine.
Keywords: screening, confirmatory analysis, UHPLC, QTOF, doping agents
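Screening identification as described above comes down to matching each detected peak's retention time and accurate mass against a reference table within fixed tolerances. A minimal sketch; the reference entries, tolerances and peak list are hypothetical (the exact masses are standard monoisotopic [M+H]+ values, but the retention times are invented):

```python
import numpy as np

# Hypothetical reference table: analyte, expected tR (min), exact m/z of [M+H]+
REFERENCE = [
    ("salbutamol",   1.92, 240.1594),
    ("stanozolol",   5.47, 329.2587),
    ("testosterone", 6.20, 289.2162),
]

def screen(peaks, tr_tol_min=0.1, mz_tol_ppm=5.0):
    """Flag detected peaks whose retention time and accurate mass match
    a reference entry within the given tolerances."""
    hits = []
    for tr, mz in peaks:
        for name, ref_tr, ref_mz in REFERENCE:
            ppm = abs(mz - ref_mz) / ref_mz * 1e6
            if abs(tr - ref_tr) <= tr_tol_min and ppm <= mz_tol_ppm:
                hits.append((name, tr, mz, ppm))
    return hits

# Illustrative full-scan peak list: (tR in min, measured m/z)
peaks = [(1.93, 240.1601), (3.02, 195.0877), (5.45, 329.2590)]
for name, tr, mz, ppm in screen(peaks):
    print(f"{name}: tR {tr} min, m/z {mz} ({ppm:.1f} ppm)")
```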

Relevance: 100.00%

Abstract:

Central serous chorioretinopathy (CSCR) is a vision-threatening eye disease with no validated treatment and unknown pathogenesis. In CSCR, dilation and leakage of choroid vessels underneath the retina cause subretinal fluid accumulation and retinal detachment. Because glucocorticoids induce and aggravate CSCR and are known to bind to the mineralocorticoid receptor (MR), CSCR may be related to inappropriate MR activation. Our aim was to assess the effect of MR activation on the rat choroidal vasculature and translate the results to CSCR patients. Intravitreous injection of the glucocorticoid corticosterone in rat eyes induced choroidal enlargement. Aldosterone, a specific MR activator, elicited the same effect, producing choroid vessel dilation and leakage. We identified an underlying mechanism of this effect: aldosterone upregulated the endothelial vasodilatory potassium channel KCa2.3, and its blockade prevented aldosterone-induced thickening. To translate these findings, we treated 2 patients with chronic nonresolved CSCR with oral eplerenone, a specific MR antagonist, for 5 weeks, and observed impressive and rapid resolution of retinal detachment and choroidal vasodilation as well as improved visual acuity. The benefit was maintained 5 months after eplerenone withdrawal. Our results identify MR signaling as a pathway controlling choroidal vascular bed relaxation and provide a pathogenic link with human CSCR, suggesting that MR blockade could be used therapeutically to reverse choroidal vasculopathy.

Relevance: 100.00%

Abstract:

Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches beyond the local scale still represents a major challenge, yet is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure that is specifically targeted towards larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, my algorithm links the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. I first illustrate the application of this novel data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of the hydraulic and electrical conductivities and spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of this method is tested and verified by performing and comparing flow and transport simulations through the original and simulated hydraulic conductivity fields. The corresponding results indicate that the proposed data integration procedure does indeed allow for obtaining faithful estimates of the larger-scale hydraulic conductivity structure and reliable predictions of the transport characteristics over medium- to regional-scale distances. The approach is then applied to a corresponding field scenario consisting of collocated high-resolution measurements of the electrical conductivity, as measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, as estimated from electromagnetic flowmeter and slug test measurements, in combination with spatially exhaustive, low-resolution electrical conductivity estimates obtained from surface-based electrical resistivity tomography (ERT). The corresponding results indicate that the newly developed data integration approach is indeed capable of adequately capturing both the small-scale heterogeneity as well as the larger-scale trend of the prevailing hydraulic conductivity field.
The results also indicate that this novel data integration approach is remarkably flexible and robust, and hence can be expected to be applicable to a wide range of geophysical and hydrological data at all scale ranges. In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems, in order to allow for a more accurate and realistic quantification of the uncertainty associated with the inferred models. Focusing on a series of pertinent crosshole georadar tomographic examples, I investigate two classes of geostatistical resampling strategies with regard to their ability to efficiently and accurately generate independent realizations from the Bayesian posterior distribution. The corresponding results indicate that, despite its popularity, sequential resampling is rather inefficient at drawing independent posterior samples for realistic synthetic case studies, notably for the practically common and important scenario of pronounced spatial correlation between model parameters. To address this issue, I have developed a new gradual-deformation-based perturbation approach, which is flexible with regard to the number of model parameters as well as the perturbation strength. Compared to sequential resampling, this newly proposed approach proves highly effective in decreasing the number of iterations required to draw independent samples from the Bayesian posterior distribution.
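The gradual-deformation idea mentioned at the end can be stated compactly: a proposal that mixes the current Gaussian field with an independent prior draw via cos/sin weights leaves the Gaussian prior invariant, and the mixing angle tunes the perturbation strength. A minimal sketch on a 1D field with an assumed exponential covariance; it illustrates the proposal mechanism only, not the thesis's full MCMC inversion:

```python
import numpy as np

def gradual_deformation_proposal(m_current, sample_prior, theta, rng):
    """Propose a correlated perturbation of a zero-mean Gaussian field:
    cos/sin mixing with an independent prior draw preserves the prior
    N(0, C); theta in (0, pi/2] controls the perturbation strength."""
    z = sample_prior(rng)
    return np.cos(theta) * m_current + np.sin(theta) * z

# Illustrative 1D Gaussian field with exponential covariance (assumed)
n, corr_len = 200, 15.0
x = np.arange(n)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
L = np.linalg.cholesky(C + 1e-10 * np.eye(n))
sample_prior = lambda rng: L @ rng.standard_normal(n)

rng = np.random.default_rng(2)
m = sample_prior(rng)
for _ in range(10):       # chain of mild, spatially correlated moves
    m = gradual_deformation_proposal(m, sample_prior, theta=0.2, rng=rng)
print("field mean %.3f, variance %.3f" % (m.mean(), m.var()))
```

Because the proposal preserves the prior exactly, the Metropolis acceptance test then only needs to compare data likelihoods, which is what makes the strength parameter easy to tune.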

Relevance: 100.00%

Abstract:

This paper characterizes and evaluates the potential of three commercial CT iterative reconstruction methods (ASIR, VEO and iDose4) for dose reduction and image quality improvement. We measured CT number accuracy, standard deviation (SD), noise power spectrum (NPS) and modulation transfer function (MTF) metrics on Catphan phantom images, while five human observers performed four-alternative forced-choice (4AFC) experiments to assess the detectability of low- and high-contrast objects embedded in two pediatric phantoms. Results show that 40% and 100% ASIR, as well as iDose4 levels 3 and 6, do not affect CT number and strongly decrease image noise, with the relative SD remaining constant over a large dose range. However, while ASIR produces a shift of the NPS curve apex with respect to filtered back-projection (FBP), less change is observed with iDose4. With the second-generation iterative reconstruction VEO, the physical metrics improve even further: SD decreased by 70.4% at 0.5 mGy and spatial resolution improved by 37% (MTF 50%). The 4AFC experiments show that few improvements in detection task performance are obtained with ASIR and iDose4, whereas VEO makes excellent detection possible even at an ultra-low dose (0.3 mGy), leading to a potential dose reduction of a factor of 3 to 7 (67%-86%). In spite of its longer reconstruction time and the fact that clinical studies are still required to complete these results, VEO clearly confirms the tremendous potential of iterative reconstruction for dose reduction in CT and appears to be an important tool for patient follow-up, especially for pediatric patients, where the cumulative lifetime dose remains high.
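Of the physical metrics above, the noise power spectrum is the least obvious to estimate: it is the scaled, ensemble-averaged squared Fourier transform of mean-subtracted uniform-phantom ROIs. A minimal sketch of one standard estimator, sanity-checked on white noise, whose NPS integral must recover the pixel variance; all values are illustrative:

```python
import numpy as np

def nps_2d(rois, pixel_mm):
    """2D noise power spectrum from uniform-phantom ROIs:
    NPS = pixel_area / N_pix * mean |DFT(ROI - mean)|^2 over ROIs."""
    rois = np.asarray(rois, float)
    n_roi, ny, nx = rois.shape
    rois = rois - rois.mean(axis=(1, 2), keepdims=True)   # remove mean level (DC)
    spectra = np.abs(np.fft.fft2(rois)) ** 2
    return pixel_mm**2 / (nx * ny) * spectra.mean(axis=0)

# Illustrative white-noise ROIs: the NPS should be flat and its integral
# over frequency should equal the pixel variance (here 10^2 = 100)
rng = np.random.default_rng(3)
rois = rng.normal(0, 10, (32, 64, 64))
nps = nps_2d(rois, pixel_mm=0.5)
var_from_nps = nps.sum() / (0.5 * 64) ** 2   # sum * df_x * df_y, df = 1/(N*pixel)
print(var_from_nps)                          # ~100
```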

Relevance: 100.00%

Abstract:

Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale for the purpose of improving predictions of groundwater flow and solute transport. However, extending corresponding approaches to the regional scale still represents one of the major challenges in the domain of hydrogeophysics. To address this problem, we have developed a regional-scale data integration methodology based on a two-step Bayesian sequential simulation approach. Our objective is to generate high-resolution stochastic realizations of the regional-scale hydraulic conductivity field in the common case where there exist spatially exhaustive but poorly resolved measurements of a related geophysical parameter, as well as highly resolved but spatially sparse collocated measurements of this geophysical parameter and the hydraulic conductivity. To integrate this multi-scale, multi-parameter database, we first link the low- and high-resolution geophysical data via a stochastic downscaling procedure. This is followed by relating the downscaled geophysical data to the high-resolution hydraulic conductivity distribution. After outlining the general methodology of the approach, we demonstrate its application to a realistic synthetic example where we consider as data high-resolution measurements of the hydraulic and electrical conductivities at a small number of borehole locations, as well as spatially exhaustive, low-resolution estimates of the electrical conductivity obtained from surface-based electrical resistivity tomography. The different stochastic realizations of the hydraulic conductivity field obtained using our procedure are validated by comparing their solute transport behaviour with that of the underlying 'true' hydraulic conductivity field. We find that, even in the presence of strong subsurface heterogeneity, our proposed procedure allows for the generation of faithful representations of the regional-scale hydraulic conductivity structure and reliable predictions of solute transport over long, regional-scale distances.
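As a caricature of this two-step procedure, the sketch below downscales a coarse geophysical grid by bilinear interpolation plus an unresolved-variability term, then draws the hydraulic parameter from a regression calibrated on collocated borehole data. This deliberately replaces the geostatistical downscaling and sequential conditional simulation of the actual method with the simplest possible stand-ins; all values are synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)

# Collocated borehole data: log10 electrical conductivity (sigma) vs
# log10 hydraulic conductivity (K) -- synthetic, linearly related
sigma_bh = rng.normal(-1.5, 0.3, 30)
logK_bh = 2.0 * sigma_bh - 3.0 + rng.normal(0, 0.2, 30)

# Step 1 (downscaling, caricature): bilinear interpolation of the coarse
# sigma grid onto the fine grid plus small-scale variability
coarse = rng.normal(-1.5, 0.3, (8, 8))
ny, nx = 64, 64
yi, xi = np.linspace(0, 7, ny), np.linspace(0, 7, nx)
y0, x0 = np.floor(yi).astype(int), np.floor(xi).astype(int)
y1, x1 = np.minimum(y0 + 1, 7), np.minimum(x0 + 1, 7)
wy, wx = (yi - y0)[:, None], (xi - x0)[None, :]
sigma_fine = ((1 - wy) * (1 - wx) * coarse[np.ix_(y0, x0)]
              + (1 - wy) * wx * coarse[np.ix_(y0, x1)]
              + wy * (1 - wx) * coarse[np.ix_(y1, x0)]
              + wy * wx * coarse[np.ix_(y1, x1)])
sigma_fine += rng.normal(0, 0.1, (ny, nx))   # unresolved small-scale variability

# Step 2: draw log10-K per fine cell from the sigma-K relationship
# calibrated on the borehole data (regression mean + residual scatter)
b, a = np.polyfit(sigma_bh, logK_bh, 1)
resid_sd = np.std(logK_bh - (a + b * sigma_bh))
logK_fine = a + b * sigma_fine + rng.normal(0, resid_sd, (ny, nx))
print("simulated log10-K: mean %.2f, SD %.2f" % (logK_fine.mean(), logK_fine.std()))
```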

Relevance: 100.00%

Abstract:

Several Permian-Triassic boundary sections occur in various structural units within Hungary. These sections represent different facies zones of the western Palaeotethys margin. The Gardony-1 core in the NE part of the Transdanubian Range typically represents the inner ramp, while the Balvany section in the Bukk Mountains of northern Hungary represents an outer ramp setting. The two sections have different patterns in their δ13C values. The Balvany section shows a continuous change towards more negative δ13C values starting at the first biotic decline, followed by a sharp, quasi-symmetric negative peak at the second decline. The appearance of the δ13C peak has no relationship to the lithology and occurs within a shale with low overall carbonate content, indicating that the peak is not related to diagenesis or other secondary influences. Instead, the shift and the peak reflect primary processes related to changes in environmental conditions. The continuous shift in δ13C values is most probably related to a decrease in bioproductivity, whereas the sharp peak can be attributed to an addition of carbon strongly depleted in 13C to the ocean-atmosphere system. The most plausible model is a massive release of methane hydrate. The quasi-symmetric pattern suggests a rapid warming-cooling cycle or physical unroofing of sediments through slope failure releasing methane hydrate. The Gardony-1 core shows a continuous negative δ13C shift starting below the P-T boundary. However, the detailed analyses revealed a sharp δ13C peak in the boundary interval, just below the major biotic decline, although its magnitude does not reach that observed in the Balvany section. Based on careful textural examination and high-resolution stable isotope microanalyses, we suggest that the suppression of the δ13C peak that is common in oolitic boundary sections is due to the combined effects of condensed sedimentation, sediment reworking and erosion, as well as perhaps diagenesis. © 2005 Elsevier B.V. All rights reserved.

Relevance: 100.00%

Abstract:

Originally invented for topographic imaging, atomic force microscopy (AFM) has evolved into a multifunctional biological toolkit, enabling the measurement of structural and functional details of cells and molecules. Its versatility and the large scope of information it can yield make it an invaluable tool in any biologically oriented laboratory where researchers need to characterize living samples as well as single molecules under quasi-physiological conditions and with nanoscale resolution. In the last 20 years, AFM has revolutionized the characterization of microbial cells by allowing a better understanding of their cell wall and of the mechanism of action of drugs, and by becoming itself a powerful diagnostic tool for the study of bacteria. Indeed, AFM is much more than a high-resolution microscopy technique. It can reconstruct force maps that can be used to explore the nanomechanical properties of microorganisms and to probe at the same time the morphological and mechanical modifications induced by external stimuli. Furthermore, it can map chemical species or specific receptors with nanometric resolution directly on the membranes of living organisms. In summary, AFM offers new capabilities and a more in-depth insight into the structure and mechanics of biological specimens, with unrivaled spatial and force resolution. Its application to the study of bacteria is extremely significant, since it has already delivered important information on the metabolism of these small microorganisms and, through new and exciting technical developments, will shed more light on the real-time interaction of antimicrobial agents with bacteria.

Relevance: 100.00%

Abstract:

Conflicts among siblings are widespread, and their resolution involves complex physical and communication tools. Observations in the barn owl Tyto alba have shown that siblings vocally communicate in the absence of parents to negotiate priority of access to the impending food resources that parents will bring. In the present paper, we hypothesize and provide correlative evidence that, after a parent brings a food item to the progeny, sibling competition involves vocal sib-sib communication. A food item takes a long time to be entirely consumed, and hence siblings continue to compete over prey monopolization even after a parent has given a food item to a single offspring. When physical competition is pronounced, and the risk of prey theft is therefore high, the individual that received the prey item consumes it in a concealed place. Concomitantly, nestlings vocalize intensely, probably to signal to their siblings their motivation not to share the food item, since this vocal behaviour was particularly frequent in younger individuals, for which the risk of being robbed is higher than for their older siblings. Furthermore, nestlings consumed a food item more rapidly when their siblings vocalized intensely, presumably because the intensity of siblings' vocalizations is associated with the risk of prey theft. Our correlative study suggests that sibling competition favoured the evolution of sib-sib communication in a wide range of situations.

Relevance: 100.00%

Abstract:

Background: New ways of representing diffusion data have emerged recently and have made it possible to create structural connectivity maps of healthy brains (Hagmann P et al. (2008)). These maps make it possible to study alterations over the entire brain at the connection and network level, which is of high interest in complex disconnection diseases like schizophrenia. In this pathology, where multiple lines of evidence suggest an association with abnormalities in neural circuitry and impaired structural connectivity, diffusion imaging has been widely applied. Despite the many findings, most of this research uses only scalar maps derived from diffusion to show that markers of white matter integrity are diminished in several areas of the brain (Kyriakopoulos M et al. (2008)). Thanks to the structural connection matrix constructed by whole-brain tractography, we report in this work the network connectivity alterations in schizophrenic patients. Methods: We investigated 13 schizophrenic patients, as assessed by the DIGS (Diagnostic Interview for Genetic Studies, DSM-IV criteria), and 13 healthy controls. From each volunteer we obtained DT-MRI and Q-ball imaging datasets as well as a high-resolution anatomical T1 acquired during the same session on a 3 T clinical MRI scanner. The controls were matched on age, gender, handedness and parental socioeconomic status. For all subjects, a low-resolution connection matrix was obtained by dividing the cortex into 66 gyral-based ROIs. A higher-resolution matrix was constructed using 250 ROIs as described in Hagmann P et al. (2008). These ROIs were used jointly with diffusion tractography to construct the high- and low-resolution density connection matrices for each subject. In a first step, the matrices of the two groups were compared in terms of connectivity, not in terms of density, to check whether the pathological group shows a loss of global connectivity; in this context the density connection matrices were binarized. As some local connectivity changes were also suspected, especially in frontal and temporal areas, we also looked for the areas where connectivity showed significant changes. Results: The statistical analysis revealed a significant loss of global connectivity in the schizophrenic patients' brains at the 5% level. Furthermore, by constructing statistics that represent local connectivity within the anatomical regions (66 ROIs) using the data obtained at the finest resolution (250 ROIs) to improve robustness, we found the regions that cause this significant loss of connectivity. Significance was retained after multiple-testing correction by the False Discovery Rate. Discussion: The detected regions are largely the same as those reported in the literature as being involved in schizophrenia. Most of the connectivity decreases were noted in both hemispheres in fronto-frontal and temporo-temporal regions, as well as between some temporal ROIs and their adjacent ROIs in the parietal and occipital lobes.
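The global-connectivity comparison described in the Methods amounts to binarizing each subject's density connection matrix and comparing the resulting edge counts between groups. A minimal sketch with randomly generated 66-ROI matrices standing in for the tractography-derived ones; the test statistic shown is an ordinary t-test, which may differ from the study's actual procedure:

```python
import numpy as np
from scipy import stats

def global_connectivity(density_matrix, threshold=0.0):
    """Binarize a fiber-density connection matrix and count the
    connections present (upper triangle of the symmetric matrix)."""
    A = (np.asarray(density_matrix) > threshold).astype(int)
    iu = np.triu_indices_from(A, k=1)
    return A[iu].sum()

# Illustrative random 66-ROI density matrices for 13 controls / 13 patients
rng = np.random.default_rng(5)
def fake_group(n_subj, p_edge):
    mats = []
    for _ in range(n_subj):
        M = (rng.random((66, 66)) < p_edge) * rng.random((66, 66))
        mats.append(np.triu(M, 1) + np.triu(M, 1).T)   # symmetric, zero diagonal
    return mats

controls = [global_connectivity(M) for M in fake_group(13, 0.30)]
patients = [global_connectivity(M) for M in fake_group(13, 0.25)]
t, p = stats.ttest_ind(controls, patients)
print(f"mean edges: controls {np.mean(controls):.0f}, "
      f"patients {np.mean(patients):.0f}, p = {p:.3f}")
```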

Relevance: 100.00%

Abstract:

Four standard radiation qualities (from RQA 3 to RQA 9) were used to compare the imaging performance of a computed radiography (CR) system (general purpose and high resolution phosphor plates of a Kodak CR 9000 system), a selenium-based direct flat panel detector (Kodak Direct View DR 9000), and a conventional screen-film system (Kodak T-MAT L/RA film with a 3M Trimax Regular screen of speed 400) in conventional radiography. Reference exposure levels were chosen according to the manufacturer's recommendations to be representative of clinical practice (exposure index of 1700 for digital systems and a film optical density of 1.4). With the exception of the RQA 3 beam quality, the exposure levels needed to produce a mean digital signal of 1700 were higher than those needed to obtain a mean film optical density of 1.4. In spite of intense developments in the field of digital detectors, screen-film systems are still very efficient detectors for most of the beam qualities used in radiology. An important outcome of this study is the behavior of the detective quantum efficiency of the digital radiography (DR) system as a function of beam energy. The practice of users to increase beam energy when switching from a screen-film system to a CR system, in order to improve the compromise between patient dose and image quality, might not be appropriate when switching from screen-film to selenium-based DR systems.
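The detective quantum efficiency discussed in this comparison is conventionally estimated from the measured presampling MTF, the normalized noise power spectrum and the incident photon fluence. A minimal sketch of that standard estimator with purely illustrative values; the fluence and NPS numbers are assumptions, not measurements from the study:

```python
import numpy as np

def dqe(mtf, nps, mean_signal, fluence_per_mm2):
    """Frequency-dependent detective quantum efficiency from a measured
    MTF and NPS: DQE(f) = S^2 * MTF(f)^2 / (q * NPS(f))."""
    return mean_signal**2 * mtf**2 / (fluence_per_mm2 * nps)

# Illustrative values at a few spatial frequencies (lp/mm)
f = np.array([0.5, 1.0, 2.0, 3.0])
mtf = np.array([0.90, 0.72, 0.40, 0.22])            # assumed detector response
nps = np.array([4.1e-5, 3.6e-5, 2.9e-5, 2.5e-5])    # assumed normalized NPS (mm^2)
print(dqe(mtf, nps, mean_signal=1.0, fluence_per_mm2=30000))  # photons/mm^2
```

The energy dependence noted in the abstract enters through all three inputs: fluence per unit exposure, absorption (and hence MTF/NPS) all change with beam quality, which is why a detector tuned for one RQA beam may lose DQE at another.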