Abstract:
The Mississippi Valley-type zinc and lead deposits at Topla (250,150 metric tons (t) of ore grading 1.0 wt % Zn and 3.3 wt % Pb) and Mezica (19 million metric tons (Mt) of ore grading 5.3 wt % Pb and 2.7 wt % Zn) occur within the Middle to Upper Triassic platform carbonate rocks of the northern Karavanke/Drau Range geotectonic units of the Eastern Alps, Slovenia. The ore and host rocks of these deposits were investigated by a combination of inorganic and organic geochemical methods to determine major, trace, and rare earth element (REE) concentrations, hydrocarbon distributions, and stable isotope ratios of carbonates, kerogen, extractable organic matter, and individual hydrocarbons. These data, combined with sedimentological evidence, provide insight into the paleoenvironmental conditions at the site of ore formation. The carbonate isotope composition, the REE patterns, and the distribution of hydrocarbon biomarkers (normal alkanes and steranes) suggest a marine depositional environment. At Topla, relatively high concentrations of redox-sensitive trace elements (V, Mo, U) in the host dolostones and REE patterns parallel to that of the North American shale composite suggest that the sediments were deposited in a reducing environment. Anoxic conditions enhanced the preservation of organic matter and resulted in relatively high total organic carbon contents (up to 0.4 wt %). The isotopic composition of the kerogen (δ13C(kerogen) = -29.4 to -25.0‰, δ15N(kerogen) = -13.6 to 6.8‰) suggests that marine algae and/or bacteria were the main source of organic carbon, with a very minor contribution from detrital continental plants and a varying degree of alteration. Extractable organic matter from Topla ore is generally depleted in 13C compared to the associated kerogen, which is consistent with an indigenous source of the bitumens. The mineralization correlates with δ15N(kerogen) values around 0‰, 13C-depleted kerogen, 13C-enriched n-heptadecane, and relatively high concentrations of bacterial hydrocarbon biomarkers, indicating a high cyanobacterial biomass at the site of ore formation. Abundant dissimilatory sulfate-reducing bacteria, feeding on the cyanobacterial remains, led to the accumulation of biogenic H2S in the pore water of the sediments. This biogenic H2S was mainly incorporated into sedimentary organic matter and diagenetic pyrite. Higher bacterial activity at the ore site is also indicated by specific concentration ratios of hydrocarbons, which are roughly correlated with total Pb plus Zn contents. This correlation is consistent with mixing of hydrothermal metal-rich fluids and local bacteriogenic sulfide sulfur. The new geochemical data provide supporting evidence that Topla is a low-temperature Mississippi Valley-type deposit formed in an anoxic supratidal saline to hypersaline environment. A laminated cyanobacterial mat with abundant sulfate-reducing bacteria was the main site of sulfate reduction.
Abstract:
Ultrasound segmentation is a challenging problem due to the inherent speckle and artifacts such as shadows, attenuation, and signal dropout. Existing methods need to include strong priors, such as shape priors or analytical intensity models, to succeed in the segmentation. However, such priors tend to limit these methods to a specific target or imaging setting, and they are not always applicable to pathological cases. This work introduces a semi-supervised segmentation framework for ultrasound imaging that alleviates this limitation of fully automatic segmentation: it is applicable to any kind of target and imaging setting. Our methodology represents the ultrasound image as a graph of image patches and uses user-assisted initialization with labels, which act as soft priors. The segmentation problem is formulated as a continuous minimum cut problem and solved with an efficient optimization algorithm. We validate our segmentation framework on clinical ultrasound images (prostate, fetus, and tumors of the liver and eye). We obtain high agreement with the ground truth provided by expert medical delineations in all applications (average Dice coefficient of 94%), and the proposed algorithm compares favorably with methods from the literature.
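The continuous minimum cut formulation is not spelled out in the abstract, so the following is only a minimal discrete sketch of the same idea, assuming a 4-connected graph over pixels (standing in for image patches), Gaussian intensity-similarity weights, and user seeds wired to source/sink terminals; the function names and the parameter sigma are illustrative, not the authors' implementation.

```python
import numpy as np
import networkx as nx

def segment_min_cut(image, fg_seeds, bg_seeds, sigma=0.1, k=1e6):
    """Graph-cut segmentation sketch: pixels are nodes, user seeds act as priors."""
    h, w = image.shape
    G = nx.DiGraph()
    src, snk = "SRC", "SNK"
    nid = lambda r, c: r * w + c

    # Pairwise (n-link) capacities from intensity similarity.
    for r in range(h):
        for c in range(w):
            for dr, dc in ((0, 1), (1, 0)):
                r2, c2 = r + dr, c + dc
                if r2 < h and c2 < w:
                    wgt = np.exp(-((image[r, c] - image[r2, c2]) ** 2) / (2 * sigma ** 2))
                    G.add_edge(nid(r, c), nid(r2, c2), capacity=wgt)
                    G.add_edge(nid(r2, c2), nid(r, c), capacity=wgt)

    # Terminal (t-link) capacities: large capacity ties each seed to its label.
    for r, c in fg_seeds:
        G.add_edge(src, nid(r, c), capacity=k)
    for r, c in bg_seeds:
        G.add_edge(nid(r, c), snk, capacity=k)

    _, (reachable, _) = nx.minimum_cut(G, src, snk)
    mask = np.zeros((h, w), dtype=bool)
    for r in range(h):
        for c in range(w):
            mask[r, c] = nid(r, c) in reachable
    return mask

def dice(a, b):
    """Dice similarity coefficient, the validation metric quoted in the abstract."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Toy usage: a bright disc on a dark background with a few user seeds.
yy, xx = np.mgrid[0:32, 0:32]
truth = (yy - 16) ** 2 + (xx - 16) ** 2 < 64
img = truth.astype(float) + 0.05 * np.random.rand(32, 32)
seg = segment_min_cut(img, fg_seeds=[(16, 16)], bg_seeds=[(2, 2), (30, 30)])
print("Dice vs. noiseless disc:", dice(seg, truth))
```

In a real setting the nodes would be image patches rather than pixels and the seed labels would come from the user-assisted initialization described above.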
Abstract:
Although aneuploidy has many possible causes, it often results from underlying chromosomal instability (CIN) leading to an unstable karyotype with cell-to-cell variation and multiple subclones. To test for the presence of CIN in high hyperdiploid acute lymphoblastic leukemia (HeH ALL) at diagnosis, we investigated 20 patients (10 HeH ALL and 10 non-HeH ALL) using automated four-color interphase fluorescence in situ hybridization (I-FISH) with centromeric probes for chromosomes 4, 6, 10, and 17. In HeH ALL, the proportion of abnormal cells ranged from 36.3% to 92.4%, and a variety of aneuploid populations were identified. Compared with conventional cytogenetics, I-FISH revealed numerous additional clones, some of them very small. To investigate the nature and origin of this clonal heterogeneity, we determined average numerical CIN values for all four chromosomes together and for each chromosome and patient group. The CIN values in HeH ALL were relatively high (range, 22.2-44.7%) compared with those in non-HeH ALL (3.2-6.4%), thus demonstrating the presence of numerical CIN in HeH ALL at diagnosis. We conclude that numerical CIN may be at the origin of the high level of clonal heterogeneity revealed by I-FISH in HeH ALL at presentation, which would corroborate the potential role of CIN in tumor pathogenesis.
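The abstract does not define how the numerical CIN value is computed; a common convention, assumed here, is the percentage of nuclei whose centromeric signal count for a given chromosome deviates from the sample's modal count. The sketch below illustrates that assumed definition on made-up counts.

```python
from collections import Counter

def cin_percentage(signal_counts):
    """Assumed CIN definition: percent of cells whose per-chromosome FISH signal
    count deviates from the modal (most frequent) count in the sample."""
    modal = Counter(signal_counts).most_common(1)[0][0]
    deviant = sum(1 for c in signal_counts if c != modal)
    return 100.0 * deviant / len(signal_counts)

# Hypothetical centromeric signal counts (chromosome 4) for 20 nuclei of one patient.
counts_chr4 = [3, 3, 3, 2, 3, 4, 3, 3, 2, 3, 3, 3, 4, 3, 3, 3, 2, 3, 3, 3]
print(f"CIN (chr 4): {cin_percentage(counts_chr4):.1f}%")  # 25.0% with these toy data

# Averaging such values over the four probed chromosomes would give a per-patient CIN value.
```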
Abstract:
This study compared adherence (persistence and execution) during pregnancy and postpartum in HIV-positive women who had taken part in the adherence-enhancing program of the Community Pharmacy of the Department of Ambulatory Care and Community Medicine in Lausanne between 2004 and 2012. This interdisciplinary program combined electronic drug monitoring and semi-structured, repeated motivational interviews. This was a retrospective, observational study. The observation period extended from the first adherence visit after the last menstrual period until 6 months after childbirth. Medication-taking was recorded by electronic drug monitoring. Socio-demographic and delivery data were collected from the Swiss HIV Cohort database. Adherence data, barriers, and facilitators were collected from the pharmacy database. Electronic data were reconciled with pill counts and interview notes in order to include reported pocket doses. Execution was analyzed over 3-day periods by a mixed-effect logistic model, separating the time before and after childbirth. This model allowed us to estimate different time slopes for the two periods and to capture a sudden fall associated with childbirth. Twenty-five pregnant women were included. Median age was 29 (IQR: 26.5, 32.0); the women were mostly black (n = 17, 68%) and took a cART combining protease and nucleoside reverse transcriptase inhibitors (n = 24, 96%). Eleven women (44%) were ART-naïve at the beginning of pregnancy. Twenty women (80%) were included in the program because of pregnancy. Women were included at all stages of pregnancy. Six women (24%) stopped the program during pregnancy, 3 (12%) at delivery, and 4 (16%) during postpartum, while 12 (48%) were still in the program at the end of the observation period. The median number of visits was 4 (3.0, 6.3) during pregnancy and 3 (0.8, 6.0) during postpartum. Execution was continuously high during pregnancy, low at the beginning of postpartum, and increased gradually during the 6 months of postpartum. Major barriers to adherence were medication adverse events and difficulties in the daily routine. Facilitators were motivation for promoting child health and social support. The dramatic drop and very slow increase in cART adherence during postpartum might result in viral rebound and drug resistance. Although much attention is devoted to pregnant women, interdisciplinary care should also be provided to women in the community during the first trimester postpartum to support them in sustaining cART adherence.
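A minimal sketch of the execution analysis described above: 3-day execution outcomes regressed on separate time slopes before and after childbirth plus a step at delivery. For simplicity it fits a plain logistic regression with statsmodels rather than the mixed-effect model used in the study (so repeated measures per woman are ignored); the simulated data and coefficients are illustrative only.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated data: one row per 3-day period per woman; t = 0 at childbirth (weeks).
n_women, n_periods = 25, 40
t = np.tile(np.linspace(-30, 26, n_periods), n_women)   # weeks relative to delivery
post = (t > 0).astype(float)                             # step at childbirth
slope_pre = t * (1 - post)                               # time slope before delivery
slope_post = t * post                                    # time slope after delivery

# True pattern mimicking the findings: high and stable before, sudden fall, slow recovery.
logit_p = 3.0 + 0.0 * slope_pre - 2.5 * post + 0.08 * slope_post
y = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))          # 1 = dose taken as prescribed

X = sm.add_constant(np.column_stack([slope_pre, post, slope_post]))
fit = sm.Logit(y, X).fit(disp=False)
print(fit.params)  # [intercept, pre-delivery slope, drop at delivery, post-delivery slope]
```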
Abstract:
Purpose: To assess the phenotype of patients in a large three-generation Swiss family with X-linked retinitis pigmentosa (XLRP) due to a novel nonsense mutation, Glu20stop, in the RP2 gene, and to correlate it with the genotype. Methods: Six affected patients (1 male, 5 females; age range 23-73 years) underwent a complete ophthalmologic examination. All had fundus autofluorescence imaging, standardized electroretinography, Goldmann visual fields, and optical coherence tomography. In addition, the medical records of 2 affected male patients were reviewed. Blood samples were taken for molecular analysis. Results: The male patients were severely affected at a young age, with early macular involvement. The youngest, a 23-year-old male, also had high myopia and visual acuity of less than 0.05 bilaterally according to the Snellen ETDRS chart. All 5 female carriers had some degree of rod-cone dystrophy, but no macular involvement. Visual acuity was 1.0 in the younger carriers, while the 73-year-old had a VA of 0.5. Two females had mild myopia (range -0.75 to -2 D) and one had anisometropia of 3.5 D, with the more severely affected eye being myopic. Three out of 5 female carriers had optic nerve drusen. Conclusions: We report a novel Glu20stop mutation in the RP2 gene, which is a rare cause of XLRP. Our description of a severe phenotype in male patients with high myopia and early macular atrophy confirms previous reports. Unlike previous reports, all our female carriers had RP, but no macular involvement or high myopia. The identifiable phenotype for RP2-XLRP aids clinical diagnosis and targeted genetic screening.
Abstract:
A high throughput method was designed to produce hyperpolarized gases by combining low-temperature dynamic nuclear polarization with a sublimation procedure. It is illustrated by applications to 129Xe nuclear magnetic resonance in xenon gas, leading to a signal enhancement of 3 to 4 orders of magnitude compared to the room-temperature thermal equilibrium signal at 7.05 T.
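To put the quoted enhancement in perspective, the sketch below evaluates the thermal equilibrium polarization of 129Xe at 7.05 T and room temperature using the standard spin-1/2 Boltzmann expression P = tanh(γħB0 / 2kT), then applies the reported 3 to 4 orders of magnitude gain; the gyromagnetic ratio is a literature constant and the resulting figures are an order-of-magnitude check, not values taken from the paper.

```python
import numpy as np

hbar = 1.054571817e-34      # J s
k_B = 1.380649e-23          # J / K
gamma_xe129 = -7.441e7      # rad s^-1 T^-1 (129Xe gyromagnetic ratio, literature value)

B0, T = 7.05, 293.0         # field (T) and room temperature (K)

# Spin-1/2 thermal equilibrium polarization: P = tanh(gamma * hbar * B0 / (2 * k_B * T))
P_thermal = np.tanh(abs(gamma_xe129) * hbar * B0 / (2 * k_B * T))
print(f"Thermal 129Xe polarization at {B0} T, {T} K: {P_thermal:.2e}")   # ~7e-6

# A signal enhancement of 3 to 4 orders of magnitude implies hyperpolarization of roughly:
for enhancement in (1e3, 1e4):
    print(f"x{enhancement:.0e} enhancement -> P ~ {enhancement * P_thermal:.1%}")
```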
Abstract:
This thesis is concerned with the spatio-temporal brain mechanisms of visual food perception as investigated by electrical neuroimaging. Due to the increasing prevalence of obesity and its associated challenges for public health care, there is a need to better understand the behavioral and brain processes underlying food perception and food-based decision-making. The first study (Study A) of this thesis addressed the role of repeated exposure to visual food cues. In our everyday lives we constantly and repeatedly encounter food, and these exposures influence our food choices and preferences. In Study A, we therefore applied electrical neuroimaging analyses of visual evoked potentials to investigate the spatio-temporal brain dynamics linked to the repeated viewing of high- and low-energy food cues (published manuscript: "The role of energetic value in dynamic brain response adaptation during repeated food image viewing" (Lietti et al., 2012)). In this study, we found that repetitions differentially affect behavioral and brain mechanisms when high-energy foods, as opposed to low-energy foods and non-food control objects, are viewed. The representation of high-energy food remained invariant between initial and repeated exposures, indicating that the sight of energy-dense food induces less behavioral and neural adaptation than the sight of low-energy food and non-food control objects. We discuss this finding in the context of the higher salience (due to greater motivation and higher reward or hedonic valuation) of energy-dense food, which likely generates a more mnemonically stable representation. In turn, this more invariant representation of energy-dense food may (partially) explain why these foods are over-consumed despite detrimental health consequences. In Study B we investigated food responsiveness in patients who had undergone Roux-en-Y gastric bypass surgery to treat severe obesity. This type of gastric bypass surgery is known to alter not only food appreciation, but also the secretion patterns of adipokines and gut peptides. Study B aimed at a comprehensive and interdisciplinary investigation of differences along the gut-brain axis in bypass-operated patients as opposed to weight-matched non-operated controls. On the one hand, the spatio-temporal brain dynamics of the visual perception of high- vs. low-energy foods under differing states of motivation towards food intake (i.e., pre- and post-prandial) were assessed and compared between groups. On the other hand, peripheral gut hormone measures were taken in the pre- and post-prandial nutritional states and compared between groups. In order to evaluate alterations in responsiveness along the gut-brain axis related to gastric bypass surgery, correlations between the two sets of measures were compared between the two participant groups. The results revealed that Roux-en-Y gastric bypass surgery alters the spatio-temporal brain dynamics of the perception of high- and low-energy food cues, as well as the responsiveness along the gut-brain axis. The potential role of these response alterations is discussed in relation to previously observed changes in physiological factors and food intake behavior after Roux-en-Y gastric bypass surgery. By doing so, we highlight potential behavioral, neural and endocrine (i.e., gut hormone) targets for the future development of intervention strategies for deviant eating behavior and obesity.
Together, the studies showed that the visual representation of foods in the brain is plastic and that modulations in neural activity are already noted at early stages of visual processing. Different factors of influence, such as repeated exposure, Roux-en-Y gastric bypass surgery, motivation (nutritional state), and the energy density of the visually perceived food, were identified.
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches beyond the local scale still represents a major challenge, yet it is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure that is specifically targeted towards larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, my algorithm links the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. I first illustrate the application of this novel data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of the hydraulic and electrical conductivities and spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of this method is tested and verified by performing and comparing flow and transport simulations through the original and simulated hydraulic conductivity fields. The corresponding results indicate that the proposed data integration procedure does indeed allow for obtaining faithful estimates of the larger-scale hydraulic conductivity structure and reliable predictions of the transport characteristics over medium- to regional-scale distances. The approach is then applied to a corresponding field scenario consisting of collocated high-resolution measurements of the electrical conductivity, as measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, as estimated from electromagnetic flowmeter and slug test measurements, in combination with spatially exhaustive, low-resolution electrical conductivity estimates obtained from surface-based electrical resistivity tomography (ERT). The corresponding results indicate that the newly developed data integration approach is indeed capable of adequately capturing both the small-scale heterogeneity as well as the larger-scale trend of the prevailing hydraulic conductivity field.
The results also indicate that this novel data integration approach is remarkably flexible and robust, and hence can be expected to be applicable to a wide range of geophysical and hydrological data at all scale ranges. In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems, in order to allow for a more accurate and realistic quantification of the uncertainty associated with the inferred models. Focusing on a series of pertinent crosshole georadar tomographic examples, I investigate two classes of geostatistical resampling strategies with regard to their ability to efficiently and accurately generate independent realizations from the Bayesian posterior distribution. The corresponding results indicate that, despite its popularity, sequential resampling is rather inefficient at drawing independent posterior samples for realistic synthetic case studies, notably for the practically common and important scenario of pronounced spatial correlation between model parameters. To address this issue, I have developed a new gradual-deformation-based perturbation approach, which is flexible with regard to the number of model parameters as well as the perturbation strength. Compared to sequential resampling, this newly proposed approach proves highly effective in decreasing the number of iterations required for drawing independent samples from the Bayesian posterior distribution.
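The gradual-deformation proposal mentioned above is not detailed in the abstract; the sketch below shows the textbook version under simplifying assumptions (a 1D Gaussian random field, a linear point-observation forward model, and a Gaussian likelihood): the current realization is combined with an independent prior draw as m(θ) = m1·cosθ + m2·sinθ, which preserves the prior statistics, and this proposal is embedded in a Metropolis loop. Grid size, covariance, and noise level are illustrative, not the thesis configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Prior: zero-mean Gaussian field on a 1D grid with exponential covariance ---
n = 60
x = np.arange(n)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 10.0)
L = np.linalg.cholesky(C + 1e-10 * np.eye(n))
draw = lambda: L @ rng.standard_normal(n)            # one prior realization

# --- Synthetic inverse problem: sparse noisy point observations of the field ---
obs_idx = np.arange(0, n, 6)
m_true = draw()
d_obs = m_true[obs_idx] + 0.05 * rng.standard_normal(obs_idx.size)

def log_like(m):
    r = m[obs_idx] - d_obs
    return -0.5 * np.sum(r ** 2) / 0.05 ** 2

# --- MCMC with a gradual-deformation proposal ---
theta = 0.15                                         # perturbation strength (radians)
m, ll, accepted = draw(), None, 0
ll = log_like(m)
for it in range(5000):
    m_prop = m * np.cos(theta) + draw() * np.sin(theta)   # remains a valid prior sample
    ll_prop = log_like(m_prop)
    # The proposal preserves the prior, so only the likelihood ratio enters the acceptance.
    if np.log(rng.uniform()) < ll_prop - ll:
        m, ll, accepted = m_prop, ll_prop, accepted + 1

print(f"acceptance rate: {accepted / 5000:.2f}")
print(f"RMSE of last sample vs. truth: {np.sqrt(np.mean((m - m_true) ** 2)):.3f}")
```

Smaller values of theta give higher acceptance but more correlated samples; this trade-off is exactly what the perturbation-strength flexibility mentioned above refers to.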
Abstract:
Introduction: Osteoporosis (OP) is a systemic skeletal disease characterized by low bone mineral density (BMD) and micro-architectural (MA) deterioration. Clinical risk factors (CRF) are often used as an approximation of MA. MA can now be evaluated in daily practice with the Trabecular Bone Score (TBS), a novel grey-level texture measurement reflecting bone micro-architecture, based on experimental variograms of 2D projection images. TBS is very simple to obtain by reanalyzing a lumbar DXA scan, and it has proven diagnostic and prognostic value, partially independent of CRF and BMD. The aim of the OsteoLaus cohort is to combine, in daily practice, the CRF and the information given by DXA (BMD, TBS and vertebral fracture assessment (VFA)) to better identify women at high fracture risk. Method: The OsteoLaus cohort (1400 women aged 50 to 80 years living in Lausanne, Switzerland) started in 2010. It is derived from the COLAUS cohort, which started in Lausanne in 2003 and whose main goal is to obtain information on the epidemiology and genetic determinants of cardiovascular risk in 6700 men and women. CRF for OP, bone ultrasound of the heel, lumbar spine and hip BMD, VFA by DXA, and MA evaluation by TBS are recorded in OsteoLaus. Preliminary results are reported. Results: We included 631 women: mean age 67.4±6.7 years, BMI 26.1±4.6, mean lumbar spine BMD 0.943±0.168 (T-score -1.4 SD), TBS 1.271±0.103. As expected, the correlation between BMD and site-matched TBS is low (r2=0.16). The prevalence of grade 2/3 vertebral fractures, major OP fractures, and all OP fractures is 8.4%, 17.0% and 26.0%, respectively. Age- and BMI-adjusted ORs (per SD decrease) are 1.8 (1.2-2.5), 1.6 (1.2-2.1) and 1.3 (1.1-1.6) for BMD for the different categories of fractures, and 2.0 (1.4-3.0), 1.9 (1.4-2.5) and 1.4 (1.1-1.7) for TBS, respectively. Only 32 to 37% of women with an OP fracture have a BMD < -2.5 SD or a TBS < 1.200. If we combine a BMD < -2.5 SD or a TBS < 1.200, 54 to 60% of women with an osteoporotic fracture are identified. Conclusion: As in previously published studies, these preliminary results confirm the partial independence between BMD and TBS. More importantly, combining TBS with BMD significantly increases the identification of women with prevalent OP fractures who would have been misclassified by BMD alone. For the first time we are able to obtain complementary information about fracture (VFA), density (BMD), and micro- and macro-architecture (TBS & HAS) from a single, low-radiation and inexpensive device: DXA. Such complementary information is very useful for the patient in daily practice and, moreover, will likely have an impact on cost-effectiveness analyses.
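The gain from combining the two thresholds can be illustrated in a few lines: flag each woman if her lumbar BMD T-score is below -2.5 or her TBS is below 1.200, and compute the fraction of prevalent fracture cases each rule captures. The data below are synthetic stand-ins built around the cohort's approximate means and SDs, purely to show the calculation, not to reproduce the reported 54-60%.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 631

# Synthetic cohort roughly matching the reported means/SDs (illustration only).
tscore = rng.normal(-1.4, 1.2, n)                # lumbar spine BMD T-score
tbs = rng.normal(1.271, 0.103, n)
# Fracture risk loosely tied to both scores (synthetic relationship).
p_fx = 1 / (1 + np.exp(-(-2.0 - 0.6 * tscore - 8.0 * (tbs - 1.271))))
fx = rng.binomial(1, p_fx).astype(bool)

low_bmd = tscore < -2.5
low_tbs = tbs < 1.200

def captured(rule):
    """Percent of prevalent fracture cases flagged by a given rule."""
    return 100.0 * np.sum(rule & fx) / np.sum(fx)

print(f"fracture cases identified by BMD < -2.5 SD : {captured(low_bmd):.0f}%")
print(f"fracture cases identified by TBS < 1.200   : {captured(low_tbs):.0f}%")
print(f"identified by BMD < -2.5 SD or TBS < 1.200 : {captured(low_bmd | low_tbs):.0f}%")
```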
Abstract:
This paper characterizes and evaluates the potential of three commercial CT iterative reconstruction methods (ASIR, VEO and iDose4) for dose reduction and image quality improvement. We measured CT number accuracy, standard deviation (SD), noise power spectrum (NPS) and modulation transfer function (MTF) metrics on Catphan phantom images, while five human observers performed four-alternative forced-choice (4AFC) experiments to assess the detectability of low- and high-contrast objects embedded in two pediatric phantoms. Results show that 40% and 100% ASIR, as well as iDose4 levels 3 and 6, do not affect CT numbers and strongly decrease image noise, with the relative SD remaining constant over a large dose range. However, while ASIR produces a shift of the NPS curve apex, less change is observed with iDose4 with respect to FBP methods. With the second-generation iterative reconstruction VEO, physical metrics are improved even further: SD decreased to 70.4% at 0.5 mGy and spatial resolution improved to 37% (MTF 50%). The 4AFC experiments show that few improvements in detection task performance are obtained with ASIR and iDose4, whereas VEO makes excellent detection possible even at an ultra-low dose (0.3 mGy), leading to a potential dose reduction by a factor of 3 to 7 (67%-86%). In spite of its longer reconstruction time and the fact that clinical studies are still required to complete these results, VEO clearly confirms the tremendous potential of iterative reconstruction for dose reduction in CT and appears to be an important tool for patient follow-up, especially for pediatric patients, where cumulative lifetime dose remains high.
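The noise and resolution metrics used above have standard definitions; the sketch below computes a 2D noise power spectrum from repeated noise-only ROIs as the ensemble-averaged squared FFT of mean-subtracted images, and reads MTF50 off a sampled MTF curve by interpolation. Pixel size, ROI data, and the example MTF values are placeholders, not measurements from this study.

```python
import numpy as np

def noise_power_spectrum(rois, pixel_mm):
    """2D NPS from a stack of noise-only ROIs (standard ensemble-average definition)."""
    rois = np.asarray(rois, dtype=float)
    n, ny, nx = rois.shape
    detrended = rois - rois.mean(axis=(1, 2), keepdims=True)      # remove each ROI mean
    ft = np.fft.fft2(detrended)
    nps = (pixel_mm ** 2 / (nx * ny)) * np.mean(np.abs(ft) ** 2, axis=0)
    freqs = np.fft.fftfreq(nx, d=pixel_mm)                        # cycles / mm
    return np.fft.fftshift(freqs), np.fft.fftshift(nps)

def mtf50(freqs, mtf):
    """Frequency (cycles/mm) at which the MTF drops to 50%, by linear interpolation."""
    return float(np.interp(0.5, mtf[::-1], freqs[::-1]))          # MTF assumed decreasing

# Toy usage with placeholder data.
rois = np.random.default_rng(3).normal(0, 10, size=(50, 64, 64))  # 50 noise-only ROIs
f, nps = noise_power_spectrum(rois, pixel_mm=0.4)
print("NPS integral (should approximate the ROI pixel variance, 100):",
      nps.sum() * (f[1] - f[0]) ** 2)

f_axis = np.linspace(0, 1.2, 25)
mtf = np.exp(-(f_axis / 0.55) ** 2)                               # placeholder MTF curve
print(f"MTF50 = {mtf50(f_axis, mtf):.2f} cycles/mm")
```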
Abstract:
The large spatial inhomogeneity in the transmit B1 field (B1+) observable in human MR images at high static magnetic fields (B0) severely impairs image quality. To overcome this effect in brain T1-weighted images, the MPRAGE sequence was modified to generate two different images at different inversion times (MP2RAGE). By combining the two images in a novel fashion, it was possible to create T1-weighted images in which the resulting image was free of proton density contrast, T2* contrast, reception bias field and, to first order, transmit field inhomogeneity. MP2RAGE sequence parameters were optimized using Bloch equations to maximize the contrast-to-noise ratio per unit time between brain tissues and minimize the effect of B1+ variations through space. Images of high anatomical quality and excellent brain tissue differentiation, suitable for applications such as segmentation and voxel-based morphometry, were obtained at 3 and 7 T. From such T1-weighted images, acquired within 12 min, high-resolution 3D T1 maps were routinely calculated at 7 T with sub-millimeter voxel resolution (0.65-0.85 mm isotropic). The T1 maps were validated in phantom experiments. In humans, the T1 values obtained at 7 T were 1.15 +/- 0.06 s for white matter (WM) and 1.92 +/- 0.16 s for grey matter (GM), in good agreement with literature values obtained at lower spatial resolution. At 3 T, where whole-brain acquisitions with 1 mm isotropic voxels were acquired in 8 min, the T1 values obtained (0.81 +/- 0.03 s for WM and 1.35 +/- 0.05 s for GM) were once again found to be in very good agreement with values in the literature. (C) 2009 Elsevier Inc. All rights reserved.
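The combination of the two inversion-time images is commonly written as UNI = Re(S1·conj(S2)) / (|S1|² + |S2|²), a ratio of the complex gradient-echo signals that is bounded to [-0.5, 0.5] and cancels the common proton-density, T2* and reception-field factors. The sketch below applies that formula to toy complex data; the variable names and inputs are illustrative.

```python
import numpy as np

def mp2rage_uniform(s1, s2, eps=1e-12):
    """Combine the two complex MP2RAGE inversion-time images into the 'uniform'
    T1-weighted image: UNI = Re(S1 * conj(S2)) / (|S1|^2 + |S2|^2), in [-0.5, 0.5]."""
    num = np.real(s1 * np.conj(s2))
    den = np.abs(s1) ** 2 + np.abs(s2) ** 2
    return num / (den + eps)

# Toy complex images sharing the same receive/proton-density factor, which cancels out.
rng = np.random.default_rng(4)
shape = (4, 4)
rho_coil = rng.uniform(0.5, 2.0, shape) * np.exp(1j * rng.uniform(-np.pi, np.pi, shape))
gre_ti1 = rho_coil * (-0.3)        # signal at the first inversion time (often negative)
gre_ti2 = rho_coil * (+0.8)        # signal at the second inversion time

uni = mp2rage_uniform(gre_ti1, gre_ti2)
print(uni)   # constant value (~ -0.33), independent of the coil/PD factor, as intended
```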
Abstract:
Thank you, Chairman. I would like to extend a warm welcome to our keynote speakers, David Byrne of the European Commission, Derek Yach from the World Health Organisation, and Paul Quinn representing Congressman Marty Meehan who sends his apologies. When we include the speakers who will address later sessions, this is, undoubtedly, one of the strongest teams that have been assembled on tobacco control in Europe. The very strength of the team underlines what I see as a shift – a very necessary shift – in the way we perceive the tobacco issue. For the last twenty years, we have lived out a paradox. It isn't a social side issue. I make no apology for the bluntness of what I'm saying, and will come back, a little later, to the radicalism I believe we need to bring – nationally – to this issue. For starters, though, I want to lay it on the line that what we're talking about is an epidemic as deadly as any suffered by humankind throughout the centuries. Slower than some of those epidemics in its lethal action, perhaps. But an epidemic, nonetheless. According to the World Health Organisation, tobacco accounted for just over 3 million annual deaths in 1990, rising to 4.023 million annual deaths in 1998. The numbers of deaths due to tobacco will rise to 8.4 million in 2020 and reach roughly 10 million annually by 2030. This is quite simply ghastly. Tobacco kills. It kills in many different ways. It kills increasing numbers of women. It does its damage directly and indirectly. For children, much of the damage comes from smoking by adults where children live, study, play and work. The very least we should be able to offer every child is breathable air. Air that doesn't do them damage. We're now seeing a global public health response to the tobacco epidemic. The Tobacco Free Initiative launched by the World Health Organisation was matched by significant tobacco control initiatives throughout the world. During this conference we will hear about the experiences our speakers had in driving these initiatives. This Tobacco Free Initiative poses unique challenges to our legal frameworks at both national and international levels; in particular it raises challenges about the legal context in which tobacco products are traded and asks questions about the impact of commercial speech, especially on children, and the extent of the limitations that should be imposed on it. Politicians, supported by economists and lawyers as well as the medical profession, must continue to explore and develop this context to find innovative ways to wrap public health considerations around the trade in tobacco products – very tightly. We also have the right to demand a totally new paradigm from the tobacco industry. Bluntly, the tobacco industry plays the PR game at its cynical worst. The industry sells its products without regard to the harm these products cause. At the same time, to gain social acceptance, it gives donations, endowments and patronage to high-profile events and people. Not good enough. This model of behaviour is no longer acceptable in a modern society. We need one where the industry integrates social responsibility and accountability into its day-to-day activities. We have waited for this change in behaviour from the tobacco industry for many decades.
Unfortunately the documents disclosed during litigation in the USA and from other sources make very depressing reading; it is clear from them that any trust society placed in the tobacco industry in the past to address the health problems associated with its products was misplaced. This industry appears to lack the necessary leadership to guide it towards just and responsible action. Instead, it chooses evasion, deception and at times illegal activity to protect its profits at any price and to avoid its responsibilities to society and its customers. It has engaged in elaborate 'spin' to generate political tolerance, scientific uncertainty and public acceptance of its products. Legislators must act now. I see no reason why the global community should continue to wait. Effective legal controls must be laid on this errant industry. We should also keep these controls under review at regular intervals and, if they are failing to achieve the desired outcomes, we should be prepared to amend them. In Ireland, as Minister for Health and Children, I launched a comprehensive tobacco control policy entitled "Towards a Tobacco Free Society". OTT? Excessive? Unrealistic? On the contrary – I believe it to be imperative and inevitable. I honestly hold that, given the range of fatal diseases caused by tobacco use, we have little alternative but to pursue the clear objective of creating a tobacco free society. Aiming at a tobacco free society means ensuring public and political opinion are properly informed. It requires help to be given to smokers to break the addiction. It demands that people are protected against environmental tobacco smoke and children are protected from any inducement to experiment with this product. Over the past year we have implemented a number of measures which will support these objectives; we have established an independent Office of Tobacco Control, we have introduced free nicotine replacement therapy for low-income earners, and we have extended our existing prohibitions on tobacco advertising to the print media with some minor derogations for international publications. We have raised the legal age at which a person can be sold tobacco products to eighteen years. We have invested substantially more funds in health promotion activities and we have mounted sustained information campaigns. We have engaged in sponsorship arrangements, which are new and innovative for public bodies. I have provided health boards with additional resources to let them mount a sustained inspection and enforcement service. Health boards will engage new Directors of Tobacco Control responsible for coordinating each health board's response and for liaising with the Tobacco Control Agency I set up earlier this year. Most recently, I have published a comprehensive Bill – The Public Health (Tobacco) Bill, 2001. This Bill will, among other things, end all forms of product display and in-store advertising and will require all retailers to register with the new Tobacco Control Agency. Ten-packs of cigarettes will be banned, and transparent and independent testing procedures for tobacco products will be introduced. Enforcement officers will be given all the necessary powers to ensure there is full compliance with the law. On smoking in public places we will extend the existing areas covered, and it is proposed that I, as Minister for Health and Children, will have the powers to introduce further prohibitions in public places such as pubs and the workplace.
I will also provide for the establishment of a Tobacco Free Council to advise and assist on an ongoing basis. I believe the measures already introduced and those additional ones proposed in the Bill have widespread community support. In fact, you're going to hear a detailed presentation from the MRBI which will amply illustrate the extent of this support. The great thing is that the support comes from smokers and non-smokers alike. Bottom line, Ladies and Gentlemen, is that we are at a watershed. As a society (if you'll allow me to play with a popular phrase) we've realised it's time to 'wake up and smell the cigarettes.' Smell them. See them for what they are. And get real about destroying their hold on our people. The MRBI survey makes it clear that the single strongest weapon we have when it comes to preventing the habit among young people is price. Simple as that. Price. Up to now, the fear of inflation has been a real impediment to increasing taxes on tobacco. It sounds a serious, logical argument. Until you take it out and look at it a little more closely. Weigh it, as it were, in two hands. I believe – and I believe this with a great passion – that we must take cigarettes out of the equation we use when awarding wage increases. I am calling on IBEC and ICTU, on employers and trade unions alike, to move away from any kind of tolerance of a trade that is killing our citizens. At one point in industrial history, cigarettes were a staple of the workingman's life. So it was legitimate to include them in the 'basket' of goods that goes to make up the Consumer Price Index. It isn't legitimate to include them any more. Today, I'm saying that society collectively must take the step to remove cigarettes from the basket of normality, from the list of elements which constitute necessary consumer spending. I'm saying: "We can no longer delude ourselves. We must exclude cigarettes from the considerations we address in central wage bargaining. We must price cigarettes out of the reach of the children those cigarettes will kill." Right now, in the monthly Central Statistics Office reports on consumer spending, the figures include cigarettes. But – right down at the bottom of the page – there's another figure. Calculated without including cigarettes. I believe that if we continue to use the first figure as our constant measure, it will be an indictment of us as legislators, as advocates for working people, as public health professionals. If, on the other hand, we move to the use of the second figure, we will be sending out a message of startling clarity to the nation. We will be saying "We don't count an addictive, killer drug as part of normal consumer spending." Taking cigarettes out of the basket used to determine the Consumer Price Index will take away the inflation argument. It will not be easy, in its implications for the social partners. But it is morally inescapable. We must do it. Because it will help us stop the killer that is tobacco. If we can do it, we will give so much extra strength to health educators and the new Tobacco Control Association. This new organisation of young people, which already has branches in over fifteen counties, is represented here today. The young adults who make up its membership are well placed to advise children of the dangers of tobacco addiction in a way that older generations cannot.
It would strengthen their hand if cigarettes move – in price terms – out of the easy reach of our children. Finally, I would like to commend so many public health advocates who have shown professional and indeed personal courage in their commitment to this critical public health issue down through the years. We need you to continue to challenge and confront this grave public health problem and to repudiate the questionable science of the tobacco industry. The Research Institute for a Tobacco Free Society represents a new and dynamic form of partnership between government and civil society. It will provide an effective platform to engage and mobilise the many different professional and academic skills necessary to guide and challenge us. I wish the conference every success.
Abstract:
INTRODUCTION: Gamma Knife surgery (GKS) is a non-invasive neurosurgical stereotactic procedure, increasingly used as an alternative to open functional procedures. This includes targeting of the ventro-intermediate nucleus of the thalamus (Vim) for tremor. We currently perform indirect targeting, as the Vim is not visible on current 3 Tesla MRI acquisitions. Our objective was to enhance anatomical imaging (aiming at refining the precision of target selection by direct visualisation) in patients treated for tremor with Vim GKS, by using high-field 7T MRI. MATERIALS AND METHODS: Five young healthy subjects were scanned at 3T (T1-weighted and diffusion tensor imaging) and 7T (high-resolution susceptibility-weighted imaging, SWI) in Lausanne. All images were then integrated for the first time into the Gamma Plan software (Elekta Instruments AB, Sweden) and co-registered (with the T1 as reference). A simulation of Vim targeting was performed on the 3T images using various methods, and the position of the resulting target was correlated with the 7T SWI. The atlas of Morel et al. (Zurich, CH) was used to confirm the findings in a detailed analysis inside and outside Gamma Plan. RESULTS: The use of SWI provided superior resolution and improved image contrast within the basal ganglia. This allowed visualization and direct delineation of some subgroups of thalamic nuclei in vivo, including the Vim. The position of the target, as assessed on 3T, perfectly matched the presumed position of the Vim on the SWI. Furthermore, a 3-dimensional model of the Vim target area was created on the basis of the obtained images. CONCLUSION: This is the first report of the integration of high-field SWI MRI into the LGP, aiming at improving targeting validation of the Vim in tremor. The anatomical correlation between the direct visualization on 7T and the current targeting methods on 3T (e.g. quadrilatère of Guyot, histological atlases) shows very good anatomical matching. Further studies are needed to validate this technique, both to improve the accuracy of Vim targeting (and potentially of other thalamic nuclei) and to perform clinical assessment.
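Bringing the 7T SWI into the space of the 3T T1 reference is the kind of step that could be sketched with SimpleITK as below: a rigid (Euler) transform initialized by geometric centering and optimized with Mattes mutual information. This is a generic illustration with hypothetical file names (t1_3t.nii.gz, swi_7t.nii.gz), not the Leksell GammaPlan workflow used in the study.

```python
import SimpleITK as sitk

# Fixed: 3T T1-weighted reference; moving: 7T SWI to be brought into the same space.
fixed = sitk.ReadImage("t1_3t.nii.gz", sitk.sitkFloat32)     # hypothetical file names
moving = sitk.ReadImage("swi_7t.nii.gz", sitk.sitkFloat32)

initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetMetricSamplingStrategy(reg.RANDOM)
reg.SetMetricSamplingPercentage(0.1)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetOptimizerAsRegularStepGradientDescent(
    learningRate=1.0, minStep=1e-4, numberOfIterations=200)
reg.SetOptimizerScalesFromPhysicalShift()
reg.SetInitialTransform(initial, inPlace=False)

transform = reg.Execute(fixed, moving)          # rigid transform mapping SWI to T1 space

# Resample the SWI onto the T1 grid and save it for further target delineation.
resampled = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0,
                          moving.GetPixelID())
sitk.WriteImage(resampled, "swi_7t_in_t1_space.nii.gz")
```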
Abstract:
Species distribution model (SDM) studies suggest that, without control measures, the distribution of many alien invasive plant species (AIS) will increase under climate and land-use change. Because resources are limited and the areas colonised by invaders are large, management and monitoring resources must be prioritised. Choices depend on the conservation value of the invaded areas and can be guided by SDM predictions. Here, we use a hierarchical SDM framework, complemented by connectivity analysis of AIS distributions, to evaluate current and future conflicts between AIS and high conservation value areas. We illustrate the framework with three Australian wattle (Acacia) species and patterns of conservation value in Northern Portugal. Results show that protected areas will likely suffer higher pressure from all three Acacia species under future climatic conditions. Due to this higher predicted conflict in protected areas, management might be prioritised for Acacia dealbata and Acacia melanoxylon. Connectivity of areas suitable for AIS inside protected areas is currently lower than across the full study area, but this would change under future environmental conditions. Coupled SDM and connectivity analyses can support resource prioritisation for the anticipation and monitoring of AIS impacts. However, further tests of this framework over a wide range of regions and organisms are still required before wide application.
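At its core an SDM is a classifier of presence/absence on environmental covariates, projected onto current and future climate layers; the sketch below uses scikit-learn logistic regression on made-up covariates to show the fit-project-threshold chain and the kind of "suitable cells inside protected areas" summary used to flag conflicts. All data, the 0.5 suitability threshold, and the +2.5 °C / -100 mm future scenario are synthetic stand-ins, not the hierarchical framework or Acacia data of the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)

# Synthetic occurrence data: presence/absence vs. two climate covariates.
n = 500
temp = rng.normal(14, 3, n)            # mean annual temperature (degC)
prec = rng.normal(900, 250, n)         # annual precipitation (mm)
X = np.column_stack([temp, prec])
presence = (rng.uniform(size=n) <
            1 / (1 + np.exp(-(0.8 * (temp - 14) - 0.004 * (prec - 900))))).astype(int)

sdm = LogisticRegression(max_iter=1000).fit(X, presence)

# Project onto a grid of cells under current and future (warmer, drier) climate.
grid_temp = rng.normal(14, 3, 2000)
grid_prec = rng.normal(900, 250, 2000)
current = np.column_stack([grid_temp, grid_prec])
future = np.column_stack([grid_temp + 2.5, grid_prec - 100])
protected = rng.uniform(size=2000) < 0.2          # flag for protected-area cells

threshold = 0.5                                    # suitability cut-off (illustrative)
for label, env in (("current", current), ("future", future)):
    suitable = sdm.predict_proba(env)[:, 1] >= threshold
    print(f"{label:>7}: {suitable.mean():.1%} of cells suitable, "
          f"{suitable[protected].mean():.1%} of protected-area cells suitable")
```

A connectivity analysis would then treat the suitable cells as graph nodes and measure how well connected they are inside versus outside protected areas, as described in the abstract.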
Abstract:
Ireland is a successful major centre for ICT operations, with ten of the top ICT companies in the world having substantial operations here. The large talent pool of ICT professionals that exists here is valuable both for foreign-owned and Irish companies. The cluster of internationally renowned firms and Irish companies offers a range of attractive career opportunities for professionals. A range of skills recruitment difficulties has been raised through the work of the Expert Group on Future Skills Needs (EGFSN), specifically the immediate issue of high-level ICT skills within both the ICT sector and other sectors such as international financial services, banking and business services. Forfás, with the support of IDA Ireland and Enterprise Ireland, engaged in discussions with a selected range of foreign-owned and Irish companies employing approximately 30,000 employees to establish the nature of the positions involved and the reasons for recruitment difficulties, and to identify measures to help address them. Consultations were also held with key stakeholders including IDA Ireland, Enterprise Ireland, ICT Ireland, Software Ireland, IT@Cork, Engineers Ireland and Dublin Chambers of Commerce. Discussions were held with the heads of the computing departments of all Universities and Institutes of Technology at a meeting chaired by the Higher Education Authority. An in-depth analysis of third-level ICT supply statistics and trends was undertaken to inform the research.