45 results for Mesh generation from image data
Abstract:
Patterns of cigarette smoking in Switzerland were analyzed on the basis of sales data (available since 1924) and national health surveys conducted in the last decade. There was a steady and substantial increase in cigarette sales up to the early 1970s. Thereafter, the curve tended to level off around an average value of 3,000 cigarettes per adult per year. According to the 1981-1983 National Health Survey, 37% of Swiss men were current smokers, 25% were ex-smokers, and 39% were never smokers. The corresponding proportions in women were 22, 11, and 67%. Among men, smoking prevalence was higher in lower social classes, and a moderate decline was apparent from survey data over the period 1975-1981, mostly in later middle age. Trends in lung cancer death certification rates over the period 1950-1984 were analyzed using standard cross-sectional methods and a log-linear Poisson model to isolate the effects of age, birth cohort, and year of death. Mortality from lung cancer increased substantially among Swiss men between the early 1950s and the late 1970s, and levelled off (around a value of 70/100,000 men) thereafter. Among women, there has been a steady upward trend which started in the mid-1960s and continues to climb, although lung cancer mortality is still considerably lower in absolute terms (around 8/100,000 women) than in several North European countries or in North America. Cohort analyses indicate that the peak rates in men were reached by the generation born around 1910 and that mortality stabilized for subsequent generations up to the 1930 birth cohort. Among females, marked increases were observed in each subsequent birth cohort. This pattern of trends is consistent with available information on smoking prevalence in successive generations, showing a peak among men for the 1910 cohort, but steady upward trends among females.
Over the period 1980-1984, about 90% of lung cancer deaths among Swiss men and about 40% of those among women could be attributed to smoking (overall proportion, 85%).
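The log-linear Poisson model used in the abstract above decomposes the death rate multiplicatively into age, period (year of death) and birth-cohort effects. A minimal sketch of that rate structure, with entirely hypothetical effect values, is:

```python
import math

# Sketch of a log-linear (Poisson) age-period-cohort rate model:
# log(rate) = mu + age effect + period effect + cohort effect.
# All effect values below are hypothetical, for illustration only.
age_eff    = {"50-54": 0.0, "55-59": 0.4, "60-64": 0.8}
period_eff = {1960: 0.0, 1970: 0.3, 1980: 0.5}
cohort_eff = {1910: 0.6, 1920: 0.6, 1930: 0.6}  # plateau after the 1910 peak
mu = -9.0  # hypothetical baseline log death rate per person-year

def rate_per_100k(age, period, cohort):
    log_rate = mu + age_eff[age] + period_eff[period] + cohort_eff[cohort]
    return math.exp(log_rate) * 1e5  # deaths per 100,000 person-years

# Under the cohort plateau described above, successive male cohorts
# share the same rate at a given age and period.
print(round(rate_per_100k("60-64", 1980, 1920), 1))
```

Fitting such a model to real death counts would normally use Poisson regression with person-years as an offset; the dictionary lookup here only illustrates the additive log-scale structure.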
Abstract:
We evaluated 25 protocol variants of 14 independent computational methods for exon identification, transcript reconstruction and expression-level quantification from RNA-seq data. Our results show that most algorithms are able to identify discrete transcript components with high success rates but that assembly of complete isoform structures poses a major challenge even when all constituent elements are identified. Expression-level estimates also varied widely across methods, even when based on similar transcript models. Consequently, the complexity of higher eukaryotic genomes imposes severe limitations on transcript recall and splice product discrimination that are likely to remain limiting factors for the analysis of current-generation RNA-seq data.
Abstract:
Computed tomography (CT) is used increasingly to measure liver volume in patients undergoing evaluation for transplantation or resection. This study was designed to determine a formula predicting total liver volume (TLV) based on body surface area (BSA) or body weight in Western adults. TLV was measured in 292 patients from four Western centers. Liver volumes were calculated from helical computed tomographic scans obtained for conditions unrelated to the hepatobiliary system. BSA was calculated based on height and weight. Each center used a different established method of three-dimensional volume reconstruction. Using regression analysis, measurements were compared, and formulas correlating BSA or body weight to TLV were established. A linear regression formula to estimate TLV based on BSA was obtained: TLV = -794.41 + 1,267.28 × BSA (square meters); r² = 0.46; P < .0001. A formula based on patient weight also was derived: TLV = 191.80 + 18.51 × weight (kilograms); r² = 0.49; P < .0001. The newly derived TLV formula based on BSA was compared with previously reported formulas. The application of a formula obtained from healthy Japanese individuals underestimated TLV. Two formulas derived from autopsy data for Western populations were similar to the newly derived BSA formula, with a slight overestimation of TLV. In conclusion, hepatic three-dimensional volume reconstruction based on helical CT predicts TLV based on BSA or body weight. The new formulas derived from this correlation should contribute to the estimation of TLV before liver transplantation or major hepatic resection.
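The two regression formulas can be applied directly. The sketch below uses the Mosteller approximation for BSA as an assumption, since the abstract does not state which BSA formula the study used:

```python
import math

def bsa_mosteller(height_cm, weight_kg):
    # Mosteller approximation for body surface area (m^2); an assumption,
    # as the study's exact BSA formula is not given in the abstract.
    return math.sqrt(height_cm * weight_kg / 3600.0)

def tlv_from_bsa(bsa_m2):
    # TLV (mL) = -794.41 + 1,267.28 x BSA; r^2 = 0.46 (from the abstract)
    return -794.41 + 1267.28 * bsa_m2

def tlv_from_weight(weight_kg):
    # TLV (mL) = 191.80 + 18.51 x weight; r^2 = 0.49 (from the abstract)
    return 191.80 + 18.51 * weight_kg

# Example: a 175 cm, 70 kg adult (hypothetical patient)
bsa = bsa_mosteller(175, 70)
print(round(tlv_from_bsa(bsa)), round(tlv_from_weight(70)))
```

With modest r² values (0.46 and 0.49), both formulas are population-level estimators, not patient-specific measurements.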
Abstract:
Crocidura cossyrensis Contoli, 1989 (Mammalia, Soricidae): karyotype, biochemical genetics and hybridization experiments. - The shrew Crocidura cossyrensis Contoli, 1989 from Pantelleria (Italy), a Mediterranean island 100 km south of Sicily and 70 km west of Tunisia, was investigated in order to understand its origin and its relationship with C. russula from Tunisia, Morocco and Switzerland. With the exception of a single heterozygous centric fusion, C. cossyrensis had a karyotype identical to that of C. russula from Tunisia (2N = 42, NF = 70 to 72), but it differed from C. russula from Morocco and Switzerland (2N = 42, NF = 60). The former have 5-6 pairs of chromosomes with small arms that are acrocentric in the latter. Genetic comparisons with allozyme data revealed a small genetic distance (0.04) between C. cossyrensis and C. russula from Tunisia. In contrast, this eastern clade (Tunisia and Pantelleria) is separated from the western clade (Switzerland and Morocco) by a genetic distance of 0.14. A hybridization experiment between shrews from Pantelleria and Switzerland led rapidly to an F1 generation. Of the 12 F1 hybrids that were backcrossed, the females reproduced normally, but none of the males did. From these results, we conclude that C. cossyrensis from Pantelleria and C. russula cf. agilis from Tunisia belong to the same taxon, which may have reached the differentiation of a biological species within the C. russula group. More geographic samples are needed to determine the definitive taxonomic positions of these shrews.
Abstract:
Purpose: SIOPEN scoring of 123I-mIBG imaging has been shown to predict response to induction chemotherapy and outcome at diagnosis in children with high-risk neuroblastoma (HRN). Method: Patterns of skeletal 123I-mIBG uptake were assigned numerical scores (Mscore) ranging from 0 (no metastasis) to 72 (diffuse metastases) within 12 body areas, as described previously. 271 anonymised, paired image data sets acquired at diagnosis and on completion of Rapid COJEC induction chemotherapy were reviewed, constituting a representative sample of the 1602 children treated prospectively within the HR-NBL1/SIOPEN trial. Pre- and post-treatment Mscores were compared with bone marrow cytology (BM) and 3-year event-free survival (EFS). Results: 224/271 patients showed skeletal mIBG uptake at diagnosis and were evaluable for mIBG response. Complete response (CR) on mIBG to Rapid COJEC induction was achieved by 66%, 34% and 15% of patients who had pre-treatment Mscores of <18 (n=65, 29%), 18-44 (n=95, 42%) and ≥45 (n=64, 28.5%), respectively (chi-squared test, p<.0001). Mscore at diagnosis and on completion of Rapid COJEC correlated strongly with BM involvement (p<0.0001). The correlation of pre-treatment scores with post-treatment scores and response was highly significant (p<0.001). Most importantly, the 3-year EFS in 47 children with Mscore 0 at diagnosis was 0.68 (±0.07), by comparison with 0.42 (±0.06), 0.35 (±0.05) and 0.25 (±0.06) for patients in pre-treatment score groups <18, 18-44 and ≥45, respectively (p<0.001). An Mscore threshold of ≥45 at diagnosis was associated with significantly worse outcome by comparison with all other Mscore groups (p=0.029). The 3-year EFS of 0.53 (±0.07) for patients in metastatic CR (mIBG and BM) after Rapid COJEC (33%) is clearly superior to that of patients not achieving metastatic CR (0.24 (±0.04), p=0.005). Conclusion: SIOPEN scoring of 123I-mIBG imaging predicts response to induction chemotherapy and outcome at diagnosis in children with HRN.
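The group-wise response rates reported above can be checked with a Pearson chi-squared statistic. The counts below are only approximate reconstructions from the reported percentages (66%, 34% and 15% of n=65, 95 and 64) and are illustrative, not the trial's actual tabulation:

```python
def chi_square(table):
    # Pearson chi-squared statistic for an r x c contingency table
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    n = sum(row_tot)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_tot[i] * col_tot[j] / n
            stat += (obs - expected) ** 2 / expected
    return stat

# Approximate CR counts per pre-treatment Mscore group (<18, 18-44, >=45),
# reconstructed from the reported rates; illustrative only.
table = [[43, 32, 10],   # complete response on mIBG
         [22, 63, 54]]   # no complete response
# dof = (2-1)*(3-1) = 2; the statistic is well above the
# p<0.001 critical value of 13.8 for 2 degrees of freedom.
print(round(chi_square(table), 1))
```

This reproduces the abstract's qualitative finding (p<.0001) without access to the exact patient-level counts.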
Abstract:
With the trend in molecular epidemiology towards both genome-wide association studies and complex modelling, the need for large sample sizes to detect small effects and to allow for the estimation of many parameters within a model continues to increase. Unfortunately, most methods of association analysis have been restricted to either a family-based or a case-control design, resulting in the lack of synthesis of data from multiple studies. Transmission disequilibrium-type methods for detecting linkage disequilibrium from family data were developed as an effective way of preventing the detection of association due to population stratification. Because these methods condition on parental genotype, however, they have precluded the joint analysis of family and case-control data, although methods for case-control data may not protect against population stratification and do not allow for familial correlations. We present here an extension of a family-based association analysis method for continuous traits that will simultaneously test for, and if necessary control for, population stratification. We further extend this method to analyse binary traits (and therefore family and case-control data together) and accurately to estimate genetic effects in the population, even when using an ascertained family sample. Finally, we present the power of this binary extension for both family-only and joint family and case-control data, and demonstrate the accuracy of the association parameter and variance components in an ascertained family sample.
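For context, the classic transmission disequilibrium test mentioned above conditions on heterozygous parents and compares transmissions b with non-transmissions c of an allele; the statistic (b - c)^2 / (b + c) is approximately chi-squared with 1 degree of freedom under no linkage. A minimal sketch with hypothetical counts:

```python
def tdt_statistic(transmitted, not_transmitted):
    # McNemar-type TDT statistic over heterozygous parents;
    # approximately chi-squared with 1 df under the null.
    b, c = transmitted, not_transmitted
    return (b - c) ** 2 / (b + c)

# Hypothetical counts: 60 transmissions vs 40 non-transmissions
print(round(tdt_statistic(60, 40), 2))  # 4.0 exceeds the 5% critical value 3.84
```

Because the test conditions on parental genotype, it is robust to population stratification, which is exactly why classic TDT cannot be pooled directly with case-control data, the gap the abstract's method addresses.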
The 5th anniversary of "Patient Safety in Surgery" - from the Journal's origin to its future vision.
Abstract:
A prospective study was undertaken to determine prognostic markers for patients with obstructive jaundice. Along with routine liver function tests, antipyrine clearance was determined in 20 patients. Four patients died after basal investigations. Five patients underwent definitive surgery. The remaining 11 patients were subjected to percutaneous transhepatic biliary decompression. Four patients died during the drainage period, while surgery was carried out for seven patients within 1-3 weeks of drainage. Of 20 patients, only six patients survived. Basal liver function tests were comparable in survivors and nonsurvivors. Discriminant analysis of the basal data revealed that plasma bilirubin, proteins and antipyrine half-life taken together had a strong association with mortality. A mathematical equation was derived using these variables and a score was computed for each patient. It was observed that a score value greater than or equal to 0.84 indicated survival. Omission of antipyrine half-life from the data, however, resulted in prediction of false security in 55% of patients. This study highlights the importance of addition of antipyrine elimination test to the routine liver function tests for precise identification of high risk patients.
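The score described above is a linear discriminant over plasma bilirubin, proteins and antipyrine half-life, thresholded at 0.84. The abstract does not report the fitted coefficients, so the weights and intercept below are hypothetical placeholders; only the threshold comes from the text:

```python
# Hypothetical coefficients for illustration only; the study's fitted
# discriminant coefficients are not reported in the abstract.
WEIGHTS = {"bilirubin": -0.02, "proteins": 0.15, "antipyrine_t_half": -0.01}
INTERCEPT = 0.5
THRESHOLD = 0.84  # from the abstract: a score >= 0.84 indicated survival

def survival_score(bilirubin, proteins, antipyrine_t_half):
    # Linear discriminant score over the three basal variables
    return (INTERCEPT
            + WEIGHTS["bilirubin"] * bilirubin
            + WEIGHTS["proteins"] * proteins
            + WEIGHTS["antipyrine_t_half"] * antipyrine_t_half)

def predicts_survival(score):
    return score >= THRESHOLD

# Hypothetical patient values (arbitrary units)
print(predicts_survival(survival_score(150.0, 60.0, 20.0)))
```

The abstract's key point survives the placeholder weights: dropping the antipyrine half-life term degraded the classifier, falsely predicting safety in 55% of patients.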
Abstract:
Background: Well-conducted behavioural surveillance (BS) is essential for policy planning and evaluation. Data should be comparable across countries. In 2008, the European Centre for Disease Prevention and Control (ECDC) began a programme to support Member States in the implementation of BS for Second Generation Surveillance. Methods: Data from a mapping exercise on current BS activities in EU/EFTA countries led to recommendations for establishing national BS systems and international coordination, and the definition of a set of core and transversal (UNGASS-Dublin compatible) indicators for BS in the general and eight specific populations. A toolkit for establishing BS has been developed and a BS needs-assessment survey has been launched in 30 countries. Tools for BS self-assessment and planning are currently being tested during interactive workshops with country representatives. Results: The mapping exercise revealed extreme diversity between countries. Around half had established a BS system, but this did not always correspond to the epidemiological situation. Challenges to implementation and harmonisation at all levels emerged from survey findings and workshop feedback. These include: absence of synergy between biological and behavioural surveillance and of actors having an overall view of all system elements; lack of awareness of the relevance of BS and of coordination between agencies; insufficient use of available data; financial constraints; poor sustainability, data quality and access to certain key populations; unfavourable legislative environments. Conclusions: There is widespread need in the region not only for technical support but also for BS advocacy: BS remains the neglected partner of second generation surveillance and requires increased political support and capacity-building in order to become effective. Dissemination of validated tools for BS, developed in interaction with country experts, proves feasible and acceptable.
Abstract:
ABSTRACT: BACKGROUND: A central question for ecologists is the extent to which anthropogenic disturbances (e.g. tourism) might impact wildlife and affect the systems under study. From a research perspective, identifying the effects of human disturbance caused by research-related activities is crucial in order to understand and account for potential biases and derive appropriate conclusions from the data. RESULTS: Here, we document a case of biological adjustment to chronic human disturbance in a colonial seabird, the king penguin (Aptenodytes patagonicus), breeding on remote and protected islands of the Southern Ocean. Using heart rate (HR) as a measure of the stress response, we show that, in a colony with areas exposed to the continuous presence of humans (including scientists) for over 50 years, penguins have adjusted to human disturbance and habituated to certain, but not all, types of stressors. When compared to birds breeding in relatively undisturbed areas, birds in areas of high chronic human disturbance were found to exhibit attenuated HR responses to acute anthropogenic stressors of low intensity (i.e. sounds or human approaches) to which they had been subjected intensely over the years. However, such attenuation was not apparent for high-intensity stressors (i.e. captures for scientific research) which only a few individuals experience each year. CONCLUSIONS: Habituation to anthropogenic sounds/approaches could be an adaptation to deal with chronic innocuous stressors, and beneficial from a research perspective. Alternatively, whether penguins have actually habituated to anthropogenic disturbances over time or whether human presence has driven the directional selection of human-tolerant phenotypes remains an open question with profound ecological and conservation implications, and emphasizes the need for more knowledge on the effects of human disturbance on long-term studied populations.
Abstract:
Automatic environmental monitoring networks, supported by wireless communication technologies, nowadays provide large and ever-increasing volumes of data. The use of this information in natural hazard research is an important issue. Particularly useful for risk assessment and decision making are the spatial maps of hazard-related parameters produced from point observations and available auxiliary information. The purpose of this article is to present and explore appropriate tools to process large amounts of available data and produce predictions at fine spatial scales. These are the algorithms of machine learning, which are aimed at non-parametric, robust modelling of non-linear dependencies from empirical data. The computational efficiency of the data-driven methods allows prediction maps to be produced in real time, which makes them superior to physical models for operational use in risk assessment and mitigation. This situation arises particularly in the spatial prediction of climatic variables (topo-climatic mapping). In the complex topographies of mountainous regions, meteorological processes are strongly influenced by the relief. The article shows how these relations, possibly regionalized and non-linear, can be modelled from data using information from digital elevation models. The particular illustration of the developed methodology concerns the mapping of temperatures (including situations of Föhn and temperature inversion) given measurements taken from the Swiss meteorological monitoring network. The range of methods used in the study includes data-driven feature selection, support vector algorithms and artificial neural networks.
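As the simplest illustration of predicting temperature from digital elevation model information, the sketch below fits a linear lapse rate to hypothetical station data; the study itself uses non-linear learners (support vector algorithms, neural networks), which this toy baseline does not reproduce:

```python
# Toy lapse-rate baseline: temperature modelled from station elevation.
# Station elevations and temperatures below are hypothetical.
def fit_linear(xs, ys):
    # Ordinary least-squares fit y = a + b*x; returns (intercept, slope)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

elev = [400, 800, 1200, 1600, 2000]   # station elevations (m)
temp = [12.4, 9.8, 7.2, 4.6, 2.0]     # mean temperatures (deg C)
a, b = fit_linear(elev, temp)
print(round(b * 1000, 1))  # lapse rate in deg C per km: -6.5
```

A single global lapse rate fails precisely in the Föhn and inversion situations the article highlights, which is why regionalized, non-linear models trained on DEM-derived features are needed there.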
Abstract:
Acoustic waveform inversions are an increasingly popular tool for extracting subsurface information from seismic data. They are computationally much more efficient than elastic inversions. Naturally, an inherent disadvantage is that any elastic effects present in the recorded data are ignored in acoustic inversions. We investigate the extent to which elastic effects influence seismic crosshole data. Our numerical modeling studies reveal that in the presence of high-contrast interfaces, at which P-to-S conversions occur, elastic effects can dominate the seismic sections, even for experiments involving pressure sources and pressure receivers. Comparisons of waveform inversion results using a purely acoustic algorithm on synthetic data that is either acoustic or elastic show that subsurface models comprising small low-to-medium-contrast (≤30%) structures can be successfully resolved in the acoustic approximation. However, in the presence of extended high-contrast anomalous bodies, P-to-S conversions may substantially degrade the quality of the tomographic images. In particular, extended low-velocity zones are difficult to image. Likewise, relatively small low-velocity features are unresolved, even when advanced a priori information is included. One option for mitigating elastic effects is data windowing, which suppresses later-arriving seismic arrivals, such as shear waves. Our tests of this approach found it to be inappropriate because elastic effects are also included in earlier-arriving wavetrains. Furthermore, data windowing removes later-arriving P-wave phases that may provide critical constraints on the tomograms. Finally, we investigated the extent to which acoustic inversions of elastic data are useful for time-lapse analyses of high-contrast engineered structures, for which accurate reconstruction of the subsurface structure is not as critical as imaging differential changes between sequential experiments.
Based on a realistic scenario for monitoring a radioactive waste repository, we demonstrated that acoustic inversions of elastic data yield substantial distortions of the tomograms and also unreliable information on trends in the velocity changes.
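The role of interface contrast discussed above can be quantified, for the purely acoustic case, by the normal-incidence reflection coefficient R = (Z2 - Z1)/(Z2 + Z1), where Z = rho * v is the acoustic impedance. The media below are hypothetical, and P-to-S conversion itself is an elastic effect this acoustic formula does not capture:

```python
def reflection_coefficient(rho1, v1, rho2, v2):
    # Normal-incidence reflection coefficient from acoustic impedances Z = rho*v
    z1, z2 = rho1 * v1, rho2 * v2
    return (z2 - z1) / (z2 + z1)

# Hypothetical media (density kg/m^3, velocity m/s):
# a moderate 30% velocity contrast vs a strong, high-contrast interface
print(round(reflection_coefficient(2000, 2000, 2000, 2600), 3))
print(round(reflection_coefficient(2000, 2000, 2600, 4000), 3))
```

The stronger the impedance jump, the more energy is scattered at the interface, consistent with the finding that extended high-contrast bodies are where the acoustic approximation breaks down.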
Abstract:
Condensed version: Cerebral ischemia (stroke) is the third leading cause of death in developed countries and the disease responsible for the most serious neurological disabilities. Understanding the molecular and anatomical bases of functional recovery after cerebral ischemia is therefore extremely important and represents a crucial field of interest for basic and clinical research. Over the last two decades, researchers have tried to counteract the harmful effects of cerebral ischemia with exogenous substances which, although tested successfully in the experimental setting, have shown contradictory effects in clinical application. A different but complementary approach is to stimulate intrinsic neuroprotective mechanisms using the "preconditioning model": a brief insult protects against more severe ischemic episodes through the stimulation of endogenous signalling pathways that increase resistance to ischemia. This approach may offer important elements for clarifying endogenous neuroprotective mechanisms and provide new strategies for making neurons and glia more resistant to cerebral ischemic injury. We therefore first studied the intrinsic neuroprotective mechanisms stimulated by thrombin, a "preconditioning" neuroprotectant which has been shown, in in vitro and in vivo experimental models, to reduce neuronal death.
By applying a microsurgical technique to induce transient cerebral ischemia in mice, we showed, through a molecular approach and the in vivo analysis of a specific JNK inhibitor (L-JNK1), that thrombin can stimulate the intracellular signalling pathways mediated by MAPK and JNK. We also studied the impact of thrombin on functional recovery after stroke and were able to demonstrate that these molecular mechanisms can improve motor recovery. The second part of this study of recovery mechanisms after cerebral ischemia is based on investigating the anatomical bases of the plasticity of cerebral connections, both in the animal model of transient ischemia and in humans. From results previously published by various groups, we know that plasticity mechanisms leading to varying degrees of functional recovery come into play after an ischemic lesion. The result of this reorganization is a new functional and structural architecture, which varies individually according to the anatomy of the lesion, the age of the subject and the chronicity of the lesion. The success of any therapeutic intervention will therefore depend on its interaction with the new anatomical architecture. For this reason, we applied two diffusion magnetic resonance techniques that make it possible to detect changes in cerebral microstructure and anatomical connections following a stroke: diffusion tensor MRI (DT-MRI) and diffusion spectrum MRI (DS-MRI). Using highly sophisticated DT-MRI, we carried out a long-term follow-up study in mice that had undergone transient cerebral ischemia, which showed that the microstructural changes in the infarct, as well as the remodelling of anatomical pathways, correlate with functional recovery.
Moreover, we observed axonal reorganization in areas showing increased expression of a plasticity protein expressed in the axonal growth cone (GAP-43). Applying the same technique, we also carried out two studies, one retrospective and one prospective, which showed how parameters obtained with DT-MRI can be used to monitor the speed of recovery and reveal structural changes in the pathways involved in the clinical manifestations. In the last part of this work, we described how DS-MRI can be applied in the experimental and clinical settings to study cerebral plasticity after ischemia. Abstract: Ischemic stroke is the third leading cause of death in developed countries and the disease responsible for the most serious long-term neurological disability. Understanding the molecular and anatomical basis of stroke recovery is, therefore, extremely important and represents a major field of interest for basic and clinical research. Over the past two decades, much attention has focused on counteracting the noxious effects of the ischemic insult with exogenous substances (oxygen radical scavengers, AMPA and NMDA receptor antagonists, MMP inhibitors, etc.) which were successfully tested in the experimental field but which turned out to have controversial effects in clinical trials. A different but complementary approach to address ischemia pathophysiology and treatment options is to stimulate and investigate intrinsic mechanisms of neuroprotection using the "preconditioning effect": applying a brief insult protects against subsequent prolonged and detrimental ischemic episodes, by up-regulating powerful endogenous pathways that increase resistance to injury. We believe that this approach might offer an important insight into the molecular mechanisms responsible for endogenous neuroprotection.
In addition, results from preconditioning experiments may provide new strategies for making brain cells "naturally" more resistant to ischemic injury and accelerating their rate of functional recovery. In the first part of this work, we investigated downstream mechanisms of neuroprotection induced by thrombin, a well-known neuroprotectant which has been demonstrated to reduce stroke-induced cell death in in vitro and in vivo experimental models. Using microsurgery to induce transient brain ischemia in mice, we showed that thrombin can stimulate both the MAPK and JNK intracellular pathways, through a molecular biology approach and an in vivo analysis of a specific kinase inhibitor (L-JNK1). We also studied thrombin's impact on functional recovery, demonstrating that these molecular mechanisms could enhance post-stroke motor outcome. The second part of this study is based on investigating the anatomical basis underlying connectivity remodeling, leading to functional improvement after stroke. To do this, we used both a mouse model of experimental ischemia and human subjects with stroke. It is known from previously published data that the brain adapts to damage in a way that attempts to preserve motor function. The result of this reorganization is a new functional and structural architecture, which will vary from patient to patient depending on the anatomy of the damage, the biological age of the patient and the chronicity of the lesion. The success of any given therapeutic intervention will depend on how well it interacts with this new architecture. For this reason, we applied diffusion magnetic resonance techniques able to detect micro-structural and connectivity changes following an ischemic lesion: diffusion tensor MRI (DT-MRI) and diffusion spectrum MRI (DS-MRI). Using DT-MRI, we performed a long-term follow-up study of stroke mice which showed that diffusion changes in the stroke region and fiber tract remodeling correlate with stroke recovery.
In addition, axonal reorganization was shown in areas of increased expression of a plasticity-related protein (GAP-43, a protein associated with the axonal growth cone). Applying the same technique, we then performed a retrospective and a prospective study in humans, demonstrating how specific DTI parameters could help to monitor the speed of recovery and show longitudinal changes in damaged tracts involved in clinical symptoms. Finally, in the last part of this study, we showed how DS-MRI can be applied both to experimental and human stroke and which perspectives it opens for further investigation of post-stroke plasticity.
Abstract:
Genetic variants influence the risk of developing certain diseases or give rise to differences in drug response. Recent progress in cost-effective, high-throughput genome-wide techniques, such as microarrays measuring Single Nucleotide Polymorphisms (SNPs), has facilitated genotyping of large clinical and population cohorts. Combining the massive genotypic data with measurements of phenotypic traits allows for the determination of genetic differences that explain, at least in part, the phenotypic variations within a population. So far, models combining the most significant variants can only explain a small fraction of the variance, indicating the limitations of current models. In particular, researchers have only begun to address the possibility of interactions between genotypes and the environment. Elucidating the contributions of such interactions is a difficult task because of the large number of genetic as well as possible environmental factors. In this thesis, I worked on several projects within this context. My first and main project was the identification of possible SNP-environment interactions, where the phenotypes were serum lipid levels of patients from the Swiss HIV Cohort Study (SHCS) treated with antiretroviral therapy. Here the genotypes consisted of a limited set of SNPs in candidate genes relevant for lipid transport and metabolism. The environmental variables were the specific combinations of drugs given to each patient over the treatment period. My work explored bioinformatic and statistical approaches to relate patients' lipid responses to these SNPs, drugs and, importantly, their interactions. The goal of this project was to improve our understanding and to explore the possibility of predicting dyslipidemia, a well-known adverse drug reaction of antiretroviral therapy.
Specifically, I quantified how much of the variance in lipid profiles could be explained by the host genetic variants, the administered drugs and SNP-drug interactions and assessed the predictive power of these features on lipid responses. Using cross-validation stratified by patients, we could not validate our hypothesis that models that select a subset of SNP-drug interactions in a principled way have better predictive power than the control models using "random" subsets. Nevertheless, all models tested containing SNP and/or drug terms exhibited significant predictive power (as compared to a random predictor) and explained a sizable proportion of variance, in the patient-stratified cross-validation context. Importantly, the model containing stepwise selected SNP terms showed higher capacity to predict triglyceride levels than a model containing randomly selected SNPs. Dyslipidemia is a complex trait for which many factors remain to be discovered, thus missing from the data, and possibly explaining the limitations of our analysis. In particular, the interactions of drugs with SNPs selected from the set of candidate genes likely have small effect sizes which we were unable to detect in a sample of the present size (<800 patients). In the second part of my thesis, I performed genome-wide association studies within the Cohorte Lausannoise (CoLaus). I have been involved in several international projects to identify SNPs that are associated with various traits, such as serum calcium, body mass index, two-hour glucose levels, as well as metabolic syndrome and its components. These phenotypes are all related to major human health issues, such as cardiovascular disease. I applied statistical methods to detect new variants associated with these phenotypes, contributing to the identification of new genetic loci that may lead to new insights into the genetic basis of these traits.
This kind of research will lead to a better understanding of the mechanisms underlying these pathologies, a better evaluation of disease risk, the identification of new therapeutic leads and may ultimately lead to the realization of "personalized" medicine.
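The patient-stratified cross-validation mentioned above keeps every observation from a given patient in a single fold, so models are always evaluated on patients unseen during training. A minimal sketch, with hypothetical patient identifiers:

```python
from collections import defaultdict

def patient_folds(patient_ids, k=3):
    # Assign all observations of each patient to exactly one of k folds,
    # so no patient's data is split between training and test sets.
    by_patient = defaultdict(list)
    for idx, pid in enumerate(patient_ids):
        by_patient[pid].append(idx)
    folds = [[] for _ in range(k)]
    for i, pid in enumerate(sorted(by_patient)):
        folds[i % k].extend(by_patient[pid])
    return folds

# Hypothetical repeated-measures data: one id per observation
ids = ["p1", "p1", "p2", "p3", "p3", "p3", "p4", "p5"]
for test_idx in patient_folds(ids, k=3):
    train_idx = [i for i in range(len(ids)) if i not in test_idx]
    # fit on train_idx, evaluate on test_idx ...
    assert not {ids[i] for i in test_idx} & {ids[i] for i in train_idx}
```

Without this grouping, repeated measurements from the same patient leak between folds and inflate apparent predictive power; libraries such as scikit-learn provide the same behaviour via grouped cross-validation splitters.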
Abstract:
Morphogenesis emerges from complex multiscale interactions between genetic and mechanical processes. To understand these processes, the evolution of cell shape, proliferation and gene expression must be quantified. This quantification is usually performed either in full 3D, which is computationally expensive and technically challenging, or on 2D planar projections, which introduces geometrical artifacts on highly curved organs. Here we present MorphoGraphX (www.MorphoGraphX.org), a software that bridges this gap by working directly with curved surface images extracted from 3D data. In addition to traditional 3D image analysis, we have developed algorithms to operate on curved surfaces, such as cell segmentation, lineage tracking and fluorescence signal quantification. The software's modular design makes it easy to include existing libraries, or to implement new algorithms. Cell geometries extracted with MorphoGraphX can be exported and used as templates for simulation models, providing a powerful platform to investigate the interactions between shape, genes and growth.
Abstract:
Atherosclerosis is a chronic cardiovascular disease that involves the thickening of the artery walls as well as the formation of plaques (lesions) causing the narrowing of the lumens, in vessels such as the aorta, the coronary and the carotid arteries. Magnetic resonance imaging (MRI) is a promising modality for the assessment of atherosclerosis, as it is a non-invasive and patient-friendly procedure that does not use ionizing radiation. MRI offers high soft tissue contrast even without intravenous contrast media, while modification of the MR pulse sequences allows for further adjustment of the contrast for specific diagnostic needs. As such, MRI can create angiographic images of the vessel lumens to assess stenoses at the late stage of the disease, as well as blood flow-suppressed images for the early investigation of the vessel wall and the characterization of the atherosclerotic plaques. However, despite the great technical progress that occurred over the past two decades, MRI is intrinsically a low-sensitivity technique and some limitations still exist in terms of accuracy and performance. A major challenge for coronary artery imaging is respiratory motion. State-of-the-art diaphragmatic navigators rely on an indirect measure of motion, perform a 1D correction, and have long and unpredictable scan times. In response, self-navigation (SN) strategies have recently been introduced that offer 100% scan efficiency and increased ease of use. SN detects respiratory motion directly from the image data obtained at the level of the heart, and retrospectively corrects the same data before final image reconstruction. Thus, SN holds potential for multi-dimensional motion compensation. In this regard, this thesis presents novel SN methods that estimate 2D and 3D motion parameters from aliased sub-images that are obtained from the same raw data composing the final image.
Combining all corrected sub-images produces a final image with reduced motion artifacts for the visualization of the coronaries. The first study (section 2.2, 2D Self-Navigation with Compressed Sensing) presents a method for 2D translational motion compensation. Here, the use of compressed sensing (CS) reconstruction is proposed and investigated to support motion detection by reducing aliasing artifacts. In healthy human subjects, CS demonstrated improved motion-detection accuracy in simulations on in vivo data, while improved coronary artery visualization was demonstrated in free-breathing in vivo acquisitions. However, the respiration-induced motion of the heart has been shown to occur in three dimensions and to be more complex than a simple translation. Therefore, the second study (section 2.3, 3D Self-Navigation) presents a method for 3D affine motion correction rather than 2D-only correction. Here, different techniques were adopted to reduce the background signal contribution in respiratory motion tracking, as this can be adversely affected by the static tissue that surrounds the heart. The proposed method improved the conspicuity and visualization of the coronary arteries in healthy and cardiovascular disease patient cohorts in comparison to a conventional 1D SN method. In the third study (section 2.4, 3D Self-Navigation with Compressed Sensing), the same tracking methods were used to obtain sub-images sorted according to respiratory position. Then, instead of motion correction, a compressed sensing reconstruction was performed on all sorted sub-image data. This process exploits the consistency of the sorted data to reduce aliasing artifacts, such that the sub-image corresponding to the end-expiratory phase can be used directly to visualize the coronaries. In a healthy volunteer cohort, this strategy improved the conspicuity and visualization of the coronary arteries when compared to a conventional 1D SN method.
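The compressed sensing reconstructions used in these studies are far more elaborate than can be shown here, but the underlying mechanism, recovering a sparse signal from undersampled Fourier data by iterative soft-thresholding, can be sketched on a 1D toy problem. Everything below is a hypothetical illustration (ISTA on a spike signal), not the thesis algorithm.

```python
import numpy as np

def soft(z, t):
    """Complex soft-thresholding: the proximal operator of the l1 norm."""
    mag = np.maximum(np.abs(z), 1e-12)
    return np.where(mag > t, 1.0 - t / mag, 0.0) * z

def ista(y, mask, lam=0.02, n_iter=300):
    """Recover a sparse x from undersampled Fourier data y = mask * F(x).

    Plain ISTA: a gradient step on the data-fidelity term followed by
    soft-thresholding to enforce sparsity. Step size 1 is valid because the
    orthonormal FFT composed with a sampling mask has operator norm <= 1.
    """
    x = np.zeros(len(y), dtype=complex)
    for _ in range(n_iter):
        resid = mask * np.fft.fft(x, norm="ortho") - y
        x = soft(x - np.fft.ifft(mask * resid, norm="ortho"), lam)
    return x

# Toy experiment: a 4-sparse signal measured at ~60% of its Fourier samples.
rng = np.random.default_rng(0)
n = 128
x_true = np.zeros(n)
x_true[[10, 40, 41, 90]] = 1.0
mask = rng.random(n) < 0.6              # random undersampling pattern
y = mask * np.fft.fft(x_true, norm="ortho")
x_hat = ista(y, mask)
```

The same principle carries over to the sorted sub-images of the third study: consistency across the sorted data plays the role of the sparsity prior, suppressing the aliasing caused by undersampling each respiratory phase.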
For the visualization of the vessel wall and atherosclerotic plaques, the state-of-the-art dual inversion recovery (DIR) technique is able to suppress the signal coming from flowing blood and provide positive wall-lumen contrast. However, optimal contrast may be difficult to obtain and is subject to RR-interval variability. Furthermore, DIR imaging is time-inefficient, and multislice acquisitions may lead to prolonged scan times. In response, and as the fourth study of this thesis (chapter 3, Vessel Wall MRI of the Carotid Arteries), a phase-sensitive DIR method has been implemented and tested in the carotid arteries of a healthy volunteer cohort. By exploiting the phase information of images acquired after DIR, the proposed phase-sensitive method enhances wall-lumen contrast while widening the window of opportunity for image acquisition. As a result, a 3-fold increase in volumetric coverage is obtained at no extra cost in scan time, while image quality is improved. In conclusion, this thesis presented novel methods to address some of the main challenges for MRI of atherosclerosis: the suppression of motion and flow artifacts for improved visualization of vessel lumens, walls and plaques. These methods significantly improved image quality in healthy human subjects, as well as the scan efficiency and ease of use of MRI. Extensive validation is now warranted in patient populations to ascertain their diagnostic performance. Ultimately, these methods may bring atherosclerosis MRI closer to clinical practice. Résumé (translated from French): Atherosclerosis is a chronic cardiovascular disease that involves the thickening of the artery walls, as well as the formation of plaques (lesions) causing the narrowing of the lumens, in vessels such as the aorta, the coronary arteries and the carotid arteries.
Magnetic resonance imaging (MRI) is a promising modality for the assessment of atherosclerosis, as it is a non-invasive, patient-friendly procedure that does not use ionizing radiation. MRI offers very high soft-tissue contrast without the need for intravenous contrast media, while modification of the MR pulse sequences further allows the contrast to be adjusted for specific diagnostic needs. As such, MRI can create angiographic images of the vessel lumens to assess stenoses at the late stage of the disease, as well as blood-flow-suppressed images for the early investigation of the vessel walls and the characterization of atherosclerotic plaques. However, despite the great technical progress of the past two decades, MRI remains a technique of low sensitivity, and some limitations still exist in terms of accuracy and performance. One of the main challenges for coronary artery imaging is respiratory motion. State-of-the-art diaphragmatic navigators rely on an indirect measure of motion, perform a 1D correction, and have long and unpredictable acquisition times. In response, self-navigation (SN) strategies have recently been introduced that offer 100% scan efficiency and improved ease of use. SN detects respiratory motion directly from the raw image data acquired at the level of the heart, and retrospectively corrects these same data before the final image reconstruction. Thus, SN holds potential for multi-dimensional motion compensation. In this regard, this thesis presents novel SN methods that estimate 2D and 3D motion parameters from sub-images obtained from the same raw data that compose the final image.
The combination of all corrected sub-images produces a final image of the coronaries with reduced motion artifacts. The first study (section 2.2, 2D Self-Navigation with Compressed Sensing) concerns a method for 2D translational motion compensation. Here, the use of compressed sensing (CS) reconstruction is investigated to support motion detection by reducing undersampling artifacts. In healthy human subjects, CS demonstrated improved motion-detection accuracy in simulations on in vivo data, while coronary artery visualization was also improved in free-breathing in vivo acquisitions. However, the respiration-induced motion of the heart occurs in three dimensions and is more complex than a simple translation. Therefore, the second study (section 2.3, 3D Self-Navigation) concerns a method for 3D rather than 2D-only motion correction. Here, different techniques were adopted to reduce the background signal contribution in respiratory motion tracking, which can be adversely affected by the static tissue surrounding the heart. Compared with the conventional 1D SN correction, the proposed method improved the visualization of the coronary arteries in cohorts of healthy subjects and of patients with cardiovascular disease. In the third study (section 2.4, 3D Self-Navigation with Compressed Sensing), the same tracking methods were used to obtain sub-images sorted according to respiratory position. Instead of motion correction, a CS reconstruction was then performed on all sorted sub-images.
This procedure exploits the consistency of the data to reduce undersampling artifacts, such that the sub-image corresponding to the end-expiratory phase can be used directly to visualize the coronaries. In a cohort of healthy volunteers, this strategy improved the sharpness and visualization of the coronary arteries compared with a conventional 1D SN method. For the visualization of the vessel walls and atherosclerotic plaques, the state-of-the-art dual inversion recovery (DIR) technique is able to suppress the signal coming from the blood and to provide positive wall-lumen contrast. However, optimal contrast is difficult to obtain because it is subject to heart-rate variability. Moreover, DIR imaging is time-inefficient, and multislice acquisitions can lead to prolonged scan times. In response to this problem, and as the fourth study of this thesis (chapter 3, Vessel Wall MRI of the Carotid Arteries), a phase-sensitive DIR method was implemented and tested
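The phase-sensitive idea behind the fourth study, using the phase of the complex image to restore the polarity that a magnitude image discards after inversion recovery, can be sketched on a 1D toy profile. The background-phase estimate used here (the phase of the spatial mean, assuming positive-signal tissue dominates) is a crude stand-in for the thesis method; all names and numbers are hypothetical.

```python
import numpy as np

def phase_sensitive(img):
    """Restore signal polarity after inversion recovery.

    Assumption: the background phase is spatially constant and most voxels
    carry positive (tissue) signal, so the phase of the spatial mean is a
    usable background-phase estimate. Removing it and keeping the real part
    yields a signed image instead of an unsigned magnitude image.
    """
    phi = np.angle(img.mean())
    return (img * np.exp(-1j * phi)).real

# Toy 1D profile: positive wall/tissue signal with an inverted (negative)
# blood pool, as produced by dual inversion recovery near the null point.
truth = np.ones(100)
truth[40:60] = -0.3                      # blood pool, inverted by DIR
acquired = truth * np.exp(1j * 0.7)      # unknown constant background phase
restored = phase_sensitive(acquired)
```

A plain magnitude reconstruction (`np.abs(acquired)`) would map the inverted blood back to positive values, collapsing the wall-lumen contrast; the phase-sensitive version keeps the blood negative, which is why the timing window for acquisition can be relaxed.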