133 results for Homeostasis Model Assessment


Relevance:

30.00%

Publisher:

Abstract:

Absorption, transport and storage of iron are tightly regulated, as expected for an element that is both essential and potentially toxic. Iron deficiency is the leading cause of anaemia, and it also compromises immune function and cognitive development. Iron overload damages the liver and other organs in hereditary hemochromatosis, and in thalassaemia patients with both transfusion-related and non-transfusion-related iron accumulation. Excess iron has harmful effects in chronic liver diseases caused by excessive alcohol, obesity or viruses. There is evidence for involvement of iron in neurodegenerative diseases and in Type 2 diabetes. Variation in transferrin saturation, a biomarker of iron status, has been associated with mortality in patients with diabetes and in the general population [13]. All these associations between iron and either clinical disease or pathological processes make it important to understand the causes of variation in iron status. Importantly, information on genetic causes of variation can be used in Mendelian randomization studies to test whether variation in iron status is a cause or consequence of disease. We have used biomarkers of iron status (serum iron, transferrin, transferrin saturation and ferritin), which are commonly used clinically and readily measurable in thousands of individuals, and carried out a meta-analysis of human genome-wide association study (GWAS) data from 11 discovery and eight replication cohorts. Our aims were to identify additional loci affecting markers of iron status in the general population and to relate the significant loci to information on gene expression to identify relevant genes. We also made an initial assessment of whether any such loci affect iron status in HFE C282Y homozygotes, who are at genetic risk of HFE-related iron overload (hereditary hemochromatosis type 1, OMIM #235200).
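The Mendelian randomization reasoning mentioned above treats a genetic variant as an instrumental variable for iron status. As a minimal, purely illustrative sketch of that idea (not the study's analysis), the Wald ratio below divides a hypothetical SNP-outcome effect by a hypothetical SNP-iron effect; every number in it is invented.

```python
# Minimal sketch of a Wald-ratio Mendelian randomization estimate. All effect
# sizes below are invented for illustration; they are not results from the study.

def wald_ratio(beta_exposure, beta_outcome, se_outcome):
    """Causal effect of the exposure on the outcome using one genetic instrument.

    Uses the first-order standard error, ignoring uncertainty in the
    SNP-exposure estimate (a common simplification).
    """
    estimate = beta_outcome / beta_exposure
    se = abs(se_outcome / beta_exposure)
    return estimate, se

# Hypothetical per-allele effects of an iron-status-associated SNP:
beta_iron = 0.19       # SD change in serum iron per allele (invented)
beta_disease = 0.03    # log-odds change in disease risk per allele (invented)
se_disease = 0.01

effect, se = wald_ratio(beta_iron, beta_disease, se_disease)
print(f"causal estimate = {effect:.3f} +/- {se:.3f} log-odds per SD of serum iron")
```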

Relevance:

30.00%

Publisher:

Abstract:

STUDY OBJECTIVES: Sleep fragmentation (SF) is an integral feature of sleep apnea and other prevalent sleep disorders. Although the effect of repetitive arousals on cognitive performance is well documented, the effects of long-term SF on electroencephalography (EEG) and molecular markers of sleep homeostasis remain poorly investigated. To address this question, we developed a mouse model of chronic SF and characterized its effect on EEG spectral frequencies and the expression of genes previously linked to sleep homeostasis, including clock genes, heat shock proteins, and plasticity-related genes. DESIGN: N/A. SETTING: Animal sleep research laboratory. PARTICIPANTS: Sixty-six C57BL/6J adult mice. INTERVENTIONS: Instrumental sleep disruption at a rate of 60/h for 14 days. MEASUREMENTS AND RESULTS: Locomotor activity and EEG were recorded during 14 days of SF followed by 2 days of recovery. Despite a dramatic number of arousals and decreased sleep bout duration, SF minimally reduced the total quantity of sleep and did not significantly alter its circadian distribution. Spectral analysis during SF revealed a homeostatic drive for slow wave activity (SWA; 1-4 Hz) as well as for other frequencies (4-40 Hz). Recordings during recovery revealed slow wave sleep consolidation and a transient rebound in SWA and in paradoxical sleep duration. The expression of the selected genes was not induced following chronic SF. CONCLUSIONS: Chronic SF increased sleep pressure, confirming that altered sleep quality with preserved quantity triggers core sleep homeostasis mechanisms. However, it did not induce the expression of genes induced by sleep loss, suggesting that these molecular pathways are not activated in a sustained manner in chronic diseases involving SF.
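Quantifying SWA as reported above typically comes down to integrating the EEG power spectrum over the 1-4 Hz band. The sketch below does this with Welch's method on a synthetic one-minute trace; the sampling rate, window length and signal are assumptions made for illustration and are not the study's recording parameters.

```python
# Minimal sketch of slow-wave-activity (SWA) quantification from an EEG trace
# using Welch's method. The signal is synthetic; the band limits (1-4 Hz)
# follow the abstract, everything else is illustrative.
import numpy as np
from scipy.signal import welch

fs = 200.0                                    # sampling rate (Hz), assumed
t = np.arange(0, 60, 1 / fs)                  # one 60 s epoch
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 2.0 * t) + 0.5 * rng.standard_normal(t.size)

f, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))   # 4 s windows
band = (f >= 1.0) & (f <= 4.0)
swa = psd[band].sum() * (f[1] - f[0])         # integrated 1-4 Hz power
print(f"SWA (1-4 Hz) power: {swa:.3f} (arbitrary units)")
```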

Relevance:

30.00%

Publisher:

Abstract:

(13)C magnetic resonance spectroscopy (MRS) combined with the administration of (13)C-labeled substrates uniquely allows metabolic fluxes to be measured in vivo in the brain of humans and rats. The extension to mouse models may provide a unique opportunity for the investigation of models of human diseases. In the present study, the short-echo-time (TE), full-sensitivity (1)H-[(13)C] MRS sequence combined with a high magnetic field (14.1 T) and infusion of [U-(13)C6] glucose was used to enhance the experimental sensitivity in vivo in the mouse brain, and the (13)C turnover curves of glutamate C4, glutamine C4, glutamate+glutamine C3, aspartate C2, lactate C3, alanine C3, and γ-aminobutyric acid C2, C3 and C4 were obtained. A one-compartment model was used to fit the (13)C turnover curves and yielded values of metabolic fluxes including the tricarboxylic acid (TCA) cycle flux VTCA (1.05 ± 0.04 μmol/g per minute), the exchange flux between 2-oxoglutarate and glutamate Vx (0.48 ± 0.02 μmol/g per minute), the glutamate-glutamine exchange rate Vgln (0.20 ± 0.02 μmol/g per minute), the pyruvate dilution factor Kdil (0.82 ± 0.01), and the ratio of the lactate conversion rate to the alanine conversion rate VLac/VAla (10 ± 2). This study opens the prospect of studying transgenic mouse models of brain pathologies.
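Fitting (13)C turnover curves as described above amounts to estimating rate constants from label-enrichment time courses. The sketch below fits a single-pool mono-exponential enrichment curve to synthetic data; the real one-compartment model couples several metabolite pools, so this only illustrates the basic fitting step, and all values are invented.

```python
# Simplified sketch of fitting a label-turnover curve. A full one-compartment
# model couples several metabolite pools; here one pool with mono-exponential
# enrichment is fitted to synthetic data to show the idea.
import numpy as np
from scipy.optimize import curve_fit

def enrichment(t, k, plateau):
    """Fractional 13C enrichment of one pool: plateau * (1 - exp(-k t))."""
    return plateau * (1.0 - np.exp(-k * t))

t = np.linspace(0, 120, 25)                       # minutes of infusion
rng = np.random.default_rng(1)
data = enrichment(t, 0.05, 0.4) + 0.01 * rng.standard_normal(t.size)

popt, pcov = curve_fit(enrichment, t, data, p0=[0.01, 0.3])
k, plateau = popt
print(f"rate constant k = {k:.3f} /min, plateau enrichment = {plateau:.2f}")
# With a known pool size P (umol/g), the corresponding flux scales as V = k * P.
```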

Relevance:

30.00%

Publisher:

Abstract:

Mutations in GDAP1, which encodes a protein located in the outer mitochondrial membrane, cause axonal recessive (AR-CMT2), axonal dominant (CMT2K) and demyelinating recessive (CMT4A) forms of Charcot-Marie-Tooth (CMT) neuropathy. Recessive loss-of-function mutations in GDAP1 are associated with decreased mitochondrial fission activity, while dominant mutations result in impairment of mitochondrial fusion with increased production of reactive oxygen species and susceptibility to apoptotic stimuli. GDAP1 silencing in vitro reduces Ca2+ inflow through store-operated Ca2+ entry (SOCE) upon mobilization of endoplasmic reticulum (ER) Ca2+, likely in association with an abnormal distribution of the mitochondrial network. To investigate the functional consequences of lack of GDAP1 in vivo, we generated a Gdap1 knockout mouse. The affected animals presented abnormal motor behavior starting at the age of 3 months. Electrophysiological and biochemical studies confirmed the axonal nature of the neuropathy, whereas histopathological studies over time showed progressive loss of motor neurons (MNs) in the anterior horn of the spinal cord and defects in neuromuscular junctions. Analyses of cultured embryonic MNs and adult dorsal root ganglia neurons from affected animals demonstrated large and defective mitochondria, changes in the ER cisternae, reduced acetylation of cytoskeletal α-tubulin and increased numbers of autophagy vesicles. Importantly, MNs showed reduced cytosolic calcium and a reduced SOCE response. The development and characterization of this GDAP1 neuropathy mouse model thus revealed that some of the pathophysiological changes present in the axonal recessive form of GDAP1-related CMT might be the consequence of changes in mitochondrial network biology and mitochondria-endoplasmic reticulum interaction, leading to abnormalities in calcium homeostasis.

Relevance:

30.00%

Publisher:

Abstract:

The development of dysfunctional or exhausted T cells is characteristic of immune responses to chronic viral infections and cancer. Exhausted T cells are defined by reduced effector function, sustained upregulation of multiple inhibitory receptors, an altered transcriptional program and perturbations of normal memory development and homeostasis. This review focuses on (a) illustrating milestone discoveries that led to our present understanding of T cell exhaustion, (b) summarizing recent developments in the field, and (c) identifying new challenges for translational research. Exhausted T cells are now recognized as key therapeutic targets in human infections and cancer. Much of our knowledge of the clinically relevant process of exhaustion derives from studies in the mouse model of lymphocytic choriomeningitis virus (LCMV) infection. Studies using this model have formed the foundation for our understanding of human T cell memory and exhaustion. We will use this example to discuss recent advances in our understanding of T cell exhaustion, to illustrate the value of integrated mouse and human studies, and to emphasize the benefits of bi-directional mouse-to-human and human-to-mouse research approaches.

Relevance:

30.00%

Publisher:

Abstract:

AIMS: Notch1 signalling in the heart is mainly activated via expression of Jagged1 on the surface of cardiomyocytes. Notch controls cardiomyocyte proliferation and differentiation in the developing heart and regulates cardiac remodelling in the stressed adult heart. Besides canonical Notch receptor activation in signal-receiving cells, Notch ligands can also activate Notch receptor-independent responses in signal-sending cells via release of their intracellular domain. We therefore evaluated the importance of Jagged1 (J1) intracellular domain (ICD)-mediated pathways in the postnatal heart. METHODS AND RESULTS: In cardiomyocytes, Jagged1 releases J1ICD, which then translocates into the nucleus and down-regulates Notch transcriptional activity. To study the importance of J1ICD in cardiac homeostasis, we generated transgenic mice expressing a tamoxifen-inducible form of J1ICD specifically in cardiomyocytes. Using this model, we demonstrate that J1ICD-mediated Notch inhibition diminishes proliferation in the neonatal cardiomyocyte population and promotes maturation. In the neonatal heart, a response via Wnt and Akt pathway activation is elicited as an attempt to compensate for the deficit in cardiomyocyte number resulting from J1ICD activation. In the stressed adult heart, J1ICD activation results in a dramatic reduction of the number of Notch-signalling cardiomyocytes, blunts the hypertrophic response, and reduces the number of apoptotic cardiomyocytes. Consistently, this occurs concomitantly with a significant down-regulation of the phosphorylation of the Akt effectors ribosomal S6 protein (S6) and eukaryotic initiation factor 4E binding protein 1 (4E-BP1), which control protein synthesis. CONCLUSIONS: Altogether, these data demonstrate the importance of J1ICD in the modulation of physiological and pathological hypertrophy, and reveal the existence of a novel pathway regulating cardiac homeostasis.

Relevance:

30.00%

Publisher:

Abstract:

Obesity is associated with chronic food intake disorders and binge eating. Food intake relies on the interaction between homeostatic regulation and hedonic signals, among which olfaction is a major sensory determinant. However, its potential modulation at the peripheral level by the chronic energy imbalance associated with obesity remains a matter of debate. We further investigated olfactory function in a rodent model relevant to the situation encountered in obese humans, in which genetic susceptibility is juxtaposed on chronic eating disorders. Using several olfactory-driven tests, we compared the behaviors of obesity-prone (OP) Sprague-Dawley rats fed a high-fat/high-sugar diet with those of obesity-resistant rats fed normal chow. In OP rats, we report 1) a decreased odor threshold, but 2) poor olfactory performance associated with learning/memory deficits, 3) a decreased influence of fasting, and 4) impaired insulin control of food-seeking behavior. Alongside these behavioral modifications, we found modulation of metabolism-related factors implicated in 1) electrical olfactory signal regulation (insulin receptor), 2) cellular dynamics (glucocorticoid receptors, pro- and anti-apoptotic factors), and 3) homeostasis of the olfactory mucosa and bulb (monocarboxylate and glucose transporters). Such impairments might contribute to the perturbed daily food intake pattern that we observed in obese animals.

Relevance:

30.00%

Publisher:

Abstract:

Characterizing geological features and structures in three dimensions over inaccessible rock cliffs is needed to assess natural hazards such as rockfalls and rockslides, and also to perform investigations aimed at mapping geological contacts and building stratigraphic and fold models. Indeed, detailed 3D data, such as LiDAR point clouds, make it possible to study accurately the hazard processes and the structure of geologic features, in particular in vertical and overhanging rock slopes. Thus, 3D geological models have great potential for application to a wide range of geological investigations both in research and in applied geology projects, such as mines, tunnels and reservoirs. Recent developments in ground-based remote sensing techniques (LiDAR, photogrammetry and multispectral/hyperspectral imaging) are revolutionizing the acquisition of morphological and geological information. As a consequence, there is great potential for improving the modeling of geological bodies, as well as of failure mechanisms and stability conditions, by integrating detailed remote data. During the past ten years, several large rockfall events occurred along important transportation corridors where millions of people travel every year (Switzerland: Gotthard motorway and railway; Canada: Sea-to-Sky Highway between Vancouver and Whistler). These events show that there is still a lack of knowledge concerning the detection of potential rockfalls, leaving mountain residential settlements and roads exposed to high risk. It is necessary to understand the main factors that destabilize rocky outcrops even when inventories are lacking and no clear morphological evidence of rockfall activity is observed. In order to improve the forecasting of potential future landslides, it is crucial to understand the evolution of rock slope stability. Defining the areas theoretically most prone to rockfalls can be particularly useful for simulating trajectory profiles and for generating hazard maps, which are the basis for land use planning in mountainous regions. The most important questions to address in order to assess rockfall hazard are: Where are the most probable sources of future rockfalls located? What are the frequencies of occurrence of these rockfalls? I characterized the fracturing patterns in the field and with LiDAR point clouds. Afterwards, I developed a model to compute failure mechanisms on terrestrial point clouds in order to assess the susceptibility to rockfalls at the cliff scale. Similar procedures were already available to evaluate rockfall susceptibility based on aerial digital elevation models. This new model makes it possible to detect the most susceptible rockfall sources with unprecedented detail in vertical and overhanging areas. The computed most probable rockfall source areas in the granitic cliffs of Yosemite Valley and the Mont-Blanc massif were then compared to inventoried rockfall events to validate the calculation methods. Yosemite Valley was chosen as a test area because it has particularly strong rockfall activity (about one rockfall every week), which leads to a high rockfall hazard. The west face of the Dru was also chosen for its relevant rockfall activity and especially because it was affected by some of the largest rockfalls that occurred in the Alps during the last 10 years. Moreover, both areas were suitable because of their huge vertical and overhanging cliffs, which are difficult to study with classical methods.
Limit equilibrium models have been applied to several case studies to evaluate the effects of different parameters on the stability of rock slope areas. The impact of the degradation of rock bridges on the stability of large compartments in the west face of the Dru was assessed using finite element modeling. In particular, I conducted a back-analysis of the large rockfall event of 2005 (265,000 m3) by integrating field observations of joint conditions, characteristics of the fracturing pattern and results of geomechanical tests on the intact rock. These analyses improved our understanding of the factors that influence the stability of rock compartments and were used to define the most probable future rockfall volumes at the Dru. Terrestrial laser scanning point clouds were also successfully employed to perform geological mapping in 3D, using the intensity of the backscattered signal. Another technique for obtaining vertical geological maps is to combine a triangulated TLS mesh with 2D geological maps. At El Capitan (Yosemite Valley) we built a georeferenced vertical map of the main plutonic rocks that was used to investigate the reasons for the preferential rockwall retreat rate. Additional efforts to characterize the erosion rate were made at Monte Generoso (Ticino, southern Switzerland), where I attempted to improve the estimation of long-term erosion by also taking into account the volumes of the unstable rock compartments. Finally, the following points summarize the main outputs of my research: The new model to compute failure mechanisms and rockfall susceptibility with 3D point clouds makes it possible to accurately define the most probable rockfall source areas at the cliff scale. The analysis of the rock bridges at the Dru shows the potential of integrating detailed measurements of the fractures into geomechanical models of rock mass stability. The correction of the LiDAR intensity signal makes it possible to classify a point cloud according to rock type and then use this information to model complex geologic structures. The integration of these results on rock mass fracturing and composition with existing methods can improve rockfall hazard assessments and enhance the interpretation of the evolution of steep rock slopes.
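A simple building block of the kind of failure-mechanism computation described above is a kinematic feasibility test applied to a locally estimated slope-face orientation and a discontinuity-set orientation. The sketch below implements the generic textbook criterion for planar sliding; it is not the model developed in this work, and all orientations and the friction angle are illustrative.

```python
# Minimal sketch of a kinematic feasibility test for planar sliding, the kind
# of check that can be run on slope-face orientations estimated locally from a
# point cloud. This is the generic textbook criterion, not the thesis model;
# the angles and friction value are illustrative.

def planar_sliding_feasible(slope_dip, slope_dip_dir,
                            joint_dip, joint_dip_dir,
                            friction_angle, max_dir_diff=20.0):
    """Markland-type test: sliding is kinematically feasible when the joint
    dips less steeply than the face, more steeply than the friction angle,
    and its dip direction is roughly parallel to that of the face."""
    dir_diff = abs((joint_dip_dir - slope_dip_dir + 180) % 360 - 180)
    return (dir_diff <= max_dir_diff
            and friction_angle < joint_dip < slope_dip)

# Hypothetical values for one cliff cell and one discontinuity set:
print(planar_sliding_feasible(slope_dip=75, slope_dip_dir=135,
                              joint_dip=48, joint_dip_dir=128,
                              friction_angle=30))   # -> True
```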

Relevance:

30.00%

Publisher:

Abstract:

Evaluation of image quality (IQ) in Computed Tomography (CT) is important to ensure that diagnostic questions are correctly answered, whilst keeping the radiation dose to the patient as low as is reasonably possible. The assessment of individual aspects of IQ is already a key component of routine quality control of medical x-ray devices. These values, together with standard dose indicators, can be used to derive 'figures of merit' (FOM) to characterise the dose efficiency of CT scanners operating in certain modes. The demand for clinically relevant IQ characterisation has naturally increased with the development of CT technology (detector efficiency, image reconstruction and processing), resulting in the adaptation and evolution of assessment methods. The purpose of this review is to present the spectrum of methods that have been used to characterise image quality in CT: from objective measurements of physical parameters to clinical task-based approaches (i.e. the model observer (MO) approach), including the pure human observer approach. When combined with a dose indicator, a generalised dose efficiency index can be explored in a framework of system and patient dose optimisation. We will focus on the IQ methodologies that are required not only for standard reconstruction but also for iterative reconstruction algorithms. With this concept, the previously used FOM will be presented together with a proposal to update them, in order to keep them relevant and in step with technological progress. The MO, which objectively assesses IQ for clinically relevant tasks, represents the most promising method with respect to radiologist sensitivity performance and is therefore of most relevance in the clinical environment.
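One common way to combine a task-based IQ index with a dose indicator into a figure of merit is detectability squared per unit dose. The sketch below assumes that form (FOM = d'^2 / CTDIvol) and uses invented numbers; it illustrates the concept and is not a FOM proposed in this review.

```python
# Minimal sketch of a dose-efficiency figure of merit of the assumed form
# FOM = d'^2 / CTDIvol. All numbers below are invented for illustration.

def dose_efficiency(d_prime, ctdi_vol_mgy):
    """Detectability squared per unit volume CT dose index (1/mGy)."""
    return d_prime ** 2 / ctdi_vol_mgy

# Hypothetical detectability indices for two protocols at different doses:
protocols = {
    "protocol A (FBP, 10 mGy)":       {"d_prime": 2.1, "ctdi": 10.0},
    "protocol B (iterative, 5 mGy)":  {"d_prime": 1.8, "ctdi": 5.0},
}
for name, p in protocols.items():
    print(f"{name}: FOM = {dose_efficiency(p['d_prime'], p['ctdi']):.2f} /mGy")
```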

Relevance:

30.00%

Publisher:

Abstract:

This chapter presents possible uses and examples of Monte Carlo methods for the evaluation of uncertainties in the field of radionuclide metrology. The method is already well documented in GUM Supplement 1, but here we present a more restrictive approach, in which the quantities of interest calculated by the Monte Carlo method are estimators of the expectation and standard deviation of the measurand, and the Monte Carlo method is used to propagate the uncertainties of the input parameters through the measurement model. This approach is illustrated by an example of the activity calibration of a 103Pd source by liquid scintillation counting and by the calculation of a linear regression on experimental data points. An electronic supplement presents some algorithms that may be used to generate random numbers with various statistical distributions for the implementation of this Monte Carlo calculation method.
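A minimal sketch of the propagation step described above: sample the input quantities from assumed distributions, push each draw through the measurement model, and take the mean and standard deviation of the output. The measurement model and all values below are generic placeholders, not the 103Pd liquid-scintillation example itself.

```python
# Minimal sketch of GUM-Supplement-1-style Monte Carlo uncertainty propagation:
# sample the inputs, evaluate the measurement model for each draw, and report
# the mean and standard deviation of the output. The model and numbers are a
# generic illustration, not the chapter's 103Pd example.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Input quantities with assumed distributions (illustrative values):
net_counts = rng.normal(125_000, 400, n)        # counting statistics
efficiency = rng.normal(0.92, 0.01, n)          # detection efficiency
live_time = rng.uniform(599.8, 600.2, n)        # seconds, rectangular

activity = net_counts / (efficiency * live_time)   # measurement model (Bq)

print(f"activity  = {activity.mean():.1f} Bq")
print(f"std. unc. = {activity.std(ddof=1):.1f} Bq")
```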

Relevance:

30.00%

Publisher:

Abstract:

PURPOSE: Iterative algorithms introduce new challenges in the field of image quality assessment. The purpose of this study is to use a mathematical model to evaluate low-contrast detectability in CT objectively. MATERIALS AND METHODS: A QRM 401 phantom containing 5 and 8 mm diameter spheres with contrast levels of 10 and 20 HU was used. The images were acquired at 120 kV with CTDIvol equal to 5, 10, 15 and 20 mGy and reconstructed using the filtered back-projection (FBP), adaptive statistical iterative reconstruction 50% (ASIR 50%) and model-based iterative reconstruction (MBIR) algorithms. The model observer used is the channelized Hotelling observer (CHO) with dense difference-of-Gaussian (D-DOG) channels. The CHO performance was compared with the outcomes of six human observers who performed four-alternative forced choice (4-AFC) tests. RESULTS: For the same CTDIvol level, and according to the CHO model, the MBIR algorithm gives the highest detectability index. The outcomes of the human observers and the results of the CHO are highly correlated whatever the dose level, the signal considered and the algorithm used, provided some noise is added to the CHO model. The Pearson coefficient between the human observers and the CHO is 0.93 for FBP and 0.98 for MBIR. CONCLUSION: Human observer performance can be predicted by the CHO model. This opens the way for reporting, in parallel with the standard dose report, the expected level of low-contrast detectability. The introduction of iterative reconstruction requires such an approach to ensure that dose reduction does not impair diagnostic performance.
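The CHO detectability index is obtained by channelizing signal-present and signal-absent images, forming the Hotelling template from the channel-output statistics, and computing d'. The sketch below does this on simulated images with three plain radial Gaussian channels as crude stand-ins for the D-DOG channels; none of it reproduces the study's phantom data or channel parameters.

```python
# Minimal sketch of a channelized Hotelling observer (CHO) detectability
# calculation on synthetic data. The three Gaussian channels are stand-ins for
# the study's D-DOG channels, and all image statistics are simulated.
import numpy as np

rng = np.random.default_rng(0)
size, n_img = 32, 400
x = np.arange(size) - size / 2
xx, yy = np.meshgrid(x, x)
r2 = xx**2 + yy**2

signal = 8.0 * np.exp(-r2 / (2 * 3.0**2))                  # low-contrast blob
channels = np.stack([np.exp(-r2 / (2 * s**2)).ravel()      # radial Gaussian channels
                     for s in (2.0, 4.0, 8.0)], axis=1)

def channelize(images):
    return images.reshape(len(images), -1) @ channels       # (n_img, n_channels)

noise_absent = rng.normal(0, 5, (n_img, size, size))             # signal-absent
noise_present = rng.normal(0, 5, (n_img, size, size)) + signal   # signal-present
v_a, v_p = channelize(noise_absent), channelize(noise_present)

delta = v_p.mean(axis=0) - v_a.mean(axis=0)
cov = 0.5 * (np.cov(v_a, rowvar=False) + np.cov(v_p, rowvar=False))
d_prime = float(np.sqrt(delta @ np.linalg.solve(cov, delta)))
print(f"CHO detectability index d' = {d_prime:.2f}")
```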

Relevance:

30.00%

Publisher:

Abstract:

Computed tomography (CT) is an imaging technique in which interest has been quickly growing since it began to be used in the 1970s. Today, it has become an extensively used modality because of its ability to produce accurate diagnostic images. However, even if a direct benefit to patient healthcare is attributed to CT, the dramatic increase in the number of CT examinations performed has raised concerns about the potential negative effects of ionising radiation on the population. Among those negative effects, one of the major risks remaining is the development of cancers associated with exposure to diagnostic X-ray procedures. In order to ensure that the benefit-risk ratio still remains in favour of the patient, it is necessary to make sure that the delivered dose leads to the proper diagnosis without producing unnecessarily high-quality images. This optimisation scheme is already an important concern for adult patients, but it must become an even greater priority when examinations are performed on children or young adults, in particular with follow-up studies which require several CT procedures over the patient's life. Indeed, children and young adults are more sensitive to radiation due to their faster metabolism. In addition, harmful consequences have a higher probability of occurring because of a younger patient's longer life expectancy. The recent introduction of iterative reconstruction algorithms, which were designed to substantially reduce dose, is certainly a major achievement in CT evolution, but it has also created difficulties in the quality assessment of the images produced using those algorithms. The goal of the present work was to propose a strategy to investigate the potential of iterative reconstructions to reduce dose without compromising the ability to answer the diagnostic questions. The major difficulty entails having a clinically relevant way to estimate image quality. To ensure the choice of pertinent image quality criteria, this work was performed in continuous close collaboration with radiologists. The work began by tackling the way to characterise image quality when dealing with musculo-skeletal examinations.
We focused, in particular, on image noise and spatial resolution behaviour when iterative image reconstruction was used. The analysis of these physical parameters allowed radiologists to adapt their image acquisition and reconstruction protocols while knowing what loss of image quality to expect. This work also dealt with the loss of low-contrast detectability associated with dose reduction, which is a major concern when dealing with patient dose reduction in abdominal investigations. Knowing that alternatives to classical Fourier-space metrics had to be used to assess image quality, we focused on the use of mathematical model observers. Our experimental parameters determined the type of model to use. Ideal model observers were applied to characterise image quality when purely objective results about signal detectability were sought, whereas anthropomorphic model observers were used in a more clinical context, when the results had to be compared with the eye of a radiologist, thus taking advantage of their incorporation of human visual system elements. This work confirmed that the use of model observers makes it possible to assess image quality using a task-based approach, which, in turn, establishes a bridge between medical physicists and radiologists. It also demonstrated that statistical iterative reconstructions have the potential to reduce the delivered dose without impairing the quality of the diagnosis. Among the different types of iterative reconstructions, model-based ones offer the greatest potential, since images produced using this modality can still lead to an accurate diagnosis even when acquired at very low dose. This work has also clarified the role of the medical physicist in CT imaging: the standard metrics remain important for assessing unit compliance with legal requirements, but the use of a model observer is the way forward when optimising imaging protocols.

Relevance:

30.00%

Publisher:

Abstract:

Nanogenotoxicity is a crucial endpoint in the safety testing of nanomaterials as it addresses potential mutagenicity, which has implications for the risks of both genetic disease and carcinogenesis. Within the NanoTEST project, we investigated the genotoxic potential of well-characterised nanoparticles (NPs): titanium dioxide (TiO2) NPs of nominal size 20 nm, iron oxide NPs (8 nm) both uncoated (U-Fe3O4) and oleic acid coated (OC-Fe3O4), rhodamine-labelled amorphous silica of 25 nm (Fl-25 SiO2) and 50 nm (Fl-50 SiO2), and polylactic glycolic acid-polyethylene oxide polymeric NPs, as well as Endorem® as a negative control, for the detection of strand breaks and oxidised DNA lesions with the alkaline comet assay. Using primary cells and cell lines derived from blood (human lymphocytes and lymphoblastoid TK6 cells), the vascular/central nervous system (human cerebral endothelial cells), liver (rat hepatocytes and Kupffer cells), kidney (monkey Cos-1 and human HEK293 cells), lung (human bronchial 16HBE14o cells) and placenta (human BeWo b30), we were interested in which in vitro cell models are sufficient to detect positive (genotoxic) and negative (non-genotoxic) responses. All in vitro studies were harmonized: NPs from the same batch, identical dispersion protocols (for TiO2 NPs, two dispersions were used), and the same exposure times, concentration ranges, culture conditions and time-courses were used. The results of the statistical evaluation show that OC-Fe3O4 and TiO2 NPs are genotoxic under the experimental conditions used. When all NPs were included in the analysis, no differences were seen among cell lines, demonstrating the usefulness of the assay in all cells for identifying genotoxic and non-genotoxic NPs. The TK6 cells, human lymphocytes, BeWo b30 and kidney cells seem to be the most reliable for detecting a dose-response.