Abstract:
Static process simulation has traditionally been used to model complex processes for various purposes. However, using static process simulators to prepare holistic examinations aimed at improving profit-making capability requires a great deal of work, because producing results requires assessing the applicability of detailed data that may be irrelevant to the objective. The data relevant to the total assessment gets buried by irrelevant data. Furthermore, the models include no examination of maintenance or risk management, and economic examination is often an extra feature added to them that can be performed with a spreadsheet program. In this work, a process model applicable to holistic economic examinations has been developed. The model is based on the life cycle profit philosophy developed by Hagberg and Henriksson in 1996. The construction of the model has drawn on life cycle assessment and life cycle costing methodologies with a view to developing, above all, a model applicable to the economic examination of complete wholes and requiring only information focused on aspects essential to the objectives. Life cycle assessment and life cycle costing differ in their modeling principles, but features of both methodologies can be used in developing economic process modeling. Methods applicable to the modeling of complex processes can be examined from the viewpoint of life cycle methodologies, because they involve collecting and managing large bodies of information and producing information for the needs of decision-makers. The results of the study show that, on the basis of the principles of life cycle modeling, a process model can be created that produces holistic efficiency examinations of the profit-making capability of a production line with fewer resources than traditional methods. The calculations of the model draw as far as possible on the factory's information system, which means that the accuracy of the results can be improved by developing the information systems so that they provide the best possible information for this kind of examination.
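As a rough illustration of the life cycle profit idea underlying the model, the sketch below computes a discounted life cycle profit as revenues minus operating, maintenance and risk costs over the production line's life, net of the initial investment. All names and figures are illustrative assumptions, not values from the study.

```python
# Minimal sketch of a life-cycle profit (LCP) calculation: discounted
# revenues minus operating, maintenance and risk costs, net of the
# initial investment. All figures are illustrative, not from the study.

def life_cycle_profit(revenue, opex, maintenance, risk_cost,
                      investment, years, rate):
    """Net present value of profit over the production line's life."""
    lcp = -investment
    for t in range(1, years + 1):
        net = revenue - opex - maintenance - risk_cost
        lcp += net / (1 + rate) ** t
    return lcp

print(life_cycle_profit(revenue=12e6, opex=7e6, maintenance=1.2e6,
                        risk_cost=0.3e6, investment=25e6,
                        years=20, rate=0.08))
```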
Abstract:
Shallow upland drains (grips) have been hypothesized to be responsible for increased downstream flow magnitudes. Observations provide counterfactual evidence, often reflecting the difficulty of inferring conclusions from statistical correlation and paired catchment comparisons, and the complexity of designing field experiments to test grip impacts at the catchment scale. Drainage should provide drier antecedent moisture conditions, and hence more storage at the start of an event; however, grips convey flow at higher velocities than overland flow, thus potentially delivering flow more rapidly to the drainage network. We develop and apply a model for assessing the impacts of grips on flow hydrographs. The model was calibrated on the gripped case, and the gripped case was then compared with the intact case by removing all grips. This comparison showed that, even given parameter uncertainty, the intact case had significantly higher flood peaks and lower baseflows, mirroring field observations of the hydrological response of intact peat. The simulations suggest that this is because delivery effects may not translate into catchment-scale impacts, for three reasons. First, in our case, the proportions of flow path lengths on hillslopes were not changed significantly by gripping. Second, the structure of the grip network, as compared with the structure of the drainage basin, militated against grip-related increases in the concentration of runoff in the drainage network, although it did marginally reduce the mean timing of that concentration at the catchment outlet. Third, the effect of the latter upon downstream flow magnitudes can only be assessed by reference to the peak timing of other tributary basins, emphasizing that drain effects are both relative and scale-dependent. However, given the importance of hillslope flow paths, we show that if upland drainage causes significant changes in surface roughness on hillslopes, then critical and important feedbacks may affect the speed of hydrological response.
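To make the delivery effect concrete, here is a minimal sketch (with assumed velocities, not the calibrated model's parameters) comparing the travel time of water to the channel network when part of a hillslope flow path is replaced by a faster grip segment.

```python
# Illustrative sketch of the delivery effect: travel time to the
# channel network for a hillslope flow path, with and without a grip
# intercepting part of the path. Velocities are assumed values.

overland_v = 0.01   # m/s, overland flow on intact peat (assumed)
grip_v = 0.20       # m/s, flow within a grip (assumed)

def travel_time(hillslope_len, grip_len):
    """Seconds for water to reach the channel network."""
    return hillslope_len / overland_v + grip_len / grip_v

# Intact case: 300 m of overland flow; gripped case: the grip shortens
# the overland segment but adds a fast in-channel segment.
print(travel_time(300.0, 0.0))    # intact hillslope
print(travel_time(120.0, 180.0))  # gripped hillslope
```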
Abstract:
Absorption, transport and storage of iron are tightly regulated, as expected for an element that is both essential and potentially toxic. Iron deficiency is the leading cause of anaemia, and it also compromises immune function and cognitive development. Iron overload damages the liver and other organs in hereditary hemochromatosis and in thalassaemia patients with both transfusion-related and non-transfusion-related iron accumulation. Excess iron has harmful effects in chronic liver diseases caused by excessive alcohol, obesity or viruses. There is evidence for the involvement of iron in neurodegenerative diseases and in Type 2 diabetes. Variation in transferrin saturation, a biomarker of iron status, has been associated with mortality in patients with diabetes and in the general population. All these associations between iron and either clinical disease or pathological processes make it important to understand the causes of variation in iron status. Importantly, information on genetic causes of variation can be used in Mendelian randomization studies to test whether variation in iron status is a cause or a consequence of disease. We have used biomarkers of iron status (serum iron, transferrin, transferrin saturation and ferritin), which are commonly used clinically and readily measurable in thousands of individuals, and carried out a meta-analysis of human genome-wide association study (GWAS) data from 11 discovery and 8 replication cohorts. Our aims were to identify additional loci affecting markers of iron status in the general population and to relate the significant loci to gene expression information in order to identify the relevant genes. We also made an initial assessment of whether any such loci affect iron status in HFE C282Y homozygotes, who are at genetic risk of HFE-related iron overload (hereditary hemochromatosis type 1, OMIM #235200).
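As a sketch of the meta-analysis step, the snippet below applies standard fixed-effect inverse-variance weighting to combine a SNP's per-cohort effect estimates on an iron biomarker; the betas and standard errors are invented for illustration.

```python
# Minimal sketch of fixed-effect inverse-variance meta-analysis of a
# SNP's effect on an iron biomarker across cohorts. The per-cohort
# betas and standard errors below are illustrative, not study data.
import math

betas = [0.062, 0.048, 0.071]   # per-allele effect estimates (assumed)
ses = [0.015, 0.020, 0.018]     # their standard errors (assumed)

weights = [1.0 / se**2 for se in ses]
beta_meta = sum(w * b for w, b in zip(weights, betas)) / sum(weights)
se_meta = math.sqrt(1.0 / sum(weights))
z = beta_meta / se_meta

print(f"beta={beta_meta:.4f}, se={se_meta:.4f}, z={z:.2f}")
```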
Abstract:
STUDY OBJECTIVES: Sleep fragmentation (SF) is an integral feature of sleep apnea and other prevalent sleep disorders. Although the effect of repetitive arousals on cognitive performance is well documented, the effects of long-term SF on electroencephalography (EEG) and molecular markers of sleep homeostasis remain poorly investigated. To address this question, we developed a mouse model of chronic SF and characterized its effect on EEG spectral frequencies and on the expression of genes previously linked to sleep homeostasis, including clock genes, heat shock proteins, and plasticity-related genes. DESIGN: N/A. SETTING: Animal sleep research laboratory. PARTICIPANTS: Sixty-six adult C57BL/6J mice. INTERVENTIONS: Instrumental sleep disruption at a rate of 60/h for 14 days. MEASUREMENTS AND RESULTS: Locomotor activity and EEG were recorded during 14 days of SF followed by 2 days of recovery. Despite a dramatic number of arousals and decreased sleep bout duration, SF minimally reduced the total quantity of sleep and did not significantly alter its circadian distribution. Spectral analysis during SF revealed a homeostatic drive for slow wave activity (SWA; 1-4 Hz) as well as for other frequencies (4-40 Hz). Recordings during recovery revealed slow wave sleep consolidation and a transient rebound in SWA and in paradoxical sleep duration. The expression of the selected genes was not induced following chronic SF. CONCLUSIONS: Chronic SF increased sleep pressure, confirming that altered sleep quality with preserved quantity triggers core sleep homeostatic mechanisms. However, it did not induce the expression of genes induced by sleep loss, suggesting that these molecular pathways are not durably activated in chronic diseases involving SF.
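As an illustration of the spectral analysis mentioned above, the following sketch estimates SWA as 1-4 Hz band power from an EEG epoch using Welch's method; the synthetic signal and sampling rate are placeholders for real recordings.

```python
# Sketch of quantifying slow wave activity (SWA, 1-4 Hz) from an EEG
# trace with Welch's method. The signal below is synthetic: a 2 Hz
# oscillation plus noise standing in for a real recording.
import numpy as np
from scipy.signal import welch

fs = 256                       # sampling rate in Hz (assumed)
t = np.arange(0, 30, 1 / fs)   # one 30 s epoch
eeg = np.sin(2 * np.pi * 2 * t) + 0.5 * np.random.randn(t.size)

f, psd = welch(eeg, fs=fs, nperseg=4 * fs)
band = (f >= 1) & (f <= 4)
swa = psd[band].sum() * (f[1] - f[0])   # integrated 1-4 Hz power
print(f"SWA power: {swa:.3f}")
```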
Abstract:
(13)C magnetic resonance spectroscopy (MRS) combined with the administration of (13)C-labeled substrates uniquely allows metabolic fluxes to be measured in vivo in the brain of humans and rats. The extension to mouse models may provide a unique opportunity for investigating models of human diseases. In the present study, the short-echo-time (TE) full-sensitivity (1)H-[(13)C] MRS sequence combined with a high magnetic field (14.1 T) and infusion of [U-(13)C6] glucose was used to enhance the experimental sensitivity in vivo in the mouse brain, and the (13)C turnover curves of glutamate C4, glutamine C4, glutamate+glutamine C3, aspartate C2, lactate C3, alanine C3, and γ-aminobutyric acid C2, C3 and C4 were obtained. A one-compartment model was used to fit the (13)C turnover curves, yielding values of metabolic fluxes including the tricarboxylic acid (TCA) cycle flux VTCA (1.05 ± 0.04 μmol/g per minute), the exchange flux between 2-oxoglutarate and glutamate Vx (0.48 ± 0.02 μmol/g per minute), the glutamate-glutamine exchange rate V(gln) (0.20 ± 0.02 μmol/g per minute), the pyruvate dilution factor K(dil) (0.82 ± 0.01), and the ratio of the lactate conversion rate to the alanine conversion rate V(Lac)/V(Ala) (10 ± 2). This study opens the prospect of studying transgenic mouse models of brain pathologies.
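A hedged sketch of the curve-fitting step: in the simplest one-compartment description, label enrichment rises mono-exponentially to steady state, C(t) = C_ss(1 − e^(−kt)), and a flux can be obtained from the fitted rate constant and an assumed pool size. The data and pool size below are synthetic, not the study's measurements.

```python
# Fitting a synthetic (13)C turnover curve with a mono-exponential
# one-compartment model; the flux estimate k * pool is a simplified
# stand-in for the full metabolic modeling described above.
import numpy as np
from scipy.optimize import curve_fit

def one_compartment(t, c_ss, k):
    """Enrichment rising mono-exponentially to steady state c_ss."""
    return c_ss * (1.0 - np.exp(-k * t))

t = np.linspace(0, 120, 25)                  # minutes after infusion
true = one_compartment(t, 8.0, 0.05)
data = true + 0.3 * np.random.randn(t.size)  # noisy "Glu C4" curve

(c_ss, k), _ = curve_fit(one_compartment, t, data, p0=[5.0, 0.01])
pool = 8.0                                   # metabolite pool, µmol/g (assumed)
print(f"k = {k:.3f}/min, flux ~ {k * pool:.2f} µmol/g per minute")
```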
Abstract:
Mutations in GDAP1, which encodes a protein located in the mitochondrial outer membrane, cause axonal recessive (AR-CMT2), axonal dominant (CMT2K) and demyelinating recessive (CMT4A) forms of Charcot-Marie-Tooth (CMT) neuropathy. Recessive loss-of-function mutations in GDAP1 are associated with decreased mitochondrial fission activity, while dominant mutations result in impairment of mitochondrial fusion with increased production of reactive oxygen species and susceptibility to apoptotic stimuli. GDAP1 silencing in vitro reduces Ca2+ inflow through store-operated Ca2+ entry (SOCE) upon mobilization of endoplasmic reticulum (ER) Ca2+, likely in association with an abnormal distribution of the mitochondrial network. To investigate the functional consequences of the lack of GDAP1 in vivo, we generated a Gdap1 knockout mouse. The affected animals presented abnormal motor behavior starting at the age of 3 months. Electrophysiological and biochemical studies confirmed the axonal nature of the neuropathy, whereas histopathological studies over time showed progressive loss of motor neurons (MNs) in the anterior horn of the spinal cord and defects in neuromuscular junctions. Analyses of cultured embryonic MNs and adult dorsal root ganglia neurons from affected animals demonstrated large and defective mitochondria, changes in the ER cisternae, reduced acetylation of cytoskeletal α-tubulin and an increased number of autophagic vesicles. Importantly, MNs showed reduced cytosolic calcium and a reduced SOCE response. The development and characterization of this Gdap1 neuropathy mouse model thus revealed that some of the pathophysiological changes present in the axonal recessive form of GDAP1-related CMT might be the consequence of changes in mitochondrial network biology and in mitochondria-endoplasmic reticulum interaction, leading to abnormalities in calcium homeostasis.
Abstract:
The learning portfolio represents a meeting point for the major themes that have concerned Didactics since the 1990s: the formative character of assessment, the issue of competencies, metacognition and students' critical thinking, the role played by ICTs, and the conception of collaborative learning planned over the long term. This review article examines how these themes have taken shape in the learning portfolio and how they have determined its evolution.
Abstract:
In recent years, increased attention has been paid to the composition of feeding fats. In the aftermath of the BSE crisis, all animal by-products used in animal nutrition have been subjected to close scrutiny. Regulation requires that the material belong to the category of animal by-products fit for human consumption. This implies the use of reliable techniques to ensure the safety of products. The feasibility of using rapid, non-destructive methods to screen feedstuffs for animal fats has been studied. Fourier transform Raman spectroscopy was chosen for its ability to give detailed structural information. Data were treated using chemometric methods such as PCA and PLS-DA, which made it possible to separate the different classes of animal fats well. The same methodology was applied to fats from various types of feedstock and production technology processes. The PLS-DA model for discriminating animal fats from the other categories shows a sensitivity of 0.958 and a specificity of 0.914. These results encourage the use of FT-Raman spectroscopy to discriminate animal fats.
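For readers unfamiliar with PLS-DA, the sketch below shows the usual construction: PLS regression against a binary class label, thresholded at 0.5, followed by sensitivity and specificity computation. The "spectra" are random placeholders rather than FT-Raman data.

```python
# Minimal PLS-DA sketch: PLS regression on a binary label (animal fat
# vs other), thresholded at 0.5. The spectra are random placeholders
# with an injected class difference, not FT-Raman measurements.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 500))        # 120 "spectra", 500 wavenumbers
y = (rng.random(120) > 0.5).astype(float)
X[y == 1] += 0.3                       # weak, artificial class signal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = PLSRegression(n_components=5).fit(X_tr, y_tr)
pred = (model.predict(X_te).ravel() > 0.5).astype(float)

tp = np.sum((pred == 1) & (y_te == 1))
tn = np.sum((pred == 0) & (y_te == 0))
sens = tp / np.sum(y_te == 1)
spec = tn / np.sum(y_te == 0)
print(f"sensitivity={sens:.3f}, specificity={spec:.3f}")
```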
Abstract:
The development of dysfunctional or exhausted T cells is characteristic of immune responses to chronic viral infections and cancer. Exhausted T cells are defined by reduced effector function, sustained upregulation of multiple inhibitory receptors, an altered transcriptional program and perturbations of normal memory development and homeostasis. This review focuses on (a) illustrating the milestone discoveries that led to our present understanding of T cell exhaustion, (b) summarizing recent developments in the field, and (c) identifying new challenges for translational research. Exhausted T cells are now recognized as key therapeutic targets in human infections and cancer. Much of our knowledge of the clinically relevant process of exhaustion derives from studies in the mouse model of lymphocytic choriomeningitis virus (LCMV) infection. Studies using this model have formed the foundation of our understanding of human T cell memory and exhaustion. We will use this example to discuss recent advances in our understanding of T cell exhaustion, illustrate the value of integrated mouse and human studies, and emphasize the benefits of bi-directional mouse-to-human and human-to-mouse research approaches.
Abstract:
AIMS: Notch1 signalling in the heart is mainly activated via expression of Jagged1 on the surface of cardiomyocytes. Notch controls cardiomyocyte proliferation and differentiation in the developing heart and regulates cardiac remodelling in the stressed adult heart. Besides canonical Notch receptor activation in signal-receiving cells, Notch ligands can also activate Notch receptor-independent responses in signal-sending cells via release of their intracellular domain. We therefore evaluated the importance of Jagged1 (J1) intracellular domain (ICD)-mediated pathways in the postnatal heart. METHODS AND RESULTS: In cardiomyocytes, Jagged1 releases J1ICD, which then translocates into the nucleus and down-regulates Notch transcriptional activity. To study the importance of J1ICD in cardiac homeostasis, we generated transgenic mice expressing a tamoxifen-inducible form of J1ICD specifically in cardiomyocytes. Using this model, we demonstrate that J1ICD-mediated Notch inhibition diminishes proliferation in the neonatal cardiomyocyte population and promotes maturation. In the neonatal heart, a response via Wnt and Akt pathway activation is elicited as an attempt to compensate for the deficit in cardiomyocyte number resulting from J1ICD activation. In the stressed adult heart, J1ICD activation results in a dramatic reduction in the number of Notch-signalling cardiomyocytes, blunts the hypertrophic response, and reduces the number of apoptotic cardiomyocytes. Consistently, this occurs concomitantly with a significant down-regulation of the phosphorylation of the Akt effectors ribosomal S6 protein (S6) and eukaryotic initiation factor 4E binding protein 1 (4EBP1), which control protein synthesis. CONCLUSIONS: Altogether, these data demonstrate the importance of J1ICD in the modulation of physiological and pathological hypertrophy, and reveal the existence of a novel pathway regulating cardiac homeostasis.
Abstract:
Obesity is associated with chronic food intake disorders and binge eating. Food intake relies on the interaction between homeostatic regulation and hedonic signals, among which olfaction is a major sensory determinant. However, its potential modulation at the peripheral level by the chronic energy imbalance associated with the obese status remains a matter of debate. We further investigated olfactory function in a rodent model relevant to the situation encountered in obese humans, where genetic susceptibility is juxtaposed on chronic eating disorders. Using several olfactory-driven tests, we compared the behaviors of obesity-prone Sprague-Dawley rats (OP) fed a high-fat/high-sugar diet with those of obesity-resistant rats fed normal chow. In OP rats, we found 1) a decreased odor threshold, but 2) poor olfactory performance, associated with learning/memory deficits, 3) a decreased influence of fasting, and 4) impaired insulin control of food-seeking behavior. Alongside these behavioral modifications, we found a modulation of metabolism-related factors implicated in 1) electrical olfactory signal regulation (insulin receptor), 2) cellular dynamics (glucocorticoid receptors, pro- and antiapoptotic factors), and 3) homeostasis of the olfactory mucosa and bulb (monocarboxylate and glucose transporters). Such impairments might contribute to the perturbed daily food intake pattern that we observed in obese animals.
Abstract:
Characterizing geological features and structures in three dimensions over inaccessible rock cliffs is needed to assess natural hazards such as rockfalls and rockslides, and also to perform investigations aimed at mapping geological contacts and building stratigraphic and fold models. Indeed, detailed 3D data such as LiDAR point clouds make it possible to study accurately the hazard processes and the structure of geological features, particularly in vertical and overhanging rock slopes. Thus, 3D geological models have great potential for application to a wide range of geological investigations, both in research and in applied geology projects such as mines, tunnels and reservoirs. Recent developments in ground-based remote sensing techniques (LiDAR, photogrammetry and multispectral/hyperspectral imaging) are revolutionizing the acquisition of morphological and geological information. Consequently, there is great potential for improving the modeling of geological bodies, as well as of failure mechanisms and stability conditions, by integrating detailed remotely sensed data. During the past ten years, several large rockfall events have occurred along important transportation corridors where millions of people travel every year (Switzerland: Gotthard motorway and railway; Canada: Sea to Sky Highway between Vancouver and Whistler). These events show that there is still a lack of knowledge concerning the detection of potential rockfalls, leaving mountain residential settlements and roads exposed to high risk. It is necessary to understand the main factors that destabilize rocky outcrops, even where inventories are lacking and no clear morphological evidence of rockfall activity is observed. In order to improve the forecasting of potential future landslides, it is crucial to understand the evolution of rock slope stability. Defining the areas theoretically most prone to rockfalls can be particularly useful for simulating trajectory profiles and for generating hazard maps, which are the basis for land use planning in mountainous regions. The most important questions to address in order to assess rockfall hazard are: Where are the most probable sources of future rockfalls located? What are the frequencies of occurrence of these rockfalls? I characterized the fracturing patterns in the field and with LiDAR point clouds. I then developed a model to compute the failure mechanisms on terrestrial point clouds in order to assess rockfall susceptibility at the cliff scale. Similar procedures were already available to evaluate rockfall susceptibility from aerial digital elevation models. This new model makes it possible to detect the most susceptible rockfall sources with unprecedented detail in vertical and overhanging areas. The most probable rockfall source areas computed for the granitic cliffs of Yosemite Valley and the Mont-Blanc massif were then compared with inventoried rockfall events to validate the calculation methods. Yosemite Valley was chosen as a test area because it has particularly intense rockfall activity (about one rockfall every week), which leads to a high rockfall hazard. The west face of the Dru was also chosen for its relevant rockfall activity, and especially because it was affected by some of the largest rockfalls that occurred in the Alps during the last 10 years. Moreover, both areas were suitable because of their huge vertical and overhanging cliffs, which are difficult to study with classical methods.
Limit equilibrium models were applied to several case studies to evaluate the effects of different parameters on the stability of rock slope areas. The impact of the degradation of rock bridges on the stability of large compartments in the west face of the Dru was assessed using finite element modeling. In particular, I conducted a back-analysis of the large rockfall event of 2005 (265,000 m3), which removed the entire south-west pillar, by integrating field observations of joint conditions, the characteristics of the fracturing pattern and the results of geomechanical tests on the intact rock. These analyses improved our understanding of the factors that influence the stability of rock compartments and were used to define the most probable future rockfall volumes at the Dru. Terrestrial laser scanning point clouds were also successfully employed to perform geological mapping in 3D, using the intensity of the backscattered signal. Another technique for obtaining vertical geological maps is to combine a triangulated TLS mesh with 2D geological maps. At El Capitan (Yosemite Valley) we built a georeferenced vertical map of the main plutonic rocks, which was used to investigate the reasons for the preferential rockwall retreat rate. Additional efforts to characterize the erosion rate were made at Monte Generoso (Ticino, southern Switzerland), where I attempted to improve the estimation of long-term erosion by also taking into account the volumes of unstable rock compartments. Finally, the following points summarize the main outputs of my research: The new model for computing failure mechanisms and rockfall susceptibility with 3D point clouds makes it possible to define accurately the most probable rockfall source areas at the cliff scale. The analysis of the rock bridges at the Dru shows the potential of integrating detailed measurements of the fractures in geomechanical models of rock mass stability. The correction of the LiDAR intensity signal makes it possible to classify a point cloud according to rock type and then to use this information to model complex geological structures. The integration of these results, on rock mass fracturing and composition, with existing methods can improve rockfall hazard assessments and enhance the interpretation of the evolution of steep rock slopes.
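As a hedged sketch of the kind of kinematic test a point-cloud susceptibility model can apply, the snippet below derives the local slope dip and dip direction from a point normal and checks a simple planar-sliding criterion against one joint set. The friction angle, joint set orientation and angular tolerance are assumed values, not those used in the thesis.

```python
# Kinematic test for planar sliding at one point of a terrestrial
# point cloud: compare the local slope orientation (from the point's
# normal, x = east, y = north, z = up) with a joint set's dip and dip
# direction. All parameter values are illustrative assumptions.
import numpy as np

phi = 30.0                               # friction angle, degrees (assumed)
joint_dip, joint_dipdir = 55.0, 40.0     # one joint set (assumed)

def dip_dipdir(normal):
    """Dip and dip direction (degrees) of a plane from its normal."""
    n = normal / np.linalg.norm(normal)
    dip = np.degrees(np.arccos(abs(n[2])))
    dipdir = np.degrees(np.arctan2(n[0], n[1])) % 360.0
    return dip, dipdir

# The normal would come from the LiDAR cloud; one example point here.
slope_dip, slope_dipdir = dip_dipdir(np.array([0.7, 0.7, 0.2]))

# Planar sliding: the joint daylights in the face (dips less steeply,
# in roughly the same direction) and dips more steeply than phi.
daylights = (joint_dip < slope_dip and
             abs((joint_dipdir - slope_dipdir + 180.0) % 360.0 - 180.0) < 20.0)
unstable = daylights and joint_dip > phi
print(f"slope {slope_dip:.1f}/{slope_dipdir:.1f}, sliding possible: {unstable}")
```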
Abstract:
We have investigated the behavior of bistable cells made up of four quantum dots and occupied by two electrons, in the presence of realistic confinement potentials produced by depletion gates on top of a GaAs/AlGaAs heterostructure. Such a cell represents the basic building block for logic architectures based on the concept of quantum cellular automata (QCA) and of ground state computation, which have been proposed as an alternative to traditional transistor-based logic circuits. We have focused on the robustness of the operation of such cells with respect to asymmetries derived from fabrication tolerances. We have developed a two-dimensional model for the calculation of the electron density in a driven cell in response to the polarization state of a driver cell. Our method is based on the one-shot configuration-interaction technique, adapted from molecular chemistry. From the results of our simulations, we conclude that an implementation of QCA logic based on simple "hole arrays" is not feasible, because of the extreme sensitivity to fabrication tolerances. As an alternative, we propose cells defined by multiple gates, where geometrical asymmetries can be compensated for by adjusting the bias voltages. Even though not immediately applicable to the implementation of logic gates and not suitable for large-scale integration, the proposed cell layout should allow an experimental demonstration of a chain of QCA cells.
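For reference, the cell polarization conventionally used to describe the two logic states of such a bistable cell is P = (ρ1 + ρ3 − ρ2 − ρ4)/(ρ1 + ρ2 + ρ3 + ρ4), with dots numbered around the cell. The sketch below evaluates it for illustrative dot occupations, not densities computed by the configuration-interaction model.

```python
# Standard QCA cell polarization from the charge on the four dots,
# numbered around the cell so that P = +1 and P = -1 encode the two
# logic states. The occupations below are illustrative placeholders.

def polarization(rho):
    """rho: charge density on dots 1..4 (numbered around the cell)."""
    r1, r2, r3, r4 = rho
    return (r1 + r3 - r2 - r4) / (r1 + r2 + r3 + r4)

print(polarization([0.95, 0.05, 0.95, 0.05]))   # ~ +0.9, one state
print(polarization([0.10, 0.90, 0.10, 0.90]))   # ~ -0.8, the other
```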
Abstract:
Evaluation of image quality (IQ) in computed tomography (CT) is important to ensure that diagnostic questions are correctly answered, whilst keeping radiation dose to the patient as low as is reasonably possible. The assessment of individual aspects of IQ is already a key component of routine quality control of medical x-ray devices. These values, together with standard dose indicators, can be combined into 'figures of merit' (FOM) to characterise the dose efficiency of CT scanners operating in certain modes. The demand for clinically relevant IQ characterisation has naturally increased with the development of CT technology (detector efficiency, image reconstruction and processing), resulting in the adaptation and evolution of assessment methods. The purpose of this review is to present the spectrum of methods that have been used to characterise image quality in CT: from objective measurements of physical parameters to clinically task-based approaches (i.e. the model observer (MO) approach), including the pure human observer approach. When combined with a dose indicator, a generalised dose efficiency index can be explored in a framework of system and patient dose optimisation. We focus on the IQ methodologies required for dealing with standard reconstruction, but also with iterative reconstruction algorithms. With this concept, the previously used FOMs are presented together with a proposal to update them in order to keep them relevant and in step with technological progress. The MO approach, which objectively assesses IQ for clinically relevant tasks, best reflects radiologist performance and is therefore the most promising method for the clinical environment.
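One simple example of such a figure of merit, shown below as a sketch, is a dose efficiency defined as task detectability squared per unit dose, FOM = d'^2 / CTDIvol; the d' and dose values are placeholders, not measurements from any particular scanner.

```python
# Sketch of a dose-efficiency figure of merit: task detectability
# squared per unit dose (FOM = d'^2 / CTDI_vol). Values are invented
# to illustrate a comparison between two scan protocols.

def figure_of_merit(d_prime, ctdi_vol):
    """Dose efficiency: detectability squared per unit dose (mGy)."""
    return d_prime ** 2 / ctdi_vol

print(figure_of_merit(d_prime=3.2, ctdi_vol=10.0))  # protocol A
print(figure_of_merit(d_prime=2.9, ctdi_vol=6.5))   # protocol B
```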
Abstract:
This chapter presents possible uses and examples of Monte Carlo methods for the evaluation of uncertainties in the field of radionuclide metrology. The method is already well documented in GUM Supplement 1, but here we present a more restrictive approach, in which the quantities of interest calculated by the Monte Carlo method are estimators of the expectation and standard deviation of the measurand, and the Monte Carlo method is used to propagate the uncertainties of the input parameters through the measurement model. This approach is illustrated by an example of the activity calibration of a 103Pd source by liquid scintillation counting and by the calculation of a linear regression on experimental data points. An electronic supplement presents some algorithms that may be used to generate random numbers with various statistical distributions, for the implementation of this Monte Carlo calculation method.
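A minimal sketch of this propagation scheme, under an assumed measurement model A = C/(ε·m) with invented input distributions (not the 103Pd calibration data): draw the inputs from their distributions, evaluate the model for each draw, and take the sample mean and standard deviation as the estimators described above.

```python
# Monte Carlo propagation of input uncertainties through a simple
# measurement model, in the spirit of GUM Supplement 1. The model
# A = C / (eps * m) and all distributions are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
counts = rng.normal(52_000, 300, n)    # counting rate, s^-1 (assumed)
eff = rng.uniform(0.72, 0.76, n)       # detection efficiency (assumed)
mass = rng.normal(0.1000, 0.0002, n)   # aliquot mass, g (assumed)

activity = counts / (eff * mass)       # measurement model
# Sample mean and standard deviation as estimators of the measurand
# and its standard uncertainty.
print(f"A = {activity.mean():.0f} +/- {activity.std(ddof=1):.0f} Bq/g")
```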