932 results for Random field model


Relevance: 30.00%

Abstract:

This research report examines new operating models for reducing fixed costs and converting them into variable costs in the paper industry. It presents two cases: a new operating model for material logistics in maintenance, and an examination of forklift truck fleet outsourcing solutions. Conventional material logistics in maintenance is described and some problems related to the conventional operation are identified. A new operating model that solves some of these problems is then presented, including descriptions of procurement and service contracts and of the sources of added value. Forklift truck fleet outsourcing solutions are examined by outlining the responsibilities of the host company and the service provider both before and after outsourcing. The customer buys outsourcing services in order to improve its investment productivity, and the mechanism by which these services affect the customer company's investment productivity is explained.

Relevance: 30.00%

Abstract:

The main objective of this work is to show how the choice of the temporal dimension and of the spatial structure of the population influences an artificial evolutionary process. In the field of Artificial Evolution we can observe a common trend of synchronously evolving panmictic populations, i.e., populations in which any individual can be recombined with any other individual. Already in the '90s, the works of Spiessens and Manderick, Sarma and De Jong, and Gorges-Schleuter pointed out that, if a population is structured according to a mono- or bi-dimensional regular lattice, the evolutionary process shows a different dynamic with respect to the panmictic case. In particular, Sarma and De Jong studied the selection pressure (i.e., the diffusion of a best individual when the only active operator is selection) induced by a regular bi-dimensional structure of the population, proposing a logistic model of the selection pressure curves. This model supposes that the diffusion of a best individual in a population follows an exponential law. We show that such a model is inadequate to describe the process, since the growth speed must be quadratic or sub-quadratic in the case of a bi-dimensional regular lattice. New linear and sub-quadratic models are proposed for modeling the selection pressure curves in, respectively, mono- and bi-dimensional regular structures. These models are extended to describe the process when asynchronous evolutions are employed. Different dynamics of the populations imply different search strategies of the resulting algorithm when the evolutionary process is used to solve optimisation problems. A benchmark of both discrete and continuous test problems is used to study the search characteristics of the different topologies and updates of the populations. In the last decade, the pioneering studies of Watts and Strogatz have shown that most real networks, in the biological and sociological worlds as well as in man-made structures, have mathematical properties that set them apart from both regular and random structures. In particular, they introduced the concept of small-world graphs and showed that this new family of structures has interesting computing capabilities. Populations structured according to these new topologies are proposed, and their evolutionary dynamics are studied and modeled. We also propose asynchronous evolutions for these structures, and the resulting evolutionary behaviours are investigated. Many man-made networks have grown, and are still growing, incrementally, and explanations have been proposed for their actual shape, such as Albert and Barabási's preferential attachment growth rule. However, many actual networks seem to have undergone some kind of Darwinian variation and selection. Thus, how these networks might have come to be selected is an interesting yet unanswered question. In the last part of this work, we show how a simple evolutionary algorithm can enable the emergence of these kinds of structures for two prototypical problems of the automata-networks world, the majority classification and the synchronisation problems.
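
A minimal sketch of the kind of takeover-time experiment discussed above (not the thesis code): the spread of a single best individual under selection only, in a panmictic population versus a synchronously updated 2-D toroidal lattice with local binary tournaments. Population sizes, neighbourhood and tournament choices are illustrative assumptions.

```python
import random

def takeover_curve_panmictic(n=400, steps=40):
    """Fraction of copies of the best individual when any individual can be
    replaced by the winner of a global binary tournament (selection only)."""
    pop = [0] * n
    pop[random.randrange(n)] = 1                 # seed one copy of the best
    curve = []
    for _ in range(steps):
        pop = [max(random.choice(pop), random.choice(pop)) for _ in range(n)]
        curve.append(sum(pop) / n)
    return curve

def takeover_curve_lattice(side=20, steps=40):
    """Same experiment on a side x side toroidal grid with von Neumann
    neighbourhoods and synchronous updates: growth is polynomial, not exponential."""
    grid = [[0] * side for _ in range(side)]
    grid[side // 2][side // 2] = 1
    curve = []
    for _ in range(steps):
        new = [[0] * side for _ in range(side)]
        for i in range(side):
            for j in range(side):
                neigh = [grid[i][j],
                         grid[(i - 1) % side][j], grid[(i + 1) % side][j],
                         grid[i][(j - 1) % side], grid[i][(j + 1) % side]]
                a, b = random.sample(neigh, 2)   # tournament restricted to the neighbourhood
                new[i][j] = max(a, b)
        grid = new
        curve.append(sum(map(sum, grid)) / side ** 2)
    return curve

if __name__ == "__main__":
    random.seed(1)
    print("panmictic:", [round(x, 2) for x in takeover_curve_panmictic()[:10]])
    print("lattice:  ", [round(x, 2) for x in takeover_curve_lattice()[:10]])
```

Comparing the two printed curves illustrates the point made above: the panmictic curve saturates almost immediately, while the lattice curve grows much more slowly, consistent with quadratic rather than exponential spread.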

Relevance: 30.00%

Abstract:

The subject of this study is the historical development of auditing in Finland over a period of somewhat more than a hundred years. The aim is to analyse the development of the auditing of limited companies and to combine the century's developments into an overall picture of auditing. The period under study begins at the end of the 19th century and ends at the turn of the 21st century. The study examines the Finnish auditing institution, which is divided into three parts: the norms regulating auditing (norms), the auditor system (actors) and the content of auditing (tasks). It seeks answers to the questions: what was audited, when was it audited, who audited, and how was auditing carried out in different eras? The study is based on historical source material consisting of the legislation of the research period, legislative preparatory documents, instructions and decisions of the authorities, recommendations of professional organisations, articles in professional journals, and the professional literature of accounting and auditing. Methodologically, the study is theoretical, qualitative historical research in which the source material is treated with source criticism and partly by means of content analysis. Within the norms regulating auditing, the central laws have been the Companies Act, the Accounting Act and the Auditing Act. Statutory auditing began with the Companies Act of 1895, which was reformed in 1978 and again in 1997. Accounting legislation has been reformed five times: in 1925 and 1928, 1945, 1973, 1993 and 1997. The Auditing Act of 1994 brought together the auditing provisions previously scattered across several laws. Other norms have included the EC directives, the instructions of the Finnish Accounting Standards Board (KILA), the recommendations of the KHT Association, the regulations of the Central Chamber of Commerce and, most recently, the IAS and ISA standards. A professional auditor system was established in Finland thanks to the merchants' assemblies. The Finnish Auditors' Association (Suomen Tilintarkastajainyhdistys) began operating as a professional auditing body in 1911, and its work was continued by the KHT Association from 1925 onwards. The authorisation of auditors was transferred to the Central Chamber of Commerce in 1924. HTM auditors have been active in the field since 1950. The chamber of commerce organisation has supervised the authorised auditors throughout the era of professional auditing, while state supervision is carried out by VALA (the State Auditing Board). Throughout the period under study, lay auditors have also acted as auditors of limited companies alongside the authorised auditors. According to the Companies Act of 1895, the tasks of auditing comprised the audit of the administration and the accounts. Later, the content was specified as the audit of the financial statements, the bookkeeping and the administration. At the beginning of the research period, auditing consisted of manually ticking off every voucher and searching for errors. Later, the audit shifted to spot checks, and in the early twentieth century there was a move from a single annual audit to continuous supervisory auditing. Evidence of documentation and working papers appears from the 1930s onwards. Computer-assisted auditing became common in the 1970s and 1980s, when attention also began to be paid to risk analyses. The importance of auditing the administration has grown throughout. At the beginning of the research period, audit reports were free-form, expressive and descriptive in content. The report became public with the Companies Act of 1978, and later the KHT Association's standard report templates harmonised and simplified reporting.
Based on the study, the history of auditing can be divided into three periods: the construction of the auditing institution (1895-1950), its consolidation (1951-1985), and internationalisation and public accountability (from 1986 onwards). In every decade of the research period there was continuous discussion about the sufficiency of auditors, the difficulty of entering the profession and of the examinations, the level of auditors' professional skill, the content of the audit of administration, the audit report, and the position of lay auditors. The central topics of the 1990s were consulting, independence, the expectation gap, and the level and quality control of auditing. Analysing the changes in auditing over somewhat more than a hundred years, it can be concluded that the core tasks of auditing have hardly changed over the decades. The audit of a limited company is still a compliance audit, and its purpose is still the examination of the bookkeeping, the financial statements and the administration. Auditors safeguard the interests of shareholders and report to them on the results of the audit. The external world of auditing, by contrast, has changed over the decades: internationalisation has increased the number of regulations, expectations and requirements are greater today, new technology enables rapid flows of information, and supervision has increased towards the present day. Today an auditor's competence rests on knowledge of information technology, information systems and the client company's line of business. From the stewardship requirement of the law of more than a hundred years ago, we have arrived in a virtual-time world!

Relevance: 30.00%

Abstract:

Ruin occurs at the first time the surplus of a company or an institution becomes negative. In the Omega model, it is assumed that even with a negative surplus the company can do business as usual until bankruptcy occurs. The probability of bankruptcy at a given time depends only on the value of the negative surplus at that time. Under the assumption of Brownian motion for the surplus, the expected discounted value of a penalty at bankruptcy is determined, and hence the probability of bankruptcy. There is an intrinsic relation between the probability of no bankruptcy and an exposure random variable. In special cases, the distribution of the total time the Brownian motion spends below zero is found, and the Laplace transform of the integral of the negative part of the Brownian motion is expressed in terms of the Airy function of the first kind.
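
A minimal Monte Carlo sketch of the Omega model's central idea, not of the paper's closed-form results: the surplus follows a Brownian motion with drift, and while it is negative, bankruptcy occurs at a rate given by a bankruptcy-rate function of the current (negative) surplus. The linear rate function and all parameter values below are purely illustrative assumptions.

```python
import math, random

def bankruptcy_probability(x0=2.0, mu=0.1, sigma=1.0,
                           omega=lambda x: 0.5 * (-x),   # illustrative bankruptcy rate for x < 0
                           dt=0.01, horizon=50.0, n_paths=2000):
    """Estimate P(bankruptcy before `horizon`) in an Omega-type model:
    the surplus X follows dX = mu dt + sigma dW; while X < 0, bankruptcy
    occurs at rate omega(X). Ruin (X < 0) alone does not stop the business."""
    bankrupt = 0
    for _ in range(n_paths):
        x, t = x0, 0.0
        while t < horizon:
            x += mu * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
            t += dt
            if x < 0 and random.random() < omega(x) * dt:  # bankruptcy only while negative
                bankrupt += 1
                break
    return bankrupt / n_paths

if __name__ == "__main__":
    random.seed(0)
    print("estimated bankruptcy probability:", bankruptcy_probability())
```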

Relevance: 30.00%

Abstract:

The Transtheoretical Model (TTM) of behaviour change is currently one of the most promising models for understanding and promoting behaviour change related to the acquisition of healthy living habits. By means of a bibliographic search of papers adopting a TTM approach to obesity, the present bibliometric study evaluates the scientific output in this field. The results reveal a growing interest in applying this model both to the treatment of obesity and to its prevention. In addition, author and journal outputs fit the models proposed by Lotka and Bradford, respectively.
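
The bibliometric regularity mentioned last can be checked with a few lines of code. A hedged sketch, assuming a simple author-productivity count: Lotka's law predicts that the number of authors with n publications decays roughly as C/n^a with a close to 2, and the exponent can be estimated by a log-log least-squares fit. The counts below are placeholders, not the study's data.

```python
import math

def lotka_exponent(authors_per_count):
    """Least-squares fit of log f(n) = log C - a * log n.
    `authors_per_count` maps n (papers per author) -> number of authors."""
    xs = [math.log(n) for n in authors_per_count]
    ys = [math.log(f) for f in authors_per_count.values()]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return -slope                       # Lotka exponent a

if __name__ == "__main__":
    # Placeholder productivity distribution: authors with 1, 2, ... papers
    counts = {1: 120, 2: 31, 3: 14, 4: 8, 5: 5}
    print("estimated Lotka exponent:", round(lotka_exponent(counts), 2))
```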

Relevance: 30.00%

Abstract:

We analyze the neutron skin thickness in finite nuclei with the droplet model and effective nuclear interactions. In the droplet model, the ratio of the bulk symmetry energy J to the so-called surface stiffness coefficient Q plays a prominent role in driving the size of neutron skins. We present a correlation between the density derivative of the nuclear symmetry energy at saturation and the J/Q ratio. We emphasize the role of the surface widths of the neutron and proton density profiles in the calculation of the neutron skin thickness when realistic mean-field effective interactions are used. Next, taking as experimental baseline the neutron skin sizes measured in 26 antiprotonic atoms along the mass table, we explore the constraints that neutron skins place on the value of the J/Q ratio. The results favor a relatively soft symmetry energy at subsaturation densities. Our predictions are compared with recent constraints derived from other experimental observables. Though the various extractions predict different ranges of values, one finds a narrow window, L ≈ 45-75 MeV, for the coefficient L that characterizes the density derivative of the symmetry energy and is compatible with all the different empirical indications.

Relevance: 30.00%

Abstract:

Many species are able to learn to associate behaviours with rewards, as this gives fitness advantages in changing environments. Social interactions between population members may, however, require more cognitive abilities than simple trial-and-error learning, in particular the capacity to make accurate hypotheses about the material payoff consequences of alternative action combinations. It is unclear in this context whether natural selection necessarily favours individuals that use information about the payoffs associated with non-tried actions (hypothetical payoffs), as opposed to simple reinforcement of realized payoffs. Here, we develop an evolutionary model in which individuals are genetically determined to use either trial-and-error learning or learning based on hypothetical reinforcements, and ask which learning rule is evolutionarily stable under pairwise symmetric two-action stochastic repeated games played over an individual's lifetime. Using stochastic approximation theory and simulations, we analyse the learning dynamics on the behavioural timescale and derive conditions under which trial-and-error learning outcompetes hypothetical reinforcement learning on the evolutionary timescale. This occurs in particular under repeated cooperative interactions with the same partner. By contrast, we find that hypothetical reinforcement learners tend to be favoured under random interactions, but stable polymorphisms can also arise in which trial-and-error learners are maintained at low frequency. We conclude that specific game structures can select for trial-and-error learning even in the absence of costs of cognition, which illustrates that cost-free increased cognition can be counterselected under social interactions.
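
A minimal sketch, not the paper's model, contrasting the two learning rules in a repeated two-action game: a trial-and-error learner reinforces only the realized payoff of the action it played, whereas a hypothetical-reinforcement learner also updates the action it did not play with the payoff it would have received. The payoff matrix, learning rate and softmax choice rule are illustrative assumptions.

```python
import math, random

# Illustrative 2x2 payoff matrix for the row player (a cooperation-like game)
PAYOFF = {("C", "C"): 3.0, ("C", "D"): 0.0, ("D", "C"): 4.0, ("D", "D"): 1.0}
ACTIONS = ("C", "D")

def choose(values, beta=2.0):
    """Logit (softmax) choice rule over current action values."""
    weights = [math.exp(beta * values[a]) for a in ACTIONS]
    r, acc = random.random() * sum(weights), 0.0
    for a, w in zip(ACTIONS, weights):
        acc += w
        if r <= acc:
            return a
    return ACTIONS[-1]

def play(rule_1, rule_2, rounds=500, lr=0.1):
    """Two learners repeatedly play the game; returns player 1's mean payoff.
    Each rule is 'trial_error' or 'hypothetical'."""
    v1 = {a: 0.0 for a in ACTIONS}
    v2 = {a: 0.0 for a in ACTIONS}
    total = 0.0
    for _ in range(rounds):
        a1, a2 = choose(v1), choose(v2)
        total += PAYOFF[(a1, a2)]
        for vals, own, opp, rule in ((v1, a1, a2, rule_1), (v2, a2, a1, rule_2)):
            vals[own] += lr * (PAYOFF[(own, opp)] - vals[own])            # realized payoff
            if rule == "hypothetical":                                    # counterfactual update
                other = "C" if own == "D" else "D"
                vals[other] += lr * (PAYOFF[(other, opp)] - vals[other])
    return total / rounds

if __name__ == "__main__":
    random.seed(1)
    print("trial-and-error pair:", round(play("trial_error", "trial_error"), 2))
    print("hypothetical pair:   ", round(play("hypothetical", "hypothetical"), 2))
```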

Relevance: 30.00%

Abstract:

Probabilistic inversion methods based on Markov chain Monte Carlo (MCMC) simulation are well suited to quantify parameter and model uncertainty of nonlinear inverse problems. Yet, application of such methods to CPU-intensive forward models can be a daunting task, particularly if the parameter space is high dimensional. Here, we present a 2-D pixel-based MCMC inversion of plane-wave electromagnetic (EM) data. Using synthetic data, we investigate how model parameter uncertainty depends on model structure constraints using different norms of the likelihood function and the model constraints, and study the added benefits of joint inversion of EM and electrical resistivity tomography (ERT) data. Our results demonstrate that model structure constraints are necessary to stabilize the MCMC inversion results of a highly discretized model. These constraints decrease model parameter uncertainty and facilitate model interpretation. A drawback is that these constraints may lead to posterior distributions that do not fully include the true underlying model, because some of its features exhibit a low sensitivity to the EM data, and hence are difficult to resolve. This problem can be partly mitigated if the plane-wave EM data is augmented with ERT observations. The hierarchical Bayesian inverse formulation introduced and used herein is able to successfully recover the probabilistic properties of the measurement data errors and a model regularization weight. Application of the proposed inversion methodology to field data from an aquifer demonstrates that the posterior mean model realization is very similar to that derived from a deterministic inversion with similar model constraints.
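
A schematic Metropolis-Hastings sketch of the kind of constrained pixel-based inversion described above, reduced to a toy 1-D problem: a Gaussian data likelihood is combined with a smoothness (model-structure) prior whose fixed weight stands in for the regularization weight, which the paper actually treats hierarchically. The forward operator and all parameters are placeholders, not the plane-wave EM or ERT physics.

```python
import math, random

def forward(model):
    """Placeholder linear forward operator standing in for the EM/ERT physics."""
    return [sum(model[: k + 1]) for k in range(len(model))]

def log_posterior(model, data, sigma_d=0.1, smooth_weight=20.0):
    pred = forward(model)
    misfit = sum((d - p) ** 2 for d, p in zip(data, pred)) / (2 * sigma_d ** 2)
    roughness = sum((model[k + 1] - model[k]) ** 2 for k in range(len(model) - 1))
    return -misfit - smooth_weight * roughness       # likelihood + structure prior

def metropolis(data, n_pix=10, n_iter=20_000, step=0.05):
    model = [0.0] * n_pix
    current = log_posterior(model, data)
    samples = []
    for it in range(n_iter):
        k = random.randrange(n_pix)                  # perturb one pixel at a time
        proposal = list(model)
        proposal[k] += random.gauss(0.0, step)
        cand = log_posterior(proposal, data)
        if math.log(random.random() + 1e-300) < cand - current:
            model, current = proposal, cand
        if it % 100 == 0:
            samples.append(list(model))
    return samples

if __name__ == "__main__":
    random.seed(0)
    true_model = [0.1] * 5 + [0.3] * 5
    data = [y + random.gauss(0.0, 0.1) for y in forward(true_model)]
    post = metropolis(data)
    mean = [sum(col) / len(post) for col in zip(*post)]
    print("posterior mean model:", [round(m, 2) for m in mean])
```

Raising or lowering `smooth_weight` in this toy setup mimics the trade-off discussed above: stronger structure constraints reduce posterior spread but can pull the posterior away from poorly resolved features of the true model.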

Relevance: 30.00%

Abstract:

Background: Average energies of nuclear collective modes may be efficiently and accurately computed using a nonrelativistic constrained approach without reliance on a random phase approximation (RPA). Purpose: To extend the constrained approach to the relativistic domain and to establish its impact on the calibration of energy density functionals. Methods: Relativistic RPA calculations of the giant monopole resonance (GMR) are compared against the predictions of the corresponding constrained approach using two accurately calibrated energy density functionals. Results: We find excellent agreement at the 2% level or better between the predictions of the relativistic RPA and the corresponding constrained approach for magic (or semimagic) nuclei ranging from 16O to 208Pb. Conclusions: An efficient and accurate method is proposed for incorporating nuclear collective excitations into the calibration of future energy density functionals.
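
For reference, the standard nonrelativistic sum-rule form of the constrained estimate of the giant monopole energy, which the abstract says is extended to the relativistic domain, can be written as follows; this is the textbook relation, not a formula quoted from the paper.

```latex
\[
  E_{\mathrm{GMR}} \simeq \sqrt{\frac{m_1}{m_{-1}}}, \qquad
  m_1 = \frac{2\hbar^2 A}{m}\,\langle r^2 \rangle, \qquad
  m_{-1} = -\frac{1}{2}\,
    \left.\frac{\partial \langle r^2 \rangle_\lambda}{\partial \lambda}\right|_{\lambda = 0},
\]
where $\langle r^2 \rangle_\lambda$ is evaluated for the constrained Hamiltonian
$\hat H_\lambda = \hat H + \lambda \sum_{i=1}^{A} r_i^2$ (dielectric theorem),
so no explicit RPA calculation is required to estimate the resonance energy.
```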

Relevance: 30.00%

Abstract:

We show that the coercive field in ferritin and ferrihydrite depends on the maximum magnetic field applied in a hysteresis loop, and that the coercivity and loop shift depend on both the maximum and the cooling fields. In the case of ferritin, we show that the time dependence of the magnetization also depends on the maximum and previous cooling fields. This behavior is associated with changes in the intraparticle energy barriers imprinted by these fields. Accordingly, the dependence of the coercive and loop-shift fields on the maximum field in ferritin and ferrihydrite can be described within the framework of a uniform-rotation model in which the energy barrier depends on the maximum and cooling fields.
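
A hedged sketch of the qualitative mechanism invoked above, not the paper's model: a Sharrock-type thermally assisted coercivity for a uniform-rotation (Stoner-Wohlfarth-like) particle, with the effective barrier assumed, purely for illustration, to grow and saturate with the maximum applied field. All parameter values are placeholders.

```python
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def coercive_field(T, V, K_eff, H_K, t_meas=100.0, tau0=1e-9):
    """Sharrock-type thermally assisted coercivity for a uniform-rotation
    particle with energy barrier K_eff * V (aligned easy-axis form)."""
    ratio = KB * T * math.log(t_meas / tau0) / (K_eff * V)
    return H_K * (1.0 - math.sqrt(ratio)) if ratio < 1.0 else 0.0

def k_eff_vs_hmax(h_max, k0=2.0e5, k1=5.0e4, h_sat=5.0):
    """Illustrative assumption: the barrier imprinted by the maximum applied
    field H_max (tesla) increases and then saturates."""
    return k0 + k1 * (1.0 - math.exp(-h_max / h_sat))

if __name__ == "__main__":
    V = (4.0 / 3.0) * math.pi * (4e-9) ** 3   # roughly an 8 nm core
    for h_max in (1.0, 3.0, 7.0):
        hc = coercive_field(T=5.0, V=V, K_eff=k_eff_vs_hmax(h_max), H_K=0.5)
        print(f"H_max = {h_max:.0f} T  ->  H_c ≈ {1e3 * hc:.0f} mT")
```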

Relevance: 30.00%

Abstract:

The traditional net present value (NPV) method for analyzing the economic profitability of an investment, based on a deterministic approach, does not adequately represent the implicit risk associated with different but correlated input variables. Using a stochastic simulation approach to evaluate the profitability of blueberry (Vaccinium corymbosum L.) production in Chile, the objective of this study is to illustrate the complexity of including risk in an economic feasibility analysis when the project is subject to several correlated risks. The results of the simulation analysis suggest that not including the intratemporal correlation between input variables underestimates the risk associated with investment decisions. The methodological contribution of this study illustrates the complexity of the interrelationships between uncertain variables and their impact on the attractiveness of carrying out this type of business in Chile. The steps of the economic viability analysis were as follows. First, fitted probability distributions for the stochastic input variables (SIV) were simulated and validated. Second, the random values of the SIV were used to calculate random values of variables such as production, revenues, costs, depreciation, taxes and net cash flows. Third, the complete stochastic model was simulated with 10,000 iterations using random values for the SIV; this provided the information needed to estimate the probability distributions of the stochastic output variables (SOV), such as the net present value, internal rate of return, value at risk, average cost of production, contribution margin and return on capital. Fourth, the results of the complete stochastic model simulation were used to analyze alternative scenarios and to provide results to decision makers in the form of probabilities, probability distributions and probabilistic forecasts for the SOV. The main conclusion is that this project is a profitable alternative among fruit-crop investments in Chile.
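
A compact sketch of the simulation logic described in the four steps above: correlated stochastic inputs (here just price and yield, drawn from a bivariate normal via a Cholesky factor) feed yearly net cash flows, and the NPV distribution is built from repeated iterations. The two-variable setup and all numbers are illustrative, not the study's data.

```python
import math, random

def correlated_normals(mu1, sd1, mu2, sd2, rho):
    """Draw one pair of correlated normal variates via a Cholesky factor."""
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    return (mu1 + sd1 * z1,
            mu2 + sd2 * (rho * z1 + math.sqrt(1 - rho ** 2) * z2))

def simulate_npv(n_iter=10_000, years=10, invest=80_000.0, rate=0.08,
                 price=(4.0, 0.8), yield_=(9_000.0, 1_500.0), rho=-0.4,
                 var_cost=1.6, fixed_cost=6_000.0):
    """Monte Carlo NPV with intratemporally correlated price (per kg) and yield (kg/ha)."""
    npvs = []
    for _ in range(n_iter):
        npv = -invest
        for t in range(1, years + 1):
            p, q = correlated_normals(*price, *yield_, rho)
            cash_flow = (p - var_cost) * max(q, 0.0) - fixed_cost
            npv += cash_flow / (1 + rate) ** t
        npvs.append(npv)
    npvs.sort()
    return {"mean NPV": sum(npvs) / n_iter,
            "P(NPV < 0)": sum(n < 0 for n in npvs) / n_iter,
            "5% VaR": npvs[int(0.05 * n_iter)]}

if __name__ == "__main__":
    random.seed(42)
    for key, val in simulate_npv().items():
        print(f"{key}: {val:.2%}" if "P(" in key else f"{key}: {val:,.0f}")
```

Setting `rho = 0` in this sketch and comparing the resulting spread and downside statistics reproduces, in miniature, the paper's point that ignoring intratemporal correlation understates the risk.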

Relevance: 30.00%

Abstract:

In this paper we consider a stochastic process that may experience random reset events, which suddenly bring the system back to its starting value, and analyze the relevant statistical quantities. We focus our attention on monotonic continuous-time random walks with a constant drift: the process increases between reset events, either through random jumps or through the action of the deterministic drift. As a result of these combined factors, interesting properties emerge, such as the existence (for any drift strength) of a stationary transition probability density function, or the ability of the model to reproduce power-law-like behavior. General formulas for two extreme-value statistics, the survival probability and the mean exit time, are also derived. To corroborate the results of the paper independently, Monte Carlo methods were used; these numerical estimates are in full agreement with the analytical predictions.
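
A minimal simulation sketch of the process just described, with illustrative exponential jump and reset distributions (not the paper's general formulas): a monotonically increasing process with constant drift and random positive jumps is reset to the origin at Poissonian times; sampling the position at a fixed time approximates the transition density, and the first crossing of a threshold gives the exit-time statistics.

```python
import random

def simulate(t_obs=50.0, drift=0.2, jump_rate=1.0, jump_mean=1.0,
             reset_rate=0.05, threshold=20.0):
    """One realization: returns (position at t_obs, first exit time through
    `threshold`, or None if no exit occurred before t_obs)."""
    t, x, exit_time = 0.0, 0.0, None
    while t < t_obs:
        # next event: jump or reset, whichever comes first (competing exponentials)
        t_jump = random.expovariate(jump_rate)
        t_reset = random.expovariate(reset_rate)
        dt = min(t_jump, t_reset, t_obs - t)
        if exit_time is None and x + drift * dt >= threshold:
            exit_time = t + (threshold - x) / drift      # crossed by the drift alone
        x += drift * dt
        t += dt
        if dt == t_reset:
            x = 0.0                                      # reset: back to the origin
        elif dt == t_jump:
            x += random.expovariate(1.0 / jump_mean)     # positive random jump
            if exit_time is None and x >= threshold:
                exit_time = t
    return x, exit_time

if __name__ == "__main__":
    random.seed(7)
    runs = [simulate() for _ in range(5000)]
    positions = [x for x, _ in runs]
    exits = [e for _, e in runs if e is not None]
    print("mean position at t = 50:", round(sum(positions) / len(positions), 2))
    print("P(exit before t = 50):  ", round(len(exits) / len(runs), 3))
    print("mean exit time | exit:  ", round(sum(exits) / len(exits), 2))
```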

Relevance: 30.00%

Abstract:

The development of dysfunctional or exhausted T cells is characteristic of immune responses to chronic viral infections and cancer. Exhausted T cells are defined by reduced effector function, sustained upregulation of multiple inhibitory receptors, an altered transcriptional program and perturbations of normal memory development and homeostasis. This review focuses on (a) illustrating milestone discoveries that led to our present understanding of T cell exhaustion, (b) summarizing recent developments in the field, and (c) identifying new challenges for translational research. Exhausted T cells are now recognized as key therapeutic targets in human infections and cancer. Much of our knowledge of the clinically relevant process of exhaustion derives from studies in the mouse model of Lymphocytic choriomeningitis virus (LCMV) infection. Studies using this model have formed the foundation for our understanding of human T cell memory and exhaustion. We will use this example to discuss recent advances in our understanding of T cell exhaustion and illustrate the value of integrated mouse and human studies and will emphasize the benefits of bi-directional mouse-to-human and human-to-mouse research approaches.

Relevance: 30.00%

Abstract:

Computed tomography (CT) is an imaging technique in which interest has grown rapidly since it began to be used in the 1970s. Today it has become an extensively used modality because of its ability to produce accurate diagnostic images. However, even if a direct benefit to patient healthcare is attributed to CT, the dramatic increase in the number of CT examinations performed has raised concerns about the potential negative effects of ionising radiation on the population. Among those negative effects, one of the major remaining risks is the development of cancers associated with exposure to diagnostic X-ray procedures. In order to ensure that the benefit-risk ratio remains in favour of the patient, it is necessary to make sure that the delivered dose leads to the proper diagnosis without producing unnecessarily high-quality images. This optimisation scheme is already an important concern for adult patients, but it must become an even greater priority when examinations are performed on children or young adults, in particular in follow-up studies that require several CT procedures over the patient's life. Indeed, children and young adults are more sensitive to radiation because of their faster metabolism, and harmful consequences are more likely to occur because of a younger patient's longer life expectancy. The recent introduction of iterative reconstruction algorithms, designed to substantially reduce dose, is certainly a major achievement in CT evolution, but it has also created difficulties in assessing the quality of the images produced with those algorithms. The goal of the present work was to propose a strategy for investigating the potential of iterative reconstructions to reduce dose without compromising the ability to answer the diagnostic question. The major difficulty lies in having a clinically relevant way to estimate image quality; to ensure the choice of pertinent image quality criteria, this work was carried out in close collaboration with radiologists. The work began by characterising image quality in musculo-skeletal examinations.
We focused, in particular, on the behaviour of image noise and spatial resolution when iterative image reconstruction was used. The analysis of these physical parameters allowed radiologists to adapt their acquisition and reconstruction protocols while knowing what loss of image quality to expect. This work also dealt with the loss of low-contrast detectability associated with dose reduction, a major concern in abdominal investigations. Knowing that alternatives to the classical Fourier-space metrics had to be used to assess image quality, we focused on the use of mathematical model observers, with our experimental parameters determining the type of model to use. Ideal model observers were applied to characterise image quality when purely objective results about signal detectability were sought, whereas anthropomorphic model observers were used in a more clinical context, when the results had to be compared with those of human observers, taking advantage of the human visual system elements these models incorporate. This work confirmed that the use of model observers makes it possible to assess image quality with a task-based approach, which in turn establishes a bridge between medical physicists and radiologists. It also demonstrated that statistical iterative reconstructions have the potential to reduce the delivered dose without impairing the quality of the diagnosis. Among the different types of iterative reconstructions, model-based ones offer the greatest potential for optimisation, since images produced with this modality can still lead to an accurate diagnosis even when acquired at very low dose. Finally, this work has clarified the role of medical physicists in CT imaging: the standard metrics remain useful for assessing a unit's compliance with legal requirements, but model observers are indispensable when optimising imaging protocols.
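
A toy sketch of the task-based evaluation idea, not the thesis implementation: a Hotelling observer computes a detectability index d' from stacks of signal-present and signal-absent images. The synthetic data, image size and signal shape are placeholders; real studies use reconstructed CT images and, typically, channelized observers.

```python
import numpy as np

def hotelling_dprime(present, absent):
    """Hotelling-observer detectability from two image stacks of shape
    (n_images, n_pixels): d'^2 = dg^T K^{-1} dg, with dg the mean signal
    difference and K the pooled covariance of the images."""
    dg = present.mean(axis=0) - absent.mean(axis=0)
    k = 0.5 * (np.cov(present, rowvar=False) + np.cov(absent, rowvar=False))
    k += 1e-6 * np.eye(k.shape[0])          # stabilize the covariance inverse
    return float(np.sqrt(dg @ np.linalg.solve(k, dg)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, side = 400, 8                        # 400 toy images of 8x8 pixels
    signal = np.zeros((side, side))
    signal[3:5, 3:5] = 2.0                  # stand-in for a low-contrast lesion
    noise_sd = 5.0
    absent = rng.normal(0.0, noise_sd, (n, side * side))
    present = signal.ravel() + rng.normal(0.0, noise_sd, (n, side * side))
    print("Hotelling d':", round(hotelling_dprime(present, absent), 2))
```

In a dose-reduction study of the kind described above, d' would be compared across reconstruction algorithms and dose levels, the drop in detectability quantifying the loss of low-contrast performance.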

Relevance: 30.00%

Abstract:

In recent years, many protocols aimed at reproducibly sequencing reduced-genome subsets in non-model organisms have been published. Among them, RAD-sequencing is one of the most widely used. It relies on digesting DNA with specific restriction enzymes and performing size selection on the resulting fragments. Despite its acknowledged utility, this method is of limited use with degraded DNA samples, such as those isolated from museum specimens, as these samples are less likely to harbour fragments long enough to comprise two restriction sites, which is necessary for ligating the adapter sequences (in the case of double-digest RAD) or for performing size selection on the resulting fragments (in the case of single-digest RAD). Here, we address these limitations by presenting a novel method called hybridization RAD (hyRAD). In this approach, biotinylated RAD fragments covering a random fraction of the genome are used as baits for capturing homologous fragments from genomic shotgun sequencing libraries. This simple and cost-effective approach allows orthologous loci to be sequenced even from highly degraded DNA samples, opening new avenues of research in the field of museum genomics. Because it does not rely on the presence of restriction sites, it also improves among-sample locus coverage. In a trial study, hyRAD allowed us to obtain a large set of orthologous loci from fresh and museum samples of a non-model butterfly species, with a high proportion of single nucleotide polymorphisms present in all eight analyzed specimens, including 58-year-old museum samples. The utility of the method was further validated using 49 museum and fresh samples of a Palearctic grasshopper species for which the spatial genetic structure had previously been assessed using mtDNA amplicons. The application of the method is finally discussed in a wider context: since it is insensitive to among-sample polymorphism in the restriction sites, which usually causes locus dropout, hyRAD should also enable analyses at broader evolutionary scales.