967 results for Flail space model


Relevance: 30.00%

Abstract:

INDISIM-YEAST, an individual-based simulator, models the evolution of a yeast population by setting up rules of behaviour for each individual cell according to its own biological rules and characteristics. It takes into account the uptake, metabolism, budding reproduction and viability of the yeast cells over a period of time in the bulk of a liquid medium, occupying a three-dimensional closed spatial grid with two kinds of particles (glucose and ethanol). Each microorganism is characterized by, among other things, its biomass, genealogical age, state in the budding reproduction cycle and position in space. Simulations are carried out for population properties (global properties) as well as for properties that pertain to individual yeast cells (microscopic properties). The results of the simulations are in good qualitative agreement with established experimental trends.
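
A deliberately minimal sketch of what such an individual-based update loop can look like; all class names, parameters and rules below are illustrative assumptions, not the actual INDISIM-YEAST implementation (ethanol production and cell viability are omitted):

```python
import random

# Simplified individual-based yeast model: each cell takes up glucose at
# its grid site, converts it to biomass, and buds once a mass threshold
# is crossed. All parameters are illustrative.
UPTAKE_RATE = 0.4    # glucose units a cell can consume per time step
YIELD = 0.5          # biomass gained per unit of glucose metabolized
BUDDING_MASS = 2.0   # biomass threshold that triggers budding
GRID = 10            # side length of the closed 3-D spatial grid

class YeastCell:
    def __init__(self, pos, biomass=1.0):
        self.pos = pos               # (x, y, z) position on the grid
        self.biomass = biomass
        self.genealogical_age = 0    # number of buds produced so far

    def step(self, glucose):
        """Uptake and metabolism at this cell's grid site."""
        taken = min(UPTAKE_RATE, glucose[self.pos])
        glucose[self.pos] -= taken
        self.biomass += YIELD * taken
        if self.biomass >= BUDDING_MASS:
            self.genealogical_age += 1
            self.biomass /= 2.0
            return YeastCell(self.pos, self.biomass)   # daughter cell
        return None

def simulate_step(cells, glucose):
    """Advance the whole population by one time step."""
    newborn = [d for d in (c.step(glucose) for c in cells) if d]
    cells.extend(newborn)

glucose = {(x, y, z): 5.0 for x in range(GRID)
           for y in range(GRID) for z in range(GRID)}
cells = [YeastCell(tuple(random.randrange(GRID) for _ in range(3)))
         for _ in range(100)]
for _ in range(50):
    simulate_step(cells, glucose)
print(len(cells), "cells after 50 steps")
```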

Relevance: 30.00%

Abstract:

The aim of this study was to determine how attractive a business opportunity mobile games offer as an advertising platform. The study was carried out as a case study. It began by defining the business model, after which a general survey of the Finnish mobile game market was conducted. The opportunities and limitations of the business model were then examined by means of value chain, value network and market analyses. The theoretical framework of the study was based on Hamel's business model, Porter's value chain and Allee's value network. The study concluded that advertising in mobile games offers a business opportunity with no barriers to its implementation. The Finnish mobile game market, however, is fragmented, as a result of which the studied 'advertising management platform' business model incurs excessively high integration costs. The large number of game vendors also weakens the efficiency of the model.

Relevance: 30.00%

Abstract:

The main goal of this paper is to propose a convergent finite volume method for a reaction–diffusion system with cross-diffusion. First, we sketch an existence proof for a class of cross-diffusion systems. Then the standard two-point finite volume fluxes are used in combination with a nonlinear positivity-preserving approximation of the cross-diffusion coefficients. Existence and uniqueness of the approximate solution are addressed, and it is also shown that the scheme converges to the corresponding weak solution of the studied model. Furthermore, we provide a stability analysis to study pattern-formation phenomena, and we present two-dimensional numerical examples which exhibit the formation of nonuniform spatial patterns. The simulations also indicate that the experimental rates of convergence are slightly below second order. The convergence proof uses two ingredients of interest for various applications, namely discrete Sobolev embedding inequalities with general boundary conditions and a space-time $L^1$ compactness argument that mimics the compactness lemma due to Kruzhkov. The proofs of these results are given in the Appendix.
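
For orientation, a generic two-species reaction-diffusion system with cross-diffusion (the off-diagonal coefficients $a_{12}$, $a_{21}$) and the standard two-point flux between neighbouring control volumes $K$ and $L$ can be written as follows; the notation is generic, not taken from the paper:

```latex
\partial_t u - \nabla\cdot\big(a_{11}(u,v)\,\nabla u + a_{12}(u,v)\,\nabla v\big) = f(u,v), \\
\partial_t v - \nabla\cdot\big(a_{21}(u,v)\,\nabla u + a_{22}(u,v)\,\nabla v\big) = g(u,v), \\
F_{K,L} = \tau_{K,L}\,(u_K - u_L), \qquad \tau_{K,L} = \frac{|\sigma_{K,L}|}{d(x_K, x_L)}.
```

Here $|\sigma_{K,L}|$ is the measure of the interface shared by $K$ and $L$ and $d(x_K, x_L)$ the distance between the cell centres; the nonlinear positivity-preserving step mentioned in the abstract enters through the approximation of the coefficients $a_{ij}$ at the interface.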

Relevance: 30.00%

Abstract:

The main objective of this work is to show how the choice of the temporal dimension and of the spatial structure of the population influences an artificial evolutionary process. In the field of Artificial Evolution we can observe a common trend towards synchronously evolving panmictic populations, i.e., populations in which any individual can be recombined with any other individual. Already in the 1990s, the works of Spiessens and Manderick, Sarma and De Jong, and Gorges-Schleuter pointed out that, if a population is structured according to a one- or two-dimensional regular lattice, the evolutionary process shows a different dynamic with respect to the panmictic case. In particular, Sarma and De Jong studied the selection pressure (i.e., the diffusion of a best individual when the only active operator is selection) induced by a regular two-dimensional structure of the population, proposing a logistic model of the selection pressure curves. This model assumes that the diffusion of a best individual in a population follows an exponential law. We show that such a model is inadequate to describe the process, since the growth speed must be quadratic or sub-quadratic in the case of a two-dimensional regular lattice. New linear and sub-quadratic models are proposed for the selection pressure curves in one- and two-dimensional regular structures, respectively. These models are extended to describe the process when asynchronous evolution is employed. Different population dynamics imply different search strategies of the resulting algorithm when the evolutionary process is used to solve optimisation problems. A benchmark of both discrete and continuous test problems is used to study the search characteristics of the different topologies and population update policies. In the last decade, the pioneering studies of Watts and Strogatz have shown that most real networks, in the biological and sociological worlds as well as in man-made structures, have mathematical properties that set them apart from both regular and random structures. In particular, they introduced the concept of small-world graphs and showed that this new family of structures has interesting computing capabilities. Populations structured according to these new topologies are proposed, and their evolutionary dynamics are studied and modeled. We also propose asynchronous evolution for these structures, and the resulting evolutionary behaviours are investigated. Many man-made networks have grown, and are still growing, incrementally, and explanations have been proposed for their actual shape, such as Albert and Barabási's preferential attachment growth rule. However, many actual networks seem to have undergone some kind of Darwinian variation and selection. Thus, how these networks might have come to be selected is an interesting yet unanswered question. In the last part of this work, we show how a simple evolutionary algorithm can enable the emergence of these kinds of structures for two prototypical problems of the automata networks world, the majority classification and synchronisation problems.
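
A small simulation sketch of the effect described (hypothetical population size and update rules, not the thesis code): with selection alone, copies of the best individual take over a panmictic population under binary tournament in a few generations, whereas a deterministic best-of-neighbourhood update on a one-dimensional ring spreads the best individual only linearly, two cells per step:

```python
import random

N = 400  # population size (illustrative)

def takeover_panmictic():
    """Binary tournament over the whole population (panmictic)."""
    pop = [0] * N          # 0 = ordinary individual, 1 = current best
    pop[0] = 1
    generations = 0
    while sum(pop) < N:
        pop = [max(random.choice(pop), random.choice(pop))
               for _ in range(N)]
        pop[0] = 1         # keep one copy of the best (simple elitism)
        generations += 1
    return generations

def takeover_ring():
    """Deterministic best-of-neighbourhood update on a 1-D ring."""
    pop = [0] * N
    pop[0] = 1
    generations = 0
    while sum(pop) < N:
        pop = [max(pop[i - 1], pop[i], pop[(i + 1) % N])
               for i in range(N)]
        generations += 1
    return generations

print("panmictic takeover:", takeover_panmictic(), "generations")
print("ring takeover:     ", takeover_ring(), "generations")  # ~N/2
```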

Relevance: 30.00%

Abstract:

Sexual reproduction is nearly universal in eukaryotes and genetic determination of sex prevails among animals. The astonishing diversity of sex-determining systems and sex chromosomes is nevertheless bewildering. Some taxonomic groups possess conserved and dimorphic sex chromosomes, involving a functional copy (e.g. mammals' X, birds' Z) and a degenerated copy (mammals' Y, birds' W), implying that sex chromosomes are expected to decay. In contrast, others like amphibians, reptiles and fishes have maintained undifferentiated sex chromosomes. Why such different evolutionary trajectories? In this thesis, we empirically test and characterize the main hypotheses proposed to prevent the genetic decay of sex chromosomes, namely occasional X-Y recombination and frequent sex-chromosome transitions, using the Palearctic radiation of Hyla tree frogs as a model system. We take a phylogeographic and phylogenetic approach to relate sex-chromosome recombination, differentiation and transitions in a spatial and temporal framework. By reconstructing the recent evolutionary history of the widespread European tree frog H. arborea, we show that sex chromosomes can recombine in males, preventing their differentiation, a situation that potentially evolves rapidly. At the scale of the entire radiation, X-Y recombination combines with frequent transitions to prevent sex-chromosome degeneration in Hyla: we traced several turnovers of the sex-determining system within the last 10 My. These rapid changes seem less random than usually assumed: we gathered evidence that one chromosome pair is a 'sex expert', carrying genes with key roles in animal sex determination, which probably specialized through frequent reuse as a sex chromosome in Hyla and other amphibians. Finally, we took advantage of secondary contact zones between closely related Hyla lineages to evaluate the consequences of sex-chromosome homomorphy for the genetics of speciation. In comparison with other systems, the evolution of sex chromosomes in Hyla highlights consistent evolutionary patterns within the chaotic diversity of cold-blooded vertebrates' sex-determining systems, and provides insights into the evolution of recombination. Beyond sex-chromosome evolution, this work also contributes significantly to speciation, phylogeography and applied conservation research.

Relevance: 30.00%

Abstract:

Tissue analysis is a useful tool for the nutrient management of fruit orchards. The mineral composition of diagnostic tissues, expressed as nutrient concentrations on a dry-weight basis, has long been used to assess the status of 'pure' nutrients. When nutrients are mixed and interact in plant tissues, their proportions or concentrations change relative to each other as a result of synergism, antagonism or neutrality, hence producing resonance within the closed space of tissue composition. Ternary diagrams and nutrient ratios are early representations of interacting nutrients in the compositional space. Dual and multiple interactions were integrated by the Diagnosis and Recommendation Integrated System (DRIS) into nutrient indexes, and by Compositional Nutrient Diagnosis into centered log ratios (CND-clr). DRIS has some computational flaws, such as using a dry-matter index that is not a part, as well as nutrient products (e.g. N×Ca) instead of ratios. DRIS and CND-clr integrate all possible nutrient interactions without defining an ad hoc interactive model. They diagnose D components while D-1 could be diagnosed in the D-compositional Hilbert space. Isometric log ratio (ilr) coordinates overcome these problems using orthonormal binary nutrient partitions instead of dual ratios. In this study, we present a nutrient interaction model as well as computation methods for DRIS, CND-clr and CND-ilr coordinates (CND-ilr), using leaf analytical data from an experimental apple orchard in southwestern Quebec, Canada. We computed the Aitchison and Mahalanobis distances across ilr coordinates as measures of nutrient imbalance. The effects of changing nutrient concentrations on ilr coordinates are simulated to identify the ones contributing the most to nutrient imbalance.
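
A minimal numerical sketch of the compositional machinery involved; the three-part composition and the binary partition below are made-up illustrations, not the study's data (numpy only):

```python
import numpy as np

def clr(x):
    """Centered log ratio of a composition."""
    g = np.exp(np.mean(np.log(x)))   # geometric mean of the parts
    return np.log(x / g)

def ilr_balance(x, num, den):
    """One ilr coordinate from a binary partition: parts num vs. den."""
    r, s = len(num), len(den)
    g_num = np.exp(np.mean(np.log(x[num])))
    g_den = np.exp(np.mean(np.log(x[den])))
    return np.sqrt(r * s / (r + s)) * np.log(g_num / g_den)

def aitchison_distance(x, y):
    """Euclidean distance between clr vectors (equals ilr distance)."""
    return np.linalg.norm(clr(x) - clr(y))

# Hypothetical three-part leaf composition (e.g. N, P, K proportions)
# and a made-up reference ('balanced') composition.
sample = np.array([0.70, 0.10, 0.20])
norm   = np.array([0.60, 0.15, 0.25])

print("ilr balance [N | P,K]:", ilr_balance(sample, [0], [1, 2]))
print("Aitchison distance to reference:", aitchison_distance(sample, norm))
```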

Relevance: 30.00%

Abstract:

We numerically simulate planar shock wave collisions in anti-de Sitter space as a model for heavy ion collisions of large nuclei. We uncover a crossover between two different dynamical regimes as a function of the collision energy. At low energies the shocks first stop and then explode in a manner approximately described by hydrodynamics, in close similarity with the Landau model. At high energies the receding fragments move outwards at the speed of light, with a region of negative energy density and negative longitudinal pressure trailing behind them. The rapidity distribution of the energy density at late times around midrapidity is not approximately boost invariant but Gaussian, albeit with a width that increases with the collision energy.
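
For comparison, the Landau model referred to here predicts an approximately Gaussian rapidity distribution whose width grows logarithmically with the collision energy; the textbook form (stated for orientation, not quoted from the paper) is

```latex
\frac{dE}{dy} \;\propto\; \exp\!\left(-\frac{y^{2}}{2\sigma^{2}}\right),
\qquad
\sigma^{2} \;\simeq\; \ln\!\left(\frac{\sqrt{s_{NN}}}{2\,m_{p}}\right),
```

where $\sqrt{s_{NN}}$ is the collision energy per nucleon pair and $m_p$ the proton mass.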

Relevance: 30.00%

Abstract:

Probabilistic inversion methods based on Markov chain Monte Carlo (MCMC) simulation are well suited to quantify parameter and model uncertainty of nonlinear inverse problems. Yet, application of such methods to CPU-intensive forward models can be a daunting task, particularly if the parameter space is high dimensional. Here, we present a 2-D pixel-based MCMC inversion of plane-wave electromagnetic (EM) data. Using synthetic data, we investigate how model parameter uncertainty depends on model structure constraints using different norms of the likelihood function and the model constraints, and study the added benefits of joint inversion of EM and electrical resistivity tomography (ERT) data. Our results demonstrate that model structure constraints are necessary to stabilize the MCMC inversion results of a highly discretized model. These constraints decrease model parameter uncertainty and facilitate model interpretation. A drawback is that these constraints may lead to posterior distributions that do not fully include the true underlying model, because some of its features exhibit a low sensitivity to the EM data, and hence are difficult to resolve. This problem can be partly mitigated if the plane-wave EM data is augmented with ERT observations. The hierarchical Bayesian inverse formulation introduced and used herein is able to successfully recover the probabilistic properties of the measurement data errors and a model regularization weight. Application of the proposed inversion methodology to field data from an aquifer demonstrates that the posterior mean model realization is very similar to that derived from a deterministic inversion with similar model constraints.
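
A schematic of the MCMC machinery involved: a generic random-walk Metropolis sampler with a Gaussian likelihood, a crude smoothness penalty standing in for the model-structure constraints, and a toy linear forward operator replacing the CPU-intensive EM solver. All of these are placeholder assumptions, not the paper's hierarchical formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_posterior(m, d_obs, forward, sigma, lam):
    """Gaussian data misfit plus a smoothness (model-structure) penalty."""
    r = forward(m) - d_obs
    misfit = -0.5 * np.sum((r / sigma) ** 2)
    smooth = -lam * np.sum(np.diff(m) ** 2)   # crude 1-D regularizer
    return misfit + smooth

def metropolis(m0, d_obs, forward, sigma, lam, steps=5000, step_size=0.05):
    m = m0.copy()
    lp = log_posterior(m, d_obs, forward, sigma, lam)
    chain = []
    for _ in range(steps):
        prop = m + step_size * rng.standard_normal(m.size)
        lp_prop = log_posterior(prop, d_obs, forward, sigma, lam)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject
            m, lp = prop, lp_prop
        chain.append(m.copy())
    return np.array(chain)

# Toy linear "forward model" standing in for the EM solver.
G = rng.standard_normal((20, 10))
m_true = rng.standard_normal(10)
d_obs = G @ m_true + 0.1 * rng.standard_normal(20)
chain = metropolis(np.zeros(10), d_obs, lambda m: G @ m, sigma=0.1, lam=1.0)
print("posterior mean:", chain[2500:].mean(axis=0))   # discard burn-in
```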

Relevance: 30.00%

Abstract:

Maximum entropy modeling (Maxent) is a widely used algorithm for predicting species distributions across space and time. Properly assessing the uncertainty in such predictions is non-trivial and requires validation with independent datasets. Notably, model complexity (the number of model parameters) remains a major concern in relation to overfitting and, hence, the transferability of Maxent models. An emerging approach is to validate the cross-temporal transferability of model predictions using paleoecological data. In this study, we assess the effect of model complexity on the performance of Maxent projections across time, using two European plant species (Alnus glutinosa (L.) Gaertn. and Corylus avellana L.) with an extensive late Quaternary fossil record in Spain as a study case. We fit 110 models with different levels of complexity under present-day conditions and tested model performance using AUC (area under the receiver operating characteristic curve) and AICc (corrected Akaike Information Criterion) through the standard procedure of randomly partitioning current occurrence data. We then compared these results to an independent validation by projecting the models to mid-Holocene (6000 years before present) climatic conditions in Spain to assess their ability to predict fossil pollen presence-absence and abundance. We find that calibrating Maxent models with default settings results in overly complex models. While model performance increased with model complexity when predicting current distributions, it was highest at intermediate complexity when predicting mid-Holocene distributions. Hence, models of intermediate complexity offered the best trade-off for predicting species distributions across time. Reliable temporal model transferability is especially relevant for forecasting species distributions under future climate change. Consequently, species-specific model tuning should be used to find the best modeling settings to control for complexity, ideally with paleoecological data to independently validate model projections. For cross-temporal projections of species distributions for which paleoecological data are not available, models of intermediate complexity should be selected.
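
The small-sample correction behind AICc is compact enough to state directly; a sketch with made-up log-likelihood values for illustration:

```python
import math  # not strictly needed; kept for clarity of intent

def aicc(log_likelihood, k, n):
    """Corrected Akaike Information Criterion.

    k: number of model parameters, n: number of samples.
    The correction term penalizes complexity more heavily
    when n is small relative to k.
    """
    aic = 2 * k - 2 * log_likelihood
    return aic + (2 * k * (k + 1)) / (n - k - 1)

# Two hypothetical Maxent fits of n = 120 occurrences: a complex model
# with a slightly better fit vs. a much simpler one.
print(aicc(log_likelihood=-250.0, k=40, n=120))  # complex:  ~621.5
print(aicc(log_likelihood=-260.0, k=10, n=120))  # simpler:  ~542.0 (wins)
```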

Relevance: 30.00%

Abstract:

How a stimulus or a task alters the spontaneous dynamics of the brain remains a fundamental open question in neuroscience. One of the most robust hallmarks of task/stimulus-driven brain dynamics is the decrease of variability with respect to the spontaneous level, an effect seen across multiple experimental conditions and in brain signals observed at different spatiotemporal scales. Recently, it was observed that the trial-to-trial variability and temporal variance of functional magnetic resonance imaging (fMRI) signals decrease in the task-driven activity. Here we examined the dynamics of a large-scale model of the human cortex to provide a mechanistic understanding of these observations. The model allows computing the statistics of synaptic activity in the spontaneous condition and in putative tasks determined by external inputs to a given subset of brain regions. We demonstrated that external inputs decrease the variance, increase the covariances, and decrease the autocovariance of synaptic activity as a consequence of single node and large-scale network dynamics. Altogether, these changes in network statistics imply a reduction of entropy, meaning that the spontaneous synaptic activity outlines a larger multidimensional activity space than does the task-driven activity. We tested this model's prediction on fMRI signals from healthy humans acquired during rest and task conditions and found a significant decrease of entropy in the stimulus-driven activity. Altogether, our study proposes a mechanism for increasing the information capacity of brain networks by enlarging the volume of possible activity configurations at rest and reliably settling into a confined stimulus-driven state to allow better transmission of stimulus-related information.
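
The link between these second-order statistics and entropy can be made explicit if the synaptic activity is approximated as multivariate Gaussian; for N nodes with covariance matrix Σ the differential entropy is the standard identity (an illustrative assumption here, not a formula quoted from the paper):

```latex
H(\mathbf{x}) \;=\; \tfrac{1}{2}\,\ln\!\big[(2\pi e)^{N}\,\det \Sigma\big].
```

Decreasing the variances and increasing the covariances both shrink $\det \Sigma$, and hence the entropy, consistent with the reported contraction of the task-driven activity space.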

Relevance: 30.00%

Abstract:

The dual-stream model of auditory processing postulates separate processing streams for sound meaning and for sound location. The present review draws on evidence from human behavioral and activation studies as well as from lesion studies to argue for a position-linked representation of sound objects that is distinct both from the position-independent representation within the ventral/What stream and from the explicit sound localization processing within the dorsal/Where stream.

Relevance: 30.00%

Abstract:

As the development of integrated circuit technology continues to follow Moore's law, the complexity of circuits increases exponentially. Traditional hardware description languages such as VHDL and Verilog are no longer powerful enough to cope with this level of complexity and do not provide facilities for hardware/software codesign. Languages such as SystemC are intended to solve these problems by combining the expressive power of high-level programming languages with the hardware-oriented facilities of hardware description languages. To fully replace older languages in the design flow of digital systems, SystemC should also be synthesizable. The devices required by modern high-speed networks often share the same tight constraints on, e.g., size, power consumption and price with embedded systems, but also have very demanding real-time and quality-of-service requirements that are difficult to satisfy with general-purpose processors. Dedicated hardware blocks of an application-specific instruction set processor are one way to combine fast processing speed, energy efficiency, flexibility and relatively low time-to-market. Common features can be identified in the network processing domain, making it possible to develop specialized but configurable processor architectures. One such architecture is TACO, which is based on the transport triggered architecture. The architecture offers a high degree of parallelism and modularity, and greatly simplified instruction decoding. For this M.Sc. (Tech.) thesis, a simulation environment for the TACO architecture was developed with SystemC 2.2, using an old version written in SystemC 1.0 as a starting point. The environment enables rapid design space exploration by providing facilities for hardware/software codesign and simulation, and an extendable library of automatically configured reusable hardware blocks. Other topics covered are the differences between SystemC 1.0 and 2.2 from the viewpoint of hardware modeling, and the compilation of a SystemC model into synthesizable VHDL with the Celoxica Agility SystemC Compiler. As a test case for the environment, a simulation model of a processor for TCP/IP packet validation was designed and tested.

Relevance: 30.00%

Abstract:

Computed tomography (CT) is an imaging technique in which interest has grown steadily since it began to be used in the 1970s. Today it has become an extensively used modality because of its ability to produce accurate diagnostic images. However, even if a direct benefit to patient healthcare is attributed to CT, the dramatic increase in the number of CT examinations performed has raised concerns about the potential negative effects of ionising radiation on the population. Among those negative effects, one of the major remaining risks is the development of cancers associated with exposure to diagnostic X-ray procedures. In order to ensure that the benefit-risk ratio remains in favour of the patient, it is necessary to make sure that the delivered dose leads to the proper diagnosis without producing unnecessarily high-quality images. This optimisation scheme is already an important concern for adult patients, but it must become an even greater priority when examinations are performed on children or young adults, in particular in follow-up studies which require several CT procedures over the patient's life. Indeed, children and young adults are more sensitive to radiation due to their faster metabolism, and harmful consequences are more likely to occur because of their longer life expectancy. The recent introduction of iterative reconstruction algorithms, which were designed to substantially reduce dose, is certainly a major achievement in CT evolution, but it has also created difficulties in assessing the quality of the images produced using those algorithms. The goal of the present work was to propose a strategy to investigate the potential of iterative reconstructions to reduce dose without compromising the ability to answer the diagnostic questions. The major difficulty lies in having a clinically relevant way to estimate image quality; to ensure the choice of pertinent image-quality criteria, this work was performed in continuous close collaboration with radiologists. The work began by tackling the way to characterise image quality in musculoskeletal examinations. We focused, in particular, on the behaviour of image noise and spatial resolution when iterative image reconstruction was used. The analysis of these physical parameters allowed radiologists to adapt their image acquisition and reconstruction protocols while knowing what loss of image quality to expect. This work also dealt with the loss of low-contrast detectability associated with dose reduction, a major concern in abdominal investigations. Knowing that alternatives to classical Fourier-space metrics had to be used to assess image quality, we focused on the use of mathematical model observers. Our experimental parameters determined the type of model to use. Ideal model observers were applied to characterise image quality when purely objective results about signal detectability were sought, whereas anthropomorphic model observers were used in a more clinical context, when the results had to be compared with the eye of a radiologist, thus taking advantage of their incorporation of elements of the human visual system. This work confirmed that the use of model observers makes it possible to assess image quality using a task-based approach, which, in turn, establishes a bridge between medical physicists and radiologists. It also demonstrated that statistical iterative reconstructions have the potential to reduce the delivered dose without impairing the quality of the diagnosis. Among the different types of iterative reconstructions, model-based ones offer the greatest potential, since images produced using this modality can still lead to an accurate diagnosis even when acquired at very low dose. This work has clarified the role of medical physicists in CT imaging: the standard metrics remain important for assessing a unit's compliance with legal requirements, but model observers are the way to go when optimising imaging protocols.
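
As a pointer to the model-observer formalism referred to here, the standard Hotelling-observer expressions from the image-quality literature (not formulas quoted from the thesis) read, for signal-present and signal-absent image classes with mean difference $\Delta\bar{\mathbf{g}}$ and average covariance $\mathbf{K}$:

```latex
\mathbf{w} \;=\; \mathbf{K}^{-1}\,\Delta\bar{\mathbf{g}},
\qquad
{d'}^{\,2} \;=\; \Delta\bar{\mathbf{g}}^{\mathsf{T}}\,\mathbf{K}^{-1}\,\Delta\bar{\mathbf{g}},
```

where $\mathbf{w}$ is the observer template and $d'$ the detectability index; anthropomorphic variants (e.g. channelized Hotelling observers) apply the same construction to channel outputs meant to mimic the human visual system.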

Relevance: 30.00%

Abstract:

There is increasing evidence to support a significant role for chronic non-bacterial prostatic inflammation in the development of human voiding dysfunction and prostate cancer. Their increased prevalence with age suggests that the decrease in testosterone concentration and/or in the serum testosterone-to-estradiol ratio may have a role in their development. The main objective of this study was to explore prostatic inflammation and its relationship with voiding dysfunction and prostate carcinogenesis by developing an experimental model. A novel selective estrogen receptor modulator (SERM), fispemifene, was tested for the prevention and treatment of prostatic inflammation in this model. Combined treatment of adult Noble rats with testosterone and estradiol for 3 to 6 weeks induced gradually developing prostatic inflammation in the dorsolateral prostatic lobes. Inflammatory cells, mainly T-lymphocytes, were first seen around capillaries. Thereafter, the lymphocytes migrated into the stroma and into the periglandular space. When the treatment time was extended to 13 weeks, the number of inflamed acini increased. Urodynamic recordings indicated voiding dysfunction. When the animals had above-normal testosterone and estradiol concentrations but still a decreased testosterone-to-estradiol ratio in serum, they developed obstructive voiding. Furthermore, they developed precancerous lesions and prostate cancers in the ducts of the dorsolateral prostatic lobes. Interestingly, inflammatory infiltrates were observed adjacent to precancerous lesions but not adjacent to adenocarcinomas, suggesting that inflammation has a role in the early stages of prostate carcinogenesis. Fispemifene, a novel SERM tested in this experimental model, showed anti-inflammatory action by attenuating the number of inflamed acini in the dorsolateral prostate. Fispemifene also exhibited antiestrogenic properties by decreasing the expression of estrogen-induced biomarkers in the acinar epithelium. These findings suggest that SERMs could be considered as a new therapeutic possibility in the prevention and treatment of chronic prostatic inflammation.

Relevance: 30.00%

Abstract:

The topological solitons of two classical field theories, the Faddeev-Skyrme model and the Ginzburg-Landau model, are studied numerically and analytically in this work. The aim is to gain information on the existence and properties of these topological solitons, their structure, and their behaviour under relaxation. First, the conditions and mechanisms leading to the possibility of topological solitons are explored from the field-theoretical point of view. This leads one to consider continuous deformations of the solutions of the equations of motion. The results of algebraic topology necessary for the systematic treatment of such deformations are reviewed, and methods of determining the homotopy classes of topological solitons are presented. The Faddeev-Skyrme and Ginzburg-Landau models are presented, some earlier results are reviewed, and the numerical methods used in this work are described. The topological solitons of the Faddeev-Skyrme model, Hopfions, are found to follow the same mechanisms of relaxation in three different domains with three different topological classifications. For two of the domains, the necessary but unusual topological classification is presented. Finite-size topological solitons are not found in the Ginzburg-Landau model, and a scaling argument is used to suggest that there are indeed none unless a certain modification to the model, due to R. S. Ward, is made. In that case, the Hopfions of the Faddeev-Skyrme model are seen to be present for some parameter values. A boundary in the parameter space separating the region where the Hopfions exist from the region where they do not is found, and the behaviour of the Hopfion energy on this boundary is studied.
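
For orientation, the static energy of the Faddeev-Skyrme model for a unit-vector field $\mathbf{n}:\mathbb{R}^3 \to S^2$ is conventionally written as below (a standard textbook form with coupling $\kappa$, not quoted from the thesis); its solitons, the Hopfions, are classified by the Hopf invariant $Q \in \pi_3(S^2) \cong \mathbb{Z}$, and the energy obeys the Vakulenko-Kapitanskii bound:

```latex
E[\mathbf{n}] \;=\; \int_{\mathbb{R}^{3}} \Big(\,\partial_i \mathbf{n}\cdot\partial_i \mathbf{n}
\;+\; \kappa\, F_{ij}F_{ij}\Big)\, d^{3}x,
\qquad
F_{ij} \;=\; \mathbf{n}\cdot\big(\partial_i \mathbf{n}\times\partial_j \mathbf{n}\big),
\qquad
E \;\ge\; c\,|Q|^{3/4}.
```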