39 results for Model-Based Design


Relevance: 90.00%

Abstract:

AIMS: While successful termination of organized atrial tachycardias by pacing has been observed in patients, single-site rapid pacing has not yet led to conclusive results for the termination of atrial fibrillation (AF). The purpose of this study was to evaluate a novel atrial septal pacing algorithm for the termination of AF in a biophysical model of the human atria. METHODS AND RESULTS: Sustained AF was generated in a model based on human magnetic resonance images and membrane kinetics. Rapid pacing was applied from the septal area following a dual-stage scheme: (i) rapid pacing for 10-30 s at pacing intervals of 62-70% of the AF cycle length (AFCL), and (ii) slow pacing for 1.5 s at 180% AFCL, initiated by a single stimulus at 130% AFCL. AF termination success rates were computed. Rapid septal pacing alone yielded a mean AF termination success rate of 10.2%; adding the slow pacing phase increased this rate to 20.2%. At the optimal pacing cycle length (64% AFCL), termination rates of up to 29% were observed. CONCLUSION: The proposed septal pacing algorithm suppressed AF reentries more robustly than classical single-site rapid pacing. Experimental studies are now needed to determine whether similar termination mechanisms and rates can be observed in animals or humans, and in which types of AF this pacing strategy might be most effective.
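As a rough illustration of the dual-stage scheme, the sketch below generates stimulus times from a measured AFCL. It is not the paper's implementation; the function name, the 150 ms AFCL, and the 20 s rapid-phase duration are hypothetical stand-ins for values within the reported ranges.

```python
# Sketch of the dual-stage pacing schedule (illustrative only; the paper's
# biophysical atrial model is not reproduced here). Stimulus times are derived
# from the AF cycle length (AFCL): a rapid phase at 62-70% AFCL, then a single
# transition stimulus at 130% AFCL, then a slow phase at 180% AFCL.

def dual_stage_schedule(afcl_ms, rapid_fraction=0.64, rapid_duration_ms=20_000,
                        slow_duration_ms=1_500):
    """Return stimulus times (ms) for the two-stage septal pacing scheme.

    afcl_ms        -- AF cycle length in milliseconds (measured beforehand)
    rapid_fraction -- rapid-phase pacing interval as a fraction of AFCL
                      (0.62-0.70 in the study; 0.64 was reported as optimal)
    """
    times = []
    t = 0.0
    # Stage (i): rapid pacing for 10-30 s at 62-70% AFCL.
    rapid_interval = rapid_fraction * afcl_ms
    while t < rapid_duration_ms:
        times.append(t)
        t += rapid_interval
    # Transition: one stimulus at 130% AFCL after the last rapid stimulus.
    t = times[-1] + 1.30 * afcl_ms
    times.append(t)
    # Stage (ii): slow pacing for 1.5 s at 180% AFCL.
    slow_interval = 1.80 * afcl_ms
    end_of_slow = t + slow_duration_ms
    t += slow_interval
    while t <= end_of_slow:
        times.append(t)
        t += slow_interval
    return times

if __name__ == "__main__":
    schedule = dual_stage_schedule(afcl_ms=150.0)  # hypothetical AFCL of 150 ms
    print(f"{len(schedule)} stimuli, last at {schedule[-1]:.0f} ms")
```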

Relevance: 90.00%

Abstract:

Depth-averaged velocities and unit discharges within a 30 km reach of one of the world's largest rivers, the Rio Parana, Argentina, were simulated using three hydrodynamic models with different process representations: a reduced-complexity (RC) model that neglects most of the physics governing fluid flow, a two-dimensional model based on the shallow water equations, and a three-dimensional model based on the Reynolds-averaged Navier-Stokes equations. Flow characteristics simulated using all three models were compared with data obtained by acoustic Doppler current profiler surveys at four cross sections within the study reach. This analysis demonstrates that, surprisingly, the performance of the RC model is generally equal to, and in some instances better than, that of the physics-based models in terms of the statistical agreement between simulated and measured flow properties. In addition, in contrast to previous applications of RC models, the present study demonstrates that the RC model can successfully predict measured flow velocities. The strong performance of the RC model reflects, in part, the simplicity of the depth-averaged mean flow patterns within the study reach and the dominant role of channel-scale topographic features in controlling the flow dynamics. Moreover, the very low water-surface slopes that typify large sand-bed rivers enable flow depths to be estimated reliably in the RC model using a simple fixed-lid planar water-surface approximation. This approach overcomes a major problem encountered in the application of RC models in environments characterised by shallow flows and steep bed gradients. The RC model is four orders of magnitude faster than the physics-based models when performing steady-state hydrodynamic calculations. However, the iterative nature of the RC model calculations implies a reduction in computational efficiency relative to some other RC models. A further implication is that, if used to simulate channel morphodynamics, the present RC model may offer only a marginal advantage in computational efficiency over approaches based on the shallow water equations. These observations illustrate the trade-off between model realism and efficiency that is a key consideration in RC modelling. Moreover, this outcome highlights a need to rethink the use of RC morphodynamic models in fluvial geomorphology and to move away from existing grid-based approaches, such as the popular cellular automata (CA) models, that remain essentially reductionist in nature. In the case of the world's largest sand-bed rivers, this might be achieved by implementing the RC model outlined here as one element within a hierarchical modelling framework that would enable computationally efficient simulation of the morphodynamics of large rivers over millennial time scales.
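The fixed-lid planar water-surface approximation mentioned above reduces depth estimation to a plane minus the bed. A minimal sketch follows, assuming a synthetic bathymetry grid and made-up slope and stage values (none of these numbers come from the study):

```python
import numpy as np

# The water surface is treated as a fixed plane dipping downstream at slope S,
# so flow depth is simply lid elevation minus bed elevation, clipped at zero
# for dry cells. Grid, slope, and elevations below are illustrative.

def planar_lid_depths(bed, upstream_stage, slope, dx):
    """bed: 2-D array of bed elevations (m), rows ordered downstream;
    upstream_stage: water-surface elevation at the upstream row (m);
    slope: water-surface slope (dimensionless, ~1e-5 for a large sand-bed river);
    dx: downstream grid spacing (m)."""
    nrows = bed.shape[0]
    downstream_dist = np.arange(nrows)[:, None] * dx
    lid = upstream_stage - slope * downstream_dist   # planar water surface
    return np.clip(lid - bed, 0.0, None)             # depth, zero where dry

bed = np.random.uniform(-8.0, 2.0, size=(300, 50))   # hypothetical bathymetry
depths = planar_lid_depths(bed, upstream_stage=3.0, slope=4e-5, dx=100.0)
print(depths.mean(), depths.max())
```

The appeal of the approach is visible in the sketch: with a near-planar surface, depth never has to be solved for iteratively, which is exactly what fails on steep, shallow rivers.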

Relevance: 90.00%

Abstract:

Helical tomotherapy is a relatively new intensity-modulated radiation therapy (IMRT) treatment for which room shielding has to be reassessed, for the following reasons. The beam-on time needed to deliver a given target dose is increased, leading to a weekly workload typically one order of magnitude higher than that of conventional radiation therapy. Moreover, the special configuration of tomotherapy units does not allow the use of standard shielding calculation methods. A conventional linear accelerator must be shielded against primary, leakage, and scatter photon radiation. For tomotherapy, primary radiation is no longer the main shielding issue, since a beam stop mounted on the gantry directly opposite the source intercepts it. On the other hand, because of the longer irradiation time, accelerator head leakage becomes a major concern. An analytical model based on geometric considerations has been developed to determine leakage radiation levels throughout the room for continuous gantry rotation. Compared with leakage radiation, scatter radiation is a minor contribution, and since tomotherapy units operate at a nominal energy of 6 MV, neutron production is negligible. This work proposes a synthetic and conservative model for calculating the shielding requirements of the Hi-Art II TomoTherapy unit. Finally, the required concrete shielding thickness is given for several positions of interest.
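To convey the geometric idea behind such a leakage model (this is not the authors' formulation), one can average an inverse-square leakage source over the gantry circle; the sketch below does only that, ignoring attenuation by any shielding, with illustrative distances and an arbitrary dose unit:

```python
import numpy as np

# Rough geometric sketch: head leakage is treated as an isotropic inverse-square
# source travelling on a ring of radius r_g (the gantry circle), and the dose at
# a point of interest is averaged over a full, continuous rotation.

def rotation_averaged_leakage(point_xyz, iso_to_source=0.85,
                              leakage_dose_at_1m=1.0, n_angles=360):
    """Average inverse-square leakage at point_xyz (metres, isocentre at the
    origin) over one gantry rotation. leakage_dose_at_1m is the leakage dose
    at 1 m from the source (arbitrary units); both values are assumptions."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    # Source positions on the gantry circle in the vertical (x-z) plane.
    src = np.stack([iso_to_source * np.cos(angles),
                    np.zeros_like(angles),
                    iso_to_source * np.sin(angles)], axis=1)
    d2 = np.sum((src - np.asarray(point_xyz)) ** 2, axis=1)
    return leakage_dose_at_1m * np.mean(1.0 / d2)

# Example: a point 4 m from the isocentre, lateral to the unit.
print(rotation_averaged_leakage([4.0, 0.0, 0.0]))
```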

Relevance: 90.00%

Abstract:

Because data on rare species are usually sparse, efficient ways to sample additional data are important. Traditional sampling approaches are of limited value for rare species because a very large proportion of randomly chosen sampling sites are unlikely to shelter the species. For these species, spatial predictions from niche-based distribution models can be used to stratify the sampling and increase sampling efficiency; the newly sampled data are then used to improve the initial model. Applying this approach repeatedly is an adaptive process that may increase the number of new occurrences found. We illustrate the approach with a case study of a rare and endangered plant species in Switzerland and a simulation experiment. Our field survey confirmed that the method helps in discovering new populations of the target species in remote areas where the predicted habitat suitability is high. In our simulations the model-based approach provided a significant improvement over simple random sampling, by a factor of 1.8 to 4 depending on the measure. In terms of cost, this approach may save up to 70% of the time spent in the field.
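A minimal sketch of the suitability-stratified sampling idea, with a made-up suitability map and arbitrary threshold and allocation (the study's actual stratification rules may differ):

```python
import numpy as np

# Sites predicted highly suitable by the distribution model are over-sampled
# relative to simple random sampling; newly found presences would then be fed
# back to refit the model, making the procedure adaptive.

rng = np.random.default_rng(0)
n_sites = 10_000
suitability = rng.beta(0.5, 5.0, n_sites)  # hypothetical predicted suitability

def stratified_sample(suitability, n, high_threshold=0.6, high_share=0.8):
    """Draw n sites: high_share of them from the high-suitability stratum,
    the remainder from the rest (so low-suitability areas stay represented)."""
    high = np.flatnonzero(suitability >= high_threshold)
    low = np.flatnonzero(suitability < high_threshold)
    n_high = min(int(round(high_share * n)), high.size)
    picks_high = rng.choice(high, size=n_high, replace=False)
    picks_low = rng.choice(low, size=n - n_high, replace=False)
    return np.concatenate([picks_high, picks_low])

sample = stratified_sample(suitability, n=100)
print("mean suitability of sampled sites:", suitability[sample].mean())
```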

Relevance: 90.00%

Abstract:

At the beginning of the 21st century, a new social arrangement of work poses a series of questions and challenges to scholars who aim to help people develop their working lives. Given the globalization of career counseling, we decided to address these issues and then to formulate potentially innovative responses in an international forum. We used this approach to avoid the difficulties of creating models and methods in one country and then trying to export them to other countries where they would be adapted for use. This article presents the initial outcome of this collaboration, a counseling model and methods. The life-designing model for career intervention endorses five presuppositions about people and their work lives: contextual possibilities, dynamic processes, non-linear progression, multiple perspectives, and personal patterns. Thinking from these five presuppositions, we have crafted a contextualized model based on the epistemology of social constructionism, particularly recognizing that an individual's knowledge and identity are the product of social interaction and that meaning is co-constructed through discourse. The life-design framework for counseling implements the theories of self-constructing [Guichard, J. (2005). Life-long self-construction. International Journal for Educational and Vocational Guidance, 5, 111-124] and career construction [Savickas, M. L. (2005). The theory and practice of career construction. In S. D. Brown & R. W. Lent (Eds.), Career development and counselling: putting theory and research to work (pp. 42-70). Hoboken, NJ: Wiley] that describe vocational behavior and its development. Thus, the framework is structured to be life-long, holistic, contextual, and preventive.

Relevance: 90.00%

Abstract:

The paper presents some contemporary approaches to spatial environmental data analysis. The main topics concentrate on decision-oriented problems of environmental spatial data mining and modeling: valorization and representativity of data with the help of exploratory data analysis, spatial predictions, probabilistic and risk mapping, and the development and application of conditional stochastic simulation models. The innovative part of the paper presents an integrated/hybrid model: machine learning (ML) residuals sequential simulations (MLRSS). The models are based on multilayer perceptron and support vector regression ML algorithms used for modeling long-range spatial trends, combined with sequential simulations of the residuals. ML algorithms deliver non-linear solutions for spatially non-stationary problems, which are difficult for geostatistical approaches. Geostatistical tools (variography) are used to characterize the performance of ML algorithms by analyzing the quality and quantity of the spatially structured information extracted from the data. Sequential simulations provide an efficient assessment of uncertainty and spatial variability. A case study of the Chernobyl fallout illustrates the performance of the proposed model. It is shown that probability mapping, provided by the combination of ML data-driven and geostatistical model-based approaches, can be used efficiently in the decision-making process.
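A bare-bones sketch of the MLRSS idea on synthetic data follows. The trend step uses support vector regression, as in the paper, but the residual step below is heavily simplified (independent draws matched to the residual variance) and merely stands in for the variogram-based sequential simulation the paper describes; all data and parameters are invented.

```python
import numpy as np
from sklearn.svm import SVR

# An ML model (here SVR) captures the long-range spatial trend; the spatially
# structured residuals would then be simulated sequentially with a
# geostatistical model. The residual simulation below is a placeholder.

rng = np.random.default_rng(1)
coords = rng.uniform(0.0, 100.0, size=(400, 2))                  # sample locations
values = (np.sin(coords[:, 0] / 15.0) + 0.02 * coords[:, 1]
          + rng.normal(0.0, 0.3, 400))                           # synthetic field

trend_model = SVR(C=10.0, gamma=0.05).fit(coords, values)        # long-range trend
residuals = values - trend_model.predict(coords)

def simulate(coords_new, n_realisations=100):
    """Trend prediction plus simulated residuals at new locations."""
    trend = trend_model.predict(coords_new)
    sims = trend + rng.normal(0.0, residuals.std(),
                              size=(n_realisations, len(coords_new)))
    return sims                                                  # (n_real, n_points)

grid = rng.uniform(0.0, 100.0, size=(50, 2))
sims = simulate(grid)
prob_exceed = (sims > 1.0).mean(axis=0)   # one entry of a probability map
print(prob_exceed[:5])
```

Averaging exceedance indicators over realisations, as in the last lines, is how the combined approach yields the probability maps used for decision-making.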

Relevance: 90.00%

Abstract:

A better understanding of stromatolites and microbial mats is an important topic in biogeosciences: it contributes to the study of the earliest forms of life on Earth, sharpens our view of the ecology of microbial communities and of the contribution of microorganisms to biomineralization, and even lays some groundwork for research in exobiology. Modelling, in turn, is a powerful tool used in the natural sciences for the theoretical study of various phenomena. Models are usually built on a system of differential equations, and results are obtained by solving that system. The software available for implementing models includes mathematical solvers and general-purpose simulation packages. The main objective of this thesis is to develop models and software that help, through simulation, to understand the functioning of stromatolites and microbial mats. The software was written in C++ from scratch for maximum performance and flexibility; this approach allows models to be built that are far more specific and appropriate to the phenomena being modelled than general-purpose software permits. First, we studied stromatolite growth and morphology. We built a three-dimensional model based on diffusion-limited aggregation, implemented as two C++ applications: a simulation engine, which can run a batch of simulations and produce result files, and a visualization tool, which allows the results to be analysed in three dimensions. After verifying that this model can indeed reproduce the growth and morphology of several types of stromatolites, we introduced a sedimentation process as an external factor. This led to interesting results and supported the hypothesis that stromatolite morphology may be the result of external factors as much as of internal ones. This is important because stromatolite classification is usually based on morphology, which presumes that a stromatolite's shape depends on internal factors only (i.e. the microbial mat); our findings contradict this commonly held assumption. Second, we investigated the functional aspects of microbial mats in more depth. We built a two-dimensional reaction-diffusion model based on discrete simulation, implemented as a C++ application for setting up and running simulations. Comparing the simulation results with real-world data, we verified that the model can indeed mimic the behaviour of certain microbial mats. We were thus able to propose and test hypotheses about how certain microbial mats function, helping to clarify aspects such as the dynamics of elements, in particular sulfur and oxygen. In conclusion, this work produced software dedicated to the simulation of microbial mats from both a morphological and a functional point of view, following two different approaches, one holistic and the other more analytical. The software is free and distributed under the GPL (General Public License).
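For flavour, a toy two-dimensional diffusion-limited aggregation is sketched below; the thesis model is three-dimensional, written in C++, and couples DLA with processes such as sedimentation, none of which is reproduced here. Grid size, particle count, and sticking rule are all arbitrary.

```python
import random

# Toy 2-D diffusion-limited aggregation: particles random-walk from above and
# stick when adjacent to the growing structure, producing dendritic growth of
# the kind the thesis extends to stromatolite morphologies.

random.seed(42)
WIDTH, HEIGHT = 120, 80
grid = [[False] * WIDTH for _ in range(HEIGHT)]
for x in range(WIDTH):
    grid[0][x] = True                     # row 0 is the substrate

def neighbours_occupied(x, y):
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < WIDTH and 0 <= ny < HEIGHT and grid[ny][nx]:
            return True
    return False

def release_walker(release_height):
    """Random-walk one particle from above until it sticks or escapes."""
    x, y = random.randrange(WIDTH), release_height
    while 0 < y < HEIGHT - 1:
        if neighbours_occupied(x, y):
            grid[y][x] = True
            return y
        dx, dy = random.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
        x, y = (x + dx) % WIDTH, y + dy   # periodic sideways, open top
    return None                           # escaped without sticking

top = 1
for _ in range(4000):
    landed = release_walker(min(top + 5, HEIGHT - 2))
    if landed is not None:
        top = max(top, landed)

print("max height reached:", top)
```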

Relevance: 90.00%

Abstract:

BACKGROUND: Until now, the risks of public exposure to a sudden decompression have been related to civil aviation and, to a lesser extent, to diving activities. However, engineers are currently planning the use of low-pressure environments for underground transportation. This method has been proposed for the future Swissmetro, a high-speed underground train designed for inter-urban links in Switzerland. HYPOTHESIS: The use of a low-pressure environment in an underground public transportation system must be considered carefully with regard to decompression risks. Indeed, because of the enclosed environment, both the decompression kinetics and the safety measures may differ from aviation decompression cases. METHOD: A theoretical study of decompression risks was conducted at an early stage of the Swissmetro project. A three-compartment theoretical model, based on the physics of fluids, was implemented with flow-processing software (Ithink 5.0). Simulations were conducted to analyze decompression scenarios over a wide range of parameters relevant to the Swissmetro main study. RESULTS: Simulation results cover a wide range, from slow to explosive decompression, depending on the simulation parameters. Not surprisingly, the area of the leaking orifice has a tremendous impact on barotraumatic effects, while the tunnel pressure may significantly affect both hypoxic and barotraumatic effects. Calculations also showed that reducing the free space around the vehicle may significantly mitigate an accidental decompression. CONCLUSION: Numerical simulations are relevant for assessing decompression risks in the future Swissmetro system. The decompression model has proven useful in assisting both design choices and safety management.
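To illustrate the kind of physics involved (not the study's three-compartment Ithink model), the sketch below integrates a single cabin venting through an orifice into a low-pressure tunnel, using standard choked/subsonic compressible orifice flow; the geometry, pressures, and discharge coefficient are invented.

```python
import math

# Two-compartment toy: a cabin at sea-level pressure leaks through an orifice
# into a partially evacuated tunnel. Isothermal gas, ideal-gas law p = m R T / V.

GAMMA, R, T = 1.4, 287.0, 293.0          # air: heat-capacity ratio, J/(kg K), K
CRIT = (2.0 / (GAMMA + 1.0)) ** (GAMMA / (GAMMA - 1.0))  # ~0.528 choked threshold

def orifice_mass_flow(p_up, p_down, area, cd=0.8):
    """Compressible mass flow (kg/s) through an orifice, choked or subsonic."""
    ratio = p_down / p_up
    if ratio <= CRIT:                     # choked flow
        term = (2.0 / (GAMMA + 1.0)) ** ((GAMMA + 1.0) / (2.0 * (GAMMA - 1.0)))
        return cd * area * p_up * math.sqrt(GAMMA / (R * T)) * term
    # subsonic flow
    term = (2.0 * GAMMA / ((GAMMA - 1.0) * R * T)) * \
           (ratio ** (2.0 / GAMMA) - ratio ** ((GAMMA + 1.0) / GAMMA))
    return cd * area * p_up * math.sqrt(term)

p_cabin, p_tunnel = 101_325.0, 10_000.0   # Pa; hypothetical tunnel pressure
volume, area, dt = 250.0, 0.01, 0.05      # cabin m^3, leak m^2, time step s

t = 0.0
while p_cabin > 1.02 * p_tunnel:          # integrate until near-equalisation
    mdot = orifice_mass_flow(p_cabin, p_tunnel, area)
    p_cabin -= mdot * R * T / volume * dt  # isothermal: p = m R T / V
    t += dt
print(f"cabin reaches tunnel pressure in ~{t:.1f} s")
```

Varying `area` and `p_tunnel` in such a model is what separates slow from explosive decompression scenarios, which mirrors the sensitivity the abstract reports.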

Relevance: 90.00%

Abstract:

Computed tomography (CT) is an imaging technique in which interest has grown steadily since its introduction in the early 1970s. Today it is an extensively used modality thanks to its ability to produce high-quality diagnostic images. However, even though CT brings an undeniable benefit to patient care, the dramatic increase in the number of CT examinations performed has raised concerns about the potentially harmful effects of ionising radiation on the population. Among these, the induction of cancers associated with exposure to diagnostic X-ray procedures remains one of the major risks. To ensure that the benefit-risk ratio remains in the patient's favour, the delivered dose must allow the correct diagnosis to be made while avoiding images of unnecessarily high quality. This optimisation process, already an important concern for adult patients, must become a priority when examining children or young adults, in particular in follow-up studies requiring several CT examinations over the patient's lifetime. Children and young adults are indeed more sensitive to radiation because of their faster metabolism, and harmful consequences are more probable because of their longer life expectancy. The recent introduction of iterative reconstruction algorithms, designed to substantially reduce patient exposure, is certainly one of the major advances in CT, but it also complicates the quality assessment of the images these algorithms produce. The goal of this work was to propose a strategy for investigating the potential of iterative reconstructions to reduce dose without compromising the ability to answer the diagnostic question. The main difficulty lies in having a clinically relevant way to assess image quality; to ensure a pertinent choice of image quality criteria, the work was carried out in close collaboration with radiologists. The first step was to characterise image quality in musculoskeletal examinations, focusing in particular on the noise and spatial resolution of images reconstructed with iterative algorithms. Analysing these physical parameters allowed radiologists to adapt their acquisition and reconstruction protocols with an estimate of the loss of image quality to expect from dose reduction. The work also addressed the loss of low-contrast detectability associated with dose reduction, a major concern in abdominal examinations. Knowing that alternatives to the classical Fourier-space metrics were needed to characterise image quality, we relied on mathematical model observers, with our experimental parameters determining the type of model to use. Ideal model observers were applied to characterise image quality when purely physical results about signal detectability were sought, whereas anthropomorphic model observers were used in clinical contexts where the results had to be compared with those of human observers, taking advantage of the human visual system elements these models incorporate. This work confirmed that model observers make it possible to assess image quality with a task-based approach, which in turn establishes a bridge between medical physicists and radiologists. It also demonstrated that statistical iterative reconstructions have the potential to reduce the delivered dose without impairing diagnostic quality. Among the different types of iterative reconstruction, model-based algorithms offer the greatest optimisation potential, since the images they produce can still lead to an accurate diagnosis even when acquired at very low dose. Finally, this work clarified the role of the medical physicist in CT: standard metrics remain useful for assessing a unit's compliance with legal requirements, but model observers are indispensable for optimising imaging protocols.
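As an illustration of an anthropomorphic model observer, the sketch below runs a channelized Hotelling observer on synthetic white-noise images with difference-of-Gaussians channels; real applications use CT reconstructions, correlated noise, and richer channel sets (e.g. Gabor or Laguerre-Gauss), so every parameter here is a stand-in.

```python
import numpy as np

# Channelized Hotelling observer (CHO) on synthetic data: project images onto a
# few channels, build the Hotelling template in channel space, and report the
# detectability index d' for a low-contrast Gaussian signal in white noise.

rng = np.random.default_rng(2)
N = 64                                                  # image size
yy, xx = np.mgrid[:N, :N] - N // 2
r = np.hypot(xx, yy)

signal = 5.0 * np.exp(-r ** 2 / (2 * 4.0 ** 2))         # low-contrast signal

def dog_channel(s):
    """Difference-of-Gaussians channel, a simple stand-in for dense sets."""
    return np.exp(-r ** 2 / (2 * s ** 2)) - np.exp(-r ** 2 / (2 * (1.66 * s) ** 2))

channels = np.stack([dog_channel(s).ravel() for s in (2.0, 4.0, 8.0, 16.0)])

def channel_outputs(images):
    return images.reshape(len(images), -1) @ channels.T  # (n_images, n_channels)

n_train = 200
absent_imgs = rng.normal(0.0, 10.0, size=(n_train, N, N))           # noise only
present_imgs = rng.normal(0.0, 10.0, size=(n_train, N, N)) + signal  # noise + signal
v_absent = channel_outputs(absent_imgs)
v_present = channel_outputs(present_imgs)

# Hotelling template in channel space.
S = 0.5 * (np.cov(v_absent.T) + np.cov(v_present.T))
w = np.linalg.solve(S, v_present.mean(0) - v_absent.mean(0))

t_a, t_p = v_absent @ w, v_present @ w
d_prime = (t_p.mean() - t_a.mean()) / np.sqrt(0.5 * (t_p.var() + t_a.var()))
print(f"CHO detectability d' = {d_prime:.2f}")
```

Tracking how such a d' degrades as simulated dose decreases is the task-based comparison the thesis advocates over Fourier-space metrics.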