985 results for Realistic String Model
Abstract:
This chapter identifies some important issues in developing realistic simulation models based on new economic geography and suggests a direction for resolving the difficulties. Specifically, taking the IDE Geographical Simulation Model (IDE-GSM) as an example, we discuss several problems in developing a realistic simulation model for East Asia. The first and largest problem in this region is the lack of reliable economic datasets at the sub-national level, an issue that needs to be resolved in the long term. To deal with the existing situation in the short term, however, we utilize several techniques to produce more realistic and reliable simulation models. One key compromise is to use a 'topology' representation of geography, rather than the 'mesh' or 'grid' representations, or the simple 'straight lines' connecting each pair of cities, used in many other models. In addition, a modal choice model that takes both monetary and time costs into consideration appears to work well.
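The modal choice idea can be illustrated with a standard multinomial logit over generalized transport costs (money plus monetized time). The sketch below is illustrative only: the mode names, cost figures, value-of-time parameter, and dispersion parameter are hypothetical, not taken from IDE-GSM.

```python
import math

def mode_choice_probabilities(modes, value_of_time, theta=1.0):
    """Multinomial logit over generalized costs.

    modes: dict mapping mode name -> (money_cost, time_hours); hypothetical values.
    value_of_time: willingness to pay per hour, converting time into money.
    theta: dispersion parameter of the logit model.
    """
    # Generalized cost = monetary cost + monetized travel time.
    gc = {m: money + value_of_time * hours for m, (money, hours) in modes.items()}
    # Logit weights decay exponentially with generalized cost.
    w = {m: math.exp(-theta * c) for m, c in gc.items()}
    total = sum(w.values())
    return {m: v / total for m, v in w.items()}

# Hypothetical costs for shipping one tonne between two cities.
print(mode_choice_probabilities(
    {"road": (100.0, 10.0), "sea": (40.0, 48.0), "air": (400.0, 2.0)},
    value_of_time=5.0, theta=0.02))
```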
Abstract:
While fluoroscopy is still the most widely used imaging modality to guide cardiac interventions, the fusion of pre-operative Magnetic Resonance Imaging (MRI) with real-time intra-operative ultrasound (US) is rapidly gaining clinical acceptance as a viable, radiation-free alternative. In order to improve the detection of the left ventricular (LV) surface in 4D ultrasound, we propose to take advantage of the pre-operative MRI scans to extract a realistic geometrical model representing the patient's cardiac anatomy. This can serve as prior information in the interventional setting, making it possible to increase the accuracy of the anatomy extraction step in US data. We make use of a real-time 3D segmentation framework that has recently been used to solve the LV segmentation problem in MR and US data independently, and we take advantage of this common link to introduce the prior information as a soft penalty term in the ultrasound segmentation algorithm. We tested the proposed algorithm on a clinical dataset of 38 patients undergoing both MR and US scans. The introduction of the personalized shape prior improves the accuracy and robustness of the LV segmentation, as supported by the error reduction with respect to core-lab manual segmentation of the same US sequences.
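A soft shape penalty of this kind is commonly written as an extra term in the segmentation energy. The LaTeX fragment below is a generic sketch of that formulation in our own notation, not the authors' exact functional.

```latex
% Generic segmentation energy with a soft shape prior (illustrative form).
% E_data drives the contour phi toward image evidence in the US volume I_US;
% the penalty term discourages deviation from the MRI-derived prior phi_prior,
% with lambda >= 0 controlling how strongly the prior is enforced.
\begin{equation}
  E(\phi) = E_{\mathrm{data}}(\phi; I_{\mathrm{US}})
  + \lambda \int_{\Omega} \bigl(\phi(\mathbf{x}) - \phi_{\mathrm{prior}}(\mathbf{x})\bigr)^{2}\, d\mathbf{x}
\end{equation}
```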
Abstract:
Modern multicore processors for the embedded market are often heterogeneous in nature. One commonly available feature is a set of multiple sleep states with varying transition costs for entering and leaving them. This research effort explores energy-efficient task mapping on such a heterogeneous multicore platform to reduce the overall energy consumption of the system. This is performed in the context of a partitioned scheduling approach and a realistic power model, which improves on some of the simplifying assumptions often made in the state of the art. The developed heuristic consists of two phases: in the first phase, tasks are allocated so as to minimise their active energy consumption, while the second phase trades off a higher active energy consumption against an increased ability to exploit savings through more efficient sleep states. Extensive simulations demonstrate the effectiveness of the approach.
Abstract:
Heterogeneous multicore platforms are becoming an interesting alternative for embedded computing systems with a limited power supply, as they can execute specific tasks in an efficient manner. Nonetheless, one of the main challenges on such platforms is optimising energy consumption in the presence of temporal constraints. This paper addresses the problem of task-to-core allocation onto heterogeneous multicore platforms such that the overall energy consumption of the system is minimised. To this end, we propose a two-phase approach that considers both dynamic and leakage energy consumption: (i) the first phase allocates tasks to the cores so that the dynamic energy consumption is reduced; (ii) the second phase refines the allocation performed in the first phase in order to reach better sleep states, trading off dynamic energy consumption against the reduction in leakage energy consumption. This hybrid approach considers core frequency set-points, task energy consumption, and the sleep states of the cores to reduce the energy consumption of the system. Major value has been placed on a realistic power model, which increases the practical relevance of the proposed approach. Finally, extensive simulations have been carried out to demonstrate the effectiveness of the proposed algorithm. In the best case, energy savings of up to 18% are reached over the first-fit algorithm, which has been shown in previous work to perform better than other bin-packing heuristics for the target heterogeneous multicore platform.
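The two-phase idea can be sketched in code. Everything below is illustrative: the power figures, the utilisation-based schedulability test, and the crude trade-off rule for emptying a core are assumptions made for the sketch, not the authors' algorithm.

```python
from dataclasses import dataclass, field

@dataclass
class Core:
    name: str
    dyn_power: float    # W per unit utilisation when active (hypothetical)
    idle_power: float   # W when idle but awake (hypothetical)
    sleep_power: float  # W in the deepest sleep state (hypothetical)
    tasks: list = field(default_factory=list)

    def util(self) -> float:
        return sum(u for _, u in self.tasks)

def two_phase_allocate(tasks, cores):
    # Phase 1: place each task on the feasible core (utilisation <= 1)
    # with the lowest dynamic energy; assumes the task set is schedulable.
    for name, util in sorted(tasks, key=lambda t: -t[1]):
        best = min((c for c in cores if c.util() + util <= 1.0),
                   key=lambda c: c.dyn_power * util)
        best.tasks.append((name, util))
    # Phase 2: empty lightly loaded cores so they can enter a deep sleep
    # state, if the extra dynamic power on the host core is outweighed by
    # the leakage saved on the emptied core (crude per-unit-time test).
    for c in sorted(cores, key=Core.util):
        if not c.tasks:
            continue
        hosts = [d for d in cores if d is not c and d.util() + c.util() <= 1.0]
        if not hosts:
            continue
        dst = min(hosts, key=lambda d: d.dyn_power)
        extra_dyn = (dst.dyn_power - c.dyn_power) * c.util()
        if extra_dyn < c.idle_power - c.sleep_power:
            dst.tasks.extend(c.tasks)
            c.tasks.clear()
    return cores

cores = [Core("big", 2.0, 0.5, 0.05), Core("LITTLE", 0.8, 0.2, 0.02)]
two_phase_allocate([("t1", 0.3), ("t2", 0.2), ("t3", 0.1)], cores)
for c in cores:
    print(c.name, c.tasks)
```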
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
The information necessary to distinguish a local inhomogeneous mass density field from its spatial average on a compact domain of the universe can be measured by relative information entropy. The Kullback-Leibler (KL) formula arises very naturally in this context; however, it provides a very complicated way to compute the mutual information between spatially separated but causally connected regions of the universe in a realistic, inhomogeneous model. To circumvent this issue, by considering a parametric extension of the KL measure, we develop a simple model to describe the mutual information which is entangled via the gravitational field equations. We show that the Tsallis relative entropy can be a good approximation in the case of small inhomogeneities, and for measuring the independent relative information inside the domain we propose the Rényi relative entropy formula.
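For reference, the one-parameter families mentioned here can be written in their textbook discrete forms as below; both reduce to the KL divergence in the limit q → 1. The authors' continuum versions and normalizations may differ.

```latex
% Kullback-Leibler relative entropy and its Tsallis and Renyi extensions,
% for discrete distributions p and r; both recover KL as q -> 1.
\begin{align}
  D_{\mathrm{KL}}(p\,\|\,r) &= \sum_i p_i \ln \frac{p_i}{r_i} \\
  D^{\mathrm{T}}_{q}(p\,\|\,r) &= \frac{1}{q-1}\Bigl(\sum_i p_i^{\,q}\, r_i^{\,1-q} - 1\Bigr) \\
  D^{\mathrm{R}}_{q}(p\,\|\,r) &= \frac{1}{q-1}\,\ln \sum_i p_i^{\,q}\, r_i^{\,1-q}
\end{align}
```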
Abstract:
Infective endocarditis (IE) is a life-threatening disease that should be prevented whenever possible. Over the last 50 years, American and European guidelines for IE prophylaxis proposed the use of antibiotics in patients at risk undergoing dental or medico-surgical procedures that might induce high but transient bacteremia. However, recent epidemiological studies indicate that IE occurs independently of medico-surgical procedures and of whether or not patients had taken antibiotic prophylaxis, i.e., through cumulative exposure to random low-grade bacteremia associated with daily activities (e.g. tooth brushing) in the case of oral streptococci, or with a colonized site or infected device in the case of staphylococci. Accordingly, the most recent American and European guidelines for IE prophylaxis were revisited and updated to drastically restrain antibiotic use. Nevertheless, the relative risk of IE represented by such cumulative low-grade bacteremia had never been demonstrated experimentally. We developed a new model of experimental IE induced by continuous inoculation of low numbers of bacteria, mimicking repeated low-grade bacteremia in humans, and compared the infectivity of Streptococcus gordonii and Staphylococcus aureus in this model to that in the model producing brief, high-level bacteremia. We demonstrated that, after injection of identical bacterial numbers, the rate of infected vegetations was similar in both types of challenge. These experimental results support the hypothesis that cumulative exposure to low-grade bacteremia, outside the context of procedure-related bacteremia, represents a genuine risk of IE, as suggested by human epidemiological studies. In addition, they validate the newer guidelines for IE prophylaxis, which drastically limit the procedures in which antibiotic prophylaxis is indicated. Nevertheless, these refreshed guidelines leave the vast majority (> 90%) of potential IE cases without alternative propositions of prevention, and novel strategies must be considered to propose effective alternative and "global" measures to prevent IE initiation. The more realistic experimental model of IE induced by low-grade bacteremia provides an accurate experimental setting in which to study new preventive measures applying to cumulative exposure to low bacterial numbers.
Since, in a context of spontaneous low-grade bacteremia, antibiotics are unlikely to solve the problem of IE prevention, we addressed the role of antiplatelet and anticoagulant agents in the prophylaxis of experimental IE induced by S. gordonii and S. aureus. The logic of this approach was based on the fact that platelets are key players in vegetation formation and enlargement, and on the fact that bacteria capable of interacting with platelets are more prone to induce IE. Antiplatelet agents included the COX1 inhibitor aspirin, the ADP receptor (P2Y12) inhibitor ticlopidine, and two inhibitors of the platelet fibrinogen receptor GPIIb/IIIa, eptifibatide and abciximab. Anticoagulants included the thrombin inhibitor dabigatran etexilate and the vitamin K antagonist acenocoumarol. Aspirin, ticlopidine, or eptifibatide alone failed to prevent aortic infection (> 75% infected animals). In contrast, the combination of aspirin with ticlopidine, as well as abciximab, protected 45% to 88% of animals against IE due to S. gordonii and S. aureus. The antithrombin dabigatran etexilate protected 75% of rats against IE due to S. aureus, but failed (< 30% protection) against S. gordonii. Acenocoumarol had no effect against either organism. Overall, these results suggest a possible role for antiplatelet agents and dabigatran etexilate in the prophylaxis of IE in humans in a context of recurrent low-grade bacteremia. However, the potential beneficial effect of antiplatelet agents should be balanced against the risk of bleeding and the fact that platelets play an important role in the host defenses against intravascular infections. In addition, the potential dual benefit of dabigatran etexilate might be revisited in patients with prosthetic valves, who require life-long anticoagulation and in whom S. aureus IE is associated with a high mortality rate. Because the antiplatelet and anticoagulant approach might be limited in the context of S. aureus bacteremia, other prophylactic strategies for the prevention of S. aureus IE, such as vaccination against adhesion proteins, were tested. The S. aureus surface proteins clumping factor A (ClfA), a fibrinogen-binding protein, and fibronectin-binding protein A (FnbpA) are critical virulence factors for the initiation and development of IE. Thus, they represent key targets for vaccine development against this disease. Recently, numerous reports have described that the harmless bacterium Lactococcus lactis can be used as a vector for the efficient delivery of antigens in vivo, and that this approach is a promising vaccination strategy against bacterial infections. We therefore explored the capacity of immunization with non-living recombinant L. lactis ClfA, L. lactis FnbpA, or L. lactis expressing ClfA together with Fnbp (a truncated form of FnbpA retaining only the fibronectin-binding domain but lacking the fibrinogen-binding domain A [L. lactis ClfA/Fnbp]) to protect against S. aureus experimental IE. L. lactis ClfA was used as the immunization agent against the laboratory strain S. aureus Newman (expressing ClfA, but lacking FnbpA). L. lactis ClfA, L. lactis FnbpA, and L. lactis ClfA/Fnbp were used as immunization agents against the endocarditis isolate S. aureus P8 (expressing both ClfA and FnbpA). Immunization with L. lactis ClfA produced functional anti-ClfA antibodies, which were able to block the binding of S. aureus Newman to fibrinogen in vitro and protected 13/19 (69%) animals from IE due to S. aureus Newman (P < 0.05 compared to controls).
Immunization with L. lactis ClfA, L. lactis FnbpA, or L. lactis ClfA/Fnbp produced antibodies against each antigen. However, these were not sufficient to block S. aureus P8 binding to fibrinogen and fibronectin in vitro. Moreover, immunization with L. lactis ClfA or L. lactis FnbpA was ineffective in vivo (< 10% protected animals), and immunization with L. lactis ClfA/Fnbp conferred only limited protection from IE (8/23 protected animals; P < 0.05 compared to controls) after challenge with S. aureus P8. Together, these results indicate that L. lactis is an efficient antigen-delivery system potentially useful for preventing S. aureus IE. They also indicate that expressing multiple antigens, still to be identified, in L. lactis will be necessary to prevent IE due to clinical S. aureus strains fully equipped with virulence determinants. In summary, our study has demonstrated experimentally, for the first time, that low-grade bacteremia, mimicking the bacteremia occurring outside of a clinical intervention, is as prone to induce experimental IE as the high-grade bacteremia following medico-surgical procedures. In this context, where the use of antibiotics for the prophylaxis of IE is limited, we showed that other prophylactic measures, such as the use of antiplatelets, anticoagulants, or vaccination employing L. lactis as a delivery vector for bacterial antigens, are reasonable alternatives that warrant further investigation.
Abstract:
Preface. In this thesis we study several questions related to transaction data measured at the individual level. The questions are addressed in three essays. In the first essay we use tick-by-tick data to estimate non-parametrically the jump process of 37 large stocks traded on the Paris Stock Exchange, and of the CAC 40 index. We separate total daily returns into three components (continuous trading, trading jumps, and overnight returns), and we characterize each of them. We estimate, at the individual and index levels, the contribution of each return component to total daily variability. For the index, the contribution of jumps is smaller and is compensated by the larger contribution of overnight returns. We test formally that individual stocks jump more frequently than the index, and that they do not respond independently to the arrival of news. Finally, we find that daily jumps are larger when their arrival rates are larger. At the contemporaneous level there is a strong negative correlation between jump frequency and trading activity measures. The second essay studies the general properties of the trade- and volume-duration processes for two stocks traded on the Paris Stock Exchange, one very illiquid and one relatively liquid. We estimate an autoregressive gamma process, introduced by Gouriéroux and Jasiak, whose conditional distribution belongs to the family of non-central gamma distributions (up to a scale factor). We also evaluate the ability of the process to fit the data, using the Diebold, Gunther and Tay (1998) test as well as the capacity of the model to reproduce the moments, the empirical serial correlation, and the partial serial correlation functions of the observed data. We establish that the model describes the trade-duration process of illiquid stocks correctly, but has problems fitting the trade-duration process of liquid stocks, which exhibits long-memory characteristics. When the model is adjusted to volume durations, it fits the data successfully. In the third essay we study the economic relevance of optimal liquidation strategies by calibrating a recent and realistic microstructure model with data from the Paris Stock Exchange. We distinguish the case of parameters that are constant through the day from time-varying ones. An optimization problem incorporating this realistic microstructure model is presented and solved. Our model endogenizes the number of trades required before the position is liquidated. A comparative statics exercise demonstrates the realism of our model. We find that a sell decision taken in the morning will be liquidated by the early afternoon, and that if price impacts increase over the day, the liquidation takes place more rapidly.
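The duration model of the second essay can be illustrated with a minimal simulation of an ARG(1) process under its standard Poisson-mixture-of-gammas representation; the parameter values below are arbitrary illustration values, not the calibration used in the essay.

```python
import numpy as np

def simulate_arg1(n, delta, beta, c, x0=1.0, seed=0):
    """Simulate an autoregressive gamma (ARG(1)) process.

    Standard representation: given X_t, draw Z_t ~ Poisson(beta * X_t),
    then X_{t+1} = c * Gamma(delta + Z_t).  The process is stationary for
    beta * c < 1, with persistence beta * c.
    """
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = x0
    for t in range(n - 1):
        z = rng.poisson(beta * x[t])         # Poisson mixing variable
        x[t + 1] = c * rng.gamma(delta + z)  # conditional gamma draw
    return x

durations = simulate_arg1(10_000, delta=1.2, beta=0.8, c=1.0)
print(durations.mean(), np.corrcoef(durations[:-1], durations[1:])[0, 1])
```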
Abstract:
The string model with N=2 world-sheet supersymmetry is approached via ghosts, Becchi-Rouet-Stora-Tyutin cohomology, and bosonization. Some amplitudes involving massless scalars and vectors are computed at the tree level. The constraints of locality on the spectrum are analyzed. An attempt is made to "decompactify" the model into a four-dimensional theory.
Abstract:
Once deposited, a sediment is affected during burial by a set of processes, grouped under the term diagenesis, that transform it sometimes only slightly and sometimes enough to make it unrecognizable. These diagenetic processes affect the petrophysical properties of sedimentary rocks, either improving or deteriorating their reservoir capacity. The modelling of diagenetic processes in carbonate reservoirs is still a challenge, as neither stochastic nor physicochemical simulations can correctly reproduce the complexity of features and the reservoir heterogeneity generated by these processes. An alternative way to reach this objective deals with process-like methods, which simplify the algorithms while preserving the geological concepts in the modelling process: free of explicit physico-chemical reactions, they mimic the displacement of the diagenetic fluid or fluids. The aim of the methodology is to conceive a consistent and realistic 3D model of diagenetic overprints on initial facies, resulting in petrophysical properties at the reservoir scale while representing the diagenetic phenomena at a fine scale. The principle of the method used here is that of a lattice gas automaton, used to mimic diagenetic fluid flows and to reproduce diagenetic effects through the evolution of mineralogical composition and petrophysical properties. The parameters are essentially numerical or mathematical and need to be better understood and constrained from real data derived from outcrop studies and analytical work. This method, developed within a research group, is well adapted to dolomite reservoirs through the propagation of dolomitising fluids, and has been applied to two case studies.
The first case study concerns a mid-Cretaceous rudist and granular carbonate platform succession (Urgonian Fm., Barremian-Aptian, Les Gorges du Nan, Vercors, SE France), in which several main diagenetic stages have been identified. The modelling, carried out in 2D at the section scale so as to reproduce the complex geometries associated with the diagenetic phenomena and to honour the measured dolomite proportions, focuses on shallow dolomitisation followed by a dedolomitisation stage. Because dedolomitisation is omnipresent, several hypotheses on the dolomitisation mechanism were stated and tested, and the dolomitisation was simulated under three flow models. The second case study uses data collected from outcrops on the Venetian platform (Lias, Calcari Grigi Group, Mont Compomolon, NE Italy), in which several diagenetic stages have also been identified, the main one related to per ascensum dolomitisation along fractures: the diagenetic fluids use the fracture network as a vector and preferentially affect the most micritised lithologies. In both examples, the evolution of the effects of the mimetic diagenetic fluid on mineralogical composition can be followed through space and numerical time, and helps to understand the heterogeneity of reservoir properties. Keywords: carbonates, dolomitisation, dedolomitisation, process-like modelling, lattice gas automata, random walk, memory effect.
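The lattice-gas idea can be illustrated with a toy 2-D random walk of fluid 'particles' that alter the cells they visit; the grid size, particle count, walk length, and upward drift below are invented for illustration and are not the calibrated model.

```python
import random

def dolomitise(width=40, height=20, n_particles=200, n_steps=300, seed=1):
    """Toy lattice-gas sketch: random-walking 'fluid particles' enter at the
    base of a 2-D grid and convert the limestone cells they visit (0) into
    dolomite (1).  The upward drift mimics a per ascensum fluid."""
    rng = random.Random(seed)
    grid = [[0] * width for _ in range(height)]
    for _ in range(n_particles):
        x, y = rng.randrange(width), 0  # particles enter along the base
        for _ in range(n_steps):
            grid[y][x] = 1  # the visiting fluid dolomitises this cell
            # The upward move is listed twice to bias the walk upward.
            dx, dy = rng.choice([(-1, 0), (1, 0), (0, 1), (0, 1), (0, -1)])
            x = max(0, min(width - 1, x + dx))
            y = max(0, min(height - 1, y + dy))
    return grid

for row in reversed(dolomitise()):
    print("".join(".#"[c] for c in row))
```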
Abstract:
This article first reviews the treatment that early childhood education, and the training of early childhood teachers in particular, has received in the different Spanish education laws since the arrival of democracy; it then describes a realistic training model that has proven effective in promoting the professional skills of early childhood teachers.
Abstract:
The aim of this study was to simulate blood flow in the human thoracic aorta and to understand the role of flow dynamics in the initiation and localization of atherosclerotic plaque. Blood flow dynamics were numerically simulated in three idealized and two realistic models of the human thoracic aorta. The idealized models were reconstructed from measurements available in the literature, and the realistic models were constructed by processing Computed Tomography (CT) images made available by South Karelia Central Hospital in Lappeenranta. The reconstruction of the thoracic aorta consisted of operations such as contrast adjustment, image segmentation, and 3D surface rendering. Additional design operations were performed to make the aorta models compatible with the numerical computer codes, and the image processing and design operations were performed with specialized medical image processing software. Pulsatile pressure and velocity profiles were imposed as inlet boundary conditions. The blood flow was assumed homogeneous and incompressible, and the blood was assumed to be a Newtonian fluid. The simulations with the idealized models were carried out with a Finite Element Method based computer code, while the simulations with the realistic models were carried out with a Finite Volume Method based computer code. Simulations were carried out for four cardiac cycles, and the distributions of flow, pressure, and Wall Shear Stress (WSS) observed during the fourth cycle were extensively analyzed. The aim of the simulations with the idealized models was to obtain an estimate of the flow dynamics in a realistic aorta model; the motive behind choosing three aorta models with distinct features was to understand the dependence of flow dynamics on aorta anatomy. A highly disturbed and non-uniform distribution of velocity and WSS was observed in the aortic arch, near the brachiocephalic, left common carotid, and left subclavian arteries. The WSS profiles at the roots of the branches showed significant differences as the geometry of the aorta and its branches varied: a comparison of instantaneous WSS profiles revealed that the model with straight branching arteries had relatively lower WSS than the model with curved branches. In addition, significant differences were observed in the spatial and temporal profiles of WSS, flow, and pressure. The study with the idealized models was extended to blood flow in the thoracic aorta under the effects of hypertension and hypotension: one of the idealized aorta models was modified, along with the boundary conditions, to mimic these conditions. The simulations with the realistic models extracted from CT scans demonstrated more realistic flow dynamics than the idealized models. During systole, the velocity in the ascending aorta was skewed towards the outer wall of the aortic arch, and the flow developed secondary flow patterns as it moved downstream towards the arch. Unlike in the idealized models, the distribution of flow was non-planar and heavily guided by the arterial anatomy. Flow cavitation was observed in the aorta model whose imaging included longer branches; it could not be properly observed in the model whose imaging contained shorter aortic branches.
Flow circulation was also observed at the inner wall of the aortic arch. During diastole, however, the flow profiles were almost flat and regular due to the acceleration of flow at the inlet, and the profiles were weakly turbulent during flow reversal. The complex flow patterns caused a non-uniform distribution of WSS: high WSS occurred at the junction of the branches and the aortic arch, low WSS at the proximal part of the junction, and intermediate WSS in the distal part of the junction. The pulsatile nature of the inflow caused oscillating WSS at the branch entry region and the inner curvature of the aortic arch. Based on the WSS distribution in the realistic model, one of the aorta models was altered to introduce artificial atherosclerotic plaque at the branch entry region and the inner curvature of the aortic arch. Atherosclerotic plaque causing 50% blockage of the lumen was introduced in the brachiocephalic artery, the common carotid artery, the left subclavian artery, and the aortic arch. The aims of this part of the study were to examine the effect of stenosis on the flow and WSS distributions, to understand the effect of the shape of the atherosclerotic plaque on those distributions, and to investigate the effect of the severity of lumen blockage. The results revealed that the distribution of WSS is significantly affected by a plaque causing a mere 50% stenosis, and that an asymmetric stenosis causes higher WSS in the branching arteries than a symmetric plaque. The flow dynamics within the thoracic aorta models have thus been extensively studied and reported here, and the effects of pressure and arterial anatomy on the flow dynamics investigated. The distribution of complex flow and WSS correlates with the localization of atherosclerosis. From the available results we conclude that the thoracic aorta, with its complex anatomy, is an artery highly vulnerable to the localization and development of atherosclerosis, and that flow dynamics and arterial anatomy play a role in this localization. Patient-specific image-based models can be used to identify the locations in the aorta vulnerable to the development of arterial diseases such as atherosclerosis.
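For reference, under the Newtonian blood model assumed in this study, the wall shear stress is the tangential viscous stress exerted by the flow on the vessel wall; the following is the standard definition, not a formula specific to this work.

```latex
% Wall shear stress for a Newtonian fluid: dynamic viscosity mu times the
% wall-normal gradient of the tangential velocity, evaluated at the wall.
\begin{equation}
  \tau_w = \mu \left. \frac{\partial u_t}{\partial n} \right|_{\mathrm{wall}}
\end{equation}
```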
Conventional and Reciprocal Approaches to the Forward and Inverse Problems of Electroencephalography
Abstract:
The inverse problem in electroencephalography (EEG) is the localization of current sources in the brain using the scalp surface potentials generated by those sources. An inverse solution typically involves multiple computations of scalp surface potentials, i.e., the EEG forward problem. Solving the forward problem requires models both for the underlying source configuration, the source model, and for the surrounding tissues, the head model. This thesis treats two quite distinct approaches to solving the EEG forward and inverse problems using the boundary element method (BEM): the conventional approach and the reciprocal approach. The conventional approach to the forward problem computes the surface potentials starting from dipolar current sources. The reciprocal approach, on the other hand, first determines the electric field at the dipole source sites when the surface electrodes are used to inject and withdraw a unit current; the scalar product of this electric field with the dipole sources then gives the surface potentials. The reciprocal approach promises a number of advantages over the conventional approach, including the possibility of increasing the accuracy of the surface potentials and of reducing the computational requirements of inverse solutions. In this thesis, the BEM equations for the conventional and reciprocal approaches are developed using a common formulation, the weighted residual method. The numerical implementation of both approaches to the forward problem is described for a single-dipole source model, using a three-concentric-spheres head model for which analytical solutions are available. The surface potentials are computed at either the centroids or the vertices of the BEM discretization elements. The performance of the conventional and reciprocal approaches on the forward problem is evaluated for radial and tangential dipoles of varying eccentricity and for two very different values of skull conductivity. We then determine whether the potential advantages of the reciprocal approach suggested by the forward-problem simulations can be exploited to obtain more accurate inverse solutions. Single-dipole inverse solutions are obtained by simplex minimization for both the conventional and reciprocal approaches, each in centroid and vertex versions. Once again, numerical simulations are performed on a three-concentric-spheres model for radial and tangential dipoles of varying eccentricity. The accuracy of the inverse solutions of the two approaches is compared for the two different skull conductivities, and their relative sensitivities to skull conductivity errors and to noise are evaluated. While the conventional vertex approach yields the most accurate forward solutions for the presumably more realistic skull conductivity, both the conventional and reciprocal approaches produce large errors in the scalp potentials for highly eccentric dipoles. The reciprocal approaches show the least variation in forward-solution accuracy across the different skull conductivity values.
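The reciprocal approach rests on the classical Helmholtz reciprocity relation; the following is its standard statement for EEG, written in our own notation.

```latex
% Helmholtz reciprocity applied to EEG: the scalp potential difference
% between electrodes a and b produced by a current dipole p at location r
% equals the dot product of p with the electric field E_ab(r) obtained when
% a current I is injected at electrode a and withdrawn at electrode b.
\begin{equation}
  V_a - V_b = \frac{\mathbf{E}_{ab}(\mathbf{r}) \cdot \mathbf{p}}{I}
\end{equation}
```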
In terms of single-dipole inverse solutions, the conventional and reciprocal approaches are of similar accuracy. Localization errors are small, even for the highly eccentric dipoles that produce large errors in the scalp potentials, owing to the nonlinear nature of single-dipole inverse solutions. Both approaches also proved equally robust to skull conductivity errors in the presence of noise. Finally, a more realistic head model is obtained from magnetic resonance images (MRI), from which the scalp, skull, and brain/cerebrospinal fluid (CSF) surfaces are extracted. Both approaches are validated on this type of model using real somatosensory evoked potentials recorded after median nerve stimulation in healthy subjects. The accuracy of the inverse solutions for the conventional and reciprocal approaches and their variants, assessed against anatomical sites known from MRI, is again evaluated for the two different skull conductivities, and their advantages and drawbacks, including their computational requirements, are also assessed. Once again, the conventional and reciprocal approaches produce small dipole position errors. Indeed, the position errors of single-dipole inverse solutions are inherently robust to inaccuracies in the forward solutions, but depend on the superimposed activity of other neural sources. Contrary to expectations, the reciprocal approaches do not improve dipole position accuracy relative to the conventional approaches. However, reduced computational requirements in time and memory are the main advantages of the reciprocal approaches. This type of localization is potentially useful for planning neurosurgical interventions, for example in patients with refractory focal epilepsy, who have often already undergone EEG and MRI.
Abstract:
Locality to other nodes in a peer-to-peer overlay network can be established by means of a set of landmarks shared among the participating nodes. Each node independently collects a set of latency measures to the landmark nodes, which are used as a multi-dimensional feature vector. Each peer node uses the feature vector to generate a unique scalar index that is correlated with its topological locality. A popular dimensionality reduction technique is the space-filling Hilbert curve, as it possesses good locality-preserving properties. However, there exists little comparison between the Hilbert curve and other techniques for dimensionality reduction. This work carries out a quantitative analysis of their properties. Linear and non-linear techniques for scaling the landmark vectors to a single dimension are investigated: the Hilbert curve, Sammon's mapping, and Principal Component Analysis have been used to generate a 1D space with locality-preserving properties. This work provides empirical evidence to support the use of the Hilbert curve in the context of locality preservation when generating peer identifiers by means of landmark vector analysis. A comparative analysis is carried out with an artificial 2D network model and with a realistic network topology model with the typical power-law distribution of node connectivity in the Internet. Nearest-neighbour analysis confirms the Hilbert curve to be very effective in both artificial and realistic network topologies. Nevertheless, the results for the realistic network model show that there is scope for improvement, and better techniques to preserve locality information are required.
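As an illustration of the mapping step, the classic iterative algorithm below converts a 2-D grid point (for example, a pair of quantized landmark latencies) into its 1-D Hilbert index. The quantization onto a power-of-two grid is an assumption of the sketch, and a real deployment with more than two landmarks would need a d-dimensional variant.

```python
def hilbert_index(n, x, y):
    """Map grid point (x, y) to its index along a Hilbert curve filling an
    n x n grid, where n is a power of two (classic iterative algorithm)."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        # Rotate/flip the quadrant so the curve orientation stays consistent.
        if ry == 0:
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s //= 2
    return d

# Nearby points receive nearby indices, the locality-preserving property
# exploited when deriving peer identifiers from landmark latency vectors.
print(hilbert_index(256, 12, 40), hilbert_index(256, 13, 41))
```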
Abstract:
Progress in functional neuroimaging of the brain increasingly relies on the integration of data from complementary imaging modalities in order to improve spatiotemporal resolution and interpretability. However, the usefulness of merely statistical combinations is limited, since neural signal sources differ between modalities and are related non-trivially. We demonstrate here that a mean field model of brain activity can simultaneously predict EEG and fMRI BOLD with proper signal generation and expression. Simulations are shown using a realistic head model based on structural MRI, which includes both dense short-range background connectivity and long-range specific connectivity between brain regions. The distribution of modeled neural masses is comparable to the spatial resolution of fMRI BOLD, and the temporal resolution of the modeled dynamics, importantly including activity conduction, matches the fastest known EEG phenomena. The creation of a cortical mean field model with anatomically sound geometry, extensive connectivity, and proper signal expression is an important first step towards the model-based integration of multimodal neuroimages.