35 results for Two-dimensional dynamical system


Relevance: 100.00%

Abstract:

OBJECTIVES: Due to the high prevalence of renal failure in transcatheter aortic valve replacement (TAVR) candidates, a non-contrast MR technique is desirable for pre-procedural planning. We sought to evaluate the feasibility of a novel, non-contrast, free-breathing, self-navigated three-dimensional (SN3D) MR sequence for imaging the aorta from its root to the iliofemoral run-off in comparison to non-contrast two-dimensional balanced steady-state free-precession (2D-bSSFP) imaging. METHODS: SN3D [field of view (FOV), 220-370 mm³; slice thickness, 1.15 mm; repetition/echo time (TR/TE), 3.1/1.5 ms; and flip angle, 115°] and 2D-bSSFP acquisitions (FOV, 340 mm; slice thickness, 6 mm; TR/TE, 2.3/1.1 ms; flip angle, 77°) were performed in 10 healthy subjects (all male; mean age, 30.3 ± 4.3 years) using a 1.5-T MRI system. Aortic root measurements and qualitative image ratings (four-point Likert scale) were compared. RESULTS: The mean effective aortic annulus diameter was similar for 2D-bSSFP and SN3D (26.7 ± 0.7 vs. 26.1 ± 0.9 mm, p = 0.23). The mean image quality of 2D-bSSFP (4; IQR 3-4) was rated slightly higher (p = 0.03) than SN3D (3; IQR 2-4). The mean total acquisition time for SN3D imaging was 12.8 ± 2.4 min. CONCLUSIONS: Our results suggest that a novel SN3D sequence allows rapid, free-breathing assessment of the aortic root and the aortoiliofemoral system without administration of contrast medium. KEY POINTS: • The prevalence of renal failure is high among TAVR candidates. • Non-contrast 3D MR angiography allows for TAVR procedure planning. • The self-navigated sequence provides a significantly reduced scanning time.

Relevance: 100.00%

Abstract:

Introduction: Coordination is a strategy chosen by the central nervous system to control movements and maintain stability during gait. Coordinated multi-joint movements require a complex interaction between nervous outputs, biomechanical constraints, and proprioception. Quantitatively understanding and modeling gait coordination remains a challenge. Surgeons lack a way to model and assess the coordination of patients before and after surgery of the lower limbs. Patients alter their gait patterns and their kinematic synergies when they walk faster or slower than normal speed to maintain their stability and minimize the energy cost of locomotion. The goal of this study was to provide a dynamical system approach to quantitatively describe human gait coordination and apply it to patients before and after total knee arthroplasty. Methods: A new method of quantitative analysis of interjoint coordination during gait was designed, providing a general model to capture the whole dynamics and showing the kinematic synergies at various walking speeds. The proposed model imposed a relationship among lower limb joint angles (hips and knees) to parameterize the dynamics of locomotion of each individual. An integration of different analysis tools, such as harmonic analysis, principal component analysis, and artificial neural networks, helped overcome the high dimensionality, temporal dependence, and non-linear relationships of the gait patterns. Ten patients were studied using an ambulatory gait device (Physilog®). Each participant was asked to perform two 30-m walking trials at three different speeds and to complete an EQ-5D questionnaire, a WOMAC, and a Knee Society Score. Lower limb rotations were measured by four miniature angular rate sensors mounted on each shank and thigh. The outcomes of the eight patients undergoing total knee arthroplasty, recorded pre-operatively and post-operatively at 6 weeks, 3 months, 6 months, and 1 year, were compared to those of two age-matched healthy subjects. Results: The new method provided coordination scores at various walking speeds, ranging between 0 and 10. It determined the overall coordination of the lower limbs as well as the contribution of each joint to the total coordination. The differences between the pre-operative and post-operative coordination values were correlated with the improvements in the subjective outcome scores. Although the study group was small, the results showed a new way to objectively quantify gait coordination of patients undergoing total knee arthroplasty, using only portable body-fixed sensors. Conclusion: A new method for objective gait coordination analysis has been developed, with very encouraging results regarding the objective outcome of lower limb surgery.
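
As an illustration of the kind of synergy-based scoring described above, the following sketch derives a toy 0-10 coordination index from the fraction of joint-angle variance captured by a few principal components. It is a simplification under stated assumptions, not the authors' harmonic-analysis/PCA/neural-network pipeline; all data and function names are hypothetical.

```python
import numpy as np

def coordination_score(joint_angles, n_components=2):
    """Toy coordination index from joint-angle synergies.

    joint_angles: array of shape (n_samples, n_joints), e.g. time-normalized
    hip and knee angles of both legs over one gait cycle. Returns a score in
    [0, 10]: the variance fraction captured by the first `n_components`
    principal components, scaled to 0-10.
    """
    centered = joint_angles - joint_angles.mean(axis=0)
    # Singular values carry the variance of each principal component.
    _, s, _ = np.linalg.svd(centered, full_matrices=False)
    var = s ** 2
    return 10.0 * var[:n_components].sum() / var.sum()

# Synthetic example: four joint angles driven by two shared synergies plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 200)
synergy = np.column_stack([np.sin(t), np.cos(t)])
mixing = rng.normal(size=(2, 4))
angles = synergy @ mixing + 0.05 * rng.normal(size=(200, 4))
print(round(coordination_score(angles), 2))  # close to 10 -> highly coordinated
```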

Relevance: 100.00%

Abstract:

PURPOSE: To introduce a new k-space traversal strategy for segmented three-dimensional echo planar imaging (3D EPI) that encodes two partitions per radiofrequency excitation, effectively reducing the number of excitations used to acquire a 3D EPI dataset by half. METHODS: The strategy was evaluated in the context of functional MRI applications for: image quality compared with segmented 3D EPI; temporal signal-to-noise ratio (tSNR) and the ability to detect resting state networks compared with multislice two-dimensional (2D) EPI and segmented 3D EPI; and temporal resolution (the ability to separate cardiac- and respiration-related fluctuations from the desired blood oxygen level-dependent signal of interest). RESULTS: Whole brain images with a nominal voxel size of 2 mm isotropic could be acquired with a temporal resolution under half a second using traditional parallel imaging acceleration up to 4× in the partition-encode direction and using the novel data acquisition speed-up of 2× with a 32-channel coil. With 8× data acquisition speed-up in the partition-encode direction, 3D reduced excitations (RE)-EPI produced acceptable image quality without introducing noticeable additional artifacts. Due to increased tSNR and better characterization of physiological fluctuations, the new strategy allowed detection of more resting state networks compared with multislice 2D EPI and segmented 3D EPI. CONCLUSION: 3D RE-EPI resulted in significant increases in temporal resolution for whole brain acquisitions and in improved physiological noise characterization compared with 2D EPI and segmented 3D EPI. Magn Reson Med 72:786-792, 2014. © 2013 Wiley Periodicals, Inc.
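
To make the halving of excitations concrete, here is a back-of-envelope sketch of the per-volume acquisition time. The partition count and per-excitation repetition time are assumptions chosen for illustration and are not taken from the paper.

```python
# Assumed acquisition parameters for illustration only (not from the paper).
def excitations_per_volume(n_partitions, partition_acceleration, partitions_per_excitation):
    encoded = n_partitions / partition_acceleration   # partitions left after parallel imaging
    return encoded / partitions_per_excitation        # RF excitations needed per volume

n_partitions = 64          # partition encodes for whole-brain coverage (assumed)
tr_per_excitation = 0.06   # seconds per excitation, including the EPI readout (assumed)

segmented_3d_epi = excitations_per_volume(n_partitions, 4, 1)  # one partition per excitation
re_epi = excitations_per_volume(n_partitions, 4, 2)            # two partitions per excitation

print(f"segmented 3D EPI: {segmented_3d_epi:.0f} excitations, "
      f"{segmented_3d_epi * tr_per_excitation:.2f} s per volume")
print(f"3D RE-EPI:        {re_epi:.0f} excitations, "
      f"{re_epi * tr_per_excitation:.2f} s per volume")
```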

Relevance: 100.00%

Abstract:

Breast cancer is one of the most common cancers, affecting one in eight women during their lives. Survival rates have increased steadily thanks to early diagnosis with mammography screening and more efficient treatment strategies. Post-operative radiation therapy is a standard of care in the management of breast cancer and has been shown to efficiently reduce both the local recurrence rate and breast cancer mortality. Radiation therapy is, however, associated with some late effects for long-term survivors. Radiation-induced secondary cancer is a relatively rare but severe late effect of radiation therapy. Currently, radiotherapy plans are essentially optimized to maximize tumor control and minimize late deterministic effects (tissue reactions) that are mainly associated with high doses (≫ 1 Gy). With improved cure rates and new radiation therapy technologies, it is also important to evaluate and minimize secondary cancer risks for different treatment techniques. This is a particularly challenging task due to the large uncertainties in the dose-response relationship. In contrast with late deterministic effects, secondary cancers may be associated with much lower doses, and therefore out-of-field doses (also called peripheral doses), which are typically below 1 Gy, need to be determined accurately. Out-of-field doses result from patient scatter and head scatter from the treatment unit. These doses are particularly challenging to compute, and we characterized them by Monte Carlo (MC) calculation. A detailed MC model of the Siemens Primus linear accelerator has been thoroughly validated with measurements. We investigated the accuracy of such a model for retrospective dosimetry in epidemiological studies on secondary cancers. Considering that patients in such large studies could be treated on a variety of machines, we assessed the uncertainty in reconstructed peripheral dose due to the variability of peripheral dose among various linac geometries. For large open fields (> 10 × 10 cm²), the uncertainty would be less than 50%, but for small fields and wedged fields the uncertainty in reconstructed dose could rise up to a factor of 10. It was concluded that such a model could be used for conventional treatments using large open fields only. The MC model of the Siemens Primus linac was then used to compare out-of-field doses for different treatment techniques in a female whole-body CT-based phantom. Current techniques such as conformal wedge-based radiotherapy and hybrid IMRT were investigated and compared to older two-dimensional radiotherapy techniques. MC doses were also compared to those of a commercial Treatment Planning System (TPS). While the TPS is routinely used to determine the dose to the contralateral breast and the ipsilateral lung, which are mostly out of the treatment fields, we have shown that these doses may be highly inaccurate depending on the treatment technique investigated. MC shows that hybrid IMRT is dosimetrically similar to three-dimensional wedge-based radiotherapy within the field, but offers substantially reduced doses to out-of-field healthy organs. Finally, many different approaches to risk estimation extracted from the literature were applied to the calculated MC dose distribution. Absolute risks varied substantially, as did the ratio of risk between two treatment techniques, reflecting the large uncertainties involved with current risk models.
Despite all these uncertainties, the hybrid IMRT investigated resulted in systematically lower cancer risks than any of the other treatment techniques. More epidemiological studies with accurate dosimetry are required in the future to construct robust risk models. In the meantime, any treatment strategy that reduces out-of-field doses to healthy organs should be investigated. Electron radiotherapy might offer interesting possibilities in this regard.
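
The last step above, applying published risk models to MC organ doses, can be illustrated with a minimal sketch. All doses and risk coefficients below are placeholder values chosen for illustration, not results from this work; real models (e.g. BEIR VII-type formulations) depend on age at exposure, sex, and organ-specific parameters.

```python
# Placeholder organ doses (Gy) and excess-risk coefficients (per Gy); illustrative only.
organ_dose_gy = {
    "contralateral breast": {"3D wedge-based": 0.9, "hybrid IMRT": 0.5},
    "ipsilateral lung":     {"3D wedge-based": 1.8, "hybrid IMRT": 1.2},
}
risk_per_gy = {"contralateral breast": 0.005, "ipsilateral lung": 0.004}

for organ, doses in organ_dose_gy.items():
    for technique, dose in doses.items():
        # Linear no-threshold assumption: excess risk scales linearly with organ dose.
        excess_risk = risk_per_gy[organ] * dose
        print(f"{organ:22s} {technique:14s} excess lifetime risk ≈ {excess_risk:.4f}")
```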

Relevance: 100.00%

Abstract:

Phagocytosis, whether of food particles in protozoa or bacteria and cell remnants in the metazoan immune system, is a conserved process. The particles are taken up into phagosomes, which then undergo complex remodeling of their components, called maturation. By using two-dimensional gel electrophoresis and mass spectrometry combined with genomic data, we identified 179 phagosomal proteins in the amoeba Dictyostelium, including components of signal transduction, membrane traffic, and the cytoskeleton. By carrying out this proteomics analysis over the course of maturation, we obtained time profiles for 1,388 spots and thus generated a dynamic record of phagosomal protein composition. Clustering of the time profiles revealed five clusters and 24 functional groups that were mapped onto a flow chart of maturation. Two heterotrimeric G protein subunits, Galpha4 and Gbeta, appeared at the earliest times. We showed that mutations in the genes encoding these two proteins produce a phagocytic uptake defect in Dictyostelium. This analysis of phagosome protein dynamics provides a reference point for future genetic and functional investigations.
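
As a sketch of how time profiles like these can be grouped, the snippet below clusters synthetic abundance profiles with a minimal k-means implementation. It only illustrates the clustering idea; the study's actual preprocessing and clustering method are not reproduced here, and the data are simulated.

```python
import numpy as np

def kmeans(profiles, k=5, n_iter=50, seed=0):
    """Minimal k-means on protein time profiles (rows = proteins,
    columns = maturation time points). A generic stand-in for the
    clustering step, not the paper's exact algorithm."""
    rng = np.random.default_rng(seed)
    centers = profiles[rng.choice(len(profiles), k, replace=False)]
    for _ in range(n_iter):
        # Assign each profile to its nearest cluster center.
        d = np.linalg.norm(profiles[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each center as the mean profile of its cluster.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = profiles[labels == j].mean(axis=0)
    return labels, centers

# Synthetic example: 1,388 "spots" sampled at 6 maturation time points.
rng = np.random.default_rng(1)
data = rng.normal(size=(1388, 6)).cumsum(axis=1)
labels, centers = kmeans(data, k=5)
print(np.bincount(labels))  # cluster sizes
```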

Relevance: 100.00%

Abstract:

The ultrastructure of the membrane attack complex (MAC) of complement had been described as representing a hollow cylinder of defined dimensions that is composed of the proteins C5b, C6, C7, C8, and C9. After the characteristic cylindrical structure was identified as polymerized C9 [poly(C9)], the question arose as to the ultrastructural identity and topology of the C9-polymerizing complex C5b-8. An electron microscopic analysis of isolated MAC revealed an asymmetry of individual complexes with respect to their length. Whereas the length of one boundary (± SEM) was always 16 ± 1 nm, the length of the other varied between 16 and 32 nm. In contrast, poly(C9), formed spontaneously from isolated C9, had a uniform tubule length (± SEM) of 16 ± 1 nm. On examination of MAC-phospholipid vesicle complexes, an elongated structure was detected that was closely associated with the poly(C9) tubule and that extended 16-18 nm beyond the torus of the tubule and 28-30 nm above the membrane surface. The width of this structure varied depending on its two-dimensional projection in the electron microscope. By using biotinyl C5b-6 in the formation of the MAC and avidin-coated colloidal gold particles for the ultrastructural analysis, this heretofore unrecognized subunit of the MAC could be identified as the tetramolecular C5b-8 complex. Identification also was achieved by using anti-C5 Fab-coated colloidal gold particles. A similar elongated structure, 25 nm in length (above the surface of the membrane), was observed on single C5b-8-vesicle complexes. It is concluded that the C5b-8 complex, which catalyzes poly(C9) formation, constitutes a structure of discrete morphology that remains identifiable as such in the fully assembled MAC, in which it is closely associated with the poly(C9) tubule.

Relevance: 100.00%

Abstract:

An efficient high-resolution (HR) three-dimensional (3D) seismic reflection system for small-scale targets in lacustrine settings was developed. In Lake Geneva, near the city of Lausanne, Switzerland, the offshore extension of a complex fault zone well mapped on land was chosen for testing our system. A preliminary two-dimensional seismic survey indicated structures that include a thin (<40 m) layer of subhorizontal Quaternary sediments that unconformably overlie south-east-dipping Tertiary Molasse beds and a major fault zone (Paudeze Fault Zone) that separates Plateau and Subalpine Molasse (SM) units. A 3D survey was conducted over this test site using a newly developed three-streamer system. It provided high-quality data with penetration of non-aliased signal to depths of 300 m below the water bottom for dips up to 30° and with a maximum vertical resolution of 1.1 m. The data were subjected to a conventional 3D processing sequence that included post-stack time migration. Tests with 3D pre-stack depth migration showed that such techniques can be applied to HR seismic surveys. Delineation of several horizons and fault surfaces reveals the potential for small-scale geologic and tectonic interpretation in three dimensions. Five major seismic facies and their detailed 3D geometries can be distinguished. Three fault surfaces and the top of a molasse surface were mapped in 3D. Analysis of the geometry of these surfaces and their relative orientation suggests that pre-existing structures within the Plateau Molasse (PM) unit influenced later faulting between the Plateau and Subalpine Molasse units. In particular, a change in strike of the PM bed dip may indicate a fold formed by a regional stress regime whose orientation was different from the one responsible for the creation of the Paudeze Fault Zone. This structure might have later influenced the local stress regime and caused the curved shape of the Paudeze Fault in our surveyed area.

Relevance: 100.00%

Abstract:

Three-dimensional sequence stratigraphy is a potent exploration and development tool for the discovery of subtle stratigraphic traps. Reservoir morphology, heterogeneity, and subtle stratigraphic trapping mechanisms can be better understood through systematic horizontal identification of the sedimentary facies of systems tracts, provided by three-dimensional attribute maps used as an important complement to sequential analysis of two-dimensional seismic lines and well log data. On new prospects as well as on already-producing fields, the additional input of sequential analysis on three-dimensional data enables the identification, location, and precise delimitation of new potentially productive zones. The first part of this paper presents four typical horizontal seismic facies assigned to the successive systems tracts of a third- or fourth-order sequence deposited in inner to outer neritic conditions on a clastic shelf. The construction of this synthetic representative sequence is based on the observed reproducibility of the horizontal seismic facies response to cyclic eustatic events on more than 35 sequences recorded in the Gulf Coast Plio-Pleistocene and Late Miocene, offshore Louisiana, in the West Cameron region of the Gulf of Mexico. The second part shows how three-dimensional sequence stratigraphy can contribute to localizing and understanding the sedimentary facies associated with productive zones. A case study in the early Middle Miocene Cibicides opima sands shows multiple stacked gas accumulations in the top slope fan, prograding wedge, and basal transgressive systems tracts of the third-order sequence between SB 15.5 Ma and SB 13.8 Ma.

Relevance: 100.00%

Abstract:

The object of game theory lies in the analysis of situations where different social actors have conflicting requirements and where their individual decisions will all influence the global outcome. In this framework, several games have been invented to capture the essence of various dilemmas encountered in many common and important socio-economic situations. Even though these games often succeed in helping us understand human or animal behavior in interactive settings, some experiments have shown that people tend to cooperate with each other in situations for which classical game theory strongly recommends them to do the exact opposite. Several mechanisms have been invoked to try to explain the emergence of this unexpected cooperative attitude. Among them, repeated interaction, reputation, and belonging to a recognizable group have often been mentioned. However, the work of Nowak and May (1992) showed that the simple fact of arranging the players according to a spatial structure and only allowing them to interact with their immediate neighbors is sufficient to sustain a certain amount of cooperation, even when the game is played anonymously and without repetition. Nowak and May's study and much of the following work was based on regular structures such as two-dimensional grids. Axelrod et al. (2002) showed that by randomizing the choice of neighbors, i.e. by actually giving up a strictly local geographical structure, cooperation can still emerge, provided that the interaction patterns remain stable in time. This is a first step towards a social network structure. However, following pioneering work by sociologists in the sixties, such as that of Milgram (1967), in the last few years it has become apparent that many social and biological interaction networks, and even some technological networks, have particular, and partly unexpected, properties that set them apart from regular or random graphs. Among other things, they usually display broad degree distributions and show a small-world topological structure. Roughly speaking, a small-world graph is a network where any individual is relatively close, in terms of social ties, to any other individual, a property also found in random graphs but not in regular lattices. However, in contrast with random graphs, small-world networks also have a certain amount of local structure, as measured, for instance, by a quantity called the clustering coefficient. In the same vein, many real conflicting situations in economics and sociology are well described neither by a fixed geographical position of the individuals in a regular lattice, nor by a random graph. Furthermore, it is a known fact that network structure can strongly influence dynamical phenomena such as the way diseases spread across a population and ideas or information get transmitted. Therefore, in the last decade, research attention has naturally shifted from random and regular graphs towards better models of social interaction structures. The primary goal of this work is to discover whether or not the underlying graph structure of real social networks could explain why one finds higher levels of cooperation in populations of human beings or animals than what is prescribed by classical game theory. To meet this objective, I start by thoroughly studying a real scientific coauthorship network and showing how it differs from biological or technological networks using diverse statistical measurements.
Furthermore, I extract and describe its community structure, taking into account the intensity of each collaboration. Finally, I investigate the temporal evolution of the network, from its inception to its state at the time of the study in 2006, also suggesting an effective view of it as opposed to a historical one. Thereafter, I combine evolutionary game theory with several network models, along with the studied coauthorship network, in order to highlight which specific network properties foster cooperation and to shed some light on the various mechanisms responsible for the maintenance of this same cooperation. I point out the fact that, to resist defection, cooperators take advantage, whenever possible, of the degree heterogeneity of social networks and their underlying community structure. Finally, I show that the level and stability of cooperation depend not only on the game played, but also on the evolutionary dynamics rules used and the individual payoff calculations.
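
To make the Nowak-May grid model mentioned above concrete, here is a minimal sketch of the spatial Prisoner's Dilemma with synchronous imitation of the best-scoring player in each 3x3 neighborhood on a periodic lattice. The payoff parameter, grid size, and initial cooperator fraction are assumptions chosen for illustration, not values from the thesis.

```python
import numpy as np

def play_round(coop, b):
    """One generation of the Nowak-May spatial Prisoner's Dilemma (sketch).

    coop: boolean grid, True = cooperator. A cooperator earns 1 from each
    cooperating neighbor; a defector earns b > 1 from each cooperating
    neighbor; all other pairings yield 0. Each cell then copies the strategy
    of the highest-scoring player among itself and its eight neighbors.
    """
    payoff = np.zeros_like(coop, dtype=float)
    shifts = [(-1, 0), (1, 0), (0, -1), (0, 1), (-1, -1), (-1, 1), (1, -1), (1, 1)]
    for dx, dy in shifts:
        neigh = np.roll(np.roll(coop, dx, axis=0), dy, axis=1)
        payoff += np.where(coop, neigh * 1.0, neigh * b)
    # Imitate the best performer among self and the eight neighbors.
    best_payoff = payoff.copy()
    best_strategy = coop.copy()
    for dx, dy in shifts:
        p = np.roll(np.roll(payoff, dx, axis=0), dy, axis=1)
        s = np.roll(np.roll(coop, dx, axis=0), dy, axis=1)
        better = p > best_payoff
        best_payoff = np.where(better, p, best_payoff)
        best_strategy = np.where(better, s, best_strategy)
    return best_strategy.astype(bool)

rng = np.random.default_rng(0)
grid = rng.random((50, 50)) < 0.6          # 60% initial cooperators (assumed)
for _ in range(100):
    grid = play_round(grid, b=1.6)          # temptation payoff b (assumed)
print(f"surviving cooperator fraction: {grid.mean():.2f}")
```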

Relevance: 100.00%

Abstract:

The efficient use of geothermal systems, the sequestration of CO2 to mitigate climate change, and the prevention of seawater intrusion in coastal aquifers are only some examples that demonstrate the need for novel technologies to monitor subsurface processes from the surface. A main challenge is to assure optimal performance of such technologies at different temporal and spatial scales. Plane-wave electromagnetic (EM) methods are sensitive to subsurface electrical conductivity and consequently to fluid conductivity, fracture connectivity, temperature, and rock mineralogy.
These methods have governing equations that are the same over a large range of frequencies, thus allowing processes to be studied in an analogous manner on scales ranging from a few meters below the surface down to several hundreds of kilometers depth. Unfortunately, they suffer from a significant resolution loss with depth due to the diffusive nature of the electromagnetic fields. Therefore, estimations of subsurface models that use these methods should incorporate a priori information to better constrain the models, and provide appropriate measures of model uncertainty. During my thesis, I have developed approaches to improve the static and dynamic characterization of the subsurface with plane-wave EM methods. In the first part of this thesis, I present a two-dimensional deterministic approach to perform time-lapse inversion of plane-wave EM data. The strategy is based on the incorporation of prior information into the inversion algorithm regarding the expected temporal changes in electrical conductivity. This is done by incorporating a flexible stochastic regularization and constraints regarding the expected ranges of the changes by using Lagrange multipliers. I use non-l2 norms to penalize the model update in order to obtain sharp transitions between regions that experience temporal changes and regions that do not. I also incorporate a time-lapse differencing strategy to remove systematic errors in the time-lapse inversion. This work presents improvements in the characterization of temporal changes with respect to the classical approach of performing separate inversions and computing differences between the models. In the second part of this thesis, I adopt a Bayesian framework and use Markov chain Monte Carlo (MCMC) simulations to quantify model parameter uncertainty in plane-wave EM inversion. For this purpose, I present a two-dimensional pixel-based probabilistic inversion strategy for separate and joint inversions of plane-wave EM and electrical resistivity tomography (ERT) data. I compare the uncertainties of the model parameters when considering different types of prior information on the model structure and different likelihood functions to describe the data errors. The results indicate that model regularization is necessary when dealing with a large number of model parameters because it helps to accelerate the convergence of the chains and leads to more realistic models. These constraints also lead to smaller uncertainty estimates, which imply posterior distributions that do not include the true underlying model in regions where the method has limited sensitivity. This situation can be improved by combining plane-wave EM methods with complementary geophysical methods such as ERT. In addition, I show that an appropriate regularization weight and the standard deviation of the data errors can be retrieved by the MCMC inversion. Finally, I evaluate the possibility of characterizing the three-dimensional distribution of an injected water plume by performing three-dimensional time-lapse MCMC inversion of plane-wave EM data. Since MCMC inversion involves a significant computational burden in high-dimensional parameter spaces, I propose a model reduction strategy where the coefficients of a Legendre moment decomposition of the injected water plume and its location are estimated. For this purpose, a base resistivity model is needed, which is obtained prior to the time-lapse experiment. A synthetic test shows that the methodology works well when the base resistivity model is correctly characterized.
The methodology is also applied to an injection experiment performed in a geothermal system in Australia, and compared to a three-dimensional time-lapse inversion performed within a deterministic framework. The MCMC inversion better constrains the water plume due to the larger amount of prior information that is included in the algorithm. The conductivity changes needed to explain the time-lapse data are much larger than what is physically possible based on present-day understanding. This issue may be related to the base resistivity model used, indicating that more effort should be devoted to obtaining high-quality base models prior to dynamic experiments. The studies described herein give clear evidence that plane-wave EM methods are useful to characterize and monitor the subsurface at a wide range of scales. The presented approaches contribute to an improved appraisal of the obtained models, both in terms of the incorporation of prior information in the algorithms and in terms of posterior uncertainty quantification. In addition, the developed strategies can be applied to other geophysical methods and offer great flexibility to incorporate additional information when available.
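
As a pocket-sized illustration of the MCMC machinery referred to above, the following sketch runs a random-walk Metropolis sampler on a two-parameter toy inversion with a made-up linear forward operator. It is not the plane-wave EM forward problem or the pixel-based parameterization used in the thesis; every number in it is an assumption for demonstration.

```python
import numpy as np

# Toy linear "forward model": 3 data values from 2 layer log10-resistivities.
rng = np.random.default_rng(0)
G = np.array([[0.8, 0.2],
              [0.4, 0.6],
              [0.1, 0.9]])                 # assumed sensitivities (illustrative)
true_model = np.array([2.0, 1.0])          # log10 resistivity of the two layers
sigma_d = 0.05
data = G @ true_model + rng.normal(0, sigma_d, size=3)

def log_likelihood(m):
    residual = data - G @ m
    return -0.5 * np.sum((residual / sigma_d) ** 2)

def log_prior(m):
    # Uniform prior on log10 resistivity in [0, 4]; -inf outside the bounds.
    return 0.0 if np.all((m >= 0) & (m <= 4)) else -np.inf

current = np.array([1.0, 1.0])
current_lp = log_likelihood(current) + log_prior(current)
samples = []
for _ in range(20000):
    proposal = current + rng.normal(0, 0.05, size=2)   # random-walk proposal
    lp = log_likelihood(proposal) + log_prior(proposal)
    if np.log(rng.random()) < lp - current_lp:         # Metropolis acceptance rule
        current, current_lp = proposal, lp
    samples.append(current.copy())

post = np.array(samples[5000:])                         # discard burn-in
print("posterior mean:", post.mean(axis=0))
print("posterior std :", post.std(axis=0))
```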

Relevance: 100.00%

Abstract:

Ground-penetrating radar (GPR) and microgravimetric surveys have been conducted in the southern Jura mountains of western Switzerland in order to map subsurface karstic features. The study site, La Grande Rolaz cave, is an extensive system in which many portions have been mapped. By using small station spacing and careful processing of the geophysical data, and by modeling these data with topographic information from within the cave, accurate interpretations have been achieved. The constraints on the interpreted geologic models are better when combining the geophysical methods than when using only one of the methods, despite the general limitations of two-dimensional (2D) profiling. For example, microgravimetry can complement GPR methods for accurately delineating a shallow cave section approximately 10 × 10 m in size. Conversely, GPR methods can be complementary in determining cavity depths and in verifying the presence of off-line features and numerous areas of small cavities and fractures, which may be difficult to resolve in microgravimetric data.
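
To give a sense of why microgravimetry can detect a cave section of roughly this size, the sketch below evaluates the standard vertical gravity anomaly of a buried spherical void along a surface profile. The rock density, void radius, and depth are assumptions chosen for illustration, not survey parameters from this study.

```python
import numpy as np

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
rho_rock = 2600.0      # assumed limestone density, kg/m^3
radius = 5.0           # spherical void approximating a ~10 m cave section (assumed)
depth = 10.0           # depth to the sphere's center, m (assumed)

x = np.linspace(-50, 50, 201)                       # horizontal offsets along the profile
mass_deficit = -rho_rock * (4.0 / 3.0) * np.pi * radius ** 3
gz = G * mass_deficit * depth / (x ** 2 + depth ** 2) ** 1.5   # vertical attraction, m/s^2
gz_microgal = gz * 1e8                               # 1 µGal = 1e-8 m/s^2

# A deficit of several tens of µGal is well within microgravimetric precision.
print(f"peak anomaly: {gz_microgal.min():.0f} µGal over the cavity center")
```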

Relevance: 100.00%

Abstract:

Background: Complex wounds pose a major challenge in reconstructive and trauma surgery. Several approaches to accelerate the healing process have been proposed in the last decades. In this study we investigate the mechanism of action of the Vacuum Assisted Closure device in diabetic wounds. Methods: Full-thickness wounds were excised in diabetic mice and treated with the VAC device or its isolated components: an occlusive dressing (OD) alone, subatmospheric pressure at 125 mm Hg (Suction), and a polyurethane foam without (Foam) and with (Foamc) downward compression of approximately 125 mm Hg. The last groups were treated with either the complete VAC device (VAC) or with a silicone interface that allows fluid removal (Mepithel-VAC). The effects of the treatment modes on the wound surface were quantified by a two-dimensional immunohistochemical staging system based on vasculature, as defined by blood vessel density (CD31), and cell proliferation, as defined by Ki67 positivity, 7 days post wounding. Finite element modelling was used to predict wound surface deformation under the dressing modes, and cross sections of in situ fixed tissues were used to measure the actual microstrain. Results: The foam-wound interface of the Vacuum Assisted Closure device causes significant wound strains (60%), producing deformation at the single-cell level and leading to a profound upregulation of cell proliferation (4-fold) and angiogenesis (2.2-fold) compared to OD-treated wounds. Polyurethane foam exposure itself causes a rather unspecific angiogenic response (Foamc, 2-fold; Foam, 2.2-fold) without changes in the cell proliferation rate of the wound bed. Suction alone without a specific interface does not have an effect on the measured parameters, showing results similar to untreated wounds. A perforated silicone interface caused a significantly lower microdeformation of the wound bed, correlating with the changes in the wound tissues. Conclusion: The Vacuum Assisted Closure device induces significant tissue growth in diabetic wounds. The wound-foam interface under suction causes profound macrodeformation that stimulates tissue growth through angiogenesis and cell proliferation. It needs to be taken into consideration that, in the clinical setting, different wound types may profit from different elements of this suction device.

Relevance: 100.00%

Abstract:

A mobile ad hoc network (MANET) is a decentralized and infrastructure-less network. This thesis aims to provide support at the system level for developers of applications or protocols in such networks. To do this, we propose contributions in both the algorithmic realm and the practical realm. In the algorithmic realm, we contribute to the field by proposing different context-aware broadcast and multicast algorithms for MANETs, namely six-shot broadcast, six-shot multicast, PLAN-B, and a generic algorithmic approach to optimize the power consumption of existing algorithms. For each algorithm we propose, we compare it to existing algorithms that are either probabilistic or context-aware, and then we evaluate their performance based on simulations. We demonstrate that in some cases, context-aware information, such as location or signal strength, can improve the efficiency. In the practical realm, we propose a testbed framework, namely ManetLab, to implement and deploy MANET-specific protocols, and to evaluate their performance. This testbed framework aims to increase the accuracy of performance evaluation compared with simulations, while keeping the ease of use offered by simulators to reproduce a performance evaluation. By evaluating the performance of different probabilistic algorithms with ManetLab, we observe that simulations and testbeds should be used in a complementary way. In addition to the above original contributions, we also provide two surveys about system-level support for ad hoc communications in order to establish a state of the art. The first is about existing broadcast algorithms and the second is about existing middleware solutions and the way they deal with privacy, especially location privacy.
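
As a point of reference for the probabilistic baselines mentioned above, here is a minimal gossip-style broadcast sketch in which each node rebroadcasts a newly received message with probability p. It is a generic illustration, not an implementation of six-shot broadcast, PLAN-B, or ManetLab; the topology and probability are assumptions.

```python
import random

def gossip_broadcast(adjacency, source, p=0.7, seed=0):
    """Probabilistic (gossip) flooding sketch: on first reception each node
    rebroadcasts with probability p; the source always transmits."""
    rng = random.Random(seed)
    received = {source}
    frontier = [source]
    transmissions = 0
    while frontier:
        nxt = []
        for node in frontier:
            if node == source or rng.random() < p:
                transmissions += 1
                for neigh in adjacency[node]:
                    if neigh not in received:
                        received.add(neigh)
                        nxt.append(neigh)
        frontier = nxt
    return received, transmissions

# Small example topology (node -> list of neighbors), purely illustrative.
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4], 3: [1, 4], 4: [2, 3, 5], 5: [4]}
reached, tx = gossip_broadcast(adj, source=0)
print(f"reached {len(reached)}/{len(adj)} nodes with {tx} transmissions")
```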

Relevance: 100.00%

Abstract:

In three-dimensional (3D) coronary magnetic resonance angiography (MRA), the in-flow contrast between the coronary blood and the surrounding myocardium is attenuated as compared to thin-slab two-dimensional (2D) techniques. The application of a gadolinium (Gd)-based intravascular contrast agent may provide an additional source of signal and contrast by reducing the T1 of blood and supporting the visualization of more distal or branching segments of the coronary arterial tree. In six healthy adults, the left coronary artery (LCA) system was imaged pre- and postcontrast with a 0.075-mmol/kg bodyweight dose of the intravascular contrast agent B-22956. For imaging, an optimized free-breathing, navigator-gated and -corrected 3D inversion recovery (IR) sequence was used. For comparison, state-of-the-art baseline 3D coronary MRA with T2 preparation for non-exogenous contrast enhancement was acquired. The combination of IR 3D coronary MRA, sophisticated navigator technology, and B-22956 allowed for an extensive visualization of the LCA system. Postcontrast, a significant increase in both the signal-to-noise ratio (SNR; 46%, P < 0.05) and contrast-to-noise ratio (CNR; 160%, P < 0.01) was observed, while vessel sharpness of the left anterior descending (LAD) artery and the left coronary circumflex (LCX) was improved by 20% (P < 0.05) and 18% (P < 0.05), respectively.
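
For readers unfamiliar with the reported metrics, here is a minimal sketch of how SNR and CNR are typically computed from region-of-interest (ROI) statistics. The numbers are made up for illustration and do not reproduce the study's ROI definitions or values.

```python
import numpy as np

# Hypothetical ROI statistics (arbitrary signal units), not values from the study.
blood_signal = np.array([480.0, 495.0, 470.0])        # coronary lumen ROIs
myocardium_signal = np.array([210.0, 220.0, 205.0])    # adjacent myocardium ROIs
noise_sd = 12.0                                        # SD in a background/noise ROI

snr = blood_signal.mean() / noise_sd
cnr = (blood_signal.mean() - myocardium_signal.mean()) / noise_sd
print(f"SNR ≈ {snr:.1f}, CNR ≈ {cnr:.1f}")

# Relative gains like those reported (46% SNR, 160% CNR) would scale these to roughly:
print(f"post-contrast SNR ≈ {snr * 1.46:.1f}, CNR ≈ {cnr * 2.60:.1f}")
```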

Relevance: 100.00%

Abstract:

Background: The hepatitis C virus (HCV) NS3-4A protease is not only an essential component of the viral replication complex and a prime target for antiviral intervention but also a key player in the persistence and pathogenesis of HCV. It cleaves and thereby inactivates two crucial adaptor proteins in viral RNA sensing and innate immunity (MAVS and TRIF) as well as a phosphatase involved in growth factor signaling (TC-PTP). The aim of this ongoing study is to identify novel cellular targets of the NS3-4A protease. Methods: Cell lines inducibly expressing the NS3-4A protease were established using a tetracycline-regulated gene expression system. Cells were analyzed in basal as well as interferon-α-stimulated states. Two-dimensional difference gel electrophoresis (2D-DIGE) and stable isotopic labeling using amino acids in cell culture (SILAC) proteomics analyses coupled with mass spectrometry were employed to search for cellular substrates of NS3-4A. Results: A number of candidate cellular targets have been identified by these proteomics approaches. These are currently being validated by different experimental techniques. In parallel, we are in the process of further defining the determinants for substrate specificity of the NS3-4A protease. Conclusions: The identification of novel cellular targets of the HCV NS3-4A protease should yield new insights into the pathogenesis of hepatitis C and may reveal novel targets for antiviral intervention.