58 results for Second and third harmonics


Abstract:

This work compares the structural/dynamic features of the wild-type alpha 1b-adrenergic receptor (AR) with those of the D142A active mutant and the agonist-bound state. The two active receptor forms were compared in their isolated states as well as in their ability to form homodimers and to recognize the G alpha q beta 1 gamma 2 heterotrimer. The analysis of the isolated structures revealed that, although the mutation- and agonist-induced active states of the alpha 1b-AR are different, they nevertheless share several structural peculiarities, including (a) the release of some constraining interactions found in the wild-type receptor and (b) the opening of a cytosolic crevice formed by the second and third intracellular loops and the cytosolic extensions of helices 5 and 6. Accordingly, their tendencies to form homodimers also show commonalities and differences. In fact, in both active receptor forms, helix 6 plays a crucial role in mediating homodimerization; however, the homodimeric models result from different interhelical assemblies. Along the same lines, in both active receptor forms the opened cytosolic crevice recognizes similar domains on the G protein. However, the docking solutions are differently populated, and the receptor-G protein preorientation models suggest that the final complexes should be characterized by different interaction patterns.

Abstract:

STATEMENT OF PROBLEM: Identifying the owner of lost dentures when they are found is a common and expensive problem in long-term care facilities (LTCFs) and hospitals. PURPOSE: The purpose of this study was to evaluate the reliability of radiofrequency identification (RFID) for identifying the dentures of LTCF residents after 3 and 6 months. MATERIAL AND METHODS: Thirty-eight residents of 2 LTCFs in Switzerland agreed to participate after providing informed consent. A tag was programmed with the family and first names of each participant and then inserted in the denture. After placement of the tag, the information was read back. Second and third assessments to review the functioning of the tags occurred at 3 and 6 months, and defective tags (if present) were reported and replaced. The data were analyzed with descriptive statistics. RESULTS: At the 3-month assessment of 34 residents (63 tags), 1 tag was unreadable and 62 tags (98.2%) were operational. At 6 months, the tags of 27 of the enrolled residents (50 tags) were available for review; no examined tag was defective at this time point. CONCLUSIONS: Within the limits of this study (number of patients, 6-month time span), RFID appears to be a reliable method of tracking and identifying dentures, with only 1 device being unreadable at 3 months and 100% of the 50 tags reviewed being readable at the end of the trial.

Abstract:

WHAT'S KNOWN ON THE SUBJECT? AND WHAT DOES THE STUDY ADD?: The AMS 800 urinary control system is the gold standard for the treatment of urinary incontinence due to sphincter insufficiency. Despite excellent functional outcomes and the latest technological improvements, the revision rate remains significant. To overcome the shortcomings of the current device, we developed a modern electromechanical artificial urinary sphincter. The results demonstrate that this new sphincter is effective and well tolerated for up to 3 months. This preliminary study represents a first step in the clinical application of novel technologies and of an alternative mechanism for compressing the urethra. OBJECTIVES: To evaluate the effectiveness in achieving continence of a new electromechanical artificial urinary sphincter (emAUS) in an animal model, and to assess the urethral response and the animals' general response to short-term and mid-term activation of the emAUS. MATERIALS AND METHODS: The principle of the emAUS is electromechanically induced alternating compression of successive segments of the urethra by a series of cuffs activated by artificial muscles. Between February 2009 and May 2010 the emAUS was implanted in 17 sheep divided into three groups. The first phase aimed to measure bladder leak point pressure during activation of the device. The second and third phases aimed to assess tissue response to the presence of the device after 2-9 weeks and after 3 months, respectively. Histopathological and immunohistochemical evaluation of the urethra was performed. RESULTS: Bladder leak point pressure was measured at levels between 1091 ± 30.6 cmH2O and 1244.1 ± 99 cmH2O (mean ± standard deviation), depending on the number of cuffs used. On gross examination, the explanted urethra showed no sign of infection, atrophy or stricture. On microscopic examination, no significant difference in structure was found between urethral tissue surrounded by a cuff and control urethra. In the peripheral tissues, the implanted material elicited a chronic foreign body reaction. Apart from one case, specimens did not show a significant presence of lymphocytes, polymorphonuclear leucocytes, necrosis or cell degeneration. Immunohistochemistry confirmed the absence of macrophages in the samples. CONCLUSIONS: This animal study shows that the emAUS can provide continence. The new electronically controlled, sequential alternating compression mechanism can avoid damage to urethral vascularity, at least up to 3 months after implantation. After this positive proof of concept, long-term studies are needed before clinical application can be considered.

Abstract:

BACKGROUND: Although it is well recognized that the diagnosis of hypertension should be based on blood pressure (BP) measurements taken on several occasions, notably to account for a transient elevation of BP on the first readings, the prevalence of hypertension in populations has often relied on measurements made at a single visit. OBJECTIVE: To identify an efficient strategy for reliably assessing the prevalence of hypertension in the population with regard to the number of BP readings required. DESIGN: Population-based survey of BP and follow-up information. SETTING AND PARTICIPANTS: All residents aged 25-64 years in an area of Dar es Salaam (Tanzania). MAIN OUTCOME MEASURES: Three BP readings at four successive visits in all participants with high BP (n = 653) and in 662 participants without high BP, measured with an automated BP device. RESULTS: BP decreased substantially from the first to the third reading at each of the four visits. BP also decreased substantially between the first two visits, but only a little between subsequent visits. Consequently, the prevalence of high BP based on the third reading (or the average of the second and third readings) at the second visit did not differ much from estimates based on readings at the fourth visit. BP decreased similarly whether the first three visits were separated by 3-day or 14-day intervals. CONCLUSIONS: Taking triplicate readings at two visits, possibly separated by just a few days, could be a minimal strategy for adequately assessing mean BP and the prevalence of hypertension at the population level. A sound strategy is important for reliably assessing the burden of hypertension in populations.
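
As an illustration of the proposed minimal strategy, here is a short sketch in Python; the 140/90 mmHg cut-offs and the example readings are assumptions for illustration, not values taken from the study:

    # Sketch of the minimal strategy: triplicate readings at two visits,
    # classifying on the average of the second and third readings of the
    # second visit. The 140/90 mmHg thresholds are assumed, not from the study.

    def mean_bp(readings):
        """Average the second and third of three (systolic, diastolic) readings."""
        (_, _), (s2, d2), (s3, d3) = readings
        return ((s2 + s3) / 2, (d2 + d3) / 2)

    def is_hypertensive(visit2_readings, sys_cutoff=140, dia_cutoff=90):
        sys_bp, dia_bp = mean_bp(visit2_readings)
        return sys_bp >= sys_cutoff or dia_bp >= dia_cutoff

    # Example: three readings at the second visit; the first is discarded.
    visit2 = [(152, 96), (141, 88), (137, 86)]
    print(mean_bp(visit2))          # (139.0, 87.0)
    print(is_hypertensive(visit2))  # False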

Abstract:

Landslides are one of the main natural hazards in mountainous regions. In Switzerland, landslides cause damage every year that affects infrastructure and carries significant financial costs. An in-depth understanding of sliding mechanisms may help limit their impact. In particular, this can be achieved through better knowledge of the internal structure of the landslide and determination of its volume and of its sliding surface or surfaces. In a landslide, the disorganization and the presence of fractures in the displaced material change the physical parameters, in particular decreasing the seismic velocities and the density of the material.
Therefore, seismic methods are well adapted to the study of landslides. Among seismic methods, surface-wave dispersion analysis is easy to implement. Through it, shear-wave velocity variations with depth can be estimated without having to resort to an S-wave source and horizontal geophones. Its three-step implementation involves measuring surface-wave dispersion with long arrays, determining the dispersion curves and finally inverting these curves. Velocity models obtained through this approach are only valid when the investigated medium does not include lateral variations. In practice, this assumption is seldom correct, in particular for landslides, in which reworked layers are likely to include strong lateral heterogeneities. To assess the possibility of determining dispersion curves from short arrays, we carried out test measurements on a site (Arnex, VD) that includes a borehole. A 190 m long seismic profile was acquired in a valley carved into limestone and filled with 30 m of glacio-lacustrine sediments. The data acquired along this profile confirmed that the presence of lateral variations under the geophone array influences the dispersion-curve shape, sometimes to the point of preventing determination of the dispersion curves. Our approach to applying surface-wave dispersion analysis on sites that include lateral variations consists in obtaining dispersion curves for a series of short arrays, inverting each curve so obtained, and interpolating the resulting velocity models. The choice of the location and of the length of each geophone array is important: it takes into account the location of the heterogeneities revealed by seismic refraction interpretation of the data, but also the location of signal-amplitude anomalies observed on maps that represent, for a given frequency, the measured amplitude in the shot position - receiver position domain. The procedure proposed by Lin and Lin (2007) turned out to be an efficient way to determine dispersion curves using short arrays. It consists in building a time-offset record covering a wide offset range by gathering seismograms acquired with different source-to-receiver offsets. When assembling the different data, a phase correction is applied in order to reduce the static phase error induced by lateral variations. To evaluate this correction, we suggest calculating, for two successive shots, the cross power spectral density of common-offset traces. On the Arnex site, 22 dispersion curves were determined with geophone arrays 10 m in length. We also took advantage of the borehole to acquire an S-wave vertical seismic profile. The S-wave velocity-depth model derived from the vertical seismic profile interpretation is used as prior information in the inversion of the dispersion curves. Finally, a 2D velocity model was established from the analysis of the different dispersion curves. It reveals a three-layer structure in good agreement with the lithologies observed in the borehole: a silty clay layer with a shear-wave velocity of about 175 m/s overlies, at 9 m depth, clayey-sandy till deposits characterized by an S-wave velocity of about 300 m/s down to 14 m and of 400 m/s or more between 14 and 20 m depth. The La Grande Combe landslide (Ballaigues, VD) occurs inside the Quaternary filling of a valley carved into Portlandian limestone. As at the Arnex site, the Quaternary deposits correspond to glacio-lacustrine sediments.
In the upper part of the landslide, the sliding surface is located at a depth of about 20 m, coinciding with the discontinuity between Jurassian till and glacio-lacustrine deposits. At the toe of the landslide, we determined 14 dispersion curves along a 144 m long profile using 10 m long geophone arrays. The curves obtained are discontinuous and defined within a frequency range of 7 to 35 Hz. The use of a wide range of offsets (from 8 to 72 m) enabled us to identify 2 to 4 modes of propagation for each dispersion curve. Taking these higher modes into account in the dispersion-curve inversion allowed us to reach an investigation depth of about 20 m. A four-layer 2D model was derived (Vs1 < 175 m/s, 175 m/s < Vs2 < 225 m/s, 225 m/s < Vs3 < 400 m/s, Vs4 > 400 m/s) with variable layer thicknesses. S-wave seismic reflection profiles, acquired with a source built as part of this work, complete and corroborate the velocity model revealed by the surface-wave analysis. In particular, a reflector at a depth of 5 to 10 m, associated with a 180 m/s stacking velocity, images the geometry of the discontinuity between the second and third layers of the model derived from the surface-wave dispersion analysis.
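
A minimal sketch of the cross power spectral density suggested above for evaluating the inter-shot phase correction, assuming two common-offset traces from successive shot positions; the synthetic traces, window choice and sampling rate are illustrative, not the survey parameters:

    import numpy as np

    # Hedged sketch: estimate the frequency-dependent phase correction from
    # the cross power spectral density of two common-offset traces recorded
    # from two successive shot positions. All numbers below are placeholders.

    def phase_correction(trace_a, trace_b, fs):
        """Phase of the cross power spectral density of two common-offset traces.

        trace_a, trace_b : 1-D arrays, same offset, successive shot positions
        fs               : sampling frequency in Hz
        Returns (frequencies in Hz, phase difference in radians).
        """
        n = len(trace_a)
        fa = np.fft.rfft(trace_a * np.hanning(n))
        fb = np.fft.rfft(trace_b * np.hanning(n))
        cross = fa * np.conj(fb)                 # cross power spectral density
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        return freqs, np.angle(cross)

    # Synthetic example: trace_b is trace_a delayed by 4 samples.
    fs = 500.0
    t = np.arange(512) / fs
    trace_a = np.sin(2 * np.pi * 25 * t) * np.exp(-5 * t)
    trace_b = np.roll(trace_a, 4)
    freqs, dphi = phase_correction(trace_a, trace_b, fs)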

Abstract:

Advanced kernel methods for remote sensing image classification. Devis Tuia, Institut de Géomatique et d'Analyse du Risque, September 2009. The technical developments in recent years have brought the quantity and quality of digital information to an unprecedented level, as enormous archives of satellite images are available to users. However, even if these advances open more and more possibilities in the use of digital imagery, they also raise several problems of storage and treatment. The latter is considered in this Thesis: the processing of very high spatial and spectral resolution images is treated with approaches based on data-driven algorithms relying on kernel methods. In particular, the problem of image classification, i.e. the categorization of the image's pixels into a reduced number of classes reflecting spectral and contextual properties, is studied through the different models presented.
The emphasis is put on algorithmic efficiency and the simplicity of the proposed approaches, to avoid overly complex models that would not be adopted by users. The major challenge of the Thesis is to remain close to concrete remote sensing problems without losing the methodological interest from the machine learning viewpoint: in this sense, this work aims at building a bridge between the machine learning and remote sensing communities, and all the models proposed have been developed keeping in mind the need for such a synergy. Four models are proposed: first, an adaptive model learning the relevant image features is proposed to solve the problem of high dimensionality and collinearity of the image features. This model automatically provides an accurate classifier and a ranking of the relevance of the single features. The scarcity and unreliability of labeled information were the common root of the second and third models proposed: when confronted with such problems, the user can either construct the labeled set iteratively by direct interaction with the machine or use the unlabeled data to increase the robustness and quality of the description of the data. Both solutions have been explored, resulting in two methodological contributions based respectively on active learning and semi-supervised learning. Finally, the more theoretical issue of structured outputs is considered in the last model, which, by integrating output similarity into the model, opens new challenges and opportunities for remote sensing image processing.
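
As a sketch of the active-learning idea behind the second model, here is a minimal pool-based uncertainty-sampling loop; the synthetic data, the SVM classifier and the query budget are assumptions for illustration, not the thesis' exact setup:

    import numpy as np
    from sklearn.svm import SVC

    # Hedged sketch of pool-based active learning by uncertainty sampling.
    # Data and query budget are synthetic placeholders.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 5))               # stand-in for pixel features
    y = (X[:, 0] + X[:, 1] > 0).astype(int)     # stand-in for class labels

    labeled = list(rng.choice(len(X), size=10, replace=False))
    pool = [i for i in range(len(X)) if i not in labeled]

    for _ in range(20):                         # query budget (assumed)
        clf = SVC(kernel="rbf", gamma="scale").fit(X[labeled], y[labeled])
        # Query the pool sample closest to the decision boundary.
        margins = np.abs(clf.decision_function(X[pool]))
        query = pool.pop(int(np.argmin(margins)))
        labeled.append(query)                   # the user would label this pixel

    print(clf.score(X, y))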

Abstract:

Preface. The starting point for this work, and eventually the subject of the whole thesis, was the question: how to estimate the parameters of affine stochastic volatility jump-diffusion models. These models are very important for contingent claim pricing. Their major advantage, the availability of analytical solutions for characteristic functions, made them the models of choice for many theoretical constructions and practical applications. At the same time, estimating the parameters of stochastic volatility jump-diffusion models is not a straightforward task. The problem comes from the variance process, which is not observable. There are several estimation methodologies that deal with the estimation of latent variables. One appeared particularly interesting: in contrast to the other methods, it requires neither discretization nor simulation of the process. This is the Continuous Empirical Characteristic Function (ECF) estimator, based on the unconditional characteristic function. However, the procedure had been derived only for stochastic volatility models without jumps; it thus became the subject of my research. This thesis consists of three parts, each written as an independent and self-contained article. At the same time, the questions answered by the second and third parts of this work arise naturally from the issues investigated and the results obtained in the first one. The first chapter is the theoretical foundation of the thesis. It proposes an estimation procedure for stochastic volatility models with jumps in both the asset price and the variance processes. The estimation procedure is based on the joint unconditional characteristic function of the stochastic process. The major analytical result of this part, and of the whole thesis, is a closed-form expression for the joint unconditional characteristic function of stochastic volatility jump-diffusion models. The empirical part of the chapter suggests that, besides stochastic volatility, jumps in both the mean and the volatility equation are relevant for modelling returns of the S&P500 index, which was chosen as a general representative of the stock asset class. Hence, the next question is: which jump process should be used to model the returns of the S&P500? The decision about the jump process in the framework of affine jump-diffusion models boils down to defining the intensity of the compound Poisson process, a constant or some function of the state variables, and to choosing the distribution of the jump size. While the jump in the variance process is usually assumed to be exponential, there are at least three distributions of the jump size currently used for asset log-prices: normal, exponential and double exponential. The second part of this thesis shows that normal jumps in the asset log-returns should be used if we are to model the S&P500 index with a stochastic volatility jump-diffusion model. This is a surprising result: the exponential distribution has fatter tails, and for this reason either an exponential or a double exponential jump size was expected to provide the best fit of the stochastic volatility jump-diffusion models to the data. The idea of testing the efficiency of the Continuous ECF estimator on simulated data had already appeared when the first estimation results of the first chapter were obtained. In the absence of a benchmark or any other ground for comparison, it is unreasonable to be sure that our parameter estimates and the true parameters of the models coincide.
The conclusion of the second chapter provides one more reason to run that kind of test. Thus, the third part of this thesis concentrates on estimating the parameters of stochastic volatility jump-diffusion models on the basis of asset price time series simulated from various "true" parameter sets. The goal is to show that the Continuous ECF estimator based on the joint unconditional characteristic function is capable of finding the true parameters, and the third chapter proves that our estimator indeed has this ability. Once it is clear that the Continuous ECF estimator based on the unconditional characteristic function works, the next question arises naturally: can the computational effort be reduced without affecting the efficiency of the estimator, or can the efficiency of the estimator be improved without dramatically increasing the computational burden? The efficiency of the Continuous ECF estimator depends on the number of dimensions of the joint unconditional characteristic function used for its construction. Theoretically, the more dimensions there are, the more efficient the estimation procedure is. In practice, however, this relationship is not so straightforward, owing to increasing computational difficulties. The second chapter, for example, in addition to the choice of the jump process, discusses the possibility of using the marginal, i.e. one-dimensional, unconditional characteristic function in the estimation instead of the joint, bi-dimensional, unconditional characteristic function. As a result, the preference for one or the other depends on the model to be estimated; the computational effort can thus be reduced in some cases without affecting the efficiency of the estimator. Improving the estimator's efficiency by increasing its dimensionality faces more difficulties. The third chapter of this thesis, in addition to what was discussed above, compares the performance of estimators based on bi- and three-dimensional unconditional characteristic functions on simulated data. It shows that the theoretical efficiency of the Continuous ECF estimator based on the three-dimensional unconditional characteristic function is not attainable in practice, at least for the moment, owing to the limitations of the computing power and optimization toolboxes available to the general public. Thus, the Continuous ECF estimator based on the joint, bi-dimensional, unconditional characteristic function has every reason to exist and to be used for estimating the parameters of stochastic volatility jump-diffusion models.
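
A minimal sketch of an empirical characteristic function estimator of this flavour: the empirical CF of the data is matched to a model CF by weighted least squares on a frequency grid. Here a Gaussian model CF stands in for the closed-form stochastic volatility jump-diffusion CFs discussed above, and the frequency grid and damping weights are illustrative choices, not the thesis' specification:

    import numpy as np
    from scipy.optimize import minimize

    # Hedged sketch of a (discretized) empirical characteristic function
    # estimator. The Gaussian model CF is a stand-in for the SVJD CFs.

    def empirical_cf(x, u):
        return np.exp(1j * np.outer(u, x)).mean(axis=1)

    def model_cf(u, mu, sigma):
        return np.exp(1j * u * mu - 0.5 * (sigma * u) ** 2)

    def ecf_objective(params, x, u, w):
        mu, log_sigma = params
        diff = empirical_cf(x, u) - model_cf(u, mu, np.exp(log_sigma))
        return np.sum(w * np.abs(diff) ** 2)

    rng = np.random.default_rng(1)
    returns = rng.normal(0.05, 0.2, size=5000)   # simulated "returns"
    u = np.linspace(0.1, 10, 50)                 # frequency grid (assumed)
    w = np.exp(-u**2 / 10)                       # damping weights (assumed)

    res = minimize(ecf_objective, x0=[0.0, np.log(0.1)], args=(returns, u, w))
    mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])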

Abstract:

The comparison of cancer prevalence with cancer mortality can, under some hypotheses, lead to an estimate of the registration rate. A method is proposed in which the cases with cancer as a cause of death are divided into 3 categories: (1) cases already known to the registry; (2) unknown cases that occurred before the registry was created; (3) unknown cases occurring while the registry operates. The estimate is then the number of cases in the first category divided by the total of those in categories 1 and 3 (only these are required to be registered). An application is performed on the data of the Canton de Vaud. Survival rates from the Norwegian Cancer Registry are used to compute the numbers of unknown cases to be included in the second and third categories, respectively. The discussion focuses on the possible determinants of the completeness rates obtained for various cancer sites.
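
The proposed estimate reduces to simple arithmetic; a minimal sketch with invented counts:

    # Hedged sketch of the proposed completeness estimate: deaths with cancer
    # as a cause are split into (1) cases known to the registry, (2) unknown
    # cases with onset before the registry existed, (3) unknown cases with
    # onset while the registry operated. The counts below are invented.

    def registration_rate(n_known, n_unknown_during):
        """Estimate = category 1 / (category 1 + category 3); category 2 cases
        predate the registry and are excluded from the denominator."""
        return n_known / (n_known + n_unknown_during)

    print(registration_rate(n_known=940, n_unknown_during=60))  # 0.94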

Abstract:

The Manival near Grenoble (French Prealps) is a very active debris-flow torrent equipped with a large sediment trap (25,000 m3) protecting an urbanized alluvial fan from debris flows. We began monitoring the sediment budget of the catchment controlled by the trap in spring 2009. A terrestrial laser scanner is used to monitor topographic changes in a small gully, the main channel and the sediment trap. In the main channel, 39 cross-sections are surveyed after every event. Three periods of intense geomorphic activity are documented here. The first was induced by a convective storm in August 2009, which triggered a debris flow that deposited ~1,800 m3 of sediment in the trap. The debris flow originated in the upper reach of the main channel, and our observations showed that sediment outputs were entirely supplied by channel scouring. Hillslope debris flows were initiated on talus slopes, as revealed by terrestrial LiDAR resurveys; however, they were disconnected from the main channel. The second and third periods of geomorphic activity were induced by long-duration, low-intensity rainfall events in September and October 2009, which generated small flow events with intense bedload transport. These events contributed to recharging the debris-flow channel with sediment by depositing large gravel dunes that propagated from the headwaters. The total recharge in the torrent following the bedload transport events was estimated at 34% of the sediment erosion induced by the August debris flow.
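
A minimal sketch of the sediment-budget arithmetic behind such repeat laser-scan surveys, assuming two gridded DEMs differenced between scans; the grids, cell size and noise threshold are invented placeholders, not the Manival survey parameters:

    import numpy as np

    # Hedged sketch: difference two DEMs from successive scans and convert
    # elevation change to erosion/deposition volumes.

    def volume_change(dem_before, dem_after, cell_area, threshold=0.05):
        """Erosion and deposition volumes (m3) from a DEM of difference.

        Changes smaller than `threshold` (m) are treated as survey noise."""
        dod = dem_after - dem_before
        dod[np.abs(dod) < threshold] = 0.0
        deposition = dod[dod > 0].sum() * cell_area
        erosion = -dod[dod < 0].sum() * cell_area
        return erosion, deposition

    rng = np.random.default_rng(3)
    before = rng.normal(0, 0.02, size=(200, 200))
    after = before.copy()
    after[50:80, 50:80] -= 0.5            # scoured reach (invented)
    after[120:160, 90:130] += 0.3         # gravel deposit (invented)
    erosion, deposition = volume_change(before, after, cell_area=0.25)
    print(round(deposition / erosion, 2))  # recharge as a fraction of erosion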

Abstract:

BACKGROUND: An important component of the policy for dealing with the H1N1 pandemic in 2009 was to develop and implement vaccination. Since pregnant women were found to be at particular risk of severe morbidity and mortality, the World Health Organization and the European Centre for Disease Prevention and Control advised vaccinating pregnant women, regardless of trimester of pregnancy. This study reports a survey of vaccination policies for pregnant women in European countries. METHODS: Questionnaires were sent to the competent authorities of 27 European countries via the European Medicines Agency and to leaders of European Surveillance of Congenital Anomalies registries in 21 countries. RESULTS: Replies were received for 24 of 32 European countries, of which 20 had an official pandemic vaccination policy. These 20 countries all had a policy targeting pregnant women. In two of the four countries without an official pandemic vaccination policy, some vaccination of pregnant women took place. In 12 of the 20 countries the policy was to vaccinate only women in the second and third trimesters of pregnancy; in 8 of the 20 the policy was to vaccinate pregnant women regardless of trimester. Seven different vaccines were used for pregnant women, of which four contained adjuvants. Few countries had mechanisms to monitor the number of vaccinations given specifically to pregnant women over time, and vaccination uptake varied. CONCLUSIONS: Differences in pandemic vaccination policy and practice might relate to variation in perceptions of vaccine efficacy and safety, operational issues related to vaccine manufacturing and procurement, and vaccination campaign systems. Increased monitoring of pandemic influenza vaccine coverage among pregnant women is recommended to enable evaluation of vaccine safety in pregnancy and of pandemic vaccination campaign effectiveness.

Abstract:

Numerous links between genetic variants and phenotypes are known, and genome-wide association studies have dramatically increased the number of genetic variants associated with traits during the last decade. However, how changes in the DNA perturb molecular mechanisms and impact the phenotype of an organism remains elusive. Studies suggest that many trait-associated variants lie in the non-coding region of the genome and probably act through the regulation of gene expression. During my thesis I investigated how genetic variants affect gene expression through gene regulatory mechanisms. The first chapter was a collaborative project with a pharmaceutical company, in which we investigated genome-wide copy number variation (CNVs) among Cynomolgus monkeys (Macaca fascicularis) used in pharmaceutical studies and associated it with changes in gene expression. We found substantial copy number variation and identified CNVs linked to tissue-specific expression changes of proximal genes. The second and third chapters focus on genetic variation in humans and its effects on gene regulatory mechanisms and gene expression. The second chapter studies two human trios, in which the allelic effects of genetic variation on genome-wide gene expression, protein-DNA binding and chromatin modifications were investigated. We found abundant allele-specific activity across all measured molecular phenotypes and show extended coordinated behaviour among them. In the third chapter, we investigated the impact of genetic variation on these phenotypes in 47 unrelated individuals. We found that chromatin phenotypes are organized into local variable modules, often linked to genetic variation and gene expression. Our results suggest that chromatin variation emerges as a result of perturbations of cis-regulatory elements by genetic variants, leading to gene expression changes. The work of this thesis provides novel insights into how genetic variation impacts gene expression by perturbing regulatory mechanisms.
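
As an illustration of how allele-specific activity can be quantified (the abstract does not specify the test used), here is a minimal sketch comparing allelic read counts at heterozygous sites against the balanced 50:50 expectation; the sites and counts are invented examples:

    from scipy.stats import binomtest

    # Hedged sketch of one common way to test for allelic imbalance:
    # compare reference/alternative read counts at heterozygous sites
    # against a balanced binomial expectation. All data are hypothetical.
    sites = {
        "chr1:10231 (rs_x)": (84, 41),   # (ref reads, alt reads), invented
        "chr2:55112 (rs_y)": (52, 48),
    }

    for site, (ref, alt) in sites.items():
        result = binomtest(ref, n=ref + alt, p=0.5)
        print(f"{site}: ref fraction {ref/(ref+alt):.2f}, p={result.pvalue:.3g}")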

Abstract:

By combining a life course perspective with stress theory, and adopting a psychosocial approach, this thesis shows how individual and collective victimisation experiences marked the life course, beliefs and well-being of a cohort of young adults who lived through the wars in the former Yugoslavia. In the first article, latent class growth analyses were applied to identify different exclusion trajectories between 1990 and 2006. The analysis of these trajectories highlighted the intersections between individual lives, socio-historical context and time, and demonstrated that experiences of war and socio-economic exclusion leave long-term traces on well-being. The second and third articles showed that the belief in a just world was shaken by socio-economic precariousness and war victimisation at the individual and contextual levels. A curvilinear effect and cross-level interactions indicated that these relations varied according to the intensity of victimisation at the contextual level. Recency effects were also noted. The fourth article showed that the negative impact of victimisation on well-being was partly explained by an erosion of the belief in a just world. Furthermore, while stronger believers in a just world were more satisfied with their lives, the strength of this relation varied with the level of victimisation in particular contexts. This thesis presents a multilevel dynamic model in which the belief in a just world no longer plays the role of a stable personal resource but erodes in the face of victimisation, leading to lower well-being.
This work stresses the importance of articulating individual and contextual levels as well as considering the temporal dimension to explain the links between victimisation, belief in a just world and well-being.

Abstract:

INTRODUCTION: HIV-infected pregnant women are very likely to engage in HIV medical care to prevent transmission of HIV to their newborn. After delivery, however, childcare and competing commitments might lead to disengagement from HIV care. The aim of this study was to quantify loss to follow-up (LTFU) from HIV care after delivery and to identify risk factors for LTFU. METHODS: We used data on 719 pregnancies within the Swiss HIV Cohort Study from 1996 to 2012 for which information on follow-up visits was available. Two LTFU events were defined: no clinical visit for >180 days and no visit for >360 days in the year after delivery. Logistic regression analysis was used to identify risk factors for an LTFU event after delivery. RESULTS: Median maternal age at delivery was 32 years (IQR 28-36); 357 (49%) women were black, 280 (39%) white, 56 (8%) Asian and 4% of other ethnicities. One hundred and seven (15%) women reported a history of injection drug use (IDU). The majority (524, 73%) of women received their HIV diagnosis before pregnancy; most of those (413, 79%) had lived with diagnosed HIV for longer than three years, and two-thirds (342, 65%) were already on antiretroviral therapy (ART) at the time of conception. Of the 181 women diagnosed during pregnancy by a screening test, 80 (44%) were diagnosed in the first trimester, 67 (37%) in the second and 34 (19%) in the third trimester. Of the 357 (69%) women who had been seen in HIV medical care during the three months before conception, 93% achieved an undetectable HIV viral load (VL) at delivery. Of the 62 (12%) women whose last medical visit was more than six months before conception, only 72% achieved an undetectable VL (p=0.001). Overall, 247 (34%) women were LTFU for >180 days in the year after delivery and 86 (12%) were LTFU for >360 days, with 43 (50%) of those women later returning to care. Being LTFU for 180 days was significantly associated with a history of IDU (aOR 1.73, 95% CI 1.09-2.77, p=0.021) and with not achieving an undetectable VL at delivery (aOR 1.79, 95% CI 1.03-3.11, p=0.040), after adjusting for maternal age, ethnicity, time of HIV diagnosis and being on ART at conception. CONCLUSIONS: Women with a history of IDU and women with a detectable VL at delivery were more likely to be LTFU after delivery. This is of concern for their own health, as well as for the risk to sexual partners and in subsequent pregnancies. Further strategies should be developed to enhance retention in medical care beyond pregnancy.
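
A minimal sketch of the reported logistic regression, with synthetic stand-in data; the simulated coefficients and prevalences below are assumptions for illustration, not the study's values:

    import numpy as np
    import statsmodels.api as sm

    # Hedged sketch: logistic regression of a 180-day LTFU indicator on
    # history of IDU and detectable viral load at delivery, adjusted for
    # maternal age. All data below are synthetic placeholders.
    rng = np.random.default_rng(2)
    n = 719
    idu = rng.binomial(1, 0.15, n)            # history of injection drug use
    detectable_vl = rng.binomial(1, 0.10, n)  # detectable viral load at delivery
    age = rng.normal(32, 5, n)                # maternal age at delivery
    logit = -1.0 + 0.55 * idu + 0.58 * detectable_vl
    ltfu = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    X = sm.add_constant(np.column_stack([idu, detectable_vl, age]))
    fit = sm.Logit(ltfu, X).fit(disp=0)
    print(np.exp(fit.params[1:3]))            # adjusted odds ratios for IDU, VL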

Abstract:

Breast cancer is the most common cancer in women and accounts for nearly 30% of all new cancer cases in Europe. The number of deaths from breast cancer in Europe is estimated at over 130,000 each year, reflecting the considerable social impact of the disease.
The goals of this thesis were, first, to identify the biological features and mechanisms responsible for the establishment of specific breast cancer subtypes; second, to validate them in a human-in-mouse in vivo model; and third, to develop specific treatments for the identified breast cancer subtypes. The first objective was achieved through the analysis of tumour gene expression data produced in our lab. The microarray data were generated from 49 breast tumour biopsies collected from patients enrolled in the clinical trial EORTC 10994/BIG00-01. The data set was very rich in information and allowed me to validate data from previous breast cancer gene expression studies and to identify biological features of a novel breast cancer subtype. In the first part of the thesis I focus on the identification of molecular apocrine breast tumours by microarray analysis and the potential implications of this finding for the clinic. The second objective was attained through the development of a human breast cancer model system based on primary human mammary epithelial cells (HMECs) derived from reduction mammoplasties. I chose to adapt a previously described suspension culture system based on mammospheres and expressed selected target genes using lentiviral expression constructs. In the second part of my thesis I focus mainly on the establishment of a cell culture system allowing quantitative transformation of HMECs. I then established a xenograft model in immunodeficient NOD/SCID mice, allowing human disease to be modelled in the mouse. In the third part of my thesis I describe and discuss the results I obtained while establishing an oestrogen-dependent model of breast cancer by quantitative transformation of HMECs with defined genes identified through breast cancer gene expression data analysis. The transformed cells in our model are oestrogen-dependent for growth and remain diploid and genetically normal even after prolonged cell culture in vitro. The cells form tumours and disseminated peritoneal and liver metastases in our xenograft model. In line with the third objective of my thesis, I defined and tested treatment schemes that reduce tumours and metastases. I have generated a genetically defined model of oestrogen receptor alpha-positive human breast cancer that models human oestrogen-dependent breast cancer in the mouse and enables the study of mechanisms involved in tumorigenesis and metastasis.

Abstract:

Superficial layers I to III of the human cerebral cortex are more vulnerable to Aβ peptides than deep layers V to VI in aging. Three models of layers were used to investigate this pattern of frailty. First, primary neurons from E14 and E17 embryonic murine cortices, corresponding respectively to future deep and superficial layers, were treated with Aβ1-42, okadaic acid or kainic acid. Second, whole E14 and E17 embryonic cortices and, third, in vitro separated deep and superficial layers of young and old C57BL/6J mice were treated identically. We observed that E14 and E17 neurons in culture were prone to death after the Aβ and particularly the kainic acid treatments. This was also the case for the superficial layers of the aged cortex, but not for the embryonic cortex, the young cortex or the deep layers of the aged cortex. Thus, the aged superficial layers appear to be preferentially vulnerable to Aβ and kainic acid. This pattern of vulnerability corresponds to the enhanced accumulation of senile plaques in the superficial cortical layers with aging and Alzheimer's disease.