34 results for cyclic oxidation – Ti3SiC2-base material – scale spallation – adherence
at Université de Lausanne, Switzerland
Abstract:
BACKGROUND: Lipid-lowering therapy is costly but effective at reducing coronary heart disease (CHD) risk. OBJECTIVE: To assess the cost-effectiveness and public health impact of Adult Treatment Panel III (ATP III) guidelines and compare with a range of risk- and age-based alternative strategies. DESIGN: The CHD Policy Model, a Markov-type cost-effectiveness model. DATA SOURCES: National surveys (1999 to 2004), vital statistics (2000), the Framingham Heart Study (1948 to 2000), other published data, and a direct survey of statin costs (2008). TARGET POPULATION: U.S. population age 35 to 85 years. TIME HORIZON: 2010 to 2040. PERSPECTIVE: Health care system. INTERVENTION: Lowering of low-density lipoprotein cholesterol with HMG-CoA reductase inhibitors (statins). OUTCOME MEASURE: Incremental cost-effectiveness. RESULTS OF BASE-CASE ANALYSIS: Full adherence to ATP III primary prevention guidelines would require starting (9.7 million) or intensifying (1.4 million) statin therapy for 11.1 million adults and would prevent 20,000 myocardial infarctions and 10,000 CHD deaths per year at an annual net cost of $3.6 billion ($42,000/QALY) if low-intensity statins cost $2.11 per pill. The ATP III guidelines would be preferred over alternative strategies if society is willing to pay $50,000/QALY and statins cost $1.54 to $2.21 per pill. At higher statin costs, ATP III is not cost-effective; at lower costs, more liberal statin-prescribing strategies would be preferred; and at costs less than $0.10 per pill, treating all persons with low-density lipoprotein cholesterol levels greater than 3.4 mmol/L (>130 mg/dL) would yield net cost savings. RESULTS OF SENSITIVITY ANALYSIS: Results are sensitive to the assumptions that LDL cholesterol becomes less important as a risk factor with increasing age and that little disutility results from taking a pill every day. LIMITATION: Randomized trial evidence for statin effectiveness is not available for all subgroups. CONCLUSION: The ATP III guidelines are relatively cost-effective and would have a large public health impact if implemented fully in the United States. Alternate strategies may be preferred, however, depending on the cost of statins and how much society is willing to pay for better health outcomes. FUNDING: Flight Attendants' Medical Research Institute and the Swanson Family Fund. The Framingham Heart Study and Framingham Offspring Study are conducted and supported by the National Heart, Lung, and Blood Institute.
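As a back-of-envelope check on the figures above (an illustration, not part of the study), the reported ICER implies the annual QALY gain directly, since an ICER is incremental cost divided by incremental QALYs:

```python
# Back-of-envelope check using only numbers quoted in the abstract.
annual_net_cost = 3.6e9   # dollars per year for full ATP III adherence
icer = 42_000             # reported dollars per QALY at $2.11/pill
qalys_gained = annual_net_cost / icer
print(f"Implied QALYs gained per year: {qalys_gained:,.0f}")  # about 86,000
```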
Abstract:
Background: Interpersonal violence is a worldwide social reality that seems increasingly to affect even the safest of countries, such as Switzerland. In this country, road traffic accidents as well as professional and recreational activities are the main sources of trauma-related injuries. The incidence of penetrating trauma related to stab wounds seems to be steadily increasing in our ED. The question arises of whether our trauma management strategies are adapted to deal efficiently with these injuries. Methods: To answer this question, the study analysed patients admitted for intentional penetrating injuries to a tertiary urban emergency department (ED) during a 23-month period. Demographics, circumstances of the assault, injury type and treatments applied were analysed. Results: Eighty patients admitted for intentional penetrating trauma accounted for 0.2% of the surgical practice of our ED. The assaults occurred equally in public and private settings, mainly affecting young males during the night and at weekends. Sixty-six patients (83%) were treated as outpatients. Only 10 patients needed surgery, none of whom required damage control surgery. No patient died, and the mean hospital stay was 5.5 days. Conclusions: The prevalence of stab wounds in Switzerland is low. These injuries rarely require complex surgical procedures. Observational strategies should be considered according to patient status.
Abstract:
AIMS: There is no standard test to determine the fatigue resistance of denture teeth. With the increasing number of patients with implant-retained dentures, the mechanical strength of denture teeth requires more attention and valid laboratory test set-ups. The purpose of the present study was to determine the fatigue resistance of various denture teeth using a dynamic load testing machine. METHODS: Four denture teeth were used: Bonartic II (Candulor), Physiodens (Vita), SR Phonares II (Ivoclar Vivadent) and Trubyte (Dentsply). For dynamic load testing, first upper molars with a similar shape and cusp inclination were selected. The molar teeth were embedded in cylindrical steel molds with denture base material (ProBase, Ivoclar Vivadent). Dynamic fatigue loading was applied to the mesio-buccal cusp at a 45° angle using dynamic testing machines, with 2,000,000 cycles at 2 Hz in water (37°C). Three specimens per group were tested at each load level, with loads decreased stepwise (at least four levels) until none of the three specimens showed any failure. All specimens were evaluated under a stereo microscope (20× magnification). The number of cycles reached before failure, and its dependence on load and material, was modeled using a parametric survival regression model with a lognormal distribution. This allowed us to estimate the fatigue resistance of a given material as the maximal load for which fewer than 1% of failures would be observed after 2,000,000 cycles. RESULTS: The failure pattern was similar for all denture teeth, showing a large chipping of the loaded mesio-buccal cusp. In our regression model, there were statistically significant differences among the materials, with SR Phonares II and Bonartic II showing a higher resistance than Physiodens and Trubyte; the fatigue resistance was estimated at around 110 N for the former two and at about 60 N for the latter two materials. CONCLUSION: Fatigue resistance may be a useful parameter to assess and compare the clinical risk of chipping and fracture of denture tooth materials.
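The estimation step described above can be sketched as follows. This is a minimal illustration of a lognormal survival regression with run-out censoring at 2,000,000 cycles, not the authors' code; the linear load term and the variable names are assumptions:

```python
# Minimal sketch of the lognormal survival regression described above.
# cycles: cycles reached (2e6 for run-outs); load: applied load in N;
# event: 1 if a failure was observed, 0 if the specimen ran out.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_lik(params, cycles, load, event):
    b0, b1, log_sigma = params
    sigma = np.exp(log_sigma)                      # keep sigma > 0
    z = (np.log(cycles) - (b0 + b1 * load)) / sigma
    # lognormal density for failures, survival function for run-outs
    ll = np.where(event == 1,
                  norm.logpdf(z) - np.log(sigma * cycles),
                  norm.logsf(z))
    return -ll.sum()

def fatigue_resistance(params, n_cycles=2_000_000, p_fail=0.01):
    """Load at which fewer than p_fail of specimens fail before n_cycles."""
    b0, b1, log_sigma = params
    sigma = np.exp(log_sigma)
    # Solve Phi((ln N - b0 - b1*L) / sigma) = p_fail for L (b1 < 0 expected)
    return (np.log(n_cycles) - b0 - sigma * norm.ppf(p_fail)) / b1

# fit = minimize(neg_log_lik, x0=[20.0, -0.05, 0.0],
#                args=(cycles, load, event), method="Nelder-Mead")
# print(fatigue_resistance(fit.x))
```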
Abstract:
Major climatic and geological events, but also population history (secondary contacts), have generated cycles of population isolation and connection over long and short periods. Recent empirical and theoretical studies suggest that such events can trigger fast evolutionary processes, as commonly illustrated in ecology by the adaptive radiation of cichlid fishes (isolation and reconnection of lakes and watersheds) and in epidemiology by the fast adaptation of the influenza virus (isolation and reconnection in hosts). We test whether cyclic population isolation and connection provide the raw material (standing genetic variation) for species evolution and diversification. Our analytical results demonstrate that population isolation and connection can provide populations with a large excess of genetic diversity compared with what is expected at equilibrium. This excess is either cyclic (high allele turnover) or accumulates over time, depending on the duration of the isolation and connection periods and the mutation rate. We show that diversification rates of animal clades are associated with specific periods of climatic cycles in the Quaternary. We finally discuss the importance of our results for macroevolutionary patterns and for the inference of population history from genomic data.
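The mechanism tested analytically here can be illustrated with a toy two-deme Wright-Fisher simulation (my sketch, not the authors' model): migration is switched on and off cyclically, and the total number of distinct alleles serves as a crude proxy for standing genetic variation.

```python
# Toy Wright-Fisher model: two demes alternate between isolation and
# connection; infinite-alleles mutation; allele count tracks diversity.
import numpy as np

rng = np.random.default_rng(0)

def simulate(n=500, mu=1e-3, t_iso=200, t_con=200, n_cycles=10, m_frac=0.1):
    demes = [np.zeros(n, dtype=np.int64), np.ones(n, dtype=np.int64)]
    next_allele = 2
    diversity = []
    for _ in range(n_cycles):
        for connected, length in ((False, t_iso), (True, t_con)):
            for _ in range(length):
                for d in range(2):
                    demes[d] = rng.choice(demes[d], size=n)   # genetic drift
                    muts = rng.random(n) < mu                 # new mutations
                    k = int(muts.sum())
                    demes[d][muts] = np.arange(next_allele, next_allele + k)
                    next_allele += k
                if connected:                                 # migration
                    m = int(n * m_frac)
                    demes[0][:m], demes[1][:m] = (demes[1][:m].copy(),
                                                  demes[0][:m].copy())
                diversity.append(len(np.union1d(demes[0], demes[1])))
    return diversity
```

Varying t_iso, t_con and mu in such a toy qualitatively reproduces the cyclic versus cumulative excess of diversity the authors derive analytically.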
Abstract:
Palinspastic reconstructions offer an ideal framework for geological, geographical, oceanographic and climatological studies. As historians of the Earth, "reconstructers" try to decipher its past. Since learning that continents move, geologists have been trying to retrace their distribution through the ages. If Wegener's view of continental motion was revolutionary at the beginning of the 20th century, we have known since the early 1960s that continents do not drift aimlessly across the oceanic realm but belong to a larger ensemble combining continental and oceanic crust: the tectonic plates. Unfortunately, for historical as well as technical reasons, this idea still does not receive sufficient echo within the reconstruction community. We are nevertheless convinced that, by applying specific methods and principles, one can move beyond the traditional "Wegenerian" approach and, at last, reach true plate tectonics. The main aim of this study is to defend this point of view by presenting, in all necessary detail, our tools and methods. Starting from the paleomagnetic and paleogeographic data classically used in reconstruction studies, we developed a methodology that places the tectonic plates and their kinematics at the centre of the problem. Using continental assemblies (referred to as "key assemblies") as anchor points distributed over the whole span of our study (from the Eocene back to the Cambrian), we develop geodynamic scenarios leading from one assembly to the next, from the past towards the present. In between, the lithospheric plates are progressively reconstructed by adding or removing oceanic material (symbolized by synthetic isochrons) to the major continents. Except during collisions, plates are moved as single rigid entities. The only evolving elements are the plate boundaries, which are preserved through time, follow a consistent geodynamic evolution and always form an interconnected network in space. This "dynamic plate boundaries" approach integrates plate buoyancy, ocean spreading rates, subsidence patterns, stratigraphic and paleobiogeographic data, as well as major tectonic and magmatic events. It thus offers good control on plate kinematics and provides strong constraints on the model. Such a multi-source approach requires efficient data organization and management. Before this study, the critical mass of necessary data had become a nearly insurmountable obstacle. GIS (Geographic Information Systems) and geodatabases are informatics tools specifically devoted to storing, managing and analyzing spatially referenced data and their attributes. By developing the PaleoDyn database in ArcGIS, we converted this mass of scattered data into valuable geodynamic information easily accessible for building reconstructions. At the same time, by programming dedicated tools, we both facilitated the reconstruction work (task automation) and enhanced the model, greatly increasing the kinematic control of plate motions through plate velocity models. Based on 340 newly defined terranes, we developed a set of 35 reconstructions, each associated with its own velocity model. With this unique dataset we can now tackle major issues of modern geology, such as global sea-level variations and climate change. We began by addressing another major, and still unresolved, question of modern tectonics: the mechanisms driving plate motions. We observed that, throughout Earth's history, the plate rotation poles (which describe plate motions across the Earth's surface) tend to be distributed along a band running from the North Pacific through northern South America, the Central Atlantic, North Africa and Central Asia to Japan. In essence, this distribution means that plates tend to flee this median plane. In the absence of an unidentified methodological bias, we interpret this as a possible secular influence of the Moon on plate motions. The oceanic realm is the cornerstone of our model, and we took particular care to reconstruct it in detail. In this model the oceanic crust is preserved from one reconstruction to the next; the crustal material is symbolized by synthetic isochrons of known age. We also reconstructed the margins (active and passive), the mid-ocean ridges and the intra-oceanic subduction zones. Using this detailed oceanic dataset, we developed unique 3-D bathymetric models offering a precision well beyond that of previous ones.
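Moving plates as single rigid entities, as described above, amounts to rotating points on the sphere about an Euler pole. A minimal sketch (my illustration, not the PaleoDyn tooling) using Rodrigues' rotation formula:

```python
# Rotate a surface point about an Euler pole (rigid plate motion).
import numpy as np

def to_xyz(lat_deg, lon_deg):
    """Unit vector on the sphere from latitude/longitude in degrees."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def rotate_about_pole(point, pole, angle_deg):
    """point, pole: (lat, lon) in degrees; returns the rotated (lat, lon)."""
    p, k = to_xyz(*point), to_xyz(*pole)          # k: unit rotation axis
    a = np.radians(angle_deg)
    r = (p * np.cos(a) + np.cross(k, p) * np.sin(a)
         + k * np.dot(k, p) * (1.0 - np.cos(a)))  # Rodrigues' formula
    return np.degrees(np.arcsin(r[2])), np.degrees(np.arctan2(r[1], r[0]))
```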
Abstract:
In order to evaluate the relationship between the apparent complexity of hillslope soil moisture and the emergent patterns of catchment hydrological behaviour and water quality, we need fine-resolution catchment-wide data on soil moisture characteristics. This study proposes a methodology whereby vegetation patterns obtained from high-resolution orthorectified aerial photographs are used as an indicator of soil moisture characteristics. This enables us to examine a set of hypotheses regarding what drives the spatial patterns of soil moisture at the catchment scale (material properties or topography). We find that the pattern of Juncus effusus vegetation is controlled largely by topography and mediated by the catchment's material properties. Characterizing topography using the topographic index adds value to the soil moisture predictions relative to slope or upslope contributing area (UCA). However, these predictions depart from the observed soil moisture patterns at very steep slopes or low UCAs.
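The topographic index mentioned above is commonly computed as ln(a / tan β), with a the upslope contributing area per unit contour length and β the local slope. A minimal sketch, assuming precomputed slope and UCA rasters:

```python
# Topographic (wetness) index: TWI = ln(a / tan(beta)).
import numpy as np

def topographic_index(upslope_area, slope_rad, eps=1e-6):
    """upslope_area: contributing area per unit contour length (m);
    slope_rad: local slope angle (radians); eps guards flat cells."""
    return np.log(upslope_area / np.maximum(np.tan(slope_rad), eps))
```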
Abstract:
This article aims to raise physicians' awareness of the early semiology of schizophrenic disorders. We discuss some of the basic symptoms described in the BSABS manual (Bonn Scale for the Assessment of Basic Symptoms) and how to investigate them, notably perceptual, motor and coenesthetic disturbances. Basic symptoms provide keys to understanding the etiopathogenic mechanisms of schizophrenia and are valuable diagnostic indicators, as well as important elements in the understanding and long-term follow-up of schizophrenic patients, particularly in establishing a good therapeutic alliance and in preventing relapse. Even though they cannot be rated as reliably as florid psychotic symptoms, their clinical validity and predictive value are good.
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches beyond the local scale still represents a major challenge, yet is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure that is specifically targeted towards larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, my algorithm links the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. I first illustrate the application of this novel data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of the hydraulic and electrical conductivities and spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of this method is tested and verified by performing and comparing flow and transport simulations through the original and simulated hydraulic conductivity fields. The corresponding results indicate that the proposed data integration procedure does indeed allow for obtaining faithful estimates of the larger-scale hydraulic conductivity structure and reliable predictions of the transport characteristics over medium- to regional-scale distances. The approach is then applied to a corresponding field scenario consisting of collocated high-resolution measurements of the electrical conductivity, as measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, as estimated from electromagnetic flowmeter and slug test measurements, in combination with spatially exhaustive low-resolution electrical conductivity estimates obtained from surface-based electrical resistivity tomography (ERT). The corresponding results indicate that the newly developed data integration approach is indeed capable of adequately capturing both the small-scale heterogeneity and the larger-scale trend of the prevailing hydraulic conductivity field. The results also indicate that this novel data integration approach is remarkably flexible and robust, and hence can be expected to be applicable to a wide range of geophysical and hydrological data at all scale ranges. In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems, in order to allow for a more accurate and realistic quantification of the uncertainty associated with the inferred models. Focusing on a series of pertinent crosshole georadar tomographic examples, I investigate two classes of geostatistical resampling strategies with regard to their ability to efficiently and accurately generate independent realizations from the Bayesian posterior distribution. The corresponding results indicate that, despite its popularity, sequential resampling is rather inefficient at drawing independent posterior samples for realistic synthetic case studies, notably for the practically common and important scenario of pronounced spatial correlation between model parameters. To address this issue, I have developed a new gradual-deformation-based perturbation approach, which is flexible with regard to the number of model parameters as well as the perturbation strength. Compared to sequential resampling, this newly proposed approach proves highly effective in decreasing the number of iterations required for drawing independent samples from the Bayesian posterior distribution.
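The gradual-deformation proposal named in the conclusion can be sketched as follows (a minimal illustration under the usual Gaussian-prior assumption, not the thesis code): combining the current field with an independent prior draw through an angle theta leaves the Gaussian prior invariant, and theta tunes the perturbation strength.

```python
# Gradual-deformation proposal for MCMC over a Gaussian random field.
import numpy as np

def gradual_deformation(m_current, sample_prior, theta=0.1):
    """m(theta) = m_current*cos(theta) + m_new*sin(theta).
    sample_prior(): independent draw from the same Gaussian prior;
    small theta gives a small, spatially correlated perturbation
    of the whole field, so prior structure is preserved exactly."""
    m_new = sample_prior()
    return m_current * np.cos(theta) + m_new * np.sin(theta)
```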
Abstract:
Aim: People suffering from mental illness are exposed to stigma, yet few tools are available to assess stigmatization as perceived from the patient's perspective. The aim of this study is to adapt and validate a French version of the Stigma Scale (King, 2007). This self-report questionnaire has a three-factor structure: discrimination, disclosure and positive aspects of mental illness. The discrimination subscale refers to perceived negative reactions by others; the disclosure subscale refers mainly to managing disclosure to avoid discrimination; and the positive aspects subscale taps into how patients become more accepting of, and more understanding toward, their illness. Method: In a first step, the internal consistency, convergent validity and test-retest reliability of the French adaptation of the 28-item scale were assessed in a sample of 183 patients. Results of confirmatory factor analyses (CFA) did not confirm the hypothesized structure. In light of the failed attempts to validate the original version, an alternative 9-item short form of the Stigma Scale, preserving the three-factor structure of the original model, was developed from exploratory factor analyses in the first sample and cross-validated in a new sample of 234 patients. Results: CFA did not confirm that the data fitted the three-factor model of the 28-item Stigma Scale (χ²/df = 2.02, GFI = 0.77, AGFI = 0.73, RMSEA = 0.07, CFI = 0.77 and NNFI = 0.75). Cronbach's α was excellent for the discrimination (0.84) and disclosure (0.83) subscales but poor for potential positive aspects (0.46). External validity was satisfactory: the overall Stigma Scale total score correlated negatively with the Rosenberg Self-Esteem Scale score (r = -0.49), and each subscale correlated significantly with a visual analogue scale referring to the corresponding aspect of stigma (0.43 < |r| < 0.60). Intraclass correlation coefficients between 0.68 and 0.89 indicate good test-retest reliability. CFA showed that the items chosen for the short version of the Stigma Scale have the expected fit properties (χ²/df = 1.02, GFI = 0.98, AGFI = 0.98, RMSEA = 0.01, CFI = 1.0 and NNFI = 1.0). Considering the small number of items (three) in each subscale of the short version, the α coefficients for the discrimination (0.57), disclosure (0.80) and potential positive aspects (0.62) subscales are considered good. Conclusion: Our results suggest that the 9-item French short version of the Stigma Scale is a useful, reliable and valid self-report questionnaire for assessing perceived stigmatization in people suffering from mental illness. Completion time is short, and the questions are well understood and accepted by patients.
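For reference, the Cronbach's α values reported above follow the standard formula α = k/(k−1) · (1 − Σ var(item_i)/var(total)); a small, generic sketch:

```python
# Cronbach's alpha for a (n_subjects x k_items) score matrix.
import numpy as np

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of scale totals
    return k / (k - 1) * (1.0 - item_var / total_var)
```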
Abstract:
BACKGROUND: Accurate catalogs of structural variants (SVs) in mammalian genomes are necessary to elucidate the potential mechanisms that drive SV formation and to assess their functional impact. Next-generation sequencing methods for SV detection are an advance on array-based methods, but are almost exclusively limited to four basic types: deletions, insertions, inversions and copy number gains. RESULTS: By visual inspection of 100 Mbp of genome to which next-generation sequence data from 17 inbred mouse strains had been aligned, we identify and interpret 21 paired-end mapping patterns, which we validate by PCR. These paired-end mapping patterns reveal a greater diversity and complexity in SVs than previously recognized. In addition, Sanger-based sequence analysis of 4,176 breakpoints at 261 SV sites reveals additional complexity at approximately a quarter of the structural variants analyzed. We find micro-deletions and micro-insertions at SV breakpoints, ranging from 1 to 107 bp, and SNPs that extend breakpoint micro-homology and may catalyze SV formation. CONCLUSIONS: An integrative approach using experimental analyses to train computational SV calling is essential for the accurate resolution of the architecture of SVs. We find considerable complexity in SV formation; about a quarter of SVs in the mouse are composed of a complex mixture of deletion, insertion, inversion and copy number gain. Computational methods can be adapted to identify most paired-end mapping patterns.
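The four basic SV types mentioned in the background have well-known paired-end signatures; the toy classifier below (my illustration, far simpler than the 21 patterns the authors catalog) shows the idea for a standard forward-reverse library:

```python
# Toy paired-end signature classifier for basic SV types.
def classify_pair(orientation, insert_size, mean, sd):
    """orientation: mate strands, e.g. '+-' for a concordant FR pair;
    insert_size: observed mapped distance; mean/sd: library statistics."""
    if orientation == "+-":
        if insert_size > mean + 3 * sd:
            return "deletion"            # mates map too far apart
        if insert_size < mean - 3 * sd:
            return "insertion"           # mates map too close together
        return "concordant"
    if orientation in ("++", "--"):
        return "inversion"               # one mate is flipped
    if orientation == "-+":
        return "tandem duplication"      # everted (outward-facing) pair
    return "unknown"
```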
Abstract:
The protein sequence deduced from the open reading frame of a human placental cDNA encoding a cAMP-responsive enhancer (CRE)-binding protein (CREB-327) has structural features characteristic of several other transcriptional transactivator proteins, including jun, fos, C/EBP, myc, and CRE-BP1. Results of Southwestern analysis of nuclear extracts from several different cell lines show that there are multiple CRE-binding proteins, which vary in size in cell lines derived from different tissues and animal species. To examine the molecular diversity of CREB-327 and related proteins at the nucleic acid level, we used labeled cDNAs from human placenta that encode two different CRE-binding proteins (CREB-327 and CRE-BP1) to probe Northern and Southern blots. Both probes hybridized to multiple fragments on Southern blots of genomic DNA from various species. In contrast, when a human placental c-jun probe was hybridized to the same blot, a single fragment was detected in most cases, consistent with the intronless nature of the human c-jun gene. The CREB-327 probe hybridized to multiple mRNAs derived from human placenta, ranging in size from 2 to 9 kilobases, whereas the CRE-BP1 probe identified a single 4-kilobase mRNA. Sequence analyses of several overlapping human genomic cosmid clones containing CREB-327 sequences, in conjunction with polymerase chain reaction, indicate that the CREB-327/341 cDNAs are composed of at least eight or nine exons, and analyses of human placental cDNAs provide direct evidence for at least one alternatively spliced exon. Analyses of mouse/hamster-human hybridoma DNAs by Southern blotting and polymerase chain reaction localize the CREB-327/341 gene to human chromosome 2. The results indicate that there is a dichotomy of CREB-like proteins: those related by overall structure and DNA-binding specificity, and those related by close similarities of primary sequence.
Oral cancer treatments and adherence: medication event monitoring system assessment for capecitabine
Abstract:
Background: Oncological treatments are traditionally administered intravenously by qualified personnel. Oral formulations, which are developing rapidly, are preferred by patients and facilitate administration; however, they may increase non-adherence. In this ongoing study, 4 common oral chemotherapeutics are given to 50 patients (inclusion still in progress) divided into 4 groups. The aim is to evaluate adherence and to offer these patients interdisciplinary support with the joint help of physicians and pharmacists. We present here the results for capecitabine. Materials and Methods: The final goal is to evaluate adherence in 50 patients split into 4 groups according to oral treatment (letrozole/exemestane, imatinib/sunitinib, capecitabine and temozolomide), using persistence and quality of execution as parameters. These parameters are evaluated using a medication event monitoring system (MEMS®) in addition to routine oncological visits and semi-structured interviews. Patients were monitored for the entire duration of treatment, up to a maximum of 1 year. Patient satisfaction was assessed at the end of the monitoring period using a standardized questionnaire. Results: The capecitabine group included 2 women and 8 men with a median age of 55 years (range: 36−77 years), monitored for an average duration of 100 days (range: 5−210 days). Persistence was 98% and quality of execution 95%. Five patients underwent cyclic treatment (2 weeks out of 3) and 5 patients continuous treatment. Toxicities higher than grade 1 were grade 2−3 hand-foot syndrome in 1 patient and grade 3 acute coronary syndrome in 1 patient, both without impact on adherence. Patients were satisfied with the interviews conducted during the study (57% useful, 28% very useful, 15% useless) and successfully integrated the MEMS® into their daily lives (57% very easily, 43% easily), according to the questionnaire results obtained at the end of the monitoring period. Conclusion: The persistence and quality of execution observed in our capecitabine group were excellent and better than expected from previously published studies. The interdisciplinary approach allowed us to better identify and help patients with toxicities to maintain adherence. Overall, patients were satisfied with the interdisciplinary follow-up. Longer follow-up will allow a better evaluation of our method and its impact. Interpretation of the results of patients in the other groups of this ongoing trial will provide information for a more detailed analysis.
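The two adherence parameters used above, persistence and quality of execution, can be made concrete with a small sketch (illustrative definitions consistent with the abstract, not the study's software; function and variable names are hypothetical):

```python
# Persistence: fraction of the monitoring period before discontinuation.
# Execution: fraction of monitored days with the prescribed MEMS openings.
from datetime import date

def persistence(first_dose: date, last_dose: date, end: date) -> float:
    return (last_dose - first_dose).days / (end - first_dose).days

def execution(days_correct: int, days_expected: int) -> float:
    return days_correct / days_expected

# Example: persistence(date(2011, 1, 1), date(2011, 4, 9), date(2011, 4, 11))
```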
Abstract:
BACKGROUND: The Complete Arabidopsis Transcript MicroArray (CATMA) initiative combines the efforts of laboratories in eight European countries to deliver gene-specific sequence tags (GSTs) for the Arabidopsis research community. The CATMA initiative offers the power and flexibility to regularly update the GST collection according to evolving knowledge about the gene repertoire. These GST amplicons can easily be reamplified and shared, subsets can be picked at will to print dedicated arrays, and the GSTs can be cloned and used for other functional studies. This ongoing initiative has already produced approximately 24,000 GSTs that have been made publicly available for spotted microarray printing and RNA interference. RESULTS: GSTs from the CATMA version 2 repertoire (CATMAv2, created in 2002) were mapped onto the gene models from two independent Arabidopsis nuclear genome annotation efforts, TIGR5 and PSB-EuGène, to consolidate a list of genes targeted by previously designed CATMA tags. A total of 9,027 gene models were not tagged by any amplified CATMAv2 GST, and 2,533 amplified GSTs were no longer predicted to tag an updated gene model. To validate the efficacy of the GST mapping criteria and design rules, the predicted and experimentally observed hybridization characteristics associated with GST features were correlated in transcript profiling datasets obtained with the CATMAv2 microarray, confirming the reliability of this platform. To complete the CATMA repertoire, all 9,027 gene models for which no GST had yet been designed were processed with an adjusted version of the Specific Primer and Amplicon Design Software (SPADS). A total of 5,756 novel GSTs were designed and amplified by PCR from genomic DNA. Together with the pre-existing GST collection, this new addition constitutes the CATMAv3 repertoire. It comprises 30,343 unique amplified sequences that tag 24,202 and 23,009 protein-encoding nuclear gene models in the TAIR6 and EuGène genome annotations, respectively. To cover the remaining untagged genes, we identified 543 additional GSTs using less stringent design criteria and designed 990 sequence tags matching multiple members of gene families (Gene Family Tags, or GFTs). These latter 1,533 features constitute the CATMAv4 addition. CONCLUSION: To update the CATMA GST repertoire, we designed 7,289 additional sequence tags, bringing the total number of tagged TAIR6-annotated Arabidopsis nuclear protein-coding genes to 26,173. This resource is used both for the production of spotted microarrays and for the large-scale cloning of hairpin RNA silencing vectors. All information about the updated CATMA repertoire is available through the CATMA database at http://www.catma.org.
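Deciding whether an amplicon "tags" a gene model is, at its core, an interval-overlap test; a toy sketch follows (my illustration; the real CATMA mapping criteria are stricter, and min_overlap is a made-up threshold):

```python
# Toy check: does a GST amplicon overlap a gene model enough to tag it?
def tags_gene(gst_start, gst_end, gene_start, gene_end, min_overlap=150):
    """Coordinates on the same chromosome, half-open intervals in bp."""
    overlap = min(gst_end, gene_end) - max(gst_start, gene_start)
    return overlap >= min_overlap
```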
Abstract:
Initial topography and inherited structural discontinuities are known to play a dominant role in rock slope stability. Previous 2-D physical modeling results demonstrated that, even though only a few preexisting fractures are activated or propagated during gravitational failure, these heterogeneities strongly influence the mobilized volume and its kinematics. The question we address in the present study is whether such a result is also observed in 3-D. As in the previous 2-D models, we examine a geologically stable model configuration based upon the well-documented landslide at Randa, Switzerland. The 3-D models consisted of a homogeneous material in which several fracture zones were introduced in order to study simplified but realistic configurations of discontinuities (i.e. based on a natural example rather than a parametric study). Results showed that the type of gravitational failure (deep-seated landslide or sequential failure) and the resulting evolution of slope morphology result from the interplay of initial topography and inherited preexisting fractures (orientation and density). The three main results are: i) the initial topography exerts a strong control on gravitational slope failure; indeed, in each tested configuration (even the isotropic one without fractures) the model is affected by a rock slide; ii) the number of simulated fracture sets greatly influences the mobilized volume and its kinematics; and iii) the failure zone involved in the 1991 event is smaller than that produced by the analog modeling. This may indicate that the zone mobilized in 1991 is potentially only part of a larger deep-seated landslide and/or of a wider deep-seated gravitational slope deformation.
Abstract:
Three novel members of the Xenopus nuclear hormone receptor superfamily have been cloned. They are related to each other and similar to the group of receptors that includes those for thyroid hormones, retinoids, and vitamin D3. Their transcriptional activity is regulated by agents causing peroxisome proliferation and carcinogenesis in rodent liver. All three Xenopus receptors activate the promoter of the acyl coenzyme A oxidase gene, which encodes the key enzyme of peroxisomal fatty acid beta-oxidation, via a cognate response element that has been identified. Therefore, peroxisome proliferators may exert their hypolipidemic effects through these receptors, which stimulate the peroxisomal degradation of fatty acids. Finally, the multiplicity of these receptors suggests the existence of hitherto unknown cellular signaling pathways for xenobiotics and putative endogenous ligands.