132 results for number fields
Abstract:
BACKGROUND AND PURPOSE: Carotid artery stenting (CAS) is associated with a higher risk of both hemodynamic depression and new ischemic brain lesions on diffusion-weighted imaging than carotid endarterectomy (CEA). We assessed whether the occurrence of hemodynamic depression is associated with these lesions in patients with symptomatic carotid stenosis treated by CAS or CEA in the randomized International Carotid Stenting Study (ICSS)-MRI substudy. METHODS: The number and total volume of new ischemic lesions on diffusion-weighted imaging 1 to 3 days after CAS or CEA were measured in the ICSS-MRI substudy. Hemodynamic depression was defined as periprocedural bradycardia, asystole, or hypotension requiring treatment. The number of new ischemic lesions was the primary outcome measure. We calculated risk ratios and 95% confidence intervals per treatment with Poisson regression, comparing the number of lesions in patients with or without hemodynamic depression. RESULTS: A total of 229 patients were included (122 allocated to CAS; 107 to CEA). After CAS, patients with hemodynamic depression had a mean of 13 new diffusion-weighted imaging lesions, compared with a mean of 4 in those without hemodynamic depression (risk ratio, 3.36; 95% confidence interval, 1.73-6.50). The number of lesions after CEA was too small for reliable analysis. Lesion volumes did not differ between patients with or without hemodynamic depression. CONCLUSIONS: In patients treated by CAS, periprocedural hemodynamic depression is associated with an excess of new ischemic lesions on diffusion-weighted imaging. The findings support the hypothesis that hypoperfusion increases the susceptibility of the brain to embolism. CLINICAL TRIAL REGISTRATION URL: http://www.controlled-trials.com. Unique identifier: ISRCTN25337470.
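As a rough illustration of the analysis described above, the sketch below fits a Poisson regression of lesion counts on hemodynamic-depression status and exponentiates the coefficient to obtain a rate ("risk") ratio with its 95% confidence interval. The data frame, variable names, and counts are invented for illustration; this is not the ICSS-MRI data or analysis code.

```python
# Illustrative sketch only: toy counts, not the ICSS-MRI trial data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# One row per CAS-treated patient: number of new DWI lesions and
# hemodynamic-depression status (1 = present, 0 = absent).
df = pd.DataFrame({
    "lesions": [13, 9, 15, 11, 4, 3, 5, 2, 6, 4],
    "hd":      [1,  1,  1,  1, 0, 0, 0, 0, 0, 0],
})

# Poisson regression of lesion counts on hemodynamic depression; the
# exponentiated coefficient is the rate ratio.
fit = smf.glm("lesions ~ hd", data=df, family=sm.families.Poisson()).fit()

rr = np.exp(fit.params["hd"])
ci_low, ci_high = np.exp(fit.conf_int().loc["hd"])
print(f"RR = {rr:.2f}, 95% CI ({ci_low:.2f}-{ci_high:.2f})")
```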
Abstract:
Copy number variation (CNV) of DNA segments has recently gained considerable interest as a source of genetic variation likely to play a role in phenotypic diversity and evolution. Much effort has been put into the identification and mapping of regions that vary in copy number among seemingly normal individuals, both in humans and in a number of model organisms, using both bioinformatic and hybridization-based methods. Synteny studies suggest the existence of CNV hotspots in mammalian genomes, often in connection with regions of segmental duplication. CNV alleles can be in equilibrium within a population, but can also arise de novo between generations, illustrating the highly dynamic nature of these regions. A small number of studies have assessed the effect of CNV on single loci; at the genome-wide scale, however, the functional impact of CNV remains poorly studied. We have explored the influence of CNV on gene expression, first using the Williams-Beuren syndrome (WBS)-associated deletion as a model, and second at the genome-wide scale in inbred mouse strains. We found that the WBS deletion influences the expression levels not only of the hemizygous genes, but also of the euploid genes mapping nearby. Consistently, on a genome-wide scale we observe that CNV genes are expressed at more variable levels than genes that do not vary in copy number. Likewise, CNVs influence the relative expression levels of genes that map to the flanks of the genome rearrangements, thus globally influencing tissue transcriptomes. Further studies are warranted to complete the cataloguing and fine mapping of CNV regions, as well as to elucidate the different mechanisms by which CNVs influence gene expression.
Lay summary: Many diseases are caused by a genetic defect. Among the types of mutations are the loss (deletion) of a part of our genome or its duplication. Although the anomalies associated with certain diseases are known, the molecular mechanisms by which these rearrangements of our genetic material cause disease remain poorly understood. We therefore investigated the regulation of genes in regions prone to deletion or duplication. In this work, we showed that deletions and duplications influence the regulation of nearby genes, and that these changes occur in several organs.
Abstract:
Several authors have demonstrated an increased number of mitotic figures in breast cancer resection specimens when compared with biopsy material. This has been ascribed to a sampling artifact where biopsies are (i) either too small to allow formal mitotic figure counting or (ii) not necessarily taken from the proliferating tumor periphery. Herein, we propose a different explanation for this phenomenon. Biopsy and resection material of 52 invasive ductal carcinomas was studied. We counted mitotic figures in 10 representative high power fields and quantified MIB-1 immunohistochemistry by visual estimation, counting and image analysis. We found that mitotic figures were elevated by more than three-fold on average in resection specimens over biopsy material from the same tumors (20±6 vs 6±2 mitoses per 10 high power fields, P=0.008), and that this resulted in a relative diminution of post-metaphase figures (anaphase/telophase), which made up 7% of all mitotic figures in biopsies but only 3% in resection specimens (P<0.005). At the same time, the percentages of MIB-1 immunostained tumor cells among total tumor cells were comparable in biopsy and resection material, irrespective of the mode of MIB-1 quantification. Finally, we found no association between the size of the biopsy material and the relative increase of mitotic figures in resection specimens. We propose that the increase in mitotic figures in resection specimens and the significant shift towards metaphase figures is not due to a sampling artifact, but reflects ongoing cell cycle activity in the resected tumor tissue due to fixation delay. The dwindling energy supply will eventually arrest tumor cells in metaphase, where they are readily identified by the diagnostic pathologist. Taken together, we suggest that the rapidly fixed biopsy material better represents true tumor biology and should be privileged as a predictive marker of putative response to cytotoxic chemotherapy.
Abstract:
The age of the patient is of prime importance when assessing the radiological risk to patients due to medical X-ray exposures and the total detriment to the population due to radiodiagnostics. In order to take into account the age-specific radiosensitivity, three age groups are considered: children, adults and the elderly. In this work, the relative number of examinations carried out on paediatric and geriatric patients, compared with adult patients, is established for radiodiagnostics as a whole, for dental and medical radiology, for 8 radiological modalities, as well as for 40 types of X-ray examinations. The relative numbers of X-ray examinations are determined from the corresponding age distributions of patients and that of the general population. Two broad groups of X-ray examinations may be defined. Group A comprises conventional radiography, fluoroscopy and computed tomography; for this group a paediatric patient undergoes half the number of examinations of an adult, and a geriatric patient undergoes 2.5 times more. Group B comprises angiography and interventional procedures; for this group a paediatric patient undergoes one-fourth the number of examinations carried out on an adult, and a geriatric patient undergoes five times more.
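The per-capita relative frequencies described above can be computed from the two age distributions mentioned in the abstract. The sketch below shows one plausible way to do this, with invented shares; the normalization to the adult value is an assumption about how the comparison is expressed, not the paper's code.

```python
# Illustrative sketch: invented shares, not the survey data of the paper.
# The per-capita examination frequency of an age group is its share of all
# examinations divided by its share of the general population; we then
# normalize to the adult value (an assumption about the comparison).

def relative_frequency(exam_share: dict, population_share: dict) -> dict:
    per_capita = {g: exam_share[g] / population_share[g] for g in exam_share}
    return {g: round(per_capita[g] / per_capita["adult"], 2) for g in per_capita}

# Toy shares for a Group A modality (e.g., conventional radiography).
exam_share = {"paediatric": 0.08, "adult": 0.62, "geriatric": 0.30}
population_share = {"paediatric": 0.20, "adult": 0.65, "geriatric": 0.15}

print(relative_frequency(exam_share, population_share))
# -> {'paediatric': 0.42, 'adult': 1.0, 'geriatric': 2.1}
```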
Abstract:
Background: Simultaneous polydrug use (SPU) may represent a greater incremental risk factor for human health than concurrent polydrug use (CPU). However, few studies have examined these patterns of use in relation to health issues, particularly with regard to the number of drugs used. Methods: In the present study, we analyzed data from a representative sample of 5734 young Swiss males from the Cohort Study on Substance Use Risk Factors. Exposure to drugs (i.e., alcohol, tobacco, cannabis, and 15 other illicit drugs), as well as mental, social and physical factors, were studied through regression analysis. Results: We found that individuals engaging in CPU and SPU followed the known stages of drug use, involving initial experiences with licit drugs (e.g., alcohol and tobacco), followed by use of cannabis and then other illicit drugs. In this regard, two classes of illicit drugs were identified: first, uppers, hallucinogens and sniffed drugs; and second, "harder" drugs (ketamine, heroin, and crystal meth), which were consumed only by polydrug users who were already taking numerous drugs. Moreover, we observed an association between the number of drugs used simultaneously and social issues (i.e., social consequences and aggressiveness). In fact, the more often the participants simultaneously used substances, the more likely they were to experience social problems. In contrast, we did not find any relationship between SPU and depression, anxiety, health consequences, or health. Conclusions: We identified some associations with SPU that were independent of CPU. Moreover, we found that the number of concurrently used drugs can be a strong factor associated with mental and physical health, although their simultaneous use may not significantly contribute to this association. Finally, the negative effects related to the use of one substance might be counteracted by the use of an additional substance.
Abstract:
Random mating is the null model central to population genetics. One assumption behind random mating is that individuals mate an infinite number of times. This is obviously unrealistic. Here we show that when each female mates a finite number of times, the effective size of the population is substantially decreased.
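A minimal simulation sketch of the effect claimed above: estimate the variance of the one-generation allele-frequency change when each female mates with a finite number of males, and convert it to an effective size via the standard Wright-Fisher relation Var(Δp) ≈ p(1−p)/(2Ne). The mating scheme and all parameter values are illustrative assumptions, not the authors' model.

```python
# Illustrative Monte Carlo sketch, not the authors' analytical model: the
# mating scheme and parameters are assumptions chosen for demonstration.
import numpy as np

rng = np.random.default_rng(0)

def var_dp(n_females=50, n_males=50, matings=1, offspring=2, p0=0.5, reps=500):
    """Variance of the one-generation change in allele frequency at a
    neutral biallelic locus when each female mates with `matings` males."""
    dps = []
    for _ in range(reps):
        fem = rng.binomial(2, p0, n_females)   # maternal diploid genotypes
        mal = rng.binomial(2, p0, n_males)     # paternal diploid genotypes
        p_par = (fem.sum() + mal.sum()) / (2.0 * (n_females + n_males))
        kid_alleles = []
        for f in range(n_females):
            # A finite set of mates; every offspring's sire comes from it.
            mates = rng.choice(n_males, size=matings, replace=False)
            for _ in range(offspring):
                sire = rng.choice(mates)
                kid_alleles.append(rng.binomial(1, fem[f] / 2.0) +
                                   rng.binomial(1, mal[sire] / 2.0))
        dps.append(np.mean(kid_alleles) / 2.0 - p_par)
    return np.var(dps)

for k in (1, 2, 10):
    # Wright-Fisher relation Var(dp) ~ p(1-p) / (2 Ne), with p = 0.5.
    ne = 0.25 / (2.0 * var_dp(matings=k))
    print(f"matings per female = {k:2d}: rough Ne estimate = {ne:.0f}")
```

With few matings per female, all of a female's offspring share the same few sires, which inflates the variance in male reproductive success and drives the Ne estimate down, consistent with the abstract's claim.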
Abstract:
Due to the growing use of biometric technologies in our modern society, spoofing attacks are becoming a serious concern. Many solutions have been proposed to detect the use of fake "fingerprints" on an acquisition device. In this paper, we propose to take advantage of intrinsic features of friction ridge skin: pores. The aim of this study is to investigate the potential of using pores to detect spoofing attacks. Results show that the use of pores is a promising approach. Four major observations were made. First, results confirmed that the reproduction of pores on fake "fingerprints" is possible. Second, the distribution of the total number of pores between fake and genuine fingerprints cannot be discriminated. Third, the difference in pore quantities between a query image and a reference image (genuine or fake) can be used as a discriminating factor in a linear discriminant analysis. In our sample, the observed error rates were as follows: 45.5% false positives (the fake passed the test) and 3.8% false negatives (a genuine print was rejected). Finally, the performance is improved by using the difference in pore quantity between a distorted query fingerprint and a non-distorted reference fingerprint. With this approach, the error rates improved to a 21.2% false acceptance rate and an 8.3% false rejection rate.
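A minimal sketch of the kind of classifier described above: a linear discriminant analysis on a single feature, the difference in pore counts between a query image and a reference image. All counts and distributions below are invented; only the overall approach follows the abstract.

```python
# Illustrative sketch: invented pore counts, not the study's data; only the
# idea (LDA on the query-minus-reference pore-count difference) follows it.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)

# Scalar feature per presentation attempt: pore count in the query image
# minus pore count in the reference image.
genuine_diff = rng.normal(loc=-2.0, scale=6.0, size=100)
fake_diff = rng.normal(loc=-15.0, scale=8.0, size=100)  # fakes reproduce fewer pores

X = np.concatenate([genuine_diff, fake_diff]).reshape(-1, 1)
y = np.array([0] * 100 + [1] * 100)                     # 0 = genuine, 1 = fake

clf = LinearDiscriminantAnalysis().fit(X, y)
pred = clf.predict(X)

far = np.mean(pred[y == 1] == 0)   # false acceptance: fake passes as genuine
frr = np.mean(pred[y == 0] == 1)   # false rejection: genuine print rejected
print(f"FAR = {far:.1%}, FRR = {frr:.1%}")
```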
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches beyond the local scale still represents a major challenge, yet is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure that is specifically targeted towards larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, my algorithm links the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. I first illustrate the application of this novel data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of the hydraulic and electrical conductivities and spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of this method is tested and verified by performing and comparing flow and transport simulations through the original and simulated hydraulic conductivity fields. The corresponding results indicate that the proposed data integration procedure does indeed allow for obtaining faithful estimates of the larger-scale hydraulic conductivity structure and reliable predictions of the transport characteristics over medium- to regional-scale distances. The approach is then applied to a corresponding field scenario consisting of collocated high-resolution measurements of the electrical conductivity, as measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, as estimated from electromagnetic flowmeter and slug test measurements, in combination with spatially exhaustive low-resolution electrical conductivity estimates obtained from surface-based electrical resistivity tomography (ERT). The corresponding results indicate that the newly developed data integration approach is indeed capable of adequately capturing both the small-scale heterogeneity and the larger-scale trend of the prevailing hydraulic conductivity field. The results also indicate that this novel data integration approach is remarkably flexible and robust, and hence can be expected to be applicable to a wide range of geophysical and hydrological data at all scale ranges.
In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems, in order to allow for a more accurate and realistic quantification of the uncertainty associated with the thus inferred models. Focusing on a series of pertinent crosshole georadar tomographic examples, I investigated two classes of geostatistical resampling strategies with regard to their ability to efficiently and accurately generate independent realizations from the Bayesian posterior distribution. The corresponding results indicate that, despite its popularity, sequential resampling is rather inefficient at drawing independent posterior samples for realistic synthetic case studies, notably for the practically common and important scenario of pronounced spatial correlation between model parameters. To address this issue, I have developed a new gradual-deformation-based perturbation approach, which is flexible with regard to the number of model parameters as well as the perturbation strength. Compared to sequential resampling, this newly proposed approach proved to be highly effective in decreasing the number of iterations required for drawing independent samples from the Bayesian posterior distribution.
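A minimal sketch of a gradual-deformation proposal of the kind named above, assuming a zero-mean multivariate Gaussian prior on the model parameters: mixing the current realization with an independent one using cos/sin weights preserves the prior covariance exactly, and the angle theta controls the perturbation strength. This is the generic method, not the thesis implementation.

```python
# Minimal sketch of a generic gradual-deformation proposal for MCMC, assuming
# a zero-mean multivariate Gaussian prior; this is not the thesis code.
import numpy as np

rng = np.random.default_rng(42)

def gradual_deformation(m_current, m_independent, theta):
    """If z1, z2 ~ N(0, C) are independent, then
    z1*cos(theta) + z2*sin(theta) ~ N(0, C) as well, so the proposal
    preserves the prior exactly; small theta gives a small perturbation."""
    return m_current * np.cos(theta) + m_independent * np.sin(theta)

# Toy 1-D Gaussian prior: exponential covariance on a 200-cell grid.
x = np.arange(200.0)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 20.0)
L = np.linalg.cholesky(C + 1e-10 * np.eye(200))
sample_prior = lambda: L @ rng.standard_normal(200)

m = sample_prior()
for _ in range(10):
    proposal = gradual_deformation(m, sample_prior(), theta=0.2)
    # ... evaluate the likelihood of `proposal` and accept/reject here ...
    m = proposal
```

The single tuning parameter theta trades off acceptance rate against mixing speed, which is one way to read the abstract's remark that the approach is flexible with regard to perturbation strength.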
Abstract:
In this paper, we study the average crossing number of equilateral random walks and polygons. We show that the mean average crossing number ⟨ACN⟩ of all equilateral random walks of length n is of the form (3/16) n ln n + O(n). A similar result holds for equilateral random polygons. These results are confirmed by our numerical studies. Furthermore, our numerical studies indicate that when random polygons of length n are divided into individual knot types K, the ⟨ACN⟩ for each knot type K can be described by a function of the form a (n − n0) ln(n − n0) + b (n − n0) + c, where a, b and c are constants depending on K and n0 is the minimal number of segments required to form K. The ⟨ACN⟩ profiles of different knot types diverge from each other, with more complex knots showing higher ⟨ACN⟩ than less complex knots. Moreover, the profiles intersect with the ⟨ACN⟩ profile of all closed walks. These points of intersection define the equilibrium length of K, i.e., the chain length at which a statistical ensemble of configurations with given knot type K (upon cutting, equilibration and reclosure to a new knot type) does not show a tendency to increase or decrease its ⟨ACN⟩. This concept of equilibrium length seems to be universal, and applies also to other length-dependent observables for random knots, such as the mean radius of gyration Rg.
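A small Monte Carlo sketch of the quantity studied above, using its definition: the average crossing number is the number of crossings seen in a planar projection, averaged over uniformly random projection directions. The walk generator and the toy parameters are illustrative, not the paper's code.

```python
# Illustrative sketch of the definition of the average crossing number:
# crossings of a planar projection, averaged over random directions.
import numpy as np

rng = np.random.default_rng(7)

def equilateral_walk(n):
    """n unit steps with independent, uniformly random directions."""
    z = rng.uniform(-1.0, 1.0, n)
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    r = np.sqrt(1.0 - z**2)
    steps = np.column_stack([r * np.cos(phi), r * np.sin(phi), z])
    return np.vstack([np.zeros(3), np.cumsum(steps, axis=0)])

def crosses(p1, p2, q1, q2):
    """Proper intersection of 2-D segments p1p2 and q1q2 (generic position)."""
    def orient(a, b, c):
        return np.sign((b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0]))
    return (orient(p1, p2, q1) != orient(p1, p2, q2) and
            orient(q1, q2, p1) != orient(q1, q2, p2))

def acn_estimate(vertices, n_dirs=40):
    total = 0
    for _ in range(n_dirs):
        d = rng.standard_normal(3)
        d /= np.linalg.norm(d)
        # Orthonormal basis whose first vector is d; project onto the rest.
        q, _ = np.linalg.qr(np.column_stack([d, np.eye(3)[:, :2]]))
        pts = vertices @ q[:, 1:]
        nseg = len(pts) - 1
        for i in range(nseg):
            for j in range(i + 2, nseg):       # skip adjacent segments
                total += crosses(pts[i], pts[i+1], pts[j], pts[j+1])
    return total / n_dirs

print(f"ACN estimate, n = 100: {acn_estimate(equilateral_walk(100)):.1f}")
```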
Abstract:
The identification of genetically homogeneous groups of individuals is a long-standing issue in population genetics. A recent Bayesian algorithm implemented in the software STRUCTURE allows the identification of such groups. However, the ability of this algorithm to detect the true number of clusters (K) in a sample of individuals when patterns of dispersal among populations are not homogeneous has not been tested. The goal of this study is to carry out such tests, using various dispersal scenarios from data generated with an individual-based model. We found that in most cases the estimated 'log probability of data' does not provide a correct estimation of the number of clusters, K. However, using an ad hoc statistic ΔK based on the rate of change in the log probability of data between successive K values, we found that STRUCTURE accurately detects the uppermost hierarchical level of structure for the scenarios we tested. As might be expected, the results are sensitive to the type of genetic marker used (AFLP vs. microsatellite), the number of loci scored, the number of populations sampled, and the number of individuals typed in each sample.
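A minimal sketch of the ΔK statistic described above: the second-order rate of change of the log probability of data between successive K values, scaled by the across-run standard deviation. The replicate log-likelihood values below are invented stand-ins for STRUCTURE output.

```python
# Minimal sketch of the DeltaK statistic; toy log-likelihoods, not real runs.
import numpy as np

rng = np.random.default_rng(3)

# ln P(data | K) for K = 1..6, ten replicate runs per K (toy values with a
# clear change in slope at K = 2).
means = {1: -5200.0, 2: -4700.0, 3: -4680.0, 4: -4675.0, 5: -4672.0, 6: -4671.0}
runs = {k: means[k] + rng.normal(0.0, 5.0 if k < 4 else 40.0, size=10)
        for k in means}

def delta_k(runs):
    """DeltaK(K) = mean(|L(K+1) - 2 L(K) + L(K-1)|) / sd(L(K))."""
    ks = sorted(runs)
    m = {k: np.mean(runs[k]) for k in ks}
    s = {k: np.std(runs[k], ddof=1) for k in ks}
    return {k: abs(m[k+1] - 2.0*m[k] + m[k-1]) / s[k] for k in ks[1:-1]}

dk = delta_k(runs)
print(dk, "-> uppermost level of structure at K =", max(dk, key=dk.get))
```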
Abstract:
To develop a comprehensive overview of copy number aberrations (CNAs) in stage-II/III colorectal cancer (CRC), we characterized 302 tumors from the PETACC-3 clinical trial. Microsatellite-stable (MSS) samples (n = 269) had 66 minimal common CNA regions, with frequent gains on 20q (72.5%), 7 (41.8%), 8q (33.1%) and 13q (51.0%) and losses on 18 (58.6%), 4q (26%) and 21q (21.6%). MSS tumors have significantly more CNAs than microsatellite-unstable (MSI) tumors; within the MSI tumors, a novel deletion of the tumor suppressor WWOX at 16q23.1 was identified (p<0.01). Focal aberrations identified by the GISTIC method confirmed amplifications of oncogenes including EGFR, ERBB2, CCND1, MET, and MYC, and deletions of tumor suppressors including TP53, APC, and SMAD4; gene expression was highly concordant with copy number aberration for these genes. Novel amplicons included putative oncogenes such as WNK1 and HNF4A, which also showed high concordance between copy number and expression. Survival analysis associated a specific patient segment, characterized by chromosome 20q gains, with improved overall survival, which might be due to higher expression of genes such as EEF1B2 and PTK6. The CNA clustering also grouped together tumors characterized by a poor-prognosis BRAF-mutant-like signature derived from mRNA data from this cohort. We further revealed non-random correlations between CNAs at unlinked loci, including positive correlations between 20q gain and 8q gain, and between 20q gain and chromosome 18 loss, consistent with co-selection of these CNAs. These results reinforce the non-random nature of somatic CNAs in stage-II/III CRC and highlight loci and genes that may play an important role in driving the development and outcome of this disease.
Abstract:
OBJECTIVE: To assess the change in non-compliant items in prescription orders following the implementation of a computerized physician order entry (CPOE) system named PreDiMed. SETTING: The department of internal medicine (39 and 38 beds) in two regional hospitals in Canton Vaud, Switzerland. METHOD: The prescription lines in 100 pre- and 100 post-implementation patients' files were classified according to three modes of administration (medicines for oral or other non-parenteral uses; medicines administered parenterally or via nasogastric tube; pro re nata (PRN), as needed) and analyzed for a number of relevant variables constitutive of medical prescriptions. MAIN OUTCOME MEASURE: The monitored variables depended on the pharmaceutical category and included mainly name of medicine, pharmaceutical form, posology and route of administration, diluting solution, flow rate and identification of prescriber. RESULTS: In 2,099 prescription lines, the total number of non-compliant items was 2,265 before CPOE implementation, or 1.079 non-compliant items per line. Two-thirds of these were due to missing information, and the remaining third to incomplete information. In 2,074 prescription lines post-CPOE implementation, the number of non-compliant items had decreased to 221, or 0.107 non-compliant items per line, a dramatic 10-fold decrease (χ² = 4615; P < 10⁻⁶). Limitations of the computerized system were the risk of erroneous items in some non-prefilled fields and ambiguity due to a field with doses shown on commercial products. CONCLUSION: The deployment of PreDiMed in two departments of internal medicine has led to a major improvement in formal aspects of physicians' prescriptions. Some limitations of the first version of PreDiMed were unveiled and are being corrected.
Abstract:
The quantity of interest for high-energy photon beam therapy recommended by most dosimetric protocols is the absorbed dose to water. Thus, ionization chambers are calibrated in terms of absorbed dose to water, which is the same quantity as that calculated by most treatment planning systems (TPS). However, when measurements are performed in a low-density medium, the presence of the ionization chamber generates a perturbation at the level of the secondary particle range. Therefore, the measured quantity is close to the absorbed dose to a volume of water equivalent to the chamber volume. This quantity is not equivalent to the dose calculated by a TPS, which is the absorbed dose to an infinitesimally small volume of water. This phenomenon can lead to an overestimation of the absorbed dose measured with an ionization chamber of up to 40% in extreme cases. In this paper, we propose a method to calculate correction factors based on Monte Carlo simulations. These correction factors are obtained as the ratio of the absorbed dose to water in a low-density medium, D̄(w,Q,V1)^low, averaged over a scoring volume V1 for a geometry where V1 is filled with the low-density medium, to the absorbed dose to water, D̄(w,Q,V2)^low, averaged over a volume V2 for a geometry where V2 is filled with water. In the Monte Carlo simulations, D̄(w,Q,V2)^low is obtained by replacing the volume of the ionization chamber by an equivalent volume of water, in accordance with the definition of the absorbed dose to water. The method is validated in two different configurations, which allows us to study the behavior of this correction factor as a function of depth in the phantom, photon beam energy, phantom density and field size.
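In symbols, the correction factor described above is the ratio of the two volume-averaged doses; the symbol k is our shorthand, not necessarily the paper's notation:

```latex
% k is our shorthand for the correction factor; the bar denotes the volume
% average described in the text.
\[
  k \;=\; \frac{\bar{D}^{\,\mathrm{low}}_{w,Q,V_1}}{\bar{D}^{\,\mathrm{low}}_{w,Q,V_2}}
\]
% Numerator: absorbed dose to water averaged over V_1, with V_1 filled by
% the low-density medium. Denominator: the same average over V_2, with the
% chamber volume replaced by water in the Monte Carlo geometry.
```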
Abstract:
In Switzerland, as in most industrialized countries, work-related stress and the resulting exhaustion have become an ever more pressing reality over recent decades. Since the middle of the last century, various scientific disciplines have tried to account for the difficulties individuals encounter in their work, with a marked predominance of causal analyses. In this doctoral study, we examined the case of a regional employment office, but from a noticeably different perspective. The psychodynamic framework used gives access to the meaning of work situations and opens up an original understanding of the mechanisms underlying mental health problems at work. This approach makes it possible to understand the complex relationships individuals maintain with their work as it is structured and organized, and to analyze their experience in terms of pleasure, suffering, defenses against suffering and repercussions on health. To this end, we used a methodology based on collective interviews, in order to encourage the workers' free expression. The investigation took place in two stages: a first series of group interviews allowed the collection of empirical data, and a second series, called restitution interviews, gave the participants the opportunity to react to the researcher's interpretation of their statements and to validate the analysis. Our results show that work, as organized within this public service institution, is considerably pathogenic, but that this is fortunately compensated by the structuring power of the helping relationship with the insured. They also show that the main sources of suffering in the participants' subjective experience of work are the unpleasant perception of a lack of recognition, of autonomy and of power over their own actions.