127 results for "Concept of angle"
Abstract:
The non-invasive evaluation of myocardial ischemia is a priority in cardiology. The preferred initial non-invasive test is exercise ECG, because of its high accessibility and low cost. Stress radionuclide myocardial perfusion imaging and stress echocardiography are now routinely performed, and new non-invasive techniques such as perfusion MRI, dobutamine stress MRI or rubidium-82 perfusion PET have recently gained acceptance in clinical practice. At the same time, increasing attention has been paid to the concept of myocardial viability in decision-making for ischemic heart failure. In this indication, MRI with late enhancement after intravenous injection of gadolinium and 18F-FDG PET have shown excellent diagnostic accuracy. This article presents these new imaging modalities and their accepted indications.
Abstract:
This Ph.D. dissertation studies the work motivation of employees in the delivery of public services. The question of work motivation in public services is not new, but it has become central for governments now facing unprecedented public debt. The objective of this research is twofold. First, we want to see whether the work motivation of employees in public services is a continuum (intrinsic and extrinsic motivations cannot coexist) or a bi-dimensional construct (intrinsic and extrinsic motivations coexist simultaneously). Research in the public administration literature has focused on the concept of public service motivation and has considered motivation to be uni-dimensional (Perry and Hondeghem 2008). However, no study has yet tackled both types of motivation, intrinsic and extrinsic, at the same time. This dissertation proposes, in Part I, a theoretical assessment and an empirical test of a global work motivational structure, using a self-constructed Swiss dataset of employees from three public services: the education sector, the security sector and the public administrative services sector. Our findings suggest that work motivation in public services is not uni-dimensional but bi-dimensional: intrinsic and extrinsic motivations coexist simultaneously and can be positively correlated (Amabile et al. 1994). Our findings show that intrinsic motivation is as important as extrinsic motivation; thus, the assumption that employees in public services are less attracted by extrinsic rewards is not confirmed for this sample. Another important finding concerns the public service motivation concept, which, as theoretically predicted, represents the major motivational dimension of employees in the delivery of public services. Second, the theory of public service motivation assumes that employees in public services engage in activities that go beyond their self-interest, but it never uses this construct as a determinant of their pro-social behavior. At the same time, several studies (Gregg et al. 2011; Georgellis et al. 2011) bring evidence about the pro-social behavior of employees in public services. However, they do not identify which type of motivation is at the origin of this behavior; they only assume an intrinsically motivated behavior. We analyze the pro-social behavior of employees in public services and use public service motivation as a determinant of their pro-social behavior. We add other determinants highlighted by the theory of pro-social behavior (Bénabou and Tirole 2006), by Le Grand (2003) and by fit theories (Besley and Ghatak 2005). We test these determinants in Part II and identify, for each sector of activity, their positive or negative impact on the pro-social behavior of Swiss employees. Contrary to expectations, we find, for this sample, that both intrinsic and extrinsic factors have a positive impact on pro-social behavior; no crowding-out effect is identified. We confirm Le Grand's (2003) hypothesis about the positive impact of the opportunity cost on pro-social behavior. Our results suggest a mix of action-oriented and output-oriented altruism among employees in public services. These results are relevant when designing incentive schemes for employees in the delivery of public services.
Abstract:
This article examines the relationship between red tape, Public Service Motivation (PSM) and a particular work outcome labelled 'resigned satisfaction'. Using data from a national survey of over 3754 public servants working at the municipal level in Switzerland, this study shows the importance of looking more closely at the concept of work satisfaction and, furthermore, of thoroughly investigating the impact of the different PSM dimensions on work outcomes. Unsurprisingly, research findings show that red tape is the most important predictor of resignation. Nevertheless, when PSM dimensions are analysed separately, results demonstrate that 'commitment to public interest/civic duty' and, to a lesser extent, 'attraction to policy-making' decrease resignation, whereas 'compassion' and 'self-sacrifice' increase it. This study thus highlights some of the negative (or undesirable) effects of PSM that have not been previously addressed in PSM literature.
Abstract:
Paul Ehrlich's inspired concept of 'magic bullets' for the cure of diseases has been revitalized by recent advances in immunology [1]. In particular, the development of cell fusion technology allowing the production of monoclonal antibodies (Mabs) with exquisite specificities [2] triggered new hopes that we may now have the perfect carrier molecules with which to deliver cytotoxic drugs [3] or toxins [4] to hidden cancer cells. This article reviews data on one aspect of the magic bullet concept, the use of radiolabelled antibodies as tracers for tumour localization. It also discusses the very recent clinical use of 131I-labelled Mabs against carcinoembryonic antigen (CEA) [5] to detect carcinoma either by conventional external photoscanning or by single photon emission computerized tomography (SPECT). This alliance of the most modern tools from immunology (Mabs) and nuclear medicine (SPECT) appears promising as a way to improve the sensitivity of 'immunoscintigraphy'. However, this approach is not yet ready for widespread clinical use.
Abstract:
In therapy for chronic posttraumatic stress disorder (PTSD), prolonged exposure (PE) to stimuli associated with an original trauma experience is considered a state-of-the-art treatment method. The present case report outlines the use of Foa and Rothbaum's (1998) manual for this type of treatment in the year-long, 40-session treatment of Caroline, an adult female victim of child sexual abuse. The manual was supplemented by Caspar's (1995, 2007) Plan Analysis technique for individualized case formulation and treatment planning, along with Caspar's concept of the Motive-Oriented Therapeutic Relationship (MOTR). As indicated by standardized, quantitative measures, by changes in the client's behavior patterns, and by the client's subjective report, the treatment was very effective. An analysis of the therapy process illustrates the importance of a combination of manual-based procedures with individualized case formulations and interventions. The case is discussed in the context of enhancing the cognitive-behavioral treatment of PTSD.
Abstract:
INTRODUCTION: Auscultatory nonmercury manual devices seem to be good alternatives to mercury sphygmomanometers in clinical and research settings, but individual internal validation of each device is time-consuming. The aim of this study was to validate a new technique capable of testing two devices simultaneously, based on the International Protocol of the European Society of Hypertension. METHODS: The concept of the new technique is to measure blood pressure alternately by two observers using a mercury sphygmomanometer and by two observers using the A&D UM-101 and Accoson Greenlight 300 devices, connected by a Y-tube to obtain simultaneous readings with both nonmercury devices. Thirty-three participants were enrolled (mean age 47.2±14.0 years). Nine sequential blood pressure measurements were performed for each participant. RESULTS: Both devices passed phase 1 using 15 participants. In phase 2.1 (n=33), out of a maximum of 99 measurements, the Accoson device produced 81/95/99 measurements within 5/10/15 mmHg for systolic blood pressure (SBP) and 87/98/99 for diastolic blood pressure (DBP). The A&D device produced 86/96/99 for SBP and 94/99/99 for DBP. In phase 2.2 (n=33), 30 participants had at least 2 of 3 SBP readings obtained with the Accoson device within 5 mmHg of the mercury device, compared with 29 of 33 participants with the A&D device. For DBP, this was 33 of 33 participants for both devices. CONCLUSION: Both nonmercury devices passed the International Protocol. The new technique of simultaneous device testing using a Y-tube represents a time-saving application of the International Protocol.
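The counts reported above (e.g. 81/95/99 measurements within 5/10/15 mmHg) are the core tally of the International Protocol. As a minimal sketch of that counting step only, assuming paired device and mercury readings are already available as arrays (the data below are made-up, not study data):

```python
# Minimal sketch of the International Protocol tally: count absolute
# device-vs-mercury differences falling within 5/10/15 mmHg bands.
import numpy as np

def within_band_counts(device_mmhg, mercury_mmhg):
    """Count paired differences within 5, 10 and 15 mmHg of the reference."""
    diff = np.abs(np.asarray(device_mmhg) - np.asarray(mercury_mmhg))
    return tuple(int(np.sum(diff <= band)) for band in (5, 10, 15))

# Synthetic example: 33 subjects x 3 comparisons = 99 SBP readings.
rng = np.random.default_rng(0)
mercury = rng.normal(130, 15, 99)
device = mercury + rng.normal(0, 4, 99)      # simulated device error
print(within_band_counts(device, mercury))   # counts within 5/10/15 mmHg
```

The full protocol additionally imposes per-band pass thresholds and the per-subject phase 2.2 rule (at least 2 of 3 readings within 5 mmHg), which are omitted here.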
Abstract:
Background: Earlier contributions have documented significant changes in sensory and attention-related endogenous event-related potential (ERP) components and in θ band oscillatory responses during working memory activation in patients with schizophrenia. In patients with first-episode psychosis, such studies are still scarce and mostly focused on auditory sensory processing. The present study aimed to explore whether subtle deficits of cortical activation are present in these patients before the decline of working memory performance. Methods: We assessed exogenous and endogenous ERPs and frontal θ event-related synchronization (ERS) in patients with first-episode psychosis and in healthy controls who successfully performed an adapted 2-back working memory task, including 2 visual n-back working memory tasks as well as oddball detection and passive fixation tasks. Results: We included 15 patients with first-episode psychosis and 18 controls in this study. Compared with controls, patients with first-episode psychosis displayed increased latencies of early visual ERPs and of the phasic θ ERS culmination peak in all conditions. However, they also showed rapid recruitment of working memory-related neural generators, even in pure attention tasks, as indicated by the decreased N200 latency and increased amplitude of sustained θ ERS in detection compared with controls. Limitations: Owing to the limited sample size, no distinction was made between patients with first-episode psychosis with positive symptoms and those with negative symptoms. Although we controlled for the global load of neuroleptics, a medication effect cannot be totally ruled out. Conclusion: The present findings support the concept of a blunted electroencephalographic response in patients with first-episode psychosis, who already recruit neural generators maximally in simple attention conditions without being able to modulate their brain activation as the complexity of working memory tasks increases.
Abstract:
Debris flows are one of the most important vectors of sediment transfer in mountainous areas. Their hydro-geomorphological behaviour is conditioned by geological, geomorphological, topographical, hydrological, climatic and anthropic factors.
European research on torrential systems has focused more on hydrological processes than on the geomorphological processes acting as debris flow triggers. Nevertheless, the identification of sediment volumes that have the potential to be mobilised in small torrential systems, as well as the recognition of the processes responsible for their mobilisation and transfer within the torrential system, is important in terms of land-use planning and natural hazard management. Moreover, a correlation between rainfall and debris flow occurrence is not always established, and a number of debris flows seem to occur when a poorly understood geomorphological threshold (the degree of channel recharge with sediment) is reached. A pragmatic methodology has been developed for mapping the sediment storages that may constitute source zones of bedload transport and debris flows, as a preliminary tool before quantifying their volumes. It is based on data directly derived from GIS analysis of high-resolution DEMs, field measurements and aerial photograph interpretation. It was conceived to estimate sediment transfer dynamics, taking into account the role of the different sediment stores in the torrential system by applying the concept of the 'sediment cascade' from a cartographic point of view. Sediment transfer processes were investigated in two small catchments in the Swiss Alps (Bruchi torrent, Blatten bei Naters, and Meretschibach torrent, Agarn). Thorough field geomorphological mapping, coupled with complementary measurements, was conducted to estimate sediment fluxes and denudation rates using various methods (reference coloured lines, wooden markers and terrestrial LiDAR). The proposed geomorphological mapping methodology is innovative in comparison with most legend systems, which are not adequate for mapping active and complex geomorphological systems such as debris flow catchments. The interest of this mapping method is that it allows the concept of the sediment cascade to be spatially implemented, but only for supply-limited systems, where debris flow occurrence is controlled by the degree of sediment recharge of the channel. The map cannot be used directly for the creation of hazard maps, which focus on deposition areas, but it is useful for the design of correction measures and the implementation of monitoring and warning systems. The second part of this work focuses on geomorphological mapping. An analysis of a sample of 146 (extracts of) maps or legend systems dating from the middle of the 20th century to 2009, produced in more than 40 different countries, was carried out. Even if this study is not exhaustive, it shows a clear renewed interest in the discipline worldwide and highlights the diversity of applications and techniques (scale, colours and symbology) used in their conception.
Abstract:
PURPOSE OF REVIEW: We present an overview of recent concepts regarding the mechanisms underlying cognitive decline associated with brain aging and neurodegeneration, from the perspective of MRI. RECENT FINDINGS: Recent findings challenge the established link between neuroimaging biomarkers of neurodegeneration and age-related or disease-related cognitive decline. Amyloid burden, white matter hyperintensities and local patterns of brain atrophy seem to have differential impacts on cognition, particularly on episodic and working memory, the most vulnerable domains in 'normal aging' and Alzheimer's disease. Studies suggesting that imaging biomarkers of neurodegeneration are independent of amyloid-β give rise to new hypotheses regarding the pathological cascade in Alzheimer's disease. Findings in patients with autosomal-dominant Alzheimer's disease confirm the notion of differential temporal trajectories of amyloid deposition and brain atrophy, adding another layer of complexity to the basic mechanisms of cognitive aging and neurodegeneration. Finally, the concept of cognitive reserve in 'supernormal aging' is questioned by evidence for the preservation of neurochemical, structural and functional brain integrity in old age, rather than recruitment of 'reserves' to maintain cognitive abilities. SUMMARY: Recent advances in clinical neuroscience, brain imaging and genetics challenge the pathophysiological hypotheses of neurodegeneration and cognitive aging that have dominated the field over the last decade, and call for reconsidering the choice of therapeutic window for early intervention.
Abstract:
General Summary. Although the chapters of this thesis address a variety of issues, the principal aim is common: to test economic ideas in an international economic context. The intention has been to supply empirical findings using the largest suitable data sets and the most appropriate empirical techniques. This thesis can roughly be divided into two parts: the first, corresponding to the first two chapters, investigates the link between trade and the environment; the second, the last three chapters, is related to economic geography issues. Environmental problems are omnipresent in the daily press nowadays, and one argument put forward is that globalisation causes severe environmental problems through the reallocation of investments and production to countries with less stringent environmental regulations. A measure of the amplitude of this undesirable effect is provided in the first part. The third and fourth chapters explore the productivity effects of agglomeration. The computed spillover effects between different sectors indicate how cluster formation might enhance productivity. The last chapter is not about how to better understand the world but how to measure it, and it was a great pleasure to work on. "The Economist" writes every week about the impressive population and economic growth observed in China and India, and everybody agrees that the world's center of gravity has shifted. But by how much, and how fast, did it shift? An answer is given in the last part, which proposes a global measure for the location of world production and allows us to visualize the results in Google Earth. A short summary of each of the five chapters is provided below. The first chapter, entitled "Unraveling the World-Wide Pollution-Haven Effect", investigates the relative strength of the pollution haven effect (PH: comparative advantage in dirty products due to differences in environmental regulation) and the factor endowment effect (FE: comparative advantage in dirty, capital-intensive products due to differences in endowments). We compute the pollution content of imports using the IPPS coefficients (for three pollutants, namely biological oxygen demand, sulphur dioxide and toxic pollution intensity, for all manufacturing sectors) provided by the World Bank, and use a gravity-type framework to isolate the two effects. Our study covers 48 countries, classified into 29 Southern and 19 Northern countries, and uses the lead content of gasoline as a proxy for environmental stringency. For North-South trade we find significant PH and FE effects going in the expected, opposite directions and of similar magnitude. However, when looking at world trade, the effects become very small because of the high share of North-North trade, for which we have no a priori expectations about the signs of these effects. Popular fears about the trade effects of differences in environmental regulations may therefore be exaggerated. The second chapter is entitled "Is Trade Bad for the Environment? Decomposing Worldwide SO2 Emissions, 1990-2000". First we construct a novel and large database containing reasonable estimates of SO2 emission intensities per unit of labor that vary across countries, periods and manufacturing sectors. Then we use these original data (covering 31 developed and 31 developing countries) to decompose worldwide SO2 emissions into the three well-known dynamic effects (scale, technique and composition).
We find that the positive scale effect (+9.5%) and the negative technique effect (-12.5%) are the main driving forces of emission changes. Composition effects between countries and sectors are smaller, both negative and of similar magnitude (-3.5% each). Given that trade matters via the composition effects, this means that trade reduces total emissions. We next construct, in a first experiment, a hypothetical world where no trade happens, i.e. each country produces its imports at home and no longer produces its exports. The difference between the actual world and this no-trade world allows us (neglecting price effects) to compute a static first-order trade effect. This effect increases total world emissions because it allows, on average, dirty countries to specialize in dirty products. However, it is smaller in 2000 (3.5%) than in 1990 (10%), in line with the negative dynamic composition effect identified in the previous exercise. We then propose a second experiment, comparing effective emissions with the maximum or minimum possible level of SO2 emissions. These hypothetical emission levels are obtained by reallocating labour across sectors within each country (under country-employment and world industry-production constraints). Using linear programming techniques, we show that emissions are 90% lower than in the worst case, but that they could still be reduced by another 80% if emissions were minimized. The findings from this chapter agree with those of chapter one in the sense that trade-induced composition effects do not seem to be the main source of pollution, at least in the recent past. Turning to the economic geography part of this thesis, the third chapter, entitled "A Dynamic Model with Sectoral Agglomeration Effects", is a short note that derives the theoretical model estimated in the fourth chapter. The derivation is directly based on the multi-regional framework of Ciccone (2002) but extends it to include sectoral disaggregation and a temporal dimension. This allows us to write present productivity formally as a function of past productivity and other contemporaneous and past control variables. The fourth chapter, entitled "Sectoral Agglomeration Effects in a Panel of European Regions", takes the final equation derived in chapter three to the data. We investigate the empirical link between density and labour productivity based on regional data (245 NUTS-2 regions over the period 1980-2003). Dynamic panel techniques allow us to control for the possible endogeneity of density and for region-specific effects. We find a positive long-run elasticity of labour productivity with respect to density of about 13%. At the sectoral level, positive cross-sector and negative own-sector externalities appear to be present in manufacturing, while financial services display strong positive own-sector effects. The fifth and last chapter, entitled "Is the World's Economic Center of Gravity Already in Asia?", computes the world's economic, demographic and geographic centers of gravity for 1975-2004 and compares them. Based on data for the largest cities in the world and using the physical concept of center of mass, we find that the world's economic center of gravity is still located in Europe, even though there is a clear shift towards Asia. To sum up, this thesis makes three main contributions.
First, it provides new estimates of orders of magnitude for the role of trade in the globalisation and environment debate. Second, it computes reliable and disaggregated elasticities for the effect of density on labour productivity in European regions. Third, it tracks, in a geometrically rigorous way, the path of the world's economic center of gravity.
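The center-of-mass idea used in the fifth chapter can be made concrete with a short sketch. The following is a minimal illustration, not the thesis's actual code: city positions are mapped to 3D Cartesian vectors on the unit sphere, averaged with economic weights, and the mean vector is projected back to latitude/longitude. The city weights below are made-up placeholders.

```python
# Hedged sketch of a "center of gravity" on the globe: a weighted mean of
# city positions in 3D Cartesian space, projected back onto the sphere.
import math

def center_of_gravity(cities):
    """cities: list of (lat_deg, lon_deg, weight). Returns (lat, lon) in degrees."""
    x = y = z = total = 0.0
    for lat, lon, w in cities:
        phi, lam = math.radians(lat), math.radians(lon)
        x += w * math.cos(phi) * math.cos(lam)
        y += w * math.cos(phi) * math.sin(lam)
        z += w * math.sin(phi)
        total += w
    x, y, z = x / total, y / total, z / total
    lat = math.degrees(math.atan2(z, math.hypot(x, y)))   # project back to sphere
    lon = math.degrees(math.atan2(y, x))
    return lat, lon

# Illustrative weights only (not the thesis data):
print(center_of_gravity([(51.5, -0.1, 3.0),    # London
                         (40.7, -74.0, 4.0),   # New York
                         (35.7, 139.7, 3.5)])) # Tokyo
```

Working in 3D and projecting back avoids the distortions of averaging raw longitudes, which is presumably why a physical center-of-mass formulation is used.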
Abstract:
The concept of the tripartite synapse suggests that astrocytes make up a functional synapse with pre- and postsynaptic neuronal elements to modulate synaptic transmission through the regulated release of neuromodulators called gliotransmitters. Release of gliotransmitters such as glutamate or D-serine has been shown to depend on Ca2+-dependent exocytosis. However, the origin (cytosolic versus vesicular) of the released gliotransmitter is still a matter of debate. The existence of Ca2+-regulated exocytosis in astrocytes has been questioned mostly because the nature of the secretory organelles that are loaded with gliotransmitters is unknown. Here we show the existence of a population of vesicles that take up and store glutamate and D-serine in astrocytes in situ. Immunoisolated glial organelles expressing synaptobrevin 2 (Sb2) display morphological and biochemical features very similar to those of synaptic vesicles. We demonstrate that these organelles not only contain and take up glutamate but also display a glia-specific transport activity for D-serine. Furthermore, we report that the uptake of D-serine is energized by an H+-ATPase present on the immunoisolated vesicles and that cytosolic chloride ions modulate the uptake of D-serine. Finally, we show that serine racemase (SR), the synthesizing enzyme for D-serine, is anchored to the membrane of glial organelles, allowing a local and efficient concentration of the gliotransmitter to be transported. We conclude that vesicles that store and release D-serine, glutamate and most likely other neuromodulators do exist in astrocytes.
Abstract:
Painful femoro-acetabular impingement symptoms, localized in the groin in flexion, adduction and internal rotation, can be explained either by a primary disease of the labrum, often post-traumatic, or, more frequently, as part of a primary or secondary femoro-acetabular dysmorphia. The kinematics of the normal hip joint depend on the peri-acetabular structures, the geometry of the joint and any pathologies that may modify either this geometry or the proprioceptive function. By combining and analyzing these parameters it is possible to describe a concept of joint centricity, an essential parameter for optimal joint function. The concept of overload is then explained as the inability of the hip to maintain its centricity during activity, which can lead to the occurrence of degenerative disorders.
Abstract:
A range of models describing metapopulations is surveyed and their implications for conservation biology are described. An overview of the use of both population genetic elements and demographic theory in metapopulation models is given. It would appear that most of the current models suffer from either the use of over-simplified demography or the avoidance of selectively important genetic factors. The scale for which predictions are made by the various models is often obscure. A conceptual framework for describing metapopulations by utilising the concept of fitness of local populations is provided and some examples are given. The expectation that any general theory, such as that of metapopulations, can make useful predictions for particular problems of conservation is examined and compared with the prevailing 'state of the art' recommendations.
Abstract:
While the incidence of sleep disorders is continuously increasing in Western societies, there is a clear demand for technologies to assess sleep-related parameters in ambulatory scenarios. The present study introduces a novel concept for an accurate sensor that measures RR intervals via the analysis of photo-plethysmographic signals recorded at the wrist. In a cohort of 26 subjects undergoing full-night polysomnography, the wrist device provided RR interval estimates in agreement with RR intervals measured from standard electrocardiographic time series, with an overall agreement between the two approaches of 0.05 ± 18 ms. The novel wrist sensor opens the door to a new generation of comfortable and easy-to-use sleep monitors.
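The agreement figure quoted above (0.05 ± 18 ms) has the form of a bias and standard deviation of paired differences, as in a Bland-Altman analysis. As a hedged illustration of how such a summary is typically computed (the study's actual pipeline is not described in the abstract), assuming two beat-aligned RR series in milliseconds:

```python
# Sketch of a Bland-Altman-style agreement summary between two RR series.
# Assumes the PPG and ECG series are already beat-aligned; real pipelines
# need beat matching and artifact rejection first.
import numpy as np

def rr_agreement(rr_ppg_ms, rr_ecg_ms):
    """Return (bias, sd, 95% limits of agreement) of the paired differences."""
    d = np.asarray(rr_ppg_ms, float) - np.asarray(rr_ecg_ms, float)
    bias, sd = d.mean(), d.std(ddof=1)
    return bias, sd, (bias - 1.96 * sd, bias + 1.96 * sd)

# Synthetic example (not study data): ECG RR near 1000 ms, PPG with jitter.
rng = np.random.default_rng(1)
rr_ecg = rng.normal(1000, 50, 500)
rr_ppg = rr_ecg + rng.normal(0.05, 18, 500)
bias, sd, loa = rr_agreement(rr_ppg, rr_ecg)
print(f"bias = {bias:.2f} ms, sd = {sd:.1f} ms, LoA = {loa}")
```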
Abstract:
Recognition and identification processes for deceased persons. Determining the identity of deceased persons is a routine task performed essentially by police departments and forensic experts. This thesis highlights the processes necessary for the proper and transparent determination of the civil identities of deceased persons. The identity of a person is defined as the establishment of a link between that person ("the source") and information pertaining to the same individual ("identifiers"). Various forms of identity can emerge, depending on the nature of the identifiers; two distinct types are retained here, namely civil identity and biological identity. The thesis examines four processes: identification by witnesses (the recognition process) and comparisons of fingerprints, dental data and DNA profiles (the identification processes). For the recognition process, the functioning of memory is examined, which helps to clarify the circumstances that may give rise to errors. To make the process more rigorous, a body presentation procedure is proposed to investigators. Before examining the other processes, three general concepts specific to forensic science are considered with regard to the identification of a deceased person, namely matter divisibility (Inman and Rudin), transfer (Locard) and uniqueness (Kirk). These concepts can be applied to the task at hand, although some require a slightly broader scope of application. A cross-comparison of common forensic fields and the identification of deceased persons reveals certain differences, including (1) reverse positioning of the source (i.e. the source is not sought from traces; rather, the identifiers are obtained from the source); (2) the need for civil identity determination in addition to the individualisation stage; and (3) a more restricted population (a closed set rather than an open one). For fingerprints, dental data and DNA, intravariability and intervariability are examined, as well as post-mortem (PM) changes in these identifiers; ante-mortem (AM) identifiers are located and AM-PM comparisons made. For DNA, it is shown that direct identifiers (taken from the person whose civil identity has been alleged) tend to lead to a determination of civil identity, whereas indirect identifiers (obtained from a close relative) lead towards a determination of biological identity. For each process, a Bayesian model is presented that includes the sources of uncertainty deemed relevant; the results of the different processes are then combined into an overall outcome and methodology. The modelling of dental data presents a specific difficulty with respect to intravariability, which is not in itself quantifiable. The concept of 'validity' is therefore suggested as a possible solution: validity draws on various parameters that have an acknowledged impact on dental intravariability. In cases where identifying deceased persons proves extremely difficult owing to the limited discriminating power of certain procedures, the Bayesian approach is of great value in providing a transparent and synthetic assessment.
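As a hedged illustration of the kind of Bayesian update described above (not the thesis's actual models, which incorporate process-specific sources of uncertainty), consider a closed set of n candidate identities with a uniform prior and a likelihood ratio LR supporting the proposition that the body is a given candidate:

```python
# Sketch of a closed-set Bayesian identification update:
# prior odds = 1/(n-1) for one candidate among n equally likely candidates;
# posterior odds = prior odds * LR; probability = odds / (1 + odds).
def posterior_identity_probability(likelihood_ratio: float, n_candidates: int) -> float:
    prior_odds = 1.0 / (n_candidates - 1)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# Example with made-up numbers: a comparison yielding LR = 10_000 within a
# closed set of 50 possible victims gives a posterior close to 1.
print(posterior_identity_probability(10_000, 50))  # ~0.995
```

This closed-set framing mirrors the restricted-population difference the abstract highlights between identifying deceased persons and open-set forensic casework.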