168 results for Reliable Theoretical Procedures
Abstract:
Antibiotic prophylaxis is commonly prescribed to patients with total arthroplasties before a dental intervention. This practice is not evidence-based, for several reasons: 1) the usual pathogens of prosthetic joint infections are not of oral origin; 2) even when given, systemic antibiotics do not completely suppress the occult bacteraemia occurring during dental intervention; and 3) humans may have up to twelve episodes of occult bacteraemia of dental origin per day. Routine antibiotic prophylaxis should be clearly distinguished from the antibiotic treatment required in case of an established oral cavity infection. Constant, optimal oral and dental hygiene is more important in terms of prevention and should be routinely recommended to every patient with a joint arthroplasty.
Abstract:
BACKGROUND: Maternal-infant transmission of hepatitis B virus (HBV) during birth carries a high risk for chronic HBV infection in infants with frequent subsequent development of chronic disease. This can be efficiently prevented by early immunization of exposed newborns. The purpose of this study was to determine the compliance with official recommendations for prevention of perinatal HBV transmission in hepatitis B surface antigen (HBsAg) exposed infants. METHODS: Records of pregnant women at 4 sites in Switzerland, admitted for delivery in 2005 and 2006, were screened for maternal HBsAg testing. In HBsAg-exposed infants, recommended procedures (postnatal active and passive immunization, completion of immunization series, and serological success control) were checked. RESULTS: Of 27,131 women tested for HBsAg, 194 (0.73%) were positive with 196 exposed neonates. Of these neonates, 143 (73%) were enrolled and 141 (99%) received simultaneous active and passive HBV immunization within 24 hours of birth. After discharge, the HBV immunization series was completed in 83%. Only 38% of children were tested for anti-HBs afterwards and protective antibody values (>100 U/L) were documented in 27% of the study cohort. No chronically infected child was identified. Analysis of hospital discharge letters revealed significant quality problems. CONCLUSIONS: Intensified efforts are needed to improve the currently suboptimal medical care in HBsAg-exposed infants. We propose standardized discharge letters, as well as reminders to primary care physicians with precise instructions on the need to complete the immunization series in HBsAg-exposed infants and to evaluate success by determination of anti-HBs antibodies after the last dose.
Abstract:
INTRODUCTION: To assess the outcome of endovascular aortic aneurysm repair (EVAR) using intravascular ultrasound (IVUS) without angiography. MATERIALS/METHODS: Eighty consecutive patients (median age 69 years (range 25-90); male 72 (90%), female 8 (10%)) underwent endovascular aneurysm repair (AAA 68 (85%), TAA 12 (15%)) using either angiography in 31/80 patients (39%) or IVUS in 49/80 patients (61%), according to the surgeon's preference. RESULTS: Hospital mortality was 2/80 (3%): 1/68 for AAA (2%), 1/12 for TAA (8%), 2/31 for angiography (7%), and 0/49 for IVUS (0.0%; NS). The median quantity of contrast medium was 190 ml (range 20-350) for angiography versus 0 ml for IVUS (p<0.01). Median X-ray exposure time was 24 min (range 9-65 min) for angiography versus 8 min (range 0-60 min) for IVUS (p<0.05). No coverage of renal or suprarenal artery orifices occurred in either group. Conversion to open surgery was necessary in 4/80 patients (5%): 1/31 for angiography (3%) and 3/49 for IVUS (6%; NS). Early endoleaks were observed in 13/80 patients (16%): 8/31 patients for angiography (26%) versus 5/49 for IVUS (10%; p<0.05). 5/13 endoleaks resolved spontaneously (39%) whereas 8/13 (61%) required additional procedures. CONCLUSIONS: IVUS is a reliable tool for EVAR. In most cases, periprocedural angiography is not necessary.
Abstract:
Invasive fungal diseases (IFDs) continue to cause considerable morbidity and mortality in patients with haematological malignancy. Diagnosis of IFD is difficult, with the sensitivity of the gold standard tests (culture and histopathology) often reported to be low, which may at least in part be due to sub-optimal sampling or subsequent handling in the routine microbiological laboratory. Therefore, a working group of the European Conference on Infections in Leukaemia (ECIL) was convened in 2009 with the task of reviewing the classical diagnostic procedures and providing recommendations for their optimal use. The recommendations were presented and approved at the ECIL-3 conference in September 2009. Although new serological and molecular tests are examined in separate papers, this review focuses on sample types, microscopy and culture procedures, antifungal susceptibility testing and imaging. The performance and limitations of these procedures are discussed, and recommendations are provided on when and how to use them and how to interpret the results.
Abstract:
Qualitative research and psycho-cultural approaches to deviant behaviour. In this paper, the authors discuss the relevance of some historical, theoretical and methodological features of qualitative research for a psycho-cultural approach to deviance. Specifically, three methods are presented: ethnography, life stories and grounded theory. Some common features of these methods are their potential for articulation with other methods, their plasticity, and their grounding in research contexts and in the experiences and meanings lived by participants. The role of the researcher, as well as the constructed and dialogical character of both the process and the products of research, are also emphasised in these approaches. Qualitative methods thus seem particularly well suited to a psycho-cultural approach to deviance, allowing the study of "hidden" phenomena and an understanding of deviance that takes its cultural norms into account. In this way, qualitative research is a methodological device that allows researchers to move beyond the traditional ethnocentrism of psychology.
Abstract:
The purpose of this study was to assess the spatial resolution of a computed tomography (CT) scanner with an automatic approach developed for routine quality controls when varying CT parameters. The methods available to assess the modulation transfer functions (MTF) with the automatic approach were Droege's and the bead point source (BPS) methods. These MTFs were compared with presampled ones obtained using Boone's method. The results show that Droege's method is not accurate in the low-frequency range, whereas the BPS method is highly sensitive to image noise. While both methods are well adapted to routine stability controls, it was shown that they are not able to provide absolute measurements. On the other hand, Boone's method, which is robust with respect to aliasing, more resilient to noise and provides absolute measurements, satisfies the commissioning requirements perfectly. Thus, Boone's method combined with a modified Catphan 600 phantom could be a good solution to assess CT spatial resolution in the different CT planes.
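As an illustration of what these phantom-based methods estimate: the MTF is the normalized magnitude of the Fourier transform of the point (or line) spread function. The sketch below uses a synthetic, noiseless Gaussian line spread function rather than a measured bead or edge profile; it is not the paper's implementation, only the underlying computation:

```python
import numpy as np

def mtf_from_lsf(lsf, pixel_spacing_mm):
    """Return spatial frequencies (cycles/mm) and the normalized MTF
    computed as the magnitude spectrum of the line spread function."""
    spectrum = np.abs(np.fft.rfft(lsf))
    mtf = spectrum / spectrum[0]          # normalize to 1 at zero frequency
    freqs = np.fft.rfftfreq(len(lsf), d=pixel_spacing_mm)
    return freqs, mtf

# Synthetic Gaussian LSF sampled on a 0.1 mm pixel grid (illustrative values)
x = np.arange(-64, 64) * 0.1
lsf = np.exp(-x**2 / (2 * 0.3**2))
freqs, mtf = mtf_from_lsf(lsf, 0.1)
```

In practice the measured profile is noisy and sampled on the scanner's grid, which is exactly why the abstract finds the bead point source method noise-sensitive and favours Boone's presampled approach for absolute measurements.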
Abstract:
The subject of this PhD thesis can be summarized by one famous paradox of evolutionary biology, the maintenance of polymorphism in the face of selection, and one classical equation of theoretical population genetics, the change in gametic frequencies due to selection and recombination. The frequency of gamete xi at generation (t + 1) is given by: [Equation truncated] This equation is used to generate data on selection at two, three, and four diallelic loci for the different parts of this work. The first part focuses on the potential of heterozygote advantage to maintain genetic polymorphism. Since the classical definition applies only to a single diallelic locus, results of previous studies are used to (re)define heterozygote advantage for multilocus systems. Using five different definitions of heterozygote advantage, I show that heterozygote advantage is not a general mechanism for the maintenance of polymorphism. The study of the influence of undetected loci on evolutionary processes (the second part of this work) is motivated by molecular studies that aim to discover the loci coding for a trait; in most of these studies, some coding loci remain undetected. I show that undetected loci increase the probability of maintaining polymorphism under selection.
In addition, conclusions about the factors that maintain polymorphism can be misleading if not all loci are considered. Exact conclusions about the level of maintained polymorphism, or about the factor(s) maintaining it, can therefore be drawn only when all loci are detected. In the third part, the focus is on the expected release of additive genetic variance after a bottleneck for selected traits. A previous study showed that the expected release of additive variance increases with the number of loci. I show that the expected release of additive variance after a bottleneck does increase for selected traits (compared with neutral ones), but as a function of the recombination rate rather than of the number of loci, in part because recombination regenerates gametes lost during the bottleneck. Finally, the last part of this PhD thesis describes a package for the statistical software R that implements the equation given above. It allows data to be generated for different scenarios of selection, recombination, and population size for two, three, and four diallelic loci. This package opens perspectives for theoretical population genetics, which mainly focuses on one locus, whereas this work shows that increasing the number of loci does not necessarily lead to results consistent with single-locus models.
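For intuition, the recursion the thesis iterates can be sketched in its simplest form, a single diallelic locus under viability selection (the R package itself handles two to four loci with recombination; the fitness values and starting frequency below are illustrative assumptions, not the thesis's parameters):

```python
def next_freq(p, w_AA, w_Aa, w_aa):
    """One generation of viability selection at a diallelic locus:
    p' = p * (p*w_AA + q*w_Aa) / w_bar."""
    q = 1.0 - p
    w_bar = p*p*w_AA + 2*p*q*w_Aa + q*q*w_aa   # mean population fitness
    return p * (p*w_AA + q*w_Aa) / w_bar

# Heterozygote advantage (w_Aa highest) maintains a stable polymorphism.
p = 0.05                                        # arbitrary starting frequency
for _ in range(500):
    p = next_freq(p, w_AA=0.9, w_Aa=1.0, w_aa=0.8)

# Classical equilibrium under overdominance:
# p* = (w_Aa - w_aa) / (2*w_Aa - w_AA - w_aa) = 2/3 for these values
p_star = (1.0 - 0.8) / (2*1.0 - 0.9 - 0.8)
```

With several loci, the equivalent recursion acts on gamete (not allele) frequencies and includes recombination terms, which is where the single-locus intuition illustrated here starts to break down, as the thesis argues.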
Abstract:
A workshop recently held at the Ecole Polytechnique Federale de Lausanne (EPFL, Switzerland) was dedicated to understanding the genetic basis of adaptive change, taking stock of the different approaches developed in theoretical population genetics and landscape genomics and bringing together knowledge accumulated in both research fields. Indeed, an important challenge in theoretical population genetics is to incorporate effects of demographic history and population structure. But important design problems (e.g. focus on populations as units, focus on hard selective sweeps, no hypothesis-based framework in the design of the statistical tests) reduce their capability of detecting adaptive genetic variation. In parallel, landscape genomics offers a solution to several of these problems and provides a number of advantages (e.g. fast computation, landscape heterogeneity integration). But the approach makes several implicit assumptions that should be carefully considered (e.g. selection has had enough time to create a functional relationship between the allele distribution and the environmental variable, or this functional relationship is assumed to be constant). To address the respective strengths and weaknesses mentioned above, the workshop brought together a panel of experts from both disciplines to present their work and discuss the relevance of combining these approaches, possibly resulting in a joint software solution in the future.
Abstract:
This thesis examines, through three essays, the role of the social context and of people's concern for justice in explaining aggressive behaviors in the workplace. In the first essay, I argue that a work group's instrumental climate (a climate emphasizing respect of organizational procedures) deters employees from engaging in counterproductive work behaviors through the informal sanctions (i.e., socio-emotional disapproval) they anticipate from the group for misbehaving. Conversely, a work group's affective climate (a climate concerned about others' well-being) leads employees to infer fewer informal sanctions and thus indirectly facilitates counterproductive work behaviors. I additionally expect these indirect effects to be conditional on employees' levels of conscientiousness and agreeableness. Cross-level structural equation models on cross-sectional data obtained from 158 employees in 26 work groups supported my expectations. By promoting collective responsibility for the respect of organizational rules, and by knowing what their work group considers threatening to its well-being, leaders may be able to prevent counterproductive work behaviors. Adopting an organizational justice perspective, the second essay provides a theoretical explanation of why and how collective deviance can emerge in a collective. In interdependent situations, employees use justice perceptions to infer others' cooperative intent. Even if moral transgressions (e.g., injustice) are ambiguous, their repetition and configuration within a team can lead employees to assign blame and develop collective cynicism toward the transgressor. Over time, collective cynicism, a shared belief about the transgressor's intentional lack of integrity, progressively constrains the diversity of employees' responses to blame and leads collective deviance to emerge. This essay contributes to workplace deviance research by offering a theoretical framework for investigating the phenomenon at the collective level.
It also identifies factors that organizations' efforts to manage and prevent deviance should consider. In the third essay, I resolve an apparent contradiction in the literature showing that justice concerns sometimes lead employees to react aggressively to injustice and sometimes to refrain from doing so. Drawing on just-world theory, a cross-sectional field study and an experiment provide evidence that retaliatory tendencies following injustice are moderated by personal and general just-world beliefs. Whereas a high personal just-world belief facilitates retaliatory reactions to injustice, a high general just-world belief attenuates such reactions. This essay uncovers a dark side of personal just-world belief and a bright side of general just-world belief, and helps extend just-world theory to the work context.
Abstract:
EXECUTIVE SUMMARY: Evaluating the Information Security posture of an organization is becoming a very complex task. Currently, the evaluation and assessment of Information Security are commonly performed using frameworks, methodologies and standards which often consider the various aspects of security independently. Unfortunately this is ineffective, because it does not take into consideration the necessity of a global, systemic and multidimensional approach to Information Security evaluation, even though the overall security level is generally acknowledged to be only as strong as its weakest link. This thesis proposes a model aiming to assess all dimensions of security holistically, in order to minimize the likelihood that a given threat will exploit the weakest link. A formalized structure taking into account all security elements is presented, based on a methodological evaluation framework in which Information Security is evaluated from a global perspective. This dissertation is divided into three parts. Part One: Information Security Evaluation Issues consists of four chapters. Chapter 1 is an introduction to the purpose of this research and to the model that will be proposed. In this chapter we raise some questions with respect to "traditional evaluation methods" and identify the principal elements to be addressed. We then introduce the baseline attributes of our model and set out the expected results of evaluations performed according to it. Chapter 2 focuses on the definition of Information Security to be used as a reference point for our evaluation model. The concepts inherent in a holistic, baseline Information Security program are defined, and on this basis the most common roots of trust in Information Security are identified. Chapter 3 analyses the difference and the relationship between the concepts of Information Risk and Security Management.
Comparing these two concepts allows us to identify the most relevant elements to be included in our evaluation model; clearly situating these two notions within a defined framework is of the utmost importance for the results that will be obtained from the evaluation process. Chapter 4 sets out our evaluation model and the way it addresses issues relating to the evaluation of Information Security. Within this chapter the underlying concepts of assurance and trust are discussed. Based on these two concepts, the structure of the model is developed to provide an assurance-related platform as well as three evaluation attributes: "assurance structure", "quality issues", and "requirements achievement". Issues relating to each of these evaluation attributes are analysed with reference to sources such as methodologies, standards and published research papers. The operation of the model is then discussed: assurance levels, quality levels and maturity levels are defined in order to perform the evaluation according to the model. Part Two: Implementation of the Information Security Assurance Assessment Model (ISAAM) according to the Information Security Domains consists of four chapters. This is the section where our evaluation model is put into a well-defined context with respect to the four pre-defined Information Security dimensions: the Organizational, Functional, Human, and Legal dimensions. Each dimension is discussed in a separate chapter, following the same two-phase evaluation path. The first phase concerns the identification of the elements which will constitute the basis of the evaluation: identification of the key elements within the dimension; identification of the Focus Areas for each dimension, consisting of the security issues identified for that dimension; and identification of the Specific Factors for each Focus Area, consisting of the security measures or controls addressing those issues. The second phase concerns the evaluation of each Information Security dimension by: implementing the evaluation model, based on the elements identified in the first phase, and identifying the security tasks, processes, procedures, and actions that should have been performed by the organization to reach the desired level of protection; and proposing a maturity model for each dimension as a basis for reliance on security. For each dimension we propose a generic maturity model that could be used by any organization to define its own security requirements. Part Three of this dissertation contains the Final Remarks, Supporting Resources and Annexes. With reference to the objectives of our thesis, the Final Remarks briefly analyse whether these objectives were achieved and suggest directions for future related research. Supporting Resources comprise the bibliographic resources used to elaborate and justify our approach; Annexes include the relevant topics identified in the literature to illustrate certain aspects of our approach. Our Information Security evaluation model is based on, and integrates, different Information Security best practices, standards, methodologies and research expertise, combined to define a reliable categorization of Information Security. After the definition of terms and requirements, an evaluation process should be performed to obtain evidence that Information Security within the organization in question is adequately managed. We have specifically integrated into our model the most useful elements of these sources in order to provide a generic model able to be implemented in all kinds of organizations.
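The two-phase structure described above (dimensions broken into Focus Areas, each assessed through Specific Factors, and rolled up under the weakest-link principle the summary opens with) can be sketched as a small data model. All names, levels and the 0-5 maturity scale below are invented placeholders for illustration, not the thesis's actual factors or scale:

```python
from dataclasses import dataclass, field

@dataclass
class SpecificFactor:
    """A security measure or control addressing one security issue."""
    name: str
    level: int = 0                 # assessed maturity, 0 (absent) to 5 (assumed scale)

@dataclass
class FocusArea:
    """A security issue identified within a dimension."""
    name: str
    factors: list = field(default_factory=list)

    def maturity(self):
        # Weakest-link roll-up: an area is only as mature as its weakest factor.
        return min((f.level for f in self.factors), default=0)

@dataclass
class Dimension:
    """One of the four Information Security dimensions."""
    name: str
    focus_areas: list = field(default_factory=list)

    def maturity(self):
        return min((fa.maturity() for fa in self.focus_areas), default=0)

# Phase 1: identify elements; Phase 2: assess factors and roll up.
org = Dimension("Organizational", [
    FocusArea("Policy", [SpecificFactor("Security policy exists", 3),
                         SpecificFactor("Policy reviewed yearly", 2)]),
    FocusArea("Roles",  [SpecificFactor("Security officer appointed", 4)]),
])
```

The min-based roll-up is one possible reading of the weakest-link principle; an actual implementation of the model might weight or aggregate factors differently.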
The value added by our evaluation model is that it is easy to implement and operate and answers concrete needs in terms of reliance upon an efficient and dynamic evaluation tool through a coherent evaluation system. On that basis, our model could be implemented internally within organizations, allowing them to better govern their Information Security.
Abstract:
Genotypic frequencies at codominant marker loci in population samples convey information on mating systems. A classical way to extract this information is to measure heterozygote deficiencies (FIS) and obtain the selfing rate s from FIS = s/(2 - s), assuming inbreeding equilibrium. A major drawback is that heterozygote deficiencies are often present without selfing, owing largely to technical artefacts such as null alleles or partial dominance. We show here that, in the absence of gametic disequilibrium, the multilocus structure can be used to derive estimates of s that are independent of FIS and free of technical biases. Their statistical power and precision are comparable to those of FIS, although they are sensitive to certain types of gametic disequilibria, a bias shared with progeny-array methods but not with FIS. We analyse four real data sets spanning a range of mating systems. In two examples, we obtain s = 0 despite positive FIS, strongly suggesting that the latter are artefactual. In the remaining examples, all estimates are consistent. All the computations have been implemented in an open-access, user-friendly software package called rmes (robust multilocus estimate of selfing), available at http://ftp.cefe.cnrs.fr, which can be used on any multilocus data. By extracting reliable information from imperfect data, our method opens the way to using the ever-growing number of published population genetic studies, in addition to the more demanding progeny-array approaches, to investigate selfing rates.
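The classical estimator that rmes improves upon follows directly from the relation quoted above: under inbreeding equilibrium FIS = s/(2 - s), which inverts to s = 2*FIS/(1 + FIS). A minimal sketch of that inversion (this is only the classical FIS-based estimator, not the rmes multilocus method):

```python
def fis_from_selfing(s):
    """Equilibrium heterozygote deficiency implied by selfing rate s."""
    return s / (2.0 - s)

def selfing_from_fis(fis):
    """Classical selfing-rate estimate: invert FIS = s/(2 - s)."""
    return 2.0 * fis / (1.0 + fis)

# A purely illustrative value: FIS = 0.25 implies s = 0.4 at equilibrium.
s_hat = selfing_from_fis(0.25)
```

As the abstract stresses, a positive FIS (and hence a positive s_hat from this formula) can be a pure artefact of null alleles or partial dominance, which is what the multilocus estimator is designed to avoid.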