286 results for Penalized likelihood
Abstract:
Background: Postoperative cognitive dysfunction (POCD) occurs frequently after cardiac surgery. Some data suggest that inflammation plays a key role in the development of POCD. N-3 fatty acids have been shown to have a beneficial effect on inflammation. We hypothesised that perioperative n-3 enriched nutrition therapy would reduce the incidence of POCD in this group of patients. Methods: Randomized, double-blind, placebo-controlled trial in patients aged 65 or older undergoing elective cardiac surgery with cardiopulmonary bypass. 2 × 250 mL of placebo (Ensure Plus™, Abbott Nutrition) or n-3 enriched nutrition therapy (ProSure™, Abbott Nutrition) were administered for ten days, starting 5 days prior to surgery. Cognition was assessed preoperatively and 7 days after surgery with the Consortium to Establish a Registry for Alzheimer's Disease - Neuropsychological Assessment Battery (CERAD-NAB) [1]. Results: 16 patients were included. Mean age was 72 ± 5.3 for placebo and 75 ± 4.8 for ProSure™, respectively. CRP and IL-6 did not differ significantly between groups preoperatively or on postoperative days 1, 3, and 7. Preoperative CERAD total scores were 86 ± 10 and 81 ± 9 (p = n.s.) for placebo and ProSure™, respectively. Postoperative scores were 88 ± 12 and 77 ± 19 (p = n.s.). The change in score did not differ between the two groups (placebo: +3 ± 5; ProSure™: -5 ± 11). Conclusion: In this very small sample, no effect of preoperatively started n-3 enriched nutritional supplements on inflammation or cognitive function was detected. However, the likelihood of a type II error is large, and more patients need to be included to assess possible beneficial effects of this intervention in elderly patients undergoing elective cardiac surgery. [1] Chandler MJ, et al. Neurology. 2005;65:102-6.
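As a rough illustration of the type II error concern raised in the conclusion, the sketch below applies the standard normal-approximation sample-size formula for a two-arm comparison; the effect size d = 0.5 is a hypothetical assumption for illustration, not a value reported in the study.

```python
# Approximate per-group sample size needed to detect a standardized mean
# difference d with two-sided significance level alpha and a desired power,
# using n per group ≈ 2 * ((z_{1-alpha/2} + z_{power}) / d)^2
from scipy.stats import norm

def n_per_group(d: float, alpha: float = 0.05, power: float = 0.80) -> float:
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value for the chosen alpha
    z_beta = norm.ppf(power)           # quantile corresponding to the desired power
    return 2 * ((z_alpha + z_beta) / d) ** 2

# For a hypothetical medium effect (d = 0.5), roughly 63 patients per arm are
# needed, far more than the 16 patients in total available in this pilot sample.
print(round(n_per_group(0.5)))  # ~63
```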
Abstract:
PURPOSE: To assess (1) the lifetime prevalence of exposure to trauma and of post-traumatic stress disorder (PTSD); (2) the risk of PTSD by type of trauma; and (3) the determinants of the development of PTSD in the community. METHODS: The Diagnostic Interview for Genetic Studies was administered to a random sample of an urban area (N = 3,691). RESULTS: (1) The lifetime prevalence estimates of exposure to trauma and PTSD were 21.0 and 5.0%, respectively, with a prevalence of PTSD twice as high in women as in men despite a similar likelihood of exposure in the two sexes; (2) sexual abuse was the trauma carrying the highest risk of PTSD; (3) the risk of PTSD was most strongly associated with sexual abuse, followed by preexisting bipolar disorder, alcohol dependence, antisocial personality, childhood separation anxiety disorder, being a victim of crime, witnessing violence, neuroticism and problem-focused coping strategies. After adjustment for these characteristics, female sex was no longer significantly associated with the risk of PTSD. CONCLUSIONS: The risk of developing PTSD after exposure to traumatic events is associated with several factors, including the type of exposure, preexisting psychopathology, personality features and coping strategies, which independently contribute to the vulnerability to PTSD.
Abstract:
The research reported in this series of articles aimed (1) to automate the search of questioned ink specimens in ink reference collections and (2) to evaluate the strength of ink evidence in a transparent and balanced manner. These aims require that ink samples are analysed in an accurate and reproducible way and compared in an objective and automated way. The latter requirement stems from the large number of comparisons that are necessary in both scenarios. A research programme was designed to (a) develop a standard methodology for analysing ink samples in a reproducible way, (b) compare ink samples automatically and objectively, and (c) evaluate the proposed methodology in forensic contexts. This report focuses on the last of the three stages of the research programme. The calibration and acquisition process and the mathematical comparison algorithms were described in previous papers [C. Neumann, P. Margot, New perspectives in the use of ink evidence in forensic science-Part I: Development of a quality assurance process for forensic ink analysis by HPTLC, Forensic Sci. Int. 185 (2009) 29-37; C. Neumann, P. Margot, New perspectives in the use of ink evidence in forensic science-Part II: Development and testing of mathematical algorithms for the automatic comparison of ink samples analysed by HPTLC, Forensic Sci. Int. 185 (2009) 38-50]. In this paper, the benefits and challenges of the proposed concepts are tested in two forensic contexts: (1) ink identification and (2) ink evidential value assessment. The results show that different algorithms are better suited to different tasks. This research shows that it is possible to build digital ink libraries using the most commonly used ink analytical technique, i.e. high-performance thin-layer chromatography, despite its reputation for lacking reproducibility. More importantly, it is possible to assign evidential value to ink evidence in a transparent way using a probabilistic model. It is therefore possible to move away from the traditional subjective approach, which is entirely based on experts' opinion and is usually not very informative. While there is room for improvement, this report demonstrates the significant gains over the traditional subjective approach for the search of ink specimens in ink databases and for the interpretation of their evidential value.
Abstract:
The value of earmarks as an efficient means of personal identification is still subject to debate. It has been argued that the field lacks a firm, systematic and structured data basis to help practitioners form their conclusions. Typically, there is a paucity of research addressing the selectivity of the features used in the comparison process between an earmark and reference earprints taken from an individual. This study proposes a system for the automatic comparison of earprints and earmarks, operating without any manual extraction of key points or manual annotation. For each donor, a model is created using multiple reference prints, hence capturing the donor's within-source variability. For each comparison between a mark and a model, images are automatically aligned and a proximity score, based on a normalized 2D correlation coefficient, is calculated. Appropriate use of this score allows a likelihood ratio to be derived and explored under known states of affairs (both in cases where it is known that the mark was left by the donor who gave the model and, conversely, in cases where it is established that the mark originates from a different source). To assess the system's performance, a first dataset containing 1229 donors compiled during the FearID research project was used. Based on these data, for mark-to-print comparisons the system performed with an equal error rate (EER) of 2.3%, and about 88% of marks were found in the first 3 positions of a hitlist. For print-to-print transactions, results show an equal error rate of 0.5%. The system was then tested using real-case data obtained from police forces.
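The proximity score described above is based on a normalized 2D correlation coefficient; a minimal sketch of such a coefficient for two already-aligned grayscale images is given below. The alignment step and the score-to-likelihood-ratio calibration used in the study are not reproduced here, and the function and array names are purely illustrative.

```python
# Minimal sketch: Pearson-style normalized correlation of two equally sized
# 2D arrays, yielding a proximity score in [-1, 1].
import numpy as np

def normalized_correlation(a: np.ndarray, b: np.ndarray) -> float:
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

# Example with two random "images" of identical shape (illustrative only):
rng = np.random.default_rng(0)
mark, model_print = rng.random((64, 64)), rng.random((64, 64))
print(normalized_correlation(mark, mark))         # 1.0 for identical images
print(normalized_correlation(mark, model_print))  # near 0 for unrelated images
```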
Abstract:
Unlike the evaluation of single items of scientific evidence, the formal study and analysis of the joint evaluation of several distinct items of forensic evidence has to date received only sporadic, rather than systematic, attention. Questions about (i) the relationships among a set of (usually unobservable) propositions and a set of (observable) items of scientific evidence, (ii) the joint probative value of a collection of distinct items of evidence, as well as (iii) the contribution of each individual item within a given group of pieces of evidence still represent fundamental areas of research. To some degree, this is remarkable since both forensic science theory and practice, as well as many daily inference tasks, require the consideration of multiple items if not masses of evidence. A recurrent and particular complication that arises in such settings is that the application of probability theory, i.e. the reference method for reasoning under uncertainty, becomes increasingly demanding. The present paper takes this as a starting point and discusses graphical probability models, i.e. Bayesian networks, as a framework within which the joint evaluation of scientific evidence can be approached in a viable way. Based on a review of the main existing contributions in this area, the article presents instances of real case studies from the author's institution in order to point out the usefulness and capacities of Bayesian networks for the probabilistic assessment of the probative value of multiple and interrelated items of evidence. A main emphasis is placed on the underlying general patterns of inference, their representation, and their graphical probabilistic analysis. Attention is also drawn to inferential interactions, such as redundancy, synergy and directional change, which distinguish the joint evaluation of evidence from assessments of isolated items of evidence. Together, these topics present aspects of interest to both domain experts and recipients of expert information, because they have a bearing on how multiple items of evidence are meaningfully and appropriately set into context.
Abstract:
Newer antiepileptic drugs (AEDs) are increasingly prescribed and seem to have efficacy comparable to that of the classical AEDs; however, their impact on the prognosis of status epilepticus (SE) has received little attention. In our prospective SE database (2006-2010), we assessed the use of older versus newer AEDs (levetiracetam, pregabalin, topiramate, lacosamide) over time and its relationship to outcome (return to clinical baseline conditions, new handicap, or death). Newer AEDs were used more often toward the end of the study period (42% of episodes versus 30%). After adjustment for SE etiology, SE severity score, and the number of compounds needed to terminate SE, newer AEDs were independently associated with a reduced likelihood of return to baseline (p<0.001) but not with increased mortality. These findings appear in line with recent findings on refractory epilepsy. Also, in view of the higher price of the newer AEDs, well-designed, prospective assessments analyzing the impact of newer AEDs on efficacy and tolerability in patients with SE appear mandatory.
Abstract:
BACKGROUND: Controversy exists regarding the usefulness of troponin testing for the risk stratification of patients with acute pulmonary embolism (PE). We conducted an updated systematic review and meta-analysis of troponin-based risk stratification of normotensive patients with acute symptomatic PE. The sources of our data were publications listed in Medline and Embase from 1980 through April 2008 and a review of the references cited in those publications. METHODS: We included all studies that estimated the relation between troponin levels and the incidence of all-cause mortality in normotensive patients with acute symptomatic PE. Two reviewers independently abstracted data and assessed study quality. From the literature search, 596 publications were screened. Nine studies comprising 1,366 normotensive patients with acute symptomatic PE were deemed eligible. Pooled results showed that elevated troponin levels were associated with a 4.26-fold increase in the odds of overall mortality (95% CI, 2.13 to 8.50; heterogeneity chi-square = 12.64; degrees of freedom = 8; p = 0.125). Summary receiver operating characteristic curve analysis showed a relationship between the sensitivity and specificity of troponin levels for predicting overall mortality (Spearman rank correlation coefficient = 0.68; p = 0.046). Pooled likelihood ratios (LRs) were not extreme (negative LR, 0.59 [95% CI, 0.39 to 0.88]; positive LR, 2.26 [95% CI, 1.66 to 3.07]). The Begg rank correlation method did not detect evidence of publication bias. CONCLUSIONS: The results of this meta-analysis indicate that elevated troponin levels do not adequately discriminate between normotensive patients with acute symptomatic PE who are at high risk of death and those who are at low risk.
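For readers unfamiliar with diagnostic likelihood ratios, the standard definitions in terms of sensitivity (Se) and specificity (Sp) are recalled below; these are general textbook relations, not formulas specific to this meta-analysis, and the rule of thumb quoted is a common convention rather than a result of the study.

```latex
% General definitions of the positive and negative diagnostic likelihood ratios:
\[
  LR^{+} = \frac{\mathrm{Se}}{1-\mathrm{Sp}}, \qquad
  LR^{-} = \frac{1-\mathrm{Se}}{\mathrm{Sp}}
\]
% By a common rule of thumb, LR+ above about 10 or LR- below about 0.1 indicates
% a strongly informative test; the pooled values reported above (2.26 and 0.59)
% fall well inside the "not extreme" range.
```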
Abstract:
Positive selection is widely estimated from protein-coding sequence alignments by the nonsynonymous-to-synonymous rate ratio, omega. Increasingly elaborate codon models are used in a likelihood framework for this estimation. Although there is widespread concern about the robustness of the estimation of the omega ratio, more effort is needed to assess this robustness, especially in the context of complex models. Here, we focused on the branch-site codon model. We investigated its robustness on a large set of simulated data. First, we investigated the impact of sequence divergence. We found evidence of underestimation of the synonymous substitution rate (dS) for values as small as 0.5, with a slight increase in false positives for the branch-site test. When dS increases further, the underestimation of dS worsens, but false positives decrease. Interestingly, the detection of true positives follows a similar distribution, with a maximum for intermediate values of dS. Thus, high dS is more of a concern for loss of power (false negatives) than for false positives of the test. Second, we investigated the impact of GC content. We showed that there is no significant difference in false positives between high-GC (up to ~80%) and low-GC (~30%) genes. Moreover, neither shifts of GC content on a specific branch nor major shifts in GC along the gene sequence generate many false positives. Our results confirm that the branch-site test is very conservative.
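For reference, the omega ratio estimated by these codon models is the standard nonsynonymous-to-synonymous rate ratio; the relation below is general notation, not a result of this study.

```latex
% Standard notation for the rate ratio used by codon models:
\[
  \omega = \frac{d_N}{d_S}
\]
% where omega > 1 for a branch or site class is taken as evidence of positive
% selection, omega = 1 as neutrality, and omega < 1 as purifying selection.
```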
Abstract:
BACKGROUND: HCV coinfection remains a major cause of morbidity and mortality among HIV-infected individuals, and its incidence has increased dramatically in HIV-infected men who have sex with men (MSM). METHODS: Hepatitis C virus (HCV) coinfection in the Swiss HIV Cohort Study (SHCS) was studied by combining clinical data with HIV-1 pol sequences from the SHCS Drug Resistance Database (DRDB). We inferred maximum-likelihood phylogenetic trees, determined Swiss HIV-transmission pairs as monophyletic patient pairs, and then considered the distribution of HCV on those pairs. RESULTS: Among the 9748 patients in the SHCS-DRDB with known HCV status, 2768 (28%) were HCV-positive. Focusing on subtype B (7644 patients), we identified 1555 potential HIV-1 transmission pairs. We found that, even after controlling for transmission group, calendar year, age and sex, the odds of an HCV coinfection were increased by an odds ratio (OR) of 3.2 [95% confidence interval (CI) 2.2, 4.7] if a patient clustered with another HCV-positive case. This strong association persisted when the transmission groups of intravenous drug users (IDUs), MSM and heterosexuals (HETs) were considered separately (in all cases OR > 2). Finally, we found that HCV incidence was increased by a hazard ratio of 2.1 (1.1, 3.8) for individuals paired with an HCV-positive partner. CONCLUSIONS: Patients whose HIV virus is closely related to that of HIV/HCV-coinfected patients have a higher risk of carrying or acquiring HCV themselves. This indicates the occurrence of domestic and sexual HCV transmission and allows the identification of patients with a high risk of HCV infection.
Abstract:
Purpose: To investigate the differences between the fundus camera (Topcon TRC-50X) and the confocal scanning laser ophthalmoscope (Heidelberg Retina Angiograph, HRA) in fundus autofluorescence (FAF) imaging (resolution and FAF characteristics). Methods: Eighty-nine eyes of 46 patients with various retinal diseases underwent FAF imaging with the HRA (488 nm excitation / 500 nm barrier filter) before fluorescein angiography (FFA) and with the Topcon fundus camera (580 nm excitation / 695 nm barrier filter) before and after FFA. The quality of the FAF images was estimated, compared for resolution, and analysed for the influence of fixation stability and cataract. Hypo- and hyper-FAF behaviour was analysed for the healthy disc, the healthy fovea, and a variety of pathological features. Results: HRA images were of superior quality in 18 eyes, while Topcon images were judged superior in 21 eyes. No difference was found in 50 eyes. Both poor fixation (p=0.009) and more advanced cataract (p=0.013) strongly increased the likelihood of better image quality with Topcon. Images acquired with Topcon before and after FFA were identical (100%). The healthy disc was usually dark on HRA (71%) but showed mild autofluorescence on Topcon (88%). The healthy fovea showed hypo-FAF on HRA in 100% of eyes, whereas on Topcon it showed iso-FAF in 52%, mild hypo-FAF in 43%, and hypo-FAF as on HRA in 5%. No difference in FAF was found for geographic atrophy, pigment changes, and drusen, although Topcon images were often more detailed. Hyper-FAF due to exudation was better seen on HRA. Pigment epithelium detachment showed identical FAF behaviour at the border, but reduced FAF with Topcon in the center. Cystic edema was visible only on HRA, in a petaloid pattern. Hard exudates caused hypo-FAF only on HRA and were hardly visible on Topcon. The blockage phenomenon caused by blood, however, was identical. Conclusions: The filter set of the Topcon camera and its single-image acquisition appear to be an advantage for patients with cataract or poor fixation. Preceding FFA does not alter the Topcon FAF image. Regarding FAF behaviour, there are differences between the two systems which need to be taken into account when interpreting the images.
Abstract:
EXECUTIVE SUMMARY: Evaluating Information Security posture within an organization is becoming a very complex task. Currently, the evaluation and assessment of Information Security are commonly performed using frameworks, methodologies and standards which often consider the various aspects of security independently. Unfortunately this is ineffective because it does not take into consideration the necessity of having a global and systemic multidimensional approach to Information Security evaluation. At the same time, the overall security level is generally considered to be only as strong as its weakest link. This thesis proposes a model aiming to holistically assess all dimensions of security in order to minimize the likelihood that a given threat will exploit the weakest link. A formalized structure taking into account all security elements is presented; this is based on a methodological evaluation framework in which Information Security is evaluated from a global perspective. This dissertation is divided into three parts. Part One: Information Security Evaluation issues consists of four chapters. Chapter 1 is an introduction to the purpose of this research and the model that will be proposed. In this chapter we raise some questions with respect to "traditional evaluation methods" and identify the principal elements to be addressed in this direction. We then introduce the baseline attributes of our model and set out the expected result of evaluations according to our model. Chapter 2 is focused on the definition of Information Security to be used as a reference point for our evaluation model. The inherent concepts of the contents of a holistic and baseline Information Security Program are defined. Based on this, the most common roots-of-trust in Information Security are identified. Chapter 3 focuses on an analysis of the difference and the relationship between the concepts of Information Risk and Security Management. Comparing these two concepts allows us to identify the most relevant elements to be included within our evaluation model, while clearly situating these two notions within a defined framework is of the utmost importance for the results that will be obtained from the evaluation process. Chapter 4 sets out our evaluation model and the way it addresses issues relating to the evaluation of Information Security. Within this chapter the underlying concepts of assurance and trust are discussed. Based on these two concepts, the structure of the model is developed in order to provide an assurance-related platform as well as three evaluation attributes: "assurance structure", "quality issues", and "requirements achievement". Issues relating to each of these evaluation attributes are analysed with reference to sources such as methodologies, standards and published research papers. Then the operation of the model is discussed. Assurance levels, quality levels and maturity levels are defined in order to perform the evaluation according to the model. Part Two: Implementation of the Information Security Assurance Assessment Model (ISAAM) according to the Information Security Domains consists of four chapters. This is the section where our evaluation model is put into a well-defined context with respect to the four pre-defined Information Security dimensions: the Organizational dimension, Functional dimension, Human dimension, and Legal dimension. Each Information Security dimension is discussed in a separate chapter.
For each dimension, the following two-phase evaluation path is followed. The first phase concerns the identification of the elements which will constitute the basis of the evaluation: (i) identification of the key elements within the dimension; (ii) identification of the Focus Areas for each dimension, consisting of the security issues identified for that dimension; and (iii) identification of the Specific Factors for each dimension, consisting of the security measures or controls addressing those security issues. The second phase concerns the evaluation of each Information Security dimension by: (i) implementing the evaluation model, based on the elements identified for each dimension within the first phase, and identifying the security tasks, processes, procedures, and actions that should have been performed by the organization to reach the desired level of protection; and (ii) proposing a maturity model for each dimension as a basis for reliance on security. For each dimension we propose a generic maturity model that could be used by every organization in order to define its own security requirements. Part Three of this dissertation contains the Final Remarks, Supporting Resources and Annexes. With reference to the objectives of our thesis, the Final Remarks briefly analyse whether these objectives were achieved and suggest directions for future related research. The Supporting Resources comprise the bibliographic resources that were used to elaborate and justify our approach. The Annexes include all the relevant topics identified within the literature to illustrate certain aspects of our approach. Our Information Security evaluation model is based on and integrates different Information Security best practices, standards, methodologies and research expertise, which can be combined in order to define a reliable categorization of Information Security. After the definition of terms and requirements, an evaluation process should be performed in order to obtain evidence that Information Security within the organization in question is adequately managed. We have specifically integrated into our model the most useful elements of these sources of information in order to provide a generic model able to be implemented in all kinds of organizations. The value added by our evaluation model is that it is easy to implement and operate and answers concrete needs in terms of reliance upon an efficient and dynamic evaluation tool through a coherent evaluation system. On that basis, our model could be implemented internally within organizations, allowing them to better govern their Information Security. RÉSUMÉ: General context of the thesis. Evaluating security in general, and information security in particular, has become for organizations not only a crucial task but also an increasingly complex one. At present, this evaluation relies mainly on methodologies, best practices, norms or standards that address the various aspects of information security separately. We consider this way of evaluating security to be inefficient, because it does not take into account the interactions between the different dimensions and components of security, even though it has long been accepted that the overall security level of an organization is always that of the weakest link in the security chain.
We identified the need for a global, integrated, systemic and multidimensional approach to information security evaluation. Indeed, and this is the starting point of our thesis, we show that only a global consideration of security makes it possible to meet the requirements of optimal security as well as the specific protection needs of an organization. Our thesis therefore proposes a new paradigm for security evaluation intended to satisfy the effectiveness and efficiency needs of a given organization. We then propose a model that aims to evaluate all dimensions of security holistically, in order to minimize the probability that a potential threat could exploit vulnerabilities and cause direct or indirect damage. This model is based on a formalized structure that takes into account all the elements of a security system or programme. We thus propose a methodological evaluation framework that considers information security from a global perspective. Structure of the thesis and topics addressed: The document is structured in three parts. The first, entitled "The problem of information security evaluation", consists of four chapters. Chapter 1 introduces the object of the research as well as the basic concepts of the proposed evaluation model. The traditional way of evaluating security is critically analysed in order to identify the main, invariant elements to be taken into account in our holistic approach. The basic elements of our evaluation model and its expected operation are then presented in order to outline the expected results of this model. Chapter 2 focuses on the definition of the notion of Information Security. It is not a redefinition of the notion of security, but a putting into perspective of the dimensions, criteria and indicators to be used as a reference baseline, in order to determine the object of the evaluation that will be used throughout our work. The concepts inherent in what constitutes the holistic character of security, as well as the constituent elements of a security baseline, are defined accordingly. This makes it possible to identify what we have called the "roots of trust". Chapter 3 presents and analyses the difference and the relationships between the Risk Management and Security Management processes, in order to identify the constituent elements of the protection framework to be included in our evaluation model. Chapter 4 is devoted to the presentation of our evaluation model, the Information Security Assurance Assessment Model (ISAAM), and the way in which it meets the evaluation requirements presented earlier. In this chapter the underlying concepts relating to the notions of assurance and trust are analysed. Based on these two concepts, the structure of the evaluation model is developed to obtain a platform that offers a certain level of assurance, relying on three evaluation attributes, namely: "trust structure", "process quality", and "achievement of requirements and objectives".
The issues related to each of these evaluation attributes are analysed on the basis of the state of the art in research and the literature, of the various existing methods, and of the norms and standards most commonly used in the security field. On this basis, three different evaluation levels are constructed, namely the assurance level, the quality level and the maturity level, which constitute the basis for evaluating the overall security state of an organization. The second part, "Application of the Information Security Assurance Assessment Model by security domain", is also composed of four chapters. In this part, the evaluation model already constructed and analysed is placed in a specific context according to the four predefined security dimensions: the Organizational dimension, the Functional dimension, the Human dimension, and the Legal dimension. Each of these dimensions and its specific evaluation is the subject of a separate chapter. For each dimension, a two-phase evaluation is constructed as follows. The first phase concerns the identification of the elements that form the basis of the evaluation: (i) identification of the key elements of the evaluation; (ii) identification of the "Focus Areas" for each dimension, which represent the security issues found in that dimension; and (iii) identification of the "Specific Factors" for each Focus Area, which represent the security and control measures that help resolve or reduce the impacts of the risks. The second phase concerns the evaluation of each of the dimensions presented above. It consists, on the one hand, of the implementation of the general evaluation model for the dimension concerned by (i) relying on the elements specified in the first phase and (ii) identifying the specific security tasks, processes and procedures that should have been carried out to reach the desired level of protection. On the other hand, the evaluation of each dimension is complemented by the proposal of a maturity model specific to that dimension, which is to be considered as a baseline for the overall security level. For each dimension we propose a generic maturity model that can be used by any organization in order to specify its own security requirements. This constitutes an innovation in the field of evaluation, which we justify for each dimension and whose added value we systematically highlight. The third part of the document concerns the overall validation of our proposal and contains, by way of conclusion, a critical perspective on our work and final remarks. This last part is complemented by a bibliography and annexes. Our security evaluation model integrates and builds on numerous sources of expertise, such as best practices, norms, standards, methods and the expertise of scientific research in the field. Our constructive proposal addresses a genuine, as yet unresolved problem faced by all organizations, regardless of size and profile.
This would allow them to specify their particular requirements regarding the level of security to be met and to instantiate an evaluation process tailored to their needs, so that they can ensure that their information security is managed appropriately, thereby providing a certain level of confidence in the degree of protection obtained. We have integrated into our model the best of the know-how, experience and expertise currently available internationally, with the aim of providing an evaluation model that is simple, generic and applicable to a large number of public or private organizations. The added value of our evaluation model lies precisely in the fact that it is sufficiently generic and easy to implement while answering the concrete needs of organizations. Our proposal thus constitutes a reliable, efficient and dynamic evaluation tool derived from a coherent evaluation approach. As a result, our evaluation system can be implemented internally by the organization itself, without requiring additional resources, and also gives it the ability to better govern its information security.
Abstract:
Genotypic frequencies at codominant marker loci in population samples convey information on mating systems. A classical way to extract this information is to measure heterozygote deficiencies (FIS) and obtain the selfing rate s from FIS = s/(2 - s), assuming inbreeding equilibrium. A major drawback is that heterozygote deficiencies are often present without selfing, owing largely to technical artefacts such as null alleles or partial dominance. We show here that, in the absence of gametic disequilibrium, the multilocus structure can be used to derive estimates of s that are independent of FIS and free of technical biases. Their statistical power and precision are comparable to those of FIS, although they are sensitive to certain types of gametic disequilibria, a bias shared with progeny-array methods but not with FIS. We analyse four real data sets spanning a range of mating systems. In two examples, we obtain s = 0 despite positive FIS, strongly suggesting that the latter are artefactual. In the remaining examples, all estimates are consistent. All the computations have been implemented in an open-access and user-friendly software package called rmes (robust multilocus estimate of selfing), available at http://ftp.cefe.cnrs.fr, which can be used on any multilocus data. By extracting the reliable information from imperfect data, our method opens the way to using the ever-growing number of published population genetic studies, in addition to the more demanding progeny-array approaches, to investigate selfing rates.
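Rearranging the classical relation quoted above gives the selfing rate directly from an observed heterozygote deficiency; the numerical value below is illustrative only and assumes inbreeding equilibrium and no technical artefacts.

```latex
% Classical equilibrium relation and its rearrangement for s:
\[
  F_{IS} = \frac{s}{2 - s}
  \quad\Longleftrightarrow\quad
  s = \frac{2\,F_{IS}}{1 + F_{IS}}
\]
% e.g. an observed F_IS of 0.20 would translate into an estimated selfing rate
% s = 0.40 / 1.20 \approx 0.33 under these assumptions.
```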
Abstract:
For the detection and management of osteoporosis and osteoporosis-related fractures, quantitative ultrasound (QUS) is emerging as a relatively low-cost and readily accessible alternative to dual-energy X-ray absorptiometry (DXA) measurement of bone mineral density (BMD) in certain circumstances. The following is a brief but thorough review of the existing literature on the use of QUS in six settings: 1) assessing fragility fracture risk; 2) diagnosing osteoporosis; 3) initiating osteoporosis treatment; 4) monitoring osteoporosis treatment; 5) osteoporosis case finding; and 6) quality assurance and control. Many QUS devices exist that differ considerably with respect to the parameters they measure and the strength of the empirical evidence supporting their use. In general, heel QUS appears to be the most tested and most effective. Overall, some, but not all, heel QUS devices are effective in assessing fracture risk in some, but not all, populations, the evidence being strongest for Caucasian females over 55 years old. Otherwise, the evidence is fair with respect to certain devices allowing for the accurate diagnosis of the likelihood of osteoporosis, and generally fair to poor in terms of QUS use when initiating or monitoring osteoporosis treatment. A reasonable protocol is proposed herein for case-finding purposes, which relies on a combined assessment of clinical risk factors (CRF) and heel QUS. Finally, several recommendations are made for quality assurance and control.
Abstract:
Forensic scientists face increasingly complex inference problems when evaluating likelihood ratios (LRs) for an appropriate pair of propositions. Up to now, scientists and statisticians have derived LR formulae using an algebraic approach. However, this approach reaches its limits when addressing cases with an increasing number of variables and dependence relationships between these variables. In this study, we suggest using a graphical approach, based on the construction of Bayesian networks (BNs). We first construct a BN that captures the problem, and then deduce the expression for calculating the LR from this model to compare it with existing LR formulae. We illustrate this idea by applying it to the evaluation of an activity level LR in the context of the two-trace transfer problem. Our approach allows us to relax assumptions made in previous LR developments, produce a new LR formula for the two-trace transfer problem and generalize this scenario to n traces.
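As a toy illustration of reading an LR off a Bayesian network, the sketch below uses a minimal two-node network (proposition node feeding an evidence node) with invented probabilities; it is not the two-trace transfer model developed in the paper.

```python
# Toy two-node Bayesian network: proposition H (Hp: prosecution, Hd: defence)
# with a single evidence node E. The likelihood ratio is read directly from the
# conditional probability table of E given H. All probabilities are invented.
cpt_E_given_H = {
    "Hp": {"seen": 0.95, "not_seen": 0.05},  # assumed P(E | Hp)
    "Hd": {"seen": 0.02, "not_seen": 0.98},  # assumed P(E | Hd)
}

def likelihood_ratio(evidence_state: str) -> float:
    """LR = P(evidence | Hp) / P(evidence | Hd) for the observed evidence state."""
    return cpt_E_given_H["Hp"][evidence_state] / cpt_E_given_H["Hd"][evidence_state]

print(likelihood_ratio("seen"))  # 0.95 / 0.02 = 47.5, i.e. support for Hp
```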
Abstract:
OBJECTIVES: Clinical staging is widespread in medicine - it informs prognosis, clinical course, and treatment, and assists individualized care. Staging places an individual on a probabilistic continuum of increasing potential disease severity, ranging from clinically at-risk or latency stage through first threshold episode of illness or recurrence, and, finally, to late or end-stage disease. The aim of the present paper was to examine and update the evidence regarding staging in bipolar disorder, and how this might inform targeted and individualized intervention approaches. METHODS: We provide a narrative review of the relevant information. RESULTS: In bipolar disorder, the validity of staging is informed by a range of findings that accompany illness progression, including neuroimaging data suggesting incremental volume loss, cognitive changes, and a declining likelihood of response to pharmacological and psychosocial treatments. Staging informs the adoption of a number of approaches, including the active promotion of both indicated prevention for at-risk individuals and early intervention strategies for newly diagnosed individuals, and the tailored implementation of treatments according to the stage of illness. CONCLUSIONS: The nature of bipolar disorder implies the presence of an active process of neuroprogression that is considered to be at least partly mediated by inflammation, oxidative stress, apoptosis, and changes in neurogenesis. It further supports the concept of neuroprotection, in that a diversity of agents have putative effects against these molecular targets. Clinically, staging suggests that the at-risk state or first episode is a period that requires particularly active and broad-based treatment, consistent with the hope that the temporal trajectory of the illness can be altered. Prompt treatment may be potentially neuroprotective and attenuate the neurostructural and neurocognitive changes that emerge with chronicity. Staging highlights the need for interventions at a service delivery level and implementing treatments at the earliest stage of illness possible.