86 results for Value-based pricing


Relevance: 30.00%
Publisher:
Abstract:

Background: Detection rates for adenoma and early colorectal cancer (CRC) are unsatisfactory because of low compliance with invasive screening procedures such as colonoscopy. There is a large unmet screening need, calling for an accurate, non-invasive and cost-effective test to screen for early neoplastic and pre-neoplastic lesions. Our goal is to identify effective biomarker combinations with which to develop a screening test for precancerous lesions and early CRC stages, based on a multigene assay performed on peripheral blood mononuclear cells (PBMC).

Methods: A pilot study was conducted on 92 subjects. Colonoscopy revealed 21 CRC, 30 adenomas larger than 1 cm and 41 healthy controls. A panel of 103 biomarkers was selected by two approaches: a candidate-gene approach based on a literature review, and whole-transcriptome analysis of a subset of this cohort by Illumina TAG profiling. Blood samples were taken from each patient and PBMC purified. Total RNA was extracted and the 103 biomarkers were tested by multiplex RT-qPCR on the cohort. Several univariate and multivariate statistical methods were applied to the PCR data, and 60 biomarkers with a significant p-value (<0.01) for most of the methods were selected.

Results: The 60 biomarkers are involved in several different biological functions, such as cell adhesion, cell motility, cell signaling, cell proliferation, development and cancer. Two distinct molecular signatures derived from the biomarker combinations were established, based on penalized logistic regression, to separate patients without lesions from those with CRC or adenoma. These signatures were validated by bootstrapping, separating patients without lesions from those with CRC (Se 67%, Sp 93%, AUC 0.87) and from those with adenomas larger than 1 cm (Se 63%, Sp 83%, AUC 0.77). In addition, the organ and disease specificity of these signatures was confirmed using patients with other cancer types and inflammatory bowel diseases.

Conclusions: The two defined biomarker combinations effectively detect the presence of CRC and of adenomas larger than 1 cm with high sensitivity and specificity. A prospective, multicentric, pivotal study is underway to validate these results in a larger cohort.
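
The signature-building step lends itself to a small illustration. Below is a minimal Python sketch of penalized logistic regression validated by bootstrapping, the approach named above; the synthetic matrix `X`, labels `y`, and all parameter choices are illustrative stand-ins, not the study's data or settings.

```python
# Sketch: penalized logistic regression + bootstrap validation (illustrative data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(92, 60))        # 92 subjects x 60 selected biomarkers (synthetic)
y = rng.integers(0, 2, size=92)      # 1 = lesion (CRC/adenoma), 0 = healthy (synthetic)

model = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)

# Bootstrap: refit on resampled cohorts, score on the out-of-bag subjects.
aucs = []
for _ in range(200):
    idx = rng.choice(len(y), size=len(y), replace=True)
    oob = np.setdiff1d(np.arange(len(y)), idx)
    if len(np.unique(y[idx])) < 2 or len(np.unique(y[oob])) < 2:
        continue
    model.fit(X[idx], y[idx])
    aucs.append(roc_auc_score(y[oob], model.predict_proba(X[oob])[:, 1]))

# With purely random data the AUC hovers near 0.5; with real signal it would
# track values such as the 0.87 / 0.77 reported above.
print(f"bootstrap AUC: {np.mean(aucs):.2f} +/- {np.std(aucs):.2f}")
```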

Relevance: 30.00%
Publisher:
Abstract:

We describe an improved multiple-locus variable-number tandem-repeat (VNTR) analysis (MLVA) scheme for genotyping Staphylococcus aureus and compare its performance with those of multilocus sequence typing (MLST) and spa typing in a survey of 309 strains. This collection includes 87 epidemic methicillin-resistant S. aureus (MRSA) strains from the Harmony collection, 75 clinical strains representing the major MLST clonal complexes (CCs) (50 methicillin-sensitive S. aureus [MSSA] and 25 MRSA), 135 nasal carriage strains (133 MSSA and 2 MRSA), and 13 published S. aureus genome sequences. The results show excellent concordance among the three techniques and demonstrate that the discriminatory power of MLVA is higher than that of either MLST or spa typing. Two hundred forty-two genotypes are discriminated with 14 VNTR loci (diversity index, 0.9965; 95% confidence interval, 0.9947 to 0.9984). Using a cutoff value of 45%, 21 clusters are observed, corresponding to the CCs previously defined by MLST. The variability of the different tandem repeats allows epidemiological studies, follow-up of the evolution of CCs, and the identification of potential ancestors. The 14 loci can conveniently be analyzed in two steps, based upon a first-line simplified assay comprising a subset of 10 loci (panel 1) and a second subset of 4 loci (panel 2) that provides higher resolution when needed. In conclusion, the MLVA scheme proposed here, in combination with available online genotyping databases (including http://mlva.u-psud.fr/), multiplexing, and automatic sizing, can provide a basis for almost-real-time, large-scale population monitoring of S. aureus.
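
A "diversity index" with a confidence interval of the kind quoted above is typically the Hunter-Gaston discriminatory index with the approximate CI of Grundmann et al.; a short sketch of that computation follows. The genotype tally is invented, not the paper's counts.

```python
# Sketch: Hunter-Gaston discriminatory index with approximate 95% CI.
import math

def discriminatory_index(counts):
    n = sum(counts)
    d = 1 - sum(c * (c - 1) for c in counts) / (n * (n - 1))
    p = [c / n for c in counts]
    # Grundmann et al. large-sample variance approximation.
    var = (4 / n) * (sum(x ** 3 for x in p) - sum(x ** 2 for x in p) ** 2)
    half = 1.96 * math.sqrt(var)
    return d, d - half, d + half

# Illustrative tally: a few shared genotypes plus many singletons.
d, lo, hi = discriminatory_index([3, 3, 2, 2] + [1] * 50)
print(f"D = {d:.4f} (95% CI {lo:.4f}-{hi:.4f})")
```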

Relevance: 30.00%
Publisher:
Abstract:

This study looks at how increased memory utilisation affects throughput and energy consumption in scientific computing, especially in high-energy physics. Our aim is to minimise the energy consumed by a set of jobs without increasing the processing time. Earlier tests indicated that, especially in data analysis, throughput can increase by over 100% and energy consumption decrease by 50% when multiple jobs are processed in parallel per CPU core. Since jobs are heterogeneous, no single optimum value for the number of parallel jobs exists. A better solution is based on memory utilisation, but finding an optimum memory threshold is not straightforward. We therefore developed a fuzzy logic-based algorithm that dynamically adapts the memory threshold to the overall load. In this way, memory consumption can be kept stable under different workloads while achieving significantly higher throughput and energy efficiency than the traditional approaches of a fixed number of jobs or a fixed memory threshold.
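
As a rough illustration of how such a controller might look, here is a minimal fuzzy-style sketch: membership functions, rule weights, and thresholds are all invented for illustration and are not the published algorithm.

```python
# Sketch: a fuzzy-style controller that nudges a memory threshold with load.
def adapt_threshold(threshold_gb, mem_used_frac):
    """Adjust the job-admission memory threshold based on memory pressure."""
    # Triangular membership degrees for 'low', 'ok', 'high' memory load.
    low  = max(0.0, min(1.0, (0.6 - mem_used_frac) / 0.2))
    high = max(0.0, min(1.0, (mem_used_frac - 0.8) / 0.2))
    ok   = max(0.0, 1.0 - low - high)
    # Rule base: low load -> raise threshold (admit more jobs),
    # high load -> lower it, moderate load -> hold steady.
    delta = 2.0 * low + 0.0 * ok - 2.0 * high   # GB, defuzzified by weighted sum
    return max(1.0, threshold_gb + delta)

threshold = 16.0
for load in (0.45, 0.70, 0.95):
    threshold = adapt_threshold(threshold, load)
    print(f"load {load:.0%} -> threshold {threshold:.1f} GB")
```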

Relevance: 30.00%
Publisher:
Abstract:

To cite this article: Ponvert C, Perrin Y, Bados-Albiero A, Le Bourgeois M, Karila C, Delacourt C, Scheinmann P, De Blic J. Allergy to betalactam antibiotics in children: results of a 20-year study based on clinical history, skin and challenge tests. Pediatr Allergy Immunol 2011; 22: 411-418.

Studies based on skin and challenge tests have shown that 12-60% of children with suspected betalactam hypersensitivity are allergic to betalactams. We studied responses in skin and challenge tests in 1865 children with suspected betalactam allergy in order to (i) confirm or rule out the suspected diagnosis; (ii) evaluate the diagnostic value of immediate and non-immediate responses in skin and challenge tests; (iii) determine the frequency of betalactam allergy in those children; and (iv) determine potential risk factors for betalactam allergy. The work-up was completed in 1431 children, of whom 227 (15.9%) were diagnosed allergic to betalactams. Betalactam hypersensitivity was diagnosed in 50 of the 162 (30.9%) children reporting immediate reactions and in 177 of the 1087 (16.7%) children reporting non-immediate reactions (p < 0.001). The likelihood of betalactam hypersensitivity was also significantly higher in children reporting anaphylaxis, serum sickness-like reactions, and (potentially) severe skin reactions such as acute generalized exanthematic pustulosis, Stevens-Johnson syndrome, and drug reaction with systemic symptoms than in other children (p < 0.001). Skin tests diagnosed 86% of immediate and 31.6% of non-immediate sensitizations. Cross-reactivity and/or cosensitization among betalactams was diagnosed in 76% and 14.7% of the children with immediate and non-immediate hypersensitivity, respectively. The number of children diagnosed allergic to betalactams decreased with the time elapsed between the reaction and the work-up, probably because most children with severe and worrying reactions were referred for allergological work-up more promptly than the other children. Sex, age, and atopy were not risk factors for betalactam hypersensitivity. In conclusion, we confirm in a large cohort of children that (i) only a few children with suspected betalactam hypersensitivity are allergic to betalactams; (ii) the likelihood of betalactam allergy increases with the earliness and/or severity of the reactions; (iii) although non-immediate-reading skin tests (intradermal and patch tests) may diagnose non-immediate sensitizations in children with non-immediate reactions to betalactams (especially maculopapular rashes and potentially severe skin reactions), their diagnostic value is far lower than that of immediate-reading skin tests, most non-immediate sensitizations to betalactams being diagnosed by means of challenge tests; (iv) cross-reactivity and/or cosensitizations among betalactams are much more frequent in children reporting immediate and/or anaphylactic reactions than in the other children; (v) age, sex and personal atopy are not significant risk factors for betalactam hypersensitivity; and (vi) the number of children with diagnosed allergy to betalactams (especially of the immediate type) decreases with the time elapsed between the reaction and the allergological work-up. Finally, based on our experience, we also propose a practical diagnostic approach for children with suspected betalactam hypersensitivity.
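
The headline comparison (50/162 immediate vs. 177/1087 non-immediate reactors diagnosed allergic, p < 0.001) can be re-derived from the reported counts with a standard chi-square test; a minimal sketch follows (the abstract does not state which test was used, so this is a plausibility check, not the authors' analysis).

```python
# Sketch: chi-square test on the 2x2 table implied by the reported counts.
from scipy.stats import chi2_contingency

table = [[50, 162 - 50],       # immediate reactions: allergic / not allergic
         [177, 1087 - 177]]    # non-immediate reactions
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2g}")  # p < 0.001, consistent with the abstract
```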

Relevance: 30.00%
Publisher:
Abstract:

This dissertation consists of three essays in equilibrium asset pricing. The first chapter studies the asset pricing implications of a general equilibrium model in which real investment is reversible at a cost. Firms face higher costs in contracting than in expanding their capital stock and decide to invest when their productive capital is scarce relative to the overall capital of the economy. Positive shocks to the capital of the firm increase the size of the firm and reduce the value of growth options. As a result, the firm is burdened with more unproductive capital and its value falls relative to its accumulated capital. The optimal consumption policy alters the optimal allocation of resources and affects the firm's value, generating mean-reverting dynamics for market-to-book (M/B) ratios. The model (1) captures the convergence of price-to-book ratios (negative for growth stocks and positive for value stocks), known as firm migration; (2) generates deviations from the classic CAPM in line with the cross-sectional variation in expected stock returns; and (3) generates a non-monotone relationship between Tobin's q and conditional volatility, consistent with the empirical evidence.

The second chapter studies a standard portfolio-choice problem with transaction costs and mean reversion in expected returns. In the presence of transaction costs, no matter how small, arbitrage activity does not necessarily render all riskless rates of return equal. When two such rates follow stochastic processes, it is not optimal to immediately arbitrage out any discrepancy that arises between them: immediate arbitrage would induce a definite expenditure on transaction costs, whereas, without intervention, there is some, perhaps sufficient, probability that the two rates will converge again without any costs having been incurred. One can therefore surmise that, at equilibrium, the financial market will permit the coexistence of two riskless rates that are not equal to each other. For analogous reasons, randomly fluctuating expected rates of return on risky assets will be allowed to differ even after correction for risk, leading to important violations of the Capital Asset Pricing Model. The combination of randomness in expected rates of return and proportional transaction costs is a serious blow to existing frictionless pricing models.

Finally, in the last chapter I propose a two-country, two-good general equilibrium economy with uncertainty about the fundamentals' growth rates, to study the joint behavior of equity volatilities and correlations at the business-cycle frequency. I assume that dividend growth rates jump from one state to another, with possibly correlated switches across countries. The model is solved in closed form and analytical expressions for stock prices are reported. When calibrated to empirical data for the United States and the United Kingdom, the results show that, given the existing degree of synchronization across these business cycles, the model captures the historical patterns of stock return volatilities quite well. Moreover, I can explain the time behavior of the correlation, but only under the assumption of a global business cycle.
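
The no-trade-band intuition of the second essay can be illustrated numerically: simulate a mean-reverting gap between two riskless rates and only trade it away when it exceeds the transaction cost. The sketch below is an illustration of that idea, not the chapter's model; the Ornstein-Uhlenbeck dynamics and all parameters are invented.

```python
# Sketch: a mean-reverting rate gap is only arbitraged outside a cost band.
import numpy as np

rng = np.random.default_rng(1)
kappa, sigma, cost, dt = 2.0, 0.01, 0.005, 1 / 252
gap, trades = 0.0, 0
for _ in range(252 * 10):                      # ten years of daily steps
    gap += -kappa * gap * dt + sigma * np.sqrt(dt) * rng.normal()
    if abs(gap) > cost:                        # arbitrage only outside the band
        gap = np.sign(gap) * cost              # trade shrinks the gap to the band edge
        trades += 1
print(f"trades triggered: {trades}; gaps within +/-{cost:.3f} persist untraded")
```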

Relevance: 30.00%
Publisher:
Abstract:

Because data on rare species are usually sparse, it is important to have efficient ways to sample additional data. Traditional sampling approaches are of limited value for rare species because a very large proportion of randomly chosen sampling sites are unlikely to shelter the species. For these species, spatial predictions from niche-based distribution models can be used to stratify the sampling and increase sampling efficiency. The newly sampled data are then used to improve the initial model. Applying this approach repeatedly is an adaptive process that can increase the number of new occurrences found. We illustrate the approach with a case study of a rare and endangered plant species in Switzerland and with a simulation experiment. Our field survey confirmed that the method helps discover new populations of the target species in remote areas where the predicted habitat suitability is high. In our simulations, the model-based approach provided a significant improvement (by a factor of 1.8 to 4, depending on the measure) over simple random sampling. In terms of cost, this approach may save up to 70% of the time spent in the field.
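
A minimal sketch of the adaptive loop, under invented data and a placeholder niche model (logistic regression standing in for the niche-based distribution model), might look as follows.

```python
# Sketch: model-based adaptive sampling for a rare species (synthetic landscape).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
env = rng.normal(size=(5000, 4))                    # environmental predictors per site
true_w = np.array([2.0, -1.0, 0.5, 0.0])
p_true = 1 / (1 + np.exp(-(env @ true_w - 4)))      # rare species: low prevalence

surveyed = rng.choice(5000, size=100, replace=False).tolist()
found = (rng.random(len(surveyed)) < p_true[surveyed]).astype(int)
if found.sum() == 0:
    found[0] = 1  # ensure both classes are present for the demo fit

for _ in range(3):                                  # three adaptive rounds
    model = LogisticRegression(max_iter=1000).fit(env[surveyed], found)
    suit = model.predict_proba(env)[:, 1]
    suit[surveyed] = -1                             # never resample visited sites
    new = np.argsort(suit)[-50:]                    # survey the 50 most suitable sites
    hits = (rng.random(len(new)) < p_true[new]).astype(int)
    surveyed, found = surveyed + new.tolist(), np.concatenate([found, hits])
    print(f"round hit rate: {hits.mean():.2%} vs base prevalence {p_true.mean():.2%}")
```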

Relevance: 30.00%
Publisher:
Abstract:

The Puklen complex of the Mid-Proterozoic Gardar Province, South Greenland, consists of various silica-saturated to quartz-bearing syenites, which are intruded by a peralkaline granite. The primary mafic minerals in the syenites are augite ± olivine + Fe-Ti oxide + amphibole. Ternary feldspar thermometry and phase equilibria among mafic silicates yield T = 950-750 °C, a(SiO2) = 0.7-1 and an f(O2) of 1-3 log units below the fayalite-magnetite-quartz (FMQ) buffer at 1 kbar. In the granites, the primary mafic minerals are ilmenite and Li-bearing arfvedsonite, which crystallized at temperatures below 750 °C and at f(O2) values around the FMQ buffer. In both rock types, a secondary post-magmatic assemblage overprints the primary magmatic phases. In the syenites, primary Ca-bearing minerals are replaced by Na-rich minerals such as aegirine-augite and albite, resulting in the release of Ca. Accordingly, secondary minerals include ferro-actinolite, (calcite-siderite) solid solution, titanite and andradite in equilibrium with the Na-rich minerals. Phase equilibria indicate that these minerals formed over a long temperature interval, from near-magmatic temperatures down to ~300 °C. In the course of this cooling, oxygen fugacity rose in most samples; for example, late-stage aegirine in the granites formed at the expense of arfvedsonite at temperatures below 300 °C and at an oxygen fugacity above the haematite-magnetite (HM) buffer. The calculated δ18O(melt) value for the syenites (+5.9 to +6.3‰) implies a mantle origin, whereas the inferred δ18O(melt) value of <+5.1‰ for the granitic melts is significantly lower. Thus, the granites require an additional low-δ18O contaminant, which was not involved in the genesis of the syenites. Rb/Sr data for minerals of both rock types indicate open-system behaviour for Rb and Sr during post-magmatic metasomatism. Neodymium isotope compositions of primary minerals in the syenites (εNd(1170 Ma) = -3.8 to -6.4) are highly variable and suggest that assimilation of crustal rocks occurred to variable extents. Homogeneous εNd values of -5.9 and -6.0 for magmatic amphibole in the granites lie within the range of the syenites. Because magmatic and late- to post-magmatic minerals from the same syenite samples have very similar neodymium isotopic compositions, essentially closed-system behaviour during cooling is implied. In contrast, for the granites an externally derived fluid phase is required to explain the extremely low εNd values of about -10 and the low δ18O of +2.0 to +0.5‰ for late-stage aegirine, indicating an open system in the late-stage history. In this study we show that combining phase-equilibria constraints with stable and radiogenic isotope data on mineral separates can constrain magma evolution during emplacement and crystallization much more tightly than conventional whole-rock studies.
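
As an aside on the notation, an εNd(t) value such as those quoted is computed from measured 143Nd/144Nd and 147Sm/144Nd ratios back-calculated to the crystallization age and compared with CHUR. The sketch below uses the standard present-day CHUR parameters; the sample composition is invented, chosen only to land in the quoted range.

```python
# Sketch: epsilon-Nd(t) from measured ratios and the 147Sm decay constant.
import math

LAMBDA_SM147 = 6.54e-12          # 147Sm decay constant, 1/yr
CHUR_143_144 = 0.512638          # present-day CHUR 143Nd/144Nd
CHUR_147_144 = 0.1967            # present-day CHUR 147Sm/144Nd

def epsilon_nd(nd143_144, sm147_144, t_yr):
    growth = math.exp(LAMBDA_SM147 * t_yr) - 1
    sample_t = nd143_144 - sm147_144 * growth   # back-calculate sample ratio at t
    chur_t = CHUR_143_144 - CHUR_147_144 * growth
    return (sample_t / chur_t - 1) * 1e4

# Invented mineral composition at 1170 Ma, giving a value near -5.
print(f"epsilon-Nd(1170 Ma) = {epsilon_nd(0.511720, 0.11, 1.17e9):+.1f}")
```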

Relevance: 30.00%
Publisher:
Abstract:

BACKGROUND: Controversy exists regarding the usefulness of troponin testing for the risk stratification of patients with acute pulmonary embolism (PE). We conducted an updated systematic review and meta-analysis of troponin-based risk stratification of normotensive patients with acute symptomatic PE. The sources of our data were publications listed in Medline and Embase from 1980 through April 2008 and a review of cited references in those publications. METHODS: We included all studies that estimated the relation between troponin levels and the incidence of all-cause mortality in normotensive patients with acute symptomatic PE. Two reviewers independently abstracted data and assessed study quality. From the literature search, 596 publications were screened. Nine studies comprising 1,366 normotensive patients with acute symptomatic PE were deemed eligible. Pooled results showed that elevated troponin levels were associated with a 4.26-fold increased odds of overall mortality (95% CI, 2.13 to 8.50; heterogeneity χ2 = 12.64; degrees of freedom = 8; p = 0.125). Summary receiver operating characteristic curve analysis showed a relationship between the sensitivity and specificity of troponin levels for predicting overall mortality (Spearman rank correlation coefficient = 0.68; p = 0.046). Pooled likelihood ratios (LRs) were not extreme (negative LR, 0.59 [95% CI, 0.39 to 0.88]; positive LR, 2.26 [95% CI, 1.66 to 3.07]). The Begg rank correlation method did not detect evidence of publication bias. CONCLUSIONS: The results of this meta-analysis indicate that elevated troponin levels do not adequately discern normotensive patients with acute symptomatic PE who are at high risk for death from those who are at low risk.
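
Two of the reported quantities are simple arithmetic once sensitivity, specificity, and per-study counts are fixed. The sketch below shows the diagnostic likelihood-ratio formulas (with an illustrative Se/Sp pair that roughly reproduces the pooled LRs above) and a fixed-effect inverse-variance pooled odds ratio over invented study counts; the meta-analysis's own pooling model may differ.

```python
# Sketch: diagnostic LRs from Se/Sp, and an inverse-variance pooled OR.
import math

def likelihood_ratios(se, sp):
    return se / (1 - sp), (1 - se) / sp        # LR+, LR-

# Se 0.56 / Sp 0.75 gives LR+ ~2.24 and LR- ~0.59, close to the pooled values.
print("LR+ %.2f, LR- %.2f" % likelihood_ratios(0.56, 0.75))

def pooled_or(studies):
    """studies: (deaths_pos, n_pos, deaths_neg, n_neg) per study (invented)."""
    num = den = 0.0
    for a, n1, c, n0 in studies:
        b, d = n1 - a, n0 - c
        log_or = math.log((a * d) / (b * c))
        w = 1 / (1/a + 1/b + 1/c + 1/d)        # inverse variance of log OR
        num, den = num + w * log_or, den + w
    return math.exp(num / den)

print(f"pooled OR: {pooled_or([(12, 80, 5, 120), (9, 60, 4, 90)]):.2f}")
```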

Relevance: 30.00%
Publisher:
Abstract:

EXECUTIVE SUMMARY: Evaluating the information security posture of an organization is becoming a very complex task. Currently, the evaluation and assessment of information security are commonly performed using frameworks, methodologies and standards that often consider the various aspects of security independently. Unfortunately, this is ineffective, because it ignores the necessity of a global, systemic and multidimensional approach to information security evaluation. At the same time, the overall security level is generally considered to be only as strong as its weakest link. This thesis proposes a model that holistically assesses all dimensions of security in order to minimize the likelihood that a given threat will exploit the weakest link. A formalized structure taking into account all security elements is presented; it is based on a methodological evaluation framework in which information security is evaluated from a global perspective.

This dissertation is divided into three parts.

Part One, Information Security Evaluation Issues, consists of four chapters. Chapter 1 introduces the purpose of this research and the model that will be proposed. In this chapter we raise some questions with respect to "traditional evaluation methods" and identify the principal elements to be addressed. We then introduce the baseline attributes of our model and set out the expected result of evaluations performed according to it. Chapter 2 focuses on the definition of information security to be used as a reference point for our evaluation model. The concepts inherent in a holistic, baseline information security program are defined and, on this basis, the most common roots of trust in information security are identified. Chapter 3 analyses the difference and the relationship between the concepts of information risk management and security management. Comparing these two concepts allows us to identify the most relevant elements to be included in our evaluation model, while clearly situating these two notions within a defined framework is of the utmost importance for the results obtained from the evaluation process. Chapter 4 sets out our evaluation model and the way it addresses the issues of information security evaluation. In this chapter the underlying concepts of assurance and trust are discussed. Based on these two concepts, the structure of the model is developed to provide an assurance-related platform with three evaluation attributes: "assurance structure", "quality issues", and "requirements achievement". The issues relating to each of these attributes are analysed with reference to sources such as methodologies, standards and published research papers, and the operation of the model is then discussed. Assurance levels, quality levels and maturity levels are defined in order to perform the evaluation according to the model.

Part Two, Implementation of the Information Security Assurance Assessment Model (ISAAM) according to the Information Security Domains, also consists of four chapters. Here our evaluation model is put into a well-defined context with respect to the four pre-defined information security dimensions: the Organizational, Functional, Human, and Legal dimensions. Each dimension is discussed in a separate chapter.
For each dimension, the same two-phase evaluation path is followed. The first phase identifies the elements that constitute the basis of the evaluation:
- the key elements within the dimension;
- the Focus Areas of the dimension, consisting of the security issues identified for it;
- the Specific Factors of the dimension, consisting of the security measures or controls addressing those issues.
The second phase evaluates the dimension by:
- implementing the evaluation model, based on the elements identified in the first phase, and identifying the security tasks, processes, procedures, and actions that the organization should have performed to reach the desired level of protection;
- applying the maturity model for the dimension as a basis for reliance on security. For each dimension we propose a generic maturity model that any organization can use to define its own security requirements.

Part Three of this dissertation contains the Final Remarks, Supporting Resources and Annexes. With reference to the objectives of the thesis, the Final Remarks briefly analyse whether those objectives were achieved and suggest directions for future related research. The Supporting Resources comprise the bibliographic sources used to elaborate and justify our approach, and the Annexes collect the relevant topics identified in the literature that illustrate particular aspects of our approach. Our information security evaluation model is based on and integrates different information security best practices, standards, methodologies and research expertise, combined so as to define a reliable categorization of information security. After the definition of terms and requirements, an evaluation process should be performed to obtain evidence that information security within the organization in question is adequately managed. We have specifically integrated into our model the most useful elements of these sources in order to provide a generic model that can be implemented in all kinds of organizations. The value added by our evaluation model is that it is easy to implement and operate and answers concrete needs for a reliable, efficient and dynamic evaluation tool built on a coherent evaluation system. On that basis, our model can be implemented internally by organizations, allowing them to govern their information security better.

RÉSUMÉ: General context of the thesis. Evaluating security in general, and information security in particular, has become for organizations not only a crucial mission but also an increasingly complex one. At present, this evaluation rests mainly on methodologies, best practices, norms or standards that treat separately the various aspects that make up information security. We consider this way of evaluating security inefficient, because it does not take into account the interactions between the different dimensions and components of security, even though it has long been accepted that the overall security level of an organization is always that of the weakest link in the security chain.
We identified the need for a global, integrated, systemic and multidimensional approach to information security evaluation. Indeed, and this is the starting point of our thesis, we show that only a global treatment of security makes it possible to meet the requirements of optimal security as well as the specific protection needs of an organization. Our thesis therefore proposes a new evaluation paradigm designed to satisfy the effectiveness and efficiency needs of a given organization. We propose a model that aims to evaluate all dimensions of security holistically, in order to minimize the probability that a potential threat could exploit vulnerabilities and cause direct or indirect damage. This model rests on a formalized structure that takes into account all the elements of a security system or program. We thus propose a methodological evaluation framework that considers information security from a global perspective.

Structure of the thesis and topics covered. The document is structured in three parts. The first, entitled "The problem of evaluating information security", comprises four chapters. Chapter 1 introduces the object of the research and the basic concepts of the proposed evaluation model. The traditional way of evaluating security is critically analysed in order to identify the principal, invariant elements to be taken into account in our holistic approach. The basic elements of our evaluation model and its expected operation are then presented in order to outline the results expected from this model. Chapter 2 focuses on the definition of the notion of information security. It is not a redefinition of security as such, but a putting into perspective of the dimensions, criteria and indicators to be used as a reference base for determining the object of the evaluation used throughout our work. The concepts inherent in the holistic character of security, as well as the constituent elements of a security reference level, are defined accordingly. This allows us to identify what we have called the "roots of trust". Chapter 3 presents and analyses the difference and the relationships between risk management and security management processes, in order to identify the constituent elements of the protection framework to be included in our evaluation model. Chapter 4 is devoted to the presentation of our evaluation model, the Information Security Assurance Assessment Model (ISAAM), and the way it meets the evaluation requirements presented earlier. In this chapter, the underlying concepts of assurance and trust are analysed. Building on these two concepts, the structure of the evaluation model is developed to obtain a platform offering a certain level of guarantee, resting on three evaluation attributes: the "trust structure", "process quality", and "achievement of requirements and objectives".
The issues related to each of these evaluation attributes are analysed on the basis of the state of the art in research and the literature, of the various existing methods, and of the most common norms and standards in the security field. On this basis, three evaluation levels are constructed: the assurance level, the quality level and the maturity level, which together form the basis for evaluating the overall security state of an organization.

The second part, "Applying the information security assurance assessment model by security domain", likewise comprises four chapters. The evaluation model already constructed and analysed is here placed in a specific context according to the four predefined security dimensions: the Organizational dimension, the Functional dimension, the Human dimension, and the Legal dimension. Each of these dimensions, and its specific evaluation, is the subject of a separate chapter. For each dimension, a two-phase evaluation is constructed as follows. The first phase identifies the elements that constitute the basis of the evaluation:
- the key elements of the evaluation;
- the Focus Areas of each dimension, which represent the issues found within that dimension;
- the Specific Factors of each Focus Area, which represent the security and control measures that help resolve, or reduce the impact of, the corresponding risks.
The second phase evaluates each of the dimensions presented above. It consists, on the one hand, of applying the general evaluation model to the dimension concerned by:
- building on the elements specified in the first phase;
- identifying the specific security tasks, processes and procedures that should have been carried out to reach the desired level of protection.
On the other hand, the evaluation of each dimension is completed by proposing a maturity model specific to that dimension, to be regarded as a reference base for the overall security level. For each dimension we propose a generic maturity model that any organization can use to specify its own security requirements. This constitutes an innovation in the field of evaluation, which we justify for each dimension, systematically highlighting the added value it provides.

The third part of the document concerns the overall validation of our proposal and contains, by way of conclusion, a critical perspective on our work and final remarks, completed by a bibliography and annexes. Our security evaluation model integrates and builds on numerous sources of expertise, such as best practices, norms, standards, methods and the expertise of scientific research in the field. Our constructive proposal answers a genuine, still unsolved problem that all organizations face, regardless of their size and profile.
It would allow them to specify their particular requirements as to the security level to be met and to instantiate an evaluation process specific to their needs, so that they can satisfy themselves that their information security is managed appropriately, thereby gaining a certain level of confidence in the degree of protection provided. We have integrated into our model the best of the know-how, experience and expertise currently available internationally, with the aim of providing an evaluation model that is simple, generic and applicable to a large number of public or private organizations. The added value of our evaluation model lies precisely in the fact that it is sufficiently generic and easy to implement while answering the concrete needs of organizations. Our proposal thus constitutes a reliable, efficient and dynamic evaluation tool deriving from a coherent evaluation approach. Consequently, our evaluation system can be implemented internally by the company itself, without recourse to additional resources, and likewise gives it the opportunity to govern its information security better.
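
To make the structure of such an evaluation concrete, here is a hedged sketch of how an ISAAM-style tabulation could be organized in code: dimensions contain Focus Areas, Focus Areas contain Specific Factors scored on a maturity scale, and the weakest-link principle drives the summary. All names and scores are invented placeholders, not the thesis's actual factors or maturity levels.

```python
# Sketch: weakest-link summary over dimensions / Focus Areas / Specific Factors.
from statistics import mean

assessment = {
    "Organizational": {"governance": [3, 2, 4], "policy": [2, 3]},
    "Functional":     {"access control": [4, 3], "monitoring": [2, 2, 3]},
    "Human":          {"awareness": [1, 2]},
    "Legal":          {"compliance": [3, 3, 2]},
}

for dimension, focus_areas in assessment.items():
    # A dimension is only as strong as its weakest Specific Factor
    # (the weakest-link principle the model is built around).
    weakest = min(min(scores) for scores in focus_areas.values())
    avg = mean(s for scores in focus_areas.values() for s in scores)
    print(f"{dimension:>14}: weakest link {weakest}, mean maturity {avg:.1f}")
```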

Relevance: 30.00%
Publisher:
Abstract:

Forensic scientists face increasingly complex inference problems when evaluating likelihood ratios (LRs) for an appropriate pair of propositions. Up to now, scientists and statisticians have derived LR formulae using an algebraic approach. However, this approach reaches its limits when addressing cases with an increasing number of variables and dependence relationships between these variables. In this study, we suggest using a graphical approach based on the construction of Bayesian networks (BNs). We first construct a BN that captures the problem, and then deduce the expression for calculating the LR from this model in order to compare it with existing LR formulae. We illustrate this idea by applying it to the evaluation of an activity-level LR in the context of the two-trace transfer problem. Our approach allows us to relax assumptions made in previous LR developments, produce a new LR formula for the two-trace transfer problem and generalize this scenario to n traces.
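
The paper's two-trace model is not reproduced here, but the mechanics of deducing an LR from a Bayesian network can be shown on a deliberately tiny single-trace toy: enumerate the network under each proposition and take the ratio. The node structure and all probabilities below are invented for illustration.

```python
# Sketch: exact enumeration over a toy BN to get an activity-level LR.
# Nodes: H (Hp/Hd), T (trace transferred from suspect), B (background trace
# present), E (a matching trace is recovered).
from itertools import product

t, b, gamma = 0.6, 0.05, 0.01   # transfer, background, random-match probabilities

def p_evidence(hypothesis):
    """P(a matching trace is recovered | hypothesis), summing out T and B."""
    total = 0.0
    for transferred, background in product([False, True], repeat=2):
        # Transfer from the suspect can only occur under Hp in this toy.
        p_t = (t if transferred else 1 - t) if hypothesis == "Hp" else \
              (0.0 if transferred else 1.0)
        p_b = b if background else 1 - b
        m_t = 1.0 if transferred else 0.0          # transferred trace matches
        m_b = gamma if background else 0.0         # background matches by chance
        p_match = 1 - (1 - m_t) * (1 - m_b)        # at least one matching trace
        total += p_t * p_b * p_match
    return total

lr = p_evidence("Hp") / p_evidence("Hd")
print(f"LR = {lr:,.0f}")   # equals (t + (1-t)*b*gamma) / (b*gamma) here
```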

Relevance: 30.00%
Publisher:
Abstract:

Background: Elevated urinary calcium excretion is associated with reduced bone mineral density. Population-based data on urinary calcium excretion are scarce. We explored the association of serum calcium and circulating levels of vitamin D (including 25(OH)D2 and 25(OH)D3) with urinary calcium excretion in men and women in a population-based study.

Methods: We used data from the "Swiss Survey on Salt", conducted between 2010 and 2012 and including people aged 15 years and over. Twenty-four-hour urine collections, blood analyses, clinical examinations and anthropometric measures were collected in 11 centres across the 3 linguistic regions of Switzerland. Vitamin D was measured centrally using liquid chromatography-tandem mass spectrometry. Hypercalciuria was defined as urinary calcium excretion >0.1 mmol/kg/24 h. Multivariable linear regression was used to explore factors associated with square-root-transformed 24-hour urinary calcium excretion (mmol/24 h), taken as the dependent variable. Vitamin D was divided into month-specific tertiles, the first tertile having the lowest values and the third the highest.

Results: The 669 men and 624 women had a mean (SD) age of 49.2 (18.1) and 47 (17.9) years and a prevalence of hypercalciuria of 8.9% and 8.0%, respectively. In adjusted models, the association of urinary calcium excretion with protein-corrected serum calcium (β coefficient ± standard error, on the square-root scale of urinary calcium) was 1.125 ± 0.184 per mmol/L (P<0.001) in women and 0.374 ± 0.224 (P=0.096) in men. Men in the third month-specific vitamin D tertile had higher urinary calcium excretion than men in the first tertile (0.170 ± 0.05, P=0.001); the corresponding association in women was 0.048 ± 0.043 (P=0.272).

Conclusion: About one person in eleven in the Swiss population has hypercalciuria. The positive association of serum calcium with urinary calcium excretion was steeper in women than in men, independently of menopausal status. Circulating vitamin D was positively associated with urinary calcium excretion only in men. The reasons underlying the observed sex differences in the hormonal control of urinary calcium excretion need to be explored in further studies.
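
A minimal sketch of the regression described (square-root-transformed 24-hour urinary calcium regressed on serum calcium plus covariates) on simulated data might look as follows; the variable names, covariate set, and effect sizes are invented, not the survey's.

```python
# Sketch: OLS on a square-root-transformed outcome (simulated data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 600
serum_ca = rng.normal(2.3, 0.1, n)               # mmol/L, protein-corrected (simulated)
age = rng.uniform(15, 80, n)
# Simulate on the square-root scale, then square to get raw urinary calcium.
sqrt_uca = np.clip(rng.normal(0.5 + 1.1 * (serum_ca - 2.3), 0.4, n), 0.05, None)
u_ca = sqrt_uca ** 2                             # mmol/24 h

X = sm.add_constant(np.column_stack([serum_ca, age]))
fit = sm.OLS(np.sqrt(u_ca), X).fit()
print(fit.summary(xname=["const", "serum_ca", "age"]))
```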

Relevance: 30.00%
Publisher:
Abstract:

Direct MR arthrography has better diagnostic accuracy than MR imaging alone. However, contrast material is not always homogeneously distributed in the articular space, so lesions of cartilage surfaces or intra-articular soft tissues can be misdiagnosed. Applying axial traction during MR arthrography distracts the joint, which enables better distribution of contrast material and better delineation of intra-articular structures, and therefore improves the detection of cartilage lesions. Moreover, the axial stress applied to articular structures may reveal lesions that are invisible on MR images obtained without traction. Based on our clinical experience, we believe that this relatively unknown technique is promising and should be developed further.

Relevance: 30.00%
Publisher:
Abstract:

STUDY DESIGN: Retrospective radiologic study of a prospective patient cohort. OBJECTIVE: To devise a qualitative grading of lumbar spinal stenosis (LSS) and to study its reliability and clinical relevance. SUMMARY OF BACKGROUND DATA: Radiologic stenosis is commonly assessed by measuring the dural sac cross-sectional area (DSCA), but great variation is observed in the surfaces recorded for symptomatic and asymptomatic individuals. METHODS: We describe a 7-grade classification based on the morphology of the dural sac, as observed on T2 axial magnetic resonance images, using the rootlet/cerebrospinal fluid ratio. Grades A and B show cerebrospinal fluid presence, while grades C and D show none at all. The grading was applied to magnetic resonance images of 95 subjects divided into 3 groups: 37 symptomatic LSS surgically treated patients; 31 symptomatic LSS conservatively treated patients (average follow-up, 2.5 and 3.1 years); and 27 low back pain (LBP) sufferers. DSCA was also measured digitally. We studied intra- and interobserver reliability, the distribution of grades, the relation between morphologic grading and DSCA, and the relations among grades, DSCA, and the Oswestry Disability Index. RESULTS: Average intra- and interobserver agreement was substantial and moderate, respectively (κ = 0.65 and 0.44), and both were substantial for physicians working in the unit where the study originated. Surgical patients had the smallest DSCA, and a larger proportion of C and D grades was observed in the surgical group. Surface measurements resulted in overdiagnosis of stenosis in 35 patients and underdiagnosis in 12. No relation could be found between stenosis grade or DSCA and baseline Oswestry Disability Index or surgical result. C and D grade patients were more likely to fail conservative treatment, whereas grade A and B patients were less likely to warrant surgery. CONCLUSION: The grading identifies stenosis in different subjects than surface measurements alone. Since it mainly considers impingement of neural tissue, it may be a more appropriate clinical and research tool, as well as carrying prognostic value.
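
The intra- and interobserver figures quoted are kappa statistics; computing Cohen's kappa (the unweighted variant; the study may have used a weighted one) from two readers' gradings takes one library call, as in the sketch below. The readings are invented.

```python
# Sketch: Cohen's kappa for inter-observer agreement on morphologic grades.
from sklearn.metrics import cohen_kappa_score

reader1 = list("AABBCCDDABCDCCBA")   # invented gradings by reader 1
reader2 = list("AABBCCDDABCCCBBA")   # invented gradings by reader 2
print(f"kappa = {cohen_kappa_score(reader1, reader2):.2f}")
```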

Relevance: 30.00%
Publisher:
Abstract:

Background. The time elapsed since the infection of a human immunodeficiency virus (HIV)-infected individual (the age of infection) is an important but often only poorly known quantity. We assessed whether the fraction of ambiguous nucleotides obtained from bulk sequencing, as done for genotypic resistance testing, can serve as a proxy for this parameter. Methods. We correlated the age of infection with the fraction of ambiguous nucleotides in partial pol sequences of HIV-1 sampled before initiation of antiretroviral therapy (ART). Three groups of Swiss HIV Cohort Study participants were analyzed, for whom the age of infection was estimated on the basis of Bayesian back-calculation (n = 3,307), seroconversion (n = 366), or diagnoses of primary HIV infection (n = 130). In addition, we studied 124 patients for whom longitudinal genotypic resistance testing was performed while they were still ART-naive. Results. We found that the fraction of ambiguous nucleotides increases with the age of infection at a rate of 0.2% per year within the first 8 years, and at a decreasing rate thereafter. We show that this pattern is consistent with population-genetic models for realistic parameters. Finally, we show that, in this highly representative population, a fraction of ambiguous nucleotides of >0.5% provides strong evidence against a recent infection event <1 year prior to sampling (negative predictive value, 98.7%). Conclusions. These findings show that the fraction of ambiguous nucleotides is a useful marker for the age of infection.
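
The quoted negative predictive value follows from Bayes' rule once the marker's sensitivity, specificity, and the prevalence of recent infection are fixed. The sketch below shows the arithmetic with illustrative inputs (not the study's estimates) chosen to land near the reported 98.7%.

```python
# Sketch: NPV of "ambiguous fraction <= 0.5%" as a test for recent infection.
def npv(sensitivity, specificity, prevalence):
    """P(not recent | fraction > 0.5%), i.e. P(condition absent | test negative)."""
    tn = specificity * (1 - prevalence)    # non-recent, fraction > 0.5%
    fn = (1 - sensitivity) * prevalence    # recent, but fraction > 0.5%
    return tn / (tn + fn)

print(f"NPV = {npv(0.90, 0.75, 0.10):.1%}")   # ~98.5% with these invented inputs
```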

Relevance: 30.00%
Publisher:
Abstract:

Secondary sexual characters often signal qualities such as physiological processes associated with resistance to various sources of stress. When the expression of an ornament is not sex-limited, we can identify the costs and benefits of displaying a trait that is typical of one's own sex or of the other sex. Indeed, the magnitude and sign of the covariation between physiology and the extent to which an ornament is expressed could differ between males and females if, for instance, the regulation of physiological processes is sensitive to sex hormones. Using data collected over 14 years in the nocturnal barn owl Tyto alba, we investigated how nestling body mass covaries with a heritable melanin-based sex trait, females displaying on average larger black feather spots than males. Independently of nestling sex, year and time of day, large-spotted nestlings were heavier than small-spotted nestlings. In contrast, the magnitude and sign of the covariation between nestling body mass and the size of parental spots varied over the course of the day, in a way that depended on the year and on parental sex. In poor years, the offspring of smaller-spotted mothers were heavier throughout the resting period; in the morning, offspring sired by larger-spotted fathers were heavier than the offspring of smaller-spotted fathers, while in the evening the opposite pattern was found. Thus, maternal and paternal coloration are differentially associated with behaviour or physiology, processes that are sensitive to time of day and environmental factors. Interestingly, the covariation between offspring body mass and paternal coloration is more sensitive to these environmental factors than the covariation with maternal coloration. This indicates that the benefit of pairing with differently spotted males may depend on environmental conditions, which could help maintain genetic variation in the face of intense directional (sexual) selection.