865 results for Value-based pricing
Abstract:
This article aims to demonstrate the effects of different decision-making styles on strategic decisions and, by extension, on the organization. The technique presented in the study is based on the transformation of linguistic variables into numerical value intervals. The model draws on fuzzy logic methodology and fuzzy numbers. This fuzzy approach allows us to examine the relations between decision-making styles and strategic management processes under uncertainty. The purpose is to provide results that may help companies exercise the most appropriate decision-making style for each of their strategic management processes. The study leaves open further research topics that may be applied to other decision-making areas within the strategic management process.
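To make the linguistic-to-interval transformation concrete, here is a minimal Python sketch using triangular fuzzy numbers; the labels, breakpoints, and centroid defuzzification are illustrative assumptions, not the paper's actual scale.

```python
# Minimal sketch (illustrative labels/breakpoints, not the paper's scale):
# each linguistic rating becomes a triangular fuzzy number (a, m, b) on [0, 1].
TRIANGULAR = {
    "low":    (0.0, 0.0, 0.3),
    "medium": (0.2, 0.5, 0.8),
    "high":   (0.7, 1.0, 1.0),
}

def defuzzify(tfn):
    """Centroid of a triangular fuzzy number (a, m, b)."""
    a, m, b = tfn
    return (a + m + b) / 3.0

# Aggregate several expert ratings of one decision style into a crisp score.
ratings = ["high", "medium", "high"]
crisp = [defuzzify(TRIANGULAR[r]) for r in ratings]
print(sum(crisp) / len(crisp))
```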
Abstract:
This study looks at how increased memory utilisation affects throughput and energy consumption in scientific computing, especially in high-energy physics. Our aim is to minimise the energy consumed by a set of jobs without increasing the processing time. Earlier tests indicated that, especially in data analysis, throughput can increase by over 100% and energy consumption decrease by 50% when multiple jobs are processed in parallel per CPU core. Since jobs are heterogeneous, it is not possible to find an optimal value for the number of parallel jobs. A better solution is based on memory utilisation, but finding an optimal memory threshold is not straightforward. Therefore, a fuzzy logic-based algorithm was developed that dynamically adapts the memory threshold based on the overall load. In this way, it is possible to keep memory consumption stable under different workloads while achieving significantly higher throughput and energy efficiency than traditional approaches that use a fixed number of jobs or a fixed memory threshold.
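A deliberately simplified sketch of the adaptive-threshold idea follows: a crisp stand-in for the paper's fuzzy logic controller, with the function name, step size, and load bands invented for illustration.

```python
# Crisp stand-in (not the authors' implementation) for an adaptive memory
# threshold: raise the admission threshold for new parallel jobs when memory
# is underused, lower it under pressure, leave it alone in between.
def adjust_threshold(threshold_gb, mem_used_frac, step_gb=0.5,
                     low=0.6, high=0.85):
    if mem_used_frac < low:        # plenty of headroom: admit more jobs
        return threshold_gb + step_gb
    if mem_used_frac > high:       # pressure building: back off
        return max(step_gb, threshold_gb - step_gb)
    return threshold_gb

t = 4.0
for load in (0.4, 0.5, 0.9, 0.7):  # hypothetical load trace
    t = adjust_threshold(t, load)
print(t)                            # threshold after reacting to the trace
```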
Abstract:
Ponvert C, Perrin Y, Bados-Albiero A, Le Bourgeois M, Karila C, Delacourt C, Scheinmann P, De Blic J. Allergy to betalactam antibiotics in children: results of a 20-year study based on clinical history, skin and challenge tests. Pediatr Allergy Immunol 2011; 22: 411-418. Studies based on skin and challenge tests have shown that 12-60% of children with suspected betalactam hypersensitivity were allergic to betalactams. Responses in skin and challenge tests were studied in 1865 children with suspected betalactam allergy (i) to confirm or rule out the suspected diagnosis; (ii) to evaluate the diagnostic value of immediate and non-immediate responses in skin and challenge tests; (iii) to determine the frequency of betalactam allergy in those children; and (iv) to determine potential risk factors for betalactam allergy. The work-up was completed in 1431 children, of whom 227 (15.9%) were diagnosed allergic to betalactams. Betalactam hypersensitivity was diagnosed in 50 of the 162 (30.9%) children reporting immediate reactions and in 177 of the 1087 (16.7%) children reporting non-immediate reactions (p < 0.001). The likelihood of betalactam hypersensitivity was also significantly higher in children reporting anaphylaxis, serum sickness-like reactions, and (potentially) severe skin reactions such as acute generalized exanthematic pustulosis, Stevens-Johnson syndrome, and drug reaction with systemic symptoms than in other children (p < 0.001). Skin tests diagnosed 86% of immediate and 31.6% of non-immediate sensitizations. Cross-reactivity and/or cosensitization among betalactams was diagnosed in 76% and 14.7% of the children with immediate and non-immediate hypersensitivity, respectively. The number of children diagnosed allergic to betalactams decreased with the time elapsed between the reaction and the work-up, probably because most children with severe and worrying reactions were referred for allergological work-up more promptly than the other children. Sex, age, and atopy were not risk factors for betalactam hypersensitivity. In conclusion, we confirm in a large cohort of children that (i) only a few children with suspected betalactam hypersensitivity are allergic to betalactams; (ii) the likelihood of betalactam allergy increases with the earliness and/or severity of the reactions; (iii) although non-immediate-reading skin tests (intradermal and patch tests) may diagnose non-immediate sensitizations in children with non-immediate reactions to betalactams (maculopapular rashes and potentially severe skin reactions especially), their diagnostic value is far lower than that of immediate-reading skin tests, most non-immediate sensitizations to betalactams being diagnosed by means of challenge tests; (iv) cross-reactivity and/or cosensitizations among betalactams are much more frequent in children reporting immediate and/or anaphylactic reactions than in the other children; (v) age, sex, and personal atopy are not significant risk factors for betalactam hypersensitivity; and (vi) the number of children with diagnosed allergy to betalactams (of the immediate type especially) decreases with the time elapsed between the reaction and the allergological work-up. Finally, based on our experience, we also propose a practical diagnostic approach for children with suspected betalactam hypersensitivity.
Abstract:
We describe the version of the GPT planner to be used in the planning competition. This version, called mGPT, solves MDPs specified in the PPDDL language by extracting and using different classes of lower bounds, along with various heuristic-search algorithms. The lower bounds are extracted from deterministic relaxations of the MDP in which the alternative probabilistic effects of an action are mapped into different, independent, deterministic actions. The heuristic-search algorithms, in turn, use these lower bounds to focus the updates and deliver a consistent value function over all states reachable from the initial state under the greedy policy.
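As a minimal sketch of the "all-outcomes" style determinization described above (our data layout, not mGPT's actual implementation): every probabilistic effect of an action becomes its own deterministic action, so the cheapest plan in the relaxation lower-bounds the MDP's expected cost.

```python
# Each probabilistic outcome is mapped to an independent deterministic action.
from dataclasses import dataclass

@dataclass
class ProbAction:
    name: str
    cost: float
    outcomes: list          # (probability, effect) pairs

@dataclass
class DetAction:
    name: str
    cost: float
    effect: object

def determinize(actions):
    """Split every probabilistic action into one deterministic action per outcome."""
    det = []
    for a in actions:
        for i, (_prob, eff) in enumerate(a.outcomes):
            det.append(DetAction(f"{a.name}#outcome{i}", a.cost, eff))
    return det

acts = [ProbAction("move", 1.0, [(0.8, "advance"), (0.2, "stay")])]
print([d.name for d in determinize(acts)])   # ['move#outcome0', 'move#outcome1']
```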
Abstract:
In this paper, we describe several techniques for detecting the tonic pitch value in Indian classical music. In Indian music, the raga is the basic melodic framework, and it is built on the tonic. Tonic detection is therefore fundamental for any melodic analysis of Indian classical music. This work explores tonic detection by processing the pitch histograms of Indian classical music. The processing of pitch histograms using group delay functions, and its ability to amplify certain traits of Indian music in the pitch histogram, is discussed. Three different strategies to detect the tonic are proposed: the concert method, the template matching method, and the segmented histogram method. The concert method exploits the fact that the tonic is constant over a piece or concert. The template matching and segmented histogram methods use two properties, (i) the tonic is always present in the background, and (ii) some notes are less inflected and dominant, to detect the tonic of individual pieces. All three methods yield good results for Carnatic music (90-100% accuracy), while for Hindustani music the template method works best, provided the vādi and samvādi notes for a given piece are known (85%).
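To picture the histogram step these methods share, here is an illustrative Python sketch (an assumed pipeline, not the authors' code): fold voiced pitch values into one octave, histogram them, and take the strongest peak as a first tonic candidate, which the methods above then refine.

```python
import numpy as np

def pitch_histogram(f0_hz, bins_per_octave=120, ref_hz=55.0):
    """Fold voiced pitch values into one octave of cents and histogram them."""
    f0 = f0_hz[f0_hz > 0]                          # keep voiced frames only
    cents = 1200.0 * np.log2(f0 / ref_hz)          # pitch in cents above ref
    folded = np.mod(cents, 1200.0)                 # fold to a single octave
    return np.histogram(folded, bins=bins_per_octave, range=(0, 1200))

def tonic_candidate(f0_hz):
    hist, edges = pitch_histogram(f0_hz)
    peak = np.argmax(hist)                         # most-visited pitch class
    return 0.5 * (edges[peak] + edges[peak + 1])   # candidate tonic, in cents
```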
Abstract:
The aim of this research was to investigate the effects of high pressure processing (HPP) on consumer acceptance of chilled ready meals manufactured using a low-value beef cut. Three hundred consumers evaluated chilled ready meals subjected to four pressure treatments and a non-treated control, monadically, on a 9-point scale for liking of beef tenderness and juiciness, overall flavour, overall liking, and purchase intent. Data were also collected on consumers' food consumption patterns, their attitudes towards food by means of the reduced food-related lifestyle (FRL) instrument, and socio-demographics. The results indicated that a pressure treatment of 200 MPa was acceptable to most consumers. K-means cluster analysis identified four consumer groups with similar preferences, and the optimal pressure treatments acceptable to specific consumer groups were identified for firms wishing to target attitudinally differentiated consumer segments.
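If it helps to picture the segmentation step, here is a tiny K-means sketch in Python (hypothetical data layout and random scores; the study's variables and coding differ):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# rows = consumers, columns = 9-point liking scores for the 5 treatments
liking = rng.integers(1, 10, size=(300, 5)).astype(float)

X = StandardScaler().fit_transform(liking)       # standardize before clustering
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
print(np.bincount(km.labels_))                   # segment sizes
```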
Abstract:
Introduction: This dissertation consists of three essays in equilibrium asset pricing. The first chapter studies the asset pricing implications of a general equilibrium model in which real investment is reversible at a cost. Firms face higher costs in contracting than in expanding their capital stock and decide to invest when their productive capital is scarce relative to the overall capital of the economy. Positive shocks to the capital of the firm increase the size of the firm and reduce the value of growth options. As a result, the firm is burdened with more unproductive capital and its value declines relative to its accumulated capital. The optimal consumption policy alters the optimal allocation of resources and affects the firm's value, generating mean-reverting dynamics for M/B ratios. The model (1) captures the convergence of price-to-book ratios, negative for growth stocks and positive for value stocks (firm migration), (2) generates deviations from the classic CAPM in line with the cross-sectional variation in expected stock returns, and (3) generates a non-monotone relationship between Tobin's q and conditional volatility, consistent with the empirical evidence. The second chapter studies a standard portfolio-choice problem with transaction costs and mean reversion in expected returns. In the presence of transaction costs, no matter how small, arbitrage activity does not necessarily render all riskless rates of return equal. When two such rates follow stochastic processes, it is not optimal to immediately arbitrage out any discrepancy that arises between them. The reason is that immediate arbitrage would incur a definite expenditure on transaction costs, whereas, without intervention, there is some, perhaps sufficient, probability that the two interest rates will come back together without any costs having been incurred. Hence, one can surmise that in equilibrium the financial market will permit the coexistence of two riskless rates that are not equal to each other. For analogous reasons, randomly fluctuating expected rates of return on risky assets will be allowed to differ even after correction for risk, leading to important violations of the Capital Asset Pricing Model. The combination of randomness in expected rates of return and proportional transaction costs is a serious blow to existing frictionless pricing models. Finally, in the last chapter I propose a two-country, two-good general equilibrium economy with uncertainty about the fundamentals' growth rates to study the joint behavior of equity volatilities and correlations at the business cycle frequency. I assume that dividend growth rates jump from one state to another, and that the countries' regime switches may be correlated. The model is solved in closed form and analytical expressions for stock prices are reported. When calibrated to empirical data for the United States and the United Kingdom, the results show that, given the existing degree of synchronization across these business cycles, the model captures the historical patterns of stock return volatilities quite well. Moreover, the model can explain the time behavior of the correlation, but only under the assumption of a global business cycle.
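One standard way to formalize the "jumping" growth rates in the last chapter (our notation; the dissertation's exact specification may differ) is a two-state Markov-switching drift for dividends:

\[
\frac{dD^{i}_{t}}{D^{i}_{t}} = \mu(s^{i}_{t})\,dt + \sigma_{i}\,dW^{i}_{t},
\qquad s^{i}_{t} \in \{\text{low},\text{high}\}, \quad i \in \{\text{US},\text{UK}\},
\]

where each \(s^{i}_{t}\) is a two-state Markov chain and the chains' jump times are allowed to be correlated across countries, capturing partially synchronized business cycles.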
Abstract:
Because data on rare species are usually sparse, it is important to have efficient ways to sample additional data. Traditional sampling approaches are of limited value for rare species because a very large proportion of randomly chosen sampling sites are unlikely to shelter the species. For these species, spatial predictions from niche-based distribution models can be used to stratify the sampling and increase sampling efficiency. The newly sampled data are then used to improve the initial model. Applying this approach repeatedly is an adaptive process that may increase the number of new occurrences found. We illustrate the approach with a case study of a rare and endangered plant species in Switzerland and a simulation experiment. Our field survey confirmed that the method helps in the discovery of new populations of the target species in remote areas where the predicted habitat suitability is high. In our simulations, the model-based approach provided a significant improvement (by a factor of 1.8 to 4, depending on the measure) over simple random sampling. In terms of cost, this approach may save up to 70% of the time spent in the field.
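An illustrative sketch of the stratification idea (an assumed workflow, not the authors' code): instead of sampling sites uniformly at random, weight candidate sites by the habitat-suitability score of a fitted distribution model.

```python
import numpy as np

def stratified_sites(suitability, n_sites, rng=None):
    """Pick survey sites with probability proportional to predicted suitability."""
    if rng is None:
        rng = np.random.default_rng()
    p = suitability / suitability.sum()
    return rng.choice(len(suitability), size=n_sites, replace=False, p=p)

rng = np.random.default_rng(1)
suit = rng.beta(0.5, 3.0, size=10_000)     # hypothetical suitability map, flattened
sites = stratified_sites(suit, n_sites=50, rng=rng)
print(suit[sites].mean(), suit.mean())     # sampled cells skew toward high suitability
```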
Abstract:
The Puklen complex of the Mid-Proterozoic Gardar Province, South Greenland, consists of various silica-saturated to quartz-bearing syenites, which are intruded by a peralkaline granite. The primary mafic minerals in the syenites are augite ± olivine + Fe-Ti oxide + amphibole. Ternary feldspar thermometry and phase equilibria among mafic silicates yield T = 950-750°C, aSiO2 = 0.7-1, and an fO2 of 1-3 log units below the fayalite-magnetite-quartz (FMQ) buffer at 1 kbar. In the granites, the primary mafic minerals are ilmenite and Li-bearing arfvedsonite, which crystallized at temperatures below 750°C and at fO2 values around the FMQ buffer. In both rock types, a secondary post-magmatic assemblage overprints the primary magmatic phases. In the syenites, primary Ca-bearing minerals are replaced by Na-rich minerals such as aegirine-augite and albite, resulting in the release of Ca. Accordingly, secondary minerals include ferro-actinolite, (calcite-siderite)ss, titanite, and andradite in equilibrium with the Na-rich minerals. Phase equilibria indicate that the formation of these minerals took place over a long temperature interval, from near-magmatic temperatures down to ~300°C. In the course of this cooling, oxygen fugacity rose in most samples. For example, late-stage aegirine in the granites formed at the expense of arfvedsonite at temperatures below 300°C and at an oxygen fugacity above the haematite-magnetite (HM) buffer. The calculated δ18O(melt) value for the syenites (+5.9 to +6.3‰) implies a mantle origin, whereas the inferred δ18O(melt) value of <+5.1‰ for the granitic melts is significantly lower. Thus, the granites require an additional low-δ18O contaminant, which was not involved in the genesis of the syenites. Rb/Sr data for minerals of both rock types indicate open-system behaviour for Rb and Sr during post-magmatic metasomatism. Neodymium isotope compositions (εNd(1170 Ma) = -3.8 to -6.4) of primary minerals in the syenites are highly variable and suggest that assimilation of crustal rocks occurred to variable extents. Homogeneous εNd values of -5.9 and -6.0 for magmatic amphibole in the granites lie within the range of the syenites. Because of the very similar neodymium isotopic compositions of magmatic and late- to post-magmatic minerals from the same syenite samples, essentially closed-system behaviour during cooling is implied. In contrast, for the granites an externally derived fluid phase is required to explain the extremely low εNd values of about -10 and the low δ18O between +2.0 and +0.5‰ for late-stage aegirine, indicating an open system in the late-stage history. In this study we show that combining phase equilibria constraints with stable and radiogenic isotope data on mineral separates can provide much better constraints on magma evolution during emplacement and crystallization than conventional whole-rock studies.
Abstract:
Recently, several anonymization algorithms have appeared for privacy preservation on graphs. Some are based on randomization techniques and others on k-anonymity concepts; both can be used to obtain an anonymized graph with a given k-anonymity value. In this paper we compare algorithms based on both techniques for obtaining an anonymized graph with a desired k-anonymity value. We analyze the complexity of these methods in generating anonymized graphs and the quality of the resulting graphs.
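For intuition, here is a minimal Python check of k-degree anonymity, the simplest k-anonymity notion for graphs (every degree value must be shared by at least k vertices); it is illustrative only, as the compared algorithms enforce richer variants.

```python
from collections import Counter
import networkx as nx

def is_k_degree_anonymous(G, k):
    """True if every degree value occurs in at least k vertices."""
    degree_counts = Counter(d for _, d in G.degree())
    return all(count >= k for count in degree_counts.values())

G = nx.path_graph(5)                  # degrees: 1, 2, 2, 2, 1
print(is_k_degree_anonymous(G, 2))    # True: each degree occurs >= 2 times
print(is_k_degree_anonymous(G, 3))    # False: degree 1 occurs only twice
```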
Abstract:
BACKGROUND: Controversy exists regarding the usefulness of troponin testing for the risk stratification of patients with acute pulmonary embolism (PE). We conducted an updated systematic review and meta-analysis of troponin-based risk stratification of normotensive patients with acute symptomatic PE. The sources of our data were publications listed in Medline and Embase from 1980 through April 2008 and a review of cited references in those publications. METHODS: We included all studies that estimated the relation between troponin levels and the incidence of all-cause mortality in normotensive patients with acute symptomatic PE. Two reviewers independently abstracted data and assessed study quality. From the literature search, 596 publications were screened. Nine studies comprising 1,366 normotensive patients with acute symptomatic PE were deemed eligible. Pooled results showed that elevated troponin levels were associated with a 4.26-fold increased odds of overall mortality (95% CI, 2.13 to 8.50; heterogeneity χ² = 12.64; degrees of freedom = 8; p = 0.125). Summary receiver operating characteristic curve analysis showed a relationship between the sensitivity and specificity of troponin levels to predict overall mortality (Spearman rank correlation coefficient = 0.68; p = 0.046). Pooled likelihood ratios (LRs) were not extreme (negative LR, 0.59 [95% CI, 0.39 to 0.88]; positive LR, 2.26 [95% CI, 1.66 to 3.07]). The Begg rank correlation method did not detect evidence of publication bias. CONCLUSIONS: The results of this meta-analysis indicate that elevated troponin levels do not adequately discern normotensive patients with acute symptomatic PE who are at high risk of death from those who are at low risk.
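For readers less used to likelihood ratios, the pooled values translate into post-test odds in the usual way (standard definitions, not specific to this meta-analysis):

\[
LR^{+}=\frac{\text{sensitivity}}{1-\text{specificity}},\qquad
LR^{-}=\frac{1-\text{sensitivity}}{\text{specificity}},\qquad
\text{odds}_{\text{post}}=\text{odds}_{\text{pre}}\times LR .
\]

A negative LR of 0.59 barely shifts the pre-test odds downward, which is why a negative troponin result cannot, on its own, safely identify low-risk patients.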
Abstract:
Under IAS 40, companies are required to report fair values of investment properties on the balance sheet or to disclose them in the notes. The standard also requires that companies disclose the methods and significant assumptions applied in determining the fair values of investment properties. However, IAS 40 does not include any illustrative examples or other guidance on how to apply the disclosure requirements. We use a sample of publicly traded companies from the real estate sector in the EU. We find that a majority of the companies use income-based methods for the measurement of fair values, but there are considerable cross-country variations in the level of disclosure about the assumptions used in determining fair values. More specifically, we find that Scandinavian and German origin companies disclose more than French and English origin companies. We also test whether disclosure quality is associated with enforcement quality, measured with the "Rule of Law" index of Kaufmann et al. (2010), and with a secrecy-versus-transparency measure based on Gray (1988). We find a positive association between disclosure and enforcement quality and a negative association with secrecy.
Abstract:
We analyze the equilibrium of a multi-sector exogenous growth model in which the introduction of minimum consumption requirements drives structural change. We show that equilibrium dynamics simultaneously exhibit structural change and balanced growth of aggregate variables, as is observed in the US, when the initial intensity of minimum consumption requirements is sufficiently small. This intensity is measured by the ratio between the aggregate value of the minimum consumption requirements and GDP and is therefore inversely related to the level of economic development. Initially rich economies benefit from an initially low intensity of the minimum consumption requirements and, as a consequence, end up exhibiting balanced growth of aggregate variables while undergoing structural change. In contrast, initially poor economies suffer from an initially large intensity of the minimum consumption requirements, which makes the growth of the aggregate variables unbalanced for a very long period. These economies may never simultaneously exhibit balanced growth of aggregate variables and structural change.
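One way to make the intensity measure concrete (our notation, under the standard Stone-Geary reading of minimum consumption requirements; the paper's exact formulation may differ):

\[
x_t=\frac{\sum_{j} p_{j,t}\,\bar c_{j}}{Y_t},
\]

where \(\bar c_j\) is the minimum requirement in sector \(j\), \(p_{j,t}\) its price, and \(Y_t\) GDP; richer economies start with a lower \(x_t\) and are therefore closer to balanced growth from the outset.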
Abstract:
EXECUTIVE SUMMARY: Evaluating the Information Security posture of an organization is becoming a very complex task. Currently, the evaluation and assessment of Information Security are commonly performed using frameworks, methodologies, and standards which often consider the various aspects of security independently. Unfortunately this is ineffective, because it does not take into consideration the necessity of a global and systemic multidimensional approach to Information Security evaluation. At the same time, the overall security level is generally considered to be only as strong as its weakest link. This thesis proposes a model aiming to holistically assess all dimensions of security in order to minimize the likelihood that a given threat will exploit the weakest link. A formalized structure taking into account all security elements is presented; this is based on a methodological evaluation framework in which Information Security is evaluated from a global perspective. This dissertation is divided into three parts. Part One, Information Security Evaluation Issues, consists of four chapters. Chapter 1 is an introduction to the purpose of this research and the model that will be proposed. In this chapter we raise some questions with respect to "traditional evaluation methods" and identify the principal elements to be addressed in this direction. We then introduce the baseline attributes of our model and set out the expected results of evaluations performed according to it. Chapter 2 focuses on the definition of Information Security to be used as a reference point for our evaluation model. The concepts inherent in a holistic, baseline Information Security Program are defined. Based on this, the most common roots of trust in Information Security are identified. Chapter 3 analyses the difference and the relationship between the concepts of Information Risk Management and Security Management. Comparing these two concepts allows us to identify the most relevant elements to be included in our evaluation model, while clearly situating these two notions within a defined framework, which is of the utmost importance for the results obtained from the evaluation process. Chapter 4 sets out our evaluation model and the way it addresses issues relating to the evaluation of Information Security. Within this chapter the underlying concepts of assurance and trust are discussed. Based on these two concepts, the structure of the model is developed to provide an assurance-related platform as well as three evaluation attributes: "assurance structure", "quality issues", and "requirements achievement". Issues relating to each of these evaluation attributes are analysed with reference to sources such as methodologies, standards, and published research papers. The operation of the model is then discussed: assurance levels, quality levels, and maturity levels are defined in order to perform the evaluation according to the model. Part Two, Implementation of the Information Security Assurance Assessment Model (ISAAM) according to the Information Security Domains, also consists of four chapters. This is the section where our evaluation model is put into a well-defined context with respect to the four pre-defined Information Security dimensions: the Organizational, Functional, Human, and Legal dimensions. Each Information Security dimension is discussed in a separate chapter.
For each dimension, the following two-phase evaluation path is followed. The first phase concerns the identification of the elements which will constitute the basis of the evaluation: the key elements within the dimension; the Focus Areas for each dimension, consisting of the security issues identified for that dimension; and the Specific Factors for each dimension, consisting of the security measures or controls addressing those security issues. The second phase concerns the evaluation of each Information Security dimension through: the implementation of the evaluation model, based on the elements identified in the first phase, by identifying the security tasks, processes, procedures, and actions that should have been performed by the organization to reach the desired level of protection; and the maturity model for each dimension as a basis for reliance on security. For each dimension we propose a generic maturity model that could be used by any organization to define its own security requirements. Part Three of this dissertation contains the final remarks, supporting resources, and annexes. With reference to the objectives of the thesis, the final remarks briefly analyse whether these objectives were achieved and suggest directions for future related research. The supporting resources comprise the bibliographic sources that were used to elaborate and justify our approach, and the annexes include the relevant topics identified in the literature to illustrate certain aspects of our approach. Our Information Security evaluation model is based on and integrates different Information Security best practices, standards, methodologies, and research expertise, combined in order to define a reliable categorization of Information Security. After the definition of terms and requirements, an evaluation process should be performed to obtain evidence that Information Security within the organization in question is adequately managed. We have specifically integrated into our model the most useful elements of these sources of information in order to provide a generic model able to be implemented in all kinds of organizations. The value added by our evaluation model is that it is easy to implement and operate and answers concrete needs in terms of reliance upon an efficient and dynamic evaluation tool through a coherent evaluation system. On that basis, our model can be implemented internally within organizations, allowing them to better govern their Information Security.
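To make the dimension / Focus Area / Specific Factor structure tangible, here is a hypothetical Python sketch (names, scale, and aggregation are ours for illustration; the thesis defines the actual attributes and levels). The weakest-link aggregation mirrors the thesis's guiding principle that overall security is only as strong as the least mature control.

```python
from dataclasses import dataclass

@dataclass
class SpecificFactor:
    name: str
    maturity: int          # assumed 0-5 assessed maturity of this control

@dataclass
class FocusArea:
    name: str
    factors: list          # SpecificFactor instances

def dimension_maturity(focus_areas):
    """Weakest-link aggregation: the least mature control bounds the dimension."""
    return min(f.maturity for fa in focus_areas for f in fa.factors)

human = [FocusArea("Awareness", [SpecificFactor("Training", 3),
                                 SpecificFactor("Phishing drills", 1)])]
print(dimension_maturity(human))   # 1: the weakest control dominates
```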
Abstract:
Introduction: Evidence suggests that citrullinated fibrin(ogen) may be a potential in vivo target of anti-citrullinated protein/peptide antibodies (ACPA) in rheumatoid arthritis (RA). We compared the diagnostic yield of three enzyme-linked immunosorbent assay (ELISA) tests using chimeric fibrin/filaggrin citrullinated synthetic peptides (CFFCP1, CFFCP2, CFFCP3) with a commercial CCP2-based test in RA and analyzed their prognostic value in early RA. Methods: Samples from 307 blood donors and from patients with RA (322), psoriatic arthritis (133), systemic lupus erythematosus (119), and hepatitis C infection (84) were assayed using the CFFCP- and CCP2-based tests. Autoantibodies were also analyzed at baseline and during a 2-year follow-up in 98 early RA patients to determine their prognostic value. Results: With cutoffs giving 98% specificity for RA versus blood donors, sensitivity was 72.1% for CFFCP1, 78.0% for CFFCP2, 71.4% for CFFCP3, and 73.9% for CCP2, with positive predictive values greater than 97% in all cases. CFFCP sensitivity in RA increased to 80.4%, without loss of specificity, when positivity was defined as any positive anti-CFFCP status. The specificity of the three CFFCP tests versus the other rheumatic populations was high (>90%) and similar to that of the CCP2 test. In early RA, CFFCP1 best identified patients with a poor radiographic outcome. Radiographic progression was faster in the small subgroup of CCP2-negative, CFFCP1-positive patients than in those negative for both autoantibodies. CFFCP antibody levels decreased after 1 year, but without any correlation with changes in disease activity. Conclusions: CFFCP-based assays are highly sensitive and specific for RA. Early RA patients with anti-CFFCP1 antibodies, including CCP2-negative patients, show greater radiographic progression.
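A quick sketch of the diagnostic metrics quoted above, computed from a 2x2 confusion table; the counts below are back-calculated from the abstract's own cohort sizes (322 RA patients, 307 blood donors) to reproduce CFFCP2-like performance, not the study's raw data.

```python
def diagnostics(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)           # positive predictive value
    return sensitivity, specificity, ppv

# ~78% sensitivity at ~98% specificity, PPV > 97%, as reported for CFFCP2
sens, spec, ppv = diagnostics(tp=251, fp=6, fn=71, tn=301)
print(f"sens={sens:.1%} spec={spec:.1%} ppv={ppv:.1%}")
```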