192 results for scenario-based assessment


Abstract:

We investigated the association between exposure to radio-frequency electromagnetic fields (RF-EMFs) from broadcast transmitters and childhood cancer. First, we conducted a time-to-event analysis including children under age 16 years living in Switzerland on December 5, 2000. Follow-up lasted until December 31, 2008. Second, all children living in Switzerland for some time between 1985 and 2008 were included in an incidence density cohort. RF-EMF exposure from broadcast transmitters was modeled. Based on 997 cancer cases, adjusted hazard ratios in the time-to-event analysis for the highest exposure category (>0.2 V/m) as compared with the reference category (<0.05 V/m) were 1.03 (95% confidence interval (CI): 0.74, 1.43) for all cancers, 0.55 (95% CI: 0.26, 1.19) for childhood leukemia, and 1.68 (95% CI: 0.98, 2.91) for childhood central nervous system (CNS) tumors. Results of the incidence density analysis, based on 4,246 cancer cases, were similar for all types of cancer and leukemia but did not indicate a CNS tumor risk (incidence rate ratio = 1.03, 95% CI: 0.73, 1.46). This large census-based cohort study did not suggest an association between predicted RF-EMF exposure from broadcasting and childhood leukemia. Results for CNS tumors were less consistent, but the most comprehensive analysis did not suggest an association.
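The first analysis is a standard survival setup. Below is a minimal sketch of a Cox proportional-hazards fit of the kind behind the reported hazard ratios, assuming a hypothetical one-row-per-child data frame with dummy-coded exposure categories; the study's actual covariates and adjustments are not reproduced.

```python
# Minimal sketch of a Cox proportional-hazards fit with categorical RF-EMF
# exposure. All data are simulated placeholders; the column names
# (followup_years, cancer, exp_mid, exp_high) are hypothetical.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
category = rng.choice(["ref", "mid", "high"], size=n)   # <0.05, 0.05-0.2, >0.2 V/m
df = pd.DataFrame({
    "followup_years": rng.exponential(8.0, n).clip(max=8.0),  # time at risk
    "cancer": rng.binomial(1, 0.05, n),                       # event indicator
    "exp_mid": (category == "mid").astype(int),
    "exp_high": (category == "high").astype(int),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="cancer")
print(cph.summary)   # the exp(coef) column gives hazard ratios with 95% CIs
```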

Abstract:

Bayesian networks and probabilistic evaluation are both gaining widespread use in many professional branches, including forensic science. Nevertheless, they are subtle topics with definitional details that require careful study. While many sophisticated developments of probabilistic approaches to the evaluation of forensic findings can readily be found in the published literature, there remains a gap with respect to writings that focus on foundational aspects and on how interested scientists new to these topics may acquire them. This paper takes this as a starting point to report on how a class of master's students in forensic science learned to use Bayesian networks for likelihood ratio based probabilistic inference. The presentation uses an example that relies on a casework scenario drawn from the published literature, involving a questioned signature. A complicating aspect of that case study - proposed to students in a teaching scenario - is the need to consider multiple competing propositions, a setting that cannot readily be handled within a likelihood ratio based framework without attention to some additional technical details. Using generic Bayesian network fragments from the existing literature on the topic, course participants were able to track the probabilistic underpinnings of the proposed scenario correctly, both in terms of likelihood ratios and of posterior probabilities. Further study of the example also allowed students to derive an alternative Bayesian network structure with a computational output equivalent to existing probabilistic solutions. This practical experience underlines the potential of Bayesian networks to support and clarify foundational principles of probabilistic procedures for forensic evaluation.
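A toy calculation makes the complication concrete. The propositions, priors, and likelihoods below are invented for illustration and are not the values of the cited signature case; with more than two propositions, a likelihood ratio is only defined pairwise, here for one proposition against the prior-weighted union of its alternatives.

```python
# Toy sketch of likelihood-ratio reasoning with three competing propositions,
# e.g. H1: the suspect signed; H2: a named other person signed; H3: an
# unknown person signed. All numbers are hypothetical.
priors = {"H1": 0.5, "H2": 0.25, "H3": 0.25}
likelihoods = {"H1": 0.8, "H2": 0.1, "H3": 0.05}  # P(findings | H)

marginal = sum(priors[h] * likelihoods[h] for h in priors)
posteriors = {h: priors[h] * likelihoods[h] / marginal for h in priors}

# LR of H1 against the union of H2 and H3, weighted by their relative priors:
p_alt = (sum(priors[h] * likelihoods[h] for h in ("H2", "H3"))
         / (priors["H2"] + priors["H3"]))
lr_h1_vs_rest = likelihoods["H1"] / p_alt

print(posteriors)        # posterior probabilities over all three propositions
print(lr_h1_vs_rest)     # pairwise LR for H1 vs. the pooled alternatives
```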

Abstract:

BACKGROUND: Exposure to combination antiretroviral therapy (cART) can lead to important metabolic changes and an increased risk of coronary heart disease (CHD). Computerized clinical decision support systems have been advocated to improve the management of patients at risk for CHD, but it is unclear whether such systems reduce patients' risk for CHD. METHODS: We conducted a cluster trial within the Swiss HIV Cohort Study (SHCS) of HIV-infected patients, aged 18 years or older, not pregnant and receiving cART for >3 months. We randomized 165 physicians to either guidelines for CHD risk factor management alone or guidelines plus CHD risk profiles. Risk profiles included the Framingham risk score, CHD drug prescriptions and CHD events based on biannual assessments; they were continuously updated by the SHCS data centre and integrated into patient charts by study nurses. Outcome measures were total cholesterol, systolic and diastolic blood pressure, and the Framingham risk score. RESULTS: A total of 3,266 patients (80% of those eligible) had a final assessment of the primary outcome at least 12 months after the start of the trial. Mean (95% confidence interval) patient differences where physicians received CHD risk profiles and guidelines, rather than guidelines alone, were: total cholesterol -0.02 mmol/l (-0.09 to 0.06), systolic blood pressure -0.4 mmHg (-1.6 to 0.8), diastolic blood pressure -0.4 mmHg (-1.5 to 0.7) and Framingham 10-year risk score -0.2% (-0.5 to 0.1). CONCLUSIONS: Routine computerized provision of CHD risk profiles in addition to guidelines does not significantly improve risk factors for CHD in patients on cART.
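Because randomization was at the physician level, the arm comparison has to respect clustering. A minimal sketch of such an analysis, with simulated data and hypothetical column names (the trial's actual adjustments are not reproduced), is OLS with physician-clustered standard errors:

```python
# Sketch of a cluster-randomized arm comparison: regress total cholesterol
# on trial arm with standard errors clustered by physician. All values are
# simulated placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_phys, n_per = 165, 20
physician = np.repeat(np.arange(n_phys), n_per)
arm = np.repeat(rng.integers(0, 2, n_phys), n_per)   # randomized per physician
chol = 5.0 - 0.02 * arm + rng.normal(0, 1.0, n_phys * n_per)

X = sm.add_constant(arm.astype(float))
fit = sm.OLS(chol, X).fit(cov_type="cluster", cov_kwds={"groups": physician})
print(fit.params[1], fit.conf_int()[1])   # arm effect and its 95% CI
```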

Abstract:

This work aimed at assessing the doses delivered in Switzerland to paediatric patients during computed tomography (CT) examinations of the brain, chest and abdomen, and at establishing diagnostic reference levels (DRLs) for various age groups. Forms were sent to the ten centres performing CT on children, addressing the demographics, the indication, and the scanning parameters: number of series, kilovoltage, tube current, rotation time, reconstruction slice thickness and pitch, volume CT dose index (CTDIvol) and dose length product (DLP). The proposed DRLs across the four age groups (in increasing order of age) are:
- Brain: CTDIvol 20, 30, 40, 60 mGy; DLP 270, 420, 560, 1,000 mGy·cm
- Chest: CTDIvol 5, 8, 10, 12 mGy; DLP 110, 200, 220, 460 mGy·cm
- Abdomen: CTDIvol 7, 9, 13, 16 mGy; DLP 130, 300, 380, 500 mGy·cm
An optimisation process should be initiated to reduce the spread in dose recorded in this study. A major element of this process should be the use of DRLs.
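The two reported quantities are linked by the scanned length, which makes the paired DRLs easy to sanity-check. As a worked example for the youngest brain group, assuming a single-series scan (the implied 13.5 cm length is illustrative, not a value from the survey):

\[
\mathrm{DLP} = \mathrm{CTDI}_{\mathrm{vol}} \times L_{\mathrm{scan}},
\qquad
270\ \mathrm{mGy{\cdot}cm} = 20\ \mathrm{mGy} \times 13.5\ \mathrm{cm}.
\]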

Abstract:

Screening people without symptoms of disease is an attractive idea. Screening allows early detection of disease or elevated risk of disease, and has the potential for improved treatment and reduction of mortality. The list of future screening opportunities is set to grow because of the refinement of screening techniques, the increasing frequency of degenerative and chronic diseases, and the steadily growing body of evidence on genetic predispositions for various diseases. But how should we decide on the diseases for which screening should be done and on recommendations for how it should be implemented? We use the examples of prostate cancer and genetic screening to show the importance of considering screening as an ongoing population-based intervention with beneficial and harmful effects, and not simply the use of a test. Assessing whether screening should be recommended and implemented for any named disease is therefore a multi-dimensional task in health technology assessment. There are several countries that already use established processes and criteria to assess the appropriateness of screening. We argue that the Swiss healthcare system needs a nationwide screening commission mandated to conduct appropriate evidence-based evaluation of the impact of proposed screening interventions, to issue evidence-based recommendations, and to monitor the performance of screening programmes introduced. Without explicit processes there is a danger that beneficial screening programmes could be neglected and that ineffective, and potentially harmful, screening procedures could be introduced.

Abstract:

Due to their conical shape and the reduction of area with increasing elevation, mountain ecosystems were identified early as potentially very sensitive to global warming. Moreover, mountain systems may experience unprecedented rates of warming during the next century, two or three times higher than those recorded during the 20th century. In this context, species distribution models (SDM) have become important tools for rapid assessment of the impact of accelerated land use and climate change on the distribution of plant species. In my study, I developed and tested new predictor variables for SDM, specific to current and future geographic projections of plant species in a mountain system, using the Western Swiss Alps as a model region. Since meso- and micro-topography are relevant to explaining geographic patterns of plant species in mountain environments, I assessed the effect of scale on predictor variables and on the geographic projections of SDM. I also developed a methodological framework of space-for-time evaluation to test the robustness of SDM when projected into a future, changing climate. Finally, I used a cellular automaton to run dynamic simulations of plant migration under climate change in a mountain landscape, including realistic seed dispersal distances. Results of future projections for the 21st century were also discussed in the perspective of vegetation changes monitored during the 20th century. Overall, I showed in this study that, based on the most severe A1 climate change scenario and realistic simulations of plant dispersal, species extinctions in the Western Swiss Alps could affect nearly one third (28.5%) of the 284 species modeled by 2100. With the less severe B1 scenario, only 4.6% of species are predicted to become extinct. However, even under B1, 54% (153 species) may still lose more than 80% of their initial habitat area. Results of the monitoring of past vegetation changes suggested that plant species can react quickly to warmer conditions as long as competition is low. However, in subalpine grasslands, competition from species already present is probably important and limits the establishment of newly arriving species. Results from future simulations also showed that heavy extinctions of alpine plants may start as early as 2040, and at the latest by 2080. My study also highlighted the importance of fine-scale, regional assessments of climate change impact on mountain vegetation, using more direct predictor variables. Indeed, predictions at the continental scale may fail to capture local refugia or local extinctions, as well as the loss of connectivity between local populations. On the other hand, migrations of low-elevation species to higher altitudes may be difficult to predict at the local scale.
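A minimal sketch of one step of the kind of dispersal-limited cellular automaton described, assuming a binary suitability grid produced by an SDM projection and a dispersal reach of a few cells per time step; the grids, kernel, and parameters are hypothetical placeholders:

```python
# One step of a dispersal-limited cellular automaton: populations colonize
# suitable cells within seed-dispersal reach and die where the projected
# climate becomes unsuitable. All grids and parameters are invented.
import numpy as np
from scipy.ndimage import binary_dilation

rng = np.random.default_rng(2)
suitable = rng.random((100, 100)) > 0.4   # SDM projection for one time step
occupied = np.zeros((100, 100), dtype=bool)
occupied[50, 50] = True                   # initial population

def step(occupied, suitable, max_cells=2):
    """Spread to cells reachable by dispersal, then prune unsuitable cells."""
    reach = binary_dilation(occupied, iterations=max_cells)
    return reach & suitable

for year in range(10):
    occupied = step(occupied, suitable)
print(occupied.sum(), "cells occupied")
```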

Abstract:

INTRODUCTION: In this study we evaluated the validity of garment-based quadriceps stimulation (GQS) for assessment of muscle inactivation in comparison with femoral nerve stimulation (FNS). METHODS: Inactivation estimates (superimposed doublet torque), self-reported discomfort, and twitch and doublet contractile properties were compared between GQS and FNS in 15 healthy subjects. RESULTS: Superimposed doublet torque was significantly lower for GQS than for FNS at 20% and 40% maximum voluntary contraction (MVC) (P < 0.01), but not at 60%, 80%, and 100% MVC. Discomfort scores were systematically lower for GQS than for FNS (P < 0.05). Resting twitch and doublet peak torque were lower for GQS, and time to peak torque was shorter for GQS than for FNS (P < 0.01). CONCLUSIONS: GQS can be used with confidence for straightforward evaluation of quadriceps muscle inactivation, whereas its validity for assessment of contractile properties remains to be determined. Muscle Nerve 51: 117-124, 2015.
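The inactivation estimate reported above (superimposed doublet torque) is conventionally converted into a voluntary activation percentage by twitch/doublet interpolation. The abstract does not spell out its formula, so the expression below is the common convention rather than the paper's stated method:

\[
\mathrm{VA}\,(\%) = \left(1 - \frac{T_{\mathrm{superimposed\ doublet}}}{T_{\mathrm{resting\ doublet}}}\right) \times 100,
\qquad
\mathrm{inactivation}\,(\%) = 100 - \mathrm{VA}.
\]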

Abstract:

OBJECTIVE: Our aim was to evaluate a fluorescence-based enhanced-reality system to assess intestinal viability in a laparoscopic mesenteric ischemia model. MATERIALS AND METHODS: A small bowel loop was exposed, and 3 to 4 mesenteric vessels were clipped in 6 pigs. Indocyanine green (ICG) was administered intravenously 15 minutes later. The bowel was illuminated with an incoherent light source laparoscope (D-light-P, Karl Storz). The ICG fluorescence signal was analyzed with ad hoc imaging software (VR-RENDER), which provides a digital perfusion cartography that was superimposed onto the intraoperative laparoscopic image [augmented reality (AR) synthesis]. Five regions of interest (ROIs) were marked under AR guidance (1, 2a-2b, and 3a-3b, corresponding to the ischemic, marginal, and vascularized zones, respectively). One hour later, capillary blood samples were obtained by puncturing the bowel serosa at the identified ROIs, and lactates were measured using the EDGE analyzer. A surgical biopsy of each intestinal ROI was sent for mitochondrial respiratory rate assessment and for metabolite quantification. RESULTS: Mean capillary lactate levels were 3.98 (SD = 1.91) versus 1.05 (SD = 0.46) versus 0.74 (SD = 0.34) mmol/L at ROI 1 versus 2a-2b (P = 0.0001) versus 3a-3b (P = 0.0001), respectively. Mean maximal mitochondrial respiratory rate was 104.4 (±21.58) pmol O2/second/mg at ROI 1 versus 191.1 ± 14.48 (2b, P = 0.03) versus 180.4 ± 16.71 (3a, P = 0.02) versus 199.2 ± 25.21 (3b, P = 0.02). Alanine, choline, ethanolamine, glucose, lactate, myo-inositol, phosphocholine, scyllo-inositol, and valine showed statistically significantly different concentrations between ischemic and nonischemic segments. CONCLUSIONS: Fluorescence-based AR may effectively detect the boundary between the ischemic and the vascularized zones in this experimental model.
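The enhanced-reality step itself (fluorescence signal → perfusion cartography → overlay) can be sketched generically. This is not the VR-RENDER implementation; the array names, normalization, and blending rule are assumptions:

```python
# Generic sketch of fluorescence-based enhanced reality: normalize an ICG
# fluorescence frame into a perfusion map and alpha-blend it onto the
# laparoscopic image. All inputs are synthetic placeholders.
import numpy as np

def overlay_perfusion(laparoscopic_rgb, fluorescence, alpha=0.4):
    f = fluorescence.astype(float)
    f = (f - f.min()) / (np.ptp(f) + 1e-9)   # normalized perfusion cartography
    # Simple color mapping: red where well perfused, blue where ischemic.
    heat = np.stack([f, np.zeros_like(f), 1.0 - f], axis=-1)
    return (1 - alpha) * laparoscopic_rgb / 255.0 + alpha * heat

frame = np.random.randint(0, 255, (480, 640, 3))   # laparoscopic image
icg = np.random.rand(480, 640)                     # fluorescence signal
augmented = overlay_perfusion(frame, icg)          # AR synthesis, values in [0, 1]
```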

Abstract:

BACKGROUND AND OBJECTIVES: Evaluation of glomerular hyperfiltration (GH) is difficult; the variety of reported definitions impedes comparisons between studies. A clear and universal definition of GH would help in comparing the results of trials aimed at reducing GH. This study assessed how GH is measured and defined in the literature. DESIGN, SETTING, PARTICIPANTS, & MEASUREMENTS: Three databases (Embase, MEDLINE, CINAHL) were systematically searched using the terms "hyperfiltration" or "glomerular hyperfiltration". All studies reporting a GH threshold or studying the effect of a high GFR in a continuous manner against another outcome of interest were included. RESULTS: The literature search was performed from November 2012 to February 2013 and updated in August 2014. Of the 2,013 retrieved studies, 405 were included. A threshold to define GH was reported in 55.6% of studies. Of these, 88.4% used a single threshold and 11.6% used several thresholds adapted to participant sex or age. In 29.8% of the studies, the choice of a GH threshold was not based on a control group or literature references. After 2004, the use of GH thresholds increased (P<0.001), but the use of a control group to precisely define the GH threshold decreased significantly (P<0.001); the threshold did not differ among pediatric, adult, or mixed-age studies. The GH threshold ranged from 90.7 to 175 ml/min per 1.73 m^2 (median, 135 ml/min per 1.73 m^2). CONCLUSION: Thirty percent of studies did not justify the choice of threshold values. The decline of GFR in the elderly was rarely considered in defining GH. From a methodologic point of view, an age- and sex-matched control group should be used to define a GH threshold.
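The recommended control-group approach can be illustrated with a toy calculation; the mean + 1.96 SD rule and all numbers below are assumptions for illustration, not values endorsed by the review:

```python
# Toy illustration of deriving a GH threshold from an age- and sex-matched
# control group, as the review recommends. The mean + 1.96*SD rule and the
# simulated GFR values are assumptions, not data from the paper.
import numpy as np

rng = np.random.default_rng(3)
controls_gfr = rng.normal(105, 15, size=200)   # matched controls, ml/min per 1.73 m^2
threshold = controls_gfr.mean() + 1.96 * controls_gfr.std(ddof=1)

def is_hyperfiltrating(gfr: float) -> bool:
    return gfr > threshold

print(round(threshold, 1))
```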

Abstract:

Health assessment and medical surveillance of workers exposed to combustion nanoparticles are challenging. The aim was to evaluate the feasibility of using exhaled breath condensate (EBC) from healthy volunteers for (1) assessing the lung deposited dose of combustion nanoparticles and (2) determining the resulting oxidative stress by measuring hydrogen peroxide (H2O2) and malondialdehyde (MDA). Methods: Fifteen healthy nonsmoker volunteers were exposed to three different levels of sidestream cigarette smoke under controlled conditions. EBC was repeatedly collected before, during, and 1 and 2 hr after exposure. Exposure variables were measured by direct-reading instruments and by active sampling. The EBC samples were analyzed for particle number concentration (light-scattering-based method) and for selected compounds considered oxidative stress markers. Results: Subjects were exposed to an average airborne concentration of up to 4.3×10^5 particles/cm^3 (average geometric size ∼60-80 nm). Up to 10×10^8 particles/mL could be measured in the collected EBC, with a broad size distribution (50th percentile ∼160 nm), but these biological concentrations were not related to the exposure level of cigarette smoke particles. Although H2O2 and MDA concentrations in EBC increased during exposure, only H2O2 showed a transient normalization 1 hr after exposure and increased afterward. In contrast, MDA levels stayed elevated during the 2 hr post exposure. Conclusions: The use of diffusion light scattering for particle counting proved to be sufficiently sensitive to detect objects in EBC, but lacked specificity for carbonaceous tobacco smoke particles. Our results suggest two phases of oxidation markers in EBC: first, the initial deposition of particles and gases in the lung lining liquid, and later the onset of oxidative stress with associated cell membrane damage. Future studies should extend the follow-up time and should remove gases or particles from the air to allow differentiation between the different sources of H2O2 and MDA.

Abstract:

We present a new indicator taxa approach to the prediction of climate change effects on biodiversity at the national level in Switzerland. As indicators, we select a set of the most widely distributed species that account for 95% of geographical variation in sampled species richness of birds, butterflies, and vascular plants. Species data come from a national program designed to monitor spatial and temporal trends in species richness. We examine some opportunities and limitations in using these data. We develop ecological niche models for the species as functions of both climate and land cover variables. We project these models to the future using climate predictions that correspond to two IPCC 3rd assessment scenarios for the development of 'greenhouse' gas emissions. We find that models that are calibrated with Swiss national monitoring data perform well in 10-fold cross-validation, but can fail to capture the hot-dry end of environmental gradients that constrain some species distributions. Models for indicator species in all three higher taxa predict that climate change will result in turnover in species composition even where there is little net change in predicted species richness. Indicator species from high elevations lose most areas of suitable climate even under the relatively mild B2 scenario. We project some areas to increase in the number of species for which climate conditions are suitable early in the current century, but these areas become less suitable for a majority of species by the end of the century. Selection of indicator species based on rank prevalence results in a set of models that predict observed species richness better than a similar set of species selected based on high rank of model AUC values. An indicator species approach based on selected species that are relatively common may facilitate the use of national monitoring data for predicting climate change effects on the distribution of biodiversity.
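A minimal sketch of the modeling loop described (fit a niche model per indicator species on climate and land-cover predictors, score it by 10-fold cross-validated AUC, project it to a scenario grid), using logistic regression as a stand-in for the paper's actual modeling technique; all data and predictors are synthetic:

```python
# Sketch of an ecological niche model workflow: fit on current conditions,
# evaluate by 10-fold cross-validated AUC, project to a future-climate grid.
# Predictors and the response are simulated placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X_now = rng.normal(size=(500, 4))   # e.g. temperature, precipitation, 2 land-cover vars
presence = (X_now[:, 0] + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression(max_iter=1000)
auc = cross_val_score(model, X_now, presence, cv=10, scoring="roc_auc").mean()

model.fit(X_now, presence)
X_future = X_now + np.array([2.0, -0.5, 0.0, 0.0])   # warmer, drier scenario grid
p_future = model.predict_proba(X_future)[:, 1]       # projected suitability
print(auc, p_future.mean())
```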

Abstract:

Assessment of image quality for digital x-ray mammography systems used in European screening programs relies mainly on contrast-detail CDMAM phantom scoring and requires the acquisition and analysis of many images in order to reduce variability in threshold detectability. Part II of this study proposes an alternative method based on the detectability index (d') calculated for a non-prewhitened model observer with an eye filter (NPWE). The detectability index was calculated from the normalized noise power spectrum and image contrast, both measured from an image of a 5 cm poly(methyl methacrylate) phantom containing a 0.2 mm thick aluminium square, and the pre-sampling modulation transfer function. This was performed as a function of air kerma at the detector for 11 different digital mammography systems. These calculated d' values were compared against threshold gold thickness (T) results measured with the CDMAM test object and against derived theoretical relationships. A simple relationship was found between T and d', as a function of detector air kerma; a linear relationship was found between d' and contrast-to-noise ratio. The values of threshold thickness used to specify acceptable performance in the European Guidelines for 0.10 and 0.25 mm diameter discs were equivalent to calculated threshold detectability indices of 1.05 and 6.30, respectively. The NPWE method is a validated alternative to CDMAM scoring for use in the image quality specification, quality control and optimization of digital x-ray systems for screening mammography.
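For reference, a standard formulation of the NPWE detectability index consistent with the quantities named above (task function, pre-sampling MTF, eye filter, normalized NPS) is given below; the paper's exact normalization and eye-filter parameters may differ:

\[
d'_{\mathrm{NPWE}} =
\frac{\displaystyle \iint |S(u,v)|^{2}\,\mathrm{MTF}^{2}(u,v)\,E^{2}(u,v)\,\mathrm{d}u\,\mathrm{d}v}
{\left[\displaystyle \iint |S(u,v)|^{2}\,\mathrm{MTF}^{2}(u,v)\,E^{4}(u,v)\,\mathrm{NNPS}(u,v)\,\mathrm{d}u\,\mathrm{d}v\right]^{1/2}}
\]

where S(u,v) is the Fourier task function of the 0.2 mm aluminium square (its contrast spectrum), MTF the pre-sampling modulation transfer function, E the eye filter, and NNPS the normalized noise power spectrum.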

Abstract:

EXECUTIVE SUMMARY: Evaluating the Information Security posture of an organization is becoming a very complex task. Currently, the evaluation and assessment of Information Security are commonly performed using frameworks, methodologies and standards which often consider the various aspects of security independently. Unfortunately this is ineffective, because it does not take into consideration the necessity of having a global and systemic multidimensional approach to Information Security evaluation. At the same time, the overall security level is generally considered to be only as strong as its weakest link. This thesis proposes a model aiming to holistically assess all dimensions of security in order to minimize the likelihood that a given threat will exploit the weakest link. A formalized structure taking into account all security elements is presented; this is based on a methodological evaluation framework in which Information Security is evaluated from a global perspective. This dissertation is divided into three parts. Part One, Information Security Evaluation Issues, consists of four chapters. Chapter 1 is an introduction to the purpose of this research and the model that will be proposed. In this chapter we raise some questions with respect to "traditional evaluation methods" and identify the principal elements to be addressed. We then introduce the baseline attributes of our model and set out the expected results of evaluations according to our model. Chapter 2 focuses on the definition of Information Security to be used as a reference point for our evaluation model. The concepts inherent in a holistic, baseline Information Security program are defined. On this basis, the most common roots of trust in Information Security are identified. Chapter 3 analyses the difference and the relationship between the concepts of Information Risk Management and Security Management. Comparing these two concepts allows us to identify the most relevant elements to be included in our evaluation model, while clearly situating these two notions within a defined framework, which is of the utmost importance for the results obtained from the evaluation process. Chapter 4 sets out our evaluation model and the way it addresses issues relating to the evaluation of Information Security. In this chapter the underlying concepts of assurance and trust are discussed. Based on these two concepts, the structure of the model is developed to provide an assurance-related platform as well as three evaluation attributes: "assurance structure", "quality issues", and "requirements achievement". Issues relating to each of these evaluation attributes are analysed with reference to sources such as methodologies, standards and published research papers. The operation of the model is then discussed: assurance levels, quality levels and maturity levels are defined in order to perform the evaluation according to the model. Part Two, Implementation of the Information Security Assurance Assessment Model (ISAAM) according to the Information Security Domains, also consists of four chapters. This is the section where our evaluation model is put into a well-defined context with respect to the four pre-defined Information Security dimensions: the Organizational, Functional, Human, and Legal dimensions. Each Information Security dimension is discussed in a separate chapter.
For each dimension, a two-phase evaluation path is followed. The first phase concerns the identification of the elements which will constitute the basis of the evaluation:
- identification of the key elements within the dimension;
- identification of the Focus Areas for each dimension, consisting of the security issues identified for that dimension;
- identification of the Specific Factors for each dimension, consisting of the security measures or controls addressing those security issues.
The second phase concerns the evaluation of each Information Security dimension by:
- implementing the evaluation model, based on the elements identified for each dimension in the first phase, and identifying the security tasks, processes, procedures, and actions that should have been performed by the organization to reach the desired level of protection;
- proposing a maturity model for each dimension as a basis for reliance on security; for each dimension we propose a generic maturity model that any organization can use to define its own security requirements.
Part Three of this dissertation contains the final remarks, supporting resources and annexes. With reference to the objectives of our thesis, the final remarks briefly analyse whether these objectives were achieved and suggest directions for future related research. The supporting resources comprise the bibliographic sources that were used to elaborate and justify our approach. The annexes include the relevant topics identified in the literature to illustrate certain aspects of our approach. Our Information Security evaluation model is based on, and integrates, different Information Security best practices, standards, methodologies and research expertise, which can be combined to define a reliable categorization of Information Security. After the definition of terms and requirements, an evaluation process should be performed to obtain evidence that Information Security within the organization in question is adequately managed. We have specifically integrated into our model the most useful elements of these sources of information in order to provide a generic model that can be implemented in all kinds of organizations. The value added by our evaluation model is that it is easy to implement and operate, and that it answers concrete needs in terms of reliance upon an efficient and dynamic evaluation tool through a coherent evaluation system. On that basis, our model could be implemented internally within organizations, allowing them to better govern their Information Security.
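As an illustration of how the model's elements nest, here is a hedged sketch of a scoring data structure: dimensions contain Focus Areas, Focus Areas contain Specific Factors, and aggregation with min() mirrors the weakest-link principle stated above. The maturity scale and the example scores are invented, not taken from ISAAM:

```python
# Illustrative nesting of ISAAM-style evaluation elements. The 0-5 maturity
# scale and all scores are invented; min() aggregation reflects the
# weakest-link view of global security described in the abstract.
from dataclasses import dataclass, field

@dataclass
class FocusArea:
    name: str
    specific_factors: dict[str, int] = field(default_factory=dict)  # factor -> maturity 0-5
    def maturity(self) -> int:
        return min(self.specific_factors.values())

@dataclass
class Dimension:
    name: str
    focus_areas: list[FocusArea] = field(default_factory=list)
    def maturity(self) -> int:
        return min(fa.maturity() for fa in self.focus_areas)

org = [
    Dimension("Organizational", [FocusArea("Governance", {"policy": 3, "roles": 2})]),
    Dimension("Human", [FocusArea("Awareness", {"training": 4, "phishing drills": 3})]),
]
global_maturity = min(d.maturity() for d in org)   # weakest link across dimensions
print(global_maturity)
```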

Abstract:

OBJECTIVE: To evaluate the public health impact of statin prescribing strategies based on the Justification for the Use of Statins in Primary Prevention: An Intervention Trial Evaluating Rosuvastatin (JUPITER) study. METHODS: We studied 2,268 adults aged 35-75 without cardiovascular disease in a population-based study in Switzerland in 2003-2006. We assessed eligibility for statins according to the Adult Treatment Panel III (ATPIII) guidelines, and by adding "strict" (hs-CRP ≥2.0 mg/L and LDL-cholesterol <3.4 mmol/L) and "extended" (hs-CRP ≥2.0 mg/L alone) JUPITER-like criteria. We estimated the proportion of CHD deaths potentially prevented over 10 years in the Swiss population. RESULTS: Fifteen percent were already taking statins, 42% were eligible by ATPIII guidelines, 53% by adding "strict" criteria, and 62% by adding "extended" criteria, with a total of 19% newly eligible. The number needed to treat with statins to avoid one CHD death over 10 years was 38 for ATPIII, 84 for "strict" and 92 for "extended" JUPITER-like criteria. ATPIII would prevent 17% of CHD deaths, compared with 20% for ATPIII + "strict" and 23% for ATPIII + "extended" criteria (an absolute gain of 6 percentage points). CONCLUSION: Implementing JUPITER-like strategies would make statin prescribing for primary prevention more common and less efficient than it is with current guidelines.
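A hedged sketch of the two JUPITER-like eligibility extensions described above; the thresholds come from the abstract, while the ATPIII decision itself is left as an input because the full guideline logic (risk strata plus LDL targets) is not reproduced here:

```python
# Eligibility logic compared in the study, sketched from the abstract's
# thresholds. The atp3 flag is an input: the real ATP III rule is more
# involved and is not reproduced here.
def statin_eligible(atp3: bool, hscrp: float, ldl: float) -> dict[str, bool]:
    strict = atp3 or (hscrp >= 2.0 and ldl < 3.4)   # "strict" JUPITER-like criterion
    extended = atp3 or hscrp >= 2.0                  # "extended" JUPITER-like criterion
    return {"ATPIII": atp3, "ATPIII+strict": strict, "ATPIII+extended": extended}

# The reported numbers needed to treat are reciprocals of absolute risk
# reductions: NNT = 38 corresponds to about 1/38 ~ 2.6 CHD deaths avoided
# per 100 people treated for 10 years.
print(statin_eligible(atp3=False, hscrp=2.4, ldl=3.0))
```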

Abstract:

OBJECTIVES: In this study, we investigated the structural plasticity of the contralesional motor network in ischemic stroke patients using diffusion magnetic resonance imaging (MRI) and explored a model that combines a MRI-based metric of contralesional network integrity and clinical data to predict functional outcome at 6 months after stroke. METHODS: MRI and clinical examinations were performed in 12 patients in the acute phase, at 1 and 6 months after stroke. Twelve age- and gender-matched controls underwent 2 MRIs 1 month apart. Structural remodeling after stroke was assessed using diffusion MRI with an automated measurement of generalized fractional anisotropy (GFA), which was calculated along connections between contralesional cortical motor areas. The predictive model of poststroke functional outcome was computed using a linear regression of acute GFA measures and the clinical assessment. RESULTS: GFA changes in the contralesional motor tracts were found in all patients and differed significantly from controls (0.001 ≤ p < 0.05). GFA changes in intrahemispheric and interhemispheric motor tracts correlated with age (p ≤ 0.01); those in intrahemispheric motor tracts correlated strongly with clinical scores and stroke sizes (p ≤ 0.001). GFA measured in the acute phase together with a routine motor score and age were a strong predictor of motor outcome at 6 months (r^2 = 0.96, p = 0.0002). CONCLUSION: These findings represent a proof of principle that contralesional diffusion MRI measures may provide reliable information for personalized rehabilitation planning after ischemic motor stroke. Neurology® 2012;79:39-46.
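A minimal sketch of the predictive model described: an ordinary least-squares fit of 6-month motor outcome on acute-phase GFA, a routine motor score, and age. With only 12 patients and 3 predictors such a fit is fragile, which is worth keeping in mind when reading the reported r^2; all values below are placeholders, not study data.

```python
# Sketch of a linear predictive model of 6-month motor outcome from
# acute-phase GFA, a routine motor score, and age. Simulated data only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
n = 12
X = np.column_stack([
    rng.normal(0.25, 0.05, n),   # contralesional GFA in the acute phase
    rng.integers(0, 100, n),     # routine motor score
    rng.integers(45, 85, n),     # age, years
])
outcome_6m = X @ np.array([50.0, 0.5, -0.3]) + rng.normal(0, 2, n)

fit = LinearRegression().fit(X, outcome_6m)
print(fit.score(X, outcome_6m))  # in-sample r^2, analogous to the reported 0.96
```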