170 results for explicit categorization
Abstract:
The 2009 International Society of Urological Pathology Consensus Conference in Boston made recommendations regarding the standardization of pathology reporting of radical prostatectomy specimens. Issues relating to the infiltration of tumor into the seminal vesicles and regional lymph nodes were coordinated by working group 4. There was a consensus that complete blocking of the seminal vesicles was not necessary, although sampling of the junction of the seminal vesicles and prostate was mandatory. There was consensus that sampling of the vas deferens margins was not obligatory. There was also consensus that only invasion of the muscular wall of the extraprostatic seminal vesicle should be regarded as seminal vesicle invasion. It was agreed by consensus that categorization into types of seminal vesicle spread was unnecessary. For examination of lymph nodes, there was consensus that special techniques such as frozen sectioning were of use only in high-risk cases. There was no consensus on the optimal sampling method for pelvic lymph node dissection specimens, although there was consensus that, as a minimum, all lymph nodes should be completely blocked. There was also a consensus that a count of the number of lymph nodes harvested should be attempted. In view of recent evidence, there was consensus that the diameter of the largest lymph node metastasis should be measured. These consensus decisions should help clarify the difficult areas of pathological assessment in radical prostatectomy evaluation and improve the concordance of research series, allowing more accurate assessment of patient prognosis.
Abstract:
ABSTRACT: BACKGROUND: Decision curve analysis has been introduced as a method to evaluate prediction models in terms of their clinical consequences if used for a binary classification of subjects into a group who should and into a group who should not be treated. The key concept for this type of evaluation is the "net benefit", a concept borrowed from utility theory. METHODS: We recall the foundations of decision curve analysis and discuss some new aspects. First, we stress the formal distinction between the net benefit for the treated and for the untreated and define the concept of the "overall net benefit". Next, we revisit the important distinction between the concept of accuracy, as typically assessed using the Youden index and a receiver operating characteristic (ROC) analysis, and the concept of utility of a prediction model, as assessed using decision curve analysis. Finally, we provide an explicit implementation of decision curve analysis to be applied in the context of case-control studies. RESULTS: We show that the overall net benefit, which combines the net benefit for the treated and the untreated, is a natural alternative to the benefit achieved by a model, being invariant with respect to the coding of the outcome, and conveying a more comprehensive picture of the situation. Further, within the framework of decision curve analysis, we illustrate the important difference between the accuracy and the utility of a model, demonstrating how poor an accurate model may be in terms of its net benefit. Finally, we show that the application of decision curve analysis to case-control studies, where an accurate estimate of the true prevalence of a disease cannot be obtained from the data, is achieved with a few modifications to the original calculation procedure. CONCLUSIONS: We present several interrelated extensions to decision curve analysis that will both facilitate its interpretation and broaden its potential area of application.
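The net-benefit calculation that underpins decision curve analysis can be sketched numerically. A minimal illustration follows; the counts are invented, and combining the two quantities by simple addition is our reading of the "overall net benefit", not necessarily the paper's exact formulation:

```python
# Illustrative sketch (not the authors' code): net benefit of a binary
# prediction rule at a probability threshold p_t.

def net_benefit_treated(tp, fp, n, p_t):
    """Net benefit for the treated: TP/n - FP/n * p_t/(1 - p_t)."""
    return tp / n - fp / n * p_t / (1 - p_t)

def net_benefit_untreated(tn, fn, n, p_t):
    """Net benefit for the untreated: TN/n - FN/n * (1 - p_t)/p_t."""
    return tn / n - fn / n * (1 - p_t) / p_t

# Hypothetical example: 1000 subjects classified at threshold p_t = 0.2
tp, fp, tn, fn = 80, 120, 760, 40
n = tp + fp + tn + fn
p_t = 0.2
nb_t = net_benefit_treated(tp, fp, n, p_t)    # 0.08 - 0.12 * 0.25 = 0.05
nb_u = net_benefit_untreated(tn, fn, n, p_t)  # 0.76 - 0.04 * 4.0  = 0.60
overall = nb_t + nb_u  # additive combination, assumed for illustration
```

Sweeping `p_t` over a range of thresholds and plotting the net benefit against it yields the decision curve itself.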
Abstract:
We investigate the selective pressures on a social trait when evolution occurs in a population of constant size. We show that any social trait that is spiteful simultaneously qualifies as altruistic. In other words, any trait that reduces the fitness of less related individuals necessarily increases that of related ones. Our analysis demonstrates that the distinction between "Hamiltonian spite" and "Wilsonian spite" is not justified on the basis of fitness effects. We illustrate this general result with an explicit model for the evolution of a social act that reduces the recipient's survival ("harming trait"). This model shows that the evolution of harming is favoured if local demes are of small size and migration is low (philopatry). Further, deme size and migration rate determine whether harming evolves as a selfish strategy by increasing the fitness of the actor, or as a spiteful/altruistic strategy through its positive effect on the fitness of close kin.
Abstract:
Aim: To assess the geographical transferability of niche-based species distribution models fitted with two modelling techniques. Location: Two distinct geographical study areas in Switzerland and Austria, in the subalpine and alpine belts. Methods: Generalized linear and generalized additive models (GLM and GAM) with a binomial probability distribution and a logit link were fitted for 54 plant species, based on topoclimatic predictor variables. These models were then evaluated quantitatively and used for spatially explicit predictions within (internal evaluation and prediction) and between (external evaluation and prediction) the two regions. Comparisons of evaluations and spatial predictions between regions and models were conducted in order to test whether species and methods meet the criteria of full transferability. By full transferability, we mean that: (1) the internal evaluation of models fitted in regions A and B must be similar; (2) a model fitted in region A must retain at least a comparable external evaluation when projected into region B, and vice versa; and (3) internal and external spatial predictions have to match within both regions. Results: The measures of model fit are, on average, 24% higher for GAMs than for GLMs in both regions. However, the differences between internal and external evaluations (AUC coefficient) are also higher for GAMs than for GLMs (a difference of 30% for models fitted in Switzerland and 54% for models fitted in Austria). Transferability, as measured with the AUC evaluation, fails for 68% of the species in Switzerland and 55% in Austria for GLMs (respectively for 67% and 53% of the species for GAMs). For both GAMs and GLMs, the agreement between internal and external predictions is rather weak on average (Kulczynski's coefficient in the range 0.3-0.4), but varies widely among individual species.
The dominant pattern is an asymmetrical transferability between the two study regions (a mean decrease of 20% in the AUC coefficient when models are transferred from Switzerland and 13% when they are transferred from Austria). Main conclusions: The large inter-specific variability observed among the 54 study species underlines the need to consider more than a few species to test properly the transferability of species distribution models. The pronounced asymmetry in transferability between the two study regions may be due to peculiarities of these regions, such as differences in the ranges of environmental predictors or the varied impact of land-use history, or to species-specific reasons such as differential phenotypic plasticity, the existence of ecotypes or varied dependence on biotic interactions that are not properly incorporated into niche-based models. The lower variation between internal and external evaluation of GLMs compared with GAMs further suggests that overfitting may reduce transferability. Overall, limited geographical transferability calls for caution when projecting niche-based models to assess the fate of species in future environments.
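The internal-versus-external evaluation scheme described above can be sketched with a hand-rolled logistic GLM and a rank-based AUC. The two simulated "regions", with a weaker species-environment response in region B, are invented purely to illustrate why AUC can drop when a model is projected outside its fitting region:

```python
# Sketch, not the study's code: fit a logistic GLM in region A, then
# evaluate internally (region A) and externally (region B).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Plain gradient-ascent logistic regression (binomial GLM, logit link)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ w)
        w += lr * X.T @ (y - p) / len(y)
    return w

def auc(scores, y):
    """AUC as the Mann-Whitney rank statistic."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n1 = y.sum()
    n0 = len(y) - n1
    return (ranks[y == 1].sum() - n1 * (n1 + 1) / 2) / (n0 * n1)

# Region A: presence strongly tied to a topoclimatic predictor x
xA = rng.normal(size=500)
yA = (rng.random(500) < sigmoid(2.0 * xA)).astype(float)
# Region B: the same predictor discriminates presence only weakly
xB = rng.normal(size=500)
yB = (rng.random(500) < sigmoid(0.5 * xB)).astype(float)

XA = np.column_stack([np.ones(500), xA])
XB = np.column_stack([np.ones(500), xB])
w = fit_logistic(XA, yA)

auc_internal = auc(XA @ w, yA)  # evaluation within the fitting region
auc_external = auc(XB @ w, yB)  # projection into the other region
```

With these settings the external AUC falls well below the internal one, mirroring the asymmetric transferability the study reports.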
Abstract:
The cytotoxic T-cell and natural killer (NK)-cell lymphomas and related disorders are important but relatively rare lymphoid neoplasms that frequently are a challenge for practicing pathologists. This selective review, based on a meeting of the International Lymphoma Study Group, briefly reviews T-cell and NK-cell development and addresses questions related to the importance of precise cell lineage (αβ-type T cell, γδ T cell, or NK cell), the implications of Epstein-Barr virus infection, the significance of anatomic location including nodal disease, and the question of further categorization of enteropathy-associated T-cell lymphomas. Finally, developments subsequent to the 2008 World Health Organization Classification, including the recognition of indolent NK-cell and T-cell disorders of the gastrointestinal tract are presented.
Abstract:
Introduction: Biological therapy has dramatically changed management of Crohn's disease (CD). New data have confirmed the benefit and relative long-term safety of anti-TNF alpha inhibition as part of a regular scheduled administration programme. The EPACT appropriateness criteria for maintenance treatment after medically-induced remission (MIR) or surgically-induced remission (SIR) of CD thus required updating. Methods: A multidisciplinary international expert panel (EPACT II, Geneva, Switzerland) discussed and anonymously rated detailed, explicit clinical indications based on evidence in the literature and personal expertise. Median ratings (on a 9-point scale) were stratified into three assessment categories: appropriate (7-9), uncertain (4-6 and/or disagreement) and inappropriate (1-3). Experts ranked appropriate medication according to their own clinical practice, without any consideration of cost. Results: Three hundred and ninety-two specific indications for maintenance treatment of CD were rated (200 for MIR and 192 for SIR). Azathioprine, methotrexate and/or anti-TNF alpha antibodies were considered appropriate in 42 indications, corresponding to 68% of all appropriate interventions (97% of MIR and 39% of SIR). The remaining appropriate interventions consisted of mesalazine and a "wait-and-see" strategy. Factors that influenced the panel's voting were patient characteristics and outcome of previous treatment. Results favour use of anti-TNF alpha agents after failure of any immunosuppressive therapy, while earlier primary use remains controversial. Conclusion: Detailed explicit appropriateness criteria (EPACT) have been updated for maintenance treatment of CD. New expert recommendations for use of the classic immunosuppressors as well as anti-TNF alpha agents are now freely available online (www.epact.ch). The validity of these criteria should now be tested by prospective evaluation. (C) 2009 European Crohn's and Colitis Organisation.
Published by Elsevier B.V. All rights reserved.
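The three-category stratification of the panel's median ratings can be expressed as a small helper function. Treating "disagreement" as a plain boolean flag is our simplification; the panel's formal definition of disagreement is not given in the abstract:

```python
# Sketch of the EPACT rating stratification described above.

def epact_category(median_rating, disagreement=False):
    """Map a panel's median rating on the 9-point scale to an
    appropriateness category: appropriate (7-9), uncertain (4-6
    and/or disagreement), inappropriate (1-3)."""
    if disagreement:
        return "uncertain"
    if 7 <= median_rating <= 9:
        return "appropriate"
    if 4 <= median_rating <= 6:
        return "uncertain"
    return "inappropriate"  # ratings 1-3

# e.g. a median of 8 with panel agreement is "appropriate",
# but the same median with disagreement is "uncertain".
```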
Abstract:
BACKGROUND: In Switzerland, 30% of HIV-infected individuals are diagnosed late. To optimize HIV testing, the Swiss Federal Office of Public Health (FOPH) updated 'Provider Induced Counseling and Testing' (PICT) recommendations in 2010. These permit doctors to test patients if HIV infection is suspected, without explicit consent or pre-test counseling; patients should nonetheless be informed that testing will be performed. We examined awareness of these updated recommendations among emergency department (ED) doctors. METHODS: We conducted a questionnaire-based survey among 167 ED doctors at five teaching hospitals in French-speaking Switzerland between 1st May and 31st July 2011. For 25 clinical scenarios, participants had to state whether HIV testing was indicated or whether patient consent or pre-test counseling was required. We asked how many HIV tests participants had requested in the previous month, and whether they were aware of the FOPH testing recommendations. RESULTS: 144/167 doctors (88%) returned the questionnaire. Median postgraduate experience was 6.5 years (interquartile range [IQR] 3-12). The mean percentage of correct answers was 59 ± 11%, with senior doctors scoring higher (P=0.001). The lowest-scoring questions pertained to acute HIV infection and to scenarios where patient consent was not required. The median number of test requests was 1 (IQR 0-2, range 0-10). Only 26/144 (18%) of participants were aware of the updated FOPH recommendations. Those aware had higher scores (P=0.001) but did not perform more HIV tests. CONCLUSIONS: Swiss ED doctors are not aware of the national HIV testing recommendations and rarely perform HIV tests. Improved dissemination of and adherence to the recommendations are required if ED doctors are to contribute to earlier HIV diagnosis.
Abstract:
INTRODUCTION: urinary incontinence (UI) is a phenomenon with high prevalence in hospitalized elderly patients, affecting up to 70% of patients requiring long-term care. However, despite the discomfort it causes and its association with functional decline, it seems to be given insufficient attention by nurses in geriatric care. OBJECTIVES: to assess the prevalence of urinary incontinence in geriatric patients at admission and the level of nurse involvement, as characterized by the explicit documentation of a UI diagnosis in the patient's record, the prescription of a nursing intervention, or nursing actions related to UI. METHODS: cross-sectional retrospective chart review. One hundred cases were randomly selected from patients 65 years or older admitted to the geriatric ward of a university hospital. The variables examined included: total and continence scores on the Measure of Functional Independence (MIF), socio-demographic variables, presence of a nursing diagnosis in the medical record, and prescription or documentation of a nursing intervention related to UI. RESULTS: the prevalence of urinary incontinence was 72%, and UI was positively correlated with a low MIF score, age and awaiting-placement status. Of the examined cases, a nursing diagnosis of UI was documented in only 1.4% of cases, nursing interventions were prescribed in 54% of cases, and at least one nursing intervention was performed in 72% of cases. The vast majority of the interventions were palliative. DISCUSSION: the results on the prevalence of UI are similar to those reported in several other studies. This is also the case in relation to nursing interventions. In this study, people with UI were given the same care regardless of their MIF score, age or gender. One limitation of this study is that it is retrospective and therefore dependent on the quality of the nursing documentation. CONCLUSIONS: this study is novel because it examines UI in relation to nursing interventions.
It demonstrates that despite a high prevalence of UI, the general level of concern among nurses remains relatively low. Individualized care is desirable, and clinical innovations must be developed for primary and secondary prevention of UI during hospitalization.
Abstract:
Screening people without symptoms of disease is an attractive idea. Screening allows early detection of disease or elevated risk of disease, and has the potential for improved treatment and reduction of mortality. The list of future screening opportunities is set to grow because of the refinement of screening techniques, the increasing frequency of degenerative and chronic diseases, and the steadily growing body of evidence on genetic predispositions for various diseases. But how should we decide on the diseases for which screening should be done and on recommendations for how it should be implemented? We use the examples of prostate cancer and genetic screening to show the importance of considering screening as an ongoing population-based intervention with beneficial and harmful effects, and not simply the use of a test. Assessing whether screening should be recommended and implemented for any named disease is therefore a multi-dimensional task in health technology assessment. There are several countries that already use established processes and criteria to assess the appropriateness of screening. We argue that the Swiss healthcare system needs a nationwide screening commission mandated to conduct appropriate evidence-based evaluation of the impact of proposed screening interventions, to issue evidence-based recommendations, and to monitor the performance of screening programmes introduced. Without explicit processes there is a danger that beneficial screening programmes could be neglected and that ineffective, and potentially harmful, screening procedures could be introduced.
Abstract:
Do our brains implicitly track the energetic content of the foods we see? Using electrical neuroimaging of visual evoked potentials (VEPs) we show that the human brain can rapidly discern food's energetic value, vis-à-vis its fat content, solely from its visual presentation. Responses to images of high-energy and low-energy food differed over two distinct time periods. The first period, starting at approximately 165 ms post-stimulus onset, followed from modulations in VEP topography and, by extension, in the configuration of the underlying brain network. Statistical comparison of source estimations identified differences distributed across a wide network including both posterior occipital regions and temporo-parietal cortices typically associated with object processing, and also inferior frontal cortices typically associated with decision-making. During a successive processing stage (starting at approximately 300 ms), responses differed both topographically and in terms of strength, with source estimations differing predominantly within prefrontal cortical regions implicated in reward assessment and decision-making. These effects occur orthogonally to the task actually being performed and suggest that reward properties such as a food's energetic content are processed rapidly and in parallel by a distributed network of brain regions involved in object categorization, reward assessment, and decision-making.
Abstract:
This paper discusses social representations in scientific communications and private representations linked to the individual imagination. Social representations, in a limited sense, are useful for the development of preventive messages, but of little benefit to clinical work. We highlight some non-explicit aspects of scientific discourse that impact on treatment: projected beliefs and values. We address the relationship between the concepts of representation, imagination, identity and temporality in the individual approach to the cancer patient.
Abstract:
Risk theory has been a very active research area over the last decades. The main objectives of the theory are to find adequate stochastic processes which can model the surplus of a (non-life) insurance company and to analyze risk-related quantities such as the ruin time, the ruin probability, the expected discounted penalty function and expected discounted dividend/tax payments. The study of these ruin-related quantities provides crucial information for actuaries and decision makers. This thesis consists of the study of four different insurance risk models which are essentially related. The ruin and related quantities are investigated by using different techniques, resulting in explicit or asymptotic expressions for the ruin time, the ruin probability, the expected discounted penalty function and the expected discounted tax payments.
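For the classical compound-Poisson (Cramér-Lundberg) model with exponential claims, one of the explicit expressions alluded to above is well known: the infinite-horizon ruin probability is ψ(u) = e^{-θu/((1+θ)μ)}/(1+θ), where μ is the mean claim size and θ the safety loading. A Monte Carlo sketch (parameter values arbitrary, not taken from the thesis) can check it:

```python
# Monte Carlo estimate of the ruin probability in the classical
# Cramer-Lundberg model, compared with the exponential-claims formula.
import math
import random

random.seed(1)

lam, mu, theta = 1.0, 1.0, 0.2       # claim rate, mean claim size, loading
c = (1 + theta) * lam * mu           # premium rate (positive safety loading)
u0, horizon, n_paths = 5.0, 200.0, 5000

def ruined(u, t_max):
    """Simulate one surplus path, checking for ruin at claim instants."""
    t = 0.0
    while True:
        dt = random.expovariate(lam)     # inter-arrival time of next claim
        t += dt
        if t > t_max:
            return False                 # survived the horizon
        u += c * dt                      # premiums collected since last claim
        u -= random.expovariate(1 / mu)  # exponential claim of mean mu
        if u < 0:
            return True

psi_mc = sum(ruined(u0, horizon) for _ in range(n_paths)) / n_paths
psi_exact = math.exp(-theta * u0 / (mu * (1 + theta))) / (1 + theta)
# psi_mc should be close to psi_exact (~0.36); the finite horizon
# slightly underestimates the infinite-horizon probability.
```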
Abstract:
EXECUTIVE SUMMARY: Evaluating Information Security posture within an organization is becoming a very complex task. Currently, the evaluation and assessment of Information Security are commonly performed using frameworks, methodologies and standards which often consider the various aspects of security independently. Unfortunately this is ineffective because it does not take into consideration the necessity of having a global and systemic multidimensional approach to Information Security evaluation. At the same time, the overall security level is generally considered to be only as strong as its weakest link. This thesis proposes a model aiming to holistically assess all dimensions of security in order to minimize the likelihood that a given threat will exploit the weakest link. A formalized structure taking into account all security elements is presented; this is based on a methodological evaluation framework in which Information Security is evaluated from a global perspective. This dissertation is divided into three parts. Part One: Information Security Evaluation Issues consists of four chapters. Chapter 1 is an introduction to the purpose of this research and the model that will be proposed. In this chapter we raise some questions with respect to "traditional evaluation methods" and identify the principal elements to be addressed in this direction. We then introduce the baseline attributes of our model and set out the expected result of evaluations according to our model. Chapter 2 is focused on the definition of Information Security to be used as a reference point for our evaluation model. The concepts inherent in the contents of a holistic and baseline Information Security Program are defined. Based on this, the most common roots of trust in Information Security are identified. Chapter 3 focuses on an analysis of the difference and the relationship between the concepts of Information Risk and Security Management.
Comparing these two concepts allows us to identify the most relevant elements to be included within our evaluation model, while clearly situating these two notions within a defined framework is of the utmost importance for the results that will be obtained from the evaluation process. Chapter 4 sets out our evaluation model and the way it addresses issues relating to the evaluation of Information Security. Within this chapter the underlying concepts of assurance and trust are discussed. Based on these two concepts, the structure of the model is developed in order to provide an assurance-related platform as well as three evaluation attributes: "assurance structure", "quality issues", and "requirements achievement". Issues relating to each of these evaluation attributes are analysed with reference to sources such as methodologies, standards and published research papers. Then the operation of the model is discussed. Assurance levels, quality levels and maturity levels are defined in order to perform the evaluation according to the model. Part Two: Implementation of the Information Security Assurance Assessment Model (ISAAM) according to the Information Security Domains consists of four chapters. This is the section where our evaluation model is put into a well-defined context with respect to the four pre-defined Information Security dimensions: the Organizational dimension, Functional dimension, Human dimension, and Legal dimension. Each Information Security dimension is discussed in a separate chapter. For each dimension, the following two-phase evaluation path is followed. The first phase concerns the identification of the elements which will constitute the basis of the evaluation: (1) identification of the key elements within the dimension; (2) identification of the Focus Areas for each dimension, consisting of the security issues identified for that dimension; and (3) identification of the Specific Factors for each Focus Area, consisting of the security measures or controls addressing those issues. The second phase concerns the evaluation of each Information Security dimension by: (1) implementing the evaluation model, based on the elements identified in the first phase, to identify the security tasks, processes, procedures, and actions that should have been performed by the organization to reach the desired level of protection; and (2) proposing a maturity model for each dimension as a basis for reliance on security. For each dimension we propose a generic maturity model that could be used by every organization in order to define its own security requirements. Part Three of this dissertation contains the Final Remarks, Supporting Resources and Annexes. With reference to the objectives of our thesis, the Final Remarks briefly analyse whether these objectives were achieved and suggest directions for future related research. Supporting resources comprise the bibliographic resources that were used to elaborate and justify our approach. Annexes include all the relevant topics identified within the literature to illustrate certain aspects of our approach. Our Information Security evaluation model is based on and integrates different Information Security best practices, standards, methodologies and research expertise, which can be combined in order to define a reliable categorization of Information Security. After the definition of terms and requirements, an evaluation process should be performed in order to obtain evidence that the Information Security within the organization in question is adequately managed. We have specifically integrated into our model the most useful elements of these sources of information in order to provide a generic model able to be implemented in all kinds of organizations.
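One way to picture the dimension → Focus Area → Specific Factor hierarchy, together with the weakest-link principle the thesis invokes, is as nested maps with a minimum-based aggregation. All names and maturity ratings below are invented placeholders, not content from the thesis:

```python
# Hypothetical sketch of the ISAAM hierarchy: dimensions contain focus
# areas, focus areas contain specific factors, each rated on a maturity
# level. Aggregation follows the weakest-link principle.

isaam = {
    "Organizational": {
        "governance": {"security policy defined": 3, "roles assigned": 2},
    },
    "Functional": {
        "access control": {"authentication controls": 4, "logging": 3},
    },
    "Human": {
        "awareness": {"training programme": 2},
    },
    "Legal": {
        "compliance": {"data-protection review": 3},
    },
}

def dimension_maturity(dimension):
    """A dimension is only as mature as its lowest-rated specific factor."""
    return min(level
               for focus_area in dimension.values()
               for level in focus_area.values())

# Overall security level: the weakest link across all four dimensions.
overall = min(dimension_maturity(d) for d in isaam.values())
```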
The value added by our evaluation model is that it is easy to implement and operate and answers concrete needs in terms of reliance upon an efficient and dynamic evaluation tool through a coherent evaluation system. On that basis, our model could be implemented internally within organizations, allowing them to govern better their Information Security. RÉSUMÉ : Contexte général de la thèse L'évaluation de la sécurité en général, et plus particulièrement, celle de la sécurité de l'information, est devenue pour les organisations non seulement une mission cruciale à réaliser, mais aussi de plus en plus complexe. A l'heure actuelle, cette évaluation se base principalement sur des méthodologies, des bonnes pratiques, des normes ou des standards qui appréhendent séparément les différents aspects qui composent la sécurité de l'information. Nous pensons que cette manière d'évaluer la sécurité est inefficiente, car elle ne tient pas compte de l'interaction des différentes dimensions et composantes de la sécurité entre elles, bien qu'il soit admis depuis longtemps que le niveau de sécurité globale d'une organisation est toujours celui du maillon le plus faible de la chaîne sécuritaire. Nous avons identifié le besoin d'une approche globale, intégrée, systémique et multidimensionnelle de l'évaluation de la sécurité de l'information. En effet, et c'est le point de départ de notre thèse, nous démontrons que seule une prise en compte globale de la sécurité permettra de répondre aux exigences de sécurité optimale ainsi qu'aux besoins de protection spécifiques d'une organisation. Ainsi, notre thèse propose un nouveau paradigme d'évaluation de la sécurité afin de satisfaire aux besoins d'efficacité et d'efficience d'une organisation donnée. 
Nous proposons alors un modèle qui vise à évaluer d'une manière holistique toutes les dimensions de la sécurité, afin de minimiser la probabilité qu'une menace potentielle puisse exploiter des vulnérabilités et engendrer des dommages directs ou indirects. Ce modèle se base sur une structure formalisée qui prend en compte tous les éléments d'un système ou programme de sécurité. Ainsi, nous proposons un cadre méthodologique d'évaluation qui considère la sécurité de l'information à partir d'une perspective globale. Structure de la thèse et thèmes abordés Notre document est structuré en trois parties. La première intitulée : « La problématique de l'évaluation de la sécurité de l'information » est composée de quatre chapitres. Le chapitre 1 introduit l'objet de la recherche ainsi que les concepts de base du modèle d'évaluation proposé. La maniéré traditionnelle de l'évaluation de la sécurité fait l'objet d'une analyse critique pour identifier les éléments principaux et invariants à prendre en compte dans notre approche holistique. Les éléments de base de notre modèle d'évaluation ainsi que son fonctionnement attendu sont ensuite présentés pour pouvoir tracer les résultats attendus de ce modèle. Le chapitre 2 se focalise sur la définition de la notion de Sécurité de l'Information. Il ne s'agit pas d'une redéfinition de la notion de la sécurité, mais d'une mise en perspectives des dimensions, critères, indicateurs à utiliser comme base de référence, afin de déterminer l'objet de l'évaluation qui sera utilisé tout au long de notre travail. Les concepts inhérents de ce qui constitue le caractère holistique de la sécurité ainsi que les éléments constitutifs d'un niveau de référence de sécurité sont définis en conséquence. Ceci permet d'identifier ceux que nous avons dénommés « les racines de confiance ». 
Le chapitre 3 présente et analyse la différence et les relations qui existent entre les processus de la Gestion des Risques et de la Gestion de la Sécurité, afin d'identifier les éléments constitutifs du cadre de protection à inclure dans notre modèle d'évaluation. Le chapitre 4 est consacré à la présentation de notre modèle d'évaluation Information Security Assurance Assessment Model (ISAAM) et la manière dont il répond aux exigences de l'évaluation telle que nous les avons préalablement présentées. Dans ce chapitre les concepts sous-jacents relatifs aux notions d'assurance et de confiance sont analysés. En se basant sur ces deux concepts, la structure du modèle d'évaluation est développée pour obtenir une plateforme qui offre un certain niveau de garantie en s'appuyant sur trois attributs d'évaluation, à savoir : « la structure de confiance », « la qualité du processus », et « la réalisation des exigences et des objectifs ». Les problématiques liées à chacun de ces attributs d'évaluation sont analysées en se basant sur l'état de l'art de la recherche et de la littérature, sur les différentes méthodes existantes ainsi que sur les normes et les standards les plus courants dans le domaine de la sécurité. Sur cette base, trois différents niveaux d'évaluation sont construits, à savoir : le niveau d'assurance, le niveau de qualité et le niveau de maturité qui constituent la base de l'évaluation de l'état global de la sécurité d'une organisation. La deuxième partie: « L'application du Modèle d'évaluation de l'assurance de la sécurité de l'information par domaine de sécurité » est elle aussi composée de quatre chapitres. Le modèle d'évaluation déjà construit et analysé est, dans cette partie, mis dans un contexte spécifique selon les quatre dimensions prédéfinies de sécurité qui sont: la dimension Organisationnelle, la dimension Fonctionnelle, la dimension Humaine, et la dimension Légale. 
Chacune de ces dimensions et son évaluation spécifique fait l'objet d'un chapitre distinct. Pour chacune des dimensions, une évaluation en deux phases est construite comme suit. La première phase concerne l'identification des éléments qui constituent la base de l'évaluation: ? Identification des éléments clés de l'évaluation ; ? Identification des « Focus Area » pour chaque dimension qui représentent les problématiques se trouvant dans la dimension ; ? Identification des « Specific Factors » pour chaque Focus Area qui représentent les mesures de sécurité et de contrôle qui contribuent à résoudre ou à diminuer les impacts des risques. La deuxième phase concerne l'évaluation de chaque dimension précédemment présentées. Elle est constituée d'une part, de l'implémentation du modèle général d'évaluation à la dimension concernée en : ? Se basant sur les éléments spécifiés lors de la première phase ; ? Identifiant les taches sécuritaires spécifiques, les processus, les procédures qui auraient dû être effectués pour atteindre le niveau de protection souhaité. D'autre part, l'évaluation de chaque dimension est complétée par la proposition d'un modèle de maturité spécifique à chaque dimension, qui est à considérer comme une base de référence pour le niveau global de sécurité. Pour chaque dimension nous proposons un modèle de maturité générique qui peut être utilisé par chaque organisation, afin de spécifier ses propres exigences en matière de sécurité. Cela constitue une innovation dans le domaine de l'évaluation, que nous justifions pour chaque dimension et dont nous mettons systématiquement en avant la plus value apportée. La troisième partie de notre document est relative à la validation globale de notre proposition et contient en guise de conclusion, une mise en perspective critique de notre travail et des remarques finales. Cette dernière partie est complétée par une bibliographie et des annexes. 
Our security assessment model integrates and builds on numerous sources of expertise, such as good practices, norms, standards, methods, and the expertise of scientific research in the field. Our constructive proposal addresses a genuine, still unsolved problem faced by all organizations, regardless of size and profile. It allows them to specify their particular requirements concerning the security level to be met and to instantiate an assessment process tailored to their needs, so that they can ensure that their information security is managed appropriately, thereby offering a certain level of confidence in the degree of protection provided. We have integrated into our model the best know-how, experience, and expertise currently available at the international level, with the aim of providing an assessment model that is simple, generic, and applicable to a large number of public or private organizations. The added value of our assessment model lies precisely in the fact that it is sufficiently generic and easy to implement while addressing the concrete needs of organizations. Our proposal thus constitutes a reliable, efficient, and dynamic assessment tool derived from a coherent assessment approach. As a result, our assessment system can be implemented in-house by the organization itself, without calling on additional resources, which also gives it the opportunity to better govern its information security.
Abstract:
Much of the analytical modeling of morphogen profiles is based on simplistic scenarios, where the source is abstracted to be point-like and fixed in time, and where only the steady-state solution of the morphogen gradient in one dimension is considered. Here we develop a general formalism that allows modeling diffusive gradient formation from an arbitrary source. This mathematical framework, based on the Green's function method, applies to various diffusion problems. In this paper, we illustrate our theory with the explicit example of Bicoid gradient establishment in Drosophila embryos. The gradient forms by protein translation from an mRNA distribution, followed by morphogen diffusion with linear degradation. We investigate quantitatively the influence of the spatial extension and time evolution of the source on the morphogen profile. For different biologically meaningful cases, we obtain explicit analytical expressions for both the steady-state and time-dependent 1D problems. We show that extended sources, whether of finite size or normally distributed, give rise to more realistic gradients than a single point source at the origin. Furthermore, the steady-state solutions are fully compatible with a decreasing exponential behavior of the profile. We also consider the case of a dynamic source (e.g. bicoid mRNA diffusion), for which a protein profile similar to those obtained from static sources can be achieved.
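The Green's-function approach described above can be sketched numerically for the 1D steady state. This is a minimal sketch, not the paper's exact formalism: it assumes diffusion with linear degradation, D C'' - k C + s(x) = 0, whose steady-state Green's function is G(x) = exp(-|x|/λ)/(2√(Dk)) with decay length λ = √(D/k), and handles an extended (Gaussian) source by convolution. The parameter values are invented for illustration.

```python
import numpy as np

D, k = 1.0, 4.0              # diffusion constant and degradation rate (illustrative)
lam = np.sqrt(D / k)         # decay length = 0.5

x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]

def greens(x):
    # steady-state Green's function of D*C'' - k*C = -delta(x)
    return np.exp(-np.abs(x) / lam) / (2 * np.sqrt(D * k))

# Extended source: a narrow normalized Gaussian centered at the origin
sigma = 0.2
s = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

# Steady-state profile as the convolution G * s
C = np.convolve(s, greens(x), mode="same") * dx

# Far from the source the profile decays exponentially with length lam,
# consistent with the decreasing-exponential behavior noted in the abstract:
i1, i2 = np.searchsorted(x, 3.0), np.searchsorted(x, 4.0)
slope = (np.log(C[i2]) - np.log(C[i1])) / (x[i2] - x[i1])
print(round(-1 / slope, 3))  # ≈ 0.5, recovering lam
```

The same convolution machinery extends to time-dependent or moving sources by using the time-dependent Green's function instead.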
Abstract:
Using a large prospective cohort of over 12,000 women, we determined 2 thresholds (high risk and low risk of hip fracture) to use in a 10-yr hip fracture probability model that we had previously described, a model combining the heel stiffness index measured by quantitative ultrasound (QUS) with a set of easily determined clinical risk factors (CRFs). The model identified a higher percentage of women with fractures as high risk than a previously reported risk score that combined QUS and CRFs. In addition, it categorized women in a way quite consistent with the categorization obtained using dual X-ray absorptiometry (DXA) and the World Health Organization (WHO) classification system; the 2 methods identified similar percentages of women with and without fractures in each of their 3 categories, but only partly identified the same women. Nevertheless, combining our composite probability model with DXA in a case-finding strategy will likely further improve the detection of women at high risk of fragility hip fracture. We conclude that the currently proposed model may be of some use as an alternative to the WHO classification criteria for osteoporosis, at least when access to DXA is limited.
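A two-threshold model like the one described partitions women into three groups, which is the basis of a case-finding strategy when DXA access is limited. The sketch below illustrates that triage logic only; the cut-off values and category labels are invented for illustration and are not the thresholds derived in the study.

```python
# Hypothetical 10-yr hip fracture probability cut-offs (NOT the study's values)
LOW_T, HIGH_T = 0.02, 0.10

def categorize(prob: float) -> str:
    """Triage on a QUS+CRF 10-yr fracture probability (illustrative rule)."""
    if prob >= HIGH_T:
        return "high risk: treat / refer"
    if prob < LOW_T:
        return "low risk: reassure"
    return "intermediate: refer for DXA"

print(categorize(0.15))  # high risk: treat / refer
print(categorize(0.05))  # intermediate: refer for DXA
print(categorize(0.01))  # low risk: reassure
```

Only the intermediate group is sent for DXA in such a strategy, which is how combining the probability model with DXA can improve detection while limiting scanner use.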