57 results for Industry relationship model
Abstract:
The construct of cognitive errors is clinically relevant for cognitive therapy of mood disorders. Beck's universality hypothesis postulates the relevance of negative cognitions in all subtypes of mood disorders, as well as of positive cognitions for manic states. This hypothesis has rarely been empirically addressed in patients presenting with bipolar affective disorder (BD). In-patients (n = 30) presenting with BD were interviewed, as were 30 participants of a matched control group. A valid and reliable observer-rated methodology for cognitive errors was applied to the session transcripts. Overall, patients made more cognitive errors than controls. When manic and depressive patients were compared, parts of the universality hypothesis were confirmed. Manic symptoms were related to both positive and negative cognitive errors. These results are discussed with regard to the main assumptions of the cognitive model for depression, adding an argument for extending it to the BD diagnostic group while taking into account its specificities in terms of cognitive errors. Clinical implications for cognitive therapy of BD are suggested.
Abstract:
PURPOSE: The objective of this experiment was to establish a continuous postmortem circulation in the vascular system of porcine lungs and to evaluate the pulmonary distribution of the perfusate. This research is performed within the broader scope of a project on revascularizing Thiel embalmed specimens, a technique that enables teaching anatomy, practicing surgical procedures and doing research under lifelike circumstances. METHODS: After cannulation of the pulmonary trunk and the left atrium, the vascular system was flushed with paraffinum perliquidum (PP) through a heart-lung machine. A continuous circulation was then established using red PP, during which perfusion parameters were measured. The distribution of contrast-containing PP in the pulmonary circulation was visualized on computed tomography. Finally, the amount of leak from the vascular system was calculated. RESULTS: Reperfusion of the vascular system was maintained for 37 min. The flow rate ranged between 80 and 130 ml/min throughout the experiment, with acceptable perfusion pressures (range: 37-78 mm Hg). Computed tomography imaging and 3D reconstruction revealed a diffuse vascular distribution of PP and a decreasing vascularization ratio in the cranial direction. A self-limiting leak (66.8% of the circulating volume) towards the tracheobronchial tree due to vessel rupture was also measured. CONCLUSIONS: PP enables circulation in an isolated porcine lung model with an acceptable pressure-flow relationship, resulting in excellent recruitment of the vascular system. Despite these promising results, rupture of vessel walls may cause leaks. Further exploration of the perfusion capacities of PP in other organs is necessary. Eventually, this could lead to the development of reperfused Thiel embalmed human bodies, which have several applications.
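The leak quantification described in this abstract reduces to simple volume bookkeeping. A minimal sketch, with hypothetical volumes (the abstract reports only the resulting fraction, 66.8%):

```python
def leak_fraction(circulating_volume_ml: float, recovered_volume_ml: float) -> float:
    """Fraction of the circulating perfusate lost from the vascular system."""
    leaked_ml = circulating_volume_ml - recovered_volume_ml
    return leaked_ml / circulating_volume_ml

# Hypothetical volumes: with 1000 ml circulating and 332 ml remaining in the
# vessels, the leak would correspond to the reported 66.8% figure.
print(f"{leak_fraction(1000.0, 332.0):.1%}")  # prints 66.8%
```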
Abstract:
Background: Elevated levels of γ-glutamyl transferase (GGT) have been associated with subsequent risk of elevated blood pressure (BP), hypertension and diabetes. However, the causality of these relationships has not been addressed. Mendelian randomization exploits the random allocation of alleles at the time of gamete formation. Such allocation is expected to be independent of any behavioural and environmental factors (known or unknown), allowing the analysis of largely unconfounded risk associations that are not due to reverse causation. Methods: We performed a cross-sectional analysis among 4361 participants in the population-based CoLaus study. Associations of sex-specific GGT quartiles with systolic BP, diastolic BP and insulin levels were assessed using multivariable linear regression analyses. The rs2017869 GGT1 variant, which explained 1.6% of the variance in GGT levels, was used as an instrument to perform a Mendelian randomization analysis. Results: Median age of the study population was 53 years. After age and sex adjustment, GGT quartiles were strongly associated with systolic and diastolic BP (all p for linear trend <0.0001). After multivariable adjustment, these relationships were significantly attenuated, but remained significant for systolic (β (95% CI) = 1.30 (0.32; 2.03), p = 0.007) and diastolic BP (β (95% CI) = 0.57 (0.02; 1.13), p = 0.04). Using Mendelian randomization, we observed no positive association of GGT with either systolic BP (β (95% CI) = -5.68 (-11.51; 0.16), p = 0.06) or diastolic BP (β (95% CI) = -2.24 (-5.98; 1.49), p = 0.24). The association of GGT with insulin was also attenuated after multivariable adjustment. Nevertheless, a strong linear trend persisted in the fully adjusted model (β (95% CI) = 0.07 (0.04; 0.09), p < 0.0001). Using Mendelian randomization, we observed a similar positive association of GGT with insulin (β (95% CI) = 0.19 (0.01; 0.37), p = 0.04). 
Conclusion: In this study, we found evidence for a direct causal relationship between GGT and insulin, suggesting that oxidative stress may be causally implicated in the pathogenesis of type 2 diabetes mellitus.
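The instrumental-variable logic behind Mendelian randomization can be illustrated with the single-instrument Wald ratio. The sketch below uses simulated data and invented coefficients (not the CoLaus estimates); it only shows why a genotype, being independent of confounders, yields an unconfounded causal estimate where ordinary regression does not:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4361  # same sample size as the CoLaus analysis; the data are simulated

# G: allele count at a variant, U: unmeasured confounder,
# X: exposure (e.g. GGT), Y: outcome (e.g. insulin). Coefficients invented.
g = rng.binomial(2, 0.3, n).astype(float)
u = rng.normal(size=n)
x = 0.4 * g + u + rng.normal(size=n)   # the instrument shifts the exposure
y = 0.2 * x + u + rng.normal(size=n)   # true causal effect of X on Y = 0.2

# Naive OLS slope of Y on X is biased upward by the shared confounder U.
beta_ols = np.cov(x, y)[0, 1] / np.cov(x, x)[0, 1]

# Wald ratio: effect of G on Y divided by effect of G on X. Because the
# genotype is allocated at gamete formation, it is independent of U, and
# the ratio is a consistent estimator of the causal effect.
beta_iv = np.cov(g, y)[0, 1] / np.cov(g, x)[0, 1]

print(f"OLS estimate: {beta_ols:.2f}")
print(f"MR  estimate: {beta_iv:.2f}")
```

With a weak instrument (here the variant explains only a small share of the exposure variance, as in the study), the MR estimate is far noisier than OLS, which is why its confidence intervals in the abstract are wide.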
Abstract:
The purpose of this ex post facto study was to analyze the personality profile of outpatients who met criteria for borderline personality disorder according to the Five-Factor Model of personality. All patients (N = 52) completed the International Personality Disorder Examination (IPDE) Screening Questionnaire, the Big Five Questionnaire (BFQ), the Beck Depression Inventory (BDI), and the Beck Hopelessness Scale (BHS). The results show a high comorbidity with other DSM-IV-TR Axis II disorders, in particular with those from Cluster C. The BFQ average scores indicate that the outpatients who met borderline criteria score lower than controls on all five dimensions, and especially on emotional stability. Correlations were computed between the BFQ and the IPDE scales in our sample. These results suggest that specific personality profiles are linked to different comorbidity patterns. More than half of our sample had clinically significant scores on Beck's scales. Surprisingly, depression and hopelessness neither correlated with the borderline scale nor affected the relationship between personality and personality disorders.
Abstract:
One third of all stroke survivors develop post-stroke depression (PSD). Depressive symptoms adversely affect rehabilitation and significantly increase risk of death in the post-stroke period. One of the theoretical views on the determinants of PSD focuses on psychosocial factors like disability and social support. Others emphasize biologic mechanisms such as disruption of biogenic amine neurotransmission and release of proinflammatory cytokines. The "lesion location" perspective attempts to establish a relationship between localization of stroke and occurrence of depression, but empirical results remain contradictory. These divergences are partly related to the fact that neuroimaging methods, unlike neuropathology, are not able to assess precisely the full extent of stroke-affected areas and do not specify the different types of vascular lesions. We provide here an overview of the known phenomenological profile and current pathogenic hypotheses of PSD and present neuropathological data challenging the classic "single-stroke"-based neuroanatomical model of PSD. We suggest that vascular burden due to the chronic accumulation of small macrovascular and microvascular lesions may be a crucial determinant of the development and evolution of PSD.
Abstract:
BACKGROUND: It is well established that high adherence among HIV-infected patients on highly active antiretroviral treatment (HAART) is a major determinant of virological and immunologic success. Furthermore, psychosocial research has identified a wide range of adherence factors, including patients' subjective beliefs about the effectiveness of HAART. Current statistical approaches, mainly based on the separate identification either of factors associated with treatment effectiveness or of those associated with adherence, fail to properly explore the true relationship between adherence and treatment effectiveness. Adherence behavior may be influenced not only by perceived benefits (usually the focus of related studies) but also by objective treatment benefits reflected in biological outcomes. METHODS: Our objective was to assess the bidirectional relationship between adherence and response to treatment among patients enrolled in the ANRS CO8 APROCO-COPILOTE study. We compared a conventional statistical approach, based on separate estimations of an adherence equation and an effectiveness equation, to an econometric approach using a 2-equation simultaneous system based on the same 2 equations. RESULTS: Our results highlight a reciprocal relationship between adherence and treatment effectiveness. After controlling for endogeneity, adherence was positively associated with treatment effectiveness. Furthermore, CD4 count gain after baseline was found to have a significant positive effect on adherence at each observation period. This immunologic parameter was not significant when the adherence equation was estimated separately. In the 2-equation model, the covariances between the disturbances of both equations were found to be significant, thus confirming the statistical appropriateness of studying adherence and treatment effectiveness jointly. 
CONCLUSIONS: Our results suggest that positive biological outcomes arising from high adherence in turn reinforce continued adherence. They strengthen the argument that patients who do not experience rapid improvement in their immunologic and clinical statuses after HAART initiation should be prioritized when developing adherence support interventions. Furthermore, they invalidate the hypothesis that HAART leads to "false reassurance" among HIV-infected patients.
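The core of the econometric approach above is estimating two mutually dependent equations while correcting for endogeneity. A minimal sketch of the idea using two-stage least squares on a simulated two-equation system (all coefficients and variable names are invented for illustration, not the APROCO-COPILOTE model):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Hypothetical structural system (coefficients invented):
#   adherence     A = 0.5 * E + zA + eA
#   effectiveness E = 0.4 * A + zE + eE
# zA shifts adherence only (e.g. psychosocial support); zE shifts
# effectiveness only (e.g. baseline virological status).
zA, zE = rng.normal(size=n), rng.normal(size=n)
eA, eE = rng.normal(size=n), rng.normal(size=n)

# Solve the two equations jointly (reduced form) to generate consistent data.
d = 1.0 - 0.5 * 0.4
A = (zA + eA + 0.5 * (zE + eE)) / d
E = (zE + eE + 0.4 * (zA + eA)) / d

def two_sls(y, endog, instrument):
    """Two-stage least squares with one endogenous regressor."""
    # Stage 1: project the endogenous regressor on the excluded instrument.
    X1 = np.column_stack([np.ones(len(instrument)), instrument])
    fitted = X1 @ np.linalg.lstsq(X1, endog, rcond=None)[0]
    # Stage 2: regress the outcome on the fitted values; return the slope.
    X2 = np.column_stack([np.ones(len(fitted)), fitted])
    return np.linalg.lstsq(X2, y, rcond=None)[0][1]

effect_E_on_A = two_sls(A, E, zE)  # should recover roughly 0.5
effect_A_on_E = two_sls(E, A, zA)  # should recover roughly 0.4
print(effect_E_on_A, effect_A_on_E)
```

Estimating either equation by ordinary least squares instead would conflate the two directions of causation, which is exactly the failure of the "separate estimation" approach the study criticizes.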
Abstract:
This paper presents a statistical model for the quantification of the weight of fingerprint evidence. In contrast to previous models (generative and score-based models), our model estimates the probability distributions of the spatial relationships, directions and types of minutiae observed on fingerprints for any given fingermark. Our model relies on an AFIS algorithm provided by 3M Cogent and on a dataset of more than 4,000,000 fingerprints to represent a sample from a relevant population of potential sources. The performance of our model was tested using several hundred minutiae configurations observed on a set of 565 fingermarks. In particular, the effects of various sub-populations of fingers (i.e., finger number, finger general pattern) on the expected evidential value of our test configurations were investigated. The performance of our model indicates that the spatial relationship between minutiae carries more evidential weight than their type or direction. Our results also indicate that the AFIS component of our model directly enables us to assign weight to fingerprint evidence without the additional layer of complex statistical modeling entailed by estimating the probability distributions of fingerprint features. In fact, the AFIS component appears to be more sensitive to the sub-population effects than the other components of the model. Overall, the data generated during this research project support the idea that fingerprint evidence is a valuable forensic tool for the identification of individuals.
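The general mechanism for weighing forensic evidence mentioned here is the likelihood ratio: the probability of the observed comparison under the same-source proposition divided by its probability under the different-source proposition. A toy sketch of a score-based version (the paper's actual model is feature-based and AFIS-driven; the two normal score distributions below are entirely hypothetical):

```python
import math

def normal_pdf(x: float, mu: float, sigma: float) -> float:
    """Density of a normal distribution at x."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def likelihood_ratio(score: float) -> float:
    """LR = P(score | same source) / P(score | different sources).
    The two score distributions are made-up normals, for illustration only."""
    same_source = normal_pdf(score, mu=80.0, sigma=10.0)
    different_sources = normal_pdf(score, mu=20.0, sigma=15.0)
    return same_source / different_sources

# A high comparison score supports the same-source proposition (LR >> 1);
# a low score supports the different-source proposition (LR << 1).
print(likelihood_ratio(75.0), likelihood_ratio(10.0))
```

In a real system the two distributions would be estimated from large databases of mated and non-mated comparisons, which is where the 4,000,000-print sample and the AFIS scoring come in.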
Abstract:
BACKGROUND: Understanding cancer-related modifications to transcriptional programs requires detailed knowledge about the activation of signal-transduction pathways and gene expression programs. To investigate the mechanisms of target gene regulation by human estrogen receptor alpha (hERalpha), we combine extensive location and expression datasets with genomic sequence analysis. In particular, we study the influence of patterns of DNA occupancy by hERalpha on expression phenotypes. RESULTS: We find that strong ChIP-chip sites co-localize with strong hERalpha consensus sites and detect nucleotide bias near hERalpha sites. The localization of ChIP-chip sites relative to annotated genes shows that weak sites are enriched near transcription start sites, while stronger sites show no positional bias. Assessing the relationship between binding configurations and expression phenotypes, we find binding sites downstream of the transcription start site (TSS) to predict hERalpha-mediated expression as well as, or better than, upstream sites. The study of FOX and SP1 cofactor sites near hERalpha ChIP sites shows that induced genes frequently have FOX or SP1 sites. Finally, we integrate these multiple datasets to define a high-confidence set of primary hERalpha target genes. CONCLUSION: Our results support the model of long-range interactions of hERalpha with the promoter-bound cofactor SP1 residing at the promoter of hERalpha target genes. FOX motifs co-occur with hERalpha motifs along responsive genes. Importantly, we show that the spatial arrangement of sites near the start sites and within the full transcript is important in determining the response to estrogen signaling.
Abstract:
The motivation for this research originated in the abrupt rise and fall of minicomputers, which were initially used both for industrial automation and for business applications due to their significantly lower cost than their predecessors, the mainframes. Later, industrial automation developed its own vertically integrated hardware and software to address the application needs of uninterrupted operation, real-time control and resilience to harsh environmental conditions. This led to the creation of an independent industry, namely industrial automation as used in PLC, DCS, SCADA and robot control systems. This industry today employs over 200'000 people in a profitable slow-clockspeed context, in contrast to the two mainstream computing industries: information technology (IT), focused on business applications, and telecommunications, focused on communications networks and hand-held devices. Already in the 1990s it was foreseen that IT and communications would merge into one information and communication technology (ICT) industry. The fundamental question of the thesis is: could industrial automation leverage a common technology platform with the newly formed ICT industry? Computer systems dominated by complex instruction set computers (CISC) were challenged during the 1990s by higher-performance reduced instruction set computers (RISC). RISC evolved in parallel with the constant advancement of Moore's law. These developments created the high-performance, low-energy-consumption System-on-Chip (SoC) architecture. Unlike with CISC processors, RISC processor architecture design is a separate industry from RISC chip manufacturing. It also has several hardware-independent software platforms, each consisting of an integrated operating system, development environment, user interface and application market, which gives customers more choice through hardware-independent, real-time-capable software applications. 
An architecture disruption emerged, and the smartphone and tablet markets were formed with new rules and new key players in the ICT industry. Today there are more RISC computer systems running Linux (or other Unix variants) than any other computer system. The astonishing rise of SoC-based technologies and related software platforms in smartphones created, in unit terms, the largest installed base ever seen in the history of computers, and it is now being further extended by tablets. An underlying additional element of this transition is the increasing role of open source technologies in both software and hardware. This has driven the microprocessor-based personal computer industry, with its few dominating closed operating system platforms, into a steep decline. A significant factor in this process has been the separation of processor architecture from processor chip production, and the merger of operating systems and application development platforms into integrated software platforms with proprietary application markets. Furthermore, pay-by-click marketing has changed the way application development is compensated: freeware, ad-based or licensed, all at a lower price and used by a wider customer base than ever before. Moreover, the concept of a software maintenance contract is very remote in the app world. However, as a slow-clockspeed industry, industrial automation has remained intact during the disruptions based on SoC and related software platforms in the ICT industries. Industrial automation incumbents continue to supply vertically integrated systems consisting of proprietary software and proprietary, mainly microprocessor-based, hardware. 
They enjoy admirable profitability levels on a very narrow customer base due to strong technology-enabled customer lock-in and customers' high risk leverage, as their production depends on fault-free operation of the industrial automation systems. When will this balance of power be disrupted? The thesis suggests how industrial automation could join the mainstream ICT industry and create an information, communication and automation (ICAT) industry. Lately, the Internet of Things (IoT) and weightless networks, a new standard leveraging frequency channels earlier occupied by TV broadcasting, have gradually started to change the rigid world of machine-to-machine (M2M) interaction. It is foreseeable that enough momentum will be created that the industrial automation market will in due course face an architecture disruption empowered by these new trends. This thesis examines the current state of industrial automation, subject to the competition between the incumbents, firstly through research on cost-competitiveness efforts in captive outsourcing of engineering, research and development, and secondly through research on process re-engineering in the case of complex-system global software support. Thirdly, we investigate the views of the industry actors, namely customers, incumbents and newcomers, on the future direction of industrial automation, and conclude with our assessment of the possible routes industrial automation could take, considering the looming rise of the Internet of Things (IoT) and weightless networks. Industrial automation is an industry dominated by a handful of global players, each of them focused on maintaining their own proprietary solutions. 
The rise of de facto standards like the IBM PC, Unix, Linux and SoC, leveraged by IBM, Compaq, Dell, HP, ARM, Apple, Google, Samsung and others, has created new markets for personal computers, smartphones and tablets, and will eventually also impact industrial automation through game-changing commoditization and the related control-point and business-model changes. This trend will inevitably continue, but the transition to commoditized industrial automation will not happen in the near future.
Abstract:
We investigate the selective pressures on a social trait when evolution occurs in a population of constant size. We show that any social trait that is spiteful simultaneously qualifies as altruistic. In other words, any trait that reduces the fitness of less related individuals necessarily increases that of related ones. Our analysis demonstrates that the distinction between "Hamiltonian spite" and "Wilsonian spite" is not justified on the basis of fitness effects. We illustrate this general result with an explicit model for the evolution of a social act that reduces the recipient's survival ("harming trait"). This model shows that the evolution of harming is favoured if local demes are of small size and migration is low (philopatry). Further, deme size and migration rate determine whether harming evolves as a selfish strategy by increasing the fitness of the actor, or as a spiteful/altruistic strategy through its positive effect on the fitness of close kin.
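The spite/altruism equivalence described above can be illustrated with Hamilton's rule, where a social trait is favoured when rb - c > 0. In a population of constant size, relatedness is measured relative to the population average, so harming a negatively related competitor has the same sign of inclusive-fitness effect as helping a positively related one. A numeric sketch (the coefficients below are invented for illustration, not taken from the paper's model):

```python
def inclusive_fitness_effect(c: float, b: float, r: float) -> float:
    """Hamilton's rule: the trait is favoured when r*b - c > 0.
    c: fitness cost to the actor, b: fitness effect on the recipient,
    r: relatedness of actor to recipient (can be negative relative to
    the population average in a finite, constant-size population)."""
    return r * b - c

# Spite reading: pay c = 0.1 to inflict harm b = -1 on a negatively
# related recipient (r = -0.2). The effect r*b - c = 0.1 is positive,
# so the harming trait can spread; equivalently, the same act raises
# the fitness of positively related individuals (the altruism reading).
print(inclusive_fitness_effect(0.1, -1.0, -0.2) > 0)
```

The same function shows the converse: helping (b > 0) a recipient whose relatedness is too low to offset the cost gives a negative effect, and the trait is disfavoured.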
Abstract:
An Adobe® animation is presented for use in undergraduate biochemistry courses, illustrating the mechanism of Na+ and K+ translocation coupled to ATP hydrolysis by the (Na,K)-ATPase, a P2C-type ATPase, or ATP-powered ion pump, that actively translocates cations across plasma membranes. The enzyme is also known as an E1/E2-ATPase, as it undergoes conformational changes between the E1 and E2 forms during the pumping cycle, altering the affinity and accessibility of the transmembrane ion-binding sites. The animation is based on Horisberger's scheme, which incorporates the most recent significant findings that have improved our understanding of the (Na,K)-ATPase structure-function relationship. The movements of the various domains within the (Na,K)-ATPase alpha-subunit illustrate the conformational changes that occur during Na+ and K+ translocation across the membrane and emphasize the involvement of the actuator, nucleotide, and phosphorylation domains, that is, the "core engine" of the pump, with respect to ATP binding, cation transport, and ADP and Pi release.
Abstract:
The hyperpolarization-activated cyclic nucleotide-gated (HCN) channels are expressed in pacemaker cells very early during cardiogenesis. This work aimed at determining to what extent these channels are implicated in the electromechanical disturbances induced by a transient lack of oxygen, which may occur in utero. Spontaneously beating hearts, or isolated ventricles and outflow tracts, dissected from 4-day-old chick embryos were exposed to a selective inhibitor of HCN channels (ivabradine, 0.1-10 microM) to establish a dose-response relationship. The effects of ivabradine on the electrocardiogram, excitation-contraction coupling and contractility of hearts submitted to anoxia (30 min) and reoxygenation (60 min) were also determined. The distribution of the predominant channel isoform, HCN4, was established in atria, ventricle and outflow tract by immunoblotting. The intrinsic beating rate of atria, ventricle and outflow tract was 164 +/- 22 (n = 10), 78 +/- 24 (n = 8) and 40 +/- 12 bpm (n = 23; mean +/- SD), respectively. In the whole heart, ivabradine (0.3 microM) slowed the firing rate of the atria by 16% and stabilized the PR interval. These effects persisted throughout anoxia-reoxygenation, whereas the variations of QT duration, excitation-contraction coupling and contractility, as well as the types and duration of arrhythmias, were not altered. Ivabradine (10 microM) reduced the intrinsic rate of atria and isolated ventricle by 27% and 52%, respectively, whereas it abolished the activity of the isolated outflow tract. Protein expression of HCN4 channels was higher in atria and ventricle than in the outflow tract. Thus, HCN channels are specifically distributed and finely control the atrial, ventricular and outflow tract pacemakers, as well as conduction in the embryonic heart, under normoxia and throughout anoxia-reoxygenation.
Abstract:
EXECUTIVE SUMMARY: Evaluating the Information Security posture of an organization is becoming a very complex task. Currently, the evaluation and assessment of Information Security are commonly performed using frameworks, methodologies and standards which often consider the various aspects of security independently. Unfortunately this is ineffective, because it does not take into consideration the necessity of a global, systemic and multidimensional approach to Information Security evaluation. At the same time, the overall security level is generally considered to be only as strong as its weakest link. This thesis proposes a model aiming to holistically assess all dimensions of security in order to minimize the likelihood that a given threat will exploit the weakest link. A formalized structure taking into account all security elements is presented; this is based on a methodological evaluation framework in which Information Security is evaluated from a global perspective. This dissertation is divided into three parts. Part One, Information Security Evaluation Issues, consists of four chapters. Chapter 1 is an introduction to the purpose of this research and the model that will be proposed. In this chapter we raise some questions with respect to "traditional evaluation methods" and identify the principal elements to be addressed in this direction. We then introduce the baseline attributes of our model and set out the expected result of evaluations according to our model. Chapter 2 is focused on the definition of Information Security to be used as a reference point for our evaluation model. The concepts inherent in the contents of a holistic and baseline Information Security Program are defined. Based on this, the most common roots of trust in Information Security are identified. Chapter 3 focuses on an analysis of the difference and the relationship between the concepts of Information Risk Management and Security Management. 
Comparing these two concepts allows us to identify the most relevant elements to be included within our evaluation model, while clearly situating these two notions within a defined framework is of the utmost importance for the results that will be obtained from the evaluation process. Chapter 4 sets out our evaluation model and the way it addresses issues relating to the evaluation of Information Security. Within this chapter the underlying concepts of assurance and trust are discussed. Based on these two concepts, the structure of the model is developed in order to provide an assurance-related platform as well as three evaluation attributes: "assurance structure", "quality issues", and "requirements achievement". Issues relating to each of these evaluation attributes are analysed with reference to sources such as methodologies, standards and published research papers. The operation of the model is then discussed: assurance levels, quality levels and maturity levels are defined in order to perform the evaluation according to the model. Part Two, Implementation of the Information Security Assurance Assessment Model (ISAAM) according to the Information Security Domains, consists of four chapters. This is the section where our evaluation model is put into a well-defined context with respect to the four pre-defined Information Security dimensions: the Organizational dimension, the Functional dimension, the Human dimension, and the Legal dimension. Each Information Security dimension is discussed in a separate chapter. For each dimension, the following two-phase evaluation path is followed. The first phase concerns the identification of the elements which will constitute the basis of the evaluation: the identification of the key elements within the dimension; the identification of the Focus Areas for each dimension, consisting of the security issues identified for each dimension; and the 
identification of the Specific Factors for each dimension, consisting of the security measures or controls addressing the security issues identified for that dimension. The second phase concerns the evaluation of each Information Security dimension through: the implementation of the evaluation model, based on the elements identified for each dimension within the first phase, by identifying the security tasks, processes, procedures, and actions that should have been performed by the organization to reach the desired level of protection; and the maturity model for each dimension as a basis for reliance on security. For each dimension we propose a generic maturity model that could be used by any organization in order to define its own security requirements. Part Three of this dissertation contains the Final Remarks, Supporting Resources and Annexes. With reference to the objectives of our thesis, the Final Remarks briefly analyse whether these objectives were achieved and suggest directions for future related research. The Supporting Resources comprise the bibliographic resources that were used to elaborate and justify our approach. The Annexes include all the relevant topics identified within the literature to illustrate certain aspects of our approach. Our Information Security evaluation model is based on and integrates different Information Security best practices, standards, methodologies and research expertise, which can be combined in order to define a reliable categorization of Information Security. After the definition of terms and requirements, an evaluation process should be performed in order to obtain evidence that the Information Security within the organization in question is adequately managed. We have specifically integrated into our model the most useful elements of these sources of information in order to provide a generic model able to be implemented in all kinds of organizations. 
The value added by our evaluation model is that it is easy to implement and operate and answers concrete needs in terms of reliance upon an efficient and dynamic evaluation tool through a coherent evaluation system. On that basis, our model could be implemented internally within organizations, allowing them to better govern their Information Security. RÉSUMÉ: General context of the thesis. The evaluation of security in general, and of information security in particular, has become for organizations not only a crucial task but also an increasingly complex one. At present, this evaluation relies mainly on methodologies, best practices, norms or standards that address separately the different aspects making up information security. We consider this way of evaluating security inefficient, because it does not take into account the interaction between the different dimensions and components of security, even though it has long been accepted that the overall security level of an organization is always that of the weakest link in the security chain. We have identified the need for a global, integrated, systemic and multidimensional approach to the evaluation of information security. Indeed, and this is the starting point of our thesis, we demonstrate that only a global consideration of security can meet the requirements of optimal security as well as the specific protection needs of an organization. Our thesis therefore proposes a new paradigm for security evaluation, designed to satisfy the effectiveness and efficiency needs of a given organization. 
We then propose a model that aims to evaluate all dimensions of security holistically, in order to minimize the probability that a potential threat could exploit vulnerabilities and cause direct or indirect damage. This model is based on a formalized structure that takes into account all the elements of a security system or program. We thus propose a methodological evaluation framework that considers information security from a global perspective. Structure of the thesis and topics covered: the document is structured in three parts. The first, entitled "The problem of information security evaluation", comprises four chapters. Chapter 1 introduces the object of the research and the basic concepts of the proposed evaluation model. The traditional way of evaluating security is critically analysed in order to identify the principal and invariant elements to be taken into account in our holistic approach. The basic elements of our evaluation model and its expected operation are then presented in order to outline the results expected from this model. Chapter 2 focuses on the definition of the notion of Information Security. It is not a redefinition of the notion of security, but a putting into perspective of the dimensions, criteria and indicators to be used as a reference baseline, in order to determine the object of the evaluation used throughout our work. The concepts inherent in the holistic character of security, as well as the constituent elements of a security reference level, are defined accordingly. This makes it possible to identify what we have called the "roots of trust". 
Chapter 3 presents and analyzes the differences and relationships between the Risk Management and Security Management processes, in order to identify the constituent elements of the protection framework to be included in our evaluation model. Chapter 4 is devoted to the presentation of our evaluation model, the Information Security Assurance Assessment Model (ISAAM), and the way it meets the evaluation requirements presented previously. In this chapter, the underlying concepts of assurance and trust are analyzed. Based on these two concepts, the structure of the evaluation model is developed to obtain a platform that offers a certain level of guarantee, relying on three evaluation attributes, namely: "trust structure", "process quality", and "fulfilment of requirements and objectives". The issues related to each of these evaluation attributes are analyzed on the basis of the state of the art in research and the literature, of the various existing methods, and of the most common norms and standards in the security field. On this basis, three distinct evaluation levels are constructed: the assurance level, the quality level and the maturity level, which together form the basis for evaluating the overall security state of an organization. The second part, "Applying the Information Security Assurance Assessment Model by security domain", also consists of four chapters. The evaluation model already constructed and analyzed is here placed in a specific context along the four predefined security dimensions: the Organizational dimension, the Functional dimension, the Human dimension, and the Legal dimension.
Each of these dimensions and its specific evaluation is the subject of a separate chapter. For each dimension, a two-phase evaluation is constructed as follows. The first phase concerns the identification of the elements that form the basis of the evaluation: (i) identification of the key elements of the evaluation; (ii) identification of the "Focus Areas" of each dimension, which represent the issues found within the dimension; (iii) identification of the "Specific Factors" of each Focus Area, which represent the security and control measures that help resolve or reduce the impacts of risks. The second phase concerns the evaluation of each of the dimensions presented above. It consists, on the one hand, of applying the general evaluation model to the dimension concerned by relying on the elements specified in the first phase and by identifying the specific security tasks, processes and procedures that should have been carried out to reach the desired level of protection. On the other hand, the evaluation of each dimension is completed by the proposal of a maturity model specific to that dimension, to be considered as a reference baseline for the overall security level. For each dimension we propose a generic maturity model that any organization can use to specify its own security requirements. This constitutes an innovation in the field of evaluation, which we justify for each dimension and whose added value we systematically highlight. The third part of our document concerns the overall validation of our proposal and contains, by way of conclusion, a critical perspective on our work and final remarks. This last part is completed by a bibliography and appendices.
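The hierarchy described above (dimension → Focus Areas → Specific Factors, scored against a maturity baseline) can be sketched as a small data structure. This is a minimal illustrative sketch, not the thesis's actual ISAAM definitions: the class names, the 0–5 maturity scale, and the min() aggregation rule are assumptions, the last one chosen to mirror the "weakest link in the security chain" principle stated in the abstract.

```python
from dataclasses import dataclass, field

@dataclass
class SpecificFactor:
    """A security or control measure within a Focus Area (hypothetical)."""
    name: str
    maturity: int  # assumed scale: 0 (absent) .. 5 (optimized)

@dataclass
class FocusArea:
    name: str
    factors: list[SpecificFactor] = field(default_factory=list)

    def maturity(self) -> int:
        # Weakest-link aggregation (an assumption, not the thesis's rule).
        return min((f.maturity for f in self.factors), default=0)

@dataclass
class Dimension:
    name: str  # Organizational, Functional, Human, or Legal
    focus_areas: list[FocusArea] = field(default_factory=list)

    def maturity(self) -> int:
        return min((fa.maturity() for fa in self.focus_areas), default=0)

def overall_maturity(dimensions: list[Dimension]) -> int:
    """Overall security level = level of the weakest dimension."""
    return min((d.maturity() for d in dimensions), default=0)

# Usage: one dimension with one Focus Area and two illustrative factors.
human = Dimension("Human", [
    FocusArea("Awareness", [
        SpecificFactor("Security training", 3),
        SpecificFactor("Phishing drills", 2),
    ]),
])
print(overall_maturity([human]))  # → 2
```

An organization instantiating the model would populate each dimension's Focus Areas with its own Specific Factors and compare the computed levels against its chosen maturity baseline.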
Our security evaluation model integrates and builds on numerous sources of expertise, such as best practices, norms, standards, methods and the findings of scientific research in the field. Our constructive proposal addresses a genuine, as yet unresolved problem that all organizations face, regardless of their size and profile. It would allow them to specify their particular requirements in terms of the security level to be met, and to instantiate an evaluation process specific to their needs, so that they can ensure that their information security is managed appropriately, thus providing a certain level of confidence in the degree of protection achieved. We have integrated into our model the best of the know-how, experience and expertise currently available internationally, with the aim of providing a simple, generic evaluation model applicable to a large number of public or private organizations. The added value of our evaluation model lies precisely in the fact that it is sufficiently generic and easy to implement while answering the concrete needs of organizations. Our proposal thus constitutes a reliable, efficient and dynamic evaluation tool derived from a coherent evaluation approach. As a result, our evaluation system can be implemented internally by the enterprise itself, without requiring additional resources, and thereby also gives it the ability to better govern its information security.
Resumo:
BACKGROUND: Regulatory T cells (Tregs) are key players in controlling the development of airway inflammation. However, their role in the mechanisms leading to tolerance in established allergic asthma is unclear. OBJECTIVE: To examine the role of Tregs in tolerance induction in a murine model of asthma. METHODS: Ovalbumin (OVA)-sensitized asthmatic mice were depleted or not of CD25+ T cells by anti-CD25 PC61 monoclonal antibody (mAb) before intranasal treatment (INT) with OVA, then challenged with OVA aerosol. To further evaluate the respective regulatory activity of CD4+CD25+ and CD4+CD25- T cells, both T cell subsets were transferred from tolerized or non-tolerized animals to asthmatic recipients. Bronchoalveolar lavage fluid (BALF), T cell proliferation and cytokine secretion were examined. RESULTS: Intranasal treatment with OVA led to increased levels of IL-10, TGF-beta and IL-17 in lung homogenates, inhibition of eosinophil recruitment into the BALF and antigen-specific T cell hyporesponsiveness. CD4+CD25+Foxp3+ T cells were markedly upregulated in lungs and suppressed in vitro and in vivo OVA-specific T cell responses. Depletion of CD25+ cells before OVA INT severely hampered tolerance induction, as indicated by a strong recruitment of eosinophils into BALF and a vigorous T cell response to OVA upon challenge. However, the transfer of CD4+CD25- T cells not only suppressed antigen-specific T cell responsiveness but also significantly reduced eosinophil recruitment, as opposed to CD4+CD25+ T cells. As compared with control mice, a significantly higher proportion of CD4+CD25- T cells from OVA-treated mice expressed mTGF-beta. CONCLUSION: Both CD4+CD25+ and CD4+CD25- T cells appear to be essential to tolerance induction. The relationship between both subsets and the mechanisms of their regulatory activity will have to be further analyzed.
Resumo:
The potential pathogenicity of selected (potentially) probiotic and clinical isolates of Lactobacillus rhamnosus and Lactobacillus paracasei was investigated in a rat model of experimental endocarditis. In addition, adhesion properties of the lactobacilli for fibrinogen, fibronectin, collagen and laminin, as well as the killing activity of the platelet microbicidal proteins fibrinopeptide A (FP-A) and connective tissue activating peptide 3 (CTAP-3), were assessed. The 90 % infective dose (ID90) of the L. rhamnosus endocarditis isolates varied between 10^6 and 10^7 c.f.u., whereas four of the six (potentially) probiotic L. rhamnosus isolates showed an ID90 that was at least 10-fold higher (10^8 c.f.u.) (P<0.001). In contrast, the two other probiotic L. rhamnosus isolates exhibited an ID90 (10^6 and 10^7 c.f.u.) comparable to the ID90 of the clinical isolates of this species investigated (P>0.05). Importantly, these two probiotic isolates shared the same fluorescent amplified fragment length polymorphism cluster type as the clinical isolate showing the lowest ID90 (10^6 c.f.u.). L. paracasei tended to have a lower infectivity than L. rhamnosus (ID90 of 10^7 to ≥10^8 c.f.u.). All isolates had comparable bacterial counts in cardiac vegetations (P>0.05). Except for one L. paracasei strain, which adhered to all substrates, all tested lactobacilli adhered only weakly or not at all. The platelet peptide FP-A did not show any microbicidal activity against the tested lactobacilli, whereas CTAP-3 killed the majority of the isolates. In general, these results indicate that probiotic lactobacilli display a lower infectivity in experimental endocarditis compared with true endocarditis pathogens. However, the difference in infectivity between L. rhamnosus endocarditis and (potentially) probiotic isolates could not be explained by differences in adherence or platelet microbicidal protein susceptibility.
Other disease-promoting factors may exist in these organisms and warrant further investigation.