86 results for generic hybridity
Abstract:
A sensitive and selective ultra-high performance liquid chromatography (UHPLC) tandem mass spectrometry (MS/MS) method was developed for the fast quantification of ten psychotropic drugs and metabolites in human plasma for the needs of our laboratory (amisulpride, asenapine, desmethyl-mirtazapine, iloperidone, mirtazapine, norquetiapine, olanzapine, paliperidone, quetiapine and risperidone). Stable isotope-labeled internal standards were used for all analytes to compensate for the overall method variability, including extraction and ionization variations. Sample preparation was performed by generic protein precipitation with acetonitrile. Chromatographic separation was achieved in less than 3.0 min on an Acquity UPLC BEH Shield RP18 column (2.1 mm × 50 mm; 1.7 μm), using gradient elution with 10 mM ammonium formate buffer pH 3.0 and acetonitrile at a flow rate of 0.4 ml/min. The compounds were quantified on a tandem quadrupole mass spectrometer operating in positive electrospray ionization mode, using multiple reaction monitoring. The method was fully validated according to the latest recommendations of international guidelines. Eight-point calibration curves were used to cover wide concentration ranges: 0.5-200 ng/ml for asenapine, desmethyl-mirtazapine, iloperidone, mirtazapine, olanzapine, paliperidone and risperidone, and 1-1500 ng/ml for amisulpride, norquetiapine and quetiapine. Good quantitative performance was achieved in terms of trueness (93.1-111.2%), repeatability (1.3-8.6%) and intermediate precision (1.8-11.5%). Internal standard-normalized matrix effects ranged between 95 and 105%, with a variability never exceeding 6%. The accuracy profiles (total error) fell within the acceptance limits of ±30% for biological samples. This method is therefore suitable for both therapeutic drug monitoring and pharmacokinetic studies.
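The abstract's trueness figures come from back-calculating concentrations against the calibration curve. As a minimal sketch with entirely made-up numbers (the slope, proportional bias and QC level below are hypothetical, not the paper's data), the fit and trueness computation might look like:

```python
import numpy as np

# Hypothetical eight-point calibration (ng/ml) mimicking the 0.5-200 ng/ml range.
levels = np.array([0.5, 1.0, 2.0, 5.0, 20.0, 50.0, 100.0, 200.0])
# Simulated IS-normalized responses: true slope 0.02 with a +3% proportional bias.
responses = 0.02 * levels * 1.03

slope, intercept = np.polyfit(levels, responses, 1)   # least-squares line

# Back-calculate an unbiased QC measurement and report trueness (found/nominal).
nominal = 50.0
qc_response = 0.02 * nominal
found = (qc_response - intercept) / slope
trueness_pct = 100.0 * found / nominal
```

With a +3% calibration bias, the back-calculated trueness for an unbiased QC lands near 97%, i.e. inside the 93.1-111.2% range the abstract reports.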
Abstract:
This article describes the application of a recently developed general unknown screening (GUS) strategy, based on LC coupled to a hybrid linear ion trap-triple quadrupole mass spectrometer (LC-MS/MS-LIT), to the simultaneous detection and identification of drug metabolites following in vitro incubation with human liver microsomes. The histamine H1 receptor antagonist loratadine was chosen as a model compound to demonstrate the value of such an approach, because of its previously described complex and extensive metabolism. Detection and mass spectral characterization were based on data-dependent acquisition, switching between a survey scan acquired in the ion-trapping Q3 scan mode with dynamic subtraction of background noise, and a dependent scan in the ion-trapping product ion scan mode of automatically selected parent ions. In addition, the MS3 mode was used in a second step to confirm the structure of a few fragment ions. The sensitivity of the ion-trapping modes combined with the selectivity of the triple quadrupole modes allowed, with only one injection, the detection and identification of 17 phase I metabolites of loratadine. The GUS procedure used in this study may be applicable as a generic technique for the characterization of drug metabolites after in vitro incubation, and probably also in in vivo experiments.
Abstract:
Regulatory gene networks contain generic modules, like those involving feedback loops, which are essential for the regulation of many biological functions (Guido et al. in Nature 439:856-860, 2006). We consider a class of self-regulated genes which are the building blocks of many regulatory gene networks, and study the steady-state distribution of the associated Gillespie algorithm by providing efficient numerical algorithms. We also study a regulatory gene network of interest in gene therapy, using mean-field models with time delays. Convergence of the related time-nonhomogeneous Markov chain is established for a class of linear catalytic networks with feedback loops.
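The Gillespie algorithm mentioned above can be sketched for a single negatively self-regulated gene. The propensities below (production at rate k/(1 + x/K), first-order degradation) are a generic textbook choice for a self-regulated gene, not the specific networks or numerical algorithms studied in the paper:

```python
import random

def gillespie_self_regulated(k=20.0, K=10.0, g=1.0, t_end=500.0, seed=1):
    """Stochastic simulation of one negatively self-regulated gene:
    production with propensity k/(1 + x/K), degradation with propensity g*x."""
    random.seed(seed)
    x, t, samples = 0, 0.0, []
    while t < t_end:
        birth = k / (1.0 + x / K)
        death = g * x
        total = birth + death
        t += random.expovariate(total)   # exponential waiting time to next event
        samples.append(x)
        if random.random() * total < birth:
            x += 1                       # one protein produced
        else:
            x -= 1                       # one protein degraded
    return samples

traj = gillespie_self_regulated()
mean_copy_number = sum(traj) / len(traj)  # deterministic fixed point is x = 10
```

Long-run averages of such trajectories approximate the steady-state distribution whose efficient computation the paper addresses.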
Abstract:
OBJECTIVE: To examine the incremental cost effectiveness of the five first line pharmacological smoking cessation therapies in the Seychelles and other developing countries. DESIGN: A Markov chain cohort simulation. SUBJECTS: Two simulated cohorts of smokers: (1) a reference cohort given physician counselling only; (2) a treatment cohort given counselling plus cessation therapy. INTERVENTION: Addition of each of the five pharmacological cessation therapies to physician provided smoking cessation counselling. MAIN OUTCOME MEASURES: Cost per life-year saved (LYS) associated with the five pharmacotherapies. Effectiveness expressed as odds ratios for quitting associated with pharmacotherapies. Costs based on the additional physician time required and retail prices of the medications. RESULTS: Based on prices for currently available generic medications on the global market, the incremental cost per LYS for a 45 year old in the Seychelles was 599 US dollars for gum and 227 dollars for bupropion. Assuming US treatment prices as a conservative estimate, the incremental cost per LYS was significantly higher, though still favourable in comparison to other common medical interventions: 3712 dollars for nicotine gum, 1982 dollars for nicotine patch, 4597 dollars for nicotine spray, 4291 dollars for nicotine inhaler, and 1324 dollars for bupropion. Cost per LYS increased significantly upon application of higher discount rates, which may be used to reflect relatively high opportunity costs for health expenditures in developing countries with highly constrained resources and high overall mortality. CONCLUSION: Pharmacological cessation therapy can be highly cost effective as compared to other common medical interventions in low mortality, middle income countries, particularly if medications can be procured at low prices.
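A Markov chain cohort simulation of the kind described reduces to tracking survival in a few health states. All quit rates, mortality probabilities and the incremental cost below are illustrative placeholders, not the study's Seychelles inputs:

```python
def expected_life_years(quit_prob, years=40, p_die_smoker=0.02, p_die_quitter=0.012):
    """Undiscounted life-years for a cohort in a simple Markov chain:
    a quit attempt in year one, then annual survival as smoker or quitter."""
    alive_smoker = 1.0 - quit_prob
    alive_quitter = quit_prob
    life_years = 0.0
    for _ in range(years):
        life_years += alive_smoker + alive_quitter
        alive_smoker *= 1 - p_die_smoker      # smokers face higher mortality
        alive_quitter *= 1 - p_die_quitter
    return life_years

# Reference cohort: counselling only; treatment cohort: counselling + pharmacotherapy.
base = expected_life_years(quit_prob=0.05)     # assumed 5% quit rate
treated = expected_life_years(quit_prob=0.10)  # assumed 10% quit rate
incremental_cost = 100.0                       # assumed extra cost per smoker, USD
cost_per_lys = incremental_cost / (treated - base)
```

Discounting future life-years, as the abstract notes, shrinks the denominator and drives the cost per LYS up.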
Abstract:
Rhythmic activity plays a central role in neural computations and brain functions ranging from homeostasis to attention, as well as in neurological and neuropsychiatric disorders. Despite this pervasiveness, little is known about the mechanisms whereby the frequency and power of oscillatory activity are modulated, and how they reflect the inputs received by neurons. Numerous studies have reported input-dependent fluctuations in peak frequency and power (as well as couplings across these features). However, it remains unresolved what mediates these spectral shifts among neural populations. Extending previous findings regarding stochastic nonlinear systems and experimental observations, we provide analytical insights regarding oscillatory responses of neural populations to stimulation from either endogenous or exogenous origins. Using a deceptively simple, sparse and randomly connected network of neurons, we show how spiking inputs can reliably modulate the peak frequency and power expressed by synchronous neural populations without any changes in circuitry. Our results reveal that a generic, nonlinear, input-induced mechanism can robustly mediate these spectral fluctuations, and thus provide a framework in which inputs to the neurons bidirectionally regulate both the frequency and power expressed by synchronous populations. Theoretical and computational analysis showed that the ensuing spectral fluctuations reflect the underlying dynamics of the input stimuli driving the neurons. Our results provide insights regarding a generic mechanism supporting spectral transitions observed across cortical networks and spanning multiple frequency bands.
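The peak-frequency and peak-power readout discussed above can be illustrated on a toy model: a noise-driven damped linear oscillator standing in for a population rhythm, with a periodogram-based peak estimate. All parameters are arbitrary and this is far simpler than the paper's spiking network; it only shows how the two spectral features are extracted and how drive strength scales power:

```python
import numpy as np

def peak_frequency_and_power(omega0, drive_sd, gamma=10.0, dt=1e-3, T=20.0, seed=0):
    """Simulate x'' + gamma*x' + omega0^2*x = input noise (symplectic Euler),
    then return the periodogram's peak frequency (rad/s) and peak power."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = v = 0.0
    xs = np.empty(n)
    for i in range(n):
        a = -gamma * v - omega0**2 * x + drive_sd * rng.normal() / np.sqrt(dt)
        v += a * dt
        x += v * dt
        xs[i] = x
    spec = np.abs(np.fft.rfft(xs)) ** 2
    freqs = 2 * np.pi * np.fft.rfftfreq(n, dt)
    k = spec.argmax()
    return freqs[k], spec[k]

f1, p1 = peak_frequency_and_power(omega0=40.0, drive_sd=1.0)
f2, p2 = peak_frequency_and_power(omega0=40.0, drive_sd=2.0)  # stronger drive
f3, p3 = peak_frequency_and_power(omega0=80.0, drive_sd=1.0)  # faster mode
```

Doubling the drive leaves the peak frequency unchanged and quadruples peak power (the toy system is linear), while a faster intrinsic mode moves the peak; in the paper, analogous frequency and power shifts arise from the input statistics themselves, without parameter or circuit changes.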
Abstract:
The general strategy for anti-doping analysis of urine samples starts with screening for a wide range of compounds. This step should be fast, generic and able to detect any sample that may contain a prohibited substance, while avoiding false negatives and reducing false positive results. The experiments presented in this work were based on ultra-high-pressure liquid chromatography coupled to hybrid quadrupole time-of-flight mass spectrometry. Thanks to the high sensitivity of the method, urine samples could be diluted 2-fold prior to injection. One hundred and three forbidden substances from various classes (such as stimulants, diuretics, narcotics and anti-estrogens) were analysed on a C18 reversed-phase column in two gradients of 9 min (including two 3 min equilibration periods) for positive and negative electrospray ionisation, and detected in the MS full scan mode. The automatic identification of analytes was based on retention time and mass accuracy, with an automated tool for peak picking. The method was validated according to the International Standard for Laboratories described in the World Anti-Doping Code and was selective enough to comply with the World Anti-Doping Agency recommendations. In addition, the matrix effect on MS response was measured for all investigated analytes spiked in urine samples. The limits of detection ranged from 1 to 500 ng/mL, allowing the identification of all tested compounds in urine. When a sample was reported positive during the screening, a fast additional pre-confirmatory step was performed to reduce the number of confirmatory analyses.
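The automatic identification step (retention time plus mass accuracy) reduces to a simple tolerance test. The tolerances and reference values below are hypothetical illustrations, not the validated method's settings:

```python
def is_candidate_hit(obs_mz, obs_rt, ref_mz, ref_rt, ppm_tol=10.0, rt_tol_min=0.2):
    """Flag a detected peak as a candidate hit when its m/z lies within
    ppm_tol of the reference compound and its retention time within
    rt_tol_min minutes of the reference retention time."""
    ppm_error = 1e6 * abs(obs_mz - ref_mz) / ref_mz
    return ppm_error <= ppm_tol and abs(obs_rt - ref_rt) <= rt_tol_min

# Hypothetical reference (a caffeine-like [M+H]+ at m/z 195.0877, RT 4.50 min).
hit = is_candidate_hit(195.0882, 4.51, 195.0877, 4.50)    # ~2.6 ppm: accepted
miss = is_candidate_hit(195.0990, 4.51, 195.0877, 4.50)   # ~58 ppm: rejected
```

Any peak passing such a filter would then go to the pre-confirmatory and confirmatory steps the abstract describes.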
Abstract:
Ute Heidmann, The Intertextual Dialogism of the Grimms' Tales: Groundwork for an Inquiry to Come. "The most important characteristic of the utterance, or at any rate the most ignored, is its dialogism, that is, its intertextual dimension," observes Todorov with reference to Bakhtin's dialogical conception of language. This introductory article argues that the observation also applies to the Grimms' tales. Building on research already carried out on Apuleius, Straparola, Basile, Perrault, La Fontaine and Lhéritier*, it presents concepts (intertextual response, generic reconfiguration, trompe-l'oeil scenography) and illustrates their efficacy for the analysis of the Kinder- und Hausmärchen. Analysis of the 1812 preface shows that the Grimms create a scenography to legitimize the genre of the Kinder- und Hausmärchen by presenting them as tales "of origin" that had supposedly "grown" like plants in their region and that the brothers had merely "collected". This trompe-l'oeil scenography conceals the strong impact of European, and notably French, tales on the Kinder- und Hausmärchen. Their paratextual commentaries, by contrast, make it possible to retrace these intertextual dialogues, which do not merely imitate the "voices already present in the complex chorus" of the narrators of tales already told, but create new and significantly different effects of meaning in response to the "stories or tales of the past", as Charles Perrault had done before them. *(in Féeries 8 and Textualité et intertextualité des contes, Éditions Classiques Garnier, 2010) "The most important feature of the utterance, or at least the most neglected, is its dialogism, that is, its intertextual dimension," states Todorov in reference to Bakhtin's dialogical conception of human speech. Ute Heidmann's introductory essay argues that this also applies to the Grimms' tales.
Extending her earlier theoretical and intertextual investigation of Apuleius, Straparola, Basile, Perrault, La Fontaine and Lhéritier*, she proposes a series of conceptual options (intertextual response, scenography, trompe-l'oeil, generic reconfiguration, discursive strategy) that can efficiently be used for work on the Kinder- und Hausmärchen, gesammelt durch die Brüder Grimm. The article shows how the Grimms skilfully construct a highly suggestive scenography and topography for the new generic form, creating the idea of a genuine tale that has grown naturally in the soil of their own region, and how this is used to dissimulate the strong impact of European, and notably French, fairy tales on the Grimms' tales. The extensive paratextual commentaries are shown to serve the same purpose. Once these strategies are "deconstructed" as such, the way is free to trace the very complex intertextual dialogues with already existing Italian, French and German tales that underlie the Kinder- und Hausmärchen. Comparative textual analysis can then show that these dialogues are far from just "imitating" "the many other voices already present in the complex chorus" of fairy-tale writers and narrators: they actually create new and different meaning by responding to them. *(in Féeries 8, Textualité et intertextualité des contes, Classiques Garnier, 2010)
Abstract:
Abstract: This article illustrates Angela Carter's literary practice through her use of "Sleeping Beauty" in the radio play Vampirella and its prose variation The Lady of the House of Love. It argues that she vampirised European culture, transfusing old stories into new bodies to give them new life and bite. Carter's experiments with forms, genres and media in her vampire fiction capture the inherent hybridity of the fairy tale, while shedding new light on her main source, Charles Perrault's La Belle au bois dormant, bringing to the fore the horror and terror as well as the textual ambiguities of the French conte that were gradually obscured in favor of the romance element. Carter's vampire stories thus trace the 'dark' underside of the reception of the tale in Gothic fiction and in the subculture of comic books and Hammer films so popular in the 1970s, where the Sleeping Beauty figure is revived as a femme fatale or vamp who takes her fate into her own hands. Résumé: This article shows how the use of La Belle au bois dormant in two of Angela Carter's vampire stories, the radio play Vampirella and its prose rewriting The Lady of the House of Love, illustrates the author's literary practice, which consists in vampirizing European culture and transfusing old stories into new forms, genres and media in order to give them new life. Her treatment of the fairy tale touches on an essential aspect of her creative approach, while shedding fresh light on Perrault's tale. Carter brings out the disquieting elements and the atmosphere of menace that characterize the second part of the tale, while playing on the ambiguities of the French text, often neglected in favor of its romance vein.
In this respect, her vampire stories can be read as a reflection on the 'dark' reception of the fairy tale in popular culture, in which the figure of Sleeping Beauty takes her fate into her own hands and reinvents herself as a femme fatale or vamp in the comic books and B-movies of the 1970s.
Abstract:
Abstract: In this thesis we present the design of a systematic, integrated, computer-based approach for detecting potential disruptions from an industry perspective. Following the design science paradigm, we iteratively develop several multi-actor, multi-criteria artifacts dedicated to environment scanning. The contributions of this thesis are both theoretical and practical. We demonstrate the successful use of multi-criteria decision-making methods for technology foresight. Furthermore, we illustrate the design of our artifacts using build-and-evaluate loops supported by a field study of the Swiss mobile payment industry. To increase the relevance of this study, we systematically interview key Swiss experts for each design iteration. As a result, our research provides a realistic picture of the current situation in the Swiss mobile payment market and reveals previously undiscovered weak signals of future trends. Finally, we suggest a generic design process for environment scanning.
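The multi-criteria decision-making step mentioned above can be sketched as a weighted-sum scoring of alternatives. The criteria, weights and technology names below are invented for illustration and do not come from the thesis:

```python
def weighted_scores(alternatives, weights):
    """Rank alternatives by a simple weighted-sum multi-criteria score.
    Each alternative's criterion scores are assumed normalized to [0, 1]."""
    return sorted(
        ((name, sum(w * s for w, s in zip(weights, scores)))
         for name, scores in alternatives.items()),
        key=lambda kv: kv[1], reverse=True)

# Hypothetical payment technologies scored on maturity, adoption, disruption potential.
weights = [0.3, 0.3, 0.4]
alternatives = {
    "NFC wallet":   [0.9, 0.7, 0.4],
    "QR payment":   [0.8, 0.9, 0.5],
    "Crypto rails": [0.3, 0.2, 0.9],
}
ranking = weighted_scores(alternatives, weights)
```

Expert interviews of the kind the thesis describes would supply the scores and weights at each design iteration; weighted sum is only the simplest of the multi-criteria methods it could employ.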
Abstract:
The original cefepime product was withdrawn from the Swiss market in January 2007, and replaced by a generic 10 months later. The goals of the study were to assess the impact of this cefepime shortage on the use and costs of alternative broad-spectrum antibiotics, on antibiotic policy, and on resistance of Pseudomonas aeruginosa towards carbapenems, ceftazidime and piperacillin-tazobactam. A generalized regression-based interrupted time series model assessed how much the shortage changed the monthly use and costs of cefepime and of selected alternative broad-spectrum antibiotics (ceftazidime, imipenem-cilastatin, meropenem, piperacillin-tazobactam) in 15 Swiss acute care hospitals from January 2005 to December 2008. Resistance of P. aeruginosa was compared before and after the cefepime shortage. There was a statistically significant increase in the consumption of piperacillin-tazobactam in hospitals with definitive interruption of cefepime supply, and of meropenem in hospitals with transient interruption of cefepime supply. Consumption of each alternative antibiotic tended to increase during the cefepime shortage and to decrease when the cefepime generic was released. These shifts were associated with significantly higher overall costs. There was no significant change in hospitals with uninterrupted cefepime supply. The alternative antibiotics for which an increase in consumption showed the strongest association with a progression of resistance were the carbapenems. The use of alternative antibiotics after cefepime withdrawal was associated with a significant increase in piperacillin-tazobactam and meropenem use and in overall costs, and with a decrease in susceptibility of P. aeruginosa in hospitals. This warrants caution with regard to shortages and withdrawals of antibiotics.
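The "generalized regression-based interrupted time series model" can be sketched as a segmented ordinary-least-squares regression: one dummy for the post-shortage level change and one interaction term for the trend change. The monthly series below is simulated, not the Swiss hospital data:

```python
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(48)                    # 4 years of monthly observations
post = (months >= 24).astype(float)       # 1 after the simulated shortage

# Simulated consumption: baseline 50, slow trend, +15 level jump, noise sd 2.
y = 50 + 0.1 * months + 15 * post + rng.normal(0, 2, size=48)

# Segmented regression design: intercept, baseline trend,
# post-interruption level change, post-interruption trend change.
X = np.column_stack([np.ones_like(months, dtype=float),
                     months.astype(float),
                     post,
                     post * (months - 24)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
level_change = beta[2]   # estimated jump in use at the shortage
```

The coefficient on the level-change dummy is what quantifies the shift in antibiotic use attributable to the interruption, analogous to the increases in piperacillin-tazobactam and meropenem reported above.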
Abstract:
Both Bayesian networks and probabilistic evaluation are gaining increasingly widespread use within many professional branches, including forensic science. Nevertheless, they constitute subtle topics with definitional details that require careful study. While many sophisticated developments of probabilistic approaches to the evaluation of forensic findings may readily be found in the published literature, there remains a gap with respect to writings that focus on foundational aspects and on how these may be acquired by interested scientists new to these topics. This paper takes this as a starting point to report on the learning about Bayesian networks for likelihood-ratio-based probabilistic inference procedures in a class of master's students in forensic science. The presentation uses an example that relies on a casework scenario drawn from the published literature, involving a questioned signature. A complicating aspect of that case study, proposed to students in a teaching scenario, is the need to consider multiple competing propositions, a setting that cannot readily be approached within a likelihood-ratio-based framework without attention to some additional technical details. Using generic Bayesian network fragments from the existing literature on the topic, course participants were able to track the probabilistic underpinnings of the proposed scenario correctly, both in terms of likelihood ratios and of posterior probabilities. In addition, further study of the example allowed students to derive an alternative Bayesian network structure with a computational output equivalent to existing probabilistic solutions. This practical experience underlines the potential of Bayesian networks to support and clarify foundational principles of probabilistic procedures for forensic evaluation.
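The likelihood-ratio reasoning with multiple competing propositions reduces to Bayes' theorem over more than two hypotheses. The propositions and all numbers below are illustrative placeholders, not the casework values from the cited example:

```python
def posterior(priors, likelihoods):
    """Posterior probabilities over several competing propositions,
    given prior probabilities and P(findings | proposition)."""
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)
    return [j / total for j in joint]

# Three competing propositions about a questioned signature (numbers invented):
# H1: genuine; H2: simulated by another writer; H3: disguised by the writer.
priors = [1 / 3, 1 / 3, 1 / 3]
likelihoods = [0.8, 0.05, 0.15]
post = posterior(priors, likelihoods)

# Likelihood ratio for H1 against the equally weighted pooled alternatives.
lr = likelihoods[0] / (0.5 * likelihoods[1] + 0.5 * likelihoods[2])
```

The "additional technical detail" the abstract alludes to is visible here: with more than two propositions, a single likelihood ratio requires pooling the alternatives with explicit weights, whereas the posterior distribution handles all propositions at once.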
Abstract:
BACKGROUND: Six pioneer physician-pharmacist quality circles (PPQCs), located in the Swiss canton of Fribourg (administratively corresponding to a state in the US), were under the responsibility of 6 trained community pharmacists moderating the prescribing process of 24 general practitioners (GPs). PPQCs are based on a multifaceted collaborative process mediated by community pharmacists for improving compliance with clinical guidelines within GPs' prescribing practices. OBJECTIVE: To assess, over a 9-year period (1999-2007), the cost-containment impact of the PPQCs. METHODS: The key elements of PPQCs are a structured continuous quality improvement and education process; local networking; feedback of comparative and detailed data regarding costs, drug choice, and frequency of prescribed drugs; and structured independent literature review for interdisciplinary continuing education. The data are drawn from community pharmacy invoices to the health insurance companies. The study analyzed the cost-containment impact of the PPQCs in comparison with GPs working in similar conditions of care without particular collaboration with pharmacists, the percentage of generic prescriptions for specific cardiovascular drug classes, and the percentage of drug costs or units prescribed for specific cardiovascular drugs. RESULTS: Over the 9-year period, there was a 42% decrease in drug costs in the PPQC group as compared to the control group, representing a savings of $225,000 (USD) per GP in 2007 alone. These results are explained by better compliance with clinical and pharmacovigilance guidelines, wider distribution of generic drugs, a more balanced attitude toward marketing strategies, and interdisciplinary continuing education on the rational use of drugs. CONCLUSIONS: The PPQC work process has yielded sustainable results, such as significant cost savings, higher penetration of generics, and reflection on patient safety and the place of "new" drugs in therapy.
The PPQCs may also constitute a solid basis for implementing more comprehensive collaborative programs, such as medication reviews, adherence-enhancing interventions, or disease management approaches.
Abstract:
EXECUTIVE SUMMARY: Evaluating the Information Security posture of an organization is becoming a very complex task. Currently, the evaluation and assessment of Information Security are commonly performed using frameworks, methodologies and standards which often consider the various aspects of security independently. Unfortunately this is ineffective, because it does not take into consideration the necessity of a global, systemic and multidimensional approach to Information Security evaluation. At the same time, the overall security level is generally considered to be only as strong as its weakest link. This thesis proposes a model aiming to holistically assess all dimensions of security in order to minimize the likelihood that a given threat will exploit the weakest link. A formalized structure taking into account all security elements is presented; this is based on a methodological evaluation framework in which Information Security is evaluated from a global perspective. This dissertation is divided into three parts. Part One: Information Security Evaluation Issues consists of four chapters. Chapter 1 is an introduction to the purpose of this research and the model that will be proposed. In this chapter we raise some questions with respect to "traditional evaluation methods" and identify the principal elements to be addressed in this direction. We then introduce the baseline attributes of our model and set out the expected results of evaluations according to it. Chapter 2 is focused on the definition of Information Security to be used as a reference point for our evaluation model. The concepts inherent in a holistic, baseline Information Security program are defined. On this basis, the most common roots of trust in Information Security are identified. Chapter 3 focuses on an analysis of the difference and the relationship between the concepts of Information Risk and Security Management.
Comparing these two concepts allows us to identify the most relevant elements to be included within our evaluation model, while clearly situating these two notions within a defined framework, which is of the utmost importance for the results that will be obtained from the evaluation process. Chapter 4 sets out our evaluation model and the way it addresses issues relating to the evaluation of Information Security. Within this chapter the underlying concepts of assurance and trust are discussed. Based on these two concepts, the structure of the model is developed in order to provide an assurance-related platform as well as three evaluation attributes: "assurance structure", "quality issues", and "requirements achievement". Issues relating to each of these evaluation attributes are analysed with reference to sources such as methodologies, standards and published research papers. Then the operation of the model is discussed. Assurance levels, quality levels and maturity levels are defined in order to perform the evaluation according to the model. Part Two: Implementation of the Information Security Assurance Assessment Model (ISAAM) according to the Information Security Domains consists of four chapters. This is the section where our evaluation model is put into a well-defined context with respect to the four pre-defined Information Security dimensions: the Organizational, Functional, Human and Legal dimensions. Each Information Security dimension is discussed in a separate chapter. For each dimension, the following two-phase evaluation path is followed. The first phase concerns the identification of the elements which will constitute the basis of the evaluation: the key elements within the dimension; the Focus Areas for the dimension, consisting of the security issues identified for it; and the Specific Factors for the dimension, consisting of the security measures or controls addressing the security issues identified. The second phase concerns the evaluation of each Information Security dimension through: the implementation of the evaluation model, based on the elements identified in the first phase, by identifying the security tasks, processes, procedures, and actions that should have been performed by the organization to reach the desired level of protection; and a maturity model for each dimension as a basis for reliance on security. For each dimension we propose a generic maturity model that could be used by any organization in order to define its own security requirements. Part Three of this dissertation contains the Final Remarks, Supporting Resources and Annexes. With reference to the objectives of our thesis, the Final Remarks briefly analyse whether these objectives were achieved and suggest directions for future related research. The Supporting Resources comprise the bibliographic resources that were used to elaborate and justify our approach. The Annexes include all the relevant topics identified within the literature to illustrate certain aspects of our approach. Our Information Security evaluation model is based on, and integrates, different Information Security best practices, standards, methodologies and research expertise, which can be combined in order to define a reliable categorization of Information Security. After the definition of terms and requirements, an evaluation process should be performed in order to obtain evidence that the Information Security within the organization in question is adequately managed. We have specifically integrated into our model the most useful elements of these sources of information in order to provide a generic model able to be implemented in all kinds of organizations.
The value added by our evaluation model is that it is easy to implement and operate and answers concrete needs in terms of reliance upon an efficient and dynamic evaluation tool through a coherent evaluation system. On that basis, our model could be implemented internally within organizations, allowing them to govern better their Information Security. RÉSUMÉ : Contexte général de la thèse L'évaluation de la sécurité en général, et plus particulièrement, celle de la sécurité de l'information, est devenue pour les organisations non seulement une mission cruciale à réaliser, mais aussi de plus en plus complexe. A l'heure actuelle, cette évaluation se base principalement sur des méthodologies, des bonnes pratiques, des normes ou des standards qui appréhendent séparément les différents aspects qui composent la sécurité de l'information. Nous pensons que cette manière d'évaluer la sécurité est inefficiente, car elle ne tient pas compte de l'interaction des différentes dimensions et composantes de la sécurité entre elles, bien qu'il soit admis depuis longtemps que le niveau de sécurité globale d'une organisation est toujours celui du maillon le plus faible de la chaîne sécuritaire. Nous avons identifié le besoin d'une approche globale, intégrée, systémique et multidimensionnelle de l'évaluation de la sécurité de l'information. En effet, et c'est le point de départ de notre thèse, nous démontrons que seule une prise en compte globale de la sécurité permettra de répondre aux exigences de sécurité optimale ainsi qu'aux besoins de protection spécifiques d'une organisation. Ainsi, notre thèse propose un nouveau paradigme d'évaluation de la sécurité afin de satisfaire aux besoins d'efficacité et d'efficience d'une organisation donnée. 
Nous proposons alors un modèle qui vise à évaluer d'une manière holistique toutes les dimensions de la sécurité, afin de minimiser la probabilité qu'une menace potentielle puisse exploiter des vulnérabilités et engendrer des dommages directs ou indirects. Ce modèle se base sur une structure formalisée qui prend en compte tous les éléments d'un système ou programme de sécurité. Ainsi, nous proposons un cadre méthodologique d'évaluation qui considère la sécurité de l'information à partir d'une perspective globale. Structure de la thèse et thèmes abordés Notre document est structuré en trois parties. La première intitulée : « La problématique de l'évaluation de la sécurité de l'information » est composée de quatre chapitres. Le chapitre 1 introduit l'objet de la recherche ainsi que les concepts de base du modèle d'évaluation proposé. La maniéré traditionnelle de l'évaluation de la sécurité fait l'objet d'une analyse critique pour identifier les éléments principaux et invariants à prendre en compte dans notre approche holistique. Les éléments de base de notre modèle d'évaluation ainsi que son fonctionnement attendu sont ensuite présentés pour pouvoir tracer les résultats attendus de ce modèle. Le chapitre 2 se focalise sur la définition de la notion de Sécurité de l'Information. Il ne s'agit pas d'une redéfinition de la notion de la sécurité, mais d'une mise en perspectives des dimensions, critères, indicateurs à utiliser comme base de référence, afin de déterminer l'objet de l'évaluation qui sera utilisé tout au long de notre travail. Les concepts inhérents de ce qui constitue le caractère holistique de la sécurité ainsi que les éléments constitutifs d'un niveau de référence de sécurité sont définis en conséquence. Ceci permet d'identifier ceux que nous avons dénommés « les racines de confiance ». 
Le chapitre 3 présente et analyse la différence et les relations qui existent entre les processus de la Gestion des Risques et de la Gestion de la Sécurité, afin d'identifier les éléments constitutifs du cadre de protection à inclure dans notre modèle d'évaluation. Le chapitre 4 est consacré à la présentation de notre modèle d'évaluation Information Security Assurance Assessment Model (ISAAM) et la manière dont il répond aux exigences de l'évaluation telle que nous les avons préalablement présentées. Dans ce chapitre les concepts sous-jacents relatifs aux notions d'assurance et de confiance sont analysés. En se basant sur ces deux concepts, la structure du modèle d'évaluation est développée pour obtenir une plateforme qui offre un certain niveau de garantie en s'appuyant sur trois attributs d'évaluation, à savoir : « la structure de confiance », « la qualité du processus », et « la réalisation des exigences et des objectifs ». Les problématiques liées à chacun de ces attributs d'évaluation sont analysées en se basant sur l'état de l'art de la recherche et de la littérature, sur les différentes méthodes existantes ainsi que sur les normes et les standards les plus courants dans le domaine de la sécurité. Sur cette base, trois différents niveaux d'évaluation sont construits, à savoir : le niveau d'assurance, le niveau de qualité et le niveau de maturité qui constituent la base de l'évaluation de l'état global de la sécurité d'une organisation. La deuxième partie: « L'application du Modèle d'évaluation de l'assurance de la sécurité de l'information par domaine de sécurité » est elle aussi composée de quatre chapitres. Le modèle d'évaluation déjà construit et analysé est, dans cette partie, mis dans un contexte spécifique selon les quatre dimensions prédéfinies de sécurité qui sont: la dimension Organisationnelle, la dimension Fonctionnelle, la dimension Humaine, et la dimension Légale. 
Each of these dimensions and its specific assessment is the subject of a separate chapter. For each dimension, a two-phase assessment is constructed as follows. The first phase concerns the identification of the elements that constitute the basis of the assessment:
- identification of the key elements of the assessment;
- identification of the "Focus Areas" of each dimension, which represent the issues found in that dimension;
- identification of the "Specific Factors" of each Focus Area, which represent the security and control measures that help resolve or reduce the impact of risks.
The second phase concerns the assessment of each of the dimensions presented above. It consists, on the one hand, of applying the general assessment model to the dimension concerned by:
- relying on the elements specified in the first phase;
- identifying the specific security tasks, processes and procedures that should have been carried out to reach the desired level of protection.
On the other hand, the assessment of each dimension is complemented by a maturity model specific to that dimension, to be considered as a reference base for the overall security level. For each dimension we propose a generic maturity model that any organization can use to specify its own security requirements. This constitutes an innovation in the field of assessment, which we justify for each dimension and whose added value we systematically highlight. The third part of our document concerns the overall validation of our proposal and contains, by way of conclusion, a critical perspective on our work and final remarks. This last part is completed by a bibliography and appendices.
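The dimension / Focus Area / Specific Factor hierarchy described above can be sketched as a simple data structure. This is an illustrative sketch only, not the thesis's own implementation: all class names, field names, and the averaging rule for aggregating maturity scores are assumptions made for the example.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the assessment hierarchy described above:
# each security dimension groups Focus Areas, each Focus Area groups
# Specific Factors (security/control measures), and each factor is
# scored against an assumed 0-5 maturity scale.

@dataclass
class SpecificFactor:
    name: str
    maturity: int  # assessed maturity level, e.g. 0 (absent) .. 5 (optimized)

@dataclass
class FocusArea:
    name: str
    factors: list[SpecificFactor] = field(default_factory=list)

    def maturity(self) -> float:
        # Assumed aggregation: a Focus Area's maturity is the average
        # of its factors' scores.
        return sum(f.maturity for f in self.factors) / len(self.factors)

@dataclass
class Dimension:
    name: str
    focus_areas: list[FocusArea] = field(default_factory=list)

    def maturity(self) -> float:
        return sum(a.maturity() for a in self.focus_areas) / len(self.focus_areas)

# Example: a made-up fragment of the Organizational dimension.
org = Dimension("Organizational", [
    FocusArea("Security governance", [
        SpecificFactor("Security policy defined", 4),
        SpecificFactor("Roles and responsibilities assigned", 2),
    ]),
])
print(f"{org.name}: maturity {org.maturity():.1f}")  # Organizational: maturity 3.0
```

The point of the sketch is only that the two assessment phases map naturally onto this tree: phase one populates the structure, phase two scores it against a per-dimension maturity model.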
Our security assessment model integrates and builds on numerous sources of expertise, such as best practices, norms, standards, methods and the expertise of scientific research in the field. Our constructive proposal addresses a genuine, still unsolved problem faced by all organizations, regardless of size and profile. It would allow them to specify their particular requirements regarding the security level to be met and to instantiate an assessment process tailored to their needs, so that they can ensure that their information security is managed appropriately, thus providing a certain level of confidence in the degree of protection delivered. We have integrated into our model the best of the know-how, experience and expertise currently available internationally, with the aim of providing an assessment model that is simple, generic and applicable to a large number of public or private organizations. The added value of our assessment model lies precisely in the fact that it is sufficiently generic and easy to implement while addressing the concrete needs of organizations. Our proposal thus constitutes a reliable, efficient and dynamic assessment tool derived from a coherent assessment approach. As a result, our assessment system can be implemented internally by the company itself, without resorting to additional resources, and also gives it the means to better govern its information security.
Abstract:
The generic concept of the artificial meteorite experiment STONE is to fix rock samples bearing microorganisms on the heat shield of a recoverable space capsule and to study their modifications during atmospheric re-entry. The STONE-5 experiment was performed mainly to answer astrobiological questions. The rock samples mounted on the heat shield were used (i) as a carrier for microorganisms and (ii) as internal control to verify whether physical conditions during atmospheric re-entry were comparable to those experienced by "real" meteorites. Samples of dolerite (an igneous rock), sandstone (a sedimentary rock), and gneiss impactite from Haughton Crater carrying endolithic cyanobacteria were fixed to the heat shield of the unmanned recoverable capsule FOTON-M2. Holes drilled on the back side of each rock sample were loaded with bacterial and fungal spores and with dried vegetative cryptoendoliths. The front of the gneissic sample was also soaked with cryptoendoliths.

The mineralogical differences between pre- and post-flight samples are detailed. Despite intense ablation resulting in deeply eroded samples, all rocks survived atmospheric re-entry in part. Temperatures attained during re-entry were high enough to melt dolerite, silica, and the gneiss impactite sample. The formation of fusion crusts in STONE-5 was a real novelty and strengthens the link with real meteorites. The exposed part of the dolerite is covered by a fusion crust consisting of silicate glass formed from the rock sample with an admixture of holder material (silica). Compositionally, the fusion crust varies from silica-rich areas (undissolved silica fibres of the holder material) to areas whose composition is "basaltic". Likewise, the fusion crust on the exposed gneiss surface was formed from gneiss with an admixture of holder material. The corresponding composition of the fusion crust varies from silica-rich areas to areas with "gneiss" composition (main component potassium-rich feldspar).

The sandstone sample was retrieved intact and did not develop a fusion crust. Thermal decomposition of the calcite matrix followed by disintegration and liberation of the silicate grains prevented the formation of a melt. Furthermore, the non-exposed surface of all samples experienced strong thermal alterations. Hot gases released during ablation pervaded the empty space between sample and sample holder, leading to intense local heating. The intense heating below the protective sample holder led to surface melting of the dolerite rock and to the formation of calcium-silicate rims on quartz grains in the sandstone sample. (c) 2008 Elsevier Ltd. All rights reserved.
Abstract:
The theory of language has occupied a special place in the history of Indian thought. Indian philosophers give particular attention to the analysis of the cognition obtained from language, known under the generic name of śābdabodha. This term is used to denote, among other things, the cognition episode of the hearer, the content of which is described in the form of a paraphrase of a sentence represented as a hierarchical structure. Philosophers submit the meaning of the component items of a sentence and their relationship to a thorough examination, and represent the content of the resulting cognition as a paraphrase centred on a meaning element that is taken as the principal qualificand (mukhyaviśesya) and is qualified by the other meaning elements. This analysis is the object of continuous debate, over a period of more than a thousand years, between the philosophers of the schools of Mīmāmsā, Nyāya (mainly in its Navya form) and Vyākarana. While these philosophers are in complete agreement on the idea that the cognition of sentence meaning has a hierarchical structure, and share the concept of a single principal qualificand (qualified by other meaning elements), they strongly disagree on the question of which meaning element has this role and by which morphological item it is expressed. This disagreement is the central point of their debate and gives rise to competing versions of this theory. The Mīmāmsakas argue that the principal qualificand is what they call bhāvanā ('bringing into being', 'efficient force' or 'productive operation'), expressed by the verbal affix and distinct from the specific procedures signified by the verbal root; the Naiyāyikas generally take it to be the meaning of the word with the first case ending, while the Vaiyākaranas take it to be the operation expressed by the verbal root.
All the participants rely on the Pāninian grammar, insofar as the Mīmāmsakas and Naiyāyikas do not compose a new grammar of Sanskrit, but use different interpretive strategies to justify their views, which are often in overt contradiction with the interpretation of the Pāninian rules accepted by the Vaiyākaranas. In each of the three positions, weakness in one area is compensated by strength in another, and the cumulative force of the total argumentation shows that no position can be declared correct or overall superior to the others. This book is an attempt to understand this debate, and to show that, to make full sense of the irreconcilable positions of the three schools, one must go beyond linguistic factors and consider the very beginnings of each school's concern with the issue under scrutiny. The texts, and particularly the late texts of each school, present very complex versions of the theory, yet the key to understanding why these positions remain irreconcilable seems to lie elsewhere, in spite of extensive argumentation involving a great deal of linguistic and logical technicalities. Historically, this theory arises in Mīmāmsā (with Śabara and Kumārila), then in Nyāya (with Udayana), in a doctrinal and theological context, as a byproduct of the debate over Vedic authority. The Navya-Vaiyākaranas enter this debate last (with Bhattoji Dīksita and Kaunda Bhatta), with the declared aim of refuting the arguments of the Mīmāmsakas and Naiyāyikas by bringing to light the shortcomings in their understanding of Pāninian grammar. The central argument has focused on the capacity of the initial contexts, together with the network of issues to which the principal qualificand theory is connected, to render intelligible the presuppositions and aims behind the complex linguistic justification of the classical and late stages of this debate.
Reading the debate in this light not only reveals the rationality and internal coherence of each position beyond the linguistic arguments, but makes it possible to understand why the thinkers of the three schools have continued to hold on to three mutually exclusive positions. They are defending not only their version of the principal qualificand theory, but (though not openly acknowledged) the entire network of arguments, linguistic and/or extra-linguistic, to which this theory is connected, as well as the presuppositions and aims underlying these arguments.