53 results for Resolution in azimuth direction
Abstract:
Originally invented for topographic imaging, atomic force microscopy (AFM) has evolved into a multifunctional biological toolkit, enabling the measurement of structural and functional details of cells and molecules. Its versatility and the large scope of information it can yield make it an invaluable tool in any biologically oriented laboratory, where researchers need to characterize living samples as well as single molecules in quasi-physiological conditions and with nanoscale resolution. In the last 20 years, AFM has revolutionized the characterization of microbial cells by allowing a better understanding of their cell wall and of the mechanism of action of drugs, and by becoming itself a powerful diagnostic tool to study bacteria. Indeed, AFM is much more than a high-resolution microscopy technique. It can reconstruct force maps that can be used to explore the nanomechanical properties of microorganisms and, at the same time, probe the morphological and mechanical modifications induced by external stimuli. Furthermore, it can be used to map chemical species or specific receptors with nanometric resolution directly on the membranes of living organisms. In summary, AFM offers new capabilities and more in-depth insight into the structure and mechanics of biological specimens, with unrivaled spatial and force resolution. Its application to the study of bacteria is extremely significant, since it has already delivered important information on the metabolism of these small microorganisms and, through new and exciting technical developments, will shed more light on the real-time interaction of antimicrobial agents and bacteria.
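To make the force-map idea concrete, here is a minimal sketch (not taken from the abstract) of how a single AFM force-indentation curve is commonly reduced to an apparent Young's modulus by fitting the Hertz contact model; the tip radius, Poisson ratio and the synthetic curve are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hertz model for a spherical indenter: F = 4/3 * E/(1 - nu^2) * sqrt(R) * delta^(3/2)
def hertz_force(delta, youngs_modulus, tip_radius=10e-9, poisson=0.5):
    return (4.0 / 3.0) * youngs_modulus / (1.0 - poisson**2) * np.sqrt(tip_radius) * delta**1.5

# Synthetic force-indentation curve (indentation in m, force in N) standing in for AFM data.
delta = np.linspace(0, 200e-9, 100)
force = hertz_force(delta, 5e3) + np.random.normal(0, 2e-12, delta.size)  # ~5 kPa sample

# Fit the apparent Young's modulus from the approach curve.
(E_fit,), _ = curve_fit(lambda d, E: hertz_force(d, E), delta, force, p0=[1e3])
print(f"Apparent Young's modulus: {E_fit / 1e3:.1f} kPa")
```

Repeating such a fit at every pixel of a force map is what yields the nanomechanical maps mentioned above.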
Abstract:
Although the airways were traditionally thought to be sterile, accumulating evidence now supports the concept that they harbor a microbiome. Thus far, studies have focused on characterizing the bacterial constituents of the airway microbiome in both healthy and diseased lungs, but what perhaps provides the greatest impetus for the exploration of the airway microbiome is that different bacterial phyla appear to dominate diseased as compared with healthy lungs. As yet, there is very limited evidence supporting a functional role for the airway microbiome, but continued research in this direction is likely to provide such evidence, particularly considering the progress that has been made in understanding host-microbe mutualism in the intestinal tract. In this review, we highlight the major advances that have been made in discovering and describing the airway microbiome, discuss the experimental evidence that supports a functional role for the microbiome in health and disease, and propose how this emerging field will impact clinical practice.
Abstract:
Instead of standard rigid thoracoscopes, we used a modified gastroscope for video assistance during 12 minimally invasive left internal mammary artery harvests. The flexibility and remote control of its last few centimeters give the gastroscope total freedom of movement and perfect positioning in every direction. The scope is equipped with a cold light source, a suction channel and an irrigation channel, which allow in situ washing without removing it from the thoracic cavity. Thanks to these advantages, vision and lighting are always excellent.
Abstract:
The diagnosis of muscular dystrophies and the assessment of the functional benefit of gene or cell therapies can be difficult, especially for poorly accessible muscles, and they often lack single-fiber resolution. In the present study, we evaluated whether muscle diseases can be diagnosed from small biopsies using atomic force microscopy (AFM). AFM was shown to provide a sensitive and quantitative description of the resistance of normal and dystrophic myofibers within live muscle tissues explanted from Duchenne mdx mice. The rescue of dystrophin expression by gene therapy approaches led to the functional recovery of treated dystrophic muscle fibers, as probed using AFM and by in situ whole-muscle strength measurements. Comparison of muscles treated with viral or non-viral vectors indicated that the efficacy of the gene transfer approaches could be distinguished with single-myofiber resolution. This indicated full correction of the resistance to deformation in nearly all of the muscle fibers treated with an adeno-associated viral vector that mediates exon skipping on the dystrophin mRNA. Having shown that AFM can provide a quantitative assessment of the expression of muscle proteins and of muscular function in animal models, we assessed myofiber resistance in the context of human muscular dystrophies and myopathies. Various forms of human Becker syndrome could thus also be detected using AFM in blind studies of small frozen biopsies from human patients. Interestingly, AFM also allowed the detection of anomalies in a fraction of the muscle fibers from patients showing a muscle weakness that could not be attributed to a known molecular or genetic defect. Overall, we conclude that AFM may provide a useful method to complement current diagnostic tools for known and unknown muscular diseases, both in research and in a clinical context.
Abstract:
For a wide range of environmental, hydrological, and engineering applications there is a fast-growing need for high-resolution imaging. In this context, waveform tomographic imaging of crosshole georadar data is a powerful method able to provide images of pertinent electrical properties in near-surface environments with unprecedented spatial resolution. In contrast, conventional ray-based tomographic methods, which consider only a very limited part of the recorded signal (first-arrival traveltimes and maximum first-cycle amplitudes), suffer from inherent limitations in resolution and may prove to be inadequate in complex environments. For a typical crosshole georadar survey, the potential improvement in resolution when using waveform-based approaches instead of ray-based approaches is in the range of one order of magnitude. Moreover, the spatial resolution of waveform-based inversions is comparable to that of common logging methods. While waveform tomographic imaging has become well established in exploration seismology over the past two decades, it remains comparatively underdeveloped in the georadar domain despite corresponding needs. Recently, different groups have presented finite-difference time-domain waveform inversion schemes for crosshole georadar data, which are adaptations and extensions of Tarantola's seminal nonlinear generalized least-squares approach developed for the seismic case. First applications of these new crosshole georadar waveform inversion schemes to synthetic and field data have shown promising results. However, little is known about the limits and performance of such schemes in complex environments. The general motivation of my thesis is therefore the evaluation of the robustness and limitations of waveform inversion algorithms for crosshole georadar data, in order to apply such schemes to a wide range of real-world problems. One crucial issue in making any waveform scheme applicable and effective for real-world crosshole georadar problems is the accurate estimation of the source wavelet, which is unknown in reality. Waveform inversion schemes for crosshole georadar data require forward simulations of the wavefield in order to iteratively solve the inverse problem; accurate knowledge of the source wavelet is therefore critically important for the successful application of such schemes. Relatively small differences in the estimated source wavelet shape can lead to large differences in the resulting tomograms. In the first part of my thesis, I explore the viability and robustness of a relatively simple iterative deconvolution technique that incorporates the estimation of the source wavelet into the waveform inversion procedure rather than adding additional model parameters to the inversion problem. Extensive tests indicate that this source wavelet estimation technique is simple yet effective, and is able to provide remarkably accurate and robust estimates of the source wavelet in the presence of strong heterogeneity in both the dielectric permittivity and the electrical conductivity, as well as significant ambient noise in the recorded data.
Furthermore, our tests also indicate that the approach is insensitive to the phase characteristics of the starting wavelet, which is not the case when the wavelet estimation is directly incorporated into the inverse problem. Another critical issue with crosshole georadar waveform inversion schemes that clearly needs to be investigated is the consequence of the common assumption of frequency-independent electromagnetic constitutive parameters. This is crucial because, in reality, these parameters are known to be frequency-dependent and complex, and recorded georadar data may therefore show significant dispersive behaviour. In particular, in the presence of water there is a wide body of evidence showing that the dielectric permittivity can be significantly frequency-dependent over the GPR frequency range, due to a variety of relaxation processes. The second part of my thesis is therefore dedicated to the evaluation of the reconstruction limits of a non-dispersive crosshole georadar waveform inversion scheme in the presence of varying degrees of dielectric dispersion. I show that the inversion algorithm, combined with the iterative deconvolution-based source wavelet estimation procedure, which is partially able to account for the frequency-dependent effects through an "effective" wavelet, performs remarkably well in weakly to moderately dispersive environments and is able to provide adequate tomographic reconstructions.
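As a rough illustration of the deconvolution-based wavelet estimation discussed above (a sketch under simplifying assumptions, not the thesis' actual algorithm; the water-level regularization and the trace layout are assumptions), the source wavelet can be updated by a stabilized spectral division of the observed traces by impulse-response traces simulated with the current model:

```python
import numpy as np

def estimate_wavelet(observed, simulated_impulse, water_level=1e-2):
    """One deconvolution step: estimate the source wavelet that, convolved with the
    simulated impulse response of the current model, best explains the observed traces.
    Arrays are (n_traces, n_samples). Illustrative only; a real scheme would re-estimate
    the wavelet as the waveform inversion updates the model."""
    n = observed.shape[-1]
    O = np.fft.rfft(observed, n=n, axis=-1)
    G = np.fft.rfft(simulated_impulse, n=n, axis=-1)
    # Least-squares spectral division, summed over all traces, with water-level damping.
    num = np.sum(np.conj(G) * O, axis=0)
    den = np.sum(np.abs(G) ** 2, axis=0)
    den = np.maximum(den, water_level * den.max())
    return np.fft.irfft(num / den, n=n)
```

In an iterative scheme this estimate is recomputed as the permittivity and conductivity models are updated, which is what allows the "effective" wavelet to absorb part of the unmodelled dispersion.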
Abstract:
Most of the novel targeted anticancer agents share the classical characteristics that define drugs as candidates for blood concentration monitoring: long-term therapy; high interindividual but restricted intraindividual variability; significant drug-drug and drug-food interactions; correlations between concentration and efficacy/toxicity with a rather narrow therapeutic index; reversibility of effects; and absence of early markers of response. Surprisingly, though, therapeutic concentration monitoring has received little attention for these drugs despite reiterated suggestions from clinical pharmacologists. Several issues explain the lack of clinical research and development in this field: a global tradition of empiricism regarding treatment monitoring, the lack of a formal conceptual framework, ethical difficulties in the elaboration of controlled clinical trials, disregard from both drug manufacturers and public funders, limited encouragement from regulatory authorities, and practical hurdles making dosage adjustment based on concentration monitoring a difficult task for prescribers. However, new technologies may soon help us overcome these obstacles, with the advent of miniaturized measurement devices able to quantify circulating drug concentrations at the point of care, to evaluate their plausibility given the actual dosage and sampling time, to determine their appropriateness with reference to therapeutic targets, and to advise on suitable dosage adjustment. Such developments could bring conceptual changes to the clinical development of drugs such as anticancer agents, while increasing the therapeutic impact of population PK-PD studies and systematic reviews. Research efforts in that direction from the clinical pharmacology community will be essential for patients to receive the greatest benefits and the least harm from new anticancer treatments. The example of imatinib, the first commercialized tyrosine kinase inhibitor, will be outlined to illustrate a potential research agenda for the rational development of therapeutic concentration monitoring.
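As a purely illustrative sketch of the kind of dosage advice such point-of-care tools could automate (assuming dose-proportional exposure, which does not hold for every agent, and a hypothetical target trough concentration), a simple proportional adjustment looks like this:

```python
def proportional_dose_adjustment(current_dose_mg, measured_trough, target_trough):
    """Suggest a new daily dose assuming exposure scales linearly with dose.
    Hypothetical helper for illustration; a real tool would also check the
    plausibility of the measured level given dosing history and sampling time."""
    if measured_trough <= 0:
        raise ValueError("measured trough must be positive")
    return current_dose_mg * target_trough / measured_trough

# Example: a 400 mg/day regimen with a measured trough of 600 ng/mL and a
# hypothetical target of 1000 ng/mL suggests roughly 667 mg/day, to be rounded
# to a practical dose strength by the prescriber.
print(proportional_dose_adjustment(400, 600, 1000))
```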
Abstract:
1. Aim - Concerns over how global change will influence species distributions, in conjunction with increased emphasis on understanding niche dynamics in evolutionary and community contexts, highlight the growing need for robust methods to quantify niche differences between or within taxa. We propose a statistical framework to describe and compare environmental niches from occurrence and spatial environmental data. 2. Location - Europe, North America, South America. 3. Methods - The framework applies kernel smoothers to densities of species occurrence in gridded environmental space to calculate metrics of niche overlap and test hypotheses regarding niche conservatism. We use this framework and simulated species with predefined distributions and amounts of niche overlap to evaluate several ordination and species distribution modeling techniques for quantifying niche overlap. We illustrate the approach with data on two well-studied invasive species. 4. Results - We show that niche overlap can be accurately detected with the framework when the variables driving the distributions are known. The method is robust to known and previously undocumented biases related to the dependence of species occurrences on the frequency of environmental conditions that occur across geographic space. The use of a kernel smoother makes the process of moving from geographical space to multivariate environmental space independent of both sampling effort and the arbitrary choice of resolution in environmental space. However, the use of ordination and species distribution modeling techniques for selecting, combining and weighting the variables on which niche overlap is calculated provides contrasting results. 5. Main conclusions - The framework meets the increasing need for robust methods to quantify niche differences. It is appropriate for studying niche differences between species, subspecies or intraspecific lineages that differ in their geographical distributions. Alternatively, it can be used to measure the degree to which the environmental niche of a species or intraspecific lineage has changed over time.
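For concreteness, here is a minimal Python sketch of the core idea (kernel-smoothed occurrence densities on a grid in environmental space, compared with Schoener's D). Variable names and the grid are illustrative, and the published framework additionally corrects occurrence densities by the availability of environmental conditions across the study areas, which this sketch omits.

```python
import numpy as np
from scipy.stats import gaussian_kde

def schoener_d(occ1, occ2, env_background, grid_size=100):
    """Niche overlap (Schoener's D) between two species from occurrence coordinates
    in a 2-D environmental space (e.g., the first two PCA axes of climate variables).
    occ1, occ2: (n, 2) arrays of occurrences; env_background: (m, 2) array of
    available environmental conditions used only to bound the grid here."""
    xmin, ymin = env_background.min(axis=0)
    xmax, ymax = env_background.max(axis=0)
    xx, yy = np.meshgrid(np.linspace(xmin, xmax, grid_size),
                         np.linspace(ymin, ymax, grid_size))
    grid = np.vstack([xx.ravel(), yy.ravel()])

    z = []
    for occ in (occ1, occ2):
        dens = gaussian_kde(occ.T)(grid)      # kernel-smoothed occurrence density
        z.append(dens / dens.sum())           # normalize to a probability surface
    return 1.0 - 0.5 * np.abs(z[0] - z[1]).sum()
```

A value of 1 indicates identical smoothed niches and 0 indicates no overlap, which is the scale on which the conservatism tests described above operate.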
Abstract:
PREMISE OF THE STUDY: Numerous long-term studies in seasonal habitats have tracked interannual variation in first flowering date (FFD) in relation to climate, documenting the effect of warming on the FFD of many species. Despite these efforts, long-term phenological observations are still lacking for many species. If we could forecast responses based on taxonomic affinity, however, then we could leverage existing data to predict the climate-related phenological shifts of many taxa not yet studied. METHODS: We examined phenological time series of 1226 species occurrences (1031 unique species in 119 families) across seven sites in North America and England to determine whether family membership (or family mean FFD) predicts the sensitivity of FFD to standardized interannual changes in temperature and precipitation during seasonal periods before flowering and whether families differ significantly in the direction of their phenological shifts. KEY RESULTS: Patterns observed among species within and across sites are mirrored among family means across sites; early-flowering families advance their FFD in response to warming more than late-flowering families. By contrast, we found no consistent relationships among taxa between mean FFD and sensitivity to precipitation as measured here. CONCLUSIONS: Family membership can be used to identify taxa of high and low sensitivity to temperature within the seasonal, temperate zone plant communities analyzed here. The high sensitivity of early-flowering families (and the absence of early-flowering families not sensitive to temperature) may reflect plasticity in flowering time, which may be adaptive in environments where early-season conditions are highly variable among years.
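One plausible way to operationalize the "sensitivity of FFD to standardized interannual changes in temperature" described above (a sketch only; the study's exact covariates and pre-flowering windows are not reproduced here) is the slope of a per-species regression of FFD on z-scored pre-season temperature:

```python
import numpy as np

def phenological_sensitivity(ffd_by_year, temp_by_year):
    """Slope of first flowering date (FFD, day of year) against standardized
    pre-flowering temperature: days of advance or delay per standard deviation
    of temperature. Illustrative reading of 'sensitivity' only."""
    temp_z = (temp_by_year - temp_by_year.mean()) / temp_by_year.std()
    slope, intercept = np.polyfit(temp_z, ffd_by_year, 1)
    return slope  # negative slope = earlier flowering in warm years

# Synthetic example: a species that advances roughly 3 days per SD of spring warming.
spring_temp = np.array([8.2, 9.1, 7.5, 10.0, 8.8, 9.6])
ffd = np.array([130, 127, 133, 124, 128, 126])
print(phenological_sensitivity(ffd, spring_temp))
```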
Abstract:
Introduction: The primary somatosensory cortex (SI) contains Brodmann areas (BA) 1, 2, 3a, and 3b. Research in non-human primates showed that BAs 3b, 1, and 2 each contain one full representation of the hand, with separate representations for each finger. This research also showed that the finger representation in BA3b has larger and clearer finger somatotopy than BA1 and BA2. Although several efforts to map finger somatotopy in SI by fMRI have been made at 1.5 and 3T, these studies have yielded variable results and were not able to detect single-subject finger somatotopy, probably due to the limited spatial extent of the cortical areas representing a digit (close to the resolution of most fMRI experiments), complications in acquiring consistent maps for individual subjects (Schweizer et al., 2008), or inter-individual variability in sulcal anatomy impeding group studies. Here, we used 7T fMRI to investigate finger somatotopy in SI, some of its functional characteristics, and its reproducibility. Methods: Eight right-handed male subjects were scanned on a 7T scanner (Siemens Medical, Germany) with an 8-channel Tx/Rx rf-coil (Rapid Biomedical, Germany). fMRI data at 1.3x1.3x1.3mm3 resolution were acquired using a sinusoidal readout EPI sequence (Speck et al., 2008) with FOV=210mm, TE/TR=27ms/2.5s, GRAPPA=2. Each volume contained 28 transverse slices covering SI. A single EPI volume with 64 slices was acquired to aid coregistration. Anatomical data at 1x1x1mm3 were acquired using the MP2RAGE sequence (Marques et al., 2009; TE/TR/TI1,2/TRmprage=2.63ms/7.2ms/0.9,3.2s/5s). Subjects were positioned supine in the scanner with their right arm resting comfortably against the magnet bore. An experimenter was positioned at the entrance of the bore, where he could easily reach and successively stroke the two distal phalanges of each digit. The order of stroked digits was D1 (thumb)-D3-D5-D2-D4, with 20s ON and 10s OFF alternated. This sequence was repeated four times per run, and two functional runs were acquired per subject. Realignment, smoothing (FWHM 2 mm), coregistration of the anatomical to the fMRI data, and calculation of t-statistics were done using SPM8. An SI mask was obtained via an F-contrast (p<0.001) over all digits. Within the mask, voxels were labeled with the number of the digit demonstrating the highest t-value for that particular voxel. Results: For all subjects, areas corresponding to the five digits were identified in contralateral SI. BA3b showed the most consistent somatotopic finger representation (see an example in Fig. 1). The five digits were localized in consecutive order in the cortex, with D1 most anterior, inferior and lateral and D5 most posterior, superior and medial (mean distance between centres of mass of digit representations ± stderr: 4.2±0.7mm; see Fig. 2). The analysis of average beta values within each finger representation region revealed the specificity of the somatotopic region to the tactile input for each tested finger (except digits 4 and 5). Five of these subjects also presented an orderly and consecutive representation of the five digits in BA1 and BA2. Conclusions: Our data reveal that the increased BOLD sensitivity at 7T and the high spatial resolution used in this study allow consistent somatotopic mapping using human touch as a stimulus, and that human SI contains at least three separate regions, each containing separate representations of all five contralateral fingers. Moreover, adjacent fingers were represented at adjacent cortical regions across the three SI regions.
The spatial organization of SI as reflected in individual subject topography corresponds well with previous electrophysiological data in non-human primates. The small distance between digit representations highlights the need for the high spatial resolution available at 7T.
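The voxel-labelling step described in the methods amounts to a winner-take-all assignment within the SI mask; a minimal numpy sketch (array shapes and the centre-of-mass helper are illustrative assumptions, not the authors' code) could look like this:

```python
import numpy as np

def label_digits(t_maps, mask):
    """Winner-take-all somatotopic labelling: within an SI mask, assign each voxel
    the digit (1-5) with the highest t-value, mirroring the voxel labelling step
    described in the abstract. t_maps: array (5, x, y, z); mask: boolean (x, y, z)."""
    labels = np.zeros(mask.shape, dtype=int)
    labels[mask] = np.argmax(t_maps[:, mask], axis=0) + 1   # digits numbered 1..5
    return labels

def centre_of_mass(labels, digit, voxel_size_mm=1.3):
    """Centre of mass (in mm) of one digit's representation, e.g. to measure the
    distance between adjacent digit representations."""
    coords = np.argwhere(labels == digit)
    return coords.mean(axis=0) * voxel_size_mm
```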
Abstract:
It has been suggested that Ménière's disease is part of a polyganglionitis in which symptoms result from the reactivation of a neurotropic virus within the internal auditory canal, and that intratympanic applications of an antiviral agent might be an efficient therapy. In 2002, we performed a pilot study that ended with encouraging results: control of vertigo was achieved in 80% of the 17 patients included. We present here a prospective, double-blind study, with a 2-year follow-up, in 29 patients referred by ENT practitioners for surgical treatment after failure of medical therapy. Participation in the study was offered to patients prior to surgery. A solution of ganciclovir 50 mg/ml or of NaCl 9% was delivered for 10 consecutive days via a microwick inserted into the tympanic membrane in the direction of the round window, or through a ventilation tube. One patient was withdrawn from the study immediately after the end of the injections; he could not complete the follow-up period because of persisting vertigo. As he had received the placebo, he was then treated with the ganciclovir solution. Symptoms persisted and he underwent a vestibular neurectomy. Among the remaining 28 patients, surgery could be postponed in 22 (81%). Surgery remained necessary to control vertigo in 3 patients from the group that received the antiviral agent and in 3 from the control group. Using an analog scale, patients of both groups indicated a similar improvement of their health immediately after the intratympanic injections. The scores obtained with a 36-item short-form health survey (SF-36) quality-of-life questionnaire and the Dizziness Handicap Inventory were also similar for both groups. In conclusion, most patients improved after the intratympanic injections, but there was no obvious difference between the treated and control groups. The benefit might be due to middle ear ventilation or reflect an improvement in the patients' emotional state.
Abstract:
PURPOSE: The objective of this experiment was to establish a continuous postmortem circulation in the vascular system of porcine lungs and to evaluate the pulmonary distribution of the perfusate. This research is performed within the broader scope of a project to revascularize Thiel-embalmed specimens, a technique that enables teaching anatomy, practicing surgical procedures and doing research under lifelike circumstances. METHODS: After cannulation of the pulmonary trunk and the left atrium, the vascular system was flushed with paraffinum perliquidum (PP) through a heart-lung machine. A continuous circulation was then established using red PP, during which perfusion parameters were measured. The distribution of contrast-containing PP in the pulmonary circulation was visualized on computed tomography. Finally, the amount of leakage from the vascular system was calculated. RESULTS: Reperfusion of the vascular system was maintained for 37 min. The flow rate ranged between 80 and 130 ml/min throughout the experiment, with acceptable perfusion pressures (range: 37-78 mm Hg). Computed tomography imaging and 3D reconstruction revealed a diffuse vascular distribution of PP and a decreasing vascularization ratio in the cranial direction. A self-limiting leak (66.8% of the circulating volume) towards the tracheobronchial tree due to vessel rupture was also measured. CONCLUSIONS: PP enables circulation in an isolated porcine lung model with an acceptable pressure-flow relationship, resulting in excellent recruitment of the vascular system. Despite these promising results, rupture of vessel walls may cause leaks. Further exploration of the perfusion capacities of PP in other organs is necessary. Eventually, this could lead to the development of reperfused Thiel-embalmed human bodies, which would have several applications.
Abstract:
Astrocyte Ca2+ signalling has been proposed to link neuronal information across different spatiotemporal dimensions to achieve a higher level of brain integration. However, discrepancies in the results of recent studies challenge this view and highlight key insufficiencies in our current understanding. In parallel, new experimental approaches that enable the study of astrocyte physiology at higher spatiotemporal resolution in intact brain preparations are beginning to reveal an unexpected level of compartmentalization and sophistication in astrocytic Ca2+ dynamics. This newly revealed complexity needs to be considered attentively in order to understand how astrocytes may contribute to brain information processing.
Abstract:
The purpose of this study was to assess the spatial resolution of a computed tomography (CT) scanner using an automatic approach developed for routine quality control, while varying CT parameters. The methods available to assess the modulation transfer function (MTF) with the automatic approach were Droege's method and the bead point source (BPS) method. These MTFs were compared with presampled MTFs obtained using Boone's method. The results show that Droege's method is not accurate in the low-frequency range, whereas the BPS method is highly sensitive to image noise. While both methods are well adapted to routine stability controls, it was shown that they are not able to provide absolute measurements. On the other hand, Boone's method, which is robust with respect to aliasing, more resilient to noise and able to provide absolute measurements, satisfies the commissioning requirements perfectly. Thus, Boone's method combined with a modified Catphan 600 phantom could be a good solution for assessing CT spatial resolution in the different CT planes.
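As a rough sketch of the point-source idea behind the BPS method (the actual implementations of Droege's, BPS and Boone's methods differ in important details not reproduced here), the MTF can be estimated as the radially averaged, normalized magnitude of the Fourier transform of a bead image:

```python
import numpy as np

def mtf_from_bead_image(psf_image, pixel_size_mm, n_bins=50):
    """Rough MTF estimate from a bead (point-source) image: magnitude of the 2-D
    Fourier transform of the point spread function, radially averaged and
    normalized at zero frequency. Sketch only; the noise sensitivity mentioned
    in the abstract stems precisely from this direct use of the PSF spectrum."""
    psf = psf_image - np.median(psf_image)                  # crude background removal
    otf = np.abs(np.fft.fftshift(np.fft.fft2(psf)))
    ny, nx = psf.shape
    fy = np.fft.fftshift(np.fft.fftfreq(ny, d=pixel_size_mm))
    fx = np.fft.fftshift(np.fft.fftfreq(nx, d=pixel_size_mm))
    radius = np.hypot(*np.meshgrid(fx, fy))                 # radial frequency, cycles/mm
    bins = np.linspace(0, radius.max(), n_bins)
    idx = np.digitize(radius.ravel(), bins)
    flat = otf.ravel()
    mtf = np.array([flat[idx == i].mean() if np.any(idx == i) else 0.0
                    for i in range(1, n_bins)])
    return bins[1:], mtf / mtf[0]
```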
Abstract:
EXECUTIVE SUMMARY: Evaluating the Information Security posture of an organization is becoming a very complex task. Currently, the evaluation and assessment of Information Security are commonly performed using frameworks, methodologies and standards which often consider the various aspects of security independently. Unfortunately, this is ineffective because it does not take into consideration the necessity of having a global and systemic multidimensional approach to Information Security evaluation, even though the overall security level is generally considered to be only as strong as its weakest link. This thesis proposes a model aiming to holistically assess all dimensions of security in order to minimize the likelihood that a given threat will exploit the weakest link. A formalized structure taking into account all security elements is presented; it is based on a methodological evaluation framework in which Information Security is evaluated from a global perspective. This dissertation is divided into three parts. Part One, Information Security Evaluation Issues, consists of four chapters. Chapter 1 is an introduction to the purpose of this research and to the model that will be proposed. In this chapter we raise some questions with respect to "traditional evaluation methods" and identify the principal elements to be addressed in this direction. We then introduce the baseline attributes of our model and set out the expected results of evaluations performed according to it. Chapter 2 is focused on the definition of Information Security to be used as a reference point for our evaluation model. The concepts inherent in a holistic, baseline Information Security program are defined. Based on this, the most common roots of trust in Information Security are identified. Chapter 3 focuses on an analysis of the difference and the relationship between the concepts of Information Risk and Security Management. Comparing these two concepts allows us to identify the most relevant elements to be included in our evaluation model, while clearly situating these two notions within a defined framework is of the utmost importance for the results that will be obtained from the evaluation process. Chapter 4 sets out our evaluation model and the way it addresses issues relating to the evaluation of Information Security. Within this chapter, the underlying concepts of assurance and trust are discussed. Based on these two concepts, the structure of the model is developed in order to provide an assurance-related platform as well as three evaluation attributes: "assurance structure", "quality issues", and "requirements achievement". Issues relating to each of these evaluation attributes are analysed with reference to sources such as methodologies, standards and published research papers. The operation of the model is then discussed: assurance levels, quality levels and maturity levels are defined in order to perform the evaluation according to the model. Part Two, Implementation of the Information Security Assurance Assessment Model (ISAAM) According to the Information Security Domains, also consists of four chapters. This is the section where our evaluation model is put into a well-defined context with respect to the four pre-defined Information Security dimensions: the Organizational dimension, the Functional dimension, the Human dimension, and the Legal dimension. Each Information Security dimension is discussed in a separate chapter.
For each dimension, the following two-phase evaluation path is followed. The first phase concerns the identification of the elements that will constitute the basis of the evaluation: identification of the key elements within the dimension; identification of the Focus Areas for each dimension, consisting of the security issues identified for that dimension; and identification of the Specific Factors for each dimension, consisting of the security measures or controls addressing those security issues. The second phase concerns the evaluation of each Information Security dimension through: the implementation of the evaluation model, based on the elements identified for each dimension in the first phase, by identifying the security tasks, processes, procedures, and actions that should have been performed by the organization to reach the desired level of protection; and a maturity model for each dimension as a basis for reliance on security. For each dimension we propose a generic maturity model that could be used by any organization in order to define its own security requirements. Part Three of this dissertation contains the Final Remarks, Supporting Resources and Annexes. With reference to the objectives of our thesis, the Final Remarks briefly analyse whether these objectives were achieved and suggest directions for future related research. The Supporting Resources comprise the bibliographic resources that were used to elaborate and justify our approach. The Annexes include all the relevant topics identified in the literature to illustrate certain aspects of our approach. Our Information Security evaluation model is based on and integrates different Information Security best practices, standards, methodologies and research expertise, which can be combined in order to define a reliable categorization of Information Security. After the definition of terms and requirements, an evaluation process should be performed in order to obtain evidence that Information Security within the organization in question is adequately managed. We have specifically integrated into our model the most useful elements of these sources of information in order to provide a generic model able to be implemented in all kinds of organizations. The value added by our evaluation model is that it is easy to implement and operate and answers concrete needs in terms of reliance upon an efficient and dynamic evaluation tool through a coherent evaluation system. On that basis, our model could be implemented internally within organizations, allowing them to better govern their Information Security. SUMMARY: General context of the thesis. Evaluating security in general, and information security in particular, has become for organizations not only a crucial mission but also an increasingly complex one. At present, this evaluation relies mainly on methodologies, best practices, norms or standards that address the various aspects of information security separately. We believe that this way of evaluating security is inefficient, because it does not take into account the interactions between the different dimensions and components of security, even though it has long been accepted that the overall security level of an organization is always that of the weakest link in the security chain.
We have identified the need for a global, integrated, systemic and multidimensional approach to information security evaluation. Indeed, and this is the starting point of our thesis, we demonstrate that only a global consideration of security can meet the requirements of optimal security as well as the specific protection needs of an organization. Our thesis therefore proposes a new paradigm for security evaluation in order to satisfy the effectiveness and efficiency needs of a given organization. We thus propose a model that aims to evaluate all dimensions of security holistically, in order to minimize the probability that a potential threat could exploit vulnerabilities and cause direct or indirect damage. This model is based on a formalized structure that takes into account all the elements of a security system or program. We therefore propose a methodological evaluation framework that considers information security from a global perspective. Structure of the thesis and topics covered. Our document is structured in three parts. The first, entitled "The problem of information security evaluation", consists of four chapters. Chapter 1 introduces the object of the research as well as the basic concepts of the proposed evaluation model. The traditional way of evaluating security is critically analysed in order to identify the principal and invariant elements to be taken into account in our holistic approach. The basic elements of our evaluation model and its expected operation are then presented in order to outline the results expected from this model. Chapter 2 focuses on the definition of the notion of Information Security. It is not a redefinition of the notion of security, but a putting into perspective of the dimensions, criteria and indicators to be used as a reference base in order to determine the object of the evaluation that will be used throughout our work. The concepts inherent in what constitutes the holistic character of security, as well as the constituent elements of a security baseline, are defined accordingly. This makes it possible to identify what we have called "the roots of trust". Chapter 3 presents and analyses the difference and the relationships that exist between the Risk Management and Security Management processes, in order to identify the constituent elements of the protection framework to be included in our evaluation model. Chapter 4 is devoted to the presentation of our evaluation model, the Information Security Assurance Assessment Model (ISAAM), and the way in which it meets the evaluation requirements presented previously. In this chapter the underlying concepts relating to the notions of assurance and trust are analysed. Based on these two concepts, the structure of the evaluation model is developed to obtain a platform offering a certain level of assurance, relying on three evaluation attributes, namely: "the trust structure", "process quality", and "achievement of requirements and objectives".
The issues related to each of these evaluation attributes are analysed on the basis of the state of the art of research and the literature, of the various existing methods, and of the norms and standards most common in the security domain. On this basis, three different evaluation levels are constructed, namely the assurance level, the quality level and the maturity level, which constitute the basis for evaluating the overall security state of an organization. The second part, "Application of the Information Security Assurance Assessment Model by security domain", also consists of four chapters. In this part, the evaluation model already constructed and analysed is placed in a specific context according to the four predefined security dimensions: the Organizational dimension, the Functional dimension, the Human dimension, and the Legal dimension. Each of these dimensions and its specific evaluation is the subject of a separate chapter. For each dimension, a two-phase evaluation is constructed as follows. The first phase concerns the identification of the elements that constitute the basis of the evaluation: identification of the key elements of the evaluation; identification of the "Focus Areas" for each dimension, which represent the issues found within that dimension; and identification of the "Specific Factors" for each Focus Area, which represent the security and control measures that contribute to resolving or reducing the impact of risks. The second phase concerns the evaluation of each of the dimensions presented previously. It consists, on the one hand, of applying the general evaluation model to the dimension concerned by building on the elements specified in the first phase and by identifying the specific security tasks, processes and procedures that should have been carried out to reach the desired level of protection. On the other hand, the evaluation of each dimension is completed by the proposal of a maturity model specific to that dimension, which is to be considered as a reference baseline for the overall security level. For each dimension we propose a generic maturity model that can be used by any organization to specify its own security requirements. This constitutes an innovation in the field of evaluation, which we justify for each dimension and whose added value we systematically highlight. The third part of our document relates to the overall validation of our proposal and contains, by way of conclusion, a critical perspective on our work and final remarks. This last part is supplemented by a bibliography and annexes. Our security evaluation model integrates and builds on numerous sources of expertise, such as best practices, norms, standards, methods and the expertise of scientific research in the field. Our constructive proposal addresses a genuine, as yet unresolved problem faced by all organizations, regardless of size or profile.
It would allow them to specify their particular requirements regarding the level of security to be met and to instantiate an evaluation process tailored to their needs, so that they can make sure their information security is managed appropriately, thereby providing a certain level of confidence in the degree of protection obtained. We have incorporated into our model the best of the know-how, experience and expertise currently available at the international level, with the aim of providing an evaluation model that is simple, generic and applicable to a large number of public or private organizations. The added value of our evaluation model lies precisely in the fact that it is sufficiently generic and easy to implement while responding to the concrete needs of organizations. Our proposal thus constitutes a reliable, efficient and dynamic evaluation tool stemming from a coherent evaluation approach. As a result, our evaluation system can be implemented internally by the organization itself, without requiring additional resources, and also gives it the opportunity to better govern its information security.