968 results for 4-component gaussian basis sets
Abstract:
The present research project was designed to identify the typical Iowa material input values that are required by the Mechanistic-Empirical Pavement Design Guide (MEPDG) for Level 3 concrete pavement design. It was also designed to investigate the existing equations that might be used to predict Iowa pavement concrete properties for Level 2 pavement design. In this project, over 20,000 data records were collected from the Iowa Department of Transportation (DOT) and other sources. These data, most of which were concrete compressive strength, slump, air content, and unit weight data, were synthesized and their statistical parameters (such as mean values and standard deviations) were analyzed. Based on the analyses, the typical input values of Iowa pavement concrete, such as 28-day compressive strength (f'c), splitting tensile strength (fsp), elastic modulus (Ec), and modulus of rupture (MOR), were evaluated. The study indicates that the 28-day MOR of Iowa concrete is 646 ± 51 psi, very close to the MEPDG default value (650 psi). The 28-day Ec of Iowa concrete (based on only two available data points from the Iowa Curling and Warping project) is (4.82 ± 0.28) × 10^6 psi, which is quite different from the MEPDG default value (3.93 × 10^6 psi); therefore, the researchers recommend re-evaluating this value after more Iowa test data become available. The drying shrinkage (εc) of a typical Iowa concrete (C-3WR-C20 mix) was tested at the Concrete Technology Laboratory (CTL). The test results show that the ultimate shrinkage of the concrete is about 454 microstrain and that the concrete reaches 50% of ultimate shrinkage at 32 days; both of these values are very close to the MEPDG default values. The comparison of the Iowa test data and the MEPDG default values, as well as the recommendations on the input values to be used in the MEPDG for Iowa PCC pavement design, are summarized in Table 20 of this report. The available equations for predicting the above-mentioned concrete properties were also assembled. The validity of these equations for Iowa concrete materials was examined. Multiple-parameter nonlinear regression analyses, along with the artificial neural network (ANN) method, were employed to investigate the relationships among Iowa concrete material properties and to modify the existing equations so as to be suitable for Iowa concrete materials. However, due to a lack of the necessary data sets, the relationships between Iowa concrete properties were established based on limited data from CP Tech Center projects and ISU classes only. The researchers suggest that the resulting relationships be used by Iowa pavement design engineers as references only. The present study furthermore indicates that appropriately documenting concrete properties, including flexural strength, elastic modulus, and information on concrete mix design, is essential for updating the typical Iowa material input values and providing rational prediction equations for concrete pavement design in the future.
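As a rough illustration of the kind of strength-based prediction equations discussed above, the sketch below estimates Ec, MOR, and fsp from a 28-day compressive strength. The coefficients are commonly cited ACI-type textbook values and the 5,000 psi input is hypothetical; these are placeholders, not the Iowa-calibrated equations developed in the report.

```python
import math

def predict_properties(fc_psi: float) -> dict:
    """Estimate PCC properties from 28-day compressive strength f'c (psi).

    Uses commonly cited ACI-type relationships as placeholders; the
    report's own modified Iowa equations are not reproduced here.
    """
    sqrt_fc = math.sqrt(fc_psi)
    return {
        "elastic_modulus_psi": 57000 * sqrt_fc,   # ACI 318, normal-weight concrete
        "modulus_of_rupture_psi": 7.5 * sqrt_fc,  # ACI 318 flexural strength estimate
        "splitting_tensile_psi": 6.7 * sqrt_fc,   # typical empirical coefficient
    }

# Example: a hypothetical 5,000 psi paving mix
for name, value in predict_properties(5000).items():
    print(f"{name}: {value:,.0f}")
```

For a 5,000 psi mix this gives Ec of roughly 4.0 × 10^6 psi, which is of the same order as the MEPDG default and the Iowa values quoted above.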
Abstract:
T-cell responses are regulated by activating and inhibitory signals. CD28 and its homologue, cytotoxic T-lymphocyte antigen 4 (CTLA-4), are the primary regulatory molecules that enhance or inhibit T-cell activation, respectively. Recently it has been shown that inhibitory natural killer (NK) cell receptors (NKRs) are expressed on subsets of T cells. It has been proposed that these receptors may also play an important role in regulating T-cell responses. However, the extent to which NKRs modulate peripheral T-cell homeostasis and activation in vivo remains unclear. In this report we show that engagement of the NK cell inhibitory receptor Ly49A on T cells dramatically limits T-cell activation and the resultant lymphoproliferative disorder that occurs in CTLA-4-deficient mice. Prevention of activation and expansion of the potentially autoreactive CTLA-4(-/-) T cells by the Ly49A-mediated inhibitory signal demonstrates that NKR expression can play an important regulatory role in T-cell homeostasis in vivo. These results demonstrate the importance of inhibitory signals in T-cell homeostasis and suggest a common biochemical basis for inhibitory signaling pathways in T lymphocytes.
Abstract:
The potent antimicrobial compound 2,4-diacetylphloroglucinol (DAPG) is a major determinant of the biocontrol activity of plant-beneficial Pseudomonas fluorescens CHA0 against root diseases caused by fungal pathogens. The DAPG biosynthetic locus harbors the phlG gene, the function of which had not been elucidated thus far. The phlG gene is located upstream of the phlACBD biosynthetic operon, between the phlF and phlH genes, which encode pathway-specific regulators. In this study, we assigned a function to PhlG as a hydrolase that specifically degrades DAPG to equimolar amounts of mildly toxic monoacetylphloroglucinol (MAPG) and acetate. DAPG added to cultures of a DAPG-negative ΔphlA mutant of strain CHA0 was completely degraded, and MAPG accumulated transiently. In contrast, DAPG was not degraded in cultures of a ΔphlA ΔphlG double mutant. To confirm the enzymatic nature of PhlG in vitro, the protein was histidine tagged, overexpressed in Escherichia coli, and purified by affinity chromatography. Purified PhlG had a molecular mass of about 40 kDa and catalyzed the degradation of DAPG to MAPG. The enzyme had a kcat of 33 s^-1 and a Km of 140 µM at 30°C and pH 7. The PhlG enzyme did not degrade other compounds with structures similar to that of DAPG, such as MAPG and triacetylphloroglucinol, suggesting strict substrate specificity. Interestingly, PhlG activity was strongly reduced by pyoluteorin, another antifungal compound produced by the bacterium. Expression of phlG was not influenced by the substrate DAPG or the degradation product MAPG but was subject to positive control by the GacS/GacA two-component system and to negative control by the pathway-specific regulators PhlF and PhlH.
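Given the reported constants, the PhlG degradation rate at any DAPG concentration follows from the standard Michaelis-Menten form v = kcat·[E]·[S]/(Km + [S]). A minimal sketch, with the enzyme and substrate concentrations chosen purely for illustration:

```python
def phlg_rate(s_uM: float, e_uM: float, kcat: float = 33.0, km_uM: float = 140.0) -> float:
    """Michaelis-Menten rate v = kcat*[E]*[S]/(Km + [S]), in uM/s,
    using the reported PhlG constants (kcat = 33 s^-1, Km = 140 uM)."""
    return kcat * e_uM * s_uM / (km_uM + s_uM)

# Illustrative: 0.1 uM enzyme, DAPG concentrations spanning Km
for s in (35, 140, 560):
    print(f"[DAPG] = {s:>4} uM -> v = {phlg_rate(s, 0.1):.2f} uM/s")
```

At [S] = Km (140 µM) the rate is half-maximal, here 1.65 µM/s for 0.1 µM enzyme.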
Abstract:
We investigated possible relations among four common neonatal manifestations of diabetic pregnancy (macrosomia, hypoglycemia, hypocalcemia, jaundice) and four enzyme polymorphisms (PGM1, ADA, AK1, ACP1) in a sample of infants born of diabetic mothers. The pattern of associations observed between the two sets of variables is consistent with known differences in enzymatic activity within phenotypes of each system, suggesting that low enzymatic activity may have unfavorable effects on fetal development and on the adaptability of the neonate to the extrauterine environment. Some of the polymorphic enzymes studied influence fetal growth in normal pregnancy as well. Analysis of relations between genetic polymorphisms and the clinical pattern of common diseases may provide a better understanding of the genetic basis of the clinical variability of diseases within and between human populations.
Abstract:
A water-reducing and retarding type admixture is commonly used in concrete on continuous bridge deck pours in Iowa. The concrete placed in the negative moment areas must remain plastic until all the dead load deflection due to the new deck's weight occurs. If the concrete does not remain plastic until the total deflection has occurred, structural cracks will develop in these areas. Retarding type admixtures will delay the setting time of concrete and prevent structural cracks if added in the proper amounts. Section 2412.02 of the Standard Specifications, 1972, Iowa State Highway Commission, states: "The admixture shall be used in amounts recommended by the manufacturer for conditions which prevail on the project and as approved by the engineer." The conditions which prevail on the project depend on temperature, humidity, wind conditions, etc. Each of these factors will affect the setting rate of the plastic concrete. The purpose of this project is to provide data that will be useful to field personnel concerning the retardation of concrete setting times and how the setting times vary with different addition rates and curing temperatures, holding all other atmospheric variables constant.
Abstract:
Homologous recombination provides a major pathway for the repair of DNA double-strand breaks in mammalian cells. Defects in homologous recombination can lead to high levels of chromosomal translocations or deletions, which may promote cell transformation and cancer development. A key component of this process is RAD51. In comparison to RecA, the bacterial homologue, human RAD51 protein exhibits low-level strand-exchange activity in vitro. This activity can, however, be stimulated by the presence of high salt. Here, we have investigated the mechanistic basis for this stimulation. We show that high ionic strength favours the co-aggregation of RAD51-single-stranded DNA (ssDNA) nucleoprotein filaments with naked duplex DNA, to form a complex in which the search for homologous sequences takes place. High ionic strength allows differential binding of RAD51 to ssDNA and double-stranded DNA (dsDNA), such that ssDNA-RAD51 interactions are unaffected, whereas those between RAD51 and dsDNA are destabilized. Most importantly, high salt induces a conformational change in RAD51, leading to the formation of extended nucleoprotein filaments on ssDNA. These extended filaments mimic the active form of the Escherichia coli RecA-ssDNA filament that exhibits efficient strand-exchange activity.
Abstract:
Background: Conventional magnetic resonance imaging (MRI) techniques are highly sensitive for detecting multiple sclerosis (MS) plaques, enabling a quantitative assessment of inflammatory activity and lesion load. In quantitative analyses of focal lesions, manual or semi-automated segmentations have been widely used to compute the total number of lesions and the total lesion volume. These techniques, however, are both challenging and time-consuming, and are also prone to intra-observer and inter-observer variability. Aim: To develop an automated approach to segment brain tissues and MS lesions from brain MRI images. The goal is to reduce user interaction and to provide an objective tool that eliminates the inter- and intra-observer variability. Methods: Based on the recent methods developed by Souplet et al. and de Boer et al., we propose a novel pipeline which includes the following steps: bias correction, skull stripping, atlas registration, tissue classification, and lesion segmentation. After the initial pre-processing steps, an MRI scan is automatically segmented into 4 classes: white matter (WM), grey matter (GM), cerebrospinal fluid (CSF), and partial volume. An expectation maximisation method which fits a multivariate Gaussian mixture model to the T1-w, T2-w, and PD-w images is used for this purpose. Based on the obtained tissue masks and using the estimated GM mean and variance, we apply an intensity threshold to the FLAIR image, which provides the lesion segmentation. With the aim of improving this initial result, spatial information coming from the neighbouring tissue labels is used to refine the final lesion segmentation. Results: The experimental evaluation was performed using real 1.5T data sets and the corresponding ground truth annotations provided by expert radiologists. The following values were obtained: a true positive (TP) fraction of 64%, a false positive (FP) fraction of 80%, and an average surface distance of 7.89 mm. The results of our approach were quantitatively compared to our implementations of the works of Souplet et al. and de Boer et al., obtaining higher TP and lower FP values. Conclusion: Promising MS lesion segmentation results have been obtained in terms of TP. However, the high number of FPs, which is still a well-known problem of all automated MS lesion segmentation approaches, has to be reduced before they can be used in standard clinical practice. Our future work will focus on tackling this issue.
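A minimal sketch of the lesion-thresholding step described above: candidate lesion voxels are FLAIR intensities exceeding the grey-matter mean by some multiple of the grey-matter standard deviation. The scaling factor k, the synthetic data, and the omission of the preprocessing and neighbourhood-based refinement steps are all simplifying assumptions.

```python
import numpy as np

def segment_lesions(flair: np.ndarray, gm_mask: np.ndarray,
                    brain_mask: np.ndarray, k: float = 3.0) -> np.ndarray:
    """Flag voxels brighter than gm_mean + k * gm_std as candidate lesions.
    The factor k is an assumed parameter; the full pipeline also applies
    bias correction, skull stripping, registration, EM tissue classification,
    and a neighbourhood-label refinement not shown here."""
    gm_values = flair[gm_mask > 0]
    threshold = gm_values.mean() + k * gm_values.std()
    return (flair > threshold) & (brain_mask > 0)

# Toy example on synthetic data (real use would load co-registered volumes)
rng = np.random.default_rng(0)
flair = rng.normal(100, 10, (32, 32, 32))
flair[10:12, 10:12, 10:12] = 180            # bright artificial "lesion"
gm = np.zeros_like(flair, dtype=bool)
gm[:16] = True                              # pretend the first slices are GM
brain = np.ones_like(flair, dtype=bool)
print(segment_lesions(flair, gm, brain).sum(), "candidate lesion voxels")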
Abstract:
MCT2 is the major neuronal monocarboxylate transporter (MCT) that allows the supply of alternative energy substrates such as lactate to neurons. Recent evidence obtained by electron microscopy has demonstrated that MCT2, like alpha-amino-3-hydroxyl-5-methyl-4-isoxazole-propionic acid (AMPA) receptors, is localized in dendritic spines of glutamatergic synapses. Using immunofluorescence, we show in this study that MCT2 colocalizes extensively with GluR2/3 subunits of AMPA receptors in neurons from various mouse brain regions as well as in cultured neurons. It also colocalizes with GluR2/3-interacting proteins, such as C-kinase-interacting protein 1, glutamate receptor-interacting protein 1 and clathrin adaptor protein. Coimmunoprecipitation of MCT2 with GluR2/3 and C-kinase-interacting protein 1 suggests their close interaction within spines. Parallel changes in the localization of both MCT2 and GluR2/3 subunits at and beneath the plasma membrane upon various stimulation paradigms were unraveled using an original immunocytochemical and transfection approach combined with three-dimensional image reconstruction. Cell culture incubation with AMPA or insulin triggered a marked intracellular accumulation of both MCT2 and GluR2/3, whereas both tumor necrosis factor alpha and glycine (with glutamate) increased their cell surface immunolabeling. Similar results were obtained using Western blots performed on membrane or cytoplasm-enriched cell fractions. Finally, an enhanced lactate flux into neurons was demonstrated after MCT2 translocation on the cell surface. These observations provide unequivocal evidence that MCT2 is linked to AMPA receptor GluR2/3 subunits and undergoes a similar translocation process in neurons upon activation. MCT2 emerges as a novel component of the synaptic machinery putatively linking neuroenergetics to synaptic transmission.
Abstract:
OBJECTIVE: To establish the genetic basis of Landau-Kleffner syndrome (LKS) in a cohort of two discordant monozygotic (MZ) twin pairs and 11 isolated cases. METHODS: We used a multifaceted approach to identify genetic risk factors for LKS. Array comparative genomic hybridization (CGH) was performed using the Agilent 180K array. Whole genome methylation profiling was undertaken in the two discordant twin pairs, three isolated LKS cases, and 12 control samples using the Illumina 27K array. Exome sequencing was undertaken in 13 patients with LKS, including the two sets of discordant MZ twins. Data were analyzed with respect to novel and rare variants, overlapping genes, variants in reported epilepsy genes, and pathway enrichment. RESULTS: A variant (c.G1553A) was found in a single patient in the GRIN2A gene, causing an arginine-to-histidine change at residue 518, a predicted glutamate binding site. Following copy number variation (CNV), methylation, and exome sequencing analysis, no single candidate gene was identified to cause LKS in the remaining cohort. However, a number of interesting additional candidate variants were identified, including variants in RELN, BSN, EPHB2, and NID2. SIGNIFICANCE: A single mutation was identified in the GRIN2A gene. This study has identified a number of additional candidate genes, including RELN, BSN, EPHB2, and NID2.
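The variant prioritisation described above can be pictured as a simple filter over annotated calls: keep rare variants, then rank hits in reported epilepsy genes first. The record layout, gene panel, and population frequencies below are hypothetical illustrations; real pipelines operate on annotated VCFs.

```python
# Illustrative panel only; the study's actual epilepsy gene list is larger.
EPILEPSY_GENES = {"GRIN2A", "SCN1A", "KCNQ2"}

def filter_candidates(variants: list, max_pop_freq: float = 0.001) -> list:
    """Keep rare variants; sort known epilepsy-gene hits to the front,
    mirroring the prioritisation described in the abstract."""
    rare = [v for v in variants if v["pop_freq"] <= max_pop_freq]
    return sorted(rare, key=lambda v: v["gene"] not in EPILEPSY_GENES)

variants = [
    {"gene": "GRIN2A", "pop_freq": 0.0},     # the reported R518H-type hit
    {"gene": "RELN",   "pop_freq": 0.0004},  # hypothetical frequency
    {"gene": "ACTB",   "pop_freq": 0.02},    # too common; filtered out
]
for v in filter_candidates(variants):
    print(v["gene"], v["pop_freq"])
```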
Abstract:
After cemented total hip arthroplasty (THA) there may be failure at either the cement-stem or the cement-bone interface. This results from the occurrence of abnormally high shear and compressive stresses within the cement and excessive relative micromovement. We therefore evaluated micromovement and stress at the cement-bone and cement-stem interfaces for a titanium and a chromium-cobalt stem. The behaviour of both implants was similar and no substantial differences were found in the size and distribution of micromovement on either interface with respect to the stiffness of the stem. Micromovement was minimal with a cement mantle 3 to 4 mm thick but then increased with greater thickness of the cement. Abnormally high micromovement occurred when the cement was thinner than 2 mm and the stem was made of titanium. The relative decrease in surface roughness augmented slipping but decreased debonding at the cement-bone interface. Shear stress at this site did not vary significantly for the different coefficients of cement-bone friction while compressive and hoop stresses within the cement increased slightly.
Abstract:
In the administration, planning, design, and maintenance of road systems, transportation professionals often need to choose between alternatives, justify decisions, evaluate tradeoffs, determine how much to spend, set priorities, assess how well the network meets traveler needs, and communicate the basis for their actions to others. A variety of technical guidelines, tools, and methods have been developed to help with these activities. Such work aids include design criteria guidelines, design exception analysis methods, needs studies, revenue allocation schemes, regional planning guides, designation of minimum standards, sufficiency ratings, management systems, point-based systems to determine eligibility for paving, functional classification, and bridge ratings. While such tools play valuable roles, they also manifest a number of deficiencies and are poorly integrated. Design guides tell what solutions MAY be used; they aren't oriented toward helping find which one SHOULD be used. Design exception methods help justify deviation from design guide requirements but omit consideration of important factors. Resource distribution is too often based on dividing up what's available rather than helping determine how much should be spent. Point systems serve well as procedural tools but are employed primarily to justify decisions that have already been made. In addition, the tools aren't very scalable: a system-level method of analysis seldom works at the project level and vice versa. In conjunction with the issues cited above, the operation and financing of the road and highway system is often the subject of criticisms that raise fundamental questions: What is the best way to determine how much money should be spent on a city's or county's road network? Is the size and quality of the rural road system appropriate? Is too much or too little money spent on road work? What parts of the system should be upgraded and in what sequence? Do truckers receive a hidden subsidy from other motorists? Do transportation professionals evaluate road situations from too narrow a perspective? In considering these issues and questions, the author concluded that it would be of value to identify and develop a new method that would overcome the shortcomings of existing methods, be scalable, be capable of being understood by the general public, and utilize a broad viewpoint. After trying out a number of concepts, it appeared that a good approach would be to view the road network as a sub-component of a much larger system that also includes vehicles, people, goods-in-transit, and all the ancillary items needed to make the system function. Highway investment decisions could then be made on the basis of how they affect the total cost of operating the total system. A concept, named the "Total Cost of Transportation" method, was then developed and tested. The concept rests on four key principles: 1) roads are but one sub-system of a much larger 'Road Based Transportation System'; 2) the size and activity level of the overall system are determined by market forces; 3) the sum of everything expended, consumed, given up, or permanently reserved in building the system and generating the activity that results from the market forces represents the total cost of transportation; and 4) the economic purpose of making road improvements is to minimize that total cost. To test the practical value of the theory, a special database and spreadsheet model of Iowa's county road network was developed.
This involved creating a physical model to represent the size, characteristics, activity levels, and the rates at which the activities take place, developing a companion economic cost model, then using the two in tandem to explore a variety of issues. Ultimately, the theory and model proved capable of being used in full system, partial system, single segment, project, and general design guide levels of analysis. The method appeared to be capable of remedying many of the existing work method defects and to answer society's transportation questions from a new perspective.
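The core accounting idea of the Total Cost of Transportation method can be sketched in a few lines: sum every annualized cost bucket of the whole road-based system, then prefer the alternative with the lower total. The bucket names and dollar figures below are illustrative assumptions, not values from the Iowa county road model.

```python
from dataclasses import dataclass

@dataclass
class SystemCosts:
    """Annualized cost buckets ($M/yr) for the 'Road Based Transportation
    System'. The buckets shown are illustrative; the report's database
    tracks many more components."""
    road_construction: float
    road_maintenance: float
    vehicle_operation: float
    travel_time: float
    crashes: float

    def total(self) -> float:
        return (self.road_construction + self.road_maintenance +
                self.vehicle_operation + self.travel_time + self.crashes)

# Hypothetical comparison: paving a gravel segment raises agency costs but
# lowers user costs; the improvement is justified only if the total falls.
do_nothing = SystemCosts(2.0, 1.5, 9.0, 6.0, 1.0)
pave = SystemCosts(3.2, 1.1, 8.1, 5.5, 0.8)
print(f"do nothing: {do_nothing.total():.1f}  pave: {pave.total():.1f}")
print("improvement justified:", pave.total() < do_nothing.total())
```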
Abstract:
The Certified Budget Report is prepared annually by each community college. Each college has specific steps that it follows in order to prepare this report and to submit it to the controlling county auditor by March 15 of each year. In January, the valuation reports are available from the county auditors to use as a basis for tax revenue estimates. In preparing the Certified Budget Report, historical year numbers are verified, current year numbers are re-estimated, and the next fiscal year numbers are estimated. Once the Certified Budget Report is prepared, it is filed with the community college board. After filing with the community college board, a public hearing is set. The date for the public hearing must be published no sooner than 20 days before the hearing and no later than 10 days before the hearing. At that public hearing, any comments from the public are heard and the board votes to accept the budget. If adopted by the board, the budget is filed with the controlling county auditor.
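The publication-window rule above reduces to a simple date check: the notice must appear between 10 and 20 days before the hearing. A small sketch with hypothetical dates:

```python
from datetime import date

def valid_publication_date(published: date, hearing: date) -> bool:
    """Check the notice window from the abstract: the hearing date must be
    published no sooner than 20 days and no later than 10 days before it."""
    days_before = (hearing - published).days
    return 10 <= days_before <= 20

hearing = date(2024, 3, 1)                                  # hypothetical hearing
print(valid_publication_date(date(2024, 2, 10), hearing))   # True: 20 days prior
print(valid_publication_date(date(2024, 2, 25), hearing))   # False: only 5 days
```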
Abstract:
EXECUTIVE SUMMARY: Evaluating Information Security posture within an organization is becoming a very complex task. Currently, the evaluation and assessment of Information Security are commonly performed using frameworks, methodologies, and standards which often consider the various aspects of security independently. Unfortunately, this is ineffective because it does not take into consideration the necessity of having a global and systemic multidimensional approach to Information Security evaluation. At the same time, the overall security level is generally considered to be only as strong as its weakest link. This thesis proposes a model aiming to holistically assess all dimensions of security in order to minimize the likelihood that a given threat will exploit the weakest link. A formalized structure taking into account all security elements is presented; this is based on a methodological evaluation framework in which Information Security is evaluated from a global perspective. This dissertation is divided into three parts. Part One: Information Security Evaluation Issues consists of four chapters. Chapter 1 is an introduction to the purpose of this research and the model that will be proposed. In this chapter we raise some questions with respect to "traditional evaluation methods" and identify the principal elements to be addressed in this direction. Then we introduce the baseline attributes of our model and set out the expected results of evaluations according to our model. Chapter 2 is focused on the definition of Information Security to be used as a reference point for our evaluation model. The concepts inherent in a holistic, baseline Information Security program are defined. Based on this, the most common roots of trust in Information Security are identified. Chapter 3 focuses on an analysis of the difference and the relationship between the concepts of Information Risk Management and Security Management. Comparing these two concepts allows us to identify the most relevant elements to be included within our evaluation model, while clearly situating these two notions within a defined framework, which is of the utmost importance for the results that will be obtained from the evaluation process. Chapter 4 sets out our evaluation model and the way it addresses issues relating to the evaluation of Information Security. Within this chapter the underlying concepts of assurance and trust are discussed. Based on these two concepts, the structure of the model is developed in order to provide an assurance-related platform as well as three evaluation attributes: "assurance structure", "quality issues", and "requirements achievement". Issues relating to each of these evaluation attributes are analysed with reference to sources such as methodologies, standards, and published research papers. Then the operation of the model is discussed. Assurance levels, quality levels, and maturity levels are defined in order to perform the evaluation according to the model. Part Two: Implementation of the Information Security Assurance Assessment Model (ISAAM) according to the Information Security Domains consists of four chapters. This is the section where our evaluation model is put into a well-defined context with respect to the four pre-defined Information Security dimensions: the Organizational dimension, Functional dimension, Human dimension, and Legal dimension. Each Information Security dimension is discussed in a separate chapter.
For each dimension, the following two-phase evaluation path is followed. The first phase concerns the identification of the elements which will constitute the basis of the evaluation: identification of the key elements within the dimension; identification of the Focus Areas for each dimension, consisting of the security issues identified for that dimension; and identification of the Specific Factors for each dimension, consisting of the security measures or controls addressing those security issues. The second phase concerns the evaluation of each Information Security dimension by: implementing the evaluation model, based on the elements identified in the first phase, to identify the security tasks, processes, procedures, and actions that should have been performed by the organization to reach the desired level of protection; and applying the maturity model for each dimension as a basis for reliance on security. For each dimension we propose a generic maturity model that could be used by any organization in order to define its own security requirements. Part Three of this dissertation contains the Final Remarks, Supporting Resources, and Annexes. With reference to the objectives of the thesis, the Final Remarks briefly analyse whether these objectives were achieved and suggest directions for future related research. The Supporting Resources comprise the bibliographic resources that were used to elaborate and justify our approach. The Annexes include all the relevant topics identified within the literature to illustrate certain aspects of our approach. Our Information Security evaluation model is based on and integrates different Information Security best practices, standards, methodologies, and research expertise, which can be combined in order to define a reliable categorization of Information Security. After the definition of terms and requirements, an evaluation process should be performed in order to obtain evidence that the Information Security within the organization in question is adequately managed. We have specifically integrated into our model the most useful elements of these sources of information in order to provide a generic model able to be implemented in all kinds of organizations. The value added by our evaluation model is that it is easy to implement and operate and answers concrete needs in terms of reliance upon an efficient and dynamic evaluation tool through a coherent evaluation system. On that basis, our model could be implemented internally within organizations, allowing them to better govern their Information Security.
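The dimension → Focus Area → Specific Factor hierarchy described in Part Two lends itself to a simple data structure. The sketch below assumes a plain 0-5 maturity scale and unweighted averaging; ISAAM's actual levels, quality attributes, and aggregation rules are defined in the thesis itself.

```python
from dataclasses import dataclass, field

@dataclass
class SpecificFactor:
    """A security measure/control; maturity on an assumed 0-5 scale."""
    name: str
    maturity: int

@dataclass
class FocusArea:
    """A security issue grouping several Specific Factors."""
    name: str
    factors: list = field(default_factory=list)

    def maturity(self) -> float:
        return sum(f.maturity for f in self.factors) / len(self.factors)

@dataclass
class Dimension:
    """One of the four dimensions: Organizational, Functional, Human, Legal."""
    name: str
    areas: list = field(default_factory=list)

    def maturity(self) -> float:
        return sum(a.maturity() for a in self.areas) / len(self.areas)

# Hypothetical example for the Legal dimension
legal = Dimension("Legal", [
    FocusArea("Data protection", [
        SpecificFactor("Retention policy", 3),
        SpecificFactor("Breach notification", 2),
    ]),
])
print(f"{legal.name} dimension maturity: {legal.maturity():.1f}")
```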
Abstract:
We characterized lipid and lipoprotein changes associated with a lopinavir/ritonavir-containing regimen. We enrolled previously antiretroviral-naive patients participating in the Swiss HIV Cohort Study. Fasting blood samples (baseline) were retrieved retrospectively from stored frozen plasma, and posttreatment (follow-up) samples were collected prospectively at two separate visits. Lipids and lipoproteins were analyzed at a single reference laboratory. Sixty-five patients had two posttreatment lipid profile measurements and nine had only one. Most of the measured lipid and lipoprotein plasma concentrations increased on lopinavir/ritonavir-based treatment. The percentage of patients with hypertriglyceridemia (TG >150 mg/dl) increased from 28/74 (38%) at baseline to 37/65 (57%) at the second follow-up. We did not find any correlation between lopinavir plasma levels and the concentration of triglycerides. There was weak evidence of an increase in small dense LDL-apoB during the first year of treatment but not beyond 1 year (odds ratio 4.5, 90% CI 0.7 to 29, and 0.9, 90% CI 0.5 to 1.5, respectively). However, 69% of our patients still had undetectable small dense LDL-apoB levels while on treatment. LDL-cholesterol increased by a mean of 17 mg/dl (90% CI -3 to 37) during the first year of treatment, but mean values remained below the cut-off for therapeutic intervention. Despite an increase in the majority of measured lipids and lipoproteins, particularly in the first year after initiation, we could not detect an obvious increase in cardiovascular risk resulting from the observed lipid changes.