937 results for share of no par value
Abstract:
PURPOSE: Mutations within the KRAS proto-oncogene have predictive value but are of uncertain prognostic value in the treatment of advanced colorectal cancer. We took advantage of PETACC-3, an adjuvant trial with 3,278 patients with stage II to III colon cancer, to evaluate the prognostic value of KRAS and BRAF tumor mutation status in this setting. PATIENTS AND METHODS: Formalin-fixed paraffin-embedded tissue blocks (n = 1,564) were prospectively collected and DNA was extracted from tissue sections from 1,404 cases. Planned analysis of KRAS exon 2 and BRAF exon 15 mutations was performed by allele-specific real-time polymerase chain reaction. Survival analyses were based on univariate and multivariate proportional hazard regression models. RESULTS: KRAS and BRAF tumor mutation rates were 37.0% and 7.9%, respectively, and were not significantly different according to tumor stage. In a multivariate analysis containing stage, tumor site, nodal status, sex, age, grade, and microsatellite instability (MSI) status, KRAS mutation was associated with grade (P = .0016), while BRAF mutation was significantly associated with female sex (P = .017), and highly significantly associated with right-sided tumors, older age, high grade, and MSI-high tumors (all P < 10^-4). In univariate and multivariate analysis, KRAS mutations did not have a major prognostic value regarding relapse-free survival (RFS) or overall survival (OS). BRAF mutation was not prognostic for RFS, but was for OS, particularly in patients with MSI-low (MSI-L) and stable (MSI-S) tumors (hazard ratio, 2.2; 95% CI, 1.4 to 3.4; P = .0003). CONCLUSION: In stage II-III colon cancer, the KRAS mutation status does not have major prognostic value. BRAF is prognostic for OS in MSI-L/S tumors.
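For readers unfamiliar with the statistical machinery, a minimal sketch of how such a multivariate proportional hazards model is typically fitted is given below. This is not the PETACC-3 analysis code; the file and column names (cohort.csv, os_months, os_event, kras_mut, ...) are hypothetical.

```python
# Hedged sketch of a multivariate Cox proportional hazards fit (illustrative only).
import pandas as pd
from lifelines import CoxPHFitter  # standard Python survival-analysis package

df = pd.read_csv("cohort.csv")  # hypothetical table: one row per patient

cph = CoxPHFitter()
cph.fit(
    df[["os_months", "os_event", "kras_mut", "braf_mut", "msi_high", "stage", "age"]],
    duration_col="os_months",  # follow-up time in months
    event_col="os_event",      # 1 = death observed, 0 = censored
)
cph.print_summary()  # hazard ratios (exp(coef)) with 95% CIs and p-values
```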
Abstract:
Introduction: Survival of children born prematurely or with very low birth weight has increased dramatically, but the long-term developmental outcome remains unknown. Many children have deficits in cognitive capacities, in particular involving executive domains, and those disabilities are likely to involve a central nervous system deficit. To understand their neurostructural origin, we use DTI. Structurally segregated and functionally specialized regions of the cerebral cortex are interconnected by a dense network of axonal pathways. We noninvasively map these pathways across cortical hemispheres and construct normalized structural connection matrices derived from DTI MR tractography. Group comparisons of brain connectivity reveal significant changes in fiber density in children with poor intrauterine growth and extremely premature children (gestational age <28 weeks at birth) compared to control subjects. These changes suggest a link between cortico-axonal pathways and the central nervous system deficit. Methods: Sixty prematurely born children (5-6 years old) were scanned on a clinical 3T scanner (Magnetom Trio, Siemens Medical Solutions, Erlangen, Germany) at two hospitals (HUG, Geneva and CHUV, Lausanne). For each subject, T1-weighted MPRAGE images (TR/TE=2500/2.91, TI=1100, resolution=1x1x1mm, matrix=256x154) and DTI images (30 directions, TR/TE=10200/107, in-plane resolution=1.8x1.8x2mm, 64 axial slices, matrix=112x112) were acquired. Parent(s) provided written consent, following prior ethics board approval. The extraction of the Whole Brain Structural Connectivity Matrix was performed following (Cammoun, 2009 and Hagmann, 2008). The MPRAGE images were registered to the non-diffusion-weighted (b=0) image using an affine registration, and WM-GM segmentation was performed on it. In order to have equal anatomical localization among subjects, 66 cortical regions with anatomical landmarks were created using the curvature information, i.e. sulci and gyri (Cammoun et al, 2007; Fischl et al, 2004; Desikan et al, 2006), with the FreeSurfer software (http://surfer.nmr.mgh.harvard.edu/). Tractography was performed in WM using an algorithm especially designed for DTI/DSI data (Hagmann et al., 2007), and both sets of information were then combined into a matrix. Each row and column of the matrix corresponds to a particular ROI. Each cell of index (i,j) represents the fiber density of the bundle connecting ROIs i and j. Subdividing each cortical region, we obtained 4 connectivity matrices of different resolutions (33, 66, 125 and 250 ROIs/hemisphere) for each subject. Subjects were sorted into 3 groups, namely (1) control, (2) Intrauterine Growth Restriction (IUGR), (3) Extreme Prematurity (EP), depending on their gestational age, weight and percentile-weight score at birth. Group-to-group comparisons were performed between groups (1)-(2) and (1)-(3). The mean age at examination of the three groups was similar. Results: Quantitative analyses were performed between groups to determine fiber density differences. For each group, a mean connectivity matrix with a 33 ROI/hemisphere resolution was computed. In addition, for all matrix resolutions (33, 66, 125, 250 ROIs/hemisphere), the number of bundles was computed and averaged. As seen in figure 1, EP and IUGR subjects present an overall reduction of fiber density in both interhemispheric and intrahemispheric connections. This is given quantitatively in table 1. IUGR subjects present a higher percentage of missing fiber bundles than EP subjects when compared to control subjects (~16% against 11%).
When comparing both groups to control subjects, for the EP subjects the occipito-parietal regions seem less interhemispherically connected, whilst the intrahemispheric networks show a lack of fiber density in the limbic system. Children born with IUGR have reductions in interhemispheric connections similar to those of the EP group. However, the cuneus and precuneus connections with the precentral and paracentral lobe are even lower than in the case of the EP group. For the intrahemispheric connections, the IUGR group presents a loss of fiber density between the deep gray matter structures (striatum) and the frontal and middle frontal poles, connections typically involved in the control of executive functions. For the qualitative analysis, a t-test comparing the number of bundles (p-value<0.05) gave some preliminary significant results (figure 2). Again, even if both IUGR and EP appear to have significantly fewer connections compared to the control subjects, the IUGR cohort seems to present a greater lack of fiber density, especially in connections linking the cuneus, precuneus and parietal areas. In terms of fiber density, preliminary Wilcoxon tests seem to validate the hypothesis set by the previous analysis. Conclusions: The goal of this study was to determine the effect of extreme prematurity and poor intrauterine growth on neurostructural development at the age of 6 years. These data indicate that differences in connectivity may well be the basis for the neurostructural and neuropsychological deficits described in these populations in the absence of overt brain lesions (Inder TE, 2005; Borradori-Tolsa, 2004; Dubois, 2008). Indeed, we suggest that IUGR and prematurity lead to alterations of connectivity between brain structures, especially in occipito-parietal and frontal lobes for EP and frontal and middle temporal poles for IUGR. Overall, IUGR children have a higher loss of connectivity in the overall connectivity matrix than EP children. In both cases, the localized alteration of connectivity suggests a direct link between cortico-axonal pathways and the central nervous system deficit. Our next step is to link these connectivity alterations to the performance in executive function tests.
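As a rough, hedged illustration of the connectivity-matrix construction described in the Methods above (not the actual Cammoun/Hagmann pipeline; the input format and normalization below are hypothetical), streamline counts between ROI pairs can be accumulated and normalized as follows:

```python
import numpy as np

def connectivity_matrix(fiber_endpoints, n_rois):
    """Build a symmetric ROI-by-ROI matrix from tractography results.

    fiber_endpoints: list of (roi_i, roi_j) pairs, one per reconstructed fiber,
    giving the ROIs in which the fiber terminates (hypothetical input format).
    """
    M = np.zeros((n_rois, n_rois))
    for i, j in fiber_endpoints:
        M[i, j] += 1
        M[j, i] += 1
    # Normalize raw counts to a crude stand-in for fiber density (fraction of all
    # fibers); the study's actual normalization may differ.
    return M / max(len(fiber_endpoints), 1)

# Example: 66 ROIs (33 per hemisphere), a handful of fake fibers
demo = connectivity_matrix([(0, 1), (0, 1), (2, 40)], n_rois=66)
print(demo[0, 1])  # cell (i, j) = fiber density of the bundle linking ROIs i and j
```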
Abstract:
Background: The b-value is the parameter characterizing the intensity of the diffusion weighting during image acquisition. Data acquisition is usually performed with a low b-value (b~1000 s/mm2). Evidence shows that high b-values (b>2000 s/mm2) are more sensitive to the slow diffusion compartment (SDC) and may be more sensitive in detecting white matter (WM) anomalies in schizophrenia. Methods: 12 male patients with schizophrenia (mean age 35 +/- 3 years) and 16 healthy male controls matched for age were scanned with a low b-value (1000 s/mm2) and a high b-value (4000 s/mm2) protocol. The apparent diffusion coefficient (ADC) is a measure of the average diffusion distance of water molecules per time unit (mm2/s). ADC maps were generated for all individuals. 8 regions of interest (frontal and parietal regions bilaterally, centrum semi-ovale bilaterally, and anterior and posterior corpus callosum) were manually traced blind to diagnosis. Results: ADC measures acquired with high b-value imaging were more sensitive in detecting differences between schizophrenia patients and healthy controls than low b-value imaging, with a gain in significance by a factor of 20-100 despite the lower image signal-to-noise ratio (SNR). Increased ADC was identified in patients' WM (p=0.00015), with major contributions from the left and right centrum semi-ovale and, to a lesser extent, the right parietal region. Conclusions: Our results may be related to the sensitivity of high b-value imaging to the SDC, believed to reflect mainly the intra-axonal and myelin-bound water pool. High b-value imaging might be more sensitive and specific to WM anomalies in schizophrenia than low b-value imaging.
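For context, the ADC quoted above is conventionally estimated from the signal attenuation between acquisitions with different diffusion weightings; a minimal sketch under a mono-exponential model (which, as the abstract notes, high b-value imaging deliberately probes beyond) is:

```python
import numpy as np

def adc(signal_b, signal_0, b):
    """Mono-exponential ADC estimate (mm^2/s) from signals at b and b=0 s/mm^2.

    Assumes S(b) = S0 * exp(-b * ADC); at high b-values the slow diffusion
    compartment makes this single-compartment model only approximate.
    """
    return -np.log(signal_b / signal_0) / b

# Illustrative numbers only (not from the study):
print(adc(signal_b=420.0, signal_0=1000.0, b=1000.0))  # ~8.7e-4 mm^2/s
```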
Abstract:
We show that a new, simple, and robust general mechanism for the social suppression of within-group selfishness follows from Hamilton's rule applied in a multilevel selection approach to asymmetrical, two-person groups: If it pays a group member to behave selfishly (i.e., increase its share of the group's reproduction, at the expense of group productivity), then its partner will virtually always be favored to provide a reproductive "bribe" sufficient to remove the incentive for the selfish behavior. The magnitude of the bribe will vary directly with the number of offspring (or other close kin) potentially gained by the selfish individual and inversely with both the relatedness r between the interactants and the loss in group productivity because of selfishness. This bribe principle greatly extends the scope for cooperation within groups. Reproductive bribing is more likely to be favored over social policing for dominants rather than subordinates and as intragroup relatedness increases. Finally, analysis of the difference between the group optimum for an individual's behavior and the individual's inclusive fitness optimum reveals a paradoxical feedback loop by which bribing and policing, while nullifying particular selfish acts, automatically widen the separation of individual and group optima for other behaviors (i.e., resolution of one conflict intensifies others).
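As a stylized illustration of the bribe principle (a toy formulation for intuition only, not the authors' exact model): suppose the selfish act would gain the actor $g$ additional offspring while costing its partner $c$ offspring through lost group productivity, with relatedness $r$ between the two. By Hamilton's rule the selfish act pays when $g - rc > 0$, and a reproductive bribe of at least $b = g - rc$ removes the incentive; $b$ therefore grows with the offspring potentially gained ($g$) and shrinks as either the relatedness $r$ or the productivity loss $c$ increases, matching the relationships stated above.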
Abstract:
The canvas support in easel paintings is composed mainly of cellulose. One of the main degradation paths of cellulose is acid-catalysed hydrolysis, which means that in an acidic environment (low pH), its degradation proceeds at a faster rate (Strlič et al., 2005). The main effect of acid-catalysed hydrolysis is the breaking up of the polymer chains, measured by the "Degree of Polymerisation" (DP). The lowering of the DP value implies a lower mechanical strength of the textile (Scicolone, 1993), and thus this parameter can be used to monitor degradation. Knowing these two parameters (pH and DP) can, therefore, be very informative regarding the condition of the canvas support.
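For reference, the kinetics of chain scission by acid-catalysed hydrolysis are often summarised with the Ekenstam approximation (a standard relation in cellulose degradation studies, not cited in the passage above): $1/DP_t - 1/DP_0 = kt$, where $DP_0$ and $DP_t$ are the degrees of polymerisation before and after ageing time $t$, and the rate constant $k$ increases as the pH of the canvas drops. The number of chain scissions per original chain then follows as $DP_0/DP_t - 1$, which is why monitoring DP (together with pH) tracks the progress of degradation.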
Abstract:
Introduction Societies of ants, bees, wasps and termites dominate many terrestrial ecosystems (Wilson 1971). Their evolutionary and ecological success is based upon the regulation of internal conflicts (e.g. Ratnieks et al. 2006), control of diseases (e.g. Schmid-Hempel 1998) and individual skills and collective intelligence in resource acquisition, nest building and defence (e.g. Camazine 2001). Individuals in social species can pass on their genes not only directly through their own offspring, but also indirectly by favouring the reproduction of relatives. The inclusive fitness theory of Hamilton (1963; 1964) provides a powerful explanation for the evolution of reproductive altruism and cooperation in groups with related individuals. The same theory also led to the realization that insect societies are subject to internal conflicts over reproduction. A relatedness of less than one is not sufficient to eliminate all incentive for individual selfishness. This would indeed require a relatedness of one, as found among cells of an organism (Hardin 1968; Keller 1999). The challenge for evolutionary biology is to understand how groups can prevent or reduce the selfish exploitation of resources by group members, and how societies with low relatedness are maintained. In social insects, the evolutionary shift from single-queen to multiple-queen colonies modified the relatedness structure, the dispersal, and the mode of colony founding (e.g. Crozier & Pamilo 1996). In ants, the most common, and presumably ancestral, mode of reproduction is the emission of winged males and females, which found a new colony independently after mating and dispersal flights (Hölldobler & Wilson 1990). The alternative reproductive tactic for ant queens in multiple-queen (polygyne) colonies is to seek to be re-accepted into their natal colonies, where they may remain as additional reproductives or subsequently disperse on foot with part of the colony (budding) (Bourke & Franks 1995; Crozier & Pamilo 1996; Hölldobler & Wilson 1990). Such ant colonies can contain up to several hundred reproductive queens with an even more numerous workforce (Cherix 1980; Cherix 1983). As a consequence, in polygynous ants the relatedness among nestmates is very low, and workers raise the brood of queens to which they are only distantly related (Crozier & Pamilo 1996; Queller & Strassmann 1998). Therefore workers could increase their inclusive fitness by preferentially caring for their closest relatives and discriminating against less related or foreign individuals (Keller 1997; Queller & Strassmann 2002; Tarpy et al. 2004). However, the bulk of the evidence suggests that social insects do not behave nepotistically, probably because of the costs entailed by decreased colony efficiency or discrimination errors (Keller 1997). Recently, the consensus that nepotistic behaviour does not occur in insect colonies was challenged by a study in the ant Formica fusca (Hannonen & Sundström 2003b) showing that the reproductive share of queens more closely related to the workers increases during brood development. However, this pattern can be explained either by nepotism, with workers preferentially rearing the brood of more closely related queens, or by intrinsic differences in the viability of eggs laid by queens. In the first chapter, we designed an experiment to disentangle nepotism and differences in brood viability. We tested whether workers prefer to rear their kin when given the choice between highly related and unrelated brood in the ant F. exsecta.
We also looked for differences in egg viability among queens and simulated whether such differences in egg viability may mistakenly lead to the conclusion that workers behave nepotistically. The acceptance of queens in polygynous ants raises the question of whether the varying degree of relatedness affects their share in reproduction. In such colonies, workers should favour nestmate queens over foreign queens. Numerous studies have investigated reproductive skew and the partitioning of reproduction among queens (Bourke et al. 1997; Fournier et al. 2004; Fournier & Keller 2001; Hammond et al. 2006; Hannonen & Sundström 2003a; Heinze et al. 2001; Kümmerli & Keller 2007; Langer et al. 2004; Pamilo & Seppä 1994; Ross 1988; Ross 1993; Rüppell et al. 2002), yet almost no information is available on whether differences among queens in their relatedness to other colony members affect their share in reproduction. Such data are necessary to compare the relative reproductive success of dispersing and non-dispersing individuals. Moreover, information on whether there is a difference in reproductive success between resident and dispersing queens is also important for our understanding of the genetic structure of ant colonies and the dynamics of within-group conflicts. In chapter two, we created single-queen colonies and then introduced a foreign queen originating from another colony kept under similar conditions in order to estimate the rate of queen acceptance into foreign established colonies, and to quantify the reproductive share of resident and introduced queens. An increasing number of studies have investigated the discrimination abilities of ant workers (e.g. Holzer et al. 2006; Pedersen et al. 2006), but few have addressed the recognition and discrimination behaviour of workers towards reproductive individuals entering colonies (Bennett 1988; Brown et al. 2003; Evans 1996; Fortelius et al. 1993; Kikuchi et al. 2007; Rosengren & Pamilo 1986; Stuart et al. 1993; Sundström 1997; Vásquez & Silverman in press). These studies are important, because accepting new queens will generally have a large impact on colony kin structure and the inclusive fitness of workers (Heinze & Keller 2000). In chapter three, we examined whether resident workers reject young foreign queens that enter their nest. We introduced mated queens into their natal nest, a foreign female-producing nest, or a foreign male-producing nest and measured their survival. In addition, we also introduced young virgin and mated queens into their natal nest to examine whether the mating status of the queens influences their survival and acceptance by workers. On top of polygyny, some ant species have evolved an extraordinary social organization called 'unicoloniality' (Hölldobler & Wilson 1977; Pedersen et al. 2006). In unicolonial ants, intercolony borders are absent and workers and queens mix among the physically separated nests, such that nests form one large supercolony. Supercolonies can become very large, so that direct cooperative interactions are impossible between individuals of distant nests. Unicoloniality is an evolutionary paradox and a potential problem for kin selection theory because the mixing of queens and workers between nests leads to extremely low relatedness among nestmates (Bourke & Franks 1995; Crozier & Pamilo 1996; Keller 1995). A better understanding of the evolution and maintenance of unicoloniality requires detailed information on the discrimination behavior, dispersal, population structure, and the scale of competition.
Cryptic genetic population structure may provide important information on the relevant scale to be considered when measuring relatedness and the role of kin selection. Theoretical studies have shown that relatedness should be measured at the level of the 'economic neighborhood', which is the scale at which intraspecific competition generally takes place (Griffin & West 2002; Kelly 1994; Queller 1994; Taylor 1992). In chapter four, we conducted a large-scale study to determine whether the unicolonial ant Formica paralugubris forms populations that are organised in discrete supercolonies or whether there is a continuous gradation in the level of aggression that may correlate with genetic isolation by distance and/or spatial distance between nests. In chapter five, we investigated the fine-scale population structure in three populations of F. paralugubris. We developed mitochondrial markers, which together with the nuclear markers allowed us to detect cryptic genetic clusters of nests, to obtain more precise information on the genetic differentiation within populations, and to separate male and female gene flow. These new data provide important information on the scale to be considered when measuring relatedness in native unicolonial populations.
Abstract:
In this paper, we argue that important labor market phenomena can be better understood if one takes (a) the inherent incompleteness and relational nature of most employment contracts and (b) the existence of reference-dependent fairness concerns among a substantial share of the population into account. Theory shows and experiments confirm that, even if fairness concerns were to exert only weak effects in one-shot interactions, repeated interactions greatly magnify the relevance of such concerns on economic outcomes. We also review evidence from laboratory and field experiments examining the role of wages and fairness on effort, derive predictions from our approach for entry-level wages and incumbent workers' wages, confront these predictions with the evidence, and show that reference-dependent fairness concerns may have important consequences for the effects of economic policies such as minimum wage laws.
Abstract:
The Iowa State Highway Commission initiated this research to evaluate a new lowering device for tower luminaires and a new concept of tower luminaire light distribution. Lighting at the West interchange of I-80, I-35, and I-235 in Polk County was also designated as an FHWA experimental project. As highway lighting has become more widely used, highway officials recognized the increasing importance of reducing safety hazards and improving aesthetic appearance of lighting installations. Also, lighting construction, energy, and maintenance costs were absorbing a larger share of the maintenance budget. A search began for a method of lighting whereby the fixed objects by the roadside could be eliminated or reduced in number, the costs could be reduced and the quality of lighting improved over existing methods. Lack of design data in this area illustrated the need for research.
Abstract:
The research reported in this series of articles aimed at (1) automating the search of questioned ink specimens in ink reference collections and (2) evaluating the strength of ink evidence in a transparent and balanced manner. These aims require that ink samples are analysed in an accurate and reproducible way and that they are compared in an objective and automated way. This latter requirement is due to the large number of comparisons that are necessary in both scenarios. A research programme was designed to (a) develop a standard methodology for analysing ink samples in a reproducible way, (b) compare ink samples automatically and objectively, and (c) evaluate the proposed methodology in forensic contexts. This report focuses on the last of the three stages of the research programme. The calibration and acquisition process and the mathematical comparison algorithms were described in previous papers [C. Neumann, P. Margot, New perspectives in the use of ink evidence in forensic science-Part I: Development of a quality assurance process for forensic ink analysis by HPTLC, Forensic Sci. Int. 185 (2009) 29-37; C. Neumann, P. Margot, New perspectives in the use of ink evidence in forensic science-Part II: Development and testing of mathematical algorithms for the automatic comparison of ink samples analysed by HPTLC, Forensic Sci. Int. 185 (2009) 38-50]. In this paper, the benefits and challenges of the proposed concepts are tested in two forensic contexts: (1) ink identification and (2) ink evidential value assessment. The results show that different algorithms are better suited for different tasks. This research shows that it is possible to build digital ink libraries using the most commonly used ink analytical technique, i.e. high-performance thin-layer chromatography, despite its reputation of lacking reproducibility. More importantly, it is possible to assign evidential value to ink evidence in a transparent way using a probabilistic model. It is therefore possible to move away from the traditional subjective approach, which is entirely based on experts' opinion, and which is usually not very informative. While there is room for improvement, this report demonstrates the significant gains obtained over the traditional subjective approach for the search of ink specimens in ink databases, and the interpretation of their evidential value.
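As a hedged illustration of what an automated, objective comparison of ink samples can look like (this is not the algorithm developed in Part II of the series; the profile representation is hypothetical), one simple option is to correlate normalized HPTLC intensity profiles:

```python
import numpy as np

def ink_similarity(profile_a, profile_b):
    """Pearson correlation between two HPTLC intensity profiles.

    profile_a, profile_b: 1-D arrays of densitometric intensities sampled at the
    same retention-factor positions (hypothetical representation of a lane scan).
    Returns a score in [-1, 1]; higher means more similar profiles.
    """
    a = (profile_a - profile_a.mean()) / profile_a.std()
    b = (profile_b - profile_b.mean()) / profile_b.std()
    return float(np.mean(a * b))

# Toy example: a questioned ink vs. one reference entry of a digital ink library
questioned = np.array([0.10, 0.80, 0.30, 0.05, 0.60])
reference = np.array([0.12, 0.75, 0.35, 0.07, 0.55])
print(ink_similarity(questioned, reference))  # close to 1 for near-identical inks
```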
Abstract:
Unlike the evaluation of single items of scientific evidence, the formal study and analysis of the joint evaluation of several distinct items of forensic evidence has to date received some punctual, rather than systematic, attention. Questions about (i) the relationships among a set of (usually unobservable) propositions and a set of (observable) items of scientific evidence, (ii) the joint probative value of a collection of distinct items of evidence, as well as (iii) the contribution of each individual item within a given group of pieces of evidence still represent fundamental areas of research. To some degree, this is remarkable since both forensic science theory and practice, as well as many daily inference tasks, require the consideration of multiple items if not masses of evidence. A recurrent and particular complication that arises in such settings is that the application of probability theory, i.e. the reference method for reasoning under uncertainty, becomes increasingly demanding. The present paper takes this as a starting point and discusses graphical probability models, i.e. Bayesian networks, as a framework within which the joint evaluation of scientific evidence can be approached in some viable way. Based on a review of the existing main contributions in this area, the article aims at presenting instances of real case studies from the author's institution in order to point out the usefulness and capacities of Bayesian networks for the probabilistic assessment of the probative value of multiple and interrelated items of evidence. A main emphasis is placed on underlying general patterns of inference, their representation as well as their graphical probabilistic analysis. Attention is also drawn to inferential interactions, such as redundancy, synergy and directional change. These distinguish the joint evaluation of evidence from assessments of isolated items of evidence. Together, these topics present aspects of interest to both domain experts and recipients of expert information, because they have a bearing on how multiple items of evidence are meaningfully and appropriately set into context.
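To make the notions of joint probative value and of inferential interactions such as redundancy concrete, here is a minimal numerical sketch with invented probabilities (not a case from the paper): the likelihood ratio of two items of evidence evaluated jointly is compared with the naive product of their individual likelihood ratios.

```python
# Two items of evidence E1, E2 bearing on a proposition H (invented numbers).
# The conditional dependence of E2 on E1 is purely illustrative.
p_e1_given_h, p_e1_given_not_h = 0.8, 0.2
# E2 partly repeats the information in E1 (dependence -> possible redundancy):
p_e2_given_h_e1, p_e2_given_not_h_e1 = 0.85, 0.40
p_e2_given_h_no1, p_e2_given_not_h_no1 = 0.30, 0.10

# Individual likelihood ratios (marginal for E2):
lr1 = p_e1_given_h / p_e1_given_not_h
p_e2_given_h = p_e2_given_h_e1 * p_e1_given_h + p_e2_given_h_no1 * (1 - p_e1_given_h)
p_e2_given_not_h = (p_e2_given_not_h_e1 * p_e1_given_not_h
                    + p_e2_given_not_h_no1 * (1 - p_e1_given_not_h))
lr2 = p_e2_given_h / p_e2_given_not_h

# Joint likelihood ratio for both items considered together:
lr_joint = (p_e1_given_h * p_e2_given_h_e1) / (p_e1_given_not_h * p_e2_given_not_h_e1)

# Here lr_joint (8.5) is smaller than lr1 * lr2 (18.5): the second item is
# partly redundant; equality holds only if E1 and E2 are conditionally independent.
print(lr1, lr2, lr_joint, lr1 * lr2)
```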
Abstract:
Let $ E_{\lambda}(z)=\lambda {\rm exp}(z), \lambda\in \mathbb{C}$, be the complex exponential family. For all functions in the family there is a unique asymptotic value at 0 (and no critical values). For a fixed $ \lambda$, the set of points in $ \mathbb{C}$ with orbit tending to infinity is called the escaping set. We prove that the escaping set of $ E_{\lambda}$ with $ \lambda$ Misiurewicz (that is, a parameter for which the orbit of the singular value is strictly preperiodic) is a connected set.
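A numerical illustration of the escaping set follows (a rough sketch only: finite iteration against a fixed escape radius can merely approximate "orbit tending to infinity", and the example parameter is arbitrary, not a Misiurewicz one):

```python
import cmath

def escapes(z, lam, max_iter=50, radius=1e8):
    """Crude test of whether the orbit of z under E_lambda(z) = lam * exp(z)
    leaves the disk of the given radius within max_iter steps (a finite-time
    proxy for membership in the escaping set)."""
    for _ in range(max_iter):
        try:
            z = lam * cmath.exp(z)
        except OverflowError:
            return True
        if abs(z) > radius:
            return True
    return False

lam = 0.3  # arbitrary parameter with an attracting real fixed point (not Misiurewicz)
print(escapes(10.0 + 0.0j, lam))  # True: large real part, the orbit blows up
print(escapes(0.0 + 0.0j, lam))   # False: the orbit settles near a fixed point ~0.49
```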
Abstract:
EXECUTIVE SUMMARY : Evaluating Information Security Posture within an organization is becoming a very complex task. Currently, the evaluation and assessment of Information Security are commonly performed using frameworks, methodologies and standards which often consider the various aspects of security independently. Unfortunately this is ineffective because it does not take into consideration the necessity of having a global and systemic multidimensional approach to Information Security evaluation. At the same time the overall security level is globally considered to be only as strong as its weakest link. This thesis proposes a model aiming to holistically assess all dimensions of security in order to minimize the likelihood that a given threat will exploit the weakest link. A formalized structure taking into account all security elements is presented; this is based on a methodological evaluation framework in which Information Security is evaluated from a global perspective. This dissertation is divided into three parts. Part One: Information Security Evaluation issues consists of four chapters. Chapter 1 is an introduction to the purpose of this research and the model that will be proposed. In this chapter we raise some questions with respect to "traditional evaluation methods" and identify the principal elements to be addressed in this direction. Then we introduce the baseline attributes of our model and set out the expected result of evaluations according to our model. Chapter 2 is focused on the definition of Information Security to be used as a reference point for our evaluation model. The inherent concepts of the contents of a holistic and baseline Information Security Program are defined. Based on this, the most common roots-of-trust in Information Security are identified. Chapter 3 focuses on an analysis of the difference and the relationship between the concepts of Information Risk and Security Management. Comparing these two concepts allows us to identify the most relevant elements to be included within our evaluation model, while clearly situating these two notions within a defined framework is of the utmost importance for the results that will be obtained from the evaluation process. Chapter 4 sets out our evaluation model and the way it addresses issues relating to the evaluation of Information Security. Within this chapter the underlying concepts of assurance and trust are discussed. Based on these two concepts, the structure of the model is developed in order to provide an assurance-related platform as well as three evaluation attributes: "assurance structure", "quality issues", and "requirements achievement". Issues relating to each of these evaluation attributes are analysed with reference to sources such as methodologies, standards and published research papers. Then the operation of the model is discussed. Assurance levels, quality levels and maturity levels are defined in order to perform the evaluation according to the model. Part Two: Implementation of the Information Security Assurance Assessment Model (ISAAM) according to the Information Security Domains consists of four chapters. This is the section where our evaluation model is put into a well-defined context with respect to the four pre-defined Information Security dimensions: the Organizational dimension, Functional dimension, Human dimension, and Legal dimension. Each Information Security dimension is discussed in a separate chapter.
For each dimension, the following two-phase evaluation path is followed. The first phase concerns the identification of the elements which will constitute the basis of the evaluation: the identification of the key elements within the dimension; the identification of the Focus Areas for each dimension, consisting of the security issues identified for each dimension; and the identification of the Specific Factors for each dimension, consisting of the security measures or controls addressing the security issues identified for each dimension. The second phase concerns the evaluation of each Information Security dimension by: the implementation of the evaluation model, based on the elements identified for each dimension within the first phase, by identifying the security tasks, processes, procedures, and actions that should have been performed by the organization to reach the desired level of protection; and the maturity model for each dimension as a basis for reliance on security. For each dimension we propose a generic maturity model that could be used by every organization in order to define its own security requirements. Part three of this dissertation contains the Final Remarks, Supporting Resources and Annexes. With reference to the objectives of our thesis, the Final Remarks briefly analyse whether these objectives were achieved and suggest directions for future related research. Supporting resources comprise the bibliographic resources that were used to elaborate and justify our approach. Annexes include all the relevant topics identified within the literature to illustrate certain aspects of our approach. Our Information Security evaluation model is based on and integrates different Information Security best practices, standards, methodologies and research expertise which can be combined in order to define a reliable categorization of Information Security. After the definition of terms and requirements, an evaluation process should be performed in order to obtain evidence that the Information Security within the organization in question is adequately managed. We have specifically integrated into our model the most useful elements of these sources of information in order to provide a generic model able to be implemented in all kinds of organizations. The value added by our evaluation model is that it is easy to implement and operate and answers concrete needs in terms of reliance upon an efficient and dynamic evaluation tool through a coherent evaluation system. On that basis, our model could be implemented internally within organizations, allowing them to better govern their Information Security. RÉSUMÉ : Contexte général de la thèse L'évaluation de la sécurité en général, et plus particulièrement, celle de la sécurité de l'information, est devenue pour les organisations non seulement une mission cruciale à réaliser, mais aussi de plus en plus complexe. A l'heure actuelle, cette évaluation se base principalement sur des méthodologies, des bonnes pratiques, des normes ou des standards qui appréhendent séparément les différents aspects qui composent la sécurité de l'information. Nous pensons que cette manière d'évaluer la sécurité est inefficiente, car elle ne tient pas compte de l'interaction des différentes dimensions et composantes de la sécurité entre elles, bien qu'il soit admis depuis longtemps que le niveau de sécurité globale d'une organisation est toujours celui du maillon le plus faible de la chaîne sécuritaire.
Nous avons identifié le besoin d'une approche globale, intégrée, systémique et multidimensionnelle de l'évaluation de la sécurité de l'information. En effet, et c'est le point de départ de notre thèse, nous démontrons que seule une prise en compte globale de la sécurité permettra de répondre aux exigences de sécurité optimale ainsi qu'aux besoins de protection spécifiques d'une organisation. Ainsi, notre thèse propose un nouveau paradigme d'évaluation de la sécurité afin de satisfaire aux besoins d'efficacité et d'efficience d'une organisation donnée. Nous proposons alors un modèle qui vise à évaluer d'une manière holistique toutes les dimensions de la sécurité, afin de minimiser la probabilité qu'une menace potentielle puisse exploiter des vulnérabilités et engendrer des dommages directs ou indirects. Ce modèle se base sur une structure formalisée qui prend en compte tous les éléments d'un système ou programme de sécurité. Ainsi, nous proposons un cadre méthodologique d'évaluation qui considère la sécurité de l'information à partir d'une perspective globale. Structure de la thèse et thèmes abordés Notre document est structuré en trois parties. La première intitulée : « La problématique de l'évaluation de la sécurité de l'information » est composée de quatre chapitres. Le chapitre 1 introduit l'objet de la recherche ainsi que les concepts de base du modèle d'évaluation proposé. La maniéré traditionnelle de l'évaluation de la sécurité fait l'objet d'une analyse critique pour identifier les éléments principaux et invariants à prendre en compte dans notre approche holistique. Les éléments de base de notre modèle d'évaluation ainsi que son fonctionnement attendu sont ensuite présentés pour pouvoir tracer les résultats attendus de ce modèle. Le chapitre 2 se focalise sur la définition de la notion de Sécurité de l'Information. Il ne s'agit pas d'une redéfinition de la notion de la sécurité, mais d'une mise en perspectives des dimensions, critères, indicateurs à utiliser comme base de référence, afin de déterminer l'objet de l'évaluation qui sera utilisé tout au long de notre travail. Les concepts inhérents de ce qui constitue le caractère holistique de la sécurité ainsi que les éléments constitutifs d'un niveau de référence de sécurité sont définis en conséquence. Ceci permet d'identifier ceux que nous avons dénommés « les racines de confiance ». Le chapitre 3 présente et analyse la différence et les relations qui existent entre les processus de la Gestion des Risques et de la Gestion de la Sécurité, afin d'identifier les éléments constitutifs du cadre de protection à inclure dans notre modèle d'évaluation. Le chapitre 4 est consacré à la présentation de notre modèle d'évaluation Information Security Assurance Assessment Model (ISAAM) et la manière dont il répond aux exigences de l'évaluation telle que nous les avons préalablement présentées. Dans ce chapitre les concepts sous-jacents relatifs aux notions d'assurance et de confiance sont analysés. En se basant sur ces deux concepts, la structure du modèle d'évaluation est développée pour obtenir une plateforme qui offre un certain niveau de garantie en s'appuyant sur trois attributs d'évaluation, à savoir : « la structure de confiance », « la qualité du processus », et « la réalisation des exigences et des objectifs ». 
Les problématiques liées à chacun de ces attributs d'évaluation sont analysées en se basant sur l'état de l'art de la recherche et de la littérature, sur les différentes méthodes existantes ainsi que sur les normes et les standards les plus courants dans le domaine de la sécurité. Sur cette base, trois différents niveaux d'évaluation sont construits, à savoir : le niveau d'assurance, le niveau de qualité et le niveau de maturité qui constituent la base de l'évaluation de l'état global de la sécurité d'une organisation. La deuxième partie: « L'application du Modèle d'évaluation de l'assurance de la sécurité de l'information par domaine de sécurité » est elle aussi composée de quatre chapitres. Le modèle d'évaluation déjà construit et analysé est, dans cette partie, mis dans un contexte spécifique selon les quatre dimensions prédéfinies de sécurité qui sont: la dimension Organisationnelle, la dimension Fonctionnelle, la dimension Humaine, et la dimension Légale. Chacune de ces dimensions et son évaluation spécifique fait l'objet d'un chapitre distinct. Pour chacune des dimensions, une évaluation en deux phases est construite comme suit. La première phase concerne l'identification des éléments qui constituent la base de l'évaluation: ? Identification des éléments clés de l'évaluation ; ? Identification des « Focus Area » pour chaque dimension qui représentent les problématiques se trouvant dans la dimension ; ? Identification des « Specific Factors » pour chaque Focus Area qui représentent les mesures de sécurité et de contrôle qui contribuent à résoudre ou à diminuer les impacts des risques. La deuxième phase concerne l'évaluation de chaque dimension précédemment présentées. Elle est constituée d'une part, de l'implémentation du modèle général d'évaluation à la dimension concernée en : ? Se basant sur les éléments spécifiés lors de la première phase ; ? Identifiant les taches sécuritaires spécifiques, les processus, les procédures qui auraient dû être effectués pour atteindre le niveau de protection souhaité. D'autre part, l'évaluation de chaque dimension est complétée par la proposition d'un modèle de maturité spécifique à chaque dimension, qui est à considérer comme une base de référence pour le niveau global de sécurité. Pour chaque dimension nous proposons un modèle de maturité générique qui peut être utilisé par chaque organisation, afin de spécifier ses propres exigences en matière de sécurité. Cela constitue une innovation dans le domaine de l'évaluation, que nous justifions pour chaque dimension et dont nous mettons systématiquement en avant la plus value apportée. La troisième partie de notre document est relative à la validation globale de notre proposition et contient en guise de conclusion, une mise en perspective critique de notre travail et des remarques finales. Cette dernière partie est complétée par une bibliographie et des annexes. Notre modèle d'évaluation de la sécurité intègre et se base sur de nombreuses sources d'expertise, telles que les bonnes pratiques, les normes, les standards, les méthodes et l'expertise de la recherche scientifique du domaine. Notre proposition constructive répond à un véritable problème non encore résolu, auquel doivent faire face toutes les organisations, indépendamment de la taille et du profil. 
Cela permettrait à ces dernières de spécifier leurs exigences particulières en matière du niveau de sécurité à satisfaire, d'instancier un processus d'évaluation spécifique à leurs besoins afin qu'elles puissent s'assurer que leur sécurité de l'information soit gérée d'une manière appropriée, offrant ainsi un certain niveau de confiance dans le degré de protection fourni. Nous avons intégré dans notre modèle le meilleur du savoir faire, de l'expérience et de l'expertise disponible actuellement au niveau international, dans le but de fournir un modèle d'évaluation simple, générique et applicable à un grand nombre d'organisations publiques ou privées. La valeur ajoutée de notre modèle d'évaluation réside précisément dans le fait qu'il est suffisamment générique et facile à implémenter tout en apportant des réponses sur les besoins concrets des organisations. Ainsi notre proposition constitue un outil d'évaluation fiable, efficient et dynamique découlant d'une approche d'évaluation cohérente. De ce fait, notre système d'évaluation peut être implémenté à l'interne par l'entreprise elle-même, sans recourir à des ressources supplémentaires et lui donne également ainsi la possibilité de mieux gouverner sa sécurité de l'information.
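To make the dimension / Focus Area / Specific Factor structure described in the English summary above concrete, a hedged sketch of how the evaluation inputs could be organized is given below; the entries are generic placeholders, not the factors or maturity scales defined in the thesis.

```python
# Generic placeholder structure for an ISAAM-style evaluation (illustrative only;
# the actual Focus Areas, Specific Factors and levels are defined in the thesis).
isaam_dimensions = {
    "Organizational": {
        "focus_areas": {
            "Security governance": {          # a security issue within the dimension
                "specific_factors": [          # measures/controls addressing the issue
                    "Documented security policy",
                    "Defined roles and responsibilities",
                ],
                "maturity_level": 3,           # e.g. on a 1-5 maturity scale
            },
        },
    },
    "Functional": {"focus_areas": {}},
    "Human": {"focus_areas": {}},
    "Legal": {"focus_areas": {}},
}

def dimension_maturity(dimension):
    """Average maturity over a dimension's Focus Areas (one possible aggregation)."""
    levels = [fa["maturity_level"]
              for fa in dimension["focus_areas"].values() if "maturity_level" in fa]
    return sum(levels) / len(levels) if levels else None

print(dimension_maturity(isaam_dimensions["Organizational"]))  # 3.0
```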
Abstract:
Résumé Métropolisation, morphologie urbaine et développement durable. Transformations urbaines et régulation de l'étalement : le cas de l'agglomération lausannoise. Cette thèse s'inscrit dans la perspective d'une analyse stratégique visant à définir et à expliciter les liens entre connaissance, expertise et décision politique. L'hypothèse fondamentale qui oriente l'ensemble de ce travail est la suivante : le régime d'urbanisation qui s'est imposé au cours des trente dernières années correspond à une transformation du principe morphogénétique de développement spatial des agglomérations qui tend à alourdir leurs bilans écologiques et à péjorer la qualité du cadre de vie des citadins. Ces enjeux environnementaux liés aux changements urbains et singulièrement ceux de la forme urbaine constituent un thème de plus en plus important dans la recherche de solutions d'aménagement urbain dans une perspective de développement durable. Dans ce contexte, l'aménagement urbain devient un mode d'action et une composante de tout premier ordre des politiques publiques visant un développement durable à l'échelle locale et globale. Ces modalités de développement spatial des agglomérations émergent indiscutablement au coeur de la problématique environnementale. Or si le concept de développement durable nous livre une nouvelle lecture des territoires et de leurs transformations, en prônant le modèle de la ville compacte et son corollaire la densification, la traduction à donner à ce principe stratégique reste controversée, notamment sous l'angle de l'aménagement du territoire et des stratégies de développement urbain permettant une mise en oeuvre adéquate des solutions proposées. Nous avons ainsi tenté dans ce travail de répondre à un certain nombre de questions : quelle validité accorder au modèle de la ville compacte ? La densification est-elle une réponse adéquate ? Si oui, sous quelles modalités ? Quelles sont, en termes de stratégies d'aménagement, les alternatives durables au modèle de la ville étalée ? Faut-il vraiment densifier ou simplement maîtriser la dispersion ? Notre objectif principal étant in fine de déterminer les orientations et contenus urbanistiques de politiques publiques visant à réguler l'étalement urbain, de valider la faisabilité de ces principes et à définir les conditions de leur mise en place dans le cas d'une agglomération. Pour cela, et après avoir choisi l'agglomération lausannoise comme terrain d'expérimentation, trois approches complémentaires se sont révélées indispensables dans ce travail : 1. une approche théorique visant à définir un cadre conceptuel interdisciplinaire d'analyse du phénomène urbain dans ses rapports à la problématique du développement durable liant régime d'urbanisation - forme urbaine - développement durable ; 2. une approche méthodologique proposant des outils d'analyse simples et efficaces de description des nouvelles morphologies urbaines pour une meilleure gestion de l'environnement urbain et de la pratique de l'aménagement urbain ; 3. une approche pragmatique visant à approfondir la réflexion sur la ville étalée en passant d'une approche descriptive des conséquences du nouveau régime d'urbanisation à une approche opérationnelle, visant à identifier les lignes d'actions possibles dans une perspective de développement durable. Cette démarche d'analyse nous a conduits à trois résultats majeurs, nous permettant de définir une stratégie de lutte contre l'étalement.
Premièrement, si la densification est acceptée comme un objectif stratégique de l'aménagement urbain, le modèle de la ville dense ne peut être appliqué sans la prise en considération d'autres objectifs d'aménagement. Il ne suffit pas de densifier pour réduire l'empreinte écologique de la ville et améliorer la qualité de vie des citadins. La recherche d'une forme urbaine plus durable est tributaire d'une multiplicité de facteurs et d'effets de synergie et la maîtrise des effets négatifs de l'étalement urbain passe par la mise en oeuvre de politiques urbaines intégrées et concertées, comme par exemple prôner la densification qualifiée comme résultante d'un processus finalisé, intégrer et valoriser les transports collectifs et encore plus la métrique pédestre avec l'aménagement urbain, intégrer systématiquement la diversité à travers les dimensions physique et sociale du territoire. Deuxièmement, l'avenir de ces territoires étalés n'est pas figé. Notre enquête de terrain a montré une évolution des modes d'habitat liée aux modes de vie, à l'organisation du travail, à la mobilité, qui font que l'on peut penser à un retour d'une partie de la population dans les villes centres (fin de la toute puissance du modèle de la maison individuelle). Ainsi, le diagnostic et la recherche de solutions d'aménagement efficaces et viables ne peuvent être dissociés des demandes des habitants et des comportements des acteurs de la production du cadre bâti. Dans cette perspective, tout programme d'urbanisme doit nécessairement s'appuyer sur la connaissance des aspirations de la population. Troisièmement, la réussite de la mise en oeuvre d'une politique globale de maîtrise des effets négatifs de l'étalement urbain est fortement conditionnée par l'adaptation de l'offre immobilière à la demande de nouveaux modèles d'habitat répondant à la fois à la nécessité d'une maîtrise des coûts de l'urbanisation (économiques, sociaux, environnementaux), ainsi qu'aux aspirations émergentes des ménages. Ces résultats nous ont permis de définir les orientations d'une stratégie de lutte contre l'étalement, dont nous avons testé la faisabilité ainsi que les conditions de mise en oeuvre sur le territoire de l'agglomération lausannoise. Abstract This dissertation participates in the perspective of a strategic analysis aiming at specifying the links between knowledge, expertise and political decision. The fundamental hypothesis directing this study assumes that the urban dynamics that has characterized the past thirty years signifies a transformation of the morphogenetic principle of agglomerations' spatial development that results in a worsening of their ecological balance and of city dwellers' quality of life. The environmental implications linked to urban changes and particularly to changes in urban form constitute an ever greater share of research into sustainable urban planning solutions. In this context, urban planning becomes a mode of action and an essential component of public policies aiming at local and global sustainable development. These patterns of spatial development indisputably emerge at the heart of environmental issues. If the concept of sustainable development provides us with new understanding into territories and their transformations, by arguing in favor of densification, its concretization remains at issue, especially in terms of urban planning and of urban development strategies allowing the appropriate implementations of the solutions offered.
Thus, this study tries to answer a certain number of questions: what validity should be granted to the model of the dense city? Is densification an adequate answer? If so, under what terms? What are the sustainable alternatives to urban sprawl in terms of planning strategies? Should densification really be pursued or should we simply try to master urban sprawl? Our main objective being in fine to determine the directions and urban contents of public policies aiming at regulating urban sprawl, to validate the feasibility of these principles and to define the conditions of their implementation in the case of one agglomeration. Once the Lausanne agglomeration had been chosen as experimentation field, three complementary approaches proved to be essential to this study: 1. a theoretical approach aiming at defining an interdisciplinary conceptual framework of the urban phenomenon in its relation to sustainable development linking urban dynamics - urban form - sustainable development; 2. a methodological approach proposing simple and effective tools for analyzing and describing new urban morphologies for a better management of the urban environment and of urban planning practices; 3. a pragmatic approach aiming at deepening reflection on urban sprawl by switching from a descriptive approach of the consequences of the new urban dynamics to an operational approach, aiming at identifying possible avenues of action respecting the principles of sustainable development. This analysis approach provided us with three major results, allowing us to define a strategy to curtail urban sprawl. First, if densification is accepted as a strategic objective of urban planning, the model of the dense city cannot be applied without taking into consideration other urban planning objectives. Densification does not suffice to reduce the ecological impact of the city and improve the quality of life of its dwellers. The search for a more sustainable urban form depends on a multitude of factors and effects of synergy. Reducing the negative effects of urban sprawl requires the implementation of integrated and concerted urban policies, like for example encouraging densification qualified as resulting from a finalized process, integrating and developing collective forms of transportation and even more so the pedestrian metric with urban planning, integrating diversity on a systematic basis through the physical and social dimensions of the territory. Second, the future of such sprawling territories is not fixed. Our research on the ground revealed an evolution in the modes of habitat related to ways of life, work organization and mobility that suggest the possibility of the return of a part of the population to the center of cities (end of the rule of the model of the individual home). Thus, the diagnosis and the search for effective and sustainable solutions cannot be conceived of independently of the needs of the inhabitants and of the behavior of the actors behind the production of the built territory. In this perspective, any urban program must necessarily be based upon the knowledge of the population's wishes. Third, the successful implementation of a global policy of control of urban sprawl's negative effects is highly influenced by the adaptation of property offer to the demand of new habitat models satisfying both the necessity of urbanization cost controls (economic, social, environmental) and people's emerging aspirations. These results allowed us to define a strategy to curtail urban sprawl.
Its feasibility and conditions of implementation were tested on the territory of the Lausanne agglomeration.
Abstract:
The aim of this work was to investigate the combustibility of the reject and waste streams of a paper mill with integrated mechanical pulp production if the degree of closure of the mill's water circuits is increased. In order to assess the state of the process after closure, the present-day PK3 (paper machine 3) process at the Anjala paper mill was studied from the debarking plant to the wastewater treatment plant. The literature part discussed the origin of reject and waste streams in a paper mill using mechanical pulp. Present-day wastewater treatment processes, as well as process-water purification techniques applicable to circuit closure, were also briefly presented. In addition, the combustion technologies currently used in the forest industry were reviewed, together with the characterization of fuels with respect to boiler operability and emissions. At Anjala PK3, either both peroxide- and dithionite-bleached or only dithionite-bleached groundwood is used, depending on the grade in production. The wastewater, sludge and other waste streams generated in the PK3 process were determined under both bleaching conditions. The wastewater fractions containing the most dissolved organic matter, namely the purges of the hot-water circulation and clear filtrate of grinding plant 3 and the bark press filtrate, were selected as the streams to be purified when evaluating process closure. When peroxide bleaching was used at grinding plant 3, the TOC load to the river was 30% higher than with dithionite bleaching alone. If the degree of process closure were increased, the TOC load would be 30% lower than today with peroxide bleaching in use (assuming 80% purification efficiency). With increased process closure, about 30% less biosludge would be formed compared with the current situation, since less of the organic matter used as nutrition by the microbes would end up in the wastewater. The effect of peroxide bleaching at grinding plant 3 on boiler operability and emissions was small, since biosludge accounted for only 4% of the fuel feed, and only part of the biosludge was formed in removing the organic matter originating from grinding plant 3. If the share of the current main fuel, PDF, is disregarded, the SO2 and NOx emissions and the sintering tendency of the fluidized bed are slightly higher when peroxide bleaching is used at grinding plant 3 than with dithionite bleaching alone. If the concentrates from the purification of the bark press and grinding plant filtrates were led to a BFB-type boiler for combustion, sintering of the fluidized bed would be the biggest problem. Heavy metal, SO2 and NOx emissions would also increase significantly compared with the current situation. In contrast, the boiler's corrosion risk would hardly increase. In addition, the moisture content of the concentrates would be high, which would make combustion unprofitable, since evaporating the water requires a great deal of energy. More detailed research is still needed on the effects of process closure on emissions and boiler operability. Other disposal options for the concentrates should also be studied further.
Abstract:
Hearing loss can be caused by a variety of insults, including acoustic trauma and exposure to ototoxins, that principally affect the viability of sensory hair cells via the MAP kinase (MAPK) cell death signaling pathway that incorporates c-Jun N-terminal kinase (JNK). We evaluated the otoprotective efficacy of D-JNKI-1, a cell permeable peptide that blocks the MAPK-JNK signal pathway. The experimental studies included organ cultures of neonatal mouse cochlea exposed to an ototoxic drug and cochleae of adult guinea pigs that were exposed to either an ototoxic drug or acoustic trauma. Results obtained from the organ of Corti explants demonstrated that the MAPK-JNK signal pathway is associated with injury and that blocking of this signal pathway prevented apoptosis in areas of aminoglycoside damage. Treatment of the neomycin-exposed organ of Corti explants with D-JNKI-1 completely prevented hair cell death initiated by this ototoxin. Results from in vivo studies showed that direct application of D-JNKI-1 into the scala tympani of the guinea pig cochlea prevented nearly all hair cell death and permanent hearing loss induced by neomycin ototoxicity. Local delivery of D-JNKI-1 also prevented acoustic trauma-induced permanent hearing loss in a dose-dependent manner. These results indicate that the MAPK-JNK signal pathway is involved in both ototoxicity and acoustic trauma-induced hair cell loss and permanent hearing loss. Blocking this signal pathway with D-JNKI-1 is of potential therapeutic value for long-term protection of both the morphological integrity and physiological function of the organ of Corti during times of oxidative stress.