Abstract:
The objective of this essay is to reflect on a possible relation between entropy and emergence. A qualitative, relational approach is followed. We begin by highlighting that entropy includes the concept of dispersal, relevant to our enquiry. Emergence in complex systems arises from the coordinated behavior of their parts. Coordination in turn necessitates recognition between parts, i.e., information exchange. What will be argued here is that the scope of recognition processes between parts is increased when preceded by their dispersal, which multiplies the number of encounters and creates a richer potential for recognition. A process intrinsic to emergence is dissolvence (also known as submergence or top-down constraints), which participates in the information-entropy interplay underlying the creation, evolution and breakdown of higher-level entities.
Abstract:
The theory of small-world networks as initiated by Watts and Strogatz (1998) has brought new insights to spatial analysis as well as systems theory. The theory's concepts and methods are particularly relevant to geography, where spatial interaction is mainstream and where interactions can be described and studied using large numbers of exchanges or similarity matrices. Networks are organized through direct links or by indirect paths, inducing topological proximities that simultaneously involve spatial, social, cultural or organizational dimensions. Network synergies build over similarities and are fed by complementarities between or inside cities, with the two effects potentially amplifying each other according to the "preferential attachment" hypothesis that has been explored in a number of different scientific fields (Barabási, Albert 1999; Barabási 2002; Newman, Watts, Barabási). In fact, according to Barabási and Albert (1999), the high level of hierarchy observed in "scale-free networks" results from "preferential attachment", which characterizes the development of networks: new connections appear preferentially close to nodes that already have the largest number of connections, because in this way the improvement in the network accessibility of the new connection will likely be greater. However, at the same time, network regions gathering dense and numerous weak links (Granovetter 1985) or network entities acting as bridges between several components (Burt 2005) offer a higher capacity for urban communities to benefit from opportunities and create future synergies. Several methodologies have been suggested to identify such denser and more coherent regions (also called communities or clusters) in terms of links (Watts, Strogatz 1998; Watts 1999; Barabási, Albert 1999; Barabási 2002; Auber 2003; Newman 2006). These communities not only possess a high level of dependency among their member entities but also show a low level of "vulnerability", allowing for numerous redundancies (Burt 2000; Burt 2005). The SPANGEO project 2005–2008 (SPAtial Networks in GEOgraphy), gathering a team of geographers and computer scientists, included empirical studies to survey concepts and measures developed in related fields such as physics, sociology and communication science. The relevancy and potential interpretation of weighted or non-weighted measures on edges and nodes were examined and analyzed at different scales (intra-urban, inter-urban or both). New classification and clustering schemes based on the relative local density of subgraphs were developed. The present article describes how these notions and methods contribute, on a conceptual level, in terms of measures, delineations, explanatory analyses and visualization of geographical phenomena.
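As a concrete illustration of the "preferential attachment" rule discussed above, here is a minimal sketch (not from the article; node count and seed are arbitrary) of Barabási-Albert-style growth, in which each new node links to an existing node with probability proportional to its current degree:

```python
# Preferential attachment sketch, pure standard library. Sampling uniformly
# from a pool in which each node appears once per edge end picks targets
# with probability proportional to their degree.
import random

def preferential_attachment(n_nodes=100, seed=42):
    random.seed(seed)
    edges = [(0, 1)]          # seed network: two connected nodes
    degree_pool = [0, 1]      # one pool entry per edge endpoint
    for new in range(2, n_nodes):
        target = random.choice(degree_pool)
        edges.append((new, target))
        degree_pool += [new, target]
    return edges

degrees = {}
for a, b in preferential_attachment():
    degrees[a] = degrees.get(a, 0) + 1
    degrees[b] = degrees.get(b, 0) + 1
# hubs emerge: the maximum degree far exceeds the average degree
print(max(degrees.values()), sum(degrees.values()) / len(degrees))
```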
Abstract:
In the EU member states, policies for reconciling work and family life form a heterogeneous, more or less complex, but rarely coherent set of measures. Combining objectives such as raising fertility, protecting mothers and children, promoting equality between women and men, fighting child and lone-parent family poverty, and activating women's employment, these policies are strongly rooted in national traditions of family, employment and tax policy, and carry the heritage and tensions of each country's history. At a moment when a new supranational actor, the European Union, intervenes ever more explicitly in the debate on and definition of these policies, this thesis asks what influence the European référentiels have had on reconciliation discourses and policies in Italy and France since the late 1990s. Starting from a cognitive analysis of the Europeanization process, we show that the référentiels developed within the EU, owing to their abstract and vague character, have so far had little influence on discourses and policies in Italy and France. Crossing historical and discursive neo-institutionalist perspectives, our research was built around two axes of reasoning. First, we analysed the evolution of the discourses of the various European institutions (notably the European Commission, the European Council and the European Social Fund) and asked how a consensus could emerge between countries and actors with very different traditions in social policy, family policy and gender conventions. Second, we examined whether and how a frame of reference developed at Community level, as a kind of ideal to strive for, has influenced discourses and policies at the national level.
Geochemistry of the thermal springs and fumaroles of Basse-Terre Island, Guadeloupe, Lesser Antilles
Abstract:
The purpose of this work was to study jointly the volcanic-hydrothermal system of the high-risk volcano La Soufrière, in the southern part of Basse-Terre, and the geothermal area of Bouillante, on its western coast, to derive an all-embracing and coherent conceptual geochemical model that provides the necessary basis for adequate volcanic surveillance and further geothermal exploration. The active andesitic dome of La Soufrière has erupted eight times since 1660, most recently in 1976-1977. All these historic eruptions have been phreatic. High-salinity Na-Cl geothermal liquids circulate in the Bouillante geothermal reservoir at temperatures close to 250 °C. These Na-Cl solutions rise toward the surface, undergo boiling and mixing with groundwater and/or seawater, and feed most Na-Cl thermal springs in the central Bouillante area. The Na-Cl thermal springs are surrounded by Na-HCO3 thermal springs and by the Na-Cl thermal spring of Anse à la Barque (a groundwater slightly mixed with seawater), which are all heated through conductive transfer. The two main fumarolic fields of the La Soufrière area discharge vapors formed through boiling of hydrothermal aqueous solutions at temperatures of 190-215 °C below the "Ty" fault area and close to 260 °C below the dome summit. The boiling liquid producing the vapors of the Ty fault area has δD and δ18O values relatively similar to those of the Na-Cl liquids of the Bouillante geothermal reservoir, whereas the liquid generating the vapors of the summit fumaroles is strongly enriched in 18O, due to input of magmatic fluids from below. This process is also responsible for the paucity of CH4 in the fumaroles. The thermal features around the La Soufrière dome include: (a) Ca-SO4 springs, produced through absorption of hydrothermal vapors in shallow groundwaters; (b) conductively heated Ca-Na-HCO3 springs; and (c) two Ca-Na-Cl springs produced through mixing of shallow Ca-SO4 waters and deep Na-Cl hydrothermal liquids. The geographical distribution of the different thermal features of the La Soufrière area indicates the presence of: (a) a central zone dominated by the ascent of steam, which either discharges at the surface in the fumarolic fields or is absorbed in shallow groundwaters; and (b) an outer zone, where the shallow groundwaters are heated through conduction or addition of Na-Cl liquids coming from hydrothermal aquifer(s).
Abstract:
As a rigorous combination of probability theory and graph theory, Bayesian networks currently enjoy widespread interest as a means for studying factors that affect the coherent evaluation of scientific evidence in forensic science. Paper I of this series intends to contribute to the discussion of Bayesian networks as a framework that is helpful for both illustrating and implementing statistical procedures commonly employed for the study of uncertainties (e.g. the estimation of unknown quantities). While the respective statistical procedures are widely described in the literature, the primary aim of this paper is to offer an essentially non-technical introduction to how interested readers may use these analytical approaches, with the help of Bayesian networks, for processing their own forensic science data. Attention is mainly drawn to the structure and underlying rationale of a series of basic and context-independent network fragments that users may incorporate as building blocks while constructing larger inference models. As an example of how this may be done, the proposed concepts will be used in a second paper (Part II) for specifying graphical probability networks whose purpose is to assist forensic scientists in the evaluation of scientific evidence encountered in the context of forensic document examination (i.e. results of the analysis of black toners present on printed or copied documents).
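To make the idea of context-independent network fragments concrete, here is a minimal two-node fragment (a hypothesis H with evidence E as its only child), evaluated by direct application of Bayes' rule; all probabilities below are invented for illustration and are not values from the paper:

```python
# Two-node fragment H -> E: posterior for a binary hypothesis given
# binary evidence, computed from the prior and the two likelihoods.
def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    joint_h = prior_h * p_e_given_h
    joint_not_h = (1 - prior_h) * p_e_given_not_h
    return joint_h / (joint_h + joint_not_h)

# Hypothetical toner example: P(match | same source) = 0.95,
# P(match | different source) = 0.02, even prior odds.
print(posterior(prior_h=0.5, p_e_given_h=0.95, p_e_given_not_h=0.02))
# The likelihood ratio 0.95 / 0.02 = 47.5 is the usual forensic summary
# of the evidential strength, independent of the prior.
```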
Abstract:
An oceanic assemblage of alkaline basalts, radiolarites and polymictic breccias forms the tectonic substratum of the Santa Elena Nappe, which is constituted by extensive outcrops of ultramafic and mafic rocks of the Santa Elena Peninsula (NW Costa Rica). The undulating basal contact of this nappe defines several half-windows along the south shores of the Santa Elena Peninsula. Lithologically it is constituted by vesicular pillowed and massive alkaline basaltic flows, alkaline sills, ribbon-bedded and knobby radiolarites, muddy tuffaceous and detrital turbidites, debris flows and polymictic breccias and megabreccias. Sediments and basalt flows show predominant subvertical dips and occur in packages separated by roughly bed-parallel thrust planes. Individual packages reveal a coherent internal stratigraphy that records younging to the east in all packages and shows rapid coarsening upwards of the detrital facies. Alkaline basalt flows, pillow breccias and sills within radiolarite successions are genetically related to a mid-Cretaceous submarine seamount. Detrital sedimentary facies range from distal turbidites to proximal debris flows and culminate in megabreccias related to collapse and mass wasting in an accretionary prism. According to radiolarian dating, bedded radiolarites and soft-sediment-deformed clasts in the megabreccias formed in a short, late Aptian to Cenomanian time interval. Middle Jurassic to Lower Cretaceous radiolarian ages are found in clasts and blocks reworked from an older oceanic basement. We conclude that the oceanic assemblage beneath the Santa Elena Nappe does not represent a continuous stratigraphic succession. It is a pile of individual thrust sheets constituting an accretionary sequence, where intrusion and extrusion of alkaline basalts and sedimentation of radiolarites, turbidites and chaotic trench-fill sediments occurred during the Aptian-Cenomanian. These thrust sheets formed shortly before the off-scraping and accretion of the complex. Here we define the Santa Rosa Accretionary Complex and propose a new hypothesis not considered in former interpretations; this hypothesis provides the basis for further research.
Abstract:
It is estimated that around 230 people die each year due to radon (222Rn) exposure in Switzerland. 222Rn occurs mainly in closed environments like buildings and originates primarily from the subjacent ground. It therefore depends strongly on geology and shows substantial regional variations. Correct identification of these regional variations would allow a substantial reduction of the population's 222Rn exposure through appropriate construction of new buildings and mitigation of existing ones. Prediction of indoor 222Rn concentrations (IRC) and identification of 222Rn-prone areas is however difficult, since IRC depend on a variety of variables such as building characteristics, meteorology, geology and anthropogenic factors. The present work aims at developing predictive models and understanding IRC in Switzerland, taking into account as much information as possible in order to minimize prediction uncertainty. The predictive maps will be used as a decision-support tool for 222Rn risk management. The construction of these models is based on different data-driven statistical methods, in combination with geographical information systems (GIS). In a first phase we performed univariate analysis of IRC for different variables, namely detector type, building category, foundation, year of construction, average outdoor temperature during measurement, altitude and lithology. All variables showed significant associations with IRC. Buildings constructed after 1900 showed significantly lower IRC compared with earlier constructions, and we observed a further drop of IRC after 1970. In addition, we found an association of IRC with altitude. With regard to lithology, we observed the lowest IRC in sedimentary rocks (excluding carbonates) and sediments, and the highest IRC in the Jura carbonates and igneous rock. The IRC data were systematically analyzed for potential bias due to spatially unbalanced sampling of measurements. In order to facilitate the modeling and the interpretation of the influence of geology on IRC, we developed an algorithm based on k-medoids clustering which makes it possible to define geological classes that are coherent in terms of IRC. We performed a soil gas 222Rn concentration (SRC) measurement campaign in order to determine the predictive power of SRC with respect to IRC; we found that the usefulness of SRC for IRC prediction is limited. The second part of the project was dedicated to predictive mapping of IRC using models which take into account the multidimensionality of the process of 222Rn entry into buildings. We used kernel regression and ensemble regression trees for this purpose. We could explain up to 33% of the variance of the log-transformed IRC all over Switzerland, a good performance compared with previous attempts at IRC modeling in Switzerland. As predictor variables we considered geographical coordinates, altitude, outdoor temperature, building type, foundation, year of construction and detector type. Ensemble regression trees like random forests make it possible to determine the role of each IRC predictor in a multidimensional setting. We found spatial information such as geology, altitude and coordinates to have stronger influences on IRC than building-related variables such as foundation type, building type and year of construction. Based on kernel estimation we developed an approach to determine the local probability that IRC exceed 300 Bq/m3. We also developed a confidence index to provide an estimate of the uncertainty of the map.
All methods allow the easy creation of tailor-made maps for different building characteristics. Our work is an essential step towards a 222Rn risk assessment that accounts simultaneously for different architectural situations as well as geological and geographical conditions. For communicating the 222Rn hazard to the population, we recommend using the probability map based on kernel estimation. This communication could, for example, be implemented via a web interface where users specify the characteristics and coordinates of their home in order to obtain the probability of exceeding a given IRC, with a corresponding index of confidence. Given the health effects of 222Rn, our results have the potential to substantially improve the estimation of the effective dose from 222Rn delivered to the Swiss population.
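To make the clustering step above concrete, here is a minimal k-medoids sketch in the spirit of grouping lithologies by their IRC distributions; the one-dimensional log-IRC values are invented for illustration and are not data or code from the thesis:

```python
# k-medoids in one dimension, pure standard library: alternate between
# assigning points to their nearest medoid and re-electing, per cluster,
# the member that minimizes total within-cluster distance.
import random

def k_medoids(points, k, n_iter=50, seed=0):
    random.seed(seed)
    medoids = random.sample(points, k)
    for _ in range(n_iter):
        clusters = {m: [] for m in medoids}
        for p in points:                          # assignment step
            nearest = min(medoids, key=lambda c: abs(p - c))
            clusters[nearest].append(p)
        new_medoids = [                           # update step
            min(members, key=lambda c: sum(abs(c - q) for q in members))
            for members in clusters.values()
        ]
        if set(new_medoids) == set(medoids):      # converged
            break
        medoids = new_medoids
    return medoids, clusters

values = [3.9, 4.1, 4.0, 5.2, 5.4, 5.3, 4.6, 4.7]  # hypothetical log-IRC medians
medoids, clusters = k_medoids(values, k=3)
print(sorted(medoids))
```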
Abstract:
Sleep spindles are approximately 1 s bursts of 10-16 Hz activity that occur during stage 2 sleep. Spindles are highly synchronous across the cortex and thalamus in animals, and across the scalp in humans, implying correspondingly widespread and synchronized cortical generators. However, prior studies have noted occasional dissociations of the magnetoencephalogram (MEG) from the EEG during spindles, although detailed studies of this phenomenon have been lacking. We systematically compared high-density MEG and EEG recordings during naturally occurring spindles in healthy humans. As expected, EEG was highly coherent across the scalp, with consistent topography across spindles. In contrast, the simultaneously recorded MEG was not synchronous, but varied strongly in amplitude and phase across locations and spindles. Overall, average coherence between pairs of EEG sensors was approximately 0.7, whereas MEG coherence was approximately 0.3 during spindles. Whereas two principal components explained approximately 50% of EEG spindle variance, more than 15 were required for MEG. Each PCA component for MEG typically involved several widely distributed locations, which were relatively coherent with each other. These results show that, in contrast to current models based on animal experiments, multiple asynchronous neural generators are active during normal human sleep spindles and are visible to MEG. It is possible that these multiple sources may overlap sufficiently in different EEG sensors to appear synchronous. Alternatively, EEG recordings may reflect diffusely distributed synchronous generators that are less visible to MEG. An intriguing possibility is that MEG preferentially records from the focal core thalamocortical system during spindles, and EEG from the distributed matrix system.
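The coherence figures quoted above (EEG near 0.7, MEG near 0.3) refer to magnitude-squared coherence between sensor pairs. Here is a minimal sketch on synthetic data (a shared 12 Hz "spindle" component plus independent noise; the sampling rate and amplitudes are assumptions, not the study's parameters):

```python
# Magnitude-squared coherence between two synthetic channels that share
# a 12 Hz component, averaged over the 10-16 Hz spindle band.
import numpy as np
from scipy.signal import coherence

fs = 250.0                             # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
spindle = np.sin(2 * np.pi * 12 * t)   # shared 12 Hz generator
ch1 = spindle + 0.8 * rng.standard_normal(t.size)
ch2 = 0.9 * spindle + 0.8 * rng.standard_normal(t.size)

f, cxy = coherence(ch1, ch2, fs=fs, nperseg=512)
band = (f >= 10) & (f <= 16)           # the spindle band used in the abstract
print(f"mean 10-16 Hz coherence: {cxy[band].mean():.2f}")
```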
Abstract:
Queuing is a fact of life that we witness daily. We all have had the experience of waiting in line for some reason, and we also know that it is an annoying situation. As the adage says, "time is money"; this is perhaps the best way of stating what queuing problems mean for customers. Human beings are not very tolerant, but they are even less so when having to wait in line for service. Banks, roads, post offices and restaurants are just some examples where people must wait for service. Studies of queuing phenomena have typically addressed the optimisation of performance measures (e.g. average waiting time, queue length and server utilisation rates) and the analysis of equilibrium solutions. The individual behaviour of the agents involved in queueing systems and their decision-making processes have received little attention. Although this work has been useful to improve the efficiency of many queueing systems, or to design new processes in social and physical systems, it has only provided us with a limited ability to explain the behaviour observed in many real queues. In this dissertation we depart from this traditional research by analysing how the agents involved in the system make decisions, instead of focusing on optimising performance measures or analysing an equilibrium solution. This dissertation builds on and extends the framework proposed by van Ackere and Larsen (2004) and van Ackere et al. (2010).
We focus on studying behavioural aspects in queueing systems and incorporate this still underdeveloped framework into the operations management field. In the first chapter of this thesis we provide a general introduction to the area, as well as an overview of the results. In Chapters 2 and 3, we use Cellular Automata (CA) to model service systems where captive interacting customers must decide each period which facility to join for service. They base this decision on their expectations of sojourn times. Each period, customers use new information (their most recent experience and that of their best-performing neighbour) to form expectations of sojourn time at the different facilities. Customers update their expectations using an adaptive expectations process to combine their memory and their new information. We label "conservative" those customers who give more weight to their memory than to the new information. In contrast, when they give more weight to new information, we call them "reactive". In Chapter 2, we consider customers with different degrees of risk-aversion who take into account uncertainty. They choose which facility to join based on an estimated upper bound of the sojourn time, which they compute using their perceptions of the average sojourn time and the level of uncertainty. We assume the same exogenous service capacity for all facilities, which remains constant throughout. We first analyse the collective behaviour generated by the customers' decisions. We show that the system achieves low weighted average sojourn times when the collective behaviour results in neighbourhoods of customers loyal to a facility and the customers are approximately equally split among all facilities. The lowest weighted average sojourn time is achieved when exactly the same number of customers patronises each facility, implying that they do not wish to switch facility. In this case, the system has achieved the Nash equilibrium. We show that there is a non-monotonic relationship between the degree of risk-aversion and system performance. Customers with an intermediate degree of risk-aversion typically achieve higher sojourn times; in particular, they rarely achieve the Nash equilibrium. Risk-neutral customers have the highest probability of achieving the Nash equilibrium. Chapter 3 considers a service system similar to the previous one but with risk-neutral customers, and relaxes the assumption of exogenous service rates. In this sense, we model a queueing system with endogenous service rates by enabling managers to adjust the service capacity of the facilities. We assume that managers do so based on their perceptions of the arrival rates and use the same principle of adaptive expectations to model these perceptions. We consider service systems in which the managers' decisions take time to be implemented. Managers are characterised by a profile determined by the speed at which they update their perceptions, the speed at which they take decisions, and how coherently they account for their previous decisions still to be implemented when taking their next decision. We find that the managers' decisions exhibit strong path-dependence: owing to the initial conditions of the model, the facilities of managers with identical profiles can evolve completely differently. In some cases the system becomes "locked in" to a monopoly or duopoly situation.
The competition between managers causes the weighted average sojourn time of the system to converge to the exogenous benchmark value which they use to estimate their desired capacity. Concerning the managers' profile, we found that the more conservative a manager is regarding new information, the larger the market share his facility achieves. Additionally, the faster he takes decisions, the higher the probability that he achieves a monopoly position. In Chapter 4 we consider a one-server queueing system with non-captive customers. We carry out an experiment aimed at analysing the way human subjects, taking on the role of the manager, make decisions in a laboratory setting regarding the capacity of a service facility. We adapt the model proposed by van Ackere et al. (2010). This model relaxes the assumption of a captive market and allows current customers to decide whether or not to use the facility. Additionally, the facility also has potential customers who currently do not patronise it, but might consider doing so in the future. We identify three groups of subjects whose decisions cause similar behavioural patterns. These groups are labelled: gradual investors, lumpy investors, and random investors. Using an autocorrelation analysis of the subjects' decisions, we show that these decisions are positively correlated with the decisions taken one period earlier. Subsequently we formulate a heuristic to model the decision rule used by subjects in the laboratory. We find that this decision rule fits very well for those subjects who gradually adjust capacity, but it does not capture the behaviour of the subjects in the other two groups. In Chapter 5 we summarise the results and provide suggestions for further work. Our main contribution is the use of simulation and experimental methodologies to explain the collective behaviour generated by customers' and managers' decisions in queueing systems, as well as the analysis of the individual behaviour of these agents. In this way, we differ from the typical queueing literature, which focuses on optimising performance measures and analysing equilibrium solutions. Our work can be seen as a first step towards understanding the interaction between customer behaviour and the capacity adjustment process in queueing systems. This framework is still in its early stages, and accordingly there is large potential for further work spanning several research topics. Interesting extensions include incorporating other characteristics of queueing systems which affect the customers' experience (e.g. balking, reneging and jockeying); providing customers and managers with additional information for their decisions (e.g. service price, quality, customers' profile); analysing different decision rules; and studying other characteristics which determine the profile of customers and managers.
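The adaptive expectations process described in this abstract has a compact form: the new expectation is a convex combination of memory and the latest observation. A minimal sketch follows (the weights and the sojourn-time series are illustrative, not the thesis's parameters):

```python
# Adaptive expectations: E_next = lam * E_prev + (1 - lam) * observation.
# "Conservative" agents weight memory more (lam > 0.5); "reactive" agents
# weight new information more (lam < 0.5). All numbers are illustrative.
def update_expectation(expected, observed, lam):
    return lam * expected + (1 - lam) * observed

observations = [5.0, 7.0, 6.0, 9.0, 4.0]    # sojourn times experienced
for lam, label in [(0.8, "conservative"), (0.2, "reactive")]:
    e = 6.0                                  # initial expectation (assumed)
    for x in observations:
        e = update_expectation(e, x, lam)
    print(f"{label}: final expectation {e:.2f}")
# The reactive agent's expectation tracks the last observations closely;
# the conservative agent's expectation drifts slowly from its prior.
```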
Abstract:
In this paper, we argue that important labor market phenomena can be better understood if one takes into account (a) the inherent incompleteness and relational nature of most employment contracts and (b) the existence of reference-dependent fairness concerns among a substantial share of the population. Theory shows, and experiments confirm, that even if fairness concerns were to exert only weak effects in one-shot interactions, repeated interactions greatly magnify their relevance for economic outcomes. We also review evidence from laboratory and field experiments examining the role of wages and fairness on effort, derive predictions from our approach for entry-level wages and incumbent workers' wages, confront these predictions with the evidence, and show that reference-dependent fairness concerns may have important consequences for the effects of economic policies such as minimum wage laws.
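The reference-dependent fairness concerns invoked above are commonly formalized as inequity-aversion preferences; a standard two-player statement is the Fehr-Schmidt utility (a textbook form, not an equation quoted from this paper):

```latex
U_i(x) = x_i \;-\; \alpha_i \max\{x_j - x_i,\, 0\} \;-\; \beta_i \max\{x_i - x_j,\, 0\},
\qquad 0 \le \beta_i \le \alpha_i,\; \beta_i < 1
```

Here the term weighted by \alpha_i penalizes disadvantageous inequity (earning less than the reference player j) and the term weighted by \beta_i penalizes advantageous inequity; as the abstract notes, repeated interaction magnifies the behavioural weight such terms carry in equilibrium.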
Abstract:
BACKGROUND: Caring for individuals with schizophrenia can create distress for caregivers, which can, in turn, have a harmful impact on patient progress. The connections between caregivers' representations of schizophrenia and their coping styles are still poorly understood. This study aims to explore those connections. METHODS: This correlational descriptive study was conducted with 92 caregivers of individuals suffering from schizophrenia. The participants completed three questionnaires translated and validated in French: (a) a socio-demographic questionnaire, (b) the Illness Perception Questionnaire for Schizophrenia and (c) the Family Coping Questionnaire. RESULTS: Our results show that illness representations are weakly correlated with coping styles. More specifically, emotional representations are correlated with an emotion-focused coping style centred on coercion, avoidance and resignation. CONCLUSION: Our results are consistent with the Commonsense Model of Self-Regulation of Health and Illness and should make it possible to develop new interventions for caregivers.
Abstract:
The present work, derived from a full global geodynamic reconstruction model spanning 600 Ma and based on a large database, focuses here on the interaction between the Pacific, Australian and Antarctic plates since 200 Ma, and proposes integrated solutions for a coherent, physically consistent scenario. The evolution of the Australia-Antarctica-West Pacific plate system depends on the Gondwana fit chosen for the reconstruction. Our fit, defined for the latest Triassic, implies an original scenario for the evolution of the region, in particular for the "early" opening history of the Tasman Sea. The interaction with the Pacific, moreover, is characterised by many magmatic arc migrations and ocean openings, which are stopped by arc-arc, arc-spreading axis or arc-oceanic plateau collisions and by subduction reversals. Mid-Pacific oceanic plateaus created in the model are much wider than they appear on present-day maps, and although they were largely subducted, they were able to stop subduction. We also suggest that adduction processes (i.e., the re-emergence of subducted material) may have played an important role, in particular along the plate boundary now represented by the Alpine Fault in New Zealand.
Abstract:
New stratigraphic data along a profile from the Helvetic Gotthard massif to the remnants of the North Penninic basin in eastern Ticino and Graubünden are presented. The stratigraphic record, together with existing geochemical and structural data, motivates a new interpretation of the fossil European distal margin. We introduce a new group of Triassic facies, the North Penninic Triassic (NPT), characterised by the Ladinian "dolomie bicolori". The NPT was located between the Briançonnais carbonate platform and the Helvetic lands. The observed horizontal transition, coupled with the stratigraphic superposition of Helvetic Liassic on Briançonnais Triassic in the Luzzone-Terri nappe, links the Briançonnais paleogeographic domain to the Helvetic margin, south of the Gotthard, prior to the Jurassic rifting. Our observations suggest that the Jurassic rifting separated the Briançonnais domain from the Helvetic margin by complex and protracted extension. The syn-rift stratigraphic record in the Adula nappe and its surroundings suggests the presence of a diffuse rising area with only moderately subsiding basins above a thinned continental and proto-oceanic crust. Strong subsidence occurred in a second phase, following protracted extension and the resulting delamination of the rising area. The stratigraphic coherence of the Adula Mesozoic calls into question the idea of a lithospheric mélange in the eclogitic Adula nappe, which is more likely a coherent Alpine tectonic unit. The structural and stratigraphic observations in the Piz Terri-Lunschania zone suggest the activity of syn-rift detachments. During the Alpine collision these faults were reactivated (and inverted) and played a major role in enabling the Adula subduction and the "Penninic Thrust" above it, and in creating the structural complexity of the Central Alps.
Abstract:
EXECUTIVE SUMMARY: Evaluating Information Security posture within an organization is becoming a very complex task. Currently, the evaluation and assessment of Information Security are commonly performed using frameworks, methodologies and standards which often consider the various aspects of security independently. Unfortunately this is ineffective, because it does not take into consideration the necessity of a global and systemic multidimensional approach to Information Security evaluation, even though the overall security level is generally considered to be only as strong as its weakest link. This thesis proposes a model aiming to assess all dimensions of security holistically, in order to minimize the likelihood that a given threat will exploit the weakest link. A formalized structure taking into account all security elements is presented, based on a methodological evaluation framework in which Information Security is evaluated from a global perspective. This dissertation is divided into three parts. Part One, Information Security Evaluation Issues, consists of four chapters. Chapter 1 introduces the purpose of this research and the model that will be proposed. In this chapter we raise some questions with respect to "traditional evaluation methods" and identify the principal elements to be addressed in this direction. We then introduce the baseline attributes of our model and set out the expected results of evaluations performed according to it. Chapter 2 focuses on the definition of Information Security to be used as a reference point for our evaluation model. The concepts inherent in the contents of a holistic and baseline Information Security program are defined; based on this, the most common roots of trust in Information Security are identified. Chapter 3 analyses the difference and the relationship between the concepts of Information Risk and Security Management. Comparing these two concepts allows us to identify the most relevant elements to be included in our evaluation model, while clearly situating these two notions within a defined framework, which is of the utmost importance for the results obtained from the evaluation process. Chapter 4 sets out our evaluation model and the way it addresses issues relating to the evaluation of Information Security. Within this chapter the underlying concepts of assurance and trust are discussed. Based on these two concepts, the structure of the model is developed so as to provide an assurance-related platform together with three evaluation attributes: "assurance structure", "quality issues", and "requirements achievement". Issues relating to each of these evaluation attributes are analysed with reference to sources such as methodologies, standards and published research papers. The operation of the model is then discussed: assurance levels, quality levels and maturity levels are defined in order to perform the evaluation according to the model. Part Two, Implementation of the Information Security Assurance Assessment Model (ISAAM) according to the Information Security Domains, also consists of four chapters. This is the section where our evaluation model is put into a well-defined context with respect to the four pre-defined Information Security dimensions: the Organizational, Functional, Human, and Legal dimensions. Each Information Security dimension is discussed in a separate chapter.
For each dimension, a two-phase evaluation path is followed. The first phase concerns the identification of the elements which constitute the basis of the evaluation:
- identification of the key elements within the dimension;
- identification of the Focus Areas for each dimension, consisting of the security issues identified for that dimension;
- identification of the Specific Factors for each dimension, consisting of the security measures or controls addressing those issues.
The second phase concerns the evaluation of each Information Security dimension by:
- implementing the evaluation model, based on the elements identified in the first phase, and identifying the security tasks, processes, procedures and actions that should have been performed by the organization to reach the desired level of protection;
- proposing a maturity model for each dimension as a basis for reliance on security. For each dimension we propose a generic maturity model that any organization can use to define its own security requirements.
Part Three of this dissertation contains the final remarks, supporting resources and annexes. With reference to the objectives of the thesis, the final remarks briefly analyse whether these objectives were achieved and suggest directions for future related research. The supporting resources comprise the bibliographic sources used to elaborate and justify our approach, and the annexes include the relevant topics identified in the literature to illustrate certain aspects of our approach. Our Information Security evaluation model is based on, and integrates, different Information Security best practices, standards, methodologies and research expertise, combined in order to define a reliable categorization of Information Security. Once terms and requirements are defined, an evaluation process should be performed in order to obtain evidence that the Information Security of the organization in question is adequately managed. We have specifically integrated into our model the most useful elements of these sources of information so as to provide a generic model that can be implemented in all kinds of organizations. The value added by our evaluation model is that it is easy to implement and operate, and answers concrete needs for a reliable, efficient and dynamic evaluation tool based on a coherent evaluation system. On that basis, our model can be implemented internally within organizations, allowing them to better govern their Information Security.
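As a toy illustration of the weakest-link principle that motivates the model, the sketch below aggregates per-dimension maturity two ways; the scores and the 1-5 scale are invented for illustration and are not part of ISAAM:

```python
# Weakest-link aggregation over the four ISAAM dimensions: an average can
# look acceptable while the minimum, which is what an attacker faces,
# reveals the real exposure. All scores are hypothetical.
maturity = {
    "Organizational": 4,
    "Functional": 3,
    "Human": 2,
    "Legal": 4,
}

average = sum(maturity.values()) / len(maturity)
weakest = min(maturity.values())
print(f"average maturity: {average:.2f}")   # 3.25: looks acceptable
print(f"weakest link:     {weakest}")       # 2: the binding constraint
```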
Abstract:
In the cerebral cortex, the activity levels of neuronal populations are continuously fluctuating. When neuronal activity, as measured using functional MRI (fMRI), is temporally coherent across 2 populations, those populations are said to be functionally connected. Functional connectivity has previously been shown to correlate with structural (anatomical) connectivity patterns at an aggregate level. In the present study we investigate, with the aid of computational modeling, whether systems-level properties of functional networks (including their spatial statistics and their persistence across time) can be accounted for by properties of the underlying anatomical network. We measured resting state functional connectivity (using fMRI) and structural connectivity (using diffusion spectrum imaging tractography) in the same individuals at high resolution. Structural connectivity then provided the couplings for a model of macroscopic cortical dynamics. In both model and data, we observed (i) that strong functional connections commonly exist between regions with no direct structural connection, rendering the inference of structural connectivity from functional connectivity impractical; (ii) that indirect connections and interregional distance accounted for some of the variance in functional connectivity that was unexplained by direct structural connectivity; and (iii) that resting-state functional connectivity exhibits variability within and across both scanning sessions and model runs. These empirical and modeling results demonstrate that although resting state functional connectivity is variable and is frequently present between regions without direct structural linkage, its strength, persistence, and spatial statistics are nevertheless constrained by the large-scale anatomical structure of the human cerebral cortex.
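Functional connectivity as used above is, at its simplest, the temporal correlation between regional activity time series. Here is a minimal sketch on synthetic data (the coupling between regions 0 and 1 is imposed by hand; nothing below is the study's model or data):

```python
# Functional connectivity as a region-by-region correlation matrix over
# synthetic "BOLD" time series for a handful of regions.
import numpy as np

rng = np.random.default_rng(1)
n_regions, n_timepoints = 5, 200
bold = rng.standard_normal((n_regions, n_timepoints))
bold[1] += 0.7 * bold[0]           # make regions 0 and 1 co-fluctuate

fc = np.corrcoef(bold)             # functional connectivity matrix
print(np.round(fc, 2))
# fc[0, 1] is high because we coupled the two series directly; in real
# data a similarly high correlation can arise via indirect anatomical
# paths, which is why structure cannot be read off from correlation alone.
```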