856 results for Exclusion process, Multi-species, Multi-scale modelling


Relevance:

60.00%

Publisher:

Abstract:

Claystones are considered worldwide as barrier materials for nuclear waste repositories. In the Mont Terri underground research laboratory (URL), a nearly 4-year diffusion and retention (DR) experiment was performed in Opalinus Clay. It aimed at (1) obtaining data at larger space and time scales than in laboratory experiments, (2) doing so under relevant in situ conditions with respect to pore-water chemistry and mechanical stress, (3) quantifying the anisotropy of in situ diffusion, and (4) exploring possible effects of a borehole-disturbed zone. The experiment included two tracer injection intervals in a borehole perpendicular to bedding, through which traced artificial pore water (APW) was circulated, and a pressure monitoring interval. The APW was spiked with neutral tracers (HTO, HDO, H2O-18), anions (Br, I, SeO4), and cations (Na-22, Ba-133, Sr-85, Cs-137, Co-60, Eu-152, stable Cs, and stable Eu). Most tracers were added at the beginning; some were added at a later stage. The hydraulic pressure in the injection intervals was adjusted to the value measured in the pressure monitoring interval to ensure transport by diffusion only. Concentration time series in the APW within the borehole intervals were obtained, as well as 2D concentration distributions in the rock at the end of the experiment after overcoring and subsampling, which resulted in ~250 samples and ~1300 analyses. As expected, HTO diffused the furthest into the rock, followed by the anions (Br, I, SeO4) and by the cationic sorbing tracers (Na-22, Ba-133, Cs, Cs-137, Co-60, Eu-152). The diffusion of SeO4 was slower than that of Br or I, approximately in proportion to the ratio of their diffusion coefficients in water. Ba-133 diffused only ~0.1 m into the rock during the ~4 years. Stable Cs, added at a higher concentration than Cs-137, diffused further into the rock than Cs-137, consistent with non-linear sorption behavior.
The rock properties (e.g., water contents) were rather homogeneous at the centimeter scale, with no evidence of a borehole-disturbed zone. In situ anisotropy ratios for diffusion, derived for the first time directly from field data, are larger for HTO and Na-22 (~5) than for anions (~3–4 for Br and I). The lower ionic strength of the pore water at this location (~0.22 M) compared to the locations of earlier experiments in the Mont Terri URL (~0.39 M) had no notable effect on the anion-accessible pore fraction for Cl, Br, and I: the value of 0.55 is within the range of earlier data. Detailed transport simulations involving different codes will be presented in a companion paper.
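The reported anisotropy ratios translate into penetration-depth ratios through the usual scaling of diffusion, L ≈ √(4·De·t): the diffusion length grows with the square root of the effective diffusion coefficient and of time. A minimal sketch of that relation, using purely illustrative coefficients (the De value and the conversion below are assumptions, not the measured data):

```python
import math

def diffusion_length(De, t):
    """Characteristic diffusion length L = sqrt(4 * De * t)."""
    return math.sqrt(4.0 * De * t)

De_parallel = 5e-11            # m^2/s, parallel to bedding (assumed, illustrative)
anisotropy_ratio = 5.0         # HTO anisotropy ratio reported in the abstract
De_perpendicular = De_parallel / anisotropy_ratio

t = 4 * 365.25 * 24 * 3600.0   # ~4 years in seconds
L_par = diffusion_length(De_parallel, t)
L_perp = diffusion_length(De_perpendicular, t)

# A factor-5 anisotropy in De gives only a sqrt(5)-fold anisotropy in depth:
print(round(L_par / L_perp, 2))  # 2.24
```

This is why a diffusion-coefficient anisotropy of ~5 produces tracer plumes that are elongated by a factor of only ~2.2 along bedding.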


The migration of radioactive and chemical contaminants in clay materials and argillaceous host rocks is characterised by diffusion and retention processes. Valuable information on such processes can be gained by combining diffusion studies at the laboratory scale with field migration tests. In this work, the outcome of a multi-tracer in situ migration test performed in the Opalinus Clay formation in the Mont Terri underground rock laboratory (Switzerland) is presented. 1.16 × 10⁵ Bq/L of HTO, 3.96 × 10³ Bq/L of Sr-85, 6.29 × 10² Bq/L of Co-60, 2.01 × 10⁻³ mol/L Cs, 9.10 × 10⁻⁴ mol/L I and 1.04 × 10⁻³ mol/L Br were injected into the borehole. The decrease of the radioisotope concentrations in the borehole was monitored using in situ gamma spectrometry. The other tracers were analyzed with state-of-the-art laboratory procedures after sampling of small water aliquots from the reservoir. The diffusion experiment was carried out over a period of one year, after which the interval section was overcored and analyzed. Based on the experimental data from the tracer evolution in the borehole and the tracer profiles in the rock, the diffusion of tracers was modelled with the numerical code CRUNCH. The results obtained for HTO (H-3), I⁻ and Br⁻ confirm previous laboratory and in situ diffusion data. Anionic fluxes into the formation were smaller compared to HTO because of anion exclusion effects. The migration of the cations Sr-85(2+), Cs⁺ and Co-60(2+) was found to be governed by both diffusion and sorption processes. For Sr-85(2+), the slightly higher diffusivity relative to HTO and the low sorption value are consistent with laboratory diffusion measurements on small-scale samples. In the case of Cs⁺, the numerically deduced high diffusivity and the Freundlich-type sorption behaviour are also supported by ongoing laboratory measurements.
For Co, no laboratory diffusion data were yet available for comparison; however, the modelled data suggest that Co-60(2+) sorption was weaker than would be expected from available batch sorption data. Overall, the results demonstrate the feasibility of the experimental setup for obtaining high-quality diffusion data for conservative and sorbing tracers. (C) 2007 Elsevier Ltd. All rights reserved.
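The coupling of diffusion and sorption described above is commonly captured, for linear sorption, by a retardation factor R = 1 + (ρ_b/φ)·Kd that reduces the apparent diffusivity Da = De/(φ·R) of sorbing tracers (a simplification: Cs in this study showed Freundlich-type, non-linear sorption). A minimal sketch with assumed, illustrative parameter values, not those of the experiment:

```python
def retardation_factor(rho_b, phi, Kd):
    """Linear-sorption retardation: R = 1 + (rho_b / phi) * Kd."""
    return 1.0 + (rho_b / phi) * Kd

def apparent_diffusivity(De, phi, R):
    """Da = De / (phi * R): sorption lowers the apparent diffusivity."""
    return De / (phi * R)

rho_b = 2400.0   # bulk dry density, kg/m^3 (assumed)
phi = 0.16       # diffusion-accessible porosity (assumed)
De = 1.0e-11     # effective diffusion coefficient, m^2/s (assumed)

R_conservative = retardation_factor(rho_b, phi, 0.0)     # conservative tracer (e.g. HTO): Kd = 0
R_sorbing = retardation_factor(rho_b, phi, 1.0e-3)       # assumed Kd in m^3/kg

print(R_conservative)  # 1.0
# Sorbing tracers migrate more slowly than conservative ones:
print(apparent_diffusivity(De, phi, R_conservative) > apparent_diffusivity(De, phi, R_sorbing))  # True
```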


An accurate and coherent chronological framework is essential for the interpretation of climatic and environmental records obtained from deep polar ice cores. Until now, one common ice core age scale had been developed based on an inverse dating method (Datice), combining glaciological modelling with absolute and stratigraphic markers between four ice cores covering the last 50 ka (thousand years before present) (Lemieux-Dudon et al., 2010). In this paper, together with the companion paper of Veres et al. (2013), we present an extension of this work back to 800 ka for the NGRIP, TALDICE, EDML, Vostok and EDC ice cores using an improved version of the Datice tool. The AICC2012 (Antarctic Ice Core Chronology 2012) chronology includes numerous new gas and ice stratigraphic links as well as improved evaluation of background and associated variance scenarios. This paper concentrates on the long timescales between 120 and 800 ka. In this framework, new measurements of δ18Oatm over Marine Isotope Stage (MIS) 11–12 on EDC and a complete δ18Oatm record for the TALDICE ice core permit us to derive additional orbital gas age constraints. The coherency of the different orbitally deduced ages (from δ18Oatm, δO2/N2 and air content) was verified before implementation in AICC2012. The new chronology is now independent of other archives and shows only small differences, most of the time within the original uncertainty range calculated by Datice, when compared with the previous ice core reference age scale EDC3, with the Dome F chronology, or with a comparison between speleothems and methane. For instance, the largest deviation between AICC2012 and EDC3 (5.4 ka) is obtained around MIS 12. Despite significant modifications of the chronological constraints around MIS 5, which are now independent of speleothem records in AICC2012, the date of Termination II is very close to the EDC3 one.


Enemy release is frequently posed as a main driver of the invasiveness of alien species. However, an experimental multi-species test examining performance and herbivory of invasive alien, non-invasive alien and native plant species in the presence and absence of natural enemies has been lacking. In a common garden experiment in Switzerland, we manipulated the exposure of seven invasive alien, eight non-invasive alien and fourteen native species from six taxonomic groups to natural enemies (invertebrate herbivores) by applying a pesticide treatment under two different nutrient levels. We assessed biomass production, herbivore damage and the major herbivore taxa on plants. Across all species, plants gained significantly greater biomass under pesticide treatment. However, invasive, non-invasive and native species did not differ in their biomass response to pesticide treatment at either nutrient level. The proportion of leaves damaged on invasive species was significantly lower than on native species, but not lower than on non-invasive species. Moreover, this difference disappeared when plant size was accounted for. There were no differences between invasive, non-invasive and native species in herbivore abundance. Our study offers little support for invertebrate herbivore release as a driver of plant invasiveness, but suggests that future enemy release studies should account for differences in plant size among species.


Located in the northeastern region of Italy, the Venetian Plain (VP) is a sedimentary basin containing an extensively exploited groundwater system. The northern part is characterised by a large undifferentiated phreatic aquifer of coarse-grained alluvial deposits, recharged by local rainfall and by discharge from the rivers Brenta and Piave. The southern plain is characterised by a series of aquitards and sandy aquifers forming a well-defined artesian multi-aquifer system. In order to determine the origins, transit times and mixing proportions of different components in the groundwater (GW), a multi-tracer study (³H, ³He/⁴He, ¹⁴C, CFCs, SF₆, ⁸⁵Kr, ³⁹Ar, ⁸⁷Sr/⁸⁶Sr, δ¹⁸O, δ²H, cations, and anions) has been carried out in the VP between the rivers Brenta and Piave. The geochemical pattern of the GW allows a distinction of the different water origins in the system, based in particular on HCO₃⁻, SO₄²⁻, Ca/Mg, NO₃⁻, δ¹⁸O and δ²H. A radiogenic Sr signature clearly marks GW originating from the Brenta and Tertiary catchments. End-member analysis and geochemical modelling highlight a mixing process involving waters recharged from the Brenta and Piave rivers, from the phreatic aquifer and from another GW reservoir characterised by very low mineralization. Noble gas excesses with respect to atmospheric equilibrium occur in all samples, particularly in the deeper aquifers of the Piave river, but also in phreatic water of the undifferentiated aquifers. ³H–³He ages in the phreatic aquifer and in the shallower levels of the multi-aquifer system indicate recharge during the years 1970–2008. The progression of ³H–³He ages with distance from the recharge areas, together with the initial tritium concentrations (³H + tritiogenic ³He), implies an infiltration rate of about 1 km/yr and the absence of older components in these GW. SF₆ and ⁸⁵Kr data corroborate these conclusions. ³H–³He ages in the deeper artesian aquifers suggest dilution with older, tritium-free waters. ¹⁴C Fontes–Garnier model ages of the old GW components range from 1 to 12 ka, yielding an apparent GW velocity of about 1–10 m/yr. The increase of radiogenic ⁴He follows the progression of ¹⁴C ages. The ³⁹Ar, radiogenic ⁴He and ¹⁴C tracers yield model-dependent age ranges in overall good agreement once diffusion of ¹⁴C from aquitards, GW dispersion, lithogenic ³⁹Ar production, and ⁴He production-rate heterogeneities are taken into account. The rate of radiogenic ⁴He increase with time, deduced by comparison with ¹⁴C model ages, is however very low compared to other studies. Comparison with carbon isotope data obtained 40 years ago on the same aquifer system shows that exploitation of the GW has caused a significant loss of the old groundwater reservoir during this time.
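The ³H–³He ages mentioned above follow from the standard closed-system relation t = (T½/ln 2)·ln(1 + ³He_trit/³H), where ³He_trit is the tritiogenic helium accumulated since recharge. A minimal sketch with hypothetical sample values:

```python
import math

T_HALF_TRITIUM = 12.32  # tritium half-life in years

def tritium_helium_age(tritium, tritiogenic_he3):
    """Apparent 3H-3He age in years for a closed system:
    t = (T_half / ln 2) * ln(1 + 3He_trit / 3H).
    Both concentrations must be in the same units (e.g. TU)."""
    return (T_HALF_TRITIUM / math.log(2)) * math.log(1.0 + tritiogenic_he3 / tritium)

# Hypothetical sample with equal tritium and tritiogenic 3He:
# exactly one half-life has elapsed since recharge.
print(round(tritium_helium_age(5.0, 5.0), 2))  # 12.32
```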


We study the sensitivity of large-scale xenon detectors to low-energy solar neutrinos, to coherent neutrino-nucleus scattering and to neutrinoless double beta decay. As a concrete example, we consider the xenon part of the proposed DARWIN (Dark Matter WIMP Search with Noble Liquids) experiment. We perform detailed Monte Carlo simulations of the expected backgrounds, considering realistic energy resolutions and thresholds in the detector. In a low-energy window of 2–30 keV, where the sensitivity to solar pp and ⁷Be neutrinos is highest, an integrated pp-neutrino rate of 5900 events can be reached in a fiducial mass of 14 tons of natural xenon, after 5 years of data. The pp-neutrino flux could thus be measured with a statistical uncertainty around 1%, reaching the precision of solar model predictions. These low-energy solar neutrinos will be the limiting background to the dark matter search channel for WIMP-nucleon cross sections below ~2 × 10⁻⁴⁸ cm² and WIMP masses around 50 GeV/c², for an assumed 99.5% rejection of electronic recoils due to elastic neutrino-electron scatters. Nuclear recoils from coherent scattering of solar neutrinos will limit the sensitivity to WIMP masses below ~6 GeV/c² to cross sections above ~4 × 10⁻⁴⁵ cm². DARWIN could reach a competitive half-life sensitivity of 5.6 × 10²⁶ yr to the neutrinoless double beta decay of ¹³⁶Xe after 5 years of data, using 6 tons of natural xenon in the central detector region.
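The quoted statistical precision for the pp-neutrino rate can be sanity-checked with Poisson counting statistics: for N counted events the relative uncertainty is 1/√N. Ignoring backgrounds and systematics (which the full analysis accounts for), 5900 events land at the percent level:

```python
import math

def poisson_relative_uncertainty(n_events):
    """Relative statistical uncertainty of a pure counting measurement: sqrt(N)/N."""
    return 1.0 / math.sqrt(n_events)

# 5900 integrated pp-neutrino events, as quoted in the abstract:
rel = poisson_relative_uncertainty(5900)
print(round(100.0 * rel, 1))  # 1.3  (percent, consistent with the quoted ~1% level)
```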


Ray (1998) developed measures of input- and output-oriented scale efficiency that can be computed directly from an estimated Translog frontier production function. This note extends those earlier results to the multiple-output, multiple-input case.
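For reference, the Translog form and the local scale elasticity on which such scale-efficiency measures are built can be sketched as follows (single-output case shown for illustration; the note's contribution is the generalization to multiple outputs):

```latex
% Translog production function (single-output form, for illustration):
\ln y = \alpha_0 + \sum_i \alpha_i \ln x_i
      + \tfrac{1}{2}\sum_i \sum_j \beta_{ij} \ln x_i \ln x_j
% The local scale elasticity is the sum of the input elasticities,
% directly computable from the estimated coefficients:
\varepsilon(x) = \sum_i \frac{\partial \ln y}{\partial \ln x_i}
              = \sum_i \Bigl(\alpha_i + \sum_j \beta_{ij} \ln x_j\Bigr)
```

Scale efficiency is then assessed from how far ε(x) departs from unity (the value at most productive scale size).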


Detailed analyses of the Lake Van pollen, Ca/K ratio and stable oxygen isotope records allow the identification of millennial-scale vegetation and environmental changes in eastern Anatolia throughout the last glacial (~75–15 ka BP). The climate during the last glacial was cold and dry, with low arboreal pollen (AP) levels. The driest and coldest period corresponds to Marine Isotope Stage (MIS) 2 (~28–14.5 ka BP), dominated by the highest values of xerophytic steppe vegetation. Our high-resolution multi-proxy record shows rapid expansions and contractions of tree populations, reflecting variability in temperature and moisture availability. These rapid vegetation and environmental changes can be linked to the stadial–interstadial pattern of the Dansgaard–Oeschger (DO) events as recorded in the Greenland ice cores. Periods of reduced moisture availability were characterized by enhanced xerophytic species and high terrigenous input from the Lake Van catchment area. Furthermore, comparison with the marine realm reveals that the complex atmosphere–ocean interaction can be explained by the strength and position of the westerlies, which are responsible for the supply of moisture to eastern Anatolia. Influenced by the diverse topography of the Lake Van catchment, the larger DO interstadials (e.g. DO 19, 17–16, 14, 12 and 8) show the greatest expansion of temperate species within the last glacial. However, Heinrich events (HE), characterized by the highest concentrations of ice-rafted debris (IRD) in marine sediments, are identified in eastern Anatolia by AP values no lower, and steppe components no more abundant, than during DO stadials. In addition, this work is a first attempt to establish a continuous microscopic charcoal record over the last glacial in the Near East; it documents an immediate response to millennial-scale climate and environmental variability and enables us to shed light on the history of fire activity during the last glacial.


Late Holocene laminated sediments from a core transect centred in the oxygen minimum zone (OMZ) impinging on the continental slope off Pakistan indicate stable oxygen minimum conditions for the past 7000 calendar years. High SW-monsoon-controlled biological productivity and enhanced organic matter preservation during this period are reflected in high contents of total organic carbon (TOC) and redox-sensitive elements (Ni, V), as well as in a low-diversity, high-abundance benthic foraminiferal Buliminacea association and a high abundance of the planktonic species Globigerina bulloides, indicative of upwelling conditions. Surface-water productivity was strongest during SW monsoon maxima. Stable OMZ conditions (reflected by laminated sediments) were found also during warm interstadial events (Preboreal, Bølling-Allerød, and Dansgaard-Oeschger events), as well as during peak glacial times (17-22.5 ka; all ages in calendar years). Sediment mass accumulation rates were at a maximum during the Preboreal and Younger Dryas periods due to strong riverine input and mobilisation of fine-grained sediment coinciding with the rapid deglacial sea-level rise, whereas eolian input generally decreased from glacial to interglacial times. In contrast, the occurrence of bioturbated intervals from 7 to 10.5 ka (early Holocene), in the Younger Dryas (11.7-13 ka), from 15 to 17 ka (Heinrich event 1) and from 22.5 to 25 ka (Heinrich event 2) suggests completely different conditions: oxygen-rich bottom waters, extremely low mass and organic carbon accumulation rates, and a high-diversity benthic fauna, all indicating lowered surface-water productivity. During these intervals the OMZ was very poorly developed or absent, and a sharp fall of the aragonite compensation depth favoured the preservation of pteropods.
The abundance of lithogenic proxies suggests aridity and wind transport by northwesterly or northeasterly winds during these periods, coinciding with the North Atlantic Heinrich events and dust peaks in the Tibetan loess records. The correlation of the monsoon-driven OMZ variability in the Arabian Sea with the rapid climatic fluctuations in the high northern latitudes suggests a close coupling between the climates of the high and low latitudes at a global scale.


A morphometric analysis was performed for the late Middle Miocene bivalve species lineage of Polititapes tricuspis (Eichwald, 1829) (Veneridae: Tapetini). Specimens from various localities, grouped into two stratigraphically successive biozones, i.e. the upper Ervilia Zone and the Sarmatimactra Zone, were investigated using a multi-method approach. A Generalized Procrustes Analysis was computed for fifteen landmarks covering characteristics of the hinge, muscle scars, and pallial line. The shell outline was separately quantified by applying the Fast Fourier Transform, which redraws the outline by fitting a combination of trigonometric curves. Shell size was calculated as centroid size from the landmark configuration. Shell thickness, not covered by either analysis, was additionally measured at the centroid. The analyses showed significant phenotypic differentiation between specimens from the two biozones. The bivalves became distinctly larger and thicker over geological time and developed circular shells with stronger cardinal teeth and a deeper pallial sinus. Data on the paleoenvironmental changes in the late Middle Miocene Central Paratethys Sea suggest that the phenotypic shifts are functional adaptations. The typical habitats for Polititapes changed to extensive, very shallow shores exposed to high wave action and tidal activity. Driven by the growing need for higher mechanical stability, the bivalves produced larger and thicker shells with stronger cardinal teeth. The latter additionally shifted towards the hinge center to compensate for the lacking lateral teeth and to improve stability. The deepening pallial sinus is related to a deeper burrowing habit, which is considered to prevent the animals from being washed out in the new high-energy settings.
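Two of the quantities above, centroid size and the translation/scaling steps that open a Procrustes superimposition, are simple to compute from a landmark configuration. A minimal sketch with a hypothetical four-landmark configuration (real analyses would use the fifteen landmarks and also resolve rotation):

```python
import math

def centroid_size(landmarks):
    """Centroid size: sqrt of the summed squared distances
    of all landmarks to their centroid."""
    n = len(landmarks)
    cx = sum(x for x, _ in landmarks) / n
    cy = sum(y for _, y in landmarks) / n
    return math.sqrt(sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in landmarks))

def scale_to_unit_size(landmarks):
    """Center the configuration and scale it to unit centroid size
    (the translation and scaling steps of a Procrustes superimposition)."""
    n = len(landmarks)
    cx = sum(x for x, _ in landmarks) / n
    cy = sum(y for _, y in landmarks) / n
    s = centroid_size(landmarks)
    return [((x - cx) / s, (y - cy) / s) for x, y in landmarks]

# Hypothetical 2x2 square of landmarks:
square = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]
print(round(centroid_size(square), 3))                      # 2.828 (= 2*sqrt(2))
print(round(centroid_size(scale_to_unit_size(square)), 3))  # 1.0
```

Removing size this way is what lets the subsequent analysis compare pure shape between the two biozones, with size analysed separately.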


This paper presents the development of the robotic multi-agent system SMART. In this system, the agent concept is applied to both hardware and software entities. Hardware agents are robots, with three and four legs, and an IP camera that takes images of the scene where the cooperative task is carried out. Hardware agents cooperate closely with software agents. The latter can be classified into image processing, communications, task management and decision making, and planning and trajectory generation agents. To model, control and evaluate the performance of cooperative tasks among agents, a kind of Petri net, called a Work-Flow Petri Net, is used. Experimental results show the good performance of the system.
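A place/transition Petri net of the kind used to model such cooperative tasks can be sketched minimally as follows. This is a generic sketch, not the paper's Work-Flow Petri Net formalism, and the place and transition names are hypothetical:

```python
# Minimal place/transition Petri net: a transition is enabled when every
# input place holds enough tokens; firing consumes and produces tokens.
class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)   # place name -> token count
        self.transitions = {}          # name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= n for p, n in inputs.items())

    def fire(self, name):
        if not self.enabled(name):
            raise RuntimeError(f"transition {name!r} is not enabled")
        inputs, outputs = self.transitions[name]
        for p, n in inputs.items():
            self.marking[p] -= n
        for p, n in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + n

# Hypothetical task assignment: an idle robot plus a pending task become a busy robot.
net = PetriNet({"idle": 1, "task": 1})
net.add_transition("assign", {"idle": 1, "task": 1}, {"busy": 1})
net.fire("assign")
print(net.marking)  # {'idle': 0, 'task': 0, 'busy': 1}
```

Modelling coordination this way makes properties such as deadlock-freedom and resource conflicts analysable before deployment.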


Using fixed-point arithmetic is one of the most common design choices for systems where area, power or throughput are heavily constrained. In order to produce implementations where cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. The problem of finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem, to which developers devote between 25 and 50% of the design-cycle time.
Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient area utilization of these platforms with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization noise modelling and word-length optimization methodologies. In this Ph.D. Thesis we explore different aspects of the quantization problem and present new methodologies for each of them. Techniques based on extensions of intervals have made it possible to obtain accurate models of signal and quantization noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art statistical Modified Affine Arithmetic (MAA) based methodology in order to model systems that contain control-flow structures. Our methodology produces the different execution paths automatically, determines the regions of the input domain that will exercise them, and extracts the system statistical moments from the partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures, and we show the good accuracy of our approach, which in some case studies with non-linear operators deviates by only 0.04% from the simulation-based reference values. A known drawback of techniques based on extensions of intervals is the combinatorial explosion of terms as the size of the targeted systems grows, which leads to scalability problems.
To address this issue we present a clustered noise injection technique that groups the signals in the system, introduces the noise terms in each group independently and then combines the results at the end. In this way, the number of noise sources in the system at a given time is controlled and the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results due to the loss of correlation between noise terms, in order to keep the results as accurate as possible. This Ph.D. Thesis also covers the development of methodologies for word-length optimization based on Monte-Carlo simulations that run in reasonable times. We do so by presenting two novel techniques that approach the reduction of execution time from two different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization effort. Second, the incremental method revolves around the fact that, although we strictly need to guarantee a certain confidence level in the simulations for the final results of the optimization process, we can use more relaxed levels, which in turn imply a considerably smaller number of samples, in the initial stages of the process, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy techniques can be accelerated by factors of up to ×240 for small/medium-sized problems. Finally, this thesis introduces HOPLITE, an automated, flexible and modular framework for quantization that includes the implementation of the previous techniques and is publicly available. The aim is to offer developers and researchers a common ground for easily prototyping and verifying new techniques for system modelling and word-length optimization.
We describe its workflow, justify the design decisions taken, explain its public API and give a step-by-step demonstration of its execution. We also show, through a simple example, how new extensions should be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
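The round-off noise that word-length optimization flows of this kind model rests on the classical uniform quantization model, in which rounding to a step Δ injects noise of power Δ²/12. A Monte-Carlo check of that model (a generic sketch, not HOPLITE's own API):

```python
import math
import random

def quantize(x, frac_bits):
    """Round x to a fixed-point grid with frac_bits fractional bits."""
    step = 2.0 ** -frac_bits
    return round(x / step) * step

# Monte-Carlo estimate of the rounding-noise power for one signal.
random.seed(0)                      # deterministic run
frac_bits = 8
step = 2.0 ** -frac_bits
samples = [random.uniform(-1.0, 1.0) for _ in range(100000)]
noise_power = sum((x - quantize(x, frac_bits)) ** 2 for x in samples) / len(samples)

# The classical model predicts step**2 / 12 for rounding:
print(abs(noise_power / (step ** 2 / 12.0) - 1.0) < 0.05)  # True
```

Analytical techniques such as the affine-arithmetic models discussed above propagate exactly this kind of per-operator noise term through the system, avoiding the cost of running the simulation for every candidate word-length assignment.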


Society today is completely dependent on computer networks, the Internet and distributed systems, which place at our disposal the services needed to perform our daily tasks. Subconsciously, we rely increasingly on network management systems: systems that allow us to maintain, manage, configure, scale, adapt, modify, edit, protect and enhance the main distributed systems. Their role is secondary, unknown and transparent to users, yet they provide the support needed to maintain the distributed systems whose services we use every day. If network management systems are not considered during the development stage of a distributed system, the consequences can be serious, up to the total failure of the development effort. It is necessary, therefore, to consider the management of the systems within the design of distributed systems and to systematise their design in order to minimise the impact of network management on distributed systems projects. In this paper, we present a framework that allows network management systems to be designed systematically. To accomplish this goal, formal modelling tools are used to model, sequentially, different proposed views of the same problem. These views cover all the aspects involved in the system; they are based on process definitions for identifying responsibilities and defining the agents involved, in order to propose a deployment on a distributed architecture that is both feasible and appropriate.