968 results for Set of Weak Stationary Dynamic Actions


Relevance: 100.00%

Publisher:

Abstract:

Doctoral programme: Sistemas Inteligentes y Aplicaciones Numéricas en Ingeniería, Instituto Universitario (SIANI)

Relevance: 100.00%

Publisher:

Abstract:

Human reasoning is a fascinating and complex cognitive process that can be applied in different research areas such as philosophy, psychology, law and finance. Unfortunately, developing supporting software (for those different areas) able to cope with such complex reasoning is difficult and requires a suitable abstract logical formalism. In this thesis we aim to develop a program whose job is to evaluate a theory (a set of rules) with respect to a goal and to provide results such as "the Goal is derivable from the KB (of the theory)". In order to achieve this we need to analyse different logics and choose the one that best meets our needs. In logic, usually, we try to determine whether a given conclusion is logically implied by a set of assumptions T (the theory). However, when we deal with logic programming we need an efficient algorithm to find such implications. In this work we use a logic rather similar to human reasoning. Indeed, human reasoning requires an extension of first-order logic able to reach a conclusion from premises that are not definitely true and belong to an incomplete set of knowledge. Thus, we implemented a defeasible logic framework able to manipulate defeasible rules. Defeasible logic is a non-monotonic logic designed by Nute for efficient defeasible reasoning (see Chapter 2). These kinds of applications are useful in the legal area, especially if they offer an implementation of an argumentation framework that provides a formal model of a game. Roughly speaking, if the theory is the set of laws, a key claim is the conclusion that one of the parties wants to prove (and the other wants to defeat), and we add dynamic assertion of rules, namely facts put forward by the parties, then we can play an argumentative challenge between two players and decide whether the conclusion is provable or not depending on the different strategies performed by the players. Implementing the game model requires one more meta-interpreter able to evaluate the defeasible logic framework; indeed, according to Gödel's theorem (see page 127), we cannot evaluate the meaning of a language using the tools provided by the language itself, but need a meta-language able to manipulate the object language. Thus, rather than a simple meta-interpreter, we propose a Meta-level containing different Meta-evaluators: the first has been explained above, the second is needed to run the game model, and the last is used to change the game execution and tree derivation strategies.
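
As a rough illustration of the kind of evaluation such a program performs, the following Python sketch checks defeasible derivability over a tiny rule set. It is not the thesis's Prolog-style meta-interpreter: the literals, the rules and the crude notion of attack (no superiority relation) are invented for the example.

# Toy defeasible-reasoning check (illustrative sketch only).
facts = {"bird(tweety)", "penguin(tweety)"}

# Strict rules: body -> head (head holds whenever the body is strictly derivable).
strict = [({"penguin(tweety)"}, "bird(tweety)")]

# Defeasible rules: body => head (head holds unless a rule for its negation fires).
defeasible = [({"bird(tweety)"}, "flies(tweety)"),
              ({"penguin(tweety)"}, "~flies(tweety)")]

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def strictly_derivable(lit, seen=frozenset()):
    if lit in facts:
        return True
    return any(head == lit and lit not in seen
               and all(strictly_derivable(b, seen | {lit}) for b in body)
               for body, head in strict)

def defeasibly_derivable(lit):
    if strictly_derivable(lit):
        return True
    supported = any(all(defeasibly_derivable(b) for b in body)
                    for body, head in defeasible if head == lit)
    attacked = any(all(strictly_derivable(b) for b in body)   # crude notion of attack
                   for body, head in defeasible if head == negate(lit))
    return supported and not attacked and not strictly_derivable(negate(lit))

print(defeasibly_derivable("flies(tweety)"))   # False: the penguin rule attacks it
print(defeasibly_derivable("bird(tweety)"))    # True: strictly derivable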

Relevance: 100.00%

Publisher:

Abstract:

In recent years, an ever increasing degree of automation has been observed in most industrial processes. This increase is motivated by the demand for systems with high performance in terms of the quality of the products and services generated, productivity, efficiency and low costs in design, realization and maintenance. This trend in the growth of complex automation systems is rapidly spreading over automated manufacturing systems (AMS), where the integration of mechanical and electronic technology, typical of mechatronics, is merging with other technologies such as informatics and communication networks. An AMS is a very complex system that can be thought of as a set of flexible working stations and one or more transportation systems. To understand how important these machines are in our society, consider that every day most of us use bottles of water or soda and buy boxed products such as food or cigarettes. A further indication of their complexity is that the consortium of machine producers has estimated around 350 types of manufacturing machine. A large number of manufacturing machine industries are present in Italy, notably the packaging machine industry; a great concentration of this kind of industry is located in the Bologna area, which for this reason is called the "packaging valley". Usually, the various parts of an AMS interact in a concurrent and asynchronous way, and coordinating the parts of the machine to obtain a desired overall behaviour is a hard task. Often this is the case in large-scale systems, organized in a modular and distributed manner. Even if the success of a modern AMS, from a functional and behavioural point of view, is still to be attributed to the design choices made in the definition of the mechanical structure and the electrical/electronic architecture, the system that governs the control of the plant is becoming crucial because of the large number of duties assigned to it. Apart from the activity inherent to the automation of the machine cycles, the supervisory system is called on to perform other main functions, such as: emulating the behaviour of traditional mechanical members, thus allowing a drastic constructive simplification of the machine and a crucial functional flexibility; dynamically adapting the control strategies according to the different productive needs and operational scenarios; obtaining a high quality of the final product through verification of the correctness of the processing; directing the operator in charge of the machine to promptly and carefully take the actions needed to establish or restore the optimal operating conditions; and managing in real time information on diagnostics, as a support to the maintenance operations of the machine. The facilities that designers can directly find on the market, in terms of software component libraries, provide adequate support for the implementation of either top-level or bottom-level functionalities, typically pertaining to the domains of user-friendly HMIs, closed-loop regulation and motion control, and fieldbus-based interconnection of remote smart devices.
What is still lacking is a reference framework comprising a comprehensive set of highly reusable logic control components that, focusing on the cross-cutting functionalities characterizing the automation domain, may help designers in the process of modelling and structuring their applications according to specific needs. Historically, the design and verification process for complex automated industrial systems has been performed in an empirical way, without a clear distinction between functional and technological/implementation concepts and without a systematic method to deal organically with the complete system. In the field of analog and digital control, design and verification through formal and simulation tools have traditionally been adopted, at least for multivariable and/or nonlinear controllers of complex time-driven dynamics, as in the fields of vehicles, aircraft, robots, electric drives and complex power electronics equipment. Moving to the field of logic control, typical of industrial manufacturing automation, the design and verification process is approached in a completely different and usually very "unstructured" way. No clear distinction is made between functions and implementations, or between functional architectures and technological architectures and platforms. This difference is probably due to the different "dynamical framework" of logic control with respect to analog/digital control. As a matter of fact, in logic control discrete-event dynamics replace time-driven dynamics; hence most of the formal and mathematical tools of analog/digital control cannot be directly migrated to logic control to highlight the distinction between functions and implementations. In addition, in the common view of application technicians, logic control design is strictly connected to the adopted implementation technology (relays in the past, software nowadays), leading again to a deep confusion between the functional view and the technological view. In industrial automation software engineering, concepts such as modularity, encapsulation, composability and reusability are strongly emphasized and profitably realized in the so-called object-oriented methodologies. Industrial automation has lately been adopting this approach, as testified by IEC standards such as IEC 61131-3 and IEC 61499, which have been considered in commercial products only recently. On the other hand, many contributions have already been proposed in the scientific and technical literature to establish a suitable modelling framework for industrial automation. In recent years it has been possible to observe a considerable growth in the exploitation of innovative concepts and technologies from the ICT world in industrial automation systems. As far as logic control design is concerned, Model Based Design (MBD) is being imported into industrial automation from the software engineering field. Another key point in industrial automated systems is the growth of requirements in terms of availability, reliability and safety for technological systems. In other words, the control system should not only deal with the nominal behaviour but should also handle other important duties, such as diagnosis and fault isolation, recovery and safety management. Indeed, together with high performance, fault occurrences increase in complex systems.
This is a consequence of the fact that, as typically occurs in reliable mechatronic systems, complex systems such as AMSs contain, alongside reliable mechanical elements, an increasing number of electronic devices, which are more vulnerable by their own nature. The diagnosis and fault isolation problem in a generic dynamical system consists in the design of an elaboration unit that, by appropriately processing the inputs and outputs of the dynamical system, is capable of detecting incipient faults on the plant devices and of reconfiguring the control system so as to guarantee satisfactory performance. The designer should be able to formally verify the product, certifying that, in its final implementation, it will perform its required function guaranteeing the desired level of reliability and safety; the next step is that of preventing faults and eventually reconfiguring the control system so that faults are tolerated. On this topic, important improvements to the formal verification of logic control, fault diagnosis and fault tolerant control derive from Discrete Event Systems theory. The aim of this work is to define a design pattern and a control architecture to help the designer of control logic in industrial automated systems. The work starts with a brief discussion of the main characteristics and a description of industrial automated systems in Chapter 1. Chapter 2 presents a survey on the state of the software engineering paradigms applied to industrial automation. Chapter 3 presents an architecture for industrial automated systems based on the new concept of Generalized Actuator, showing its benefits, while in Chapter 4 this architecture is refined using a novel entity, the Generalized Device, in order to obtain better reusability and modularity of the control logic. In Chapter 5 a new approach based on Discrete Event Systems is presented for the problem of software formal verification, together with an active fault tolerant control architecture using online diagnostics. Finally, concluding remarks and some ideas on new directions to explore are given. Appendix A briefly reports some concepts and results about Discrete Event Systems which should help the reader in understanding some crucial points in Chapter 5, while Appendix B gives an overview of the experimental testbed of the Laboratory of Automation of the University of Bologna, used to validate the approach presented in Chapters 3, 4 and 5. Appendix C reports some component models used in Chapter 5 for formal verification.
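
To make the discrete-event view referred to above concrete, here is a minimal Python sketch of a plant component modelled as a finite automaton, composed with a supervisor that disables unsafe events. The states, events and the forbidden pair are invented for the example; this is not the Generalized Actuator or Generalized Device model of the thesis.

# Minimal discrete-event sketch: plant automaton plus a supervisor that disables
# unsafe events (illustrative only; names are invented).
plant = {                       # state -> {event: next_state}
    "idle":   {"start": "moving"},
    "moving": {"reached": "idle", "fault": "error"},
    "error":  {"reset": "idle"},
}

supervisor_forbidden = {("error", "start")}   # events the supervisor disables

def step(state, event):
    """Advance the plant by one event if it is defined and not disabled."""
    if (state, event) in supervisor_forbidden:
        raise ValueError(f"supervisor disables '{event}' in state '{state}'")
    if event not in plant[state]:
        raise ValueError(f"event '{event}' not defined in state '{state}'")
    return plant[state][event]

s = "idle"
for e in ["start", "fault", "reset", "start", "reached"]:
    s = step(s, e)
    print(e, "->", s)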

Relevance: 100.00%

Publisher:

Abstract:

Protocol-based medicine represents an interdisciplinary focal point of computer science. As a particular subset of the medical domains, it allows the relatively formal specification of processes in the three areas of prevention, diagnosis and therapy. The latter has always received particular attention and has long served, in the context of clinical studies, as a projection screen for information-technology concepts. The euphoria of the early years, however, is sobered by every assessment: only very few of the countless projects have found their way into routine everyday practice. Most undertakings failed on the illusion that medical workflows are completely computable. The traditional view of clinical practice rests on a block-oriented conception of the therapy execution process. It arises by decomposing the process into individual therapy branches, which are composed of predefined blocks. These blocks can be executed sequentially or in parallel and are themselves composed of a set of elements representing the activities of the lowest level. The block-oriented structural model is complemented by a rule-oriented execution model: a complex set of rules defines conditions for the temporal and logical dependencies of the blocks, whose ordering is formed by the execution process. The modelling of therapy execution first faces the fundamental question of how far the traditional view is suitable as an internal representation. The overarching goal is the integration of the different levels of the therapy specification, which includes not only the structural component but above all the execution component. A suitable rule model is required that meets the specific needs of therapy monitoring; the central task is to bring these different levels together. A sensible alternative to the traditional view is provided by the state-oriented model of the therapy execution process. The state-oriented model is based on the view that the entire therapy execution process ultimately describes a linear sequence of states, where each state transition is triggered by an event, is tied to certain conditions and may trigger certain actions. The parallelism of the block-oriented model recedes into the background, since the measures to be carried out are merely properties of the states and not structural elements of the execution specification. At any point in time exactly one state is active, and it represents one of finitely many clinical situations, with all their specific activities and execution rules. The advantages of the state-oriented model lie in integration: the basic structure combines the static representation of the possible orderings of phases with the dynamic execution of active rules. The original contents of the block-oriented model are reproduced as ordinary properties of the states and thus represent only a special case of the state-based view. Further possibilities for enriching the states with additional details are both conceivable and sensible, yet the basic structure remains the same under every extension. The result is a reusable skeleton, a common denominator for fulfilling the monitoring task.
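
A minimal Python sketch of the state-oriented view described above: exactly one state is active at a time, each transition is triggered by an event and guarded by a condition, and the clinical activities are plain properties of the states. All state names, events and activities are invented examples, not part of any real protocol.

from dataclasses import dataclass, field

@dataclass
class State:
    name: str
    activities: list = field(default_factory=list)     # what to do while in this state
    transitions: list = field(default_factory=list)    # (event, guard, target) triples

induction = State("induction", ["administer drug A", "monitor blood count"])
recovery  = State("recovery",  ["weekly check-up"])
induction.transitions.append(("cycle_done", lambda ctx: ctx["blood_count_ok"], "recovery"))
recovery.transitions.append(("relapse", lambda ctx: True, "induction"))

states = {s.name: s for s in (induction, recovery)}

def fire(current, event, ctx):
    """Return the next state name if some guarded transition matches, else stay."""
    for ev, guard, target in states[current].transitions:
        if ev == event and guard(ctx):
            return target
    return current

s = "induction"
s = fire(s, "cycle_done", {"blood_count_ok": True})
print(s, states[s].activities)      # recovery ['weekly check-up']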

Relevance: 100.00%

Publisher:

Abstract:

Electronic applications are nowadays converging under the umbrella of the cloud computing vision. The future ecosystem of information and communication technology is going to integrate clouds of portable clients and embedded devices exchanging information, through the internet layer, with processing clusters of servers, data centres and high performance computing systems. Even though the whole of society is waiting to embrace this revolution, there is a downside to the story. Portable devices require batteries to work far from power plugs, and their storage capacity does not scale as the increasing power requirement does. At the other end, processing clusters such as data centres and server farms are built upon the integration of thousands of multiprocessors. For each of them, during the last decade, technology scaling has produced a dramatic increase in power density with significant spatial and temporal variability. This leads to power and temperature hot spots, which may cause non-uniform ageing and accelerated chip failure. Moreover, all the heat removed from the silicon translates into high cooling costs. Trends in the ICT carbon footprint also show that the run-time power consumption of the whole spectrum of devices accounts for a significant slice of worldwide carbon emissions. This thesis addresses the full ICT ecosystem and its dynamic power consumption concerns by describing a set of new and promising system-level resource management techniques to reduce power consumption and the related issues for two corner cases: mobile devices and high performance computing.

Relevance: 100.00%

Publisher:

Abstract:

The focus of this thesis was the in-situ application of the new analytical technique "GCxGC" in both the marine and continental boundary layer, as well as in the free troposphere. Biogenic and anthropogenic VOCs were analysed and used to characterise the local chemistry at the individual measurement sites. The first part of the thesis work was the characterisation of a new set of columns that was to be used later in the field. To simplify the identification, a time-of-flight mass spectrometer (TOF-MS) detector was coupled to the GCxGC. In the field, the TOF-MS was substituted by a more robust and tractable flame ionisation detector (FID), which is more suitable for quantitative measurements. During this process, a variety of volatile organic compounds could be assigned to different environmental sources, e.g. plankton, eucalyptus forest or urban centers. In-situ measurements of biogenic and anthropogenic VOCs were conducted at the Meteorological Observatory Hohenpeissenberg (MOHP), Germany, applying a thermodesorption-GCxGC-FID system. The measured VOCs were compared to the GC-MS measurements routinely conducted at the MOHP as well as to PTR-MS measurements. Furthermore, a compressed ambient air standard was measured on three different gas chromatographic instruments and the results were compared. With few exceptions, the in-situ as well as the standard measurements revealed good agreement between the individual instruments. Diurnal cycles were observed, with differing patterns for the biogenic and the anthropogenic compounds. The variability-lifetime relationship of compounds with atmospheric lifetimes from a few hours to a few days in the presence of O3 and OH was examined. It revealed a weak but significant influence of chemistry on these short-lived VOCs at the site. The relationship was also used to estimate the average OH radical concentration during the campaign, which was compared to in-situ OH measurements (1.7 x 10^6 molecules/cm^3, 0.071 ppt) for the first time. The OH concentration of 3.5 to 6.5 x 10^5 molecules/cm^3 (0.015 to 0.027 ppt) obtained with this method represents an approximation of the average OH concentration influencing the discussed VOCs from emission to measurement. Based on these findings, the average concentration of nighttime NO3 radicals was estimated using the same approach and found to range from 2.2 to 5.0 x 10^8 molecules/cm^3 (9.2 to 21.0 ppt). During the MINATROC field campaign, in-situ ambient air measurements with the GCxGC-FID were conducted on Tenerife, Spain. Although the station is mainly situated in the free troposphere, local influences of anthropogenic and biogenic VOCs were observed. Owing to a strong dust event originating from Western Africa, it was possible to compare the mixing ratios during normal and elevated dust loading in the atmosphere. The mixing ratios during the dust event were found to be lower. However, this could not be attributed to heterogeneous reactions, as the wind direction changed from northwesterly to southeasterly during the dust event.
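
For illustration, a back-of-the-envelope version of how a variability-lifetime relationship of the Jobson type, sigma_lnX = A * tau^(-b), can be inverted to estimate an average OH concentration. The fitted parameters, the measured variability and the rate constant below are invented example values, and this is not necessarily the exact formulation used in the thesis.

# Illustrative inversion of a variability-lifetime fit to an average [OH].
A, b = 1.0, 0.45        # fitted parameters of sigma_lnX = A * tau^-b (assumed)
sigma_lnX = 0.8         # measured variability of a short-lived VOC (assumed)
k_OH = 2.0e-11          # rate constant with OH, cm^3 molecule^-1 s^-1 (assumed)

tau_days = (A / sigma_lnX) ** (1.0 / b)   # lifetime implied by the fit (fit done in days)
tau_s = tau_days * 86400.0
OH = 1.0 / (k_OH * tau_s)                 # [OH] if reaction with OH is the dominant sink

print(f"implied lifetime: {tau_days:.2f} days, average [OH] ~ {OH:.2e} molecules/cm^3")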

Relevance: 100.00%

Publisher:

Abstract:

The objective of this thesis work is the refined estimation of source parameters. To this purpose we used two different approaches, one in the frequency domain and the other in the time domain. In the frequency domain, we analysed the P- and S-wave displacement spectra to estimate the spectral parameters, that is, the corner frequencies and the low-frequency spectral amplitudes. We used a parametric modelling approach combined with a multi-step, non-linear inversion strategy that includes the correction for attenuation and site effects. The iterative multi-step procedure was applied to about 700 microearthquakes in the moment range 10^11-10^14 N·m, recorded at the dense, wide-dynamic-range seismic networks operating in the Southern Apennines (Italy). The analysis of source parameters is often complicated when we are not able to model the propagation accurately. In this case the empirical Green function approach is a very useful tool to study the seismic source properties. In fact, Empirical Green Functions (EGFs) allow the contribution of propagation and site effects to the signal to be represented without using approximate velocity models. An EGF is a recorded three-component set of time histories of a small earthquake whose source mechanism and propagation path are similar to those of the master event. Thus, in the time domain, the deconvolution method of Vallée (2004) was applied to calculate the relative source time functions (RSTFs) and to accurately estimate source size and rupture velocity. This technique was applied to: 1) a large event, the Mw 6.3 2009 L'Aquila mainshock (Central Italy); 2) moderate events, a cluster of earthquakes of the 2009 L'Aquila sequence with moment magnitudes ranging between 3 and 5.6; and 3) a small event, the Mw 2.9 Laviano mainshock (Southern Italy).
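
As an illustration of how estimated spectral parameters are commonly turned into physical source parameters, the following Python sketch applies textbook Brune/Madariaga-type relations. The constant k, the S-wave velocity and the input values are generic assumptions for the example, not results of this work.

# Generic spectral source-parameter relations (illustrative values only).
import math

M0 = 1.0e13        # seismic moment, N*m (within the 10^11-10^14 N*m range quoted above)
fc = 4.0           # S-wave corner frequency, Hz (example value)
beta = 3500.0      # S-wave velocity near the source, m/s (assumed)
k = 0.37           # Brune-model constant for S waves (0.21 for the Madariaga model)

r = k * beta / fc                         # source radius, m
stress_drop = 7.0 / 16.0 * M0 / r**3      # circular-crack stress drop, Pa
Mw = 2.0 / 3.0 * (math.log10(M0) - 9.1)   # moment magnitude from M0 in N*m

print(f"r = {r:.0f} m, stress drop = {stress_drop/1e6:.2f} MPa, Mw = {Mw:.1f}")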

Relevance: 100.00%

Publisher:

Abstract:

Neisseria meningitidis (Nm) is the major cause of septicemia and meningococcal meningitis. During the course of infection, it must adapt to different host environments, which is a crucial factor for its survival. Despite the severity of meningococcal sepsis, little is known about how Nm adapts to permit survival and growth in human blood. A previous time-course transcriptome analysis, using an ex vivo model of human whole blood infection, showed that Nm alters the expression of nearly 30% of the ORFs in the genome: major dynamic changes were observed in the expression of transcriptional regulators, transport and binding proteins, energy metabolism, and surface-exposed virulence factors. Starting from these data, mutagenesis studies of a subset of up-regulated genes were performed and the mutants were tested for their ability to survive in human whole blood; Nm mutant strains lacking the genes encoding NMB1483, NalP, Mip, NspA, Fur, TbpB and LctP were sensitive to killing by human blood. The analysis was then extended to the whole Nm transcriptome in human blood, using a customized 60-mer oligonucleotide tiling microarray. The application of specifically developed software combined with this new tiling array allowed the identification of different types of regulated transcripts: small intergenic RNAs, antisense RNAs, 5' and 3' untranslated regions and operons. The expression of these RNA molecules was confirmed by a 5'-3' RACE protocol and by specific RT-PCR. Here we describe the complete transcriptome of Nm during incubation in human blood; we were able to identify new proteins important for survival in human blood and also to identify additional roles of previously known virulence factors in aiding survival in blood. In addition, the tiling array analysis demonstrated that Nm expresses a set of new transcripts not previously identified, and suggests the presence of a circuit of regulatory RNA elements used by Nm to adapt and proliferate in human blood.

Relevance: 100.00%

Publisher:

Abstract:

This thesis presents several data processing and compression techniques capable of addressing the strict requirements of wireless sensor networks. After a general overview of sensor networks, the energy problem is introduced, dividing the different energy reduction approaches according to the subsystem they try to optimize. To manage the complexity brought by these techniques, a quick overview of the most common middlewares for WSNs is given, describing in detail SPINE2, a framework for data processing in the node environment. The focus is then shifted to in-network aggregation techniques, used to reduce the data sent by the network nodes and so prolong the network lifetime as long as possible. Among the several techniques, the most promising approach is Compressive Sensing (CS). To investigate this technique, a practical implementation of the algorithm is compared against a simpler aggregation scheme, deriving a mixed algorithm able to successfully reduce the power consumption. The analysis then moves from compression implemented on single nodes to CS for signal ensembles, trying to exploit the correlations among sensors and nodes to improve compression and reconstruction quality. The two main techniques for signal ensembles, Distributed CS (DCS) and Kronecker CS (KCS), are introduced and compared against a common set of data gathered by real deployments. The best trade-off between reconstruction quality and power consumption is then investigated. The usage of CS is also addressed when the signal of interest is sampled at a sub-Nyquist rate, evaluating the reconstruction performance. Finally, group sparsity CS (GS-CS) is compared to another well-known technique for the reconstruction of signals from a highly sub-sampled version. These two frameworks are compared again against a real data set, and an insightful analysis of the trade-off between reconstruction quality and lifetime is given.
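
A minimal sketch of the compressive sensing idea discussed above, assuming a random Gaussian sensing matrix and a greedy orthogonal matching pursuit recovery. It is not the mixed algorithm developed in the thesis, and the sizes and sparsity are toy values.

# Toy compressive-sensing example: y = Phi @ x is measured, x is recovered by OMP.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 128, 40, 5                      # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = Phi @ x                                      # compressed measurements (what a node would send)

def omp(Phi, y, k):
    """Recover a k-sparse signal by orthogonal matching pursuit."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))    # most correlated atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(Phi, y, k)
print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))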

Relevance: 100.00%

Publisher:

Abstract:

The topic of this thesis is the feedback stabilization of the attitude of magnetically actuated spacecraft. The use of magnetic coils is an attractive solution for the generation of control torques on small satellites flying inclined low Earth orbits, since magnetic control systems are characterized by reduced weight and cost, higher reliability, and lower power requirements with respect to other kinds of actuators. At the same time, the possibility of smooth modulation of the control torques reduces the coupling of the attitude control system with flexible modes, thus preserving pointing precision with respect to the case when pulse-modulated thrusters are used. The actuation principle, based on the interaction between the Earth's magnetic field and the magnetic field generated by the set of coils, introduces an inherent nonlinearity, because control torques can be delivered only in the plane orthogonal to the direction of the geomagnetic field vector. In other words, the system is underactuated: the rotational degrees of freedom of the spacecraft, modeled as a rigid body, exceed the number of independent control actions. The solution of the control problem for underactuated spacecraft is also interesting in the case of actuator failure, e.g. after the loss of a reaction wheel in a three-axis stabilized spacecraft with no redundancy. The application of well-known control strategies is no longer possible in this case for either regulation or tracking, so new methods have been suggested for tackling this particular problem. The main contribution of this thesis is to propose continuous time-varying controllers that globally stabilize the attitude of a spacecraft when magnetic torquers alone are used and when a momentum wheel supports magnetic control in order to overcome the inherent underactuation. A kinematic maneuver planning scheme, stability analyses, and detailed simulation results are also provided, with new theoretical developments and particular attention to application considerations.
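
A small numerical illustration of the underactuation described above: with the common cross-product dipole command m = (B x tau_des)/|B|^2, the delivered torque m x B is exactly the component of the desired torque orthogonal to the local geomagnetic field. The field and torque values are arbitrary, and this is not the time-varying controller proposed in the thesis.

# Why magnetic actuation is underactuated: no torque can be produced along B.
import numpy as np

B = np.array([2.0e-5, -1.0e-5, 3.0e-5])          # geomagnetic field in body axes, Tesla
tau_des = np.array([1.0e-4, 5.0e-5, -2.0e-5])    # desired control torque, N*m

m = np.cross(B, tau_des) / np.dot(B, B)          # coil dipole moment, A*m^2
tau = np.cross(m, B)                             # torque actually delivered

print("delivered :", tau)
print("projection:", tau_des - B * np.dot(B, tau_des) / np.dot(B, B))  # identical
print("along B   :", np.dot(tau, B))             # ~0: nothing along the field line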

Relevance: 100.00%

Publisher:

Abstract:

Nowadays microfluidics is becoming an important technology in many chemical and biological processes and analysis applications. The potential to replace large-scale conventional laboratory instrumentation with miniaturized and self-contained systems, called lab-on-a-chip (LOC) or point-of-care-testing (POCT) systems, offers a variety of advantages such as low reagent consumption, faster analysis, and the capability of operating on a massively parallel scale in order to achieve high throughput. Micro-electro-mechanical-systems (MEMS) technologies enable both the fabrication of miniaturized systems and the development of compact and portable instruments. The work described in this dissertation is aimed at the development of micromachined separation devices for both high-speed gas chromatography (HSGC) and gravitational field-flow fractionation (GrFFF) using MEMS technologies. Concerning HSGC, a complete platform of three MEMS-based GC core components (injector, separation column and detector) is designed, fabricated and characterized. The microinjector consists of a set of pneumatically driven microvalves based on a polymeric actuating membrane. Experimental results demonstrate that the microinjector is able to guarantee low dead volumes, fast actuation times, a wide operating temperature range and high chemical inertness. The microcolumn is an all-silicon column with a nearly circular channel cross-section. The extensive characterization has produced separation performances very close to the theoretical ideal expectations. A thermal conductivity detector (TCD) is chosen as the most suitable detector for miniaturization, since the volume reduction of the detector chamber results in increased mass sensitivity and reduced dead volumes. The micro-TCD shows good sensitivity and a very wide dynamic range. Finally, a feasibility study for miniaturizing a channel suited for GrFFF is performed. The proposed GrFFF microchannel is at an early stage of development, but represents a first step towards the realization of a highly portable and potentially low-cost POCT device for biomedical applications.
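
The "theoretical ideal expectations" for an open-tubular column are usually taken from the Golay plate-height equation; the following sketch evaluates it for an illustrative set of parameters, which are not measurements from this work.

# Textbook Golay plate-height equation for an open-tubular GC column (illustrative values).
import numpy as np

d_c = 100e-6        # column internal diameter, m (assumed)
d_f = 0.2e-6        # stationary-phase film thickness, m (assumed)
D_g = 1.0e-5        # analyte diffusion coefficient in the carrier gas, m^2/s (assumed)
D_s = 1.0e-9        # analyte diffusion coefficient in the stationary phase, m^2/s (assumed)
k = 2.0             # retention factor (assumed)

u = np.linspace(0.05, 1.0, 200)                  # average carrier-gas velocity, m/s
f = (1 + 6*k + 11*k**2) / (96 * (1 + k)**2)      # mobile-phase mass-transfer coefficient
g = 2*k / (3 * (1 + k)**2)                       # stationary-phase mass-transfer coefficient
H = 2*D_g/u + f * d_c**2 * u / D_g + g * d_f**2 * u / D_s

print(f"minimum plate height ~ {H.min()*1e6:.1f} um at u ~ {u[H.argmin()]:.2f} m/s")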

Relevance: 100.00%

Publisher:

Abstract:

Most of the problems in modern structural design can be described with a set of equations; the solutions of these mathematical models can provide the engineer and the designer with information during the design stage. The same holds true for physical chemistry; this branch of chemistry uses mathematics and physics to explain real chemical phenomena. In this work two extremely different chemical processes are studied: the dynamics of an artificial molecular motor, and the generation and propagation of nervous signals between excitable cells and tissues such as neurons and axons. These two processes, in spite of their chemical and physical differences, can both be described successfully by partial differential equations, namely the Fokker-Planck equation and the Hodgkin-Huxley model, respectively. With the aid of advanced engineering software these two processes have been modelled and simulated in order to extract physical information about them and to predict properties that can, in the future, be extremely useful during the design stage of both molecular motors and devices whose action relies on nervous communication between active fibres.
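
For reference, textbook forms of the two models named above, written in LaTeX; the thesis may use different notation or additional terms.

% Fokker-Planck (Smoluchowski) equation for the motor coordinate x in a potential U(x),
% and the Hodgkin-Huxley membrane equation with gating variables m, h, n:
\begin{align}
  \frac{\partial P(x,t)}{\partial t}
    &= \frac{\partial}{\partial x}\!\left[\frac{1}{\gamma}\,\frac{\partial U}{\partial x}\,P(x,t)\right]
     + D\,\frac{\partial^{2} P(x,t)}{\partial x^{2}} \\
  C_m \frac{\mathrm{d}V}{\mathrm{d}t}
    &= -\bar{g}_{\mathrm{Na}}\, m^{3} h\,(V - E_{\mathrm{Na}})
       -\bar{g}_{\mathrm{K}}\, n^{4}\,(V - E_{\mathrm{K}})
       -g_{L}\,(V - E_{L}) + I_{\mathrm{ext}} \\
  \frac{\mathrm{d}x}{\mathrm{d}t}
    &= \alpha_{x}(V)\,(1 - x) - \beta_{x}(V)\,x ,
    \qquad x \in \{m, h, n\}
\end{align}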

Relevance: 100.00%

Publisher:

Abstract:

In distributed systems like clouds or service-oriented frameworks, applications are typically assembled by deploying and connecting a large number of heterogeneous software components, spanning from fine-grained packages to coarse-grained complex services. The complexity of such systems requires a rich set of techniques and tools to support the automation of their deployment process. By relying on a formal model of components, a technique is devised for computing the sequence of actions that allows the deployment of a desired configuration. An efficient algorithm, working in polynomial time, is described and proven to be sound and complete. Finally, a prototype tool implementing the proposed algorithm has been developed. Experimental results support the adoption of this novel approach in real-life scenarios.
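
A much-simplified sketch of the deployment-planning idea, assuming only that each component must be created after the components it depends on, so that a valid action sequence is a topological order of the dependency graph. The component names are invented, and the thesis's algorithm handles a far richer component model.

# Simplified deployment planning: order actions so dependencies come first.
from graphlib import TopologicalSorter

depends_on = {
    "web_frontend":  {"app_server"},
    "app_server":    {"database", "message_queue"},
    "database":      set(),
    "message_queue": set(),
}

plan = [f"deploy {c}" for c in TopologicalSorter(depends_on).static_order()]
print(plan)
# e.g. ['deploy database', 'deploy message_queue', 'deploy app_server', 'deploy web_frontend']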

Relevance: 100.00%

Publisher:

Abstract:

Natural hydraulic fracturing is an important and widespread process in all parts of the Earth's crust. It influences effective permeability and fluid transport across several orders of magnitude by creating hydraulic connectivity. The fracturing process is both highly dynamic and highly complex. The dynamics stem from the strong interaction of tectonic and hydraulic processes, while the complexity arises from the potential dependence of the poroelastic properties on fluid pressure and fracturing. The formation of hydraulic fractures consists of three phases: 1) nucleation, 2) time-dependent quasi-static growth as long as the fluid pressure exceeds the tensile strength of the rock, and 3) in heterogeneous rocks, the influence of layers with different mechanical or sedimentary properties on fracture propagation. The mechanical heterogeneity produced by pre-existing fractures and rock deformation also has a strong influence on the course of growth. The direction of fracture propagation is either determined by the linking of low-tensile-strength discontinuities in the region ahead of the fracture front, or propagation may stop when the fracture meets discontinuities of high strength. These interactions produce a fracture network of complex geometry that reflects the local deformation history and the dynamics of the underlying physical processes.
Natural hydraulic fracturing has important implications for academic and commercial questions in various fields of the geosciences. Since the 1950s, hydraulic fracturing has been used to increase the permeability of gas and oil reservoirs. Field observations, isotope studies, laboratory experiments and numerical analyses confirm the decisive role of the fluid pressure gradient, in combination with poroelastic effects, for the local stress state and for the conditions under which hydraulic fractures form and propagate. Most numerical hydromechanical models assume predefined fracture geometries with constant fluid pressure for the coupling between the fluid and the propagating fractures, in order to keep the problem computationally tractable. Since natural rocks are rarely structured so simply, these models are generally not very effective in analysing this complex process. In particular, they underestimate the feedback of poroelastic effects and coupled fluid-solid processes, i.e. the evolution of pore pressure as a function of rock failure and vice versa.
In this work, a two-dimensional coupled poro-elasto-plastic computer model is developed for the qualitative, and in part also quantitative, analysis of the role of localized or homogeneously distributed fluid pressures in the dynamic propagation of hydraulic fractures and the simultaneous evolution of the effective permeability. The program is computationally efficient because it describes the fluid dynamics by means of a Darcy-based pressure diffusion equation without redundant components. It also takes into account the Biot compressibility of porous rocks, which was implemented in order to determine the controlling parameters in the mechanics of hydraulic fracturing in different geological scenarios with homogeneous and heterogeneous sedimentary sequences.
The results show that in closed systems the fluid pressure gradient leads locally to perturbations of the homogeneous stress field. Depending on the boundary conditions, these perturbations can cause a reorientation of fracture propagation. Through their effect on the local stress state, high pressure gradients can also generate bedding-parallel fracturing or slip in undrained heterogeneous media. An example of particular importance is the evolution of accretionary wedges, where the strong dynamics of tectonic activity together with extreme pore pressures locally produce strong perturbations of the stress field, leading to a highly complex structural evolution including vertical and horizontal hydraulic fracture networks. The transport properties of the rocks are strongly controlled by the dynamic development of local permeabilities through tensile fractures and faults. There may be a close connection between the formation of graben structures and large-scale fluid migration.
The consistency between the simulation results and previous experimental investigations indicates that the described numerical scheme is well suited for the qualitative analysis of hydraulic fractures. The scheme still has drawbacks when it comes to the quantitative analysis of fluid flow through induced fracture surfaces in deformed rocks. It is also recommended to extend the presented numerical scheme with a coupling to thermo-chemical processes, in order to investigate dynamic problems associated with the growth of vein fillings in hydraulic fractures.
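
A textbook form of the Darcy-based pressure-diffusion equation with Biot coupling of the kind referred to above, in LaTeX; the exact formulation implemented in the model may differ.

\begin{equation}
  S_{\varepsilon}\,\frac{\partial p}{\partial t}
  \;=\;
  \nabla \cdot \left( \frac{k}{\mu}\, \nabla p \right)
  \;-\; \alpha\,\frac{\partial \varepsilon_{kk}}{\partial t}
  \;+\; q ,
\end{equation}
% with p the pore pressure, k the permeability, \mu the fluid viscosity,
% S_\varepsilon the storage coefficient (containing the Biot compressibility),
% \alpha the Biot-Willis coefficient, \varepsilon_{kk} the volumetric strain
% and q a source term.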

Relevance: 100.00%

Publisher:

Abstract:

In this work we use time-resolved imaging to investigate the gigahertz dynamics of magnetic skyrmions, in order to determine the equations of motion of these quasiparticles. To reach this goal, we first developed a CoB/Pt multilayer system that combines strong perpendicular magnetic anisotropy with a particularly low roughness of the energy landscape. These properties are essential for the repetitive dynamic imaging technique. In a second step we optimized and further developed the sample design so that the skyrmion motion could be observed with a resolution better than 3 nm. Thanks to these improvements, we were able to record the trajectory of a skyrmion. This motion is a superposition of two gyration modes, one clockwise and one counter-clockwise. From the existence of these two modes it follows that skyrmions are inertial quasiparticles, and from the frequencies we can derive a value for the inertial mass. It turns out that the skyrmion mass is five times larger than predicted by existing theories. The mass is therefore governed by a novel mechanism arising from the spatial confinement of the skyrmions, which in turn follows directly from their topology.
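
The two gyration modes described above are commonly analysed with a Thiele equation extended by an inertial term; a textbook form in LaTeX is given below (the thesis may use a more detailed model).

\begin{equation}
  m\,\ddot{\mathbf{R}}
  \;=\;
  \mathbf{G} \times \dot{\mathbf{R}}
  \;-\; \alpha D\,\dot{\mathbf{R}}
  \;-\; \nabla U(\mathbf{R}) ,
\end{equation}
% where R is the skyrmion position, G the gyrocoupling vector (fixed by the
% topological charge), \alpha D the dissipation and U a confining potential.
% For a harmonic U this equation has one clockwise and one counter-clockwise
% eigenmode, and the two measured frequencies together constrain both G and
% the inertial mass m.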