901 results for Distributed Traffic Control
Abstract:
In recent years, an ever-increasing degree of automation has been observed in most industrial processes. This trend is driven by the demand for systems with high performance in terms of the quality of the products/services generated, productivity, efficiency, and low costs in design, realization and maintenance. The growth of complex automation systems is rapidly spreading over automated manufacturing systems (AMS), where the integration of mechanical and electronic technology, typical of mechatronics, is merging with other technologies such as informatics and communication networks. An AMS is a very complex system that can be thought of as a set of flexible working stations together with one or more transportation systems. To understand how important these machines are in our society, consider that every day most of us use bottled water or soda, or buy boxed products such as food or cigarettes. Another indication of their complexity is that the consortium of machine producers has estimated around 350 types of manufacturing machine. A large number of manufacturing machine companies are located in Italy, notably packaging machine manufacturers, with a particularly high concentration in the Bologna area; for this reason the Bologna area is called the “packaging valley”. Usually, the various parts of an AMS interact in a concurrent and asynchronous way, and coordinating the parts of the machine to obtain a desired overall behaviour is a hard task. This is often the case in large-scale systems organized in a modular and distributed manner. Even if the success of a modern AMS from a functional and behavioural point of view is still attributable to the design choices made in defining the mechanical structure and the electrical/electronic architecture, the system that governs the control of the plant is becoming crucial because of the large number of duties assigned to it. Apart from the activities inherent in the automation of the machine cycles, the supervisory system is called upon to perform other main functions such as: emulating the behaviour of traditional mechanical members, thus allowing a drastic constructive simplification of the machine and a crucial functional flexibility; dynamically adapting the control strategies according to the different production needs and operational scenarios; obtaining a high quality of the final product through verification of the correctness of the processing; guiding the operator in charge of the machine to promptly and carefully take the actions needed to establish or restore the optimal operating conditions; and managing in real time the diagnostic information that supports machine maintenance operations. The facilities that designers can directly find on the market, in terms of software component libraries, in fact provide adequate support for the implementation of both top-level and bottom-level functionalities, typically pertaining to the domains of user-friendly HMIs, closed-loop regulation and motion control, and fieldbus-based interconnection of remote smart devices.
What is still lacking is a reference framework comprising a comprehensive set of highly reusable logic control components that, focusing on the cross-cutting functionalities characterizing the automation domain, may help designers in the process of modelling and structuring their applications according to their specific needs. Historically, the design and verification process for complex automated industrial systems has been performed in an empirical way, without a clear distinction between functional and technological/implementation concepts and without a systematic method to deal organically with the complete system. Traditionally, in the field of analog and digital control, design and verification through formal and simulation tools have long been adopted, at least for multivariable and/or nonlinear controllers for complex time-driven dynamics, as in the fields of vehicles, aircraft, robots, electric drives and complex power electronics equipment. Moving to the field of logic control, typical of industrial manufacturing automation, the design and verification process is approached in a completely different, usually very “unstructured” way. No clear distinction is made between functions and implementations, or between functional architectures and technological architectures and platforms. This difference is probably due to the different “dynamical framework” of logic control with respect to analog/digital control. As a matter of fact, in logic control discrete-event dynamics replace time-driven dynamics; hence most of the formal and mathematical tools of analog/digital control cannot be directly migrated to logic control to highlight the distinction between functions and implementations. In addition, in the common view of application technicians, logic control design is strictly connected to the adopted implementation technology (relays in the past, software nowadays), again leading to a deep confusion between the functional view and the technological view. In software engineering, concepts such as modularity, encapsulation, composability and reusability are strongly emphasized and profitably realized in the so-called object-oriented methodologies. Industrial automation has lately been adopting this approach, as testified by IEC standards such as IEC 61131-3 and IEC 61499, which have been considered in commercial products only recently. On the other hand, in the scientific and technical literature many contributions have already been proposed to establish a suitable modelling framework for industrial automation. In recent years a considerable growth in the exploitation of innovative concepts and technologies from the ICT world has been observed in industrial automation systems. As far as logic control design is concerned, Model-Based Design (MBD) is being imported into industrial automation from the software engineering field. Another key point in automated industrial systems is the growth of requirements in terms of availability, reliability and safety for technological systems. In other words, the control system should not only deal with the nominal behaviour, but should also handle other important duties, such as diagnosis and fault isolation, recovery and safety management. Indeed, in complex systems, together with high performance, fault occurrences also increase.
This is a consequence of the fact that, as typically occurs in reliable mechatronic systems, complex systems such as AMS contain, together with reliable mechanical elements, an increasing number of electronic devices, which are more vulnerable by their very nature. The diagnosis and fault isolation problem in a generic dynamical system consists in the design of a processing unit that, by appropriately processing the inputs and outputs of the dynamical system, is capable of detecting incipient faults on the plant devices and reconfiguring the control system so as to guarantee satisfactory performance. The designer should be able to formally verify the product, certifying that, in its final implementation, it will perform its required function while guaranteeing the desired level of reliability and safety; the next step is to prevent faults and, when necessary, reconfigure the control system so that faults are tolerated. On this topic, important results for the formal verification of logic control, fault diagnosis and fault-tolerant control derive from Discrete Event Systems theory. The aim of this work is to define a design pattern and a control architecture to help the designer of control logic in automated industrial systems. The work starts with a brief discussion of the main characteristics and a description of automated industrial systems in Chapter 1. In Chapter 2 a survey of the software engineering paradigms applied to industrial automation is presented. Chapter 3 presents an architecture for automated industrial systems based on the new concept of Generalized Actuator, showing its benefits, while in Chapter 4 this architecture is refined using a novel entity, the Generalized Device, in order to achieve better reusability and modularity of the control logic. In Chapter 5 a new approach based on Discrete Event Systems is presented for the problem of formal software verification, together with an active fault-tolerant control architecture using online diagnostics. Finally, concluding remarks and some ideas on new directions to explore are given. Appendix A briefly reports some concepts and results about Discrete Event Systems which should help the reader understand some crucial points in Chapter 5, while Appendix B gives an overview of the experimental testbed of the Laboratory of Automation of the University of Bologna, used to validate the approaches presented in Chapters 3, 4 and 5. Appendix C reports some component models used in Chapter 5 for formal verification.
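To illustrate the Discrete Event Systems view of fault diagnosis mentioned above, the following is a minimal sketch under stated assumptions (it is not one of the thesis' own component models): a hypothetical valve is modelled as a finite automaton whose fault event is unobservable, and a naive estimator tracks the set of states consistent with the observed events, declaring a fault only when every consistent state is faulty.

```python
# Minimal sketch of DES-based fault diagnosis (illustrative; the valve model,
# event names and diagnoser are hypothetical, not the thesis' components).
# Transitions: (state, event) -> next state. The fault event is unobservable.
TRANSITIONS = {
    ("closed", "open_cmd"): "open",
    ("open", "close_cmd"): "closed",
    ("open", "flow_on"): "open",                  # flow sensor confirms opening
    ("closed", "fail"): "stuck_closed",           # unobservable fault event
    ("stuck_closed", "open_cmd"): "stuck_closed",
    ("stuck_closed", "close_cmd"): "stuck_closed",
    ("stuck_closed", "no_flow"): "stuck_closed",  # flow sensor sees nothing
    ("closed", "no_flow"): "closed",
}
UNOBSERVABLE = {"fail"}
FAULTY_STATES = {"stuck_closed"}


def unobservable_reach(states):
    """Close a state estimate under transitions labelled by unobservable events."""
    frontier, reach = list(states), set(states)
    while frontier:
        s = frontier.pop()
        for (src, ev), dst in TRANSITIONS.items():
            if src == s and ev in UNOBSERVABLE and dst not in reach:
                reach.add(dst)
                frontier.append(dst)
    return reach


def diagnose(observed_events, initial_state="closed"):
    """'F' if every state consistent with the observation is faulty,
    'N' if none is, 'uncertain' otherwise."""
    estimate = unobservable_reach({initial_state})
    for ev in observed_events:
        estimate = unobservable_reach(
            {TRANSITIONS[(s, ev)] for s in estimate if (s, ev) in TRANSITIONS}
        )
    if estimate and estimate <= FAULTY_STATES:
        return "F"
    if not (estimate & FAULTY_STATES):
        return "N"
    return "uncertain"


print(diagnose(["open_cmd", "no_flow"]))   # -> 'F'  (fault isolated)
print(diagnose(["open_cmd", "flow_on"]))   # -> 'N'  (nominal behaviour confirmed)
```

In this toy model the fault can only be isolated once an informative observable event (here the hypothetical flow sensor reading) arrives, which is the essence of diagnosability analysis in the DES setting.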
Abstract:
The work was divided into three macro-areas. The first concerns a theoretical analysis of how intrusions work, of which software tools are used to carry them out, and of how to protect against them (using the devices generically referred to as firewalls). The second macro-area analyses an intrusion coming from the outside and targeting sensitive servers of a LAN. This analysis is conducted on the files captured by the two network interfaces configured in promiscuous mode on a probe located in the LAN. Two interfaces are used in order to attach to two LAN segments with different subnet masks. The attack is analysed using various software tools. A third part of the work can in fact be identified: the part in which the files captured by the two interfaces are analysed, first with software that handles full-content data, such as Wireshark, then with software that handles session data, processed with Argus, and finally the statistical data, processed with Ntop. The penultimate chapter, the one before the conclusions, deals with the installation of Nagios and its configuration for monitoring, through plugins, the remaining disk space on a remote agent machine and the MySql and DNS services. Naturally, Nagios can be configured to monitor any kind of service offered on the network.
Abstract:
This thesis deals with distributed control strategies for the cooperative control of multi-robot systems. Specifically, distributed coordination strategies are presented for groups of mobile robots. The formation control problem is initially solved exploiting artificial potential fields. The purpose of the presented formation control algorithm is to drive a group of mobile robots to create a completely arbitrarily shaped formation. Robots are initially controlled to create a regular polygon formation. A bijective coordinate transformation is then exploited to extend the scope of this strategy and obtain arbitrarily shaped formations. For this purpose, artificial potential fields are specifically designed, and robots are driven to follow their negative gradient. Artificial potential fields are subsequently exploited to solve the coordinated path tracking problem, thus making the robots autonomously spread along predefined paths and move along them in a coordinated way. The formation control problem is then solved exploiting a consensus-based approach. Specifically, weighted graphs are used both to define the desired formation and to implement collision avoidance. As expected for consensus-based algorithms, this control strategy is experimentally shown to be robust to the presence of communication delays. The global connectivity maintenance issue is then considered. Specifically, an estimation procedure is introduced to allow each agent to compute its own estimate of the algebraic connectivity of the communication graph in a distributed manner. This estimate is then exploited to develop a gradient-based control strategy that ensures that the communication graph remains connected as the system evolves. The proposed control strategy is developed initially for single-integrator kinematic agents, and is then extended to Lagrangian dynamical systems.
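A minimal sketch of the consensus-based formation idea for single-integrator agents (illustrative only, under assumed parameters; the graph, desired offsets and gain are hypothetical and this is not the thesis' exact control law): each agent steers so that its displacement from its neighbours converges to the displacement prescribed by the desired formation.

```python
import numpy as np

# Hypothetical weighted communication graph (i, j, weight) and desired
# offsets d[i] of each agent from the formation centroid (a unit square).
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0)]
d = np.array([[1.0, 1.0], [-1.0, 1.0], [-1.0, -1.0], [1.0, -1.0]])

x = np.random.rand(4, 2) * 10.0          # initial positions
dt, gain = 0.05, 1.0                     # assumed step size and control gain

for _ in range(400):
    u = np.zeros_like(x)
    for i, j, w in edges:
        # Consensus on (x_i - d_i): drive (x_j - x_i) toward (d_j - d_i).
        err = (x[j] - x[i]) - (d[j] - d[i])
        u[i] += gain * w * err
        u[j] -= gain * w * err
    x = x + dt * u                       # single-integrator kinematics

# Relative positions converge to the desired square, up to a common translation.
print(np.round(x - x.mean(axis=0), 3))
```

Collision avoidance and connectivity maintenance, as described in the abstract, would add further gradient terms to this same update rather than change its structure.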
Abstract:
This thesis deals with the study of optimal control problems for the incompressible Magnetohydrodynamics (MHD) equations. Particular attention to these problems arises from several applications in science and engineering, such as nuclear fission reactors with liquid metal coolant and aluminum casting in metallurgy. In such applications it is of great interest to achieve control of the fluid state variables through the action of the magnetic Lorentz force. In this thesis we investigate a class of boundary optimal control problems, in which the flow is controlled through the boundary conditions of the magnetic field. Due to their complexity, these problems present various challenges in the definition of an adequate solution approach, both from a theoretical and from a computational point of view. In this thesis we propose a new boundary control approach, based on lifting functions of the boundary conditions, which yields both theoretical and numerical advantages. With the introduction of lifting functions, boundary control problems can be formulated as extended distributed problems. We consider a systematic mathematical formulation of these problems in terms of the minimization of a cost functional constrained by the MHD equations. The existence of a solution to the flow equations and to the optimal control problem is shown. The Lagrange multiplier technique is used to derive an optimality system from which candidate solutions for the control problem can be obtained. In order to achieve the numerical solution of this system, a finite element approximation is considered for the discretization, together with an appropriate gradient-type algorithm. A finite element object-oriented library has been developed to obtain a parallel and multigrid computational implementation of the optimality system based on a multiphysics approach. Numerical results of two- and three-dimensional computations show that a possible minimum for the control problem can be computed in a robust and accurate manner.
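In hedged generic form (the concrete functional, coefficients and lifting construction are those defined in the thesis and are not reproduced here), the "minimization of a cost functional constrained by the MHD equations" with boundary control of the magnetic field can be sketched as a PDE-constrained problem:

```latex
% Sketch of a boundary optimal control problem for steady incompressible MHD:
% u = velocity, p = pressure, B = magnetic field, g = boundary control,
% u_d = (hypothetical) target velocity, alpha = regularization weight.
\[
\begin{aligned}
\min_{g}\quad & \mathcal{J}(\mathbf{u},\mathbf{B},g)
  = \frac{1}{2}\int_{\Omega}\lvert\mathbf{u}-\mathbf{u}_d\rvert^{2}\,\mathrm{d}x
  + \frac{\alpha}{2}\,\lVert g\rVert^{2} \\
\text{s.t.}\quad
& (\mathbf{u}\cdot\nabla)\mathbf{u} - \nu\,\Delta\mathbf{u} + \nabla p
  - S\,(\nabla\times\mathbf{B})\times\mathbf{B} = \mathbf{f},
  \qquad \nabla\cdot\mathbf{u} = 0, \\
& \nabla\times\bigl(\eta\,\nabla\times\mathbf{B}\bigr)
  - \nabla\times(\mathbf{u}\times\mathbf{B}) = \mathbf{0},
  \qquad \nabla\cdot\mathbf{B} = 0, \\
& \mathbf{B}|_{\partial\Omega} = \mathbf{B}_{0} + g .
\end{aligned}
\]
```

With a lifting function for the boundary data, the control g can be moved into the volume equations, which is what turns the boundary control problem into the extended distributed problem mentioned in the abstract.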
Abstract:
This thesis presents different techniques designed to drive a swarm of robots in an a-priori unknown environment in order to move the group from a starting area to a final one while avoiding obstacles. The presented techniques are based on two different theories, used alone or in combination: Swarm Intelligence (SI) and Graph Theory. Both theories are based on the study of interactions between different entities (also called agents or units) in Multi-Agent Systems (MAS). The first belongs to the Artificial Intelligence context and the second to the Distributed Systems context. These theories, each from its own point of view, exploit the emergent behaviour that comes from the interactive work of the entities in order to achieve a common goal. The flexibility and adaptability of the swarm have been exploited with the aim of overcoming and minimizing difficulties and problems that can affect one or more units of the group, with minimal impact on the whole group and on the common main target. Another aim of this work is to show the importance of the information shared between the units of the group, such as the communication topology, because it helps to keep the environmental information detected by each single agent updated across the swarm. Swarm Intelligence has been applied to the presented technique through the Particle Swarm Optimization (PSO) algorithm, taking advantage of its features as a navigation system. Graph Theory has been applied by exploiting Consensus and the agreement protocol, with the aim of maintaining the units in a desired and controlled formation. This approach has been followed in order to preserve the power of PSO and to control part of its random behaviour with a distributed control algorithm such as Consensus.
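A minimal sketch of the PSO update used here as a navigation ingredient (illustrative only; the cost function, coefficients and swarm size are assumptions, not the thesis' exact formulation):

```python
import numpy as np

rng = np.random.default_rng(0)
goal = np.array([8.0, 5.0])
cost = lambda p: np.linalg.norm(p - goal)     # hypothetical cost: distance to the goal area

n, dim = 10, 2
pos = rng.uniform(0.0, 10.0, (n, dim))        # particle (robot) positions
vel = np.zeros((n, dim))
pbest = pos.copy()                            # personal best positions
gbest = pbest[np.argmin([cost(p) for p in pbest])]

w, c1, c2 = 0.7, 1.5, 1.5                     # inertia, cognitive and social weights
for _ in range(100):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    # Classic PSO velocity update: inertia + attraction to personal and global bests.
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    for i in range(n):
        if cost(pos[i]) < cost(pbest[i]):
            pbest[i] = pos[i]
    gbest = pbest[np.argmin([cost(p) for p in pbest])]

print(np.round(gbest, 2))                     # converges near the goal
```

In the combined scheme described above, a consensus-based agreement step would additionally constrain the particles to keep a desired formation while PSO drives the group toward the goal.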
Abstract:
The study of optic flow effects on postural control may explain how self-motion perception contributes to postural stability in young males and females, and how this function changes in the elderly population at risk of falls. Study I: The aim was to examine the optic flow effect on postural control in young people (n=24), using stabilometry and surface electromyography. Subjects viewed expansion and contraction optic flow stimuli which were presented full field, in the foveal or in the peripheral visual field. Results showed that optic flow stimulation causes an asymmetry in postural balance and a different lateralization of postural control in men and women. Gender differences evoked by optic flow were found both in the muscle activity and in the prevalent direction of oscillation. The COP spatial variability was reduced during the view of peripheral stimuli, which evoked a clustered prevalent direction of oscillation, while foveal and random stimuli induced non-distributed directions. Study II was aimed at investigating the age-related mechanisms of postural stability during the view of optic flow stimuli in young (n=17) and old (n=19) people, using stabilometry and kinematics. Results showed that old people exerted a greater effort than the young to maintain posture while viewing optic flow stimuli. The elderly seem to use a head-stabilization-on-trunk strategy. Visual stimuli evoke an excitatory input on postural muscles, but the stimulus structure produces different postural effects. Peripheral optic flow stabilizes postural sway, while random and foveal stimuli provoke larger sway variability, similar to that evoked at baseline. Postural control uses different mechanisms within each leg to produce the appropriate postural response to interact with the extrapersonal environment. Ageing reduces the ease of stabilizing posture during optic flow, suggesting a decline in neuronal processing associated with difficulty in integrating multi-sensory information for self-motion perception and an increasing risk of falls.
Abstract:
Objectives: Recent anatomical-functional studies have transformed our understanding of cerebral motor control away from a hierarchical structure and toward parallel and interconnected specialized circuits. Subcortical electrical stimulation during awake surgery provides a unique opportunity to identify white matter tracts involved in motor control. For the first time, this study reports the findings on motor modulatory responses evoked by subcortical stimulation and investigates the cortico-subcortical connectivity of cerebral motor control. Experimental design: Twenty-one selected patients were operated on while awake for frontal, insular, and parietal diffuse low-grade gliomas. Subcortical electrostimulation mapping was used to search for interference with voluntary movements. The corresponding stimulation sites were localized on brain schemas using the anterior and posterior commissures method. Principal observations: Subcortical negative motor responses were evoked in 20/21 patients, whereas acceleration of voluntary movements and positive motor responses were observed in three and five patients, respectively. The majority of the stimulation sites were detected rostral to the corticospinal tract near the vertical anterior commissural line, and additional sites were seen in the frontal and parietal white matter. Conclusions: The diverse interferences with motor function, resulting in inhibition and acceleration, imply a modulatory influence of the detected fiber network. The subcortical stimulation sites were distributed veil-like, anterior to the primary motor fibers, suggesting descending pathways originating from premotor areas known for negative motor response characteristics. Further stimulation sites in the parietal white matter as well as in the anterior arm of the internal capsule indicate a large-scale fronto-parietal motor control network. Hum Brain Mapp, 2012. © 2012 Wiley Periodicals, Inc.
Abstract:
Biomarkers are currently best used as mechanistic "signposts" rather than as "traffic lights" in the environmental risk assessment of endocrine-disrupting chemicals (EDCs). In field studies, biomarkers of exposure [e.g., vitellogenin (VTG) induction in male fish] are powerful tools for tracking single substances and mixtures of concern. Biomarkers also provide linkage between field and laboratory data, thereby playing an important role in directing the need for and design of fish chronic tests for EDCs. It is the adverse effect end points (e.g., altered development, growth, and/or reproduction) from such tests that are most valuable for calculating an adverse NOEC (no observed effect concentration) or adverse EC10 (effective concentration for a 10% response) and subsequently deriving predicted no effect concentrations (PNECs). With current uncertainties, biomarker NOEC or biomarker EC10 data should not be used in isolation to derive PNECs. In the future, however, there may be scope to increasingly use biomarker data in environmental decision making, if plausible linkages can be made across levels of organization such that adverse outcomes might be envisaged relative to biomarker responses. For biomarkers to fulfil their potential, they should be mechanistically relevant and reproducible (as measured by interlaboratory comparisons of the same protocol). VTG is a good example of such a biomarker in that it provides an insight into the mode of action (estrogenicity) that is vital to fish reproductive health. Interlaboratory reproducibility data for VTG are also encouraging; recent comparisons (using the same immunoassay protocol) have provided coefficients of variation (CVs) of 38-55% (comparable to published CVs of 19-58% for fish survival and growth end points used in regulatory test guidelines). While concern over environmental xenoestrogens has led to the evaluation of reproductive biomarkers in fish, it must be remembered that many substances act via diverse mechanisms of action, such that the environmental risk assessment for EDCs is a broad and complex issue. Also, biomarkers such as secondary sexual characteristics, gonadosomatic indices, plasma steroids, and gonadal histology have significant potential for guiding interspecies assessments of EDCs and designing fish chronic tests. To strengthen the utility of EDC biomarkers in fish, we need to establish a historical control database (also considering natural variability) to help differentiate between statistically detectable versus biologically significant responses. In conclusion, as research continues to develop a range of useful EDC biomarkers, environmental decision-making needs to move forward, and it is proposed that the "biomarkers as signposts" approach is a pragmatic way forward in the current risk assessment of EDCs.
Abstract:
As microgrid power systems gain prevalence and renewable energy comprises greater and greater portions of distributed generation, energy storage becomes important to offset the higher variance of renewable energy sources and maximize their usefulness. One of the emerging techniques is to utilize a combination of lead-acid batteries and ultracapacitors to provide both short and long-term stabilization to microgrid systems. The different energy and power characteristics of batteries and ultracapacitors imply that they ought to be utilized in different ways. Traditional linear controls can use these energy storage systems to stabilize a power grid, but cannot effect more complex interactions. This research explores a fuzzy logic approach to microgrid stabilization. The ability of a fuzzy logic controller to regulate a dc bus in the presence of source and load fluctuations, in a manner comparable to traditional linear control systems, is explored and demonstrated. Furthermore, the expanded capabilities (such as storage balancing, self-protection, and battery optimization) of a fuzzy logic system over a traditional linear control system are shown. System simulation results are presented and validated through hardware-based experiments. These experiments confirm the capabilities of the fuzzy logic control system to regulate bus voltage, balance storage elements, optimize battery usage, and effect self-protection.
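A minimal sketch of the kind of fuzzy logic control described above (illustrative only; the membership functions, rule base, power levels and state-of-charge logic are assumptions, not the controller developed in this work): a zero-order Sugeno controller that fuzzifies the dc-bus voltage error and the battery state of charge, then splits the corrective power between the ultracapacitor and the battery.

```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b (zero outside [a, c])."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)


def fuzzy_storage_command(v_error, soc):
    """Zero-order Sugeno controller (hypothetical).
    v_error = V_ref - V_bus [V], soc = battery state of charge in [0, 1].
    Returns (p_ucap, p_batt): discharge power set-points in kW (negative = charge)."""
    # Fuzzify inputs.
    err_neg = tri(v_error, -20.0, -10.0, 0.0)
    err_zero = tri(v_error, -10.0, 0.0, 10.0)
    err_pos = tri(v_error, 0.0, 10.0, 20.0)
    soc_low = tri(soc, -0.1, 0.0, 0.5)
    soc_high = tri(soc, 0.3, 1.0, 1.1)

    # Rule base: (firing strength, ultracap consequent kW, battery consequent kW).
    rules = [
        (err_pos * soc_high,  4.0,  3.0),   # bus low, battery healthy: both discharge
        (err_pos * soc_low,   6.0,  0.0),   # bus low, battery depleted: ultracap only
        (err_neg * soc_high, -4.0, -3.0),   # bus high: absorb power (charge)
        (err_neg * soc_low,  -3.0, -5.0),   # bus high, battery depleted: favour charging it
        (err_zero,            0.0,  0.0),   # bus nominal: no correction
    ]

    # Weighted-average (Sugeno) defuzzification.
    total = sum(w for w, _, _ in rules) or 1.0
    p_ucap = sum(w * cu for w, cu, _ in rules) / total
    p_batt = sum(w * cb for w, _, cb in rules) / total
    return p_ucap, p_batt


print(fuzzy_storage_command(v_error=8.0, soc=0.7))
```

The storage-balancing and battery-optimization behaviours mentioned in the abstract would correspond to additional rules of the same form, which is where a fuzzy controller can express interactions beyond a single linear regulation loop.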
Abstract:
Tonoplast, the membrane delimiting plant vacuoles, regulates ion, water and nutrient movement between the cytosol and the vacuolar lumen through the activity of its membrane proteins. Correct traffic of proteins from the endoplasmic reticulum (ER) to the tonoplast requires (i) approval by the ER quality control, (ii) motifs for exit from the ER and (iii) motifs that promote sorting to the tonoplast. Recent evidence suggests that this traffic follows different pathways that are protein-specific and could also reflect vacuole specialization for lytic or storage function. The routes can be distinguished based on their sensitivity to drugs such as brefeldin A and C834 as well as using mutant plants that are defective in adaptor proteins of vesicle coats, or dominant-negative mutants of Rab GTPases.
Abstract:
Recent advancements in cloud computing have enabled the proliferation of distributed applications, which require management and control of multiple services. However, without an efficient mechanism for scaling services in response to changing environmental conditions and number of users, application performance might suffer, leading to Service Level Agreement (SLA) violations and inefficient use of hardware resources. We introduce a system for controlling the complexity of scaling applications composed of multiple services using mechanisms based on fulfillment of SLAs. We present how service monitoring information can be used in conjunction with service level objectives, predictions, and correlations between performance indicators for optimizing the allocation of services belonging to distributed applications. We validate our models using experiments and simulations involving a distributed enterprise information system. We show how discovering correlations between application performance indicators can be used as a basis for creating refined service level objectives, which can then be used for scaling the application and improving the overall application's performance under similar conditions.
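As a sketch of the SLO-driven scaling decisions described above (illustrative; the metric, thresholds and replica-step policy are hypothetical and not the system's actual mechanism): each service is scaled out when its monitored indicator violates its service level objective and scaled in when it has comfortable headroom.

```python
from dataclasses import dataclass


@dataclass
class ServiceState:
    name: str
    replicas: int
    latency_ms: float        # monitored performance indicator
    slo_latency_ms: float    # service level objective


def scaling_decision(svc, min_replicas=1, max_replicas=20, headroom=0.5):
    """Return the new replica count for one service (hypothetical policy):
    scale out on SLO violation, scale in when well below the objective."""
    if svc.latency_ms > svc.slo_latency_ms and svc.replicas < max_replicas:
        return svc.replicas + 1                      # SLO violated: add a replica
    if svc.latency_ms < headroom * svc.slo_latency_ms and svc.replicas > min_replicas:
        return svc.replicas - 1                      # plenty of headroom: remove one
    return svc.replicas


app = [ServiceState("web", 3, 240.0, 200.0),
       ServiceState("db", 2, 40.0, 150.0)]
for svc in app:
    print(svc.name, "->", scaling_decision(svc))     # web -> 4, db -> 1
```

The correlation-based refinement described in the abstract would replace these fixed thresholds with objectives derived from observed relationships between indicators of the cooperating services.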
Abstract:
The paper presents a link layer stack for wireless sensor networks, which consists of the Burst-aware Energy-efficient Adaptive Medium access control (BEAM) and the Hop-to-Hop Reliability (H2HR) protocol. BEAM can operate with short beacons to announce data transmissions or include data within the beacons. Duty cycles can be adapted by a traffic prediction mechanism indicating pending packets destined for a node and by estimating its wake-up times. H2HR takes advantage of information provided by BEAM such as neighbour information and transmission information to perform per-hop congestion control. We justify the design decisions by measurements in a real-world wireless sensor network testbed and compare the performance with other link layer protocols.
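As a generic illustration of traffic-prediction-driven duty-cycle adaptation of the kind described above (a hedged sketch, not the actual BEAM mechanism; the capacity figure, bounds and step size are assumptions): the radio duty cycle is raised when the predicted number of pending packets exceeds what the current cycle can serve, and lowered otherwise.

```python
def adapt_duty_cycle(current_dc, predicted_pending, capacity_per_cycle,
                     dc_min=0.01, dc_max=0.5, step=0.05):
    """Hypothetical rule: increase the radio duty cycle under predicted load,
    decrease it when the node is expected to be idle, within fixed bounds."""
    if predicted_pending > capacity_per_cycle * current_dc:
        current_dc = min(dc_max, current_dc + step)   # expect traffic: wake up more
    elif predicted_pending < 0.5 * capacity_per_cycle * current_dc:
        current_dc = max(dc_min, current_dc - step)   # idle: sleep more, save energy
    return current_dc


dc = 0.1
for pending in [0, 2, 8, 8, 3, 0, 0]:     # hypothetical per-round traffic prediction
    dc = adapt_duty_cycle(dc, pending, capacity_per_cycle=20)
    print(round(dc, 2))                   # 0.05, 0.1, 0.15, 0.2, 0.2, 0.15, 0.1
```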
Abstract:
This paper is a summary of the main contributions of the PhD thesis published in [1]. The main research contributions of the thesis are driven by the research question of how to design simple, yet efficient and robust, run-time adaptive resource allocation schemes within the communication stack of Wireless Sensor Network (WSN) nodes. The thesis addresses several problem domains with contributions on different layers of the WSN communication stack. The main contributions can be summarized as follows. First, a novel run-time adaptive MAC protocol is introduced, which stepwise allocates the power-hungry radio interface in an on-demand manner when the encountered traffic load requires it. Second, the thesis outlines a methodology for robust, reliable and accurate software-based energy estimation, which is calculated at network run-time on the sensor node itself. Third, the thesis evaluates several Forward Error Correction (FEC) strategies to adaptively allocate the correctional power of Error Correcting Codes (ECCs) to cope with temporally and spatially variable bit error rates. Fourth, in the context of TCP-based communications in WSNs, the thesis evaluates distributed caching and local retransmission strategies to overcome the performance-degrading effects of packet corruption and transmission failures when transmitting data over multiple hops. The performance of all developed protocols is evaluated on a self-developed real-world WSN testbed and achieves superior performance over selected existing approaches, especially where traffic load and channel conditions are subject to rapid variations over time.
Abstract:
A complete physical map of Escherichia coli K-12 strain MG1655 was constructed by digesting chromosomal DNA with the infrequently cutting restriction enzymes NotI, SfiI and XbaI and separating the fragments by pulsed field gel electrophoresis. The map was used to compare six K-12 strains of E. coli. Although several differences were noted and localized, the map of MG1655 was representative of all the K-12 strains tested. The maps were also used to analyze chromosomal rearrangements in the E. coli strain MG1655. The spontaneous and UV-induced frequencies of tandem duplication formation were measured at several loci distributed around the chromosome. The spontaneous duplication frequency varied from $10^{-5}$ to $10^{-3}$ and increased at least ten-fold following mild UV irradiation treatment. Duplications of several regions of the chromosome, including the serA region and the metE region, were mapped using pulsed field gel electrophoresis. Duplications of serA were found to be large, ranging in size from 600 kb to 2100 kb. Several of the duplications isolated at serA were caused by ectopic recombination between IS5 elements and between IS186 elements. Duplications of the metE region, however, were almost exclusively the result of ectopic recombination between ribosomal RNA cistrons. Duplication frequencies were determined at both serA and metE in wild type and mismatch repair mutant strains (mutL, mutS, uvrD and recF). Even though all of the mismatch repair mutations increased duplication frequency of metE, the largest increases were observed in the mutL and mutS strains. Duplication frequency of serA was increased less dramatically by mutations in mismatch repair. Several duplications of metE isolated in a wild type and a mismatch repair mutant were mapped. The results showed that the same repeated sequences were used for duplication formation in the mismatch repair mutant as were used in the wild type strain. Several isolates showed evidence of multiple rearrangements indicating that mismatch repair may play a role in stabilizing the genome by controlling chromosomal rearrangement.