19 results for Active Flow Control
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
In this thesis, a thorough investigation of acoustic noise control systems for realistic automotive scenarios is presented. The thesis is organized in two parts dealing with its main topics: Active Noise Control (ANC) systems and the Virtual Microphone Technique (VMT), respectively. ANC technology increases the driver's and passengers' comfort and safety by exploiting the principle of mitigating a disturbing acoustic noise through the superposition of a secondary sound wave of equal amplitude but opposite phase. Performance analyses of both FeedForward (FF) and FeedBack (FB) ANC systems in experimental scenarios are presented. Since environmental vibration noises within a car cabin are time-varying, most ANC solutions are adaptive. However, in this work, an effective fixed FB ANC system is proposed. Various ANC schemes are considered and compared with each other. In order to find the ANC configuration which optimizes the performance in terms of disturbing-noise attenuation, a thorough study of Key Performance Indicators (KPIs), system parameters and experimental setup design is carried out. In the second part of this thesis, VMT, based on the estimation of specific acoustic channels, is investigated with the aim of generating a quiet acoustic zone around a confined area, e.g., the driver's ears. A performance analysis and comparison of various estimation approaches is presented. Several measurement campaigns were performed in order to acquire a sufficient duration and number of microphone signals in a significant variety of driving scenarios and cars. To do this, different experimental setups were designed and their performance compared. Design guidelines are given to obtain a good trade-off between accuracy and equipment costs. Finally, a preliminary analysis of an innovative approach based on Neural Networks (NNs) to improve the current state of the art in microphone virtualization is proposed.
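The superposition principle behind ANC — driving a secondary source with a signal of equal amplitude and opposite phase — is typically realized with an adaptive filter. As a minimal illustration (not the thesis's FF/FB configurations), the sketch below uses a normalized LMS feedforward filter and assumes an ideal secondary path:

```python
import numpy as np

def lms_anc(reference, disturbance, n_taps=16, mu=0.05):
    """Feedforward ANC sketch: adapt FIR weights so the secondary-source
    output cancels the disturbance at the error microphone.
    An ideal (identity) secondary path is assumed for simplicity."""
    w = np.zeros(n_taps)
    buf = np.zeros(n_taps)
    errors = np.empty(len(reference))
    for n in range(len(reference)):
        buf = np.roll(buf, 1)            # delay line of reference samples
        buf[0] = reference[n]
        y = w @ buf                      # anti-noise sample
        e = disturbance[n] - y           # residual at the error microphone
        w += mu * e * buf / (buf @ buf + 1e-8)  # normalized LMS update
        errors[n] = e
    return errors

# illustrative tonal disturbance, fully correlated with the reference
t = np.arange(4000) / 8000.0
x = np.sin(2 * np.pi * 120 * t)
errors = lms_anc(x, x)
```

After adaptation the residual power at the error microphone is far below its initial value, which is exactly the attenuation metric an ANC performance analysis would track.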
Abstract:
Sound radiators based on forced vibrations of plates are becoming widely employed, mainly for active sound enhancement and noise cancelling systems, in both music and automotive environments. Active sound enhancement solutions based on electromagnetic shakers hence attract increasing interest. The most widespread applications deal with active noise control (ANC) and active vibration control systems for improving the acoustic experience inside or outside the vehicle. This requires investigating the vibrational and, consequently, vibro-acoustic characteristics of vehicles. Therefore, simulation and processing methods capable of reducing the computation time while providing high-accuracy results are strongly demanded. In this work, an ideal case study on rectangular plates in fully clamped conditions preceded a real-case analysis on vehicle panels. The sound radiation generated by a vibrating flat or shallow surface can be calculated by means of Rayleigh's integral. The analytical solution of the problem is calculated here by implementing the equations in MATLAB. The results are then compared with a numerical model developed in COMSOL Multiphysics employing the Finite Element Method (FEM). Very good agreement between the analytical and numerical solutions is shown, thus achieving the cross-validation of the two methods. The shift to the real case study, on a McLaren super car, led to the development of a mixed analytical-numerical method. Optimal results were obtained with mini-shaker excitation, showing good agreement between the recorded SPL and the calculated one over the whole selected frequency band. In addition, a set of directivity measurements of the hood was carried out, to start studying the spatiality of sound, which is fundamental to active noise control systems.
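Rayleigh's integral, which the abstract mentions being evaluated in MATLAB, can also be approximated numerically by summing elementary-source contributions over a discretized surface. The sketch below does this for a baffled plate; the plate size, velocity distribution, frequency and field point are illustrative assumptions, not the thesis's data:

```python
import numpy as np

def rayleigh_pressure(vn, centres, dS, r_field, freq, rho=1.21, c=343.0):
    """Discretized Rayleigh integral for a baffled planar radiator:
    p(r) = (j*omega*rho / (2*pi)) * sum_i vn_i * exp(-j*k*R_i) / R_i * dS,
    with vn_i the normal velocity of surface element i and R_i its
    distance to the field point."""
    omega = 2 * np.pi * freq
    k = omega / c
    R = np.linalg.norm(r_field - centres, axis=1)
    return 1j * omega * rho / (2 * np.pi) * np.sum(vn * np.exp(-1j * k * R) / R) * dS

# illustrative case: half-sine velocity distribution (first-mode-like shape)
# on a 0.3 m x 0.2 m plate, observed 1 m above the plate centre at 500 Hz
nx, ny = 40, 30
x = (np.arange(nx) + 0.5) * 0.3 / nx
y = (np.arange(ny) + 0.5) * 0.2 / ny
X, Y = np.meshgrid(x, y)
vn = (np.sin(np.pi * X / 0.3) * np.sin(np.pi * Y / 0.2)).ravel()   # m/s
centres = np.column_stack([X.ravel(), Y.ravel(), np.zeros(X.size)])
dS = (0.3 / nx) * (0.2 / ny)
p = rayleigh_pressure(vn, centres, dS, np.array([0.15, 0.1, 1.0]), freq=500.0)
spl = 20 * np.log10(abs(p) / 2e-5)     # sound pressure level re 20 uPa
```

Refining the grid until `spl` stabilizes gives the same kind of convergence check used when cross-validating against an FEM solution.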
Abstract:
Urbanization is a continuing phenomenon all over the world. Grasslands, forests, etc. are continually being converted into residential, commercial and industrial complexes, roads and streets, and so on. One of the side effects of urbanization with which engineers and planners must deal is the increase in peak flows and runoff volumes from rainfall events. As a result, urban drainage and flood control systems must be designed to accommodate the peak flows from the variety of storms that may occur. Usually, the peak flow after development is required not to exceed what would have occurred from the same storm under the conditions existing prior to development. In order to achieve this, it is necessary to design detention storage to hold back runoff and release it downstream at controlled rates. In the first part of the work, various simplified formulations that can be adopted for the design of stormwater detention facilities have been developed. In order to obtain a simplified hydrograph, two approaches were adopted: the kinematic routing technique and the linear reservoir schematization. For each approach, two further formulations were obtained, depending on whether the IDF (intensity-duration-frequency) curve is described with two or three parameters. Other formulations have been developed which account for whether the outlet has a constant discharge or one depending on the water level in the pond. All these formulations can be easily applied when the characteristics of the drainage system and the maximum discharge at the outlet are known, and a return period characterizing the IDF curve has been defined. In this way the volume of the detention pond can be calculated. In the second part of the work, the design of detention ponds using continuous simulation models has been analyzed.
The drainage systems adopted for the simulations, performed with SWMM5, are fictitious systems characterized by different sizes and shapes of the catchments, driven by a 16-year historical rainfall time series recorded in Bologna. This approach suffers from the fact that a continuous rainfall record is often not available and, when it is, the cost of such modelling can be very high; moreover, the majority of design practitioners are not prepared to use continuous long-term modelling in the design of stormwater detention facilities. In the third part of the work, statistical and stochastic methodologies have been analyzed in order to define the volume of the detention pond. In particular, the results of the long-term simulation performed with SWMM have been used to obtain the data for the statistical and stochastic formulations. All these methodologies have been compared, and correction coefficients have been proposed on the basis of the statistical and stochastic forms. In this way, engineers who have to design a detention pond can apply a simplified procedure appropriately corrected with the proposed coefficients.
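The general shape of such simplified design formulations can be illustrated with a short sketch: a two-parameter IDF curve, a constant-discharge outlet, and the detention volume taken as the maximum inflow-minus-outflow storage over candidate storm durations. All coefficients below are illustrative, not those derived in the thesis:

```python
def detention_volume(a, n, area_ha, runoff_coeff, q_out, durations_min):
    """Sketch of a simplified detention-volume design rule.
    Two-parameter IDF curve: i(d) = a * d**(n-1)  [mm/h, d in hours].
    Inflow volume for duration d: V_in = C * i(d) * A * d; the required
    storage is the maximum of V_in - q_out * d over candidate durations."""
    best = 0.0
    for d_min in durations_min:
        d = d_min / 60.0                                   # hours
        i = a * d ** (n - 1)                               # intensity, mm/h
        v_in = runoff_coeff * i * 1e-3 * area_ha * 1e4 * d # m^3 of runoff
        v_out = q_out * d * 3600.0                         # m^3 released
        best = max(best, v_in - v_out)
    return best

# illustrative numbers: a = 60 mm/h, n = 0.35, 10 ha catchment, C = 0.7,
# allowed constant outlet discharge 0.2 m^3/s, storms from 5 min to 4 h
V = detention_volume(60.0, 0.35, 10.0, 0.7, 0.2, range(5, 241, 5))
```

Sweeping the duration and keeping the worst case reproduces the "critical storm" logic common to this class of formulations; the thesis's correction coefficients would then be applied to such a simplified estimate.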
Abstract:
In recent years of research, I have focused my studies on different physiological problems. Together with my supervisors, I developed and improved different mathematical models in order to create valid tools for a better understanding of important clinical issues. The aim of all this work is to develop tools for learning and understanding cardiac and cerebrovascular physiology as well as pathology, generating research questions and developing clinical decision support systems useful for intensive care unit patients. I. ICP Model Designed for Medical Education. We developed a comprehensive cerebral blood flow and intracranial pressure model to simulate and study the complex interactions in cerebrovascular dynamics caused by multiple simultaneous alterations, including normal and abnormal functional states of cerebral autoregulation. Individual published equations (derived from prior animal and human studies) were implemented into a comprehensive simulation program. The normal physiological modelling included intracranial pressure, cerebral blood flow, blood pressure, and carbon dioxide (CO2) partial pressure. We also added external and pathological perturbations, such as head-up position and intracranial haemorrhage. The model behaved in a clinically realistic way given inputs from published traumatized patients and cases encountered by clinicians. The pulsatile nature of the output graphics was easy for clinicians to interpret. The manoeuvres simulated include changes of basic physiological inputs (e.g. blood pressure, central venous pressure, CO2 tension, head-up position, and respiratory effects on vascular pressures) as well as pathological inputs (e.g. acute intracranial bleeding, and obstruction of cerebrospinal fluid outflow).
Based on the results, we believe the model would be useful to teach the complex relationships of brain haemodynamics and to study clinical research questions such as the optimal head-up position, the effects of intracranial haemorrhage on cerebral haemodynamics, and the best CO2 concentration to reach the optimal compromise between intracranial pressure and perfusion. We believe this model would be useful for both beginners and advanced learners. It could be used by practicing clinicians to model individual patients (entering the effects of needed clinical manipulations, and then running the model to test for optimal combinations of therapeutic manoeuvres). II. A Heterogeneous Cerebrovascular Mathematical Model. Cerebrovascular pathologies are extremely complex, due to the multitude of factors acting simultaneously on cerebral haemodynamics. In this work, the mathematical model of cerebral haemodynamics and intracranial pressure dynamics described in point I is extended to account for heterogeneity in cerebral blood flow. The model includes the Circle of Willis, six regional districts independently regulated by autoregulation and CO2 reactivity, distal cortical anastomoses, the venous circulation, the cerebrospinal fluid circulation, and the intracranial pressure-volume relationship. Results agree with data in the literature and highlight the existence of a monotonic relationship between the transient hyperemic response and the autoregulation gain. During unilateral internal carotid artery stenosis, local blood flow regulation is progressively lost in the ipsilateral territory with the presence of a steal phenomenon, while the anterior communicating artery plays the major role in redistributing the available blood flow. Conversely, the distal collateral circulation plays a major role during unilateral occlusion of the middle cerebral artery.
In conclusion, the model is able to reproduce several different pathological conditions characterized by heterogeneity in cerebrovascular haemodynamics and can not only explain generalized results in terms of the physiological mechanisms involved, but may also, by individualizing parameters, represent a valuable tool to help with difficult clinical decisions. III. Effect of the Cushing Response on Systemic Arterial Pressure. During cerebral hypoxic conditions, the sympathetic system causes an increase in arterial pressure (Cushing response), creating a link between the cerebral and the systemic circulation. This work investigates the complex relationships among cerebrovascular dynamics, intracranial pressure, the Cushing response, and short-term systemic regulation during plateau waves, by means of an original mathematical model. The model incorporates the pulsating heart, the pulmonary circulation and the systemic circulation, with an accurate description of the cerebral circulation and the intracranial pressure dynamics (the same model as in the first paragraph). Various regulatory mechanisms are included: cerebral autoregulation, local blood flow control by oxygen (O2) and/or CO2 changes, and sympathetic and vagal regulation of cardiovascular parameters by several reflex mechanisms (chemoreceptors, lung-stretch receptors, baroreceptors). The Cushing response has been described assuming a dramatic increase in sympathetic activity to the vessels during a fall in brain O2 delivery. With this assumption, the model is able to simulate the cardiovascular effects experimentally observed when intracranial pressure is artificially elevated and maintained at a constant level (an increase in arterial pressure and bradycardia). According to the model, these effects arise from the interaction between the Cushing response and the baroreflex response (secondary to the arterial pressure increase).
Then, patients with severe head injury have been simulated by reducing intracranial compliance and cerebrospinal fluid reabsorption. With these changes, oscillations with plateau waves developed. In these conditions, model results indicate that the Cushing response may have both positive effects, reducing the duration of the plateau phase via an increase in cerebral perfusion pressure, and negative effects, increasing the intracranial pressure plateau level, with a risk of greater compression of the cerebral vessels. This model may be of value in assisting clinicians to find the balance between the clinical benefits of the Cushing response and its shortcomings. IV. Comprehensive Cardiopulmonary Simulation Model for the Analysis of Hypercapnic Respiratory Failure. We developed a new comprehensive cardiopulmonary model that takes into account the mutual interactions between the cardiovascular and respiratory systems along with their short-term regulatory mechanisms. The model includes the heart, the systemic and pulmonary circulations, lung mechanics, gas exchange and transport equations, and cardio-ventilatory control. Results show good agreement with published patient data for normoxic and hyperoxic hypercapnia simulations. In particular, simulations predict a moderate increase in mean systemic arterial pressure and heart rate, with almost no change in cardiac output, paralleled by a relevant increase in minute ventilation, tidal volume and respiratory rate. The model can represent a valid tool for clinical practice and medical research, providing an alternative to purely experience-based clinical decisions. In conclusion, models are not only capable of summarizing current knowledge, but also of identifying missing knowledge. In the former case they can serve as training aids for teaching the operation of complex systems, especially if the model can be used to demonstrate the outcome of experiments.
In the latter case they generate experiments to be performed to gather the missing data.
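As one concrete ingredient of such intracranial models, the widely used monoexponential pressure-volume relationship can be sketched as follows; the elastance coefficient and baseline pressure are illustrative textbook-style values, not the fitted parameters of the models described above:

```python
import math

def icp_from_volume(dV, E=0.11, p0=10.0):
    """Monoexponential intracranial pressure-volume curve: an added
    volume dV (ml, e.g. a haemorrhage) raises ICP from baseline p0
    (mmHg) with elastance coefficient E (1/ml). Illustrative values."""
    return p0 * math.exp(E * dV)

# a 15 ml volume load (e.g. a small haemorrhage) raises ICP roughly
# five-fold, showing the steeply nonlinear compliance of the cranium
icp = icp_from_volume(15.0)
```

This nonlinearity is what makes small volume perturbations benign at low ICP but dangerous once the plateau region of the curve is reached.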
Abstract:
Network monitoring is of paramount importance for effective network management: it makes it possible to constantly observe the network's behavior to ensure it is working as intended, and can trigger both automated and manual remediation procedures in case of failures and anomalies. The concept of SDN decouples the control logic from the legacy network infrastructure to perform centralized control over multiple switches in the network; in this context, the responsibility of the switches is only to forward packets according to the flow control instructions provided by the controller. However, as current SDN switches only expose simple per-port and per-flow counters, the controller has to do almost all the processing to determine the network state, which causes significant communication overhead and excessive latency for monitoring purposes. The absence of programmability in the SDN data plane prompted the advent of programmable switches, which allow developers to customize the data-plane pipeline and implement novel programs operating directly in the switches. This means that certain monitoring tasks can be offloaded to programmable data planes, to perform fine-grained monitoring even at very high packet processing speeds. Given the central importance of network monitoring exploiting programmable data planes, the goal of this thesis is to enable a wide range of monitoring tasks in programmable switches, with a specific focus on those equipped with programmable ASICs. Indeed, most network monitoring solutions available in the literature do not take the computational and memory constraints of programmable switches into due account, preventing, de facto, their successful implementation in commodity switches. This thesis shows that, despite these constraints, network monitoring tasks can be executed in programmable switches.
Our evaluations show that the contributions in this thesis could be used by network administrators as well as network security engineers to better understand the network status according to different monitoring metrics, and thus prevent network infrastructure and service outages.
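Memory-bounded probabilistic structures are the typical building block for per-flow monitoring inside programmable ASICs. As a generic illustration (not one of the thesis's specific designs), a count-min sketch trades a small, fixed register footprint for a one-sided counting error:

```python
import hashlib

class CountMinSketch:
    """Memory-bounded per-flow packet counting, the kind of structure
    that maps onto programmable-switch register arrays (generic sketch)."""
    def __init__(self, width=1024, depth=3):
        self.width, self.depth = width, depth
        self.rows = [[0] * width for _ in range(depth)]

    def _idx(self, flow_key, row):
        # one independent hash per row, reduced to a register index
        h = hashlib.blake2b(f"{row}:{flow_key}".encode(), digest_size=4)
        return int.from_bytes(h.digest(), "big") % self.width

    def update(self, flow_key, n=1):
        for r in range(self.depth):
            self.rows[r][self._idx(flow_key, r)] += n

    def estimate(self, flow_key):
        # never undercounts; hash collisions can only inflate the estimate
        return min(self.rows[r][self._idx(flow_key, r)]
                   for r in range(self.depth))

cms = CountMinSketch()
for _ in range(500):
    cms.update("10.0.0.1->10.0.0.2:443")
est = cms.estimate("10.0.0.1->10.0.0.2:443")
```

The fixed `width * depth` memory budget is what makes such structures implementable within the per-stage register limits of a programmable ASIC pipeline, unlike exact per-flow tables.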
Fault detection, diagnosis and active fault tolerant control for a satellite attitude control system
Abstract:
Modern control systems are becoming more and more complex, and control algorithms more and more sophisticated. Consequently, Fault Detection and Diagnosis (FDD) and Fault Tolerant Control (FTC) have gained central importance over the past decades, due to increasing requirements of availability, cost efficiency, reliability and operating safety. This thesis deals with the FDD and FTC problems in a spacecraft Attitude Determination and Control System (ADCS). Firstly, detailed nonlinear models of the spacecraft attitude dynamics and kinematics are described, along with the dynamic models of the actuators and the main external disturbance sources. The considered ADCS is composed of an array of four redundant reaction wheels. A set of sensors provides satellite angular velocity, attitude and flywheel spin rate information. Then, general overviews of the Fault Detection and Isolation (FDI), Fault Estimation (FE) and Fault Tolerant Control (FTC) problems are presented, and the design and implementation of a novel diagnosis system is described. The system consists of an FDI module composed of properly organized model-based residual filters, which exploit the available input and output information for the detection and localization of a fault that has occurred. A proper fault mapping procedure and the nonlinear geometric approach are exploited to design residual filters explicitly decoupled from the external aerodynamic disturbance and sensitive to specific sets of faults. The subsequent use of suitable adaptive FE algorithms, based on radial basis function neural networks, makes it possible to obtain accurate fault estimates. Finally, this estimation is actively exploited in an FTC scheme to achieve suitable fault accommodation and guarantee the desired control performance. A standard sliding mode controller is implemented for attitude stabilization and control.
Several simulation results are given to highlight the performance of the overall designed system in the case of different types of faults affecting the ADCS actuators and sensors.
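The core step of model-based FDI — comparing measurements against a model prediction and thresholding the residual — can be sketched generically as follows; the NLGA residual filters themselves are not reproduced, and the values and threshold are illustrative:

```python
import numpy as np

def residual_alarm(measured, predicted, threshold):
    """Generic model-based FDI step: the residual is the measurement
    minus the model prediction; a fault is declared when the residual
    norm exceeds a threshold tuned on fault-free data (illustrative)."""
    r = np.asarray(measured) - np.asarray(predicted)
    return np.linalg.norm(r) > threshold

# fault-free case: the residual stays within the sensor-noise band
ok = residual_alarm([0.01, -0.02, 0.015], [0.0, 0.0, 0.0], threshold=0.1)

# hypothetical stuck reaction-wheel measurement: the residual jumps
bad = residual_alarm([0.5, -0.02, 0.015], [0.0, 0.0, 0.0], threshold=0.1)
```

Decoupling the residual from disturbances, as the nonlinear geometric approach does, is what allows the threshold to be set tightly without false alarms.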
Abstract:
The study of the effect of optic flow on postural control may explain how self-motion perception contributes to postural stability in young males and females, and how this function changes in the elderly population at risk of falls. Study I: the aim was to examine the effect of optic flow on postural control in young people (n=24), using stabilometry and surface electromyography. Subjects viewed expansion and contraction optic flow stimuli which were presented full field, in the foveal or in the peripheral visual field. Results showed that optic flow stimulation causes an asymmetry in postural balance and a different lateralization of postural control in men and women. Gender differences evoked by optic flow were found both in muscle activity and in the prevalent direction of oscillation. The COP spatial variability was reduced while viewing peripheral stimuli, which evoked a clustered prevalent direction of oscillation, whereas foveal and random stimuli induced non-clustered directions. Study II was aimed at investigating the age-related mechanisms of postural stability while viewing optic flow stimuli in young (n=17) and old (n=19) people, using stabilometry and kinematics. Results indicated that old people made a greater effort to maintain posture while viewing optic flow stimuli than the young. The elderly seem to use a head-stabilization-on-trunk strategy. Visual stimuli evoke an excitatory input on postural muscles, but the stimulus structure produces different postural effects. Peripheral optic flow stabilizes postural sway, while random and foveal stimuli provoke a larger sway variability similar to that evoked at baseline. Postural control uses different mechanisms within each leg to produce the appropriate postural response for interacting with the extrapersonal environment.
Ageing reduces the ease of stabilizing posture during optic flow, suggesting a decline in neuronal processing associated with difficulty integrating multi-sensory information for self-motion perception and an increasing risk of falls.
Abstract:
A new control scheme is presented in this thesis. Based on the NonLinear Geometric Approach (NLGA), the proposed Active Control System represents a new way to see reconfigurable controllers for aerospace applications. The presence of the Diagnosis module (providing the estimation of generic signals which, depending on the case, can be faults, disturbances or system parameters), the main feature of the depicted Active Control System, is a characteristic shared by three well-known control schemes: Active Fault Tolerant Control, Indirect Adaptive Control and Active Disturbance Rejection Control. The standard NLGA has been accurately investigated and then improved to extend its applicability to more complex models. The standard NLGA procedure has been modified to take into account feasible and estimable sets of unknown signals. Furthermore, the application of the Singular Perturbations approximation has led to the solution of Detection and Isolation problems in scenarios too complex to be solved by the standard NLGA. The estimation process has also been improved, where multiple redundant measurements are available, by the introduction of a new algorithm, here called "Least Squares - Sliding Mode". It guarantees optimality, in the least-squares sense, and finite estimation time, in the sliding-mode sense. The Active Control System concept has been formalized in two controllers: a nonlinear backstepping controller and a nonlinear composite controller. Particularly interesting is the integration, in the controller design, of the estimates coming from the Diagnosis module. Stability proofs are provided for both control schemes. Finally, different aerospace applications are provided to show the applicability and effectiveness of the proposed NLGA-based Active Control System.
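The least-squares half of such an estimator can be illustrated with a plain batch fusion of redundant measurements; the sliding-mode part, which yields finite estimation time, is not reproduced here:

```python
import numpy as np

def ls_fuse(H, y):
    """Least-squares fusion of redundant measurements y = H x + noise:
    the LS-optimal estimate minimizes ||y - H x||^2, i.e.
    x = (H^T H)^{-1} H^T y. This shows only the least-squares half of
    a combined 'Least Squares - Sliding Mode' scheme (illustrative)."""
    x, *_ = np.linalg.lstsq(H, y, rcond=None)
    return x

# three redundant sensors measuring the same scalar signal:
# the LS estimate reduces to the mean of the readings
H = np.ones((3, 1))
y = np.array([1.02, 0.97, 1.04])
x_hat = ls_fuse(H, y)
```

For this trivial observation matrix the estimate is just the sample mean, which makes the least-squares optimality easy to check by hand.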
Abstract:
This dissertation concerns active fibre-reinforced composites with embedded shape memory alloy wires. The structural application of active materials makes it possible to develop adaptive structures which actively respond to changes in the environment, such as morphing structures, self-healing structures and power harvesting devices. In particular, shape memory alloy actuators integrated within a composite actively control the structural shape or stiffness, thus influencing the composite's static and dynamic properties. Envisaged applications include, among others, the prevention of thermal buckling of the outer skin of air vehicles, shape changes in panels for improved aerodynamic characteristics and the deployment of large space structures. The study and design of active composites is a complex and multidisciplinary topic, requiring in-depth understanding of both the coupled behaviour of active materials and the interaction between the different composite constituents. Both fibre-reinforced composites and shape memory alloys are extremely active research topics, whose modelling and experimental characterisation still present a number of open problems. Thus, while this dissertation focuses on active composites, some of the research results presented here can be usefully applied to traditional fibre-reinforced composites or other shape memory alloy applications. The dissertation is composed of four chapters. In the first chapter, active fibre-reinforced composites are introduced by giving an overview of the most common choices available for the reinforcement, matrix and production process, together with a brief introduction to and classification of active materials. The second chapter presents a number of original contributions regarding the modelling of fibre-reinforced composites.
Different two-dimensional laminate theories are derived from a parent three-dimensional theory, introducing a procedure for the a posteriori reconstruction of transverse stresses along the laminate thickness. Accurate through-the-thickness stresses are crucial for composite modelling, as they are responsible for some common failure mechanisms. A new finite element based on the First-order Shear Deformation Theory and a hybrid stress approach is proposed for the numerical solution of the two-dimensional laminate problem. The element is simple and computationally efficient. The transverse stresses through the laminate thickness are reconstructed starting from a general finite element solution. A two-stage procedure is devised, based on Recovery by Compatibility in Patches and three-dimensional equilibrium. Finally, the determination of the elastic parameters of laminated structures via numerical-experimental Bayesian techniques is investigated. Two different estimators are analysed and compared, leading to the definition of an alternative procedure to improve the convergence of the estimation process. The third chapter focuses on shape memory alloys, describing their properties and applications. A number of constitutive models proposed in the literature, both one-dimensional and three-dimensional, are critically discussed and compared, underlining their potential and limitations, which are mainly related to the definition of the phase diagram and the choice of internal variables. Some new experimental results on shape memory alloy material characterisation are also presented. These experimental observations display some features of shape memory alloy behaviour which are generally not included in current models, thus some ideas are proposed for the development of a new constitutive model. The fourth chapter, finally, focuses on active composite plates with embedded shape memory alloy wires.
A number of different approaches can be used to predict the behaviour of such structures, each model presenting different advantages and drawbacks related to complexity and versatility. A simple model able to describe both shape and stiffness control configurations within the same context is proposed and implemented. The model is then validated considering the shape control configuration, which is the most sensitive to model parameters. The experimental work is divided into two parts. In the first part, an active composite is built by gluing prestrained shape memory alloy wires onto a carbon fibre laminate strip. This structure is relatively simple to build, but it is useful to experimentally demonstrate the feasibility of the concept proposed in the first part of the chapter. In the second part, the manufacture of a fibre-reinforced composite with embedded shape memory alloy wires is investigated, considering different possible choices of materials and manufacturing processes. Although a number of technological issues still need to be faced, the experimental results make it possible to demonstrate the mechanism of shape control via embedded shape memory alloy wires, while showing good agreement with the proposed model predictions.
Abstract:
The velocity and mixing fields of two turbulent jet configurations have been experimentally characterized by means of cold- and hot-wire anemometry in order to investigate the effects of the initial conditions on the flow development. In particular, the experiments have focused on the effect of the separation wall between the two streams on the flow field. The results of the experiments have pointed out that the wake behind a thick separating wall has a strong influence on the flow field evolution. For instance, for nearly unitary velocity ratios, a clear vortex shedding from the wall is observable. This phenomenon enhances the mixing between the inner and outer shear layers. This enhancement in the fluctuating activity is a consequence of a local absolute instability of the flow which, for a small range of velocity ratios, behaves as a hydrodynamic oscillator insensitive to external perturbations. It has indeed been suggested that this absolute instability can be used as a passive method to control the flow evolution. Finally, acoustic excitation has been applied to the near field in order to verify whether or not the observed vortex shedding behind the separating wall is due to a global oscillating mode, as predicted by the theory. A new scaling relationship has also been proposed to determine the preferred frequency for nearly unitary velocity ratios. The proposed law takes into account both the Reynolds-number and the velocity-ratio dependence of this frequency and therefore improves all the previously proposed relationships.
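The classical Strouhal relation that such scaling laws extend can be sketched as follows; the thesis's refined law, with its Reynolds-number and velocity-ratio corrections, is not reproduced here, and the numbers below are illustrative:

```python
def shedding_frequency(strouhal, velocity, thickness):
    """Classical Strouhal relation f = St * U / t for vortex shedding
    behind a bluff separating wall of trailing-edge thickness t.
    Refined scaling laws add Reynolds-number and velocity-ratio
    corrections on top of this baseline (not reproduced here)."""
    return strouhal * velocity / thickness

# illustrative: St ~ 0.2 (typical bluff-body value), mean velocity
# 10 m/s, 5 mm trailing-edge thickness
f = shedding_frequency(0.2, 10.0, 0.005)   # Hz
```

Measuring how the observed preferred frequency deviates from this baseline across Reynolds numbers and velocity ratios is precisely what motivates the improved scaling relationship.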
Abstract:
Recently, an ever increasing degree of automation has been observed in most industrial automation processes. This increase is motivated by higher requirements for systems with high performance in terms of the quality of the products/services generated, productivity, efficiency and low costs in design, realization and maintenance. This trend in the growth of complex automation systems is rapidly spreading over automated manufacturing systems (AMS), where the integration of mechanical and electronic technology, typical of Mechatronics, is merging with other technologies such as Informatics and communication networks. An AMS is a very complex system that can be thought of as constituted by a set of flexible working stations and one or more transportation systems. To understand how important these machines are in our society, consider that every day most of us use bottles of water or soda, or buy boxed products such as food or cigarettes, and so on. Another important consideration deriving from their complexity is that the consortium of machine producers has estimated around 350 types of manufacturing machine. A large number of manufacturing machine industries are present in Italy, notably the packaging machine industry; in particular, a great concentration of this kind of industry is located in the Bologna area, which for this reason is called the "packaging valley". Usually, the various parts of an AMS interact in a concurrent and asynchronous way, and coordinating the parts of the machine to obtain the desired overall behaviour is a hard task. Often, this is the case in large-scale systems organized in a modular and distributed manner.
Even if the success of a modern AMS from a functional and behavioural point of view is still attributable to the design choices made in the definition of the mechanical structure and the electrical/electronic architecture, the system that governs the control of the plant is becoming crucial, because of the large number of duties associated with it. Apart from the activity inherent to the automation of the machine cycles, the supervisory system is called upon to perform other main functions, such as: emulating the behaviour of traditional mechanical members, thus allowing a drastic constructive simplification of the machine and a crucial functional flexibility; dynamically adapting the control strategies according to the different productive needs and operational scenarios; obtaining a high quality of the final product through verification of the correctness of the processing; guiding the operator attending the machine to promptly and carefully take the actions needed to establish or restore optimal operating conditions; and managing real-time diagnostic information in support of machine maintenance operations. The kinds of facilities that designers can directly find on the market, in terms of software component libraries, in fact provide adequate support for the implementation of either top-level or bottom-level functionalities, typically pertaining to the domains of user-friendly HMIs, closed-loop regulation and motion control, and fieldbus-based interconnection of remote smart devices. What is still lacking is a reference framework comprising a comprehensive set of highly reusable logic control components that, focusing on the cross-cutting functionalities characterizing the automation domain, may help designers in the process of modelling and structuring their applications according to specific needs.
Historically, the design and verification process for complex automated industrial systems has been performed in an empirical way, without a clear distinction between functional and technological-implementation concepts and without a systematic method to deal organically with the complete system. In the field of analog and digital control, design and verification through formal and simulation tools have been adopted for a long time, at least for multivariable and/or nonlinear controllers for complex time-driven dynamics, as in the fields of vehicles, aircraft, robots, electric drives and complex power electronics equipment. Moving to the field of logic control, typical of industrial manufacturing automation, the design and verification process is approached in a completely different, usually very “unstructured”, way: no clear distinction between functions and implementations, or between functional architectures and technological architectures and platforms, is considered. This difference is probably due to the different “dynamical framework” of logic control with respect to analog/digital control. As a matter of fact, in logic control discrete-event dynamics replace time-driven dynamics; hence most of the formal and mathematical tools of analog/digital control cannot be directly migrated to logic control to highlight the distinction between functions and implementations. In addition, in the common view of application technicians, logic control design is strictly connected to the adopted implementation technology (relays in the past, software nowadays), leading again to a deep confusion between the functional view and the technological view. In industrial automation software engineering, concepts such as modularity, encapsulation, composability and reusability are strongly emphasized and profitably realized in the so-called object-oriented methodologies.
Industrial automation has lately been adopting this approach, as testified by the IEC standards IEC 61131-3 and IEC 61499, which have been considered in commercial products only recently. On the other hand, in the scientific and technical literature many contributions have already been proposed to establish a suitable modelling framework for industrial automation. In recent years it has been possible to observe a considerable growth in the exploitation of innovative concepts and technologies from the ICT world in industrial automation systems. As far as logic control design is concerned, Model-Based Design (MBD) is being imported into industrial automation from the software engineering field. Another key point in industrial automated systems is the growth of requirements in terms of availability, reliability and safety for technological systems. In other words, the control system should not only deal with the nominal behaviour, but should also handle other important duties, such as diagnosis and fault isolation, recovery, and safety management. Indeed, together with high performance, fault occurrences increase in complex systems. This is a consequence of the fact that, as typically occurs in reliable mechatronic systems, in complex systems such as AMS an increasing number of electronic devices are present alongside reliable mechanical elements, and electronic devices are more vulnerable by their own nature. The problem of diagnosis and fault isolation in a generic dynamical system consists in the design of an elaboration unit that, by appropriately processing the inputs and outputs of the dynamical system, is capable of detecting incipient faults on the plant devices and of reconfiguring the control system so as to guarantee satisfactory performance.
The designer should be able to formally verify the product, certifying that, in its final implementation, it will perform its required function while guaranteeing the desired level of reliability and safety; the next step is preventing faults and, where necessary, reconfiguring the control system so that faults are tolerated. On this topic, important improvements to formal verification of logic control, fault diagnosis and fault-tolerant control derive from Discrete Event Systems theory. The aim of this work is to define a design pattern and a control architecture that help the designer of control logic in industrial automated systems. The work starts with a brief discussion of the main characteristics and a description of industrial automated systems in Chapter 1. In Chapter 2, a survey of the state of the art of the software engineering paradigms applied to industrial automation is presented. Chapter 3 presents an architecture for industrial automated systems based on the new concept of Generalized Actuator, showing its benefits, while in Chapter 4 this architecture is refined using a novel entity, the Generalized Device, in order to achieve better reusability and modularity of the control logic. In Chapter 5 a new approach based on Discrete Event Systems is presented for the problem of software formal verification, together with an active fault-tolerant control architecture using online diagnostics. Finally, concluding remarks and some ideas on new directions to explore are given. Appendix A briefly reports some concepts and results about Discrete Event Systems which should help the reader understand some crucial points in Chapter 5, while Appendix B gives an overview of the experimental testbed of the Laboratory of Automation of the University of Bologna, used to validate the approach presented in Chapters 3, 4 and 5. Appendix C reports some component models used in Chapter 5 for formal verification.
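The Discrete Event Systems fault-diagnosis idea can be illustrated with a minimal state-estimate diagnoser: the set of plant states consistent with the observed event sequence is propagated forward, and a fault is declared once every consistent state lies beyond the unobservable fault event. The automaton, its states and its event names below are invented for illustration and are not taken from the thesis:

```python
# Hypothetical plant with an unobservable "stuck" fault. Events whose
# names start with "f_" are unobservable; all others are observable.
TRANS = {
    "idle":       {"start": "running", "f_stuck": "stuck_idle"},
    "running":    {"done": "idle"},
    "stuck_idle": {"start": "stuck_run"},
    "stuck_run":  {"timeout": "stuck_idle"},
}
FAULTY = {"stuck_idle", "stuck_run"}

def closure(states):
    """Extend a state estimate with every state reachable through
    unobservable fault events."""
    todo, seen = list(states), set(states)
    while todo:
        for ev, nxt in TRANS.get(todo.pop(), {}).items():
            if ev.startswith("f_") and nxt not in seen:
                seen.add(nxt)
                todo.append(nxt)
    return seen

def diagnose(observed, initial="idle"):
    """Return 'fault', 'normal' or 'uncertain' for an observed run."""
    estimate = closure({initial})
    for ev in observed:
        # Keep only states that can explain the observed event.
        estimate = closure({TRANS[s][ev]
                            for s in estimate if ev in TRANS[s]})
    if estimate and estimate <= FAULTY:
        return "fault"        # every explanation involves the fault
    if estimate and not (estimate & FAULTY):
        return "normal"       # no explanation involves the fault
    return "uncertain"

print(diagnose(["start", "timeout"]))  # prints "fault"
```

Here the "timeout" event is only possible after the fault, so observing it shrinks the estimate to faulty states; the run ["start", "done"], by contrast, is compatible with both healthy and faulty behaviour and yields "uncertain".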
Resumo:
The Gaia space mission is a major project for the European astronomical community. As challenging as it is, the processing and analysis of the huge data flow incoming from Gaia is the subject of thorough study and preparatory work by the DPAC (Data Processing and Analysis Consortium), in charge of all aspects of the Gaia data reduction. This PhD Thesis was carried out in the framework of the DPAC, within the team based in Bologna. The task of the Bologna team is to define the calibration model and to build a grid of spectro-photometric standard stars (SPSS) suitable for the absolute flux calibration of the Gaia G-band photometry and the BP/RP spectrophotometry. Such a flux calibration can be performed by repeatedly observing each SPSS during the lifetime of the Gaia mission and by comparing the observed Gaia spectra to the spectra obtained by our ground-based observations. Because of both the different observing sites involved and the huge number of frames expected (≃100000), it is essential to maintain the maximum homogeneity in data quality, acquisition and treatment; particular care has to be taken to test the capabilities of each telescope/instrument combination (through the “instrument familiarization plan”) and to devise methods to keep under control, and where necessary correct for, the typical instrumental effects that can affect the high precision required for the Gaia SPSS grid (a few % with respect to Vega). I contributed to the ground-based survey of Gaia SPSS in many respects: the observations, the instrument familiarization plan, the data reduction and analysis activities (both photometry and spectroscopy), and the maintenance of the data archives. However, the field I was personally responsible for was photometry, and in particular relative photometry for the production of short-term light curves.
In this context, I defined and tested a semi-automated pipeline for the pre-reduction of SPSS imaging data and the production of aperture photometry catalogues ready to be used for further analysis. A series of semi-automated quality control checks is included in the pipeline at various levels, from pre-reduction to aperture photometry to light curve production and analysis.
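The core measurement in such a pipeline, aperture photometry, amounts to summing the counts inside a circular aperture around the star and subtracting the sky level estimated in a surrounding annulus. A minimal NumPy sketch, with illustrative radii and a synthetic frame rather than the pipeline's actual parameters:

```python
import numpy as np

def aperture_photometry(image, x0, y0, r_ap=5.0, r_in=8.0, r_out=12.0):
    """Sum the counts inside a circular aperture centred on (x0, y0),
    subtracting the median sky level estimated in a surrounding annulus."""
    yy, xx = np.indices(image.shape)
    r = np.hypot(xx - x0, yy - y0)
    aperture = r <= r_ap
    annulus = (r >= r_in) & (r <= r_out)
    sky = np.median(image[annulus])            # per-pixel sky background
    return image[aperture].sum() - sky * aperture.sum()

# Synthetic frame: flat sky of 10 counts plus a 500-count point source.
img = np.full((64, 64), 10.0)
img[32, 32] += 500.0
print(aperture_photometry(img, 32, 32))        # 500.0
```

A production pipeline would add sub-pixel aperture edges, curve-of-growth corrections and error propagation, but the aperture-minus-annulus structure is the same.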
Resumo:
This Thesis focuses on the X-ray study of the inner regions of Active Galactic Nuclei, in particular on the formation of high-velocity winds by the accretion disk itself. Constraining the physical parameters of AGN winds is of paramount importance both for understanding the physics of the accretion/ejection flow onto supermassive black holes, and for quantifying the amount of feedback between the SMBH and its environment across cosmic time. The sources selected for the present study are BAL, mini-BAL, and NAL QSOs, known to host high-velocity winds associated with the AGN nuclear regions. Observationally, a three-fold strategy has been adopted: - substantial samples of distant sources have been analyzed through spectral, photometric, and statistical techniques, to gain insights into their mean properties as a population; - a moderately sized sample of bright sources has been studied through detailed X-ray spectral analysis, to give a first flavor of the general spectral properties of these sources, also from a temporally resolved point of view; - the best nearby candidate has been thoroughly studied using the most sophisticated spectral analysis techniques applied to a large dataset with a high S/N ratio, to understand the details of the physics of its accretion/ejection flow. There are three main channels through which this Thesis has been developed: - [Archival Studies]: the XMM-Newton public archival data have been extensively used to analyze both a large sample of distant BAL QSOs and several individual bright sources, either BAL, mini-BAL, or NAL QSOs. - [New Observational Campaign]: I proposed and was awarded new X-ray pointings of the mini-BAL QSOs PG 1126-041 and PG 1351+640 during the XMM-Newton AO-7 and AO-8. These produced the largest X-ray observational campaign ever performed on a mini-BAL QSO (PG 1126-041), including the longest exposure so far.
Thanks to this exceptional dataset, a wealth of information has been obtained on both the intrinsic continuum and the complex reprocessing media present in the inner regions of this AGN. Furthermore, the field of temporally resolved X-ray spectral analysis has finally been opened for mini-BAL QSOs. - [Theoretical Studies]: some issues concerning the connection between theories and observations of AGN accretion disk winds have been investigated, through theoretical arguments and studies of synthetic absorption line profiles.
Resumo:
A fundamental gap in the current understanding of collapsed structures in the universe concerns the thermodynamical evolution of the ordinary, baryonic component. Unopposed radiative cooling of plasma would lead to the cooling catastrophe, a massive inflow of condensing gas toward the centre of galaxies, groups and clusters. The last generation of multiwavelength observations has radically changed our view on baryons, suggesting that the heating linked to the active galactic nucleus (AGN) may be the balancing counterpart of cooling. In this Thesis, I investigate the engine of the heating regulated by the central black hole. I argue that mechanical feedback, based on massive subrelativistic outflows, is the key to solving the cooling flow problem, i.e. dramatically quenching the cooling rates for several billion years without destroying the cool-core structure. Using an upgraded version of the parallel 3D hydrodynamic code FLASH, I show that anisotropic AGN outflows can further reproduce fundamental observed features, such as buoyant bubbles, cocoon shocks, sonic ripples, metal dredge-up, and subsonic turbulence. The latter is an essential ingredient to drive nonlinear thermal instabilities, which cause cold gas condensation, a residual of the quenched cooling flow and, later, fuel for the AGN feedback engine. The self-regulated outflows are systematically tested on the scales of massive clusters, groups and isolated elliptical galaxies: in lighter, less bound objects the feedback needs to be gentler and less efficient, in order to avoid drastic overheating. In this Thesis, I describe in depth the complex hydrodynamics involved, including the coupling of the feedback energy to that of the surrounding hot medium. Finally, I present the merits and flaws of all the proposed models, with a critical eye toward observational concordance.
Resumo:
A novel design based on electric field-free open microwell arrays for the automated continuous-flow sorting of single cells or small clusters of cells is presented. The main feature of the proposed device is the parallel analysis of cell-cell and cell-particle interactions in each microwell of the array. High-throughput sample recovery, with fast and separate transfer from the microsites to standard microtiter plates, is also possible thanks to flexible printed circuit board technology, which makes it possible to produce cost-effective large-area arrays featuring geometries compatible with laboratory equipment. Particle isolation is performed via negative dielectrophoretic forces, which convey the particles into the microwells. Particles such as cells and beads flow in electrically active microchannels on whose substrate the electrodes are patterned. The introduction of particles into the microwells is performed automatically, with the required feedback signal generated by a microscope-based optical counting and detection routine. In order to isolate a controlled number of particles, we created two configurations of the electric field within the structure: the first permits their isolation, whereas the second creates a net force which repels the particles from the microwell entrance. To increase the parallelism of the cell-isolation function, a new technique based on coplanar electrodes to detect particle presence was developed. A lock-in amplifying scheme was used to monitor the impedance of the channel as perturbed by particles flowing in high-conductivity suspension media. The impedance measurement module was also combined with the dielectrophoretic focusing stage situated upstream of the measurement stage, to limit the dispersion of the measured signal amplitude due to variations of the particles' position within the microchannel.
In conclusion, the designed system complies with the initial specifications, making it suitable for cellomics and biotechnology applications.
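The lock-in amplifying scheme mentioned above can be illustrated in software: the measured signal is mixed with in-phase and quadrature references at the excitation frequency and low-pass filtered, so that only the component synchronous with the reference survives the noise. A minimal sketch; the frequencies, noise level and time constant are illustrative, not the device's actual values:

```python
import numpy as np

def lock_in(signal, fs, f_ref, tau=0.01):
    """Software lock-in: mix the signal with quadrature references at
    f_ref and low-pass the products to recover amplitude and phase."""
    t = np.arange(len(signal)) / fs
    i_mix = signal * np.cos(2 * np.pi * f_ref * t)
    q_mix = signal * np.sin(2 * np.pi * f_ref * t)
    # Moving-average low-pass spanning an integer number of periods.
    n = int(fs * tau)
    kernel = np.ones(n) / n
    i_f = np.convolve(i_mix, kernel, mode="valid")
    q_f = np.convolve(q_mix, kernel, mode="valid")
    return 2 * np.hypot(i_f, q_f), np.arctan2(q_f, i_f)

# A 5 kHz carrier of unit amplitude buried in noise is recovered.
fs, f0 = 100_000, 5_000
t = np.arange(int(0.02 * fs)) / fs
rng = np.random.default_rng(0)
sig = np.cos(2 * np.pi * f0 * t) + rng.normal(0.0, 0.5, t.size)
amp, _ = lock_in(sig, fs, f0)
print(round(amp.mean(), 2))   # close to 1.0
```

In the device, the dip in the recovered amplitude as a particle transits the coplanar electrodes is what signals its presence; the narrow detection bandwidth set by the averaging time is what makes the measurement workable in high-conductivity media.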