721 results for Informatics Engineering
Resumo:
[EN] This paper proposes the incorporation of engineering knowledge through both (a) advanced state-of-the-art preference-handling decision-making tools integrated in multiobjective evolutionary algorithms and (b) engineering-knowledge-based variance-reduction simulation, as enhancing tools for the robust optimum design of structural frames taking uncertainties in the design variables into consideration. The simultaneous minimization of the constrained weight (adding structural weight and the average distribution of constraint violations) on the one hand, and of the standard deviation of the distribution of constraint violation on the other, is handled with multiobjective-optimization-based evolutionary computation in two different multiobjective algorithms. The optimum design values of the deterministic structural problem in question are proposed as a reference point (the aspiration level) in reference-point-based evolutionary multiobjective algorithms (here g-dominance is used). Results including
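A minimal sketch of the g-dominance relation mentioned above (for a minimization problem with a reference point g) might look as follows. The function names and the two-objective example are illustrative assumptions, not taken from the paper; the flag-based formulation follows the commonly cited definition of g-dominance.

```python
# Hedged sketch of g-dominance for minimization problems.
# A solution gets flag 1 if it satisfies all aspiration levels g, or none
# of them; solutions with flag 1 are preferred, ties broken by Pareto dominance.

def flag(f, g):
    """1 if objective vector f meets all aspiration levels g, or none; else 0."""
    better = all(fi <= gi for fi, gi in zip(f, g))
    worse = all(fi >= gi for fi, gi in zip(f, g))
    return 1 if (better or worse) else 0

def pareto_dominates(f1, f2):
    """Standard Pareto dominance for minimization."""
    return (all(a <= b for a, b in zip(f1, f2))
            and any(a < b for a, b in zip(f1, f2)))

def g_dominates(f1, f2, g):
    """True if f1 g-dominates f2 with respect to reference point g."""
    fl1, fl2 = flag(f1, g), flag(f2, g)
    if fl1 != fl2:
        return fl1 > fl2
    return pareto_dominates(f1, f2)
```

With reference point `g = (1, 1)`, a solution meeting both aspiration levels g-dominates one that meets only one of them, even if neither Pareto-dominates the other.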
Resumo:
[ES] The challenge of achieving a more efficient electricity grid involves the massive introduction of renewable energies into the grid, thereby reducing CO2 emissions. To this end, it is proposed not only to control production, as has been done until now, but also to control demand. Accordingly, this research evaluates the use of Model-Driven Engineering to manage the complexity of modelling electricity grids, Business Intelligence to analyse the large amount of simulation data, and Collective Intelligence to optimize the distribution of energy among the millions of devices found on the demand side.
Resumo:
This research investigated some of the main problems connected with the application of Tissue Engineering in the prosthetic field, in particular the characterization of scaffolding materials and the biomimetic strategies adopted to promote implant integration. Spectroscopic and thermal analysis techniques were applied to characterize the chemico-physical properties of the materials, such as: crystallinity; relative composition in the case of composite materials; structure and conformation of polymeric and peptidic chains; mechanism and rate of degradation; and intramolecular and intermolecular interactions (hydrogen bonds, aliphatic interactions). This kind of information is of great importance for understanding the interactions that a scaffold undergoes when in contact with biological tissues; it is fundamental for predicting biodegradation mechanisms and for understanding how chemico-physical properties change during the degradation process. In order to fully characterize biomaterials, these findings must be integrated with information on mechanical aspects and on in vitro and in vivo behaviour, thanks to collaborations with biomedical engineers and biologists. This study focused on three different systems that correspond to three different strategies adopted in Tissue Engineering: biomimetic replication of the fibrous 3-D structure of the extracellular matrix (PCL-PLLA), incorporation of an apatitic phase similar to the bone inorganic phase to promote biomineralization (PCL-HA), and surface modification with synthetic oligopeptides that elicit interaction with osteoblasts. The characterization of the PCL-PLLA composite showed that degradation started along the PLLA fibres, which are more hydrophilic and serve as a guide for tissue regeneration. Moreover, it was found that some cellular lines are more active in colonizing the scaffold.
In the PCL-HA composite, the weight ratio between the polymeric and the inorganic phase plays an essential role both in the degradation process and in the biomineralization of the material. The study of self-assembling peptides made it possible to clarify the influence of primary structure on the intramolecular and intermolecular interactions that lead to the formation of the secondary structure, and a new class of oligopeptides useful for functionalizing material surfaces was found. Among the analytical techniques used in this study, Raman vibrational spectroscopy played a major role, being non-destructive and non-invasive, two properties that make it suitable for degradation studies and for morphological characterization. Micro-IR spectroscopy was also useful in understanding peptide structure on oxidized titanium: to date, this study has been one of the first to employ this relatively new technique in the biomedical field.
Resumo:
Asset Management (AM) is a set of procedures operable at the strategic, tactical and operational levels for the management of a physical asset's performance, associated risks and costs within its whole life cycle. AM combines the engineering, managerial and informatics points of view. In addition to internal drivers, AM is driven by the demands of customers (social pull) and regulators (environmental mandates and economic considerations). AM can follow either a top-down or a bottom-up approach. Considering rehabilitation planning at the bottom-up level, the main issue is to rehabilitate the right pipe at the right time with the right technique. Finding the right pipe may be possible and practicable, but determining the timeliness of the rehabilitation and choosing the technique to rehabilitate with is far less straightforward. It is a truism that rehabilitating an asset too early is unwise, just as doing it late may entail extra expenses en route, in addition to the cost of the rehabilitation exercise per se. One is confronted with a typical Hamlet-esque dilemma: 'to repair or not to repair'; or, put another way, 'to replace or not to replace'. The decision in this case is governed by three factors, not necessarily interrelated: quality of customer service, costs, and budget over the life cycle of the asset in question. The goal of replacement planning is to find the juncture in the asset's life cycle where the cost of replacement is balanced by the rising maintenance costs and the declining level of service. System maintenance aims at improving performance and keeping the asset in good working condition for as long as possible. Effective planning is used to target maintenance activities to meet these goals and minimize costly exigencies. The main objective of this dissertation is to develop a process model for asset replacement planning.
The aim of the model is to determine the optimal pipe replacement year by comparing, over time, the annual operating and maintenance costs of the existing asset with the annuity of the investment in a new equivalent pipe at the best market price. It is proposed that risk cost provides an appropriate framework for deciding the balance between investment in replacing an asset and operational expenditure on maintaining it. The model describes a practical approach to estimating when an asset should be replaced. A comprehensive list of criteria to be considered is outlined, the main one being a vis-à-vis comparison between maintenance and replacement expenditures. The cost of maintaining the assets should be described by a cost function related to the asset type, the risks to the safety of people and property owing to the declining condition of the asset, and the predicted frequency of failures. The cost functions reflect the condition of the existing asset at the time the decision to maintain or replace is taken: age, level of deterioration, risk of failure. The process model is applied to the wastewater network of Oslo, the capital city of Norway, and uses available real-world information to forecast the life-cycle costs of maintenance and rehabilitation strategies and to support infrastructure management decisions. The case study provides an insight into the various definitions of 'asset lifetime': service life, economic life and physical life. The results recommend that one common lifetime value should not be applied to all the pipelines in the stock for long-term investment planning; rather, it would be wiser to define different values for different cohorts of pipelines, to reduce the uncertainties associated with generalizations made for simplification. It is envisaged that the more criteria the municipality is able to include to estimate maintenance costs for the existing assets, the more precise the estimation of the expected service life will be.
The ability to include social costs makes it possible to compute the asset life based not only on its physical characterization but also on the sensitivity of network areas to the social impact of failures. This type of economic analysis is very sensitive to model parameters that are difficult to determine accurately. The main value of this approach is the effort to demonstrate that it is possible to include in decision-making factors such as the cost of the risk associated with a decline in the level of performance, the extent of this deterioration and the asset's depreciation rate, without looking at age as the sole criterion for replacement decisions.
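The maintain-versus-replace comparison described above can be sketched numerically: the optimal replacement year is taken as the first year in which the rising maintenance-plus-risk cost of the old pipe exceeds the annuity of investing in a new one. The cost-growth model and all figures below are hypothetical assumptions for illustration, not the dissertation's actual cost functions.

```python
# Hedged sketch of the replace-vs-maintain decision rule.

def annuity(investment, rate, lifetime_years):
    """Equivalent annual cost of the investment in a new pipe."""
    return investment * rate / (1.0 - (1.0 + rate) ** -lifetime_years)

def replacement_year(base_cost, growth, risk_cost, rate, investment, lifetime):
    """First year in which maintaining costs more than replacing (or None)."""
    ann = annuity(investment, rate, lifetime)
    for year in range(1, 200):
        # Maintenance grows geometrically; risk cost rises linearly with age.
        maintain = base_cost * (1.0 + growth) ** year + risk_cost * year
        if maintain > ann:
            return year
    return None

# Example: a 100 000 EUR replacement at 4% over 50 years, versus maintenance
# growing 5% per year from 1 000 EUR plus a linearly rising risk cost.
year = replacement_year(1000.0, 0.05, 100.0, 0.04, 100000.0, 50)
```

Richer cost functions (condition grades, failure-frequency forecasts, social costs) would replace the two-term maintenance expression without changing the structure of the rule.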
Resumo:
This work is the direct outcome of a five-month internship carried out at INTERTABA S.p.A., in Zola Predosa (BO). The internship took place in the Engineering Department. In particular, an analysis of the existing KPIs was performed and improvement actions were proposed. The work developed in three phases. In the first phase, the current KPIs were analysed for each function belonging to Engineering: Engineering Support, Project Engineering, Plant & Utilities Maintenance, Technical Warehouse and General Services. In the second phase, some proposals for improving the existing KPIs were described for each function. Finally, for some functions, initiatives concerning the implementation of new KPIs were proposed.
Resumo:
Traditional software engineering approaches and metaphors fall short when applied to areas of growing relevance such as electronic commerce, enterprise resource planning, and mobile computing: such areas, in fact, generally call for open architectures that may evolve dynamically over time so as to accommodate new components and meet new requirements. This is probably one of the main reasons why the agent metaphor and the agent-oriented paradigm are gaining momentum in these areas. This thesis deals with the engineering of complex software systems in terms of the agent paradigm. This paradigm is based on the notions of agent and of systems of interacting agents as fundamental abstractions for designing, developing and managing, at runtime, typically distributed software systems. However, today the engineer often works with technologies that do not support the abstractions used in the design of the systems. For this reason, research on methodologies becomes a cornerstone of the scientific activity. Currently, most agent-oriented methodologies are supported by small teams of academic researchers; as a result, most of them are at an early stage and still in the context of mostly "academic" approaches to agent-oriented systems development. Moreover, such methodologies are not well documented and are very often defined and presented by focusing only on specific aspects of the methodology. The role played by meta-models becomes fundamental for comparing and evaluating the methodologies: a meta-model specifies the concepts, rules and relationships used to define methodologies. Although it is possible to describe a methodology without an explicit meta-model, formalising the underpinning ideas of the methodology in question is valuable when checking its consistency or planning extensions or modifications. A good meta-model must address all the different aspects of a methodology, i.e.
the process to be followed, the work products to be generated, and those responsible for making all this happen. In turn, specifying the work products that must be developed implies defining the basic modelling building blocks from which they are built. As a building block, the agent abstraction alone is not enough to fully model all the aspects of multi-agent systems in a natural way. In particular, different perspectives exist on the role that the environment plays within agent systems; however, it is at least clear that all non-agent elements of a multi-agent system are typically considered part of the multi-agent system environment. The key role of the environment as a first-class abstraction in the engineering of multi-agent systems is today generally acknowledged in the multi-agent system community, so the environment should be explicitly accounted for in the engineering of multi-agent systems, working as a new design dimension for agent-oriented methodologies. At least two main ingredients shape the environment: environment abstractions (entities of the environment encapsulating some function) and topology abstractions (entities of the environment that represent its spatial structure, either logical or physical). In addition, the engineering of non-trivial multi-agent systems requires principles and mechanisms for managing the complexity of the system representation. These principles lead to the adoption of a multi-layered description, which designers can use to provide different levels of abstraction over multi-agent systems. The research in these fields has led to the formulation of a new version of the SODA methodology, where environment abstractions and layering principles are exploited for engineering multi-agent systems.
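The two environment ingredients named above can be pictured as a pair of minimal abstractions: one encapsulating a function the environment offers, the other representing the spatial structure agents are located in. The class names and operations below are illustrative assumptions, not SODA's actual abstractions.

```python
# Hypothetical sketch of the two environment ingredients.

class EnvironmentAbstraction:
    """An environment entity encapsulating some function (here, a shared log
    that agents can append records to)."""
    def __init__(self, name):
        self.name = name
        self.records = []

    def operate(self, payload):
        self.records.append(payload)

class TopologyAbstraction:
    """A (logical) spatial structure: named places and the agents they host."""
    def __init__(self):
        self.places = {}

    def add_place(self, place):
        self.places.setdefault(place, set())

    def locate(self, agent, place):
        """Situate an agent in a place of the topology."""
        self.add_place(place)
        self.places[place].add(agent)
```

In a layered description, a whole topology at one level could itself be presented as a single environment abstraction at the level above.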
Resumo:
The inherent stochastic character of most of the physical quantities involved in engineering models has led to an ever-increasing interest in probabilistic analysis. Many approaches to stochastic analysis have been proposed. However, it is widely acknowledged that the only universal method available to solve accurately any kind of stochastic mechanics problem is Monte Carlo simulation. One of the key parts in the implementation of this technique is the accurate and efficient generation of samples of the random processes and fields involved in the problem at hand. In the present thesis, an original method for the simulation of homogeneous, multi-dimensional, multi-variate, non-Gaussian random fields is proposed. The algorithm has proved to be very accurate in matching both the target spectrum and the marginal probability. The computational efficiency and robustness are very good too, even when dealing with strongly non-Gaussian distributions. What is more, the resulting samples possess all the relevant, well-defined and desired properties of "translation fields", including crossing rates and distributions of extremes. The second part of the thesis lies in the field of non-destructive parametric structural identification. Its objective is to evaluate the mechanical characteristics of the constituent bars of existing truss structures, using static loads and strain measurements. In the cases of missing data and of damage affecting only a small portion of a bar, Genetic Algorithms have proved to be an effective tool for solving the problem.
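The "translation field" idea mentioned above can be sketched in one dimension: a correlated Gaussian sample is mapped through the standard normal CDF and then through the inverse of the target marginal CDF, yielding a non-Gaussian sample with the prescribed marginal. The AR(1) correlation model and the exponential target marginal are simplifying assumptions for illustration; the thesis algorithm itself is spectral, multi-dimensional and multi-variate.

```python
import math
import random

def std_normal_cdf(x):
    """CDF of the standard normal distribution."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gaussian_ar1(n, rho, rng):
    """Correlated standard-Gaussian sequence (AR(1) with coefficient rho)."""
    g = [rng.gauss(0.0, 1.0)]
    s = math.sqrt(1.0 - rho * rho)  # keeps the marginal variance equal to 1
    for _ in range(n - 1):
        g.append(rho * g[-1] + s * rng.gauss(0.0, 1.0))
    return g

def translation_sample(n, rho, inv_target_cdf, seed=0):
    """Non-Gaussian sample obtained by the translation (memoryless) mapping."""
    rng = random.Random(seed)
    return [inv_target_cdf(std_normal_cdf(x)) for x in gaussian_ar1(n, rho, rng)]

# Example target marginal: exponential with unit mean.
exp_inv_cdf = lambda u: -math.log(1.0 - u)
sample = translation_sample(10000, 0.8, exp_inv_cdf)
```

Because the mapping is monotone and memoryless, properties such as crossing rates and extreme-value distributions of the resulting field follow directly from those of the underlying Gaussian field.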
Resumo:
[EN] A new algorithm for evaluating the top event probability of large fault trees (FTs) is presented. This algorithm does not require any previous qualitative analysis of the FT. Indeed, its efficiency is independent of the FT logic, and it depends only on the number n of basic system components and on their failure probabilities. Our method provides exact lower and upper bounds on the top event probability by using new properties of the intrinsic order relation between binary strings. The intrinsic order enables one to select binary n-tuples with large occurrence probabilities without needing to evaluate them. This drastically reduces the complexity of the problem from exponential (2^n binary n-tuples) to linear (n Boolean variables)...
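The bounding idea behind such methods can be sketched as follows: if the structure function is evaluated only on a subset S of binary n-tuples, then the sum of the probabilities of the failing tuples in S is a lower bound on the top event probability, and adding the uncovered probability mass 1 - P(S) gives an upper bound. Here the most probable tuples are found by brute-force sorting over a tiny n, purely for illustration; the point of the intrinsic-order relation in the paper is precisely to select them without enumeration.

```python
from itertools import product

def tuple_prob(u, p):
    """Occurrence probability of binary n-tuple u given failure probs p."""
    prob = 1.0
    for ui, pi in zip(u, p):
        prob *= pi if ui == 1 else (1.0 - pi)
    return prob

def top_event_bounds(structure, p, k):
    """Lower/upper bounds on P(top event) from the k most probable n-tuples."""
    tuples = sorted(product((0, 1), repeat=len(p)),
                    key=lambda u: tuple_prob(u, p), reverse=True)[:k]
    covered = sum(tuple_prob(u, p) for u in tuples)
    lower = sum(tuple_prob(u, p) for u in tuples if structure(u))
    return lower, lower + (1.0 - covered)

# Example: 2-out-of-3 system (top event: at least two components fail).
fails = lambda u: sum(u) >= 2
lo, hi = top_event_bounds(fails, [0.1, 0.2, 0.3], k=6)
```

For this example the exact top event probability is 0.098, and evaluating only 6 of the 8 tuples already brackets it tightly.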
Resumo:
The progress of electron device integration has proceeded for more than 40 years following the well-known Moore's law, which states that the transistor density on chip doubles every 24 months. This trend has been made possible by the downsizing of MOSFET dimensions (scaling); however, new issues and new challenges are arising, and the conventional "bulk" architecture is becoming inadequate to face them. In order to overcome the limitations of conventional structures, the research community is preparing different solutions that need to be assessed. Possible solutions currently under scrutiny are: • devices incorporating materials with properties different from those of silicon for the channel and the source/drain regions; • new architectures such as Silicon-On-Insulator (SOI) transistors: the body thickness of Ultra-Thin-Body SOI devices is a new design parameter, and it makes it possible to keep short-channel effects under control without adopting high doping levels in the channel. Among the solutions proposed to overcome the difficulties related to scaling, we can highlight heterojunctions at the channel edge, obtained by adopting, for the source/drain regions, materials with a band gap different from that of the channel material. This solution makes it possible to increase the injection velocity of the particles travelling from the source into the channel, and therefore to increase the performance of the transistor in terms of drain current. The first part of this thesis addresses the use of heterojunctions in SOI transistors: chapter 3 outlines the basics of heterojunction theory and the adoption of this approach in older technologies such as heterojunction bipolar transistors; moreover, the modifications introduced in the Monte Carlo code to simulate conduction band discontinuities are described, together with the simulations performed on simplified one-dimensional structures to validate them.
Chapter 4 presents the results obtained from the Monte Carlo simulations performed on double-gate SOI transistors featuring conduction band offsets between the source and drain regions and the channel. In particular, attention has been focused on the drain current and on internal quantities such as inversion charge, potential energy and carrier velocities. Both graded and abrupt discontinuities have been considered. The scaling of device dimensions and the adoption of innovative architectures also have consequences for power dissipation. In SOI technologies, the channel is thermally insulated from the underlying substrate by a SiO2 buried-oxide layer; this SiO2 layer has a thermal conductivity two orders of magnitude lower than that of silicon, and it impedes the dissipation of the heat generated in the active region. Moreover, the thermal conductivity of thin semiconductor films is much lower than that of bulk silicon, due to phonon confinement and boundary scattering. All these aspects cause severe self-heating effects, which detrimentally impact carrier mobility and therefore the saturation drive current of high-performance transistors; as a consequence, thermal device design is becoming a fundamental part of integrated circuit engineering. The second part of this thesis discusses the problem of self-heating in SOI transistors. Chapter 5 describes the causes of heat generation and dissipation in SOI devices, and it provides a brief overview of the methods that have been proposed to model these phenomena. In order to understand how this problem impacts the performance of different SOI architectures, three-dimensional electro-thermal simulations have been applied to the analysis of SHE in planar single- and double-gate SOI transistors, as well as FinFETs, featuring the same isothermal electrical characteristics.
In chapter 6, the same simulation approach is extensively employed to study the impact of SHE on the performance of a FinFET representative of the high-performance transistor of the 45 nm technology node. Its effects on the ON-current, the maximum temperatures reached inside the device and the thermal resistance associated with the device itself, as well as the dependence of SHE on the main geometrical parameters, are analyzed. Furthermore, the consequences for self-heating of technological solutions such as raised S/D extension regions or a reduced fin height are explored as well. Finally, conclusions are drawn in chapter 7.
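A back-of-the-envelope estimate illustrates why the buried oxide matters so much for self-heating: a simple 1-D conduction model already shows a large temperature rise across the oxide. The geometry, power level and material values below are rough illustrative assumptions, not figures from the thesis.

```python
# 1-D thermal-resistance estimate of the temperature rise across the
# buried oxide, compared with an equally thick layer of bulk silicon.

def thermal_resistance(thickness_m, conductivity_w_mk, area_m2):
    """1-D conduction resistance R_th = t / (k * A), in K/W."""
    return thickness_m / (conductivity_w_mk * area_m2)

area = 1e-6 * 1e-6    # footprint of a hypothetical 1 um x 1 um device, m^2
t_box = 100e-9        # 100 nm buried oxide thickness (assumed)
k_sio2 = 1.4          # W/(m K) for SiO2, roughly two orders below bulk Si
k_si = 150.0          # W/(m K) for bulk silicon (approximate)

r_box = thermal_resistance(t_box, k_sio2, area)
r_si = thermal_resistance(t_box, k_si, area)

power = 1e-3          # 1 mW dissipated in the active region (assumed)
dT_box = power * r_box  # temperature rise across the oxide alone, K
```

Even before accounting for the reduced conductivity of the thin silicon film itself, the oxide contributes a temperature rise of tens of kelvin per milliwatt in this toy geometry, which is the mechanism behind the mobility and drive-current degradation discussed above.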
Resumo:
Impairment of postural control is a common consequence of Parkinson's disease (PD) that becomes more and more critical as the disease progresses, in spite of the available medications. Postural instability is one of the most disabling features of PD: it induces difficulties with postural transitions, initiation of movements and gait, leads to an inability to live independently at home, and is the major cause of falls. Falls are frequent (over 38% of patients fall each year) and may have adverse consequences such as soft tissue injuries, hip fractures, and immobility due to fear of falling. As the disease progresses, both postural instability and fear of falling worsen, which leads patients with PD to become increasingly immobilized. The main aims of this dissertation are to: 1) detect and assess, in a quantitative way, impairments of postural control in PD subjects, and investigate the central mechanisms that control such motor performance and how these mechanisms are affected by levodopa; 2) develop and validate a protocol, using wearable inertial sensors, to measure postural sway and the postural transitions prior to step initiation; 3) find quantitative measures sensitive to impairments of postural control in the early stages of PD, and quantitative biomarkers of disease progression; and 4) test the feasibility and effects of a recently developed audio-biofeedback system in maintaining balance in subjects with PD. In the first set of studies, we showed how PD reduces the functional limits of stability as well as the magnitude and velocity of postural preparation during voluntary forward and backward leaning while standing. Levodopa improves the limits of stability but not the postural strategies used to achieve the leaning. Further, we found a strong relationship between backward voluntary limits of stability and the size of the automatic postural response to backward perturbations in control subjects and in PD subjects ON medication.
This relation might suggest that the central nervous system presets postural response parameters based on perceived maximum limits, and that this presetting is absent in PD patients OFF medication but restored with levodopa replacement. Furthermore, we investigated how the size of anticipatory postural adjustments (APAs) prior to step initiation depends on initial stance width. We found that patients with PD did not scale up the size of their APA with stance width as much as control subjects, so they had much more difficulty initiating a step from a wide stance than from a narrow stance. This result supports the hypothesis that subjects with PD maintain a narrow stance as a compensation for their inability to sufficiently increase the size of their lateral APA to allow speedy step initiation from a wide stance. In the second set of studies, we demonstrated that it is possible to use wearable accelerometers to quantify postural performance during quiet stance and step initiation balance tasks in healthy subjects. We used a model to predict center-of-pressure displacements associated with accelerations at the upper and lower back and thigh. This approach allows the measurement of balance control without the use of a force platform, outside the laboratory environment. We used wearable accelerometers on a population of early, untreated PD patients, and found that postural control in stance and postural preparation prior to a step are impaired early in the disease, when the typical balance and gait initiation symptoms are not yet clearly manifested. These novel results suggest that technological measures of postural control can be more sensitive than clinical measures. Furthermore, we assessed spontaneous sway and step initiation longitudinally across one year in patients with early, untreated PD. We found that changes in trunk sway, and especially movement smoothness, measured as jerk, could be used as an objective measure of PD and its progression.
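A jerk-based smoothness measure of the kind mentioned above can be sketched from raw accelerometer data: jerk is the time derivative of acceleration, so a mean-squared finite-difference derivative distinguishes smooth sway from jittery sway. The sampling rate, the lack of normalization, and the synthetic signals are assumptions for illustration; the thesis' exact definition may differ.

```python
import math

def mean_squared_jerk(accel, fs):
    """Mean squared jerk of an acceleration trace, by finite differences."""
    dt = 1.0 / fs
    jerk = [(a2 - a1) / dt for a1, a2 in zip(accel, accel[1:])]
    return sum(j * j for j in jerk) / len(jerk)

# Two synthetic trunk-acceleration traces of equal amplitude: a slow, smooth
# sway at 0.3 Hz and a jittery one at 5 Hz, both sampled at 100 Hz.
fs = 100.0
t = [i / fs for i in range(500)]
smooth = [0.1 * math.sin(2 * math.pi * 0.3 * ti) for ti in t]
jittery = [0.1 * math.sin(2 * math.pi * 5.0 * ti) for ti in t]
```

The jittery trace scores far higher than the smooth one, which is the property that makes jerk a candidate marker of impaired, less smooth movement.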
In the third set of studies, we studied the feasibility of adapting an existing audio-biofeedback device to improve balance control in patients with PD. Preliminary results showed that PD subjects found the system easy to use and helpful, and that they were able to correctly follow the audio information when available. Audio-biofeedback improved the properties of trunk sway during quiet stance. Our results have many implications for: i) understanding the central mechanisms that control postural motor performance, and how these mechanisms are affected by levodopa; ii) the design of innovative protocols for measuring and remotely monitoring motor performance in the elderly and in subjects with PD; and iii) the development of technologies for improving balance, mobility, and consequently quality of life, in patients with balance disorders such as PD, through augmented biofeedback paradigms.
Resumo:
Tissue engineering is a discipline that aims at regenerating damaged biological tissues by using a cell construct engineered in vitro, made of cells grown in a porous 3D scaffold. The role of the scaffold is to guide cell growth and differentiation by acting as a bioresorbable temporary substrate that will eventually be replaced by new tissue produced by the cells. As a matter of fact, obtaining a successful engineered tissue requires a multidisciplinary approach that integrates the basic principles of biology, engineering and materials science. The present Ph.D. thesis aimed at developing and characterizing innovative polymeric bioresorbable scaffolds made of hydrolysable polyesters. The potential of both commercial polyesters (i.e. poly-ε-caprolactone, polylactide and some lactide copolymers) and non-commercial polyesters (i.e. poly-ω-pentadecalactone and some of its copolymers) was explored and discussed. Two techniques were employed to fabricate scaffolds: supercritical carbon dioxide (scCO2) foaming and electrospinning (ES). The former is a powerful technology that makes it possible to produce 3D microporous foams while avoiding the use of solvents that can be toxic to mammalian cells. The scCO2 process, which is commonly applied to amorphous polymers, was successfully modified to foam a highly crystalline poly(ω-pentadecalactone-co-ε-caprolactone) copolymer, and the effect of process parameters on scaffold morphology and thermo-mechanical properties was investigated. In the course of the present research activity, sub-micrometric fibrous non-woven meshes were produced using ES technology. Electrospun materials are considered highly promising scaffolds because they resemble the 3D organization of the native extracellular matrix. Careful control of process parameters made it possible to fabricate defect-free fibres with diameters ranging from hundreds of nanometres to several microns, having either smooth or porous surfaces.
Moreover, the versatility of ES technology made it possible to produce electrospun scaffolds from different polyesters, as well as "composite" non-woven meshes obtained by concomitantly electrospinning fibres that differ in both morphology and polymer material. The 3D architecture of the electrospun scaffolds fabricated in this research was controlled in terms of mutual fibre orientation by properly modifying the instrumental apparatus. This aspect is particularly interesting since the micro/nano-architecture of the scaffold is known to affect cell behaviour. Since last-generation scaffolds are expected to induce specific cell responses, the present research activity also explored the possibility of producing electrospun scaffolds that are bioactive towards cells. Bio-functionalized substrates were obtained by loading polymer fibres with growth factors (i.e. biomolecules that elicit specific cell behaviour), and it was demonstrated that, despite the high voltages applied during electrospinning, the growth factor retains its biological activity once released from the fibres upon contact with cell culture medium. A second functionalization approach, ultimately aimed at controlling cell adhesion on electrospun scaffolds, consisted in covering the fibre surface with highly hydrophilic polymer brushes of glycerol monomethacrylate synthesized by Atom Transfer Radical Polymerization. Future investigations will exploit the hydroxyl groups of the polymer brushes to functionalize the fibre surface with desired biomolecules. Electrospun scaffolds were employed in cell culture experiments performed in collaboration with biochemical laboratories, aimed at evaluating the biocompatibility of the new electrospun polymers and at investigating the effect of fibre orientation on cell behaviour.
Moreover, at a preliminary stage, electrospun scaffolds were also cultured with mammalian tumour cells to develop in vitro tumour models aimed at better understanding the role of the natural ECM in tumour malignancy in vivo.
Resumo:
Recently, an ever-increasing degree of automation has been observed in most industrial automation processes. This increase is motivated by the higher requirement for systems with great performance in terms of quality of the products/services generated, productivity, efficiency and low costs in design, realization and maintenance. This trend in the growth of complex automation systems is rapidly spreading over automated manufacturing systems (AMS), where the integration of mechanical and electronic technology, typical of Mechatronics, is merging with other technologies such as Informatics and communication networks. An AMS is a very complex system that can be thought of as constituted by a set of flexible working stations and one or more transportation systems. To understand how important these machines are in our society, consider that every day most of us use bottles of water or soda and buy boxed products such as food or cigarettes. Another indication of their importance derives from the fact that the consortium of machine producers has estimated around 350 types of manufacturing machines. A large number of manufacturing machine industries are present in Italy, notably the packaging machine industry; in particular, a great concentration of this kind of industry is located in the Bologna area, which for this reason is called the "packaging valley". Usually, the various parts of an AMS interact in a concurrent and asynchronous way, and coordinating the parts of the machine to obtain a desired overall behaviour is a hard task. Often this is the case in large-scale systems, organized in a modular and distributed manner.
Even if the success of a modern AMS from a functional and behavioural point of view is still to be attributed to the design choices made in the definition of the mechanical structure and the electrical/electronic architecture, the system that governs the control of the plant is becoming crucial, because of the large number of duties associated with it. Apart from the activity inherent to the automation of the machine cycles, the supervisory system is called on to perform other main functions, such as: emulating the behaviour of traditional mechanical members, thus allowing a drastic constructive simplification of the machine and a crucial functional flexibility; dynamically adapting the control strategies according to the different productive needs and operational scenarios; obtaining a high quality of the final product through verification of the correctness of the processing; guiding the machine operator to promptly and carefully take the actions needed to establish or restore the optimal operating conditions; and managing diagnostic information in real time, as a support for machine maintenance operations. The kind of facilities that designers can directly find on the market, in terms of software component libraries, in fact provides adequate support for the implementation of either top-level or bottom-level functionalities, typically pertaining to the domains of user-friendly HMIs, closed-loop regulation and motion control, and fieldbus-based interconnection of remote smart devices. What is still lacking is a reference framework comprising a comprehensive set of highly reusable logic control components that, by focusing on the cross-cutting functionalities characterizing the automation domain, may help designers in the process of modelling and structuring their applications according to their specific needs.
Historically, the design and verification process for complex automated industrial systems has been performed in an empirical way, without a clear distinction between functional and technological-implementation concepts and without a systematic method to deal organically with the complete system. In the field of analog and digital control, design and verification through formal and simulation tools have long been adopted, at least for multivariable and/or nonlinear controllers for complex time-driven dynamics, as in the fields of vehicles, aircraft, robots, electric drives and complex power electronics equipment. Moving to the field of logic control, typical of industrial manufacturing automation, the design and verification process is approached in a completely different, usually very "unstructured" way. No clear distinction between functions and implementations, or between functional architectures and technological architectures and platforms, is considered. This difference is probably due to the different "dynamical framework" of logic control with respect to analog/digital control. As a matter of fact, in logic control discrete-event dynamics replace time-driven dynamics; hence most of the formal and mathematical tools of analog/digital control cannot be directly migrated to logic control to highlight the distinction between functions and implementations. In addition, in the common view of application technicians, logic control design is strictly connected to the adopted implementation technology (relays in the past, software nowadays), leading again to deep confusion between the functional view and the technological view. In industrial automation software engineering, concepts such as modularity, encapsulation, composability and reusability are strongly emphasized and profitably realized in so-called object-oriented methodologies.
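The modularity and encapsulation principles mentioned above can be illustrated with a minimal, hypothetical sketch: a reusable logic-control block modelled as a finite-state machine whose internal state is changed only through named events, so that a supervisor can compose several such blocks without knowing their internals. All names (`LogicComponent`, `clamp`, `drill`, the event names) are illustrative and do not come from the thesis.

```python
# Minimal, hypothetical sketch of an encapsulated logic-control component.
# Names are illustrative only, not taken from the thesis.

class LogicComponent:
    """A reusable logic-control block: a finite-state machine whose
    transitions are triggered by named events."""

    def __init__(self, name, initial, transitions):
        # transitions: {(state, event): next_state}
        self.name = name
        self.state = initial
        self._transitions = transitions

    def handle(self, event):
        """Process one event; events not defined in the current state
        are ignored (the internal state is never touched directly)."""
        key = (self.state, event)
        if key in self._transitions:
            self.state = self._transitions[key]
        return self.state


# Composability: a supervisor coordinates independent components
# by routing events to them, without knowing their internals.
clamp = LogicComponent("clamp", "open",
                       {("open", "close_cmd"): "closed",
                        ("closed", "open_cmd"): "open"})
drill = LogicComponent("drill", "idle",
                       {("idle", "start"): "running",
                        ("running", "stop"): "idle"})

# One machine cycle expressed as a sequence of events.
clamp.handle("close_cmd")   # clamp the part first
drill.handle("start")       # then start drilling
drill.handle("stop")
clamp.handle("open_cmd")    # release the part
```

The same `LogicComponent` class can be instantiated for any station of the machine, which is precisely the kind of reuse the object-oriented methodologies cited in the text aim at.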
Industrial automation has lately been adopting this approach, as testified by IEC standards such as IEC 61131-3 and IEC 61499, which have been incorporated in commercial products only recently. On the other hand, many contributions have already been proposed in the scientific and technical literature to establish a suitable modelling framework for industrial automation. In recent years it has been possible to note considerable growth in the exploitation of innovative concepts and technologies from the ICT world in industrial automation systems. As far as logic control design is concerned, Model-Based Design (MBD) is being imported into industrial automation from the software engineering field. Another key point in industrial automated systems is the growth of requirements in terms of availability, reliability and safety for technological systems. In other words, the control system should not only deal with the nominal behaviour but should also handle other important duties, such as diagnosis and fault isolation, recovery and safety management. Indeed, together with high performance, fault occurrences increase in complex systems. This is a consequence of the fact that, as typically occurs in reliable mechatronic systems, complex systems such as AMS contain, together with reliable mechanical elements, an increasing number of electronic devices, which are more vulnerable by their very nature. The diagnosis and fault isolation problem in a generic dynamical system consists in the design of an elaboration unit that, by appropriately processing the inputs and outputs of the dynamical system, is capable of detecting incipient faults on the plant devices and of reconfiguring the control system so as to guarantee satisfactory performance.
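The elaboration unit described above can be pictured, in a strongly simplified form, as an observer that compares the commands sent to the plant with the sensed responses and flags deviations from a nominal model. The following Python sketch is a hypothetical illustration of that idea; the device names, events and the `diagnose` function are invented for the example and are not the thesis's actual diagnoser.

```python
# Hypothetical sketch of a fault-detecting "elaboration unit":
# it processes the plant's inputs (commands) and outputs (sensor readings)
# and flags devices whose response deviates from the nominal model.

# Nominal model: command -> expected sensor reading (illustrative values).
NOMINAL = {
    "valve_open":  "flow_on",
    "valve_close": "flow_off",
    "motor_start": "shaft_turning",
}

def diagnose(command_log, sensor_log):
    """Return the (step, command) pairs where the sensed response
    differs from the nominal model's expectation."""
    faults = []
    for step, (cmd, sensed) in enumerate(zip(command_log, sensor_log)):
        expected = NOMINAL.get(cmd)
        if expected is not None and sensed != expected:
            faults.append((step, cmd))
    return faults

# A stuck valve: the close command does not stop the flow.
commands = ["valve_open", "motor_start", "valve_close"]
sensors  = ["flow_on",    "shaft_turning", "flow_on"]
print(diagnose(commands, sensors))  # -> [(2, 'valve_close')]
```

In a real discrete-event setting the nominal model would be an automaton rather than a lookup table, and the detected fault would trigger the reconfiguration of the control system mentioned in the text.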
The designer should be able to formally verify the product, certifying that, in its final implementation, it will perform its required function while guaranteeing the desired level of reliability and safety; the next step is that of preventing faults and eventually reconfiguring the control system so that faults are tolerated. On this topic, important improvements to formal verification of logic control, fault diagnosis and fault-tolerant control derive from Discrete Event Systems theory. The aim of this work is to define a design pattern and a control architecture to help the designer of control logic in industrial automated systems. The work starts with a brief discussion of the main characteristics and a description of industrial automated systems in Chapter 1. In Chapter 2 a survey of the software engineering paradigms applied to industrial automation is discussed. Chapter 3 presents an architecture for industrial automated systems based on the new concept of the Generalized Actuator, showing its benefits, while in Chapter 4 this architecture is refined using a novel entity, the Generalized Device, in order to achieve better reusability and modularity of the control logic. In Chapter 5 a new approach based on Discrete Event Systems is presented for the problem of software formal verification, together with an active fault-tolerant control architecture using online diagnostics. Finally, concluding remarks and some ideas on new directions to explore are given. Appendix A briefly reports some concepts and results about Discrete Event Systems that should help the reader understand some crucial points in Chapter 5, while Appendix B gives an overview of the experimental testbed of the Laboratory of Automation of the University of Bologna, used to validate the approaches presented in Chapters 3, 4 and 5. Appendix C reports some component models used in Chapter 5 for formal verification.
Resumo:
In the present work, new methods for the synthesis of inorganic materials with novel architectures on the micrometre and nanometre scale are described. The central role of shape control is based on the template-induced deposition of the inorganic materials on self-assembled monolayers. Suitable substrates are gold-coated glass slides and gold colloids, which occupy an intermediate position between the world of atoms and molecules and the macroscopic world of extended solids. On these substrates, thiols can be adsorbed into a monomolecular layer, thereby changing the surface properties of the substrate. A particular focus of this work is the synthesis of thiols tailored to the needs of the respective application. In the first part of the work, gold-coated glass surfaces were used as templates. The deposition of calcium carbonate was investigated as a function of the thickness of the adsorbed monolayer. Aragonite, one of the three main phases of the calcium carbonate system, was deposited under mild conditions on polyaromatic amide surfaces with layer thicknesses of 5-400 nm. The adjustable parameters were the chain length of the polymer, the ω-substituent, the binding to the gold surface via different aminothiols, and the crystallization temperature. The layer thickness of the polymer films was controlled by an automated synthesis cycle. Titanium oxide films could be patterned on surfaces. For this purpose, a specially synthesized thiol was used that combined the functionality of a styrene unit at the surface boundary with the possibility of later removal from the surface.
The PDMS stamping technique produced microstructures on the gold surface in the range of 5 to 10 µm, which in turn could be transferred into the titanium oxide film via polymerization and deposition of the polymer. Three-dimensional structures were obtained via gold colloid templates. Tetraethylene glycol could be monofunctionalized with a thiol group in exchange for a hydroxyl group. The resulting molecule was self-assembled on colloidal gold, yielding a water-soluble gold colloid; the preparation was carried out in a one-phase reaction. The gold colloids obtained in this way were used as crystallization templates for the three-dimensional deposition of calcium carbonate. It was shown that the glycol modifies the crystallization and the habit of the crystal at low pH. At elevated pH (pH = 12), however, the glycol-coated gold colloids act as templates and lead to spherical aggregates. If gold colloids are exposed to long-chain dithiols, aggregation and precipitation of the colloids result, owing to the cross-linking of several gold colloids by the thiol groups of the alkyl dithiols. To avoid this, a dithiol protected on one side was synthesized in this work, with whose help the aggregation could be suppressed. Subsequent deprotection of the thiol function led to gold colloids whose surface was thiol-functionalized. These thiol-active gold colloids served as templates for the deposition of lead sulfide from organic/aqueous solution. The mode of action of the protecting group and the deprotection step could be demonstrated by plasmon resonance spectroscopy. Titanium oxide / gold / polystyrene composites in tube form could also be synthesized; for this purpose, a human hair was chosen as a biological template for shaping.
By coating the hair with gold, assembling a styrene monomer that additionally carried a thiol functionality, polymerizing it on the surface, depositing the titanium oxide film and then dissolving the biological template, a tube structure on the micrometre scale could be produced. In this work, gold colloids served not only as crystallization templates and shaping agents; they themselves were also modified so as to form wire-like agglomerates on the nanometre scale. For this purpose, silicon dioxide templates were used. On the one hand, nanotubes of amorphous SiO2 could be prepared by a sol-gel method; on the other, this work made use of biological silicon oxide hollow needles isolated from marine sponges. Gold colloids were embedded in the hollow structures, and the structure was consolidated by the formation of colloid-thiol networks through the addition of dithiol. The gold nanowires, in the range of 100 to 500 nm, were exposed by dissolving the SiO2 template.