15 results for Systems development

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance:

70.00%

Publisher:

Abstract:

Traditional software engineering approaches and metaphors fall short when applied to areas of growing relevance such as electronic commerce, enterprise resource planning, and mobile computing: such areas, in fact, generally call for open architectures that may evolve dynamically over time so as to accommodate new components and meet new requirements. This is probably one of the main reasons why the agent metaphor and the agent-oriented paradigm are gaining momentum in these areas. This thesis deals with the engineering of complex software systems in terms of the agent paradigm. This paradigm is based on the notions of agent and systems of interacting agents as fundamental abstractions for designing, developing, and managing at runtime typically distributed software systems. However, today the engineer often works with technologies that do not support the abstractions used in the design of the systems. For this reason, research on methodologies becomes a central point of the scientific activity. Currently most agent-oriented methodologies are supported by small teams of academic researchers; as a result, most of them are at an early stage and still belong to the context of mostly "academic" approaches to agent-oriented systems development. Moreover, such methodologies are not well documented, and very often they are defined and presented by focusing only on specific aspects of the methodology. The role played by meta-models becomes fundamental for comparing and evaluating the methodologies: a meta-model specifies the concepts, rules and relationships used to define methodologies. Although it is possible to describe a methodology without an explicit meta-model, formalising the underpinning ideas of the methodology in question is valuable when checking its consistency or planning extensions or modifications. A good meta-model must address all the different aspects of a methodology, i.e. the process to be followed, the work products to be generated, and those responsible for making all this happen. In turn, specifying the work products that must be developed implies defining the basic modelling building blocks from which they are built. As a building block, the agent abstraction alone is not enough to fully model all the aspects related to multi-agent systems in a natural way. In particular, different perspectives exist on the role that the environment plays within agent systems; however, it is clear at least that all non-agent elements of a multi-agent system are typically considered to be part of the multi-agent system environment. The key role of the environment as a first-class abstraction in the engineering of multi-agent systems is today generally acknowledged in the multi-agent systems community, so the environment should be explicitly accounted for in the engineering of multi-agent systems, working as a new design dimension for agent-oriented methodologies. At least two main ingredients shape the environment: environment abstractions (entities of the environment encapsulating some function) and topology abstractions (entities of the environment that represent its logical or physical spatial structure). In addition, the engineering of non-trivial multi-agent systems requires principles and mechanisms for supporting the management of the system representation complexity. These principles lead to the adoption of a multi-layered description, which can be used by designers to provide different levels of abstraction over multi-agent systems.
The research in these fields has led to the formulation of a new version of the SODA methodology, where environment abstractions and layering principles are exploited for engineering multi-agent systems.
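
As a rough illustration of the kind of modelling building blocks such a meta-model pins down, the sketch below encodes agents, environment abstractions, topology abstractions, and layered views as plain Python classes. All class and attribute names are illustrative assumptions for this listing, not the actual SODA meta-model.

```python
# Illustrative sketch (not the actual SODA meta-model): the modelling
# building blocks a meta-model would define, as plain Python classes.
from dataclasses import dataclass, field

@dataclass
class Agent:                      # active, task-oriented abstraction
    name: str
    tasks: list[str] = field(default_factory=list)

@dataclass
class EnvironmentAbstraction:     # environment entity encapsulating a function
    name: str
    function: str                 # e.g. "message routing", "resource access"

@dataclass
class TopologyAbstraction:        # logical or physical spatial structure
    name: str
    kind: str                     # "logical" or "physical"
    connects: list[str] = field(default_factory=list)

@dataclass
class Layer:                      # one level of the multi-layered description
    level: int
    agents: list[Agent] = field(default_factory=list)
    environment: list[EnvironmentAbstraction] = field(default_factory=list)
    topology: list[TopologyAbstraction] = field(default_factory=list)

# A two-layer description of the same system at different abstraction levels.
detailed = Layer(
    level=0,
    agents=[Agent("worker-1", ["collect"]), Agent("worker-2", ["collect"])],
    environment=[EnvironmentAbstraction("blackboard", "shared tuple space")],
    topology=[TopologyAbstraction("room-A", "logical", ["blackboard"])],
)
abstracted = Layer(level=1, agents=[Agent("worker-team", ["collect"])])
```

The two-layer example shows the layering principle at work: the upper layer hides the individual workers and their environment behind a single coarser-grained agent.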

Relevance:

70.00%

Publisher:

Abstract:

The main goal of this thesis is to facilitate the process of industrial automated systems development by applying formal methods to ensure the reliability of systems. A new formulation of the distributed diagnosability problem in terms of Discrete Event Systems theory and the automata framework is presented, which is then used to enforce the desired property of the system, rather than just verifying it. This approach tackles the state explosion problem with modeling patterns and new algorithms, aimed at verifying the diagnosability property in the context of the distributed diagnosability problem. The concepts are validated with a newly developed software tool.
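
The thesis's formulation, algorithms, and tool are not reproduced here; as background, the sketch below shows the classic twin-plant style of diagnosability test that work in this framework typically builds on, with an illustrative automaton encoding in Python (the encoding and the toy plant are assumptions).

```python
# Illustrative sketch (not the thesis's algorithm or tool): the classic
# twin-plant test for diagnosability of a deterministic DES automaton.
# A system is NOT diagnosable iff two runs with identical observations,
# only one of which contains a fault, can be prolonged forever (i.e. a
# cycle of "ambiguous" twin states exists).

class DES:
    def __init__(self, trans, observable, faults):
        self.trans = trans            # (state, event) -> next state
        self.observable = observable  # set of observable events
        self.faults = faults          # set of unobservable fault events

def twin_states(g, init):
    """Explore pairs of runs that share the same observation sequence."""
    start = ((init, False), (init, False))     # (state, fault-seen?) twice
    seen, stack, edges = {start}, [start], {}
    def push(v, w):
        edges.setdefault(v, []).append(w)
        if w not in seen:
            seen.add(w); stack.append(w)
    while stack:
        v = stack.pop()
        (s1, f1), (s2, f2) = v
        for (s, e), t in g.trans.items():
            if e in g.observable:
                # observable event: both copies must fire it together
                if s == s1 and (s2, e) in g.trans:
                    push(v, ((t, f1), (g.trans[(s2, e)], f2)))
            else:
                # unobservable event: each copy may fire it on its own
                if s == s1:
                    push(v, ((t, f1 or e in g.faults), (s2, f2)))
                if s == s2:
                    push(v, ((s1, f1), (t, f2 or e in g.faults)))
    return edges

def is_diagnosable(g, init):
    edges = twin_states(g, init)
    ambiguous = {v for v in edges if v[0][1] != v[1][1]}
    def cycle_from(v, cur, visited):          # DFS through ambiguous states
        for w in edges.get(cur, []):
            if w == v:
                return True
            if w in ambiguous and w not in visited:
                if cycle_from(v, w, visited | {w}):
                    return True
        return False
    return not any(cycle_from(v, v, {v}) for v in ambiguous)

# Toy plant: fault 'f' leads to a loop indistinguishable from the normal one.
g = DES(trans={("q0", "f"): "q1", ("q0", "a"): "q2",
               ("q1", "a"): "q1", ("q2", "a"): "q2"},
        observable={"a"}, faults={"f"})
print(is_diagnosable(g, "q0"))   # False: the fault can stay hidden forever
```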

Relevance:

40.00%

Publisher:

Abstract:

In the last years of my research, I focused my studies on different physiological problems. Together with my supervisors, I developed and improved different mathematical models in order to create valid tools for a better understanding of important clinical issues. The aim of all this work is to develop tools for learning and understanding cardiac and cerebrovascular physiology as well as pathology, generating research questions and developing clinical decision support systems useful for intensive care unit patients.

I. ICP Model Designed for Medical Education

We developed a comprehensive cerebral blood flow and intracranial pressure model to simulate and study the complex interactions in cerebrovascular dynamics caused by multiple simultaneous alterations, including normal and abnormal functional states of autoregulation of the brain. Individual published equations (derived from prior animal and human studies) were implemented into a comprehensive simulation program. The normal physiological modelling included intracranial pressure, cerebral blood flow, blood pressure, and carbon dioxide (CO2) partial pressure. We also added external and pathological perturbations, such as head-up position and intracranial haemorrhage. The model performed in a clinically realistic way when given inputs from published traumatized-patient data and from cases encountered by clinicians. The pulsatile nature of the output graphics was easy for clinicians to interpret. The manoeuvres simulated include changes of basic physiological inputs (e.g. blood pressure, central venous pressure, CO2 tension, head-up position, and respiratory effects on vascular pressures) as well as pathological inputs (e.g. acute intracranial bleeding, and obstruction of cerebrospinal outflow). Based on the results, we believe the model would be useful to teach the complex relationships of brain haemodynamics and to study clinical research questions such as the optimal head-up position, the effects of intracranial haemorrhage on cerebral haemodynamics, and the best CO2 concentration to reach the optimal compromise between intracranial pressure and perfusion. We believe this model would be useful for both beginners and advanced learners. It could be used by practicing clinicians to model individual patients (entering the effects of needed clinical manipulations, and then running the model to test for optimal combinations of therapeutic manoeuvres).

II. A Heterogeneous Cerebrovascular Mathematical Model

Cerebrovascular pathologies are extremely complex, due to the multitude of factors acting simultaneously on cerebral haemodynamics. In this work, the mathematical model of cerebral haemodynamics and intracranial pressure dynamics described in point I is extended to account for heterogeneity in cerebral blood flow. The model includes the Circle of Willis, six regional districts independently regulated by autoregulation and CO2 reactivity, distal cortical anastomoses, venous circulation, the cerebrospinal fluid circulation, and the intracranial pressure-volume relationship. Results agree with data in the literature and highlight the existence of a monotonic relationship between the transient hyperaemic response and the autoregulation gain. During unilateral internal carotid artery stenosis, local blood flow regulation is progressively lost in the ipsilateral territory with the presence of a steal phenomenon, while the anterior communicating artery plays the major role in redistributing the available blood flow. Conversely, distal collateral circulation plays a major role during unilateral occlusion of the middle cerebral artery. In conclusion, the model is able to reproduce several different pathological conditions characterized by heterogeneity in cerebrovascular haemodynamics; it can not only explain generalized results in terms of the physiological mechanisms involved, but also, by individualizing parameters, represent a valuable tool to help with difficult clinical decisions.

III. Effect of the Cushing Response on Systemic Arterial Pressure

During cerebral hypoxic conditions, the sympathetic system causes an increase in arterial pressure (Cushing response), creating a link between the cerebral and the systemic circulation. This work investigates the complex relationships among cerebrovascular dynamics, intracranial pressure, the Cushing response, and short-term systemic regulation during plateau waves, by means of an original mathematical model. The model incorporates the pulsating heart, the pulmonary circulation and the systemic circulation, with an accurate description of the cerebral circulation and the intracranial pressure dynamics (the same model as in point I). Various regulatory mechanisms are included: cerebral autoregulation, local blood flow control by oxygen (O2) and/or CO2 changes, and sympathetic and vagal regulation of cardiovascular parameters by several reflex mechanisms (chemoreceptors, lung-stretch receptors, baroreceptors). The Cushing response has been described by assuming a dramatic increase in sympathetic activity to vessels during a fall in brain O2 delivery. With this assumption, the model is able to simulate the cardiovascular effects experimentally observed when intracranial pressure is artificially elevated and maintained at a constant level (arterial pressure increase and bradycardia). According to the model, these effects arise from the interaction between the Cushing response and the baroreflex response (secondary to the arterial pressure increase). Then, patients with severe head injury were simulated by reducing intracranial compliance and cerebrospinal fluid reabsorption. With these changes, oscillations with plateau waves developed. In these conditions, model results indicate that the Cushing response may have both positive effects, reducing the duration of the plateau phase via an increase in cerebral perfusion pressure, and negative effects, increasing the intracranial pressure plateau level, with a risk of greater compression of the cerebral vessels. This model may be of value to assist clinicians in finding the balance between the clinical benefits of the Cushing response and its shortcomings.

IV. Comprehensive Cardiopulmonary Simulation Model for the Analysis of Hypercapnic Respiratory Failure

We developed a new comprehensive cardiopulmonary model that takes into account the mutual interactions between the cardiovascular and the respiratory systems along with their short-term regulatory mechanisms. The model includes the heart, systemic and pulmonary circulations, lung mechanics, gas exchange and transport equations, and cardio-ventilatory control. Results show good agreement with published patient data for normoxic and hyperoxic hypercapnia simulations. In particular, simulations predict a moderate increase in mean systemic arterial pressure and heart rate, with almost no change in cardiac output, paralleled by a relevant increase in minute ventilation, tidal volume and respiratory rate.
The model can represent a valid tool for clinical practice and medical research, providing an alternative to purely experience-based clinical decisions. In conclusion, models are not only capable of summarizing current knowledge, but also of identifying missing knowledge. In the former case they can serve as training aids for teaching the operation of complex systems, especially if the model can be used to demonstrate the outcome of experiments. In the latter case they generate experiments to be performed to gather the missing data.
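
None of the thesis models is reproduced here; as a flavour of the lumped-parameter style such intracranial models share, below is a minimal sketch of a classic single-compartment CSF model with a Marmarou-style exponential pressure-volume curve. All parameter values are illustrative assumptions.

```python
# Minimal sketch (illustrative assumption, not the thesis model):
# single-compartment CSF dynamics with exponential pressure-volume curve,
#   C(P) * dP/dt = I_inflow - (P - P_ss) / R_out,   C(P) = 1 / (kE * P)
import numpy as np

kE = 0.11        # elastance coefficient [1/mL]
R_out = 8.0      # CSF outflow resistance [mmHg*min/mL]
P_ss = 9.0       # sagittal sinus pressure [mmHg]
I_form = 0.35    # CSF formation rate [mL/min]

def simulate(P0=10.0, I_inject=0.0, t_end=30.0, dt=0.01):
    """Integrate ICP over t_end minutes; I_inject mimics an infusion test."""
    t = np.arange(0.0, t_end, dt)
    P = np.empty_like(t); P[0] = P0
    for i in range(1, len(t)):
        inflow = I_form + I_inject
        outflow = (P[i-1] - P_ss) / R_out
        dPdt = kE * P[i-1] * (inflow - outflow)   # dP/dt = (in - out) / C(P)
        P[i] = P[i-1] + dt * dPdt                 # explicit Euler step
    return t, P

# e.g. a 1 mL/min mock infusion raises ICP to a new plateau:
t, P = simulate(I_inject=1.0)
print(f"ICP plateau ~ {P[-1]:.1f} mmHg")   # P_ss + R_out * inflow = 19.8
```

The mock infusion shows the characteristic behaviour: inflow above the absorption capacity drives ICP to a new, higher plateau set by the outflow resistance.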

Relevance:

40.00%

Publisher:

Abstract:

Synthetic biology is a young field of applicative research aiming to design and build artificial biological devices useful for human applications. Chapter 1 of the thesis presents how synthetic biology emerged in the past years and how the development of the Registry of Standard Biological Parts introduced one practical starting solution for applying the basics of engineering to molecular biology. The same chapter recalls how biological parts can make up a genetic program, the molecular cloning technique useful for this purpose, and an overview of the mathematical modeling adopted to describe gene circuit behavior. Although the design of gene circuits has become feasible, the increasing complexity of gene networks calls for a rational approach to their design. A bottom-up approach was proposed, suggesting that the behavior of a complicated system can be predicted from the features of its parts. The option to use modular parts in large-scale networks will be facilitated by a detailed and shared characterization of their functional properties. Such a prediction requires well-characterized mathematical models of the parts and of how they behave when assembled together. In chapter 2, the feasibility of the bottom-up approach in the design of a synthetic program in Escherichia coli bacterial cells is described. The rational design of gene networks is, however, far from being established. The synthetic biology approach can use mathematical formalism to identify biological information not assessable with experimental measurements. In this context, chapter 3 describes the design of a synthetic sensor for identifying molecules of interest inside eukaryotic cells. The Registry of Standard Biological Parts collects standard and modular biological parts. To spread the use of BioBricks, the iGEM competition was started. The ICM Laboratory, where Francesca Ceroni completed her Ph.D., participated with teams of students, and chapter 4 summarizes the projects developed.
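
As an illustration of the kind of mathematical modeling referred to above, the sketch below integrates a single Hill-type ODE for a repressor-controlled promoter; the rate constants and concentrations are made-up values, not characterized parts from the Registry.

```python
# Minimal sketch of the ODE style used to describe gene circuit behaviour:
# a repressor-controlled promoter with Hill kinetics (illustrative values).
import numpy as np
from scipy.integrate import odeint

alpha = 10.0   # maximal transcription rate [a.u./min]
K = 40.0       # repression threshold [a.u.]
n = 2.0        # Hill coefficient (cooperativity)
delta = 0.1    # protein degradation/dilution rate [1/min]

def circuit(y, t, repressor):
    """d[protein]/dt = alpha / (1 + (R/K)^n) - delta * protein"""
    protein = y[0]
    production = alpha / (1.0 + (repressor / K) ** n)
    return [production - delta * protein]

t = np.linspace(0, 120, 500)                          # two hours, minutes
low = odeint(circuit, [0.0], t, args=(5.0,))[:, 0]    # weak repression
high = odeint(circuit, [0.0], t, args=(200.0,))[:, 0] # strong repression
print(f"steady state: {low[-1]:.1f} (low R) vs {high[-1]:.1f} (high R)")
```

Composing several such equations, one per part, is the essence of the bottom-up prediction described above.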

Relevance:

40.00%

Publisher:

Abstract:

Cost, performance and availability considerations are forcing even the most conservative high-integrity embedded real-time systems industry to migrate from simple hardware processors to ones equipped with caches and other acceleration features. This migration disrupts the practices and solutions that industry had developed and consolidated over the years to perform timing analysis. Industries that are confident in the efficiency and effectiveness of their verification and validation processes for old-generation processors do not have sufficient insight into the effects of the migration to cache-equipped processors. Caches are perceived as an additional source of complexity, with the potential for shattering the guarantees of cost- and schedule-constrained qualification of their systems. The current industrial approach to timing analysis is ill-equipped to cope with the variability incurred by caches. Conversely, the application of advanced WCET analysis techniques to real-world industrial software, developed without analysability in mind, is hardly feasible. We propose a development approach aimed at minimising cache jitter, as well as at enabling the application of advanced WCET analysis techniques to industrial systems. Our approach builds on: (i) the identification of those software constructs that may impede or complicate timing analysis in industrial-scale systems; (ii) the elaboration of practical means, under the model-driven engineering (MDE) paradigm, to enforce the automated generation of software that is analysable by construction; (iii) the implementation of a layout optimisation method to remove cache jitter stemming from the software layout in memory, with the intent of facilitating incremental software development, which is of high strategic interest to industry. The integration of those constituents in a structured approach to timing analysis achieves two interesting properties: the resulting software is analysable from the earliest releases onwards - as opposed to becoming so only when the system is final - and more easily amenable to advanced timing analysis by construction, regardless of the system scale and complexity.
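
The thesis's layout optimisation method is not shown here; as a toy illustration of the underlying idea, the sketch below greedily places functions in memory so that call-graph neighbours never share direct-mapped cache sets (the cache geometry, sizes, and call relations are all assumptions).

```python
# Toy sketch of conflict-aware code layout (an illustration of the idea,
# not the thesis's method): place frequently-interacting functions so
# they never map to the same sets of a direct-mapped instruction cache.
CACHE_SIZE = 16 * 1024      # bytes (assumed geometry)
LINE = 64                   # bytes per cache line
SETS = CACHE_SIZE // LINE   # direct-mapped: set = (addr // LINE) % SETS

def sets_of(base, size):
    """Cache sets occupied by a function placed at address 'base'."""
    return {(a // LINE) % SETS for a in range(base, base + size, LINE)}

def place(functions):
    """Greedy first-fit: slide each function forward until it shares no
    cache set with any of its already-placed callers."""
    layout, addr = {}, 0
    for name, size, callers in functions:
        base = addr
        while any(sets_of(base, size) & sets_of(*layout[c])
                  for c in callers if c in layout):
            base += LINE                     # slide until conflict-free
        layout[name] = (base, size)
        addr = max(addr, base + size)
    return layout

# function sizes and caller lists are assumptions for the example
funcs = [("control_loop", 4096, []),
         ("sensor_read", 2048, ["control_loop"]),
         ("actuator_cmd", 2048, ["control_loop"])]
for name, (base, size) in place(funcs).items():
    print(f"{name:12s} @ {base:#07x} ({size} B)")
```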

Relevance:

40.00%

Publisher:

Abstract:

This PhD work aimed to design, develop, and characterize gelatin-based scaffolds for the repair of defects in the musculoskeletal system. Gelatin is a biopolymer widely used for pharmaceutical and medical applications, thanks to its biodegradability and biocompatibility. It is obtained from collagen via thermal denaturation or chemical-physical degradation. Despite its high potential as a biomaterial, gelatin exhibits poor mechanical properties and low resistance in aqueous environments. Crosslinking treatment and enrichment with reinforcement materials are thus required for biomedical applications. In this work, gelatin-based scaffolds were prepared following three different strategies: films were prepared through the solvent casting method, the electrospinning technique was applied for the preparation of porous mats, and 3D porous scaffolds were prepared through freeze-drying. The results obtained on films highlighted the influence of pH, crosslinking, and reinforcement with montmorillonite (MMT) on the structure, stability and mechanical properties of gelatin and MMT/gelatin composites. The information acquired on the effect of crosslinking in different conditions was used to optimize the preparation procedure of the electrospun and freeze-dried scaffolds. A successful method was developed to prepare gelatin nanofibrous scaffolds electrospun from acetic acid/water solution and stabilized with a non-toxic crosslinking agent, genipin, able to preserve their original morphology after exposure to water. Moreover, the co-electrospinning technique was used to prepare nanofibrous scaffolds with variable content of gelatin and polylactic acid. Preliminary in vitro tests indicated that the scaffolds are suitable for cartilage tissue engineering, and that their potential applications can be extended to cartilage-bone interface tissue engineering. Finally, 3D porous gelatin scaffolds, enriched with calcium phosphate, were prepared with the freeze-drying method. The results indicated that the crystallinity of the inorganic phase influences porosity, interconnectivity and mechanical properties. Preliminary in vitro tests showed good osteoblast response in terms of proliferation and adhesion on all the scaffolds.

Relevance:

40.00%

Publisher:

Abstract:

A control-oriented model of a Dual Clutch Transmission (DCT) was developed for real-time Hardware In the Loop (HIL) applications, to support model-based development of the DCT controller. The model is an innovative attempt to reproduce the fast dynamics of the actuation system while maintaining a step size large enough for real-time applications. The model comprises a detailed physical description of the hydraulic circuit, clutches, synchronizers and gears, and simplified vehicle and internal combustion engine sub-models. As the oil circulating in the system has a large bulk modulus, the pressure dynamics are very fast, possibly causing instability in a real-time simulation; the same challenge affects the servo valve dynamics, due to the very small masses of the moving elements. Therefore, the hydraulic circuit model has been modified and simplified without losing physical validity, in order to adapt it to the real-time simulation requirements. The results of offline simulations were compared with on-board measurements to verify the validity of the developed model, which was then implemented in a HIL system and connected to the TCU (Transmission Control Unit). Several tests have been performed: electrical failure tests on sensors and actuators; hydraulic and mechanical failure tests on hydraulic valves, clutches and synchronizers; and application tests covering all the main features of the control performed by the TCU. Being based on physical laws, the model simulates a plausible reaction of the system in every condition. The first intensive use of the HIL application led to the validation of the new safety strategies implemented inside the TCU software. A test automation procedure has been developed to permit the execution of a pattern of tests without user interaction; fully repeatable tests can be performed for non-regression verification, allowing the testing of new software releases in fully automatic mode.
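
As a minimal illustration of the stiffness problem described above (not the thesis model), consider a single hydraulic chamber: with a realistic bulk modulus, the pressure time constant falls far below a typical HIL step size, so an explicit integrator would be unstable while an implicit step remains stable. All numbers are illustrative.

```python
# Sketch of why a large oil bulk modulus makes hydraulic pressure dynamics
# stiff in real-time simulation (illustrative numbers, not the thesis model).
# Chamber pressure ODE:  dp/dt = (beta / V) * (Q_in - k_leak * p)
beta = 1.5e9      # oil bulk modulus [Pa]
V = 2.0e-6        # chamber volume [m^3]
k_leak = 1.0e-11  # laminar leakage coefficient [m^3/(s*Pa)]
Q_in = 1.0e-5     # supply flow [m^3/s]

# Time constant tau = V / (beta * k_leak): explicit Euler is stable only
# for dt < 2*tau, far below a typical 1 ms HIL step size.
tau = V / (beta * k_leak)
print(f"tau = {tau*1e6:.0f} us -> explicit Euler needs dt < {2*tau*1e6:.0f} us")

def step_implicit(p, dt):
    """Backward Euler: solve p_next = p + dt*(beta/V)*(Q_in - k_leak*p_next)."""
    a = dt * beta / V
    return (p + a * Q_in) / (1.0 + a * k_leak)

p = 0.0
for _ in range(200):              # 200 ms of simulated time at dt = 1 ms
    p = step_implicit(p, 1e-3)    # stable despite dt >> tau
print(f"steady pressure ~ {p/1e5:.0f} bar")   # Q_in / k_leak = 10 bar
```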

Relevance:

40.00%

Publisher:

Abstract:

With the increasing importance that nanotechnologies have in everyday life, it is not difficult to realize that even a single molecule, if properly designed, can be a device able to perform useful functions: such a chemical species is called a chemosensor, that is, a molecule of abiotic origin that signals the presence of matter or energy. Signal transduction is the mechanism by which an interaction of a sensor with an analyte yields a measurable form of energy. When dealing with the design of a chemosensor, we need to take into account a "communication requirement" between its three components: the receptor unit, responsible for the selective analyte binding; the spacer, which controls the geometry of the system and modulates the electronic interaction between the receptor and the signalling unit; and the signalling unit itself, whose physico-chemical properties change upon complexation. A luminescent chemosensor communicates a variation of the physico-chemical properties of the receptor unit with a luminescence output signal. This thesis work consists of the characterization of new molecular and nanoparticle-based systems which can be used as sensitive materials for the construction of new optical transduction devices able to provide information about the concentration of analytes in solution. In particular, two directions were taken. The first is to continue the development of new chemosensors, which is the first step for the construction of reliable and efficient devices; in particular, the work focuses on chemosensors for metal ions for biomedical and environmental applications. The second is to study more efficient and complex organized systems, such as derivatized silica nanoparticles. These systems can potentially have higher sensitivity than molecular systems, and present many advantages, such as the possibility of ratiometric measurement, larger Stokes shifts, and a higher signal-to-noise ratio.
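
As a worked illustration of how complexation translates into a luminescence output, the sketch below evaluates the standard 1:1 binding isotherm; the association constant and emission levels are made-up numbers, not measured values from this work.

```python
# Sketch of the 1:1 binding model underlying a luminescent chemosensor's
# response (illustrative assumptions: Ka and the bound/free emission
# intensities are made-up numbers, not measured values).
Ka = 1.0e5        # association constant [1/M]
I_free = 1.0      # emission of the free sensor [a.u.]
I_bound = 8.0     # emission of the sensor-analyte complex [a.u.]

def fraction_bound(analyte_M):
    """x = Ka*[A] / (1 + Ka*[A]) for 1:1 complexation with analyte excess."""
    return Ka * analyte_M / (1.0 + Ka * analyte_M)

def signal(analyte_M):
    """Observed emission: linear mix of free and bound sensor populations."""
    x = fraction_bound(analyte_M)
    return (1 - x) * I_free + x * I_bound

for conc in (1e-7, 1e-6, 1e-5, 1e-4):
    print(f"[A] = {conc:.0e} M -> I = {signal(conc):.2f} a.u.")
```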

Relevance:

40.00%

Publisher:

Abstract:

The aim of this Ph.D. research project was to explore Dual Fuel combustion and hybridization. Natural gas-diesel Dual Fuel combustion was experimentally investigated on a 4-stroke, 2.8 L, turbocharged, light-duty Diesel engine, considering four operating points in the range from low to medium-high loads at 3000 rpm. Then, a numerical analysis was carried out using a customized version of the KIVA-3V code, in order to optimize the diesel injection strategy at the highest investigated load. A second KIVA-3V model was used to analyse the interchangeability between natural gas and biogas at an intermediate operating point. Since natural gas-diesel Dual Fuel combustion suffers from poor combustion efficiency at low loads, the effects of hydrogen-enriched natural gas on Dual Fuel combustion were investigated using a validated Ansys Forte model, followed by an optimization of the diesel injection strategy and a sensitivity analysis to the swirl ratio at the lowest investigated load. Since one of the main issues of Low Temperature Combustion engines is their low power density, 2-stroke engines, thanks to their doubled firing frequency compared to 4-stroke engines, may be more suitable to operate in Dual Fuel mode. Therefore, the application of gasoline-diesel Dual Fuel combustion to a modern 2-stroke Diesel engine was analysed, starting from the investigation of gasoline injection and mixture formation. As far as hybridization is concerned, a MATLAB-Simulink model was built to compare a conventional (combustion) and a parallel-hybrid powertrain applied to a Formula SAE race car.
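
For orientation, a commonly used Dual Fuel metric (assumed here as a standard definition, not quoted from the thesis) is the energy-based substitution ratio between the premixed natural gas and the pilot diesel:

```python
# Sketch of a standard Dual Fuel metric: the energy-based substitution
# ratio (a common definition assumed here, not quoted from the thesis).
LHV_NG = 48.0      # lower heating value of natural gas [MJ/kg]
LHV_DIESEL = 42.8  # lower heating value of diesel [MJ/kg]

def substitution_ratio(m_ng_mg, m_diesel_mg):
    """Fraction of per-cycle fuel energy supplied by natural gas."""
    e_ng = m_ng_mg * LHV_NG
    e_d = m_diesel_mg * LHV_DIESEL
    return e_ng / (e_ng + e_d)

# e.g. 40 mg of NG ignited by a 10 mg diesel pilot (illustrative masses):
print(f"substitution ratio = {substitution_ratio(40.0, 10.0):.1%}")
```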

Relevance:

40.00%

Publisher:

Abstract:

Aedes albopictus is a vector able to transmit several arboviruses. Due to its high impact on human health, it is important to develop an efficient control strategy for this pest. Nowadays, control based on chemical insecticides is limited by the number of available active ingredients and the occurrence of resistance. A valuable alternative to conventional control strategies is the sterile insect technique (SIT), which relies on releasing sterile males of the target insect. Mating between wild females and sterile males results in no viable offspring. A crucial aspect of SIT is the production of a large number of sterile males with a low presence of females, which can bite and transmit viruses. The present thesis aimed to find, implement and study the most reliable mechanical sex sorter and protocol to improve male productivity and reduce female contamination. In addition, I evaluated different variables and sorting protocols to enable female recovery for breeding purposes. Furthermore, I studied the creation of a hyper-protandric strain potentially able to produce only males. I also assessed the integration of artificial intelligence with an optical unit to identify sexes at the adult stage. All these applications helped to realise a mass production model in Italy with a potential weekly production of 1 million males. Moreover, I studied and applied aerial sterile male release in an urban environment. This technology could allow the release of males over a wide area, overcoming environmental and urban obstacles. However, the development and application of drone technologies in a metropolitan area close to airports, such as the Bologna area, must meet specific requirements. Lastly, during a Short Term Scientific Mission (AIM-COST Action) at Réunion Island (France, Indian Ocean), I studied the Boosted SIT application. Coating sterile males with pyriproxyfen may help spread the insecticide into the larval breeding sites.

Relevance:

40.00%

Publisher:

Abstract:

Legionella is a Gram-negative bacterium that represents a public health issue, with a heavy social and economic impact. It is therefore mandatory to provide a proper environmental surveillance and risk assessment plan to control Legionella in the water distribution systems of hospital and community buildings. The thesis combines several methodologies in a single workflow applied to the identification of non-pneumophila Legionella species (n-pL), starting from standard methods such as culture and gene sequencing (mip and rpoB), and passing through innovative approaches such as the MALDI-TOF MS technique and whole genome sequencing (WGS). The results obtained were compared to identify the Legionella isolates, and led to the identification of four presumptive novel Legionella species. One of these four new isolates was characterized and recognized at the taxonomic level with the name Legionella bononiensis (the 64th Legionella species). The workflow applied in this thesis helps to increase the knowledge of environmental Legionella species, improving the description of the environment itself and of the events that promote the growth of Legionella in its ecological niche. The correct identification and characterization of the isolates make it possible to prevent their spread in man-made environments and to contain the occurrence of cases, clusters, or outbreaks. Therefore, the experimental work undertaken could support preventive measures during environmental and clinical surveillance, improving the study of species that are often underestimated or still unknown.

Relevance:

40.00%

Publisher:

Abstract:

This thesis describes the development of the Sample Fetch Rover (SFR), studied for Mars Sample Return (MSR), an international campaign carried out in cooperation between the National Aeronautics and Space Administration (NASA) and the European Space Agency (ESA). The focus of this document is the design of the electro-mechanical systems of the rover. After placing this work in the general context of robotic planetary exploration and summarising the state of the art of Mars rovers, the architecture of the Mars Sample Return campaign is presented. A complete overview of the current SFR architecture is provided, touching upon all the main subsystems of the spacecraft. For each area, the design drivers and the chosen solutions are discussed, along with whether they rely on heritage technology (in particular from the ExoMars Rover) or on new developments. This research focuses on two topics of particular interest, due to their relevance for the mission and the novelty of their design: locomotion and sample acquisition, which are discussed in depth. The early SFR locomotion concepts are summarised, covering the initial trade-offs and the designs discarded in pursuit of higher traverse performance. Once a consolidated architecture was reached, the locomotion subsystem was developed further, defining the details of the suspension, actuators, deployment mechanisms and wheels. This technology is presented here in detail, including some key analysis and test results that support the design and demonstrate how it responds to the mission requirements. Another major electro-mechanical system developed as part of this work is the one dedicated to sample tube acquisition. The concept of operations of this machinery was defined to be robust against the unknown conditions that characterise the mission. The design process led to a highly automated robotic system, which is described here in its main components: the vision system, the robotic arm and the tube storage.

Relevance:

40.00%

Publisher:

Abstract:

In pursuit of the European Union's ambitious target of achieving a carbon-neutral economy by 2050, researchers, vehicle manufacturers, and original equipment manufacturers have been at the forefront of exploring cutting-edge technologies for internal combustion engines. The introduction of these technologies has significantly increased the effort required to calibrate the models implemented in engine control units. Consequently, the development of tools that reduce the costs and time required during the experimental phases has become imperative. Additionally, to comply with ever-stricter limits on CO2 emissions, it is crucial to develop advanced control systems that enhance traditional engine management systems in order to reduce fuel consumption. Furthermore, the introduction of new homologation cycles, such as the real driving emissions cycle, compels manufacturers to bridge the gap between engine operation in laboratory tests and real-world conditions. Within this context, this thesis showcases the performance and cost benefits achievable through the implementation of an auto-adaptive closed-loop control system, leveraging in-cylinder pressure sensors in a heavy-duty diesel engine designed for mining applications. Additionally, the thesis explores the promising prospect of real-time self-adaptive machine learning models, particularly neural networks, for developing an automatic system that uses in-cylinder pressure sensors for the precise calibration of the target combustion phase and optimal spark advance in spark-ignition engines. To facilitate the application of these combustion-feedback-based algorithms in production, the thesis discusses the results obtained from the development of a cost-effective sensor for indirect cylinder pressure measurement. Finally, to ensure the quality control of the proposed affordable sensor, the thesis provides a comprehensive account of the design and validation process for a piezoelectric washer test system.
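
As a sketch of what such a combustion-feedback loop can look like (an illustrative PI scheme under assumed gains and targets, not the thesis's controller), the snippet below trims spark advance cycle by cycle to hold the 50% mass-fraction-burned angle (CA50) on target:

```python
# Illustrative closed-loop combustion-phasing sketch (assumed gains and
# targets, not the thesis's controller): regulate CA50, estimated from
# in-cylinder pressure, by trimming spark advance every cycle.
CA50_TARGET = 8.0   # target 50% mass-fraction-burned angle [deg aTDC]
KP, KI = 0.3, 0.05  # illustrative PI gains [deg SA per deg of CA50 error]

class SparkPhasingLoop:
    def __init__(self, base_sa=20.0):
        self.sa = base_sa            # spark advance [deg bTDC]
        self.integral = 0.0

    def update(self, ca50_measured):
        """Called once per cycle with CA50 estimated from cylinder pressure."""
        error = ca50_measured - CA50_TARGET    # late combustion -> positive
        self.integral += error
        self.sa += KP * error + KI * self.integral   # advance spark if late
        self.sa = min(max(self.sa, 0.0), 50.0)       # actuator limits
        return self.sa

# mock cycles: combustion starts 4 deg too late and drifts back on target
loop = SparkPhasingLoop()
for ca50 in (12.0, 11.0, 9.5, 8.5, 8.1):
    print(f"CA50 = {ca50:4.1f} deg -> spark advance = {loop.update(ca50):.1f} deg")
```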