21 results for Integrated Hydropyrolysis and Hydroconversion process
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
This thesis presents the outcomes of my Ph.D. course in telecommunications engineering. The focus of my research has been on Global Navigation Satellite Systems (GNSS) and, in particular, on the design of aiding schemes operating at both the position and physical levels, and on the evaluation of their feasibility and advantages. Assistance techniques at the position level are considered to enhance receiver availability in challenging scenarios where satellite visibility is limited. Novel positioning techniques relying on peer-to-peer interaction and the exchange of information are thus introduced. More specifically, two different techniques are proposed: the Pseudorange Sharing Algorithm (PSA), based on the exchange of GNSS data, which makes it possible to obtain a coarse position where the user has scarce satellite visibility, and the Hybrid approach, which also improves the accuracy of the positioning solution. At the physical level, aiding schemes are investigated to improve the receiver's ability to synchronize with satellite signals. An innovative code acquisition strategy for dual-band receivers, the Cross-Band Aiding (CBA) technique, is introduced to speed up initial synchronization by exploiting the exchange of time references between the two bands. In addition, vector configurations for code tracking are analyzed and their feedback generation process thoroughly investigated.
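The position-level techniques above ultimately feed a standard pseudorange solve. As background, here is a minimal sketch of the classic single-epoch least-squares fix (four unknowns: three receiver coordinates plus a clock bias in metres), solved by Gauss-Newton iteration. This is not the thesis's PSA or Hybrid algorithm; the function name and the satellite geometry are illustrative assumptions.

```python
import math

def solve_fix(sats, pseudoranges, x0=(0.0, 0.0, 0.0, 0.0), iters=8):
    """Gauss-Newton solve for (x, y, z, clock bias [m]) from >= 4 pseudoranges."""
    x = list(x0)
    for _ in range(iters):
        H, r = [], []
        for (sx, sy, sz), pr in zip(sats, pseudoranges):
            dx, dy, dz = x[0] - sx, x[1] - sy, x[2] - sz
            rho = math.sqrt(dx * dx + dy * dy + dz * dz)
            H.append([dx / rho, dy / rho, dz / rho, 1.0])  # Jacobian row
            r.append(pr - (rho + x[3]))                    # prefit residual
        # Normal equations (H^T H) d = H^T r, solved by Gaussian elimination.
        A = [[sum(h[i] * h[j] for h in H) for j in range(4)] for i in range(4)]
        b = [sum(h[i] * ri for h, ri in zip(H, r)) for i in range(4)]
        for c in range(4):
            p = max(range(c, 4), key=lambda k: abs(A[k][c]))  # partial pivot
            A[c], A[p], b[c], b[p] = A[p], A[c], b[p], b[c]
            for k in range(c + 1, 4):
                f = A[k][c] / A[c][c]
                for j in range(c, 4):
                    A[k][j] -= f * A[c][j]
                b[k] -= f * b[c]
        d = [0.0] * 4
        for i in range(3, -1, -1):  # back substitution
            d[i] = (b[i] - sum(A[i][j] * d[j] for j in range(i + 1, 4))) / A[i][i]
        x = [xi + di for xi, di in zip(x, d)]
    return x
```

A peer-to-peer scheme like the PSA can be pictured as enlarging the `sats`/`pseudoranges` lists with measurements relayed by nearby peers when fewer than four satellites are directly visible.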
Abstract:
This study is focused on radio-frequency inductively coupled thermal plasma (ICP) synthesis of nanoparticles, combining experimental and modelling approaches towards process optimization and industrial scale-up, in the framework of the FP7-NMP SIMBA European project (Scaling-up of ICP technology for continuous production of Metallic nanopowders for Battery Applications). First, the state of the art of nanoparticle production through conventional and plasma routes is summarized; then, results on the characterization of the plasma source and the investigation of the nanoparticle synthesis phenomenon are presented, aiming at highlighting fundamental process parameters while adopting a design-oriented modelling approach. In particular, an energy balance of the torch and of the reaction chamber, employing a calorimetric method, is presented, while results of three- and two-dimensional modelling of an ICP system are compared with calorimetric and enthalpy probe measurements to validate the temperature field predicted by the model and used to characterize the ICP system under powder-free conditions. Moreover, results from the modelling of critical phases of the ICP synthesis process, such as precursor evaporation, vapour conversion into nanoparticles and nanoparticle growth, are presented, with the aim of providing useful insights both for the design and optimization of the process and into the underlying physical phenomena. Indeed, precursor evaporation, one of the phases with the highest impact on the industrial feasibility of the process, is discussed; by employing models describing particle trajectories and thermal histories, adapted from those originally developed for other plasma technologies or applications, such as DC non-transferred arc torches and powder spheroidization, the evaporation of a micro-sized solid Si precursor in a laboratory-scale ICP system is investigated.
Finally, a discussion of the role of thermo-fluid dynamic fields in nanoparticle formation is presented, as well as a study of the effect of the reaction chamber geometry on the characteristics of the produced nanoparticles and on process yield.
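The calorimetric energy balance mentioned above reduces, to first order, to the heat carried away by the cooling water. A minimal sketch, assuming the usual Q = ṁ·cp·ΔT form and a torch efficiency defined as the fraction of plate power not lost to the coolant; function names and all numbers in the usage note are illustrative, not values from the thesis.

```python
def coolant_heat_load(m_dot_kg_s, cp_j_kg_k, t_in_c, t_out_c):
    """Heat removed by the cooling water, Q = m_dot * cp * (T_out - T_in), in W."""
    return m_dot_kg_s * cp_j_kg_k * (t_out_c - t_in_c)

def torch_efficiency(p_input_w, q_coolant_w):
    """Fraction of the input power delivered to the plasma (the rest heats the coolant)."""
    return (p_input_w - q_coolant_w) / p_input_w
```

For example, 0.05 kg/s of water warming by 10 K removes roughly 2.1 kW; against a 10 kW input this would indicate an efficiency near 79%.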
Abstract:
Emissions of CO2 have been growing constantly since the beginning of the industrial era. Halting production in the major emitting sectors (energy and agriculture) is not a viable option, and reducing all emissions through carbon capture and storage (CCS) is neither economically viable nor widely accepted by the public. It therefore becomes fundamental to take actions such as retrofitting already-developed infrastructure to employ cleaner resources, modifying current processes to limit emissions, and removing the emissions already present through direct air capture. The present thesis discusses these aspects in depth with regard to syngas and hydrogen production, since they play a central role in the energy and chemicals markets. Among the strategies discussed, greater emphasis is given to the application of looping technologies and to direct air capture processes, as they have been the main focus of this work. In particular, chemical looping methane reforming to syngas was studied with Aspen Plus thermodynamic simulations, thermogravimetric analysis (TGA) characterization and testing in a fixed-bed reactor. The process was studied cyclically, exploiting the redox properties of a Ce-based oxide oxygen carrier synthesized with a simple forming procedure. The two steps of the looping cycle were studied isothermally at 900 °C and 950 °C with mixtures of 10% CH4 in N2 and 3% O2 in N2 for carrier reduction and oxidation, respectively. During the period abroad, in collaboration with ETH Zurich, a CO2 capture process using solid amine sorbents was investigated, studying the difference in performance achievable with contactors of different geometries. The process was studied at two concentrations (382 ppm CO2 in N2 and 5.62% CO2 in N2) and at different flow rates, to understand the dynamics of the adsorption process and to identify the mass-transfer limiting step.
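Adsorption dynamics of the kind studied in the CO2 capture campaign are often summarized with a linear-driving-force (LDF) uptake model, dq/dt = k·(q* − q). The sketch below is a generic explicit-Euler integration of that textbook model, not the thesis's actual contactor analysis; the rate constant and equilibrium loading are illustrative.

```python
def ldf_uptake(q_eq, k, t_end, dt=0.01):
    """Explicit-Euler integration of the LDF model dq/dt = k * (q_eq - q), q(0) = 0.

    q_eq : equilibrium loading (e.g. mol CO2 per kg sorbent)
    k    : lumped mass-transfer coefficient (1/s)
    Returns the loading history sampled every dt."""
    q, t = 0.0, 0.0
    history = []
    while t < t_end - 1e-12:
        q += dt * k * (q_eq - q)  # uptake driven by distance from equilibrium
        t += dt
        history.append(q)
    return history
```

The loading rises monotonically and saturates at q_eq; a slower k (e.g. a less favourable contactor geometry) stretches the same curve in time.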
Abstract:
The integration of distributed and ubiquitous intelligence has emerged in recent years as the mainspring of transformative advancements in mobile radio networks. As we approach the era of “mobile for intelligence”, next-generation wireless networks are poised to undergo significant and profound changes. Notably, the overarching challenge that lies ahead is the development and implementation of integrated communication and learning mechanisms that will enable the realization of autonomous mobile radio networks. The ultimate pursuit of eliminating the human-in-the-loop constitutes an ambitious challenge, necessitating a meticulous delineation of the fundamental characteristics that artificial intelligence (AI) should possess to effectively achieve this objective. This challenge represents a paradigm shift in the design, deployment, and operation of wireless networks, where conventional, static configurations give way to dynamic, adaptive, and AI-native systems capable of self-optimization, self-sustainment, and learning. This thesis aims to provide a comprehensive exploration of the fundamental principles and practical approaches required to create autonomous mobile radio networks that seamlessly integrate communication and learning components. The first chapter of this thesis introduces the notion of Predictive Quality of Service (PQoS) and adaptive optimization and expands on the challenge of achieving adaptable, reliable, and robust network performance in dynamic and ever-changing environments. The subsequent chapter delves into the revolutionary role of generative AI in shaping next-generation autonomous networks. This chapter emphasizes achieving trustworthy uncertainty-aware generation processes with the use of approximate Bayesian methods and aims to show how generative AI can improve generalization while reducing data communication costs. Finally, the thesis embarks on the topic of distributed learning over wireless networks.
Distributed learning and its variants, including multi-agent reinforcement learning systems and federated learning, have the potential to meet the scalability demands of modern data-driven applications, enabling efficient and collaborative model training across dynamic scenarios while ensuring data privacy and reducing communication overhead.
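As a concrete anchor for the federated learning mentioned above, here is a minimal sketch of server-side FedAvg aggregation: client parameter vectors are averaged, weighted by local dataset size, so that raw data never leaves the clients. The flat-list parameter representation is an illustrative simplification, not the thesis's system.

```python
def fedavg(client_weights, client_sizes):
    """FedAvg server step: size-weighted average of client parameter vectors.

    client_weights : list of equal-length parameter lists, one per client
    client_sizes   : number of local training samples per client"""
    total = float(sum(client_sizes))
    dim = len(client_weights[0])
    return [sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
            for i in range(dim)]
```

A client holding three times the data pulls the global model three times as hard, which is exactly the communication-efficient compromise the paragraph describes.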
Abstract:
The work of the present thesis is focused on the implementation of microelectronic voltage-sensing devices, with the purpose of transmitting and extracting analog information between devices of different nature at short distances or upon contact. Initially, chip-to-chip communication has been studied, and circuitry for 3D capacitive coupling has been implemented. Such circuits allow communication between dies fabricated in different technologies. Due to their novelty, they are not standardized and are currently not supported by standard CAD tools. In order to overcome this burden, a novel approach for the characterization of such communicating links has been proposed. This results in shorter design times and increased accuracy. Communication between an integrated circuit (IC) and a probe card has been extensively studied as well. Today, wafer probing is a costly test procedure with many drawbacks, which could be overcome by a different communication approach such as capacitive coupling. For this reason, wireless wafer probing has been investigated as an alternative to standard on-contact wafer probing. Interfaces between integrated circuits and biological systems have also been investigated. Active electrodes for simultaneous electroencephalography (EEG) and electrical impedance tomography (EIT) have been implemented for the first time in a 0.35 um process. The number of wires has been minimized by sharing the analog outputs and supply on a single wire, thus implementing electrodes that require only 4 wires for their operation. Minimizing the number of wires reduces the cable weight and thus limits the patient's discomfort. The physical channel for communication between an IC and a biological medium is represented by the electrode itself.
As this is a crucial point for biopotential acquisition, considerable effort has been devoted to investigating the different electrode technologies and geometries, and an electromagnetic model is presented to characterize the properties of the electrode-to-skin interface.
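For orientation, capacitive links like those discussed above can be sized to first order with a parallel-plate estimate and a capacitive-divider gain at the receiver input. This is a hedged back-of-the-envelope sketch, not the characterization approach proposed in the thesis; pad area, gap, permittivity and input capacitance below are all illustrative assumptions.

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plate_capacitance(area_m2, gap_m, eps_r=1.0):
    """Parallel-plate estimate C = eps0 * eps_r * A / d (fringing fields neglected)."""
    return EPS0 * eps_r * area_m2 / gap_m

def received_amplitude(v_tx, c_couple, c_in):
    """Capacitive divider: the coupling cap drives the receiver input capacitance."""
    return v_tx * c_couple / (c_couple + c_in)
```

For instance, a 100 µm × 100 µm pad pair separated by 1 µm of oxide (εr ≈ 3.9) couples with a few hundred femtofarads, so a 1 V transmit swing arrives attenuated but easily detectable at a picofarad-class input.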
Abstract:
This work illustrates a soil-tunnel-structure interaction study performed with an integrated geotechnical and structural approach based on 3D finite element analyses and validated against experimental observations. The study aims at analysing the response of reinforced concrete framed buildings on discrete foundations in interaction with metro lines. It refers to the case of the twin tunnels of the Milan (Italy) metro line 5, recently built in coarse-grained materials using EPB machines, for which subsidence measurements collected along ground and building sections during tunnelling were available. Settlements measured under free-field conditions are first back-interpreted using Gaussian empirical predictions. Then, the analysis of the in situ measurements is extended to include the evolving response of a 9-storey reinforced concrete building while being undercrossed by the metro line. In the finite element study, the soil mechanical behaviour is described using an advanced constitutive model. The latter, when combined with a proper simulation of the excavation process, proves to realistically reproduce the subsidence profiles under free-field conditions and to capture the interaction phenomena occurring between the twin tunnels during the excavation. Furthermore, when the numerical model is extended to include the building, schematised in a detailed manner, the results are in good agreement with the monitoring data for different stages of the twin tunnelling. Thus, they indirectly confirm the satisfactory performance of the adopted numerical approach, which also allows a direct evaluation of the structural response as an outcome of the analysis. Further analyses are also carried out modelling the building with different levels of detail. The results highlight that, in this case, the simplified approach based on the equivalent plate schematisation is inadequate to capture the real tunnelling-induced displacement field.
The overall behaviour of the system proves to be mainly influenced by the buried portion of the building, which plays an essential role in the interaction mechanism due to its high stiffness.
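The Gaussian empirical predictions used for the free-field back-interpretation follow the classic settlement-trough form S(x) = S_max·exp(−x²/(2i²)), with trough width i = K·z0 and trough area V_s = √(2π)·i·S_max. A minimal sketch of these standard relations (parameter values in the usage note are illustrative, not the Milan line 5 data):

```python
import math

def settlement(x, s_max, i):
    """Gaussian transverse settlement trough: S(x) = S_max * exp(-x^2 / (2 i^2))."""
    return s_max * math.exp(-x * x / (2.0 * i * i))

def trough_width(k, z0):
    """Trough-width parameter i = K * z0 from tunnel depth z0 and soil constant K."""
    return k * z0

def volume_loss_area(s_max, i):
    """Settlement trough area per metre of tunnel: V_s = sqrt(2*pi) * i * S_max."""
    return math.sqrt(2.0 * math.pi) * i * s_max
```

By construction the trough peaks at S_max above the tunnel axis and falls to about 61% of S_max at one trough width i from it, which is the inflection point of the curve.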
Abstract:
This research aims at contributing to a better understanding of changes in local governments’ accounting and reporting practices; in particular, ‘why’, ‘what’ and ‘how’ environmental aspects are included, and the significance of changes across time. It adopts an interpretative approach to conduct a longitudinal analysis of case studies. Pettigrew and Whipp’s framework on context, content and process is used as a lens to distinguish changes under each dimension and analyse their interconnections. Data are collected from official documents and triangulated with semi-structured interviews. The legal framework defines as boundaries of the accounting information the territory under local governments’ jurisdiction and its immediate surrounding area. Organisational environmental performance and externalities are excluded from the requirements. An interplay between the local outer context, political commitment and organisational culture justifies the implementation of changes beyond what is regulated and the implementation of transformational changes. Local governments engage in international networks to gain access to funding and implement changes, leading to the adoption of the dominant environmental agenda. Key stakeholders, like citizens, are not engaged in the accounting and reporting process. Thus, there is no evidence that the environmental aspects addressed and the related changes align with stakeholders’ needs and expectations, which jeopardises their significance. Findings from the current research have implications for other EU member states due to the harmonisation of accounting and reporting practices and the common practice across the EU of using external funding to conceptualise and implement changes. This implies that other local governments could also be presenting a limited account of environmental aspects.
Abstract:
The design optimization of industrial products has always been an essential activity to improve product quality while reducing time-to-market and production costs. Although cost management is very complex and comprises all phases of the product life cycle, the control of geometrical and dimensional variations, known as Dimensional Management (DM), allows compliance with product and process requirements. Hence, tolerance-cost optimization becomes the main practice to provide an effective application of Design for Tolerancing (DfT) and Design to Cost (DtC) approaches by enabling a connection between product tolerances and the associated manufacturing costs. However, despite the growing interest in this topic, a profitable application of these techniques in industry is hampered by their complexity: the definition of a systematic framework is the key element to improving design optimization, enhancing the concurrent use of Computer-Aided tools and Model-Based Definition (MBD) practices. The present doctoral research aims to define and develop an integrated methodology for product/process design optimization, to better exploit the new capabilities of advanced simulations and tools. By implementing predictive models and multi-disciplinary optimization, a Computer-Aided Integrated framework for tolerance-cost optimization has been proposed to allow the integration of DfT and DtC approaches and their direct application to the design of automotive components. Several case studies have been considered, with the final application of the integrated framework to a high-performance V12 engine assembly, to achieve both functional targets and cost reduction. From a scientific point of view, the proposed methodology provides an improvement for the tolerance-cost optimization of industrial components. The integration of theoretical approaches and Computer-Aided tools makes it possible to analyse the influence of tolerances on both product performance and manufacturing costs.
The case studies proved the suitability of the methodology for application in the industrial field and identified further areas for improvement and refinement.
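A common building block of tolerance-cost optimization is a reciprocal cost model C_i = a_i + b_i/t_i under a root-sum-square stack-up constraint, for which the Lagrange conditions give the optimal allocation t_i ∝ b_i^(1/3). The sketch below implements that textbook allocation, not the thesis's Computer-Aided framework; the cost coefficients are illustrative.

```python
import math

def rss_stack(tols):
    """Root-sum-square stack-up of the individual tolerances."""
    return math.sqrt(sum(t * t for t in tols))

def allocate_tolerances(b, t_assembly):
    """Minimise sum(b_i / t_i) subject to rss_stack(t) = t_assembly.

    Setting the Lagrangian gradient to zero gives t_i^3 proportional to b_i,
    so each tolerance scales with the cube root of its cost sensitivity."""
    raw = [bi ** (1.0 / 3.0) for bi in b]
    scale = t_assembly / math.sqrt(sum(r * r for r in raw))
    return [r * scale for r in raw]
```

Intuitively, a feature whose cost is very sensitive to tightening (large b_i) is granted a looser share of the assembly tolerance budget, and with equal sensitivities the budget splits evenly.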
Abstract:
The research project aims to improve the Design for Additive Manufacturing of metal components. Firstly, the scenario of Additive Manufacturing is depicted, describing its role in Industry 4.0 and focusing in particular on Metal Additive Manufacturing technologies and applications in the Automotive sector. Secondly, the state of the art in Design for Additive Manufacturing is described, contextualizing the methodologies and classifying guidelines, rules, and approaches. The key phases of product design and process design to achieve lightweight functional designs and reliable processes are examined in depth, together with the Computer-Aided Technologies supporting the implementation of these approaches. A general Design for Additive Manufacturing workflow based on product and process optimization has therefore been systematically defined. From the analysis of the state of the art, the use of a holistic approach has been considered fundamental, and thus the use of integrated product-process design platforms has been evaluated as a key element for its development. Indeed, a computer-based methodology exploiting integrated tools and numerical simulations to drive the product and process optimization has been proposed. A validation of CAD platform-based approaches has been performed, and the potential offered by integrated tools has been evaluated. Concerning product optimization, systematic approaches to integrate topology optimization in the design have been proposed and validated through the product optimization of an automotive case study. Concerning process optimization, the use of process simulation techniques to prevent manufacturing flaws related to the high thermal gradients of metal processes is developed, with case studies validating the results against experimental data and an application to the process optimization of an automotive case study.
Finally, an example of product and process design through the proposed simulation-driven integrated approach is provided to prove the method's suitability for effective redesigns of Additive Manufacturing-based high-performance metal products. The results are then outlined, and further developments are discussed.
Abstract:
The corpus luteum (CL) lifespan is characterized by rapid growth, differentiation and controlled regression of the luteal tissue, accompanied by intense angiogenesis and angioregression. Indeed, the CL is one of the most highly vascularised tissues in the body, with an endothelial cell proliferation rate 4- to 20-fold higher than in some of the most malignant human tumours. This angiogenic process must be rigorously controlled to allow repeated opportunities for fertilization. After a first period of rapid growth, the tissue becomes stably organized and prepares itself to switch to the phenotype required for its subsequent apoptotic regression. In pregnant swine, the lifespan of the CLs must be extended to support embryonic and foetal development, and vascularisation is necessary for the maintenance of luteal function. Among the molecules involved in angiogenesis, Vascular Endothelial Growth Factor (VEGF) is the main regulator, promoting endothelial cell proliferation, differentiation and survival as well as vascular permeability and vessel lumen formation. During the vascular invasion and apoptosis processes, the remodelling of the extracellular matrix is essential for the correct evolution of the CL, particularly through the action of a specific class of proteolytic enzymes known as matrix metalloproteinases (MMPs). Another important factor that plays a role in the processes of angiogenesis and angioregression during CL formation and luteolysis is the isopeptide Endothelin-1 (ET-1), a well-known potent vasoconstrictor and mitogen for endothelial cells. The goal of the present thesis was to study the role and regulation of vascularisation in an adult vascular bed.
For this purpose, using a precisely controlled in vivo model of swine CL development and regression, we determined the expression levels of the members of the VEGF system (total VEGF and specific isoforms; VEGF receptor-1, VEGFR-1; VEGF receptor-2, VEGFR-2) and of the ET-1 system (ET-1; endothelin converting enzyme-1, ECE-1; endothelin receptor type A, ET-A), as well as the activity of the Ca++/Mg++-dependent endonucleases and gelatinases (MMP-2 and MMP-9). Three experiments were conducted to reach these objectives in CLs isolated from the ovaries of cyclic, pregnant or fasted gilts. In Experiment I, we evaluated the influence of acute fasting on VEGF production and on VEGF, VEGFR-2, ET-1, ECE-1 and ET-A mRNA expression in CLs collected on day 6 after ovulation (midluteal phase). The results indicated a down-regulation of VEGF, VEGFR-2, ET-1 and ECE-1 mRNA expression, although no change was observed for VEGF protein. Furthermore, we observed that fasting stimulated steroidogenesis by luteal cells. On the basis of the main effects of VEGF (stimulation of vessel growth and endothelial permeability) and ET-1 (stimulation of endothelial cell proliferation and vasoconstriction, as well as VEGF stimulation), we concluded that feed restriction possibly inhibited luteal vessel development. This could be, at least in part, compensated by a decrease of vasal tone due to a diminution of ET-1, thus ensuring an adequate blood flow and the production of steroids by the luteal cells. In Experiment II, we investigated the relationship of VEGF, gelatinase and Ca++/Mg++-dependent endonuclease activities with the functional CL stage throughout the oestrous cycle and at pregnancy. The results demonstrated differential patterns of expression of those molecules in correspondence with the different phases of the oestrous cycle. Immediately after ovulation, VEGF mRNA/protein levels and MMP-9 activity are maximal.
On days 5–14 after ovulation, VEGF expression and MMP-2 and -9 activities are at basal levels, while Ca++/Mg++-dependent endonuclease levels increased significantly in relation to day 1. Only at luteolysis (day 17) did Ca++/Mg++-dependent endonuclease and spontaneous MMP-2 activity increase significantly. At pregnancy, high levels of MMP-9 and VEGF were observed. These results suggested that during the very early luteal phase, high MMP activities coupled with high VEGF levels drive the tissue to an angiogenic phenotype, allowing CL growth under LH (Luteinising Hormone) stimulus, while during the late luteal phase, low VEGF and elevated MMP levels may play a role in apoptotic tissue and extracellular matrix remodelling during structural luteolysis. In Experiment III, we described the expression patterns of all distinct VEGF isoforms throughout the oestrous cycle. Furthermore, the mRNA expression and protein levels of both VEGF receptors were also evaluated. Four novel VEGF isoforms (VEGF144, VEGF147, VEGF182, and VEGF164b) were found for the first time in swine, and the seven identified isoforms presented four different patterns of expression. All isoforms showed their highest mRNA levels in newly formed CLs (day 1), followed by a decrease during the mid-late luteal phase (days 10–17), except for VEGF182, VEGF188 and VEGF144, which showed a differential regulation during the late luteal phase (day 14) or at luteolysis (day 17). VEGF protein levels paralleled the most expressed and secreted isoforms, VEGF120 and VEGF164. The VEGF receptor mRNAs showed a different pattern of expression in relation to their ligands, increasing between days 1 and 3 and gradually decreasing during the mid-late luteal phase.
The differential regulation of some VEGF isoforms, principally during the late luteal phase and luteolysis, suggested a specific role of VEGF in the tissue remodelling process that occurs either for CL maintenance in case of pregnancy or for the noncapillary vessel development essential for tissue removal during structural luteolysis. In summary, our findings allow us to determine relationships among the factors involved in the angiogenesis and angioregression mechanisms that take place during the formation and regression of the CL. Thus, the CL provides a very interesting model for studying such factors in different fields of basic research.
Abstract:
This dissertation concerns active fibre-reinforced composites with embedded shape memory alloy wires. The structural application of active materials makes it possible to develop adaptive structures which actively respond to changes in the environment, such as morphing structures, self-healing structures and power harvesting devices. In particular, shape memory alloy actuators integrated within a composite actively control the structural shape or stiffness, thus influencing the composite's static and dynamic properties. Envisaged applications include, among others, the prevention of thermal buckling of the outer skin of air vehicles, shape changes in panels for improved aerodynamic characteristics and the deployment of large space structures. The study and design of active composites is a complex and multidisciplinary topic, requiring in-depth understanding of both the coupled behaviour of active materials and the interaction between the different composite constituents. Both fibre-reinforced composites and shape memory alloys are extremely active research topics, whose modelling and experimental characterisation still present a number of open problems. Thus, while this dissertation focuses on active composites, some of the research results presented here can be usefully applied to traditional fibre-reinforced composites or other shape memory alloy applications. The dissertation is composed of four chapters. In the first chapter, active fibre-reinforced composites are introduced by giving an overview of the most common choices available for the reinforcement, matrix and production process, together with a brief introduction and classification of active materials. The second chapter presents a number of original contributions regarding the modelling of fibre-reinforced composites.
Different two-dimensional laminate theories are derived from a parent three-dimensional theory, introducing a procedure for the a posteriori reconstruction of transverse stresses along the laminate thickness. Accurate through-the-thickness stresses are crucial for composite modelling as they are responsible for some common failure mechanisms. A new finite element based on the First-order Shear Deformation Theory and a hybrid stress approach is proposed for the numerical solution of the two-dimensional laminate problem. The element is simple and computationally efficient. The transverse stresses through the laminate thickness are reconstructed starting from a general finite element solution. A two-stage procedure is devised, based on Recovery by Compatibility in Patches and three-dimensional equilibrium. Finally, the determination of the elastic parameters of laminated structures via numerical-experimental Bayesian techniques is investigated. Two different estimators are analysed and compared, leading to the definition of an alternative procedure to improve the convergence of the estimation process. The third chapter focuses on shape memory alloys, describing their properties and applications. A number of constitutive models proposed in the literature, both one-dimensional and three-dimensional, are critically discussed and compared, underlining their potential and limitations, which are mainly related to the definition of the phase diagram and the choice of internal variables. Some new experimental results on shape memory alloy material characterisation are also presented. These experimental observations display some features of shape memory alloy behaviour which are generally not included in current models, and some ideas are thus proposed for the development of a new constitutive model. The fourth chapter, finally, focuses on active composite plates with embedded shape memory alloy wires.
A number of different approaches can be used to predict the behaviour of such structures, each model presenting different advantages and drawbacks related to complexity and versatility. A simple model able to describe both shape and stiffness control configurations within the same context is proposed and implemented. The model is then validated considering the shape control configuration, which is the most sensitive to model parameters. The experimental work is divided into two parts. In the first part, an active composite is built by gluing prestrained shape memory alloy wires onto a carbon fibre laminate strip. This structure is relatively simple to build, but it is useful to experimentally demonstrate the feasibility of the concept proposed in the first part of the chapter. In the second part, the making of a fibre-reinforced composite with embedded shape memory alloy wires is investigated, considering different possible choices of materials and manufacturing processes. Although a number of technological issues still need to be faced, the experimental results demonstrate the mechanism of shape control via embedded shape memory alloy wires, while showing good agreement with the proposed model predictions.
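The laminate modelling discussed in the second chapter builds on classical lamination theory, where each ply's reduced stiffness is rotated to the laminate axes and integrated through the thickness into the extensional (A) and bending-extension coupling (B) matrices. Below is a minimal sketch of that standard construction, not the hybrid-stress element proposed in the dissertation; the material values in the usage note are illustrative.

```python
import math

def q_reduced(e1, e2, g12, nu12):
    """Plane-stress reduced stiffness matrix of a unidirectional ply."""
    nu21 = nu12 * e2 / e1
    den = 1.0 - nu12 * nu21
    return [[e1 / den, nu12 * e2 / den, 0.0],
            [nu12 * e2 / den, e2 / den, 0.0],
            [0.0, 0.0, g12]]

def q_bar(q, theta_deg):
    """Reduced stiffness transformed to the laminate axes (standard CLT rotation)."""
    c = math.cos(math.radians(theta_deg))
    s = math.sin(math.radians(theta_deg))
    q11, q12, q22, q66 = q[0][0], q[0][1], q[1][1], q[2][2]
    qb11 = q11 * c**4 + 2 * (q12 + 2 * q66) * s * s * c * c + q22 * s**4
    qb22 = q11 * s**4 + 2 * (q12 + 2 * q66) * s * s * c * c + q22 * c**4
    qb12 = (q11 + q22 - 4 * q66) * s * s * c * c + q12 * (s**4 + c**4)
    qb66 = (q11 + q22 - 2 * q12 - 2 * q66) * s * s * c * c + q66 * (s**4 + c**4)
    qb16 = (q11 - q12 - 2 * q66) * s * c**3 + (q12 - q22 + 2 * q66) * c * s**3
    qb26 = (q11 - q12 - 2 * q66) * c * s**3 + (q12 - q22 + 2 * q66) * s * c**3
    return [[qb11, qb12, qb16], [qb12, qb22, qb26], [qb16, qb26, qb66]]

def ab_matrices(q, angles_deg, ply_t):
    """Extensional (A) and coupling (B) stiffness matrices of a stack of equal plies."""
    n = len(angles_deg)
    h = n * ply_t
    z = [-h / 2.0 + k * ply_t for k in range(n + 1)]  # ply interface coordinates
    A = [[0.0] * 3 for _ in range(3)]
    B = [[0.0] * 3 for _ in range(3)]
    for k, th in enumerate(angles_deg):
        qb = q_bar(q, th)
        for i in range(3):
            for j in range(3):
                A[i][j] += qb[i][j] * (z[k + 1] - z[k])
                B[i][j] += qb[i][j] * (z[k + 1] ** 2 - z[k] ** 2) / 2.0
    return A, B
```

For a symmetric cross-ply stack such as [0/90/90/0] the coupling matrix B vanishes and A11 equals A22, which is a convenient sanity check on the implementation.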
Abstract:
In recent years, peripheral arterial disease (PAD) has emerged as a growing health problem in Western countries. It is a progressive manifestation of atherothrombotic vascular disease, which results in the narrowing of the blood vessels of the lower limbs and, as a final consequence, in critical leg ischemia. PAD often occurs along with other cardiovascular risk factors, including diabetes mellitus (DM), low-grade inflammation, hypertension, and lipid disorders. Patients with DM have an increased risk of developing PAD, and that risk increases with the duration of DM. Moreover, there is a growing population of patients identified with insulin resistance (IR), impaired glucose tolerance, and obesity, a pathological condition known as “metabolic syndrome”, which presents increased cardiovascular risk. Atherosclerosis is the earliest symptom of PAD and is a dynamic and progressive disease arising from the combination of endothelial dysfunction and inflammation. Endothelial dysfunction is a broad term that implies diminished production or availability of nitric oxide (NO) and/or an imbalance in the relative contribution of endothelium-derived relaxing factors. The secretion of these agents is considerably reduced in association with the major risk factors for atherosclerosis, especially hyperglycaemia and diabetes, and a reduced vascular repair has been observed in response to wound healing and to ischemia. Neovascularization does not only rely on the proliferation of local endothelial cells, but also involves bone marrow-derived stem cells, referred to as endothelial progenitor cells (EPCs), since they exhibit endothelial surface markers and properties. They can promote postnatal vasculogenesis by homing, differentiating into an endothelial phenotype, proliferating and incorporating into new vessels. Consequently, EPCs are critical to endothelium maintenance and repair, and their dysfunction contributes to vascular disease.
The aim of this study has been the characterization of EPCs from healthy peripheral blood in terms of proliferation, differentiation and function. Given the importance of NO in neovascularization and the homing process, the expression of the NO synthase (NOS) isoforms eNOS, nNOS and iNOS, and the effects of their inhibition on EPC function, have been investigated. Moreover, the expression of the NADPH oxidase (Nox) isoforms, which are the principal source of ROS in the cell, has been examined. In fact, considerable evidence shows a correlation between ROS and NO metabolism, since oxidative stress causes NOS inactivation via enzyme uncoupling. In particular, the expression of Nox2 and Nox4, constitutively expressed in the endothelium, and of Nox1 has been studied. The second part of this research was focused on the study of EPCs under pathological conditions. Firstly, EPCs isolated from healthy subjects were cultured in a hyperglycaemic medium, in order to evaluate the effects of high glucose concentration on EPCs. Secondly, EPCs were isolated from the peripheral blood of patients affected by PAD, both diabetic and non-diabetic, and their capacity to proliferate, differentiate and participate in neovasculogenesis was assessed. Furthermore, the expression of NOS and Nox in these cells was investigated. Mononuclear cells isolated from the peripheral blood of healthy subjects, if cultured under differentiating conditions, differentiate into EPCs. These cells are not able to form capillary-like structures ex novo, but participate in vasculogenesis by incorporation into the new vessels formed by mature endothelial cells, such as HUVECs. With respect to NOS expression, these cells have high levels of iNOS, the inducible isoform of NOS, 3- to 4-fold higher than in HUVECs, while the endothelial isoform, eNOS, is poorly expressed in EPCs. The higher iNOS expression could be a form of compensation for the lower eNOS levels.
Under hyperglycaemic conditions, both iNOS and eNOS expression are enhanced compared to control EPCs, consistent with experimental studies in animal models. In patients affected by PAD, EPCs may behave in different ways. Non-diabetic patients, and diabetic patients with greater vascular damage, as evidenced by a higher number of circulating endothelial cells (CECs), show reduced proliferation and a reduced ability to participate in vasculogenesis. On the other hand, diabetic patients with a lower CEC number have proliferative and vasculogenic capacities more similar to those of healthy EPCs. eNOS levels in both patient types are equivalent to those of controls, while iNOS expression is enhanced. Interestingly, nNOS is not detected in diabetic patients, analogously to other cell types in diabetics, which show reduced or no nNOS expression. Concerning Nox expression, EPCs present higher levels of both Nox1 and Nox2 in comparison with HUVECs, while Nox4 is poorly expressed, probably because of incomplete differentiation into an endothelial phenotype. Nox1 is more expressed in PAD patients, diabetic or not, than in controls, suggesting increased ROS production. Nox2, instead, is lower in patients than in controls. Since Nox2 is involved in the cellular response to VEGF, its reduced expression may underlie the impaired vasculogenic potential of EPCs from PAD patients.
Abstract:
In recent years, an ever-increasing degree of automation has been observed in most industrial processes. This increase is motivated by the growing demand for systems with high performance in terms of the quality of the products/services generated, productivity, efficiency, and low costs in design, realization and maintenance. This trend towards complex automation systems is rapidly spreading over automated manufacturing systems (AMS), where the integration of mechanical and electronic technology, typical of Mechatronics, is merging with other technologies such as Informatics and communication networks. An AMS is a very complex system that can be thought of as a set of flexible working stations together with one or more transportation systems. To understand how important these machines are in our society, consider that every day most of us use bottles of water or soda, or buy boxed products such as food or cigarettes. Another indication of their complexity is that the consortium of machine producers has estimated around 350 types of manufacturing machine. A large number of manufacturing machine builders are based in Italy, notably in the packaging machine industry; a great concentration of this kind of industry is located in the Bologna area, which for this reason is called the “packaging valley”. Usually, the various parts of an AMS interact in a concurrent and asynchronous way, and coordinating the parts of the machine to obtain a desired overall behaviour is a hard task. This is often the case in large-scale systems, organized in a modular and distributed manner.
Even if the success of a modern AMS from a functional and behavioural point of view is still to be attributed to the design choices made in the definition of the mechanical structure and of the electrical/electronic architecture, the system that governs the control of the plant is becoming crucial, because of the large number of duties associated with it. Apart from the activity inherent in the automation of the machine cycles, the supervisory system is called upon to perform other main functions, such as: emulating the behaviour of traditional mechanical members, thus allowing a drastic constructive simplification of the machine and a crucial functional flexibility; dynamically adapting the control strategies according to the different productive needs and operational scenarios; obtaining a high quality of the final product through verification of the correctness of the processing; directing the operator in charge of the machine to promptly and carefully take the actions needed to establish or restore the optimal operating conditions; and managing in real time information on diagnostics, in support of the maintenance operations of the machine. The kinds of facilities that designers can directly find on the market, in terms of software component libraries, in fact provide adequate support for the implementation of both top-level and bottom-level functionalities, typically pertaining to the domains of user-friendly HMIs, closed-loop regulation and motion control, and fieldbus-based interconnection of remote smart devices. What is still lacking is a reference framework comprising a comprehensive set of highly reusable logic control components that, by focusing on the cross-cutting functionalities characterizing the automation domain, may help designers in the process of modelling and structuring their applications according to their specific needs.
Historically, the design and verification process for complex automated industrial systems has been performed in an empirical way, without a clear distinction between functional and technological-implementation concepts and without a systematic method to deal organically with the complete system. In the field of analog and digital control, design and verification through formal and simulation tools have traditionally been adopted for a long time, at least for multivariable and/or nonlinear controllers for complex time-driven dynamics, as in the fields of vehicles, aircraft, robots, electric drives and complex power electronics equipment. Moving to the field of logic control, typical of industrial manufacturing automation, the design and verification process is approached in a completely different way, usually very “unstructured”. No clear distinction is made between functions and implementations, or between functional architectures and technological architectures and platforms. This difference is probably due to the different “dynamical framework” of logic control with respect to analog/digital control. As a matter of fact, in logic control discrete-event dynamics replace time-driven dynamics; hence most of the formal and mathematical tools of analog/digital control cannot be directly migrated to logic control to highlight the distinction between functions and implementations. In addition, in the common view of application technicians, logic control design is strictly connected to the adopted implementation technology (relays in the past, software nowadays), leading again to deep confusion between the functional view and the technological view. In industrial automation software engineering, concepts such as modularity, encapsulation, composability and reusability are strongly emphasized and profitably realized in the so-called object-oriented methodologies.
Industrial automation has lately been absorbing this approach, as testified by the IEC standards IEC 61131-3 and IEC 61499, which have been considered in commercial products only recently. On the other hand, in the scientific and technical literature many contributions have already been proposed to establish a suitable modelling framework for industrial automation. In recent years it has been possible to observe a considerable growth in the exploitation of innovative concepts and technologies from the ICT world in industrial automation systems. As far as logic control design is concerned, Model Based Design (MBD) is being imported into industrial automation from the software engineering field. Another key point in industrial automated systems is the growth of requirements in terms of availability, reliability and safety for technological systems. In other words, the control system should not only deal with the nominal behaviour, but should also take on other important duties, such as diagnosis and fault isolation, recovery, and safety management. Indeed, together with high performance, fault occurrences increase in complex systems. This is a consequence of the fact that, as typically occurs in reliable mechatronic systems, in complex systems such as AMS an increasing number of electronic devices are present alongside reliable mechanical elements, and electronic devices are by their own nature more vulnerable. The diagnosis and fault isolation problem in a generic dynamical system consists in the design of an elaboration unit that, by appropriately processing the inputs and outputs of the dynamical system, is capable of detecting incipient faults on the plant devices and of reconfiguring the control system so as to guarantee satisfactory performance.
The designer should be able to formally verify the product, certifying that, in its final implementation, it will perform its required function while guaranteeing the desired level of reliability and safety; the next step is that of preventing faults and eventually reconfiguring the control system so that faults are tolerated. On this topic, an important contribution to the formal verification of logic control, fault diagnosis and fault tolerant control comes from Discrete Event Systems theory. The aim of this work is to define a design pattern and a control architecture to help the designer of control logic in industrial automated systems. The work starts with a brief discussion of the main characteristics and a description of industrial automated systems in Chapter 1. Chapter 2 surveys the state of the art of software engineering paradigms applied to industrial automation. Chapter 3 presents an architecture for industrial automated systems based on the new concept of Generalized Actuator, showing its benefits, while in Chapter 4 this architecture is refined using a novel entity, the Generalized Device, in order to achieve better reusability and modularity of the control logic. In Chapter 5 a new approach based on Discrete Event Systems is presented for the problem of software formal verification, together with an active fault tolerant control architecture using online diagnostics. Finally, concluding remarks and some ideas on new directions to explore are given. Appendix A briefly reports some concepts and results about Discrete Event Systems which should help the reader understand some crucial points in Chapter 5, while Appendix B gives an overview of the experimental testbed of the Laboratory of Automation of the University of Bologna, used to validate the approaches presented in Chapters 3, 4 and 5. Appendix C reports some component models used in Chapter 5 for formal verification.
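The Discrete Event Systems viewpoint behind online diagnosis can be made concrete with a toy example. The following Python sketch is purely illustrative (the plant model, states and event names are invented, not taken from the thesis): an observer tracks the set of states the plant may occupy given only the observable events, and declares a fault once every explanation consistent with the observations contains the unobservable fault event.

```python
# Toy discrete-event diagnoser (hypothetical plant, for illustration only).
# The plant is a finite automaton: (state, event) -> next state.
# 'fail' is an unobservable fault event; all other events are observed.
PLANT = {
    ("idle", "start"): "work",
    ("work", "stop"):  "idle",
    ("work", "fail"):  "down",      # unobservable fault
    ("down", "alarm"): "down",      # only a faulty plant can raise 'alarm'
}
UNOBSERVABLE = {"fail"}

def uo_closure(beliefs):
    """Extend a belief set with states reachable via unobservable events."""
    frontier = set(beliefs)
    while frontier:
        nxt = set()
        for (state, faulty) in frontier:
            for (s, e), t in PLANT.items():
                if s == state and e in UNOBSERVABLE:
                    b = (t, faulty or e == "fail")
                    if b not in beliefs:
                        beliefs.add(b)
                        nxt.add(b)
        frontier = nxt
    return beliefs

def diagnose(trace, initial="idle"):
    """Return 'faulty', 'normal' or 'uncertain' after an observable trace."""
    beliefs = uo_closure({(initial, False)})
    for event in trace:
        # Keep only beliefs consistent with the observed event, then advance.
        stepped = {(PLANT[(s, event)], f)
                   for (s, f) in beliefs if (s, event) in PLANT}
        beliefs = uo_closure(stepped)
    flags = {f for (_, f) in beliefs}
    if flags == {True}:
        return "faulty"
    return "normal" if flags == {False} else "uncertain"
```

After observing only "start" the diagnoser cannot tell whether the silent fault has occurred, while an "alarm" event certifies it; this is the state-estimation idea that diagnoser automata in DES theory carry out symbolically.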
Abstract:
This work presents exact, hybrid algorithms for mixed resource allocation and scheduling problems; in general terms, these consist in assigning finite-capacity resources over time to a set of precedence-connected activities. The proposed methods have broad applicability, but are mainly motivated by applications in the field of Embedded System Design. In particular, high-performance embedded computing has recently witnessed the shift from single-CPU platforms with application-specific accelerators to programmable Multi-Processor Systems-on-Chip (MPSoCs). These allow higher flexibility, real-time performance and low energy consumption, but the programmer must be able to effectively exploit the platform's parallelism. This raises interest in the development of algorithmic techniques to be embedded in CAD tools; in particular, given a specific application and platform, the objective is to perform an optimal allocation of hardware resources and to compute an execution schedule. In this regard, since embedded systems tend to run the same set of applications for their entire lifetime, off-line, exact optimization approaches are particularly appealing. Quite surprisingly, the use of exact algorithms has not been well investigated so far; this is in part explained by the complexity of integrated allocation and scheduling, which sets tough challenges for ``pure'' combinatorial methods. The use of hybrid CP/OR approaches presents the opportunity to exploit the mutual advantages of different methods, while compensating for their weaknesses. In this work, we first consider an allocation and scheduling problem over the Cell BE processor by Sony, IBM and Toshiba; we propose three different solution methods, leveraging decomposition, cut generation and heuristic-guided search.
Next, we face the allocation and scheduling of so-called Conditional Task Graphs, explicitly accounting for branches whose outcome is not known at design time; we extend the CP scheduling framework to effectively deal with the introduced stochastic elements. Finally, we address allocation and scheduling with uncertain, bounded execution times via conflict-based tree search; we introduce a simple and flexible time model to take duration variability into account and provide an efficient conflict detection method. The proposed approaches achieve good results on practical-size problems, thus demonstrating that the use of exact approaches for system design is feasible. Furthermore, the developed techniques bring significant contributions to combinatorial optimization methods.
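To fix ideas on the problem shape, the following Python sketch brute-forces a deliberately tiny allocation-and-scheduling instance (task names, durations and precedences are invented): each activity must run on one of m identical unit-capacity resources, and every precedence-feasible priority list is turned into a non-delay schedule. This only illustrates the combinatorial structure of integrated allocation and scheduling; it bears no resemblance to the hybrid CP/OR decomposition, cut-generation and tree-search methods developed in the thesis.

```python
from itertools import permutations

# Hypothetical instance: task durations and precedence (before, after) pairs.
DUR  = {"a": 3, "b": 2, "c": 2, "d": 4}
PREC = [("a", "c"), ("b", "d")]

def makespan(order, m):
    """List-schedule tasks in the given priority order on m resources."""
    free = [0] * m                        # next free time of each resource
    end = {}                              # finish time of each task
    for t in order:
        # A task may start only after all its predecessors have finished.
        ready = max((end[p] for p, s in PREC if s == t), default=0)
        r = min(range(m), key=lambda i: free[i])   # least-loaded resource
        start = max(free[r], ready)
        free[r] = end[t] = start + DUR[t]
    return max(end.values())

def best_schedule(m):
    """Exhaustively try every precedence-feasible priority list."""
    feasible = (p for p in permutations(DUR)
                if all(p.index(a) < p.index(b) for a, b in PREC))
    return min(makespan(list(o), m) for o in feasible)
```

On this instance, two resources suffice to reach the lower bound given by the longest precedence chain, while a single resource simply serializes all the work; real instances, of course, explode combinatorially, which is exactly why the thesis resorts to decomposition and conflict-based tree search instead of enumeration.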
Abstract:
The management and organization literature has extensively noted the crucial role that improvisation plays in organizations, variously as a learning process (Miner, Bassoff & Moorman, 2001), a creative process (Fisher & Amabile, 2008), a capability (Vera & Crossan, 2005), and a personal disposition (Hmielesky & Corbett, 2006; 2008). My dissertation aims to contribute to the existing literature on improvisation by addressing two general research questions: 1) How does improvisation unfold at an individual level? 2) What are the potential antecedents and consequences of individual proclivity to improvise? This dissertation is based on a mixed methodology that allowed me to deal with these two general research questions and enabled a constant interaction between the theoretical framework and the empirical results. The selected empirical field is haute cuisine, and the respondents are the executive chefs of the restaurants awarded by the Michelin Guide in Italy in 2010. The qualitative section of the dissertation is based on the analysis of 26 inductive case studies and offers a multifaceted contribution. First, I describe how improvisation works both as a learning process and as a creative process. Second, I introduce a new categorization of individual improvisational scenarios (demanded creative improvisation, problem-solving improvisation, and pure creative improvisation). Third, I describe the differences between improvisation and other creative processes detected in the field (experimentation, brainstorming, trial and error through analytical procedure, trial and error, and imagination). The quantitative inquiry is founded on a Structural Equation Model, which allowed me to simultaneously test the relationships between proclivity to improvise and its antecedents and consequences.
In particular, using a newly developed scale to measure individual proclivity to improvise, I test the positive influence of industry experience, self-efficacy, and age on proclivity to improvise and the negative impact of proclivity to improvise on outcome deviation. Theoretical contributions and practical implications of the results are discussed.