51 results for Shape Design Optimization
Abstract:
Shape memory materials (SMMs) represent an important class of smart materials that have the ability to return from a deformed state to their original shape. Thanks to this property, SMMs are utilized in a wide range of innovative applications. The increasing number of applications and the consequent involvement of industrial players in the field have motivated researchers to formulate constitutive models able to capture the complex behavior of these materials and to develop robust computational tools for design purposes. This research field is still in progress, especially regarding the prediction of shape memory polymer (SMP) behavior and of important effects characterizing shape memory alloy (SMA) applications. Moreover, the frequent use of shape memory and metallic materials in biomedical devices, particularly in cardiovascular stents, which are implanted in the human body and experience millions of in-vivo cycles driven by blood pressure, clearly indicates the need for a deeper understanding of fatigue/fracture failure in micro-sized components. The development of reliable stent designs against fatigue is still an open subject in the scientific literature. Motivated by this framework, the thesis focuses on several research issues involving the advanced constitutive, numerical and fatigue modeling of elastoplastic and shape memory materials. Starting from the constitutive modeling, the thesis proposes to develop refined phenomenological models for reliable descriptions of SMA and SMP behavior. Then, concerning the numerical modeling, the thesis proposes to implement the models in numerical software by developing implicit/explicit time-integration algorithms, in order to guarantee robust computational tools for practical purposes. The described modeling activities are complemented by experimental investigations on SMA actuator springs and polyethylene polymers. Finally, regarding the fatigue modeling, the thesis proposes a general computational approach for the fatigue-life assessment of a classical stent design, in order to exploit computer-based simulations to prevent failures and to modify designs without testing numerous devices.
Abstract:
During the last few decades an unprecedented technological growth has been at the center of the embedded systems design landscape, with Moore’s Law being the leading factor of this trend. Today, in fact, an ever-increasing number of cores can be integrated on the same die, marking the transition from state-of-the-art multi-core chips to the new many-core design paradigm. Despite the extraordinarily high computing power, the complexity of many-core chips opens the door to several challenges. As a result of the increased silicon density of modern Systems-on-a-Chip (SoC), the design space exploration needed to find the best design has exploded, and hardware designers are facing the problem of a huge design space. Virtual Platforms have always been used to enable hardware-software co-design, but today they must cope with the huge complexity of both hardware and software systems. In this thesis two different research works on Virtual Platforms are presented: the first is intended for the hardware developer, to easily enable complex cycle-accurate simulations of many-core SoCs. The second exploits the parallel computing power of off-the-shelf General Purpose Graphics Processing Units (GPGPUs), with the goal of increasing simulation speed. The term Virtualization can be used in the context of many-core systems not only to refer to the aforementioned hardware emulation tools (Virtual Platforms), but also for two other main purposes: 1) to help the programmer achieve the maximum possible performance of an application, by hiding the complexity of the underlying hardware; 2) to efficiently exploit the highly parallel hardware of many-core chips in environments with multiple active Virtual Machines. This thesis focuses on virtualization techniques with the goal of mitigating, and overcoming where possible, some of the challenges introduced by the many-core design paradigm.
Abstract:
This thesis aims at building and discussing applications of mathematical models to Energy problems, on both the thermal and the electrical side. The objective is to show how mathematical programming techniques developed within Operational Research can give useful answers in the Energy Sector, how they can provide tools to support the decision-making processes of companies operating in Energy production and distribution, and how they can be successfully used to carry out simulations and sensitivity analyses to better understand the state of the art and the convenience of a particular technology by comparing it with the available alternatives. The first part discusses the fundamental mathematical background, followed by a comprehensive literature review on mathematical modelling in the Energy Sector. The second part presents mathematical models for District Heating strategic network design and incremental network design. The objective is the selection of an optimal set of new users to be connected to an existing thermal network, maximizing revenues, minimizing infrastructure and operational costs, and taking into account the main technical requirements of the real-world application. Results on real and randomly generated benchmark networks are discussed, with particular attention to instances characterized by large network dimensions. The third part is devoted to the development of linear programming models for optimal battery operation in off-grid solar power schemes, with consideration of battery degradation. The key contribution of this work is the inclusion of battery degradation costs in the optimisation models. As available data relating degradation costs to the nature of charge/discharge cycles are limited, we concentrate on investigating the sensitivity of operational patterns to the degradation cost structure. The objective is to investigate the combination of battery costs and performance at which such systems become economic. We also investigate how the system design should change when battery degradation is taken into account.
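To make the battery-scheduling idea concrete, the sketch below sets up a minimal linear program for off-grid operation with a simple per-kWh degradation cost, using SciPy's linprog. It is an illustration only, not the thesis model: all data and parameter names (solar, demand, DEG_COST, SHED_COST, etc.) are invented assumptions.

```python
# Minimal LP sketch: battery charge/discharge scheduling with a per-kWh degradation
# cost and a load-shedding penalty. Illustrative data, not from the thesis.
import numpy as np
from scipy.optimize import linprog

T = 24                      # hourly horizon
solar = np.clip(np.sin(np.linspace(0, np.pi, T)) * 3.0, 0, None)   # available solar energy [kWh]
demand = np.full(T, 1.5)    # load [kWh]
S_MAX, C_MAX, D_MAX = 10.0, 2.0, 2.0   # capacity and charge/discharge limits
ETA_C, ETA_D = 0.95, 0.95   # charge/discharge efficiencies
DEG_COST = 0.05             # assumed degradation cost per kWh of throughput
SHED_COST = 10.0            # penalty per kWh of unmet demand
S0 = 5.0                    # initial state of charge [kWh]

# Variable layout: x = [charge c_t | discharge d_t | unmet demand u_t | state of charge s_t]
n = 4 * T
idx = lambda block, t: block * T + t      # block 0=c, 1=d, 2=u, 3=s

cost = np.zeros(n)
cost[0:T] = DEG_COST          # charging throughput
cost[T:2 * T] = DEG_COST      # discharging throughput
cost[2 * T:3 * T] = SHED_COST # unmet demand

# State-of-charge dynamics (equality): s_t - s_{t-1} - ETA_C*c_t + d_t/ETA_D = 0
A_eq = np.zeros((T, n)); b_eq = np.zeros(T)
for t in range(T):
    A_eq[t, idx(3, t)] = 1.0
    if t > 0:
        A_eq[t, idx(3, t - 1)] = -1.0
    else:
        b_eq[t] = S0
    A_eq[t, idx(0, t)] = -ETA_C
    A_eq[t, idx(1, t)] = 1.0 / ETA_D

# Power balance (inequality): c_t - d_t - u_t <= solar_t - demand_t (solar can be curtailed)
A_ub = np.zeros((T, n)); b_ub = solar - demand
for t in range(T):
    A_ub[t, idx(0, t)] = 1.0
    A_ub[t, idx(1, t)] = -1.0
    A_ub[t, idx(2, t)] = -1.0

bounds = [(0, C_MAX)] * T + [(0, D_MAX)] * T + [(0, None)] * T + [(0, S_MAX)] * T
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("status:", res.message)
print("total cost:", round(res.fun, 3))
```

In the thesis the degradation term depends on the structure of the charge/discharge cycles; here it is reduced to a single throughput coefficient purely to show where such a cost enters the objective.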
Abstract:
Combinatorial Optimization is becoming ever more crucial these days. From the natural sciences to economics, passing through the administration of urban centers and personnel management, methodologies and algorithms with a strong theoretical background and consolidated real-world effectiveness are increasingly requested, in order to quickly find good solutions to complex strategic problems. Resource optimization is, nowadays, fundamental ground on which successful projects are built. From the theoretical point of view, Combinatorial Optimization rests on stable and strong foundations that allow researchers to face ever more challenging problems. From the application point of view, however, the rate of theoretical development cannot keep pace with that of modern hardware technologies, especially in the processor industry. In this work we propose new parallel algorithms, designed to exploit the new parallel architectures available on the market. We found that, by exposing the inherent parallelism of some solution techniques (such as Dynamic Programming), remarkable computational benefits can be obtained, lowering execution times by more than an order of magnitude and allowing instances of previously intractable size to be addressed. We approached four notable Combinatorial Optimization problems: the Packing Problem, the Vehicle Routing Problem, the Single Source Shortest Path Problem and a Network Design problem. For each of these problems we propose a collection of effective parallel solution algorithms, either for solving the full problem (Guillotine Cuts and SSSPP) or for enhancing a fundamental part of the solution method (VRP and ND). We support our claims by presenting computational results for all problems, either on standard benchmarks from the literature or, when possible, on data from real-world applications, where speed-ups of one order of magnitude are usually attained, not uncommonly scaling up to factors of 40x.
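As a small illustration of the data-parallel flavor of such algorithms, the sketch below relaxes all edges of a graph in one vectorized step per round, a Bellman-Ford-style scheme for the SSSP problem whose structure maps naturally to GPU kernels. The graph is a toy example, not one of the thesis benchmarks.

```python
# Data-parallel Bellman-Ford sketch for single-source shortest paths:
# each round relaxes every edge at once via a vectorized scatter-min.
import numpy as np

def parallel_bellman_ford(n_nodes, edges, weights, source=0):
    """edges: (m, 2) int array of (u, v); weights: (m,) array of edge costs."""
    dist = np.full(n_nodes, np.inf)
    dist[source] = 0.0
    u, v = edges[:, 0], edges[:, 1]
    for _ in range(n_nodes - 1):              # at most n-1 relaxation rounds
        candidate = dist[u] + weights         # tentative distance through each edge
        new_dist = dist.copy()
        np.minimum.at(new_dist, v, candidate) # scatter-min over edge heads
        if np.allclose(new_dist, dist):
            break                             # early exit: no distance improved
        dist = new_dist
    return dist

edges = np.array([[0, 1], [0, 2], [1, 2], [2, 3], [1, 3]])
weights = np.array([4.0, 1.0, 2.0, 5.0, 10.0])
print(parallel_bellman_ford(4, edges, weights))   # -> [0. 4. 1. 6.]
```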
Abstract:
This doctorate was funded by the Regione Emilia Romagna, within a Spinner PhD project coordinated by the University of Parma and involving the universities of Bologna, Ferrara and Modena. The aims of the project were: (i) the production of polymorphs, solvates, hydrates and co-crystals of active pharmaceutical ingredients (APIs) and agrochemicals with green chemistry methods; and (ii) the optimization of molecular and crystalline forms of APIs and pesticides in relation to activity, bioavailability and patentability. In the last decades, a growing interest in the solid-state properties of drugs, in addition to their solution chemistry, has blossomed. The achievement of the desired and/or the more stable polymorph during the production process can be a challenge for industry. The study of crystalline forms can be a valuable step toward producing new polymorphs and/or co-crystals with better physical-chemical properties, such as solubility, permeability, thermal stability, habit, bulk density, compressibility, friability, hygroscopicity and dissolution rate, in view of potential industrial applications. Selected APIs were studied and the relationship between their crystal structure and properties investigated, both in the solid state and in solution. Polymorph screening and the synthesis of solvates and molecular/ionic co-crystals were performed according to green chemistry principles. Part of this project was developed in collaboration with chemical/pharmaceutical companies such as BASF (Germany) and UCB (Belgium). We focused on the optimization of the conditions and parameters of crystallization processes (additives, concentration, temperature), and on the synthesis and characterization of ionic co-crystals. Moreover, during a four-month research period in the laboratories of Professor Nair Rodriguez-Hormedo (University of Michigan), the equilibrium stability in aqueous solution of ionic co-crystals (ICCs) of the API piracetam was investigated, to understand the relationship between their solid-state and solution properties, in view of the future design of new crystalline drugs with predefined solid and solution properties.
Abstract:
A servo-controlled automatic machine can perform tasks that involve the synchronized actuation of a significant number of servo-axes, namely one-degree-of-freedom (DoF) electromechanical actuators. Each servo-axis comprises a servo-motor, a mechanical transmission and an end-effector, and is responsible for generating the desired motion profile and providing the power required to achieve the overall task. The design of such a machine must involve a detailed study from a mechatronic viewpoint, due to its combined electrical and mechanical nature. The first objective of this thesis is the development of an overarching electromechanical model of a servo-axis. Every loss source is taken into account, be it mechanical or electrical. The mechanical transmission is modeled by means of a sequence of lumped-parameter blocks. The electric model of the motor and the inverter accounts for winding losses, iron losses and controller switching losses. No experimental characterization is needed to implement the electric model, since its parameters are inferred from the data available in commercial catalogs. With the global model at our disposal, the second objective of this work is to perform an optimization analysis, in particular for the selection of the motor-reducer unit. The optimal transmission ratios that minimize several objective functions are found, and the optimization process is repeated for each candidate motor. We then present a novel method in which the discrete set of available motors is extended to a continuous domain by fitting manufacturer data. The problem becomes a two-dimensional nonlinear optimization subject to nonlinear constraints, whose solution gives the optimal choice of the motor-reducer system. The presented electromechanical model, along with the implemented optimization algorithms, forms a complete and powerful simulation tool for servo-controlled automatic machines. The tool allows a wide range of electric and mechanical parameters, and the behavior of the system under different operating conditions, to be determined.
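The sketch below gives a hedged, simplified version of the continuous motor-reducer selection idea: the transmission ratio and a continuous "motor size" variable are optimized together, with motor inertia and rated torque taken from invented power-law fits that stand in for catalog-data fits. The load profile, fit coefficients and constraints are illustrative assumptions, not the thesis formulation.

```python
# Hedged sketch: motor-reducer selection as a 2-D nonlinear program in (ratio n, size s),
# minimizing RMS motor torque subject to illustrative rated-torque constraints.
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0, 1, 200)
load_speed = 10 * np.sin(2 * np.pi * t)           # rad/s, example load motion profile
load_accel = np.gradient(load_speed, t)           # rad/s^2
load_torque = 5.0 + 2.0 * np.sin(2 * np.pi * t)   # Nm, example load torque

def motor_inertia(s):       # kg m^2, assumed catalog fit J_m(s) = a * s^b
    return 1e-4 * s ** 1.5

def motor_rated_torque(s):  # Nm, assumed catalog fit T_rated(s) = c * s
    return 2.0 * s

def motor_torque(x):
    n, s = x
    # reflected dynamics: motor speed = n * load speed, load torque divided by n
    return motor_inertia(s) * n * load_accel + load_torque / n

def rms_torque(x):
    tau = motor_torque(x)
    return np.sqrt(np.mean(tau ** 2))

cons = [
    {"type": "ineq", "fun": lambda x: motor_rated_torque(x[1]) - rms_torque(x)},
    {"type": "ineq", "fun": lambda x: 3 * motor_rated_torque(x[1]) - np.max(np.abs(motor_torque(x)))},
]
res = minimize(rms_torque, x0=[10.0, 3.0], bounds=[(1, 200), (0.5, 20)],
               constraints=cons, method="SLSQP")
print("optimal ratio n = %.1f, motor size s = %.2f, RMS torque = %.2f Nm"
      % (res.x[0], res.x[1], res.fun))
```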
Abstract:
Several decision and control tasks in cyber-physical networks can be formulated as large-scale optimization problems with coupling constraints. In these "constraint-coupled" problems, each agent is associated with a local decision variable, subject to individual constraints. This thesis explores the use of primal decomposition techniques to develop tailored distributed algorithms for this challenging set-up over graphs. We first develop a distributed scheme for convex problems over random time-varying graphs with non-uniform edge probabilities. The approach is then extended to unknown cost functions estimated online. Subsequently, we consider Mixed-Integer Linear Programs (MILPs), which are of great interest in smart grid control and cooperative robotics. We propose a distributed methodological framework to compute a feasible solution to the original MILP, with guaranteed suboptimality bounds, and extend it to general nonconvex problems. Monte Carlo simulations highlight that the approach substantially advances the state of the art, making it a valuable solution for new toolboxes addressing large-scale MILPs. We then propose a distributed Benders decomposition algorithm for asynchronous, unreliable networks. This framework is then used as a starting point to develop distributed methodologies for a microgrid optimal control scenario. We develop an ad-hoc distributed strategy for a stochastic set-up with renewable energy sources, and show a case study with samples generated using Generative Adversarial Networks (GANs). We then introduce a software toolbox named ChoiRbot, based on the novel Robot Operating System 2, and show how it facilitates simulations and experiments in distributed multi-robot scenarios. Finally, we consider a Pickup-and-Delivery Vehicle Routing Problem, for which we design a distributed method inspired by the approach for general MILPs, and show its efficacy through simulations and experiments in ChoiRbot with ground and aerial robots.
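To illustrate the primal decomposition idea behind constraint-coupled problems, the sketch below allocates a shared resource budget among agents and updates the allocations with a subgradient step built from the local dual variables. It is a deliberately centralized toy (the graph-based, distributed allocation update of the thesis is omitted), and the local costs and budget are illustrative assumptions.

```python
# Toy primal decomposition: N agents with costs (x_i - c_i)^2 share a budget sum_i y_i = B.
# Each agent solves its local problem for a given allocation y_i; the master adjusts the
# allocations using the local dual variables (subgradients of the local value functions).
import numpy as np

rng = np.random.default_rng(0)
N, B = 5, 6.0
c = rng.uniform(1.0, 3.0, N)          # each agent would ideally like x_i = c_i
y = np.full(N, B / N)                 # initial resource allocation, sums to B

def local_solve(ci, yi):
    """min (x - ci)^2  s.t. 0 <= x <= yi; returns (x*, dual variable of x <= yi)."""
    x = np.clip(ci, 0.0, yi)
    lam = max(0.0, 2.0 * (ci - yi))   # active only when the allocation is binding
    return x, lam

for k in range(200):
    lam = np.array([local_solve(c[i], y[i])[1] for i in range(N)])
    step = 0.5 / (k + 1)
    y = y + step * lam                # subgradient step on the allocations
    y += (B - y.sum()) / N            # re-project onto the budget sum_i y_i = B

x = np.array([local_solve(c[i], y[i])[0] for i in range(N)])
print("allocations:", np.round(y, 3))
print("local solutions:", np.round(x, 3), " budget used:", round(x.sum(), 3))
```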
Abstract:
The Ph.D. project focuses on the modelling of the soil-water dynamics inside an instrumented embankment section along the Secchia River (Cavezzo, MO) in the period from 2017 to 2018, and on the quantification of the performance of the direct and inverse simulations. The commercial code Hydrus2D by Pc-Progress has been chosen to run the direct simulations. Different soil-hydraulic models have been adopted and compared. The parameters of the different hydraulic models are calibrated using a local optimization method based on the Levenberg-Marquardt algorithm implemented in the Hydrus package. The calibration program is carried out using different datasets of observation points, different weighting distributions, different combinations of optimized parameters and different initial sets of parameters. The final goal is an in-depth study of the potential and limits of inverse analysis when applied to a complex geotechnical problem such as this case study. The second part of the research focuses on the effects of plant roots and soil-vegetation-atmosphere interaction on the spatial and temporal distribution of pore water pressure in soil. The investigated soil belongs to the West Charlestown Bypass embankment, Newcastle, Australia, which showed shallow instabilities in past years; long-stem planting is intended to stabilize the slope. The chosen plant species is Malaleuca Styphelioides, native to eastern Australia. The research activity included the design and realization of a specific large-scale apparatus for laboratory experiments. Local suction measurements at given intervals of depth and radial distance from the root bulb are recorded within the vegetated soil mass under controlled boundary conditions. The experiments are then reproduced numerically using the commercial code Hydrus 2D. Laboratory data are used to calibrate the root water uptake (RWU) parameters and the parameters of the hydraulic model.
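The snippet below illustrates the inverse-analysis idea in miniature: van Genuchten retention parameters are calibrated against observed water contents with a Levenberg-Marquardt least-squares fit, analogous in spirit to the Hydrus inverse module. The "observations" are synthetic and the fixed theta_r/theta_s values are assumptions, not data from the thesis.

```python
# Hedged sketch: Levenberg-Marquardt calibration of van Genuchten parameters (alpha, n)
# from synthetic water-retention observations.
import numpy as np
from scipy.optimize import least_squares

THETA_R, THETA_S = 0.06, 0.41          # residual / saturated water content (held fixed)

def van_genuchten(h, alpha, n):
    """Water content as a function of suction head h [cm], h > 0."""
    m = 1.0 - 1.0 / n
    return THETA_R + (THETA_S - THETA_R) / (1.0 + (alpha * h) ** n) ** m

# synthetic "measured" retention data with a little noise
h_obs = np.array([10., 30., 60., 100., 300., 1000., 3000.])
rng = np.random.default_rng(1)
theta_obs = van_genuchten(h_obs, alpha=0.02, n=1.6) + rng.normal(0, 0.004, h_obs.size)

def residuals(p):
    alpha, n = p
    return van_genuchten(h_obs, alpha, n) - theta_obs

fit = least_squares(residuals, x0=[0.05, 1.3], method='lm')
print("calibrated alpha = %.4f 1/cm, n = %.3f" % tuple(fit.x))
```

In the actual calibration program the residuals are built from transient observation-point data and different weighting schemes, but the least-squares structure is the same.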
Abstract:
This thesis focuses on the modelling and optimization of the dry grinding process for the production of automotive gears. A FEM model was implemented with the aim of predicting process temperatures and preventing grinding thermal defects on the material surface. In particular, the model was conceived to facilitate the choice of the grinding parameters during the design and execution of the dry-hard finishing process developed and patented by the company Samputensili Machine Tools (EMAG Group) for automotive gears. The proposed model allows the influence of the technological parameters, including the grinding wheel specifications, to be analysed. Automotive gears finished by the dry-hard finishing process are expected to reach the same quality target as gears finished through the conventional wet grinding process, with the advantage of reducing production costs and environmental pollution. However, the grinding process involves very high values of specific pressure and heat absorbed by the material; removing the lubricant therefore increases the risk of thermal defects. An incorrect choice of the process parameter set could cause grinding burns, which inevitably affect the mechanical performance of the ground component. A modelling phase of the process can therefore help enhance the mechanical characteristics of the components and avoid waste during production. A hierarchical FEM model was implemented to predict dry grinding temperatures, consisting of the interconnection of a microscopic and a macroscopic approach. A microscopic single-grain grinding model was linked to a macroscopic thermal model to predict the dry grinding process temperatures and thus forecast the thermal cycle induced by the chosen process parameters and grinding wheel specification. Good agreement between the model and the experiments was achieved, making dry-hard finishing an efficient and reliable technology to implement in the automotive gear industry.
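As a rough illustration of the macroscopic thermal side of such a model, the sketch below runs a 1-D explicit finite-difference conduction simulation in which a short surface heat-flux pulse stands in for the grinding contact. Material properties, flux magnitude and timings are generic steel-like assumptions, not the values identified in the thesis.

```python
# 1-D explicit finite-difference sketch: transient conduction under a surface heat-flux pulse.
import numpy as np

k, rho, cp = 45.0, 7800.0, 480.0          # steel-like conductivity, density, heat capacity
alpha = k / (rho * cp)                    # thermal diffusivity [m^2/s]
L, nx = 2e-3, 101                         # 2 mm of depth, number of grid points
dx = L / (nx - 1)
dt = 0.4 * dx ** 2 / alpha                # within the explicit stability limit (0.5)
q_surface = 2e7                           # surface heat flux during contact [W/m^2]
t_contact, t_end = 5e-3, 2e-2             # heat-pulse length and simulated time [s]

T = np.full(nx, 20.0)                     # initial temperature field [degC]
t = 0.0
while t < t_end:
    q = q_surface if t < t_contact else 0.0
    T_new = T.copy()
    T_new[1:-1] = T[1:-1] + alpha * dt / dx ** 2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    # energy balance on the surface half-cell: conduction into the body plus imposed flux
    T_new[0] = T[0] + dt * (q - k * (T[0] - T[1]) / dx) / (rho * cp * dx / 2)
    T_new[-1] = 20.0                      # far-field node kept at ambient temperature
    T = T_new
    t += dt

print("peak surface temperature ~ %.0f degC" % T.max())
```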
Abstract:
The present thesis focuses on wave energy, a particular kind of ocean energy, and is based on the activity carried out during the EU project SEA TITAN. The main scope of this work is the design of a power electronics section for an innovative wave energy extraction system based on a switched-reluctance machine. In the first chapter, the general features of marine wave energy harvesting are treated. The concept of the Wave Energy Converter (WEC) is introduced, together with the mathematical description of waves, their characterization and measurement, the WEC classification, the operating principles and the standardization framework. Detailed considerations on the environmental impact are also presented, and the SEA TITAN project is briefly described. The second chapter is dedicated to the technical aspects of the SEA TITAN project, such as the operating principle, the performance optimization carried out in the project and the main innovations, as well as demonstrations of the behavior of the generator and its control. In the third chapter, the power electronic converters of SEA TITAN are described, and the design choices, procedures and calculations are shown, with further insight into the application given by the analysis of the MATLAB Simulink model of the system and its control scheme. Experimental tests are reported in the fourth chapter, with graphs and illustrations of the power electronic apparatus interfaced with the real machine. Finally, the conclusion in the fifth chapter offers a global overview of the project and opens further development pathways.
Abstract:
In Cystic Fibrosis (CF), the deletion of phenylalanine 508 (F508del) in the CFTR anion channel is associated with misfolding and defective gating of the mutant protein. Among the known proteins involved in CFTR processing, one of the most promising drug targets is the ubiquitin ligase RNF5, which normally promotes F508del-CFTR degradation. In this context, a small-molecule RNF5 inhibitor is expected to chemically mimic a condition of RNF5 silencing, thus preventing mutant CFTR degradation and causing its stabilization and plasma membrane trafficking. Hence, by exploiting a virtual screening (VS) campaign, the hit compound inh-2 was discovered as the first-in-class inhibitor of RNF5. Evaluation of inh-2 efficacy on CFTR rescue showed that it efficiently decreases ubiquitination of mutant CFTR and increases chloride current in human primary bronchial epithelia. Based on the promising biological results obtained with inh-2, this thesis reports the structure-based design of potential RNF5 inhibitors with improved potency and efficacy. The optimization of general synthetic strategies gave access to a library of analogues of the 1,2,4-thiadiazol-5-ylidene inh-2 for SAR investigation. The new analogues were tested for their corrector activity in CFBE41o- cells using the microfluorimetric HS-YFP assay as a primary screen. The effect of the putative RNF5 inhibitors on proliferation, apoptosis and the formation of autophagic vacuoles was then evaluated. Some of the new analogues significantly increased the basal level of autophagy, reproducing the RNF5 silencing effect in cells. Among them, one compound also displayed a greater rescue of the F508del-CFTR trafficking defect than inh-2. Our preliminary results suggest that the 1,2,4-thiadiazolylidene could be a suitable scaffold for the discovery of potential RNF5 inhibitors able to rescue mutant CFTRs. Biological tests are still ongoing to acquire in-depth knowledge of the mechanism of action and therapeutic relevance of this unprecedented pharmacological strategy.
Abstract:
The research project aims to improve the Design for Additive Manufacturing of metal components. Firstly, the scenario of Additive Manufacturing is depicted, describing its role in Industry 4.0, with a particular focus on Metal Additive Manufacturing technologies and Automotive sector applications. Secondly, the state of the art in Design for Additive Manufacturing is described, contextualizing the methodologies and classifying guidelines, rules and approaches. The key phases of product design and process design needed to achieve lightweight functional designs and reliable processes are examined in depth, together with the Computer-Aided Technologies that support the implementation of these approaches. On this basis, a general Design for Additive Manufacturing workflow based on product and process optimization has been systematically defined. From the analysis of the state of the art, a holistic approach has been considered fundamental, and the use of integrated product-process design platforms has thus been identified as a key element for its development. Accordingly, a computer-based methodology exploiting integrated tools and numerical simulations to drive product and process optimization has been proposed. CAD platform-based approaches have been validated, and the potential offered by integrated tools has been evaluated. Concerning product optimization, systematic approaches to integrate topology optimization in the design have been proposed and validated through the product optimization of an automotive case study. Concerning process optimization, process simulation techniques are developed to prevent manufacturing flaws related to the high thermal gradients of metal processes, with case studies validating the results against experimental data and an application to the process optimization of an automotive case study. Finally, an example of product and process design through the proposed simulation-driven integrated approach is provided to prove the method's suitability for the effective redesign of Additive Manufacturing-based high-performance metal products. The results are then outlined, and further developments are discussed.
Abstract:
Several decision and control tasks involve networks of cyber-physical systems that need to be coordinated and controlled according to a fully-distributed paradigm involving only local communications, without any central unit. This thesis focuses on distributed optimization and games over networks from a system-theoretical perspective. In the addressed frameworks, we consider agents communicating only with neighbors and running distributed algorithms with optimization-oriented goals. The distinctive feature of this thesis is to interpret these algorithms as dynamical systems and, thus, to resort to powerful system-theoretical tools for both their analysis and design. We first address the so-called consensus optimization setup. In this context, we provide an original system-theoretical analysis of the well-known Gradient Tracking algorithm in the general case of nonconvex objective functions. Then, inspired by this method, we provide and study a series of extensions to improve performance and to deal with more challenging settings, such as the derivative-free framework or the online one. Subsequently, we tackle the recently emerged framework named distributed aggregative optimization. For this setup, we develop and analyze novel schemes to handle (i) online instances of the problem, (ii) "personalized" optimization frameworks, and (iii) feedback optimization settings. Finally, we adopt a system-theoretical approach to address aggregative games over networks, both in the presence and in the absence of linear coupling constraints among the decision variables of the players. In this context, we design and inspect novel fully-distributed algorithms, based on tracking mechanisms, that outperform state-of-the-art methods in finding the Nash equilibrium of the game.
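For readers unfamiliar with Gradient Tracking, the sketch below shows its standard form on a toy consensus optimization problem: agents with scalar quadratic costs cooperate over a ring graph with doubly stochastic weights, each maintaining a local estimate and a tracker of the average gradient. Problem data, graph and step size are illustrative assumptions.

```python
# Gradient Tracking sketch: N agents minimize sum_i 0.5*a_i*(x - b_i)^2 over a ring graph.
import numpy as np

rng = np.random.default_rng(3)
N = 6
a = rng.uniform(1.0, 2.0, N)
b = rng.uniform(-1.0, 1.0, N)
grad = lambda x: a * (x - b)          # element-wise local gradients

# doubly stochastic weights on a ring (self-loop and two neighbors, weight 1/3 each)
W = np.zeros((N, N))
for i in range(N):
    W[i, i] = 1 / 3
    W[i, (i - 1) % N] = W[i, (i + 1) % N] = 1 / 3

alpha = 0.1
x = rng.normal(size=N)                # local estimates
s = grad(x)                           # gradient trackers, initialized at the local gradients
for _ in range(300):
    x_new = W @ x - alpha * s         # consensus step plus tracked-gradient descent
    s = W @ s + grad(x_new) - grad(x) # dynamic average consensus on the gradients
    x = x_new

x_star = np.sum(a * b) / np.sum(a)    # minimizer of the sum of the quadratics
print("consensus estimate:", np.round(x, 4))
print("true minimizer:    ", round(x_star, 4))
```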
Abstract:
Conventional chromatographic columns are packed with porous beads by the universally employed slurry-packing method. The lack of precise control of the particle size distribution, shape and position inside the column has dramatic effects on the separation efficiency. In the first part of the thesis, an ordered, three-dimensional pillar-array structure was designed with CAD software. Several columns, characterized by different fluid distributors and bed lengths, were produced with a stereolithographic 3D printer and compared in terms of pressure drop and height equivalent to a theoretical plate (HETP). To prevent the release of unwanted substances and to provide a surface for immobilizing a ligand, the pillars were coated with one or more of the following materials: titanium dioxide, nanofibrillated cellulose (NFC) and polystyrene. The external NFC layer was functionalized with Cibacron Blue, and the dynamic binding capacity of the column was measured by performing three chromatographic cycles using bovine serum albumin (BSA) as the target molecule. The second part of the thesis deals with Covid-19 pandemic-related research activities. In early 2020, due to the pandemic outbreak, surgical face masks became an essential non-pharmaceutical intervention to limit the spread of the virus. To address the consequent shortage and to support the reconversion of Italian industry, in late March 2020 a multidisciplinary group of the University of Bologna created the first Italian laboratory able to perform all the tests required for the evaluation and certification of surgical masks. More than 1200 tests were performed on about 350 prototypes, according to the standard EN 14683:2019. The results were analyzed to define the best material properties and mask compositions for the production of masks with excellent efficiency. To optimize the usage of surgical masks and to reduce their environmental burden, the variation of their performance over time of use was investigated in order to determine their maximum lifetime.
Abstract:
The first topic analyzed in the thesis is Neural Architecture Search (NAS). I will focus on two different tools that I developed: one to optimize the architecture of Temporal Convolutional Networks (TCNs), a recently emerged convolutional model for time-series processing, and one to optimize the data precision of tensors inside CNNs. The first proposed NAS explicitly targets the optimization of the most characteristic architectural parameters of TCNs, namely dilation, receptive field, and the number of features in each layer; note that this is the first NAS that explicitly targets these networks. The second proposed NAS instead focuses on finding the most efficient data format for a target CNN, at the granularity of the layer filter. Note that applying these two NASes in sequence allows an "application designer" to minimize the structure of the neural network employed, reducing the number of operations or the memory usage of the network. The second topic described is the optimization of neural network deployment on edge devices. Importantly, exploiting the scarce resources of edge platforms is critical for efficient NN execution on MCUs. To do so, I will introduce DORY (Deployment Oriented to memoRY), an automatic tool to deploy CNNs on low-cost MCUs. In different steps, DORY can automatically manage the different memory levels inside the MCU, offload the computation workload (i.e., the different layers of a neural network) to dedicated hardware accelerators, and automatically generate ANSI C code that orchestrates off- and on-chip transfers together with the computation phases. On top of this, I will introduce two optimized computation libraries that DORY can exploit to efficiently deploy TCNs and Transformers at the edge. I conclude the thesis with two applications in bio-signal analysis, namely heart rate tracking and sEMG-based gesture recognition.
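A small helper, shown below, illustrates the receptive-field arithmetic that a TCN-oriented NAS has to reason about: for a stack of causal convolutions with kernel sizes k_l and dilations d_l, the receptive field is 1 + sum_l (k_l - 1) * d_l. The layer configuration in the example is illustrative, not an architecture from the thesis.

```python
# Receptive field of a stack of dilated causal convolutions (TCN-style).
def tcn_receptive_field(kernel_sizes, dilations):
    assert len(kernel_sizes) == len(dilations)
    return 1 + sum((k - 1) * d for k, d in zip(kernel_sizes, dilations))

# e.g. 4 layers, kernel size 3, dilations doubling at each layer
print(tcn_receptive_field([3, 3, 3, 3], [1, 2, 4, 8]))   # -> 31
```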