15 results for Shape Design Optimization
in AMS Tesi di Laurea - Alm@DL - Università di Bologna
Abstract:
The aim of Tissue Engineering is to develop biological substitutes that restore lost morphological and functional features of diseased or damaged portions of organs. Recently, computer-aided technology has received considerable attention in the area of tissue engineering, and the advance of additive manufacturing (AM) techniques has significantly improved control over the pore network architecture of tissue engineering scaffolds. To regenerate tissues more efficiently, an ideal scaffold should have appropriate porosity and pore structure. More sophisticated porous configurations, with more highly ordered pore-network architectures and scaffolding structures that mimic the intricate architecture and complexity of native organs and tissues, are therefore required. This study adopts a macro-structural shape design approach to the production of open porous materials (titanium foams), which uses spatial periodicity as a simple way to generate the models. Among the various pore architectures that have been studied, this work modelled the pore structure with triply periodic minimal surfaces (TPMS) for the construction of tissue engineering scaffolds. TPMS are shown to be a versatile source of biomorphic scaffold design. A set of tissue scaffolds was designed using the TPMS-based unit cell libraries. The TPMS-based titanium foams were meant to be 3D printed with the predicted geometry, microstructure and, consequently, mechanical properties. Through a finite element analysis (FEA), the mechanical properties of the designed scaffolds were determined in compression and analyzed in terms of their porosity and assemblies of unit cells. The purpose of this work was to investigate the mechanical performance of the TPMS models in order to identify the best compromise between the mechanical and geometrical requirements of the scaffolds.
The intention was to predict the structural modulus in open porous materials via structural design of interconnected three-dimensional lattices, hence optimising geometrical properties. With the aid of FEA results, it is expected that the effective mechanical properties for the TPMS-based scaffold units can be used to design optimized scaffolds for tissue engineering applications. Regardless of the influence of fabrication method, it is desirable to calculate scaffold properties so that the effect of these properties on tissue regeneration may be better understood.
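As an illustration of how a TPMS-based pore architecture can be generated and its porosity controlled, the sketch below samples the gyroid, one of the classic triply periodic minimal surfaces, on a voxel grid of one periodic unit cell. The level-set offset `c` and the grid resolution are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def gyroid(x, y, z, c=0.0):
    """Gyroid level-set function; returns f(x, y, z) - c.
    The solid phase is taken where the returned value is negative."""
    return (np.cos(x) * np.sin(y)
            + np.cos(y) * np.sin(z)
            + np.cos(z) * np.sin(x)) - c

def porosity(c=0.0, n=60):
    """Estimate the porosity of one periodic unit cell by voxel sampling."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    x, y, z = np.meshgrid(t, t, t, indexing="ij")
    solid = gyroid(x, y, z, c) < 0.0
    return 1.0 - solid.mean()

# At c = 0 the gyroid divides the cell into two congruent halves,
# so the estimated porosity is close to 0.5; shifting c trades
# solid fraction against void fraction.
```

Raising `c` grows the solid phase (lower porosity) and lowering it shrinks the solid phase, which is the basic handle a TPMS-based design has on the pore network.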
Abstract:
In a world focused on the need to produce energy for a growing population while reducing atmospheric emissions of carbon dioxide, organic Rankine cycles represent a solution to fulfil this goal. This study focuses on the design and optimization of axial-flow turbines for organic Rankine cycles. From the turbine designer's point of view, most of these fluids exhibit peculiar characteristics, such as a small enthalpy drop, a low speed of sound and a large expansion ratio. A computational model for the prediction of axial-flow turbine performance is developed and validated against experimental data. The model calculates turbine performance within an accuracy range of ±3%. The design procedure is coupled with an optimization process, performed using a genetic algorithm in which the turbine total-to-static efficiency is the objective function. The computational model is integrated in a wider analysis of thermodynamic cycle units by providing the optimal turbine design. First, the calculation routine is applied in the context of the Draugen offshore platform, where three heat recovery systems are compared. The turbine performance is investigated for three competing bottoming cycles: an organic Rankine cycle (operating with cyclopentane), a steam Rankine cycle and an air bottoming cycle. Findings indicate the air turbine as the most efficient solution (total-to-static efficiency = 0.89), while the cyclopentane turbine proves to be the most flexible and compact technology (2.45 ton/MW and 0.63 m3/MW). Furthermore, the study shows that, for the organic and steam Rankine cycles, the optimal design configurations for the expanders do not coincide with those of the thermodynamic cycles. This suggests that a more accurate analysis can be obtained by including the computational model in the simulations of the thermodynamic cycles. Afterwards, a performance analysis is carried out by comparing three organic fluids: cyclopentane, MDM and R245fa.
Results suggest MDM as the most effective fluid from the turbine performance viewpoint (total-to-total efficiency = 0.89). On the other hand, cyclopentane guarantees a greater net power output of the organic Rankine cycle (P = 5.35 MW), while R245fa represents the most compact solution (1.63 ton/MW and 0.20 m3/MW). Finally, the influence of the composition of an isopentane/isobutane mixture on both the thermodynamic cycle performance and the expander isentropic efficiency is investigated. Findings show how the mixture composition affects the turbine efficiency and so the cycle performance. Moreover, the analysis demonstrates that the use of binary mixtures leads to an enhancement of the thermodynamic cycle performance.
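To illustrate the kind of optimization loop described above, here is a minimal real-coded genetic algorithm maximizing a toy quadratic surrogate of total-to-static efficiency. The surrogate function, the two normalized design variables and all GA settings are invented for illustration; they are not the thesis' mean-line turbine model.

```python
import random

def efficiency(x):
    """Toy surrogate for total-to-static efficiency as a function of two
    normalized design variables (hypothetical loading and flow coefficients).
    Peak value 0.89 at (0.5, 0.6) -- illustrative numbers only."""
    psi, phi = x
    return 0.89 - 1.5 * (psi - 0.5) ** 2 - 2.0 * (phi - 0.6) ** 2

def genetic_optimize(fitness, bounds, pop_size=40, generations=60, seed=1):
    """Elitist GA with blend crossover and Gaussian mutation."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi), rng.uniform(lo, hi)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 4]                 # selection: keep top quarter
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            w = rng.random()
            child = [w * ai + (1 - w) * bi for ai, bi in zip(a, b)]  # crossover
            if rng.random() < 0.2:                   # occasional mutation
                i = rng.randrange(2)
                child[i] = min(hi, max(lo, child[i] + rng.gauss(0.0, 0.05)))
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = genetic_optimize(efficiency, bounds=(0.0, 1.0))
```

In the thesis the fitness evaluation is the full turbine performance model rather than a closed-form surrogate, but the selection/crossover/mutation loop has the same shape.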
Abstract:
The aim of this work is to present a general overview of the state of the art in design for uncertainty, with a focus on aerospace structures. In particular, simulations on an FCCZ lattice cell and on the profile shape of a nozzle are performed. Optimization under uncertainty is characterized by the need to make decisions without complete knowledge of the problem data. When dealing with a complex problem, non-linearity, or optimization, two main issues arise: uncertainty about the feasibility of the solution and uncertainty about the objective function value. The first part examines Design Of Experiments (DOE) methodologies, Uncertainty Quantification (UQ) and optimization under uncertainty. The second part shows an application of these theories through commercial software. Nowadays, multiobjective optimization on highly non-linear problems can be a powerful tool to approach new concept solutions or to develop cutting-edge designs. In this thesis an effective improvement has been achieved on a rocket nozzle. Future work could include the introduction of multi-scale modelling, a multiphysics approach and any strategy useful to simulate the real operating conditions of the studied design as closely as possible.
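A minimal sketch of the propagation step that UQ builds on: Monte Carlo sampling of uncertain inputs through a stand-in performance model, summarizing the output distribution. The model and the input distributions below are hypothetical, chosen only to show the mechanics.

```python
import random
import statistics

def thrust_model(area, pressure):
    """Toy performance model standing in for an expensive nozzle simulation."""
    return area * pressure

def monte_carlo_uq(model, n=20000, seed=7):
    """Propagate Gaussian input uncertainty through the model and
    return the mean and standard deviation of the output."""
    rng = random.Random(seed)
    samples = [
        model(rng.gauss(1.0, 0.05),    # uncertain area: mean 1.0, std 0.05
              rng.gauss(100.0, 5.0))   # uncertain pressure: mean 100, std 5
        for _ in range(n)
    ]
    return statistics.mean(samples), statistics.stdev(samples)

out_mean, out_std = monte_carlo_uq(thrust_model)
```

In practice DOE and surrogate models replace the brute-force loop when each model evaluation is expensive, but the idea — a distribution over outputs instead of a single nominal value — is the same.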
Abstract:
Constant developments in the field of offshore wind energy have increased the range of water depths at which wind farms are planned to be installed. Therefore, in addition to the monopile support structures suitable in shallow waters (up to 30 m), different types of support structures, able to withstand severe sea conditions at greater water depths, have been developed. For water depths above 30 m, the jacket is one of the preferred support types. The jacket is a lightweight support structure which, in combination with the complex nature of environmental loads, is prone to highly dynamic behavior. As a consequence, high stresses with great variability in time can be observed in all structural members. The highest concentration of stresses occurs in the joints, due to their nature (structural discontinuities) and to the notches along the welds present in the joints. This makes them the weakest elements of the jacket in terms of fatigue. In the numerical modeling of jackets for offshore wind turbines, a reduction of local stresses at the chord-brace joints, and consequently an optimization of the model, can be achieved by implementing joint flexibility in the chord-brace joints. Therefore, this work studies the influence of joint flexibility on the fatigue damage in the chord-brace joints of a numerical jacket model subjected to advanced load simulations.
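Fatigue damage in such joints is commonly assessed by combining an S-N curve with the Palmgren-Miner linear damage rule. The sketch below shows that arithmetic with an illustrative single-slope S-N curve and an invented stress-range histogram; real offshore design curves (e.g. in DNV-RP-C203) are bilinear and thickness-corrected, and the load spectrum comes from the dynamic simulations.

```python
def sn_cycles_to_failure(stress_range, log_a=12.48, m=3.0):
    """Single-slope S-N curve N = a / S^m.
    log_a and m are illustrative DNV-style parameters, not thesis values."""
    return 10.0 ** log_a / stress_range ** m

def miner_damage(histogram):
    """Palmgren-Miner damage sum D = sum(n_i / N_i) over the stress spectrum.
    histogram: list of (stress_range_MPa, cycle_count) pairs."""
    return sum(n / sn_cycles_to_failure(s) for s, n in histogram)

# Invented lifetime load spectrum for one chord-brace hot spot.
spectrum = [(60.0, 2.0e6), (40.0, 8.0e6), (20.0, 5.0e7)]
damage = miner_damage(spectrum)
# damage < 1.0 means the joint survives this spectrum under Miner's rule.
```

Joint flexibility lowers the local stress ranges entering the spectrum, which, through the cubic S-N exponent, reduces the damage sum disproportionately.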
Abstract:
The aim of the thesis is to design and verify a doubler for the Airbus A350XWB cargo door surround. The design was carried out in Catia, while the doubler was verified with Patran and Nastran.
Abstract:
Globalization has increased the pressure on organizations and companies to operate in the most efficient and economical way. This tendency leads companies to concentrate more and more on their core businesses and to outsource less profitable departments and services to reduce costs. In contrast to earlier times, companies are highly specialized and have a low real net output ratio. To be able to provide consumers with the right products, these companies have to collaborate with other suppliers and form large supply chains. A side effect of large supply chains is high stocks and stockholding costs. This fact has led to the rapid spread of Just-in-Time logistic concepts aimed at minimizing stock while keeping product availability high. These competing goals, minimal stock and simultaneous high product availability, call for high availability of the production systems, so that an incoming order can be processed immediately. Besides design aspects and the quality of the production system, maintenance has a strong impact on production system availability. In the last decades, there have been many attempts to create maintenance models for availability optimization. Most of them concentrated on the availability aspect only, without incorporating further aspects such as logistics and the profitability of the overall system. However, a production system operator's main intention is to optimize the profitability of the production system, not its availability. Thus, classic models, limited to representing and optimizing maintenance strategies in the light of availability, fail. A novel approach, incorporating all financially relevant processes of and around a production system, is needed. The proposed model is subdivided into three parts: a maintenance module, a production module and a connection module. This subdivision provides easy maintainability and simple extensibility.
Within these modules, all aspects of the production process are modeled. The main part of the work lies in the extended maintenance and failure module, which represents different maintenance strategies and also incorporates the effects of over-maintaining and failed maintenance (maintenance-induced failures). Order release and seizing of the production system are modeled in the production part. Due to computational power limitations, it was not possible to run the simulation and the optimization with the fully developed production model. Thus, the production model was reduced to a black box with a lower degree of detail.
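As a toy illustration of optimizing maintenance for profitability rather than availability, the sketch below simulates a single machine with wear-out (Weibull) failures under a fixed preventive-maintenance interval and returns the resulting profit. All rates, times and costs are invented; the thesis' three-module model is far richer than this single-machine loop.

```python
import random

def simulate_profit(pm_interval, horizon=10000.0, seed=3,
                    repair_time=20.0, pm_time=4.0,
                    revenue_rate=100.0, repair_cost=5000.0, pm_cost=400.0):
    """Simulate one machine over `horizon` hours. The machine is renewed
    (as good as new) after every repair or preventive maintenance (PM).
    Failures follow a Weibull wear-out law; all parameters are illustrative."""
    rng = random.Random(seed)
    t, profit = 0.0, 0.0
    while t < horizon:
        ttf = rng.weibullvariate(150.0, 2.5)   # time to failure, shape > 1: wear-out
        if ttf < pm_interval:                  # failure strikes before the next PM
            profit += ttf * revenue_rate - repair_cost
            t += ttf + repair_time
        else:                                  # PM reached first: cheap, short stop
            profit += pm_interval * revenue_rate - pm_cost
            t += pm_interval + pm_time
    return profit

# A moderate PM interval should out-earn pure run-to-failure operation,
# because corrective repairs are slow and expensive.
profit_pm = simulate_profit(80.0)
profit_run_to_failure = simulate_profit(10.0 ** 9)
```

The point mirrors the abstract: the objective here is money earned, not uptime; an availability-only model would ignore the cost asymmetry between PM stops and breakdowns.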
Abstract:
Nowadays the number of hip joint arthroplasty operations continues to increase because the elderly population is growing. Moreover, global life expectancy is increasing and people adopt a more active way of life. For these reasons, implant revision operations are becoming more frequent. The procedure includes the surgical removal of the old implant and its substitution with a new one. Every time a new implant is inserted, it alters the internal femoral strain distribution, jeopardizing the remodeling process and risking bone tissue loss. This is of major concern, particularly in the proximal Gruen zones, which are considered critical for implant stability and longevity. Today, different implant designs exist on the market; however, there is no clear understanding of which implant design parameters best achieve optimal mechanical conditions. The aim of the study is to investigate the stress shielding effect generated by different implant design parameters on the proximal femur, evaluating which ranges of those parameters lead to the most physiological conditions.
Abstract:
This thesis project studies the agent identity privacy problem in the scalar linear quadratic Gaussian (LQG) control system. For the agent identity privacy problem in LQG control, privacy models and privacy measures have to be established first. The problem has two distinctive characteristics: it depends on a trajectory of data rather than a single observation, and the data are correlated in time. I propose privacy models and corresponding privacy measures that take these two characteristics into account. The agent identity is a binary hypothesis: Agent A or Agent B. An eavesdropper is assumed to perform a hypothesis test on the agent identity based on the intercepted environment state sequence. The privacy risk is measured by the Kullback-Leibler divergence between the probability distributions of the state sequences under the two hypotheses. By taking into account both the accumulated control reward and the privacy risk, an optimization problem for the policy of Agent B is formulated. The optimal deterministic privacy-preserving LQG policy of Agent B is a linear mapping. A sufficient condition is given to guarantee that the optimal deterministic privacy-preserving policy is time-invariant in the asymptotic regime. An independent Gaussian random variable cannot improve the performance of Agent B. Numerical experiments justify the theoretical results and illustrate the reward-privacy trade-off. Based on the privacy model and the LQG control model, I have formulated the mathematical problems for the agent identity privacy problem in LQG. The formulated problems address two design objectives: maximizing the control reward and minimizing the privacy risk. I have conducted a theoretical analysis of the LQG control policy in the agent identity privacy problem and of the trade-off between the control reward and the privacy risk. Finally, the theoretical results are justified by numerical experiments, from which several interesting observations and insights are drawn and explained in the last chapter.
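The privacy measure described above can be made concrete: for Gaussian state sequences the Kullback-Leibler divergence has a closed form in the sequence means and covariances. The sketch below evaluates it for two stationary AR(1) state processes whose poles stand in for the (hypothetical) closed-loop dynamics of Agent A and Agent B; the pole values and sequence length are invented for illustration.

```python
import numpy as np

def gaussian_kl(mu0, cov0, mu1, cov1):
    """Closed-form KL(N(mu0, cov0) || N(mu1, cov1)) for multivariate Gaussians."""
    k = len(mu0)
    inv1 = np.linalg.inv(cov1)
    diff = np.asarray(mu1, dtype=float) - np.asarray(mu0, dtype=float)
    return 0.5 * (np.trace(inv1 @ cov0)
                  + diff @ inv1 @ diff
                  - k
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

def ar1_covariance(a, sigma, n):
    """Stationary covariance of the scalar AR(1) sequence x[t+1] = a x[t] + w[t],
    w ~ N(0, sigma^2): Cov(x_s, x_t) = sigma^2 a^|s-t| / (1 - a^2)."""
    var = sigma ** 2 / (1.0 - a ** 2)
    idx = np.arange(n)
    return var * a ** np.abs(idx[:, None] - idx[None, :])

# Hypothetical closed-loop poles for the two agents; zero-mean states.
n = 10
risk = gaussian_kl(np.zeros(n), ar1_covariance(0.5, 1.0, n),
                   np.zeros(n), ar1_covariance(0.8, 1.0, n))
```

Because the eavesdropper observes a whole trajectory, the divergence — and hence the privacy risk — grows with the sequence length, which is exactly why the trajectory view matters here.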
Abstract:
The aim of this thesis is to use the developments, advantages and applications of Building Information Modelling (BIM), with emphasis on the discipline of structural design, for a steel building located in Perugia. BIM is mainly considered a new way of planning, constructing and operating buildings and infrastructures. It has been found to offer greater opportunities for increased efficiency, optimization of resources and generally better management throughout the life cycle of a facility. BIM increases the digitalization of processes and offers integrated and collaborative technologies for design, construction and operation. To understand BIM and its benefits, one must consider all phases of a project. Higher initial design costs often lead to lower construction and operation costs. Creating data-rich digital models helps to better predict and coordinate the construction phases and the operation of a building. One of the main limitations identified in the implementation of BIM is the lack of knowledge and of qualified professionals. Certain disciplines, such as structural and mechanical design, depend on whether the main contractor, owner, general contractor or architect needs to use or apply BIM in their projects. The existence of a supporting or mandatory BIM guideline may then eventually lead to its adoption. To test the potential of BIM adoption in the steel design process, some models were developed taking advantage of a widely used authoring software (Autodesk Revit) to produce the construction drawings and material schedules needed to estimate the quantities and features of a real steel building. Once the model was built, the whole process was analyzed and compared with the traditional design process for steel structures. Several relevant aspects, in terms of clarity and of time spent, emerged and led to final conclusions about the benefits of the BIM methodology.
Abstract:
Nowadays, product development in all its phases plays a fundamental role in the industrial chain. The need for a company to compete at a high level and to respond quickly to market demands, and therefore to engineer the product quickly and with a high level of quality, has led to the adoption of new, more advanced methods and processes. In recent years, design and production have been moving away from the 2D-based approach and towards the concept of Model Based Definition. With this approach, increasingly complex systems turn out to be easier to deal with and, above all, cheaper to obtain. Thanks to Model Based Definition it is possible to share data in a lean and simple way across the entire engineering and production chain of the product. The great advantage of this approach is precisely the uniqueness of the information. In this thesis work, this approach has been exploited in the context of tolerances with the aid of CAD/CAT software. Tolerance analysis, or dimensional variation analysis, is a way to understand how sources of variation in part dimensions and assembly constraints propagate between parts and assemblies, and how that variation affects the ability of a design to meet its requirements. It is critically important to note that tolerance directly affects the cost and performance of products. Worst Case Analysis (WCA) and statistical analysis (RSS) are the two principal methods in dimensional variation analysis. The thesis aims to show the advantages of using statistical dimensional analysis by creating and examining various case studies, using PTC CREO software for CAD modeling and CETOL 6σ for tolerance analysis. Moreover, a comparison between manual and 3D analysis is provided, focusing on the information lost in the 1D case. The results obtained highlight the need to use this approach from the early stages of the product design cycle.
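For a 1D stack, the difference between the two methods reduces to simple arithmetic: WCA sums the contributor tolerances linearly, while RSS takes the square root of the sum of squares, assuming independent, centered contributors. A minimal sketch with a hypothetical four-part stack:

```python
import math

def worst_case(tolerances):
    """Worst-Case Analysis: every contributor at its limit simultaneously."""
    return sum(tolerances)

def rss(tolerances):
    """Root-Sum-Square: statistical stack-up assuming independent,
    centered contributors."""
    return math.sqrt(sum(t ** 2 for t in tolerances))

# Hypothetical 1D stack of four parts, each toleranced at +/-0.1 mm:
stack = [0.1, 0.1, 0.1, 0.1]
wc = worst_case(stack)   # 4 * 0.1 = 0.4 mm
st = rss(stack)          # sqrt(4 * 0.01) = 0.2 mm
```

The statistical stack is half the worst-case stack here, which is exactly the design margin the statistical approach recovers — at the cost of accepting a small probability of exceeding it.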
Abstract:
In the metal industry, and more specifically in forging, scrap material is a crucial issue, and reducing it is an important goal. Not only would this help companies to be more environmentally friendly and more sustainable, but it would also reduce energy use and lower costs. At the same time, Industry 4.0 techniques and the advancements in Artificial Intelligence (AI), especially in the field of Deep Reinforcement Learning (DRL), may play an important role in helping to achieve this objective. This document presents the thesis work, a contribution to the SmartForge project, that was performed during a semester abroad at Karlstad University (Sweden). This project aims at solving the aforementioned problem with a business case of the company Bharat Forge Kilsta, located in Karlskoga (Sweden). The thesis work includes the design and development of an event-driven architecture with microservices to support the processing of data coming from sensors set up in the company's industrial plant, and finally the implementation of an algorithm with DRL techniques to control the electrical power used in it.
Abstract:
One of the major issues for power converters connected to the electric grid is the measurement of three-phase Conducted Emissions (CE), which are regulated by international and regional standards. CE consist of two components: Common Mode (CM) noise and Differential Mode (DM) noise. To achieve compliance with these regulations, the Equipment Under Test (EUT) includes filtering and other electromagnetic emission control strategies. The separation of differential-mode and common-mode noise in Electromagnetic Interference (EMI) analysis is a well-known procedure, useful especially for the optimization of the EMI filter, to improve the CM or DM attenuation depending on which component of the conducted emissions is predominant, and for the analysis and understanding of interference phenomena in switched-mode power converters. However, the two components are rarely separated during measurements. Therefore, in this thesis an active device for the separation of the CM and DM EMI noise in three-phase power electronic systems has been designed and experimentally analysed.
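Per sample, the CM/DM decomposition is simple arithmetic: the common mode is the average of the three phase quantities, and the differential mode is the per-phase remainder after the common mode is removed. A sketch of that definition (the thesis' active separator realizes it in analog hardware, not software; the sample values are invented):

```python
def separate_cm_dm(v1, v2, v3):
    """Split three phase measurements into a common-mode component (the
    average of the phases) and the per-phase differential-mode remainders."""
    cm = (v1 + v2 + v3) / 3.0
    return cm, (v1 - cm, v2 - cm, v3 - cm)

# Invented instantaneous phase samples:
cm, dm = separate_cm_dm(2.0, -1.0, 0.5)
# The DM components sum to zero by construction.
```

Because the DM components cancel in the sum, any residual in `sum(dm)` directly measures how imperfectly a real separator rejects the opposite mode.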
Abstract:
Additive Manufacturing (AM), also known as “3D printing”, is a recent production technique that allows the creation of three-dimensional elements by depositing multiple layers of material. This technology is widely used in various industrial sectors, such as automotive, aerospace and aviation. With AM, it is possible to produce particularly complex elements for which traditional techniques cannot be used. These technologies are not yet widespread in the civil engineering sector, but this is slowly changing thanks to the advantages of AM, such as the possibility of realizing elements without geometric restrictions, with less material usage and higher efficiency, in particular employing the Wire-and-Arc Additive Manufacturing (WAAM) technology. The buildings that benefit most from AM are the structures designed using form-finding and free-form techniques. These include gridshells, where the joints are the most critical and difficult elements to design, as the overall behaviour of the structure depends on them. It must also be considered that, during the design, the engineer must try to minimize the structure's own weight. Self-weight reductions can be achieved by Topological Optimization (TO) of the joint itself, which generates complex geometries that could not be made using traditional techniques. To sum up, weight reductions through TO combined with AM allow for several potential benefits, including economic ones. In this thesis, the roof of the British Museum is considered as a case study: the gridshell structure is analysed, and one of its joints is chosen to be designed and manufactured using TO and WAAM techniques. The designed joint is then studied in order to understand its structural behaviour in terms of stiffness and strength. Finally, a printing test is performed to assess production feasibility using the WAAM technology. The computational design and fabrication stages were carried out at Technische Universität Braunschweig in Germany.
Abstract:
In recent years, global supply chains have increasingly suffered from reliability issues due to various external and difficult-to-manage events. This paper aims to build an integrated approach for the design of a supply chain under the risk of disruption and demand fluctuation. The study is divided into two parts: a mathematical optimization model, to identify the optimal design and customer-facility assignments, and a discrete-event simulation of the resulting network. The first part describes a model in which plant location decisions are influenced by variables such as distance to customers, the investments needed to open plants and centralization phenomena that help contain the risk of demand variability (risk pooling). The entire model has been built with a proactive approach to manage the risk of disruptions, assigning to each customer two types of open facilities: one that serves it under normal conditions and a back-up facility, which comes into operation when the main facility has failed. The study is conducted on a relatively small number of instances due to the computational complexity; a matheuristic approach, presented in Part A of the paper, evaluates the problem with a larger set of players. Once the network is built, a discrete-event Supply Chain Simulation (SCS) is implemented to analyze the stock flow within the facilities' warehouses, the actual impact of disruptions and the role of the back-up facilities, whose inventories come under great stress due to the large increase in demand caused by the disruptions. The simulation therefore follows a reactive approach, in which customers are redistributed among facilities according to the interruptions that occur in the system and to the assignments derived from the design model. Lastly, the most important results of the study are reported, analyzing the role of lead time in a reactive approach to disruptions and comparing the two models in terms of costs.
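The primary/back-up assignment idea can be sketched in a few lines: each customer is given its nearest open facility as primary and its second-nearest as back-up, to be activated only if the primary is disrupted. The toy 1D coordinates below are invented, and this greedy rule is only a stand-in for the paper's optimization model, which also weighs opening investments and risk pooling.

```python
def assign_with_backup(customers, facilities):
    """For each customer, return (primary, backup): the nearest and
    second-nearest open facilities by distance.
    customers, facilities: dicts mapping name -> 1D coordinate."""
    plan = {}
    for name, pos in customers.items():
        ranked = sorted(facilities, key=lambda f: abs(facilities[f] - pos))
        plan[name] = (ranked[0], ranked[1])
    return plan

# Invented network: three open facilities, three customers on a line.
facilities = {"F1": 0.0, "F2": 10.0, "F3": 25.0}
customers = {"C1": 2.0, "C2": 12.0, "C3": 20.0}
plan = assign_with_backup(customers, facilities)
```

In the simulation phase, a disruption of a customer's primary simply reroutes its demand to the stored back-up, which is why back-up inventories come under stress.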