970 results for Design optimization
Abstract:
Several decision and control tasks involve networks of cyber-physical systems that need to be coordinated and controlled according to a fully-distributed paradigm involving only local communications, without any central unit. This thesis focuses on distributed optimization and games over networks from a system-theoretical perspective. In the addressed frameworks, we consider agents that communicate only with their neighbors and run distributed algorithms with optimization-oriented goals. The distinctive feature of this thesis is to interpret these algorithms as dynamical systems and, thus, to resort to powerful system-theoretical tools for both their analysis and design. We first address the so-called consensus optimization setup. In this context, we provide an original system-theoretical analysis of the well-known Gradient Tracking algorithm in the general case of nonconvex objective functions. Then, inspired by this method, we provide and study a series of extensions to improve performance and to deal with more challenging settings such as the derivative-free and online frameworks. Subsequently, we tackle the recently emerged framework of distributed aggregative optimization. For this setup, we develop and analyze novel schemes to handle (i) online instances of the problem, (ii) "personalized" optimization frameworks, and (iii) feedback optimization settings. Finally, we adopt a system-theoretical approach to address aggregative games over networks, both in the presence and in the absence of linear coupling constraints among the decision variables of the players. In this context, we design and inspect novel fully-distributed algorithms, based on tracking mechanisms, that outperform state-of-the-art methods in finding the Nash equilibrium of the game.
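As an illustrative sketch (not the thesis' exact formulation), the Gradient Tracking iteration combines a consensus averaging step with a dynamic-average tracker of the global gradient; the weight matrix, step size, and toy quadratic costs below are assumptions made only for this example.

```python
import numpy as np

def gradient_tracking(grads, W, x0, alpha=0.05, iters=500):
    """Minimal Gradient Tracking sketch for minimizing sum_i f_i(x) over a network.

    grads : list of callables, grads[i](x) returns the gradient of f_i at x
    W     : (N, N) doubly stochastic weight matrix matching the communication graph
    x0    : (N, d) initial decision variables, one row per agent
    """
    N, d = x0.shape
    x = x0.copy()
    # Each agent initializes its tracker with its own local gradient.
    s = np.stack([grads[i](x[i]) for i in range(N)])
    g_old = s.copy()
    for _ in range(iters):
        x = W @ x - alpha * s                       # consensus step + descent along the tracked gradient
        g_new = np.stack([grads[i](x[i]) for i in range(N)])
        s = W @ s + g_new - g_old                   # dynamic average tracking of the global gradient
        g_old = g_new
    return x

# Toy usage: two agents with quadratic costs f_i(x) = 0.5 * ||x - c_i||^2
if __name__ == "__main__":
    c = [np.array([1.0]), np.array([3.0])]
    grads = [lambda x, ci=ci: x - ci for ci in c]
    W = np.array([[0.5, 0.5], [0.5, 0.5]])
    x_star = gradient_tracking(grads, W, np.zeros((2, 1)))
    print(x_star)  # both agents converge near the average minimizer, 2.0
```

Interpreting the pair of iterates (x, s) as the state of a dynamical system is precisely what enables the system-theoretical analysis mentioned in the abstract.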
Abstract:
The first topic analyzed in the thesis is Neural Architecture Search (NAS). I will focus on two different tools that I developed: one to optimize the architecture of Temporal Convolutional Networks (TCNs), a recently emerged convolutional model for time-series processing, and one to optimize the data precision of tensors inside CNNs. The first proposed NAS explicitly targets the optimization of the most characteristic architectural parameters of TCNs, namely dilation, receptive field, and the number of features in each layer. Note that this is the first NAS that explicitly targets these networks. The second proposed NAS instead focuses on finding the most efficient data format for a target CNN, at the granularity of the layer filter. Note that applying these two NASes in sequence allows an "application designer" to minimize the structure of the employed neural network, reducing the number of operations or the memory usage of the network. After that, the second topic described is the optimization of neural network deployment on edge devices. Importantly, exploiting the scarce resources of edge platforms is critical for efficient NN execution on MCUs. To do so, I will introduce DORY (Deployment Oriented to memoRY) -- an automatic tool to deploy CNNs on low-cost MCUs. In its different steps, DORY can automatically manage the different levels of memory inside the MCU, offload the computation workload (i.e., the different layers of a neural network) to dedicated hardware accelerators, and automatically generate ANSI C code that orchestrates off- and on-chip transfers together with the computation phases. On top of this, I will introduce two optimized computation libraries that DORY can exploit to deploy TCNs and Transformers efficiently at the edge. I conclude the thesis with two different applications in bio-signal analysis, i.e., heart rate tracking and sEMG-based gesture recognition.
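As a small, hedged aid to the TCN terminology above (not part of the thesis tooling), the receptive field that the first NAS trades off against model size can be computed directly from the kernel sizes and dilations of the stacked causal convolutions:

```python
def tcn_receptive_field(kernel_sizes, dilations):
    """Receptive field (in time steps) of a stack of dilated causal convolution layers.

    Each layer l with kernel size k_l and dilation d_l widens the receptive
    field by (k_l - 1) * d_l samples.
    """
    assert len(kernel_sizes) == len(dilations)
    rf = 1
    for k, d in zip(kernel_sizes, dilations):
        rf += (k - 1) * d
    return rf

# Example: 4 layers, kernel size 3, dilations doubling 1, 2, 4, 8 -> 1 + 2*(1+2+4+8) = 31
print(tcn_receptive_field([3, 3, 3, 3], [1, 2, 4, 8]))
```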
Abstract:
Nowadays, the chemical industry has reached significant goals in producing essential components for human beings. The growing competitiveness of the market has caused a significant acceleration of R&D activities, introducing new opportunities and procedures for process improvement and optimization. In this dynamic context, sustainability is becoming one of the key aspects of technological progress, encompassing economic, environmental-protection, and safety aspects. With respect to the conceptual definition of sustainability, the literature reports an extensive discussion of strategies, as well as sets of specific principles and guidelines. However, the procedures found in the literature are not completely suitable for, and applicable to, process design activities. Therefore, the development and introduction of sustainability-oriented methodologies is a necessary step to enhance process and plant design. The definition of key drivers as a support system is a focal point for early process design decisions or for the implementation of process modifications. In this context, three different methodologies are developed to support design activities, providing criteria and guidelines from a sustainability perspective. In this framework, a set of Key Performance Indicators is selected and adopted to characterize the environmental, safety, economic, and energy aspects of a reference process. The methodologies are based on heat and material balances, and the level of detail required for the input data is compatible with the information available for the specific application. Multiple case studies are defined to prove the effectiveness of the methodologies. The principal application is the polyolefin production lifecycle chain, with particular focus on polymerization technologies. In this context, different design phases are investigated, spanning from early process feasibility studies to the assessment of operations and improvements. This flexibility allows the methodologies to be applied at any level of design, providing supporting guidelines for design activities, comparing alternative solutions, monitoring operating processes, and identifying potential improvements.
Abstract:
Water Distribution Networks (WDNs) play a vital role in communities, ensuring well-being and supporting economic growth and productivity. The need for greater investment requires design choices that will impact the efficiency of management in the coming decades. This thesis proposes an algorithmic approach to address two related problems: (i) identifying the fundamental asset of large WDNs in terms of main infrastructure; (ii) sectorizing large WDNs into isolated sectors while respecting the minimum service to be guaranteed to users. Two methodologies have been developed to meet these objectives, and they were subsequently integrated into an overall process that optimizes the sectorized configuration of the WDN while addressing problems (i) and (ii) within a global vision. With regard to problem (i), the developed methodology introduces the concept of a primary network and answers with a dual approach: connecting the main nodes of the WDN in terms of hydraulic infrastructure (reservoirs, tanks, pumping stations) and identifying hypothetical paths with minimal energy losses. The primary network thus identified can be used as an initial basis to design the sectors. The sectorization problem (ii) has been addressed using optimization techniques, through the development of a new dedicated Tabu Search algorithm able to deal with real case studies of WDNs. For this reason, three new large WDN models have been developed in order to test the capabilities of the algorithm on different and complex real cases. The developed methodology also automatically identifies the deficient parts of the primary network and dynamically includes new edges in order to support a sectorized configuration of the WDN. The application of the overall algorithm to the new real case studies and to others from the literature has yielded applicable solutions even in specific complex situations.
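As a generic, hedged illustration of the metaheuristic used for problem (ii) — a skeleton, not the thesis' dedicated algorithm — a Tabu Search explores neighboring sectorization solutions while keeping a short-term memory of recent moves to avoid cycling back into local optima; all names below are placeholders.

```python
from collections import deque

def tabu_search(initial, cost, neighbors, move_key, tabu_size=50, iters=1000):
    """Generic Tabu Search skeleton.

    initial   : starting solution (e.g., an assignment of pipes/valves to sectors)
    cost      : callable mapping a solution to a scalar to be minimized
    neighbors : callable returning candidate (solution, move) pairs for the current solution
    move_key  : callable mapping a move to a hashable key stored in the tabu list
    """
    best = current = initial
    best_cost = cost(initial)
    tabu = deque(maxlen=tabu_size)                 # short-term memory of forbidden moves
    for _ in range(iters):
        candidates = [
            (sol, move) for sol, move in neighbors(current)
            if move_key(move) not in tabu or cost(sol) < best_cost   # aspiration criterion
        ]
        if not candidates:
            break
        current, move = min(candidates, key=lambda sm: cost(sm[0]))  # best admissible neighbor
        tabu.append(move_key(move))
        if cost(current) < best_cost:
            best, best_cost = current, cost(current)
    return best, best_cost
```

In the WDN setting the cost function would typically combine hydraulic feasibility (minimum service pressure) with the number and placement of boundary valves, but those details are specific to the thesis and not reproduced here.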
Abstract:
Neuronal microtubule assembly and dynamics are regulated by several proteins, including the microtubule (MT)-associated protein tau, whose aberrant hyperphosphorylation promotes its dissociation from MTs and its abnormal deposition into neurofibrillary tangles, a common neurotoxic hallmark of neurodegenerative tauopathies. To date, no disease-modifying drugs have been approved to combat CNS tau-related diseases. The multifactorial etiology of these conditions represents one of the major limits in the discovery of effective therapeutic options. In addition, tau protein functions are orchestrated by diverse post-translational modifications, among which phosphorylation mediated by PKs plays a leading role. In this context, conventional single-target therapies are often inadequate in restoring perturbed networks and fraught with adverse side effects. This thesis reports two distinct approaches to tackle MT defects in neurons. The first is focused on the rational design and synthesis of first-in-class triple inhibitors of GSK-3β, FYN, and DYRK1A, three closely related PKs which act as master regulators of aberrant tau hyperphosphorylation. A merged multi-target pharmacophore strategy was applied to simultaneously modulate all three targets and achieve a disease-modifying effect. Optimization of ARN25068 through a computationally and crystallographically driven SAR exploration allowed the key structural modifications needed to maintain a balanced potency against all three targets to be rationalized, and led to a new generation of well-balanced analogs exhibiting improved physicochemical properties, a good in vitro ADME profile, and promising cell-based anti-tau-phosphorylation activity. In Part II, MT-stabilizing compounds have been developed to compensate for MT defects in tau-related pathologies. Intensive chemical effort has been devoted to scaling up BL-0884, identified as a promising MT-normalizing TPD, which exhibited a favorable ADME-PK profile, including brain penetration, oral bioavailability, and brain pharmacodynamic activity. A suitable functionalization of the exposed hydroxyl moiety of BL-0884 was carried out to generate the corresponding esters and amides, which possess a wide range of applications as prodrugs and as active-targeting agents for cancer chemotherapy.
Abstract:
The weight-transfer effect, consisting of the change in dynamic load distribution between the front and rear tractor axles, is one of the phenomena that most impair the performance, comfort, and safety of agricultural operations. Excessive weight transfer from the front to the rear tractor axle can occur during operation or maneuvering of implements connected to the tractor through the three-point hitch (TPH). In this respect, an optimal design of the TPH can ensure better dynamic load distribution and ultimately improve operational performance, comfort, and safety. In this study, a computational design tool (The Optimizer) for the determination of a TPH geometry that minimizes the weight-transfer effect is developed. The Optimizer is based on a constrained minimization algorithm. The objective function to be minimized is related to the tractor front-to-rear axle load transfer during a simulated reference maneuver performed with a reference implement on a reference soil. Simulations are based on a 3-degree-of-freedom (DOF) dynamic model of the tractor-TPH-implement aggregate. The inertial, elastic, and viscous parameters of the dynamic model were successfully determined through a parameter identification algorithm. The geometry determined by the Optimizer complies with the ISO-730 Standard functional requirements and other design requirements. The interaction between the soil and the implement during the simulated reference maneuver was successfully validated against experimental data. Simulation results show that the adopted reference maneuver is effective in triggering the weight-transfer effect, with the front axle load exhibiting a peak-to-peak value of 27.1 kN during the maneuver. A benchmark test was conducted starting from four geometries of a commercially available TPH. As a result, all the configurations were improved by more than 10%. After 36 iterations, the Optimizer was able to find an optimized TPH geometry that reduces the weight-transfer effect by 14.9%.
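A minimal, hedged sketch of the kind of constrained minimization the Optimizer performs; the objective, bounds, and constraint below are placeholders standing in for the 3-DOF maneuver simulation and the ISO-730 requirements, not the thesis' actual model.

```python
import numpy as np
from scipy.optimize import minimize

def load_transfer_metric(geometry):
    """Placeholder objective: in the real tool this would run the 3-DOF
    tractor-TPH-implement simulation for the reference maneuver and return a
    scalar measure of front-to-rear axle load transfer (e.g., peak-to-peak
    front axle load). Here a smooth stand-in keeps the sketch runnable."""
    return float(np.sum((geometry - np.array([0.3, 0.6, 0.9, 1.2])) ** 2))

# Hypothetical geometric constraint: two hitch-point coordinates at least 0.2 m apart.
constraints = [{"type": "ineq", "fun": lambda g: g[1] - g[0] - 0.2}]
# Admissible range for each geometric parameter (stand-in for ISO-730-type limits).
bounds = [(0.0, 1.5)] * 4

result = minimize(load_transfer_metric, x0=np.array([0.2, 0.5, 0.8, 1.1]),
                  method="SLSQP", bounds=bounds, constraints=constraints)
print(result.x, result.fun)
```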
Abstract:
The current energy-resource issue, combined with the drive to give a green footprint to our lifestyle, has prompted research to focus on alternative sources, with great strides being made in the optimization of polymeric photovoltaic devices. The research work described in this dissertation consists of the study of different semiconducting π-conjugated materials based on polythiophenes (Chapter I). In detail, GRIM polymerization was investigated in depth, defining the synthetic conditions to obtain regioregular poly(3-alkylthiophene) (Chapter II). Since the use of symmetrical monomers functionalized with oxygen atom(s) allows simple syntheses leading to high-performing materials, disubstituted poly(3,4-dialkoxythiophene)s were successfully prepared, characterized, and tested as photoactive materials in solar cells (Chapter III). A “green” source of energy should be exploited through sustainable devices and, for this purpose, the research work continued with the synthesis of thiophene derivatives soluble in eco-friendly solvents. To make this possible, the photoactive layer was completely tailored, starting from the electron-acceptor material. A fullerene derivative soluble in alcohols was successfully synthesized and adopted for the realization of the new devices (Chapter IV). New water/alcohol-soluble electron-donor materials with different functional groups were prepared and their properties compared (Chapter V). Once the best ionic functional group was found, a new double-cable material was synthesized, optimizing the contact area between the different materials (Chapter VI). Finally, other water/alcohol-soluble materials were synthesized, characterized, and used as cathode interlayers in eco-friendly devices (Chapter VII). In this work, all prepared materials were characterized by spectroscopic analyses, gel permeation chromatography, and thermal analyses. Cyclic voltammetry, X-ray diffraction, atomic force microscopy, and external quantum efficiency measurements were used to investigate some peculiar aspects.
Abstract:
The Structural Health Monitoring (SHM) research area is increasingly investigated due to its high potential in reducing maintenance costs and in ensuring system safety in several industrial application fields. A growing demand for new SHM systems, permanently embedded into structures for savings in weight and cabling, comes from the aeronautical and aerospace fields. As a consequence, the embedded electronic devices have to be wirelessly connected and battery powered, and low power consumption is therefore required. At the same time, high performance in defect or impact detection and localization has to be ensured to assess structural integrity. To achieve these goals, the design paradigms can be changed together with the associated signal processing. The present thesis proposes design strategies and unconventional solutions, suitable both for real-time monitoring and for periodic inspections, relying on piezo-transducers and Ultrasonic Guided Waves. In the first context, arrays of closely located sensors were designed, according to appropriate optimality criteria, by exploiting sensor re-shaping and optimal positioning, to achieve improved damage/impact localization performance in noisy environments. An additional sensor re-shaping procedure was developed to tackle another well-known issue which arises in realistic scenarios, namely reverberation. A novel sensor, able to filter undesired reflections from mechanical boundaries, was validated via simulations based on the Green's function formalism and FEM. In the active SHM context, a novel design methodology was used to develop a single transducer, called the Spectrum-Scanning Acoustic Transducer, to actively inspect a structure. It can estimate the number of defects and their distances with an accuracy of 2 cm. It can also estimate the angular coordinate of a damage with an equivalent mainlobe aperture of 8 deg, when a 24 cm radial gap between two defects is ensured. A suitable signal-processing chain was developed in order to limit the computational cost, allowing its use with embedded electronic devices.
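As a hedged illustration of the underlying pulse-echo principle (not the Spectrum-Scanning Acoustic Transducer processing itself), a defect distance follows from the echo time of flight and the guided-wave group velocity; the numerical values below are assumptions for the example.

```python
def defect_distance(time_of_flight_s, group_velocity_m_s):
    """Pulse-echo distance estimate: the guided wave travels to the defect and
    back, so the one-way distance is half of velocity * time of flight."""
    return 0.5 * group_velocity_m_s * time_of_flight_s

# Example: assumed S0 Lamb-mode group velocity of 5400 m/s, echo received after 120 microseconds
print(defect_distance(120e-6, 5400.0))  # ~0.324 m from the transducer
```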
Abstract:
This thesis project studies the agent identity privacy problem in the scalar linear quadratic Gaussian (LQG) control system. For the agent identity privacy problem in LQG control, privacy models and privacy measures have to be established first. The problem depends on a trajectory of correlated data rather than a single observation. I propose privacy models and the corresponding privacy measures that take these two characteristics into account. The agent identity is a binary hypothesis: Agent A or Agent B. An eavesdropper is assumed to perform a hypothesis test on the agent identity based on the intercepted environment state sequence. The privacy risk is measured by the Kullback-Leibler divergence between the probability distributions of the state sequences under the two hypotheses. By taking into account both the cumulative control reward and the privacy risk, an optimization problem over the policy of Agent B is formulated. The optimal deterministic privacy-preserving LQG policy of Agent B is a linear mapping. A sufficient condition is given to guarantee that the optimal deterministic privacy-preserving policy is time-invariant in the asymptotic regime. An independent Gaussian random variable cannot improve the performance of Agent B. The numerical experiments justify the theoretical results and illustrate the reward-privacy trade-off. Based on the privacy model and the LQG control model, I have formulated the mathematical problems for the agent identity privacy problem in LQG. The formulated problems address the two design objectives: to maximize the control reward and to minimize the privacy risk. I have conducted a theoretical analysis of the LQG control policy in the agent identity privacy problem and of the trade-off between the control reward and the privacy risk. Finally, the theoretical results are justified by numerical experiments. From the numerical results, interesting observations and insights emerge, which are explained in the last chapter.
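A hedged sketch of the privacy measure described above: for Gaussian state sequences under the two hypotheses, the Kullback-Leibler divergence has the standard closed form below. The means and covariances used in the example are illustrative placeholders, not results from the thesis.

```python
import numpy as np

def kl_gaussian(mu0, cov0, mu1, cov1):
    """KL divergence D( N(mu0, cov0) || N(mu1, cov1) ) between multivariate Gaussians."""
    k = mu0.shape[0]
    cov1_inv = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(cov1_inv @ cov0)
                  + diff @ cov1_inv @ diff
                  - k
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

# Illustrative state-sequence statistics under hypotheses "Agent A" and "Agent B" (assumed values)
mu_A, cov_A = np.zeros(3), np.eye(3)
mu_B, cov_B = 0.5 * np.ones(3), 1.2 * np.eye(3)
print(kl_gaussian(mu_A, cov_A, mu_B, cov_B))   # larger value = easier for the eavesdropper to distinguish
```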
Abstract:
The aim of this work is to present a general overview of the state of the art related to design for uncertainty, with a focus on aerospace structures. In particular, simulations of an FCCZ lattice cell and of the profile shape of a nozzle will be performed. Optimization under uncertainty is characterized by the need to make decisions without complete knowledge of the problem data. When dealing with a complex, non-linear, or optimization problem, two main issues arise: the uncertainty of the feasibility of the solution and the uncertainty of the objective function value. In the first part, Design Of Experiments (DOE) methodologies, Uncertainty Quantification (UQ), and optimization under uncertainty will be examined in depth. The second part will show an application of these theories through commercial software. Nowadays, multiobjective optimization of highly non-linear problems can be a powerful tool to approach new concept solutions or to develop cutting-edge designs. In this thesis, an effective improvement has been achieved on a rocket nozzle. Future work could include the introduction of multi-scale modelling, multiphysics approaches, and any strategy useful to simulate the real operating conditions of the studied design as closely as possible.
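A minimal, hedged sketch of the DOE/UQ workflow described above: sample the uncertain inputs with a Latin hypercube design and propagate them through the model to estimate output statistics. The response function and input distributions are placeholders, not the FCCZ lattice or nozzle models.

```python
import numpy as np
from scipy.stats import qmc, norm

def model(x):
    """Placeholder response (e.g., a nozzle performance metric) as a function of two uncertain inputs."""
    return x[:, 0] ** 2 + 0.5 * np.sin(3.0 * x[:, 1])

# Latin hypercube DOE in the unit square, then mapped to assumed input distributions.
sampler = qmc.LatinHypercube(d=2, seed=0)
u = sampler.random(n=1000)
x = np.column_stack([
    norm(loc=1.0, scale=0.1).ppf(u[:, 0]),   # uncertain input 1 ~ N(1.0, 0.1^2), assumed
    norm(loc=0.0, scale=0.2).ppf(u[:, 1]),   # uncertain input 2 ~ N(0.0, 0.2^2), assumed
])

y = model(x)
print(y.mean(), y.std())                      # propagated mean and standard deviation of the response
```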
Abstract:
The aim of this thesis is to exploit the developments, advantages, and applications of "Building Information Modelling" (BIM), with an emphasis on the structural design of a steel building located in Perugia. BIM is mainly considered a new way of planning, constructing, and operating buildings or infrastructures. It has been found to offer greater opportunities for increased efficiency, optimization of resources, and generally better management throughout the life cycle of a facility. BIM increases the digitalization of processes and offers integrated and collaborative technologies for design, construction, and operation. To understand BIM and its benefits, one must consider all phases of a project. Higher initial design costs often lead to lower construction and operation costs. Creating data-rich digital models helps to better predict and coordinate the construction phases and the operation of a building. One of the main limitations identified in the implementation of BIM is the lack of knowledge and of qualified professionals. For certain disciplines, such as structural and mechanical design, adoption depends on whether the main contractor, owner, general contractor, or architect needs to use or apply BIM in their projects. The existence of a supporting or mandatory BIM guideline may then eventually lead to its adoption. To test the potential of BIM adoption in the steel design process, several models were developed taking advantage of a widely used authoring software (Autodesk Revit) to produce the construction drawings and material schedules needed to estimate the quantities and features of a real steel building. Once the model had been built, the whole process was analyzed and then compared with the traditional design process for steel structures. Several relevant aspects, in terms of clarity and of time spent, emerged and led to final conclusions about the benefits of the BIM methodology.
Abstract:
Nowadays, product development in all its phases plays a fundamental role in the industrial chain. The need for a company to compete at a high level and to respond quickly to market demands, and therefore to engineer the product quickly and with a high level of quality, has led to the adoption of new, more advanced methods and processes. In recent years, the industry has been moving away from the concept of 2D-based design and production and approaching the concept of Model Based Definition. By using this approach, increasingly complex systems turn out to be easier to deal with and, above all, cheaper to obtain. Thanks to Model Based Definition, it is possible to share data in a lean and simple way with the entire engineering and production chain of the product. The great advantage of this approach is precisely the uniqueness of the information. In this thesis work, this approach has been exploited in the context of tolerances with the aid of CAD/CAT software. Tolerance analysis, or dimensional variation analysis, is a way to understand how sources of variation in part dimensions and assembly constraints propagate between parts and assemblies, and how that variation affects the ability of a design to meet its requirements. It is critically important to note that tolerances directly affect the cost and performance of products. Worst Case Analysis (WCA) and statistical analysis (RSS) are the two principal methods in dimensional variation analysis (DVA). The thesis aims to show the advantages of using statistical dimensional analysis by creating and examining various case studies, using the PTC CREO software for CAD modeling and CETOL 6σ for tolerance analysis. Moreover, a comparison between manual and 3D analysis is provided, focusing on the information lost in the 1D case. The results obtained highlight the need to use this approach from the early stages of the product design cycle.
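A hedged, one-dimensional illustration of the two stack-up methods named above (the dimensions and tolerances are invented): Worst Case Analysis sums the tolerance contributions, while RSS combines them as the root of the sum of squares, which is why the statistical estimate is tighter.

```python
import math

def worst_case(tolerances):
    """Worst Case Analysis: every contributor sits at its tolerance limit simultaneously."""
    return sum(abs(t) for t in tolerances)

def rss(tolerances):
    """Root Sum of Squares: statistical stack-up assuming independent, centered contributors."""
    return math.sqrt(sum(t ** 2 for t in tolerances))

# Hypothetical 1D stack of four parts with symmetric tolerances (mm)
tols = [0.10, 0.05, 0.20, 0.08]
print("WCA gap variation: +/-", worst_case(tols))        # 0.43 mm
print("RSS gap variation: +/-", round(rss(tols), 3))     # ~0.243 mm
```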
Abstract:
In the metal industry, and more specifically in forging, scrap material is a crucial issue, and reducing it is an important goal to reach. Not only would this help companies to be more environmentally friendly and more sustainable, but it would also reduce energy use and lower costs. At the same time, Industry 4.0 techniques and the advancements in Artificial Intelligence (AI), especially in the field of Deep Reinforcement Learning (DRL), may play an important role in helping to achieve this objective. This document presents the thesis work, a contribution to the SmartForge project, that was performed during a semester abroad at Karlstad University (Sweden). The project aims at solving the aforementioned problem for a business case of the company Bharat Forge Kilsta, located in Karlskoga (Sweden). The thesis work includes the design and subsequent development of an event-driven architecture with microservices, to support the processing of data coming from sensors set up in the company's industrial plant, and finally the implementation of an algorithm based on DRL techniques to control the electrical power used in the plant.
Abstract:
One of the major issues for power converters connected to the electric grid is the measurement of three-phase Conducted Emissions (CE), which are regulated by international and regional standards. CE are composed of two components: Common Mode (CM) noise and Differential Mode (DM) noise. To achieve compliance with these regulations, the Equipment Under Test (EUT) includes filtering and other electromagnetic emission control strategies. The separation of differential-mode and common-mode noise in Electromagnetic Interference (EMI) analysis is a well-known procedure which is especially useful for the optimization of the EMI filter, to improve the CM or DM attenuation depending on which component of the conducted emissions is predominant, and for the analysis and understanding of interference phenomena in switched-mode power converters. However, separating the two components is rarely done during measurements. Therefore, in this thesis an active device for the separation of CM and DM EMI noise in three-phase power electronic systems has been designed and experimentally analysed.
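A hedged sketch of the idealized decomposition behind such a noise separator (ignoring the analog implementation details): the common-mode component is the average of the three line voltages, and each differential-mode component is the corresponding line voltage minus that average; the synthetic signals below are assumptions for the example.

```python
import numpy as np

def separate_cm_dm(v1, v2, v3):
    """Idealized three-phase CM/DM decomposition of conducted-emission voltages.

    v1, v2, v3 : arrays of sampled line-to-ground noise voltages
    Returns the common-mode component and the three differential-mode components.
    """
    vcm = (v1 + v2 + v3) / 3.0          # common mode: average of the three lines
    return vcm, (v1 - vcm, v2 - vcm, v3 - vcm)

# Toy usage with synthetic samples: balanced 50 Hz phases plus an assumed 150 kHz CM disturbance
t = np.linspace(0.0, 1e-3, 1000)
common = 0.2 * np.sin(2 * np.pi * 150e3 * t)
v1 = np.sin(2 * np.pi * 50 * t) + common
v2 = np.sin(2 * np.pi * 50 * t - 2 * np.pi / 3) + common
v3 = np.sin(2 * np.pi * 50 * t + 2 * np.pi / 3) + common
vcm, vdm = separate_cm_dm(v1, v2, v3)   # vcm recovers the injected common-mode term
```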
Abstract:
Additive Manufacturing (AM), also known as “3D printing”, is a recent production technique that allows the creation of three-dimensional elements by depositing multiple layers of material. This technology is widely used in various industrial sectors, such as automotive, aerospace, and aviation. With AM, it is possible to produce particularly complex elements for which traditional techniques cannot be used. These technologies are not yet widespread in the civil engineering sector, but this is slowly changing thanks to the advantages of AM, such as the possibility of realizing elements without geometric restrictions, with less material usage and higher efficiency, in particular when employing Wire-and-Arc Additive Manufacturing (WAAM) technology. The buildings that benefit most from AM are those structures designed using form-finding and free-form techniques. These include gridshells, where joints are the most critical and difficult elements to design, as the overall behaviour of the structure depends on them. It must also be considered that, during the design, the engineer must try to minimize the structure's own weight. Self-weight reductions can be achieved by Topological Optimization (TO) of the joint itself, which generates complex geometries that could not be made using traditional techniques. To sum up, weight reductions through TO combined with AM allow for several potential benefits, including economic ones. In this thesis, the roof of the British Museum is considered as a case study: the gridshell structure is analysed and one of its joints is chosen to be designed and manufactured using TO and WAAM techniques. The designed joint is then studied in order to understand its structural behaviour in terms of stiffness and strength. Finally, a printing test is performed to assess the production feasibility using WAAM technology. The computational design and fabrication stages were carried out at Technische Universität Braunschweig in Germany.