773 results for minimization
Abstract:
In the last decade, manufacturing companies have faced two significant challenges. First, digitalization requires adopting Industry 4.0 technologies and enables the creation of smart, connected, self-aware, and self-predictive factories. Second, the growing attention to sustainability requires evaluating and reducing the impact of the implemented solutions from both economic and social points of view. In manufacturing companies, the maintenance of physical assets plays a critical role. Increasing the reliability and availability of production systems minimizes system downtime; in addition, proper system functioning avoids production waste and potentially catastrophic accidents. Digitalization and new ICT technologies have assumed a relevant role in maintenance strategies. They make it possible to assess the health condition of machinery at any point in time and to predict its future behavior, so that maintenance interventions can be planned and the useful life of components can be exploited up to the instant before their failure. This dissertation provides insights into Predictive Maintenance goals and tools in Industry 4.0 and proposes a novel data acquisition, processing, sharing, and storage framework that addresses typical issues encountered by machine producers and users. The research elaborates on two research questions that narrow down the potential approaches to data acquisition, processing, and analysis for fault diagnostics in evolving environments. The research activity is developed according to a research framework in which the research questions are addressed by research levers, explored through research topics. Each topic requires a specific set of methods and approaches; however, the overarching methodological approach presented in this dissertation includes three fundamental aspects: maximizing the quality of the input data, using Machine Learning methods for data analysis, and using case studies drawn from both controlled environments (laboratory) and real-world settings.
Abstract:
Inverse problems are at the core of many challenging applications. Variational and learning models provide estimated solutions of inverse problems as the outcome of specific reconstruction maps. In the variational approach, the result of the reconstruction map is the solution of a regularized minimization problem encoding information on the acquisition process and prior knowledge on the solution. In the learning approach, the reconstruction map is a parametric function whose parameters are identified by solving a minimization problem depending on a large set of data. In this thesis, we go beyond this apparent dichotomy between variational and learning models and show that they can be harmoniously merged in unified hybrid frameworks that preserve their main advantages. We develop several highly efficient methods based on both model-driven and data-driven strategies, for which we provide a detailed convergence analysis. The resulting algorithms are applied to solve inverse problems involving images and time series. For each task, we show that the proposed schemes improve on many other existing methods in terms of both computational burden and quality of the solution. In the first part, we focus on gradient-based regularized variational models, which are shown to be effective for segmentation purposes and for thermal and medical image enhancement. We consider gradient sparsity-promoting regularized models for which we develop different strategies to estimate the regularization strength. Furthermore, we introduce a novel gradient-based Plug-and-Play convergent scheme using a deep learning based denoiser trained on the gradient domain. In the second part, we address natural image deblurring, image and video super-resolution microscopy, and positioning time series prediction through deep learning based methods. We boost the performance of both supervised strategies, such as trained convolutional and recurrent networks, and unsupervised strategies, such as Deep Image Prior, by penalizing the losses with handcrafted regularization terms.
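For illustration, the two reconstruction maps contrasted above can be sketched in a generic form (the notation here is assumed, not taken from the thesis): the variational map returns the minimizer of a regularized energy, while the learning map returns the output of a network whose parameters minimize an empirical loss,

\[
\hat{x}_{\mathrm{var}} \in \arg\min_{x}\; \tfrac{1}{2}\|Ax - y\|_2^2 + \lambda R(x),
\qquad
\hat{\theta} \in \arg\min_{\theta}\; \sum_{i=1}^{N} \ell\big(f_\theta(y_i),\, x_i\big),
\]

where A models the acquisition process, R encodes prior knowledge with strength \(\lambda\), and \(f_\theta\) is a parametric reconstruction map trained on the pairs \((y_i, x_i)\). Hybrid schemes such as Plug-and-Play keep the variational structure while replacing hand-crafted components (for instance, the denoiser implicitly defined by R) with learned ones.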
Abstract:
Nowadays, the worsening air pollution crisis, driven by greenhouse gas emissions, is aggravating global warming. Recently, several metropolitan cities have introduced Zero-Emission Zones, where the use of the Internal Combustion Engine is forbidden in order to reduce localized pollutant emissions. This is particularly problematic for Plug-in Hybrid Electric Vehicles, which usually operate in charge-depleting mode. To address these issues, the present thesis proposes a viable solution that exploits vehicular connectivity to retrieve navigation data on the urban events along a selected route. The battery energy needed, expressed as a minimum State of Charge (SoC), is calculated by a Speed Profile Prediction algorithm and a Backward Vehicle Model. That value is then fed to both a Rule-Based Strategy, developed specifically for this application, and an Adaptive Equivalent Consumption Minimization Strategy (A-ECMS). The effectiveness of this approach has been tested with a Connected Hardware-in-the-Loop (C-HiL) setup on a driving cycle measured on-road, stimulating the predictions with multiple re-routings. However, even though hybrid electric vehicles have been recognized as a valid response to increasingly tight regulations, the reduced engine load and the repeated engine starts and stops may substantially reduce the temperature of the exhaust after-treatment system (EATS), leading to significant issues in pollutant emission control. In this context, electrically heated catalysts (EHCs) represent a promising solution to ensure high pollutant conversion efficiency without affecting engine efficiency and performance. This work studies the advantages provided by the introduction of a predictive EHC control function for a light-duty Diesel plug-in hybrid electric vehicle (PHEV) equipped with a Euro 7-oriented EATS. Based on the knowledge of future driving scenarios provided by vehicular connectivity, the first engine start can be predicted and an EATS pre-heating phase can be planned accordingly.
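As background, the equivalent consumption minimization idea behind A-ECMS can be written in a generic form (the symbols here are assumed for illustration, not taken from the thesis): at each instant the power split u is chosen to minimize an equivalent fuel rate that converts battery power into a fuel-equivalent cost,

\[
u^{*}(t) \in \arg\min_{u}\; \dot{m}_f(u, t) + s(t)\,\frac{P_{\mathrm{batt}}(u, t)}{Q_{\mathrm{lhv}}},
\qquad \text{subject to } \mathrm{SoC}(t) \ge \mathrm{SoC}_{\min},
\]

where \(\dot{m}_f\) is the fuel mass flow rate, \(P_{\mathrm{batt}}\) the battery power, \(Q_{\mathrm{lhv}}\) the fuel lower heating value, and \(s(t)\) the equivalence factor, which the adaptive variant updates online, in this setting from the connectivity-based minimum SoC target.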
Abstract:
The weight-transfer effect, consisting of the change in dynamic load distribution between the front and the rear tractor axles, is one of the most impairing phenomena for the performance, comfort, and safety of agricultural operations. Excessive weight transfer from the front to the rear tractor axle can occur during operation or maneuvering of implements connected to the tractor through the three-point hitch (TPH). In this respect, an optimal design of the TPH can ensure better dynamic load distribution and ultimately improve operational performance, comfort, and safety. In this study, a computational design tool (the Optimizer) for the determination of a TPH geometry that minimizes the weight-transfer effect is developed. The Optimizer is based on a constrained minimization algorithm. The objective function to be minimized is related to the tractor front-to-rear axle load transfer during a simulated reference maneuver performed with a reference implement on a reference soil. Simulations are based on a 3-degree-of-freedom (DOF) dynamic model of the tractor-TPH-implement aggregate. The inertial, elastic, and viscous parameters of the dynamic model were successfully determined through a parameter identification algorithm. The geometry determined by the Optimizer complies with the ISO-730 Standard functional requirements and other design requirements. The interaction between the soil and the implement during the simulated reference maneuver was successfully validated against experimental data. Simulation results show that the adopted reference maneuver is effective in triggering the weight-transfer effect, with the front axle load exhibiting a peak-to-peak value of 27.1 kN during the maneuver. A benchmark test was conducted starting from four geometries of a commercially available TPH. As a result, all configurations were improved by more than 10%. After 36 iterations, the Optimizer found an optimized TPH geometry that reduces the weight-transfer effect by 14.9%.
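A minimal sketch of the constrained-minimization step described above is given below. The geometry parameterization, variable names, and the bounds standing in for the ISO-730 functional requirements are illustrative assumptions, not taken from the thesis.

```python
# Hypothetical sketch: minimize a weight-transfer measure over a TPH geometry.
import numpy as np
from scipy.optimize import minimize

def front_axle_load_transfer(geometry):
    """Objective: run the 3-DOF tractor-TPH-implement simulation of the
    reference maneuver for this TPH geometry and return a scalar measure of
    weight transfer (e.g. the peak-to-peak front axle load). A smooth
    surrogate is used here as a placeholder for the dynamic simulation."""
    target = np.array([0.25, 0.50, 0.95, 0.45])
    return float(np.sum((geometry - target) ** 2))

# Geometry vector: e.g. coordinates of the lower- and upper-link hitch points (m).
x0 = np.array([0.30, 0.55, 0.90, 0.40])

# Box constraints standing in for functional and design requirements.
bounds = [(0.20, 0.40), (0.45, 0.70), (0.80, 1.10), (0.30, 0.60)]

result = minimize(front_axle_load_transfer, x0, method="SLSQP", bounds=bounds)
print("Optimized TPH geometry:", result.x)
print("Residual weight-transfer measure:", result.fun)
```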
Abstract:
This thesis explores methods based on the free energy principle and active inference for modelling cognition. Active inference is an emerging framework for designing intelligent agents in which psychological processes are cast in terms of Bayesian inference. Here, I appeal to it to test, via simulation, the design of a set of cognitive architectures. These architectures are defined in terms of generative models in which an agent executes a task under the assumption that all cognitive processes aspire to the same objective: the minimization of variational free energy. Chapter 1 introduces the free energy principle and its assumptions about self-organizing systems. Chapter 2 describes how a minimal form of cognition able to achieve autopoiesis can emerge from the mechanics of self-organization. Chapter 3 presents the method I use to formalize generative models for action and perception. The proposed architectures provide a more biologically plausible account of complex cognitive processing that entails deep temporal features. I then present three simulation studies that aim to show different aspects of cognition, their associated behavior, and the underlying neural dynamics. In chapter 4, the first study proposes an architecture representing the visuomotor system for the encoding of actions during action observation, understanding, and imitation. In chapter 5, the generative model is extended and lesioned to simulate brain damage and the neuropsychological patterns observed in apraxic patients. In chapter 6, the third study proposes an architecture for cognitive control and the modulation of attention for action selection. Finally, I argue that active inference can provide a formal account of information processing in the brain and that the adaptive capabilities of the simulated agents are a mere consequence of the architecture of the generative models. Cognitive processing, then, becomes an emergent property of the minimization of variational free energy.
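For reference, the quantity minimized throughout these architectures, variational free energy, has a standard form (the notation here is assumed, not taken from the thesis):

\[
F[q] \;=\; \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big]
\;=\; D_{\mathrm{KL}}\big[q(s)\,\|\,p(s \mid o)\big] \;-\; \ln p(o),
\]

where o are observations, s hidden states, p the generative model, and q the approximate posterior. Minimizing F with respect to q improves perception, since q approaches the true posterior, while minimizing it through action drives the agent towards observations its generative model expects, bounding surprise \(-\ln p(o)\) from above.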
Abstract:
Imaging technologies are widely used in application fields such as the natural sciences, engineering, medicine, and the life sciences. A broad class of imaging problems reduces to solving ill-posed inverse problems (IPs). Traditional strategies for solving these ill-posed IPs rely on variational regularization methods, which are based on the minimization of suitable energies and make use of knowledge about the image formation model (forward operator) and prior knowledge on the solution, but lack a way to incorporate knowledge directly from data. On the other hand, more recent learned approaches can easily learn the intricate statistics of images from a large set of data, but do not have a systematic method for incorporating prior knowledge about the image formation model. The main purpose of this thesis is to discuss data-driven image reconstruction methods that combine the benefits of these two reconstruction strategies for the solution of highly nonlinear ill-posed inverse problems. Mathematical formulations and numerical approaches for image IPs, including linear as well as strongly nonlinear problems, are described. More specifically, we address the Electrical Impedance Tomography (EIT) reconstruction problem by unrolling the regularized Gauss-Newton method and integrating the regularization learned by a data-adaptive neural network. Furthermore, we investigate the solution of nonlinear ill-posed IPs by introducing a deep-PnP framework that integrates a graph convolutional denoiser into the proximal Gauss-Newton method, with a practical application to EIT, a recently introduced and promising imaging technique. Efficient algorithms are then applied to the limited-electrode problem in EIT, combining compressive sensing techniques and deep learning strategies. Finally, a transformer-based neural network architecture is adapted to restore the noisy solution of the Computed Tomography problem recovered using the filtered back-projection method.
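As an illustration of the unrolled scheme mentioned above, one iteration of a regularized Gauss-Newton method for a nonlinear forward operator F, with a quadratic regularizer built from an operator \(L_\theta\), can be written generically (notation assumed here, not taken from the thesis) as

\[
x_{k+1} = x_k + \big(J_k^{\top} J_k + \lambda_k\, L_\theta^{\top} L_\theta\big)^{-1}
\Big(J_k^{\top}\big(y - F(x_k)\big) - \lambda_k\, L_\theta^{\top} L_\theta\, x_k\Big),
\]

where \(J_k\) is the Jacobian of F at \(x_k\). In a data-adaptive, unrolled version, the regularization operator \(L_\theta\) (or, more generally, the regularizing step) is parameterized by a neural network whose weights \(\theta\) are learned from data, and a fixed number of such iterations is trained end-to-end.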
Abstract:
The central aim of this dissertation is to introduce innovative methods, models, and tools to enhance the overall performance of supply chains responsible for handling perishable products. This notion of improved performance encompasses several critical dimensions, including greater efficiency in supply chain operations, product quality, safety, sustainability, the minimization of waste generation, and compliance with norms and regulations. The research is structured around three specific research questions that provide a solid foundation for delving into and narrowing down the array of potential solutions. These questions primarily concern enhancing the overall performance of distribution networks for perishable products and optimizing the package hierarchy, extending to unconventional packaging solutions. To address these research questions effectively, a well-defined research framework guides the approach. Throughout, the dissertation adheres to an overarching methodological approach that comprises three fundamental aspects. The first aspect centers on the necessity of systematic data sampling and categorization, including the identification of critical points within food supply chains. The data collected in this context must then be organized within a customized data structure designed to feed both cyber-physical and digital twins, in order to quantify and analyze supply chain failures from a preventive perspective.
Abstract:
In the steelmaking industry, galvanizing is a treatment applied to protect steel from corrosion. The air knife effect (AKE) occurs when nozzles emit a stream of air onto the surfaces of a steel strip to remove excess zinc from it. In our work we formalized the problem of controlling the AKE and, with the R&D department of Marcegaglia SpA, implemented a deep learning (DL) model able to drive the AKE. We call it the controller. It takes as input the tuple (pres, dist) used to drive the mechanical nozzles towards the target coating (c). We designed the structure of the network according to the requirements, and we collected and explored the data set of historical data from the smart factory. Finally, we designed the loss function as the sum of three components: the minimization of the gap between the coating obtained under the network's commands and the target value we want to reach, and two weighted minimization components for pressure and distance. In our solution we construct a second module, named coating net, to predict the zinc coating.
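A minimal sketch of the three-component loss described above is given below. Module and variable names, the weights, and the toy coating net are illustrative assumptions, not the actual plant model or code developed with Marcegaglia SpA.

```python
# Hedged sketch: three-term loss combining a coating-target gap with weighted
# penalties on the pressure and distance commands.
import torch
import torch.nn as nn

class CoatingNet(nn.Module):
    """Toy stand-in for the 'coating net': predicts the zinc coating obtained
    from a (pressure, distance) command pair."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, pressure, distance):
        return self.net(torch.stack([pressure, distance], dim=-1))

def controller_loss(pressure, distance, coating_net, coating_target,
                    w_pressure=0.1, w_distance=0.1):
    """Sum of three terms: (i) gap between the coating predicted for the
    commanded (pressure, distance) pair and the target coating, plus
    (ii)-(iii) weighted penalties on the pressure and distance commands."""
    predicted_coating = coating_net(pressure, distance).squeeze(-1)
    coating_term = torch.mean((predicted_coating - coating_target) ** 2)
    pressure_term = w_pressure * torch.mean(pressure ** 2)
    distance_term = w_distance * torch.mean(distance ** 2)
    return coating_term + pressure_term + distance_term

# Example usage on a small batch of commands.
pressure = torch.rand(8)
distance = torch.rand(8)
target = torch.full((8,), 0.5)
loss = controller_loss(pressure, distance, CoatingNet(), target)
```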