10 results for System failures (Engineering) Graphic methods

in AMS Tesi di Laurea - Alm@DL - Università di Bologna


Relevance: 100.00%

Abstract:

According to various studies, the effects of climate change will be a danger to ecosystems and populations, especially in coastal areas, where the risk of floods is increasing. Authorities are taking action to prevent future disasters using traditional engineering solutions. These solutions can have high environmental and economic costs: they fix the coastline in place, increase the salinization of aquifers, and can be subject to failure mechanisms. For this reason, studies have investigated the use of natural engineering solutions for coastal protection, instead of traditional ones, in line with the UN SDGs. Coastal ecosystems have the natural ability to repair and restore themselves, increasing soil elevation and attenuating waves. One of these solutions is the Double Dyke System (DDS), which consists of creating a salt marsh between the first dyke and a second one located further inland. The goal is to protect the coasts and to restore ecosystems. The purpose of this study is to compare the costs of natural engineering solutions with traditional ones. It is assumed that these solutions may be more effective and less expensive in the long run. For this evaluation, a suitability analysis of the polders in the Dutch Zeeland region was carried out to assess the costs and benefits under different sea level rise (SLR) scenarios. A saline intrusion model was also created to analyze the effects of a salt marsh on the aquifers. From the analyses conducted, the implementation of the DDS turns out to be the cheapest coastal defense system in all SLR scenarios. The presence of a salt marsh could also have a positive impact on the prevention of saline intrusion in the various scenarios considered. The DDS could have a positive economic and environmental impact in the long term, reducing the investment costs for coastal defense and bringing important benefits for the protection of man and nature. Despite these results, more studies are needed on the efficiency of this defense system and on the economic evaluation of non-marketable ecosystem services.
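
A minimal sketch of the kind of discounted cost comparison such an evaluation relies on, not the thesis model: two defense strategies are costed over a planning horizon under several SLR scenarios. All figures and cost functions below are hypothetical placeholders.

```python
# Hypothetical comparison of coastal-defense strategies under SLR scenarios.
SCENARIOS_M = {"low": 0.4, "medium": 0.8, "high": 1.2}   # SLR by end of horizon, metres (illustrative)
DISCOUNT_RATE = 0.03
HORIZON_YEARS = 80

def npv(yearly_costs, rate=DISCOUNT_RATE):
    """Net present value of a sequence of yearly costs."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(yearly_costs))

def traditional_costs(slr_m):
    # Hypothetical: upfront dyke reinforcement plus maintenance that grows with SLR.
    upfront = 50e6 * slr_m
    maintenance = [0.5e6 * (1 + slr_m * t / HORIZON_YEARS) for t in range(HORIZON_YEARS)]
    return npv([upfront] + maintenance)

def dds_costs(slr_m):
    # Hypothetical: cost of the second dyke, while the salt marsh accretes with SLR,
    # so maintenance stays roughly flat regardless of the scenario.
    upfront = 40e6
    maintenance = [0.3e6] * HORIZON_YEARS
    return npv([upfront] + maintenance)

for name, slr in SCENARIOS_M.items():
    print(f"{name}: traditional {traditional_costs(slr)/1e6:.0f} M, DDS {dds_costs(slr)/1e6:.0f} M")
```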

Relevance: 100.00%

Abstract:

The work described in this Master’s Degree thesis arose from a collaboration with Maserati S.p.A., an Italian luxury car maker headquartered in Modena, in the heart of the Italian Motor Valley, where I worked as an intern in the Virtual Engineering team between September 2021 and February 2022. This work proposes the validation, using real-world ECUs, of a Driver Drowsiness Detection (DDD) system prototype based on different detection methods, with the goal of overcoming input signal losses and system failures. Detection methods of different categories have been chosen from the literature and merged with the goal of exploiting the benefits of each, overcoming their limitations, and keeping their degree of intrusiveness as low as possible to prevent any kind of driving distraction: an image-processing technique for detecting human physical signals is used alongside methods based on driver-vehicle interaction. A Driver-In-the-Loop simulator is used to gather real data on which a Machine Learning-based algorithm will be trained and validated. These data come from the tests that the company conducts in its daily activities, so confidential information about the simulator and the drivers is omitted. Although the impact of the proposed system is still limited and there is work to do on all its elements, the results indicate its main advantages in terms of robustness against subsystem failures and signal losses.
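
A minimal sketch of the fusion idea, not Maserati's pipeline: an image-based feature (e.g. an eye-closure ratio) and driver-vehicle interaction features (e.g. steering reversal rate) feed a single classifier, with imputation so a camera dropout does not break the prediction. Feature names, dropout handling, and data are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(0, 1, n),      # eye-closure ratio from the camera (hypothetical feature)
    rng.uniform(0, 5, n),      # steering-wheel reversal rate [1/s] (hypothetical feature)
    rng.uniform(0, 0.5, n),    # lane-position standard deviation [m] (hypothetical feature)
])
# Synthetic drowsiness label, for illustration only.
y = (0.6 * X[:, 0] - 0.1 * X[:, 1] + rng.normal(0, 0.1, n) > 0.3).astype(int)

# Simulate camera signal losses, then impute so the classifier always has an input.
X[rng.random(n) < 0.1, 0] = np.nan
X[np.isnan(X[:, 0]), 0] = np.nanmedian(X[:, 0])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```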

Relevance: 100.00%

Abstract:

Driving simulators emulate a real vehicle drive in a virtual environment. One of the most challenging problems in this field is to create a simulated drive that is as realistic as possible, deceiving the driver's senses and making them believe they are in a real vehicle. This thesis first provides an overview of the Stuttgart driving simulator with a description of the overall system, followed by a theoretical presentation of the commonly used motion cueing algorithms. The second and predominant part of the work presents the implementation of the classical and optimal washout algorithms in a Simulink environment. The project aims to create a new optimal washout algorithm and compare its results with those of the classical washout. The classical washout algorithm, already implemented in the Stuttgart driving simulator, is the most widely used in simulator motion control. This classical algorithm is based on a sequence of filters in which each parameter has a clear physical meaning and a unique assignment to a single degree of freedom. However, the effects on human perception are not exploited, and each parameter must be tuned online by an engineer in the control room, depending on the driver's feeling. To overcome this problem and also consider the driver's sensations, the optimal washout motion cueing algorithm was implemented. This optimal control-based algorithm treats motion cueing as a tracking problem, forcing the accelerations perceived in the simulator to track the accelerations that would have been perceived in a real vehicle, by minimizing the perception error within the constraints of the motion platform. The last chapter presents a comparison between the two algorithms, based on the driver's feelings after the test drive. First, an off-line test with a step signal as the input acceleration was performed to verify the behaviour of the simulator. Then, the algorithms were executed in the simulator during a test drive on several tracks.
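
A minimal sketch of the classical-washout idea described above, not the Stuttgart implementation: a high-pass filter lets the platform reproduce only acceleration transients before washing back to neutral, while a low-pass "tilt coordination" channel renders sustained acceleration as a pitch angle. Filter order, natural frequency, and damping below are illustrative tuning parameters.

```python
import numpy as np
from scipy import signal

fs = 100.0                                 # sample rate [Hz]
t = np.arange(0, 10, 1 / fs)
a_vehicle = np.where(t > 1, 3.0, 0.0)      # 3 m/s^2 step input (the off-line test case)

wn, zeta = 1.0, 0.7                        # washout natural frequency [rad/s] and damping (assumed)
den = [1, 2 * zeta * wn, wn ** 2]

# Translational channel: second-order high-pass s^2 / (s^2 + 2*zeta*wn*s + wn^2).
a_platform = signal.lsim(([1, 0, 0], den), a_vehicle, t)[1]

# Tilt coordination: low-pass the sustained part and convert it to a tilt angle.
a_lowfreq = signal.lsim(([wn ** 2], den), a_vehicle, t)[1]
tilt_angle = np.arcsin(np.clip(a_lowfreq / 9.81, -1, 1))   # [rad]

# The high-pass output washes out to ~0, while the tilt holds the sustained cue.
print(a_platform[-1], tilt_angle[-1])
```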

Relevance: 100.00%

Abstract:

Miniaturized flying robotic platforms, called nano-drones, have the potential to revolutionize the autonomous robots industry sector thanks to their very small form factor. The nano-drones’ limited payload only allows for a sub-100 mW microcontroller unit for the on-board computations. Therefore, traditional computer vision and control algorithms are too computationally expensive to be executed on board these palm-sized robots, and we are forced to rely on artificial intelligence to trade off accuracy in favor of lightweight pipelines for autonomous tasks. However, relying on deep learning exposes us to the problem of generalization, since the deployment scenario of a convolutional neural network (CNN) often contains visual cues and features different from those learned during training, leading to poor inference performance. Our objective is to develop and deploy an adaptation algorithm, based on the concept of latent replays, that allows us to fine-tune a CNN to work in new and diverse deployment scenarios. To do so, we start from an existing model for visual human pose estimation, called PULP-Frontnet, which is used to identify the pose of a human subject in space through its four output variables, and we present the design of our novel adaptation algorithm, which features automatic data gathering and labeling and on-device deployment. We showcase the ability of our algorithm to adapt PULP-Frontnet to new deployment scenarios, improving the R² scores of the four network outputs, with respect to an unknown environment, from approximately [−0.2, 0.4, 0.0, −0.7] to [0.25, 0.45, 0.2, 0.1]. Finally, we demonstrate how it is possible to fine-tune our neural network in real time (i.e., under 76 seconds), using the target parallel ultra-low power GAP8 System-on-Chip on board the nano-drone, and we show how all adaptation operations can take place using less than 2 mWh of energy, a small fraction of the available battery capacity.
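
A minimal sketch of latent-replay fine-tuning as a general technique, an assumed structure rather than the PULP-Frontnet code: the frozen backbone's intermediate activations ("latents") from old data are stored once and replayed together with latents from the new environment while only the head is updated. Network shapes, buffer size, and the self-labeled data below are placeholders.

```python
import torch
from torch import nn

backbone = nn.Sequential(nn.Conv2d(1, 8, 3, stride=2), nn.ReLU(), nn.Flatten())
head = nn.Linear(8 * 79 * 79, 4)                # 4 pose outputs (e.g. x, y, z, yaw)

for p in backbone.parameters():                 # the backbone stays frozen on-device
    p.requires_grad_(False)

def latents(x):
    with torch.no_grad():
        return backbone(x)

# Latent replay buffer: activations of old-domain samples, stored once.
replay_buffer = latents(torch.randn(64, 1, 160, 160))
replay_targets = torch.randn(64, 4)

opt = torch.optim.Adam(head.parameters(), lr=1e-3)
for step in range(100):
    # Self-labeled batch from the new environment (placeholder tensors here).
    new_x, new_y = torch.randn(16, 1, 160, 160), torch.randn(16, 4)
    idx = torch.randint(0, 64, (16,))
    z = torch.cat([latents(new_x), replay_buffer[idx]])      # mix new latents with replayed ones
    y = torch.cat([new_y, replay_targets[idx]])
    loss = nn.functional.mse_loss(head(z), y)
    opt.zero_grad(); loss.backward(); opt.step()
```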

Relevance: 50.00%

Abstract:

The goal of the research is to provide an overview of the factors that play a major role in structural failures and to focus on the importance that bracing has in construction accidents. A temporary bracing system is important to construction safety, yet it is often neglected. Structural collapses often occur due to insufficient support of the loads applied at the time of failure. The structural load is usually analyzed by conceiving the whole structure as a completed entity, and there is frequently a lack of design or proper implementation of systems that can provide stability during construction. Often, the specific provisions and requirements of temporary bracing systems are left to workers on the job site who may not have the qualifications or expertise for proper execution. To assess whether bracing design should receive more attention in codes and standards, failures that could have been avoided by the presence and/or correct design of a bracing system were searched for and selected among a variety of cases in the engineering literature. Eleven major cases were found, spanning a time frame of almost 70 years, clearly showing that the topic deserves more attention. The case studies are presented in chronological order and in a systematic way. The failed structure is described in its design components and the sequence of failure is reconstructed. Then, the causes and failure mechanism are presented. Advice on how to prevent similar failures from happening again and hypothetical solutions that could have prevented the collapses are identified. The findings show that insufficient or nonexistent bracing mainly results from human negligence or miscalculation in the load analysis, and that the time has come to fully acknowledge that temporary structures should be more fully accounted for in design rather than left to contractors' means and methods of construction.

Relevance: 40.00%

Abstract:

In the last decade, the demand for structural health monitoring expertise has increased exponentially in the United States. The aging issues that most transportation structures are experiencing can put the economic system of a region, as well as of a country, in serious jeopardy. At the same time, the monitoring of structures is a central topic of discussion in Europe, where the preservation of historical buildings has been addressed over the last four centuries. More recently, various concerns have arisen about the security performance of civil structures after tragic events such as 9/11 or the 2011 Japan earthquake: engineers look for designs able to resist exceptional loadings due to earthquakes, hurricanes, and terrorist attacks. After events of this kind, the assessment of the remaining life of the structure is at least as important as the initial performance design. Consequently, it is very clear that the introduction of reliable and accessible damage assessment techniques is crucial for the localization of issues and for correct and immediate rehabilitation. System Identification is a branch of the more general Control Theory. In Civil Engineering, this field addresses the techniques needed to find mechanical characteristics, such as the stiffness or the mass, starting from the signals captured by sensors. The objective of Dynamic Structural Identification (DSI) is to determine, starting from experimental measurements, the fundamental modal parameters of a generic structure in order to characterize its dynamic behavior via a mathematical model. The knowledge of these parameters is helpful in the Model Updating procedure, which permits the definition of corrected theoretical models through experimental validation. The main aim of this technique is to minimize the differences between the theoretical model results and in situ measurements of dynamic data. The updated model therefore becomes a very effective control tool when it comes to the rehabilitation of structures or damage assessment. The instrumentation of a whole structure is sometimes unfeasible, either because of the high cost involved or because it is not physically possible to reach every point of the structure. Numerous scholars have tried to address this problem, generally with two main methods. In the first, given the limited number of sensors, time histories are gathered for only some locations, then the instruments are moved to another location and the procedure is repeated. Otherwise, if the number of sensors is sufficient and the structure does not present a complicated geometry, it is usually enough to detect only the first principal modes. These two problems are well presented in the works of Balsamo [1], for the application to a simple system, and Jun [2], for the analysis of a system with a limited number of sensors. Once the system identification has been carried out, it is possible to access the actual system characteristics. A frequent practice is to create an updated FEM model and assess whether or not the structure fulfills the requested functions. Once again, the objective of this work is to present a general methodology to analyze large structures using a limited number of instruments while, at the same time, obtaining the most information about the identified structure without recalling methodologies of difficult interpretation. A general framework of the state-space identification procedure via the OKID/ERA algorithm is developed and implemented in Matlab. Then, some simple examples are proposed to highlight the principal characteristics and advantages of this methodology. A new algebraic manipulation for a prolific use of substructuring results is developed and implemented.
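
A minimal sketch of the ERA step of an OKID/ERA identification, written in Python/NumPy rather than the Matlab framework the thesis implements: given impulse-response Markov parameters, build Hankel matrices, take an SVD, and realize a reduced-order discrete state-space model (A, B, C). Hankel block counts are illustrative defaults.

```python
import numpy as np

def era(markov, order, rows=20, cols=20):
    """Eigensystem Realization Algorithm.

    markov: list of Markov parameters Y[k], each of shape (n_outputs, n_inputs),
            with at least rows + cols entries.
    order:  desired model order (number of retained singular values).
    """
    m, r = markov[0].shape
    H0 = np.block([[markov[i + j] for j in range(cols)] for i in range(rows)])
    H1 = np.block([[markov[i + j + 1] for j in range(cols)] for i in range(rows)])
    U, s, Vt = np.linalg.svd(H0, full_matrices=False)
    U, s, Vt = U[:, :order], s[:order], Vt[:order, :]
    S_sqrt, S_inv_sqrt = np.diag(np.sqrt(s)), np.diag(1 / np.sqrt(s))
    A = S_inv_sqrt @ U.T @ H1 @ Vt.T @ S_inv_sqrt   # state matrix
    B = (S_sqrt @ Vt)[:, :r]                        # first input block
    C = (U @ S_sqrt)[:m, :]                         # first output block
    return A, B, C

# Modal frequencies and damping ratios then follow from the eigenvalues of A.
```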

Relevance: 40.00%

Abstract:

This thesis was written at the company Sacmi Imola S.C., specifically within the Closures division, which deals with the design and construction of lines for the production of various types of caps. The purpose of this work is to describe the development of a product traceability system; systems of this kind, initially adopted in the food sector, are acquiring growing importance in other production fields as well, since they play a strategic role in the realization of products characterized by high levels of performance and quality, able to stand out in a modern market marked by worldwide competition and great attention to customer needs. In the specific case of Sacmi, the traceability system targets a press, the CCM (Continuous Compression Moulder), built by the company for the production of thermoplastic caps using compression moulding technology. In particular, the system focuses on the moulds of the CCM machine, which represent its critical elements from both a technical and an economic point of view. In general, a traceability system consists of two fundamental components: the first is an identification system that makes the units to be traced distinguishable and locatable, while the second is a data collection system able to gather the desired information. These data are then stored in a dedicated database and attributed to the corresponding entities by exploiting the properties of the identification system. The first step when developing a traceability system within an already consolidated production context is the reconstruction of the company's production process: this means identifying all the company departments that take part in the process and will be affected by the introduction of the new system. Once the actors are defined, it is also necessary to understand how they are connected by material and information flows. Sacmi's production process was characterized by the almost total absence of a structured information flow supporting the material flow, and the traceability system filled exactly this gap. The system must be able to integrate seamlessly into the company's production context: the right compromise must be found regarding the amount of information to collect, which must guarantee proper coverage of the whole process without burdening it excessively. Data collection should be based on standard procedures that ensure the repeatability of the information-gathering operations. As one might expect, the introduction of numerous novelties into an already structured context raised a number of issues, such as storage difficulties and production delays; these must be resolved by asking an additional effort of the departments involved or, in the medium/long term, by evolving and refining the system with leaner solutions.
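
A minimal sketch, illustrative only and not Sacmi's system, of the two components described above: an identification scheme for the traced units (here, hypothetical CCM mould IDs) and a data-collection layer that stores events in a database keyed by that identifier.

```python
import sqlite3
from datetime import datetime, timezone

db = sqlite3.connect(":memory:")           # a real system would use a persistent database
db.execute("""
    CREATE TABLE mould_events (
        mould_id   TEXT NOT NULL,          -- unique identifier of the traced unit
        event_type TEXT NOT NULL,          -- e.g. 'assembled', 'installed', 'maintenance'
        station    TEXT,                   -- department/station where the event was recorded
        timestamp  TEXT NOT NULL
    )""")

def record_event(mould_id, event_type, station):
    """Attribute a data-collection event to the identified unit."""
    db.execute("INSERT INTO mould_events VALUES (?, ?, ?, ?)",
               (mould_id, event_type, station, datetime.now(timezone.utc).isoformat()))
    db.commit()

record_event("CCM-MOULD-0042", "installed", "assembly")   # hypothetical ID and station
for row in db.execute("SELECT * FROM mould_events WHERE mould_id = ?", ("CCM-MOULD-0042",)):
    print(row)
```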

Relevance: 40.00%

Abstract:

Every year, thousands of surgical treatments are performed to repair or, where possible, completely replace organs or tissues affected by degenerative diseases. Patients with these kinds of illnesses spend long periods waiting for a donor who can replace, in a short time, the damaged organ or tissue. The lack of biological alternatives to conventional surgical treatments such as autografts, allografts, and xenografts led researchers from different areas to collaborate in finding innovative solutions. This research gave rise to a new discipline able to merge knowledge from molecular biology, biomaterials, engineering, biomechanics and, more recently, design and architecture. This discipline is named Tissue Engineering (TE), and it represents a step forward towards substitutive or regenerative medicine. One of the major challenges of TE is to design and develop, using a biomimetic approach, an artificial 3D anatomical scaffold suitable for the adhesion of cells able to proliferate and differentiate in response to the biological and biophysical stimuli offered by the specific tissue to be replaced. Nowadays, powerful instruments allow increasingly accurate and well-defined analyses to be performed on patients who need more precise diagnoses and treatments. Starting from patient-specific information provided by CT (Computed Tomography), micro-CT and MRI (Magnetic Resonance Imaging), an image-based approach can be used to reconstruct the site to be replaced. With the aid of recent Additive Manufacturing techniques, which allow three-dimensional objects to be printed with sub-millimetric precision, it is now possible to exercise almost complete control over the parametric characteristics of the scaffold: this is the way to achieve correct cellular regeneration. In this work, we focus on a branch of TE known as Bone TE, whose main subject is bone. Bone TE combines the osteoconductive and morphological aspects of the scaffold, whose main properties are pore diameter, structure porosity and interconnectivity. Achieving the ideal values of these parameters is the main goal of this work: we create a simple and interactive biomimetic design process, based on 3D CAD modeling and generative algorithms, that provides a way to control the main properties and to create a structure morphologically similar to cancellous bone. Two different typologies of scaffold are compared: the first is based on Triply Periodic Minimal Surfaces (TPMS), whose basic crystalline geometries are nowadays used for Bone TE scaffolding; the second is based on Voronoi diagrams, which are more often used in the design of decorations and jewellery for their capacity to decompose and tessellate a volumetric space with a heterogeneous spatial distribution (frequently found in nature). In this work, we show how to manipulate the main properties (pore diameter, structure porosity and interconnectivity) of TE-oriented scaffold design through the implementation of generative algorithms: "bringing back the nature to the nature".
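
A minimal sketch, an assumption about the workflow rather than the thesis code: sampling the gyroid, one of the classic TPMS geometries, on a voxel grid and estimating how the iso-value controls porosity, while pore size scales with the unit-cell period. Grid resolution and cell size below are illustrative.

```python
import numpy as np

def gyroid_porosity(cell_size_mm=1.0, n_cells=4, resolution=64, iso=0.0):
    """Fraction of the volume lying on the 'void' side of the gyroid iso-surface."""
    L = cell_size_mm * n_cells
    x, y, z = np.meshgrid(*[np.linspace(0, L, resolution)] * 3, indexing="ij")
    k = 2 * np.pi / cell_size_mm                     # spatial frequency set by the unit cell
    f = (np.sin(k * x) * np.cos(k * y)
         + np.sin(k * y) * np.cos(k * z)
         + np.sin(k * z) * np.cos(k * x))            # implicit gyroid field
    return np.mean(f < iso)

# Shifting the iso-value shifts the porosity; the cell size sets the pore diameter.
for iso in (-0.5, 0.0, 0.5):
    print(f"iso = {iso:+.1f}  ->  porosity ~ {gyroid_porosity(iso=iso):.2f}")
```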

Relevance: 40.00%

Abstract:

The dissertation starts by providing a description of the phenomena related to the increasing importance recently acquired by satellite applications. The spread of such technology comes with implications, such as an increase in maintenance costs, which motivates the interest in developing advanced techniques that favor greater autonomy of spacecraft in health monitoring. Machine learning techniques are widely employed to lay a foundation for effective systems specialized in fault detection by examining telemetry data. Telemetry consists of a considerable amount of information; therefore, the adopted algorithms must be able to handle multivariate data while facing the limitations imposed by on-board hardware features. In the framework of outlier detection, the dissertation addresses the topic of unsupervised machine learning methods, in which a lack of prior knowledge of the data behavior is assumed. Specifically, two models are brought to attention, namely Local Outlier Factor and One-Class Support Vector Machines. Their performance is compared in terms of both prediction accuracy and computational cost. Both models are trained and tested on the same sets of time series data in a variety of settings, aimed at gaining insight into the effect of increasing dimensionality. The results obtained allow us to claim that both models, combined with proper tuning of their characteristic parameters, successfully fulfill the role of outlier detectors for multivariate time series data. Nevertheless, in this specific context, Local Outlier Factor outperforms One-Class SVM, in that it proves more stable over a wider range of input parameter values. This property is especially valuable in unsupervised learning, since it suggests that the model is well suited to adapting to unforeseen patterns.
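
A minimal sketch, not the thesis experiments: comparing Local Outlier Factor and One-Class SVM as unsupervised detectors on synthetic multivariate, telemetry-like samples with injected outliers. Parameter values (n_neighbors, contamination, nu) are illustrative only.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
nominal = rng.normal(0, 1, size=(980, 6))            # nominal multivariate samples
anomalies = rng.normal(0, 1, size=(20, 6)) + 5       # injected outliers
X = np.vstack([nominal, anomalies])
y_true = np.r_[np.ones(980), -np.ones(20)]           # sklearn convention: -1 marks an outlier

lof = LocalOutlierFactor(n_neighbors=30, contamination=0.02)
y_lof = lof.fit_predict(X)

ocsvm = OneClassSVM(kernel="rbf", nu=0.02, gamma="scale")
y_svm = ocsvm.fit(X).predict(X)

for name, pred in (("LOF", y_lof), ("One-Class SVM", y_svm)):
    print(name, "detection rate on injected outliers:", np.mean(pred[y_true == -1] == -1))
```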

Relevance: 40.00%

Abstract:

In this thesis, the design of a pressure regulation system for space propulsion engines (electric and cold gas) has been carried out. The Bang-Bang Control (BBC) method has been implemented through the open/close command of a solenoid valve, and the mass flow rate of the propellant has been fixed with suitable flow restrictors. At the beginning, a comparison between mechanical and electronic (BBC-based) pressure regulators was performed, which showed enough advantages to justify the selection of the second type. The major advantage is the possibility of obtaining a variable outlet pressure, with a variable inlet pressure, through a simple remote command, whereas in mechanical pressure regulators the ratio between inlet and outlet pressure must be set mechanically. Different pressure control schemes have been analyzed, varying the number of solenoid valves, flow restrictors, and plenums. For each scheme, the valve switching frequencies were evaluated both with simplified mathematical models and with simulators implemented in Python; the results obtained from the two methods matched quite well. In all the schemes it was possible to observe how frequency and duty cycle vary with changes in the different parameters. These results, after experimental checks, can be used to design the control system for a given total number of cycles that a specific solenoid valve can guarantee. Finally, tests were performed, and it was possible to verify the soundness of the control system. Moreover, the tests suggested some hints for optimizing the running of the simulator.
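
A minimal sketch of a bang-bang (on/off) pressure regulator simulation in the spirit of the Python simulators mentioned above, not the thesis code: the solenoid valve opens when the plenum pressure falls below a hysteresis band around the set point and closes above it, while a downstream restrictor drains the plenum. All physical parameters are hypothetical.

```python
P_SET, HYST = 2.0e5, 0.1e5         # plenum set point and hysteresis half-band [Pa]
P_INLET = 10.0e5                   # upstream tank pressure [Pa]
C_IN, C_OUT = 5e-9, 1e-9           # valve / flow-restrictor conductances [kg/(s*Pa)] (assumed)
V_PLENUM, R_T = 0.5e-3, 287 * 300  # plenum volume [m^3] and R*T for the gas [J/kg]
dt, t_end = 1e-4, 2.0              # integration step and simulated time [s]

p, valve_open, switches = 1.0e5, False, 0
for _ in range(int(t_end / dt)):
    # Bang-bang law with hysteresis: open below the band, close above it.
    if p < P_SET - HYST and not valve_open:
        valve_open, switches = True, switches + 1
    elif p > P_SET + HYST and valve_open:
        valve_open, switches = False, switches + 1
    m_in = C_IN * (P_INLET - p) if valve_open else 0.0     # through the solenoid valve
    m_out = C_OUT * p                                      # through the downstream restrictor
    p += (m_in - m_out) * R_T / V_PLENUM * dt              # ideal-gas plenum dynamics

print(f"final pressure {p/1e5:.2f} bar, valve cycles ~ {switches // 2} in {t_end:.0f} s")
```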