21 results for Point of interest


Abstract:

In recent years, Additive Manufacturing (AM) has drawn the attention of both academic research and industry, as it may deeply change and improve several industrial sectors. From the material point of view, AM results in a peculiar microstructure that strictly depends on the conditions of the additive process and directly affects mechanical properties. The present PhD research project aimed at investigating the process-microstructure-properties relationship of additively manufactured metal components. Two technologies belonging to the AM family were considered: Laser-based Powder Bed Fusion (LPBF) and Wire-and-Arc Additive Manufacturing (WAAM). The experimental activity was carried out on different metals of industrial interest: a CoCrMo biomedical alloy and an AlSi7Mg0.6 alloy processed by LPBF, and an AlMg4.5Mn alloy and an AISI 304L austenitic stainless steel processed by WAAM. In the case of LPBF, particular attention was paid to the influence that the feedstock material and the process parameters exert on the hardness and on the morphological and microstructural features of the produced samples. The analyses, targeted at minimizing microstructural defects, led to process optimization. For heat-treatable LPBF alloys, innovative post-process heat treatments, tailored to the peculiar hierarchical microstructure induced by LPBF, were developed and investigated in depth. The main mechanical properties of the as-built and heat-treated alloys were assessed and correlated well with the specific LPBF microstructure. Results showed that, if properly optimized, samples exhibit a good trade-off between strength and ductility already in the as-built condition; tailored heat treatments, however, further improved the overall performance of the LPBF alloys. Characterization of the WAAM alloys, instead, evidenced the microstructural and mechanical anisotropy typical of AM metals. Experiments also revealed an outstanding anisotropy in the elastic modulus of the austenitic stainless steel which, along with the other mechanical properties, was explained on the basis of microstructural analyses.
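For context only: LPBF process-parameter studies commonly condense laser power P, scan speed v, hatch spacing h, and layer thickness t into a single volumetric energy density. The abstract does not state that this metric was used in the thesis, so the formula below is just the customary reference definition from the LPBF literature:

```latex
E_V \;=\; \frac{P}{v\,h\,t} \qquad \left[\mathrm{J/mm^3}\right]
```

Too low an E_V typically leaves lack-of-fusion porosity, while too high a value promotes keyhole porosity, which is why defect minimization amounts to locating a suitable process-parameter window.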

Abstract:

The design optimization of industrial products has always been an essential activity for improving product quality while reducing time-to-market and production costs. Although cost management is very complex and spans all phases of the product life cycle, the control of geometrical and dimensional variations, known as Dimensional Management (DM), allows compliance with product and process requirements. Tolerance-cost optimization thus becomes the main practice for an effective application of Design for Tolerancing (DfT) and Design to Cost (DtC) approaches, since it connects product tolerances with the associated manufacturing costs. However, despite the growing interest in this topic, the profitable industrial application of these techniques is hampered by their complexity: the definition of a systematic framework is the key element for improving design optimization, enhancing the concurrent use of Computer-Aided tools and Model-Based Definition (MBD) practices. The present doctoral research aims to define and develop an integrated methodology for product/process design optimization that better exploits the new capabilities of advanced simulations and tools. By implementing predictive models and multi-disciplinary optimization, a Computer-Aided Integrated framework for tolerance-cost optimization is proposed, which integrates the DfT and DtC approaches and applies them directly to the design of automotive components. Several case studies were considered, with the final application of the integrated framework to a high-performance V12 engine assembly, achieving both the functional targets and cost reduction. From a scientific point of view, the proposed methodology improves the tolerance-cost optimization of industrial components. The integration of theoretical approaches and Computer-Aided tools makes it possible to analyse the influence of tolerances on both product performance and manufacturing costs. The case studies proved the suitability of the methodology for application in the industrial field and identified further areas for improvement and refinement.
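The kernel of a tolerance-cost optimization can be sketched in a few lines. The snippet below is a deliberately simplified illustration, not the thesis's framework: it assumes a textbook reciprocal cost model C_i(t_i) = a_i + b_i/t_i and a root-sum-square (RSS) stack-up constraint, and all coefficients are made-up placeholders.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical reciprocal cost model per toleranced feature: C_i = a_i + b_i / t_i
a = np.array([1.0, 1.5, 0.8])     # fixed manufacturing cost per feature
b = np.array([0.05, 0.08, 0.03])  # sensitivity of cost to tolerance tightening
T_ASM = 0.12                      # assembly-level tolerance budget (mm), made up

def total_cost(t):
    return np.sum(a + b / t)

# RSS stack-up constraint: sqrt(sum t_i^2) must not exceed the budget
cons = [{"type": "ineq", "fun": lambda t: T_ASM - np.sqrt(np.sum(t ** 2))}]
bounds = [(1e-4, T_ASM)] * len(a)

res = minimize(total_cost, x0=np.full(len(a), T_ASM / 3),
               bounds=bounds, constraints=cons, method="SLSQP")
print("optimal tolerances (mm):", res.x)
print("minimum total cost:", res.fun)
```

The optimizer loosens the tolerances with the steepest cost penalty and tightens the cheap ones until the stack-up budget is saturated, which is exactly the trade-off that links DfT to DtC.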

Abstract:

The study of tides and their interactions with the complex dynamics of the global ocean represents a crucial challenge in ocean modelling. This thesis aims to deepen this study from a dynamical point of view, analysing the tidal effects on the general circulation of the ocean. We perform several experiments with a mesoscale-permitting global ocean model forced by both atmospheric fields and the astronomical tidal potential, and we implement two parametrizations to include in the model tidal phenomena that are currently unresolved, with particular emphasis on the topographic wave drag for locally dissipating internal waves. An additional experiment using a mesoscale-resolving configuration is used to compare the simulated tides at different resolutions with observed data. We find that the accuracy of the modelled tides strongly depends on the region and on the harmonic component of interest, even though the increased resolution improves the modelled topography and resolves more intense internal waves. We then focus on the impact of tides in the Atlantic Ocean and find that tides weaken the overturning circulation over the analysed period from 1981 to 2007, even though the interannual differences strongly change in both amplitude and phase. The zonally integrated momentum balance shows that tides change the water stratification at the zonal boundaries, modifying the pressure and therefore the geostrophic balance over the entire basin. Finally, we describe the overturning circulation in the Mediterranean Sea by computing the meridional and zonal streamfunctions in both the Eulerian and residual frameworks. The circulation is characterised by different cells, and their forcing processes are described with particular emphasis on the role of mesoscale dynamics and of a transient climatic event. We complete the description of the overturning circulation by giving evidence, for the first time, of the connection between the meridional and zonal cells.
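For reference, the Eulerian meridional overturning streamfunction mentioned above is conventionally obtained by integrating the meridional velocity v zonally across the basin and vertically from the bottom; the notation below is the standard textbook one and may differ from the thesis:

```latex
\psi(y, z) \;=\; \int_{x_w}^{x_e} \int_{-H}^{z} v(x, y, z')\,\mathrm{d}z'\,\mathrm{d}x
```

where x_w and x_e are the western and eastern basin boundaries and H the local depth; the residual framework adds an eddy-induced (bolus) contribution to v on top of this Eulerian-mean transport.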

Abstract:

In the last decades, we have seen a soaring interest in autonomous robots, boosted not only by academia and industry but also by an ever-increasing demand from civil users. As a matter of fact, autonomous robots are fast spreading into all aspects of human life: we can see them clean houses, navigate through city traffic, or harvest fruits and vegetables. Almost all commercial drones already exhibit unprecedented and sophisticated skills that make them suitable for these applications, such as obstacle avoidance, simultaneous localisation and mapping, path planning, visual-inertial odometry, and object tracking. The major limitations of such robotic platforms lie in the limited payload they can carry, in their cost, and in the limited autonomy due to finite battery capacity. For this reason, researchers have started to develop new algorithms able to run even on platforms that are resource-constrained both in computational capability and in the types of onboard sensors, focusing especially on very cheap sensors and hardware. The possibility of using a limited number of sensors has allowed UAV sizes to be scaled down considerably, while the implementation of new, more efficient algorithms, performing the same tasks in less time, mitigates the limited battery autonomy. However, the robots developed so far are not mature enough to operate completely autonomously without human supervision, owing to dimensions that are still too large (especially for aerial vehicles), which make these platforms unsafe for humans, and to the high probability of numerical and decision errors that robots may make. In this perspective, this thesis aims to review and improve the current state-of-the-art solutions for autonomous navigation from a purely practical point of view. In particular, we focus in depth on the problems of robot control, trajectory planning, environment exploration, and obstacle avoidance.
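As a concrete illustration of the path-planning building block listed among those skills, here is a minimal A* planner on a 2-D occupancy grid. This is a generic textbook sketch, not the planner developed in the thesis; the grid, start, and goal are made-up inputs.

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* on a 4-connected occupancy grid (0 = free, 1 = obstacle)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    g = {start: 0}                                           # best cost-so-far
    parent = {start: None}
    frontier = [(h(start), start)]                           # priority queue on f = g + h
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == goal:                                      # reconstruct path back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0
                    and g[cur] + 1 < g.get(nxt, float("inf"))):
                g[nxt] = g[cur] + 1
                parent[nxt] = cur
                heapq.heappush(frontier, (g[nxt] + h(nxt), nxt))
    return None                                              # no path exists

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))
# -> [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```

The same search skeleton underlies many practical planners; on resource-constrained platforms the grid is typically replaced by a coarser or sparser map representation to keep memory and compute within budget.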

Abstract:

Long-term monitoring of acoustical environments is gaining popularity thanks to the relevant amount of scientific and engineering insight that it provides. The increasing interest is due to the constant growth of the storage capacity and computational power needed to process large amounts of data. In this perspective, machine learning (ML) provides a broad family of data-driven statistical techniques for dealing with large databases. Nowadays, the conventional praxis of sound level meter measurements limits the global description of a sound scene to an energetic point of view: the equivalent continuous level Leq is indeed the main metric used to define an acoustic environment. Finer analyses involve the use of statistical levels. However, acoustic percentiles rest on temporal assumptions that are not always reliable. A statistical approach based on the study of the occurrences of sound pressure levels brings a different perspective to the analysis of long-term monitoring. Depicting a sound scene through the most probable sound pressure level, rather than through portions of energy, provides more specific information about the activity carried out during the measurements: the statistical mode of the occurrences can capture the typical behaviour of specific kinds of sound sources. The present work proposes an ML-based method to identify, separate, and measure coexisting sound sources in real-world scenarios. It is based on long-term monitoring and is addressed to acousticians focused on the analysis of environmental noise in manifold contexts. The method is based on clustering analysis: two algorithms, the Gaussian Mixture Model and K-means clustering, form the core of a process to investigate different active spaces monitored through sound level meters. The procedure has been applied in two different contexts, university lecture halls and offices, where it shows robust and reliable results in describing the acoustic scenario, and it could represent an important analytical tool for acousticians.
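A minimal sketch of the clustering core described above, using scikit-learn's GaussianMixture and KMeans. The sound pressure levels here are synthetic stand-ins for a real long-term record (two hypothetical coexisting sources), not data from the study.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.cluster import KMeans

# Synthetic stand-in for a long-term record of sound pressure levels:
# a quiet background source (~45 dB) and intermittent activity (~65 dB).
rng = np.random.default_rng(0)
spl = np.concatenate([rng.normal(45, 2, 5000),
                      rng.normal(65, 4, 3000)]).reshape(-1, 1)

# GMM: each component's mean approximates the most probable level (the
# statistical mode) of one source; the weights estimate how often each
# source dominates the scene.
gmm = GaussianMixture(n_components=2, random_state=0).fit(spl)
for w, m, v in zip(gmm.weights_, gmm.means_.ravel(), gmm.covariances_.ravel()):
    print(f"source: mode ~ {m:.1f} dB, sigma {v ** 0.5:.1f} dB, weight {w:.2f}")

# K-means yields a harder partition of the same level occurrences.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(spl)
print("k-means centroids (dB):", np.sort(km.cluster_centers_.ravel()))
```

Note that the GMM means coincide with the source modes only if each source's level distribution is roughly unimodal and Gaussian; strongly skewed sources would need more than one component per source.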

Abstract:

The aim of the present study was to develop a statistical approach to define the best cut-off for copy number alteration (CNA) calling from the genomic data provided by high-throughput experiments, able to predict a specific clinical end-point (early relapse, within 18 months) in the context of Multiple Myeloma (MM). A total of 743 newly diagnosed MM patients with SNP-array-derived genomic data and clinical data were included in the study. CNAs were called both by a conventional (classic, CL) method and by an outcome-oriented (OO) method, and the Progression-Free Survival (PFS) hazard ratios of the CNAs called by the two approaches were compared. The OO approach successfully identified patients at higher risk of relapse, and the univariate survival analysis showed stronger prognostic effects for the OO-defined high-risk alterations than for those defined by the CL approach, statistically significant for 12 CNAs. Overall, 155/743 patients relapsed within 18 months from the start of therapy. A small number of OO-defined CNAs were significantly recurrent in early-relapsed patients (ER-CNAs): amp1q, amp2p, del2p, del12p, del17p, and del19p. Two groups of patients were identified, carrying or not carrying ≥1 ER-CNA (249 vs. 494, respectively), the first with significantly shorter PFS and overall survival (OS) (PFS HR 2.15, p<0.0001; OS HR 2.37, p<0.0001). The risk of relapse defined by the presence of ≥1 ER-CNA was independent of those conferred both by R-ISS stage 3 (HR=1.51; p=0.01) and by a low-quality (< stable disease) clinical response (HR=2.59, p=0.004). Notably, the type of induction therapy was not predictive, suggesting that early relapse is strongly related to the patients' baseline genomic architecture. In conclusion, the OO approach employed here made it possible to define CNA-specific dynamic clonality cut-offs, improving the accuracy of CNA calls in identifying the MM patients with the highest probability of early relapse. Being outcome-dependent, the OO approach is dynamic and can be adjusted according to the selected outcome variable of interest.
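To make the outcome-oriented idea concrete, the sketch below scans candidate clonality cut-offs for a single CNA and keeps the one that best separates the PFS curves by the log-rank statistic. Everything here is illustrative: the data are simulated, the column names are hypothetical, and a real analysis would also have to correct for the multiple testing implied by the scan.

```python
import numpy as np
import pandas as pd
from lifelines.statistics import logrank_test

# Simulated stand-in for one CNA region in a 743-patient cohort.
rng = np.random.default_rng(1)
n = 743
df = pd.DataFrame({
    "clonality": rng.uniform(0, 1, n),     # fraction of tumour cells carrying the CNA
    "pfs_months": rng.exponential(30, n),  # progression-free survival time
    "relapsed": rng.integers(0, 2, n),     # event indicator (1 = relapse observed)
})

best = None
for cut in np.arange(0.10, 0.91, 0.05):    # candidate clonality cut-offs
    hi, lo = df[df.clonality >= cut], df[df.clonality < cut]
    if len(hi) < 20 or len(lo) < 20:       # skip overly unbalanced splits
        continue
    res = logrank_test(hi.pfs_months, lo.pfs_months,
                       event_observed_A=hi.relapsed, event_observed_B=lo.relapsed)
    if best is None or res.test_statistic > best[1]:
        best = (cut, res.test_statistic, res.p_value)

print(f"outcome-oriented cut-off: {best[0]:.2f} (chi2 {best[1]:.2f}, p {best[2]:.3f})")
```

On random data such as this the selected cut-off is meaningless, which is precisely why cut-off scanning demands independent validation; the point is only the mechanics of letting the clinical end-point drive the calling threshold instead of a fixed conventional value.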