19 results for: 3D printing, steel bars, calibration of design values, correlation


Relevance: 100.00%

Abstract:

This dissertation contributes to the scholarly debate on temporary teams by exploring team interactions and boundaries. The fundamental challenge in temporary teams originates from temporary participation in the teams. First, as participants join the team for a short period of time, there is not enough time to build trust, share understanding, and have effective interactions. Consequently, team outputs and practices built on team interactions become vulnerable. Secondly, as participants move on and off the teams, team boundaries become blurred over time. This leads to uncertainty among team participants and leaders about who is or is not identified as a team member, causing collective disagreement within the team. Focusing on these challenges, we conducted this research in healthcare organisations, since the use of temporary teams in healthcare and hospital settings is prevalent. In particular, we focused on orthopaedic teams that provide personalised treatments for patients using 3D printing technology. Qualitative and quantitative data were collected using interviews, observations, questionnaires and archival data at the Rizzoli Orthopaedic Institute, Bologna, Italy. This study provides the following research outputs. The first is a conceptual study that explores the temporary-teams literature using bibliometric analysis and a systematic literature review to highlight research gaps. The second paper qualitatively studies temporary relationships within the teams, collecting data through group interviews and observations. The results highlight the role of short-term dyadic relationships as a ground for sharing and transferring knowledge at the team level. Moreover, the hierarchical structure of the teams facilitates knowledge sharing by supporting dyadic relationships within and beyond the team meetings. The third paper investigates the impact of blurred boundaries on temporary teams' performance. Using quantitative data collected through questionnaires and archival data, we conclude that boundary blurring in terms of fluidity, overlap and dispersion impacts team performance differently at high and low levels of task complexity.

Relevance: 100.00%

Abstract:

Bone disorders have a severe impact on body functions and quality of life, and no satisfying therapies exist yet. The current models for studying bone disease are scarcely predictive, and the existing therapeutic options fail for complex cases. To mimic and/or restore bone, 3D printing/bioprinting allows the creation of 3D structures with different material compositions, properties, and designs. In this study, 3D printing/bioprinting has been explored for (i) 3D in vitro tumor models and (ii) regenerative medicine. Tumor models have been developed by investigating different bioinks (i.e., alginate, modified gelatin) enriched with hydroxyapatite nanoparticles to increase printing fidelity and the level of biomimicry, thus mimicking the organic and inorganic phases of bone. High Saos-2 cell viability was obtained, and the promotion of spheroid clusters, as occurring in vivo, was observed. To develop new synthetic bone grafts, two approaches have been explored. In the first, novel magnesium-phosphate scaffolds have been investigated by extrusion-based 3D printing for spinal fusion. The 3D printing process and parameters have been optimized to obtain custom-shaped structures with competent mechanical properties. The 3D printed structures have been combined with alginate porous structures created by a novel ice-templating technique, to be loaded with an antibiotic drug for infection prevention. Promising results in terms of planktonic growth inhibition were obtained. In the second strategy, marine waste precursors have been considered for conversion into biogenic hydroxyapatite (HA) using a mild wet-conversion method with different parameters. The HA/carbonate conversion efficacy was analysed for each precursor (by FTIR and SEM), and the best conditions were combined with alginate to develop a composite structure. The composite paste was successfully employed in a custom-modified 3D printer to obtain stable 3D printed scaffolds. In conclusion, the osteomimetic materials developed in this study for bone models and synthetic grafts are promising for the bone field.

Relevance: 100.00%

Abstract:

The Gaia space mission is a major project for the European astronomical community. As challenging as it is, the processing and analysis of the huge data flow incoming from Gaia is the subject of thorough study and preparatory work by the DPAC (Data Processing and Analysis Consortium), in charge of all aspects of the Gaia data reduction. This PhD thesis was carried out in the framework of the DPAC, within the team based in Bologna. The task of the Bologna team is to define the calibration model and to build a grid of spectro-photometric standard stars (SPSS) suitable for the absolute flux calibration of the Gaia G-band photometry and the BP/RP spectrophotometry. Such a flux calibration can be performed by repeatedly observing each SPSS during the lifetime of the Gaia mission and by comparing the observed Gaia spectra to the spectra obtained by our ground-based observations. Because of the different observing sites involved and the huge number of frames expected (≃100,000), it is essential to maintain maximum homogeneity in data quality, acquisition and treatment, and particular care has to be taken to test the capabilities of each telescope/instrument combination (through the “instrument familiarization plan”) and to devise methods to keep under control, and where necessary correct for, the typical instrumental effects that can affect the high precision required for the Gaia SPSS grid (a few % with respect to Vega). I contributed to the ground-based survey of Gaia SPSS in many respects: the observations, the instrument familiarization plan, the data reduction and analysis activities (both photometry and spectroscopy), and the maintenance of the data archives. However, the field I was personally responsible for was photometry, in particular relative photometry for the production of short-term light curves. In this context I defined and tested a semi-automated pipeline which allows for the pre-reduction of imaging SPSS data and the production of aperture photometry catalogues ready to be used for further analysis. A series of semi-automated quality-control criteria are included in the pipeline at various levels, from pre-reduction, to aperture photometry, to light-curve production and analysis.
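To make the aperture-photometry step concrete, here is a minimal Python sketch using numpy and photutils; the synthetic frame, source position, aperture radius, and signal-to-noise cut are invented for illustration and are not taken from the actual SPSS pipeline.

```python
import numpy as np
from photutils.aperture import CircularAperture, aperture_photometry

# Illustrative only: a synthetic 100x100 frame with one star-like source.
rng = np.random.default_rng(0)
image = rng.normal(loc=100.0, scale=5.0, size=(100, 100))  # sky + noise
yy, xx = np.mgrid[0:100, 0:100]
image += 5000.0 * np.exp(-((xx - 50) ** 2 + (yy - 50) ** 2) / (2 * 2.0 ** 2))

# Aperture photometry at the (assumed known) source position.
aperture = CircularAperture([(50.0, 50.0)], r=5.0)
table = aperture_photometry(image - np.median(image), aperture)

# A toy quality-control cut of the sort a semi-automated pipeline might apply.
flux = float(table["aperture_sum"][0])
sky_sigma = float(np.std(image))
snr = flux / (sky_sigma * np.sqrt(aperture.area))
print(f"flux = {flux:.1f}, SNR = {snr:.1f}, pass = {snr > 20}")
```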

Relevance: 100.00%

Abstract:

The subject of this research is the National Institute of Design (NID), designed by Gautam Sarabhai and his sister Gira in Ahmedabad, a city taken as a paradigm of the new political course that Prime Minister Nehru expressed in the first decades of postcolonial government. The aim of the thesis is to analyze the phenomenon that unites modernity and tradition in architecture. Indian modernity, in fact, was born and developed with the character of a two-faced Janus: on one side, Prime Minister Nehru's policy favoured the development of industry and science; on the other, Gandhi's vision aimed at the rediscovery of the local, of traditions and of craftsmanship. These orientations influenced postcolonial architecture. In the 1950s and 1960s Ahmedabad became the cradle of modern Indian architecture. Kanvinde, the Sarabhais, Correa, Doshi and Raje found there the conditions to build their own identity as designers and as intellectuals. Two main drivers made this ferment possible: a clientele of enlightened entrepreneurs eager to modernize the city, and the presence in Ahmedabad, from 1951 onwards, of the masters of modern architecture, the best known of whom were Le Corbusier and Kahn, invited by that same clientele, for whom they realized buildings of considerable importance. In Ahmedabad both visions of modern India confronted each other forcefully. The greatest effort of the Indian architects went into the attempt to reconcile the two aspects, those deriving from international influences and those coming from the spirit of tradition. The NID project is one of the best examples of this exercise in synthesis. In its spatial composition it draws on the lessons of Wright, Le Corbusier, Kahn and Eames, hybridizing them with elements of the Indian tradition. In the skilful use of the modular, pavilion-based structure, of the square-based ordering grid, and of the constant integration of open spaces, nature and architecture, echoes of a millennia-old culture surface in the NID building.

Relevance: 100.00%

Abstract:

This dissertation focuses on “organizational efficacy”, in particular on employees’ beliefs about their organization’s capacity to be efficacious. Organizational efficacy is considered from two perspectives, the competing values approach and collective efficacy, and is evaluated in internationalized companies. The dissertation is composed of three studies. The data were collected in thirteen Italian companies at different stages of internationalization, for a total of 358 respondents. In the first study the factorial validity of the competing values instrument (Rohrbaugh, 1981) was investigated and confirmed. Two scales were used to measure collective efficacy: a general collective efficacy scale (Bohn, 2010), and a specific collective efficacy scale, developed following the suggestions of Borgogni et al. (2001), which evaluates employees’ beliefs about the efficacy of their organization in the international market. The findings suggest that the competing values and collective organizational efficacy instruments may provide a multi-faceted measurement of employees’ beliefs of organizational efficacy. The second study examined the relationship between organizational efficacy and collective work engagement. To measure collective work engagement, the UWES-9 (Schaufeli & Bakker, 2003) was adapted to the group level; its factor structure and reliability were similar to those of the standard UWES-9. The findings suggest that organizational efficacy fully predicts collective work engagement. We also investigated whether leadership moderates the relationship between organizational efficacy and collective work engagement. We operationalized leadership style with the MLQ (Bass & Avolio, 1995); the results suggest that intellectual stimulation and idealized influence (transformational leadership) and contingent reward (transactional leadership) enhance the impact of organizational efficacy on collective work engagement. In the third study we investigated organizational efficacy and collective work engagement in internationalized companies. The findings show that beliefs of organizational efficacy vary across companies at different stages of internationalization, while no significant difference was found for collective work engagement. Limitations, practical implications and future studies are discussed in the conclusion.
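As an illustration of how such a moderation effect is commonly tested, the sketch below fits a regression with an interaction term in Python (statsmodels); all data and variable names are synthetic stand-ins (only the sample size of 358 comes from the abstract), not the dissertation's actual analysis.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-ins for the constructs (names are illustrative only).
rng = np.random.default_rng(1)
n = 358  # sample size reported in the dissertation
efficacy = rng.normal(size=n)      # organizational efficacy beliefs
leadership = rng.normal(size=n)    # e.g. a contingent-reward score
engagement = (0.5 * efficacy + 0.2 * leadership
              + 0.3 * efficacy * leadership
              + rng.normal(scale=0.5, size=n))

# Moderation is tested by the significance of the interaction term.
X = sm.add_constant(np.column_stack([efficacy, leadership,
                                     efficacy * leadership]))
model = sm.OLS(engagement, X).fit()
print(model.params)  # last coefficient ~0.3: leadership strengthens the effect
```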

Relevance: 100.00%

Abstract:

Background: glioblastoma multiforme (GBM) is one of the most prevalent and aggressive malignant primary brain tumors in adult patients. 64CuCl2 is an innovative radiopharmaceutical investigated as a theranostic agent in GBM patients. The therapeutic scheme is still under evaluation; therefore the research focused on the possibility of radioresistance development. The actors responsible for modulating radioresistance could be miRNAs, so their potential use was investigated both in radioresistant cell lines and in plasma samples of GBM patients. Methods: radioresistant cell lines were generated by exposing the U87MG and U373MG lines to increasing doses of radiation for 32 weeks. Cell-membrane permeability alterations and DNA damage were assessed to characterize the lines. Moreover, 64Cu cell incorporation and subcellular distribution were investigated by measuring gamma-radiation emission. miRNA expression was evaluated in parental and radioresistant cell lines, both in cell pellets and in media exosomes, and in plasma samples of GBM patients using TaqMan Array MicroRNA Cards. Results: radioresistant lines exhibited a reduction in membrane permeability and in DNA double-strand breaks, indicating the capability to escape the drug's killing effect. Cell uptake assays showed internalization of 64Cu in both the sensitive and the radioresistant lines. Radioresistant lines showed a different miRNA expression profile compared to the parental lines. Five miRNAs were selected as possible biomarkers of response to treatment (miR-339-3p, miR-133b, miR-103a-3p, miR-32-5p, miR-335-5p) and six miRNAs as possible predictive biomarkers of response to treatment (let-7e-5p, miR-15a-5p, miR-29c-3p, miR-495, miR-146b-5p, miR-199a-5p). miR-32-5p was selected as a possible molecule to restore 64CuCl2 responsiveness in the radioresistant cell lines. Conclusions: this is the first study describing the development and characterization of 64CuCl2-radioresistant cell lines, useful for implementing the approach to dosimetric analysis so as to avoid the rise of radioresistance. miRNAs could lead to a better understanding of 64CuCl2 treatment, becoming a useful tool both for detecting treatment response and as molecules that could restore responsiveness to 64CuCl2 treatment.

Relevance: 100.00%

Abstract:

Human movement analysis (HMA) aims to measure the ability of a subject to stand or to walk. In the field of HMA, tests are performed daily in research laboratories, hospitals and clinics to diagnose a disease, distinguish between disease entities, monitor the progress of a treatment and predict the outcome of an intervention [Brand and Crowninshield, 1981; Brand, 1987; Baker, 2006]. To achieve these purposes, clinicians and researchers use measurement devices such as force platforms, stereophotogrammetric systems, accelerometers, baropodometric insoles, etc. This thesis focuses on the force platform (FP) and in particular on the quality assessment of FP data. The principal objective of our work was the design and experimental validation of a portable system for the in situ calibration of FPs. The thesis is structured as follows.
Chapter 1: description of the physical principles underlying the functioning of an FP and of how these principles are used to create force transducers, such as strain gauges and piezoelectric transducers; description of the two categories of FPs, three- and six-component, of signal acquisition (hardware structure), and of signal calibration; finally, a brief description of the use of FPs in HMA for balance or gait analysis.
Chapter 2: description of inverse dynamics, the most common method used in the field of HMA. This method uses the signals measured by an FP to estimate kinetic quantities, such as joint forces and moments. These variables cannot be measured directly except with very invasive techniques; consequently they can only be estimated using indirect techniques, such as inverse dynamics. Finally, a brief description of the sources of error present in gait analysis.
Chapter 3: state of the art in FP calibration. The selected literature is divided into sections describing: systems for the periodic control of FP accuracy; systems for error reduction in FP signals; and systems and procedures for the construction of an FP. In particular, a calibration system designed by our group, based on the theoretical method proposed by ?, is described in detail. This system was the starting point for the new system presented in this thesis.
Chapter 4: description of the new system, divided into its parts: 1) the algorithm; 2) the device; and 3) the calibration procedure, for correctly performing the calibration process. The algorithm characteristics were optimized by a simulation approach, whose results are presented. In addition, the different versions of the device are described.
Chapter 5: experimental validation of the new system, achieved by testing it on 4 commercial FPs. The effectiveness of the calibration was verified by measuring, before and after calibration, the accuracy of the FPs in measuring the center of pressure of an applied force. The new system can estimate local and global calibration matrices; using these, the non-linearity of the FPs was quantified and locally compensated. Furthermore, a non-linear calibration is proposed, which compensates the non-linear effects in FP functioning due to the bending of the upper plate. The experimental results are presented.
Chapter 6: influence of FP calibration on the estimation of kinetic quantities with the inverse dynamics approach.
Chapter 7: the conclusions of the thesis: the need for a calibration of FPs and the consequent enhancement of kinetic data quality.
Appendix: calibration of the load cell (LC) used in the presented system. Different calibration set-ups of a 3D force transducer are presented, and the optimal set-up is proposed, with particular attention to the compensation of non-linearities. The optimal set-up is verified by experimental results.
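To make the core idea concrete, here is a minimal least-squares sketch in Python/numpy that estimates a global 6x6 calibration matrix from pairs of known applied loads and raw platform outputs; the linear model, synthetic data, and noise level are assumptions for illustration, not the thesis's actual algorithm (which also estimates local matrices and a non-linear correction).

```python
import numpy as np

# Illustrative only: estimate a 6x6 calibration matrix C such that
# load ~= C @ raw, from N pairs of known applied loads and raw FP outputs.
rng = np.random.default_rng(2)
C_true = np.eye(6) + 0.05 * rng.normal(size=(6, 6))  # unknown "true" matrix

N = 200
raw = rng.normal(size=(N, 6))                            # raw 6-channel readings
load = raw @ C_true.T + 0.01 * rng.normal(size=(N, 6))   # known applied loads

# Least-squares estimate: solve raw @ C.T ~= load.
C_est, *_ = np.linalg.lstsq(raw, load, rcond=None)
C_est = C_est.T

print(np.max(np.abs(C_est - C_true)))  # small residual error
```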

Relevance: 100.00%

Abstract:

The evolution of modern, increasingly sensitive image sensors, the increasingly compact design of cameras, and the recent emergence of low-cost cameras have allowed underwater photogrammetry to become a reliable and irreplaceable technique for estimating the structure of the seabed with high accuracy. Within this context, the main topic of this work is underwater photogrammetry from a geomatic point of view and the issues associated with its implementation, in particular with the support of Unmanned Underwater Vehicles (UUVs). Questions such as how the technique works, what is needed to carry out a proper survey, what tools are available to apply the technique, and how to resolve uncertainties in measurement are the subject of this thesis. The study can be divided into two major parts: a practical one, devoted to several ad-hoc surveys and tests, and another supported by bibliographic research. The main contributions, however, are related to the experimental section, in which two practical case studies are carried out in order to improve the quality of the underwater survey of some calibration platforms. The results obtained from these two experiments showed that the refractive effects due to water and the underwater housing can be compensated by the distortion coefficients in the camera model, but if the aim is to achieve high accuracy then a model that takes into account the configuration of the underwater housing, based on ray tracing, must also be coupled to it. The major contributions of this work are: an overview of the practical issues in performing surveys with a UUV prototype, a method to reach reliable accuracy in 3D reconstructions without the use of an underwater local geodetic network, a guide for those approaching underwater photogrammetry for the first time, and the use of open-source environments.
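As a small illustration of why a flat-port housing bends image rays, here is a Snell's-law sketch in Python; the refractive indices and incidence angle are nominal textbook values, not the calibrated parameters of the ray-tracing model used in the thesis.

```python
import numpy as np

# Illustrative flat-port refraction: a ray leaving the camera at angle
# theta_air crosses glass and then water; Snell's law gives the bent angles.
n_air, n_glass, n_water = 1.000, 1.492, 1.335  # nominal indices (assumed)

def snell(theta_in: float, n_in: float, n_out: float) -> float:
    """Return the refracted angle (radians) across a flat interface."""
    return np.arcsin(np.clip(n_in * np.sin(theta_in) / n_out, -1.0, 1.0))

theta_air = np.radians(20.0)
theta_glass = snell(theta_air, n_air, n_glass)
theta_water = snell(theta_glass, n_glass, n_water)
print(np.degrees(theta_water))  # ~14.9 deg: rays bend toward the normal
```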

Relevance: 100.00%

Abstract:

In this thesis the focus is on utilizing metasurfaces to improve the radiation characteristics of planar structures. The study encompasses various aspects of metasurface applications, including enhancing antenna radiation characteristics and manipulating electromagnetic (EM) waves, such as polarization conversion and anomalous reflection. The thesis introduces the design of a single-port antenna with dual-mode operation, integrating metasurfaces. This antenna serves as the front end for a next-generation tag, functioning as a position sensor with identification and energy-harvesting capabilities. It operates in the lower European Ultra-Wideband (UWB) frequency range for communication/localization and in the UHF band for wireless energy reception. The design aims for a low-profile stack-up that remains unaffected by background materials. Researchers worldwide are drawn to metasurfaces because of their EM wave-manipulation capabilities. The thesis also demonstrates how a High-Impedance Surface (HIS) can enhance the antenna's versatility, including a conformal design using 3D-printing technology that ensures adaptability for various deformation and tracking/powering scenarios. Additionally, the thesis explores two distinct metasurface applications. One involves designing an angularly stable super-wideband Circular Polarization Converter (CPC) operating from 11 to 35 GHz with an impressive relative impedance bandwidth of 104.3%. The CPC shows a stable response even at oblique incidences up to 40 degrees, with a Peak Cross-Polarization Ratio (PCR) exceeding 62% across the entire band. The second application focuses on an Intelligent Reflective Surface (IRS) capable of redirecting incoming waves in unconventional directions. Tunability is achieved through an artificially developed ferroelectric material (HfZrO) and distributed capacitive elements (IDC) that fine-tune impedance and phase responses at the meta-atom level. The IRS demonstrates anomalous reflection for normally incident waves. These innovative applications of metasurfaces offer promising advancements in antenna design, EM wave manipulation, and versatile wireless communication systems.
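For context, the conversion performance of such a converter is commonly quantified as PCR = |r_yx|^2 / (|r_yx|^2 + |r_xx|^2), from the cross- and co-polarized reflection coefficients; the Python sketch below evaluates this standard definition for made-up values, which are not measurements from the thesis.

```python
import numpy as np

# Illustrative computation of a polarization conversion ratio from cross-
# and co-polarized reflection coefficients; values are invented.
r_cross = 0.92 * np.exp(1j * np.radians(45))  # r_yx: cross-polarized reflection
r_co = 0.18 * np.exp(1j * np.radians(10))     # r_xx: co-polarized reflection

pcr = abs(r_cross) ** 2 / (abs(r_cross) ** 2 + abs(r_co) ** 2)
print(f"PCR = {pcr:.1%}")  # ~96%: most reflected power is converted
```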

Relevance: 100.00%

Abstract:

The scaling down of transistor technology allows microelectronics manufacturers such as Intel and IBM to build ever more sophisticated systems on a single microchip. Classical interconnection solutions based on shared buses or direct connections between the modules of the chip are becoming obsolete, as they struggle to sustain the increasingly tight bandwidth and latency constraints that these systems demand. The most promising solution for future chip interconnects are Networks on Chip (NoC). NoCs are networks composed of routers and channels used to interconnect the different components installed on a single microchip. Examples of advanced processors based on NoC interconnects are the IBM Cell processor, composed of eight CPUs, which is installed in the Sony PlayStation III, and the Intel Teraflops project, composed of 80 independent (simple) microprocessors. On-chip integration is becoming popular not only in the Chip Multi Processor (CMP) research area but also in the wider and more heterogeneous world of Systems on Chip (SoC). SoCs comprise all the electronic devices that surround us, such as cell phones, smartphones, home embedded systems, automotive systems, set-top boxes, etc. SoC manufacturers such as STMicroelectronics, Samsung and Philips, and universities such as Bologna University, M.I.T. and Berkeley, are all proposing proprietary frameworks based on NoC interconnects. These frameworks help engineers in the switch of design methodology and speed up the development of new NoC-based systems on chip. In this thesis we provide an introduction to CMP and SoC interconnection networks. Then, focusing on SoC systems, we propose:
• a detailed simulation-based analysis of the Spidergon NoC, an STMicroelectronics solution for SoC interconnects. The Spidergon NoC differs from many classical solutions inherited from the parallel-computing world. We propose a detailed analysis of this NoC topology and its routing algorithms, and furthermore we propose Equalized, a new routing algorithm designed to optimize the use of the network's resources while also increasing its performance;
• a methodology flow based on modified publicly available tools that, combined, can be used to design, model and analyze any kind of System on Chip;
• a detailed analysis of an STMicroelectronics-proprietary transport-level protocol that the author of this thesis helped to develop;
• a simulation-based comprehensive comparison of different network interface designs proposed by the author and the researchers at the AST lab, in order to integrate shared-memory and message-passing based components on a single System on Chip;
• a powerful and flexible solution to address the timing-closure exception issue in the design of synchronous Networks on Chip. Our solution is based on relay-station repeaters and reduces the power and area demands of NoC interconnects while also reducing their buffer needs;
• a solution to simplify the design of NoCs while also increasing their performance and reducing their power and area consumption: we propose to replace complex and slow virtual-channel-based routers with multiple, flexible, small Multi-Plane ones. This solution reduces the area and power dissipation of any NoC while also increasing its performance, especially when resources are reduced.
This thesis was written in collaboration with the Advanced System Technology laboratory in Grenoble, France, and the Computer Science Department at Columbia University in the City of New York.
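As an illustration of routing on a Spidergon-style topology (a ring of N nodes, each with clockwise, counterclockwise, and diametrical "across" links), here is a small Python sketch of the commonly described across-first shortest-path scheme; it is a simplification, not necessarily the exact algorithms analyzed in the thesis.

```python
# Illustrative next-hop routing on a Spidergon-like topology: n nodes on a
# ring (n even), each also linked to the node diametrically across.

def next_hop(cur: int, dst: int, n: int) -> int:
    """Return the neighbour of `cur` on a shortest path to `dst`."""
    d = (dst - cur) % n            # clockwise distance to the destination
    if d == 0:
        return cur                 # already there
    if n // 4 < d < 3 * n // 4:    # far away: jump across the ring first
        return (cur + n // 2) % n
    if d <= n // 4:                # close in the clockwise direction
        return (cur + 1) % n
    return (cur - 1) % n           # close in the counterclockwise direction

# Route from node 0 to node 7 on a 16-node Spidergon: across (0 -> 8),
# then one counterclockwise hop to 7.
path, cur = [0], 0
while cur != 7:
    cur = next_hop(cur, 7, 16)
    path.append(cur)
print(path)  # [0, 8, 7]
```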

Relevance: 100.00%

Abstract:

The presented study analyzes rural landscape changes. In particular, it focuses on understanding the driving forces acting on the rural built environment, using a statistical spatial model implemented through GIS techniques. It is well known that the study of landscape changes is essential for conscious decision making in land planning. A bibliographic review revealed a general lack of studies dealing with the modelling of the rural built environment, and hence a theoretical modelling approach for this purpose is needed. Advancements in technology and modernity in building construction and agriculture have gradually changed the rural built environment. In addition, the phenomenon of urbanization determined the construction of new volumes beside abandoned or derelict rural buildings. Consequently, two main types of transformation dynamics affecting the rural built environment can be observed: the conversion of rural buildings and the increase in building numbers. The specific aim of the presented study is to propose a methodology for the development of a spatial model that allows the identification of the driving forces that acted on building allocation. Indeed, one of the most concerning dynamics nowadays is the irrational expansion of building sprawl across the landscape. The proposed methodology is composed of several conceptual steps covering the different aspects of the development of a spatial model: the selection of a response variable that best describes the phenomenon under study, the identification of possible driving forces, the sampling methodology for data collection, the choice of the most suitable algorithm in relation to the statistical theory and methods used, and the calibration and evaluation of the model. A different combination of factors in various parts of the territory generated more or less favourable conditions for building allocation, and the existence of buildings represents the evidence of such an optimum. Conversely, the absence of buildings expresses a combination of agents which is not suitable for building allocation. The presence or absence of buildings can therefore be adopted as an indicator of these driving conditions, since it represents the expression of the action of driving forces in the land-suitability sorting process. The existence of a correlation between site selection and hypothetical driving forces, evaluated by means of modelling techniques, provides evidence of which driving forces are involved in the allocation dynamic and an insight into their level of influence on the process. GIS software, by means of spatial-analysis tools, allows the concepts of presence and absence to be associated with point features, generating a point process. The presence or absence of buildings at given site locations represents the expression of the interaction of these driving factors. In the case of presences, points represent the locations of real existing buildings; conversely, absences represent locations where buildings do not exist, and so they are generated by a stochastic mechanism. Possible driving forces are selected, and the existence of a causal relationship with building allocation is assessed through a spatial model. The adoption of empirical statistical models provides a mechanism for explanatory-variable analysis and for the identification of the key driving variables behind the site-selection process for new building allocation.
The model developed by following this methodology is applied to a case study to test the validity of the methodology. The study area chosen for the test is the New District of Imola, characterized by a prevailing agricultural production vocation and where transformation dynamics occurred intensively. The development of the model involved the identification of predictive variables (related to the geomorphologic, socio-economic, structural and infrastructural systems of the landscape) capable of representing the driving forces responsible for landscape changes. The calibration of the model was carried out on spatial data covering the periurban and rural parts of the study area in the 1975-2005 time period, by means of a generalised linear model. The resulting output of the model fit is a continuous grid surface whose cells assume values, ranging from 0 to 1, of the probability of building occurrence across the rural and periurban parts of the study area. Hence the response variable assesses the changes in the rural built environment that occurred in this time interval and is correlated to the selected explanatory variables by means of a generalized linear model using logistic regression. By comparing the probability map obtained from the model to the actual rural building distribution in 2005, the interpretive capability of the model can be evaluated. The proposed model can also be applied to the interpretation of trends which occurred in other study areas, and with reference to different time intervals, depending on the availability of data. The use of suitable data in terms of time, information and spatial resolution, and the costs related to data acquisition, pre-processing and survey, are among the most critical aspects of model implementation. Future in-depth studies can focus on using the proposed model to predict short/medium-range future scenarios for the distribution of the rural built environment in the study area. In order to predict future scenarios it is necessary to assume that the driving forces do not change and that their levels of influence within the model are not far from those assessed for the calibration time interval.
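A minimal sketch of the kind of presence/absence model described, a binomial GLM (logistic regression) fitted in Python with statsmodels; the covariates, coefficients, and data below are synthetic illustrations, not the Imola case-study variables.

```python
import numpy as np
import statsmodels.api as sm

# Illustrative presence/absence model: building occurrence (1/0) explained
# by candidate driving-force covariates. All values are synthetic.
rng = np.random.default_rng(3)
n = 1000
slope = rng.uniform(0, 30, n)        # geomorphologic factor (e.g. slope, %)
road_dist = rng.uniform(0, 5000, n)  # distance to the nearest road (m)

# Synthetic "true" process: buildings favour flat, road-accessible sites.
eta = 1.0 - 0.08 * slope - 0.0008 * road_dist
presence = rng.binomial(1, 1 / (1 + np.exp(-eta)))

X = sm.add_constant(np.column_stack([slope, road_dist]))
glm = sm.GLM(presence, X, family=sm.families.Binomial()).fit()
print(glm.params)          # recovers the signs of the driving forces
print(glm.predict(X)[:5])  # cell-wise probabilities in [0, 1], as in the map
```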

Relevance: 100.00%

Abstract:

This work is a detailed study of hydrodynamic processes in a defined area, the littoral in front of the Venice Lagoon and its inlets, which are complex morphological areas of interconnection. A finite-element hydrodynamic model of the Venice Lagoon and the Adriatic Sea has been developed in order to study the coastal current patterns and the exchanges at the inlets of the Venice Lagoon. This is the first work in this area that tries to model the interaction dynamics by running a model for the lagoon and the Adriatic Sea together. First, the barotropic processes near the inlets of the Venice Lagoon were studied. Data from more than ten tide gauges distributed across the Adriatic Sea were used in the calibration of the simulated water levels. To validate the model results, empirical flux data measured by ADCP probes installed inside the Lido and Malamocco inlets were used, and the exchanges through the three inlets of the Venice Lagoon were analyzed. The comparison between modelled and measured fluxes at the inlets showed the model's efficiency in reproducing both tide- and wind-induced water exchanges between the sea and the lagoon. As a second step, the small-scale processes around the inlets that connect the Venice Lagoon with the Northern Adriatic Sea were also investigated by means of 3D simulations. Maps of vorticity were produced, considering the influence of tidal flows and wind stress in the area. A sensitivity analysis was carried out to define the importance of advection and of the baroclinic pressure gradients in the development of the vortical processes seen along the littoral close to the inlets. Finally, a comparison with real measurements, surface velocity data from HF radar near the Venice inlets, was performed, which allows for a better understanding of the processes and their seasonal dynamics. The results outline the predominance of wind and tidal forcing in the coastal area. Wind forcing acts mainly on the mean coastal current, inducing its detachment offshore during Sirocco events and an increase of littoral currents during Bora events. The Bora action is more homogeneous over the whole coastal area, whereas the Sirocco strengthens its impact in the south, near the Chioggia inlet. Tidal forcing at the inlets is mainly barotropic. The sensitivity analysis shows that advection is the main physical process responsible for the persistent vortical structures present along the littoral between the Venice Lagoon inlets. The comparison with HF radar measurements not only permitted a validation of the model results, but also a description of different patterns in specific periods of the year. The success of the 2D and 3D simulations in reproducing the sea surface elevation (SSE) inside and outside the Venice Lagoon, the tidal flow through the lagoon inlets, and the small-scale phenomena occurring along the littoral indicates that the finite-element approach is the most suitable tool for the investigation of coastal processes. For the first time, as shown by the flux modelling, the physical processes that drive the interaction between the two basins were reproduced.
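For reference, vorticity maps of the kind mentioned are computed from a gridded velocity field as zeta = dv/dx - du/dy; the Python sketch below does this for a synthetic, idealized eddy, not for output of the thesis's finite-element model.

```python
import numpy as np

# Illustrative vorticity map from a gridded surface-velocity field.
x = np.linspace(0, 10_000, 101)  # m
y = np.linspace(0, 10_000, 101)
X, Y = np.meshgrid(x, y)

# A single idealized eddy, loosely standing in for an inlet vortex.
u = -(Y - 5000) * 1e-4           # eastward velocity (m/s)
v = (X - 5000) * 1e-4            # northward velocity (m/s)

dx = x[1] - x[0]
dy = y[1] - y[0]
zeta = np.gradient(v, dx, axis=1) - np.gradient(u, dy, axis=0)
print(zeta.mean())               # ~2e-4 s^-1: solid-body-like rotation
```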

Relevance: 100.00%

Abstract:

The aim of this doctoral thesis is to develop a genetic-algorithm-based optimization method to find the best conceptual design architecture of an aero piston engine for given design specifications. Nowadays, the conceptual design of turbine airplanes starts from the aircraft specifications, and then the turbofan or turbo-propeller best suited to the specific application is chosen. In the field of aeronautical piston engines, which has been dormant for several decades as interest shifted towards turbine aircraft, new materials with increased performance and properties have opened new possibilities for development. Moreover, the engine's modularity, given by the cylinder unit, makes it possible to design a specific engine for a given application. In many real engineering problems the number of design variables may be very high, and several non-linearities are needed to describe the behaviour of the phenomena. In this case the objective function has many local extremes, but the designer is usually interested in the global one. Stochastic and evolutionary optimization techniques, such as the genetic algorithm method, may offer reliable solutions to such design problems within acceptable computational time. The optimization algorithm developed here can be employed in the first phase of the preliminary design of an aeronautical piston engine. It is a mono-objective genetic algorithm which, starting from the given design specifications, finds the engine propulsive-system configuration of minimum mass while satisfying the geometrical, structural and performance constraints. The algorithm reads the project specifications as input data, namely the maximum crankshaft and propeller-shaft speeds and the maximum pressure in the combustion chamber. The design-variable bounds, which describe the solution domain from the geometrical point of view, are introduced too. In the Matlab® Optimization environment the objective function to be minimized is defined as the sum of the masses of the engine propulsive components. Each individual generated by the genetic algorithm is the assembly of the flywheel, the vibration damper, and as many pistons, connecting rods and cranks as there are cylinders. The fitness is evaluated for each individual of the population; then the rules of the genetic operators are applied: reproduction, mutation, selection and crossover. In the reproduction step the elitist method is applied, in order to save the fittest individuals from contingent mutation and recombination disruption, letting them survive undamaged into the next generation. Finally, once the best individual is found, the optimal dimensions of the components are saved to an Excel® file, in order to build an automatic 3D CAD model for each component of the propulsive system, giving a direct pre-visualization of the final product while still in the preliminary design phase. To show the performance of the algorithm and validate this optimization method, an actual engine is taken as a case study: the 1900 JTD Fiat Avio, a 4-cylinder, 4-stroke Diesel. Many verifications are made on the mechanical components of the engine, in order to test their feasibility and decide their survival through the generations. A system of inequalities is used to describe the non-linear relations between the design variables, and it is used to check the components under static and dynamic load configurations.
The geometrical boundaries of the design variables are taken from actual engine data and similar design cases. Among the many simulations run for algorithm testing, twelve were chosen as representative of the distribution of the individuals. Then, as an example, for each simulation the corresponding 3D models of the crankshaft and the connecting rod were automatically built. In spite of morphological differences among the components, the mass is almost the same. The results show a significant mass reduction (almost 20% for the crankshaft) in comparison to the original configuration, and an acceptable robustness of the method. The algorithm developed here is shown to be a valid method for the preliminary design optimization of an aeronautical piston engine. In particular, the procedure is able to analyze quite a wide range of design solutions, rejecting those that cannot fulfil the feasibility design specifications. This optimization algorithm could boost aeronautical piston-engine development, speeding up the design process and joining modern computational performance and technological awareness with long-standing traditional design experience.
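A toy sketch of the mono-objective, elitist genetic algorithm described, in Python/numpy: it minimizes a mass-like objective over bounded design variables, with a penalty for violating a feasibility constraint. The variables, constraint, and constants are invented and far simpler than the engine model in the thesis.

```python
import numpy as np

rng = np.random.default_rng(4)
LO, HI = np.array([0.02, 0.01]), np.array([0.10, 0.05])  # bore, wall (m)

def fitness(x: np.ndarray) -> float:
    bore, wall = x
    mass = 7800.0 * bore * wall                 # stand-in mass term
    stress_ok = wall >= bore / 8                # stand-in structural check
    return mass + (0.0 if stress_ok else 1e3)   # penalize infeasible designs

pop = rng.uniform(LO, HI, size=(40, 2))
for gen in range(100):
    scores = np.apply_along_axis(fitness, 1, pop)
    elite = pop[np.argsort(scores)[:10]]                # elitist selection
    parents = elite[rng.integers(0, 10, size=(40, 2))]  # random parent pairs
    children = (parents[:, 0] + parents[:, 1]) / 2      # arithmetic crossover
    children += rng.normal(scale=0.002, size=children.shape)  # mutation
    pop = np.clip(children, LO, HI)
    pop[:10] = elite                   # elites survive unchanged

best = pop[np.argmin(np.apply_along_axis(fitness, 1, pop))]
print(best)  # converges toward the lightest feasible (bore, wall) pair
```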

Relevance: 100.00%

Abstract:

Extrusion is a process used to form long products of constant cross section, in a great variety of shapes, from simple billets. Aluminum alloys are the materials most processed in the extrusion industry, owing to their deformability and their wide field of applications, which ranges from buildings to aerospace and from design to the automotive industry. These diverse applications imply different requirements, which can be fulfilled by the wide range of alloys and treatments, from critical structural applications to high-quality surfaces and aesthetic aspects. Whether one or the other is the critical aspect, both depend directly on microstructure. The extrusion process is moreover marked by large deformations and complex strain gradients, which make the control of microstructure evolution difficult; at present such control is not yet fully achieved. Nevertheless, finite element modelling has reached maturity and can therefore start to be used as a tool for investigating and predicting microstructure evolution. This thesis analyzes and models the evolution of microstructure throughout the entire extrusion process for 6XXX-series aluminum alloys. The core phase of the work was the development of specific tests to investigate microstructure evolution and validate the model implemented in a commercial FE code. Alongside this, two essential activities were carried out for a correct calibration of the model, beyond the simple search for boundary parameters, leading to an understanding and control of both the code and the process. In this direction, activities were also conducted to build critical know-how in the interpretation of microstructure and extrusion phenomena. It is believed, in fact, that the analysis of microstructure evolution alone, regardless of its relevance to the technological aspects of the process, would be of little use to industry as well as ineffective for the interpretation of the results.
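As background, hot-deformation microstructure models for aluminum alloys are often driven by the temperature-compensated strain rate (the Zener-Hollomon parameter), Z = strain_rate * exp(Q/(R*T)); whether the thesis uses exactly this form is not stated, and the activation energy below is a nominal literature-style value, not one calibrated in the work.

```python
import numpy as np

R = 8.314      # gas constant, J/(mol K)
Q = 156_000.0  # assumed activation energy for Al hot deformation, J/mol

def zener_hollomon(strain_rate: float, temp_K: float) -> float:
    """Z = strain_rate * exp(Q / (R * T)), the temperature-compensated
    strain rate used to correlate hot-worked grain structures."""
    return strain_rate * np.exp(Q / (R * temp_K))

# Typical extrusion regime: strain rate 1 1/s at 500 degrees C.
print(f"{zener_hollomon(1.0, 500 + 273.15):.3e}")
```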

Relevance: 100.00%

Abstract:

Introduction: epidural analgesia has been correlated with an increased duration of the second stage of labor and a higher rate of vacuum-assisted delivery. Several mechanisms have been hypothesized, including a reduced perception of fetal descent, of pushing effort, and of the reflexes that promote the progression and rotation of the fetal head in the birth canal. These parameters are usually assessed by digital clinical examination, which has consistently been reported to be poorly accurate and reproducible. On this basis, the use of intrapartum ultrasound, with the introduction of several sonographic parameters for assessing fetal head descent, has been proposed to support the clinical diagnosis in the second stage of labor. Aims of the study: to study the effect of epidural analgesia on the progression of the fetal head during the second stage of labor, assessed by intrapartum ultrasound. Materials and methods: a series of low-risk nulliparous patients at term (37+0-42+0) were prospectively recruited in the delivery room of our University Hospital. In each patient we acquired an ultrasound volume every 20 minutes, from the beginning of the active phase of the second stage until delivery, and a series of sonographic parameters were derived offline (angle of progression, progression distance, head-symphysis distance and midline angle). All these parameters were compared between the two groups at each time interval. Results: 71 patients in total, of whom 41 (57.7%) had epidural analgesia. In 58 (81.7%) cases delivery was spontaneous, while in 8 (11.3%) and 5 (7.0%) cases, respectively, vacuum extraction or cesarean section was required. The values of all the measured sonographic parameters were comparable between the two groups at all measurement intervals. Conclusions: the progression of the fetal head, assessed longitudinally by 3D ultrasound, does not seem to differ significantly between patients with and without epidural analgesia.