879 results for: 3D printing, steel bars, calibration of design values, correlation
Abstract:
Titan's optical and near-IR spectra result primarily from the scattering of sunlight by haze and its absorption by methane. With a column abundance of 92 km amagat (11 times that of Earth), Titan's atmosphere is optically thick and only ~10% of the incident solar radiation reaches the surface, compared to 57% on Earth. Such a formidable atmosphere obstructs investigations of the moon's lower troposphere and surface, which are highly sensitive to the radiative transfer treatment of methane absorption and haze scattering. The absorption and scattering characteristics of Titan's atmosphere have been constrained by the Huygens Probe Descent Imager/Spectral Radiometer (DISR) experiment for conditions at the probe landing site (Tomasko, M.G., Bézard, B., Doose, L., Engel, S., Karkoschka, E. [2008a]. Planet. Space Sci. 56, 624-647; Tomasko, M.G. et al. [2008b]. Planet. Space Sci. 56, 669-707). Cassini's Visual and Infrared Mapping Spectrometer (VIMS) data indicate that the rest of the atmosphere (except for the polar regions) can be understood with small perturbations in the high haze structure determined at the landing site (Penteado, P.F., Griffith, C.A., Tomasko, M.G., Engel, S., See, C., Doose, L., Baines, K.H., Brown, R.H., Buratti, B.J., Clark, R., Nicholson, P., Sotin, C. [2010]. Icarus 206, 352-365). However, the in situ measurements were analyzed with a doubling and adding radiative transfer calculation that differs considerably from the discrete ordinates codes used to interpret remote data from Cassini and ground-based measurements. In addition, the calibration of the VIMS data with respect to the DISR data has not yet been tested. Here, VIMS data of the probe landing site are analyzed with the DISR radiative transfer method and the faster discrete ordinates radiative transfer calculation; both models are consistent (to within 0.3%) and reproduce the scattering and absorption characteristics derived from in situ measurements.
Constraints on the atmospheric opacity at wavelengths outside those measured by DISR, that is from 1.6 to 5.0 μm, are derived using clouds as diffuse reflectors, in order to derive Titan's surface albedo to within a few percent error and cloud altitudes to within 5 km error. VIMS spectra of Titan at 2.6-3.2 μm indicate not only spectral features due to CH4 and CH3D (Rannou, P., Cours, T., Le Mouelic, S., Rodriguez, S., Sotin, C., Drossart, P., Brown, R. [2010]. Icarus 208, 850-867), but also a fairly uniform absorption of unknown source, equivalent to the effect of a darkening of the haze to a single scattering albedo of 0.63 +/- 0.05. Titan's 4.8 μm spectrum points to a haze optical depth of 0.2 at that wavelength. Cloud spectra at 2 μm indicate that the far wings of the Voigt profile extend 460 cm⁻¹ from methane line centers. This paper releases the doubling and adding radiative transfer code developed by the DISR team, so that future studies of Titan's atmosphere and surface are consistent with the findings by the Huygens Probe. We derive the surface albedo at eight spectral regions of the 8 x 12 km² area surrounding the Huygens landing site. Within the 0.4-1.6 μm spectral region our surface albedos match DISR measurements, indicating that DISR and VIMS measurements are consistently calibrated. These values, together with albedos at the longer 1.9-5.0 μm wavelengths not sampled by DISR, resemble a dark version of the spectrum of Ganymede's icy leading hemisphere. The eight surface albedos of the landing site are consistent with, but not deterministic of, exposed water ice with dark impurities. © 2011 Elsevier Inc. All rights reserved.
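The doubling and adding principle mentioned in this abstract can be illustrated with a toy scalar (two-stream-like) sketch: the reflectance and transmittance of an optically thin layer are combined with a copy of itself, repeatedly, until the full optical depth is built up. This is a hedged illustration of the idea only; the DISR code handles full angular dependence and phase functions, and the optical depth and single-scattering albedo values in the test are illustrative, not taken from the paper.

```python
def double(r, t):
    # Combine two identical homogeneous layers, summing the geometric series
    # of inter-layer reflections (scalar analogue of the matrix operation).
    denom = 1.0 - r * r
    return r + t * r * t / denom, t * t / denom

def layer_rt(tau_total, omega, n_doublings=20):
    # Reflectance and transmittance of a layer of optical depth tau_total and
    # single-scattering albedo omega: start from an optically thin layer and
    # double it repeatedly until the full optical depth is reached.
    dtau = tau_total / (2 ** n_doublings)
    r = 0.5 * omega * dtau                  # single-scattering reflectance
    t = 1.0 - dtau + 0.5 * omega * dtau     # direct + forward-scattered part
    for _ in range(n_doublings):
        r, t = double(r, t)
    return r, t
```

For a conservatively scattering layer (omega = 1) the sketch conserves energy exactly, while an absorbing haze (for example the omega = 0.63 value quoted above) reflects less and absorbs the remainder.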
Abstract:
Background: Balancing the subject composition of case and control groups to create homogeneous ancestries between the groups is essential for medical association studies. Methods: We explored the applicability of a single-tube 34-plex ancestry informative marker (AIM) single nucleotide polymorphism (SNP) assay to estimate the African Component of Ancestry (ACA), in order to design a future case-control association study of a Brazilian urban sample. Results: One hundred eighty individuals (107 case group; 73 control group) self-described as white, brown-intermediate or black were selected. The proportions of the relative contributions of a variable number of ancestral population components were similar between the case and control groups. Moreover, the case and control groups showed similar distributions for the ACA <0.25 and >0.50 categories. Notably, a high number of outlier values (23 samples) were observed among individuals with ACA <0.25. These individuals presented a high probability of Native American and East Asian ancestral components; however, no individuals self-reporting these ancestries were observed in this study. Conclusions: The strategy proposed for the assessment of ancestry and the adjustment of case and control groups is an important step for the proper construction of an association study, particularly when subjects are drawn from a complex urban population. This can be achieved using a straightforward multiplexed AIM-SNP assay of highly discriminatory ancestry markers.
Abstract:
The Passifloraceae family is extensively used in native Brazilian folk medicine to treat a wide variety of diseases. The problem of flavonoid extraction from Passiflora was addressed by design of experiments (DOE), as a mixture experiment including one categorical process variable. The components of the binary mixture were ethanol (component A) and water (component B); the categorical process variable, extraction method (factor C), was varied at two levels: (+1) maceration and (-1) percolation. ANOVA suggested a cubic model for P. edulis extraction and a quadratic model for P. alata. These results indicate that the proportion of components A and B in the mixture is the main factor involved in significantly increasing flavonoid extraction. Regarding the extraction methods, no important differences were observed, which indicates that these two traditional extraction methods can be used interchangeably to extract flavonoids from both medicinal plants. The evaluation of the antioxidant activity of the extracts by the ORAC method showed that P. edulis displays twice as much antioxidant activity as P. alata. Considering that maceration is a simple, rapid and environmentally friendly extraction method, the optimized conditions for flavonoid extraction from these Passiflora species are maceration with 75% ethanol for P. edulis and 50% ethanol for P. alata.
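The mixture models mentioned in this abstract follow the Scheffé canonical form; for a binary mixture the quadratic version is y = b1·x1 + b2·x2 + b12·x1·x2 with x1 + x2 = 1. A minimal least-squares fit of that form can be sketched as follows; the data layout and coefficient values in the test are hypothetical, not the paper's measurements.

```python
def fit_scheffe_binary(x1s, ys):
    # Scheffe quadratic model for a binary mixture,
    #   y = b1*x1 + b2*x2 + b12*x1*x2   with x2 = 1 - x1,
    # fitted by least squares (normal equations + Gauss-Jordan elimination).
    rows = [[x, 1.0 - x, x * (1.0 - x)] for x in x1s]
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    aty = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(3)]
    m = [row + [v] for row, v in zip(ata, aty)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(m[r][c]))   # partial pivoting
        m[c], m[p] = m[p], m[c]
        for r in range(3):
            if r != c:
                f = m[r][c] / m[c][c]
                m[r] = [a - f * b for a, b in zip(m[r], m[c])]
    return [m[i][3] / m[i][i] for i in range(3)]
```

A positive b12 indicates a synergistic ethanol-water blend, which is the kind of effect a DOE mixture analysis is designed to detect.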
Abstract:
Blood-brain barrier (BBB) permeation is an essential property for drugs that act in the central nervous system (CNS) for the treatment of human diseases such as epilepsy, depression, Alzheimer's disease, Parkinson's disease and schizophrenia, among others. In the present work, quantitative structure-property relationship (QSPR) studies were conducted for the development and validation of in silico models for the prediction of BBB permeation. The data set used has substantial chemical diversity and a relatively wide distribution of property values. The generated QSPR models showed good statistical parameters and were successfully employed for the prediction of a test set containing 48 compounds. The predictive models presented herein are useful in the identification, selection and design of new drug candidates with improved pharmacokinetic properties.
Abstract:
This paper addresses the probabilistic analysis of the corrosion initiation time in reinforced concrete structures exposed to chloride ion penetration. Structural durability is an important criterion which must be evaluated for every type of structure, especially when structures are built in aggressive atmospheres. For reinforced concrete members, the chloride diffusion process is widely used to evaluate durability; by modelling this phenomenon, corrosion of the reinforcement can be better estimated and prevented. Corrosion begins when a threshold chloride concentration is reached at the steel bars of the reinforcement. Despite the robustness of several models proposed in the literature, deterministic approaches fail to accurately predict the corrosion initiation time due to the inherent randomness observed in this process. In this regard, durability can be more realistically represented using probabilistic approaches. A probabilistic analysis of chloride ion penetration is presented in this paper. The chloride penetration is simulated using Fick's second law of diffusion, which represents the chloride diffusion process considering time-dependent effects. The probability of failure is calculated using Monte Carlo simulation and the First Order Reliability Method (FORM) with a direct coupling approach. Some examples are considered in order to study these phenomena, and a simplified method is proposed to determine optimal values for the concrete cover.
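The Monte Carlo part of the approach described above can be sketched in a few lines: the error-function solution of Fick's second law gives the chloride content at the bar depth, and the probability of corrosion initiation is the fraction of random samples in which that content exceeds the critical threshold. All distributions and parameter values below are illustrative assumptions, not the paper's data.

```python
import math
import random

def chloride(x_mm, t_years, cs, d):
    # Fick's second law for a semi-infinite medium with constant surface
    # concentration cs:  C(x, t) = cs * erfc( x / (2 * sqrt(D * t)) )
    return cs * math.erfc(x_mm / (2.0 * math.sqrt(d * t_years)))

def pf_initiation(t_years, n=20000, seed=1):
    # Monte Carlo estimate of the probability that the chloride content at the
    # bar depth exceeds the critical threshold within t_years.  All the
    # distributions below are illustrative assumptions, not the paper's data.
    rng = random.Random(seed)
    fails = 0
    for _ in range(n):
        cover = max(rng.gauss(50.0, 8.0), 1.0)   # concrete cover [mm]
        d = max(rng.gauss(60.0, 15.0), 1.0)      # diffusion coeff. [mm^2/year]
        cs = max(rng.gauss(0.8, 0.1), 0.01)      # surface chloride [% binder]
        ccr = max(rng.gauss(0.4, 0.05), 0.01)    # critical threshold [% binder]
        if chloride(cover, t_years, cs, d) >= ccr:
            fails += 1
    return fails / n
```

Because the chloride profile grows monotonically with time, the estimated failure probability increases with the exposure period, which is the behaviour the simplified cover-optimization method exploits.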
Abstract:
In this paper, we present a vascular tree model made with synthetic materials which allows us to obtain images for a 3D reconstruction. To create this model, we used PVC tubes of several diameters and lengths, which let us evaluate the accuracy of our 3D reconstruction. We performed the 3D reconstruction from a series of images of our model, after calibrating the camera. For the calibration we used a corner detector. We also used optical flow techniques to track points through the images, forward and backward. Once we had the set of images in which a point was located, we performed the 3D reconstruction by randomly choosing a pair of images and computing the projection error. After several repetitions, we found the best 3D location for the point.
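The projection-error criterion used above to pick the best 3D location can be sketched as follows; the pinhole model and the 3x4 camera matrices are generic assumptions for illustration, not the calibration actually used in the paper.

```python
import math

def project(pt3d, cam):
    # Pinhole projection of a 3D point (tuple) with a 3x4 camera matrix,
    # given as three row tuples; a generic model, not the actual calibration.
    h = pt3d + (1.0,)                      # homogeneous coordinates
    x, y, w = (sum(c * p for c, p in zip(row, h)) for row in cam)
    return x / w, y / w

def reprojection_error(pt3d, cams, obs):
    # Mean Euclidean distance between reprojected and observed image points:
    # the criterion used to keep the best 3D location among random image pairs.
    errs = [math.dist(project(pt3d, c), o) for c, o in zip(cams, obs)]
    return sum(errs) / len(errs)
```

Repeatedly triangulating from random image pairs and keeping the candidate with the smallest reprojection error is a RANSAC-like strategy for rejecting poorly tracked points.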
Abstract:
Reinforced concrete columns may fail because of buckling of the longitudinal reinforcing bars when exposed to earthquake motions. Depending on the hoop stiffness and the length-over-diameter ratio, the instability can be local (between two subsequent hoops) or global (the buckling length comprises several hoop spacings). To gain insight into the topic, an extensive literature review of 19 existing models has been carried out, covering different approaches and assumptions which yield different results. Finite element fiber analysis was carried out to study the local buckling behavior with varying length-over-diameter and initial imperfection-over-diameter ratios. The comparison of the analytical results with experimental results shows good agreement before the post-buckling behavior undergoes large deformation. Furthermore, different global buckling analysis cases were run considering the influence of different parameters; for certain hoop stiffnesses and length-over-diameter ratios, local buckling was encountered. A parametric study yields a non-dimensional critical stress as a function of a stiffness ratio characterized by the reinforcement configuration.
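As a rough companion to the bar-buckling models surveyed above, the classical Euler estimate for a bar segment between two hoops can be sketched; the effective-length factor and the material values in the test are illustrative assumptions, and the sketch ignores the inelastic effects that govern when the elastic critical stress exceeds yield (which is why fiber analysis is needed in practice).

```python
import math

def euler_buckling_stress(e_mod, diameter, hoop_spacing, k=0.5):
    # Euler critical stress for a circular bar segment restrained at two
    # successive hoops.  k is the effective-length factor (0.5 for clamped
    # ends, 1.0 for pinned ends); E, d and s in the usage below are
    # illustrative values, not the thesis' parametric study.
    r_gyr = diameter / 4.0                 # radius of gyration of a circle
    slenderness = k * hoop_spacing / r_gyr
    return (math.pi ** 2) * e_mod / slenderness ** 2
```

The sketch reproduces the expected trends: stiffer end restraint (smaller k) and closer hoop spacing both raise the critical stress, which is the qualitative dependence the parametric study quantifies.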
Abstract:
The scale-down of transistor technology allows microelectronics manufacturers such as Intel and IBM to build ever more sophisticated systems on a single microchip. The classical interconnection solutions based on shared buses or direct connections between the modules of the chip are becoming obsolete, as they struggle to sustain the increasingly tight bandwidth and latency constraints that these systems demand. The most promising solution for future chip interconnects is the Network on Chip (NoC). NoCs are networks composed of routers and channels used to interconnect the different components installed on a single microchip. Examples of advanced processors based on NoC interconnects are the IBM Cell processor, composed of eight CPUs, which is installed in the Sony PlayStation 3, and the Intel Teraflops project, composed of 80 independent (simple) microprocessors. On-chip integration is becoming popular not only in the Chip Multi Processor (CMP) research area but also in the wider and more heterogeneous world of Systems on Chip (SoC). SoCs comprise all the electronic devices that surround us, such as cell phones, smartphones, home embedded systems, automotive systems, set-top boxes, etc. SoC manufacturers such as STMicroelectronics, Samsung and Philips, as well as universities such as Bologna University, M.I.T. and Berkeley, are all proposing proprietary frameworks based on NoC interconnects. These frameworks help engineers switch design methodologies and speed up the development of new NoC-based systems on chip. In this Thesis we propose an introduction to CMP and SoC interconnection networks. Then, focusing on SoC systems, we propose:
• a detailed analysis, based on simulation, of the Spidergon NoC, an STMicroelectronics solution for SoC interconnects. The Spidergon NoC differs from many classical solutions inherited from the parallel computing world. Here we propose a detailed analysis of this NoC topology and its routing algorithms. Furthermore, we propose aEqualized, a new routing algorithm designed to optimize the use of the resources of the network while also increasing its performance;
• a methodology flow based on modified publicly available tools that, combined, can be used to design, model and analyze any kind of System on Chip;
• a detailed analysis of an STMicroelectronics-proprietary transport-level protocol that the author of this Thesis helped to develop;
• a simulation-based comprehensive comparison of different network interface designs, proposed by the author and the researchers at the AST lab, in order to integrate shared-memory and message-passing based components on a single System on Chip;
• a powerful and flexible solution to address the timing closure issue in the design of synchronous Networks on Chip. Our solution is based on relay-station repeaters and allows us to reduce the power and area demands of NoC interconnects while also reducing their buffer needs;
• a solution to simplify the design of NoCs while also increasing their performance and reducing their power and area consumption. We propose to replace complex and slow virtual channel-based routers with multiple, flexible, small Multi Plane ones. This solution allows us to reduce the area and power dissipation of any NoC while also increasing its performance, especially when resources are reduced.
This Thesis has been written in collaboration with the Advanced System Technology laboratory in Grenoble, France, and the Computer Science Department at Columbia University in the City of New York.
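The Spidergon topology analyzed above connects each of n nodes (n a multiple of 4) by clockwise, counterclockwise and across links. A minimal next-hop function in the spirit of the published Across-First scheme can be sketched as follows; this is a simplified reading for illustration, not the STMicroelectronics implementation or the thesis' aEqualized algorithm.

```python
def spidergon_next_hop(src, dst, n):
    # Across-First-style routing on a Spidergon of n nodes (n a multiple of 4).
    # Each node i has clockwise (i+1), counterclockwise (i-1) and across
    # (i + n/2) links.  If the ring distance exceeds n/4, take the across link
    # first; otherwise follow the shorter ring direction.
    if src == dst:
        return src
    cw = (dst - src) % n               # clockwise ring distance
    if cw > n // 2:                    # counterclockwise side is shorter
        if (n - cw) > n // 4:
            return (src + n // 2) % n  # across link
        return (src - 1) % n           # one step counterclockwise
    if cw > n // 4:
        return (src + n // 2) % n      # across link
    return (src + 1) % n               # one step clockwise
```

After at most one across hop the residual ring distance is at most n/4, so every route completes in about n/4 + 1 hops, which is the property that makes the topology attractive against a plain ring.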
Abstract:
The presented study carried out an analysis of rural landscape changes. In particular, the study focuses on understanding the driving forces acting on the rural built environment, using a statistical spatial model implemented through GIS techniques. It is well known that the study of landscape changes is essential for conscious decision making in land planning. A bibliography review reveals a general lack of studies dealing with the modeling of the rural built environment, and hence a theoretical modelling approach for this purpose is needed. Advances in technology and modernity in building construction and agriculture have gradually changed the rural built environment. In addition, the phenomenon of urbanization has determined the construction of new volumes beside abandoned or derelict rural buildings. Consequently, two main types of transformation dynamics affecting the rural built environment can be observed: the conversion of rural buildings and the increase in building numbers. It is the specific aim of the presented study to propose a methodology for the development of a spatial model that allows the identification of the driving forces that acted on building allocation. In fact, one of the most concerning dynamics nowadays is the irrational expansion of building sprawl across the landscape. The proposed methodology is composed of several conceptual steps that cover different aspects of the development of a spatial model: the selection of a response variable that best describes the phenomenon under study, the identification of possible driving forces, the sampling methodology for the collection of data, the most suitable algorithm to be adopted in relation to the statistical theory and method used, the calibration process, and the evaluation of the model.
A different combination of factors in various parts of the territory generated more or less favourable conditions for building allocation, and the existence of buildings represents the evidence of such an optimum. Conversely, the absence of buildings expresses a combination of agents which is not suitable for building allocation. Presence or absence of buildings can thus be adopted as indicators of such driving conditions, since they represent the expression of the action of driving forces in the land suitability sorting process. The existence of correlation between site selection and hypothetical driving forces, evaluated by means of modeling techniques, provides evidence of which driving forces are involved in the allocation dynamic and an insight into their level of influence on the process. GIS software, by means of spatial analysis tools, allows the concepts of presence and absence to be associated with point features, generating a point process. Presence or absence of buildings at given site locations represents the expression of the interaction of these driving factors. In the case of presences, points represent the locations of real existing buildings; conversely, absences represent locations where buildings do not exist, and so they are generated by a stochastic mechanism. Possible driving forces are selected, and the existence of a causal relationship with building allocation is assessed through a spatial model. The adoption of empirical statistical models provides a mechanism for explanatory variable analysis and for the identification of the key driving variables behind the site selection process for new building allocation. The model developed by following this methodology is applied to a case study to test the validity of the methodology. In particular, the study area for the testing of the methodology is the New District of Imola, characterized by a prevailing agricultural production vocation and where transformation dynamics occurred intensively.
The development of the model involved the identification of predictive variables (related to the geomorphologic, socio-economic, structural and infrastructural systems of the landscape) capable of representing the driving forces responsible for landscape changes. The calibration of the model is carried out on spatial data for the periurban and rural parts of the study area within the 1975-2005 time period, by means of a generalised linear model. The resulting output of the model fit is a continuous grid surface whose cells assume probability values, ranging from 0 to 1, of building occurrence across the rural and periurban parts of the study area. Hence the response variable assesses the changes in the rural built environment that occurred in this time interval and is correlated to the selected explanatory variables by means of a generalized linear model using logistic regression. By comparing the probability map obtained from the model to the actual rural building distribution in 2005, the interpretation capability of the model can be evaluated. The proposed model can also be applied to the interpretation of trends in other study areas, and for different time intervals, depending on the availability of data. The use of suitable data in terms of time, information and spatial resolution, and the costs related to data acquisition, pre-processing and survey, are among the most critical aspects of model implementation. Future in-depth studies can focus on using the proposed model to predict short/medium-range future scenarios for the rural built environment distribution in the study area. In order to predict future scenarios, it is necessary to assume that the driving forces do not change and that their levels of influence within the model are not far from those assessed for the calibration time interval.
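The logistic-regression GLM described above relates presence/absence (0/1) to the explanatory variables through P(presence) = 1 / (1 + exp(-(b0 + b·x))). A minimal fit by gradient ascent on the log-likelihood can be sketched as follows; the single covariate in the test stands in for a hypothetical driving force, and the data are synthetic, not the Imola case study.

```python
import math
import random

def fit_logistic(xs, ys, lr=0.5, epochs=2000):
    # Logistic-regression GLM:  P(presence) = 1 / (1 + exp(-(b0 + b . x))).
    # Plain batch gradient ascent on the log-likelihood; the covariates stand
    # in for hypothetical driving forces (slope, distance to roads, ...).
    nfeat = len(xs[0])
    b0, b = 0.0, [0.0] * nfeat
    n = len(xs)
    for _ in range(epochs):
        g0, g = 0.0, [0.0] * nfeat
        for x, y in zip(xs, ys):
            z = b0 + sum(w * v for w, v in zip(b, x))
            p = 1.0 / (1.0 + math.exp(-z))
            err = y - p                     # gradient of the log-likelihood
            g0 += err
            for j in range(nfeat):
                g[j] += err * x[j]
        b0 += lr * g0 / n
        b = [w + lr * gj / n for w, gj in zip(b, g)]
    return b0, b
```

Evaluating the fitted model on a grid of covariate values yields exactly the kind of 0-to-1 probability surface described in the abstract.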
Abstract:
This work is a detailed study of hydrodynamic processes in a defined area, the littoral in front of the Venice Lagoon and its inlets, which are complex morphological areas of interconnection. A finite element hydrodynamic model of the Venice Lagoon and the Adriatic Sea has been developed in order to study the coastal current patterns and the exchanges at the inlets of the Venice Lagoon. This is the first work in this area that tries to model the interaction dynamics by running a coupled model for the lagoon and the Adriatic Sea. First, the barotropic processes near the inlets of the Venice Lagoon were studied. Data from more than ten tide gauges deployed in the Adriatic Sea were used in the calibration of the simulated water levels. To validate the model results, empirical flux data measured by ADCP probes installed inside the Lido and Malamocco inlets were used, and the exchanges through the three inlets of the Venice Lagoon were analyzed. The comparison between modelled and measured fluxes at the inlets showed the efficiency of the model in reproducing both tide- and wind-induced water exchanges between the sea and the lagoon. As a second step, the small-scale processes around the inlets that connect the Venice Lagoon with the Northern Adriatic Sea were also investigated by means of 3D simulations. Maps of vorticity were produced, considering the influence of tidal flows and wind stress in the area. A sensitivity analysis was carried out to define the importance of advection and of baroclinic pressure gradients in the development of the vortical processes seen along the littoral close to the inlets. Finally, a comparison with real measurements, surface velocity data from HF radar near the Venice inlets, was performed, which allows for a better understanding of the processes and their seasonal dynamics. The results outline the predominance of wind and tidal forcing in the coastal area.
Wind forcing acts mainly on the mean coastal current, inducing its detachment offshore during Sirocco events and an increase of littoral currents during Bora events. The Bora action is more homogeneous over the whole coastal area, whereas the Sirocco has a stronger impact in the south, near the Chioggia inlet. Tidal forcing at the inlets is mainly barotropic. The sensitivity analysis shows that advection is the main physical process responsible for the persistent vortical structures present along the littoral between the Venice Lagoon inlets. The comparison with HF radar measurements not only permitted a validation of the model results, but also a description of different patterns in specific periods of the year. The success of the 2D and 3D simulations in reproducing the sea surface elevation (SSE) inside and outside the Venice Lagoon, the tidal flow through the lagoon inlets, and the small-scale phenomena occurring along the littoral indicates that the finite element approach is a most suitable tool for the investigation of coastal processes. For the first time, as shown by the flux modeling, the physical processes that drive the interaction between the two basins were reproduced.
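The vorticity maps mentioned above are the diagnostic ζ = ∂v/∂x − ∂u/∂y computed from the simulated velocity field. A toy centred-difference version on a regular grid can be sketched as follows; this is a generic post-processing illustration, not the finite element model's own machinery.

```python
def vorticity(u, v, dx, dy):
    # Vertical vorticity  zeta = dv/dx - du/dy  on a regular grid using
    # centred finite differences (interior points only); row index i runs in
    # the y direction, column index j in the x direction.
    ny, nx = len(u), len(u[0])
    zeta = [[0.0] * nx for _ in range(ny)]
    for i in range(1, ny - 1):
        for j in range(1, nx - 1):
            dvdx = (v[i][j + 1] - v[i][j - 1]) / (2.0 * dx)
            dudy = (u[i + 1][j] - u[i - 1][j]) / (2.0 * dy)
            zeta[i][j] = dvdx - dudy
    return zeta
```

As a sanity check, a solid-body rotation u = −Ωy, v = Ωx has uniform vorticity 2Ω, which the stencil reproduces exactly at interior points.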
Abstract:
This thesis takes concrete, steel and their interaction as its research subjects, mainly reviewing and discussing the changes in the properties of steel and concrete during and after exposure to high temperature. The differences between the research methods and approaches of different researchers and papers, particularly Chinese research and Chinese papers, and in part a comparison between the methods of Chinese papers and those of European and American papers on the fire-resistance behavior of reinforced concrete, are summarized and analyzed. Research on the fire-resistance behavior of reinforced concrete is becoming increasingly important all over the world. Differences among Chinese research results, and between Chinese results and those of other countries, are identified.
Abstract:
This is an experimental study of the long-term deformations of fibre-reinforced concrete. Steel and macro-synthetic fibres were used to evaluate shrinkage, creep, mid-span deflection, cracking and rupture in three different types of samples. Finally, the main topics of the ACI guidelines were analyzed in order to provide a design overview.
Abstract:
The aim of this Doctoral Thesis is to develop a genetic algorithm based optimization method to find the best conceptual design architecture of an aero piston engine for given design specifications. Nowadays, the conceptual design of turbine airplanes starts with the aircraft specifications, and then the turbofan or turboprop best suited to the specific application is chosen. In the aeronautical piston engine field, which has been dormant for several decades as interest shifted towards turbine aircraft, new materials with increased performance and properties have opened new possibilities for development. Moreover, the engine's modularity, given by the cylinder unit, makes it possible to design a specific engine for a given application. In many real engineering problems the number of design variables may be very high, with several non-linearities needed to describe the behaviour of the phenomena. In this case the objective function has many local extremes, but the designer is usually interested in the global one. Stochastic and evolutionary optimization techniques, such as the genetic algorithm method, may offer reliable solutions to such design problems within acceptable computational time. The optimization algorithm developed here can be employed in the first phase of the preliminary project of an aeronautical piston engine design. It is a single-objective genetic algorithm which, starting from the given design specifications, finds the engine propulsive system configuration of minimum mass while satisfying the geometrical, structural and performance constraints. The algorithm reads the project specifications as input data, namely the maximum values of crankshaft and propeller shaft speed and the maximum pressure in the combustion chamber. The design variable bounds, which describe the solution domain from the geometrical point of view, are also introduced.
In the Matlab® Optimization environment, the objective function to be minimized is defined as the sum of the masses of the engine propulsive components. Each individual generated by the genetic algorithm is the assembly of the flywheel, the vibration damper, and as many pistons, connecting rods and cranks as there are cylinders. The fitness is evaluated for each individual of the population, and then the genetic operators are applied: reproduction, mutation, selection and crossover. In the reproduction step the elitist method is applied, in order to save the fittest individuals from possible disruption by mutation and recombination, letting them survive undamaged into the next generation. Finally, once the best individual is found, the optimal dimensions of the components are saved to an Excel® file, in order to build an automatic 3D CAD model of each component of the propulsive system, providing a direct pre-visualization of the final product while still in the engine's preliminary design phase. With the purpose of showing the performance of the algorithm and validating this optimization method, an actual engine is taken as a case study: the 1900 JTD Fiat Avio, a 4-cylinder, four-stroke Diesel. Many verifications are made on the mechanical components of the engine, in order to test their feasibility and to decide their survival through the generations. A system of inequalities is used to describe the non-linear relations between the design variables and to check the components under static and dynamic load configurations. The geometrical boundaries of the design variables are taken from actual engine data and similar design cases. Among the many simulations run for algorithm testing, twelve have been chosen as representative of the distribution of the individuals. Then, as an example, for each simulation the corresponding 3D models of the crankshaft and the connecting rod have been automatically built.
In spite of morphological differences among the components, the mass is almost the same. The results show a significant mass reduction (almost 20% for the crankshaft) in comparison to the original configuration, and an acceptable robustness of the method has been demonstrated. The algorithm developed here is shown to be a valid method for the preliminary design optimization of an aeronautical piston engine. In particular, the procedure is able to analyze quite a wide range of design solutions, rejecting those that cannot fulfill the feasibility design specifications. This optimization algorithm could boost aeronautical piston engine development, speeding up the production rate and joining modern computational performance and technological awareness to long-standing traditional design experience.
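The GA machinery described in this abstract (fitness evaluation, tournament-style selection, crossover, mutation, and an elitist step that shields the best individual from disruption) can be sketched generically as below. The operator settings and the toy "mass" objective in the usage example are illustrative assumptions, not the thesis' actual Matlab® implementation or its structural constraints.

```python
import random

def genetic_minimize(fitness, bounds, pop_size=40, gens=60, seed=3):
    # Minimal single-objective GA with elitism: the best individual is copied
    # unchanged into the next generation, protecting it from crossover and
    # mutation disruption.  Tournament selection, uniform crossover, Gaussian
    # mutation; all settings are illustrative, not the thesis' actual tuning.
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        nxt = [pop[0][:]]                              # elitism
        while len(nxt) < pop_size:
            p1 = min(rng.sample(pop, 3), key=fitness)  # tournament selection
            p2 = min(rng.sample(pop, 3), key=fitness)
            child = [a if rng.random() < 0.5 else b for a, b in zip(p1, p2)]
            for j, (lo, hi) in enumerate(bounds):      # Gaussian mutation
                if rng.random() < 0.15:
                    child[j] = min(max(child[j] + rng.gauss(0.0, 0.1 * (hi - lo)), lo), hi)
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)
```

In the thesis the fitness would be the penalized total mass of the propulsive components; here a simple quadratic bowl stands in for it to show that the search converges near the optimum while respecting the variable bounds.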
Abstract:
Extrusion is a process used to form long products of constant cross section, in a wide variety of shapes, from simple billets. Aluminum alloys are the materials most processed in the extrusion industry due to their deformability and their wide field of applications, ranging from buildings to aerospace and from design to the automotive industry. The diverse applications imply different requirements that can be fulfilled by the wide range of alloys and treatments, from critical structural applications to high-quality surfaces and aesthetic aspects. Whether one or the other is the critical aspect, both depend directly on microstructure. The extrusion process is moreover marked by high deformations and complex strain gradients, making the control of microstructure evolution difficult; at present it is not yet fully achieved. Nevertheless, finite element modeling has reached maturity and can therefore start to be used as a tool for the investigation and prediction of microstructure evolution. This thesis analyzes and models the evolution of microstructure throughout the entire extrusion process for 6XXX series aluminum alloys. The core phase of the work was the development of specific tests to investigate the microstructure evolution and to validate the model implemented in a commercial FE code. Along with it, two essential activities were carried out for a correct calibration of the model, beyond the simple search for boundary parameters, thus leading to the understanding and control of both code and process. In this direction, activities were also conducted on building critical know-how in the interpretation of microstructure and extrusion phenomena. It is believed, in fact, that the sole analysis of microstructure evolution, regardless of its relevance to the technological aspects of the process, would be of little use to industry as well as ineffective for the interpretation of the results.
Abstract:
Introduction: Epidural analgesia has been correlated with an increased duration of the second stage of labor and a higher rate of vacuum-assisted delivery. Several mechanisms have been hypothesized, including a reduced perception of fetal descent, of the pushing effort, and of the reflexes that promote the progression and rotation of the fetal head in the birth canal. These parameters are usually assessed by digital clinical examination, which has consistently been reported to be poorly accurate and reproducible. On this basis, the use of intrapartum ultrasound, with the introduction of several sonographic parameters for assessing the descent of the fetal head, has been proposed to support the clinical diagnosis in the second stage of labor. Aims of the study: to study the effect of epidural analgesia on the progression of the fetal head during the second stage of labor, assessed by intrapartum ultrasound. Materials and methods: a series of low-risk nulliparous patients at term (37+0-42+0) were prospectively recruited in the delivery ward of our University Hospital. In each patient, an ultrasound volume was acquired every 20 minutes from the beginning of the active phase of the second stage until delivery, and a series of sonographic parameters were subsequently derived (angle of progression, progression distance, head-symphysis distance and midline angle). All these parameters were compared between the two groups at each time interval. Results: 71 patients in total, of whom 41 (57.7%) received epidural analgesia. In 58 (81.7%) cases delivery was spontaneous, while in 8 (11.3%) and 5 (7.0%) cases vacuum extraction or cesarean section was performed, respectively. The values of all the measured sonographic parameters were comparable between the two groups at all measurement intervals.
Conclusions: the progression of the fetal head, assessed longitudinally by 3D ultrasound, does not appear to differ significantly between patients with and without epidural analgesia.