903 results for mathematical modeling of PTO


Relevance: 100.00%

Abstract:

The objective of this thesis was to improve the commercial CFD software Ansys Fluent to obtain a tool able to perform accurate simulations of flow boiling in the slug flow regime. A reliable numerical framework allows a better understanding of the bubble and flow dynamics induced by evaporation and makes it possible to predict wall heat transfer trends. In order to save computational time, the flow is modeled with an axisymmetric formulation. The vapor and liquid phases are treated as incompressible and in laminar flow. By means of a single-fluid approach, the flow equations are written as for a single-phase flow, but discontinuities at the interface and interfacial effects need to be accounted for and discretized properly. Ansys Fluent provides a Volume of Fluid technique to advect the interface and to map the discontinuous fluid properties throughout the flow domain. The interfacial effects are dominant in boiling slug flow, and the accuracy of their estimation is fundamental for the reliability of the solver. Ad hoc user-defined functions are introduced into the numerical code to compute the surface tension force and the rates of mass and energy exchange at the interface due to evaporation. Several validation benchmarks demonstrate the improved performance of the modified software. Various adiabatic configurations are simulated in order to test the capability of the numerical framework in modeling actual flows, and the comparison with experimental results is very positive. The simulation of a single evaporating bubble shows that the local transient heat convection in the liquid after the bubble transit dominates the global heat transfer rate. The simulation of multiple evaporating bubbles flowing in sequence shows that their mutual influence can strongly enhance the heat transfer coefficient, up to twice the single-phase flow value.
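The mass exchange rate at the interface is tied to the heat flux by the energy-jump condition. As a minimal sketch (not the thesis's actual user-defined functions; the function name and the default latent heat, taken for water at 1 atm, are illustrative assumptions):

```python
def evaporation_mass_flux(q_interface, h_lv=2.257e6):
    """Interfacial mass flux (kg/m^2/s) from the energy-jump condition:
    the net heat flux reaching the interface (W/m^2) divided by the
    latent heat of vaporization (J/kg; default is water at 1 atm)."""
    return q_interface / h_lv

# A net interfacial heat flux of 22.57 kW/m^2 evaporates 0.01 kg/m^2/s of water.
print(evaporation_mass_flux(22570.0))  # -> 0.01
```

In a VOF solver this mass flux becomes a source term in the continuity equation of each phase, and its product with the latent heat is subtracted from the energy equation at interfacial cells.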

Relevance: 100.00%

Abstract:

This thesis tackles the problem of the automated detection of the atmospheric boundary layer (BL) height, h, from aerosol lidar/ceilometer observations. A new method, the Bayesian Selective Method (BSM), is presented. It implements a Bayesian statistical inference procedure which combines different sources of information in a statistically optimal way. First, atmospheric stratification boundaries are located from discontinuities in the ceilometer backscattered signal. The BSM then identifies the discontinuity edge that has the highest probability to effectively mark the BL height. Information from contemporaneous physical boundary layer model simulations and a climatological dataset of BL height evolution are combined in the assimilation framework to assist this choice. The BSM algorithm has been tested on four months of continuous ceilometer measurements collected during the BASE:ALFA project and is shown to realistically diagnose the BL depth evolution in many different weather conditions. The BASE:ALFA dataset is then used to investigate the boundary layer structure in stable conditions. Functions from the Obukhov similarity theory are used as regression curves to fit observed velocity and temperature profiles in the lower half of the stable boundary layer. Surface fluxes of heat and momentum are the best-fitting parameters in this exercise and are compared with those measured by a sonic anemometer. The comparison shows remarkable discrepancies, more evident in cases where the bulk Richardson number is quite large. This analysis supports earlier findings that surface turbulent fluxes are not the appropriate scaling parameters for profiles of mean quantities in very stable conditions. One practical consequence is that boundary layer height diagnostics which rely mainly on surface fluxes disagree with those obtained by inspecting co-located radiosounding profiles.
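The edge-selection step can be illustrated with a minimal sketch (not the actual BSM implementation; the function name, the scoring choices, and all numbers are illustrative assumptions): each candidate discontinuity is scored by an unnormalized posterior, the product of a likelihood built from the edge's gradient strength and a Gaussian prior centered on the height suggested by the model and climatological information.

```python
import math

def select_bl_height(edges, prior_mean, prior_sigma):
    """Pick the gradient edge most likely to mark the BL top.

    edges: list of (height_m, gradient_strength) candidates found in the
    backscatter profile. The prior is a Gaussian centred on the height
    suggested by model simulations and climatology.
    """
    best, best_score = None, -math.inf
    total = sum(g for _, g in edges)
    for h, g in edges:
        likelihood = g / total                                  # stronger edges are more likely
        prior = math.exp(-0.5 * ((h - prior_mean) / prior_sigma) ** 2)
        score = likelihood * prior                              # unnormalised posterior
        if score > best_score:
            best, best_score = h, score
    return best

# Two strong edges; the prior (from model + climatology) favours the lower one,
# e.g. a residual-layer top at 1800 m that should not be mistaken for the BL top.
edges = [(450.0, 0.9), (1800.0, 1.0)]
print(select_bl_height(edges, prior_mean=500.0, prior_sigma=300.0))  # -> 450.0
```

The point of the Bayesian combination is visible here: the strongest backscatter gradient alone would select the wrong edge, while the prior steers the choice toward the physically plausible one.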

Relevance: 100.00%

Abstract:

From the perspective of a new-generation optoelectronic technology based on organic semiconductors, a major objective is to achieve a deep and detailed knowledge of the structure-property relationships, in order to optimize the electronic, optical, and charge transport properties by tuning the chemical-physical characteristics of the compounds. The purpose of this dissertation is to contribute to such understanding through suitable theoretical and computational studies. Specifically, the structural, electronic, optical, and charge transport characteristics of several promising, recently synthesized organic materials are investigated by means of an integrated approach encompassing quantum-chemical calculations, molecular dynamics, and kinetic Monte Carlo simulations. Particular attention is devoted to the rationalization of optical and charge transport properties in terms of both intra- and intermolecular features. Moreover, a considerable part of this project involves the development of an in-house set of procedures and software modules required to assist the modeling of charge transport properties in the framework of the non-adiabatic hopping mechanism applied to organic crystalline materials. In the first part of my investigations, I mainly discuss the optical, electronic, and structural properties of several core-extended rylene derivatives, which can be regarded as model compounds for graphene nanoribbons. Two families have been studied, consisting of bay-linked perylene bisimide oligomers and N-annulated rylenes. Besides rylene derivatives, my studies also concerned the electronic and spectroscopic properties of tetracene diimides, quinoidal oligothiophenes, and oxygen-doped picene. As an example of device application, I studied the structural characteristics governing the efficiency of resistive molecular memories based on a derivative of benzoquinone.
Finally, in the second part of my investigations, I concentrate on the charge transport properties of perylene bisimide derivatives. Specifically, a comprehensive study of the structural and thermal effects on the charge transport of several core-twisted chlorinated and fluoro-alkylated perylene bisimide n-type semiconductors is presented.
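In the non-adiabatic hopping picture, charge transfer rates between neighboring molecules are commonly computed from Marcus theory and then fed into a kinetic Monte Carlo simulation. A minimal sketch of such a rate function (standard Marcus expression; the parameter values in the example are typical orders of magnitude, not results from this dissertation):

```python
import math

HBAR = 6.582119e-16   # reduced Planck constant, eV*s
KB = 8.617333e-5      # Boltzmann constant, eV/K

def marcus_rate(J, lam, dG, T=300.0):
    """Non-adiabatic hopping rate (1/s) from Marcus theory:
        k = (2*pi/hbar) * J^2 / sqrt(4*pi*lam*kB*T) * exp(-(dG + lam)^2 / (4*lam*kB*T))
    J   : electronic coupling between neighbouring molecules (eV)
    lam : reorganisation energy (eV)
    dG  : site-energy difference of the hop (eV)
    """
    prefac = (2.0 * math.pi / HBAR) * J ** 2 / math.sqrt(4.0 * math.pi * lam * KB * T)
    return prefac * math.exp(-(dG + lam) ** 2 / (4.0 * lam * KB * T))

# Typical organic-semiconductor magnitudes: J = 50 meV, lambda = 0.2 eV.
print(f"{marcus_rate(0.05, 0.2, 0.0):.2e} 1/s")
```

In a kinetic Monte Carlo run, such rates for all neighbor pairs define the waiting times and hop probabilities from which the charge mobility is extracted.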

Relevance: 100.00%

Abstract:

The cooperative motion algorithm was applied to the molecular simulation of complex chemical reactions and macromolecular orientation phenomena in confined geometries. First, we investigated the case of equilibrium step-growth polymerization in lamellae, pores, and droplets. In such systems, confinement was quantified by the area/volume ratio. Results showed that, as confinement increases, polymerization becomes slower and the average molecular weight (MW) at equilibrium decreases. This is caused by the steric hindrance imposed by the walls, since chain growth reactions in their close vicinity have fewer realization possibilities. For reactions inside droplets at surfaces, contact angles usually increased after polymerization to compensate for the conformational restrictions imposed by confinement upon the growing chains. In a second investigation, we considered monodisperse and chemically inert chains and focused on the effect of confinement on chain orientation. Simulations of thin polymer films showed that chains are preferentially oriented parallel to the surface. Orientation increases as MW increases or as the film thickness d decreases, in qualitative agreement with experiments on low-MW polystyrene. It is demonstrated that the orientation of the simulated chains results from a size effect, being a function of the ratio between the chain end-to-end distance and d. This study was complemented by experiments with thin films of pi-conjugated polymers such as MEH-PPV. Anisotropic refractive index measurements were used to analyze chain orientation. With increasing MW, orientation is enhanced. However, for MEH-PPV, orientation does not depend on d, even at thicknesses much larger than the chain contour length. This contradiction with the simulations was discussed by considering additional causes of orientation, for instance the appearance of nematic-like ordering in polymer films.
In another investigation, we simulated droplet evaporation on soluble surfaces and reproduced the formation of wells surrounded by ring-like deposits at the surface, as observed experimentally. In our simulations, swollen substrate particles migrate to the border of the droplet to minimize the contact between solvent and vacuum, which costs the most energy. Deposit formation at the beginning of evaporation results in pinning of the droplet. When polymer chains at the substrate surface have a strong uniaxial orientation, the resulting pattern no longer resembles a ring but a pair of half-moons. Finally, as an extension of the model developed for polymerization in nanoreactors, we studied the effect of geometrical confinement on a hypothetical oscillating reaction following the mechanism of the so-called periodically forced Brusselator. It was shown that a reaction which is chaotic in the bulk may be driven to periodicity by confinement and vice versa, opening new perspectives for chaos control.
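The periodically forced Brusselator referenced above has a simple rate-equation form that is easy to integrate in the well-mixed (bulk) limit. A minimal sketch, assuming illustrative parameter values and a plain explicit-Euler scheme (the thesis itself uses lattice Monte Carlo, not this mean-field integration):

```python
import math

def forced_brusselator(A=1.0, B=3.0, amp=0.1, omega=1.0, dt=1e-3, steps=40000):
    """Explicit-Euler integration of the periodically forced Brusselator:
        dx/dt = A + x^2 y - (B(t) + 1) x
        dy/dt = B(t) x - x^2 y
    with the forcing B(t) = B * (1 + amp * sin(omega * t)).
    Returns the trajectory of x."""
    x, y = A, B / A          # fixed point of the unforced system
    traj = []
    for n in range(steps):
        t = n * dt
        Bt = B * (1.0 + amp * math.sin(omega * t))
        dx = A + x * x * y - (Bt + 1.0) * x
        dy = Bt * x - x * x * y
        x, y = x + dt * dx, y + dt * dy
        traj.append(x)
    return traj

# With B = 3 > 1 + A^2 the unforced system already oscillates; the periodic
# forcing of B modulates these oscillations.
traj = forced_brusselator()
print(len(traj), min(traj) > 0.0)
```

Confinement effects such as those studied in the thesis enter only when the well-mixed assumption breaks down, which is exactly what the lattice simulations probe.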

Relevance: 100.00%

Abstract:

Deep convection by pyro-cumulonimbus clouds (pyroCb) can transport large amounts of forest fire smoke into the upper troposphere and lower stratosphere. Here, results from numerical simulations of such deep convective smoke transport are presented. The structure, shape, and injection height of the pyroCb simulated for a specific case study are in good agreement with observations. The model results confirm that substantial amounts of smoke are injected into the lower stratosphere. Small-scale mixing processes at the cloud top result in a significant enhancement of smoke injection into the stratosphere. Sensitivity studies show that the release of sensible heat by the fire plays an important role in the dynamics of the pyroCb. Furthermore, the convection is found to be very sensitive to the background meteorological conditions. While the abundance of aerosol particles acting as cloud condensation nuclei (CCN) has a strong influence on the microphysical structure of the pyroCb, the CCN effect on the convective dynamics is rather weak. The release of latent heat dominates the overall energy budget of the pyroCb. Since most of the cloud water originates from moisture entrained from the background atmosphere, the fire-released moisture contributes only marginally to the convection dynamics. Sufficient fire heating, favorable meteorological conditions, and small-scale mixing processes at the cloud top are identified as the key ingredients for troposphere-to-stratosphere transport by pyroCb convection.

Relevance: 100.00%

Abstract:

Extrusion is a process used to form long products of constant cross section and a wide variety of shapes from simple billets. Aluminum alloys are the most commonly processed materials in the extrusion industry, owing to their deformability and a wide field of applications that range from buildings to aerospace and from design to the automotive industry. These diverse applications imply different requirements that can be fulfilled by the wide range of alloys and treatments, from critical structural applications to high-quality surfaces and aesthetic finishes. Whichever is the critical aspect, both depend directly on the microstructure. The extrusion process is moreover marked by high deformations and complex strain gradients, which make the control of microstructure evolution difficult; at present, such control has not yet been fully achieved. Nevertheless, Finite Element modeling has reached maturity and can therefore begin to be used as a tool for the investigation and prediction of microstructure evolution. This thesis analyzes and models the evolution of microstructure throughout the entire extrusion process for 6XXX-series aluminum alloys. The core of the work was the development of specific tests to investigate the microstructure evolution and to validate the model implemented in a commercial FE code. Along with it, two essential activities were carried out for a correct calibration of the model, beyond a simple search for best-fit parameters, thus leading to the understanding and control of both the code and the process. In this direction, activities were also conducted to build critical know-how on the interpretation of microstructure and extrusion phenomena. It is believed, in fact, that analyzing microstructure evolution in isolation from its relevance to the technological aspects of the process would be of little use to industry, as well as ineffective for interpreting the results.

Relevance: 100.00%

Abstract:

The objective of this work is to characterize the genome of chromosome 1 of A. thaliana, a small flowering plant used as a model organism in studies of biology and genetics, on the basis of a recent mathematical model of the genetic code. I analyze and compare different portions of the genome: genes, exons, coding sequences (CDS), introns, long introns, intergenes, untranslated regions (UTR), and regulatory sequences. In order to accomplish the task, I transformed nucleotide sequences into binary sequences based on the definition of three different dichotomic classes. The descriptive analysis of the binary strings indicates the presence of regularities in each portion of the genome considered. In particular, there are remarkable differences between coding sequences (CDS and exons) and non-coding sequences, suggesting that the reading frame is important only for coding sequences and that dichotomic classes can be useful to recognize them. I then assessed the existence of short-range dependence between binary sequences computed on the basis of the different dichotomic classes, using three different measures of dependence: the well-known chi-squared test and two indices derived from the concept of entropy, i.e., Mutual Information (MI) and Sρ, a normalized version of the Bhattacharyya-Hellinger-Matusita distance. The results show a significant short-range dependence structure only for the coding sequences, whose existence hints at an underlying error detection and correction mechanism. Further studies are certainly needed to assess how the information carried by dichotomic classes could discriminate between coding and non-coding sequences and, therefore, contribute to unveiling the role of this mathematical structure in error detection and correction mechanisms. Still, I have shown the potential of the presented approach for understanding the management of genetic information.
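Of the dependence measures listed above, mutual information is the easiest to sketch for binary sequences: it compares the joint symbol distribution against the product of the marginals. A minimal self-contained version (the toy sequences are illustrative, not genomic data):

```python
import math
from collections import Counter

def mutual_information(x, y):
    """Mutual information (in bits) between two equal-length binary sequences:
        MI = sum_{a,b} p(a,b) * log2( p(a,b) / (p(a) * p(b)) )"""
    n = len(x)
    pxy = Counter(zip(x, y))
    px = Counter(x)
    py = Counter(y)
    mi = 0.0
    for (a, b), c in pxy.items():
        p_ab = c / n
        mi += p_ab * math.log2(p_ab / ((px[a] / n) * (py[b] / n)))
    return mi

# A sequence compared with itself carries 1 bit when both symbols are equiprobable.
x = [0, 1] * 50
print(round(mutual_information(x, x), 3))   # -> 1.0
# These two sequences have independent joint statistics, so MI vanishes.
y = [0, 0, 1, 1] * 25
print(round(mutual_information(x, y), 3))   # -> 0.0
```

In the genomic setting, x and y would be binary strings derived from different dichotomic classes over the same nucleotide positions, and a significantly non-zero MI signals short-range dependence.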

Relevance: 100.00%

Abstract:

In this thesis, we extend some ideas of statistical physics to describe the properties of human mobility. Using a database containing GPS measures of individual paths (position, velocity, and covered space at a spatial scale of 2 km or a time scale of 30 s), which covers 2% of the private vehicles in Italy, we determine statistical empirical laws that point out "universal" characteristics of human mobility. By developing simple stochastic models suggesting possible explanations of the empirical observations, we are able to identify the key quantities and cognitive features that rule individuals' mobility. To understand the features of individual dynamics, we have studied different aspects of urban mobility from a physical point of view. We discuss the implications of Benford's law emerging from the distribution of times elapsed between successive trips. We observe how the daily travel-time budget is related to many aspects of the urban environment, and describe how the daily mobility budget is then spent. We link the scaling properties of individual mobility networks to the inhomogeneous average durations of the activities performed, and those of the networks describing people's common use of space to the fractal dimension of the urban territory. We study entropy measures of individual mobility patterns, showing that they carry almost the same information as the related mobility networks, but are also influenced by a hierarchy among the activities performed. We find that Wardrop's principles are violated, as drivers have only incomplete information on the traffic state and therefore rely on knowledge of average travel times. We propose an assimilation model to overcome the intrinsic scattering of GPS data on the street network, permitting the real-time reconstruction of the traffic state at an urban scale.
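Benford's law, mentioned above for the times elapsed between successive trips, predicts that the leading digit d occurs with probability log10(1 + 1/d). A minimal sketch of how an empirical sample can be compared with that prediction (the sample values are hypothetical, not data from the thesis):

```python
import math

def benford_expected(d):
    """Benford probability that the leading digit is d (d = 1..9)."""
    return math.log10(1 + 1 / d)

def leading_digit(x):
    """First decimal digit of a positive integer."""
    while x >= 10:
        x //= 10
    return int(x)

# Hypothetical inter-trip rest times in minutes; compare observed
# leading-digit frequencies with the Benford prediction.
times = [12, 35, 18, 110, 23, 14, 160, 19, 42, 13, 27, 15, 95, 11, 38]
counts = {d: 0 for d in range(1, 10)}
for t in times:
    counts[leading_digit(t)] += 1
for d in (1, 2, 3):
    print(d, round(counts[d] / len(times), 3), round(benford_expected(d), 3))
```

Under Benford's law roughly 30% of values should start with 1 and only about 4.6% with 9, a strongly non-uniform pattern that broad, scale-spanning distributions such as rest times tend to follow.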

Relevance: 100.00%

Abstract:

In the present dissertation we consider Feynman integrals in the framework of dimensional regularization. As all such integrals can be expressed in terms of scalar integrals, we focus on this latter kind of integral in its Feynman parametric representation and study its mathematical properties, applying, in part, graph theory, algebraic geometry, and number theory. The three main topics are the graph-theoretic properties of the Symanzik polynomials, the termination of the sector decomposition algorithm of Binoth and Heinrich, and the arithmetic nature of the Laurent coefficients of Feynman integrals.

The integrand of an arbitrary dimensionally regularized scalar Feynman integral can be expressed in terms of the two well-known Symanzik polynomials. We give a detailed review of the graph-theoretic properties of these polynomials. By the matrix-tree theorem, the first of these polynomials can be constructed from the determinant of a minor of the generic Laplacian matrix of a graph. By use of a generalization of this theorem, the all-minors matrix-tree theorem, we derive a new relation which furthermore relates the second Symanzik polynomial to the Laplacian matrix of a graph.

Starting from the Feynman parametric representation, the sector decomposition algorithm of Binoth and Heinrich serves for the numerical evaluation of the Laurent coefficients of an arbitrary Feynman integral in the Euclidean momentum region. This widely used algorithm contains an iterated step, consisting of an appropriate decomposition of the domain of integration and the deformation of the resulting pieces. This procedure leads to a disentanglement of the overlapping singularities of the integral. By giving a counter-example we show that this iterative step of the algorithm does not terminate in every possible case. We solve this problem by presenting an appropriate extension of the algorithm, which is guaranteed to terminate.
This is achieved by mapping the iterative step to an abstract combinatorial problem known as Hironaka's polyhedra game. We present a publicly available implementation of the improved algorithm. Furthermore, we explain the relationship of the sector decomposition method with the resolution of singularities of a variety, given by a sequence of blow-ups, in algebraic geometry.

Motivated by the connection between Feynman integrals and topics of algebraic geometry, we consider the set of periods as defined by Kontsevich and Zagier. This special set of numbers contains the set of multiple zeta values and certain values of polylogarithms, which in turn are known to appear in results for the Laurent coefficients of certain dimensionally regularized Feynman integrals. By use of the extended sector decomposition algorithm, we prove a theorem which implies that the Laurent coefficients of an arbitrary Feynman integral are periods if the masses and kinematical invariants take values in the Euclidean momentum region. The statement is formulated for an even more general class of integrals, allowing for an arbitrary number of polynomials in the integrand.
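The matrix-tree construction of the first Symanzik polynomial can be checked numerically: with edge weights 1/x_e on the graph Laplacian, U(x) equals the product of all Feynman parameters times the determinant of the minor obtained by deleting one row and column. A minimal sketch (illustrative, not the dissertation's code):

```python
def first_symanzik(n_vertices, edges, x):
    """First Symanzik polynomial U evaluated at Feynman parameters x, via the
    matrix-tree theorem: U = (prod_e x_e) * det( minor of the Laplacian with
    edge weights 1/x_e ), deleting the row/column of vertex 0."""
    # Build the weighted graph Laplacian.
    L = [[0.0] * n_vertices for _ in range(n_vertices)]
    for (u, v), xe in zip(edges, x):
        w = 1.0 / xe
        L[u][u] += w
        L[v][v] += w
        L[u][v] -= w
        L[v][u] -= w
    # Delete row/column 0 and compute the determinant by Gaussian elimination.
    M = [row[1:] for row in L[1:]]
    det, n = 1.0, len(M)
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))  # partial pivoting
        if M[p][i] == 0.0:
            return 0.0
        if p != i:
            M[i], M[p] = M[p], M[i]
            det = -det
        det *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
    prod = 1.0
    for xe in x:
        prod *= xe
    return prod * det

# One-loop triangle: 3 vertices, 3 edges; U = x1 + x2 + x3.
print(round(first_symanzik(3, [(0, 1), (1, 2), (2, 0)], [2.0, 3.0, 5.0]), 6))  # -> 10.0
```

For the one-loop bubble (two edges between two vertices) the same routine reproduces U = x1 + x2, in line with the spanning-tree definition of U as a sum over products of the parameters of the deleted edges.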

Relevance: 100.00%

Abstract:

Atmospheric aerosol particles serving as cloud condensation nuclei (CCN) are key elements of the hydrological cycle and climate. Knowledge of the spatial and temporal distribution of CCN in the atmosphere is essential to understand and describe the effects of aerosols in meteorological models. In this study, CCN properties were measured in polluted and pristine air of different continental regions, and the results were parameterized for efficient prediction of CCN concentrations. The continuous-flow CCN counter used for size-resolved measurements of CCN efficiency spectra (activation curves) was calibrated with ammonium sulfate and sodium chloride aerosols for a wide range of water vapor supersaturations (S=0.068% to 1.27%). A comprehensive uncertainty analysis showed that the instrument calibration depends strongly on the applied particle generation techniques, Köhler model calculations, and water activity parameterizations (relative deviations in S up to 25%). Laboratory experiments and a comparison with other CCN instruments confirmed the high accuracy and precision of the calibration and measurement procedures developed and applied in this study. The mean CCN number concentrations (NCCN,S) observed in polluted mega-city air and biomass burning smoke (Beijing and Pearl River Delta, China) ranged from 1000 cm−3 at S=0.068% to 16 000 cm−3 at S=1.27%, which is about two orders of magnitude higher than in pristine air at remote continental sites (Swiss Alps, Amazonian rainforest). Effective average hygroscopicity parameters, κ, describing the influence of chemical composition on the CCN activity of aerosol particles were derived from the measurement data. They varied in the range of 0.3±0.2, were size-dependent, and could be parameterized as a function of organic and inorganic aerosol mass fraction.
At low S (≤0.27%), substantial portions of externally mixed CCN-inactive particles with much lower hygroscopicity were observed in polluted air (fresh soot particles with κ≈0.01). Thus, the aerosol particle mixing state needs to be known for highly accurate predictions of NCCN,S. Nevertheless, the observed CCN number concentrations could be efficiently approximated using measured aerosol particle number size distributions and a simple κ-Köhler model with a single proxy for the effective average particle hygroscopicity. The relative deviations between observations and model predictions were on average less than 20% when a constant average value of κ=0.3 was used in conjunction with variable size distribution data. With a constant average size distribution, however, the deviations increased up to 100% and more. The measurement and model results demonstrate that the aerosol particle number and size are the major predictors for the variability of the CCN concentration in continental boundary layer air, followed by particle composition and hygroscopicity as relatively minor modulators. Depending on the required and applicable level of detail, the measurement results and parameterizations presented in this study can be directly implemented in detailed process models as well as in large-scale atmospheric and climate models for efficient description of the CCN activity of atmospheric aerosols.
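The κ-Köhler description used above links the hygroscopicity parameter, the dry size, and the critical supersaturation of a particle. A minimal sketch of that relation (standard κ-Köhler approximation with water constants near 298 K; illustrative, not the study's data-processing code):

```python
import math

def critical_supersaturation(d_dry, kappa, T=298.15):
    """Critical supersaturation (%) of a particle with dry diameter d_dry (m)
    and hygroscopicity parameter kappa, from the kappa-Koehler approximation:
        s_c = exp( sqrt( 4*A^3 / (27 * kappa * d_dry^3) ) ) - 1,
    where A = 4*sigma*Mw / (R*T*rho_w) is the Kelvin parameter."""
    sigma = 0.072      # surface tension of water, J/m^2
    Mw = 0.018015      # molar mass of water, kg/mol
    rho_w = 997.0      # density of water, kg/m^3
    R = 8.314          # gas constant, J/(mol K)
    A = 4.0 * sigma * Mw / (R * T * rho_w)
    sc = math.exp(math.sqrt(4.0 * A ** 3 / (27.0 * kappa * d_dry ** 3))) - 1.0
    return 100.0 * sc

# A 100 nm particle with kappa = 0.3 activates at a few tenths of a percent
# supersaturation; smaller or less hygroscopic particles need higher S.
print(round(critical_supersaturation(100e-9, 0.3), 3))
```

Inverting this relation at a fixed instrument supersaturation gives the activation diameter, which is how size-resolved CCN efficiency spectra are converted into effective κ values.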

Relevance: 100.00%

Abstract:

The aim of this thesis is to obtain a better understanding of the mechanical behavior of the active Alto Tiberina normal fault (ATF). Integrating geological, geodetic, and seismological data, we perform 2D and 3D quasi-static and dynamic mechanical models to simulate the interseismic phase and the rupture dynamics of the ATF. The effects of the ATF locking depth, synthetic and antithetic fault activity, lithology, and realistic fault geometries are taken into account. The 2D and 3D quasi-static model results suggest that the deformation pattern inferred from GPS data is consistent with a very compliant ATF zone (from 5 to 15 km) and with Gubbio fault activity. The presence of the ATF compliant zone is a first-order condition to redistribute the stress in the Umbria-Marche region; the stress bipartition between the hanging wall (high values) and the footwall (low values) inferred from the ATF zone activity could explain the microseismicity rates, which are higher in the hanging wall than in the footwall. The interseismic stress build-up is mainly located along the Gubbio fault zone and near ATF patches with higher dip (30°). The 3D rupture dynamic models demonstrate that the expected magnitude of an event simulated on the ATF can decrease if the fault plane roughness is taken into account.

Relevance: 100.00%

Abstract:

The goal of this thesis was the study of the cement-bone interface in the tibial component of a cemented total knee prosthesis. Specimens retrieved after in vivo service show that bone resorption occurs in the interdigitated region between bone and cement. A stress shielding effect was investigated as a possible cause of this bone resorption. Stress shielding occurs when bone is loaded less than physiologically and therefore starts remodeling according to the new loading conditions. µCT images were used to obtain 3D models of the bone and cement structure, and Finite Element Analysis was used to simulate different kinds of loads. Resorption was also simulated by performing erosion operations on the interdigitated bone region. In total, four models were simulated: bone (trabecular), bone with cement, and two models of bone with cement after progressive erosions of the bone. Compression, tension, and shear tests were simulated for each model in displacement control up to 2% strain. The results show how the principal strain and the von Mises stress decrease after adding the cement to the structure and after the erosion operations. These results show that a stress shielding effect does occur and increases once resorption starts.

Relevance: 100.00%

Abstract:

This is a research project for the University of Bologna, within the civil engineering Laurea Magistrale course at UNIBO. Its main purpose is to promote another way of explaining, analyzing, and presenting some civil engineering topics to students worldwide through theory, modeling, and photos. The basic idea is divided into three steps. The first is to present and analyze the theoretical part: a detailed analysis of the theory combined with theorems, explanations, examples, and exercises covers this step. In the second, a model makes clear all the parts discussed in the theory by showing how structures work or fail; the modeling can present the behavior of many elements, at scale, that we use in real structures. After these two steps, an exhibition of photos from the real world, with comments, gives engineers the chance to observe all this theoretical and laboratory material in many different cases. For example, many civil engineers in the world may know about wind pressure on structures, but many of them have never seen the extraordinary behavior of the Tacoma Narrows Bridge 'dancing with the wind'. What I have done is not a book, but a study of how this three-step presentation of some mechanical concepts could be helpful. This approach is different and new, and in my opinion it is important because it helps students go deeper into the science and also provides new ideas and inspiration. This way of teaching can be used in all lessons, especially technical ones. I hope that one day all textbooks will adopt this kind of presentation.

Relevance: 100.00%

Abstract:

Urban centers significantly contribute to anthropogenic air pollution, although they cover only a minor fraction of the Earth's land surface. Since the worldwide degree of urbanization is steadily increasing, the anthropogenic contribution to air pollution from urban centers is expected to become more substantial in future air quality assessments. The main objective of this thesis was to obtain a more profound insight into the dispersion and the deposition of aerosol particles from 46 individual major population centers (MPCs), as well as their regional and global influence on the atmospheric distribution of several aerosol types. For the first time, this was assessed in one model framework, for which the global model EMAC was applied with different representations of aerosol particles. First, in an approach with passive tracers and a setup in which the results depend only on the source location and the size and the solubility of the tracers, several metrics and a regional climate classification were used to quantify the major outflow pathways, both vertically and horizontally, and to compare the balance between pollution export away from and pollution build-up around the source points. Then, in a more comprehensive approach, the anthropogenic emissions of key trace species were changed at the MPC locations to determine the cumulative impact of the MPC emissions on the atmospheric aerosol burdens of black carbon, particulate organic matter, sulfate, and nitrate. Ten different mono-modal passive aerosol tracers were continuously released at the same constant rate at each emission point. The results clearly showed that on average about five times more mass is advected quasi-horizontally at low levels than exported into the upper troposphere. The strength of the low-level export is mainly determined by the location of the source, while the vertical transport is mainly governed by the lifting potential and the solubility of the tracers.
Similar to insoluble gas phase tracers, the low-level export of aerosol tracers is strongest at middle and high latitudes, while the regions of strongest vertical export differ between aerosol (temperate winter dry) and gas phase (tropics) tracers. The emitted mass fraction that is kept around MPCs is largest in regions where aerosol tracers have short lifetimes; this mass is also critical for assessing the impact on humans. However, the number of people who live in a strongly polluted region around urban centers depends more on the population density than on the size of the area which is affected by strong air pollution. Another major result was that fine aerosol particles (diameters smaller than 2.5 micrometer) from MPCs undergo substantial long-range transport, with about half of the emitted mass being deposited beyond 1000 km away from the source. In contrast to this diluted remote deposition, there are areas around the MPCs which experience high deposition rates, especially in regions which are frequently affected by heavy precipitation or are situated in poorly ventilated locations. Moreover, most MPC aerosol emissions are removed over land surfaces. In particular, forests experience more deposition from MPC pollutants than other land ecosystems. In addition, it was found that the generic treatment of aerosols has no substantial influence on the major conclusions drawn in this thesis. Moreover, in the more comprehensive approach, it was found that emissions of black carbon, particulate organic matter, sulfur dioxide, and nitrogen oxides from MPCs influence the atmospheric burden of various aerosol types very differently, with impacts generally being larger for secondary species, sulfate and nitrate, than for primary species, black carbon and particulate organic matter. 
While the changes in the burdens of sulfate, black carbon, and particulate organic matter show an almost linear response to changes in the emission strength, the formation of nitrate was found to be contingent on many more factors, e.g., the abundance of sulfuric acid, than on the strength of the nitrogen oxide emissions alone. The generic tracer experiments were further extended to conduct the first risk assessment of the cumulative risk of contamination from multiple nuclear reactor accidents on the global scale. For this, many factors had to be taken into account: the probability of major accidents, the cumulative deposition field of the radionuclide cesium-137, and a threshold value that defines contamination. By collecting the necessary data and after accounting for uncertainties, it was found that the risk is highest in western Europe, the eastern US, and Japan, where on average contamination by major accidents is expected about every 50 years.

Relevance: 100.00%

Abstract:

Kinematics is a fundamental tool to infer the dynamical structure of galaxies and to understand their formation and evolution. Spectroscopic observations of gas emission lines are often used to derive rotation curves and velocity dispersions. It is, however, difficult to disentangle these two quantities in low spatial-resolution data because of beam smearing. In this thesis, we present 3D-Barolo, a new software package to derive the gas kinematics of disk galaxies from emission-line data-cubes. The code builds tilted-ring models in the 3D observational space and compares them with the actual data-cubes. 3D-Barolo works with data over a wide range of spatial resolutions without being affected by instrumental biases. We use 3D-Barolo to derive rotation curves and velocity dispersions of several galaxies in both the local and the high-redshift Universe. We run our code on HI observations of nearby galaxies and compare our results with traditional 2D approaches. We show that a 3D approach to the derivation of the gas kinematics is to be preferred to a 2D approach whenever a galaxy is resolved with fewer than about 20 elements across the disk. We moreover analyze a sample of galaxies at z~1, observed in the H-alpha line with the KMOS/VLT spectrograph. Our 3D modeling reveals that the kinematics of these high-z systems is comparable to that of local disk galaxies, with steeply rising rotation curves followed by a flat part and H-alpha velocity dispersions of 15-40 km/s over the whole disks. This evidence suggests that disk galaxies were already fully settled about 7-8 billion years ago. In summary, 3D-Barolo is a powerful and robust tool to separate physical and instrumental effects and to derive reliable kinematics. The analysis of large samples of galaxies at different redshifts with 3D-Barolo will provide new insights into how galaxies assemble and evolve throughout cosmic time.
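At the heart of any tilted-ring model is the projection of the ring's rotation onto the line of sight. A minimal sketch of that projection (the standard geometric relation, not 3D-Barolo's actual implementation, which additionally convolves model cubes with the instrumental response):

```python
import math

def v_los(v_sys, v_rot, incl_deg, theta_deg):
    """Line-of-sight velocity (km/s) predicted by a tilted ring at azimuthal
    angle theta, measured in the disk plane from the major axis:
        v_los = v_sys + v_rot * sin(i) * cos(theta)"""
    i = math.radians(incl_deg)
    theta = math.radians(theta_deg)
    return v_sys + v_rot * math.sin(i) * math.cos(theta)

# Along the major axis (theta = 0) the full projected rotation is seen;
# along the minor axis (theta = 90) only the systemic velocity remains.
print(round(v_los(1000.0, 200.0, 60.0, 0.0), 1))   # -> 1173.2
print(round(v_los(1000.0, 200.0, 60.0, 90.0), 1))  # -> 1000.0
```

A 3D code evaluates this relation ring by ring, builds a model data-cube with the adopted velocity dispersion, and compares it with the observed cube, which is what makes it robust against beam smearing at low resolution.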