912 results for Alternative fluids. Steam injection. Simulation. IOR. Modeling of reservoirs
Abstract:
Isobaric vapor–liquid equilibria at p = 101.32 kPa (iso-p VLE) and the mixing properties h^E and v^E are determined for a set of twelve binary solutions HCOOC_uH_{2u+1} (1) + C_nH_{2n+2} (2), with u = 1–4 and n = 7–9. The iso-p VLE data show deviations from ideal behavior that grow as u decreases and n increases. The systems with [u = 2, 3; n = 7] and [u = 4; n = 7, 8] present a minimum-boiling azeotrope. The nonideality is also reflected in endothermic mixing, h^E > 0, and expansive effects, v^E > 0, for all the binaries, both of which increase regularly with n.
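As general background on how such deviations from ideality are usually quantified in low-pressure VLE work (standard thermodynamics, not stated explicitly in this abstract), the activity coefficients follow from the modified Raoult's law,

$$\gamma_i = \frac{y_i\,p}{x_i\,p_i^{\mathrm{sat}}(T)}, \qquad i = 1,2,$$

with $\gamma_i > 1$ corresponding to the positive deviations reported here; at an azeotrope $x_i = y_i$, i.e. the relative volatility $\alpha_{12} = \gamma_1 p_1^{\mathrm{sat}}/(\gamma_2 p_2^{\mathrm{sat}})$ equals 1.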
Abstract:
Polycyclic aromatic hydrocarbons are chemicals produced by both human activities and natural sources, and they have been present in the biosphere for millions of years. For this reason, microorganisms are likely to have developed, over that history, the capacity to metabolize them under different electron acceptors and redox conditions. A deep understanding of these natural attenuation processes and of the microbial degradation pathways is of major importance for the cleanup of contaminated areas. Anaerobic degradation of aromatic hydrocarbons is often presumed to be slow and of minor ecological significance compared with aerobic processes; however, anaerobic bioremediation may play a key role in the transformation of organic pollutants when oxygen demand exceeds supply in natural environments. Under such conditions, anoxic and anaerobic degradation mediated by denitrifying or sulphate-reducing bacteria can become a key pathway for cleaning up contaminated land. At present, little is known about anaerobic bioremediation processes. Anaerobic biodegradation techniques may be very attractive in the future, because they make it possible to treat contaminated soil directly in its natural state, reducing the costs of oxygen supply, which are usually the largest, as well as those of soil excavation and transport to appropriate disposal sites. The aim of this dissertation is to characterize the conditions favouring the anaerobic degradation of polycyclic aromatic hydrocarbons. Special focus is given to the assessment of the efficiency of the various AEAs, the characterization of degradation performance and rates under different redox conditions, and toxicity monitoring. A comparison between aerobic and anaerobic degradation of the same contaminated material is also made, to estimate the different biodegradation times.
Abstract:
For many years, RF and analog integrated circuits have been developed mainly in bipolar and compound-semiconductor technologies because of their better performance. In recent years, advances in CMOS technology have made it possible to build analog and RF circuits in CMOS as well, but using CMOS instead of bipolar technology in RF applications has brought more issues in terms of noise. Noise cannot be completely eliminated; it ultimately limits the accuracy of measurements and sets a lower limit on how small a signal can be detected and processed in an electronic circuit. One kind of noise that affects MOS transistors much more than bipolar ones is low-frequency noise. In MOSFETs, low-frequency noise is mainly of two kinds: flicker (1/f) noise and random telegraph signal (RTS) noise. The objective of this thesis is to characterize and model low-frequency noise by studying RTS and flicker noise under both constant and switched bias conditions. The effect of different biasing schemes on both RTS and flicker noise, in the time and frequency domains, has been investigated.
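As a minimal illustration of the flicker-noise behaviour discussed in this abstract (a generic sketch, not taken from the thesis), the following Python snippet synthesizes a noise trace whose power spectral density falls as 1/f by spectrally shaping white Gaussian noise:

```python
import numpy as np

def one_over_f_noise(n_samples, fs, rng=None):
    """Synthesize a 1/f (flicker-like) noise trace by spectral shaping.

    White Gaussian noise is transformed to the frequency domain, each
    component is scaled by 1/sqrt(f) so that the PSD falls as 1/f, and
    the result is transformed back to the time domain.
    """
    rng = rng or np.random.default_rng()
    white = rng.standard_normal(n_samples)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    freqs[0] = freqs[1]                # avoid division by zero at DC
    spectrum *= 1.0 / np.sqrt(freqs)   # |X(f)|^2 ~ 1/f
    return np.fft.irfft(spectrum, n=n_samples)

# Example: 1 s of flicker-like noise sampled at 100 kHz
trace = one_over_f_noise(100_000, fs=100e3)
```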
Abstract:
The objective of this dissertation is to develop and test a predictive model of the passive kinematics of human joints based on the energy minimization principle. To pursue this goal, the tibio-talar joint is chosen as the reference joint, because of the reduced number of bones involved and its simplicity compared with other synovial joints such as the knee or the wrist. Starting from the knowledge of the articular surface shapes, the spatial trajectory of passive motion is obtained as the envelope of joint configurations that maximize the congruence of the surfaces. An increase in joint congruence corresponds to an improved capability of distributing an applied load, allowing the joint to attain greater strength with less material. Thus, joint congruence maximization is a simple geometric way to capture the idea of joint energy minimization. The results are validated against trajectories measured in vitro. The preliminary comparison provides strong support for the predictions of the theoretical model.
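A minimal sketch of the congruence-maximization idea (the incongruence cost, the helper functions and the toy gap model below are hypothetical placeholders, not the dissertation's actual formulation): for each prescribed flexion step, the remaining pose parameters are chosen to minimize an incongruence measure, here the variance of the gap between the articular surfaces.

```python
import numpy as np
from scipy.optimize import minimize

def passive_trajectory(flexion_angles, gap_field, n_pose=5):
    """Envelope of maximum-congruence joint configurations.

    For each flexion angle, the free pose parameters are chosen so as to
    minimize the variance of the inter-surface gap (a more uniform gap is
    taken as a more congruent, better load-distributing configuration).
    `gap_field(pose, flexion)` must return the gap sampled at many points.
    """
    trajectory, guess = [], np.zeros(n_pose)
    for flexion in flexion_angles:
        res = minimize(lambda pose: np.var(gap_field(pose, flexion)),
                       guess, method="Nelder-Mead")
        guess = res.x                       # warm-start the next step
        trajectory.append((flexion, res.x))
    return trajectory

# Toy gap model (purely illustrative): a quadratic bowl whose minimum
# drifts with flexion, standing in for real articular-surface geometry.
def toy_gap(pose, flexion, pts=np.linspace(-1, 1, 50)):
    return 1.0 + (pts - 0.1 * flexion) ** 2 * (1.0 + np.sum(pose ** 2))

path = passive_trajectory(np.linspace(0.0, 0.5, 6), toy_gap)
```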
Abstract:
The object of this thesis has been the centrifuge modelling of earth-reinforced retaining walls with modular block facing, in order to investigate the influence of design parameters, such as the length and vertical spacing of the reinforcement, on the behaviour of the structure. To this end, 11 models were tested, each with a different reinforcement length or spacing. Each model was constructed and then placed in the centrifuge, where the gravitational acceleration was raised artificially up to 35 g, reproducing the soil behaviour of a 5-metre-high wall. Vertical and horizontal displacements were recorded by means of a special device that tracked the deformations of the structure along its longitudinal cross section, essentially drawing its deformed shape. As expected, the results confirmed the reinforcement parameters to be the governing factors in the behaviour of earth-reinforced structures, since increases in length and spacing improved structural stability. However, the length was found to be the leading parameter, reducing facing deformations by up to a factor of five, with the spacing playing an important role especially in unstable configurations. When failure occurred, the failure surface was characterised by the same shape (circular) and depth regardless of the reinforcement configuration. Furthermore, the results confirmed the over-conservatism of design codes, since models with reinforcement layers 0.4H long showed almost negligible deformations. Although the experiments performed were consistent and yielded replicable results, further numerical modelling may allow other issues to be investigated, such as the influence of reinforcement stiffness, facing stiffness and varying backfills.
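A brief worked example of the centrifuge scaling used here (standard 1/N scaling, stated for context): testing at N = 35 g means a 1/35-scale model reproduces prototype stresses, so the model wall height is

$$h_{\text{model}} = \frac{h_{\text{prototype}}}{N} = \frac{5\ \text{m}}{35} \approx 0.14\ \text{m},$$

while the self-weight stress at homologous points, $\sigma = \rho\,(N g)\,(z/N) = \rho g z$, matches that of the full-scale 5 m wall.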
Abstract:
We use data from about 700 GPS stations in the Euro-Mediterranean region to investigate the present-day behavior of the Calabrian subduction zone within the Mediterranean-scale plate kinematics and to perform local-scale studies of the strain accumulation on active structures. We focus attention on the Messina Straits and Crati Valley faults, where GPS data show extensional velocity gradients of ∼3 mm/yr and ∼2 mm/yr, respectively. We use a dislocation model and a non-linear constrained optimization algorithm to invert for fault geometric parameters and slip-rates, and we evaluate the associated uncertainties adopting a bootstrap approach. Our analysis suggests the presence of two partially locked normal faults. To investigate the impact of the elastic strain contributions from other nearby active faults on the observed velocity gradient, we use a block modeling approach. Our models show that the inferred slip-rates on the two analyzed structures are strongly affected by the assumed locking width of the Calabrian subduction thrust. In order to frame the observed local deformation features within the present-day central Mediterranean kinematics, we carry out a statistical analysis testing the independent motion (with respect to the African and Eurasian plates) of the Adriatic, Calabrian and Sicilian blocks. Our preferred model confirms a microplate-like behaviour for all the investigated blocks. Within these kinematic boundary conditions we further investigate the Calabrian slab interface geometry using a combined approach of block modeling and the reduced chi-square (χ²ν) statistic. Almost no information is obtained using only the horizontal GPS velocities, which prove to be an insufficient dataset for a multi-parametric inversion approach. To constrain the slab geometry more tightly, we estimate the predicted vertical velocities by performing suites of forward models of elastic dislocations, varying the fault locking depth. Comparison with the observed field suggests a maximum resolved locking depth of 25 km.
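A minimal sketch of the bootstrap uncertainty estimate mentioned above (the inversion routine and data names are hypothetical placeholders, not the thesis code): the GPS velocities are resampled with replacement, the fault-parameter inversion is repeated on each resample, and the spread of the estimates gives the parameter uncertainties.

```python
import numpy as np

def bootstrap_uncertainty(velocities, invert, n_boot=1000, seed=0):
    """Bootstrap spread of inverted fault parameters.

    `velocities` is an (N, k) array of GPS velocity observations and
    `invert(sample)` is any routine returning a parameter vector
    (e.g. fault slip-rate and geometry) for one resampled dataset.
    """
    rng = np.random.default_rng(seed)
    n = len(velocities)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)        # resample with replacement
        estimates.append(invert(velocities[idx]))
    estimates = np.asarray(estimates)
    return estimates.mean(axis=0), estimates.std(axis=0)

# Toy usage: the "inversion" is reduced to a mean, just to show the mechanics.
toy_velocities = np.random.default_rng(1).normal(3.0, 0.5, size=(700, 1))
mean_sliprate, sigma_sliprate = bootstrap_uncertainty(
    toy_velocities, invert=lambda v: v.mean(axis=0))
```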
Abstract:
This thesis tackles the problem of the automated detection of the atmospheric boundary layer (BL) height, h, from aerosol lidar/ceilometer observations. A new method, the Bayesian Selective Method (BSM), is presented. It implements a Bayesian statistical inference procedure which combines, in a statistically optimal way, different sources of information. First, atmospheric stratification boundaries are located from discontinuities in the ceilometer back-scattered signal. The BSM then identifies the discontinuity edge that has the highest probability of effectively marking the BL height. Information from contemporaneous physical boundary-layer model simulations and from a climatological dataset of BL height evolution is combined in the assimilation framework to assist this choice. The BSM algorithm has been tested on four months of continuous ceilometer measurements collected during the BASE:ALFA project and is shown to realistically diagnose the BL depth evolution in many different weather conditions. The BASE:ALFA dataset is then used to investigate the boundary-layer structure in stable conditions. Functions from the Obukhov similarity theory are used as regression curves to fit observed velocity and temperature profiles in the lower half of the stable boundary layer. The surface fluxes of heat and momentum are the best-fit parameters in this exercise and are compared with those measured by a sonic anemometer. The comparison shows remarkable discrepancies, more evident in cases for which the bulk Richardson number turns out to be quite large. This analysis supports earlier results indicating that surface turbulent fluxes are not the appropriate scaling parameters for profiles of mean quantities in very stable conditions. One practical consequence is that boundary-layer height diagnostic formulations which rely mainly on surface fluxes disagree with what is obtained by inspecting co-located radiosounding profiles.
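A schematic sketch of the Bayesian selection idea described above (the scoring functions and parameters are illustrative placeholders, not the actual BSM): each candidate edge detected in the backscatter profile is scored by a prior built from the model or climatological BL height and by a likelihood built from the strength of the local gradient, and the edge with the highest posterior is selected.

```python
import numpy as np

def select_bl_height(candidate_heights, gradient_strength,
                     h_prior, sigma_prior):
    """Pick the candidate edge with the highest (unnormalized) posterior.

    candidate_heights : heights of backscatter discontinuities [m]
    gradient_strength : magnitude of the backscatter gradient at each edge,
                        used here as a crude likelihood weight
    h_prior, sigma_prior : expected BL height and its spread, e.g. from a
                        boundary-layer model run or a climatology
    """
    candidate_heights = np.asarray(candidate_heights, dtype=float)
    gradient_strength = np.asarray(gradient_strength, dtype=float)

    # Gaussian prior centred on the model/climatological height
    prior = np.exp(-0.5 * ((candidate_heights - h_prior) / sigma_prior) ** 2)
    # Likelihood proportional to the relative strength of each edge
    likelihood = gradient_strength / gradient_strength.sum()

    posterior = prior * likelihood
    return candidate_heights[np.argmax(posterior)]

# Example: three detected edges; the model expects h ~ 800 m
h = select_bl_height([250.0, 820.0, 1900.0], [0.9, 0.7, 0.4],
                     h_prior=800.0, sigma_prior=300.0)
```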
Abstract:
From the perspective of a new-generation opto-electronic technology based on organic semiconductors, a major objective is to achieve a deep and detailed knowledge of the structure-property relationships, in order to optimize the electronic, optical, and charge transport properties by tuning the chemical-physical characteristics of the compounds. The purpose of this dissertation is to contribute to such understanding through suitable theoretical and computational studies. Specifically, the structural, electronic, optical, and charge transport characteristics of several promising, recently synthesized organic materials are investigated by means of an integrated approach encompassing quantum-chemical calculations, molecular dynamics and kinetic Monte Carlo simulations. Particular care is devoted to the rationalization of optical and charge transport properties in terms of both intra- and intermolecular features. Moreover, a considerable part of this project involves the development of an in-house set of procedures and software components required to assist the modeling of charge transport properties in the framework of the non-adiabatic hopping mechanism applied to organic crystalline materials. In the first part of my investigations, I mainly discuss the optical, electronic, and structural properties of several core-extended rylene derivatives, which can be regarded as model compounds for graphene nanoribbons. Two families have been studied, consisting of bay-linked perylene bisimide oligomers and N-annulated rylenes. Besides rylene derivatives, my studies also concerned the electronic and spectroscopic properties of tetracene diimides, quinoidal oligothiophenes, and oxygen-doped picene. As an example of device application, I studied the structural characteristics governing the efficiency of resistive molecular memories based on a derivative of benzoquinone. In the second part of my investigations, I concentrate on the charge transport properties of perylene bisimide derivatives. Specifically, a comprehensive study of the structural and thermal effects on the charge transport of several core-twisted chlorinated and fluoro-alkylated perylene bisimide n-type semiconductors is presented.
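To illustrate the non-adiabatic hopping framework mentioned above, here is a generic sketch (standard semiclassical Marcus rates feeding a simple kinetic Monte Carlo loop; the parameter values and two-site example are illustrative, not taken from the thesis):

```python
import numpy as np

HBAR = 6.582e-16   # eV s
KB   = 8.617e-5    # eV / K

def marcus_rate(J, dG, lam, T=300.0):
    """Semiclassical Marcus hopping rate (J, dG, lam in eV; rate in 1/s)."""
    pref = (2.0 * np.pi / HBAR) * J**2 / np.sqrt(4.0 * np.pi * lam * KB * T)
    return pref * np.exp(-(dG + lam) ** 2 / (4.0 * lam * KB * T))

def kmc_hops(rates_from, start=0, n_steps=10_000, seed=0):
    """Kinetic Monte Carlo walk over sites given per-site outgoing rates.

    rates_from[i] is a dict {j: k_ij}; returns visited sites and total time.
    """
    rng = np.random.default_rng(seed)
    site, t = start, 0.0
    visited = [site]
    for _ in range(n_steps):
        targets, ks = zip(*rates_from[site].items())
        ks = np.asarray(ks)
        ktot = ks.sum()
        t += rng.exponential(1.0 / ktot)                   # waiting time
        site = targets[rng.choice(len(ks), p=ks / ktot)]   # pick a hop
        visited.append(site)
    return visited, t

# Toy two-site example with symmetric hops (dG = 0, lambda = 0.2 eV, J = 5 meV)
k = marcus_rate(J=0.005, dG=0.0, lam=0.2)
sites, total_time = kmc_hops({0: {1: k}, 1: {0: k}}, n_steps=1000)
```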
Abstract:
Extrusion is a process used to form long products of constant cross section, in a wide variety of shapes, from simple billets. Aluminum alloys are the materials most processed in the extrusion industry, thanks to their deformability and the wide field of applications, ranging from building to aerospace and from design to the automotive industry. The diverse applications imply different requirements, which can be fulfilled by the wide range of alloys and treatments: from critical structural applications to high-quality surfaces and aesthetic aspects. Whichever is the critical aspect, both depend directly on the microstructure. The extrusion process is, moreover, marked by large deformations and complex strain gradients, which make the control of microstructure evolution difficult and, at present, not yet fully achieved. Nevertheless, finite element modeling has reached a level of maturity such that it can start to be used as a tool for the investigation and prediction of microstructure evolution. This thesis analyzes and models the evolution of the microstructure throughout the entire extrusion process for 6XXX-series aluminum alloys. The core of the work was the development of specific tests to investigate the microstructure evolution and to validate the model implemented in a commercial FE code. Along with it, two essential activities were carried out for a correct calibration of the model, beyond the simple search for contour parameters, thus leading to the understanding and control of both the code and the process. In this direction, work was also done on building critical know-how for the interpretation of microstructure and extrusion phenomena. It is believed, in fact, that analysing the microstructure evolution without regard for its relevance to the technological aspects of the process would be of little use to industry, as well as ineffective for the interpretation of the results.
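For context (a standard relation in hot-deformation microstructure modeling, not a result of this thesis), the deformation conditions driving recrystallization and grain-size evolution during hot extrusion are commonly collapsed into the temperature-compensated strain rate, the Zener-Hollomon parameter

$$Z = \dot{\varepsilon}\,\exp\!\left(\frac{Q}{RT}\right),$$

where $\dot{\varepsilon}$ is the local strain rate, $Q$ the activation energy for deformation, $R$ the gas constant and $T$ the absolute temperature; empirical grain-size laws of the form $d \propto Z^{-m}$ are then calibrated against experiments.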
Abstract:
In this thesis, we extend some ideas of statistical physics to describe the properties of human mobility. Using a database containing GPS measurements of individual paths (position, velocity and distance covered, at a spatial scale of 2 km or a time scale of 30 s), which covers 2% of the private vehicles in Italy, we succeed in determining some empirical statistical laws that point out "universal" characteristics of human mobility. By developing simple stochastic models that suggest possible explanations of the empirical observations, we are able to indicate which key quantities and cognitive features rule individuals' mobility. To understand the features of individual dynamics, we have studied different aspects of urban mobility from a physical point of view. We discuss the implications of Benford's law emerging from the distribution of the times elapsed between successive trips. We observe how the daily travel-time budget is related to many aspects of the urban environment, and describe how the daily mobility budget is then spent. We link the scaling properties of individual mobility networks to the inhomogeneous average durations of the activities that are performed, and those of the networks describing people's common use of space to the fractional dimension of the urban territory. We study entropy measures of individual mobility patterns, showing that they carry almost the same information as the related mobility networks, but are also influenced by a hierarchy among the activities performed. We discover that Wardrop's principles are violated, as drivers have only incomplete information on the traffic state and therefore rely on knowledge of the average travel times. We propose an assimilation model to resolve the intrinsic scattering of GPS data on the street network, permitting the real-time reconstruction of the traffic state at an urban scale.
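As a small illustration of the Benford's-law analysis mentioned above (run on synthetic data, not the thesis dataset), one can compare the leading-digit distribution of inter-trip times against Benford's prediction $P(d) = \log_{10}(1 + 1/d)$:

```python
import numpy as np

def leading_digit(x):
    """First significant digit of each positive value in x."""
    x = np.asarray(x, dtype=float)
    return (x / 10.0 ** np.floor(np.log10(x))).astype(int)

def benford_comparison(times):
    """Empirical vs. Benford frequencies of the leading digit (digits 1-9)."""
    digits = leading_digit(times)
    empirical = np.array([(digits == d).mean() for d in range(1, 10)])
    benford = np.log10(1.0 + 1.0 / np.arange(1, 10))
    return empirical, benford

# Synthetic inter-trip times (minutes) spanning several decades,
# standing in for the elapsed times between successive trips.
rng = np.random.default_rng(0)
times = np.exp(rng.uniform(np.log(1.0), np.log(3000.0), size=100_000))
emp, ben = benford_comparison(times)
```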
Abstract:
The cardiomyocyte is a complex biological system in which many mechanisms interact non-linearly to regulate the coupling between electrical excitation and mechanical contraction. For this reason, the development of mathematical models is fundamental in the field of cardiac electrophysiology, where the use of computational tools has become complementary to classical experimentation. My doctoral research has focused on the development of such models for investigating the regulation of ventricular excitation-contraction coupling at the single-cell level. In particular, the following studies are presented in this thesis: 1) Study of the unexpected deleterious effect of a Na channel blocker on a long QT syndrome type 3 patient. Experimental results were used to tune a Na current model that recapitulates the effect of the mutation and of the treatment, in order to investigate how these influence the human action potential. Our research suggested that the analysis of the clinical phenotype is not sufficient for recommending drugs to patients carrying mutations with undefined electrophysiological properties. 2) Development of a model of L-type Ca channel inactivation in rabbit myocytes that faithfully reproduces the relative roles of voltage- and Ca-dependent inactivation. The model was applied to the analysis of Ca current inactivation kinetics during normal and abnormal repolarization, and it predicts arrhythmogenic activity when Ca-dependent inactivation, the predominant mechanism in physiological conditions, is inhibited. 3) Analysis of the arrhythmogenic consequences of the crosstalk between the β-adrenergic and Ca-calmodulin-dependent protein kinase signaling pathways. The descriptions of the two regulatory mechanisms, both enhanced in heart failure, were integrated into a novel murine action potential model to investigate how they contribute to the development of cardiac arrhythmias. These studies show how mathematical modeling can provide new insights into the mechanisms underlying cardiac excitation-contraction coupling and arrhythmogenesis.
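A minimal sketch of the Hodgkin-Huxley-style formulation that underlies ionic-current models of this kind (a single generic gate with illustrative parameter values; not one of the specific models developed in the thesis): each gating variable relaxes toward a voltage-dependent steady state with a characteristic time constant.

```python
import numpy as np

def gate_steady_state(v, v_half=-30.0, k=6.0):
    """Illustrative Boltzmann steady-state activation curve."""
    return 1.0 / (1.0 + np.exp(-(v - v_half) / k))

def simulate_gate(voltage_of_t, dt=0.01, t_end=50.0, tau=5.0):
    """Integrate dg/dt = (g_inf(V) - g)/tau with forward Euler (ms units)."""
    t = np.arange(0.0, t_end, dt)
    g = np.empty_like(t)
    g[0] = gate_steady_state(voltage_of_t(0.0))
    for i in range(1, len(t)):
        v = voltage_of_t(t[i])
        g[i] = g[i - 1] + dt * (gate_steady_state(v) - g[i - 1]) / tau
    return t, g

# Example: a voltage-clamp step from -80 mV to 0 mV at t = 10 ms,
# showing the gate opening toward its new steady state.
t, g = simulate_gate(lambda t: -80.0 if t < 10.0 else 0.0)
```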
Abstract:
The phenomenon of diffuse scattering has been the subject of numerous studies over recent years, thanks to its relevance to electromagnetic propagation as well as to many other fields of application (remote sensing, optics, physics, etc.), but a complete understanding of this effect is far from being achieved. Indeed, the complexity in studying and characterizing diffusion stems from the myriad cases and effects that can be encountered in a real propagation environment, which suggests the need to treat its contribution probabilistically. Hence the need for engineering-oriented, efficient approaches that combine a rigorous definition of the phenomenon with the simplifications required for practical purposes. In this view, diffuse scattering can be described as the superposition of all those effects that depart from the classical laws of geometrical optics (reflection, refraction and diffraction) and that generate field contributions even at points in space and in directions where, for smooth and homogeneous objects, there should theoretically be no contribution at all. The main effect, in a real propagation environment, is therefore a spatial distribution of the field different from the theoretical case of a smooth, homogeneous surface, together with depolarization effects and a redistribution of energy in the power budget. The complexity of the phenomenon is thus evident, and the objective of this work is to propose new results that allow diffuse scattering to be described better, and also to identify the topics on which to focus attention in future work. First, a literature study was carried out in order to identify the existing models and theories and the points deserving further investigation; at the same time, the methods for characterizing the complex electric permittivity of materials were analysed, in order to evaluate the possibility of deriving the parameters to be used in the simulations with the same measurement setup designed for the study of diffusion. Subsequently, a simulation setup was built using an electromagnetic solver (based on the finite-difference time-domain method), which made it possible to analyse the three-dimensional scattering due to the irregularities of the material. Finally, a measurement campaign was carried out in an anechoic chamber, with a purpose-built experimental bench, to characterize the scattering phenomenon over a wide band.
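For context (the standard Rayleigh roughness criterion, stated as general background rather than a result of this work), a surface with height standard deviation $\sigma_h$ illuminated at incidence angle $\theta_i$ can be treated as smooth, i.e. contributing negligible diffuse scattering, when

$$\sigma_h < \frac{\lambda}{8\cos\theta_i},$$

where $\lambda$ is the wavelength; surfaces violating this bound redistribute part of the specular power into non-specular directions, which is precisely the diffuse contribution studied here.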