912 results for: Alternative fluids. Steam injection. Simulation. IOR. Modeling of reservoirs
Abstract:
Investigation of large, destructive earthquakes is challenged by their infrequent occurrence and the remote nature of geophysical observations. This thesis sheds light on the source processes of large earthquakes from two perspectives: robust and quantitative observational constraints through Bayesian inference for earthquake source models, and physical insights on the interconnections of seismic and aseismic fault behavior from elastodynamic modeling of earthquake ruptures and aseismic processes.
To constrain the shallow deformation during megathrust events, we develop semi-analytical and numerical Bayesian approaches to explore the maximum resolution of the tsunami data, with a focus on incorporating the uncertainty in the forward modeling. These methodologies are then applied to invert for the coseismic seafloor displacement field in the 2011 Mw 9.0 Tohoku-Oki earthquake using near-field tsunami waveforms and for the coseismic fault slip models in the 2010 Mw 8.8 Maule earthquake with complementary tsunami and geodetic observations. From posterior estimates of model parameters and their uncertainties, we are able to quantitatively constrain the near-trench profiles of seafloor displacement and fault slip. Similar characteristic patterns emerge during both events, featuring the peak of uplift near the edge of the accretionary wedge with a decay toward the trench axis, with implications for fault failure and tsunamigenic mechanisms of megathrust earthquakes.
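To make the inference concrete, the following is a minimal sketch of the kind of Gaussian Bayesian linear inversion described above, with forward-modeling uncertainty folded into the total data covariance. The operator, dimensions, and covariances are illustrative assumptions, not values from the thesis.

```python
import numpy as np

# Bayesian linear inversion d = G m + e, where the prediction error of the
# forward model is added to the observational noise covariance.
rng = np.random.default_rng(0)
n_data, n_model = 50, 10
G = rng.normal(size=(n_data, n_model))   # forward operator (e.g., tsunami Green's functions)
m_true = rng.normal(size=n_model)
C_obs = 0.10 * np.eye(n_data)            # observational noise covariance
C_fwd = 0.05 * np.eye(n_data)            # forward-modeling (prediction) error covariance
C_d = C_obs + C_fwd                      # total data covariance
d = G @ m_true + rng.multivariate_normal(np.zeros(n_data), C_d)

C_m = np.eye(n_model)                    # Gaussian prior covariance on model parameters
# Standard Gaussian posterior identities for the linear model
C_post = np.linalg.inv(G.T @ np.linalg.solve(C_d, G) + np.linalg.inv(C_m))
m_post = C_post @ G.T @ np.linalg.solve(C_d, d)
print(m_post)                            # posterior mean of model parameters
print(np.sqrt(np.diag(C_post)))          # 1-sigma posterior uncertainties
```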
To understand the behavior of earthquakes at the base of the seismogenic zone on continental strike-slip faults, we simulate the interactions of dynamic earthquake rupture, aseismic slip, and heterogeneity in rate-and-state fault models coupled with shear heating. Our study explains the long-standing enigma of seismic quiescence on major fault segments known to have hosted large earthquakes by deeper penetration of large earthquakes below the seismogenic zone, where mature faults have well-localized creeping extensions. This conclusion is supported by the simulated relationship between seismicity and large earthquakes as well as by observations from recent large events. We also use the modeling to connect the geodetic observables of fault locking with the behavior of seismicity in numerical models, investigating how a combination of interseismic geodetic and seismological estimates could constrain the locked-creeping transition of faults and potentially their co- and post-seismic behavior.
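As a reference for the friction framework mentioned above, here is a minimal sketch of the standard rate-and-state friction law with the aging law for state evolution; the parameter values are generic placeholders, not the thesis's model setup.

```python
import numpy as np

# Rate-and-state friction with the aging law for state evolution.
mu0, a, b = 0.6, 0.010, 0.015   # reference friction; direct and evolution effect amplitudes
V0, Dc = 1e-6, 1e-2             # reference slip rate (m/s), characteristic slip distance (m)

def friction(V, theta):
    """Rate-and-state friction coefficient at slip rate V and state theta."""
    return mu0 + a * np.log(V / V0) + b * np.log(V0 * theta / Dc)

def dtheta_dt(V, theta):
    """Aging law: state grows during stationary contact, decays with slip."""
    return 1.0 - V * theta / Dc

# At steady state (dtheta/dt = 0), theta_ss = Dc / V and
# mu_ss = mu0 + (a - b) * ln(V / V0); a - b < 0 gives velocity weakening,
# the condition associated with unstable, seismogenic slip.
V = 1e-3
print(friction(V, Dc / V))
```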
Abstract:
Oil production in mature areas can be improved by advanced recovery techniques. In particular, steam injection reduces the viscosity of heavy oils, thus improving their flow to surrounding wells. On the other hand, the high temperatures and pressures usually involved in the process may lead to cement cracking, negatively affecting both the mechanical stability and the zonal isolation provided by the cement sheath of the well. The addition of plastic materials to the cement is an alternative to prevent this scenario. Composite slurries consisting of Portland cement and a natural biopolymer were studied. Samples containing different contents of biopolymer dispersed in a Portland cement matrix were prepared and evaluated by mechanical and rheological tests in order to assess their behavior according to API (American Petroleum Institute) guidelines. FEM was also applied to map the stress distribution encountered by the cement at bottomhole. The slurries were prepared according to a factorial experimental design by varying three parameters: cement age, biopolymer content, and water-to-cement ratio. The results revealed that the addition of the biopolymer reduced the volume of free water and the setting time of the slurry. In addition, tensile strength, compressive strength, and toughness improved by 30% in hardened composites compared to plain Portland slurries. FEM results suggested that the stresses developed at bottomhole may be 10 to 100 times higher than the strength of the cement as evaluated in the lab by unconfined mechanical testing. An alternative approach is proposed to adapt the testing methodology used to evaluate the mechanical behavior of oilwell cement slurries by simulating the confined conditions encountered at bottomhole.
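As context for the confinement argument, a common first approximation for the loading on a cement sheath is the Lamé thick-walled-cylinder solution. The sketch below uses illustrative radii and pressures, not the study's FEM setup.

```python
# Lamé solution for radial and hoop stresses in a thick-walled cylinder,
# a first approximation for the stress state in an annular cement sheath.
def lame_stresses(r, r_i, r_o, p_i, p_o):
    """Radial and hoop stress (Pa) at radius r, given inner/outer pressures."""
    A = (p_i * r_i**2 - p_o * r_o**2) / (r_o**2 - r_i**2)
    B = (p_i - p_o) * r_i**2 * r_o**2 / (r_o**2 - r_i**2)
    sigma_r = A - B / r**2       # radial stress (compressive negative convention varies)
    sigma_theta = A + B / r**2   # hoop stress, usually the critical component
    return sigma_r, sigma_theta

# Example: 30 MPa inside the sheath, 20 MPa formation pressure outside (radii in m)
print(lame_stresses(r=0.11, r_i=0.10, r_o=0.13, p_i=30e6, p_o=20e6))
```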
Abstract:
Until the early 1990s, the simulation of fluid flow in oil reservoirs relied almost exclusively on the numerical technique of finite differences. Since then, streamline-based simulation technology has developed considerably, so that nowadays it is used in many cases and can represent the physical mechanisms that influence fluid flow, such as compressibility, capillarity, and gravitational segregation. Streamline-based flow simulation is a tool that can greatly assist waterflood project management, because it provides important information not available through traditional finite-difference simulation and shows, in a direct way, the influence between injector and producer wells. This work presents the application of a methodology published in the literature for optimizing water injection projects to the modeling of a Brazilian Potiguar Basin reservoir with a large number of wells. The methodology adjusts injection well rates over time based on information available through streamline simulation, reducing injection rates in wells of lower efficiency and increasing them in more efficient wells. In the proposed model, the methodology was effective: the optimized alternatives presented higher oil recovery with a lower injected water volume, indicating better efficiency and, consequently, reduced costs. Considering the wide use of water injection in oil fields, the positive outcome of the modeling is important, because it shows a case study in which oil recovery was increased simply through a better distribution of water injection rates.
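The core of the rate-reallocation idea can be sketched as follows: shift water from low-efficiency injectors to high-efficiency ones while conserving the total injected volume. In practice the efficiencies come from streamline-derived injector allocation factors, and the published methodology applies constraints (such as per-well rate limits) not shown here; the numbers below are illustrative.

```python
import numpy as np

# Redistribute a fixed total injection volume in proportion to injector
# efficiency (fraction of injected water that supports oil production).
rates = np.array([500.0, 500.0, 500.0, 500.0])   # current rates, m3/day per injector
eff = np.array([0.7, 0.5, 0.3, 0.2])             # streamline-derived efficiencies (illustrative)

weights = eff / eff.sum()
new_rates = rates.sum() * weights                # same total volume, reallocated
print(new_rates)                                 # efficient wells receive more water
```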
Abstract:
In the Brazilian Northeast there are heavy-oil reservoirs that use steam flooding as a recovery method. This process reduces oil viscosity, increasing its mobility and consequently the oil recovery. Steam injection is a thermal method and can be applied in continuous or cyclic form. Cyclic steam stimulation (CSS) can be repeated several times, with each cycle consisting of three stages: steam injection, soaking time, and a production phase. CSS becomes less efficient as the number of cycles increases. Thus, this work studies the influence of compositional models on cyclic steam injection and the effects of parameters such as injection rate, steam quality, and the temperature of the injected steam, analyzing the influence of the number of pseudocomponents on oil rate, cumulative oil, oil recovery, and simulation time. The analyzed scenarios were compared with the three-phase, three-component fluid model known as black oil. Simulations were carried out using commercial software (CMG) on a homogeneous reservoir with characteristics similar to those found in the Brazilian Northeast. It was observed that increasing the number of components increases the simulation time. As for the analyzed parameters, steam rate and steam quality influence cumulative oil and oil recovery. The number of components had little influence on oil recovery, but it did influence gas production.
Abstract:
Spatio-temporal modelling is an area of increasing importance in which models and methods have often been developed to deal with specific applications. In this study, a spatio-temporal model was used to estimate daily rainfall data. Rainfall records from several weather stations, obtained from the Agritempo system for two climatically homogeneous zones, were used. Rainfall values obtained for two fixed dates (January 1 and May 1, 2012) using the spatio-temporal model were compared with the geostatistical techniques of ordinary kriging and ordinary cokriging with altitude as an auxiliary variable. The spatio-temporal model produced estimates of daily precipitation more than 17% better than kriging and cokriging in the first zone and more than 18% better in the second zone. The spatio-temporal model proved to be a versatile technique, adapting to different seasons and dates.
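For reference, here is a minimal ordinary-kriging sketch of the kind a spatio-temporal estimate would be compared against; the exponential variogram parameters and station data are illustrative, not the Agritempo records.

```python
import numpy as np

def variogram(h, sill=1.0, rng_=50.0, nugget=0.1):
    """Exponential variogram with nugget; gamma(0) = 0 by definition."""
    return np.where(h == 0, 0.0,
                    nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * h / rng_)))

def ordinary_kriging(xy, z, target):
    """Ordinary-kriging estimate at `target` from stations `xy` with values `z`."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))          # kriging system with unbiasedness row/column
    A[:n, :n] = variogram(d)
    A[-1, -1] = 0.0
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(xy - target, axis=-1))
    w = np.linalg.solve(A, b)[:n]        # kriging weights (Lagrange multiplier dropped)
    return w @ z

xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])  # station coords (km)
z = np.array([12.0, 8.0, 15.0, 5.0])                                 # daily rainfall (mm)
print(ordinary_kriging(xy, z, np.array([5.0, 5.0])))
```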
Abstract:
The thermoset epoxy resin EPON 862, coupled with the DETDA hardening agent, is utilized as the polymer matrix component in many graphite (carbon fiber) composites. Because it is difficult to experimentally characterize the interfacial region, computational molecular modeling is a necessary tool for understanding the influence of the interfacial molecular structure on bulk-level material properties. The purpose of this research is to investigate the many possible variables that may influence the interfacial structure and the effect they have on the mechanical behavior of the bulk-level composite. Molecular models are established for the EPON 862-DETDA polymer in the presence of a graphite surface. Material characteristics such as polymer mass density, residual stresses, and molecular potential energy are investigated near the polymer/fiber interface. Because the exact degree of crosslinking in these thermoset systems is not known, many different crosslink densities (degrees of curing) are investigated. It is determined that a region exists near the carbon fiber surface in which the polymer mass density differs from the bulk mass density. These surface effects extend ~10 Å into the polymer from the center of the outermost graphite layer. Early simulations predict polymer residual stress levels to be higher near the graphite surface. It is also seen that the molecular potential energy in polymer atoms decreases with increasing crosslink density. New models are then established to investigate the interface between EPON 862-DETDA polymer and graphene nanoplatelets (GNPs) of various atomic thicknesses. Mechanical properties are extracted from the models using molecular dynamics techniques. These properties are then implemented into micromechanics software that utilizes the generalized method of cells to create representations of macro-scale composites. Micromechanics models are created representing GNP-doped epoxy with a varying number of graphene layers and interfacial polymer crosslink densities. The initial micromechanics results for the GNP-doped epoxy are then taken to represent the matrix component and are re-run through the micromechanics software with the addition of a carbon fiber to simulate a GNP-doped epoxy/carbon fiber composite. The micromechanics results agree well with experimental data and indicate GNPs of 1 to 2 atomic layers to be highly favorable. Lastly, the effect of oxygen bonded to the surface of the GNPs is investigated. Molecular models are created for systems of varying graphene atomic thickness with different amounts of oxygen species attached: hydroxyl groups only, epoxide groups only, and a combination of epoxide and hydroxyl groups. Results show that models of oxidized graphene decrease in both tensile and shear modulus. Attaching only epoxide groups gives the best mechanical properties, though pristine graphene is still favored.
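One of the basic analyses behind the interphase finding is a binned mass-density profile versus distance from the graphite surface. The sketch below uses synthetic coordinates in place of an actual MD trajectory, so the geometry and masses are placeholders only.

```python
import numpy as np

# Mass-density profile of polymer atoms as a function of height above a
# surface: bin atom heights, weight by mass, divide by bin slab volume.
rng = np.random.default_rng(1)
z = rng.uniform(0.0, 60.0, size=5000)      # atom heights above the surface (angstrom)
mass = np.full(5000, 12.0)                 # atomic masses (amu); all carbon here

bins = np.arange(0.0, 61.0, 2.0)           # 2-angstrom slabs
hist, edges = np.histogram(z, bins=bins, weights=mass)
area = 50.0 * 50.0                         # lateral cell area (angstrom^2), assumed
density = hist / (area * np.diff(edges))   # amu per cubic angstrom in each slab
print(density[:5])                         # near-surface bins would reveal an interphase
```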
Abstract:
The objective of this research is to synthesize structural composites with particular areas designed to have custom modulus, strength, and toughness values in order to improve the overall mechanical behavior of the composite. Such composites are defined and referred to as 3D-designer composites. These composites are formed from liquid crystalline polymers and carbon nanotubes. The fabrication process is a variation of the rapid prototyping process, which is a layered, additive-manufacturing approach. Composites formed using this process can be custom designed, by apt modeling methods, for superior performance in advanced applications. The focus of this research is on enhancement of Young's modulus in order to make the final composite stiffer. The strength and toughness of the final composite with respect to various applications are also discussed. We have taken into consideration the mechanical properties of the final composite at different fiber volume contents as well as at different orientations and lengths of the fibers. The orientation of the LC monomers is assumed to be carried out using electric or magnetic fields. A computer program incorporating the Mori-Tanaka modeling scheme was written to generate the stiffness matrix of the final composite; the final properties are then deduced from the stiffness matrix using composite micromechanics. Eshelby's tensor, required to calculate the stiffness tensor in the Mori-Tanaka method, is computed using a numerical scheme that determines its components (Gavazzi and Lagoudas 1990). The numerical integration is carried out with a Gaussian quadrature scheme, also implemented in MATLAB, which provides a rich set of commands and algorithms for evaluating the model across its full parameter range. Graphs are plotted using different combinations of the results and the parameters involved in obtaining them.
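As a simplified illustration of the homogenization step: for spherical, isotropic inclusions Eshelby's tensor has a closed form, so the Mori-Tanaka estimate reduces to scalar formulas (the aligned-fiber case treated in the work requires the numerical Eshelby evaluation of Gavazzi and Lagoudas). The moduli and volume fraction below are illustrative placeholders.

```python
# Mori-Tanaka effective moduli for a matrix with spherical isotropic inclusions.
def mori_tanaka_spherical(Km, Gm, Ki, Gi, c):
    """Effective bulk and shear moduli at inclusion volume fraction c."""
    alpha = 3 * Km / (3 * Km + 4 * Gm)                   # Eshelby dilatational factor
    beta = 6 * (Km + 2 * Gm) / (5 * (3 * Km + 4 * Gm))   # Eshelby deviatoric factor
    K = Km + c * (Ki - Km) / (1 + (1 - c) * alpha * (Ki - Km) / Km)
    G = Gm + c * (Gi - Gm) / (1 + (1 - c) * beta * (Gi - Gm) / Gm)
    return K, G

# Epoxy-like matrix with stiff inclusions at 10% volume fraction (moduli in GPa)
print(mori_tanaka_spherical(Km=4.0, Gm=1.5, Ki=36.0, Gi=17.0, c=0.10))
```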
Abstract:
Implementation of stable aeroelastic models with the ability to capture the complex features of multi-concept smart blades is a prime step in reducing the uncertainties that come along with blade dynamics. Numerical simulations of fluid-structure interaction can thus be used to test realistic scenarios comprising full-scale blades at a reasonably low computational cost. A code combining two advanced numerical models was designed and run on a parallel HPC supercomputer platform. The first model is based on a variation of the dimensional reduction technique proposed by Hodges and Yu, and captures the structural response of heterogeneous composite blades. This technique reduces the geometrical complexity of the heterogeneous blade section to a stiffness matrix for an equivalent beam; the derived 1-D strain energy matrix is equivalent to the actual 3-D strain energy matrix in an asymptotic sense. Because this 1-D matrix allows the blade structure to be modeled accurately as a 1-D finite element problem, it substantially reduces the computational effort, and subsequently the computational cost, required to model the structural dynamics at each step. The second model implements Blade Element Momentum theory. In this approach, all velocities and forces are mapped with orthogonal matrices that capture the large deformations and the effects of rotations when calculating the aerodynamic forces, which ultimately accounts for the complex flexo-torsional deformations. In this thesis, these computational tools developed by MTU's research team were successfully tested for the aeroelastic analysis of wind-turbine blades. The validation is based largely on several experiments performed on the NREL 5-MW blade, which is widely accepted as a benchmark blade in the wind industry. Along with the use of this innovative model, the internal blade structure was also changed to add to the existing benefits of the already advanced numerical models.
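The aerodynamic side can be illustrated with the classic fixed-point iteration of Blade Element Momentum theory for the induction factors of a single blade element. The geometry and thin-airfoil polar below are placeholders, not the NREL 5-MW data, and standard corrections such as tip loss are omitted.

```python
import numpy as np

# BEM iteration for the axial (a) and tangential (ap) induction factors.
B, R, r, c = 3, 63.0, 40.0, 3.0        # blade count, rotor radius, element radius, chord (m)
V, omega = 10.0, 1.0                   # wind speed (m/s), rotor speed (rad/s)
twist = np.deg2rad(5.0)                # local twist angle
sigma = B * c / (2 * np.pi * r)        # local solidity

a, ap = 0.0, 0.0
for _ in range(100):
    phi = np.arctan2(V * (1 - a), omega * r * (1 + ap))   # inflow angle
    alpha = phi - twist                                   # angle of attack
    Cl, Cd = 2 * np.pi * alpha, 0.01                      # thin-airfoil lift, constant drag
    Cn = Cl * np.cos(phi) + Cd * np.sin(phi)              # normal force coefficient
    Ct = Cl * np.sin(phi) - Cd * np.cos(phi)              # tangential force coefficient
    a_new = 1.0 / (4 * np.sin(phi) ** 2 / (sigma * Cn) + 1)
    ap_new = 1.0 / (4 * np.sin(phi) * np.cos(phi) / (sigma * Ct) - 1)
    if abs(a_new - a) < 1e-8 and abs(ap_new - ap) < 1e-8:
        break
    a, ap = a_new, ap_new
print(a, ap, np.rad2deg(phi))
```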
Abstract:
Managed lane strategies are innovative road operation schemes for addressing congestion problems. These strategies operate a lane (or lanes) adjacent to a freeway that provides congestion-free trips to eligible users, such as transit vehicles or toll-payers. To ensure the successful implementation of managed lanes, the demand on these lanes needs to be accurately estimated. Among the different approaches for predicting this demand, the four-step demand forecasting process is the most common, with managed lane demand usually estimated at the assignment step. Therefore, the key to reliably estimating the demand is the use of effective assignment modeling processes. Managed lanes are particularly effective when the road is functioning at near-capacity; capturing variations in demand and in network attributes and performance is therefore crucial for their modeling, monitoring, and operation. As a result, traditional modeling approaches, such as those used in static traffic assignment of demand forecasting models, fail to correctly predict the managed lane demand and the associated system performance. The present study demonstrates the power of the more advanced modeling approach of dynamic traffic assignment (DTA), as well as the shortcomings of conventional approaches, when used to model managed lanes in congested environments. In addition, the study develops processes to support an effective utilization of DTA to model managed lane operations. Static and dynamic traffic assignments consist of demand, network, and route choice model components that need to be calibrated; these components interact with each other, and an iterative method for calibrating them is needed. In this study, an effective standalone framework that combines static demand estimation and dynamic traffic assignment has been developed to replicate real-world traffic conditions. With advances in traffic surveillance technologies, collecting, archiving, and analyzing traffic data is becoming more accessible and affordable. The present study shows how data from multiple sources can be integrated, validated, and best used in different stages of modeling and calibration of managed lanes. Extensive and careful processing of demand, traffic, and toll data, as well as proper definition of performance measures, result in a calibrated and stable model that closely replicates real-world congestion patterns and can reasonably respond to perturbations in network and demand properties.
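The iterative calibration idea can be sketched abstractly: adjust demand until simulated link counts match observations within a tolerance. Here `assign` is a hypothetical placeholder standing in for a full dynamic traffic assignment run, and the counts are illustrative.

```python
import numpy as np

observed = np.array([1200.0, 800.0, 1500.0])   # counts on calibration links (veh/h)

def assign(demand_scale):
    """Hypothetical placeholder for one DTA run at a given demand scale."""
    base = np.array([1000.0, 700.0, 1400.0])   # simulated counts at scale 1.0
    return base * demand_scale

scale = 1.0
for _ in range(20):
    simulated = assign(scale)
    ratio = observed.sum() / simulated.sum()   # aggregate demand correction factor
    if abs(ratio - 1.0) < 0.01:                # converged within 1%
        break
    scale *= ratio
print(scale, assign(scale))
```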
Abstract:
Adaptability and invisibility are hallmarks of modern terrorism, and keeping pace with its dynamic nature presents a serious challenge for societies throughout the world. Innovations in computer science have incorporated applied mathematics to develop a wide array of predictive models to support the variety of approaches to counterterrorism. Predictive models are usually designed to forecast the location of attacks. Although this may protect individual structures or locations, it does not reduce the threat; it merely changes the target. While predictive models dedicated to events or social relationships receive much attention where the mathematical and social science communities intersect, models dedicated to terrorist locations such as safe-houses (rather than their targets or training sites) are rare and possibly nonexistent. At the time of this research, there were no publicly available models designed to predict locations where violent extremists are likely to reside. This research uses France as a case study to present a complex systems model that incorporates multiple quantitative, qualitative, and geospatial variables that differ in terms of scale, weight, and type. Though many of these variables are recognized by specialists in security studies, there remains controversy with respect to their relative importance, degree of interaction, and interdependence. Additionally, some of the variables proposed in this research are not generally recognized as drivers, yet they warrant examination based on their potential role within a complex system. This research tested multiple regression models and determined that geographically weighted regression analysis produced the most accurate result, accommodating non-stationary coefficient behavior and demonstrating that geographic variables are critical to understanding and predicting the phenomenon of terrorism. This dissertation presents a flexible prototypical model that can be refined and applied to other regions to inform stakeholders such as policy-makers and law enforcement in their efforts to improve national security and enhance quality of life.
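For the modeling approach the dissertation settles on, here is a minimal geographically weighted regression sketch: at each target location, a weighted least-squares fit with Gaussian kernel weights lets the coefficients vary over space. The synthetic data stand in for the real geospatial variables.

```python
import numpy as np

# Synthetic data with a slope that drifts across space.
rng = np.random.default_rng(2)
n = 200
coords = rng.uniform(0, 100, size=(n, 2))                # observation locations
X = np.column_stack([np.ones(n), rng.normal(size=n)])    # intercept + one driver variable
beta_true = 0.5 + coords[:, 0] / 100.0                   # spatially varying slope
y = X[:, 0] + beta_true * X[:, 1] + 0.1 * rng.normal(size=n)

def gwr_at(target, bandwidth=20.0):
    """Local coefficients at `target` via kernel-weighted least squares."""
    d = np.linalg.norm(coords - target, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)              # Gaussian kernel weights
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# Local slope estimates differ between west and east, exposing non-stationarity.
print(gwr_at(np.array([10.0, 50.0])), gwr_at(np.array([90.0, 50.0])))
```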
Abstract:
Toxoplasmosis is a global zoonosis caused by the protozoan parasite Toxoplasma gondii. Detection of antibodies to T. gondii in serum samples from hunted animals may represent a key step for public health protection, and it is also important for assessing the circulation of this parasite in the wild boar population. However, in hunted animals, collection of blood is not feasible and meat juice may represent an alternative sample. The purpose of the present study was to evaluate heart meat juice of hunted wild boars as an alternative sample for post-mortem detection of antibodies to T. gondii by the modified agglutination test (MAT). The agreement beyond chance between serum and meat juice results, assessed with Cohen's kappa coefficient, revealed that the 1:20 meat juice dilution provided the highest agreement. McNemar's test further indicated 1:10 as the most suitable meat juice dilution, as the proportion of positive paired samples (serum and meat juice from the same animal) did not differ at this dilution. Altogether, these results suggest a reasonable accuracy of heart meat juice for detecting antibodies to T. gondii by MAT and support its use as an alternative sample in post-mortem analysis of hunted wild boars.
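Both agreement statistics follow from a 2x2 table of paired serum and meat-juice results. The sketch below, with illustrative cell counts rather than the study's data, computes Cohen's kappa and the continuity-corrected McNemar statistic.

```python
# Paired 2x2 table:            juice +   juice -
#                  serum +        a         b
#                  serum -        c         d
a, b, c, d = 40, 5, 3, 52            # illustrative counts, not the study's data
n = a + b + c + d

po = (a + d) / n                     # observed agreement
pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2   # agreement expected by chance
kappa = (po - pe) / (1 - pe)         # Cohen's kappa

# McNemar's chi-squared with continuity correction tests whether the
# proportions of discordant pairs (b vs c) differ.
mcnemar_chi2 = (abs(b - c) - 1) ** 2 / (b + c)
print(round(kappa, 3), round(mcnemar_chi2, 3))
```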
Abstract:
Background: Plant-soil interaction is central to human food production and ecosystem function. Thus, it is essential not only to understand these interactions, but also to develop predictive mathematical models that can be used to assess how climate and soil management practices will affect them. Scope: In this paper we review current developments in structural and chemical imaging of rhizosphere processes within the context of multiscale, mathematical, image-based modeling. We outline areas that need more research and areas that would benefit from more detailed understanding. Conclusions: We conclude that the combination of structural and chemical imaging with modeling is an exceptionally powerful tool that is fundamental for understanding how plant roots interact with soil. We emphasize the need for more researchers to be attracted to this area, which is so fertile for future discoveries. Finally, model building must go hand in hand with experiments. In particular, there is a real need to integrate rhizosphere structural and chemical imaging with modeling, for a better understanding of rhizosphere processes, leading to models which explicitly account for pore-scale processes.
Abstract:
A three-dimensional finite element model of cold pilgering of stainless steel tubes is developed in this paper. The objective is to use the model to increase understanding of the forces and deformations in the process, with focus on the influence of vertical displacements of the roll stand and axial displacements of the mandrel and tube. For this purpose, the rigid tools and the tube are supported by elastic springs. Additionally, the influence of the friction coefficients in the tube/mandrel and tube/roll interfaces is examined. A sensitivity study is performed to investigate the influence of these parameters on the strain path and the roll separation force. The results show the importance of accounting for the displacements of the tube and rigid tools when evaluating the roll separation force and the accumulated plastic strain.
Abstract:
This paper presents the development of a combined experimental and numerical approach to study the anaerobic digestion of both the wastes produced in a biorefinery using yeast for biodiesel production and the wastes generated in the preceding microbial biomass production. The experimental results show that all the tested residues can be valorised through anaerobic digestion. For the implementation of the numerical model of anaerobic digestion, a procedure for the identification of its parameters had to be developed. A hybrid-search genetic algorithm was used, followed by a direct search method. To test the parameter estimation procedure, noise-free data were considered first and a critical analysis of the results obtained was undertaken. As a demonstration of its application, the procedure was then applied to the experimental data.
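The hybrid estimation strategy can be sketched with SciPy, using differential evolution as a stand-in for the genetic algorithm, followed by a Nelder-Mead direct search for local refinement. The first-order yield model and data below are synthetic placeholders, not the paper's digestion model.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

# Synthetic "measurements" from a first-order cumulative-yield model.
t = np.linspace(0, 10, 50)
true_params = (2.0, 0.5)
data = true_params[0] * (1 - np.exp(-true_params[1] * t))

def residual(p):
    """Sum of squared deviations between model and data for parameters p."""
    model = p[0] * (1 - np.exp(-p[1] * t))
    return np.sum((model - data) ** 2)

# Global evolutionary search over parameter bounds, then local direct search.
global_fit = differential_evolution(residual, bounds=[(0, 10), (0, 5)], seed=0)
local_fit = minimize(residual, global_fit.x, method="Nelder-Mead")
print(local_fit.x)   # should recover approximately (2.0, 0.5)
```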
Abstract:
Understanding what characterizes patients who suffer long delays in the diagnosis of pulmonary tuberculosis is of great importance when establishing screening strategies to better control TB. Longer delays in diagnosis imply a higher chance for susceptible individuals to become infected by a bacilliferous patient. A structured additive regression model is applied in this study in order to contribute to a better characterization of bacilliferous prevalence in Portugal. The main findings suggest the existence of significant regional differences in Portugal, with female sex and/or alcohol dependence contributing to an increased delay in diagnosis, while intravenous drug dependence and/or an HIV diagnosis increase the chance of an earlier diagnosis of pulmonary TB. The decrease in treatment success in Portugal to 77% in 2010 underlines the importance of conducting more research aimed at better TB control strategies.
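An additive regression in this spirit can be sketched with a generalized additive model, here using the pygam library (an assumption about tooling, not the study's software): a smooth regional effect plus categorical factors for sex and HIV status, fitted to synthetic data rather than the Portuguese TB records.

```python
import numpy as np
from pygam import LinearGAM, s, f

# Synthetic delay-to-diagnosis data with a smooth regional effect and
# two binary covariates; signs mirror the abstract's qualitative findings.
rng = np.random.default_rng(3)
n = 500
region = rng.uniform(0, 10, n)              # stand-in for a spatial coordinate
female = rng.integers(0, 2, n)
hiv = rng.integers(0, 2, n)
delay = 30 + 5 * np.sin(region) + 8 * female - 10 * hiv + rng.normal(0, 3, n)

X = np.column_stack([region, female, hiv])
gam = LinearGAM(s(0) + f(1) + f(2)).fit(X, delay)   # smooth term + two factor terms
gam.summary()
```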