8 results for ill-posed inverse problem
at Universidade Federal do Rio Grande do Norte (UFRN)
Abstract:
In general, an inverse problem corresponds to finding the value of an element x in a suitable vector space, given a vector y that measures it in some sense. When the problem is discretized, it usually boils down to solving a system of equations f(x) = y, where f : U ⊂ R^m → R^n represents the step function on some domain U of the appropriate R^m. As a general rule, we arrive at an ill-posed problem. The resolution of inverse problems has been widely researched over the last decades, because many problems in science and industry consist in determining unknowns by observing their effects through certain indirect measurements. The general subject of this dissertation is the choice of the Tikhonov regularization parameter for a poorly conditioned linear problem, as discussed in Chapter 1, focusing on the three most popular methods in the current literature of the area. The more specific focus of this dissertation consists in the simulations reported in Chapter 2, which aim to compare the performance of the three methods in the recovery of images measured with the Radon transform and perturbed by the addition of i.i.d. Gaussian noise. We chose a difference operator as the regularizer of the problem. The contribution we try to make in this dissertation consists mainly in the discussion of the numerical simulations we carried out, as exposed in Chapter 2. We understand that the significance of this dissertation lies much more in the questions it raises than in saying something definitive about the subject: partly because it is based on numerical experiments with no new mathematical results associated with them, and partly because those experiments were made with a single operator. On the other hand, we obtained some observations from the simulations that seemed interesting to us in view of the literature of the area. In particular, we highlight the observations, summarized in the conclusion of this work, about the different vocations of methods such as GCV and the L-curve and also about the tendency, observed with the L-curve method, of the optimal parameters to cluster within a small interval, strongly correlated with the behavior of the generalized singular value decomposition curve of the operators involved, under reasonably broad regularity conditions on the images to be recovered.
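As a minimal illustration of the kind of problem this abstract describes (not the author's code or data), the sketch below builds a small ill-conditioned linear system standing in for the discretized imaging operator, regularizes it with a first-difference operator as in the text, and scans Tikhonov parameters with the GCV criterion; the toy operator, noise level, and signal are arbitrary choices for the example.

```python
import numpy as np

# Toy ill-conditioned forward operator (a smoothing matrix), standing in
# for the discretized Radon transform mentioned in the abstract.
n = 64
t = np.linspace(0, 1, n)
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * 0.03 ** 2))
A /= A.sum(axis=1, keepdims=True)

# True signal and noisy data y = A x + e, with i.i.d. Gaussian noise.
x_true = np.sin(2 * np.pi * t) + (t > 0.5)
rng = np.random.default_rng(0)
y = A @ x_true + 0.01 * rng.standard_normal(n)

# First-difference regularization operator L (the difference operator of the text).
L = np.diff(np.eye(n), axis=0)

def tikhonov(lam):
    """Solve min ||A x - y||^2 + lam^2 ||L x||^2 via the normal equations."""
    return np.linalg.solve(A.T @ A + lam ** 2 * (L.T @ L), A.T @ y)

def gcv(lam):
    """Generalized cross-validation score for a given parameter lam."""
    # Influence matrix H = A (A^T A + lam^2 L^T L)^{-1} A^T
    H = A @ np.linalg.solve(A.T @ A + lam ** 2 * (L.T @ L), A.T)
    resid = y - H @ y
    return n * (resid @ resid) / np.trace(np.eye(n) - H) ** 2

lams = np.logspace(-6, 0, 30)
scores = [gcv(l) for l in lams]
lam_gcv = lams[int(np.argmin(scores))]
x_rec = tikhonov(lam_gcv)
print(f"GCV-selected lambda: {lam_gcv:.2e}")
print(f"relative error: {np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true):.3f}")
```

An L-curve selection could reuse the same `tikhonov` solver, plotting the residual norm against the seminorm ||L x|| over the same grid of parameters and picking the corner.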
Abstract:
The history-matching procedure in an oil reservoir is of paramount importance for obtaining a characterization of the reservoir parameters (static and dynamic) that leads to more accurate production forecasts. Throughout this process one seeks reservoir model parameters that are able to reproduce the behaviour of the real reservoir. This reservoir model may then be used to predict production and to aid oil field management. During the history-matching procedure the reservoir model parameters are modified, and for every new set of parameters found, a fluid-flow simulation is performed to evaluate whether or not this new set reproduces the observations of the actual reservoir. The reservoir is said to be matched when the discrepancies between the model predictions and the observations of the real reservoir are below a certain tolerance. The determination of the model parameters via history matching requires the minimisation of an objective function (the difference between the observed and simulated production according to a chosen norm) in a parameter space populated by many local minima; in other words, more than one set of reservoir model parameters fits the observations. Because of this non-uniqueness of the solution, the inverse problem associated with history matching is ill-posed. In order to reduce this ambiguity, it is necessary to incorporate a priori information and constraints on the reservoir model parameters to be determined. In this dissertation, the regularization of the inverse problem associated with history matching was performed via the introduction of a smoothness constraint on the following parameters: permeability and porosity. This constraint carries the geological bias that these two properties vary smoothly in space. In this sense, it is necessary to find the right relative weight of this constraint in the objective function, one that stabilizes the inversion and yet introduces minimum bias. A sequential search method called COMPLEX was used to find the reservoir model parameters that best reproduce the observations of a semi-synthetic model; this method does not require derivatives when searching for the minimum of the objective function. It is shown that the judicious introduction of the smoothness constraint in the objective function reduces the associated ambiguity and introduces minimum bias in the estimates of permeability and porosity of the semi-synthetic reservoir model.
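The core of the formulation described above is an objective function combining a production-data misfit with a weighted smoothness penalty on permeability and porosity, minimized by a derivative-free search. The sketch below shows one plausible shape of such an objective; the simulator is a placeholder function, all values are hypothetical, and scipy's Nelder-Mead stands in for the COMPLEX method used in the dissertation.

```python
import numpy as np
from scipy.optimize import minimize

def simulate_production(perm, poro):
    """Placeholder for the fluid-flow simulator called at each evaluation.
    An arbitrary smooth function of the parameters is used here."""
    return np.cumsum(perm * poro)

# Hypothetical "observed" production and relative weight of the constraint.
observed = simulate_production(np.full(10, 120.0), np.full(10, 0.22))
mu = 0.5  # relative weight of the smoothness constraint

def objective(params):
    perm, poro = params[:10], params[10:]
    misfit = np.sum((simulate_production(perm, poro) - observed) ** 2)
    # Smoothness (roughness) penalties: squared first differences in space.
    roughness = np.sum(np.diff(perm) ** 2) + np.sum(np.diff(poro) ** 2)
    return misfit + mu * roughness

x0 = np.concatenate([np.full(10, 100.0), np.full(10, 0.20)])
res = minimize(objective, x0, method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-6, "fatol": 1e-6})
perm_est, poro_est = res.x[:10], res.x[10:]
print("estimated permeability:", np.round(perm_est, 1))
print("estimated porosity:", np.round(poro_est, 3))
```

The weight mu plays the role discussed in the text: too small and the ambiguity remains, too large and the estimates are biased toward flat permeability and porosity fields.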
Abstract:
The gravity inversion method is a mathematical process that can be used to estimate the basement relief of a sedimentary basin. However, the inverse problem in potential-field methods has neither a unique nor a stable solution, so additional information (other than gravity measurements) must be supplied by the interpreter to transform this problem into a well-posed one. This dissertation presents the application of a gravity inversion method to estimate the basement relief of the onshore Potiguar Basin. The density contrast between sediments and basement is assumed to be known and constant. The proposed methodology consists of discretizing the sedimentary layer into a grid of juxtaposed rectangular prisms whose thicknesses correspond to the depth to basement, which is the parameter to be estimated. To stabilize the inversion, I introduce constraints in accordance with the known geologic information. The method minimizes an objective function of the model that requires the model not only to be smooth and close to a seismic-derived model, used as a reference model, but also to honor well-log constraints; the latter are introduced through logarithmic barrier terms in the objective function. The inversion process was applied so as to simulate different phases of the exploration development of a basin. The methodology consisted in applying the gravity inversion in distinct scenarios: the first used only gravity data and a plain reference model; the second was divided into two cases, in which either borehole-log information or the seismic model was incorporated into the process. Finally, I incorporated the basement depth generated by seismic interpretation into the inversion as a reference model and imposed depth constraints from boreholes using the primal logarithmic barrier method. As a result, the estimate of the basement relief in every scenario satisfactorily reproduced the basin framework, and the incorporation of the constraints led to improved definition of the basement depth. The joint use of surface gravity data, seismic imaging, and borehole-logging information makes the process more robust and allows an improvement of the estimate, providing a result closer to the actual basement relief. In addition, it is worth remarking that the result obtained in the first scenario already provided a very coherent basement relief when compared to the known basin framework. This is significant information when comparing the differences in cost and environmental impact between gravimetric and seismic surveys and well drilling.
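The following sketch illustrates, in very reduced form, how the ingredients named above could be combined in one objective: a gravity misfit, a smoothness term, closeness to a seismic-derived reference model, and logarithmic barriers keeping depths within borehole-derived bounds. It is not the dissertation's algorithm: the forward model is a crude infinite-slab approximation instead of the prism formula, the minimizer is a generic derivative-free routine rather than a primal barrier method, and every number is hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

G_CONST = 6.674e-11  # gravitational constant (SI)

def forward_gravity(thickness, drho=-300.0):
    """Crude forward model: each prism approximated by an infinite slab,
    g = 2*pi*G*drho*h, converted to mGal (illustrative only)."""
    return 2 * np.pi * G_CONST * drho * thickness * 1e5

# Hypothetical "true" basement depths (m), synthetic data, reference and bounds.
true_depth = 1000 + 800 * np.sin(np.linspace(0, np.pi, 20))
g_obs = forward_gravity(true_depth)
ref_depth = np.full(20, 1400.0)   # e.g. a seismic-derived reference model
lo, hi = 500.0, 2500.0            # depth bounds suggested by borehole logs

def objective(d, mu_s=1e-4, mu_r=1e-6, mu_b=1e-2):
    if np.any(d <= lo) or np.any(d >= hi):
        return np.inf                                    # outside barrier region
    misfit = np.sum((forward_gravity(d) - g_obs) ** 2)   # gravity data misfit
    smooth = np.sum(np.diff(d) ** 2)                     # smoothness constraint
    ref = np.sum((d - ref_depth) ** 2)                   # reference-model constraint
    barrier = -np.sum(np.log(d - lo) + np.log(hi - d))   # log-barrier on well bounds
    return misfit + mu_s * smooth + mu_r * ref + mu_b * barrier

res = minimize(objective, np.full(20, 1500.0), method="Nelder-Mead",
               options={"maxiter": 20000, "maxfev": 40000})
print("estimated basement depths (m):", np.round(res.x, 0))
```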
Abstract:
Injectivity decline, which can be caused by particle retention, generally occurs during water injection or reinjection in oil fields. Several mechanisms, including straining, are responsible for particle retention and pore blocking, causing formation damage and injectivity decline. Predicting formation damage and injectivity decline is essential in waterflooding projects. The Classic Model (CM), which incorporates filtration coefficients and formation damage functions, has been widely used to predict injectivity decline. However, various authors have reported significant discrepancies between the Classic Model and experimental results, motivating the development of deep-bed filtration models that consider multiple particle retention mechanisms (Santos & Barros, 2010; SBM). In this dissertation, the solution of the inverse problem was studied and software for experimental data treatment was developed. Finally, experimental data were fitted using both the CM and the SBM. The results showed that, depending on the formation damage function, the predictions of injectivity decline using the CM and SBM models can be significantly different.
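A minimal sketch of the kind of data treatment the abstract refers to is given below, under assumed textbook relations of classic deep-bed filtration (not the exact CM or SBM equations of the dissertation): the filtration coefficient estimated from the effluent/injected concentration ratio, and an impedance history treated as roughly linear in injected pore volumes. All measurements are synthetic and illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical core-flood measurements (illustrative values only):
# effluent/injected particle concentration ratio after breakthrough, and
# impedance history J(T) = injectivity(0)/injectivity(T) vs pore volumes T.
L = 0.15                                            # core length (m)
c_ratio = 0.22 + 0.01 * rng.standard_normal(20)     # c_out / c_in samples
T = np.linspace(1, 50, 25)
J_obs = 1.0 + 0.08 * T + 0.02 * rng.standard_normal(T.size)

# In the classic deep-bed-filtration model with a constant filtration
# coefficient (assumed here), c_out/c_in = exp(-lam * L).
lam = -np.log(np.mean(c_ratio)) / L

# The impedance grows roughly linearly with injected pore volumes,
# J(T) ~ 1 + m*T; the slope m carries the formation-damage information.
m = np.polyfit(T, J_obs - 1.0, 1)[0]

print(f"filtration coefficient lam ~ {lam:.1f} 1/m")
print(f"impedance slope m ~ {m:.3f} per pore volume")
```

Fitting the SBM, or a CM with a non-constant formation damage function, would replace these closed-form estimates with a nonlinear least-squares fit of the corresponding model equations to the same histories.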
Abstract:
A practical approach to estimating rock thermal conductivities is to use rock models based only on the observed or expected rock mineral content. In this study, we evaluate the performance of the Krischer and Esdorn (KE), Hashin and Shtrikman (HS), classic Maxwell (CM), Maxwell-Wiener (MW), and geometric mean (GM) models in reproducing measured thermal conductivities of crystalline rocks. We used 1,105 samples of igneous and metamorphic rocks collected from outcrops of the Borborema Province, Northeastern Brazil. Both thermal conductivity measurements and petrographic modal analysis (percent volumes of quartz, K-feldspar, plagioclase, and the sum of mafic minerals) were performed. We divided the rocks into two groups: (a) igneous and ortho-derived (meta-igneous) rocks and (b) metasedimentary rocks. The group of igneous and ortho-derived rocks (939 samples) covers most of the lithologies defined in the Streckeisen diagram, with higher concentrations in the fields of granite, granodiorite, and tonalite. In the group of metasedimentary rocks (166 samples), representative lithologies were sampled, usually of low to medium metamorphic grade. We treat the problem of reproducing the measured values of rock conductivity as an inverse problem in which, besides the conductivity measurements, the volume fractions of the constituent minerals are known, while the effective conductivities of the constituent minerals and the model parameters are unknown. The key idea was to identify the model (and its associated estimates of effective mineral conductivities and parameters) that best reproduces the measured rock conductivities. We evaluate model performance by the percentage of rock samples whose estimated conductivities honor the measured conductivities within a tolerance of 15%. In general, for all models, the performances were considerably lower for the metasedimentary rocks (between 34% and 65%) than for the igneous and ortho-derived rocks (between 51% and 70%). For igneous and ortho-derived rocks, all model performances were very similar (about 70%), except for the GM model, which presented a poor performance (between 51% and 65%); the KE and HS models (70%) were slightly superior to the CM and MW models (67%). The quartz content is the dominant factor in explaining the rock conductivity of igneous and ortho-derived rocks; in particular, with the MW model the solution is in practice the series association of the quartz content. On the other hand, for metasedimentary rocks, model performances differed, and the performance of the KE model (65%) was considerably superior to that of the HS (53%), CM (between 34% and 42%), MW (40%), and GM (between 35% and 42%) models. The estimated effective mineral conductivities are stable under perturbations both of the measured rock conductivities and of the quartz volume fraction. The fact that the metasedimentary rocks are richer in platy minerals partially explains the poorer model performances, because of both the high thermal anisotropy of biotite (one of the most common platy minerals) and the difficulty in obtaining polished surfaces for measurement coupling when platy minerals are present. Independently of rock type, both very low and very high values of rock conductivity are hardly explained by rock models based only on rock mineral content.
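Of the five mixing laws compared above, the geometric mean is the simplest to state, and the performance metric is just a tolerance count. The sketch below shows both for two hypothetical samples; the volume fractions, mineral conductivities, and measured values are illustrative placeholders, not data from the study.

```python
import numpy as np

def geometric_mean_conductivity(fractions, mineral_k):
    """Geometric-mean (GM) mixing model: k_rock = prod(k_i ** phi_i),
    where phi_i are mineral volume fractions summing to 1."""
    fractions = np.asarray(fractions, dtype=float)
    mineral_k = np.asarray(mineral_k, dtype=float)
    return np.prod(mineral_k ** fractions, axis=-1)

def performance(k_pred, k_meas, tol=0.15):
    """Percentage of samples whose predicted conductivity is within
    'tol' (15% in the study) of the measured value."""
    ok = np.abs(k_pred - k_meas) <= tol * np.asarray(k_meas)
    return 100.0 * np.mean(ok)

# Hypothetical volume fractions of quartz, K-feldspar, plagioclase, mafic
# minerals, and assumed effective mineral conductivities in W/(m*K).
phis = np.array([[0.30, 0.25, 0.30, 0.15],
                 [0.10, 0.20, 0.40, 0.30]])
k_minerals = np.array([7.7, 2.5, 2.0, 2.5])   # illustrative values only
k_measured = np.array([3.1, 2.4])             # illustrative measurements

k_model = geometric_mean_conductivity(phis, k_minerals)
print("GM-model conductivities:", np.round(k_model, 2))
print(f"performance within 15%: {performance(k_model, k_measured):.0f}%")
```

In the inverse-problem setting of the study, the mineral conductivities would themselves be treated as unknowns and adjusted so that the chosen mixing model best honors the measured conductivities across all samples.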
Abstract:
The key aspect limiting resolution in crosswell traveltime tomography is illumination, a well-known result but not as well exemplified. Resolution in the 2D case is revisited using a simple geometric approach based on the angular aperture distribution and the properties of the Radon transform. Analytically, it is shown that if an interface has dips contained within the angular aperture limits at all points, it is correctly imaged in the tomogram. This result is confirmed by inversion of synthetic data, which also shows that isolated artifacts may be present when the dip is near the illumination limit. In the inverse sense, however, if an interface is interpretable from a tomogram, even an approximately horizontal interface, there is no guarantee that it corresponds to a true interface. Similarly, if a body is present in the interwell region it is diffusely imaged in the tomogram, but its interfaces, particularly vertical edges, cannot be resolved, and additional artifacts may be present. Again, in the inverse sense, there is no guarantee that an isolated anomaly corresponds to a true anomalous body, because this anomaly can also be an artifact. Together, these results state the dilemma of ill-posed inverse problems: the absence of any guarantee of correspondence to the true distribution. The limitations due to illumination cannot, in general, be overcome by the use of mathematical constraints. It is shown that crosswell tomograms derived with sparsity constraints, using both Discrete Cosine Transform and Daubechies bases, basically reproduce the same features seen in tomograms obtained with the classic smoothness constraint. Interpretation must always take into consideration the a priori information and the particular limitations due to illumination. An example of interpreting a real data survey in this context is also presented.
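To make the contrast between the smoothness and sparsity constraints concrete, the sketch below regularizes a tiny synthetic linear "tomography" system in two ways: damped least squares with a first-difference smoothness operator, and an l1 (sparsity) penalty on DCT coefficients solved with a basic ISTA iteration. The system, ray matrix, and penalty weights are arbitrary stand-ins, not the survey or the Daubechies-basis implementation of the dissertation.

```python
import numpy as np
from scipy.fft import idct

rng = np.random.default_rng(0)
n, nrays = 40, 25

# Tiny synthetic system: each row of G is a crude "ray", t = G m + noise,
# with m a blocky slowness profile standing in for the interwell model.
G = rng.uniform(0, 1, (nrays, n)) * (rng.uniform(0, 1, (nrays, n)) > 0.5)
m_true = np.zeros(n)
m_true[12:20] = 1.0
t = G @ m_true + 0.01 * rng.standard_normal(nrays)

# Smoothness-constrained least squares: min ||G m - t||^2 + mu ||D m||^2.
D = np.diff(np.eye(n), axis=0)
m_smooth = np.linalg.solve(G.T @ G + 0.5 * D.T @ D, G.T @ t)

# Sparsity constraint in an orthonormal DCT basis, solved with ISTA:
# min_a 0.5 ||G C a - t||^2 + lam ||a||_1, with m = C a.
C = idct(np.eye(n), norm="ortho", axis=0)   # columns = DCT basis vectors
A = G @ C
lam, step = 0.05, 1.0 / np.linalg.norm(A, 2) ** 2
a = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ a - t)
    z = a - step * grad
    a = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
m_sparse = C @ a

print("rel. error, smoothness:", np.linalg.norm(m_smooth - m_true) / np.linalg.norm(m_true))
print("rel. error, DCT sparsity:", np.linalg.norm(m_sparse - m_true) / np.linalg.norm(m_true))
```

With poor angular coverage, both estimates remain smeared versions of the true model, which is the point the abstract makes: the constraints change the style of the reconstruction but do not supply the missing illumination.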