10 results for parabolic-elliptic equation, inverse problems, factorization method

at Universidade Federal do Rio Grande do Norte (UFRN)


Relevance:

100.00%

Publisher:

Abstract:

In general, an inverse problem consists in finding an element x of a suitable vector space from a vector y that, in some sense, measures it. When the problem is discretized, it usually boils down to solving an equation system f(x) = y, where f : U ⊆ R^m → R^n represents the forward map on a domain U of R^m. As a general rule, we arrive at an ill-posed problem. The resolution of inverse problems has been widely researched over the last decades, because many problems in science and industry consist in determining unknowns by observing their effects through indirect measurements. The general subject of this dissertation is the choice of the Tikhonov regularization parameter for a poorly conditioned linear problem, discussed in Chapter 1 with a focus on the three most popular methods in the current literature of the area. The more specific focus lies in the simulations reported in Chapter 2, which compare the performance of the three methods in the recovery of images measured with the Radon transform and perturbed by additive i.i.d. Gaussian noise. A difference operator was chosen as the regularizer of the problem. The contribution this dissertation attempts to make consists mainly in the discussion of the numerical simulations presented in Chapter 2. We understand that its meaning rests much more on the questions it raises than on anything definitive it says about the subject: partly because it is based on numerical experiments with no new mathematical results attached, and partly because those experiments involve a single operator. On the other hand, the simulations yielded some observations that seemed interesting to us in light of the literature of the area.
In particular, we highlight the observations summarized in the conclusion of this work about the different vocations of methods such as GCV and the L-curve, and also about the tendency, observed with the L-curve method, of the optimal parameters to cluster in a small gap, strongly correlated with the behavior of the generalized singular value decomposition curve of the operators involved, under reasonably broad regularity conditions on the images to be recovered.
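As a minimal sketch of the setup compared in Chapter 2 (not the dissertation's actual code), the following Python fragment solves the Tikhonov problem with a first-difference regularizer; the distance-to-origin rule in `l_curve_corner` is a crude, illustrative stand-in for a true L-curve corner criterion, and the grid of lambda values is an assumption.

```python
import numpy as np

def first_difference(n):
    """First-order difference operator, the kind of regularizer used here."""
    D = np.zeros((n - 1, n))
    for i in range(n - 1):
        D[i, i], D[i, i + 1] = -1.0, 1.0
    return D

def tikhonov_solve(A, y, L, lam):
    """Solve min ||A x - y||^2 + lam * ||L x||^2 via the normal equations."""
    return np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ y)

def l_curve_corner(A, y, L, lams):
    """Pick the lambda whose (log residual, log seminorm) point lies nearest
    the origin -- a crude proxy for the L-curve corner, for illustration only."""
    best, best_lam = np.inf, None
    for lam in lams:
        x = tikhonov_solve(A, y, L, lam)
        rho = np.log(np.linalg.norm(A @ x - y) + 1e-12)  # data misfit
        eta = np.log(np.linalg.norm(L @ x) + 1e-12)      # solution seminorm
        d = np.hypot(rho, eta)
        if d < best:
            best, best_lam = d, lam
    return best_lam
```

In practice the corner is located by curvature, and GCV minimizes a separate prediction-error functional; the point of the sketch is only the structure of the parameter search.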

Relevance:

100.00%

Publisher:

Abstract:

The history matching procedure in an oil reservoir is of paramount importance to obtain a characterization of the reservoir parameters (static and dynamic) that leads to more accurate production forecasts. Throughout this process one tries to find reservoir model parameters able to reproduce the behaviour of the real reservoir. This reservoir model may then be used to predict production and to aid oil field management. During history matching, the reservoir model parameters are modified and, for every new set of parameters, a fluid flow simulation is performed to evaluate whether or not the new set reproduces the observations of the actual reservoir. The reservoir is said to be matched when the discrepancies between the model predictions and the observations of the real reservoir fall below a certain tolerance. The determination of the model parameters via history matching requires the minimization of an objective function (the difference between the observed and simulated production according to a chosen norm) in a parameter space populated by many local minima; in other words, more than one set of reservoir model parameters fits the observations. Owing to this non-uniqueness of the solution, the inverse problem associated with history matching is ill-posed. To reduce this ambiguity, it is necessary to incorporate a priori information and constraints on the reservoir model parameters to be determined. In this dissertation, the regularization of the inverse problem associated with history matching was performed via the introduction of a smoothness constraint on two parameters: permeability and porosity. This constraint has the geological bias of asserting that these two properties vary smoothly in space.
In this sense, it is necessary to find the relative weight of this constraint in the objective function that stabilizes the inversion and yet introduces minimum bias. A sequential search method called COMPLEX was used to find the reservoir model parameters that best reproduce the observations of a semi-synthetic model; this method does not require derivatives when searching for the minimum of the objective function. It is shown that the judicious introduction of the smoothness constraint in the objective function reduces the associated ambiguity and introduces minimum bias in the estimates of permeability and porosity of the semi-synthetic reservoir model.
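A hypothetical sketch of the regularized objective described above (the COMPLEX search itself is not shown; the 1D neighbour penalty, the function names and the L2 misfit norm are illustrative assumptions):

```python
def smoothness_penalty(field):
    """Sum of squared differences between neighbouring cells (1D sketch of a
    smoothness constraint on a reservoir property)."""
    return sum((field[i + 1] - field[i]) ** 2 for i in range(len(field) - 1))

def history_match_objective(observed, simulated, perm, poro, weight):
    """Data misfit (L2 norm) plus a weighted smoothness constraint on
    permeability and porosity, mirroring the regularized formulation above.
    'weight' is the relative weight whose tuning the text discusses."""
    misfit = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    return misfit + weight * (smoothness_penalty(perm) + smoothness_penalty(poro))
```

With weight = 0 the objective reduces to the plain misfit (the ambiguous, ill-posed case); a large weight over-smooths the estimates, which is the bias the text seeks to minimize.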

Relevance:

100.00%

Publisher:

Abstract:

One of the greatest challenges of demography nowadays is to obtain consistent estimates of mortality, mainly in small areas. The lack of this information hinders public health actions and impairs the quality of the classification of deaths, generating concern among demographers and epidemiologists seeking reliable mortality statistics for the country. In this context, the objective of this work is to obtain estimates of death adjustment factors for the correction of adult mortality, by state, meso-region and age group in the Northeast region in 2010. The proposal is based on two lines of observation, one demographic and one statistical, and considers two coverage levels in the states of the Northeast region: the meso-regions, as larger areas, and the counties, as small areas. The methodological principle is to use the General Growth Balance demographic method to correct the observed deaths in the larger areas (meso-regions) of the states, since these are less prone to violations of the method's assumptions. Subsequently, an empirical Bayesian estimator is applied, taking as the total deaths in each meso-region the value corrected by the demographic method, and as the small-area observations the deaths recorded in the counties. This combination produces a smoothing effect on the degree of coverage of deaths, owing to the empirical Bayesian estimator, and makes it possible to evaluate the degree of coverage by age group at the county, meso-region and state levels, with the advantage of estimating adjustment factors at the desired level of aggregation. The results grouped by state point to a significant improvement in the degree of coverage of deaths, with values above 80%:
Alagoas (0.88), Bahia (0.90), Ceará (0.90), Maranhão (0.84), Paraíba (0.88), Pernambuco (0.93), Piauí (0.85), Rio Grande do Norte (0.89) and Sergipe (0.92). Advances in the control of registry information in the health system, together with improvements in socioeconomic conditions and in the urbanization of the counties over the last decade, provided better quality of death registry information in small areas.
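The shrinkage idea behind the combination can be illustrated with a toy empirical-Bayes-style estimator; this is not the thesis's exact formulation, and `prior_strength`, the weight formula and all names are illustrative assumptions.

```python
def eb_coverage(county_deaths, county_expected, region_coverage, prior_strength=50.0):
    """Empirical-Bayes style shrinkage sketch: each county's raw coverage
    (observed / expected deaths) is pulled toward the meso-region coverage
    corrected by the demographic method. Counties with fewer expected deaths
    are shrunk more. 'prior_strength' is a hypothetical constant."""
    estimates = []
    for obs, exp in zip(county_deaths, county_expected):
        raw = obs / exp if exp > 0 else region_coverage
        w = exp / (exp + prior_strength)  # more data -> trust the county more
        estimates.append(w * raw + (1 - w) * region_coverage)
    return estimates
```

The smoothing effect the text mentions is visible here: small-county estimates sit between their noisy raw value and the demographically corrected regional value.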

Relevance:

100.00%

Publisher:

Abstract:

The key aspect limiting resolution in crosswell traveltime tomography is illumination, a well-known result but not as well exemplified. Resolution in the 2D case is revisited using a simple geometric approach based on the angular aperture distribution and the properties of the Radon transform. Analytically, it is shown that if an interface has dips contained within the angular aperture limits at all points, it is correctly imaged in the tomogram. Inversion of synthetic data confirms this result and also shows that isolated artifacts may be present when the dip is near the illumination limit. In the inverse sense, however, if an interface is interpretable from a tomogram, even an approximately horizontal one, there is no guarantee that it corresponds to a true interface. Similarly, if a body is present in the interwell region it is diffusely imaged in the tomogram, but its interfaces, particularly vertical edges, cannot be resolved, and additional artifacts may be present. Again, in the inverse sense, there is no guarantee that an isolated anomaly corresponds to a true anomalous body, because the anomaly can also be an artifact. Jointly, these results state the dilemma of ill-posed inverse problems: the absence of any guarantee of correspondence to the true distribution. The limitations due to illumination may not be solved by the use of mathematical constraints. It is shown that crosswell tomograms derived with sparsity constraints, using both Discrete Cosine Transform and Daubechies bases, basically reproduce the same features seen in tomograms obtained with the classic smoothness constraint. Interpretation must always take into consideration the a priori information and the particular limitations due to illumination. An example of interpreting a real data survey in this context is also presented.
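The geometric criterion can be sketched for the simplest configuration: straight rays between two vertical wells. Everything below is a simplification for illustration (straight rays, a single evaluation point, degrees from the horizontal), not the paper's actual analysis.

```python
import math

def angular_aperture(source_depths, receiver_depths, well_separation):
    """Ray angles (degrees from horizontal) for straight rays between two
    vertical wells; their min/max give a simplified angular aperture."""
    angles = [math.degrees(math.atan2(rz - sz, well_separation))
              for sz in source_depths for rz in receiver_depths]
    return min(angles), max(angles)

def dip_is_illuminated(dip_deg, aperture):
    """The criterion discussed above: an interface dip inside the aperture
    limits can be correctly imaged; near the limits, artifacts may appear."""
    lo, hi = aperture
    return lo <= dip_deg <= hi
```

For sources and receivers spanning 100 m in wells 100 m apart, the aperture is roughly ±45°, so near-horizontal interfaces are illuminated while steep dips (and in particular vertical edges) fall outside the aperture, as the text describes.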


Relevance:

30.00%

Publisher:

Abstract:

This work proposes a formulation for the layout optimization of 2D structures subjected to mechanical and thermal loads, applying an h-adaptive filtering process that leads to low computational cost and high-resolution structural layouts. The main goal of the formulation is to minimize the mass of a structure subjected to an effective von Mises stress state, with stability and lateral restriction variants. A global measurement criterion was used to impose a parametric condition on the stress fields. To avoid singularity problems, a relaxation of the stress constraint was considered. The optimization uses a material approach in which the homogenized constitutive equation is a function of the relative density of the material; the effective properties at intermediate densities are represented by a SIMP-type artificial model. The problem was discretized by the Galerkin finite element method using triangles with a linear Lagrangian basis. The optimization problem was solved with the augmented Lagrangian method, which consists of solving a sequence of minimization problems with box constraints, each solved by a second-order projection method that uses a memoryless quasi-Newton method. This process reduces computational cost, proving more effective and robust. The results yield more refined layouts, with accurate definition of the topology and shape of the structure. On the other hand, the mass minimization formulation with a global stress criterion provides layouts ready for modeling, albeit with violations of the criterion of homogeneously distributed stress.
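The SIMP-type artificial model mentioned above can be stated in a few lines; the default values of `E0`, `p` and `E_min` here are illustrative, not the work's actual settings.

```python
def simp_modulus(density, E0=1.0, p=3.0, E_min=1e-9):
    """SIMP-type interpolation: effective Young's modulus as a function of the
    relative density. The exponent p > 1 penalizes intermediate densities,
    driving the optimal layout toward solid (1) or void (0). E_min keeps the
    stiffness matrix nonsingular in void regions."""
    return E_min + (density ** p) * (E0 - E_min)
```

With p = 3, a half-dense element contributes only one eighth of the solid modulus while still carrying half the mass, which is what makes intermediate densities uneconomical for the optimizer.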

Relevance:

30.00%

Publisher:

Abstract:

This work presents an optimization technique, based on structural topology optimization methods (TOM), designed to solve 3D thermoelasticity problems. The approach is based on the adjoint method of sensitivity analysis and is intended for loosely coupled thermomechanical problems. The technique makes use of analytical expressions for the sensitivities, enabling a reduction in computational cost through a coupled-field adjoint equation defined in terms of the temperature and displacement fields. The TOM used is based on the material approach: so that the domain is composed of a continuous distribution of material, enabling the use of classical nonlinear programming models for the optimization problem, the microstructure is considered a porous medium whose constitutive equation is a function only of the homogenized relative density of the material. In this approach, the actual properties of materials with intermediate densities are penalized through an artificial microstructure model based on SIMP (Solid Isotropic Material with Penalty). To circumvent checkerboard problems and to reduce the dependence of the final optimal layout on the initial mesh, caused by numerical instability, restrictions on the components of the gradient of the relative densities were applied. The optimization problem is solved with the augmented Lagrangian method, the solution being obtained by the Galerkin finite element method with Tetra4 elements, which interpolate the relative density as well as the displacement components and the temperature. As for the definition of the problem, the heat load is assumed to be in steady state, i.e., the effects of heat conduction and convection do not vary with time, and the mechanical load is assumed static and distributed.
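A scalar toy version of the adjoint sensitivity idea may help: for compliance-type objectives the problem is self-adjoint, so the sensitivity falls out of the state solution with no extra solve. This is illustrative only; the thesis works with coupled 3D temperature and displacement fields.

```python
def compliance_and_sensitivity(p, f):
    """Self-adjoint (compliance) case of the adjoint method for a scalar
    'model' K(p) u = f with K(p) = p: the objective is J = f * u, and the
    adjoint method gives dJ/dp = -u * (dK/dp) * u with dK/dp = 1 here.
    Illustrative toy, not the thesis's formulation."""
    u = f / p            # state solve: K u = f
    J = f * u            # objective (compliance)
    dJdp = -u * 1.0 * u  # adjoint sensitivity, no extra linear solve needed
    return J, dJdp
```

The analytical sensitivity can be checked against a central finite difference, which is exactly the kind of verification that motivates analytical expressions: one adjoint solve replaces one perturbed simulation per design variable.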

Relevance:

30.00%

Publisher:

Abstract:

The topology optimization problem characterizes and determines the optimum distribution of material in a domain: after the boundary conditions of a pre-established domain are defined, the problem is how to distribute the material so as to solve the minimization problem. The objective of this work is to propose a competitive formulation for the determination of optimum structural topologies in 3D problems, able to provide high-resolution layouts. The procedure combines the Galerkin finite element method with an optimization method, looking for the best material distribution over the fixed design domain. The layout topology optimization method is based on the material approach, proposed by Bendsoe & Kikuchi (1988), and considers a homogenized constitutive equation that depends only on the relative density of the material. The finite element used is a four-node tetrahedron with a selective integration scheme, which interpolates not only the components of the displacement field but also the relative density field. The proposed procedure consists in solving a sequence of layout optimization problems: compliance minimization problems and mass minimization problems under local stress constraints. The microstructure used was SIMP (Solid Isotropic Material with Penalty). The approach reduces the computational cost considerably, proving efficient and robust. The results provided well-defined structural layouts, with a sharp distribution of material and clear boundary definition. The layout quality was proportional to the average element size, and a considerable reduction in the number of design variables was observed thanks to the tetrahedral element.

Relevance:

30.00%

Publisher:

Abstract:

The recent observational advances of Astronomy and a more consistent theoretical framework have turned Cosmology into one of the most exciting frontiers of contemporary science. In this thesis, homogeneous and inhomogeneous Universe models containing dark matter and different kinds of dark energy are confronted with recent observational data. Initially, we analyze constraints from the existence of old high-redshift objects, type Ia Supernovae and the gas mass fraction of galaxy clusters for two distinct classes of homogeneous and isotropic models: decaying vacuum and X(z)CDM cosmologies. By considering the quasar APM 08279+5255 at z = 3.91, with age between 2 and 3 Gyr, we obtain 0.2 < Ω_M < 0.4, while the β parameter, which quantifies the contribution of Λ(t), is restricted to the interval 0.07 < β < 0.32, thereby implying that the minimal age of the Universe amounts to 13.4 Gyr. A lower limit to the quasar formation redshift (z_f > 5.11) was also obtained. Our analyses, including flat, closed and hyperbolic models, show that there is no age crisis for this kind of decaying Λ(t) scenario. Tests against SNe Ia and gas mass fraction data were carried out for flat X(z)CDM models. For an equation of state ω(z) = ω0 + ω1 z, the best fit is ω0 = -1.25, ω1 = 1.3 and Ω_M = 0.26, whereas for models with ω(z) = ω0 + ω1 z/(1+z) we obtain ω0 = -1.4, ω1 = 2.57 and Ω_M = 0.26. In another line of development, we discuss the influence of the observed inhomogeneities by considering the Zeldovich-Kantowski-Dyer-Roeder (ZKDR) angular diameter distance. By applying the statistical χ² method to a sample of angular diameters of compact radio sources, the best fit of the cosmological parameters for XCDM models is Ω_M = 0.26, ω = -1.03 and α = 0.9, where ω and α are the equation of state and the smoothness parameters, respectively. Such results are compatible with a phantom energy component (ω < -1).
The possible bidimensional spaces associated with the (α, Ω_M) plane were restricted using data from SNe Ia and the gas mass fraction of galaxy clusters. For Supernovae the parameters are restricted to the intervals 0.32 < Ω_M < 0.5 (2σ) and 0.32 < α < 1.0 (2σ), while for the gas mass fraction we find 0.18 < Ω_M < 0.32 (2σ) with all allowed values of α. A joint analysis involving Supernovae and gas mass fraction data yields 0.18 < Ω_M < 0.38 (2σ). On general grounds, the present study suggests that the influence of the cosmological inhomogeneities in the matter distribution needs to be considered in more detail in the analysis of the observational tests. Further, the analytical treatment based on the ZKDR distance may give non-negligible corrections to the so-called background tests of FRW-type cosmologies.
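For reference, the two equation-of-state parametrizations quoted above can be written down directly, with the best-fit values from the text taken as defaults (a transcription for illustration, not the thesis's code):

```python
def w_linear(z, w0=-1.25, w1=1.3):
    """Linear parametrization w(z) = w0 + w1*z, with the best-fit values
    quoted in the text for flat X(z)CDM models as defaults."""
    return w0 + w1 * z

def w_cpl(z, w0=-1.4, w1=2.57):
    """CPL-type parametrization w(z) = w0 + w1*z/(1+z), best fit from the
    text; it stays bounded at high redshift, tending to w0 + w1."""
    return w0 + w1 * z / (1.0 + z)
```

Both best fits give w(0) < -1, which is the phantom-like behaviour today that the text notes; the two parametrizations diverge from each other at high redshift, where the linear form grows without bound.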

Relevance:

30.00%

Publisher:

Abstract:

In this work we have elaborated a spline-based method for solving initial value problems involving ordinary differential equations, with emphasis on linear equations. The method can be seen as an alternative to traditional solvers such as Runge-Kutta, and avoids root calculations in the linear time-invariant case. The method is then applied to a central problem of control theory, namely, the step response problem for linear ODEs with possibly varying coefficients, where root calculations do not apply. We have implemented an efficient algorithm which uses exclusively matrix-vector operations. The working interval (up to the settling time) was determined through a calculation of the least stable mode using a modified power method. Several variants of the method have been compared by simulation. For general linear problems on a fine grid, the proposed method compares favorably with the Euler method. In the time-invariant case, where the alternative is root calculation, we have indications that the proposed method is competitive for equations of sufficiently high order.
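One way to realize a "modified power method" for the least stable mode, sketched here under simplifying assumptions (a real spectrum and a stable system; the shift value and all names are illustrative, not the work's implementation): shifting A by s·I makes the eigenvalue with the largest real part dominant, so plain power iteration finds it.

```python
def power_method(matvec, n, iters=200):
    """Plain power iteration: dominant eigenvalue of an n x n operator given
    as a matvec, estimated with a Rayleigh quotient."""
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = matvec(v)
        nrm = max(abs(x) for x in w)
        v = [x / nrm for x in w]
        lam = sum(a * b for a, b in zip(matvec(v), v)) / sum(a * a for a in v)
    return lam

def least_stable_eig(A, shift=100.0, iters=200):
    """Eigenvalue of A with the largest real part (real spectrum assumed),
    via power iteration on the shifted operator A + shift*I. The least stable
    mode of a stable LTI system sets its settling time, hence the working
    interval mentioned above."""
    n = len(A)
    mv = lambda v: [sum(A[i][j] * v[j] for j in range(n)) + shift * v[i]
                    for i in range(n)]
    return power_method(mv, n, iters) - shift
```

The shift must be large enough that all shifted eigenvalues are positive; 100.0 is an illustrative constant, not a general-purpose choice.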