926 results for variational Bayes, Voronoi tessellations


Relevance:

10.00%

Publisher:

Abstract:

Impedance inversion is central to seismic technology. It operates on the seismic profile: a good inversion result requires a high-quality profile, which in turn is produced by high-resolution imaging, and high-resolution processing demands a high signal-to-noise ratio, so improving the signal-to-noise ratio is essential for seismic inversion. The main idea is to recover the physical parameter that describes the stratigraphy directly (wave impedance) from seismic data that express structural style only indirectly. Because impedance inversion based on the convolution model has a non-unique solution, it is effective to use prior information as a constraint in the inversion. This dissertation presents an updated impedance inversion technique that overcomes the shortcomings of the traditional model and emphasizes the influence of structure. An impedance model is built under the constraints of the sedimentary model, layer-filling style, and congruence relations, so that impedance inversion constrained by geological rules can be realized. The main innovations of the dissertation are: 1. The optimal migration aperture is obtained from the included angle between the traveltime surfaces of the diffracted and reflected waves; constrained by the structural model, the dips of these traveltime surfaces are determined. 2. The conventional FXY noise-prediction method is improved, raising the signal-to-noise ratio. 3. Taking full account of the probability distributions of the seismic data and geological events, an objective function is constructed with Bayes estimation as the criterion, expressing the practical theory mathematically. 4. Taking structure into account, the seismic profile is interpreted to build a structural model; a series of structural models, and correspondingly impedance models, is constructed, so that the high-frequency content of the inversion is controlled by geological rules. 5. The conjugate gradient method is chosen to improve the solution process because it suits the requirements of geophysics and enhances the efficiency of the algorithm. Because the geological information is used fully, the impedance inversion result is geologically reasonable, and complex reservoirs can be predicted more reliably.
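As a concrete illustration of items 3 and 5, the following is a minimal sketch (not the dissertation's code) of impedance inversion under a convolution model with a Bayes-style prior term, solved with conjugate gradients; the wavelet, prior model, and trade-off weight are illustrative assumptions.

```python
# Minimal sketch: MAP-style impedance inversion under a convolution model,
# regularized toward a prior model and solved with conjugate gradients.
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(0)
n = 200
t = np.arange(-10, 11)
wavelet = (1 - 2 * (np.pi * 0.08 * t) ** 2) * np.exp(-(np.pi * 0.08 * t) ** 2)  # assumed Ricker wavelet
half = len(wavelet) // 2

# Forward operator G = W @ D: impedance -> reflectivity (first difference) -> seismic trace.
W = np.zeros((n, n))
for i in range(n):
    lo, hi = max(0, i - half), min(n, i + half + 1)
    W[i, lo:hi] = wavelet[lo - (i - half):hi - (i - half)]
D = np.eye(n, k=1) - np.eye(n)
G = W @ D

m_true = np.cumsum(rng.normal(0, 0.02, n)) + 1.0   # synthetic impedance-like profile
d = G @ m_true + 0.01 * rng.normal(size=n)         # noisy synthetic trace

m_prior = np.full(n, 1.0)   # low-frequency model supplied by the structural interpretation
lam = 1.0                   # trade-off between data fit and prior (the Bayes-criterion weight)

# Normal equations of the objective ||G m - d||^2 + lam ||m - m_prior||^2, solved by CG.
A = G.T @ G + lam * np.eye(n)
b = G.T @ d + lam * m_prior
m_est, info = cg(A, b)
print("CG converged:", info == 0)
```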

Relevance:

10.00%

Publisher:

Abstract:

Formation resistivity is one of the most important parameters in reservoir evaluation. To obtain the true resistivity of the virgin formation, various types of resistivity logging tools have been developed. However, as proved reserves grow, the pay zones of interest are becoming thinner and thinner, especially in terrestrial depositional oilfields, so that electrical logging tools, limited by the conflicting requirements of resolution and depth of investigation, cannot provide the true formation resistivity. Resistivity inversion techniques have therefore become popular for determining true formation resistivity from the improved logging data of new tools. In geophysical inverse problems, non-unique solutions are inevitable because the data are noisy and the measurements are incomplete. This dissertation addresses the problem from three aspects: data acquisition, data processing and inversion, and application of the results together with uncertainty evaluation of the non-unique solution. It also addresses other weaknesses of traditional inversion methods, such as slow convergence and the dependence of the result on the initial model. First, the uncertainties in the data to be processed are treated. The combination of the micro-spherically focused log (MSFL) and the dual laterolog (DLL) is the standard program for determining formation resistivity. In the inversion, the corrected MSFL readings are taken as the resistivity of the invaded zone; however, the errors can be as large as 30 percent because of mud-cake effects, even when the influence of a rugose borehole on the MSFL readings can be ignored. Furthermore, there is still debate over whether the two logs can be combined quantitatively to determine formation resistivity, since their measurement principles differ. A new type of laterolog tool is therefore designed theoretically; it provides three curves with different depths of investigation and nearly the same resolution, about 0.4 m. Second, because the popular iterative inversion method based on least-squares estimation cannot handle more than two parameters simultaneously, and because the new laterolog tool has not yet been applied in practice, the work focuses on two-parameter inversion (invasion radius and resistivity of the virgin formation) of conventional dual laterolog data. An unequally weighted damping-factor revision method is developed to replace the parameter-revision technique used in the traditional inversion: the parameter update depends not only on the damping itself but also on the difference between the measured data and the fitted data in different layers. At least two fewer iterations are needed than with the older method, so the computational cost of the inversion is reduced. The damped least-squares inversion method realizes Tikhonov's trade-off between the smoothness of the solution and the stability of the inversion process. It is implemented by linearizing the non-linear inverse problem, which inevitably makes the solution depend on the initial parameter values; consequently, the efficiency of such methods has been debated as non-linear processing methods have developed. An artificial neural network method is therefore proposed in this dissertation. A database of tool responses to formation parameters is built by modelling the laterolog tool and is then used to train the neural networks. A unit model is introduced to simplify the data space, and an additional physical constraint is applied to optimize the network after cross-validation. Results show that the neural network inversion can replace the traditional inversion within a single formation and can also be used to determine the initial values for the traditional method. Whatever method is used, non-uniqueness and uncertainty of the solution are unavoidable, so it is wise to evaluate them when applying the inversion results; Bayes' theorem provides a way to do so, and the approach is illustrated for a single formation with plausible results. Finally, the traditional least-squares inversion is applied to raw logging data: the calculated oil saturation is 20 percent higher than that obtained from unprocessed data, as checked against core analysis.
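As an illustration of the damped least-squares machinery discussed above, here is a minimal sketch of a two-parameter (invasion radius and true resistivity) damped least-squares iteration; the pseudo-geometric-factor "tool response" and the misfit-scaled, per-parameter damping are illustrative assumptions, not the dissertation's laterolog modelling code.

```python
# Minimal sketch: damped least-squares (Tikhonov / Levenberg-Marquardt style)
# inversion of two parameters [ri, Rt] from a toy deep/shallow tool response.
import numpy as np

def tool_response(ri, Rt, Rxo=2.0):
    """Toy pseudo-geometric-factor response of a deep and a shallow measurement."""
    Jd = np.exp(-ri / 1.5)          # deep curve: weak invasion influence
    Js = np.exp(-ri / 0.4)          # shallow curve: strong invasion influence
    return np.array([Jd * Rt + (1 - Jd) * Rxo, Js * Rt + (1 - Js) * Rxo])

def jacobian(ri, Rt, eps=1e-4):
    """Finite-difference Jacobian of the toy response."""
    f0 = tool_response(ri, Rt)
    J = np.empty((2, 2))
    J[:, 0] = (tool_response(ri + eps, Rt) - f0) / eps
    J[:, 1] = (tool_response(ri, Rt + eps) - f0) / eps
    return J

d_obs = tool_response(0.8, 20.0)     # synthetic "measured" deep/shallow readings
m = np.array([0.3, 5.0])             # initial guess [ri, Rt]
damp = np.array([1e-2, 1e-2])        # per-parameter (unequal) damping weights

for _ in range(20):
    r = d_obs - tool_response(*m)
    J = jacobian(*m)
    # Damped normal equations: (J^T J + diag(damp * |r|)) dm = J^T r.
    # Scaling the damping by the current misfit mimics the "unequal weighted
    # damp factor" revision described above (an assumption about its intent).
    dm = np.linalg.solve(J.T @ J + np.diag(damp * np.linalg.norm(r)), J.T @ r)
    m = m + dm
    if np.linalg.norm(dm) < 1e-8:
        break

print("estimated [ri, Rt]:", m)
```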

Relevance:

10.00%

Publisher:

Abstract:

Population data collected and stored by administrative region are a kind of statistical data. As a traditional method of spatial data representation, averaging the population over each administrative region yields low spatial and temporal precision. Accurate population data with high spatial resolution are becoming increasingly important for regional planning, environmental protection, policy making, and rural-urban development, and the spatial distribution of population has become an important topic in GIS research. This article reviews progress in research on the spatial distribution of population. Supported by GIS, relevant geographical theory, and a grid data model, remote sensing data, terrain data, traffic data, river data, residential data, and socio-economic statistics were used to estimate the spatial distribution of population in Fujian province. The work comprises the following parts: (1) Simulation of township-level boundaries. Based on an access-cost index, land-use data, traffic data, river data, a DEM, and relevant socio-economic statistics, an access-cost surface for the study area was constructed; using lowest-cost-path queries and a weighted Voronoi diagram, a DVT (Demarcation of Villages and Towns) model was established to simulate township-level boundaries in Fujian province. (2) Modelling of the spatial distribution of population. Based on geographical knowledge, seven impact factors were chosen as parameters: land use, altitude, slope, residential area, railways, roads, and rivers. Supported by GIS, the relationships between population distribution and these factors were analysed quantitatively, and population-density coefficients were calculated at the pixel scale; finally, a township-level model of population spatial distribution was established by multiplicatively fusing the population-density coefficients with the simulated township boundaries. (3) Error testing and analysis of the modelled population distribution. Both the numerical characteristics of the modelling error and its spatial distribution were analysed, and the sources of error were discussed.
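Part (1) rests on a weighted Voronoi partition; a minimal sketch of a multiplicatively weighted Voronoi assignment of grid cells to town centres follows (the centres, weights, and grid size are illustrative assumptions, not the Fujian data).

```python
# Minimal sketch: assign grid cells to the town centre with the smallest
# weighted distance d_i(p) = ||p - c_i|| / w_i (larger weight -> larger cell).
import numpy as np

centres = np.array([[20.0, 30.0], [60.0, 70.0], [80.0, 15.0]])  # town seed points
weights = np.array([1.0, 2.5, 1.5])                              # illustrative town weights

ny, nx = 100, 100
ys, xs = np.mgrid[0:ny, 0:nx]
grid = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)

dists = np.linalg.norm(grid[:, None, :] - centres[None, :, :], axis=2) / weights
labels = dists.argmin(axis=1).reshape(ny, nx)   # simulated township boundaries

# Each township's population can then be spread over its cells using
# pixel-level density coefficients (land use, slope, roads, ...).
print(np.bincount(labels.ravel()))              # number of cells per township
```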

Relevance:

10.00%

Publisher:

Abstract:

An increasing number of parameter estimation tasks involve the use of at least two information sources, one complete but limited, the other abundant but incomplete. Standard algorithms such as EM (or em) used in this context are unfortunately not stable in the sense that they can lead to a dramatic loss of accuracy with the inclusion of incomplete observations. We provide a more controlled solution to this problem through differential equations that govern the evolution of locally optimal solutions (fixed points) as a function of the source weighting. This approach permits us to explicitly identify any critical (bifurcation) points leading to choices unsupported by the available complete data. The approach readily applies to any graphical model in O(n^3) time where n is the number of parameters. We use the naive Bayes model to illustrate these ideas and demonstrate the effectiveness of our approach in the context of text classification problems.
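For context, a minimal sketch of the underlying source-weighted EM fixed-point problem for a two-class naive Bayes model is given below; sweeping the weight and re-running EM is only a crude stand-in for the paper's differential-equation tracking of fixed points, and the data are synthetic assumptions.

```python
# Minimal sketch: EM for a two-class naive Bayes model on binary features,
# with the unlabeled (incomplete) source down-weighted by lam.
import numpy as np

rng = np.random.default_rng(1)
D = 5
theta_true = np.array([[0.8] * D, [0.2] * D])                       # P(feature=1 | class)
y_lab = rng.integers(0, 2, 20)                                       # small labeled set
X_lab = rng.random((20, D)) < theta_true[y_lab]
X_unl = rng.random((500, D)) < theta_true[rng.integers(0, 2, 500)]   # abundant unlabeled set

def m_step(X, resp, weights):
    w = resp * weights[:, None]
    prior = (w.sum(axis=0) + 1.0) / (w.sum() + 2.0)                  # smoothed class prior
    theta = (w.T @ X + 1.0) / (w.sum(axis=0)[:, None] + 2.0)         # smoothed Bernoulli params
    return prior, theta

def e_step(X, prior, theta):
    logp = np.log(prior) + X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T
    logp -= logp.max(axis=1, keepdims=True)
    p = np.exp(logp)
    return p / p.sum(axis=1, keepdims=True)

X = np.vstack([X_lab, X_unl]).astype(float)
resp_lab = np.eye(2)[y_lab]                                          # labeled responsibilities are fixed

for lam in [0.0, 0.1, 0.5, 1.0]:                                     # weight on the incomplete source
    weights = np.concatenate([np.ones(len(X_lab)), np.full(len(X_unl), lam)])
    prior, theta = m_step(X[:len(X_lab)], resp_lab, weights[:len(X_lab)])
    for _ in range(50):                                              # iterate EM to a local fixed point
        resp = np.vstack([resp_lab, e_step(X[len(X_lab):], prior, theta)])
        prior, theta = m_step(X, resp, weights)
    print(f"lam={lam:.1f}  class-1 prior={prior[1]:.3f}")
```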

Relevance:

10.00%

Publisher:

Abstract:

Methods for fusing two computer vision methods are discussed, and several example algorithms are presented to illustrate the variational approach to fusing algorithms. The example algorithms seek to determine planet topography given two images taken from two different locations under two different lighting conditions. Each algorithm employs a single cost function that combines the computer vision methods of shape-from-shading and stereo in different ways. The algorithms are closely coupled and take into account all the constraints of the photo-topography problem. They are evaluated on four synthetic test image sets of varying difficulty.
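One plausible form of such a single coupled cost functional is sketched below (the thesis's exact terms and weights may differ): z(x, y) is the topography, R_1 and R_2 the reflectance maps under the two lighting conditions, and d(z) the stereo disparity induced by z.

```latex
% A sketch of one plausible coupled cost functional (not necessarily the
% thesis's exact formulation); the last term is a smoothness regularizer.
E(z) = \iint \Big[ \big(I_1 - R_1(z_x, z_y)\big)^2
               + \big(I_2 - R_2(z_x, z_y)\big)^2 \Big] \, dx\,dy
     + \beta  \iint \big(I_1(x, y) - I_2(x + d(z), y)\big)^2 \, dx\,dy
     + \lambda \iint \big(z_{xx}^2 + 2 z_{xy}^2 + z_{yy}^2\big) \, dx\,dy
```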

Relevance:

10.00%

Publisher:

Abstract:

The problem of detecting intensity changes in images is canonical in vision. Edge detection operators are typically designed to optimally estimate the first or second derivative over some (usually small) support. Other criteria, such as output signal-to-noise ratio or bandwidth, have also been argued for. This thesis is an attempt to formulate a set of edge detection criteria that capture as directly as possible the desirable properties of an edge operator. Variational techniques are used to find a solution over the space of all linear shift-invariant operators. The first criterion is that the detector have a low probability of error, i.e. of failing to mark edges or falsely marking non-edges. The second is that the marked points should be as close as possible to the centre of the true edge. The third is that there should be a low probability of more than one response to a single edge. The technique is used to find optimal operators for step edges and for extended impulse profiles (ridges or valleys in two dimensions). The extension of the one-dimensional operators to two dimensions is then discussed. The result is a set of operators of varying width, length, and orientation. The problem of combining their outputs into a single description is discussed, and a set of heuristics for the integration is given.
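For reference, the first two criteria are usually stated in the following form for a one-dimensional filter f applied to a step edge of amplitude A in white noise of level n_0 (a sketch of the standard statement; the thesis's exact normalization, and the third multiple-response criterion, may differ).

```latex
% Signal-to-noise ratio and localization criteria for a 1-D edge filter f
% with support [-W, W] on a step edge of amplitude A in noise of level n_0.
\mathrm{SNR} = \frac{A \left| \int_{-W}^{0} f(x)\, dx \right|}
                    {n_0 \sqrt{\int_{-W}^{W} f^2(x)\, dx}} ,
\qquad
\mathrm{Localization} = \frac{A \, |f'(0)|}
                             {n_0 \sqrt{\int_{-W}^{W} f'^{\,2}(x)\, dx}}
```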

Relevance:

10.00%

Publisher:

Abstract:

The motion planning problem is of central importance to the fields of robotics, spatial planning, and automated design. In robotics we are interested in the automatic synthesis of robot motions, given high-level specifications of tasks and geometric models of the robot and obstacles. The Mover's problem is to find a continuous, collision-free path for a moving object through an environment containing obstacles. We present an implemented algorithm for the classical formulation of the three-dimensional Mover's problem: given an arbitrary rigid polyhedral moving object P with three translational and three rotational degrees of freedom, find a continuous, collision-free path taking P from some initial configuration to a desired goal configuration. This thesis describes the first known implementation of a complete algorithm (at a given resolution) for the full six-degree-of-freedom Mover's problem. The algorithm transforms the six-degree-of-freedom planning problem into a point navigation problem in a six-dimensional configuration space (called C-Space). The C-Space obstacles, which characterize the physically unachievable configurations, are directly represented by six-dimensional manifolds whose boundaries are five-dimensional C-surfaces. By characterizing these surfaces and their intersections, collision-free paths may be found by the closure of three operators, which (i) slide along five-dimensional intersections of level C-Space obstacles; (ii) slide along one- to four-dimensional intersections of level C-surfaces; and (iii) jump between six-dimensional obstacles. Implementing the point navigation operators requires solving fundamental representational and algorithmic questions: we derive new structural properties of the C-Space constraints and show how to construct and represent C-surfaces and their intersection manifolds. A definition and new theoretical results are presented for a six-dimensional C-Space extension of the generalized Voronoi diagram, called the C-Voronoi diagram, whose structure we relate to the C-surface intersection manifolds. The representations and algorithms we develop have impact on many geometric planning problems, and extend to Cartesian manipulators with six degrees of freedom.
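The reduction to point navigation can be illustrated, in heavily simplified form, for a translating two-dimensional robot (the thesis treats the full six-degree-of-freedom rigid-body case analytically); the shapes and grid resolution below are illustrative assumptions.

```python
# Minimal sketch: workspace obstacles become C-space obstacles for a translating
# robot, and planning reduces to point navigation (here, BFS on a grid).
import numpy as np
from collections import deque

N = 60
workspace = np.zeros((N, N), dtype=bool)
workspace[20:40, 25:35] = True            # one rectangular obstacle
robot = np.ones((5, 5), dtype=bool)       # small square robot (reference point at a corner)

# C-space obstacle: configurations (x, y) where the placed robot overlaps an obstacle.
cobs = np.zeros_like(workspace)
for x in range(N - robot.shape[0]):
    for y in range(N - robot.shape[1]):
        if np.any(workspace[x:x + 5, y:y + 5] & robot):
            cobs[x, y] = True

def bfs(start, goal):
    """Point navigation in C-space: breadth-first search over free configurations."""
    prev, q = {start: None}, deque([start])
    while q:
        c = q.popleft()
        if c == goal:
            path = []
            while c is not None:
                path.append(c)
                c = prev[c]
            return path[::-1]
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (c[0] + dx, c[1] + dy)
            if 0 <= nxt[0] < N and 0 <= nxt[1] < N and not cobs[nxt] and nxt not in prev:
                prev[nxt] = c
                q.append(nxt)
    return None

path = bfs((2, 2), (50, 50))
print("path length:", len(path) if path else "no path")
```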

Relevance:

10.00%

Publisher:

Abstract:

Journal impact factor 3.050, JCR (2013), Q2, ranked 44/125 in Cardiac & Cardiovascular Systems.

Relevance:

10.00%

Publisher:

Abstract:

Cox, S.J. (2006) Calculations of the minimal perimeter for N deformable cells of equal area confined in a circle. Philosophical Magazine Letters. 86:569-578.

Relevance:

10.00%

Publisher:

Abstract:

C. Shang and Q. Shen. Aiding classification of gene expression data with feature selection: a comparative study. Computational Intelligence Research, 1(1):68-76.

Relevance:

10.00%

Publisher:

Abstract:

This is an author-created, un-copyedited version of an article accepted for publication in Acta Physica Polonica A. The Version of Record is available online at http://przyrbwn.icm.edu.pl/APP/PDF/118/a118z2p31.pdf

Relevance:

10.00%

Publisher:

Abstract:

Recent work in sensor databases has focused extensively on distributed query problems, notably distributed computation of aggregates. Existing methods for computing aggregates broadcast queries to all sensors and use in-network aggregation of responses to minimize messaging costs. In this work, we focus on uniform random sampling across nodes, which can serve both as an alternative building block for aggregation and as an integral component of many other useful randomized algorithms. Prior to our work, the best existing proposals for uniform random sampling of sensors involve contacting all nodes in the network. We propose a practical method which is only approximately uniform, but contacts a number of sensors proportional to the diameter of the network instead of its size. The approximation achieved is tunably close to exact uniform sampling, and only relies on well-known existing primitives, namely geographic routing, distributed computation of Voronoi regions and von Neumann's rejection method. Ultimately, our sampling algorithm has the same worst-case asymptotic cost as routing a point-to-point message, and thus it is asymptotically optimal among request/reply-based sampling methods. We provide experimental results demonstrating the effectiveness of our algorithm on both synthetic and real sensor topologies.
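A minimal sketch of the rejection idea follows: sample a uniform random location, find the sensor owning that location's Voronoi cell (the role played by geographic routing in the network), and accept it with probability inversely proportional to its cell area, so that every sensor becomes approximately equally likely; the synthetic deployment and the grid-based area estimate are illustrative assumptions.

```python
# Minimal sketch: near-uniform sensor sampling via Voronoi ownership plus
# von Neumann rejection on a synthetic deployment in the unit square.
import numpy as np

rng = np.random.default_rng(2)
sensors = rng.random((30, 2))                       # sensor positions

# Estimate each sensor's Voronoi cell area by nearest-neighbour counting on a grid.
g = np.stack(np.meshgrid(np.linspace(0, 1, 200), np.linspace(0, 1, 200)), axis=-1).reshape(-1, 2)
owner = np.linalg.norm(g[:, None, :] - sensors[None, :, :], axis=2).argmin(axis=1)
area = np.bincount(owner, minlength=len(sensors)) / len(g)

def sample_sensor():
    """Von Neumann rejection: repeat until a (near-uniform) sensor is accepted."""
    while True:
        p = rng.random(2)                                        # uniform random location
        s = np.linalg.norm(sensors - p, axis=1).argmin()         # Voronoi owner of p
        if rng.random() < area.min() / area[s]:                  # undo the cell-area bias
            return s

draws = [sample_sensor() for _ in range(3000)]
print("empirical min/max sensor frequency:", np.bincount(draws).min(), np.bincount(draws).max())
```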

Relevance:

10.00%

Publisher:

Abstract:

The goal of this work is to learn a parsimonious and informative representation for high-dimensional time series. Conceptually, this comprises two distinct yet tightly coupled tasks: learning a low-dimensional manifold and modeling the dynamical process. These two tasks have a complementary relationship as the temporal constraints provide valuable neighborhood information for dimensionality reduction and conversely, the low-dimensional space allows dynamics to be learnt efficiently. Solving these two tasks simultaneously allows important information to be exchanged mutually. If nonlinear models are required to capture the rich complexity of time series, then the learning problem becomes harder as the nonlinearities in both tasks are coupled. The proposed solution approximates the nonlinear manifold and dynamics using piecewise linear models. The interactions among the linear models are captured in a graphical model. The model structure setup and parameter learning are done using a variational Bayesian approach, which enables automatic Bayesian model structure selection, hence solving the problem of over-fitting. By exploiting the model structure, efficient inference and learning algorithms are obtained without oversimplifying the model of the underlying dynamical process. Evaluation of the proposed framework with competing approaches is conducted in three sets of experiments: dimensionality reduction and reconstruction using synthetic time series, video synthesis using a dynamic texture database, and human motion synthesis, classification and tracking on a benchmark data set. In all experiments, the proposed approach provides superior performance.
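As a small, self-contained illustration of how a variational Bayesian treatment performs automatic structure selection (not the paper's coupled manifold-plus-dynamics model), a Bayesian Gaussian mixture given too many components prunes the surplus ones by driving their weights toward zero; the data below are a synthetic assumption.

```python
# Minimal sketch: variational Bayes prunes unnecessary mixture components,
# avoiding the over-fitting that a fixed-size model would suffer.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(loc, 0.3, size=(200, 2))              # three true clusters
               for loc in ([0, 0], [3, 0], [0, 3])])

vb = BayesianGaussianMixture(n_components=10,                   # deliberately too many
                             weight_concentration_prior_type="dirichlet_process",
                             max_iter=500, random_state=0).fit(X)

print("effective components:", np.sum(vb.weights_ > 1e-2))      # surplus ones are pruned
```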

Relevance:

10.00%

Publisher:

Abstract:

Image warping, often referred to as "rubber sheeting", represents the deformation of a domain image space into a range image space. In this paper, a technique is described which extends the definition of a rubber-sheet transformation to allow a polygonal region to be warped into one or more subsets of itself, where the subsets may be multiply connected. To do this, it constructs a set of "slits" in the domain image, which correspond to discontinuities in the range image, using a technique based on generalized Voronoi diagrams. The concept of the medial axis is extended to describe inner and outer medial contours of a polygon. Polygonal regions are decomposed into annular subregions, and path homotopies are introduced to describe these subregions. The constructions motivate the definition of a ladder, which guides the construction of the grid point pairs needed to effect the warp itself.
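A minimal sketch of the Voronoi-based medial-axis construction follows: sample the polygon boundary densely, compute the Voronoi diagram of the samples, and keep the Voronoi vertices that fall inside the polygon as an approximation of the medial axis; the notched-rectangle polygon is an illustrative assumption.

```python
# Minimal sketch: approximate the medial axis of a polygon from the Voronoi
# diagram of densely sampled boundary points.
import numpy as np
from scipy.spatial import Voronoi
from matplotlib.path import Path

poly = np.array([[0, 0], [10, 0], [10, 4], [6, 4], [6, 2], [4, 2], [4, 4], [0, 4]], float)

# Densely sample points along each polygon edge.
samples = []
for a, b in zip(poly, np.roll(poly, -1, axis=0)):
    for t in np.linspace(0, 1, 40, endpoint=False):
        samples.append(a + t * (b - a))
samples = np.array(samples)

vor = Voronoi(samples)
inside = Path(poly).contains_points(vor.vertices)
medial_axis_pts = vor.vertices[inside]          # interior Voronoi vertices ~ medial axis
print("approximate medial-axis vertices:", len(medial_axis_pts))
```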

Relevance:

10.00%

Publisher:

Abstract:

We consider massless higher spin gauge theories with both electric and magnetic sources, with a special emphasis on the spin two case. We write the equations of motion at the linear level (with conserved external sources) and introduce Dirac strings so as to derive the equations from a variational principle. We then derive a quantization condition that generalizes the familiar Dirac quantization condition, and which involves the conserved charges associated with the asymptotic symmetries for higher spins. Next we discuss briefly how the result extends to the nonlinear theory. This is done in the context of gravitation, where the Taub-NUT solution provides the exact solution of the field equations with both types of sources. We rederive, in analogy with electromagnetism, the quantization condition from the quantization of the angular momentum. We also observe that the Taub-NUT metric is asymptotically flat at spatial infinity in the sense of Regge and Teitelboim (including their parity conditions). It follows, in particular, that one can consistently consider in the variational principle configurations with different electric and magnetic masses. © 2006 The American Physical Society.
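For reference, the familiar spin-one condition that the paper generalizes is shown below; the spin-two analogue replaces the electric and magnetic charges by the corresponding electric- and NUT-type conserved masses, with the precise coefficients as derived in the paper.

```latex
% The familiar Dirac quantization condition between an electric charge e and a
% magnetic (monopole) charge g; the paper's higher-spin generalization involves
% the conserved charges associated with the asymptotic symmetries instead.
\frac{e\, g}{4\pi \hbar} = \frac{n}{2}, \qquad n \in \mathbb{Z}.
```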