866 results for Reproducing kernel
Abstract:
In the prediction of complex reservoirs with highly heterogeneous lithologic and petrophysical properties, inversion results suffer from non-uniqueness, instability and uncertainty because the data are inexact (overlapping, incomplete and noise-contaminated) and the physical relationships are ambiguous. Reservoir prediction technologies based on linear assumptions are therefore unsuited to such complex areas. Starting from the limitations of conventional technologies, this thesis investigates several kernel problems: inversion from band-limited seismic data, inversion resolution, inversion stability, and ambiguous physical relationships. It combines deterministic, statistical and nonlinear geophysical theories and integrates geological information, rock physics, well data and seismic data to predict lithologic and petrophysical parameters. The resulting joint inversion technology is suited to areas with complex depositional environments and complex rock-physics relationships. By combining a nonlinear multistage Robinson seismic convolution model with an unconventional Caianiello neural network, the thesis unifies deterministic and statistical inversion. Through the Robinson seismic convolution model and a nonlinear self-affine transform, deterministic inversion is implemented by establishing a deterministic relationship between seismic impedance and the seismic response, which ensures inversion reliability. Through multistage seismic wavelets (MSW)/multistage seismic inverse wavelets (MSIW) and the Caianiello neural network, statistical inversion is implemented by establishing a statistical relationship between seismic impedance and the seismic response, which ensures robustness to noise. Direct and indirect inversion modes are used alternately to estimate and revise the impedance: the direct inversion result serves as the initial value for the indirect inversion, and the final high-resolution impedance profile is obtained from the indirect inversion, which greatly improves inversion precision. A nonlinear rock-physics convolution model is then adopted to relate impedance to porosity and clay content; through multistage decomposition and bidirectional edge-wavelet detection it can describe more complex rock-physics relationships, and the Caianiello neural network is again used to combine deterministic inversion, statistical inversion and nonlinear theory. Finally, by combining direct inversion based on vertical edge-detection wavelets with indirect inversion based on lateral edge-detection wavelets, the method integrates geological information, well data and seismic impedance to estimate high-resolution petrophysical parameters (porosity and clay content). These inversion results can be used for reservoir prediction and characterization. Multi-well constraints and separate-frequency inversion modes are adopted. Analysis of the resulting lithologic and petrophysical sections shows that the low-frequency sections reflect the macro structure of the strata, while the middle- and high-frequency sections reflect its detailed structure. The high-resolution sections can therefore be used to recognize sand-body boundaries and to predict hydrocarbon zones.
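The forward relationship that this kind of impedance inversion relies on can be illustrated with a minimal, single-stage form of the Robinson convolution model: impedance contrasts define a reflectivity series, which is convolved with a wavelet to produce the seismic trace. The sketch below assumes a zero-phase Ricker wavelet and a synthetic two-layer impedance log; the multistage wavelets and the Caianiello neural network of the thesis are not reproduced here.

import numpy as np

def ricker_wavelet(f_peak, dt, length):
    """Zero-phase Ricker wavelet with peak frequency f_peak (Hz), sampled at dt (s)."""
    t = np.arange(-length // 2, length // 2 + 1) * dt
    a = (np.pi * f_peak * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def impedance_to_trace(impedance, wavelet):
    """Forward model: reflectivity from impedance contrasts, convolved with a wavelet."""
    z = np.asarray(impedance, dtype=float)
    reflectivity = (z[1:] - z[:-1]) / (z[1:] + z[:-1])  # normal-incidence reflection coefficients
    return np.convolve(reflectivity, wavelet, mode="same")

# Illustrative two-layer impedance log, 2 ms sampling (values are assumptions).
impedance = np.concatenate([np.full(100, 6000.0), np.full(100, 8000.0)])
trace = impedance_to_trace(impedance, ricker_wavelet(f_peak=30.0, dt=0.002, length=64))
print(trace[95:105])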
Abstract:
The seismic technique plays the leading role in discovering oil and gas traps and searching for reserves throughout oil and gas exploration. It requires high-quality processed seismic data, with not only exact spatial positioning but also true amplitude, AVO-attribute and velocity information. The acquisition footprint degrades the precision and quality of imaging and of AVO-attribute and velocity analysis. The acquisition footprint is a relatively new concept for describing seismic noise in 3-D exploration, and it is not easy to understand. This paper begins with forward-modeled seismic data from a simple acoustic model, processes the data, and discusses the causes of the acquisition footprint. It concludes that the recording geometry is the main cause, since it leads to asymmetric distributions of coverage (fold), offset and azimuth among the grid cells. The paper summarizes the characteristics of the footprint and methods for describing it, and analyzes its influence on geological interpretation and on seismic-attribute and velocity analysis. Data reconstruction based on the Fourier transform is currently the main method for interpolating and extrapolating non-uniform data, but it poses an ill-conditioned inverse problem. A Tikhonov regularization strategy, which incorporates a priori information about the class of solutions being sought, can reduce the computational difficulty caused by the poor conditioning of the discretized kernel and the scarcity of observations. The method is statistical in nature, does not require manual selection of the regularization parameter, and hence yields appropriate inversion coefficients. Programming and trial calculations verify that the acquisition footprint can be removed through prestack data reconstruction. The paper also applies a migration-based processing method to remove the acquisition footprint: the fundamental principles and algorithms are surveyed, and seismic traces are weighted according to the area each trace occupies at different source-receiver distances. Adopting a gridding method instead of computing the areas of a Voronoi diagram reduces the difficulty of calculating the weights. Results on both model data and field seismic data demonstrate that incorporating a weighting scheme based on the relative area associated with each input trace with respect to its neighbors minimizes the artifacts caused by irregular acquisition geometry.
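To make the regularized-reconstruction idea concrete, the sketch below sets up a small Fourier-basis interpolation of irregularly sampled traces and solves the damped normal equations with a fixed Tikhonov parameter. The grid size, sampling pattern, test signal and damping value lam are illustrative assumptions, not the settings used in the thesis.

import numpy as np

rng = np.random.default_rng(0)
n = 128                                                     # regular output grid
observed = np.sort(rng.choice(n, size=60, replace=False))   # irregular acquisition positions
x_true = (np.sin(2 * np.pi * 3 * np.arange(n) / n)
          + 0.5 * np.sin(2 * np.pi * 7 * np.arange(n) / n))
d = x_true[observed] + 0.05 * rng.standard_normal(observed.size)

F = np.fft.ifft(np.eye(n), axis=0)    # inverse-DFT basis: x = F @ c, c = Fourier coefficients
A = F[observed, :]                    # forward operator restricted to observed positions
lam = 1e-2                            # Tikhonov damping parameter (assumed, not tuned)

# Solve the damped normal equations (A^H A + lam I) c = A^H d, then synthesize the full signal.
c = np.linalg.solve(A.conj().T @ A + lam * np.eye(n), A.conj().T @ d)
x_rec = (F @ c).real
print("rms reconstruction error:", np.sqrt(np.mean((x_rec - x_true) ** 2)))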
Abstract:
In the last several decades, owing to the fast development of computers, numerical simulation has become an indispensable tool in scientific research. Numerical simulation methods based on difference operators, such as the Finite Difference Method (FDM) and the Finite Element Method (FEM), have been widely used. However, in seismology and seismic prospecting one usually deals with geological models that have piecewise heterogeneous structures as well as volume heterogeneities between layers; the continuity of displacement and stress across irregular interfaces and the scattering of seismic waves induced by volume perturbations usually introduce errors in conventional difference-operator methods. The method discussed in this paper is based on elastic theory and integral theory. The seismic wave equation in the frequency domain is transformed into a generalized Lippmann-Schwinger equation, in which the wavefield contributed by the background is expressed by a boundary integral equation and the scattering by volume heterogeneities is accounted for. The boundary element-volume integral method based on this equation retains the advantages of the Boundary Element Method (BEM), such as reducing the dimension of the model by one, explicit use of displacement and stress continuity across irregular interfaces, high precision, and satisfaction of the boundary condition at infinity; it can also accurately simulate seismic scattering by volume heterogeneities. In this paper the concrete Lippmann-Schwinger equation is given for realistic geological models, and the complete coefficients at non-smooth points of the integral equation are presented. Because the boundary element-volume integral method uses fundamental solutions that are singular when the source point and the field point are very close, in both the two-dimensional and the three-dimensional case, the treatment of the singular kernel affects the precision of the method. A procedure based on integral transforms and integration by parts can treat points both on the boundary and inside the domain; it transforms the singular integral into an analytical one in both two and three dimensions and thus eliminates the singularity. In order to analyze elastic seismic wave scattering by regional irregular topographies, the analytical solution for problems of this type is discussed and the analytical solution for scattering of P waves by multiple canyons is given. For boundary reflections, the infinite-boundary-element absorbing boundary developed by a previous researcher is used. Comparisons between analytical solutions and concrete numerical examples validate the efficiency of the method. The sampling rate in elastic wave simulation is discussed thoroughly; in general, three elements per wavelength are sufficient, although more elements per wavelength are necessary for very complex problems. The frequency-domain seismic response of canyons with different types of random heterogeneities is also illustrated. We analyze how the random-medium model, the horizontal and vertical correlation lengths, the standard deviation, and the dimensionless frequency affect seismic wave amplification at the surface, thus providing a basis for choosing random-medium parameters in numerical simulation.
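For reference, a generic scalar-wave form of the Lippmann-Schwinger equation underlying this kind of formulation can be written as below; the elastic, boundary-plus-volume form actually used in the paper carries additional tensor indices and boundary-integral terms, so this is only an illustrative sketch.

u(\mathbf{x},\omega) \;=\; u_0(\mathbf{x},\omega) \;+\; \omega^{2} \int_{V} G_0(\mathbf{x},\mathbf{x}';\omega)\,\Delta m(\mathbf{x}')\,u(\mathbf{x}',\omega)\,\mathrm{d}V'

where u_0 is the background wavefield (expressible as a boundary integral over the layer interfaces), G_0 is the Green's function of the background medium, and \Delta m is the volume perturbation of the medium parameter (for the acoustic case, the squared slowness).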
Abstract:
The real Earth is far from an ideal elastic body. The movement of structures or fluids and scattering by thin layers inevitably affect seismic wave propagation, which manifests mainly as non-geometrical energy attenuation. Today most theoretical research and applications assume that the media studied are perfectly elastic. Ignoring viscoelasticity can, in some circumstances, distort amplitudes and phases, which in turn affects the traveltimes and waveforms used in imaging and inversion. In order to investigate the response of propagating seismic waves and to improve imaging and inversion quality in complex media, we must not only account for the attenuation of real media but also implement it with efficient numerical methods and imaging techniques. For numerical modeling, the most widely used methods, such as finite-difference, finite-element and pseudospectral algorithms, have difficulty improving accuracy and efficiency simultaneously. To partially overcome this difficulty, this paper devises a matrix differentiator method and an optimal convolutional differentiator method based on staggered-grid Fourier pseudospectral differentiation, as well as a staggered-grid optimal Shannon singular-kernel convolutional differentiator derived from distribution theory, which are then used to study seismic wave propagation in viscoelastic media. Comparisons and accuracy analysis demonstrate that the optimal convolutional differentiator methods resolve the trade-off between accuracy and efficiency well and are almost twice as accurate as finite differences of the same operator length; they efficiently reduce dispersion and provide high-precision waveform data. On the basis of frequency-domain wavefield modeling, we discuss the direct solution of the resulting linear equations and point out that, compared with time-domain methods, frequency-domain methods handle multi-source problems more conveniently and incorporate medium attenuation much more easily. We also demonstrate the equivalence of the time- and frequency-domain methods through numerical tests under assumptions on the non-relaxation modulus and quality factor, and analyze the causes of the remaining waveform differences. In frequency-domain waveform inversion, experiments are conducted with transmission, crosshole and reflection data. Using the relation between medium scales and characteristic frequencies, we analyze the capacity of frequency-domain sequential inversion to resist noise and to cope with the non-uniqueness of nonlinear optimization. In the crosshole experiments we identify the main sources of inversion error and determine how an incorrect quality factor affects the inverted results. For surface reflection data, several frequencies are chosen with an optimal frequency-selection strategy and used in sequential and simultaneous inversions to verify how important low-frequency data are to the inverted results and how simultaneous inversion helps to resist noise. Finally, I draw conclusions about the work in this dissertation, discuss in detail its existing and potential problems, and point out directions and theories that should be pursued further, which may provide a helpful reference for researchers interested in seismic wave propagation and imaging in complex media.
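As an illustration of the convolutional-differentiator idea, the sketch below builds a short staggered-grid first-derivative operator by tapering the ideal (Shannon) staggered differentiator with a cosine window and checks it against an analytic derivative. The operator length, the window and the test function are illustrative assumptions rather than the optimized operators designed in the dissertation.

import numpy as np

def staggered_diff_coeffs(half_len):
    """Coefficients c_m of a short staggered-grid first-derivative operator."""
    m = np.arange(1, half_len + 1)
    ideal = (-1.0) ** (m + 1) / (np.pi * (m - 0.5) ** 2)        # band-limited (Shannon) staggered differentiator
    taper = np.cos(np.pi * (m - 0.5) / (2.0 * half_len)) ** 2   # cosine (Hann-type) taper to suppress truncation ripple
    return ideal * taper

def staggered_derivative(f, dx, half_len=6):
    """First derivative evaluated at half-grid points x_{i+1/2}; interior points only."""
    c = staggered_diff_coeffs(half_len)
    n = len(f)
    out = np.full(n - 1, np.nan)
    for i in range(half_len - 1, n - 1 - half_len):
        out[i] = sum(cm * (f[i + m] - f[i - m + 1]) for m, cm in enumerate(c, start=1)) / dx
    return out

x = np.linspace(0.0, 2.0 * np.pi, 201)
f = np.sin(3.0 * x)
approx = staggered_derivative(f, x[1] - x[0])
exact = 3.0 * np.cos(3.0 * (x[:-1] + x[1:]) / 2.0)   # analytic derivative at the half-grid points
print("max interior error:", np.nanmax(np.abs(approx - exact)))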
Abstract:
The main purpose of this work is to develop a new method that removes, or at least reduces, the non-uniqueness of the nonlinear inversion of scattered seismic waves. Against the background of research on seismic inversion, the first chapter reviews recent progress in nonlinear inversion; in particular, the development, basic theories and assumptions of the major approaches to seismic inversion are analyzed, discussed and summarized from mathematical and physical points of view. The problems facing the mathematical foundations of seismic inversion are also discussed, and inverse problems of strong seismic scattering from strongly heterogeneous media in the Earth's interior are analyzed and reviewed. The kernel of the thesis is the construction of a new nonlinear inversion method for seismic scattering. The thesis provides the theory and method for introducing fixed-point theory into nonlinear seismic scattering inversion and for obtaining the solution, and gives a practical procedure for constructing a series of contractive mappings of the velocity parameter in the mapping space of the wavefield. The results establish the existence of a fixed point of the velocity parameter and give a method for finding it. Further, the thesis proves that the value obtained by substituting the fixed point of the velocity parameter into the wave equation is the fixed point of the wavefield under the contractive mapping. Hence the fixed point is the global minimum, owing to its stability. Based on this new theory, Chapter 3 presents many inversion results from numerical tests. Analysis of these results shows a basic fact: inversions started from different initial models all tend toward the true values of the theoretical model. In other words, the new method largely eliminates the non-uniqueness inherent in the nonlinear inversion of scattered seismic waves. However, since the test results are still quite limited, more tests are needed to confirm the theory. As a new theoretical method it inevitably has weaknesses; Chapter 4 points out the questions that remain open, and we hope more researchers will join us in solving them.
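The contraction-mapping mechanism this approach relies on can be illustrated with a minimal Banach fixed-point iteration. The mapping T below is a simple scalar contraction chosen purely for illustration (its fixed point is sqrt(a)), not the thesis's operator on the velocity and wavefield spaces, but it shows how different starting models converge to the same fixed point.

def fixed_point(T, x0, tol=1e-10, max_iter=100):
    """Iterate x <- T(x) until successive updates differ by less than tol."""
    x = x0
    for _ in range(max_iter):
        x_new = T(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example contraction: T(v) = 0.5 * (v + a / v), whose fixed point is sqrt(a).
a = 2.0
for v0 in (1.0, 10.0, 100.0):   # different "initial models" converge to the same point
    print(v0, "->", fixed_point(lambda v: 0.5 * (v + a / v), v0))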
Abstract:
Based on social survey data collected by local research groups in several Chinese counties over roughly the past five years, the author poses and solves two kernel problems in the field of social situation forecasting: i) how can individual-level attitude data be integrated with macro-level social situation data; and ii) how can the predictive power of forecasting models constructed with different statistical methods be compared? Five integrative statistics were applied: 1) the arithmetic average (MEAN); 2) the standard deviation (SD); 3) the coefficient of variation (CV); 4) the mixed second moment (M2); and 5) the tendency (TD). To solve the former problem, these five statistics were used to synthesize the individual-level and macro-level social situation data at the level of county regions, forming new integrative datasets; on this basis the latter problem was addressed: modeling methods such as Multiple Regression Analysis (MRA), Discriminant Analysis (DA) and the Support Vector Machine (SVM) were used to construct several forecasting models. Meanwhile, along the dimensions of stepwise vs. enter selection, short-term vs. long-term forecasting, and the different integrative (statistic) models, meta-analysis and power analysis were used to compare the predictive power of each model within and across modeling methods. The dissertation concludes that: 1) significant differences exist among the different integrative (statistic) models, with the tendency (TD) models having the highest power and the coefficient-of-variation (CV) models the lowest; 2) there is no significant difference in power between stepwise and enter models, nor between short-term and long-term forecasting models; 3) there are significant differences among models constructed with different methods, of which the support vector machine (SVM) has the highest statistical power. This research lays a foundation for deeper exploration of optimal social situation forecasting models; moreover, it is the first time that meta-analysis and power analysis have been introduced into the assessment of such forecasting models.
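The aggregation step described above can be sketched as follows: individual-level attitude scores are integrated into county-level statistics (mean, SD, coefficient of variation), which then feed a forecasting model. The data below are synthetic and the SVM is a generic scikit-learn regressor standing in for the dissertation's fitted models; all numbers are assumptions.

import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
n_counties, n_respondents = 40, 200

county_features, county_targets = [], []
for _ in range(n_counties):
    # Synthetic individual-level attitude scores for one county.
    attitudes = rng.normal(loc=rng.uniform(3, 5), scale=rng.uniform(0.5, 1.5), size=n_respondents)
    mean, sd = attitudes.mean(), attitudes.std(ddof=1)
    cv = sd / mean                                   # coefficient of variation
    county_features.append([mean, sd, cv])
    # Synthetic macro-level indicator to be forecast (illustrative relationship only).
    county_targets.append(2.0 * mean - 1.5 * cv + rng.normal(scale=0.1))

model = SVR(kernel="rbf").fit(county_features[:30], county_targets[:30])
print("held-out predictions:", model.predict(county_features[30:])[:3])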
Abstract:
A cellular automaton is an iterative array of very simple identical information processing machines called cells. Each cell can communicate with neighboring cells. At discrete moments of time the cells can change from one state to another as a function of the states of the cell and its neighbors. Thus, on a global basis, the collection of cells is characterized by some type of behavior. The goal of this investigation was to determine just how simple the individual cells could be while the global behavior achieved some specified criterion of complexity, usually the ability to perform a computation or to reproduce some pattern. The chief result described in this thesis is that an array of identical square cells (in two dimensions), each of which communicates directly with only its four nearest edge neighbors and each of which can exist in only two states, can perform any computation. This computation proceeds in a straightforward way. A configuration is a specification of the states of all the cells in some area of the iterative array. Another result described in this thesis is the existence of a self-reproducing configuration in an array of four-state cells, a reduction of four states from the previously known eight-state case. The technique of information processing in cellular arrays involves the synthesis of some basic components; the desired behaviors are then obtained by interconnecting these components. A chapter on components describes some sets of basic components. Possible applications of the results of this investigation, descriptions of some interesting phenomena (for vanishingly small cells), and suggestions for further study are given later.
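The model itself, a two-state, two-dimensional automaton in which each cell sees only its four edge neighbors, is easy to sketch. The update rule used below is the classic parity (XOR) rule, which replicates any seed pattern in displaced copies after a power-of-two number of steps; it illustrates the cell model only, not the thesis's computation-universal construction.

import numpy as np

def step(grid):
    """One synchronous update: new state = parity of the four von Neumann (edge) neighbors."""
    neighbors = (np.roll(grid, 1, axis=0) + np.roll(grid, -1, axis=0) +
                 np.roll(grid, 1, axis=1) + np.roll(grid, -1, axis=1))
    return neighbors % 2

grid = np.zeros((16, 16), dtype=int)
grid[7:9, 7:9] = 1                 # small 2x2 seed pattern
for _ in range(4):                 # after 2**k steps the seed reappears in four displaced copies
    grid = step(grid)
print(grid.sum(), "live cells")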
Abstract:
The STUDENT problem solving system, programmed in LISP, accepts as input a comfortable but restricted subset of English which can express a wide variety of algebra story problems. STUDENT finds the solution to a large class of these problems. STUDENT can utilize a store of global information not specific to any one problem, and may make assumptions about the interpretation of ambiguities in the wording of the problem being solved. If it uses such information or makes any assumptions, STUDENT communicates this fact to the user. The thesis includes a summary of other English language questions-answering systems. All these systems, and STUDENT, are evaluated according to four standard criteria. The linguistic analysis in STUDENT is a first approximation to the analytic portion of a semantic theory of discourse outlined in the thesis. STUDENT finds the set of kernel sentences which are the base of the input discourse, and transforms this sequence of kernel sentences into a set of simultaneous equations which form the semantic base of the STUDENT system. STUDENT then tries to solve this set of equations for the values of requested unknowns. If it is successful it gives the answers in English. If not, STUDENT asks the user for more information, and indicates the nature of the desired information. The STUDENT system is a first step toward natural language communication with computers. Further work on the semantic theory proposed should result in much more sophisticated systems.
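The kernel-sentence-to-equation step can be illustrated with a toy pipeline: a restricted-English problem is split into statements, each matched against a fixed pattern and mapped to a linear equation, and the resulting simultaneous equations are solved. The grammar and the problem below are invented for illustration; Bobrow's original system was written in LISP and handled a far richer subset of English.

import re
import numpy as np

problem = "The sum of x and y is 30 . x is 6 more than y ."
A, b = [], []
for sentence in problem.split("."):
    s = sentence.strip()
    if m := re.match(r"The sum of x and y is (\d+)", s):
        A.append([1.0, 1.0]); b.append(float(m.group(1)))    # x + y = N
    elif m := re.match(r"x is (\d+) more than y", s):
        A.append([1.0, -1.0]); b.append(float(m.group(1)))   # x - y = M

x, y = np.linalg.solve(np.array(A), np.array(b))
print(f"x = {x}, y = {y}")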
Abstract:
C.G.G. Aitken, Q. Shen, R. Jensen and B. Hayes. The evaluation of evidence for exponentially distributed data. Computational Statistics & Data Analysis, vol. 51, no. 12, pp. 5682-5693, 2007.
Abstract:
C.R. Bull, R. Zwiggelaar and J.V. Stafford, 'Imaging as a technique for assessment and control in the field', Aspects of Applied Biology 43, 197-204 (1995)
Abstract:
Estetyka w archeologii. Antropomorfizacje w pradziejach i starożytności [Aesthetics in Archaeology: Anthropomorphizations in Prehistory and Antiquity], eds. E. Bugaj, A. P. Kowalski, Poznań: Wydawnictwo Poznańskie.
Abstract:
Extensible systems allow services to be configured and deployed for the specific needs of individual applications. This paper describes a safe and efficient method for user-level extensibility that requires only minimal changes to the kernel. A sandboxing technique is described that supports multiple logical protection domains within the same address space at user level. This approach allows applications to register sandboxed code with the system that may be executed in the context of any process. Our approach differs from other implementations that require special hardware support, such as segmentation or tagged translation look-aside buffers (TLBs), to either implement multiple protection domains in a single address space or to support fast switching between address spaces. Likewise, we do not require the entire system to be written in a type-safe language in order to provide fine-grained protection domains. Instead, our user-level sandboxing technique requires only page-based virtual memory support, together with the requirement that extension code be written either in a type-safe language or by a trusted source. Using a fast method of upcalls, we show how our sandboxing technique for implementing logical protection domains provides significant performance improvements over traditional methods of invoking user-level services. Experimental results show our approach to be an efficient method for extensibility, with inter-protection domain communication costs close to those of hardware-based solutions leveraging segmentation.
Abstract:
The best-effort nature of the Internet poses a significant obstacle to the deployment of many applications that require guaranteed bandwidth. In this paper, we present a novel approach that enables two edge/border routers, which we call Internet Traffic Managers (ITMs), to use an adaptive number of TCP connections to set up a tunnel of desirable bandwidth between them. The number of TCP connections that comprise this tunnel is elastic in the sense that it increases/decreases in tandem with competing cross traffic to maintain a target bandwidth. An origin ITM would then schedule incoming packets from an application requiring guaranteed bandwidth over that elastic tunnel. Unlike many proposed solutions that aim to deliver soft QoS guarantees, our elastic-tunnel approach does not require any support from core routers (as with IntServ and DiffServ); it is scalable in the sense that core routers do not have to maintain per-flow state (as with IntServ); and it is readily deployable within a single ISP or across multiple ISPs. To evaluate our approach, we develop a flow-level control-theoretic model to study the transient behavior of established elastic TCP-based tunnels. The model captures the effect of cross-traffic connections on our bandwidth allocation policies. Through extensive simulations, we confirm the effectiveness of our approach in providing soft bandwidth guarantees. We also outline our kernel-level ITM prototype implementation.
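The elastic-tunnel idea can be sketched as a simple control loop: the origin ITM monitors the tunnel's achieved throughput and opens or closes TCP connections to track a target bandwidth. The fair-share throughput model, connection counts and thresholds below are illustrative assumptions standing in for real measurements, not the paper's control-theoretic model.

import random

def per_connection_share(cross_traffic_flows, tunnel_conns, capacity=100.0):
    """Synthetic fair-share model: link capacity split equally among all competing flows."""
    return capacity / (tunnel_conns + cross_traffic_flows)

target_bw = 30.0     # Mbps the tunnel should sustain (assumed)
conns = 1
for step in range(20):
    cross = random.randint(2, 12)                                 # competing cross-traffic flows
    tunnel_bw = conns * per_connection_share(cross, conns)
    # Simple incremental controller on the number of tunnel connections.
    if tunnel_bw < 0.95 * target_bw:
        conns += 1
    elif tunnel_bw > 1.05 * target_bw and conns > 1:
        conns -= 1
    print(f"step {step:2d}: cross={cross:2d} conns={conns:2d} tunnel_bw={tunnel_bw:5.1f} Mbps")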
Abstract:
Internet Traffic Managers (ITMs) are special machines placed at strategic places in the Internet. itmBench is an interface that allows users (e.g. network managers, service providers, or experimental researchers) to register different traffic control functionalities to run on one ITM or an overlay of ITMs. Thus itmBench offers a tool that is extensible and powerful yet easy to maintain. ITM traffic control applications could be developed either using a kernel API so they run in kernel space, or using a user-space API so they run in user space. We demonstrate the flexibility of itmBench by showing the implementation of both a kernel module that provides a differentiated network service, and a user-space module that provides an overlay routing service. Our itmBench Linux-based prototype is free software and can be obtained from http://www.cs.bu.edu/groups/itm/.
Abstract:
Current low-level networking abstractions on modern operating systems are commonly implemented in the kernel to provide sufficient performance for general purpose applications. However, it is desirable for high performance applications to have more control over the networking subsystem to support optimizations for their specific needs. One approach is to allow networking services to be implemented at user level. Unfortunately, this typically incurs costs due to scheduling overheads and unnecessary data copying via the kernel. In this paper, we describe a method to implement efficient application-specific network service extensions at user level that removes the cost of scheduling and provides protected access to lower-level system abstractions. We present a networking implementation that, with minor modifications to the Linux kernel, passes data between "sandboxed" extensions and the Ethernet device without copying or processing in the kernel. Using this mechanism, we put a customizable networking stack into a user-level sandbox and show how it can be used to efficiently process and forward data via proxies, or intermediate hosts, in the communication path of high performance data streams. Unlike other user-level networking implementations, our method imposes no special hardware requirements to avoid unnecessary data copies. Results show that we achieve a substantial increase in throughput over comparable user-space methods using our networking stack implementation.