917 results for PSEUDODIFFERENTIAL-OPERATORS


Relevance:

10.00%

Publisher:

Abstract:

3D wave equation prestack depth migration is an effective tool for accurately imaging complex geological structures, and it forms one stage of 3D seismic data processing. 3D seismic processing is a high-dimensional signal-processing problem that raises several difficult questions: how to handle high-dimensional operators, how to improve focusing, and how to construct the deconvolution operator. Implementing 3D wave equation prestack depth migration not only realizes the leap from poststack to prestack imaging, but also provides an important means of attacking these high-dimensional problems. This thesis investigates these questions around, and by means of, 3D wave equation prestack depth migration; it therefore serves both the implementation of the method and the improvement of its results. The thesis is organized in five parts, summarized as follows. In the first part, I project the 3D data volume onto lower-dimensional spaces using large-matrix transposition and trace rearrangement, realizing linear processing of the high-dimensional signal. I first give a mathematical expression for 3D seismic data and its physical meaning, present the basic idea of large-matrix transposition, and describe the implementation of five transposition modes as examples; I then present the basic idea and rules for trace rearrangement and parallel computation, with an example. In the part on conventional DMO focusing, I first review the history of DMO processing, give its fundamentals, and derive the DMO equation and its impulse response. From the kinematics of DMO I prove its equivalence to prestack time migration, and I derive the relationship between wave-equation-based DMO and prestack time migration.
Finally, I give an example DMO processing flow applied to synthetic data from theoretical models. In the part on wave equation prestack depth migration, I first review the history of migration, from time to depth, from poststack to prestack, and from 2D to 3D; I then survey the main migration methods and point out their merits and shortcomings. Finally, I obtain common image point gathers using the decomposed migration code. In the part on residual moveout, I first describe the Viterbi algorithm, based on Markov processes and compound decision theory, and how it solves the shortest-path problem; on this basis I implement residual moveout correction after 3D wave equation prestack depth migration, and give an example on real 3D seismic data. In the part on the migration Green function, I first introduce the concept of the migration Green function and the 2D Green function migration equation in the far-field approximation. I then prove the equivalence of the wave equation depth extrapolation algorithms and derive the Green function migration equation. Finally, I present the Green function response and migration result for a point source, and analyze the effect of migration aperture on the prestack migration result. This work helps clarify how migration aperture affects the migration result, and supports the study of Green function deconvolution to improve the focusing of migration.
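The residual-moveout step above applies the Viterbi algorithm as a shortest-path solver. A minimal sketch of that dynamic program, assuming a generic 2D cost panel (the function name and the `max_jump` smoothness constraint are illustrative, not taken from the thesis):

```python
import numpy as np

def viterbi_pick(cost, max_jump=1):
    """Dynamic-programming (Viterbi) pick of the minimum-cost path
    through a 2D panel, one row index per column, where the index may
    change by at most `max_jump` between adjacent columns.  `cost` is
    a hypothetical stand-in for a residual-moveout misfit panel
    (low cost = good alignment), shape (n_rows, n_cols)."""
    n_rows, n_cols = cost.shape
    total = np.full((n_rows, n_cols), np.inf)   # best cost to reach each cell
    back = np.zeros((n_rows, n_cols), dtype=int)  # backpointers
    total[:, 0] = cost[:, 0]
    for j in range(1, n_cols):
        for i in range(n_rows):
            lo, hi = max(0, i - max_jump), min(n_rows, i + max_jump + 1)
            k = lo + int(np.argmin(total[lo:hi, j - 1]))
            total[i, j] = total[k, j - 1] + cost[i, j]
            back[i, j] = k
    # backtrack from the cheapest final state
    path = [int(np.argmin(total[:, -1]))]
    for j in range(n_cols - 1, 0, -1):
        path.append(int(back[path[-1], j]))
    return path[::-1]
```

The same recursion generalizes to picking a smooth moveout curve per common-image gather; only the cost panel changes.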

Relevance:

10.00%

Publisher:

Abstract:

The seismic survey is the most effective geophysical prospecting method in the exploration and development of oil and gas. As the structure and lithology of geological targets become increasingly complex, seismic sections must offer higher resolution if the targets are to be described accurately, and a high signal-to-noise ratio is the precondition of high resolution. To improve the signal-to-noise ratio, we propose four methods for eliminating random noise, based on a detailed analysis of noise elimination by prediction filtering in the f-x-y domain; each method addresses a different shortcoming of that conventional technique. When the noise is weak and the filter is long, the noise contributes little to the filter; when the noise is strong and the filter is short, its contribution is significant, the prediction operators become inaccurate, and inaccurate operators produce incorrect results. We therefore propose prediction filtering by inversion in the f-x-y domain. The method assumes that the seismic data consist of a predictable part and an unpredictable part, and introduces prior information about the prediction operator into the objective function. It eliminates the influence of the noise on the filter operator, ensures that the operators are accurate, and effectively improves the filtered results. When the dips of the strata are complex, the data are usually divided into rectangular patches in order to estimate the prediction operators in the f-x-y domain. These patches need significant overlap to obtain a good result; the overlap means the same data are processed repeatedly, which effectively increases the size of the data, and the computational cost grows with the data size.
Computational efficiency therefore suffers. Moreover, the operators obtained by conventional f-x-y prediction filtering cannot follow the dip variations when the dips are complex, so the filtered results are aliased; and each patch is treated as an independent problem. To address these issues, we propose space-varying prediction filtering in the f-x-y domain. The prediction operators change with spatial position, which eliminates false events in the result. Prior information about the operators is introduced into the objective function, so estimating the operators of the patches is no longer a set of independent problems but one coupled problem; this avoids reprocessing the same data and improves computational efficiency. The random noise eliminated by prediction filtering in the f-x-y domain is Gaussian, and the conventional method cannot effectively eliminate non-Gaussian noise; prediction filtering with the lp norm (especially p=1), described in this paper, can. Finally, when the dip of the strata can be obtained accurately, we propose dip-constrained prediction filtering in the f-x-y domain, which effectively increases computational efficiency and improves the result. Tests on theoretical models and field data show that the four methods effectively solve these problems of the conventional method; they are highly practical and their effect is evident.
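As background for the four methods above, a minimal sketch of prediction filtering, assuming one spatial axis (f-x rather than f-x-y) and ordinary least squares rather than the proposed inversion scheme; the function name is illustrative. The idea is that linear events are predictable trace-to-trace at each temporal frequency, while random noise is not:

```python
import numpy as np

def fx_predict(data, filt_len=3):
    """Keep the trace-to-trace predictable part of `data` (n_t, n_x)
    at each temporal frequency; a simplified f-x stand-in for the
    f-x-y prediction filtering discussed in the text."""
    spec = np.fft.rfft(data, axis=0)            # to the f-x domain
    out = np.zeros_like(spec)
    n_x = data.shape[1]
    for f in range(spec.shape[0]):
        s = spec[f]
        # forward-prediction system: s[i] ~ sum_k a[k] * s[i-1-k]
        A = np.column_stack([s[filt_len - k - 1:n_x - k - 1]
                             for k in range(filt_len)])
        b = s[filt_len:]
        a, *_ = np.linalg.lstsq(A, b, rcond=None)
        pred = s.copy()
        pred[filt_len:] = A @ a                 # predictable part only
        out[f] = pred
    return np.fft.irfft(out, n=data.shape[0], axis=0)
```

A flat (perfectly linear) event is fully predictable and passes through unchanged; incoherent noise, which the operator cannot predict, is attenuated.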

Relevance:

10.00%

Publisher:

Abstract:

Objective: Psychosocial crises and psychiatric disorders are two kinds of stressor for suicidal behavior. This study explores differences in demographic characteristics, severity of depression, and suicidality among middle-aged and older crisis-line callers under psychosocial crisis, psychiatric disorder, or both stressors simultaneously, in order to develop effective intervention strategies for the crisis line. Methods: We analyzed data from 1,092 cases selected from national crisis-line callers aged 45 and over who were assessed with the "Suicide risk assessment" between December 2002 and December 2008. Based on the operators' judgments of the callers' reported difficulties and the classification system of the crisis-line database, the sample was divided into three groups (psychosocial crisis, mental health problems, and mixed stressors) within three caller types: general callers (48.2%, 32.3%, 19.5%), callers with current suicidal ideation (43.7%, 33.0%, 23.3%), and callers who had attempted suicide in the 2 weeks before the call (33.6%, 42.3%, 24.1%). χ² tests, Tukey-type multiple comparisons, and multinomial logistic regression were applied to analyze the differences among the three groups. Results: Consistent with previous studies, more females (71.3%, χ²=13.45, P<0.001), especially females affected by relationship stressors (76.8%, χ²=25.12, P<0.001), called in crisis. Among general callers, the detection rates of a major depressive episode were significantly higher in mixed-stressor callers (78.5%, P<0.001) and problem callers (68.7%, P<0.05) than in crisis callers (57.1%). The detection rate of suicidal ideation in mixed-stressor callers (71.4%) was significantly higher than in crisis callers (53.8%, P<0.001) and problem callers (60.9%, P<0.05).
The detection rates of prior suicide attempts in mixed-stressor callers (16.6%, P<0.05) and problem callers (18.5%, P<0.01) were significantly higher than in crisis callers (9.8%). More than half of the mixed-stressor callers (51.8%) reported a degree of hopelessness over 50%, significantly more than crisis callers (35.6%, P<0.01) and problem callers (38.2%, P<0.05). Fewer crisis callers had sought medical help than problem and mixed-stressor callers (χ²=241.35, 146.56, 50.87; all P<0.001). Compared with non-compound crisis callers, compound crisis callers showed significantly higher proportions of minor depression, severe depression, and prior depression diagnosis (14.0% vs. 17.4%; 54.9% vs. 65.2%; 0 vs. 2.2%; χ²=14.35, P<0.01), suicidal ideation (51.1% vs. 64.0%, P<0.05), and prior suicide attempts (8.4% vs. 15.0%, P<0.05), and more of them reported over 50% hopelessness (51.9% vs. 31.0%, χ²=11.96, P<0.01). Conclusion: As predicted, among middle-aged and elderly callers, mixed-stressor and compound crisis callers showed greater severity of depression and suicidality. Intervention strategies should be developed for specific stressors, and the promotion of crisis callers' medical help-seeking behavior should be emphasized.

Relevance:

10.00%

Publisher:

Abstract:

Research on psycho-simulation training of modern operators arose from the new demands of the technological revolution and the transformation of traditional industries in China. Having reviewed the history and current state of psychological research on personnel training in the West, the Soviet Union, and developing countries including China, the author holds that the principal problem in personnel technical training in China is that the theoretical study of technical ability has been neglected. To address this problem, the overall conception of the research was designed as follows. Intellectual skill plays an ever more important role among the elements of workers' technical ability, owing to the rapid progress of modern science and technology and the higher degree of automation in industry. If intellectual skill is emphasized in training, the formation of technical ability as a whole should be accelerated. To this end, the research adopted the psycho-simulation method to realize this conception.

Relevance:

10.00%

Publisher:

Abstract:

We have developed a compiler for the lexically-scoped dialect of LISP known as SCHEME. The compiler knows relatively little about specific data manipulation primitives such as arithmetic operators, but concentrates on general issues of environment and control. Rather than having specialized knowledge about a large variety of control and environment constructs, the compiler handles only a small basis set which reflects the semantics of lambda-calculus. All of the traditional imperative constructs, such as sequencing, assignment, looping, GOTO, as well as many standard LISP constructs such as AND, OR, and COND, are expressed in macros in terms of the applicative basis set. A small number of optimization techniques, coupled with the treatment of function calls as GOTO statements, serve to produce code as good as that produced by more traditional compilers. The macro approach enables speedy implementation of new constructs as desired without sacrificing efficiency in the generated code. A fair amount of analysis is devoted to determining whether environments may be stack-allocated or must be heap-allocated. Heap-allocated environments are necessary in general because SCHEME (unlike Algol 60 and Algol 68, for example) allows procedures with free lexically scoped variables to be returned as the values of other procedures; the Algol stack-allocation environment strategy does not suffice. The methods used here indicate that a heap-allocating generalization of the "display" technique leads to an efficient implementation of such "upward funargs". Moreover, compile-time optimization and analysis can eliminate many "funargs" entirely, and so far fewer environment structures need be allocated at run time than might be expected. A subset of SCHEME (rather than triples, for example) serves as the representation intermediate between the optimized SCHEME code and the final output code; code is expressed in this subset in the so-called continuation-passing style. 
As a subset of SCHEME, it enjoys the same theoretical properties; one could even apply the same optimizer used on the input code to the intermediate code. However, the subset is so chosen that all temporary quantities are made manifest as variables, and no control stack is needed to evaluate it. As a result, this apparently applicative representation admits an imperative interpretation which permits easy transcription to final imperative machine code. These qualities suggest that an applicative language like SCHEME is a better candidate for an UNCOL than the more imperative candidates proposed to date.
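To make the continuation-passing style concrete, here is a sketch in Python (standing in for SCHEME) of the same function in direct style and in CPS. In the CPS version every temporary value is made manifest as a parameter and every call is a tail call, which is the property that lets a compiler treat calls as GOTOs and dispense with a control stack:

```python
def fact(n):
    # direct style: the multiplication n * fact(n-1) leaves pending
    # work after the recursive call, so a control stack is needed
    return 1 if n == 0 else n * fact(n - 1)

def fact_cps(n, k):
    # CPS: the pending work is reified as the continuation `k`;
    # every call is a tail call, and every intermediate value
    # arrives as an explicit argument
    if n == 0:
        return k(1)
    return fact_cps(n - 1, lambda r: k(n * r))

print(fact_cps(5, lambda x: x))  # → 120
```

Python does not eliminate tail calls, so this only illustrates the shape of the representation; in the compiler described above, the tail calls in the intermediate code become direct jumps.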

Relevance:

10.00%

Publisher:

Abstract:

The problem of detecting intensity changes in images is canonical in vision. Edge detection operators are typically designed to optimally estimate the first or second derivative over some (usually small) support. Other criteria, such as output signal-to-noise ratio or bandwidth, have also been argued for. This thesis is an attempt to formulate a set of edge detection criteria that capture as directly as possible the desirable properties of an edge operator. Variational techniques are used to find a solution over the space of all linear shift-invariant operators. The first criterion is that the detector have a low probability of error, i.e. of failing to mark edges or of falsely marking non-edges. The second is that the marked points should be as close as possible to the centre of the true edge. The third criterion is that there should be a low probability of more than one response to a single edge. The technique is used to find optimal operators for step edges and for extended impulse profiles (ridges or valleys in two dimensions). The extension of the one-dimensional operators to two dimensions is then discussed. The result is a set of operators of varying width, length and orientation. The problem of combining these outputs into a single description is discussed, and a set of heuristics for the integration are given.
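A well-known practical consequence of these criteria is that the optimal step-edge operator is closely approximated by the first derivative of a Gaussian. A one-dimensional sketch under that approximation (the function name and σ choice are illustrative, not the exact variational solution):

```python
import numpy as np

def dgauss_edge_1d(signal, sigma=2.0):
    """Respond to step edges by convolving with a first-derivative-
    of-Gaussian kernel, a standard close approximation to the optimal
    step-edge operator.  Local maxima of |response| mark candidate
    edge positions."""
    radius = int(3 * sigma)                     # truncate at 3 sigma
    x = np.arange(-radius, radius + 1, dtype=float)
    kernel = -x / sigma**2 * np.exp(-x**2 / (2 * sigma**2))
    return np.convolve(signal, kernel, mode='same')

step = np.concatenate([np.zeros(50), np.ones(50)])
resp = dgauss_edge_1d(step)
print(int(np.argmax(np.abs(resp))))  # peak of the response, near sample 50
```

Widening σ trades localization (the second criterion) for noise suppression (the first), which is exactly the tension the variational formulation makes precise.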

Relevance:

10.00%

Publisher:

Abstract:

The motion planning problem is of central importance to the fields of robotics, spatial planning, and automated design. In robotics we are interested in the automatic synthesis of robot motions, given high-level specifications of tasks and geometric models of the robot and obstacles. The Mover's problem is to find a continuous, collision-free path for a moving object through an environment containing obstacles. We present an implemented algorithm for the classical formulation of the three-dimensional Mover's problem: given an arbitrary rigid polyhedral moving object P with three translational and three rotational degrees of freedom, find a continuous, collision-free path taking P from some initial configuration to a desired goal configuration. This thesis describes the first known implementation of a complete algorithm (at a given resolution) for the full six-degree-of-freedom Mover's problem. The algorithm transforms the six-degree-of-freedom planning problem into a point navigation problem in a six-dimensional configuration space (called C-Space). The C-Space obstacles, which characterize the physically unachievable configurations, are directly represented by six-dimensional manifolds whose boundaries are five-dimensional C-surfaces. By characterizing these surfaces and their intersections, collision-free paths may be found by the closure of three operators which (i) slide along 5-dimensional intersections of level C-Space obstacles; (ii) slide along 1- to 4-dimensional intersections of level C-surfaces; and (iii) jump between 6-dimensional obstacles. Implementing the point navigation operators requires solving fundamental representational and algorithmic questions: we derive new structural properties of the C-Space constraints and show how to construct and represent C-surfaces and their intersection manifolds.
A definition and new theoretical results are presented for a six-dimensional C-Space extension of the generalized Voronoi diagram, called the C-Voronoi diagram, whose structure we relate to the C-surface intersection manifolds. The representations and algorithms we develop impact many geometric planning problems, and extend to Cartesian manipulators with six degrees of freedom.
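The core reduction above — planning for a rigid body becomes point navigation in configuration space — can be illustrated with a drastically simplified sketch: a 2D grid stand-in for the six-dimensional C-Space, searched by breadth-first search rather than by sliding along C-surface intersections. The function name and grid encoding are illustrative:

```python
from collections import deque

def cspace_path(free, start, goal):
    """Breadth-first point navigation on a discretized configuration
    space.  `free[i][j]` is True where the configuration is
    collision-free (i.e. outside every C-Space obstacle); returns a
    list of grid configurations from start to goal, or None."""
    rows, cols = len(free), len(free[0])
    prev = {start: None}                  # visited set + backpointers
    q = deque([start])
    while q:
        cur = q.popleft()
        if cur == goal:                   # reconstruct the path
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        i, j = cur
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= ni < rows and 0 <= nj < cols \
                    and free[ni][nj] and (ni, nj) not in prev:
                prev[(ni, nj)] = cur
                q.append((ni, nj))
    return None                           # goal unreachable

# a wall of C-Space obstacle cells forces a detour around row 1
free = [[True, True, True],
        [False, False, True],
        [True, True, True]]
print(cspace_path(free, (0, 0), (2, 0)))
```

The thesis's contribution is precisely that it avoids this kind of uniform discretization in six dimensions, representing the obstacle boundaries exactly as manifolds and navigating along their intersections instead.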

Relevance:

10.00%

Publisher:

Abstract:

C.H. Orgill, N.W. Hardy, M.H. Lee, and K.A.I. Sharpe. An application of a multiple agent system for flexible assembly tasks. In Knowledge Based Environments for Industrial Applications Including Cooperating Expert Systems in Control. IEE, London, 1989.

Relevance:

10.00%

Publisher:

Abstract:

Walker, J., Garrett, S. and Wilson, M.S., 'Evolving Controllers for Real Robots: A Survey of the Literature', Adaptive Behavior, 2003, volume 11, number 3, pp. 179-203, Sage

Relevance:

10.00%

Publisher:

Abstract:

I. Wood: Maximal Lp-regularity for the Laplacian on Lipschitz domains, Math. Z., 255, 4 (2007), 855-875.

Relevance:

10.00%

Publisher:

Abstract:

X. Wang, J. Yang, X. Teng, W. Xia, and R. Jensen. Feature Selection based on Rough Sets and Particle Swarm Optimization. Pattern Recognition Letters, vol. 28, no. 4, pp. 459-471, 2007.

Relevance:

10.00%

Publisher:

Abstract:

Infantolino, B., Gales, D., Winter, S., Challis, J., The validity of ultrasound estimation of muscle volumes, Journal of Applied Biomechanics, ISSN 1065-8483, Vol. 23, No. 3, 2007, pp. 213-217 RAE2008

Relevance:

10.00%

Publisher:

Abstract:

Iantchenko, A.; Sjöstrand, J.; Zworski, M., (2002) 'Birkhoff normal forms in semi-classical inverse problems', Mathematical Research Letters 9(3) pp.337-362 RAE2008

Relevance:

10.00%

Publisher:

Abstract:

Wood, Ian; Geissert, M.; Heck, H.; Hieber, M., (2005) 'The Ornstein-Uhlenbeck semigroup in exterior domains', Archiv der Mathematik 86 pp.554-562 RAE2008

Relevance:

10.00%

Publisher:

Abstract:

Gough, John, (2004) 'Holevo-Ordering and the Continuous-Time Limit for Open Floquet Dynamics', Letters in Mathematical Physics 67(3) pp.207-221 RAE2008