966 results for SOLVABLE LIE-ALGEBRAS
Abstract:
Seismic wave-field numerical modeling and seismic migration imaging based on the wave equation have become useful and absolutely necessary tools for imaging complex geological objects. An important task in numerical modeling is the approximation of the matrix exponential in wave-field extrapolation. For small matrices, the square-root operator in the exponential can be approximated using different splitting algorithms. Splitting algorithms are usually applied to the order or the dimension of the one-way wave equation to reduce the complexity of the problem. In this paper, we derive an approximate equation for 2-D Helmholtz operator inversion using a multi-way splitting operation. Analysis of the Gauss integral and of the coefficients of the optimized partial fraction shows that dispersion may accumulate in splitting algorithms for steep-dip imaging. High-order symplectic Padé approximation can address this problem; however, approximating the square-root operator in the exponential with a splitting algorithm cannot solve the dispersion problem in one-way wave-field migration imaging. We attempt an exact approximation through eigenfunction expansion of the matrix. The Fast Fourier Transform (FFT) method is selected because of its low computational cost. An 8th-order Laplace matrix splitting is performed with the FFT method to obtain an assemblage of small matrices. With the introduction of Lie group and symplectic methods into seismic wave-field extrapolation, accurate approximation of the matrix exponential based on Lie group and symplectic methods has become an active research field. To solve the matrix exponential approximation problem, the Second-kind Coordinates (SKC) method and the Generalized Polar Decompositions (GPD) method of Lie groups are chosen. The SKC method uses a generalized Strang-splitting algorithm, while the GPD method uses polar-type and symmetric polar-type splitting algorithms. Compared with Padé approximation, these two methods require less computation, and both preserve the Lie group structure. We consider the SKC and GPD methods promising and attractive for both research and practice.
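To make the splitting discussion concrete, here is a minimal Python sketch (not from the paper) comparing the exact matrix exponential, computed via SciPy's Padé-based expm, with a second-order Strang splitting exp(A dt/2) exp(B dt) exp(A dt/2); the matrix size, the skew-symmetric choice of A and B, and the step size are purely illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical small example: split a matrix H = A + B and compare the exact
# exponential with a second-order Strang splitting, as discussed in the abstract.
rng = np.random.default_rng(0)
n = 8
A = rng.standard_normal((n, n)); A = 0.5 * (A - A.T)   # skew-symmetric part
B = rng.standard_normal((n, n)); B = 0.5 * (B - B.T)
H = A + B
dt = 0.01

exact = expm(H * dt)                                          # Pade-based reference
strang = expm(A * dt / 2) @ expm(B * dt) @ expm(A * dt / 2)   # splitting approximation

print("splitting error:", np.linalg.norm(exact - strang))     # O(dt^3) per step
```

Repeating the comparison over many small steps illustrates how splitting errors can accumulate, which is the dispersion issue the abstract raises for steep-dip imaging.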
Abstract:
The Otindag sandy land and the Guyuan region of Hebei Province lie in the agro-pastoral zone, where sandy desertification is serious, making them typical regions for this study. In this paper, detailed investigations were made using remote sensing, hydrochemistry, chronology, and grain-size analysis of the study region to monitor sandy desertification and its environmental background. The main conclusions are as follows. 1. According to the diverse natural conditions, the research area is divided into three types: sandy land desertification, cultivated land desertification, and desertification reflected by lake change. Monitoring of the first type shows that sandy desertification in the Otindag sandy land manifests mainly as (1) the expansion of both shifting dunes and semi-fixed dunes, and (2) the reduction of fixed dunes. The results for the second type show that (1) desertified land in the Guyuan region first increased and then decreased over roughly 30 years; (2) sand is concentrated mainly in the west of the study area, with a small amount of wind-drift sand distributed in patches in the northeast; and (3) the meadow area has increased markedly. As for the third type, the area of Dalai Nur lake first expanded and then shrank, while the wind-drift sand around the lake first decreased and then increased. 2. Land cover of the different types changes according to the same pattern. It is worth noting that the lake area changes in the opposite sense to the wind-drift sand. 3. From about 5,000 a B.P. to 2,800 a B.P., well-developed palaeosols emerged; after that, three palaeosol layers were found in the profile of the Otindag sandy land. Grain-size analyses show that sand grains in the south are coarser than those in the north; sand in the north and middle is well sorted, while that in the south is poorly sorted. 4. Both natural factors and human activity affect the process of sandy desertification, and according to these results their relative influence differs between regions, so measures to combat sandy desertification should be chosen accordingly.
Abstract:
The dissertation addresses the problems of signal reconstruction and data restoration in seismic data processing, taking signal representation methods as the main thread and seismic information reconstruction (signal separation and trace interpolation) as the core. For signal representation on natural bases, I present ICA fundamentals and algorithms and their original applications to natural earthquake signal separation and exploration seismic signal separation. For signal representation on deterministic bases, the dissertation proposes least-squares inversion regularization methods for seismic data reconstruction, sparseness constraints, preconditioned conjugate gradient methods, and their applications to seismic deconvolution, Radon transformation, etc. The core content is a de-aliasing reconstruction algorithm for unevenly sampled seismic data and its application to seismic interpolation. Although the dissertation discusses two cases of signal representation, they can be integrated into one framework, because both deal with signal or information restoration: the former reconstructs original signals from mixed signals, the latter reconstructs complete data from sparse or irregular data. They share the same goal: to provide pre-processing and post-processing methods for seismic pre-stack depth migration. ICA can separate original signals from their mixtures, or extract the basic structure of the analyzed data. I survey the fundamentals, algorithms, and applications of ICA. In comparison with the KL transformation, I propose the concept of an independent components transformation (ICT). On the basis of the negentropy measure of independence, I implemented FastICA and improved it using the covariance matrix. By analyzing the characteristics of seismic signals, I introduced ICA into seismic signal processing, a first in the geophysical community, and implemented noise separation from seismic signals. Synthetic and real data examples show the applicability of ICA to seismic signal processing, and initial results are encouraging. ICA is applied to separating earthquake converted waves from multiples in a sedimentary area, with good results, allowing a more reasonable interpretation of subsurface discontinuities. The results show the promise of applying ICA to geophysical signal processing. Exploiting the relationship between ICA and blind deconvolution, I survey seismic blind deconvolution and discuss the prospects of applying ICA to it, with two possible solutions. The relationship among PCA, ICA, and the wavelet transform is established, and it is proved that the reconstruction of wavelet prototype functions is a Lie group representation. In addition, an over-sampled wavelet transform is proposed to enhance seismic data resolution and is validated by numerical examples. The key to pre-stack depth migration is the regularization of pre-stack seismic data, for which seismic interpolation and missing-data reconstruction are necessary procedures. First, I review seismic imaging methods to argue for the critical role of regularization. Reviewing seismic interpolation algorithms, I note that de-aliased reconstruction of unevenly sampled data is still a challenge. The fundamentals of seismic reconstruction are discussed first; then sparseness-constrained least-squares inversion and a preconditioned conjugate gradient solver are studied and implemented.
Choosing a Cauchy-distributed constraint term, I programmed the PCG algorithm and implemented sparse seismic deconvolution and high-resolution Radon transformation by PCG, in preparation for seismic data reconstruction. For seismic interpolation, de-aliased interpolation of evenly sampled data and reconstruction of unevenly sampled data each work well on their own, but they could not previously be combined. In this paper, a novel Fourier-transform-based method and algorithm are proposed that can reconstruct both unevenly sampled and aliased seismic data. I formulate band-limited data reconstruction as a minimum-norm least-squares inversion problem in which an adaptive DFT-weighted norm regularization term is used. The inverse problem is solved by the preconditioned conjugate gradient method, which makes the solution stable and quickly convergent. Based on the assumption that seismic data consist of a finite number of linear events, and following the sampling theorem, aliased events can be attenuated via least-squares weights predicted linearly from low frequencies. Three applications are discussed: interpolation of evenly gapped traces, filling of uneven gaps, and reconstruction of high-frequency traces from low-frequency data constrained by a few high-frequency traces. Both synthetic and real data examples show that the proposed method is valid, efficient, and applicable. The research is valuable for seismic data regularization and cross-well seismics. To meet the data requirements of 3D shot-profile depth migration, schemes must be adopted to make the data evenly sampled and consistent with the velocity dataset. The methods of this paper are used to interpolate and extrapolate the shot gathers instead of simply embedding zero traces, which enlarges the migration aperture and improves the migration result, demonstrating the method's effectiveness and practicability.
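As a rough illustration of the reconstruction idea (a minimal sketch under simplifying assumptions, not the dissertation's algorithm), the following recovers a single band-limited trace from irregular samples by minimum-norm least squares over low-wavenumber Fourier coefficients, solved with conjugate gradients; the fixed band limit and damping stand in for the adaptive DFT-weighted norm described above, and all signal parameters are made up.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

# Hypothetical 1-D illustration: recover a band-limited signal on a regular grid
# from irregular samples by minimum-norm least squares, d = A x, where only
# low-wavenumber Fourier coefficients x are allowed (a stand-in for the adaptive
# DFT-weighted regularization described in the abstract).
n, k_max = 128, 12                        # grid length, band limit
t = np.arange(n)
idx = np.sort(np.random.default_rng(1).choice(n, size=40, replace=False))
true = np.cos(2 * np.pi * 3 * t / n) + 0.5 * np.sin(2 * np.pi * 7 * t / n)
d = true[idx]                             # irregular observations

F = np.exp(2j * np.pi * np.outer(t, np.arange(-k_max, k_max + 1)) / n)  # inverse-DFT basis
A = F[idx, :]                             # basis sampled at the observed positions

# Damped normal equations (A^H A + eps I) x = A^H d, solved with conjugate gradients.
eps = 1e-3
op = LinearOperator((A.shape[1],) * 2,
                    matvec=lambda x: A.conj().T @ (A @ x) + eps * x,
                    dtype=complex)
x, _ = cg(op, A.conj().T @ d)
recon = (F @ x).real                      # reconstructed trace on the full grid
print("max reconstruction error:", np.abs(recon - true).max())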
Abstract:
In order to discover the distribution pattern of the remaining oil, this paper focuses on the quantitative characterization of reservoir heterogeneity and the distribution of fluid barriers and interbeds, based on a fine-scale geological study of the reservoir in the Liuhua 11-1 oil field. A refined quantitative reservoir geological model has been established by means of core analysis, logging evaluation of vertical and parallel wells, and seismic interpretation and prediction. Using a comprehensive technology combining dynamic and static data, the distribution characteristics, formation conditions, and controlling factors of remaining oil in the Liuhua 11-1 oil field are illustrated. The study plays an important role in identifying enrichment regions of the remaining oil and gives scientific guidance for its further development. The main achievements are as follows. 1. On the basis of reservoir division and correlation, eight lithohorizons (layers A, B_1, B_2, B_3, C, D, E, and F) from the top to the bottom of the reservoir are discriminated. The reef facies is subdivided into reef-core, fore-reef, and back-reef subfacies, which are further subdivided into five microfacies: coral algal limestone, coralgal micrite, coral algal clastic limestone, bioclastic limestone, and foraminiferal limestone. To illustrate the distribution of remaining oil in the high-water-cut period, the stratigraphic structure model and sedimentary model are reconstructed. 2. To study intra-layer, inter-layer, and areal reservoir heterogeneity, a new method of characterizing reservoir heterogeneity using an Index of Reservoir Heterogeneity (IRH) is introduced. The results indicate that reservoir heterogeneity is moderate in layers B_1 and B_3, strong in layers A, B_2, C, and E, and weak in layer D. 3. Based on the study of the distribution of fluid barriers and interbeds, their effect on fluid seepage is revealed. Fluid barriers and interbeds are abundant in layer A, where they control the distribution of crude oil, and relatively abundant in layers B_2, C, and E, where they control the spill movement of the bottom water. Layers B_1, B_3, and D tend to be waterflooded because fluid barriers and interbeds are sparse there. 4. Based on the analysis of reservoir heterogeneity, fluid barriers and interbeds, and the distribution of bottom water, four contributing regions are identified: the main one lies north of well LH11-1A; two minor ones lie east of well LH11-1-3 and between wells LH11-1-3 and LH11-1-5; and the last lies in layer E, where the interbed is discontinuous. 5. The reservoir and fluid parameters are obtained by means of core analysis, logging evaluation of vertical and parallel wells, and seismic interpretation and prediction. These parameters provide data for the quantitative characterization of reservoir heterogeneity and of the distribution of fluid barriers and interbeds. 6. An integrated method for predicting the distribution of remaining oil is put forward on the basis of the refined reservoir geological model and reservoir numerical simulation. The precision of history matching and remaining-oil prediction is greatly improved, and the integrated study embodies the latest trend in this research field.
7. It is shown that the enrichment of the remaining oil in the high-water-cut Liuhua 11-1 oil field is influenced by reservoir heterogeneity, fluid barriers and interbeds, the sealing properties of faults, the drive mechanism of the bottom water, and the exploitation scheme of the parallel wells. 8. Using microfacies, IRH, reservoir structure, effective thickness, reservoir physical properties, the distribution of fluid barriers and interbeds, and the analysis of oil and water movement and production data, twelve new sidetracked holes are proposed and justified. The result provides useful guidance for oil field development and has achieved good effect.
Abstract:
With geophones located inside the well, VSP can simultaneously record upgoing and downgoing P waves and upgoing and downgoing S waves. Aiming to overcome the shortcomings of existing VSP velocity tomography, attenuation tomography, inverse Q filtering, and VSP imaging methods, this article mainly does the following work. (1) Common-source-point ray tracing is performed by solving the ray-tracing equations with a Runge-Kutta method, providing traveltimes, raypaths, and amplitudes for VSP velocity tomography, attenuation tomography, and VSP multiwave migration. (2) The velocity distribution can be inverted from the difference between the computed and observed traveltimes of the VSP downgoing waves. Two methods are put forward: A. a VSP velocity-building tomography method that does not rely on a layered model, from which the slowness at the grid nodes can be derived; B. a deformable-layer tomography method, from which the location of an interface can be obtained when the layer velocity is known. (3) On the basis of the velocity tomography, and using the attenuation information carried by the VSP seismic waves, the attenuation distribution of the subsurface can be derived. An algorithm is also presented that solves the inverse Q filtering problem directly and accurately from the Q modeling equation; numerical results show that the algorithm gives reliable results. (4) Based on the principle that the conversion point is where the four wave types originate and where the stacked energy is largest, this article presents a VSP multiwave Kirchhoff migration method; applications to synthetic examples and field seismic records show that the algorithm gives reliable results. (5) When the interface location is determined and the P-wave and S-wave velocities are known, the transmission and reflection coefficients can be obtained, and thereby the elastic parameters; this method is also put into use and yields good results. Overall, applications to models and field seismic records show that the methods above are efficient and accurate.
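For illustration of step (1), a minimal 2-D kinematic ray-tracing sketch is given below (an assumption-laden sketch, not the article's code), integrating the ray equations dx/ds = v p, dp/ds = -∇v/v², dT/ds = 1/v with SciPy's Runge-Kutta integrator for a constant-gradient velocity model; the model parameters, take-off angle, and integration length are made up.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical 2-D kinematic ray-tracing sketch for a VSP-like geometry,
# assuming a velocity model v(x, z) = v0 + g*z (constant vertical gradient).
# State y = [x, z, px, pz, T]; arc-length parameterization:
#   dx/ds = v*px, dz/ds = v*pz, dp/ds = -grad(v)/v**2, dT/ds = 1/v.
v0, g = 2000.0, 0.5                        # m/s, 1/s (illustrative values)

def v(x, z):  return v0 + g * z
def grad_v(x, z):  return np.array([0.0, g])

def rhs(s, y):
    x, z, px, pz, T = y
    vel = v(x, z)
    dvx, dvz = grad_v(x, z)
    return [vel * px, vel * pz, -dvx / vel**2, -dvz / vel**2, 1.0 / vel]

theta = np.deg2rad(30.0)                   # take-off angle from vertical
p0 = np.array([np.sin(theta), np.cos(theta)]) / v(0.0, 0.0)
sol = solve_ivp(rhs, [0.0, 3000.0], [0.0, 0.0, p0[0], p0[1], 0.0], max_step=10.0)
x, z, T = sol.y[0], sol.y[1], sol.y[4]
print("end point (m):", x[-1], z[-1], " traveltime (s):", T[-1])
```

Shooting a fan of such rays from a surface source to downhole geophone depths yields the traveltimes, raypaths, and amplitudes that feed the tomography and migration steps.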
Abstract:
Paleointensity changes of the geomagnetic field help us understand the evolution of the Earth more completely and provide further constraints on Earth's interior processes and geodynamo models. Marine sediments are good carriers of the relative paleointensity of the geomagnetic field. However, in most cases deep-sea sediments that satisfy magnetic "uniformity" have low sedimentation rates of about 1-2 cm/ka and lie below the carbonate compensation depth, with little carbonate content. Therefore, the number of relative paleointensity records with detailed oxygen-isotope stratigraphy is still small. This thesis focuses on four cores from east of the Ryukyu Trench, which have foraminiferal contents of 5-30% and sedimentation rates of about 10 cm/ka, with the aim of resolving centennial- to millennial-scale changes in relative paleointensity. The sediments from east of the Ryukyu Trench satisfy magnetic "uniformity", and the remanences of the four cores all show a single component with stable direction, faithfully recording the magnetic field. The NRM_30mT/ARM and NRM_30mT/χ ratios are still affected by grain-size and concentration changes even though the sediments are "uniform", indicating that uniformity alone may not be sufficient for relative paleointensity. After renormalization by the grain-size parameter MDF, the intensities remove the effect of grain-size changes to different degrees and show coherence on the 1-10 ka scale with results from ODP 983/984. The characteristics of the geomagnetic paleointensity are: 1) from 32-24 ka BP, paleointensity is low; 2) from 24-12 ka BP, paleointensity is high and shows two peaks separated at about 19 ka BP; 3) from 12-5.3 ka BP, paleointensity is low, then increases from 5.3 ka BP to a small trough at 2.7 ka BP, and then increases until the present.
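As a hedged illustration of the normalization workflow (an assumption-laden sketch, not the thesis's exact procedure), the following builds an NRM/ARM relative-paleointensity proxy, applies a simple MDF-based renormalization, and runs the common sanity check that the result no longer correlates with the normalizer or with a grain-size proxy; the data and the renormalization formula are placeholders.

```python
import numpy as np

# Hypothetical illustration of building a relative-paleointensity (RPI) proxy and
# checking it against rock-magnetic parameters, in the spirit of the abstract.
# The series and the MDF-based renormalization step are assumptions for
# illustration, not the thesis's actual data or procedure.
depth = np.linspace(0.0, 10.0, 200)                        # m
nrm_30mT = np.random.default_rng(2).lognormal(0.0, 0.2, depth.size)
arm_30mT = np.random.default_rng(3).lognormal(0.0, 0.2, depth.size)
mdf_arm  = 25.0 + 5.0 * np.sin(depth)                      # mT, grain-size proxy

rpi = nrm_30mT / arm_30mT                                   # concentration-normalized intensity
rpi_renorm = rpi / (mdf_arm / mdf_arm.mean())               # assumed MDF renormalization

# Sanity check: the RPI estimate should not correlate with the normalizer
# or with grain-size proxies such as MDF.
for name, series in [("ARM", arm_30mT), ("MDF", mdf_arm)]:
    r = np.corrcoef(rpi_renorm, series)[0, 1]
    print(f"correlation of RPI with {name}: {r:+.2f}")
```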
Abstract:
The adsorption of CO on Al(2)O(3)-, ZrO(2)-, ZrO(2)-SiO(2)-, and ZrO(2)-La(2)O(3)-supported Pd catalysts was studied by adsorption microcalorimetry and infrared (IR) spectroscopy. Some interesting new correlations between the microcalorimetry and IR results were found. CO is adsorbed on palladium catalysts in three different modes: multibonded (3-fold) and bridged (2-fold) species, both on Pd(111) and Pd(100) planes, and linear (1-fold) species. The corresponding differential adsorption heats lie in the high (210-170 kJ/mol), medium (140-120 kJ/mol), and low (95-60 kJ/mol) ranges, respectively. The nature of the support, the reduction temperature, and the pretreatment conditions affect the surface structure of the Pd catalysts, resulting in variations in the site energy distribution, i.e., changes in the fraction of sites adsorbing CO with specific heats of adsorption. Moreover, addition of a CeO(2) promoter weakens the adsorption strength of CO on palladium. Based on these results, a correction factor that accounts for the percentages of the various CO adsorption states must be introduced when calculating Pd dispersion from CO adsorption data.
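A minimal sketch of how such a correction factor might enter the dispersion calculation (assumed stoichiometries and made-up numbers, not the paper's values): each adsorption mode is taken to titrate a different number of surface Pd atoms, and the IR-derived fractions weight the average.

```python
# Hypothetical illustration of the correction factor mentioned in the abstract:
# when Pd dispersion is estimated from CO uptake, each adsorption mode binds a
# different number of surface Pd atoms (assumed here: linear 1, bridged 2,
# multibonded/3-fold 3). The fractions and uptake values are made up.
co_uptake_umol = 45.0          # CO adsorbed per gram of catalyst (umol/g), assumed
pd_total_umol  = 94.0          # total Pd loading (umol/g), assumed

fractions = {"linear": 0.30, "bridged": 0.55, "multibonded": 0.15}   # e.g. from IR band areas
pd_per_co = {"linear": 1, "bridged": 2, "multibonded": 3}

# Average number of surface Pd atoms titrated by one adsorbed CO molecule.
correction = sum(fractions[m] * pd_per_co[m] for m in fractions)
dispersion = co_uptake_umol * correction / pd_total_umol

print(f"correction factor: {correction:.2f}")
print(f"corrected Pd dispersion: {dispersion:.2%}")
```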
Abstract:
Studies of lie detection by Western psychologists indicate that the lying cues people usually believe in are not in accordance with the verbal and non-verbal behaviors that liars actually show. A cross-cultural study carried out by C. F. Bond and his global research team finds that the most common view about lying behavior held by people from 75 nations is that liars avert their gaze, while research shows that gaze aversion has no relation to lying. In Bond's view, the stereotype of the liar reflects common cross-cultural values more than an objective description of how liars behave. Each culture has its own norms by which people judge whether a person is credible. As a nation with a long Confucian tradition, how Chinese people view liars differently from people of other cultures is the interest of this study. Through a comparison with Bond's research, it is found that, in line with Bond's findings, Chinese people generally hold the same stereotype about liars as Westerners do; but they seem to rely significantly less on gaze aversion as a cue to lying and are more concerned with the sender's motivation and emotion. It is also found that confidence in one's detection ability is lower among Chinese respondents than among Westerners. A further study of different professions and their views about lying behavior shows that people in law enforcement and related professions generally hold a more accurate view of how liars behave. Possible explanations for these findings in terms of cultural differences, aspects of this study that could be improved, and directions for future research are discussed in the later part of the thesis.
Abstract:
Classification is a basic cognitive process, and categories are an important way for human beings to define the world. At the same time, categories are organized hierarchically, which makes it possible for human beings to process information efficiently. For these reasons, the development of classification ability has always been one of the foci of developmental psychology. Using methods of spontaneous and trained classification with both familiar stimulus materials and artificial concepts, this research explored 4- to 6-year-old children's classification criteria. Using the artificial concept system formed in these classification-criteria experiments, the degree to which these young children had mastered class hierarchy was analyzed. The main results and conclusions are: 1) Classification ability increases quickly among kindergarteners from 4 to 6 years of age: the 4-year-old children seemed unable to classify objects by classificatory criteria, whereas the 6-year-olds showed this ability in many experimental conditions. However, the main basis of classification in these young children, including the 6-year-olds, was the functional relations of the objects rather than conceptual relations, and their classification criteria were not consistent because they were easily affected by experimental conditions. 2) Age 5 is a particularly sensitive period in the development of classification ability: for 5-year-old children, classification ability was easily enhanced by training. The zone of proximal development in classification by category could probably lie in this age period. 3) Knowledge is an important factor affecting young children's classification ability, and their classification activity is also affected by cognitive processing ability: young children exhibited different classification abilities when they had different understandings of the stimulus materials, and kindergarteners of different ages differed significantly in classification ability owing to differences in cognitive processing ability, even when they had the same knowledge about the stimulus materials. 4) Different properties of class hierarchy differ in difficulty for young children: 5- to 6-year-old children could master the transitivity of the class hierarchy; no matter what the learning condition, they could answer most of the transitivity questions correctly and infer properties of a sub-class from those of its super-class. The 5- to 6-year-olds had mastered the branching property of class hierarchy at a relatively high level, but their answers were easily affected by hints in the questions. However, the asymmetry of class hierarchy seemed difficult for young children to learn: because they could not understand the class-inclusion relation, they always drew wrong conclusions about the super-class from the sub-class in their classification.
Abstract:
This research examines the CEO (chief executive officer) incentive-reward system, investigating 456 listed companies. The structure and level of agent reward are analyzed, and problems in the incentive-reward mechanism are identified: agents' pay is poor relative to their contributions, stock is not a primary incentive, and bonus compensation is still the dominant incentive. By questionnaire and interview, it was found that material needs ranked first among these CEOs' needs. These findings indicate that agents' pay is too low to work as an effective incentive; in practice, corporate incentives for agents are insufficient. Two reasons for this problem lie in our institutions and in traditional views of commerce. To solve it, we must establish a scientific and reasonable evaluation system and incentive-reward system; at the same time, a sound market system and corporate governance mechanism are absolutely necessary.
Abstract:
After revising the Russell Motives for Smoking Questionnaire (RMSQ, 1974), 317 smoking students and 270 non-smoking students in Beijing were studied. Factor analysis of the RMSQ showed four factors underlying students' smoking: Indulgent, Stimulant/Sedative, Addictive, and Social. All four motives are positively correlated with Psychoticism, and except for Stimulant/Sedative, the other motives are negatively correlated with scores on the EPQ Lie scale. The revised RMSQ shows high reliability and validity in China.
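As a rough sketch of the analysis pipeline (assumed tools and placeholder data, not the study's actual procedure), one could extract four factors from the item responses and correlate factor scores with EPQ scales as follows; the factor labels are attached purely for illustration.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical sketch of the analysis described in the abstract: extract four
# factors from RMSQ item responses and correlate factor scores with EPQ
# Psychoticism and Lie-scale scores. The data here are random placeholders.
rng = np.random.default_rng(0)
n_subjects, n_items = 317, 24
rmsq = rng.integers(0, 4, size=(n_subjects, n_items)).astype(float)  # item responses
epq_p = rng.normal(size=n_subjects)                                   # Psychoticism
epq_lie = rng.normal(size=n_subjects)                                 # Lie scale

fa = FactorAnalysis(n_components=4, rotation="varimax", random_state=0)
scores = fa.fit_transform(rmsq)                  # per-subject scores on 4 factors

for i, name in enumerate(["Indulgent", "Stimulant/Sedative", "Addictive", "Social"]):
    r_p = np.corrcoef(scores[:, i], epq_p)[0, 1]
    r_l = np.corrcoef(scores[:, i], epq_lie)[0, 1]
    print(f"{name:20s}  r(P) = {r_p:+.2f}   r(Lie) = {r_l:+.2f}")
```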
Abstract:
The task of shape recovery from a motion sequence requires the establishment of correspondence between image points. The two processes, the matching process and the shape recovery process, are traditionally viewed as independent. Yet, information obtained during the process of shape recovery can be used to guide the matching process. This paper discusses the mutual relationship between the two processes. The paper is divided into two parts. In the first part we review the constraints imposed on the correspondence by rigid transformations and extend them to objects that undergo general affine (non-rigid) transformations (including stretch and shear), as well as to rigid objects with smooth surfaces. In all these cases corresponding points lie along epipolar lines, and these lines can be recovered from a small set of corresponding points. In the second part of the paper we discuss the potential use of epipolar lines in the matching process. We present an algorithm that recovers the correspondence from three contour images. The algorithm was implemented and used to construct object models for recognition. In addition we discuss how epipolar lines can be used to solve the aperture problem.
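As a concrete illustration of recovering epipolar lines from a small set of corresponding points, here is a standard linear 8-point sketch for the rigid/projective case (an assumed baseline, not the paper's algorithm or its extensions to affine and smooth-surface objects); the helper names and usage values are hypothetical.

```python
import numpy as np

# Minimal sketch: estimate the epipolar geometry from point correspondences with
# the (unnormalized) linear 8-point method, then map a point in image 1 to its
# epipolar line in image 2 via l = F @ x.
def fundamental_from_correspondences(p1, p2):
    """p1, p2: (N, 2) arrays of matching points, N >= 8."""
    x1, y1 = p1[:, 0], p1[:, 1]
    x2, y2 = p2[:, 0], p2[:, 1]
    # Each correspondence gives one row of the constraint x2^T F x1 = 0.
    A = np.column_stack([x2 * x1, x2 * y1, x2, y2 * x1, y2 * y1, y2, x1, y1,
                         np.ones(len(p1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)             # enforce rank 2
    return U @ np.diag([S[0], S[1], 0.0]) @ Vt

def epipolar_line(F, point):
    """Line coefficients (a, b, c) in image 2, a*u + b*v + c = 0, for a point in image 1."""
    x = np.array([point[0], point[1], 1.0])
    return F @ x

# Usage with made-up correspondences (hypothetical arrays p1, p2 of shape (N, 2)):
# F = fundamental_from_correspondences(p1, p2)
# a, b, c = epipolar_line(F, (120.5, 84.0))
```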
Abstract:
The Saliency Network proposed by Shashua and Ullman is a well-known approach to the problem of extracting salient curves from images while performing gap completion. This paper analyzes the Saliency Network. The Saliency Network is attractive for several reasons. First, the network generally prefers long and smooth curves over short or wiggly ones. While computing saliencies, the network also fills in gaps with smooth completions and tolerates noise. Finally, the network is locally connected, and its size is proportional to the size of the image. Nevertheless, our analysis reveals certain weaknesses with the method. In particular, we show cases in which the most salient element does not lie on the perceptually most salient curve. Furthermore, in some cases the saliency measure changes its preferences when curves are scaled uniformly. Also, we show that for certain fragmented curves the measure prefers large gaps over a few small gaps of the same total size. In addition, we analyze the time complexity required by the method. We show that the number of steps required for convergence in serial implementations is quadratic in the size of the network, and in parallel implementations is linear in the size of the network. We discuss problems due to coarse sampling of the range of possible orientations. We show that with proper sampling the complexity of the network becomes cubic in the size of the network. Finally, we consider the possibility of using the Saliency Network for grouping. We show that the Saliency Network recovers the most salient curve efficiently, but it has problems with identifying any salient curve other than the most salient one.
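For readers unfamiliar with the network, the following is a deliberately simplified 1-D sketch of the saliency recurrence (my reading of the update rule, with assumed attenuation and coupling values, not the paper's full 2-D orientation-element network), showing how saliency propagates along a chain and bridges a gap with attenuation.

```python
import numpy as np

# Simplified sketch of the Saliency Network recurrence: each element i repeatedly
# updates E_i = sigma_i + rho_i * f * E_j over its neighbor(s), so long, smooth
# chains of active elements (sigma = 1) accumulate high saliency, and gaps
# (sigma = 0, rho < 1) are bridged with attenuation.
def saliency_chain(sigma, rho, coupling=0.95, n_iter=200):
    E = sigma.astype(float).copy()
    for _ in range(n_iter):
        # each element looks at its successor along this 1-D chain
        succ = np.roll(E, -1)
        succ[-1] = 0.0
        E = sigma + rho * coupling * succ
    return E

# A curve of 20 active elements with a 3-element gap in the middle.
sigma = np.array([1.0] * 10 + [0.0] * 3 + [1.0] * 10)
rho = np.where(sigma > 0, 1.0, 0.7)        # gaps attenuate the propagated saliency
print(np.round(saliency_chain(sigma, rho), 2))
```

Even this toy version exhibits the behaviors the paper analyzes: saliency grows with curve length, decays across gaps, and the most salient element need not sit on the perceptually most salient curve.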
Abstract:
This research project is a study of the role of fixation and visual attention in object recognition. In this project, we build an active vision system which can recognize a target object in a cluttered scene efficiently and reliably. Our system integrates visual cues like color and stereo to perform figure/ground separation, yielding candidate regions on which to focus attention. Within each image region, we use stereo to extract features that lie within a narrow disparity range about the fixation position. These selected features are then used as input to an alignment-style recognition system. We show that visual attention and fixation significantly reduce the complexity and the false identifications in model-based recognition using Alignment methods. We also demonstrate that stereo can be used effectively as a figure/ground separator without the need for accurate camera calibration.
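A minimal sketch of the disparity-based selection step (assumed interface and made-up values, not the system's actual code): only features whose disparity falls within a narrow band around the fixation disparity are retained before alignment-style recognition.

```python
import numpy as np

# Hypothetical sketch of the attention step described in the abstract: keep only
# stereo features whose disparity lies within a narrow band around the disparity
# at the fixation point, before passing them to an alignment-style recognizer.
def select_features_near_fixation(features, disparities, fixation_disparity,
                                  band=2.0):
    """features: (N, 2) image coordinates; disparities: (N,) matched disparities
    in pixels; band: half-width of the accepted disparity range (assumed value)."""
    keep = np.abs(disparities - fixation_disparity) <= band
    return features[keep]

# Usage with made-up data: 5 candidate features, fixation disparity of 14 px.
feats = np.array([[10, 12], [40, 52], [33, 80], [61, 14], [75, 44]], float)
disp = np.array([13.5, 21.0, 14.8, 6.2, 12.9])
print(select_features_near_fixation(feats, disp, fixation_disparity=14.0))
```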