12 results for "Markov chains hidden Markov models Viterbi algorithm Forward-Backward algorithm maximum likelihood"
in the Chinese Academy of Sciences Institutional Repositories Grid Portal
Abstract:
Drawing on descriptive geometry and notions from set theory, this paper redefines basic spatial elements such as curves and surfaces, presents fundamental notions of point covering based on high-dimensional space (HDS) point covering theory, and finally maps segments of speech signals to points in HDS in order to analyze the distribution of these speech points, the various geometric objects that cover them, and the relationships among those objects. The paper also proposes a new algorithm for speaker-independent continuous digit speech recognition, based on HDS point dynamic searching theory, that requires neither end-point detection nor segmentation. First, from the different digit syllables in real continuous digit speech, we establish a covering area in feature space for continuous speech. During recognition, we apply point covering dynamic searching in HDS and obtain satisfactory recognition results. Finally, we compare the method with an HMM (hidden Markov model)-based approach. The trend of the comparison shows that as the number of samples increases, the difference in recognition rate between the two methods shrinks slowly, and when the sample size becomes very large both recognition rates gradually approach 100%. As seen from the results, the recognition rate of the HDS point covering method is higher than that of the HMM-based method: point covering describes the morphological distribution of speech in HDS, whereas an HMM captures only a probability distribution, whose accuracy is necessarily inferior to that of point covering.
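As one concrete reading of the point-covering classifier, the sketch below approximates each class's covering by the line segments joining consecutive training points and assigns a test point to the class with the nearest covering. The segment construction, feature dimension, and names are illustrative assumptions, not the paper's exact geometry.

```python
import numpy as np

def point_to_segment(x, a, b):
    """Distance from point x to the line segment [a, b]."""
    ab = b - a
    t = np.clip(np.dot(x - a, ab) / (np.dot(ab, ab) + 1e-12), 0.0, 1.0)
    return np.linalg.norm(x - (a + t * ab))

def covering_distance(x, class_points):
    """Distance from x to the union of segments joining consecutive
    training points of one class (a piecewise-linear covering)."""
    return min(point_to_segment(x, class_points[i], class_points[i + 1])
               for i in range(len(class_points) - 1))

def classify(x, covers):
    """Assign x to the class whose covering lies nearest to it."""
    return min(covers, key=lambda c: covering_distance(x, covers[c]))

# covers maps a digit label to an ordered array of feature vectors
covers = {"0": np.random.rand(5, 12), "1": np.random.rand(5, 12)}
print(classify(np.random.rand(12), covers))
```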
Abstract:
3D wave equation prestack depth migration is an effective tool for obtaining accurate images of complex geological structures, and it is one part of 3D seismic data processing. 3D seismic data processing is a high-dimensional signal processing problem with several difficult sub-problems: how to handle high-dimensional operators, how to improve focusing, and how to construct the deconvolution operator. Realizing 3D wave equation prestack depth migration not only accomplishes the leap from poststack to prestack but also provides an important means of solving these difficulties in high-dimensional signal processing. In this thesis I study these problems around 3D wave equation prestack depth migration, both to serve its realization and to improve the migration result. The thesis is organized in five parts, whose main contents are summarized as follows. In the first part, I project the 3D data volume onto lower-dimensional areas using big-matrix transposition and trace rearrangement, realizing linear processing of the high-dimensional signal. I first present the mathematical expression of 3D seismic data and its physical meaning, introduce the basic idea of big-matrix transposition, and describe the realization of five transposition models as examples. I then present the basic idea and rules for rearranging and processing 3D traces in parallel, with an example. In the part on the conventional DMO focusing method, I first review the history of DMO processing, give the fundamentals of DMO, and derive the DMO equation and its impulse response. I also prove the equivalence between DMO and prestack time migration from the kinematic character of DMO, and derive the relationship between wave-equation-based DMO and prestack time migration. Finally, I give an example DMO processing flow and synthetic data from theoretical models. In the part on wave equation prestack depth migration, I first review the history of migration from time to depth, from poststack to prestack, and from 2D to 3D, then summarize the main migration methods and point out their merits and shortcomings. Finally, I obtain common image point gathers using the decomposed migration program code. In the part on residual moveout, I first describe the Viterbi algorithm, which is based on Markov processes and compound decision theory, and show how it solves the shortest-path problem; on this basis I implement residual moveout correction after 3D wave equation prestack depth migration, and I give an example of residual moveout on real 3D seismic data. In the part on the migration Green function, I first give the concept of the migration Green function and the 2D Green function migration equation under the far-field approximation. I then prove the equivalence of wave equation depth extrapolation algorithms and derive the Green function migration equation. Finally, I present the response and migration result of the Green function for a point source and analyze the effect of migration aperture on the prestack migration result. This research helps clarify the effect of migration aperture on migration results and supports the study of Green function deconvolution to improve the focusing of migration.
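The residual-moveout step uses the Viterbi algorithm as a shortest-path solver. Below is a minimal sketch of that dynamic program over a generic cost panel, assuming a simple per-step jump penalty; the panel, penalty, and names are illustrative and do not reproduce the thesis code.

```python
import numpy as np

def viterbi_path(cost, penalty=1.0):
    """Minimal Viterbi shortest path over a (steps x states) cost panel.
    cost[i, j]: local cost of choosing state j at step i.
    penalty:    extra cost per unit jump between adjacent steps.
    Returns the minimum-cost state sequence, one state per step."""
    n, m = cost.shape
    total = cost[0].copy()                 # accumulated cost at step 0
    back = np.zeros((n, m), dtype=int)     # best predecessor per state
    for i in range(1, n):
        # trans[k, j]: cost of arriving at state j from previous state k
        trans = total[:, None] + penalty * np.abs(
            np.arange(m)[:, None] - np.arange(m)[None, :])
        back[i] = np.argmin(trans, axis=0)
        total = trans[back[i], np.arange(m)] + cost[i]
    # backtrack from the best final state
    path = [int(np.argmin(total))]
    for i in range(n - 1, 0, -1):
        path.append(int(back[i][path[-1]]))
    return path[::-1]

panel = np.random.rand(50, 21)             # e.g. 50 depth steps, 21 trial shifts
print(viterbi_path(panel)[:10])
```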
Abstract:
A Mandarin keyword spotting system was investigated, and a new approach was proposed based on the principle of homology continuity and on point location analysis in high-dimensional space geometry, both parts of biomimetic pattern recognition theory. The approach constructs a hyper-polyhedron from the sample points in the training set and computes the distance between each test point and the hyper-polyhedron; classification follows from the values of those distances. The approach was tested on a speech database we created ourselves. Its performance was compared with the classic HMM approach, and the results show that the new approach is much better than the HMM approach when the training data are insufficient.
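The abstract does not spell out the hyper-polyhedron construction; one concrete realization is the convex hull of the training points, and the sketch below computes the point-to-hull distance by constrained least squares and thresholds it for spotting. The function names and threshold are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def dist_to_hull(x, points):
    """Distance from x to the convex hull of `points` (one realization
    of a hyper-polyhedron built on training samples): minimize
    ||x - points.T @ w||^2 over weights w >= 0 with sum(w) = 1."""
    n = len(points)
    res = minimize(
        lambda w: np.sum((x - points.T @ w) ** 2),
        x0=np.full(n, 1.0 / n),
        bounds=[(0.0, 1.0)] * n,
        constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
    )
    return np.sqrt(res.fun)

def spot_keyword(x, keyword_points, threshold):
    """Accept the keyword if x lies within `threshold` of the hull."""
    return dist_to_hull(x, keyword_points) <= threshold

train = np.random.rand(20, 13)      # 20 training frames, 13-dim features
print(spot_keyword(np.random.rand(13), train, threshold=0.5))
```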
Abstract:
In speaker-independent speech recognition, the main drawback of the most widely used technology (HMMs, or hidden Markov models) is not only the need for many training samples but also the long training time required. This paper describes the use of biomimetic pattern recognition (BPR) to recognize Mandarin continuous speech in a speaker-independent manner. A speech database was developed for the study; its vocabulary consists of the names of 15 Chinese dishes, each four Chinese words long. Neural networks (NNs) based on the multi-weight neuron (MWN) model are used to train on and recognize the speech sounds. The number of MWNs was varied to find the optimal performance of the NN-based BPR. The system, which is based on BPR and can perform real-time recognition, reaches a recognition rate of 98.14% for the first option and 99.81% for the first two options for speakers from different provinces of China speaking standard Chinese. Experiments were also carried out to compare continuous-density hidden Markov models (CDHMM), dynamic time warping (DTW), and BPR for speech recognition. The results show that BPR outperforms CDHMM and DTW, especially when the sample size is finite.
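The exact multi-weight neuron form is not given in the abstract; a common choice in the BPR literature, sketched below under that assumption, is a hyper-sausage unit that fires when the input lies within a radius of the polyline through its weight vectors.

```python
import numpy as np

def mwn_fires(x, W, radius):
    """One multi-weight neuron: W holds k weight vectors; the neuron
    fires when x lies within `radius` of the polyline through W."""
    best = np.inf
    for a, b in zip(W[:-1], W[1:]):
        ab = b - a
        denom = float(ab @ ab)
        t = min(max((x - a) @ ab / denom, 0.0), 1.0) if denom > 0 else 0.0
        best = min(best, np.linalg.norm(x - (a + t * ab)))
    return best <= radius

W = np.random.rand(3, 24)           # 3 weight vectors, 24-dim features
print(mwn_fires(np.random.rand(24), W, radius=0.8))
```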
Abstract:
In speaker-independent speech recognition, the main drawback of the most widely used technology (hidden Markov models) is not only the need for many training samples but also the long training time required. This paper describes the use of biomimetic pattern recognition (BPR) to recognize Mandarin speech in a speaker-independent manner. The vocabulary of the system consists of the names of 15 Chinese dishes. Neural networks based on the multi-weight neuron (MWN) model are used to train on and recognize the speech sounds. Experimental results show that the system, which can recognize in real time speakers from different provinces speaking standard Chinese, outperforms HMMs, especially when the sample size is finite.
Abstract:
The seismic survey is the most effective geophysical method for oil/gas exploration and development. As a main means of processing and interpreting seismic data, impedance inversion occupies a special position in seismic surveying, because the impedance parameter is the link that connects seismic data with well-logging and geological information and is essential for predicting reservoir properties and sand bodies. In practice, the result of traditional impedance inversion is not ideal: the mathematical inverse problem of impedance is ill-posed, so the result is unstable and non-unique, and regularization must be introduced. Most regularizations presented in the existing literature are simple ones, premised on the image (or model) being globally smooth. An actual geological model, however, is not only made of smooth regions but is also separated by distinct edges, and the edges are a very important attribute of the model. It is difficult to preserve these characteristics and to avoid smoothing an edge until it is no longer clear. In this paper we therefore propose an impedance inversion method controlled by hyperparameters with edge-preserving regularization, which improves the convergence speed and the inversion result. To preserve edges, the potential function of the regularization should satisfy nine conditions, including basic assumptions, edge preservation, and convergence assumptions. In the end, a model with a clean background and edge anomalies can be recovered. Several potential functions and their corresponding weight functions are presented in this paper; model calculations show that the potentials φ_L, φ_HL, and φ_GM meet the required inversion precision. For locally constant, planar, and quadric models, we present the corresponding Markov random field neighborhood systems for the regularization term. We linearize the nonlinear regularization using half-quadratic regularization, which not only preserves edges but also simplifies the inversion and allows linear methods to be used. We introduce two regularization parameters (hyperparameters), λ² and δ, into the regularization term: λ² balances the influence of the data term against the prior term, and δ is a calibration parameter that adjusts the gradient value at discontinuities (formation interfaces). In the inversion procedure it is important to choose the initial values of the hyperparameters and how they change, since these affect the convergence speed and the inversion result. We give rough initial hyperparameter values using a trend curve of φ(λ², δ) and a method for computing the upper limit of the hyperparameters, and we update the hyperparameters either by a fixed coefficient or by the maximum likelihood method, which can be carried out simultaneously with the inversion. We use the fast simulated annealing (FSA) algorithm in the inversion procedure; it escapes local extrema without depending on the initial value and obtains a globally optimal result. We also expound in detail the convergence condition of FSA, the Metropolis acceptance probability from Metropolis-Hastings, the annealing process based on Gibbs sampling, and other methods integrated with FSA; this content helps in understanding and improving FSA.
Calculations on a theoretical model and application to field data prove that the impedance inversion method in this paper offers high precision, practicality, and clear benefits.
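To make the half-quadratic idea concrete, here is a minimal 1-D sketch using the Geman-McClure potential φ_GM(t) = t²/(1+t²), whose weight function φ'(t)/(2t) = 1/(1+t²)² downweights smoothing across large gradients. The denoising setup and the values of λ² and δ are illustrative stand-ins, not the paper's impedance problem.

```python
import numpy as np

def phi_gm(t):
    """Geman-McClure potential: quadratic near 0, saturates at edges."""
    return t * t / (1.0 + t * t)

def half_quadratic(d, lam2, delta, iters=50):
    """Minimize ||m - d||^2 + lam2 * sum phi_gm((m[i+1] - m[i]) / delta)
    by alternating fixed weights w = phi'(t)/(2t) with a linear solve."""
    n = len(d)
    m = d.copy()
    D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]     # finite differences
    for _ in range(iters):
        t = (D @ m) / delta
        w = 1.0 / (1.0 + t * t) ** 2             # edge-preserving weights
        A = np.eye(n) + (lam2 / delta**2) * D.T @ (w[:, None] * D)
        m = np.linalg.solve(A, d)
    return m

step = np.r_[np.zeros(50), np.ones(50)]          # a sharp interface
noisy = step + 0.1 * np.random.randn(100)
print(np.round(half_quadratic(noisy, lam2=1.0, delta=0.1)[45:55], 2))
```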
Abstract:
Reflectivity sequence extraction is a key part of impedance inversion in seismic exploration. Although many valid inversion methods exist, with crosswell seismic data the frequency band of the seismic data cannot be broadened enough to satisfy practical needs; this is an urgent problem to solve. Prestack depth migration, developed in recent years, is becoming more and more robust in exploration; it is a powerful technology for imaging geological objects with complex structure, and its final result is reflectivity imaging. Based on reflectivity imaging of crosswell seismic data and the wave equation, this paper completes the following work. It completes a workflow for blind deconvolution in which the Cauchy criterion is used to regularize the inversion (sparse inversion). The preconditioned conjugate gradient (PCG) method based on Krylov subspaces is incorporated to decrease the computation and improve speed, and the transition matrix no longer needs to be positive definite and symmetric. This method is applied to high-frequency recovery of crosswell seismic sections, with satisfactory results. The rotation transform and the Viterbi algorithm are applied in the preprocessing for wave equation prestack depth migration. In wave equation prestack depth migration, the grid of the seismic dataset must be regular, but owing to complex terrain and folds the acquisition geometry sometimes becomes irregular; at the same time, to avoid the aliasing produced by sparse sampling along the line, interpolation between traces is needed. In this paper I use the rotation transform to make the lines run parallel to the coordinate axis and the Viterbi algorithm to pick events automatically, with satisfactory results. Imaging is, besides extrapolation, a key part of prestack depth migration: however accurate the extrapolation operator, the imaging condition can greatly influence the final reflectivity imaging. The author migrates the Marmousi model under different imaging conditions and analyzes the methods according to the results, which show that the imaging condition that stabilizes the source wavefield and the least-squares estimation imaging condition presented in this paper are better than the conventional correlation imaging condition. The traditional pattern of "distributed computing and mass decision", widely adopted in seismic data processing, is becoming an obstacle to raising the level of enterprise management. At the end of this paper a systematic solution is therefore brought forward that employs the mode of "distributed computing - centralized storage - instant release", based on a combination of C/S and B/S release models. The architecture of the solution, the corresponding web technology, and the client software are introduced, and the application shows the validity of this scheme.
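For reference, a minimal sketch of a Jacobi-preconditioned conjugate gradient solver of the kind the deconvolution step relies on. Classic PCG assumes a symmetric positive-definite system (e.g. the normal equations), whereas the paper's Krylov variant relaxes that requirement, which is not reproduced here; all names below are illustrative.

```python
import numpy as np

def pcg(A, b, tol=1e-8, max_iter=200):
    """Jacobi-preconditioned conjugate gradient for A x = b,
    with A symmetric positive definite."""
    x = np.zeros_like(b)
    r = b - A @ x
    Minv = 1.0 / np.diag(A)          # Jacobi preconditioner
    z = Minv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = Minv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

n = 100
G = np.random.rand(n, n)
A = G.T @ G + n * np.eye(n)          # SPD system, e.g. normal equations
b = np.random.rand(n)
print(np.linalg.norm(A @ pcg(A, b) - b))
```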
Abstract:
National Natural Science Foundation of China [40471134]; program of Lights of the West China by the Chinese Academy of Science
Abstract:
Picking velocities automatically not only helps to improve the efficiency of seismic data processing but also quickly provides the initial velocity for prestack depth migration. In this thesis we use the Viterbi algorithm for automatic picking, but the velocity picked is often unreasonable. Through thorough study and analysis, we conclude that the Viterbi algorithm can perform automatic picking quickly and effectively, but the data provided for picking may not be continuous in the derivative of its surface, i.e., the surface of the velocity spectrum is not smooth; therefore the picked velocity may include irrational velocity information. To solve this problem, we develop a new method to filter the signal by a nonlinear coordinate transformation and a function filter, which we call the Gravity Center Preserved Pulse Compressed Filter (GCPPCF). The main idea of the GCPPCF is as follows: separate a curve, such as a pulse, into several subsections; calculate the gravity center (coordinate displacement) of each subsection; and then assign the value (density) of the subsection to its gravity center. When the gravity center departs from the center of its subsection, the value assigned to it is smaller than the actual one; only when the gravity center coincides exactly with the subsection center does the assigned value equal the actual one. In this way the curve shape under the new coordinates narrows compared with the original. This is a nonlinear coordinate transformation, since the gravity center changes with the shape of the subsection. Moreover, the gravity function is a filter: the value assigned from the subsection center to the gravity center is obtained as a weighted mean of the subsection function, which is a filtering operation. In addition, since the weight coefficients used in the weighted mean also change with the shape of the subsection, the filter behaves as an adaptive time-variant filter. In this thesis the Viterbi algorithm, applied to automatic picking of stacking velocities, integrates the maxima of the velocity spectrum ("energy groups") forward and recovers the optimal solution by backward recursion; it is a convenient tool for automatic velocity picking. The GCPPCF not only preserves the position of peak values and compresses the velocity spectrum, but can also serve as an adaptive time-variant filter to smooth a target curve or surface. We apply it to smooth the observed sequences to obtain favorable source data for achieving the final exact solution. Without the adaptive time-variant filter for optimization we cannot obtain finer source data or valid velocity information; without the Viterbi algorithm for fast searching we cannot pick velocities automatically. Accordingly, the combination of the two algorithms makes an effective method for automatic picking. We apply the automatic velocity picking method to velocity analysis of the extrapolated wavefield; the results show that the imaging of deep layers with the extrapolated wavefield is markedly improved. The GCPPCF achieves good results in application: it can be used not only to optimize and smooth velocity spectra but also to perform related processing for other types of signals.
The automatic velocity picking method developed in this thesis obtains favorable results when applied to a single model, a complicated model (the Marmousi model), and practical data. The results show that it is both feasible and practical.
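A minimal sketch of the gravity-center reassignment described above, assuming each subsection is a fixed-length window and the "density" is the sample amplitude; the window length and names are illustrative.

```python
import numpy as np

def gcppcf(signal, win=5):
    """Gravity Center Preserved Pulse Compressed Filter (sketch).
    For each window, compute the amplitude-weighted gravity center
    and deposit the window's weighted-mean value at that position,
    which narrows pulses while preserving their peak location."""
    out = np.zeros_like(signal)
    for start in range(0, len(signal) - win + 1, win):
        seg = signal[start:start + win]
        w = np.abs(seg)
        if w.sum() == 0:
            continue
        center = (np.arange(win) * w).sum() / w.sum()   # gravity center
        value = (seg * w).sum() / w.sum()               # weighted mean
        out[start + int(round(center))] = value
    return out

pulse = np.exp(-0.5 * ((np.arange(50) - 25) / 4.0) ** 2)
print(np.round(gcppcf(pulse), 2).nonzero()[0])          # compressed support
```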
Abstract:
This paper describes the process and principles of tracking and classifying moving targets from video image streams in a traffic monitoring system. For target detection, a double-difference detection algorithm is proposed; target classification applies the principles of temporal-continuity constraints and maximum likelihood estimation; and target tracking performs correlation matching between the detected moving-target image and the current template. Experimental results show that the procedure detects and classifies targets well, removes interference from background information, and can track targets continuously even when a moving target is partially occluded, changes appearance, or stops moving.
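A minimal sketch of double-difference detection as commonly formulated with three-frame differencing; the threshold and names are illustrative assumptions rather than the paper's exact algorithm.

```python
import numpy as np

def double_difference(prev, curr, nxt, thresh=25):
    """Three-frame double difference: a pixel is flagged as moving
    only if it differs from both the previous and the next frame."""
    d1 = np.abs(curr.astype(int) - prev.astype(int)) > thresh
    d2 = np.abs(nxt.astype(int) - curr.astype(int)) > thresh
    return d1 & d2                      # moving-target mask

frames = [np.random.randint(0, 256, (120, 160), dtype=np.uint8)
          for _ in range(3)]
mask = double_difference(*frames)
print(mask.mean())                      # fraction of pixels flagged
```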
Abstract:
The complete mitochondrial (mt) DNA sequence was determined for a ridgetail white prawn, Exopalaemon carinicauda Holthuis, 1950 (Crustacea: Decapoda: Palaemonidae). The mt genome is 15,730 bp in length, encoding a standard set of 13 protein-coding genes, 2 ribosomal RNA genes, and 22 transfer RNA genes, which is typical for metazoans. The majority strand consists of 33.6% A, 23.0% C, 13.4% G, and 30.0% T bases (AT skew = 0.057; GC skew = -0.264). A total of 1045 bp of non-coding nucleotides were observed in 16 intergenic regions, including a major A+T-rich (79.7%) noncoding region (886 bp). A novel translocation of tRNA(Pro) and tRNA(Thr) was found when comparing this genome with the pancrustacean ground pattern, indicating that gene order is not conserved among caridean mitochondria. Furthermore, the Ka/Ks ratio of the 13 protein-coding genes between three caridean species is much less than 1, which indicates strong purifying selection within this group. To investigate the phylogenetic relationships within Malacostraca, phylogenetic trees based on currently available malacostracan complete mitochondrial sequences were built with maximum likelihood and Bayesian models. All analyses based on nucleotide and amino acid data strongly support the monophyly of Decapoda. The Penaeidae, Reptantia, Caridea, and Meiura clades were also recovered as monophyletic groups with strong statistical support. However, the phylogenetic relationships within Pleocyemata are unstable, as represented by the inclusion or exclusion of Caridea. (C) 2009 Elsevier B.V. All rights reserved.
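The quoted skews follow the standard definitions AT skew = (A - T)/(A + T) and GC skew = (G - C)/(G + C); a quick check with the base percentages from the abstract:

```python
# Strand-composition skews from the base percentages in the abstract.
A, C, G, T = 33.6, 23.0, 13.4, 30.0
at_skew = (A - T) / (A + T)      # (33.6 - 30.0) / 63.6  ~  0.057
gc_skew = (G - C) / (G + C)      # (13.4 - 23.0) / 36.4  ~ -0.264
print(round(at_skew, 3), round(gc_skew, 3))
```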
Abstract:
This study attempts to model alpine tundra vegetation dynamics in a tundra region of Qinghai Province, China, in response to global warming. We used raster-based cellular automata and a geographic information system to study the spatial and temporal vegetation dynamics. The cellular automata model is implemented with IDRISI's Multi-Criteria Evaluation functionality to simulate the spatial patterns of vegetation change under scenarios of global mean temperature increase over time. The Vegetation Dynamic Simulation Model calculates a probability surface for each vegetation type and then combines all vegetation types into a composite map, determined by the maximum likelihood that each vegetation type should occupy each raster unit. Under scenarios of a global temperature increase of 1 to 3 degrees C, vegetation types adapted to warm and dry conditions, such as Dry Kobresia Meadow and Dry Potentilla Shrub, tend to become more dominant in the study area.
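A minimal sketch of the composite-map step: stack one probability surface per vegetation type and assign each raster cell the type with the maximum likelihood. The type names are from the abstract; the array shapes and values are illustrative.

```python
import numpy as np

types = ["Dry Kobresia Meadow", "Dry Potentilla Shrub", "Wet Meadow"]
rng = np.random.default_rng(0)
# One probability surface per vegetation type over a 100 x 100 raster.
surfaces = rng.random((len(types), 100, 100))

# Composite map: each cell takes the type with the highest probability.
composite = np.argmax(surfaces, axis=0)
for i, name in enumerate(types):
    print(name, (composite == i).mean())   # share of cells per type
```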