941 results for CONDITIONAL HETEROSKEDASTICITY
Abstract:
The laterally confining potential of quantum dots (QDs) fabricated in semiconductor heterostructures is approximated by an elliptical two-dimensional harmonic-oscillator well or a bowl-like circular well. The energy spectrum of two interacting electrons in these potentials is calculated in the effective-mass approximation, as a function of dot size and of the characteristic frequency of the confining potential, by the exact diagonalization method. Energy-level crossover is displayed as a function of the ratio of the characteristic frequency of the elliptical confinement potential along the y axis to that along the x axis. By investigating the rovibrational spectrum together with the pair-correlation function and the conditional probability distribution, we observe the breaking of circular symmetry; some symmetries nevertheless remain in the elliptical QDs. When the QDs are confined by a "bowl-like" potential, the degeneracy of the QD energy levels is lifted, and the distribution of energy levels differs for different barrier heights. (C) 2003 American Institute of Physics.
Abstract:
We investigate the quantum dynamics of a Cooper-pair box with a superconducting loop in the presence of a nonclassical microwave field. We demonstrate the existence of Rabi oscillations for both single- and multiphoton processes and, moreover, we propose a new quantum computing scheme (including one-bit and conditional two-bit gates) based on Josephson qubits coupled through microwaves.
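As context for the Rabi oscillations discussed above, here is a minimal numerical illustration: a resonant two-level system in the rotating-wave approximation, whose excited-state population follows sin^2(omega_r * t / 2). This is a generic sketch, not the paper's Cooper-pair-box Hamiltonian or its coupling to a nonclassical field; the drive strength and pulse times are hypothetical.

```python
import numpy as np

def evolve(omega_r, t, steps=2000):
    """Integrate i d(psi)/dt = H psi with H = (omega_r / 2) * sigma_x,
    i.e. a resonant drive in the rotating frame, using classical RK4.
    Starts in the ground state |0>."""
    H = 0.5 * omega_r * np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
    psi = np.array([1.0, 0.0], dtype=complex)
    dt = t / steps
    rhs = lambda p: -1j * (H @ p)
    for _ in range(steps):
        k1 = rhs(psi)
        k2 = rhs(psi + 0.5 * dt * k1)
        k3 = rhs(psi + 0.5 * dt * k2)
        k4 = rhs(psi + dt * k3)
        psi = psi + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    return psi
```

A pulse of duration t = pi / omega_r (a "pi-pulse") swaps the populations of the two states, the elementary one-bit operation; a conditional two-bit gate additionally makes the drive's effect depend on the state of a second qubit.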
Abstract:
Soil pH is a principal factor affecting the availability of soil nutrients and the behaviour of chemical substances in soil, so characterizing its spatial distribution is important for soil nutrient management and the prediction of soil pollution. This paper uses geostatistical methods to study the spatial distribution of soil pH in a small watershed of the Loess Plateau with complex environmental conditions. The results show that soil pH in the small watershed of the loess gully region has a spatial structure described by a nested spherical-exponential variogram model, its spatial heterogeneity arising mainly from stochastic factors within the watershed such as land use and soil erosion. Co-kriging with soil organic matter as the auxiliary variable estimates soil pH well; the range of the estimates is narrower than that of the measured data, and the estimation error stems from the complex environmental factors. Sequential Gaussian conditional simulation yields a pH range close to the measured data, with a simulated mean lower than the measured one; the simulation error arises from the particular kriging algorithm and the Gaussian assumptions inherent in the simulation procedure.
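For readers unfamiliar with the estimation step referred to above, ordinary kriging can be sketched in a few lines. This is a generic illustration, not the study's co-kriging with organic matter, and the spherical-variogram parameters (nugget, sill, range) are hypothetical placeholders:

```python
import numpy as np

def spherical_variogram(h, nugget=0.02, sill=0.12, rng=300.0):
    """Spherical variogram model; the nugget/sill/range values are
    hypothetical, not the fitted parameters of the study."""
    h = np.asarray(h, dtype=float)
    g = np.where(h < rng,
                 nugget + (sill - nugget) * (1.5 * h / rng - 0.5 * (h / rng) ** 3),
                 sill)
    return np.where(h == 0.0, 0.0, g)   # gamma(0) = 0 by definition

def ordinary_kriging(xy, z, xy0, gamma):
    """Ordinary kriging estimate and kriging variance at location xy0
    from sample locations xy (n x 2) and sample values z (n)."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.empty((n + 1, n + 1))
    A[:n, :n] = gamma(d)
    A[n, :n] = A[:n, n] = 1.0       # unbiasedness: weights sum to 1
    A[n, n] = 0.0
    b = np.append(gamma(np.linalg.norm(xy - xy0, axis=1)), 1.0)
    w = np.linalg.solve(A, b)       # weights plus Lagrange multiplier
    return float(w[:n] @ z), float(b @ w)
```

Sequential Gaussian simulation, by contrast, visits grid nodes in random order and draws each value from the conditional (kriging) distribution rather than taking its mean, which is why it reproduces the spread of the measured data better than kriging's smoothed estimates.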
Abstract:
With the growth of the Internet and electronic office work, large volumes of text have become available. Information extraction helps people quickly obtain useful information from large-scale text. Named entity recognition (NER) and relation extraction are two basic information-extraction tasks. Building on a survey of the main methods currently used for NER and entity relation extraction, this thesis presents solutions for both. The main contributions are as follows. (1) The state of the art in NER and entity relation extraction is reviewed from the perspectives of model selection and feature selection, focusing on statistical learning methods for NER and kernel-based methods for relation extraction. (2) To address the problem of determining entity-segment boundaries in NER, we study the application of Semi-Markov CRFs to Chinese NER. This model only requires the Markov property to hold between segments, while the text within a segment can be assigned features flexibly. Applied to Chinese NER, it lets us design features that help identify segment boundaries more effectively and freely. Experiments show that adding segment-level features improves NER performance by 4-5 percentage points. (3) Relation extraction determines the semantic relation between two entities. Previous work has shown that the syntactic parse tree spanning the two entities is very useful for classifying their relation, while the comparatively mature flat-feature approaches have limited ability to exploit parse-tree structure; we therefore study kernel-based methods for Chinese entity relation extraction. Tailoring to the characteristics of Chinese, we examine how different parse trees used in convolution kernels affect Chinese relation-extraction performance, construct several composite kernels based on convolution kernels, and improve the shortest-path dependency kernel. When kernel methods were first applied to English relation extraction, the F1 score was only about 40%; using only a convolution kernel over parse trees, our Chinese relation extraction reaches an F1 of 35%, indicating that kernel methods are also effective for Chinese.
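The convolution kernel mentioned in point (3) is, in most such work, a Collins-Duffy-style tree kernel that counts the subtrees two parse trees share. A minimal sketch (trees as nested tuples; the decay factor lam and the toy trees are hypothetical) might look like:

```python
def production(node):
    """A node is (label, child, ...); leaf children are plain strings."""
    label, *kids = node
    return (label, tuple(k if isinstance(k, str) else k[0] for k in kids))

def collect(node):
    """All internal nodes of the tree, root included."""
    out = [node]
    for k in node[1:]:
        if not isinstance(k, str):
            out.extend(collect(k))
    return out

def C(n1, n2, lam):
    """Decayed count of common subtrees rooted at n1 and n2."""
    if production(n1) != production(n2):
        return 0.0
    if all(isinstance(k, str) for k in n1[1:]):   # pre-terminal node
        return lam
    score = lam
    for k1, k2 in zip(n1[1:], n2[1:]):            # same arity: productions match
        score *= 1.0 + C(k1, k2, lam)
    return score

def tree_kernel(t1, t2, lam=0.5):
    """Sum C over all node pairs: the convolution tree kernel."""
    return sum(C(a, b, lam) for a in collect(t1) for b in collect(t2))
```

Composite kernels of the kind the abstract describes combine such a tree kernel with flat-feature kernels (for example by weighted sum or product).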
Abstract:
Studies on the binding of rare earth (RE) ions to bovine serum albumin (BSA) are significant for understanding the state of RE ions in the body and their effects on the structure and function of proteins. Fluorescence spectroscopy and pH potentiometry gave consistent values of the apparent complexation constant of Tb2.BSA. Equilibrium dialysis showed that there are two specific binding sites and more than six non-specific binding sites for RE ions on the BSA molecule, with conditional stability constants lg K1 = 5.157 and lg K2 = 3.435. Na-23 NMR studies revealed that the BSA peptide chain bound to RE ions was expanded and that the mobility of its molecular backbone was increased.
Abstract:
Given a special type of triplet of reciprocal-lattice vectors in the monoclinic and orthorhombic systems, there exist eight three-phase structure seminvariants (3PSSs) for a pair of isomorphous structures. The first neighborhood of each of these 3PSSs is defined by the six magnitudes and the joint probability distribution of the corresponding six structure factors is derived according to Hauptman's neighborhood principle. This distribution leads to the conditional probability distribution of each of the 3PSSs, assuming as known the six magnitudes in its first neighborhood. The conditional probability distributions can be directly used to yield the reliable estimates (0 or pi) of the one-phase structure seminvariants (1PSSs) in the favorable case that the variances of the distributions happen to be small [Hauptman (1975). Acta Cryst. A31, 680-687]. The relevant parameters in the formulas for the monoclinic and orthorhombic systems are given in a tabular form. The applications suggest that the method is efficient for estimating the 1PSSs with values of 0 or pi.
Abstract:
With the intermediate-complexity Zebiak-Cane model, we investigate the 'spring predictability barrier' (SPB) problem for El Nino events by tracing the evolution of conditional nonlinear optimal perturbation (CNOP), where CNOP is superimposed on the El Nino events and acts as the initial error with the biggest negative effect on the El Nino prediction. We show that the evolution of CNOP-type errors has obvious seasonal dependence and yields a significant SPB, with the most severe occurring in predictions made before the boreal spring in the growth phase of El Nino. The CNOP-type errors can be classified into two types: one possessing a sea-surface-temperature anomaly pattern with negative anomalies in the equatorial central-western Pacific, positive anomalies in the equatorial eastern Pacific, and a thermocline depth anomaly pattern with positive anomalies along the Equator, and another with patterns almost opposite to those of the former type. In predictions through the spring in the growth phase of El Nino, the initial error with the worst effect on the prediction tends to be the latter type of CNOP error, whereas in predictions through the spring in the decaying phase, the initial error with the biggest negative effect on the prediction is inclined to be the former type of CNOP error. Although the linear singular vector (LSV)-type errors also have patterns similar to the CNOP-type errors, they cover a more localized area than the CNOP-type errors and cause a much smaller prediction error, yielding a less significant SPB. Random errors in the initial conditions are also superimposed on El Nino events to investigate the SPB. We find that, whenever the predictions start, the random errors neither exhibit an obvious season-dependent evolution nor yield a large prediction error, and thus may not be responsible for the SPB phenomenon for El Nino events. These results suggest that the occurrence of the SPB is closely related to particular initial error patterns. 
The two kinds of CNOP-type error are most likely to cause a significant SPB. They have opposite signs and, consequently, opposite growth behaviours, a result which may demonstrate two dynamical mechanisms of error growth related to SPB: in one case, the errors grow in a manner similar to El Nino; in the other, the errors develop with a tendency opposite to El Nino. The two types of CNOP error may be most likely to provide the information regarding the 'sensitive area' of El Nino-Southern Oscillation (ENSO) predictions. If these types of initial error exist in realistic ENSO predictions and if a target method or a data assimilation approach can filter them, the ENSO forecast skill may be improved. Copyright (C) 2009 Royal Meteorological Society
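For readers unfamiliar with CNOP, the underlying optimization is easy to state: among all initial perturbations of a given norm, find the one whose fully nonlinear evolution departs most from the unperturbed forecast. The sketch below uses a hypothetical two-variable toy model and a brute-force search on the constraint circle; it is nothing like the Zebiak-Cane model or the optimization algorithms actually used, only an illustration of the definition:

```python
import numpy as np

def propagate(state, steps=50, dt=0.02):
    """Toy nonlinear 'forecast model' (a weakly nonlinear oscillator
    stepped with forward Euler); a stand-in for the real propagator."""
    x, y = state
    for _ in range(steps):
        dx = y + 0.5 * x * (1.0 - x * x)
        dy = -x
        x, y = x + dt * dx, y + dt * dy
    return np.array([x, y])

def cnop(x0, delta, n_angles=3600):
    """Brute-force CNOP: maximize the nonlinear forecast departure over
    perturbations u on the constraint circle ||u|| = delta."""
    base = propagate(x0)
    best_u, best_growth = None, -1.0
    for theta in np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False):
        u = delta * np.array([np.cos(theta), np.sin(theta)])
        growth = np.linalg.norm(propagate(x0 + u) - base)
        if growth > best_growth:
            best_growth, best_u = growth, u
    return best_u, best_growth
```

The circle search is only viable in two dimensions; in a model of Zebiak-Cane size the maximization is done with gradient-based constrained optimization, and the LSV is the analogous maximizer of the linearized propagator.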
Abstract:
This paper first reviews the five-valued logic (5VL) based on the complete semantics of null values given in reference [1], defines the rules for comparison operations and logical operations under 5VL, and on this basis gives processing strategies and implementation algorithms for the selection operation under general conditional expressions.
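The specific 5VL truth tables of reference [1] are not reproduced in the abstract. As a hedged illustration only, the sketch below evaluates a selection condition under a five-valued chain F < ... < T with AND = min, OR = max, and an order-reversing NOT, which is a common way of building many-valued logics but not necessarily the construction of [1]:

```python
# Five truth degrees as an ordered chain: 0 = false ... 4 = true.
# The ordering and operators are an assumed, generic construction.
VALUES = ("F", "maybe-false", "unknown", "maybe-true", "T")

def v_and(a, b):
    return min(a, b)

def v_or(a, b):
    return max(a, b)

def v_not(a):
    return 4 - a            # reverses the chain

def compare_eq(x, y):
    """Equality comparison lifted to the chain: a null operand yields
    the middle value 'unknown' instead of plain false."""
    if x is None or y is None:
        return 2            # unknown
    return 4 if x == y else 0

# Selection keeps only rows whose condition evaluates to definite truth.
rows = [{"id": 1, "dept": "A"}, {"id": 2, "dept": None}, {"id": 3, "dept": "B"}]
selected = [r for r in rows if compare_eq(r["dept"], "A") == 4]
```

Because AND/OR are min/max on a chain, the De Morgan laws hold, which is what lets a selection condition be pushed through negations before evaluation.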
Abstract:
Static correction is an indispensable step in conventional onshore seismic data processing, particularly in western China, where resolving the static-correction problem is of both theoretical and practical significance. Conventional refraction statics are derived under the assumption of a horizontally layered, evenly distributed medium; the complicated near surface of western China is far from that assumption, so solving the statics problem in such complex areas requires a new approach. In this thesis, high-precision nonlinear first-arrival tomography is applied to the problem; it moves beyond the conventional refraction algorithm based on layered media and can model a complex near surface. The new work is as follows. First, in first-arrival tomographic modeling, a fast high-order marching algorithm is used to compute first-arrival travel times and ray paths, and the factors affecting fast-marching ray tracing are analyzed. Second- and third-order difference schemes are incorporated into the marching algorithm, which greatly increases the accuracy of the ray tracing and imposes no constraint on the velocity distribution of complex areas; the method is highly adaptable and can accommodate the strong velocity variations found there. Numerical tests show the fast high-order marching scheme to be a fast, unconditionally stable, high-precision tomographic modeling algorithm. Second, in tomographic inversion, uneven fold coverage and insufficient information make the result unstable and less reliable; wavelet transforms are applied to the tomographic inversion with good results. On real data, wavelet-based tomographic inversion increases the reliability and stability of the inversion. Third, the constrained high-precision wavelet tomographic image is applied to static correction in complex areas. During tomographic imaging, the weathering layer can be identified beforehand using uphole surveys, refraction shooting, or other weathering-layer methods. Because the group interval for shot first arrivals is relatively large, the near-surface inversion lacks precision; an inversion method with layer and well constraints is therefore put forward, which compensates the shallow velocities in the shot-first-arrival inversion and increases the precision of the tomographic inversion. Key words: tomography, fast marching method, wavelet transform, static corrections, first break
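The first-arrival travel-time engine named in the keywords, the fast marching method, solves the eikonal equation |grad T| = s with an upwind scheme and a heap-ordered wavefront. A first-order sketch on a uniform grid (the thesis uses higher-order difference schemes; this is a simplified illustration) might look like:

```python
import heapq
import numpy as np

def _update(T, known, s, i, j, h):
    """First-order upwind travel-time update at grid node (i, j)."""
    ny, nx = T.shape
    def upwind_min(p, q):
        vals = [T[a, b] for a, b in (p, q)
                if 0 <= a < ny and 0 <= b < nx and known[a, b]]
        return min(vals) if vals else np.inf
    tx = upwind_min((i, j - 1), (i, j + 1))
    ty = upwind_min((i - 1, j), (i + 1, j))
    f = s[i, j] * h
    a, b = sorted((tx, ty))
    if b - a >= f:                      # only one upwind neighbour helps
        return a + f
    return 0.5 * (a + b + np.sqrt(2.0 * f * f - (b - a) ** 2))

def fast_marching(slowness, src, h=1.0):
    """Solve |grad T| = slowness on a regular grid with T[src] = 0."""
    ny, nx = slowness.shape
    T = np.full((ny, nx), np.inf)
    known = np.zeros((ny, nx), dtype=bool)
    T[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if known[i, j]:
            continue
        known[i, j] = True              # accept smallest tentative time
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < ny and 0 <= nj < nx and not known[ni, nj]:
                tn = _update(T, known, slowness, ni, nj, h)
                if tn < T[ni, nj]:
                    T[ni, nj] = tn
                    heapq.heappush(heap, (tn, (ni, nj)))
    return T
```

Tomography then perturbs the slowness model to fit the observed first arrivals, with sensitivities derived from the computed travel-time field.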
Abstract:
The study of reservoir pore structure received attention early in reservoir research, and a systematic methodology has now been established. Owing to limitations of tools and conditions, however, the formation conditions and distribution laws of pore structure, and the relationship between remaining-oil distribution and pore structure, remain uncertain. As the petroleum industry develops, the characterization of pore structure and the prediction of remaining oil have become hot and difficult topics in oil-development research, and the author has carried out extensive work on them. In a case study of the Linnan oilfield (Huimin sag, Jiyang depression, Bohai Bay basin), using a new method named variable-scale comprehensive pore-structure modeling, the author builds pore-structure models for a delta reservoir, reveals the laws governing remaining-oil distribution in delta facies, and predicts the distribution of remaining oil in the Linnan oilfield. Applying stratigraphy, sedimentology, and structural geology, the author reveals the genetic types of sandbodies and their distribution laws, builds reservoir geological models for the delta sandstone reservoir of the Shahejie Group in the Linnan oilfield, and identifies the geological factors that control the development of pores and throats. Combining petrology with reservoir sensitivity analysis, the author builds rock matrix models and states for the first time that rocks of different sedimentary microfacies respond with different sensitivity to injected fluids: reservoirs in the delta front are normally less sensitive to fluids than those in the delta plain, and within the same subfacies, fine-grained microfacies such as banks and crevasse splays show stronger reservoir sensitivity than coarse-grained microfacies such as subaqueous distributary channels and mouth bars.
By applying advanced testing techniques such as image analysis, scanning electron microscopy, and morphological methods, the author classifies pore structures and sets up distribution models for pores, throats, and pore structure. Applying advanced well-logging geology, the author finds the relationship between microscopic pore structure and macroscopic percolation characteristics and builds well-log interpretation formulae for calculating pore-structure parameters. Using geostatistical methods, the author reveals the spatial correlation characteristics of pore structure and, by conditional stochastic simulation, builds 3D models of pore structure in the delta reservoir, which form the basis for predicting remaining-oil distribution. Through extensive experiments and theoretical deduction, the author expounds the laws of percolation flow in different pore structures and the laws by which pore structure controls the microscopic distribution of remaining oil, and states the microscopic mechanism of remaining-oil distribution. There are two types of remaining oil: by-passed oil caused by microscopic fingering, and trapped oil caused by non-piston-like displacement. With the new method, the author shows that different pore structures have different displacement efficiencies, reveals the formation conditions and distribution laws of remaining oil, predicts the remaining-oil distribution in the Linnan oilfield, and puts forward suggestions on how to adjust oil production. The study has yielded good results in production in the Linnan oilfield.
Abstract:
This dissertation addresses signal reconstruction and data restoration in seismic data processing, taking signal representation methods as the main thread and seismic information reconstruction (signal separation and trace interpolation) as the core. For representation on natural bases, it presents the fundamentals and algorithms of independent component analysis (ICA) and its original applications to the separation of natural-earthquake signals and of exploration seismic signals. For representation on deterministic bases, it proposes regularized least-squares inversion methods for seismic data reconstruction, with sparseness constraints and preconditioned conjugate-gradient (PCG) solvers, and applies them to seismic deconvolution, Radon transforms, and related problems. The core contribution is a de-aliasing algorithm for reconstructing unevenly sampled seismic data and its application to seismic interpolation. Although two cases of signal representation are discussed, they fit into one framework: both concern the restoration of signals or information, the former reconstructing original signals from mixtures, the latter reconstructing complete data from sparse or irregular data, and their common goal is to provide pre- and post-processing methods for seismic pre-stack depth migration. ICA can separate original signals from their mixtures, or abstract the basic structure of the analyzed data. I survey the fundamentals, algorithms, and applications of ICA and, in comparison with the KL transform, propose the concept of an independent component transformation (ICT). Based on the negentropy measure of independence, I implement FastICA and improve it by means of the covariance matrix. After analyzing the characteristics of seismic signals, I introduce ICA into seismic signal processing, among the first such applications in the geophysical community, and implement the separation of noise from seismic signals.
Synthetic and real data examples show the usability of ICA in seismic signal processing, with promising initial results. ICA is applied to separating earthquake converted waves from multiples in a sedimentary area with good effect, leading to a more reasonable interpretation of subsurface discontinuities; the results indicate good prospects for applying ICA to geophysical signal processing. Exploiting the relationship between ICA and blind deconvolution, I survey seismic blind deconvolution and discuss two possible ways of applying ICA to it. The relationship among PCA, ICA, and the wavelet transform is stated, and it is proved that the reconstruction of wavelet prototype functions is a Lie-group representation. In addition, an over-sampled wavelet transform is proposed to enhance seismic data resolution and is validated on numerical examples. The key to pre-stack depth migration is the regularization of pre-stack seismic data, for which seismic interpolation and missing-data reconstruction are necessary steps. I first review seismic imaging methods to argue the critical role of regularization, then review seismic interpolation algorithms and conclude that de-aliased reconstruction of unevenly sampled data remains a challenge. The fundamentals of seismic reconstruction are discussed first; then sparseness-constrained least-squares inversion and a preconditioned conjugate-gradient solver are studied and implemented. Choosing a Cauchy-distributed constraint term, I program a PCG algorithm and implement sparse seismic deconvolution and high-resolution Radon transforms, in preparation for seismic data reconstruction. Existing methods for de-aliased interpolation of evenly sampled data and for reconstruction of unevenly sampled data each work well, but they could not previously be combined.
This thesis therefore proposes a novel Fourier-transform-based method and algorithm that can reconstruct seismic data that are both unevenly sampled and aliased. Band-limited data reconstruction is formulated as a minimum-norm least-squares inversion with an adaptive DFT-weighted norm regularization term, and the inverse problem is solved by a preconditioned conjugate-gradient method, which makes the solutions stable and quickly convergent. Under the assumption that seismic data consist of a finite number of linear events, and following the sampling theorem, aliased events can be attenuated via least-squares weights predicted linearly from the low frequencies. Three applications are discussed: interpolation across even gaps, filling of uneven gaps, and reconstruction of high-frequency traces from low-frequency data constrained by a few high-frequency traces. Synthetic and real data examples show the proposed method to be valid, efficient, and applicable; the research is valuable for seismic data regularization and cross-well seismics. To meet the data requirements of 3D shot-profile depth migration, the data must be made even and consistent with the velocity dataset. The methods of this thesis are used to interpolate and extrapolate shot gathers instead of simply embedding zero traces, enlarging the migration aperture and improving the migration result; the results show the method's effectiveness and practicability.
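The band-limited reconstruction idea can be illustrated in miniature. The sketch below recovers a band-limited trace from irregular samples by minimum-norm least squares over a truncated Fourier basis; it uses a direct solver in place of the DFT-weighted PCG iteration of the thesis, and all sizes are hypothetical:

```python
import numpy as np

def reconstruct_bandlimited(y, idx, n, kmax):
    """Reconstruct a length-n signal assumed band-limited to |k| <= kmax
    from irregular samples y taken at integer positions idx, via
    minimum-norm least squares over a truncated Fourier basis.  A toy
    stand-in for DFT-weighted-norm inversion solved with PCG."""
    ks = np.arange(-kmax, kmax + 1)
    t = np.arange(n)
    basis = np.exp(2j * np.pi * np.outer(t, ks) / n) / np.sqrt(n)
    A = basis[idx, :]                 # sampling operator applied to basis
    c, *_ = np.linalg.lstsq(A, y.astype(complex), rcond=None)
    return (basis @ c).real
```

Exact recovery requires at least as many distinct samples as retained Fourier coefficients; the adaptive DFT weighting of the thesis plays the role of steering the inversion toward the dominant low-frequency components so that aliased energy is suppressed.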
Abstract:
Stochastic reservoir modeling is a technique used in reservoir description. Through it, multiple data sources at different scales can be integrated into the reservoir model, and the model's uncertainty can be conveyed to researchers and supervisors. With its digital models, changeable scales, honoring of known information and data, and conveyance of model uncertainty, stochastic reservoir modeling provides a mathematical framework within which researchers can integrate multiple data sources and information at different scales into their prediction models. As a relatively new method, it is on the upswing. Building on related work and starting with the Markov property in reservoirs, this thesis shows how to construct spatial models for categorical and continuous variables using Markov random fields (MRFs). To explore reservoir properties, researchers must study the rocks embedded in reservoirs; apart from laboratory methods, geophysical measurements and their interpretation are the main sources of information and data in petroleum exploration and production. Building a model for flow simulation from incomplete information amounts to predicting the spatial distributions of the various reservoir variables. Considering data sources, degree of digitization, and methods, reservoir modeling can be grouped into four sorts: methods based on reservoir sedimentology, reservoir seismic prediction, kriging, and stochastic reservoir modeling. The third part of the thesis introduces the application of Markov chain models to the analysis of sedimentary strata.
This part presents the concept of the Markov chain model, the N-step transition probability matrix, the stationary distribution, the estimation of the transition probability matrix, tests of the Markov property, two ways of organizing sections (based on equal intervals and based on rock facies), the embedded Markov matrix, the semi-Markov chain model, the hidden Markov chain model, and related topics. Building on the 1-D Markov chain model, the fourth part discusses the conditional 1-D Markov chain model and, by extension to two and three dimensions, presents conditional 2-D and 3-D Markov chain models; it also discusses the estimation of vertical and lateral transition probabilities and the initialization of the top boundary, with numerical models used to illustrate and test the discussion. The fifth part, building on the fourth and on the use of MRFs in image analysis, discusses an MRF-based method for simulating the spatial distribution of categorical reservoir variables: the probability of a particular configuration of categorical variables, the definition of the energy function for categorical variables as a Markov random field, the Strauss model, and the estimation of the components of the energy function, again illustrated and tested with numerical models. For simulating the spatial distribution of continuous reservoir variables, the sixth part mainly explores two methods. The first is a pure Gaussian-MRF (GMRF) method, covering the GMRF model and its neighborhood, parameter estimation, and MCMC iteration, illustrated with a numerical example. The second is a two-stage model method.
Based on the simulated distribution of the categorical variables, this method takes the GMRF as the prior distribution for the continuous variables and uses the relationship between categorical variables, such as rock facies, and continuous variables, such as porosity, permeability, and fluid saturation, to produce a series of stochastic images of the spatial distribution of the continuous variables. Integrating multiple data sources into the reservoir model is one of the merits of stochastic reservoir modeling; after discussing how to model the spatial distributions of categorical and continuous reservoir variables, the thesis explores how to combine conceptual depositional models, well logs, cores, seismic attributes, and production history.
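The basic machinery of the third part (estimating a transition probability matrix from a vertical succession and deriving its stationary distribution) can be sketched as follows, with a hypothetical two-state coded facies log:

```python
import numpy as np

def transition_matrix(seq, k):
    """Maximum-likelihood estimate of a first-order transition matrix
    from a coded facies log with states 0..k-1.  Assumes every state
    occurs at least once as a transition source."""
    C = np.zeros((k, k))
    for a, b in zip(seq[:-1], seq[1:]):
        C[a, b] += 1.0
    return C / C.sum(axis=1, keepdims=True)

def stationary(P):
    """Stationary distribution: the left eigenvector of P for the
    eigenvalue closest to 1, normalized to sum to one."""
    w, V = np.linalg.eig(P.T)
    v = np.abs(V[:, np.argmin(np.abs(w - 1.0))].real)
    return v / v.sum()
```

The N-step matrix is `np.linalg.matrix_power(P, N)`; comparing it with the observed N-step transition counts is one simple test of the Markov property mentioned above.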
Abstract:
The main reservoir type in the southern Dagang Oilfield is alluvial. In this thesis, the reservoir structure model and the distribution of connected bodies and flow barriers were built on the basis of a high-resolution sequence-stratigraphic framework and fine sedimentary microfacies studied at the single-sandbody level. Using static and dynamic data together, and comparing classification methods for reservoir flow units in different reservoirs, a criterion was defined for classifying flow units in the first member of the Kongdian Formation of the Kongnan area. The qualitative method of well-to-well correlation and the quantitative method of conditional simulation with multiple data were adopted, together with physical simulation, to disclose the movement of oil and water in different flow units and the distribution of remaining oil. A flow-unit study methodology suited to the Dagang Oilfield was thus formed for producing remaining oil according to flow units. Notable progress was made in the following respects. (1) Based on the high-resolution sequence-stratigraphic framework at the single-sandbody level in the first member of the Kongdian Formation of the Kongnan area, and on the study of fine sedimentary microfacies and fault sealing, the reservoir structure of the Zao V lower oil group to Zao Vup4 layer is considered jigsaw-puzzle-like, while the Zao Vup3 to Zao Vup1 layers form a labyrinth reservoir. (2) In classifying flow units with static and dynamic data, only permeability is the basic parameter; other parameters, such as porosity, effective thickness, and fluid viscosity, should be chosen or discarded according to the weak or strong interlayer heterogeneity and the differences in interlayer crude-oil character. (3) A method for building a predictive model of flow units was proposed.
This method follows the theories of reservoir sedimentology and high-resolution sequence stratigraphy and adopts the qualitative method of well-to-well correlation and the quantitative method of stochastic simulation with integrated dense well data. A 3-D predictive model of flow units and a model of interlayer distribution within flow units were built for the alluvial-fan and fan-delta facies of the first member of the Kongdian Formation in the Kongnan area, and nine genetic models of flow units of the alluvial environment, as distributed in space, were proposed. (4) Differences in microscopic pore configuration among flow units, and differences in flow capability and oil-displacement efficiency, were demonstrated through physical experiments such as nuclear magnetic resonance (NMR), constant-rate mercury penetration, and flow simulation. The distribution of remaining oil in the area was predicted by combining dynamic data with numerical modeling based on flow units, and measures for producing remaining oil along flow-unit lines were put forward for the middle and late stages of oilfield development.
Abstract:
We seek to both detect and segment objects in images. To exploit local image data as well as contextual information, we introduce Boosted Random Fields (BRFs), which use boosting to learn the graph structure and local evidence of a conditional random field (CRF). The graph structure is learned by assembling graph fragments in an additive model. The connections between individual pixels are not very informative, but by using dense graphs we can pool information from large regions of the image; dense models also support efficient inference. We show how contextual information from other objects can improve detection performance, both in accuracy and speed, by using a computational cascade. We apply our system to detect stuff and things in office and street scenes.
Abstract:
This thesis presents a new high-level robot programming system. The system can be used to construct strategies consisting of compliant motions, in which a moving robot slides along obstacles in its environment. The programming system is referred to as high-level because the user is spared many robot-level details, such as the specification of conditional tests, motion termination conditions, and compliance parameters. Instead, the user specifies task-level information, including a geometric model of the robot and its environment; the user may also have to specify some suggested motions. There are two main system components. The first is an interactive teaching system, which accepts motion commands from a user and attempts to build a compliant motion strategy using the specified motions as building blocks. The second is an autonomous compliant motion planner, intended to spare the user from dealing with "simple" problems. The planner simplifies the representation of the environment by decomposing the configuration space of the robot into a finite state space whose states are vertices, edges, faces, and combinations thereof. States are linked to each other by arcs, which represent reliable compliant motions. Using best-first search, states are expanded until a strategy is found from the start state to a goal state. This component represents one of the first implemented compliant motion planners. The programming system has been implemented on a Symbolics 3600 computer and tested on several examples. One of the resulting compliant motion strategies was successfully executed on an IBM 7565 robot manipulator.