884 results for Tridiagonal Kernel


Relevance:

10.00%

Publisher:

Abstract:

Supply chain management enables an enterprise to cooperate effectively with other enterprises in a changing market environment and to gain a collective competitive advantage. This paper first discusses the definitions of logistics and supply chain management and the relationship between them. It argues that the core of supply chain management is the control of material flow and information flow. Material-flow control decisions mainly comprise inventory replenishment and transport-route planning at the operational level, and facility-location planning at the strategic level. Information-flow management integrates related applications across departmental and enterprise boundaries. A dynamic alliance coordinates the production and business activities within each enterprise and strategically determines the configuration of material and information flows.

Relevance:

10.00%

Publisher:

Abstract:

The overall technology of industrial automation is a key technology for implementing CIM that emerged under the guidance of CIM concepts. Its core is overall design and overall integration. This paper discusses the background from which the overall technology arose, its purpose and meaning, its methodology, the architecture and reference models of industrial automation systems, and current practice and trends in applying it.

Relevance:

10.00%

Publisher:

Abstract:

From the viewpoint of information control, this paper defines a robot language as a computer programming language capable of handling certain specific "external devices". Robot-language components are divided into two parts: the robot kernel language and the robot special-purpose language. The paper then separately reviews progress on robot special-purpose languages and robot kernel languages.

Relevance:

10.00%

Publisher:

Abstract:

This paper briefly surveys the development of artificial intelligence and intelligent control systems. It consists of five parts: 1) introduction; 2) the history of artificial intelligence and its main research topics; 3) research and progress on the core problems of artificial intelligence; 4) intelligent control systems; 5) conclusions. The paper proposes applying the basic principles of artificial intelligence, in particular the ideas of expert consulting systems, to build a new type of control system, the intelligent control system, in order to solve difficult identification and control problems in large, complex systems.

Relevance:

10.00%

Publisher:

Abstract:

Because the MCS-51 family of microcontrollers has a structural defect, a small hardware stack, it cannot meet the demands of task switching in a multitasking environment, which makes porting uC/OS-II to the MCS-51 family difficult. The "stack mapping" scheme presented in this paper solves this problem well. The paper also discusses in detail several key issues in the uC/OS-II porting process, including the conditions for porting uC/OS-II, kernel configuration and tailoring, and kernel debugging.
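To picture the "stack mapping" idea, here is a toy simulation: the single small on-chip hardware stack is shared, while per-task stack images are copied out to plentiful external RAM (XDATA) on suspend and copied back on resume. A real port does this in C and assembly inside the uC/OS-II context switch; all names below are illustrative, not the paper's code.

    # One small shared hardware stack (on-chip RAM) and per-task stack
    # images kept in external RAM: the essence of "stack mapping".
    hw_stack = []      # stands in for the 8051 hardware stack
    task_saves = {}    # per-task saved stack images (XDATA)

    def suspend(task_id):
        """Copy ("map") the hardware stack out to the task's save area."""
        task_saves[task_id] = hw_stack.copy()
        hw_stack.clear()

    def resume(task_id):
        """Copy the task's saved stack image back onto the hardware stack."""
        hw_stack.extend(task_saves.pop(task_id, []))

    # Switch from task 1 (three frames deep) to a fresh task 2 and back.
    hw_stack.extend(["t1_ret_addr", "t1_regs", "t1_locals"])
    suspend(1)
    resume(2)          # task 2 starts with an empty stack image
    suspend(2)
    resume(1)          # task 1's three frames are restored intact

The copying costs time on every context switch, which is the price stack mapping pays for fitting many task stacks into a part with a tiny hardware stack.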

Relevance:

10.00%

Publisher:

Abstract:

In the prediction of complex reservoirs with strong lithologic and petrophysical heterogeneity, inexact data (information-overlapping, information-incomplete, and noise-contaminated) and ambiguous physical relationships make inversion results non-unique, unstable, and uncertain. Reservoir prediction technologies based on linear assumptions are therefore unsuited to such complex areas. Starting from the limitations of conventional technologies, the thesis studies several kernel problems: inversion from band-limited seismic data, inversion resolution, inversion stability, and ambiguous physical relationships. It combines deterministic, statistical, and nonlinear geophysical theories and integrates geological information, rock physics, well data, and seismic data to predict lithologic and petrophysical parameters. The joint inversion technology suits areas with complex depositional environments and complex rock-physics relationships. By combining a nonlinear multistage Robinson seismic convolution model with an unconventional Caianiello neural network, the thesis unifies deterministic and statistical inversion. Through the Robinson convolution model and a nonlinear self-affine transform, deterministic inversion establishes a deterministic relationship between seismic impedance and seismic responses, ensuring inversion reliability. Through multistage seismic wavelets (MSW)/seismic inverse wavelets (MSIW) and the Caianiello neural network, statistical inversion establishes a statistical relationship between impedance and seismic responses, ensuring noise resistance. Direct and indirect inversion modes are used alternately to estimate and revise the impedance: the direct inversion result serves as the initial value for indirect inversion, and the final high-resolution impedance profile is obtained by indirect inversion, which greatly enhances inversion precision. A nonlinear rock-physics convolution model establishes the relationship between impedance and porosity/clay content; through multistage decomposition and bidirectional edge-wavelet detection it can depict more complex rock-physics relationships, and the Caianiello neural network combines deterministic inversion, statistical inversion, and nonlinear theory. Finally, the combined application of direct inversion based on vertical edge-detection wavelets and indirect inversion based on lateral edge-detection wavelets integrates geological information, well data, and seismic impedance to estimate high-resolution petrophysical parameters (porosity/clay content). These inversion results can be used for reservoir prediction and characterization. Multi-well constraints and separate-frequency inversion modes are adopted. Analysis of the resulting lithologic and petrophysical sections shows that the low-frequency sections reflect the macro structure of the strata, while the middle/high-frequency sections reflect its detailed structure; the high-resolution sections can thus be used to recognize sand-body boundaries and to predict hydrocarbon zones.
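For orientation, a minimal numerical sketch of the single-stage linear seismic convolution model that the thesis generalizes (the multistage, nonlinear Caianiello-network machinery is not reproduced here); the function names and the choice of a Ricker wavelet are illustrative assumptions:

    import numpy as np

    def reflectivity_from_impedance(z):
        """Reflection coefficients of an acoustic impedance series."""
        return (z[1:] - z[:-1]) / (z[1:] + z[:-1])

    def ricker(f0, dt, length=0.128):
        """Ricker wavelet with peak frequency f0 (Hz)."""
        t = np.arange(-length / 2, length / 2, dt)
        a = (np.pi * f0 * t) ** 2
        return (1.0 - 2.0 * a) * np.exp(-a)

    # Three-layer impedance model (units: kg s^-1 m^-2), 2 ms sampling.
    dt = 0.002
    z = np.concatenate([np.full(100, 3.0e6), np.full(100, 4.5e6), np.full(100, 6.0e6)])

    r = reflectivity_from_impedance(z)
    trace = np.convolve(ricker(30.0, dt), r, mode="same")  # band-limited synthetic

Inverting this forward map for z from the band-limited trace is exactly where the non-uniqueness discussed above enters: the wavelet suppresses low and high frequencies, so many impedance profiles fit the same trace.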

Relevance:

10.00%

Publisher:

Abstract:

Seismic methods play the leading role in discovering oil and gas traps and proving reserves throughout oil and gas exploration. They demand high-quality processed seismic data: not only exact spatial positioning but also true amplitude, AVO-attribute, and velocity information. The acquisition footprint degrades precision and imaging quality as well as AVO-attribute and velocity analysis. The acquisition footprint is a relatively new concept for describing seismic noise in 3-D exploration, and it is not easily understood. This paper begins by forward modeling seismic data from a simple acoustic model, processes the data, and discusses the cause of the acquisition footprint, concluding that the recording geometry is the main cause: it produces asymmetric distributions of coverage, offset, and azimuth in different grid cells. The paper summarizes the footprint's characteristics and description methods and analyzes its influence on geological interpretation and on seismic-attribute and velocity analysis. Data reconstruction based on the Fourier transform is at present the main method for interpolating and extrapolating non-uniform data, but it is an ill-conditioned inverse problem. A Tikhonov regularization strategy, which incorporates a priori information on the class of solutions sought, can reduce the computational difficulty caused by the poor conditioning of the discrete kernel and the scarcity of observations. The method is largely statistical and does not require manual selection of the regularization parameter, and hence yields appropriate inversion coefficients. Programming results and tentative calculations verify that the acquisition footprint can be removed through prestack data reconstruction. The paper then applies a weighting scheme within migration to remove the acquisition footprint: the fundamental principles and algorithms are surveyed, and seismic traces are weighted according to the area each trace occupies at different source-receiver distances. Adopting a grid method instead of computing the areas of a Voronoi diagram reduces the difficulty of calculating the weights. Results on model data and field seismic data demonstrate that incorporating a weighting scheme based on the relative area associated with each input trace, with respect to its neighbors, minimizes the artifacts caused by irregular acquisition geometry.
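The grid method mentioned above can be pictured as follows: approximate each trace's Voronoi area by dividing the area of its grid cell among the traces that fall in it. This is a hedged sketch of that stand-in, not the paper's algorithm; names and numbers are illustrative:

    import numpy as np

    def grid_area_weights(x, y, dx, dy):
        """Approximate relative-area weights: each trace gets its grid cell's
        area split evenly among the traces that landed in the same cell."""
        ix = np.floor(x / dx).astype(int)
        iy = np.floor(y / dy).astype(int)
        counts = {}
        for cell in zip(ix, iy):
            counts[cell] = counts.get(cell, 0) + 1
        n = np.array([counts[cell] for cell in zip(ix, iy)], dtype=float)
        # Densely covered cells get low weights, sparse cells high weights,
        # so irregular coverage contributes more evenly to the migration sum.
        return (dx * dy) / n

    # Hypothetical midpoint coordinates (metres) for five traces.
    x = np.array([0.0, 5.0, 6.0, 7.0, 40.0])
    y = np.array([0.0, 1.0, 2.0, 1.5, 3.0])
    print(grid_area_weights(x, y, dx=12.5, dy=12.5))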

Relevance:

10.00%

Publisher:

Abstract:

In the last several decades, owing to the fast development of computers, numerical simulation has become an indispensable tool in scientific research. Simulation methods built on difference operators, such as the Finite Difference Method (FDM) and the Finite Element Method (FEM), have been widely used. In seismology and seismic prospecting, however, one usually meets geological models with piecewise heterogeneous structures as well as volume heterogeneities between layers; enforcing the continuity of displacement and stress across irregular interfaces and capturing the scattering induced by the volume perturbations usually introduce errors in such conventional methods. The method discussed in this paper is based on elastic theory and integral theory. The seismic wave equation in the frequency domain is transformed into a generalized Lippmann-Schwinger equation, in which the wavefield contributed by the background is expressed by a boundary integral equation and the scattering by the volume heterogeneities is accounted for. The boundary element-volume integral method based on this equation inherits the advantages of the Boundary Element Method (BEM): it reduces the model dimension by one, makes explicit use of displacement and stress continuity across irregular interfaces, offers high precision, and satisfies the boundary condition at infinity. It also accurately simulates seismic scattering by volume heterogeneities. In this paper, the concrete Lippmann-Schwinger equation is given for realistic geological models, and the complete coefficients at non-smooth points of the integral equation are introduced. Because the method uses fundamental solutions that are singular when the source point and the field point are very close, in both the two- and three-dimensional cases, the treatment of the singular kernel affects the method's precision. A treatment based on integral transforms and integration by parts handles points on the boundary and inside the domain: it converts the singular integral into an analytical one in both two and three dimensions and thus eliminates the singularity. To analyze elastic seismic wave scattering by regional irregular topography, the analytical solution for problems of this type is discussed and the analytical solution for P-wave scattering by multiple canyons is given. For the boundary reflection, an infinite-boundary-element absorbing boundary developed by a previous researcher is used. Comparison between analytical solutions and concrete numerical examples validates the efficiency of the method. We thoroughly discuss spatial sampling in elastic wave simulation and find that, in general, three elements per wavelength are sufficient, although more elements per wavelength are necessary for very complex problems. The frequency-domain seismic response of canyons with different types of random heterogeneities is also illustrated. We analyze how the random-medium model, the horizontal and vertical correlation lengths, the standard deviation, and the dimensionless frequency affect seismic wave amplification at the ground surface, providing a basis for choosing random-medium parameters in numerical simulation.
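For orientation, a schematic scalar form of the Lippmann-Schwinger equation the method is built on (the thesis works with the elastic, tensor-valued analogue; the symbols here are generic assumptions):

$$
u(\mathbf{x}) = u^{0}(\mathbf{x}) + \int_{V} G(\mathbf{x},\mathbf{x}')\,\delta m(\mathbf{x}')\,u(\mathbf{x}')\,\mathrm{d}V' ,
$$

where $u^{0}$ is the background wavefield (here supplied by the boundary integral over the layered structure), $G$ is the background Green's function, and $\delta m$ is the volume perturbation of the medium parameters. Because the unknown total field $u$ appears on both sides, the volume scattering must be solved together with the boundary unknowns, which is precisely what the boundary element-volume integral discretization does.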

Relevance:

10.00%

Publisher:

Abstract:

The real Earth is far from an ideal elastic body. The movement of structures or fluids and scattering from thin layers inevitably affect seismic wave propagation, manifesting mainly as non-geometrical energy attenuation. Today, most theoretical research and applications assume that the media studied are fully elastic. Ignoring viscoelasticity can, in some circumstances, lead to amplitude and phase distortion, which in turn affects the traveltimes and waveforms used in imaging and inversion. To investigate the response of seismic wave propagation and improve imaging and inversion quality in complex media, we must not only account for the attenuation of real media but also implement it in efficient numerical methods and imaging techniques. For numerical modeling, the most widely used methods, such as finite-difference, finite-element, and pseudospectral algorithms, have difficulty improving accuracy and efficiency simultaneously. To partially overcome this difficulty, this paper devises a matrix differentiator method and an optimal convolutional differentiator method based on staggered-grid Fourier pseudospectral differentiation, together with a staggered-grid optimal Shannon singular-kernel convolutional differentiator derived from distribution theory, and uses them to study seismic wave propagation in viscoelastic media. Comparisons and accuracy analysis demonstrate that the optimal convolutional differentiator methods resolve the incompatibility between accuracy and efficiency well and are almost twice as accurate as finite differences of the same operator length; they efficiently reduce dispersion and provide high-precision waveform data. On the basis of frequency-domain wavefield modeling, we discuss the direct solution of the linear equations and point out that, compared with time-domain methods, frequency-domain methods handle multi-source problems more conveniently and incorporate medium attenuation much more easily. We also verify the equivalence of the time- and frequency-domain methods with numerical tests under assumptions on the non-relaxation modulus and quality factor, and analyze the causes of waveform differences. In frequency-domain waveform inversion, experiments are conducted with transmission, crosshole, and reflection data. Using the relation between medium scales and characteristic frequencies, we analyze the noise resistance of the frequency-domain sequential inversion method and its ability to deal with the non-uniqueness of nonlinear optimization. In the crosshole experiments, we identify the main sources of inversion error and determine how an incorrect quality factor affects the inverted results. For surface reflection data, several frequencies are chosen with an optimal frequency-selection strategy and used in sequential and simultaneous inversions, verifying how important low-frequency data are to the inverted results and how simultaneous inversion resists noise. Finally, I draw conclusions about the work in this dissertation, discuss its open problems in detail, and point out directions and theories worth deepening, which should provide a helpful reference for researchers interested in seismic wave propagation and imaging in complex media.
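As a minimal illustration of the convolutional-differentiator idea, the sketch below differentiates on a staggered grid by convolving with a short coefficient kernel. It uses the standard 4th-order staggered-grid coefficients rather than the optimized kernels devised in the dissertation; all names are illustrative:

    import numpy as np

    # Standard 4th-order staggered-grid coefficients; the dissertation replaces
    # such short kernels with longer optimized convolutional differentiators.
    C = [9.0 / 8.0, -1.0 / 24.0]

    def staggered_dx(f, dx):
        """df/dx sampled at the half-grid points x_i + dx/2, i = 1 .. len(f)-3."""
        i = np.arange(1, f.size - 2)
        g = np.zeros(i.size)
        for m, c in enumerate(C, start=1):
            g += c * (f[i + m] - f[i - m + 1])
        return g / dx

    x = np.linspace(0.0, 2.0 * np.pi, 201)
    dx = x[1] - x[0]
    num = staggered_dx(np.sin(x), dx)
    ref = np.cos(x[1:-2] + 0.5 * dx)
    print(np.max(np.abs(num - ref)))   # tiny: the scheme is 4th-order accurate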

Relevance:

10.00%

Publisher:

Abstract:

The main purpose of the research in this paper is to create a new method that solves, or at least reduces, the non-uniqueness of the results of nonlinear inversion of seismic wave scattering. Against the background of research into seismic inversion, the first chapter introduces recent progress in nonlinear inversion. In particular, the development, basic theories, and assumptions of the major theories of seismic inversion are analyzed, discussed, and summarized in mathematical and physical terms. The problems faced by the mathematical foundations of seismic inversion are discussed, and inverse problems of strong seismic scattering due to strongly heterogeneous media in the Earth's interior are analyzed and reviewed. The kernel of the paper, on which all our attention is gathered, is a new nonlinear inversion method for seismic scattering. The paper provides the theory and method for introducing fixed-point theory into nonlinear seismic scattering inversion and obtaining the solution, and gives a practical way to construct a series of contractive mappings of the velocity parameter in the mapping space of the wavefield. The results establish the existence of a fixed point of the velocity parameter and provide a method to find it. Further, the paper proves that the value obtained by substituting the velocity fixed point into the wave equation is the fixed point of the contractive mapping of the wavefield; because of the stability of the fixed point, it is the global minimum. Based on the new theory, chapter three obtains many inversion results in numerical tests. Analyzing these results, one finds that all of them, inverted from different initial models, tend to the true values of the theoretical model; in other words, the new method largely eliminates the non-uniqueness that exists in nonlinear inversion of seismic wave scattering. Since the test results are still quite limited, more tests are needed to support the theory. As a new theoretical method it inevitably has many weaknesses; chapter four points out the questions that still trouble us, and we hope more people will join us in solving them.
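The fixed-point machinery invoked here is the contraction (Banach) fixed-point theorem; in generic notation (the paper's concrete mapping on velocity models is not reproduced):

$$
\|T(v_1) - T(v_2)\| \le q\,\|v_1 - v_2\| \quad (0 \le q < 1)
\;\Longrightarrow\;
v_{n+1} = T(v_n) \to v^{*} = T(v^{*})
$$

for any starting model $v_0$, with the a priori error bound $\|v_n - v^{*}\| \le \frac{q^{n}}{1-q}\,\|v_1 - v_0\|$. The uniqueness of $v^{*}$ is what underlies the observation that inversions started from different initial models converge to the same answer.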

Relevance:

10.00%

Publisher:

Abstract:

Based on social survey data collected by local research groups in several Chinese counties over roughly the past five years, the author poses and solves two kernel problems in the field of social situation forecasting: i) how can attitude data at the individual level be integrated with social situation data at the macro level; ii) how can the forecasting power of models constructed by different statistical methods be compared? Five integrative statistics were applied: 1) arithmetic mean (MEAN); 2) standard deviation (SD); 3) coefficient of variation (CV); 4) mixed secondary moment (M2); 5) tendency (TD). For the first problem, the five statistics were used to synthesize individual- and macro-level social situation data at the county level, forming novel integrative datasets. On this basis the second problem was addressed: Multiple Regression Analysis (MRA), Discriminant Analysis (DA), and Support Vector Machines (SVM) were used to construct several forecasting models. Along the dimensions of stepwise vs. enter selection, short-term vs. long-term forecasting, and the different integrative (statistic) models, meta-analysis and power analysis were used to compare the predictive power of each model within and among modeling methods. The dissertation concludes: 1) significant differences exist among the integrative (statistic) models, with tendency (TD) models having the highest power and coefficient-of-variation (CV) models the lowest; 2) there is no significant power difference between stepwise and enter models, nor between short-term and long-term forecasting models; 3) there are significant differences among models constructed by different methods, with support vector machines (SVM) having the highest statistical power. This research lays a foundation for deeper exploration of optimal social situation forecasting models; moreover, it is the first time meta-analysis and power analysis have been brought into the assessment of such forecasting models.
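A small sketch of the integration step, computing three of the five integrative statistics per county from individual-level scores (the paper's M2 and TD statistics are defined there and are not reproduced; data and names below are hypothetical):

    import numpy as np

    def integrative_stats(scores_by_county):
        """Collapse individual-level attitude scores to county-level MEAN,
        SD, and CV; these county rows then feed the forecasting models."""
        out = {}
        for county, scores in scores_by_county.items():
            s = np.asarray(scores, dtype=float)
            mean, sd = s.mean(), s.std(ddof=1)
            out[county] = {"MEAN": mean, "SD": sd, "CV": sd / mean}
        return out

    # Hypothetical 1-5 attitude scores for two counties.
    data = {"county_A": [3, 4, 4, 5, 2, 3], "county_B": [2, 2, 3, 1, 2, 2]}
    print(integrative_stats(data))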

Relevance:

10.00%

Publisher:

Abstract:

The STUDENT problem solving system, programmed in LISP, accepts as input a comfortable but restricted subset of English which can express a wide variety of algebra story problems. STUDENT finds the solution to a large class of these problems. STUDENT can utilize a store of global information not specific to any one problem, and may make assumptions about the interpretation of ambiguities in the wording of the problem being solved. If it uses such information or makes any assumptions, STUDENT communicates this fact to the user. The thesis includes a summary of other English-language question-answering systems. All these systems, and STUDENT, are evaluated according to four standard criteria. The linguistic analysis in STUDENT is a first approximation to the analytic portion of a semantic theory of discourse outlined in the thesis. STUDENT finds the sequence of kernel sentences which is the base of the input discourse, and transforms this sequence of kernel sentences into a set of simultaneous equations which form the semantic base of the STUDENT system. STUDENT then tries to solve this set of equations for the values of requested unknowns. If it is successful it gives the answers in English. If not, STUDENT asks the user for more information, and indicates the nature of the desired information. The STUDENT system is a first step toward natural language communication with computers. Further work on the semantic theory proposed should result in much more sophisticated systems.
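In the spirit of STUDENT's kernel-sentence-to-equation step, here is a toy sketch with two hard-coded sentence patterns (the real system's grammar and transformations are far richer; the patterns and names are illustrative assumptions):

    import re
    import sympy as sp

    # Each pattern maps one kernel-sentence shape to one equation.
    PATTERNS = [
        (re.compile(r"the sum of (\w+) and (\w+) is (\d+)"),
         lambda a, b, n: sp.Eq(sp.Symbol(a) + sp.Symbol(b), int(n))),
        (re.compile(r"(\w+) is (\d+) more than (\w+)"),
         lambda a, n, b: sp.Eq(sp.Symbol(a), sp.Symbol(b) + int(n))),
    ]

    def to_equations(kernel_sentences):
        eqs = []
        for sent in kernel_sentences:
            for pat, build in PATTERNS:
                m = pat.fullmatch(sent.lower().rstrip("."))
                if m:
                    eqs.append(build(*m.groups()))
        return eqs

    story = ["The sum of x and y is 10.", "x is 4 more than y."]
    print(sp.solve(to_equations(story)))   # {x: 7, y: 3}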

Relevance:

10.00%

Publisher:

Abstract:

C.G.G. Aitken, Q. Shen, R. Jensen and B. Hayes. The evaluation of evidence for exponentially distributed data. Computational Statistics & Data Analysis, vol. 51, no. 12, pp. 5682-5693, 2007.

Relevance:

10.00%

Publisher:

Abstract:

Extensible systems allow services to be configured and deployed for the specific needs of individual applications. This paper describes a safe and efficient method for user-level extensibility that requires only minimal changes to the kernel. A sandboxing technique is described that supports multiple logical protection domains within the same address space at user level. This approach allows applications to register sandboxed code with the system, which may then be executed in the context of any process. Our approach differs from other implementations that require special hardware support, such as segmentation or tagged translation look-aside buffers (TLBs), either to implement multiple protection domains in a single address space or to support fast switching between address spaces. Likewise, we do not require the entire system to be written in a type-safe language to provide fine-grained protection domains. Instead, our user-level sandboxing technique requires only page-based virtual memory support and the requirement that extension code be written either in a type-safe language or by a trusted source. Using a fast method of upcalls, we show how our sandboxing technique for implementing logical protection domains provides significant performance improvements over traditional methods of invoking user-level services. Experimental results show our approach to be an efficient method for extensibility, with inter-protection-domain communication costs close to those of hardware-based solutions leveraging segmentation.

Relevance:

10.00%

Publisher:

Abstract:

The best-effort nature of the Internet poses a significant obstacle to the deployment of many applications that require guaranteed bandwidth. In this paper, we present a novel approach that enables two edge/border routers, which we call Internet Traffic Managers (ITMs), to use an adaptive number of TCP connections to set up a tunnel of the desired bandwidth between them. The number of TCP connections that comprise this tunnel is elastic in the sense that it increases or decreases in tandem with competing cross traffic to maintain a target bandwidth. An origin ITM then schedules incoming packets from an application requiring guaranteed bandwidth over that elastic tunnel. Unlike many proposed solutions that aim to deliver soft QoS guarantees, our elastic-tunnel approach does not require any support from core routers (as IntServ and DiffServ do); it is scalable in the sense that core routers do not have to maintain per-flow state (as they must with IntServ); and it is readily deployable within a single ISP or across multiple ISPs. To evaluate our approach, we develop a flow-level control-theoretic model to study the transient behavior of established elastic TCP-based tunnels. The model captures the effect of cross-traffic connections on our bandwidth allocation policies. Through extensive simulations, we confirm the effectiveness of our approach in providing soft bandwidth guarantees. We also outline our kernel-level ITM prototype implementation.
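A toy flow-level sketch of the elastic-tunnel control idea, assuming idealized TCP-fair sharing (each of the tunnel's connections and each cross-traffic connection gets an equal share of the bottleneck); the one-connection-per-step controller and the numbers are illustrative assumptions, not the paper's control-theoretic model:

    def tunnel_bandwidth(n_tunnel, n_cross, capacity):
        """Bottleneck share captured by the tunnel under TCP fairness."""
        return capacity * n_tunnel / (n_tunnel + n_cross)

    def control_step(n_tunnel, n_cross, capacity, target):
        """Open or close one connection per step to track the target."""
        got = tunnel_bandwidth(n_tunnel, n_cross, capacity)
        if got < target:
            return n_tunnel + 1
        if got > target and n_tunnel > 1:
            return n_tunnel - 1
        return n_tunnel

    n, capacity, target = 1, 100.0, 30.0           # Mb/s, hypothetical
    for cross in [10, 10, 20, 20, 20, 5, 5, 5]:    # fluctuating cross traffic
        n = control_step(n, cross, capacity, target)
        print(cross, n, round(tunnel_bandwidth(n, cross, capacity), 1))

The connection count grows when cross traffic squeezes the tunnel below its target and shrinks when the tunnel overshoots, which is the elasticity the abstract describes.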