891 results for Linear and multilinear programming


Relevance:

100.00%

Publisher:

Abstract:

A trajectory-planning method is presented for the pouring process of a concrete pump truck's boom placing mechanism, taking the lengths of the boom hydraulic cylinders as the parametric variables. To solve the inverse kinematics problem of the placing mechanism, a genetic algorithm based on parallel multi-peak search is used to optimize the optimal-control objective function, and the construction process is simulated.
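A minimal sketch of the kind of genetic-algorithm search the abstract describes, assuming a generic real-coded GA (tournament selection, blend crossover, Gaussian mutation) and a hypothetical one-dimensional objective standing in for the cylinder-length optimal-control function; the thesis's multi-peak parallel-search variant is not reproduced.

```python
import random

def sketch_ga(fitness, bounds, pop_size=40, generations=60, seed=0):
    # Generic real-coded GA: tournament selection, blend crossover,
    # Gaussian mutation. The thesis's multi-peak parallel search and its
    # boom-cylinder objective function are not specified here.
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            a = min(rng.sample(pop, 3), key=fitness)   # tournament of 3
            b = min(rng.sample(pop, 3), key=fitness)
            child = 0.5 * (a + b) + rng.gauss(0.0, 0.1 * (hi - lo))
            nxt.append(min(max(child, lo), hi))        # clamp to bounds
        pop = nxt
    return min(pop, key=fitness)

# Hypothetical 1-D stand-in for a cylinder-length objective, minimum at 2.0.
best = sketch_ga(lambda x: (x - 2.0) ** 2, bounds=(-5.0, 5.0))
```

With the fixed seed the search settles close to the optimum; a real placing-mechanism objective would be multi-dimensional and multimodal, which is why the thesis uses a parallel multi-peak variant.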

Relevance:

100.00%

Publisher:

Abstract:

In circuit designs using linear potentiometers, the output and input impedances of the preceding and following stages, as well as improper use and installation, can introduce nonlinearity, preventing the circuit and control system from meeting accuracy requirements. For several typical circuits (potentiometer-adjusted output voltage, limited adjustment range, equivalent load impedance, and fine adjustment), the transfer functions and nonlinear responses are analyzed, and linear and nonlinear output response curves are obtained experimentally. The paper explains how to avoid and reduce the effects of nonlinearity in practical applications.
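The load-impedance nonlinearity the abstract mentions can be made concrete with the standard transfer function of a potentiometer divider driving a finite load; the specific circuits measured in the paper are not reproduced, so this is a generic sketch.

```python
def loaded_divider_ratio(alpha, r_pot, r_load):
    # Output/input voltage ratio of a potentiometer divider whose wiper
    # drives a finite load. With an ideal (infinite) load the ratio is
    # simply alpha (linear); a finite load pulls the curve below the
    # straight line: ratio = alpha / (1 + alpha*(1-alpha)*R/RL),
    # derived from alpha*R parallel r_load in series with (1-alpha)*R.
    return alpha / (1.0 + alpha * (1.0 - alpha) * r_pot / r_load)

# Midpoint of a 10 k pot: ideal load gives 0.5, a 10 k load gives less.
ideal = loaded_divider_ratio(0.5, 10e3, float("inf"))   # → 0.5
loaded = loaded_divider_ratio(0.5, 10e3, 10e3)          # → 0.4
```

The deviation grows with the ratio R/RL, which is why the abstract singles out the equivalent load impedance as a source of nonlinearity.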

Relevance:

100.00%

Publisher:

Abstract:

The authors designed and implemented a binary-tree classifier based on multivariate stepwise regression. Exhaustive search is used to select the tree structure and the feature subsets, which is more reasonable and closer to optimal than constrained selection. The "traversal" binary tree, implemented in FORTRAN, makes full use of FORTRAN's adjustable-array capability together with appropriate techniques to maximize the use of computer memory. This general-purpose classifier can classify any pattern for which statistical data are available. In white-blood-cell classification it achieved high recognition rates of 97% for five classes and 92.2% for six classes.
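A toy illustration of the exhaustive feature-subset search the abstract describes, with a nearest-class-mean rule standing in for the paper's multivariate stepwise-regression classifier (which is not reproduced here).

```python
from itertools import combinations

def nearest_mean_accuracy(X, y, feats):
    # Resubstitution accuracy of a nearest-class-mean rule restricted
    # to the chosen feature subset.
    classes = sorted(set(y))
    means = {}
    for c in classes:
        members = [x for x, label in zip(X, y) if label == c]
        means[c] = [sum(m[f] for m in members) / len(members) for f in feats]
    correct = 0
    for x, label in zip(X, y):
        pred = min(classes,
                   key=lambda c: sum((x[f] - mu) ** 2
                                     for f, mu in zip(feats, means[c])))
        correct += (pred == label)
    return correct / len(y)

def exhaustive_subset_search(X, y, n_features):
    # Score every non-empty feature subset, smallest subsets first,
    # in the spirit of the paper's exhaustive tree/feature search.
    subsets = [s for k in range(1, n_features + 1)
               for s in combinations(range(n_features), k)]
    return max(subsets, key=lambda s: nearest_mean_accuracy(X, y, s))

# Hypothetical data: feature 0 separates the classes; feature 1 is noise.
X = [(0.0, 5.0), (0.1, -4.0), (0.2, 3.0), (1.0, 4.0), (1.1, -5.0), (0.9, 3.5)]
y = [0, 0, 0, 1, 1, 1]
best = exhaustive_subset_search(X, y, 2)  # → (0,)
```

Exhaustive search is only feasible for small feature counts (2^n subsets), which is consistent with the paper's emphasis on memory-efficient FORTRAN implementation.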

Relevance:

100.00%

Publisher:

Abstract:

The practice of geophysical prospecting reveals the complexity of the Earth's interior, and studies of this complexity play an important practical role in understanding subsurface structure. At present, the complexity of the Earth mainly refers to lateral and vertical inhomogeneity, anisotropy, and non-linearity, and studies of anisotropic and non-linear media are at the frontier of seismology and exploration seismology. This paper summarizes the development of these complexities and presents forward modeling and inversion in non-linear and anisotropic media. Firstly, the paper introduces the theory of seismic wave propagation in non-linear and anisotropic media, the theoretical basis for simulation and inversion research. Secondly, a high-quality numerical simulation method with little dispersion is developed to investigate the influence of complexity, including anisotropy and non-linearity, on multi-component seismograms. Because most real data in seismology have a single component, we developed two lines of work on anisotropic multi-component imaging. One is prestack reflection migration: the results show that distorted images are obtained if data from anisotropic media are migrated using isotropic extrapolation, and that image quality improves greatly once anisotropy in subsurface layers is taken into account. The other is joint inversion of anisotropic parameters, which takes advantage of multi-component data by combining seismic reflection traveltimes with polarization information. Based on these research works, we obtain the following results: 1. Systematic studies combined with numerical simulation indicate that anisotropic and non-linear seismogram characteristics are significant for detecting fractured belts in the Earth and for understanding deformation fields and mechanisms. 2. Based on anisotropic media models, we developed an efficient prestack migration method for subsurface structures and for seismic data from different observation geometries, which improves imaging quality for VSP data, synthetic seismograms, and real data. 3. Joint seismic inversion combining anisotropic reflection traveltimes and polarization data shows that ignoring anisotropy leads to completely wrong inversion results and to erroneous subsequent interpretation.

Relevance:

100.00%

Publisher:

Abstract:

Geophysical inversion is a theory for transforming observational data into corresponding geophysical models. The goal of seismic inversion is not only wave-velocity models but also the fine structure and dynamic processes of the Earth's interior, extending to further parameters such as density, anisotropy, and viscosity. Inversion theory is conventionally divided into linear and non-linear inversion. Over the past 40 years linear inversion has developed into a complete and systematic theory with extensive practical applications, while many urgent problems remain to be solved in non-linear inversion theory and practice. Based on the wave equation, this dissertation is mainly concerned with theoretical research on several non-linear inversion methods: waveform inversion, traveltime inversion, and the joint inversion combining the two. The objective of gradient waveform inversion is to find a geologic model whose synthetic seismograms best fit the observed seismograms. In contrast with other inverse methods, waveform inversion uses all characteristics of the waveform and has high resolution. But waveform inversion is an interface-by-interface method, an artificial parameter limit must be provided in each inversion iteration, and the solution tends to get stuck in local minima if the starting model is too far from the actual model. Building on the velocity scanning used in traditional seismic data processing, a layer-by-layer waveform inversion method is developed in this dissertation to address these weaknesses. In wave-equation traveltime inversion (WT), the wave equation is used to calculate the traveltime and its derivative (the perturbation of traveltime with respect to velocity). Unlike traditional ray-based traveltime inversion, WT requires no ray tracing, no traveltime picking, and no high-frequency assumption, and good results can be obtained even when the starting model is far from the real model. Compared with waveform inversion, however, WT has low resolution. Waveform inversion and WT have complementary advantages and similar algorithms, which makes their joint inversion a better inversion method. Another key point emphasized in this dissertation is how to exploit their complementary advantages fully without increasing storage space or the amount of calculation. Numerical tests are implemented to prove the feasibility of the inversion methods mentioned above. For gradient waveform inversion in particular, field data acquired by our group in Wali park and Shunyi district are inverted. Real-data processing shows that waveform inversion faces many problems with field data; matching synthetic seismograms to observed seismograms and cancelling noise are the two primary ones. In conclusion, building on previous experience, this dissertation implements waveform inversion based on the acoustic and elastic wave equations, traveltime inversion based on the acoustic wave equation, and traditional combined waveform-traveltime inversion. Besides the traditional analysis of inversion theory, there are two innovations: layer-by-layer inversion of seismic reflection data and a rapid method for acoustic wave-equation joint inversion.
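The gradient waveform-inversion loop described above can be sketched in miniature. A linear toy forward operator stands in for the wave-equation solver, so this misfit is convex and free of the local minima the abstract warns about, and the gradient is taken by finite differences rather than from the wave equation itself.

```python
def waveform_misfit(m, d_obs, forward):
    # Least-squares waveform misfit J(m) = 1/2 * sum_t (F(m)_t - d_t)^2.
    return 0.5 * sum((s - o) ** 2 for s, o in zip(forward(m), d_obs))

def invert(m0, d_obs, forward, step=0.05, iters=200, h=1e-6):
    # Steepest-descent model updates. A real waveform inversion would
    # obtain the gradient from the wave equation (adjoint state); a
    # central finite difference stands in here.
    m = m0
    for _ in range(iters):
        g = (waveform_misfit(m + h, d_obs, forward)
             - waveform_misfit(m - h, d_obs, forward)) / (2 * h)
        m -= step * g
    return m

times = [0.01 * i for i in range(100)]
forward = lambda v: [v * t for t in times]  # hypothetical linear "wave" operator
d_obs = forward(2.0)                        # observed data from true model v = 2
v_est = invert(1.5, d_obs, forward)
```

With an oscillatory forward operator the same loop would cycle-skip into a local minimum when started far from the true model, which is exactly the weakness the dissertation's layer-by-layer and joint WT schemes are designed to mitigate.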

Relevance:

100.00%

Publisher:

Abstract:

In modern signal processing, the objects of analysis are usually non-linear, non-Gaussian, and non-stationary signals, especially non-stationary signals. The conventional methods for analyzing and processing non-stationary signals are the short-time Fourier transform, the Wigner-Ville distribution, the wavelet transform, and so on. But these three methods are all based on the Fourier transform, so they share its shortcomings and cannot escape its limitations. The Hilbert-Huang Transform (HHT) is a newer technique for processing non-stationary signals, proposed by N. E. Huang in 1998. It is composed of Empirical Mode Decomposition (EMD) and Hilbert Spectral Analysis (HSA). After EMD processing, any non-stationary signal is decomposed into a series of data sequences with different scales, each called an Intrinsic Mode Function (IMF); the energy distribution of the original signal is then obtained by summing the Hilbert spectra of all IMFs. In essence, the algorithm makes non-stationary signals stationary, decomposes fluctuations and trends of different scales step by step, and finally describes the frequency content in terms of instantaneous frequency and energy rather than the global frequency and energy of Fourier spectral analysis. This avoids the shortcoming of the Fourier transform, in which many spurious harmonics are needed to describe non-linear and non-stationary signals. This paper covers the following. First, it introduces the history and development of the HHT, its characteristics and main issues, and briefly presents the basic principles and algorithms of the transform, confirming its validity by simulations. Second, it discusses some shortcomings of the HHT. By using FFT interpolation, we solve the problems of IMF instability and instantaneous-frequency undulation caused by insufficient sampling rate. For the boundary effect caused by the limitations of the HHT's envelope algorithm, we use a wave-characteristic matching method, with good results. Third, the paper investigates the application of the HHT to electromagnetic signal processing. Based on the analysis of real data examples, we discuss its application to electromagnetic signal processing and noise suppression. The empirical mode decomposition method and its multi-scale filtering characteristics can effectively analyze the noise distribution of electromagnetic signals, suppress interference, and improve the interpretability of the information. We also found that selecting electromagnetic signal sections using the Hilbert time-frequency energy spectrum helps to improve signal quality and enhance the quality of the data.
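The Hilbert spectral step of the HHT rests on the analytic signal. A minimal sketch (assuming NumPy, and a pure tone in place of a real IMF) builds the analytic signal from the one-sided spectrum and reads off instantaneous frequency, the quantity the HHT substitutes for global Fourier frequency.

```python
import numpy as np

def analytic_signal(x):
    # Analytic signal via the FFT: zero the negative frequencies and
    # double the positive ones, then transform back.
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def instantaneous_frequency(x, fs):
    # Instantaneous frequency = derivative of the unwrapped phase.
    phase = np.unwrap(np.angle(analytic_signal(x)))
    return np.diff(phase) * fs / (2.0 * np.pi)

fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
x = np.sin(2.0 * np.pi * 50.0 * t)   # a pure 50 Hz tone standing in for an IMF
f_inst = instantaneous_frequency(x, fs)
```

For a genuine HHT, EMD would first sift the signal into IMFs and this analysis would be applied to each; the EMD sifting and envelope-matching boundary fix discussed in the paper are not reproduced here.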

Relevance:

100.00%

Publisher:

Abstract:

The grid is a foundation of reservoir description and reservoir simulation, and the grid scale strongly influences the precision of reservoir simulation; gridding reservoir parameters therefore requires an interpolation method that is both fast and accurate. The improved distance-weighted interpolation method has several useful properties, such as logical selection of data points, exact interpolation, little calculation, and simple programming, and its application can improve the precision of reservoir description and simulation. Fractal geostatistics scientifically describes the distribution of various geological properties in a reservoir. Applying the fractal interpolation method to grid interpolation of reservoir parameters gives results that better accord with the geological properties and configuration of the reservoir and improves the rationality and quality of the interpolation. Combining the improved distance-weighted interpolation method with the fractal interpolation method in mathematical models of grid upscaling and grid downscaling, the software packages GROUGH (grid upscaling) and GFINE (grid downscaling) were developed for the upscaling and downscaling problems in reservoir description and simulation. GROUGH and GFINE were first applied in research on fine-scale and large-scale reservoir simulation. Applying the grid upscaling and downscaling technique in fine reservoir simulation of Es21-2 of the Shengtuo oilfield yielded a fine-scale distribution of remaining oil and provided a strong scientific basis for integrated, comprehensive adjustment. In the alkaline/surfactant/polymer flooding pilot area in the west district of the Gudao oilfield, a giant tertiary oil recovery pilot, fine reservoir simulation of chemical flooding was realized for the first time using the grid upscaling and downscaling technique. The grid upscaling and downscaling technique has wide application prospects and significant research value in reservoir description and reservoir simulation.
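A one-dimensional sketch of basic inverse-distance-weighted interpolation, the starting point of the improved distance-weighted method above; the paper's data-point selection logic and the fractal interpolation are not reproduced.

```python
def idw(xq, points, values, power=2.0):
    # Inverse-distance weights: nearer data points count more. A zero
    # distance returns the data value directly, which preserves the
    # exact-interpolation property the abstract highlights.
    weights = []
    for p, v in zip(points, values):
        d = abs(xq - p)
        if d == 0.0:
            return v
        weights.append(1.0 / d ** power)
    total = sum(weights)
    return sum(w * v for w, v in zip(weights, values)) / total

exact = idw(1.0, [0.0, 1.0, 2.0], [10.0, 20.0, 30.0])  # → 20.0 (hits a data point)
mid = idw(0.5, [0.0, 1.0], [10.0, 20.0])               # → 15.0 (symmetric weights)
```

For reservoir gridding the same weighting is applied in two or three dimensions over a selected neighborhood of data points; the `power` exponent controls how quickly distant points lose influence.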

Relevance:

100.00%

Publisher:

Abstract:

Among cognitive studies of action, an important behavioral method is to observe Reaction Time (RT) and Movement Time (MT) as functions of motor parameters. RT is measured from the onset of target presentation to the initiation of a movement and is regarded as reflecting the programming of the upcoming movement; MT is measured from the initiation to the end of the movement and is regarded as reflecting its execution. However, the relationship between RT and motor parameters remains uncertain. Under this uncertainty many related issues have long remained unsettled, especially whether an amplitude effect appears during RT, and what that effect should be. The present study aimed to identify the amplitude effect and the related cognitive processes under different experimental conditions. First, we discussed the potential composition of RT and suggested that the RT normally measured in previous experiments might not reflect motor programming very well. We then designed a series of experiments to observe the relationship between RT and motor programming using different Indexes of Difficulty (ID), different instructions emphasizing speed or accuracy, different vision conditions during movement execution, and a Go/NoGo paradigm. Meanwhile, we compared the amplitude effect under the respective RTs to draw specific conclusions about the amplitude effect and about the relationship between RT and MT. The main findings are as follows. 1) Because of "preview", "visual feedback control", and "speed-accuracy tradeoff", RT reflects motor programming differently under different experimental conditions. 2) The amplitude effect on RT varies across conditions: RT can be too short to exhibit an amplitude effect; when RT is prolonged, more RT may be needed for shorter movements; and when RT is further prolonged, more RT may be needed for longer movements. 3) Under the present experimental conditions, the amplitude effect on MT consistently showed that longer movements need longer MT. 4) Under the present experimental conditions, RT and MT stand in a compensatory relationship. The present study has important theoretical significance: the cognitive processes of action are an important part of human cognitive behavior, and the related studies can help people understand themselves and their relation to their surroundings. Keywords: motor programming; amplitude effect; Reaction Time (RT); Movement Time (MT)
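The experiments above manipulate the Index of Difficulty (ID); the abstract does not state its exact formulation, so the standard Fitts form ID = log2(2A/W), with amplitude A and target width W, is assumed in this sketch.

```python
import math

def index_of_difficulty(amplitude, width):
    # Standard Fitts' law index of difficulty in bits: ID = log2(2A/W).
    # Assumed form; the study's exact ID definition is not given above.
    return math.log2(2.0 * amplitude / width)

id_short = index_of_difficulty(8.0, 2.0)   # → 3.0 bits
id_long = index_of_difficulty(16.0, 2.0)   # → 4.0 bits
```

Doubling the amplitude at fixed width raises ID by exactly one bit, which is how amplitude and accuracy demands can be varied independently across conditions.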

Relevance:

100.00%

Publisher:

Abstract:

Since the middle of the 1980s, the mechanisms of transfer of training between cognitive subskills that rest on the same body of declarative knowledge have received much attention. The dominant account is the theory of common elements (Singley & Anderson, 1989), which predicts that there will be little or no transfer between subskills within the same domain when knowledge is used in different ways, even though the subskills may rest on a common body of declarative knowledge. This idea is termed the "principle of use specificity of knowledge" (Anderson, 1987). Although this principle has gained empirical support from domains such as elementary geometry (Neves & Anderson, 1981) and computer programming (McKendree & Anderson, 1987), it is challenged by research (Pennington et al., 1991; 1995) that found substantially larger amounts of transfer between subskills that rest on shared declarative knowledge but share few procedures (production rules). Pennington et al. (1995) provided evidence that this larger transfer is due to the elaboration of declarative knowledge. Our research tests these two explanations by considering transfer between two subskills within the domains of elementary geometry and elementary algebra respectively, and the influence of learning method ("learning from examples" vs. "learning from declarative text") and subject ability (high, middle, low) on the amount of transfer. Within elementary geometry, the two subskills of "generating proofs" (GP) and "explaining proofs" (EP), which rest on the declarative knowledge of "theorems on the properties of the parallelogram", share few procedures. Within elementary algebra, the two subskills of "calculation" (C) and "simplification" (S), which rest on the declarative knowledge of "multiplication of radicals", share more procedures. The results demonstrate that: 1. Within elementary geometry, although little transfer was found between GP and EP for the subjects as a whole, different results emerged when subject ability was considered. Among high-ability subjects, significant positive transfer was found from EP to GP, while little transfer was found in the opposite direction (from GP to EP). Among low-ability subjects, significant positive transfer was found from EP to GP, while significant negative transfer was found in the opposite direction. For middle-ability subjects, little transfer was found between the two subskills. 2. Within elementary algebra, significant positive transfer was found from S to C, while significant negative transfer was found in the opposite direction (from C to S), for the subjects as a whole. The same pattern occurred among middle- and low-ability subjects; among high-ability subjects, no transfer was found between the two subskills. 3. Within these two domains, learning method had little influence on transfer of training between subskills. Evidently, these results cannot be attributed either to common procedures alone or to elaboration of declarative knowledge alone. A synthetic examination is essential to construct a reasonable explanation, taking into account three elements: (1) the relations between the procedures of the subskills; (2) the elaboration of declarative knowledge; (3) the elaboration of procedural knowledge. Excluding subject factors, transfer of training between subskills can be predicted and explained by analyzing the relations between the procedures of the two subskills. However, when particular subjects are considered, the explanation must also include the subjects' elaboration of declarative and procedural knowledge, especially the influence of that elaboration on performing the other subskill. The finding that the learning methods had little influence on transfer can be explained by the fact that the two methods did not affect the level of declarative knowledge. Protocol analysis provided evidence to support these hypotheses. From this research we conclude that, to explain the mechanisms of transfer of training between cognitive subskills resting on the same body of declarative knowledge, three elements must be considered together: (1) the relations between the procedures of the subskills; (2) the elaboration of declarative knowledge; (3) the elaboration of procedural knowledge.

Relevance:

100.00%

Publisher:

Abstract:

On October 19-22, 1997 the Second PHANToM Users Group Workshop was held at the MIT Endicott House in Dedham, Massachusetts. Designed as a forum for sharing results and insights, the workshop was attended by more than 60 participants from 7 countries. These proceedings report on workshop presentations in diverse areas including rigid and compliant rendering, tool kits, development environments, techniques for scientific data visualization, multi-modal issues and a programming tutorial.

Relevance:

100.00%

Publisher:

Abstract:

A simple analog circuit designer has been implemented as a rule-based system. The system can design voltage followers, Miller integrators, and bootstrap ramp generators from functional descriptions of what these circuits do. While the designer works in a simple domain where all components are ideal, it demonstrates the abilities of skilled designers. While the domain is electronics, the design ideas are useful in many other engineering domains, such as mechanical engineering, chemical engineering, and numerical programming. Most circuit design systems are given the circuit schematic and use arithmetic constraints to select component values. This circuit designer is different because it designs the schematic itself. The designer uses a unidirectional CONTROL relation to find the schematic. The circuit designs are built around this relation; it restricts the search space, assigns purposes to components, and finds design bugs.

Relevance:

100.00%

Publisher:

Abstract:

This thesis confronts the nature of the process of learning an intellectual skill, the ability to solve problems efficiently in a particular domain of discourse. The investigation is synthetic; a computational performance model, HACKER, is displayed. HACKER is a computer problem-solving system whose performance improves with practice. HACKER maintains performance knowledge as a library of procedures indexed by descriptions of the problem types for which the procedures are appropriate. When applied to a problem, HACKER tries to use a procedure from this "Answer Library". If no procedure is found to be applicable, HACKER writes one using more general knowledge of the problem domain and of programming techniques. This new program may be generalized and added to the Answer Library.

Relevance:

100.00%

Publisher:

Abstract:

The problem of achieving conjunctive goals has been central to domain-independent planning research; the nonlinear constraint-posting approach has been the most successful. Previous planners of this type have been complicated, heuristic, and ill-defined. I have combined and distilled the state of the art into a simple, precise, implemented algorithm (TWEAK) which I have proved correct and complete. I analyze previous work on domain-independent conjunctive planning; in retrospect it becomes clear that all conjunctive planners, linear and nonlinear, work the same way. The efficiency of these planners depends on the traditional add/delete-list representation for actions, which drastically limits their usefulness. I present theorems that suggest that efficient general-purpose planning with more expressive action representations is impossible, and suggest ways to avoid this problem.
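The add/delete-list action representation the abstract critiques can be shown in a few lines; this STRIPS-style sketch uses hypothetical blocks-world literals, not TWEAK's actual formulation.

```python
def applicable(state, action):
    # An action applies when its preconditions are a subset of the state.
    return action["pre"] <= state

def apply_action(state, action):
    # The classic add/delete-list update: remove the delete list,
    # then add the add list. Everything else persists unchanged,
    # which is what makes these planners efficient and also limited.
    return (state - action["del"]) | action["add"]

# Hypothetical blocks-world action: stack block A on block B.
stack_a_on_b = {"pre": frozenset({"clear(A)", "clear(B)"}),
                "add": frozenset({"on(A,B)"}),
                "del": frozenset({"clear(B)"})}

s0 = frozenset({"clear(A)", "clear(B)", "ontable(A)", "ontable(B)"})
s1 = apply_action(s0, stack_a_on_b)
```

More expressive representations (conditional effects, quantification, derived predicates) break this simple set-difference update, which is the efficiency/expressiveness tension the abstract's theorems address.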

Relevance:

100.00%

Publisher:

Abstract:

Simon, B., Hanks, B., Murphy, L., Fitzgerald, S., McCauley, R., Thomas, L., and Zander, C. 2008. Saying isn't necessarily believing: influencing self-theories in computing. In Proceedings of the Fourth International Workshop on Computing Education Research (Sydney, Australia, September 6-7, 2008). ICER '08. ACM, New York, NY, 173-184.

Relevance:

100.00%

Publisher:

Abstract:

Faculty of Chemistry: Department of Physical Chemistry