913 results for self-adaptive
Abstract:
Most traditional software process models are static, mechanical, and passive: they require software engineers to anticipate every situation that might arise when describing a software process and to define explicit solutions to those problems in advance. When the environment of the software process changes, the process cannot adapt itself to those changes. This paper proposes an Agent-based self-adaptive software process model in which a software process is described as a set of independent, peer entities called software process Agents. These software process Agents react to changes in the process environment proactively and autonomously, dynamically determining and adjusting their behavior to achieve the goals of software development.
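The abstract gives no code; as a rough illustration of the idea only, here is a minimal sketch of peer process agents that observe their environment and adjust their own plans. All names (ProcessAgent, Environment, the event strings) are hypothetical, not from the paper.

```python
# Minimal sketch (illustrative only): a software process described as peer agents
# that observe their environment and adapt their own behaviour.

class Environment:
    def __init__(self):
        self.events = []          # e.g. "requirement_changed", "resource_lost"

class ProcessAgent:
    def __init__(self, name, goal):
        self.name = name
        self.goal = goal
        self.plan = ["design", "implement", "test"]

    def perceive(self, env):
        return list(env.events)   # local view of the process environment

    def adapt(self, events):
        # Autonomously change behaviour instead of following a fixed script.
        if "requirement_changed" in events and "re-design" not in self.plan:
            self.plan.insert(0, "re-design")

    def act(self):
        if self.plan:
            step = self.plan.pop(0)
            print(f"{self.name} performs '{step}' toward goal '{self.goal}'")

env = Environment()
agents = [ProcessAgent("coder", "release v1"), ProcessAgent("tester", "release v1")]
env.events.append("requirement_changed")
for agent in agents:
    agent.adapt(agent.perceive(env))
    agent.act()
```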
Abstract:
This paper analyzes the effect of mutation operators on particle swarm optimization (PSO). To address PSO's slow convergence and its tendency to get trapped in local minima, and drawing on the habit of biological species that split up and migrate when their population density becomes too high, an adaptive escape particle swarm optimization algorithm is presented and proven to converge in probability to the global optimum. The escape behavior in the algorithm is a simplified deterministic mutation operator: when a particle's velocity becomes too small, an escape movement lets the particle carry out effective global and local search while reducing the instability introduced by random mutation. Simulation results on typical complex benchmark functions show that the algorithm not only converges faster but also performs a more effective global search.
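A minimal sketch of the escape idea as read from the abstract: a standard PSO loop in which velocity components that have nearly stalled are re-scattered. The threshold, inertia and acceleration constants, and the sphere benchmark are my own assumptions, not the paper's.

```python
import numpy as np

def sphere(x):
    return np.sum(x ** 2)          # simple benchmark objective

def escape_pso(f, dim=10, n=30, iters=200, v_min=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))
    v = rng.uniform(-1, 1, (n, dim))
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = 0.72 * v + 1.49 * r1 * (pbest - x) + 1.49 * r2 * (gbest - x)
        # escape: re-scatter velocity components that have nearly stalled
        stalled = np.abs(v) < v_min
        v[stalled] = rng.uniform(-1, 1, stalled.sum())
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, f(gbest)

best, best_val = escape_pso(sphere)
print(best_val)
```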
Abstract:
A fuzzy controller optimized by particle swarm optimization (PSO) is proposed, in which the fuzzy controller's parameters are globally optimized to compensate for the shortcomings of on-line parameter tuning, and the controller is applied to the control of a ball mill grinding system. The control system uses the PSO-optimized fuzzy controller as the product flow controller in a double closed-loop scheme, and on-line tuning of the fuzzy controller parameters is realized in a Matlab/Simulink simulation study. Simulation results show that the system tracks the given reference trajectory adaptively, with strong robustness and high control accuracy.
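A hedged sketch of the tuning idea only: a generic PSO loop searching for two controller gains against a toy first-order plant. The PI-style controller stands in for the fuzzy controller, and the plant, cost function, and bounds are my own stand-ins, not the ball-mill model from the abstract.

```python
import numpy as np

def simulate(gains, steps=200, dt=0.05, setpoint=1.0):
    kp, ki = gains
    y, integral, cost = 0.0, 0.0, 0.0
    for _ in range(steps):
        e = setpoint - y
        integral += e * dt
        u = kp * e + ki * integral          # controller output
        y += dt * (-y + u)                  # first-order plant: dy/dt = -y + u
        cost += abs(e) * dt                 # integral of absolute error
    return cost

rng = np.random.default_rng(1)
pos = rng.uniform(0.1, 5.0, (20, 2))        # swarm of candidate (kp, ki) pairs
vel = np.zeros_like(pos)
pbest, pbest_c = pos.copy(), np.array([simulate(p) for p in pos])
gbest = pbest[np.argmin(pbest_c)].copy()
for _ in range(50):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.01, 10.0)
    costs = np.array([simulate(p) for p in pos])
    better = costs < pbest_c
    pbest[better], pbest_c[better] = pos[better], costs[better]
    gbest = pbest[np.argmin(pbest_c)].copy()
print("tuned gains:", gbest, "cost:", simulate(gbest))
```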
Abstract:
A fast image processing method is proposed for multi-target recognition in real-time image sequences. Guided by prior knowledge and a set of rules, the method partitions a complex-background image into windows and, independently within each window, applies adaptive fast median filtering, adaptive requantization based on local grey-level information, and maximum-entropy segmentation, thereby extracting and recognizing the designated targets in the panoramic field of view quickly and accurately. The method provides a new image processing approach for the visual localization, navigation, and target tracking of mobile robots operating among multiple targets in dynamic environments.
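A sketch of two of the steps named above, run on a synthetic image: per-window median filtering followed by Kapur-style maximum-entropy thresholding. The window size, synthetic scene, and filter size are assumptions of mine, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import median_filter

def max_entropy_threshold(gray):
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    cdf = np.cumsum(p)
    best_t, best_h = 0, -np.inf
    for t in range(1, 255):
        w0, w1 = cdf[t], 1.0 - cdf[t]
        if w0 <= 0 or w1 <= 0:
            continue
        p0, p1 = p[: t + 1] / w0, p[t + 1 :] / w1
        h0 = -np.sum(p0[p0 > 0] * np.log(p0[p0 > 0]))   # background entropy
        h1 = -np.sum(p1[p1 > 0] * np.log(p1[p1 > 0]))   # foreground entropy
        if h0 + h1 > best_h:
            best_h, best_t = h0 + h1, t
    return best_t

rng = np.random.default_rng(0)
img = rng.integers(0, 80, (128, 128)).astype(float)     # dark background
img[40:70, 50:90] += 120                                 # bright "target"
win = img[32:96, 32:96]                                   # one window of the scene
filtered = median_filter(win, size=3)
t = max_entropy_threshold(filtered)
mask = filtered > t
print("threshold:", t, "target pixels:", int(mask.sum()))
```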
Abstract:
An autonomous underwater vehicle (AUV) is a nonlinear, strongly coupled, high-inertia, multi-input multi-output system, and uncertainties from ocean currents, sensors, and actuators place high demands on the robustness of its controller. Considering the characteristics of the long-range AUV currently being developed in China and its navigation-control requirements, this paper combines PID control with the simplicity, flexibility, and robustness of fuzzy control and designs an adaptive fuzzy PID controller whose PID parameters can be modified on-line. Simulation results show that this control method not only improves the dynamic behavior of the AUV system but also maintains good control performance under parameter perturbations and external disturbances.
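A minimal sketch of the general idea, not the paper's controller: a PID loop whose gains are nudged on-line by coarse rules on the error and its rate of change, standing in for a fuzzy rule base. The plant model, initial gains, and rules are assumptions.

```python
class AdaptiveFuzzyPID:
    def __init__(self, kp=2.0, ki=0.5, kd=0.1, dt=0.05):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_e = 0.0, 0.0

    def _adjust(self, e, de):
        # Crude stand-in for a fuzzy rule base: large error -> stronger P,
        # fast-changing error -> stronger D, small error -> let I settle it.
        if abs(e) > 0.5:
            self.kp = min(self.kp * 1.02, 10.0)
        elif abs(de) > 1.0:
            self.kd = min(self.kd * 1.05, 2.0)
        else:
            self.ki = min(self.ki * 1.01, 3.0)

    def step(self, setpoint, measurement):
        e = setpoint - measurement
        de = (e - self.prev_e) / self.dt
        self._adjust(e, de)
        self.integral += e * self.dt
        self.prev_e = e
        return self.kp * e + self.ki * self.integral + self.kd * de

# toy first-order "depth" plant to exercise the controller
ctrl, depth, dt = AdaptiveFuzzyPID(), 0.0, 0.05
for k in range(400):
    u = ctrl.step(setpoint=5.0, measurement=depth)
    depth += dt * (-0.2 * depth + 0.3 * u)      # simplified vehicle dynamics
print("final depth:", round(depth, 3))
```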
Abstract:
A seismic signal is a typical non-stationary signal whose frequency changes continuously with time and is determined by the bandwidth of the seismic source and the absorption characteristics of the subsurface media. A central goal in processing and interpreting seismic signals is to detect abrupt local changes of frequency with time, since such changes indicate changes in the physical properties of the subsurface media. For instantaneous attributes derived in the time-frequency domain, the key task is to find an effective, non-negative, and fast time-frequency distribution, transform the seismic signal into that domain to obtain its instantaneous power spectral density, and then apply weighted summation, averaging, and similar operations to derive the instantaneous attributes of the signal. Time-frequency analysis, a powerful tool for time-varying non-stationary signals, has become an active research area in modern signal processing and an important method for seismic attribute analysis. It provides a joint time-frequency distribution and clearly shows how a signal's frequency content evolves with time. Spectral decomposition pushes the resolution of seismic data toward its theoretical limit, and by scanning all frequencies and imaging three-dimensional seismic data in the frequency domain it improves the ability of seismic data to resolve geological anomalies. The matching pursuit method is an important way to realize adaptive signal decomposition. Its main idea is that any signal can be expressed as a linear combination of time-frequency atoms: by decomposing the signal over an overcomplete dictionary, the atoms that best represent the signal are selected flexibly and adaptively according to the signal's characteristics. The method has excellent sparse decomposition properties, is widely used in signal denoising, coding, and pattern recognition, and is well suited to seismic signal decomposition and attribute analysis. This thesis takes the matching pursuit method as its central research object. After systematically introducing the principle and implementation of matching pursuit, it studies in depth the key problems of atom type selection, discretization of the atom dictionary, and the search algorithm for the best-matching atom, and applies matching pursuit to seismic processing by extracting instantaneous information through time-frequency analysis and spectral decomposition. Based on research into the theory of adaptive signal decomposition by matching pursuit and related model tests, the thesis proposes a fast search algorithm for the optimal matching time-frequency atom, a frequency-dominated pursuit tailored to seismic signal decomposition, which makes matching pursuit better suited to seismic processing. Building on the fast search-and-match algorithm for the optimal Gabor atom, it also proposes global search strategies based on simulated annealing, genetic algorithms, and a combination of the two, providing another route to a fast matching pursuit implementation.
At the same time, to exploit the characteristics of seismic signals, the thesis proposes a fast atom-search algorithm that locates the maximum-energy points of the complex seismic trace and searches for the optimal atom in the neighborhood of these points according to the instantaneous frequency and instantaneous phase, which improves the computational efficiency of matching pursuit for seismic signals. The proposed methods are implemented in software, compared with published algorithms, and validated on real signals, confirming the conclusions of the thesis and the effectiveness of the various methods. Remaining problems and future research directions include continuing to seek more efficient fast matching pursuit algorithms, extending their range of application, and studying the practical use of the matching pursuit method.
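To make the decomposition idea concrete, here is a compact matching pursuit sketch over a small Gabor-style dictionary. The dictionary parameters and the test signal are my own choices; this is a textbook greedy pursuit, not the thesis's fast frequency-dominated algorithm.

```python
import numpy as np

def gabor_atom(n, center, freq, width):
    t = np.arange(n)
    g = np.exp(-0.5 * ((t - center) / width) ** 2) * np.cos(2 * np.pi * freq * t)
    return g / np.linalg.norm(g)

def build_dictionary(n):
    atoms = []
    for center in range(0, n, 16):
        for freq in (0.02, 0.05, 0.1, 0.2):
            for width in (8, 16, 32):
                atoms.append(gabor_atom(n, center, freq, width))
    return np.array(atoms)          # shape: (num_atoms, n)

def matching_pursuit(signal, dictionary, n_iter=10):
    residual = signal.copy()
    decomposition = []
    for _ in range(n_iter):
        corr = dictionary @ residual             # inner products with all atoms
        k = np.argmax(np.abs(corr))              # best-matching atom
        coef = corr[k]
        residual = residual - coef * dictionary[k]
        decomposition.append((k, coef))
    return decomposition, residual

n = 256
sig = gabor_atom(n, 64, 0.05, 16) * 3.0 + gabor_atom(n, 192, 0.2, 8) * 1.5
D = build_dictionary(n)
decomp, res = matching_pursuit(sig, D, n_iter=5)
print("residual energy:", round(float(np.sum(res ** 2)), 4))
```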
Abstract:
As an important means of understanding oil and gas accumulation during petroleum exploration and development, a petroleum geological model is an integrated system of theories and methods, drawing on sedimentology, reservoir geology, structural geology, petroleum geology, and other geological disciplines, used to describe or predict the distribution of oil and gas. Progressive exploration and development is commonly adopted in China's terrestrial sedimentary basins because hydrocarbon generation, accumulation, and exploitation there are highly complex, so petroleum geological models adapted to the different stages of progressive exploration and development must be established. At present, however, there is no integrated system of theories and methods for petroleum geological models suited to the different exploration and development stages, because the existing models overlap and each emphasizes different aspects. Given the characteristics of exploration and development of the Triassic oil and gas pools in the Lunnan area, Tarim Basin, the Lunnan horst belt was chosen as the main study object of this thesis. Building on a study of the petroleum geological model system, petroleum geological models for the different exploration and development stages are established and applied to predict the distribution of oil and gas. The main results are as follows. (1) Hydrocarbon generation-accumulation and exploration-development are treated as one integrated system evolving through time, so petroleum exploration and development are closely combined. Guided by the philosophical view that the world is knowable, the author holds that any kind of petroleum geological model can be used to predict and guide exploration and development practice, and that no single kind of model can be regarded as the sole model for guiding petroleum exploration and development everywhere. Based on the differences in scope and detail of the research carried out at the various stages of exploration and development, a classification system for petroleum geological models is established, which provides a theoretical basis for progressive petroleum exploration and development. (2) A petroleum geological model was established from detailed studies of the Triassic stratigraphy, structure, sedimentology, and reservoir rocks in the Lunnan area, northern Tarim Basin; hydrocarbon accumulation sub-belts in the Lunnan area were delineated and the dominant factors controlling the oil and gas distribution were identified. (3) Geological models for the Lunnan and Jiefangqudong oil fields were rebuilt by integrating seismology with geology, exploration with development, and dynamic with static data, thereby identifying the distribution of potential oil and gas accumulation zones. Oil and gas accumulations were treated as the key unit in progressive exploration and development, and the Lunnan Triassic pools were classified accordingly. A petroleum geological model was created through fine 3D seismic interpretation and detailed description of the reservoir rocks and the oil and gas distribution, especially for the LN3 and LN26 well areas.
The possible distribution of Triassic oil traps and their effectiveness in the Lunnan area were forecast, and a quantitative analysis of the original oil (water) saturation in the oil pools was carried out. (4) The concept of the oil cell is proposed here for the first time. An oil cell is a relatively oil-rich zone within an oil pool formed by differences in fluid flow during the middle stage of reservoir development; a classification of oil cells is also given. Through physical and numerical modeling, the dominant factors controlling the formation of the various types of oil cells are analyzed. Oil cells are regarded as the most important remaining hydrocarbon potential zones after primary recovery and are the main targets of progressive development adjustment and improved oil recovery; the oil cells of the Triassic reservoir in the LN2 well area are analyzed as an example. (5) Classifying flow units and building a geological model of flow units, based on the prediction of inter-well reservoir parameters combined with statistical analysis of the reservoir characteristics of horizontal wells, is both important and necessary. Using self-adaptive interpolation and stochastic simulation, a geological model of flow units was built on the basis of flow-unit division and correlation, from which the distribution of residual oil in the TIII reservoir of the LN2 well area after water flooding can be established.
Abstract:
This paper describes ways in which emergence engineering principles can be applied to the development of distributed applications. A distributed solution to the graph-colouring problem is used as a vehicle to illustrate some novel techniques. Each node acts autonomously to colour itself based only on its local view of its neighbourhood, following a simple set of carefully tuned rules. Randomness breaks symmetry and thus enhances stability. The algorithm has been developed to enable self-configuration in wireless sensor networks, and to reflect real-world configurations it operates on three-dimensional topologies (reflecting the propagation of radio waves and the placement of sensors in buildings, bridge structures, etc.). The algorithm's performance is evaluated and results are presented. It is shown to be simultaneously highly stable and scalable whilst achieving low convergence times. The use of eavesdropping gives rise to low interaction complexity and high efficiency in terms of communication overheads.
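A sketch of the local rule described above, under my own assumptions: each node repeatedly looks only at its neighbours' colours and, if it clashes, picks a random colour not used by them. The toy graph, palette size (one larger than the maximum degree so a free colour always exists), and iteration cap are not from the paper.

```python
import random

def local_colouring(adjacency, num_colours, max_rounds=100, seed=0):
    rng = random.Random(seed)
    colour = {v: rng.randrange(num_colours) for v in adjacency}
    for _ in range(max_rounds):
        conflicts = False
        for v, neighbours in adjacency.items():
            taken = {colour[u] for u in neighbours}     # node's local view only
            if colour[v] in taken:
                conflicts = True
                free = [c for c in range(num_colours) if c not in taken]
                if free:
                    colour[v] = rng.choice(free)        # randomness breaks symmetry
        if not conflicts:
            break
    return colour

# small test graph: a cycle of 5 nodes plus one chord
graph = {0: [1, 4, 2], 1: [0, 2], 2: [1, 3, 0], 3: [2, 4], 4: [3, 0]}
result = local_colouring(graph, num_colours=4)
ok = all(result[v] != result[u] for v in graph for u in graph[v])
print(result, "valid:", ok)
```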
Abstract:
This paper proposes a novel image denoising technique based on the normal inverse Gaussian (NIG) density model using an extended non-negative sparse coding (NNSC) algorithm that we proposed. The algorithm converges to feature basis vectors that exhibit locality and orientation in the spatial and frequency domains. We demonstrate that the NIG density fits non-negative sparse data very well. In the denoising process, a NIG-based maximum a posteriori (MAP) estimator is applied to an image corrupted by additive Gaussian noise, and the noise is reduced successfully. This shrinkage technique, also referred to as NNSC shrinkage, is self-adaptive to the statistical properties of the image data. The denoising method is evaluated with the normalized signal-to-noise ratio (SNR). Experimental results show that NNSC shrinkage is indeed efficient and effective in denoising. We also compare NNSC shrinkage with standard sparse coding shrinkage, wavelet-based shrinkage, and the Wiener filter; the simulation results show that our method outperforms these three denoising approaches.
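A generic shrinkage sketch for orientation only: Gaussian noise is added to a sparse, non-negative coefficient vector and then shrunk back. A plain soft-threshold stands in here for the paper's NIG-based MAP estimator; the data, threshold rule, and SNR definition are my assumptions.

```python
import numpy as np

def soft_threshold(c, t):
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

rng = np.random.default_rng(3)
coeffs = np.zeros(1000)
coeffs[rng.choice(1000, 50, replace=False)] = rng.uniform(2, 6, 50)  # sparse, non-negative
sigma = 0.5
noisy = coeffs + rng.normal(0, sigma, coeffs.shape)                  # additive Gaussian noise
denoised = np.maximum(soft_threshold(noisy, 3 * sigma), 0.0)         # keep non-negativity

def snr(ref, est):
    return 10 * np.log10(np.sum(ref ** 2) / np.sum((ref - est) ** 2))

print(f"noisy SNR: {snr(coeffs, noisy):.1f} dB, denoised SNR: {snr(coeffs, denoised):.1f} dB")
```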
Abstract:
Doctoral thesis, Informatics (Informatics Engineering), Universidade de Lisboa, Faculdade de Ciências, 2014
Abstract:
Thesis carried out under joint supervision (cotutelle) between the Université de Montréal and the Université de Technologie de Troyes
Abstract:
We propose a multi-resolution approach for surface reconstruction from clouds of unorganized points representing an object surface in 3D space. The proposed method uses a set of mesh operators and simple rules for selective mesh refinement, with a strategy based on Kohonen's self-organizing map. Essentially, a self-adaptive scheme iteratively moves the vertices of an initial simple mesh toward the set of points, ideally the object boundary. Successive refinement and vertex motion lead to a more detailed surface in a multi-resolution, iterative scheme. Reconstruction was tested on several point sets, including different shapes and sizes. The results show generated meshes very close to the objects' final shapes. We include performance measurements and discuss robustness.
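A sketch of the vertex-motion step only: vertices of a coarse initial "mesh" are pulled, SOM-style, toward their nearest sample points over several iterations. The point cloud, learning-rate schedule, and cube initialization are my assumptions; selective refinement and the full mesh operators are omitted.

```python
import numpy as np

rng = np.random.default_rng(7)
# target surface: points sampled on a unit sphere
pts = rng.normal(size=(2000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)

# initial "mesh": vertices of a small cube used as a coarse approximation
verts = np.array([[x, y, z] for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)], float) * 0.3

for it in range(50):
    lr = 0.5 * (1.0 - it / 50)                       # decaying learning rate
    sample = pts[rng.choice(len(pts), 200, replace=False)]
    # for each sampled point, move the winning (closest) vertex toward it
    d = np.linalg.norm(sample[:, None, :] - verts[None, :, :], axis=2)
    winners = np.argmin(d, axis=1)
    for p, w in zip(sample, winners):
        verts[w] += lr * (p - verts[w])

radii = np.linalg.norm(verts, axis=1)
print("vertex radii after fitting:", np.round(radii, 2))   # should approach 1.0
```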
Abstract:
Image compression consists in representing an image with a small amount of data without losing visual quality. Data compression matters when large images are used, for example satellite images. Full-color digital images typically use 24 bits per pixel, with 8 bits for each of the primary components red, green, and blue (RGB). Compressing an image with three or more bands (multispectral) is essential to reduce transmission, processing, and storage time, and many applications depend on such images: medical imaging, satellite imaging, sensing, and so on. This work proposes a new method for compressing color images based on the amount of information in each band. The technique, called Self-Adaptive Compression (SAC), compresses each band of the image with a different threshold so as to preserve information and obtain better results: SAC applies strong compression to highly redundant bands, that is, those carrying less information, and mild compression to bands carrying more information. Two image transforms are used: the Discrete Cosine Transform (DCT) and Principal Component Analysis (PCA). The first step is to convert the data into decorrelated bands with PCA; the DCT is then applied to each band. Loss is introduced when a threshold discards coefficients; this threshold is computed from two elements, the PCA result and a user parameter that defines the compression rate. The system produces three different thresholds, one for each band of the image, proportional to the amount of information in that band. For image reconstruction, the inverse DCT and inverse PCA are applied. SAC was compared with the JPEG (Joint Photographic Experts Group) standard and with YIQ compression and achieved better results in MSE (mean squared error). Tests showed that SAC gives better quality under strong compression, with two advantages: (a) being adaptive, it is sensitive to the image type, that is, it gives good results for diverse kinds of images (synthetic images, landscapes, people, etc.), and (b) it needs only one user parameter, so very little human intervention is required.
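A rough sketch of the pipeline as described: PCA to decorrelate the RGB bands, a DCT per band, per-band thresholds scaled by how much information (variance) each band carries, then the inverse transforms. The thresholding rule, the single user parameter, and the test image are my own assumptions, not the paper's exact SAC formulation.

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(5)
h, w = 64, 64
img = rng.random((h, w, 3))                       # stand-in RGB image in [0, 1]

# PCA over the three colour bands
flat = img.reshape(-1, 3)
mean = flat.mean(axis=0)
cov = np.cov((flat - mean).T)
eigvals, eigvecs = np.linalg.eigh(cov)            # ascending eigenvalues
pca_bands = ((flat - mean) @ eigvecs).reshape(h, w, 3)

user_rate = 0.05                                  # single user parameter (assumed)
recon_bands = np.empty_like(pca_bands)
for b in range(3):
    coeffs = dctn(pca_bands[:, :, b], norm="ortho")
    # band with less information (smaller eigenvalue) gets a harsher threshold
    thresh = user_rate * np.max(np.abs(coeffs)) * (eigvals[-1] / (eigvals[b] + 1e-12))
    kept = np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)
    recon_bands[:, :, b] = idctn(kept, norm="ortho")

recon = (recon_bands.reshape(-1, 3) @ eigvecs.T + mean).reshape(h, w, 3)
mse = float(np.mean((img - recon) ** 2))
print("MSE after self-adaptive thresholding:", round(mse, 6))
```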