863 results for Adaptive Information Dispersal Algorithm
Abstract:
This study proposes an optimized design approach in which a finite element model of a specially shaped composite tank for spacecraft is built. The composite layers are preliminarily designed by combining the quasi-network design method with numerical simulation, which determines the ratio between the layer angles and thicknesses as the initial value of the optimization. An adaptive simulated annealing algorithm is then used to optimize the angles and the number of layers at each angle so as to minimize the structural weight. On this basis, the stacking sequence of the composite layers is formulated from the number of layers in the optimized structure by applying the enumeration method together with the general design parameters. Finally, numerical simulation is used to calculate the buckling limit of tanks obtained with the different design methods. The study takes a composite tank with a cone-shaped cylinder body as an example, with the ellipsoid head section and the outer wall plate selected as the objects used to validate the method. The results show that the quasi-network design method improves the quality of the preliminary composite layup for tanks with complex loading conditions. The adaptive simulated annealing algorithm reduces the initial design weight by 30%, effectively probing for the global optimum while minimizing the structural weight. It is therefore demonstrated that this optimization method is capable of designing and optimizing specially shaped composite tanks under complex loading conditions.
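To make the optimization step concrete, a minimal sketch of simulated annealing over a laminate layup is given below; the candidate angles, ply thickness, constraint penalty, and cooling schedule are assumed placeholders, not the FEA-coupled objective used in the study.

import math
import random

# Hypothetical sketch: adaptive simulated annealing over ply counts per angle.
# The paper's FEA-based buckling/strength constraint is replaced by a placeholder penalty.
CANDIDATE_ANGLES = [0, 45, -45, 90]          # assumed allowable ply angles (deg)
PLY_THICKNESS = 0.125                        # assumed ply thickness, mm

def weight(counts):
    """Structural weight proxy: total laminate thickness."""
    return PLY_THICKNESS * sum(counts)

def penalty(counts):
    """Placeholder for the buckling/strength constraint violation."""
    min_plies = 16                           # assumed minimum total ply count
    return max(0, min_plies - sum(counts)) * 10.0

def cost(counts):
    return weight(counts) + penalty(counts)

def neighbour(counts):
    """Perturb the ply count at one randomly chosen angle by +/-1."""
    new = list(counts)
    i = random.randrange(len(new))
    new[i] = max(0, new[i] + random.choice((-1, 1)))
    return new

def anneal(initial_counts, t0=10.0, cooling=0.95, steps=2000):
    current, best = list(initial_counts), list(initial_counts)
    t = t0
    for _ in range(steps):
        cand = neighbour(current)
        delta = cost(cand) - cost(current)
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = cand
            if cost(current) < cost(best):
                best = list(current)
        t *= cooling                         # geometric cooling schedule
    return best

# Initial design from the preliminary (quasi-network) layup, e.g. 8 plies per angle.
print(anneal([8, 8, 8, 8]))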
Abstract:
Savitzky-Golay (S-G) filters are finite impulse response lowpass filters obtained while smoothing data using a local least-squares (LS) polynomial approximation. Savitzky and Golay showed in their landmark paper that local LS fitting of polynomials and their evaluation at the mid-point of the approximation interval is equivalent to filtering with a fixed impulse response. The problem we address here is how to choose a pointwise minimum mean squared error (MMSE) S-G filter length or order for smoothing while preserving the temporal structure of a time-varying signal. We solve the bias-variance tradeoff involved in the MMSE optimization using Stein's unbiased risk estimator (SURE). We observe that the 3-dB cutoff frequency of the SURE-optimal S-G filter is higher where the signal varies fast locally, and vice versa, essentially enabling us to suitably trade off bias and variance and thereby achieve near-MMSE performance. At low signal-to-noise ratios (SNRs), the performance of the adaptive filter-length algorithm improves when a regularization term is incorporated in the SURE objective function. We evaluate the algorithms on real-world electrocardiogram (ECG) signals, and the results exhibit considerable SNR improvement. Noise performance analysis shows that the proposed algorithms are comparable to, and in some cases better than, standard denoising techniques available in the literature.
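A minimal sketch of pointwise SURE-based length selection is given below, assuming a known noise variance and a small set of candidate window lengths; the regularized variant mentioned in the abstract, and any per-sample risk smoothing the authors may apply, are omitted.

import numpy as np
from scipy.signal import savgol_filter, savgol_coeffs

def sure_adaptive_sg(y, sigma2, lengths=(5, 9, 15, 25, 41), order=3):
    """Pick, at each sample, the S-G window length with the smallest pointwise SURE."""
    n = len(y)
    risks = np.empty((len(lengths), n))
    smoothed = np.empty((len(lengths), n))
    for i, L in enumerate(lengths):
        yhat = savgol_filter(y, window_length=L, polyorder=order)
        h0 = savgol_coeffs(L, order)[L // 2]       # centre tap of the S-G impulse response
        # Pointwise SURE for a linear smoother: (y - yhat)^2 + 2*sigma^2*h0 - sigma^2
        risks[i] = (y - yhat) ** 2 + 2 * sigma2 * h0 - sigma2
        smoothed[i] = yhat
    best = np.argmin(risks, axis=0)                # SURE-optimal length index at each sample
    return smoothed[best, np.arange(n)], np.asarray(lengths)[best]

# Example: noisy chirp; the chosen length should shrink where the signal varies fast.
t = np.linspace(0, 1, 1000)
clean = np.sin(2 * np.pi * (2 + 8 * t) * t)
noisy = clean + 0.2 * np.random.randn(t.size)
denoised, chosen_len = sure_adaptive_sg(noisy, sigma2=0.04)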
Abstract:
The current study presents an algorithm to retrieve surface Soil Moisture (SM) from multi-temporal Synthetic Aperture Radar (SAR) data. The developed algorithm is based on a Cumulative Distribution Function (CDF) transformation of the multi-temporal RADARSAT-2 backscatter coefficient (BC) to obtain relative SM values, which are then converted into absolute SM values using soil information. The algorithm is tested in a semi-arid tropical region in South India using 30 RADARSAT-2 satellite images, SMOS L2 SM products, and 1262 SM field measurements in 50 plots spanning 4 years. Validation with the field data showed the ability of the developed algorithm to retrieve SM with an RMSE ranging from 0.02 to 0.06 m³/m³ for the majority of plots. Comparison with the SMOS SM showed good temporal agreement, with an RMSE of approximately 0.05 m³/m³ and a correlation coefficient of approximately 0.9. The developed model is compared with the change detection and delta index models and found to perform better. The approach does not require calibration of any parameter to obtain relative SM and hence can easily be extended to any region with a time series of SAR data available.
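As a rough illustration of the idea, the sketch below maps the temporal CDF (ranks) of the backscatter series at a pixel to a relative SM index and rescales it with assumed soil bounds (residual and saturated moisture); the exact transformation and soil information used in the paper may differ.

import numpy as np

def relative_sm_from_backscatter(bc_series_db):
    """Empirical CDF of the multi-temporal backscatter -> relative SM in [0, 1]."""
    ranks = np.argsort(np.argsort(bc_series_db))
    return ranks / (len(bc_series_db) - 1)

def absolute_sm(relative_sm, sm_residual, sm_saturation):
    """Rescale relative SM to absolute values (m^3/m^3) using soil information."""
    return sm_residual + relative_sm * (sm_saturation - sm_residual)

# Example for one pixel over 30 acquisitions (values are illustrative, not study data).
bc = np.random.uniform(-14.0, -6.0, size=30)        # backscatter coefficient, dB
sm = absolute_sm(relative_sm_from_backscatter(bc), sm_residual=0.05, sm_saturation=0.42)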
Abstract:
This thesis presents a novel framework for state estimation in the context of robotic grasping and manipulation. The overall estimation approach is based on fusing various visual cues for manipulator tracking, namely appearance and feature-based, shape-based, and silhouette-based visual cues. Similarly, a framework is developed to fuse the above visual cues, but also kinesthetic cues such as force-torque and tactile measurements, for in-hand object pose estimation. The cues are extracted from multiple sensor modalities and are fused in a variety of Kalman filters.
A hybrid estimator is developed to estimate both a continuous state (robot and object states) and discrete states, called contact modes, which specify how each finger contacts a particular object surface. A static multiple-model estimator is used to compute and maintain the mode probabilities. The thesis also develops an estimation framework for estimating model parameters associated with object grasping. Dual and joint state-parameter estimation is explored for estimating a grasped object's mass and center of mass. Experimental results demonstrate simultaneous object localization and center of mass estimation.
Dual-arm estimation is developed for two-arm robotic manipulation tasks. Two types of filters are explored: the first is an augmented filter that contains both arms in the state vector, while the second runs two filters in parallel, one for each arm. These two frameworks and their performance are compared in a dual-arm task of removing a wheel from a hub.
This thesis also presents a new method for action selection involving touch. This next best touch method selects, from the available actions for interacting with an object, the one expected to gain the most information. The algorithm employs information theory to compute an information gain metric based on a probabilistic belief suitable for the task. An estimation framework is used to maintain this belief over time. Kinesthetic measurements such as contact and tactile measurements are used to update the state belief after every interactive action. Simulation and experimental results are demonstrated using next best touch for object localization, specifically of a door handle on a door. The next best touch theory is then extended to model parameter determination. Since many objects within a particular object category share the same rough shape, principal component analysis may be used to parametrize the object mesh models. These parameters can be estimated using the action selection technique, which selects the touch action that best both localizes the object and estimates the parameters. Simulation results are then presented involving localizing a screwdriver and determining one of its parameters.
Lastly, the next best touch theory is further extended to model classes. Instead of estimating parameters, object class determination is incorporated into the information gain metric calculation. The best touching action is selected in order to best discern between the possible model classes. Simulation results are presented to validate the theory.
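A toy sketch of the information-gain criterion behind next best touch is given below; the discrete belief, binary contact outcome, and measurement models are hypothetical stand-ins for the estimation framework developed in the thesis.

import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def expected_information_gain(belief, likelihoods):
    """likelihoods[z, x] = P(measurement z | state x) for one candidate touch action."""
    h_prior = entropy(belief)
    p_z = likelihoods @ belief                          # predictive measurement distribution
    gain = 0.0
    for z, pz in enumerate(p_z):
        if pz > 0:
            posterior = likelihoods[z] * belief / pz    # Bayes update for outcome z
            gain += pz * (h_prior - entropy(posterior))
    return gain

def next_best_touch(belief, action_models):
    gains = [expected_information_gain(belief, lik) for lik in action_models]
    return int(np.argmax(gains)), gains

# Example: 4 candidate object poses, 2 candidate touch actions, binary contact outcome.
belief = np.array([0.25, 0.25, 0.25, 0.25])
action_models = [
    np.array([[0.9, 0.9, 0.1, 0.1],                     # P(contact | pose) for action 0
              [0.1, 0.1, 0.9, 0.9]]),                   # P(no contact | pose)
    np.array([[0.6, 0.4, 0.6, 0.4],
              [0.4, 0.6, 0.4, 0.6]]),
]
best_action, gains = next_best_touch(belief, action_models)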
Abstract:
Most secure operating systems that support the POSIX capability mechanism have proposed their own capability inheritance algorithms, but these algorithms apply only to specific least-privilege control policies and suffer from problems such as semantic conflicts and unclear security goals, so they cannot effectively support multiple privilege policies with differing security requirements. Based on an in-depth analysis of several existing algorithms, a new capability inheritance algorithm is proposed that introduces policy-associated capability control variables and trusted-application attributes. Case analysis shows that the algorithm is policy-adaptive and usable, and formal analysis and verification show that it enables the system to satisfy the basic security theorem of the privilege policy.
Abstract:
Based on singular value decomposition (SVD) and the principle of energy minimization, an adaptive image denoising algorithm is proposed, and an algebraic form of the bounded-variation-based energy denoising model is given. By minimizing the energy in the sense of a matrix norm, the number of singular values used to reconstruct the denoised image is determined adaptively. The distinguishing feature of the algorithm is that it combines the energy minimization principle with SVD to establish an adaptive image denoising algorithm in an algebraic space. Compared with denoising methods based on compression ratio and SVD, the algorithm avoids computing the image compression ratio function and its inflection point, so it is fast, simple, and practical. Experimental results show that the algorithm is effective.
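A minimal sketch of the general scheme is given below: truncated-SVD reconstructions are scored by an energy functional and the rank is chosen adaptively; the fidelity-plus-total-variation surrogate and its weight are assumptions, not the paper's exact bounded-variation energy.

import numpy as np

def tv(img):
    """Discrete total variation (bounded-variation surrogate)."""
    return np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum()

def svd_denoise(noisy, lam=0.05):
    """Keep the rank k whose reconstruction minimizes fidelity + lam * TV."""
    u, s, vt = np.linalg.svd(noisy, full_matrices=False)
    best_k, best_energy, best_img = None, np.inf, None
    for k in range(1, len(s) + 1):
        rec = (u[:, :k] * s[:k]) @ vt[:k]               # rank-k reconstruction
        energy = np.linalg.norm(noisy - rec, 'fro') ** 2 + lam * tv(rec)
        if energy < best_energy:
            best_k, best_energy, best_img = k, energy, rec
    return best_img, best_k

# Example on a synthetic noisy image.
clean = np.outer(np.sin(np.linspace(0, 3, 128)), np.cos(np.linspace(0, 3, 128)))
noisy = clean + 0.1 * np.random.randn(128, 128)
denoised, k = svd_denoise(noisy)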
Abstract:
In nucleic acid amplification instruments, the gene-chip nucleic acid amplification process requires fast, high-precision temperature tracking control, which is difficult to achieve with conventional temperature control schemes and algorithms. By combining a fuzzy inference system with conventional PID control, a fuzzy self-tuning PID control algorithm is adopted to achieve fast temperature tracking control. Experimental results show that the fuzzy self-tuning PID algorithm is more robust than conventional PID control, can overcome the effect of the time-varying thermal-inertia parameters of the controlled plant, reduces the maximum overshoot of the output temperature, and improves the steady-state accuracy.
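The sketch below illustrates the control structure only: a PID loop whose gains are re-tuned each step from the error and error rate. The two crude rules, membership thresholds, base gains, and thermal plant are assumptions standing in for a full fuzzy inference system.

class FuzzySelfTuningPID:
    def __init__(self, kp=2.0, ki=0.1, kd=0.5, dt=0.1):
        self.kp0, self.ki0, self.kd0, self.dt = kp, ki, kd, dt
        self.integral, self.prev_error = 0.0, 0.0

    def _tune(self, e, de):
        """Toy 'fuzzy' rules: large error -> boost Kp, damp Ki; large error rate -> boost Kd."""
        big_e = min(abs(e) / 5.0, 1.0)          # degree of membership in 'error is big'
        big_de = min(abs(de) / 2.0, 1.0)        # degree of membership in 'error rate is big'
        kp = self.kp0 * (1.0 + 0.5 * big_e)
        ki = self.ki0 * (1.0 - 0.8 * big_e)
        kd = self.kd0 * (1.0 + 0.5 * big_de)
        return kp, ki, kd

    def update(self, setpoint, measurement):
        e = setpoint - measurement
        de = (e - self.prev_error) / self.dt
        kp, ki, kd = self._tune(e, de)
        self.integral += e * self.dt
        self.prev_error = e
        return kp * e + ki * self.integral + kd * de

# Example: track a 95 degC denaturation setpoint with a crude first-order thermal plant.
pid, temp = FuzzySelfTuningPID(), 25.0
for _ in range(300):
    power = pid.update(95.0, temp)
    temp += 0.1 * (power - 0.05 * (temp - 25.0))   # heater input minus heat loss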
Abstract:
We present a new algorithm for exactly solving decision making problems represented as influence diagrams. We do not require the usual assumptions of no forgetting and regularity; this allows us to solve problems with simultaneous decisions and limited information. The algorithm is empirically shown to outperform a state-of-the-art algorithm on randomly generated problems of up to 150 variables and 10^64 solutions. We show that these problems are NP-hard even if the underlying graph structure of the problem has low treewidth and the variables take on a bounded number of states, and that they admit no provably good approximation if variables can take on an arbitrary number of states.
Abstract:
We present a new algorithm for exactly solving decision making problems represented as influence diagrams. We do not require the usual assumptions of no forgetting and regularity; this allows us to solve problems with simultaneous decisions and limited information. The algorithm is empirically shown to outperform a state-of-the-art algorithm on randomly generated problems of up to 150 variables and 10^64 solutions. We show that the problem is NP-hard even if the underlying graph structure of the problem has small treewidth and the variables take on a bounded number of states, but that a fully polynomial time approximation scheme exists for these cases. Moreover, we show that the bound on the number of states is a necessary condition for any efficient approximation scheme.
Abstract:
A multivariable hyperstable robust adaptive decoupling control algorithm based on a neural network is presented for the control of nonlinear multivariable coupled systems with unknown parameters and structure. The Popov theorem is used in the design of the controller. The modelling errors, coupling action and other uncertainties of the system are identified on-line by a neural network. The identified results are taken as compensation signals such that the robust adaptive control of nonlinear systems is realised. Simulation results are given.
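A compact sketch of the compensation idea is given below: a small network is trained on-line to predict the lumped modelling error and coupling, and its output is subtracted from the nominal control signal. The network size, learning rate, and nominal controller are assumptions, not the paper's design.

import numpy as np

class OnlineCompensator:
    """Single-hidden-layer network trained by on-line gradient descent."""
    def __init__(self, n_in, n_out, n_hidden=10, lr=0.01):
        rng = np.random.default_rng(0)
        self.W1 = 0.1 * rng.standard_normal((n_hidden, n_in))
        self.W2 = 0.1 * rng.standard_normal((n_out, n_hidden))
        self.lr = lr

    def predict(self, x):
        self.h = np.tanh(self.W1 @ x)
        return self.W2 @ self.h

    def train(self, x, target):
        err = self.predict(x) - target                 # target = observed model mismatch
        self.W2 -= self.lr * np.outer(err, self.h)
        self.W1 -= self.lr * np.outer((self.W2.T @ err) * (1 - self.h ** 2), x)

# Control-loop skeleton (plant and nominal decoupling controller omitted):
#   mismatch = y_measured - y_predicted_by_nominal_model
#   comp.train(state, mismatch)
#   u = u_nominal - comp.predict(state)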
Abstract:
This paper proposes a subspace-based blind adaptive channel estimation algorithm for dual-rate DS-CDMA systems, which can operate in the low-rate (LR) or high-rate (HR) mode. Simulation results show that the proposed blind adaptive algorithm performs better in the LR mode than in the HR mode, at the cost of increased computational complexity.
Abstract:
This paper demonstrates by means of joint time-frequency analysis that the acoustic noise produced by the breaking of biscuits depends on relative humidity and water activity. It also shows that the time-frequency coefficients calculated using the adaptive Gabor transform algorithm depend on the length of time a biscuit is exposed to humidity. This is a new methodology that can be used to assess the crispness of crisp foods.
Abstract:
Recent data show that the processes of forest destruction and fragment formation are advancing very rapidly in the Brazilian Amazon. Defining how these processes affect the fauna in the different Amazonian vegetation types is essential for planning policies, assessing the relative vulnerability of different biological groups to this process, and estimating the conservation value of fragmented areas. Invertebrates can be used as good indicators for this purpose, since they have great adaptive and dispersal capacity and depend directly on the environment for their survival. The use of spiders to assess the effect of forest fragmentation is recent and still little explored, even though spiders are a megadiverse group whose biology is directly related to the composition and structure of the environment in which they live. This work therefore aimed to evaluate the effects of forested-area size, degree of isolation, and distance from roads on spider communities in 15 forest islands isolated by a savanna matrix and six areas of continuous forest in the district of Alter do Chão, municipality of Santarém, western Pará state. Sampling involved an effort of 252 hours, using beating trays (entomological umbrella) and nocturnal manual collection, both with time and area control; the sampling unit was the sum of the results obtained by three collectors in each area, along 250 m transects. The protocol yielded 7751 spiders, of which 5477 were immatures and 2274 adults. After identification of the araneological material, a list of 306 species in 32 families was obtained. Spider community patterns, analysed by multidimensional scaling (MDS) using the Bray-Curtis distance, showed a separation between the areas of continuous forest and the forest islands. The response to the first ordination dimension was analysed for species with more than 10 individuals in the sample, and a direct ordination was carried out with the characteristics of the areas (distance from the forest islands to the continuous forest, and the size and shape index of the forest islands). A GLM analysis used to evaluate the effects of environmental degradation indicated significant differences for the number of trees per forested area and for distance from roads; the effect of forest fragmentation on the spider community was significant only for island size in relation to MDS axis 1. Analysis of variance (ANOVA), used to compare mean species richness, showed higher mean richness in the continuous forest areas, differing from the rarefaction curves, which indicated slightly higher richness in the forest islands. The nestedness pattern of the spider community was assessed with the Nestedness Temperature Calculator program, Nestcalc (Atmar & Patterson, 1995).
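For the ordination step, a brief sketch is given below using a Bray-Curtis dissimilarity matrix and non-metric MDS on a placeholder site-by-species table; the thesis may have used different software or MDS settings.

import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

# Placeholder site-by-species abundance matrix: 21 sites (15 islands + 6 continuous
# forest areas) by 306 species; random counts stand in for the study data.
abundance = np.random.poisson(2.0, size=(21, 306))
bray_curtis = squareform(pdist(abundance, metric='braycurtis'))

# Non-metric MDS on the precomputed Bray-Curtis matrix; axis 1 of the ordination
# would serve as the community gradient analysed in the text.
nmds = MDS(n_components=2, metric=False, dissimilarity='precomputed',
           n_init=10, random_state=0)
scores = nmds.fit_transform(bray_curtis)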