72 results for Adaptive Interface
Abstract:
The adaptive process in motor learning was examined in terms of the effects of varying amounts of constant practice performed before random practice. Participants pressed five response keys sequentially, the last one coincident with the lighting of a final visual stimulus provided by a complex coincident timing apparatus. Different visual stimulus speeds were used during the random practice. 33 children (M age = 11.6 yr.) were randomly assigned to one of three experimental groups: constant-random, constant-random 33%, and constant-random 66%. The constant-random group practiced constantly until reaching a performance stabilization criterion: three consecutive trials within 50 msec of error. The other two groups had additional constant practice of 33% and 66%, respectively, of the number of trials needed to achieve the stabilization criterion. All three groups performed 36 trials under random practice; in the adaptation phase, they practiced at a visual stimulus speed different from that adopted in the stabilization phase. Global performance measures were absolute, constant, and variable errors, and movement pattern was analyzed by relative timing and overall movement time. There was no group difference in the global performance measures or overall movement time. However, differences between the groups were observed in movement pattern, since the constant-random 66% group changed its relative timing performance in the adaptation phase.
Abstract:
This paper presents a novel adaptive control scheme, with improved convergence rate, for the equalization of harmonic disturbances such as engine noise. First, modifications for improving the convergence speed of the standard filtered-X LMS control are described. Equalization capabilities are then implemented, allowing the independent tuning of harmonics. Finally, by providing the desired order-vs.-engine-speed profiles, the pursued sound quality attributes can be achieved. The proposed control scheme is first demonstrated with a simple secondary path model and then experimentally validated with the aid of a vehicle mockup which is excited with engine noise. The engine excitation is provided by a real-time sound quality equivalent engine simulator. Stationary and transient engine excitations are used to assess the control performance. The results reveal that the proposed controller is capable of large order-level reductions (up to 30 dB) for stationary excitation, which allows a comfortable margin for equalization. The same holds for slow run-ups (> 15 s) thanks to the improved convergence rate. This margin, however, gets narrower with shorter run-ups (<= 10 s). (c) 2010 Elsevier Ltd. All rights reserved.
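The filtered-X LMS update at the core of such schemes can be sketched for a single harmonic. The sample rate, disturbance, and pure-gain secondary path below are hypothetical stand-ins, not the paper's setup:

```python
import numpy as np

# Single-harmonic filtered-X LMS sketch. The sample rate, disturbance,
# and pure-gain secondary path are hypothetical simplifications.
fs = 1000.0            # sample rate (Hz), assumed
f0 = 50.0              # engine-order frequency to cancel (Hz), assumed
n_steps = 4000
mu = 0.01              # LMS step size
sec_gain = 0.8         # secondary path modeled as a pure gain

w = np.zeros(2)        # weights for a sin/cos reference pair
errors = np.empty(n_steps)
for n in range(n_steps):
    t = n / fs
    x = np.array([np.sin(2*np.pi*f0*t), np.cos(2*np.pi*f0*t)])  # reference
    d = 1.5*np.sin(2*np.pi*f0*t + 0.3)    # disturbance (engine harmonic)
    y = w @ x                             # anti-noise before the secondary path
    e = d + sec_gain*y                    # residual at the error sensor
    w -= mu * e * (sec_gain * x)          # filtered-x update
    errors[n] = e

initial_power = np.mean(errors[:200]**2)
final_power = np.mean(errors[-200:]**2)
```

In a real ANC system the secondary path is a measured filter rather than a gain, and the sin/cos reference pair would be derived from the engine tachometer signal.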
Abstract:
This work presents a critical analysis of methodologies to evaluate the effective (or generalized) electromechanical coupling coefficient (EMCC) for structures with piezoelectric elements. First, a review of several existing methodologies to evaluate material and effective EMCC is presented. To illustrate the methodologies, a comparison is made between numerical, analytical and experimental results for two simple structures: a cantilever beam with bonded extension piezoelectric patches and a simply-supported sandwich beam with an embedded shear piezoceramic. An analysis of the electric charge cancellation effect on the effective EMCC observed in long piezoelectric patches is performed. It confirms the importance of enforcing the electrode equipotentiality condition in the finite element model. Its results also indicate that smaller (segmented) and independent piezoelectric patches could be more interesting for energy conversion efficiency. Then, parametric analyses and optimization are performed for a cantilever sandwich beam with several embedded shear piezoceramic patches. Results indicate that to fully benefit from the higher material coupling of shear piezoceramic patches, attention must be paid to the configuration design so that the shear strains in the patches are maximized. In particular, effective squared EMCC values higher than 1% were obtained by embedding nine well-spaced short piezoceramic patches in an aluminum/foam/aluminum sandwich beam.
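One widely used way to estimate the effective EMCC, among the several methodologies such reviews cover, is from the open-circuit and short-circuit natural frequencies of a mode; the frequencies below are hypothetical:

```python
def effective_emcc_squared(f_open, f_short):
    """Effective (generalized) EMCC squared from the open-circuit and
    short-circuit natural frequencies of a mode; one common definition
    among those found in the literature."""
    return (f_open**2 - f_short**2) / f_open**2

# Hypothetical modal frequencies (Hz) for a piezoelectric cantilever.
k2 = effective_emcc_squared(f_open=101.0, f_short=100.0)  # about 0.0197
```

A 1% frequency shift between electric boundary conditions thus already corresponds to an effective squared EMCC of about 2%.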
Abstract:
This paper aims to formulate and investigate the application of various nonlinear H(infinity) control methods to a free-floating space manipulator subject to parametric uncertainties and external disturbances. From a tutorial perspective, a model-based approach and adaptive procedures based on linear parametrization, neural networks and fuzzy systems are covered by this work. A comparative study is conducted based on experimental implementations performed with an actual underactuated fixed-base planar manipulator which is, following the DEM concept, dynamically equivalent to a free-floating space manipulator. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
The inclined plane test (IPT) is commonly performed to measure the interface shear strength between different materials, such as those used in cover systems of landfills. The test, when interpreted according to European test standards, provides the static interface friction angle, usually assumed for 50 mm displacement and denoted as phi(stat)(50). However, if interpreted considering the several phases of the sliding process, the test is capable of yielding more realistic information about the interface shear strength, such as differentiating interfaces which exhibit the same value of phi(stat)(50) but different behavior for displacements of less than 50 mm. In this paper, the IPT is used to evaluate the interface shear strength of some materials usually present in cover liner systems of landfills. The results of the tests were analyzed for both the static and dynamic phases of sliding, and were interpreted based on the static initial friction angle, phi(0), and the limit friction angle, phi(lim). It is shown that depending on the sliding behavior of the interfaces, phi(stat)(50), which is usually adopted as the design parameter in stability analysis, can be larger than phi(0) and phi(lim). (C) 2009 Elsevier Ltd. All rights reserved.
Abstract:
This paper presents an Adaptive Maximum Entropy (AME) approach for modeling biological species. The Maximum Entropy algorithm (MaxEnt) is one of the most widely used methods for modeling the geographical distribution of biological species. The approach presented here is an alternative to the classical algorithm. Instead of using the same set of features throughout the training, the AME approach tries to insert or to remove a single feature at each iteration. The aim is to reach convergence faster without affecting the performance of the generated models. Preliminary experiments showed an improvement in performance, both in accuracy and in execution time. Comparisons with other algorithms are beyond the scope of this paper. Several lines of research are proposed as future work.
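The insert-or-remove-one-feature idea can be sketched with a greedy loop. The least-squares score below is a stand-in for the MaxEnt objective, and all data are synthetic:

```python
import numpy as np

# Greedy insert-or-remove-one-feature loop. The least-squares score is
# a stand-in for the MaxEnt objective; the data are synthetic.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))
true_w = np.array([2.0, -1.5, 0.0, 0.0, 1.0, 0.0])   # only 3 features matter
y = X @ true_w + 0.1*rng.normal(size=200)

def score(features):
    """Negative mean squared residual of a least-squares fit restricted
    to the chosen features (higher is better)."""
    if not features:
        return -float(np.mean(y**2))
    Xf = X[:, sorted(features)]
    w, *_ = np.linalg.lstsq(Xf, y, rcond=None)
    return -float(np.mean((y - Xf @ w)**2))

active = set()
for _ in range(20):                      # each pass toggles one feature
    best, best_s = None, score(active)
    for j in range(X.shape[1]):
        cand = active ^ {j}              # insert or remove feature j
        s = score(cand)
        if s > best_s + 1e-3:            # keep only clear improvements
            best, best_s = cand, s
    if best is None:                     # converged: no toggle helps
        break
    active = best
```

The loop stops as soon as no single insertion or removal clearly improves the score, which is the convergence-speed argument made in the abstract.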
Abstract:
This paper contains a new proposal for the definition of the fundamental operation of query under the Adaptive Formalism, one capable of locating functional nuclei from descriptions of their semantics. To demonstrate the method's applicability, an implementation of the query procedure constrained to a specific class of devices is shown, and its asymptotic computational complexity is discussed.
Abstract:
This work deals with the problem of minimizing the waste of space that occurs in the rotational placement of a set of irregular two-dimensional items inside a two-dimensional container. This problem is approached with a heuristic based on Simulated Annealing (SA) with an adaptive neighborhood. The objective function is evaluated in a constructive approach, where the items are placed sequentially. The placement is governed by three different types of parameters: the sequence of placement, the rotation angle, and the translation. The rotation and the translation of the polygon are cyclic continuous parameters, while the sequence of placement defines a combinatorial problem. Thus, it is necessary to control both cyclic continuous and discrete parameters. The approaches described in the literature deal with only one type of parameter (sequence of placement or translation). In the proposed SA algorithm, the sensitivity of each continuous parameter is evaluated at each iteration, increasing the number of accepted solutions. The sensitivity of each parameter is associated with its probability distribution in the definition of the next candidate.
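A minimal sketch of SA with a per-parameter adaptive neighborhood, where each parameter's step width reacts to its acceptance history, can illustrate the idea; this is a simple proxy for the sensitivity measure described above, and the objective and constants are illustrative:

```python
import math, random

# SA with a per-parameter adaptive neighborhood: each step width grows
# on acceptance and shrinks on rejection, a simple proxy for the
# sensitivity measure. Objective and constants are illustrative.
random.seed(0)

def cost(p):
    x, y = p
    return (x - 1.0)**2 + 10.0*(y + 2.0)**2   # toy objective, minimum at (1, -2)

p = [5.0, 5.0]
c = cost(p)
sigma = [1.0, 1.0]        # per-parameter neighborhood widths
T = 10.0                  # initial temperature
for it in range(4000):
    j = it % 2                                # perturb one parameter at a time
    cand = list(p)
    cand[j] += random.gauss(0.0, sigma[j])
    cc = cost(cand)
    if cc < c or random.random() < math.exp((c - cc) / T):
        p, c = cand, cc
        sigma[j] = min(sigma[j]*1.05, 5.0)    # accepted: widen the neighborhood
    else:
        sigma[j] = max(sigma[j]*0.95, 1e-3)   # rejected: narrow it
    T *= 0.997                                # geometric cooling
```

Tying each width to its own acceptance rate lets sensitive parameters take smaller steps than insensitive ones, which is what raises the number of accepted candidates.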
Abstract:
Simulated annealing (SA) is an optimization technique that can process cost functions with arbitrary degrees of nonlinearity, discontinuity, and stochasticity. It can also handle arbitrary boundary conditions and constraints imposed on these cost functions. Here, the SA technique is applied to the problem of robot path planning. Three situations are considered: the path is represented as a polyline, as a Bezier curve, or as a spline-interpolated curve. In the proposed SA algorithm, the sensitivity of each continuous parameter is evaluated at each iteration, increasing the number of accepted solutions. The sensitivity of each parameter is associated with its probability distribution in the definition of the next candidate. (C) 2010 Elsevier Ltd. All rights reserved.
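For the Bezier representation, a candidate path is evaluated from its control points, which would be the continuous parameters perturbed by SA. This De Casteljau sketch is illustrative, not the paper's implementation:

```python
def de_casteljau(ctrl, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] given its control
    points. In an SA planner the control points would be the continuous
    parameters being perturbed."""
    pts = [list(p) for p in ctrl]
    while len(pts) > 1:
        # Repeated linear interpolation between consecutive points.
        pts = [[(1 - t)*a + t*b for a, b in zip(p, q)]
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Quadratic Bezier from (0, 0) to (2, 0) with a control point at (1, 2).
mid = de_casteljau([[0.0, 0.0], [1.0, 2.0], [2.0, 0.0]], 0.5)  # [1.0, 1.0]
```

Sampling the curve at several values of t gives the waypoints on which a collision or length cost would be evaluated.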
Abstract:
Nanomaterials have triggered excitement in both fundamental science and technological applications in several fields. However, the same characteristically high interface area that is responsible for their unique properties causes unconventional instability, often leading to local collapse during application. Thermodynamically, this can be attributed to an increased contribution of the interface to the free energy, activating phenomena such as sintering and grain growth. The lack of reliable interface energy data has restricted the development of conceptual models to allow the control of nanoparticle stability on a thermodynamic basis. Here we introduce a novel and accessible methodology to measure the interface energy of nanoparticles, exploiting the heat released during sintering to establish a quantitative relation between the solid-solid and solid-vapor interface energies. We applied this method to MgO and ZnO nanoparticles and determined that the ratio between the solid-solid and solid-vapor interface energies is 1.1 for MgO and 0.7 for ZnO. We then discuss how this ratio is responsible for a thermodynamically metastable state that may prevent the collapse of nanoparticles and, therefore, may be used as a tool to design long-term stable nanoparticles.
Abstract:
Controlling the phase stability of ZrO2 nanoparticles is of major importance in the development of new ZrO2-based nanotechnologies. Because in nanoparticles the surface accounts for a larger fraction of the total atoms, the relative phase stability can be controlled through the surface composition, which can be tuned by a surface excess of one of the components of the system. The objective of this work is to delineate a relationship between the surface excess (or solid solution) of MgO relative to ZrO2 and the polymorphic stability of (ZrO2)(1-x)-(MgO)(x) nanopowders, where 0.0 <= x <= 0.6. The nanopowders were prepared by a liquid precursor method at 500 degrees C and characterized by N-2 adsorption (BET), X-ray diffraction (XRD), X-ray photoelectron spectroscopy (XPS), and Raman spectroscopy. For pure ZrO2 samples, both tetragonal and monoclinic polymorphs were detected, as expected from the literature. For MgO molar fractions varying from 0.05 to 0.10, extensive solid solution could not be detected, and a reduction in ZrO2 surface energy, caused by the Mg surface excess detected by XPS, promoted thermodynamic stabilization of the tetragonal polymorph relative to the monoclinic. For MgO molar fractions higher than 0.10 and up to 0.40, Mg solid solution could be detected and induced cubic phase stabilization. MgO periclase was observed only at x = 0.6. A discussion based on the relationship between surface excess, surface energy, and polymorph stability is presented.
Abstract:
We propose a robust and low complexity scheme to estimate and track carrier frequency from signals traveling under low signal-to-noise ratio (SNR) conditions in highly nonstationary channels. These scenarios arise in planetary exploration missions subject to high dynamics, such as the Mars exploration rover missions. The method comprises a bank of adaptive linear predictors (ALP) supervised by a convex combiner that dynamically aggregates the individual predictors. The adaptive combination is able to outperform the best individual estimator in the set, which leads to a universal scheme for frequency estimation and tracking. A simple technique for bias compensation considerably improves the ALP performance. It is also shown that retrieval of frequency content by a fast Fourier transform (FFT)-search method, instead of only inspecting the angle of a particular root of the error predictor filter, enhances performance, particularly at very low SNR levels. Simple techniques that enforce frequency continuity further improve the overall performance. In summary, we illustrate by extensive simulations that adaptive linear prediction methods render a robust and competitive frequency tracking technique.
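The FFT-search retrieval step can be sketched as picking the peak of a zero-padded spectrum; the signal parameters below are hypothetical:

```python
import numpy as np

# FFT-search frequency retrieval: pick the peak of a zero-padded
# spectrum rather than the angle of a predictor-polynomial root.
# Signal parameters are hypothetical.
fs = 1000.0                       # sample rate (Hz)
n = 256
t = np.arange(n) / fs
rng = np.random.default_rng(2)
x = np.cos(2*np.pi*123.0*t) + 0.5*rng.normal(size=n)   # noisy tone at 123 Hz

n_fft = 4096                      # zero padding refines the frequency grid
spec = np.abs(np.fft.rfft(x, n_fft))
f_hat = np.argmax(spec) * fs / n_fft                   # peak location (Hz)
```

Searching the whole spectrum is what makes the estimate robust at very low SNR, where a single polynomial root can wander far from the true tone.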
Abstract:
In this paper, we propose an approach to the transient and steady-state analysis of the affine combination of one fast and one slow adaptive filter. The theoretical models are based on expressions for the excess mean-square error (EMSE) and cross-EMSE of the component filters, which allow their application to different combinations of algorithms, such as least mean-squares (LMS), normalized LMS (NLMS), and the constant modulus algorithm (CMA), considering white or colored inputs and stationary or nonstationary environments. Since the desired universal behavior of the combination depends on the correct estimation of the mixing parameter at every instant, its adaptation is also taken into account in the transient analysis. Furthermore, we propose normalized algorithms for the adaptation of the mixing parameter that exhibit good performance. Good agreement between analysis and simulation results is always observed.
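An affine combination of a fast and a slow LMS filter, with a gradient-adapted mixing parameter, can be sketched as follows; this is a simplified, non-normalized version, and all constants are illustrative:

```python
import numpy as np

# Affine combination of a fast and a slow LMS filter with a
# gradient-adapted mixing parameter. A simplified, non-normalized
# sketch; all constants are illustrative.
rng = np.random.default_rng(3)
n, m = 3000, 4
w_true = rng.normal(size=m)               # unknown system
w1, w2 = np.zeros(m), np.zeros(m)         # fast / slow component filters
mu1, mu2, mu_lam = 0.1, 0.005, 0.5
lam = 0.5                                  # mixing parameter
for _ in range(n):
    x = rng.normal(size=m)
    d = w_true @ x + 0.01*rng.normal()     # noisy desired signal
    y1, y2 = w1 @ x, w2 @ x
    y = lam*y1 + (1 - lam)*y2              # affine combination output
    e = d - y
    w1 += mu1*(d - y1)*x                   # independent component updates
    w2 += mu2*(d - y2)*x
    lam += mu_lam*e*(y1 - y2)              # gradient step on the mixer
    lam = min(max(lam, -0.5), 1.5)         # affine: not confined to [0, 1]

w_comb = lam*w1 + (1 - lam)*w2
err = float(np.sum((w_comb - w_true)**2))
```

Because the combination is affine rather than convex, the mixing parameter may leave [0, 1], which is what allows the combination to outperform both components in some scenarios.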
Distributed Estimation Over an Adaptive Incremental Network Based on the Affine Projection Algorithm
Abstract:
We study the problem of distributed estimation based on the affine projection algorithm (APA), which is developed from Newton's method for minimizing a cost function. The proposed solution is formulated to ameliorate the limited convergence properties of least-mean-square (LMS) type distributed adaptive filters with colored inputs. The analysis of transient and steady-state performance at each individual node within the network is developed by using a weighted spatial-temporal energy conservation relation and confirmed by computer simulations. The simulation results also verify that the proposed algorithm provides not only a faster convergence rate but also an improved steady-state performance as compared to an LMS-based scheme. In addition, the new approach attains an acceptable misadjustment performance with lower computational and memory cost, provided the number of regressor vectors and the filter length are appropriately chosen, as compared to a distributed recursive-least-squares (RLS) based method.
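A single-node APA update, reusing the last K regressors per iteration, can be sketched as follows; the parameters are illustrative and this does not reproduce the incremental network scheme itself:

```python
import numpy as np

# Single-node affine projection algorithm (APA): each update reuses the
# last K regressors. Parameters are illustrative.
rng = np.random.default_rng(4)
m, K, n = 5, 3, 2000
w_true = rng.normal(size=m)
w = np.zeros(m)
mu, eps = 0.5, 1e-3                        # step size and regularization
X = np.zeros((K, m))                       # last K regressor rows
D = np.zeros(K)                            # matching desired samples
for _ in range(n):
    x = rng.normal(size=m)
    d = w_true @ x + 0.01*rng.normal()
    X = np.vstack([x, X[:-1]])             # shift in the newest regressor
    D = np.concatenate(([d], D[:-1]))
    E = D - X @ w                          # a priori error vector
    # APA update: w += mu * X^T (X X^T + eps I)^{-1} E
    w += mu * X.T @ np.linalg.solve(X @ X.T + eps*np.eye(K), E)

err = float(np.sum((w - w_true)**2))
```

Reusing K regressors is what decorrelates colored inputs and buys the faster convergence over LMS, at the cost of a small K x K solve per update.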
Abstract:
As is well known, Hessian-based adaptive filters (such as the recursive least-squares (RLS) algorithm for supervised adaptive filtering, or the Shalvi-Weinstein algorithm (SWA) for blind equalization) converge much faster than gradient-based algorithms (such as the least-mean-squares (LMS) algorithm or the constant-modulus algorithm (CMA)). However, when the problem is tracking a time-variant filter, the issue is not so clear-cut: there are environments for which each family presents better performance. Given this, we propose the use of a convex combination of algorithms of different families to obtain an algorithm with superior tracking capability. We show the potential of this combination and provide a unified theoretical model for the steady-state excess mean-square error for convex combinations of gradient- and Hessian-based algorithms, assuming a random-walk model for the parameter variations. The proposed model is valid for algorithms of the same or different families, and for supervised (LMS and RLS) or blind (CMA and SWA) algorithms.
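The Hessian-based component in such combinations can be illustrated with a minimal exponentially weighted RLS recursion; the constants below are illustrative:

```python
import numpy as np

# Minimal exponentially weighted RLS recursion (the Hessian-based
# family). Constants are illustrative.
rng = np.random.default_rng(5)
m, n = 4, 500
w_true = rng.normal(size=m)
w = np.zeros(m)
P = 100.0*np.eye(m)                 # inverse-correlation estimate
lam = 0.99                          # forgetting factor
for _ in range(n):
    x = rng.normal(size=m)
    d = w_true @ x + 0.01*rng.normal()
    k = P @ x / (lam + x @ P @ x)   # gain vector
    e = d - w @ x                   # a priori error
    w = w + k*e
    P = (P - np.outer(k, x @ P)) / lam   # rank-one update of P

err = float(np.sum((w - w_true)**2))
```

The matrix P plays the role of an inverse Hessian estimate, which is why this family converges in far fewer iterations than a gradient step of fixed size.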