960 results for Maximum Degree Proximity algorithm (MAX-DPA)
Abstract:
A fully conserving algorithm is developed in this paper for the integration of the equations of motion in nonlinear rod dynamics. The starting point is a re-parameterization of the rotation field in terms of the so-called Rodrigues rotation vector, which results in an extremely simple update of the rotational variables. The weak form is constructed with a non-orthogonal projection corresponding to the application of the virtual power theorem. Together with an appropriate time-collocation, it ensures exact conservation of momentum and total energy in the absence of external forces. An appealing feature is that nonlinear hyperelastic materials (and not only materials with quadratic potentials) are permitted without compromising the conservation properties. Spatial discretization is performed via the finite element method and the performance of the scheme is assessed by means of several numerical simulations.
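The "extremely simple update" afforded by the Rodrigues parameterization can be illustrated numerically. The sketch below is my own, assuming the common convention alpha = 2 tan(theta/2) * axis; the paper's weak form and time-collocation scheme are not reproduced.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix such that skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def rotation_from_rodrigues(alpha):
    """Rotation tensor from the Rodrigues vector alpha = 2*tan(theta/2)*axis."""
    A = skew(alpha)
    return np.eye(3) + 4.0 / (4.0 + alpha @ alpha) * (A + 0.5 * (A @ A))

def compose_rodrigues(alpha_inc, alpha_old):
    """Rodrigues vector of the composition R(alpha_inc) @ R(alpha_old).
    Purely algebraic (no trigonometric functions), which is what makes
    this parameterization attractive for incremental rotation updates."""
    return (alpha_old + alpha_inc + 0.5 * np.cross(alpha_inc, alpha_old)) \
           / (1.0 - (alpha_inc @ alpha_old) / 4.0)
```

For example, alpha = [0, 0, 2] encodes a 90-degree rotation about z (since tan(45°) = 1), and composing two 45-degree increments about z reproduces it exactly.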
Abstract:
This paper presents both the theoretical and the experimental approaches of the development of a mathematical model to be used in multi-variable control system designs of an active suspension for a sport utility vehicle (SUV), in this case a light pickup truck. A complete seven-degree-of-freedom model is identified quickly and successfully, with very satisfactory results in simulations and in real experiments conducted with the pickup truck. The novelty of the proposed methodology is the use of commercial software in the early stages of the identification to speed up the process and to minimize the need for a large number of costly experiments. The paper also presents major contributions to the identification of uncertainties in vehicle suspension models and to the development of identification methods using sequential quadratic programming, where an innovation regarding the calculation of the objective function is proposed and implemented. Results from simulations and from practical experiments with the real SUV are presented, analysed, and compared, showing the potential of the method.
Abstract:
Although the Hertz theory is not applicable in the analysis of the indentation of elastic-plastic materials, it is common practice to incorporate the concept of indenter/specimen combined modulus to consider indenter deformation. This study assessed the appropriateness of using the reduced modulus to incorporate the effect of indenter deformation in the analysis of indentation with spherical indenters. The analysis, based on finite element simulations, considered four values of the ratio of the indented material elastic modulus to that of the diamond indenter, E/E(i) (0, 0.04, 0.19, 0.39), four values of the ratio of the elastic reduced modulus to the initial yield strength, E(r)/Y (0, 10, 20, 100), and two values of the ratio of the indenter radius to maximum total displacement, R/delta(max) (3, 10). Indenter deformation effects are better accounted for by the reduced modulus if the indented material behaves entirely elastically. In this case, identical load-displacement (P - delta) curves are obtained with rigid and elastic spherical indenters for the same elastic reduced modulus. Changes in the ratio E/E(i), from 0 to 0.39, resulted in variations lower than 5% in the load dimensionless functions, lower than 3% in the contact area, A(c), and lower than 5% in the ratio H/E(r). However, deformation of the elastic indenter changed the actual contact radius, even in the indentation of elastic materials. Even though the load dimensionless functions showed only a slight increase with the ratio E/E(i), the hardening coefficient and the yield strength could be slightly overestimated when algorithms based on rigid indenters are used. For the unloading curves, the ratio delta(e)/delta(max), where delta(e) is the point corresponding to zero load of a straight line with slope S from the point (P(max), delta(max)), varied less than 5% with the ratio E/E(i).
Similarly, the relationship between the reduced modulus and the unloading indentation curve, expressed by Sneddon's equation, did not reveal any need for correction with the ratio E/E(i). The parameter of the indentation curve most affected by indenter deformation was the ratio between the residual indentation depth after complete unloading and the maximum indenter displacement, delta(r)/delta(max) (up to 26%), but this variation did not significantly decrease the capability to estimate hardness and elastic modulus based on the ratio of the residual indentation depth to maximum indentation depth, h(r)/h(max). In general, the results confirm the convenience of using the reduced modulus in spherical instrumented indentation tests.
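The reduced-modulus and Sneddon-stiffness relations discussed above can be sketched as follows. The default diamond-indenter values (E_i of roughly 1141 GPa, nu_i of roughly 0.07) are commonly quoted figures, not values taken from this paper.

```python
def reduced_modulus(E, nu, E_i=1141e9, nu_i=0.07):
    """Combined specimen/indenter modulus:
    1/Er = (1 - nu^2)/E + (1 - nu_i^2)/E_i.
    Defaults are commonly quoted diamond-indenter values (assumption)."""
    return 1.0 / ((1.0 - nu**2) / E + (1.0 - nu_i**2) / E_i)

def sneddon_stiffness(E_r, a):
    """Sneddon's relation for the elastic unloading stiffness S = dP/d(delta)
    of an axisymmetric contact with contact radius a: S = 2 * Er * a."""
    return 2.0 * E_r * a
```

In the rigid-indenter limit (E_i much larger than E), the reduced modulus tends to the plane-strain modulus E/(1 - nu^2), which is the sense in which the rigid-indenter algorithms mentioned above are a special case.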
Abstract:
This article presents a systematic and logical study of the topology optimized design, microfabrication, and static/dynamic performance characterization of an electro-thermo-mechanical microgripper. The microgripper is designed using a topology optimization algorithm based on a spatial filtering technique and considering different penalization coefficients for different material properties during the optimization cycle. The microgripper design has a symmetric monolithic 2D structure which consists of a complex combination of rigid links integrating both the actuating and gripping mechanisms. The numerical simulation is performed by studying the effects of convective heat transfer, thermal boundary conditions at the fixed anchors, and microgripper performance considering temperature-dependent and independent material properties. The microgripper is fabricated from a 25 μm thick nickel foil using laser microfabrication technology and its static/dynamic performance is experimentally evaluated. The static and dynamic electro-mechanical characteristics are analyzed as step response functions with respect to tweezing/actuating displacements, applied current/power, and actual electric resistance. A microgripper prototype having overall dimensions of 1 mm (L) × 2.5 mm (W) is able to deliver maximum tweezing and actuating displacements of 25.5 μm and 33.2 μm along the X and Y axes, respectively, under an applied power of 2.32 W. Experimental performance is compared with finite element modeling simulation results.
Abstract:
This work discusses the determination of breathing patterns in time sequences of images obtained from magnetic resonance (MR) and their use in the temporal registration of coronal and sagittal images. The registration is made without the use of any triggering information or any special gas to enhance the contrast. The temporal sequences of images are acquired in free breathing. The real movement of the lung has never been seen directly, as it is totally dependent on its surrounding muscles and collapses without them. The visualization of the lung in motion is a current topic of research in medicine. The lung movement is not periodic and is susceptible to variations in the degree of respiration. Compared to computerized tomography (CT), MR imaging involves longer acquisition times but is preferable because it does not involve radiation. As coronal and sagittal sequences of images are orthogonal to each other, their intersection corresponds to a segment in three-dimensional space. The registration is based on the analysis of this intersection segment. A time sequence of this intersection segment can be stacked, defining a two-dimensional spatio-temporal (2DST) image. The algorithm proposed in this work can detect asynchronous movements of the internal lung structures and of the organs surrounding the lung. It is assumed that the diaphragmatic movement is the principal movement and that all the lung structures move almost synchronously. The synchronization is performed through a pattern named the respiratory function. This pattern is obtained by processing a 2DST image. An interval Hough transform algorithm searches for movements synchronized with the respiratory function. A greedy active contour algorithm adjusts small discrepancies originated by asynchronous movements in the respiratory patterns. The output is a set of respiratory patterns.
Finally, the composition of coronal and sagittal image pairs that are in the same breathing phase is performed by comparing respiratory patterns originated from the diaphragmatic and upper boundary surfaces. When available, the respiratory patterns associated with internal lung structures are also used. The results of the proposed method are compared with the pixel-by-pixel comparison method. The proposed method increases the number of registered pairs representing composed images and allows an easy check of the breathing phase. (C) 2010 Elsevier Ltd. All rights reserved.
Abstract:
Twelve samples with different grain sizes were prepared by normal grain growth and by primary recrystallization, and the hysteresis dissipated energy was measured by a quasi-static method. Results showed a linear relation between hysteresis energy loss and the inverse of grain size, here called Mager's law, for maximum inductions from 0.6 to 1.5 T, and a Steinmetz power-law relation between hysteresis loss and maximum induction for all samples. The combined effect is better described by a Mager's law whose coefficients follow Steinmetz's law.
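The combined law suggested above, hysteresis loss linear in inverse grain size with both coefficients following a Steinmetz power law in the maximum induction, could be written as in the sketch below. All coefficients are hypothetical fitting parameters, not measured values from this study.

```python
def hysteresis_loss(B_max, grain_size, k0=1.0, k1=1e-5, n0=1.6, n1=1.8):
    """Mager-type law W_h = c0(B) + c1(B)/d, where d is the grain size and
    both coefficients follow a Steinmetz power law in the maximum induction
    B_max. Default parameter values are illustrative, not fitted data."""
    return k0 * B_max**n0 + (k1 * B_max**n1) / grain_size
```

Under this model the loss grows with B_max for any grain size and decreases toward the pure-Steinmetz term as the grain size increases, consistent with the trends reported above.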
Abstract:
The cost of a new ship design heavily depends on the principal dimensions of the ship; however, minimizing these dimensions often conflicts with minimizing oil outflow (in the event of an accidental spill). This study demonstrates a rational methodology for selecting the optimal dimensions and form coefficients of tankers via a genetic algorithm. A multi-objective optimization problem was formulated by using two objective attributes in the evaluation of each design, namely total cost and mean oil outflow. In addition, a procedure that can be used to balance the designs in terms of weight and useful space is proposed. A genetic algorithm was implemented to search for optimal design parameters and to identify the nondominated Pareto frontier. At the end of this study, three real ships are used as case studies. [DOI:10.1115/1.4002740]
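The nondominated (Pareto) filtering at the core of such a multi-objective search can be sketched as follows, with each design reduced to a (total cost, mean oil outflow) pair, both minimized. The genetic operators themselves (selection, crossover, mutation) are omitted.

```python
def pareto_front(designs):
    """Return the nondominated subset of (cost, outflow) pairs, both minimized.
    A design is dominated if another design is no worse in both objectives
    and strictly better in at least one."""
    def dominates(q, p):
        return q[0] <= p[0] and q[1] <= p[1] and (q[0] < p[0] or q[1] < p[1])
    return [p for p in designs
            if not any(dominates(q, p) for q in designs if q != p)]
```

For example, among the designs (1, 5), (2, 3), (3, 4), (4, 1), (5, 5), the designs (3, 4) and (5, 5) are dominated and the remaining three form the Pareto frontier.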
Abstract:
We present a novel array RLS algorithm with forgetting factor that circumvents the problem of fading regularization, inherent to the standard exponentially-weighted RLS, by allowing for time-varying regularization matrices with generic structure. Simulations in finite precision show the algorithm's superiority as compared to alternative algorithms in the context of adaptive beamforming.
Abstract:
We propose a robust and low-complexity scheme to estimate and track carrier frequency from signals traveling under low signal-to-noise ratio (SNR) conditions in highly nonstationary channels. These scenarios arise in planetary exploration missions subject to high dynamics, such as the Mars exploration rover missions. The method comprises a bank of adaptive linear predictors (ALP) supervised by a convex combiner that dynamically aggregates the individual predictors. The adaptive combination is able to outperform the best individual estimator in the set, which leads to a universal scheme for frequency estimation and tracking. A simple technique for bias compensation considerably improves the ALP performance. It is also shown that retrieval of frequency content by a fast Fourier transform (FFT)-search method, instead of only inspecting the angle of a particular root of the error predictor filter, enhances performance, particularly at very low SNR levels. Simple techniques that enforce frequency continuity further improve the overall performance. In summary, we illustrate by extensive simulations that adaptive linear prediction methods render a robust and competitive frequency tracking technique.
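The FFT-search retrieval of the frequency estimate can be sketched as a coarse peak search over the block spectrum. The predictor bank, convex combiner, and bias compensation are omitted, and the signal parameters below are illustrative.

```python
import numpy as np

def fft_frequency_search(x, fs):
    """Estimate the dominant frequency of x as the peak of the magnitude
    spectrum. Resolution is fs/len(x); a real tracker would refine this
    coarse estimate and enforce continuity across blocks."""
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

rng = np.random.default_rng(42)
fs, n, f0 = 1000.0, 1024, 125.0          # f0 falls exactly on an FFT bin here
t = np.arange(n) / fs
x = np.cos(2 * np.pi * f0 * t) + 0.5 * rng.standard_normal(n)
f_hat = fft_frequency_search(x, fs)
```

The coherent FFT gain over the block is what makes the search usable at low SNR: the tone concentrates in one bin while the noise spreads over all of them.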
Distributed Estimation Over an Adaptive Incremental Network Based on the Affine Projection Algorithm
Abstract:
We study the problem of distributed estimation based on the affine projection algorithm (APA), which is developed from Newton's method for minimizing a cost function. The proposed solution is formulated to ameliorate the limited convergence properties of least-mean-square (LMS) type distributed adaptive filters with colored inputs. The analysis of transient and steady-state performances at each individual node within the network is developed by using a weighted spatial-temporal energy conservation relation and confirmed by computer simulations. The simulation results also verify that the proposed algorithm provides not only a faster convergence rate but also an improved steady-state performance as compared to an LMS-based scheme. In addition, the new approach attains an acceptable misadjustment performance with lower computational and memory cost, provided the number of regressor vectors and filter length parameters are appropriately chosen, as compared to a distributed recursive-least-squares (RLS) based method.
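A single-node sketch of the affine projection update underlying the scheme, identifying an FIR channel under colored AR(1) input (the regime where APA outpaces LMS). The incremental network cooperation is omitted, and mu, delta, and K are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
M, K = 8, 4                      # filter length, number of regressor vectors
mu, delta = 0.5, 1e-2            # step size, regularization
N = 3000

# Colored (AR(1)) input, the case where plain LMS converges slowly.
u = np.zeros(N)
for n in range(1, N):
    u[n] = 0.9 * u[n - 1] + rng.standard_normal()

w_true = rng.standard_normal(M)
d = np.convolve(u, w_true)[:N] + 1e-3 * rng.standard_normal(N)

w = np.zeros(M)
for n in range(M + K, N):
    # X: M x K matrix whose k-th column is the regressor delayed by k samples.
    X = np.column_stack([u[n - k - M + 1:n - k + 1][::-1] for k in range(K)])
    e = d[n - np.arange(K)] - X.T @ w
    # APA update: project the error through the K most recent regressors.
    w += mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(K), e)
```

With K = 1 this reduces to normalized LMS; increasing K buys faster convergence on colored inputs at higher per-iteration cost, which is the trade-off the abstract describes.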
Abstract:
We derive an easy-to-compute approximate bound for the range of step-sizes for which the constant-modulus algorithm (CMA) will remain stable if initialized close to a minimum of the CM cost function. Our model highlights the influence of the signal constellation used in the transmission system: for smaller variation in the modulus of the transmitted symbols, the algorithm will be more robust, and the steady-state misadjustment will be smaller. The theoretical results are validated through several simulations, for long and short filters and channels.
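A sketch of the constant-modulus update to which such a step-size bound applies, for a real BPSK constellation (dispersion constant R2 = 1). The step size here is simply chosen small, not derived from the paper's bound, and the channel and equalizer length are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N, L, mu, R2 = 5000, 8, 1e-3, 1.0       # BPSK: R2 = E|s|^4 / E|s|^2 = 1

s = rng.choice([-1.0, 1.0], N)          # constant-modulus source
x = np.convolve(s, [1.0, 0.4])[:N]      # mild ISI channel

w = np.zeros(L)
w[0] = 1.0                              # center-spike init, near a CM minimum
y_hist = np.zeros(N)
for n in range(L, N):
    xv = x[n - L + 1:n + 1][::-1]       # regressor [x[n], ..., x[n-L+1]]
    y = w @ xv
    w += mu * (R2 - y * y) * y * xv     # stochastic-gradient CM update
    y_hist[n] = y

# Residual dispersion over the last samples; small once the eye is open.
dispersion = np.mean((y_hist[-500:] ** 2 - R2) ** 2)
```

The update is blind: it penalizes deviation of the output modulus from R2 rather than comparing against transmitted symbols, which is why constellations with near-constant modulus behave best, as the abstract notes.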
Abstract:
Higher order (2,4) FDTD schemes used for numerical solutions of Maxwell's equations are focused on diminishing the truncation errors caused by the Taylor series expansion of the spatial derivatives. These schemes use a larger computational stencil, which generally makes use of the two constant coefficients, C-1 and C-2, for the four-point central-difference operators. In this paper we propose a novel way to diminish these truncation errors, in order to obtain more accurate numerical solutions of Maxwell's equations. For such purpose, we present a method to individually optimize the pair of coefficients, C-1 and C-2, based on any desired grid size resolution and size of time step. Particularly, we are interested in using coarser grid discretizations to be able to simulate electrically large domains. The results of our optimization algorithm show a significant reduction in dispersion error and numerical anisotropy for all modeled grid size resolutions. Numerical simulations of free-space propagation verify the very promising theoretical results. The model is also shown to perform well in more complex, realistic scenarios.
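The two-coefficient four-point operator in question, with the classical Taylor-derived (2,4) values C1 = 9/8 and C2 = -1/24 as the unoptimized baseline (the paper instead tunes C1 and C2 per grid resolution and time step):

```python
import numpy as np

def central_diff4(f, x, dx, C1=9 / 8, C2=-1 / 24):
    """Four-point staggered central difference used in (2,4) FDTD:
    f'(x) ~ [C1*(f(x + dx/2) - f(x - dx/2))
             + C2*(f(x + 3*dx/2) - f(x - 3*dx/2))] / dx.
    Defaults are the classical fourth-order Taylor coefficients."""
    return (C1 * (f(x + dx / 2) - f(x - dx / 2))
            + C2 * (f(x + 3 * dx / 2) - f(x - 3 * dx / 2))) / dx

dx, x0 = 0.1, 0.3
err24 = abs(central_diff4(np.sin, x0, dx) - np.cos(x0))               # (2,4)
err22 = abs(central_diff4(np.sin, x0, dx, C1=1.0, C2=0.0) - np.cos(x0))  # (2,2)
```

Setting C1 = 1 and C2 = 0 recovers the second-order two-point operator; the fourth-order pair reduces the truncation error by several orders of magnitude at this grid resolution, which is the error budget the paper's optimization redistributes.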
Abstract:
Starting from the Durbin algorithm in polynomial space with an inner product defined by the signal autocorrelation matrix, an isometric transformation is defined that maps this vector space into another one where the Levinson algorithm is performed. Alternatively, for iterative algorithms such as discrete all-pole (DAP), an efficient implementation of a Gohberg-Semencul (GS) relation is developed for the inversion of the autocorrelation matrix which considers its centrosymmetry. In the solution of the autocorrelation equations, the Levinson algorithm is found to be less complex operationally than the procedures based on GS inversion for up to a minimum of five iterations at various linear prediction (LP) orders.
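A minimal sketch of the Levinson(-Durbin) recursion mentioned above, solving the order-p autocorrelation normal equations in O(p^2) operations, checked against direct inversion of the Toeplitz system:

```python
import numpy as np

def levinson_durbin(r, p):
    """Solve the order-p autocorrelation (Yule-Walker) equations for the
    prediction polynomial a (with a[0] = 1) and the prediction-error power E,
    in O(p^2) instead of the O(p^3) of direct matrix inversion."""
    a = np.zeros(p + 1)
    a[0] = 1.0
    E = r[0]
    for m in range(1, p + 1):
        acc = r[m] + a[1:m] @ r[1:m][::-1]   # sum_i a[i] * r[m - i]
        k = -acc / E                          # reflection coefficient
        a[1:m + 1] += k * a[:m][::-1]         # order update: a[i] += k*a[m-i]
        E *= 1.0 - k * k                      # error power shrinks each order
    return a, E
```

The recursion exploits exactly the Toeplitz (and centrosymmetric) structure that the Gohberg-Semencul inversion route also uses, which is why the two approaches trade off only in operation count.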
Abstract:
In this paper the continuous Verhulst dynamic model is used to synthesize a new distributed power control algorithm (DPCA) for use in direct sequence code division multiple access (DS-CDMA) systems. The Verhulst model was initially designed to describe the population growth of biological species under food and physical space restrictions. The discretization of the corresponding differential equation is accomplished via the Euler numeric integration (ENI) method. Analytical convergence conditions for the proposed DPCA are also established. Several properties of the proposed recursive algorithm, such as Euclidean distance from the optimum vector after convergence, convergence speed, normalized mean squared error (NSE), average power consumption per user, performance under dynamic channels, and implementation complexity aspects, are analyzed through simulations. The simulation results are compared with two other DPCAs: the classic algorithm derived by Foschini and Miljanic and the sigmoidal algorithm of Uykan and Koivo. Under estimation-error conditions, the proposed DPCA exhibits smaller discrepancy from the optimum power vector solution and better convergence (under fixed and adaptive convergence factors) than the classic and sigmoidal DPCAs. (C) 2010 Elsevier GmbH. All rights reserved.
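One plausible form of the Euler-discretized Verhulst recursion is sketched below: taking dp/dt = alpha * p * (1 - gamma/gamma*) with the SINR-balancing equilibrium gives p[k+1] = (1 + alpha)p[k] - alpha*(gamma[k]/gamma*)p[k]. This is my reconstruction from the logistic equation, not the paper's derivation, and the link gains, targets, and alpha below are illustrative.

```python
import numpy as np

G = np.array([[1.0, 0.1, 0.1],        # link gains: G[i, j] = gain from tx j to rx i
              [0.1, 1.0, 0.1],
              [0.1, 0.1, 1.0]])
sigma2 = 0.01                         # receiver noise power
gamma_t = np.array([4.0, 4.0, 4.0])   # target SINRs (feasible for this G)
alpha = 0.5                           # Euler step / convergence factor
p = np.full(3, 0.1)                   # initial transmit powers

def sinr(p):
    signal = np.diag(G) * p
    interference = G @ p - signal + sigma2
    return signal / interference

for _ in range(2000):
    # Euler-discretized Verhulst recursion:
    # p[k+1] = (1 + alpha) * p[k] - alpha * (gamma[k] / gamma*) * p[k]
    p = p * (1.0 + alpha * (1.0 - sinr(p) / gamma_t))
```

Each user raises its power while below target and lowers it while above, using only its own measured SINR, which is what makes the recursion distributed; its fixed point is the same balanced power vector targeted by the Foschini-Miljanic iteration.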
Abstract:
The main goal of this paper is to apply the so-called policy iteration algorithm (PIA) to the long-run average continuous control problem of piecewise deterministic Markov processes (PDMPs) taking values in a general Borel space, with a compact action space depending on the state variable. In order to do that, we first derive some important properties of a pseudo-Poisson equation associated with the problem. It is then shown that the convergence of the PIA to a solution satisfying the optimality equation holds under some classical hypotheses, and that this optimal solution yields an optimal control strategy for the average control problem for the continuous-time PDMP in feedback form.
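The policy iteration loop itself, in its simplest finite-state discounted form. The paper's setting (long-run average cost for PDMPs on a Borel space) is far more general; this toy MDP, with made-up costs and transitions, only illustrates the evaluate/improve cycle.

```python
import numpy as np

# A toy 2-state, 2-action MDP: costs c[s, a] and transitions P[a][s, s'].
c = np.array([[1.0, 4.0],
              [3.0, 0.5]])
P = np.array([[[0.8, 0.2],            # transitions under action 0
               [0.3, 0.7]],
              [[0.1, 0.9],            # transitions under action 1
               [0.6, 0.4]]])
beta = 0.9                            # discount factor

policy = np.zeros(2, dtype=int)
while True:
    # Policy evaluation: solve (I - beta * P_pi) V = c_pi exactly.
    P_pi = np.array([P[policy[s], s] for s in range(2)])
    c_pi = np.array([c[s, policy[s]] for s in range(2)])
    V = np.linalg.solve(np.eye(2) - beta * P_pi, c_pi)
    # Policy improvement: greedy one-step lookahead on V.
    Q = c + beta * np.array([[P[a, s] @ V for a in range(2)]
                             for s in range(2)])
    new_policy = np.argmin(Q, axis=1)
    if np.array_equal(new_policy, policy):
        break                         # policy stable: optimality equation holds
    policy = new_policy
```

At termination the evaluated value function satisfies the (discounted) optimality equation, which is the finite-dimensional analogue of the convergence property the paper establishes for its pseudo-Poisson equation.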