947 results for Iterative Closest Point (ICP) Algorithm


Relevance:

100.00%

Publisher:

Abstract:

Synchronization issues pose a big challenge in cooperative communications. The benefits of cooperative diversity can easily be undone by improper synchronization. The problem arises because it is difficult, from a complexity perspective, for multiple transmitting nodes to synchronize to a single receiver. For OFDM-based systems, the loss of performance due to imperfect carrier synchronization is severe, since it results in inter-carrier interference (ICI). The use of space-time/space-frequency codes from orthogonal designs is attractive for cooperative encoding. But orthogonal designs suffer from inter-symbol interference (ISI) due to violation of the quasi-static assumption, which can arise from frequency- or time-selectivity of the channel. In this paper, we are concerned with combating the effects of i) ICI induced by carrier frequency offsets (CFO), and ii) ISI induced by frequency selectivity of the channel, in a cooperative communication scheme using space-frequency block coded (SFBC) OFDM. Specifically, we present an iterative interference cancellation (IC) algorithm to combat the ISI and ICI effects. The proposed algorithm can be applied to any orthogonal or quasi-orthogonal design in cooperative SFBC OFDM schemes.
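
A minimal sketch of the iterative-cancellation idea (not the paper's algorithm; a made-up one-dimensional ICI model in which each subcarrier leaks a fraction `c` into its two neighbours is assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
N, c = 64, 0.3                       # subcarriers, leakage coefficient (toy ICI model)
s = rng.choice([-1.0, 1.0], size=N)  # BPSK symbols on the subcarriers

# Received signal: each subcarrier picks up interference from its two neighbours
y = s + c * (np.roll(s, 1) + np.roll(s, -1))

# Iterative interference cancellation: detect, rebuild the interference, subtract, repeat
s_hat = np.sign(y)
for _ in range(5):
    ici = c * (np.roll(s_hat, 1) + np.roll(s_hat, -1))
    s_hat = np.sign(y - ici)
```

In this noiseless toy all symbols are recovered; with noise and channel estimation error the cancellation is only approximate and the loop is iterated a fixed number of times.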

Relevance:

100.00%

Publisher:

Abstract:

The singularity structure of the solutions of a general third-order system, with polynomial right-hand sides of degree less than or equal to two, is studied about a movable singular point. An algorithm for transforming the given third-order system to a third-order Briot-Bouquet system is presented. The dominant behavior of a solution of the given system near a movable singularity is used to construct a transformation that changes the given system directly to a third-order Briot-Bouquet system. The results of Horn for the third-order Briot-Bouquet system are exploited to give the complete form of the series solutions of the given third-order system; convergence of these series in a deleted neighborhood of the singularity is ensured. This algorithm is used to study the singularity structure of the solutions of the Lorenz system, the Rikitake system, the three-wave interaction problem, the Rabinovich system, the Lotka-Volterra system, and the May-Leonard system for different sets of parameter values. The proposed approach goes far beyond the ARS algorithm.

Relevance:

100.00%

Publisher:

Abstract:

In this paper, a method of tracking the peak power in a wind energy conversion system (WECS) is proposed, which is independent of the turbine parameters and air density. The algorithm searches for the peak power by varying the speed in the desired direction. The generator is operated in the speed control mode with the speed reference being dynamically modified in accordance with the magnitude and direction of change of active power. The peak power points in the P-omega curve correspond to dP/domega = 0. This fact is made use of in the optimum point search algorithm. The generator considered is a wound rotor induction machine whose stator is connected directly to the grid and the rotor is fed through back-to-back pulse-width-modulation (PWM) converters. Stator flux-oriented vector control is applied to control the active and reactive current loops independently. The turbine characteristics are generated by a dc motor fed from a commercial dc drive. All of the control loops are executed by a single-chip digital signal processor (DSP) controller TMS320F240. Experimental results show that the performance of the control algorithm compares well with the conventional torque control method.
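
The search logic can be sketched as a perturb-and-observe loop (a generic illustration, not the paper's exact controller; the quadratic P-omega curve below is a made-up stand-in for the turbine characteristic):

```python
def turbine_power(omega):
    """Hypothetical P-omega curve with its peak at omega = 10."""
    return 100.0 - (omega - 10.0) ** 2

def mppt_search(omega0, step=0.5, iters=50):
    """Climb the P-omega curve: keep stepping while power rises, reverse when it falls."""
    omega, direction = omega0, 1.0
    p_prev = turbine_power(omega)
    for _ in range(iters):
        omega += direction * step
        p = turbine_power(omega)
        if p < p_prev:          # power fell: dP/d_omega changed sign, so reverse
            direction = -direction
        p_prev = p
    return omega

omega_star = mppt_search(4.0)   # settles within one step of the peak at omega = 10
```

Because the update needs only the measured power and the speed reference, the loop is independent of turbine parameters and air density, which is the property the paper exploits.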

Relevance:

100.00%

Publisher:

Abstract:

Numerical Linear Algebra (NLA) kernels are at the heart of all computational problems. These kernels require hardware acceleration for increased throughput. NLA solvers for dense and sparse matrices differ in the way the matrices are stored and operated upon, although they exhibit similar computational properties. While ASIC solutions for NLA solvers can deliver high performance, they are not scalable, and hence are not commercially viable. In this paper, we show how NLA kernels can be accelerated on REDEFINE, a scalable runtime reconfigurable hardware platform. Compared to a software implementation, the Direct Solver (Modified Faddeev's algorithm) on REDEFINE shows a 29X improvement on average, and the Iterative Solver (Conjugate Gradient algorithm) shows a 15-20% improvement. We further show that the solution on REDEFINE scales to larger problem sizes without any notable degradation in performance.
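
For reference, the iterative solver named above, the Conjugate Gradient method for a symmetric positive definite system Ax = b, can be sketched in a few lines (a textbook version, independent of REDEFINE):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Textbook CG for a symmetric positive definite matrix A."""
    x = np.zeros_like(b)
    r = b - A @ x            # residual
    p = r.copy()             # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p   # conjugate new direction against the old ones
        rs = rs_new
    return x

# Small SPD test system
M = np.array([[4.0, 1.0], [1.0, 3.0]])
rhs = np.array([1.0, 2.0])
x = conjugate_gradient(M, rhs)
```

The kernel is dominated by the matrix-vector product `A @ p`, which is the operation a hardware accelerator such as REDEFINE would target.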

Relevance:

100.00%

Publisher:

Abstract:

In this paper we study the problem of designing SVM classifiers when the kernel matrix, K, is affected by uncertainty. Specifically, K is modeled as a positive affine combination of given positive semidefinite kernels, with the coefficients ranging in a norm-bounded uncertainty set. We treat the problem using the Robust Optimization methodology. This reduces the uncertain SVM problem to a deterministic conic quadratic problem which can be solved in principle by a polynomial-time Interior Point (IP) algorithm. However, for large-scale classification problems, IP methods become intractable and one has to resort to first-order gradient-type methods. The strategy we use here is to reformulate the robust counterpart of the uncertain SVM problem as a saddle point problem and employ a special gradient scheme which works directly on the convex-concave saddle function. The algorithm is a simplified version of a general scheme due to Juditski and Nemirovski (2011). It achieves an O(1/T^2) reduction of the initial error after T iterations. A comprehensive empirical study on both synthetic data and real-world protein structure data sets shows that the proposed formulations achieve the desired robustness, and the saddle point based algorithm outperforms the IP method significantly.
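
The saddle-point idea can be illustrated on a toy strongly convex-concave function (this is plain gradient descent-ascent on a made-up function, not the Juditski-Nemirovski scheme used in the paper):

```python
# Toy saddle function f(x, y) = 0.5*x**2 + x*y - 0.5*y**2:
# convex in x, concave in y, unique saddle point at (0, 0).
def grad_x(x, y):
    return x + y       # d f / d x

def grad_y(x, y):
    return x - y       # d f / d y

x, y, eta = 3.0, -2.0, 0.1
for _ in range(500):
    # simultaneous descent in x and ascent in y
    x, y = x - eta * grad_x(x, y), y + eta * grad_y(x, y)
```

For this strongly convex-concave f the iterates spiral into the saddle point; on merely bilinear couplings plain descent-ascent can diverge, which is one reason specialised schemes like the one in the paper are needed.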

Relevance:

100.00%

Publisher:

Abstract:

A novel Projection Error Propagation-based Regularization (PEPR) method is proposed to improve image quality in Electrical Impedance Tomography (EIT). The PEPR method defines the regularization parameter as a function of the projection error, given by the difference between experimental measurements and calculated data. The regularization parameter in the reconstruction algorithm is thus modified automatically according to the noise level in the measured data and the ill-posedness of the Hessian matrix. Resistivity imaging of practical phantoms is performed with a Model Based Iterative Image Reconstruction (MoBIIR) algorithm as well as with the Electrical Impedance Diffuse Optical Reconstruction Software (EIDORS), both using PEPR. The effect of the PEPR method is also studied on phantoms with different configurations and with different current injection methods. All the resistivity images reconstructed with the PEPR method are compared with the single step regularization (STR) and Modified Levenberg Regularization (LMR) techniques. The results show that the PEPR technique reduces the projection error and solution error in each iteration, for both simulated and experimental data in both algorithms, and improves the reconstructed images with better contrast to noise ratio (CNR), percentage of contrast recovery (PCR), coefficient of contrast (COC) and diametric resistivity profile (DRP). © 2013 Elsevier Ltd. All rights reserved.
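
The core idea, making the regularization parameter track the projection error, can be sketched on a toy linear inverse problem (the specific functional form of lambda below is an assumption for illustration, not the published PEPR formula):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 10))     # stand-in sensitivity (Jacobian) matrix
x_true = rng.normal(size=10)
v_meas = A @ x_true               # "measured" boundary data, noiseless here

x = np.zeros(10)
for _ in range(30):
    r = v_meas - A @ x                       # projection error
    lam = (r @ r) / (v_meas @ v_meas)        # assumed form: lambda shrinks with the error
    dx = np.linalg.solve(A.T @ A + lam * np.eye(10), A.T @ r)
    x += dx
```

Because lambda collapses as the projection error falls, the update is heavily damped early on (when the data misfit is large) and approaches an undamped Gauss-Newton step near convergence, which is the qualitative behavior PEPR aims for.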

Relevance:

100.00%

Publisher:

Abstract:

This dissertation is concerned with the development of a new discrete element method (DEM) based on Non-Uniform Rational Basis Splines (NURBS). With NURBS, the new DEM is able to capture sphericity and angularity, the two particle morphological measures used in characterizing real grain geometries. By taking advantage of the parametric nature of NURBS, the Lipschitzian dividing rectangle (DIRECT) global optimization procedure is employed as a solution procedure to the closest-point projection problem, which enables the contact treatment of non-convex particles. A contact dynamics (CD) approach to the NURBS-based discrete method is also formulated. By combining particle shape flexibility, properties of implicit time-integration, and non-penetrating constraints, we target applications in which the classical DEM either performs poorly or simply fails, i.e., in granular systems composed of rigid or highly stiff angular particles and subjected to quasistatic or dynamic flow conditions. The CD implementation is made simple by adopting a variational framework, which enables the resulting discrete problem to be readily solved using off-the-shelf mathematical programming solvers. The capabilities of the NURBS-based DEM are demonstrated through 2D numerical examples that highlight the effects of particle morphology on the macroscopic response of granular assemblies under quasistatic and dynamic flow conditions, and a 3D characterization of material response in the shear band of a real triaxial specimen.
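
To make the NURBS ingredient concrete, here is a minimal Cox-de Boor evaluation of a rational curve (a standalone sketch, unrelated to the dissertation's code): with control points (1,0), (1,1), (0,1), weights (1, sqrt(2)/2, 1) and a clamped quadratic knot vector, the NURBS curve traces an exact quarter of the unit circle, something no polynomial B-spline can do.

```python
import math

def bspline_basis(i, p, u, U):
    """Cox-de Boor recursion for the i-th degree-p B-spline basis on knot vector U."""
    if p == 0:
        return 1.0 if U[i] <= u < U[i + 1] else 0.0
    left = right = 0.0
    if U[i + p] != U[i]:
        left = (u - U[i]) / (U[i + p] - U[i]) * bspline_basis(i, p - 1, u, U)
    if U[i + p + 1] != U[i + 1]:
        right = (U[i + p + 1] - u) / (U[i + p + 1] - U[i + 1]) * bspline_basis(i + 1, p - 1, u, U)
    return left + right

def nurbs_point(u, ctrl, w, p, U):
    """Rational (weighted) combination of the control points at parameter u."""
    N = [bspline_basis(i, p, u, U) for i in range(len(ctrl))]
    den = sum(wi * Ni for wi, Ni in zip(w, N))
    return tuple(sum(wi * Ni * c[k] for wi, Ni, c in zip(w, N, ctrl)) / den
                 for k in range(2))

ctrl = [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
w = [1.0, math.sqrt(2) / 2, 1.0]
U = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]          # clamped quadratic knot vector
x, y = nurbs_point(0.5, ctrl, w, p=2, U=U)  # a point on the unit circle
```

The closest-point projection problem the dissertation solves with DIRECT amounts to minimising the distance from a query point to `nurbs_point(u, ...)` over the parameter u.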

Relevance:

100.00%

Publisher:

Abstract:

There is much common ground between the areas of coding theory and systems theory. Fitzpatrick has shown that a Gröbner basis approach leads to efficient algorithms in the decoding of Reed-Solomon codes and in scalar interpolation and partial realization. This thesis simultaneously generalizes and simplifies that approach and presents applications to discrete-time modeling, multivariable interpolation and list decoding. Gröbner basis theory has come into its own in the context of software and algorithm development. By generalizing the concept of polynomial degree, term orders are provided for multivariable polynomial rings and free modules over polynomial rings. The orders are not, in general, unique, and this adds, in no small way, to the power and flexibility of the technique. As well as being generating sets for ideals or modules, Gröbner bases always contain an element which is minimal with respect to the corresponding term order. Central to this thesis is a general algorithm, valid for any term order, that produces a Gröbner basis for the solution module (or ideal) of elements satisfying a sequence of generalized congruences. These congruences, based on shifts and homomorphisms, are applicable to a wide variety of problems, including key equations and interpolations. At the core of the algorithm is an incremental step. Iterating this step lends a recursive/iterative character to the algorithm. As a consequence, not all of the input to the algorithm need be available from the start, and different "paths" can be taken to reach the final solution. The existence of a suitable chain of modules satisfying the criteria of the incremental step is a prerequisite for applying the algorithm.
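
A small illustration of the machinery in SymPy (a generic Gröbner computation, not the thesis's incremental algorithm): under the lex order with x > y, the ideal generated by x*y - 1 and x**2 + y has the reduced basis {x + y**2, y**3 + 1}.

```python
import sympy as sp

x, y = sp.symbols('x y')
G = sp.groebner([x * y - 1, x**2 + y], x, y, order='lex')
# Reduced lex basis: x + y**2 and y**3 + 1
print(list(G.exprs))

# Each original generator reduces to zero modulo the basis, as it must
_, rem = sp.reduced(x * y - 1, list(G.exprs), x, y, order='lex')
```

The choice of term order matters: a different order on the same generators can give a basis of different size and shape, which is the flexibility the thesis exploits.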

Relevance:

100.00%

Publisher:

Abstract:

The goal of this study was to characterize the image quality of our dedicated, quasi-monochromatic spectrum, cone beam breast imaging system under scatter-corrected and non-scatter-corrected conditions for a variety of breast compositions. CT projections were acquired of a breast phantom containing two concentric sets of acrylic spheres that varied in size (1-8 mm) based on their polar position. The breast phantom was filled with 3 different concentrations of methanol and water, simulating a range of breast densities (0.79-1.0 g/cc); acrylic yarn was sometimes included to simulate the connective tissue of a breast. For each phantom condition, 2D scatter was measured for all projection angles. Scatter-corrected and uncorrected projections were then reconstructed with an iterative ordered subsets convex algorithm. Reconstructed image quality was characterized using SNR and contrast analysis, followed by a human observer detection task for the spheres in the different concentric rings. Results show that scatter correction effectively reduces the cupping artifact and improves image contrast and SNR. Results from the observer study indicate that there was no statistical difference in the number or sizes of lesions observed in the scatter versus non-scatter corrected images for all densities. Nonetheless, applying scatter correction for differing breast conditions improves overall image quality.
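
The reconstruction step can be sketched with a toy ordered-subsets EM update (a generic OSEM loop on a made-up nonnegative linear system, not the authors' ordered subsets convex implementation):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.uniform(0.1, 1.0, size=(40, 16))   # toy nonnegative system matrix
x_true = rng.uniform(0.5, 2.0, size=16)
y = A @ x_true                              # noiseless projection data

subsets = np.array_split(np.arange(40), 4)  # 4 ordered subsets of projection rows
x = np.ones(16)                             # strictly positive start preserves positivity
for _ in range(50):
    for idx in subsets:
        As, ys = A[idx], y[idx]
        # multiplicative EM update restricted to this subset's projections
        x *= (As.T @ (ys / (As @ x))) / As.sum(axis=0)
```

Cycling through subsets gives roughly one full-data update's progress per subset visit, which is why ordered-subsets methods converge much faster per pass than plain EM.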

Relevance:

100.00%

Publisher:

Abstract:

Very high speed and low area hardware architectures of the SHACAL-1 encryption algorithm are presented in this paper. The SHACAL algorithm was a submission to the New European Schemes for Signatures, Integrity and Encryption (NESSIE) project and it is based on the SHA-1 hash algorithm. To date, there have been no performance metrics published on hardware implementations of this algorithm. A fully pipelined SHACAL-1 encryption architecture is described in this paper and when implemented on a Virtex-II X2V4000 FPGA device, it runs at a throughput of 17 Gbps. A fully pipelined decryption architecture achieves a speed of 13 Gbps when implemented on the same device. In addition, iterative architectures of the algorithm are presented. The SHACAL-1 decryption algorithm is derived and also presented in this paper, since it was not provided in the submission to NESSIE. © Springer-Verlag Berlin Heidelberg 2003.
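
SHACAL-1 reuses the round structure of SHA-1's compression function, with the 160-bit plaintext in the role of the chaining state and the key expanded the way SHA-1 expands a message block. As a point of reference, here is that 80-round structure as a plain SHA-1 implementation (this is SHA-1 itself, not SHACAL-1, which among other differences omits the final feed-forward addition), checked against `hashlib`:

```python
import struct

def rotl(x, n):
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

def sha1(message: bytes) -> bytes:
    h = [0x67452301, 0xEFCDAB89, 0x98BADCFE, 0x10325476, 0xC3D2E1F0]
    ml = len(message) * 8
    message += b"\x80"
    message += b"\x00" * ((56 - len(message)) % 64)   # pad to 56 mod 64
    message += struct.pack(">Q", ml)                  # append bit length
    for off in range(0, len(message), 64):
        w = list(struct.unpack(">16I", message[off:off + 64]))
        for t in range(16, 80):                       # message schedule
            w.append(rotl(w[t-3] ^ w[t-8] ^ w[t-14] ^ w[t-16], 1))
        a, b, c, d, e = h
        for t in range(80):                           # 4 groups of 20 rounds
            if t < 20:
                f, k = (b & c) | (~b & d), 0x5A827999
            elif t < 40:
                f, k = b ^ c ^ d, 0x6ED9EBA1
            elif t < 60:
                f, k = (b & c) | (b & d) | (c & d), 0x8F1BBCDC
            else:
                f, k = b ^ c ^ d, 0xCA62C1D6
            a, b, c, d, e = (rotl(a, 5) + f + e + k + w[t]) & 0xFFFFFFFF, a, rotl(b, 30), c, d
        h = [(x + y) & 0xFFFFFFFF for x, y in zip(h, (a, b, c, d, e))]
    return b"".join(struct.pack(">I", x) for x in h)
```

The fully pipelined FPGA architecture in the paper unrolls these 80 rounds in hardware, one pipeline stage per round group, which is what makes the multi-Gbps throughput possible.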

Relevance:

100.00%

Publisher:

Abstract:

Dissertation submitted for the degree of Master in Renewable Energy – Sustainable Electrical Conversion and Use

Relevance:

100.00%

Publisher:

Abstract:

We present a variable time step, fully adaptive in space, hybrid method for the accurate simulation of incompressible two-phase flows in the presence of surface tension in two dimensions. The method is based on the hybrid level set/front-tracking approach proposed in [H. D. Ceniceros and A. M. Roma, J. Comput. Phys., 205, 391-400, 2005]. Geometric, interfacial quantities are computed from front-tracking via the immersed-boundary setting, while the signed distance (level set) function, which is evaluated fast and to machine precision, is used as a fluid indicator. The surface tension force is obtained by employing the mixed Eulerian/Lagrangian representation introduced in [S. Shin, S. I. Abdel-Khalik, V. Daru and D. Juric, J. Comput. Phys., 203, 493-516, 2005], whose success in greatly reducing parasitic currents has been demonstrated. The use of our accurate fluid indicator together with effective Lagrangian marker control enhances this parasitic current reduction by several orders of magnitude. To resolve sharp gradients and salient flow features accurately and efficiently, we employ dynamic, adaptive mesh refinements. This spatial adaptation is used in concert with a dynamic control of the distribution of the Lagrangian nodes along the fluid interface and a variable time step, linearly implicit time integration scheme. We present numerical examples designed to test the capabilities and performance of the proposed approach as well as three applications: the long-time evolution of a fluid interface undergoing Rayleigh-Taylor instability, an example of bubble ascending dynamics, and a drop impacting on a free interface whose dynamics we compare with both existing numerical and experimental data.
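
The level-set fluid indicator can be sketched as follows (a generic illustration: signed distance to a circular interface, smoothed into a Heaviside-type indicator; the tanh mollifier and all parameter values are assumptions, not the paper's exact construction):

```python
import numpy as np

n, R, eps = 128, 0.3, 0.02                    # grid size, bubble radius, smoothing width
xs = np.linspace(-0.5, 0.5, n)
X, Y = np.meshgrid(xs, xs)

phi = np.sqrt(X**2 + Y**2) - R                # signed distance: < 0 inside, > 0 outside
H = 0.5 * (1.0 + np.tanh(phi / (2.0 * eps)))  # smoothed indicator: ~0 inside, ~1 outside

# The indicator selects phase properties, e.g. a variable density field
rho_in, rho_out = 1000.0, 1.0
rho = rho_in + (rho_out - rho_in) * H
```

Evaluating phi as an exact signed distance (here analytically; in the hybrid method, from the tracked front) is what keeps the smeared interface region uniformly thin as the mesh adapts.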

Relevance:

100.00%

Publisher:

Abstract:

To become competitive, photovoltaics must ultimately have its costs reduced and use photovoltaic systems of greater efficiency. The main steps in this direction are the use of new materials, improvements in the manufacture of modules, and the adoption of maximum power point tracking and solar tracking techniques. This article presents the design and development of an azimuth and elevation solar tracker based on a new conception of the positioning sensor, composed of an array of four photoresistors. The two direct current motors that operate the vertical and horizontal axes are controlled by a microcontroller implementing a proportional-integral law. The design requirements were low cost, small energy consumption and versatility. The microcontroller can also incorporate a maximum power point tracking algorithm. The performance of the solar tracker prototype in the initial phase of field tests can be considered appropriate. © Institution of Engineers Australia, 2013.
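
The sensing and control loop can be sketched as follows (hypothetical variable names and gains; the actual controller runs on a microcontroller reading the four photoresistors through its ADC):

```python
def tracking_errors(tl, tr, bl, br):
    """Differential errors from a 2x2 photoresistor array.

    tl/tr/bl/br: top-left, top-right, bottom-left, bottom-right readings.
    """
    azimuth_err = (tl + bl) - (tr + br)      # left half vs right half
    elevation_err = (tl + tr) - (bl + br)    # top half vs bottom half
    return azimuth_err, elevation_err

class PIController:
    """Proportional-integral law driving one DC motor axis."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def update(self, error):
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

# Evenly illuminated array -> no correction on either axis
az_err, el_err = tracking_errors(0.8, 0.8, 0.8, 0.8)
pi_az = PIController(kp=2.0, ki=0.5, dt=0.1)
cmd = pi_az.update(az_err)   # zero command when the sun is centred
```

When one half of the array is shaded, the corresponding differential error becomes nonzero and the PI output drives that axis until the four readings balance again.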

Relevance:

100.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

100.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)