976 results for iterative algorithm


Relevance:

60.00%

Publisher:

Abstract:

Compressed sensing (CS) is a new information sampling theory for acquiring sparse or compressible data with far fewer measurements than required by classical Nyquist/Shannon sampling. This is particularly important for imaging applications such as magnetic resonance imaging or astronomy. However, in the existing CS formulation, the use of the ℓ2 norm on the residuals is not particularly efficient when the noise is impulsive, and it can increase the upper bound of the recovery error. To address this problem, we consider a robust formulation for CS that suppresses outliers in the residuals. We propose an iterative algorithm for solving the robust CS problem that exploits the power of existing CS solvers. We also show that the upper bound on the recovery error is reduced in the case of non-Gaussian noise, and we demonstrate the efficacy of the method through numerical studies.
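
The abstract does not spell out the algorithm, but a common way to robustify CS while reusing a standard solver is iteratively reweighted least squares with Huber weights on the residuals. The following minimal Python sketch illustrates that pattern; `ista` is a stand-in ℓ1 solver, and all parameter values are illustrative assumptions, not the authors' method.

```python
import numpy as np

def ista(A, y, lam, n_iter=200):
    """Plain ISTA for min ||A x - y||_2^2 / 2 + lam ||x||_1 (stand-in CS solver)."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L        # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

def robust_cs(A, y, lam, delta=1.0, n_outer=10):
    """IRLS wrapper: reweight residuals with Huber weights, reuse the CS solver."""
    x = ista(A, y, lam)
    for _ in range(n_outer):
        r = A @ x - y
        w = np.where(np.abs(r) <= delta, 1.0, delta / np.abs(r))  # Huber weights
        sw = np.sqrt(w)
        x = ista(sw[:, None] * A, sw * y, lam)   # solve the weighted subproblem
    return x

# Toy demo: sparse signal, Gaussian matrix, impulsive noise on a few measurements
rng = np.random.default_rng(0)
n, m, k = 200, 80, 5
x_true = np.zeros(n); x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)
A = rng.normal(0, 1 / np.sqrt(m), (m, n))
y = A @ x_true
y[rng.choice(m, 4, replace=False)] += rng.normal(0, 20, 4)   # outliers
print(np.linalg.norm(robust_cs(A, y, 0.05) - x_true))
```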

Relevance:

60.00%

Publisher:

Abstract:

Learning robust subspaces that maximize class discrimination is challenging, and most current works assume only a weak connection between dimensionality reduction and classifier design. We propose an alternative framework in which these two steps are combined in a joint formulation that exploits their direct connection. Specifically, we learn an optimal subspace on the Grassmann manifold by jointly minimizing the classification error of an SVM classifier. We minimize the regularized empirical risk over both the hypothesis space of functions underlying this new generalized multi-class Lagrangian SVM and the Grassmann manifold, from which a linear projection is obtained. We propose an iterative algorithm to meet the dual goal of optimizing both the classifier and the projection. Extensive numerical studies on challenging datasets show robust performance of the proposed scheme over the alternatives when limited training data are used, verifying the advantage of the joint formulation.
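
As a rough illustration of such an alternating scheme (simplified to the binary, linear case, and not the authors' generalized multi-class Lagrangian SVM), one can iterate between fitting a linear SVM in the current subspace and taking a descent step on the projection followed by a QR retraction back onto the manifold:

```python
import numpy as np

def hinge_grad_W(W, X, y, w, b):
    """Subgradient of the hinge loss with respect to the projection W (binary case)."""
    Z = X @ W                                    # project the data into the subspace
    active = y * (Z @ w + b) < 1                 # points violating the margin
    G = np.zeros_like(W)
    for xi, yi in zip(X[active], y[active]):
        G -= yi * np.outer(xi, w)
    return G / len(X)

def joint_subspace_svm(X, y, d, n_outer=20, lr=0.1, C=1.0):
    rng = np.random.default_rng(0)
    W, _ = np.linalg.qr(rng.normal(size=(X.shape[1], d)))   # a point on the manifold
    w, b = np.zeros(d), 0.0
    for _ in range(n_outer):
        Z = X @ W
        # Step 1: fit the linear SVM in the current subspace by subgradient descent
        for _ in range(100):
            act = y * (Z @ w + b) < 1
            w -= lr * (w / C - (y[act][:, None] * Z[act]).sum(0) / len(X))
            b += lr * y[act].sum() / len(X)
        # Step 2: descend on W, then retract back onto the manifold via QR
        W, _ = np.linalg.qr(W - lr * hinge_grad_W(W, X, y, w, b))
    return W, w, b
```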

Relevance:

60.00%

Publisher:

Abstract:

Motivated by the development of a graphical representation of networks with a large number of vertices, useful for collaborative filtering applications, this work proposes the use of cohesion surfaces over a multidimensionally scaled thematic basis. To this end, it combines classical multidimensional scaling and Procrustes analysis in an iterative algorithm that produces partial solutions, which are then combined into a global solution. Applied to an example of book-lending transactions at the Karl A. Boedecker Library, the proposed algorithm produces interpretable and thematically coherent outputs, with lower stress than the classical scaling solution.
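
The two building blocks named in the abstract, classical multidimensional scaling and Procrustes analysis, can be sketched as follows. This is a generic illustration of combining partial MDS solutions via Procrustes alignment of an overlap (it does not reproduce the paper's iterative global-combination step), and all data are synthetic:

```python
import numpy as np

def classical_mds(D, d=2):
    """Classical (Torgerson) MDS from a distance matrix D."""
    n = len(D)
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J                  # double centering
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:d]             # keep the top-d eigenpairs
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))

def procrustes_align(X_ref, X, shared_ref, shared):
    """Rotate/translate X so its shared points match the reference configuration."""
    A, B = X_ref[shared_ref], X[shared]
    A0, B0 = A - A.mean(0), B - B.mean(0)
    U, _, Vt = np.linalg.svd(B0.T @ A0)
    R = U @ Vt                                   # optimal orthogonal transform
    return (X - B.mean(0)) @ R + A.mean(0)

# Toy demo: two overlapping blocks of points, scaled separately, then aligned
rng = np.random.default_rng(1)
P = rng.normal(size=(30, 2))                     # ground-truth configuration
blocks = [np.arange(0, 20), np.arange(10, 30)]   # indices overlap on 10..19
partial = []
for idx in blocks:
    D = np.linalg.norm(P[idx, None] - P[None, idx], axis=-1)
    partial.append(classical_mds(D))
shared0 = np.arange(10, 20)                      # overlap, block-0 local indices
shared1 = np.arange(0, 10)                       # overlap, block-1 local indices
aligned = procrustes_align(partial[0], partial[1], shared0, shared1)
```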

Relevance:

60.00%

Publisher:

Abstract:

Blind Source Separation (BSS) refers to the problem of estimating original signals from observed linear mixtures, with no knowledge about the sources or the mixing process. Independent Component Analysis (ICA) is a technique mainly applied to the BSS problem, and among the algorithms that implement it, FastICA is a high-performance iterative algorithm of low computational cost that uses non-Gaussianity measures based on higher-order statistics to estimate the original sources. The great number of applications in which ICA has proved useful reflects the need for hardware implementations of this technique, and the natural parallelism of FastICA favors its implementation on digital hardware. This work proposes an implementation of FastICA on a reconfigurable hardware platform to assess the viability of its use in blind source separation problems, more specifically in a hardware prototype embedded in a Field Programmable Gate Array (FPGA) board for the monitoring of beds in hospital environments. The implementation is carried out with Simulink models and synthesized with the DSP Builder software from Altera Corporation.
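
For reference, the FastICA fixed-point iteration that such a hardware design implements can be sketched in a few lines of Python (a software model of the algorithm, not of the Simulink/DSP Builder implementation); the log-cosh contrast (g = tanh) is one common choice:

```python
import numpy as np

def fastica_deflation(X, n_components, n_iter=200, tol=1e-8):
    """Minimal FastICA with deflation, using the log-cosh contrast (g = tanh)."""
    # Center and whiten the observations (rows = channels, columns = samples)
    X = X - X.mean(axis=1, keepdims=True)
    vals, vecs = np.linalg.eigh(np.cov(X))
    Xw = (vecs / np.sqrt(vals)).T @ X            # whitened data
    W = np.zeros((n_components, X.shape[0]))
    rng = np.random.default_rng(0)
    for p in range(n_components):
        w = rng.normal(size=X.shape[0]); w /= np.linalg.norm(w)
        for _ in range(n_iter):
            wx = w @ Xw
            g, g_prime = np.tanh(wx), 1 - np.tanh(wx) ** 2
            w_new = (Xw * g).mean(axis=1) - g_prime.mean() * w   # fixed-point step
            w_new -= W[:p].T @ (W[:p] @ w_new)   # deflation: remove found components
            w_new /= np.linalg.norm(w_new)
            if abs(abs(w_new @ w) - 1) < tol:    # converged up to sign
                w = w_new; break
            w = w_new
        W[p] = w
    return W @ Xw                                # estimated sources
```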

Relevance:

60.00%

Publisher:

Abstract:

This dissertation presents a methodology for the optimization of a building cold-water distribution system. It is a case study applied to the Tropical Búzios Residential Condominium, located at Búzios Beach in the municipality of Nísia Floresta, on the east coast of the state of Rio Grande do Norte, twenty kilometers from Natal. Designing cold-water distribution networks according to norm NBR 5626 of the ABNT (Brazilian Association of Technical Norms) does not guarantee that the solution found is the optimal, least-cost one; an optimization methodology is needed that yields, among all feasible solutions, the minimum-cost solution. To optimize the building water distribution system of the Tropical Búzios Condominium, the Granados Method is used: an iterative optimization algorithm, based on dynamic programming, that yields the minimum-cost network as a function of the piezometric head of the reservoir. For the application of this method to branched networks, a computer program written in the C language is used. The process is divided into two stages: obtaining a preliminary solution and reducing the piezometric head at the network origin. In the first stage, the smallest diameters are chosen that respect the maximum-velocity limit and the minimum-pressure requirements, and the head is raised as needed to satisfy them. In the second stage, an iterative process gradually reduces the head by replacing the pipes of selected stretches with the next larger diameter at a minimum increase in network cost; at each step the diameter change is made in the optimal stretch, the one with the smallest Exchange Gradient. The process terminates when the desired head is reached. Finally, the material costs of the optimized network are calculated and compared with the costs of the conventional design.
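
A minimal sketch of the exchange-gradient step of such an iteration is shown below; the diameters, costs, and the simplified Hazen-Williams loss formula are illustrative assumptions, not data from the dissertation:

```python
import numpy as np

# Hypothetical per-stretch data: candidate diameters (m) and unit costs (per m).
DIAMS = np.array([0.050, 0.075, 0.100, 0.150])
COSTS = np.array([20.0, 35.0, 55.0, 110.0])
C_HW = 140.0                                     # Hazen-Williams coefficient

def head_loss(q, d, length):
    """Hazen-Williams head loss (m) for flow q (m^3/s) in a pipe of diameter d."""
    return 10.67 * length * q ** 1.852 / (C_HW ** 1.852 * d ** 4.87)

def granados_sketch(flows, lengths, target_head_drop):
    """Start each stretch at its smallest diameter, then repeatedly upgrade the
    stretch with the smallest exchange gradient (cost increase per meter of
    head-loss reduction) until the total head loss meets the target."""
    level = np.zeros(len(flows), dtype=int)      # index into DIAMS per stretch
    def total_loss():
        return sum(head_loss(q, DIAMS[k], L) for q, k, L in zip(flows, level, lengths))
    while total_loss() > target_head_drop:
        best, best_grad = None, np.inf
        for i, (q, L) in enumerate(zip(flows, lengths)):
            if level[i] + 1 >= len(DIAMS):
                continue
            d_cost = (COSTS[level[i] + 1] - COSTS[level[i]]) * L
            d_loss = head_loss(q, DIAMS[level[i]], L) - head_loss(q, DIAMS[level[i] + 1], L)
            grad = d_cost / d_loss               # the Exchange Gradient
            if grad < best_grad:
                best, best_grad = i, grad
        if best is None:
            raise ValueError("target head drop unreachable with available diameters")
        level[best] += 1
    return DIAMS[level]

print(granados_sketch(flows=[0.004, 0.002, 0.001], lengths=[50, 30, 20],
                      target_head_drop=5.0))
```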

Relevance:

60.00%

Publisher:

Abstract:

The problem treated in this dissertation is to establish boundedness of the iterates of an iterative algorithm in ℓ2, under some technical conditions. The original paper, however, uses non-trivial intuitive arguments, and its proofs lack sufficient rigor. In this dissertation we discuss and strengthen the results of that paper in order to complete and simplify its proofs.

Relevance:

60.00%

Publisher:

Abstract:

Maximum-likelihood decoding is often the optimal decoding rule one can use, but it is very costly to implement in a general setting. Much effort has therefore been dedicated to finding efficient decoding algorithms that either achieve or approximate the error-correcting performance of the maximum-likelihood decoder. This dissertation examines two approaches to this problem. In 2003, Feldman and his collaborators defined the linear programming decoder, which operates by solving a linear programming relaxation of the maximum-likelihood decoding problem. As with many modern decoding algorithms, it is possible for the linear programming decoder to output vectors that do not correspond to codewords; such vectors are known as pseudocodewords. In this work, we completely classify the set of linear programming pseudocodewords for the family of cycle codes. For the case of the binary symmetric channel, another approximation of maximum-likelihood decoding was introduced by Omura in 1972. This decoder employs an iterative algorithm whose behavior closely mimics that of the simplex algorithm. We generalize Omura's decoder to operate on any binary-input memoryless channel, thus obtaining a soft-decision decoding algorithm. Further, we prove that the probability of the generalized algorithm returning the maximum-likelihood codeword approaches 1 as the number of iterations goes to infinity.
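
Feldman's relaxation can be made concrete in a few lines: minimize the LLR-weighted objective over the fundamental polytope, whose inequalities come from the odd-sized subsets of each check's support. A small sketch follows (the subset enumeration is exponential in the check degree, so this is only practical for low-density checks; the Hamming-code demo data are illustrative):

```python
import numpy as np
from itertools import combinations
from scipy.optimize import linprog

def lp_decode(H, llr):
    """LP decoding (Feldman's relaxation): minimize sum_i llr_i * x_i over the
    fundamental polytope defined by the parity-check matrix H."""
    m, n = H.shape
    A_ub, b_ub = [], []
    for j in range(m):
        support = np.flatnonzero(H[j])
        # One inequality per odd-sized subset S of the check's support:
        # sum_{i in S} x_i - sum_{i in support \ S} x_i <= |S| - 1
        for size in range(1, len(support) + 1, 2):
            for S in combinations(support, size):
                row = np.zeros(n)
                row[list(S)] = 1.0
                row[[i for i in support if i not in S]] = -1.0
                A_ub.append(row); b_ub.append(len(S) - 1)
    res = linprog(llr, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, 1)] * n, method="highs")
    return res.x   # integral -> the ML codeword; fractional -> a pseudocodeword

# Toy demo: (7,4) Hamming code, BSC log-likelihood ratios for a received word
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
received = np.array([1, 0, 0, 0, 0, 0, 0])       # all-zero codeword, one bit flipped
p = 0.1                                          # BSC crossover probability
llr = np.where(received == 0, 1, -1) * np.log((1 - p) / p)
print(lp_decode(H, llr))                         # recovers the all-zero codeword
```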

Relevance:

60.00%

Publisher:

Abstract:

In this paper, we propose nonlinear elliptical models for correlated data with heteroscedastic and/or autoregressive structures. Our aim is to extend the models proposed by Russo et al. [22] by considering a more sophisticated scale structure to deal with variations in data dispersion and/or a possible autocorrelation among measurements taken throughout the same experimental unit. Moreover, to avoid the possible influence of outlying observations or to take into account the non-normal symmetric tails of the data, we assume elliptical contours for the joint distribution of random effects and errors, which allows us to attribute different weights to the observations. We propose an iterative algorithm to obtain the maximum-likelihood estimates for the parameters and derive the local influence curvatures for some specific perturbation schemes. The motivation for this work comes from a pharmacokinetic indomethacin data set, which was analysed previously by Bocheng and Xuping [1] under normality.
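
As a toy illustration of why heavy-tailed elliptical errors downweight outliers, and of the kind of iterative maximum-likelihood scheme involved, here is IRLS/EM for a plain linear model with Student-t errors. This is a drastically simplified stand-in, with none of the random effects, heteroscedastic scale structure, or autocorrelation of the authors' model:

```python
import numpy as np

def t_irls(X, y, nu=4.0, n_iter=50, tol=1e-8):
    """Iteratively reweighted least squares for a linear model with Student-t
    errors: observations with large residuals receive small weights."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]      # OLS start
    sigma2 = np.mean((y - X @ beta) ** 2)
    for _ in range(n_iter):
        r = y - X @ beta
        w = (nu + 1) / (nu + r ** 2 / sigma2)        # EM weights for Student-t
        beta_new = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
        sigma2 = np.mean(w * (y - X @ beta_new) ** 2)
        if np.linalg.norm(beta_new - beta) < tol:
            beta = beta_new; break
        beta = beta_new
    return beta, sigma2

# Toy demo: regression with a few gross outliers
rng = np.random.default_rng(2)
X = np.column_stack([np.ones(100), rng.normal(size=100)])
y = X @ np.array([1.0, 2.0]) + rng.normal(0, 0.5, 100)
y[:5] += 15                                          # contaminate 5 observations
print(t_irls(X, y)[0])                               # close to [1, 2]
```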

Relevance:

60.00%

Publisher:

Abstract:

In this thesis, the factorization method for detecting conductivity inhomogeneities in electrical impedance tomography is studied on unbounded domains, specifically the half-plane and the half-space. As solution spaces for the direct problem, i.e. the determination of the electric potential for a given conductivity and a given boundary current, we introduce weighted Sobolev spaces. In these spaces, the existence of weak solutions of the direct problem is shown, and an integral representation is proved for the solution of the Laplace equation obtained in the case of homogeneous conductivity. Using the factorization method, we give an explicit characterization of inclusions whose conductivity jumps above or below the background value. For this class of conductivities, this also establishes the unique reconstructibility of the inclusions from knowledge of the local Neumann-Dirichlet map. The characterization of the inclusions obtained via the factorization method has been implemented in a numerical procedure and tested with simulated, partially perturbed data in both the two- and the three-dimensional case. In contrast to other known reconstruction methods, the one presented here requires no a priori information on the number and shape of the inclusions and, being non-iterative, has comparatively low computational cost.

Relevance:

60.00%

Publisher:

Abstract:

In this work we study localized electric potentials that have an arbitrarily high energy on some given subset of a domain and low energy on another. We show that such potentials exist for general L-infinity conductivities (with positive infima) in almost arbitrarily shaped subregions of a domain, as long as these regions are connected to the boundary and a unique continuation principle is satisfied. From this we deduce a simple, but new, theoretical identifiability result for the famous Calderón problem with partial data. We also show how to construct such potentials numerically and use a connection with the factorization method to derive a new non-iterative algorithm for the detection of inclusions in electrical impedance tomography.

Relevance:

60.00%

Publisher:

Abstract:

The main work of this thesis concerns the measurement of the ZZ production cross section using LHC 2011 data collected at a center-of-mass energy of 7 TeV by the ATLAS detector, corresponding to a total integrated luminosity of 4.6 inverse fb. The ZZ total cross section is then compared with the NLO prediction calculated with modern Monte Carlo generators. In addition, three differential distributions (∆φ(l,l), ZpT and M4l) are shown, unfolded back to the underlying distributions using a Bayesian iterative algorithm. Finally, the transverse momentum of the leading Z is used to set limits on anomalous triple gauge couplings, which are forbidden in the Standard Model.
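
In high-energy physics, the "Bayesian iterative algorithm" for unfolding usually refers to D'Agostini's method; a minimal sketch (with an assumed toy response matrix) is:

```python
import numpy as np

def bayesian_unfold(response, measured, prior=None, n_iter=4):
    """Iterative Bayesian (D'Agostini) unfolding.
    response[r, t] = P(reco bin r | true bin t); measured = observed counts."""
    n_reco, n_true = response.shape
    eff = response.sum(axis=0)                       # efficiency per true bin
    truth = np.full(n_true, measured.sum() / n_true) if prior is None else prior.copy()
    for _ in range(n_iter):
        # Bayes' theorem: P(true t | reco r) under the current truth estimate
        joint = response * truth                     # shape (n_reco, n_true)
        post = joint / joint.sum(axis=1, keepdims=True)
        truth = (post * measured[:, None]).sum(axis=0) / eff
    return truth

# Toy demo: 3 true bins smeared by a migration matrix, then unfolded
R = np.array([[0.8, 0.1, 0.0],
              [0.2, 0.7, 0.2],
              [0.0, 0.2, 0.8]])                      # columns sum to 1 (full efficiency)
true = np.array([1000.0, 500.0, 200.0])
meas = R @ true
print(bayesian_unfold(R, meas, n_iter=10))           # approaches `true`
```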

Relevance:

60.00%

Publisher:

Abstract:

A system for the digital holographic imaging of airborne objects, suitable for ground-based field measurements, was developed and constructed. Depending on the depth position, it can directly determine the size of airborne objects above approx. 20 µm, as well as their shape for sizes from approx. 100 µm up to the millimeter range. The development additionally comprised an algorithm for the automated improvement of hologram quality and for the semi-automatic distance determination of large objects. A way to intrinsically increase the efficiency of determining the depth position, by computing angle-averaged profiles, is presented. Furthermore, a procedure was developed that, using an iterative approach for isolated objects, allows the recovery of the phase information and thus the removal of the twin image. In addition, the effects of various limitations of digital holography, such as the finite pixel size, were investigated and discussed by means of simulations. The appropriate presentation of the three-dimensional spatial information poses a particular problem in digital holography, since the three-dimensional light field is not physically reconstructed. A procedure was developed and implemented that, by constructing a stereoscopic representation of the numerically reconstructed measurement volume, allows a quasi-three-dimensional, magnified viewing. Selected digital holograms recorded during field experiments on the Jungfraujoch were reconstructed. In part, a very high proportion of irregular crystal shapes was found, in particular as a consequence of massive riming. Objects down to the range of ≤ 20 µm were observed even during periods with formally ice-subsaturated conditions. Furthermore, applying the theory of the "phase edge effect" developed here, an object of only about 40 µm in size could be identified as an ice platelet. The greatest disadvantage of digital holography compared with conventional photographic imaging methods is the need for elaborate numerical reconstruction, resulting in a high computational cost to achieve a result comparable to a photograph. On the other hand, digital holography has unique strengths: access to the three-dimensional spatial information can serve the local investigation of relative object distances. It turned out, however, that the circumstances of digital holography currently hamper the observation of sufficiently large numbers of objects on the basis of individual holograms. It was demonstrated that complete object boundaries could be reconstructed even when an object was partly or entirely outside the geometric measurement volume. Furthermore, the sub-pixel reconstruction first demonstrated in simulations was applied to real holograms; quasi-point-like objects could in part be localized with sub-pixel accuracy, and additional information could be obtained for extended objects as well. Finally, interference patterns were observed on reconstructed ice crystals and in part tracked over time. At present, both crystal-internal reflection and the existence of a (quasi-)liquid layer appear possible as explanations, with some of the arguments favoring the latter.
As a result of this work, a system is now available that comprises a new measurement instrument and extensive algorithms. Publications resulting from this work:
S. M. F. Raupach, H.-J. Vössing, J. Curtius and S. Borrmann: Digital crossed-beam holography for in-situ imaging of atmospheric particles, J. Opt. A: Pure Appl. Opt. 8, 796-806 (2006).
S. M. F. Raupach: A cascaded adaptive mask algorithm for twin image removal and its application to digital holograms of ice crystals, Appl. Opt. 48, 287-301 (2009).
S. M. F. Raupach: Stereoscopic 3D visualization of particle fields reconstructed from digital inline holograms, Optik - Int. J. Light El. Optics (accepted for publication, 2009).
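
The iterative twin-image removal mentioned above belongs to the family of support-constrained phase-retrieval schemes. The sketch below is a generic member of that family (alternating angular-spectrum propagation, a support constraint in the object plane, and the measured amplitude in the hologram plane); it is not the cascaded adaptive mask algorithm of Raupach (2009), and all parameters are placeholders:

```python
import numpy as np

def angular_spectrum(u, wavelength, dx, z):
    """Propagate a complex field u by distance z (angular spectrum method)."""
    n = u.shape[0]
    fx = np.fft.fftfreq(n, dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0))  # drop evanescent waves
    return np.fft.ifft2(np.fft.fft2(u) * np.exp(1j * kz * z))

def remove_twin_image(hologram, wavelength, dx, z, support, n_iter=50):
    """Iterative twin-image suppression for an isolated object: alternate between
    the hologram plane (enforce the measured amplitude) and the object plane
    (enforce a known support), in the spirit of Gerchberg-Saxton iterations."""
    amp = np.sqrt(hologram)                  # measured intensity -> amplitude
    field = amp.astype(complex)              # start with zero phase
    for _ in range(n_iter):
        obj = angular_spectrum(field, wavelength, dx, -z)    # to the object plane
        obj[~support] = 1.0 + 0j             # outside the object: undisturbed wave
        field = angular_spectrum(obj, wavelength, dx, z)     # back to the hologram
        field = amp * np.exp(1j * np.angle(field))           # keep measured amplitude
    return angular_spectrum(field, wavelength, dx, -z)
```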

Relevance:

60.00%

Publisher:

Abstract:

This paper presents a novel variable-decomposition approach for pose recovery of the distal locking holes from a single calibrated fluoroscopic image. The problem is formulated as a model-based optimal fitting process, where the control variables are decomposed into two sets: (a) the angle between the nail axis and its projection on the imaging plane, and (b) the translation and rotation of the geometrical model of the distal locking hole around the nail axis. By using an iterative algorithm to find the optimal values of the latter set of variables for any given value of the former variable, we reduce the multi-dimensional model-based optimal fitting problem to a one-dimensional search along a finite interval. We report the results of our in vitro experiments, which demonstrate that the accuracy of our approach is adequate for successful distal locking of intramedullary nails.
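
The decomposition amounts to a nested optimization: an outer one-dimensional search over the angle, with an inner fit over the remaining variables at each angle. A structural sketch in Python follows; the residual function `fit_residual` is a hypothetical placeholder for the paper's model-based projection error:

```python
import numpy as np
from scipy.optimize import minimize, minimize_scalar

def fit_inner(alpha, fit_residual):
    """Inner iterative fit: best translation/rotation around the nail axis
    for a fixed nail-axis angle alpha."""
    res = minimize(lambda p: fit_residual(alpha, p), x0=np.zeros(2),
                   method="Nelder-Mead")
    return res.fun, res.x

def pose_recovery(fit_residual, alpha_range=(0.0, np.pi / 2)):
    """Outer one-dimensional search over the angle between the nail axis and
    the imaging plane; each evaluation runs the inner fit."""
    outer = minimize_scalar(lambda a: fit_inner(a, fit_residual)[0],
                            bounds=alpha_range, method="bounded")
    best_alpha = outer.x
    _, best_params = fit_inner(best_alpha, fit_residual)
    return best_alpha, best_params

# A synthetic, convex stand-in for the model-based projection error:
demo = lambda a, p: (a - 0.4) ** 2 + (p[0] - 1.0) ** 2 + (p[1] + 0.5) ** 2
print(pose_recovery(demo))                  # finds alpha ~ 0.4, p ~ (1.0, -0.5)
```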

Relevance:

60.00%

Publisher:

Abstract:

We describe an extension to the SOFTSUSY program that provides for the calculation of the sparticle spectrum in the Next-to-Minimal Supersymmetric Standard Model (NMSSM), where a chiral superfield that is a singlet of the Standard Model gauge group is added to the Minimal Supersymmetric Standard Model (MSSM) fields. Often, a Z3 symmetry is imposed upon the model. SOFTSUSY can calculate the spectrum in this case, as well as in the case where general Z3-violating terms are added to the soft supersymmetry breaking terms and the superpotential. The user provides a theoretical boundary condition for the couplings and mass terms of the singlet. Radiative electroweak symmetry breaking data, along with electroweak and CKM matrix data, are used as weak-scale boundary conditions. The renormalisation group equations are solved numerically between the weak scale and a high energy scale using a nested iterative algorithm. This paper serves as a manual for the NMSSM mode of the program, detailing the approximations and conventions used.
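
The nested iterative strategy, running the renormalisation group equations between two scales and imposing boundary conditions at each end until a fixed point is reached, can be illustrated with a deliberately toy model. The couplings, beta functions, and numbers below are invented for illustration and bear no relation to SOFTSUSY's actual RGEs:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy setup: a gauge-like coupling g fixed by data at the weak scale, and a
# soft-mass-like parameter m fixed by a boundary condition M0 at the high scale.
B, C = 0.005, 0.02
T_WEAK, T_HIGH = 0.0, 30.0       # t = log(mu)
G_OBS, M0 = 0.5, 100.0

def rge(t, y):
    g, m = y
    return [B * g ** 3, C * g ** 2 * m]

def run(y0, t0, t1):
    """Integrate the toy RGEs from scale t0 to t1 (forwards or backwards)."""
    return solve_ivp(rge, (t0, t1), y0, rtol=1e-10).y[:, -1]

def nested_iteration(n_iter=50, tol=1e-9):
    m_weak = M0                  # initial guess for the soft parameter at the weak scale
    for _ in range(n_iter):
        g_high, _ = run([G_OBS, m_weak], T_WEAK, T_HIGH)    # run up
        # Impose the high-scale boundary condition on the soft parameter only,
        # then run back down to the weak scale:
        _, m_weak_new = run([g_high, M0], T_HIGH, T_WEAK)
        if abs(m_weak_new - m_weak) < tol:                  # fixed point reached
            return m_weak_new
        m_weak = m_weak_new
    return m_weak

print(nested_iteration())
```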

Relevance:

60.00%

Publisher:

Abstract:

A method to reduce truncation errors in near-field antenna measurements is presented. The method is based on the Gerchberg-Papoulis iterative algorithm used to extrapolate band-limited functions and it is able to extend the valid region of the calculated far-field pattern up to the whole forward hemisphere. The extension of the valid region is achieved by the iterative application of a transformation between two different domains. After each transformation, a filtering process that is based on known information at each domain is applied. The first domain is the spectral domain in which the plane wave spectrum (PWS) is reliable only within a known region. The second domain is the field distribution over the antenna under test (AUT) plane in which the desired field is assumed to be concentrated on the antenna aperture. The method can be applied to any scanning geometry, but in this paper, only the planar, cylindrical, and partial spherical near-field measurements are considered. Several simulation and measurement examples are presented to verify the effectiveness of the method.
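
The core of the Gerchberg-Papoulis iteration, alternating a band-limitation filter in one domain with the known reliable data in the other, is easy to demonstrate in one dimension. The sketch below uses a generic truncated band-limited signal rather than the paper's plane-wave-spectrum/AUT-plane pair:

```python
import numpy as np

def gerchberg_papoulis(measured, known_mask, band_mask, n_iter=100):
    """1-D Gerchberg-Papoulis extrapolation: recover a band-limited signal outside
    the measured (truncated) region by alternating between two constraints:
    band-limitation in the spectral domain, known samples in the signal domain."""
    x = np.where(known_mask, measured, 0.0).astype(complex)
    for _ in range(n_iter):
        X = np.fft.fft(x)
        X[~band_mask] = 0                        # filter: enforce band limitation
        x = np.fft.ifft(X)
        x[known_mask] = measured[known_mask]     # restore known (reliable) samples
    return x.real

# Toy demo: a band-limited signal observed only on the central 60% of the window
n = 256
t = np.arange(n)
signal = np.cos(2 * np.pi * 3 * t / n) + 0.5 * np.sin(2 * np.pi * 5 * t / n)
known = (t > n * 0.2) & (t < n * 0.8)
band = np.zeros(n, bool)
band[:9] = band[-8:] = True                      # keep frequencies |k| <= 8
rec = gerchberg_papoulis(signal * known, known, band)
print(np.max(np.abs(rec[~known] - signal[~known])))   # extrapolation error
```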