12 results for "Iterative algorithm"
at Universidad Politécnica de Madrid
Abstract:
A method to reduce truncation errors in near-field antenna measurements is presented. The method is based on the Gerchberg-Papoulis iterative algorithm used to extrapolate band-limited functions and it is able to extend the valid region of the calculated far-field pattern up to the whole forward hemisphere. The extension of the valid region is achieved by the iterative application of a transformation between two different domains. After each transformation, a filtering process that is based on known information at each domain is applied. The first domain is the spectral domain in which the plane wave spectrum (PWS) is reliable only within a known region. The second domain is the field distribution over the antenna under test (AUT) plane in which the desired field is assumed to be concentrated on the antenna aperture. The method can be applied to any scanning geometry, but in this paper, only the planar, cylindrical, and partial spherical near-field measurements are considered. Several simulation and measurement examples are presented to verify the effectiveness of the method.
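The core of the iteration can be sketched compactly. The following Python fragment is a minimal 1D illustration of a Gerchberg-Papoulis-style loop, not the authors' implementation: the FFT-based domain transformation and the two masks (reliable spectral region, AUT aperture) are simplifying assumptions.

```python
import numpy as np

def gerchberg_papoulis(pws_measured, reliable_mask, aut_mask, n_iter=50):
    """Iteratively extrapolate a truncated plane wave spectrum (PWS).

    pws_measured  : complex PWS, reliable only where reliable_mask is True
    reliable_mask : boolean mask of the spectral region known to be valid
    aut_mask      : boolean mask of the AUT aperture in the spatial domain
    """
    pws = pws_measured.copy()
    for _ in range(n_iter):
        # 1) Transform to the field distribution over the AUT plane.
        field = np.fft.ifft(np.fft.ifftshift(pws))
        # 2) Spatial filtering: the field is assumed concentrated on the aperture.
        field *= aut_mask
        # 3) Transform back to the spectral domain.
        pws = np.fft.fftshift(np.fft.fft(field))
        # 4) Spectral filtering: restore the known (reliable) PWS samples.
        pws[reliable_mask] = pws_measured[reliable_mask]
    return pws
```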
Abstract:
This paper describes new approaches to improve the local and global approximation (matching) and modeling capability of the Takagi–Sugeno (T-S) fuzzy model. The main aim is to obtain high function-approximation accuracy and fast convergence. The main problem encountered is that the T-S identification method cannot be applied when the membership functions are overlapped by pairs. This restricts the application of the T-S method, because this type of membership function has been widely used during the last two decades in the stability analysis and controller design of fuzzy systems and is popular in industrial control applications. The approach developed here can be considered a generalized version of the T-S identification method with optimized performance in approximating nonlinear functions. We propose a noniterative method based on a weighting-of-parameters approach and an iterative algorithm applying the extended Kalman filter, based on the same parameter-weighting idea. We show that the Kalman filter is an effective tool in the identification of the T-S fuzzy model. A fuzzy-controller-based linear quadratic regulator is proposed in order to show the effectiveness of the estimation method developed here in control applications. An illustrative example of an inverted pendulum is chosen to evaluate the robustness and performance of the proposed method, locally and globally, in comparison with the original T-S model. Simulation results indicate the potential, simplicity, and generality of the algorithm, and we prove that these algorithms converge very fast, making them practical to use.
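Because the T-S output is linear in the consequent parameters, a Kalman-filter update for their identification reduces to a recursive least-squares-style recursion. The sketch below is a hedged, single-input illustration of that idea; the rule structure, the membership function mu and the noise covariance r are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def ts_kf_identify(x, y, mu, p0=1e3, r=1e-2):
    """Recursively estimate consequent parameters theta of a T-S model
    y ≈ sum_i mu_i(x) * (a_i * x + b_i) with a Kalman-filter update.

    x, y : training samples
    mu   : function returning the normalized firing strengths mu_i(x)
    """
    n_rules = len(mu(x[0]))
    theta = np.zeros(2 * n_rules)        # [a_1, b_1, a_2, b_2, ...]
    P = p0 * np.eye(2 * n_rules)         # parameter covariance
    for xk, yk in zip(x, y):
        w = mu(xk)
        # The measurement is linear in theta, so the filter Jacobian
        # coincides with the regressor itself.
        h = np.concatenate([[wi * xk, wi] for wi in w])
        K = P @ h / (h @ P @ h + r)      # Kalman gain
        theta += K * (yk - h @ theta)    # innovation update
        P -= np.outer(K, h) @ P          # covariance update
    return theta
```

Here mu could be, for instance, two normalized Gaussian memberships over the input range.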
Abstract:
We have analyzed the performance of a PET demonstrator formed by two sectors of four monolithic detector blocks placed face-to-face. Both front-end and read-out electronics have been evaluated by means of coincidence measurements using a rotating 22Na source placed at the center of the sectors in order to emulate the behavior of a complete full ring. A continuous training method based on neural network (NN) algorithms has been applied to determine the entrance points on the surface of the detectors. Reconstructed images from a 1 MBq 22Na point source and a 22Na Derenzo phantom have been obtained using both filtered back projection (FBP) analytic methods and the OSEM 3D iterative algorithm available in the STIR software package [1]. Preliminary data on image reconstruction from a 22Na point source with Ø = 0.25 mm show spatial resolutions from 1.7 to 2.1 mm FWHM in the transverse plane. The results confirm the viability of this design for the development of a full-ring brain PET scanner compatible with magnetic resonance imaging for human studies.
Abstract:
One important task in the design of an antenna is to carry out an analysis to find the characteristics of the antenna that best fulfil the specifications fixed by the application. After that, a prototype is manufactured, and the next stage in the design process is to check whether the radiation pattern differs from the designed one. Besides the radiation pattern, other radiation parameters such as directivity, gain, impedance, beamwidth, efficiency and polarization must also be evaluated. For this purpose, accurate antenna measurement techniques are needed in order to know exactly the actual electromagnetic behavior of the antenna under test. For this reason, most measurements are performed in anechoic chambers: closed, normally shielded areas covered with electromagnetic absorbing material that simulate free-space propagation conditions. Moreover, these facilities can be employed independently of the weather conditions and allow measurements free from interference. Despite all the advantages of anechoic chambers, the results obtained from both far-field and near-field measurements are inevitably affected by errors. Thus, the main objective of this Thesis is to propose algorithms to improve the quality of the results obtained in antenna measurements by using post-processing techniques, without requiring additional measurements. First, an in-depth review of the state of the art has been carried out in order to give a general view of the possibilities for characterizing or reducing the effects of errors in antenna measurements. Then, new methods to reduce the unwanted effects of four of the most common errors in antenna measurements are described and validated both theoretically and numerically. The basis of all of them is the same: to perform a transformation from the measurement surface to another domain where there is enough information to easily remove the contribution of the errors. The four errors analyzed are noise, reflections, truncation errors and leakage, and the tools used to suppress them are mainly source reconstruction techniques, spatial and modal filtering, and iterative algorithms to extrapolate functions. Therefore, the main idea of all the methods is to modify the classical near-field-to-far-field transformations by including additional steps with which errors can be greatly suppressed. Moreover, the proposed methods are not computationally complex and, because they are applied in post-processing, additional measurements are not required. Noise is the most widely studied error in this Thesis, with a total of three alternatives proposed to filter out an important noise contribution before obtaining the far-field pattern. The first one is based on modal filtering (see the sketch after this abstract). The second alternative uses a source reconstruction technique to obtain the extreme near-field, where it is possible to apply a spatial filtering. The last one is to back-propagate the measured field to a surface with the same geometry as the measurement surface but closer to the AUT, and then to also apply a spatial filtering. All the alternatives are analyzed in the three most common near-field systems, including comprehensive statistical noise analyses in order to deduce the signal-to-noise ratio improvement achieved in each case.
The method to suppress reflections in antenna measurements is also based on a source reconstruction technique; the main idea is to reconstruct the field over a surface larger than the antenna aperture in order to identify and later suppress the virtual sources related to the reflected waves. The truncation error present in the results obtained from planar, cylindrical and partial spherical near-field measurements is the third error analyzed in this Thesis. The method to reduce this error is based on an iterative algorithm that extrapolates the reliable region of the far-field pattern from the knowledge of the field distribution on the AUT plane. The proper termination point of this iterative algorithm, as well as other critical aspects of the method, are also studied. The last part of this work is dedicated to the detection and suppression of the two most common leakage sources in antenna measurements. A first method estimates the leakage bias constant added by the receiver's quadrature detector to every near-field datum and then suppresses its effect on the far-field pattern. The second method can be divided into two parts: the first finds the position of the faulty component that radiates or receives unwanted radiation, making its identification within the measurement environment and its later substitution easier; the second is able to computationally remove the leakage effect without requiring the substitution of the faulty component.
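As a rough illustration of the modal-filtering alternative referenced above, the following sketch keeps only the modes that an antenna enclosed in a sphere of radius a can radiate, following the common rule of thumb N ≈ ka + 10; the coefficient layout and the margin are assumptions for illustration, not the thesis implementation.

```python
import numpy as np

def modal_noise_filter(coeffs, k, radius, margin=10):
    """Suppress noise by zeroing modes beyond the visible region of an AUT.

    coeffs : complex modal coefficients indexed by mode order n = 0, 1, ...
    k      : wavenumber (2*pi/wavelength)
    radius : radius of the minimum sphere enclosing the AUT
    margin : extra modes kept, following the common N ≈ k*a + 10 rule
    """
    n_max = int(np.ceil(k * radius)) + margin
    filtered = coeffs.copy()
    filtered[n_max + 1:] = 0.0   # modes above n_max carry mostly noise
    return filtered
```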
Abstract:
In this thesis, a procedure to evaluate the mechanical strength of crystalline silicon wafers is proposed and applied in different studies.
The photovoltaic industry is mainly based on crystalline silicon modules. These modules are composed of solar cells, which are in turn based on silicon wafers. In order to reduce the cost of solar modules, a clear tendency to use thinner wafers has been observed in recent years. Since the stiffness varies with the thickness, the handling techniques need to be modified in order to guarantee a low breakage rate. To this end, the mechanical strength has to be characterized correctly. In the first part of the thesis, silicon wafers are described, including the different ways to produce them and the mechanical properties of interest. The influence of the crystallographic structure on the strength and the behaviour (the silicon crystal is anisotropic) is shown. In addition, a method to characterize the mechanical strength is proposed. This probabilistic procedure is based on methods to characterize brittle materials, with the strength characterized by the values of the three parameters of the Weibull cumulative distribution function (cdf). The proposed method requires carrying out test campaigns, simulating them with finite element models, and running an iterative algorithm to estimate the parameters of the Weibull cdf. In the second part of the thesis, the different types of test that are usually employed with these samples are described. Moreover, different finite element models for the simulation of each test are compared with regard to the information supplied by each model and the calculation times. Finally, the method of characterization is applied to three practical applications. The first consists of comparing the mechanical strength of silicon wafers depending on the ingot growth method: the conventional monocrystalline wafers based on the Czochralski method and the multicrystalline ones are compared with the new quasi-monocrystalline substrates obtained by casting. The second application is the estimation of the depth of the cracks caused by the process of sawing the ingot into wafers. An indirect approach is used to this end: several sets of silicon wafers are subjected to chemical etchings of different duration. The etching reduces the thickness of the wafers, removing the most damaged layers. The strength of each set is obtained by means of the proposed method, and the comparison makes it possible to estimate the crack depth. Finally, the procedure is applied to determine the strength of wafers used for the design of back-contact cells of the EWT type. These samples are drilled in a first step, resulting in silicon wafers with thousands of tiny holes that weaken them considerably. The strength of the drilled wafers is obtained and compared with that of a standard set without holes. Moreover, a simplified approach based on a stress-intensification surface is proposed.
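As a hedged illustration of the fitting step alone, the three Weibull parameters can be estimated by maximum likelihood from a set of fracture stresses with SciPy; the coupling with finite element simulation that the thesis describes is not reproduced here, and the data below are invented placeholders.

```python
import numpy as np
from scipy.stats import weibull_min

# Hypothetical fracture stresses (MPa) from a test campaign.
fracture_stress = np.array([112., 125., 131., 140., 146., 152., 160., 171.])

# Maximum-likelihood fit of the three-parameter Weibull law:
# shape m (Weibull modulus), location (threshold stress) and scale.
m, sigma_u, sigma_0 = weibull_min.fit(fracture_stress)

# Failure probability predicted at a given stress level.
print(weibull_min.cdf(150.0, m, loc=sigma_u, scale=sigma_0))
```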
Abstract:
The option value problem with two costs is written as a variational inequality. The advantage of this formulation is that it takes place in a fixed domain; thus no front tracking is needed for the numerical approximation of the free boundary. An iterative algorithm is proposed which can be used to solve the nonlinear system obtained by finite difference or finite element procedures. Special care has to be taken in the design of the finite difference schemes or finite elements due to the degeneracy of the differential operator; these schemes can be absorption- or convection-dominated near the axis. This is a preliminary note to the study of this kind of problem.
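The note does not spell out the iteration, but a standard choice for variational inequalities discretized by finite differences is projected SOR (PSOR). The sketch below shows such a generic scheme, under the assumption of a discrete obstacle problem; the matrix A, right-hand side b and obstacle psi are placeholders.

```python
import numpy as np

def psor(A, b, psi, omega=1.5, tol=1e-8, max_iter=10_000):
    """Projected SOR for the discrete variational inequality
    A x >= b, x >= psi, (A x - b) . (x - psi) = 0."""
    x = np.maximum(psi, 0.0)
    n = len(b)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Gauss-Seidel value for component i.
            gs = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
            # Over-relax, then project onto the obstacle constraint.
            x[i] = max(psi[i], x[i] + omega * (gs - x[i]))
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x
```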
Abstract:
We present a framework for the analysis of the decoding delay in multiview video coding (MVC). We show that in real-time applications an accurate estimation of the decoding delay is essential to achieve a minimum communication latency. As opposed to single-view codecs, the complexity of the multiview prediction structure and the parallel decoding of several views require a systematic analysis of this decoding delay, which we solve using graph theory and a model of the decoder hardware architecture. Our framework assumes a decoder implementation on general-purpose multi-core processors with multi-threading capabilities. For this hardware model, we show that frame processing times depend on the computational load of the decoder, and we provide an iterative algorithm to jointly compute frame processing times and decoding delay. Finally, we show that the decoding delay analysis can be applied to the design of decoders with the objective of minimizing the communication latency of the MVC system.
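As a toy illustration of the joint computation (not the paper's algorithm), the sketch below iterates two coupled quantities to a fixed point: frame start times follow from the prediction-dependency graph, and processing times are inflated by a crude load factor standing in for the computational-load model. The graph, the load model and all names are assumptions.

```python
def decoding_delays(deps, base_time, n_cores, n_iter=20):
    """Iteratively estimate frame finish times in an MVC-like decoder.

    deps      : dict frame -> list of frames it depends on (prediction DAG)
    base_time : dict frame -> processing time on an unloaded core
    n_cores   : number of cores decoding frames in parallel
    """
    finish = {f: base_time[f] for f in deps}
    # Crude load factor: concurrent frames share the available cores.
    load = max(1.0, len(deps) / n_cores)
    for _ in range(n_iter):
        for f in deps:
            # A frame can start once all frames it predicts from are done.
            start = max((finish[d] for d in deps[f]), default=0.0)
            finish[f] = start + base_time[f] * load
    return finish
```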
Abstract:
Electric probes are objects immersed in the plasma with sharp boundaries which collect or emit charged particles. Consequently, the nearby plasma evolves under abrupt imposed and/or naturally emerging conditions. There can be localized currents, different time scales for the evolution of each plasma species, charge separation and absorbing-emitting walls. Traditional numerical schemes based on differences often transform these disparate boundary conditions into computational singularities. This is the case for models using advection-diffusion differential equations with source-sink terms (also called Fokker-Planck equations). These equations are used in both fluid and kinetic descriptions to obtain the distribution functions or the density of each plasma species close to the boundaries. We present a resolution method grounded on an integral advancing scheme using approximate Green's functions, also called short-time propagators. All the integrals, as a path-integration process, are calculated numerically, which yields a robust grid-free computational integral method that is unconditionally stable for any time step. Hence, sharp boundary conditions, such as the current emission from a wall, can be treated during the short-time regime, providing solutions that behave as if they were known analytically at each time step. The form of the propagator (typically a multivariate Gaussian) is not unique, and it can be adjusted during the advancing scheme to preserve the conserved quantities of the problem. The effects of electric or magnetic fields can be incorporated into the iterative algorithm. The method allows smooth transitions of the evolving solutions even when abrupt discontinuities are present. In this work, a procedure to incorporate the boundary conditions into the numerical integral scheme is proposed for the very first time. This numerical scheme is applied to model the interaction of the plasma bulk with a charge-emitting electrode, dealing with fluid diffusion equations combined self-consistently with the Poisson equation. The stability of this computational method has been checked for any number of iterations, even when advancing in time electrons and ions having different time scales. This work establishes the basis to deal in future work with problems related to plasma thrusters or emissive probes in electromagnetic fields.
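A minimal sketch of the short-time-propagator idea for a 1D diffusion equation is given below: the density is advanced by integrating it against a Gaussian Green's function, which is stable for any time step. Drift terms, the Poisson coupling and the boundary treatment proposed in the work are omitted; grids and coefficients are illustrative.

```python
import numpy as np

def advance_density(n, x, D, dt):
    """One short-time step of dn/dt = D d2n/dx2 via its Green's function:
    n(x, t+dt) = integral G(x - x', dt) n(x', t) dx'  with a Gaussian G."""
    dx = x[1] - x[0]
    n_new = np.empty_like(n)
    for i, xi in enumerate(x):
        # Short-time (Gaussian) propagator of the diffusion operator.
        G = np.exp(-(xi - x) ** 2 / (4.0 * D * dt)) / np.sqrt(4.0 * np.pi * D * dt)
        n_new[i] = np.sum(G * n) * dx   # path-integral step as quadrature
    return n_new
```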
Abstract:
A fully 3D iterative image reconstruction algorithm has been developed for high-resolution PET cameras composed of pixelated scintillator crystal arrays and rotating planar detectors, based on the ordered subsets approach. The associated system matrix is precalculated with Monte Carlo methods that incorporate physical effects not included in analytical models, such as positron range effects and interaction of the incident gammas with the scintillator material. Custom Monte Carlo methodologies have been developed and optimized for modelling of system matrices for fast iterative image reconstruction adapted to specific scanner geometries, without redundant calculations. According to the methodology proposed here, only one-eighth of the voxels within two central transaxial slices need to be modelled in detail. The rest of the system matrix elements can be obtained with the aid of axial symmetries and redundancies, as well as in-plane symmetries within transaxial slices. Sparse matrix techniques for the non-zero system matrix elements are employed, allowing for fast execution of the image reconstruction process. This 3D image reconstruction scheme has been compared in terms of image quality to a 2D fast implementation of the OSEM algorithm combined with Fourier rebinning approaches. This work confirms the superiority of fully 3D OSEM in terms of spatial resolution, contrast recovery and noise reduction as compared to conventional 2D approaches based on rebinning schemes. At the same time it demonstrates that fully 3D methodologies can be efficiently applied to the image reconstruction problem for high-resolution rotational PET cameras by applying accurate pre-calculated system models and taking advantage of the system's symmetries.
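A schematic OSEM update with a precomputed sparse system matrix looks as follows; subset construction, the symmetry exploitation and the Monte Carlo modelling are left out, and all names are placeholders rather than the authors' code.

```python
import numpy as np
from scipy.sparse import csr_matrix

def osem(A, y, subsets, n_iter=5):
    """Ordered-subsets EM reconstruction.

    A       : sparse system matrix (LORs x voxels), e.g. Monte Carlo derived
    y       : measured coincidences per line of response (LOR)
    subsets : list of index arrays partitioning the LORs
    """
    x = np.ones(A.shape[1])                           # initial image
    for _ in range(n_iter):
        for s in subsets:
            As = A[s]                                 # rows of this subset
            sens = np.asarray(As.sum(axis=0)).ravel() # sensitivity image
            proj = As @ x                             # forward projection
            ratio = np.where(proj > 0, y[s] / np.maximum(proj, 1e-12), 0.0)
            x *= (As.T @ ratio) / np.maximum(sens, 1e-12)  # EM update
    return x
```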
Abstract:
This paper presents a time-domain stochastic system identification method based on maximum likelihood estimation (MLE) with the expectation-maximization (EM) algorithm. The effectiveness of this structural identification method is evaluated through numerical simulation in the context of the ASCE benchmark problem on structural health monitoring. The benchmark structure is a four-story, two-bay by two-bay steel-frame scale model built in the Earthquake Engineering Research Laboratory at the University of British Columbia, Canada. This paper focuses on Phase I of the analytical benchmark studies. A MATLAB-based finite element analysis code obtained from the IASC-ASCE SHM Task Group web site is used to calculate the dynamic response of the prototype structure, and 100 simulations have been run with this code in order to evaluate the proposed identification method. There are several techniques to realize system identification; in this work, the stochastic subspace identification (SSI) method has been used for comparison. SSI is a well-known method that computes accurate estimates of the modal parameters. The principles of the SSI method are introduced in the paper, and then the proposed MLE with EM algorithm is explained in detail. The advantages of the proposed structural identification method can be summarized as follows: (i) the method is based on maximum likelihood, which implies minimum-variance estimates; (ii) EM is a computationally simpler estimation procedure than other optimization algorithms; and (iii) it estimates more parameters than SSI, and these estimates are accurate. On the contrary, the main disadvantages of the method are: (i) the EM algorithm is an iterative procedure and consumes time until convergence is reached; and (ii) the method needs starting values for the parameters. Modal parameters (eigenfrequencies, damping ratios and mode shapes) of the benchmark structure have been estimated using both the SSI method and the proposed MLE + EM method. The numerical results show that the proposed method identifies eigenfrequencies, damping ratios and mode shapes reasonably well even in the presence of 10% measurement noise, and these modal parameters are more accurate than the SSI estimates.
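A minimal sketch of EM-based identification of a linear stochastic state-space model is shown below using the pykalman library (an assumption; the paper's implementation is MATLAB-based). The modal parameters then follow from the eigendecomposition of the identified transition matrix.

```python
import numpy as np
from pykalman import KalmanFilter

# Hypothetical response measurements: (n_samples, n_channels).
y = np.loadtxt("responses.txt")          # placeholder data file

# State-space model with unknown matrices; EM alternates Kalman smoothing
# (E-step) with closed-form maximization of the expected log-likelihood.
kf = KalmanFilter(n_dim_state=8, n_dim_obs=y.shape[1])
kf = kf.em(y, n_iter=50)                 # iterative MLE via EM

# Eigenvalues of the transition matrix give discrete-time poles, from which
# eigenfrequencies and damping ratios follow (dt is the sampling interval).
dt = 0.01
poles = np.linalg.eigvals(kf.transition_matrices)
lam = np.log(poles) / dt                 # continuous-time poles
freqs = np.abs(lam) / (2 * np.pi)        # eigenfrequencies (Hz)
damping = -np.real(lam) / np.abs(lam)    # damping ratios
```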
Abstract:
The boundary element method (BEM) has been applied successfully to many engineering problems during the last decades. Compared with domain-type methods like the finite element method (FEM) or the finite difference method (FDM), the BEM can handle problems where the medium extends to infinity much more easily, as there is no need to develop special boundary conditions (quiet or absorbing boundaries) or infinite elements to limit the studied domain. The determination of the dynamic stiffness of arbitrarily shaped footings is just one of the fields where the BEM has been the method of choice, especially in the 1980s. With the continuous development of computer technology and the available hardware, the size of the problems under study grew and, as the flop count for solving the resulting linear system of equations grows with the third power of the number of equations, there was a need for iterative methods with better performance. In [1] the GMRES algorithm was presented, which is now widely used in implementations of the collocation BEM. While the FEM results in sparsely populated coefficient matrices, the BEM leads, in general, to fully or densely populated ones, depending on the number of subregions, posing a serious memory problem even for today's computers. If the geometry of the problem permits the surface of the domain to be meshed with equally shaped elements, many of the resulting coefficients will be calculated and stored repeatedly. The present paper shows how these unnecessary operations can be avoided, reducing the calculation time as well as the storage requirement. To this end, a similar-coefficient identification algorithm (SCIA) has been developed and implemented in a program written in Fortran 90. The vertical dynamic stiffness of a single pile in layered soil has been chosen to test the performance of the implementation. The results obtained with the 3D model may be compared with those obtained with an axisymmetric formulation, which are considered the reference values since the mesh quality is much better. The entire 3D model comprises more than 35,000 DOFs, the biggest single region being a soil region with 21,168 DOFs. Note that the memory necessary to store all coefficients of this single region is about 6.8 GB, an amount usually not available on personal computers. In the problem under study, the interface zone between the two adjacent soil regions as well as the surface of the top layer may be meshed with equally sized elements. In this case the application of the SCIA leads to an important reduction in memory requirements: the maximum memory used during the calculation has been reduced to 1.2 GB. The application of the SCIA thus permits problems to be solved on personal computers which would otherwise require much more powerful hardware.
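The idea behind identifying similar coefficients can be sketched as a cache keyed by the relative source/element geometry, so that geometrically identical configurations reuse the same influence coefficients. The paper's Fortran 90 implementation is not reproduced; the translation-invariant key below is a simplifying assumption (identically shaped elements, translation-invariant kernel).

```python
import numpy as np

coefficient_cache = {}

def influence_coefficient(source, element_origin, element_shape, kernel):
    """Return a BEM influence coefficient, reusing previously computed
    values for geometrically identical source/element configurations."""
    # The relative position suffices for a translation-invariant kernel
    # acting on equally shaped elements; rounding makes keys hashable.
    rel = np.round(np.asarray(source) - np.asarray(element_origin), 9)
    key = (tuple(rel), element_shape)
    if key not in coefficient_cache:
        coefficient_cache[key] = kernel(rel)   # expensive integration here
    return coefficient_cache[key]
```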
Abstract:
In this contribution a novel iterative bit- and power-allocation (IBPA) approach is developed for transmitting a given bit/s/Hz data rate over a correlated frequency non-selective (4×4) Multiple-Input Multiple-Output (MIMO) channel. The iterative resource-allocation algorithm developed in this investigation aims at achieving the minimum bit-error rate (BER) in a correlated MIMO communication system. In order to achieve this goal, the available bits are iteratively allocated to the active MIMO layers that present the minimum transmit-power requirement per time slot.
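A greedy sketch consistent with that description is shown below: each 2-bit increment goes to the layer whose transmit power to support the enlarged constellation at the target BER grows the least. The QAM power formula (SNR-gap approximation) and the layer gains are illustrative assumptions, not the paper's exact allocation rule.

```python
import numpy as np

def iterative_bit_allocation(layer_gain, total_bits, gamma=9.8):
    """Greedily allocate bits to MIMO layers to minimize transmit power.

    layer_gain : effective power gain of each active layer
    total_bits : total bits per time slot to allocate (in 2-bit steps)
    gamma      : SNR gap for the target BER (linear scale)
    """
    bits = np.zeros(len(layer_gain), dtype=int)
    power = lambda b, g: gamma * (2.0 ** b - 1.0) / g  # power to carry b bits
    for _ in range(0, total_bits, 2):
        # Incremental power of adding 2 bits to each layer.
        delta = [power(b + 2, g) - power(b, g) for b, g in zip(bits, layer_gain)]
        bits[int(np.argmin(delta))] += 2
    return bits
```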