53 results for post-processing method
at Universidad Politécnica de Madrid
Abstract:
One important task in the design of an antenna is to carry out an analysis to find the characteristics of the antenna that best fulfil the specifications fixed by the application. After that, a prototype is manufactured, and the next stage in the design process is to check whether the radiation pattern differs from the designed one. Besides the radiation pattern, other radiation parameters such as directivity, gain, impedance, beamwidth, efficiency and polarization must also be evaluated. For this purpose, accurate antenna measurement techniques are needed in order to know exactly the actual electromagnetic behavior of the antenna under test. For this reason, most measurements are performed in anechoic chambers, which are closed areas, normally shielded and covered with electromagnetic absorbing material, that simulate free-space propagation conditions thanks to the absorption of that material. Moreover, these facilities can be used regardless of the weather conditions and allow measurements free from interference. Despite all the advantages of anechoic chambers, the results obtained from both far-field and near-field measurements are inevitably affected by errors. Thus, the main objective of this Thesis is to propose algorithms that improve the quality of the results obtained in antenna measurements by using post-processing techniques, without requiring additional measurements. First, a thorough review of the state of the art has been carried out in order to give a general view of the possibilities for characterizing or reducing the effects of errors in antenna measurements. Then, new methods to reduce the unwanted effects of four of the most common errors in antenna measurements are described and validated both theoretically and numerically. The basis of all of them is the same: to perform a transformation from the measurement surface to another domain where there is enough information to easily remove the contribution of the errors. The four errors analyzed are noise, reflections, truncation errors and leakage, and the tools used to suppress them are mainly source reconstruction techniques, spatial and modal filtering, and iterative algorithms to extrapolate functions. Therefore, the main idea of all the methods is to modify the classical near-field-to-far-field transformations by including additional steps with which the errors can be greatly suppressed. Moreover, the proposed methods are not computationally complex and, because they are applied in post-processing, additional measurements are not required. Noise is the most widely studied error in this Thesis; a total of three alternatives are proposed to filter out an important noise contribution before obtaining the far-field pattern. The first one is based on modal filtering. The second alternative uses a source reconstruction technique to obtain the extreme near field, where it is possible to apply spatial filtering. The last one back-propagates the measured field to a surface with the same geometry as the measurement surface but closer to the AUT, where spatial filtering is also applied. All the alternatives are analyzed for the three most common near-field systems, including comprehensive noise statistical analyses in order to deduce the signal-to-noise ratio improvement achieved in each case.
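As a rough illustration of the modal-filtering idea used for noise suppression in planar near-field data, the sketch below (Python, not part of the Thesis) computes the plane-wave spectrum of the sampled field with a 2D FFT and keeps only the visible region, where the propagating modes live; the function name, sampling step and frequency are assumptions chosen for the example.

import numpy as np

def modal_filter_planar(E, dx, dy, freq_hz):
    """Filter noise from planar near-field samples by truncating the
    plane-wave spectrum to the visible region (illustrative sketch)."""
    c = 299_792_458.0
    k0 = 2 * np.pi * freq_hz / c                  # free-space wavenumber
    ny, nx = E.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)     # spectral grid along x
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)     # spectral grid along y
    KX, KY = np.meshgrid(kx, ky)
    spectrum = np.fft.fft2(E)                     # plane-wave spectrum
    visible = KX**2 + KY**2 <= k0**2              # keep propagating modes only
    return np.fft.ifft2(spectrum * visible)       # filtered near field

# Example: a 1 m x 1 m plane sampled every 1 cm at 10 GHz, with synthetic noise
E_meas = np.random.randn(101, 101) + 1j * np.random.randn(101, 101)
E_filt = modal_filter_planar(E_meas, 0.01, 0.01, 10e9)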
The method to suppress reflections in antenna measurements is also based on a source reconstruction technique, and the main idea is to reconstruct the field over a surface larger than the antenna aperture in order to be able to identify and later suppress the virtual sources related to the reflected waves. The truncation error present in the results obtained from planar, cylindrical and partial spherical near-field measurements is the third error analyzed in this Thesis. The method to reduce this error is based on an iterative algorithm that extrapolates the reliable region of the far-field pattern from the knowledge of the field distribution on the AUT plane. The proper termination point of this iterative algorithm, as well as other critical aspects of the method, are also studied. The last part of this work is dedicated to the detection and suppression of the two most common leakage sources in antenna measurements. A first method estimates the leakage bias constant added by the receiver's quadrature detector to every near-field sample and then suppresses its effect on the far-field pattern. The second method can be divided into two parts: the first one finds the position of the faulty component that radiates or receives unwanted radiation, making its identification within the measurement environment and its later substitution easier; the second part is able to computationally remove the leakage effect without requiring the substitution of the faulty component.
Abstract:
One of the main obstacles to the widespread adoption of quantum cryptography has been the difficulty of integration into standard optical networks, largely due to the tremendous difference in power between classical signals and the single quanta used for quantum key distribution. This makes the technology expensive and hard to deploy. In this letter, we show an easy and straightforward method for integrating quantum cryptography into optical access networks. In particular, we analyze how a quantum key distribution system can be seamlessly integrated into a standard access network based on the passive optical and time-division multiplexing paradigms. The novelty of this proposal lies in the selective post-processing that allows for the distillation of secret keys while avoiding the noise produced by other network users. Importantly, the proposal requires neither the modification of the quantum or classical hardware specifications nor the use of any synchronization mechanism between the network and the quantum cryptography devices.
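A minimal sketch of what "selective post-processing" can look like in a time-division-multiplexed access network is given below: it simply keeps the detection events that fall inside the time slots assigned to the quantum channel before key distillation. The frame and slot durations and the function name are hypothetical and not taken from the letter.

# Illustrative sketch (not the letter's implementation): keep only detector
# clicks that fall inside the TDM slots assigned to the QKD user, so that
# light from other network users does not enter key distillation.
def select_qkd_events(timestamps_ns, frame_ns=125_000, slot_start_ns=0, slot_len_ns=2_000):
    """Return the detection timestamps lying inside the assigned time slot
    of every TDM frame (all parameter values are hypothetical)."""
    kept = []
    for t in timestamps_ns:
        phase = t % frame_ns            # position of the click inside the frame
        if slot_start_ns <= phase < slot_start_ns + slot_len_ns:
            kept.append(t)              # click belongs to the QKD slot
    return kept

clicks = [10, 1_500, 40_000, 125_800, 251_200]   # example timestamps in ns
print(select_qkd_events(clicks))                  # -> [10, 1500, 125800, 251200]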
Abstract:
This paper describes the UPM system for the translation task at the EMNLP 2011 Workshop on Statistical Machine Translation (http://www.statmt.org/wmt11/), which has been used for both directions: Spanish-English and English-Spanish. The system is based on Moses with two new modules for pre- and post-processing the sentences. The main contribution is the proposed method (based on similarity with the source-language test set) for selecting the sentences used to train the models and adjust the weights. With this system, we have obtained a BLEU score of 23.2 for Spanish-English and 21.7 for English-Spanish.
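The sentence-selection idea can be illustrated with the hedged sketch below: each candidate training sentence is scored against the source-language test set and only the best-matching ones are kept. The TF-IDF cosine similarity used here is an illustrative choice, not necessarily the similarity measure of the paper.

# Sketch of similarity-based training-data selection (similarity measure is an assumption).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def select_training_sentences(train_src, test_src, keep=1000):
    vec = TfidfVectorizer().fit(train_src + test_src)
    T = vec.transform(train_src)                  # training sentences
    q = vec.transform([" ".join(test_src)])       # the test set as a single query
    scores = cosine_similarity(T, q).ravel()      # similarity to the test set
    ranked = scores.argsort()[::-1][:keep]        # best-matching sentences first
    return [train_src[i] for i in ranked]

train = ["el gato come pescado", "la política económica europea", "mañana lloverá en Madrid"]
test = ["informe sobre la política económica"]
print(select_training_sentences(train, test, keep=1))   # -> the economics sentence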
Abstract:
The study of materials, especially biological ones, by non-destructive techniques is becoming increasingly important in both scientific and industrial applications. The economic advantages of non-destructive methods are numerous, given the costs and resources involved. There are many physical processes capable of extracting detailed information from the wood surface with little or no previous treatment and minimal intrusion into the material. Among the various methods, optical and acoustic techniques stand out for their great versatility, relative simplicity and low cost. This thesis aims to establish, from the application of simple physical principles and direct surface measurement, and through the development of the most appropriate statistical decision algorithms, simple and essentially minimum-cost technological solutions for determining the species and the surface defects of each wood sample. Achieving reasonable accuracy without altering the working geometry or properties of the samples is the main objective. There are three different lines of work. Empirical characterization of wood surfaces by means of iterative autocorrelation of laser speckle patterns: a simple and inexpensive method for the qualitative characterization of wood surfaces is presented. It is based on the iterative autocorrelation of laser speckle patterns produced by diffuse laser illumination of the wood surfaces. The method exploits the high spatial frequency content of speckle images; a similar approach with raw conventional photographs taken with ordinary light would be very difficult. A few iterations of the algorithm, typically three or four, are necessary in order to visualize the most important periodic features of the surface. The processed patterns help in the study of surface parameters, in designing new scattering models and in classifying the wood species. Fractal-based image enhancement techniques inspired by differential interference contrast microscopy: differential interference contrast microscopy is a very powerful optical technique for microscopic imaging. Inspired by the physics of this type of microscope, we have developed a series of image processing algorithms aimed at the magnification, noise reduction, contrast enhancement and tissue analysis of biological samples. These algorithms use fractal convolution schemes which provide fast and accurate results, with a performance comparable to the best current image enhancement algorithms. These techniques can be used as post-processing tools for advanced microscopy or as a means to improve the performance of less expensive visualization instruments. Several examples of the use of these algorithms to visualize microscopic images of raw pine wood samples obtained with a simple desktop scanner are provided. Wood species identification using stress-wave analysis in the audible range: stress-wave analysis is a powerful and flexible technique for studying the mechanical properties of many materials.
We present a simple technique to obtain information about the species of wood samples using stress-wave sounds in the audible range, generated by collision with a small pendulum. Stress-wave analysis has been used for flaw detection and quality control for decades, but its use for material identification and classification is less cited in the literature. Accurate wood species identification is a time-consuming task for highly trained human experts. For this reason, the development of cost-effective techniques for automatic wood classification is a desirable goal. Our proposed approach is fully non-invasive and non-destructive, significantly reducing the cost and complexity of the identification and classification process.
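The iterative autocorrelation underlying the first work line can be sketched as follows; the use of the Wiener-Khinchin theorem (autocorrelation through the power spectrum) and the normalisation between iterations are implementation assumptions for the example, not the thesis code.

import numpy as np

def iterative_autocorrelation(img, iterations=3):
    """Repeatedly autocorrelate a speckle image (via the Wiener-Khinchin theorem);
    a few iterations reveal the dominant periodic structure of the surface."""
    x = img.astype(float)
    for _ in range(iterations):
        x = x - x.mean()                              # remove the DC pedestal
        spec = np.abs(np.fft.fft2(x)) ** 2            # power spectrum
        x = np.fft.fftshift(np.fft.ifft2(spec).real)  # autocorrelation
        x = x / np.abs(x).max()                       # normalise before the next pass
    return x

speckle = np.random.rand(256, 256)                    # stand-in for a speckle image
acf = iterative_autocorrelation(speckle, iterations=3)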
Abstract:
Corrosion of reinforcing steel in concrete due to chloride ingress is one of the main causes of the deterioration of reinforced concrete structures. The structures most affected by such corrosion are buildings in marine zones and structures exposed to de-icing salts, such as highways and bridges. This process is accompanied by an increase in volume of the corrosion products at the rebar-concrete interface. Depending on the level of oxidation, iron can expand to as much as six times its original volume. This increase in volume exerts tensile stresses in the surrounding concrete which result in cracking and spalling of the concrete cover if the concrete tensile strength is exceeded. The mechanism by which steel embedded in concrete corrodes in the presence of chloride is the local breakdown of the passive layer formed under the highly alkaline conditions of the concrete. It is assumed that corrosion initiates when a critical chloride content reaches the rebar surface. The mathematical formulation idealizes the corrosion sequence as a two-stage process: an initiation stage, during which chloride ions penetrate to the reinforcing steel surface and depassivate it, and a propagation stage, in which active corrosion takes place until cracking of the concrete cover occurs. The aim of this research is to develop computer tools to evaluate the duration of the service life of reinforced concrete structures, considering both the initiation and propagation periods. Such tools must offer a friendly interface to facilitate their use by researchers whose background is not in numerical simulation. For the evaluation of the initiation period, different tools have been developed. Program TavProbabilidade: provides the means to carry out a probabilistic analysis of a chloride ingress model. Such a tool is necessary due to the lack of data and the general uncertainties associated with the phenomenon of chloride diffusion. It differs from the deterministic approach because it computes not just one chloride profile at a certain age, but a range of chloride profiles, each with its probability of occurrence. Program TavProbabilidade_Fiabilidade: carries out reliability analyses of the initiation period. It takes into account the critical value of the chloride concentration on the steel that causes breakdown of the passive layer and the beginning of the propagation stage. It differs from the deterministic analysis in that it does not predict whether corrosion is going to begin or not, but quantifies the probability of corrosion initiation. Program TavDif_1D: was created to perform a one-dimensional deterministic analysis of the chloride diffusion process by the finite element method (FEM), numerically solving Fick's second law. Despite the different FEM solvers already developed in one dimension, the decision to create a new code (TavDif_1D) was taken because of the need for a solver with a friendly interface for pre- and post-processing, according to the needs of the IETCC. An innovative tool was also developed, with a systematic method devised to compare the ability of the different 1D models to predict the actual evolution of chloride ingress based on experimental measurements, and also to quantify the degree of agreement of the models with each other. For the evaluation of the entire service life of the structure, a computer program has been developed using the finite element method to couple both service-life periods: initiation and propagation.
The program for 2D (TavDif_2D) allows the complementary use of two external programs in a single friendly interface: • GMSH – a finite element mesh generator and post-processing viewer • OOFEM – a finite element solver. This program (TavDif_2D) is responsible for deciding, at each time step, when and where to start applying the boundary conditions of the fracture mechanics module as a function of the chloride concentration and the corrosion parameters (Icorr, etc.). It is also responsible for verifying the presence and degree of fracture in each element, in order to pass on the information about the variation of the diffusion coefficient with the crack width. The advantages of the FEM with the interface provided by the tool are: • the flexibility to input data such as material properties and boundary conditions as time-dependent functions; • the flexibility to predict the chloride concentration profile for different geometries; • the possibility to couple chloride diffusion (initiation stage) with chemical and mechanical behavior (propagation stage). The OOFEM code had to be modified to accept temperature, humidity and time-dependent values for the material properties, which is necessary to adequately describe the environmental variations. A 3D simulation has been performed to reproduce the behavior of a beam under both the action of the external load and the internal load caused by the corrosion products, using embedded-fracture elements, in order to plot the curve of the deflection of the central region of the beam versus the external load and compare it with the experimental data.
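TavDif_1D solves Fick's second law, ∂C/∂t = D ∂²C/∂x², for the chloride concentration C in the cover. As a minimal illustration only (not the FEM code of the thesis), the sketch below integrates the 1D equation with an explicit finite-difference scheme under a constant surface chloride concentration; the diffusion coefficient, cover depth and surface concentration are hypothetical values.

import numpy as np

def fick_1d(D=1e-11, Cs=0.5, L=0.10, nx=101, years=20.0):
    """Explicit finite-difference sketch of Fick's second law dC/dt = D d2C/dx2
    for chloride ingress: constant surface concentration Cs (hypothetical value),
    zero initial chloride in a cover of depth L (m), diffusion coefficient D (m2/s)."""
    dx = L / (nx - 1)
    dt = 0.4 * dx**2 / D                       # below the explicit stability limit
    steps = int(years * 365.25 * 24 * 3600 / dt)
    C = np.zeros(nx)
    C[0] = Cs                                  # exposed surface (Dirichlet condition)
    for _ in range(steps):
        C[1:-1] += D * dt / dx**2 * (C[2:] - 2 * C[1:-1] + C[:-2])
        C[0] = Cs                              # keep the surface concentration fixed
    return C                                   # chloride profile over the cover depth

profile = fick_1d()
print(f"Chloride at 40 mm depth after 20 years: {profile[40]:.3f}")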
Abstract:
Reverberation chambers are well known for providing a random-like electric field distribution. Estimating the directivity or gain of a radiating device in such an environment requires an adequate procedure and smart post-processing. In this paper, a new method is proposed for estimating the directivity of radiating devices in a reverberation chamber (RC). The method is based on the Rician K-factor, whose estimation in an RC benefits from recent improvements. Directivity estimation relies on the accurate determination of the K-factor with respect to a reference antenna. Good agreement is reported with measurements carried out in a near-field anechoic chamber (AC) using a near-field to far-field transformation.
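The K-factor of the Rician fading observed in the chamber can be estimated, for instance, with the classical moment-based estimator that compares the unstirred (mean) component with the stirred power over the stirrer positions. The sketch below shows that simple estimator; it is an illustration only and not necessarily the improved estimator referred to in the paper.

import numpy as np

def rician_k_factor(s21):
    """Moment-based K-factor estimate from complex S21 samples collected over
    stirrer positions: K = |unstirred component|^2 / stirred power."""
    s21 = np.asarray(s21, dtype=complex)
    direct = np.abs(s21.mean()) ** 2                    # unstirred (deterministic) power
    stirred = np.mean(np.abs(s21 - s21.mean()) ** 2)    # stirred power
    return direct / stirred

# Example with synthetic data: a fixed direct path plus Rayleigh-like stirred field
rng = np.random.default_rng(0)
samples = 0.3 + (rng.normal(size=500) + 1j * rng.normal(size=500)) * 0.1
print(rician_k_factor(samples))   # roughly 0.3**2 / (2 * 0.1**2) = 4.5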
Abstract:
In this paper, a new method is presented to ensure automatic synchronization of intracardiac ECG data, organized as a three-stage algorithm. We first compute a robust estimate of the derivative of the data to remove low-frequency perturbations. Then we provide a grouped-sparse representation of the data, by means of the Group LASSO, to ensure that all the electrical spikes are detected simultaneously. Finally, a post-processing step, based on a variance analysis, is performed to discard false alarms. Preliminary results on real data for sinus rhythm and atrial fibrillation show the potential of this approach.
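A minimal sketch of the grouped-sparse step is given below: after a crude robust derivative, a single block soft-thresholding pass (the proximal operator of the Group LASSO penalty, applied per time sample across channels) keeps only the instants where a spike is jointly strong in all leads. The threshold value, the simple derivative and the single proximal pass are assumptions for the example; the paper solves the full Group LASSO problem and adds a variance-based post-processing stage.

import numpy as np

def detect_joint_spikes(ecg, lam=2.0):
    """Toy sketch of grouped-sparse spike detection: a crude robust derivative
    followed by block soft-thresholding (the Group LASSO proximal operator),
    so a time instant is kept only if it is strong jointly across channels."""
    d = np.diff(np.asarray(ecg, dtype=float), axis=1)   # channels x (samples - 1)
    d = d - np.median(d, axis=1, keepdims=True)         # crude robustification
    norms = np.linalg.norm(d, axis=0)                   # group norm per time sample
    shrink = np.maximum(1.0 - lam / np.maximum(norms, 1e-12), 0.0)
    sparse = d * shrink                                  # block soft-thresholding
    return np.where(np.linalg.norm(sparse, axis=0) > 0)[0]   # candidate spike times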
Abstract:
There are numerous applications on the market for generating reverberation and measuring acoustic impulse responses. However, they are usually very costly and closed source. In addition, the tools they provide for measuring impulse responses require a tedious process for generating and reproducing the excitation signal, recording the response and post-processing it. This procedure can sometimes lead the user to make mistakes due to a lack of technical knowledge. The purpose of this project is to solve some of these problems. To that end, we developed and implemented a real-time partitioned-convolution reverb module using free, open-source software. Specifically, the chosen software was REAPER, the digital audio workstation (DAW) by Cockos. In addition to the basic editing and sequencing features included in any DAW, the program includes an environment for implementing audio effects in the JS (Jesusonic) language, and it is distributed under completely free licenses with no usage limitations. As an extension for REAPER, we also propose a fully automated and user-friendly method for measuring the acoustic impulse responses of rooms. These responses can be stored and later loaded into the reverb module, allowing the user to apply to any audio track the acoustic response of any room where measurements have been taken.
The implementation of the impulse response measurement system was done using REAPER's ReaScript tool, which allows the execution of small Python scripts. The program generates a logarithmic sine sweep that excites the room, and its response is recorded in a .wav file. This procedure is simple, intuitive and accessible to any home user, as it does not require sophisticated measuring equipment.
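The excitation signal mentioned above is a logarithmic (exponential) sine sweep; a minimal Python sketch of such a sweep is shown below. The band, duration and sample rate are arbitrary example values, and this is not the ReaScript code of the project.

import numpy as np

def log_sine_sweep(f1=20.0, f2=20_000.0, duration=10.0, fs=48_000):
    """Exponential (logarithmic) sine sweep from f1 to f2 Hz, sampled at fs."""
    t = np.arange(int(duration * fs)) / fs
    R = np.log(f2 / f1)                                   # sweep rate
    return np.sin(2 * np.pi * f1 * duration / R * (np.exp(t / duration * R) - 1.0))

sweep = log_sine_sweep()
# The room response recorded while playing this sweep can be deconvolved with
# the time-reversed, amplitude-compensated sweep to obtain the impulse response.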
Abstract:
There is interest in performing pin-by-pin calculations coupled with thermal hydraulics so as to improve the accuracy of nuclear reactor analysis. In the framework of the EU NURISP project, INRNE and UPM have generated an experimental version of a few-group diffusion cross-section library with discontinuity factors, intended for VVER analysis at the pin level with the COBAYA3 code. The transport code APOLLO2 was used to perform the branching calculations. As a first proof of principle, the library was created for fresh fuel and covers almost the full parameter space of steady-state and transient conditions. The main objective is to test the calculation schemes and post-processing procedures, including multi-pin branching calculations. Two library options are being studied: one based on linear table interpolation and another using a functional fitting of the cross sections. The libraries generated with APOLLO2 have been tested with the pin-by-pin diffusion model in COBAYA3, including discontinuity factors; first by comparing 2D results against the APOLLO2 reference solutions and afterwards by using the libraries to compute a 3D assembly problem coupled with a simplified thermal-hydraulic model.
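For the first library option, linear table interpolation, the sketch below illustrates the idea with a multilinear interpolation of one tabulated cross section over a two-parameter grid; the choice of fuel temperature and moderator density as the branching parameters, and the numerical values, are assumptions made for the example only.

import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical branching grid: fuel temperature (K) and moderator density (g/cm3)
tfuel = np.array([600.0, 900.0, 1200.0])
dmod = np.array([0.65, 0.72, 0.78])
# One-group absorption cross section of a pin cell tabulated on the grid (made-up values)
sigma_a = np.array([[1.02e-2, 1.05e-2, 1.08e-2],
                    [1.04e-2, 1.07e-2, 1.10e-2],
                    [1.06e-2, 1.09e-2, 1.12e-2]])

interp = RegularGridInterpolator((tfuel, dmod), sigma_a)   # multilinear interpolation
print(interp([[950.0, 0.70]]))   # cross section at an off-grid state point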
Abstract:
Information reconciliation is a crucial procedure in the classical post-processing of quantum key distribution (QKD). Poor reconciliation efficiency, revealing more information than strictly needed, may compromise the maximum attainable distance, while poor performance of the algorithm limits the practical throughput in a QKD device. Historically, reconciliation has mainly been done using procedures with close to minimal information disclosure but heavy interactivity, like Cascade, or using less efficient but also less interactive procedures, where just one message is exchanged, like the ones based on low-density parity-check (LDPC) codes. The price to pay in the LDPC case is that good efficiency is only attained for very long codes and in a very narrow range centered around the quantum bit error rate (QBER) that the code was designed to reconcile, thus forcing the use of several codes if a broad range of QBERs needs to be catered for. Real-world implementations of these methods are thus very demanding, either on computational or communication resources or both, to the extent that the latest generation of GHz-clocked QKD systems are finding a bottleneck in the classical part. In order to produce compact, high-performance and reliable QKD systems it would be highly desirable to remove these problems. Here we analyse the use of short-length LDPC codes in the information reconciliation context using a low-interactivity, blind protocol that avoids an a priori error rate estimation. We demonstrate that LDPC codes of 2×10^3 bits in length are suitable for blind reconciliation. Such codes are of high interest in practice, since they can be used for hardware implementations with very high throughput.
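In LDPC-based reconciliation, Alice discloses the syndrome of her key and Bob corrects his noisy copy until it reproduces that syndrome. The toy sketch below shows this single-message flow with a very small random parity-check matrix and a Gallager-style bit-flipping decoder; a real system would use a properly designed code, a belief-propagation decoder and, in the blind protocol, rate adaptation, none of which are reproduced here.

import numpy as np

def bit_flip_decode(H, y, target_syndrome, max_iter=50):
    """Toy Gallager-style bit-flipping decoder: repeatedly flip the bit taking
    part in the largest number of unsatisfied checks until the candidate key
    reproduces the syndrome disclosed by Alice."""
    y = y.copy()
    for _ in range(max_iter):
        mismatch = (H @ y) % 2 ^ target_syndrome      # unsatisfied parity checks
        if not mismatch.any():
            break                                     # syndromes match: keys agree
        votes = H.T @ mismatch                        # unsatisfied checks per bit
        y[np.argmax(votes)] ^= 1                      # flip the most suspicious bit
    return y

rng = np.random.default_rng(1)
n, m = 24, 12
H = np.zeros((m, n), dtype=int)
for j in range(n):                                    # small column-weight-3 matrix
    H[rng.choice(m, size=3, replace=False), j] = 1
alice = rng.integers(0, 2, size=n)
bob = alice.copy()
bob[[3, 17]] ^= 1                                     # channel errors (the "QBER")
s_alice = (H @ alice) % 2                             # the only bits disclosed
bob_hat = bit_flip_decode(H, bob, s_alice)
print("remaining disagreements:", int((bob_hat != alice).sum()))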
Abstract:
CaCu3Ti4O12 (CCTO) was prepared by conventional synthesis (CS) and through reactive sintering, in which synthesis and sintering of the material take place in a single step. The microstructure and the dielectric properties of CCTO have been studied by XRD, FE-SEM, EDS, AFM, and impedance spectroscopy to correlate structure, microstructure, and electrical properties. Samples prepared by reactive sintering show very similar dielectric behavior to those prepared by CS. Therefore, it is possible to prepare CCTO by means of a single-step processing method.
Abstract:
This paper proposes an architecture, based on statistical machine translation, for developing the text normalization module of a text-to-speech conversion system. The main target is to generate a language-independent text normalization module, based on data and flexible enough to deal with all the situations presented in this task. The proposed architecture is composed of three main modules: a tokenizer module for splitting the text input into a token graph (tokenization), a phrase-based translation module (token translation) and a post-processing module for removing some tokens. This paper presents initial experiments for numbers and abbreviations. The very good results obtained validate the proposed architecture.
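A toy sketch of the three-module flow (tokenization, token translation, post-processing) is shown below; the lookup table stands in for the phrase-based translation model and the marker tokens to delete are invented for the example.

import re

# Toy sketch of the pipeline: tokenization, token "translation" (a simple lookup
# standing in for the phrase-based model), and post-processing that deletes marker tokens.
TOKEN_MAP = {"Dr.": "doctor", "etc.": "etcetera", "25": "twenty five", "%": "percent"}
DELETE = {"<del>"}

def tokenize(text):
    return re.findall(r"\d+|[\w.]+|[^\w\s]", text)

def normalize(text):
    tokens = [TOKEN_MAP.get(tok, tok) for tok in tokenize(text)]   # token translation
    tokens = [tok for tok in tokens if tok not in DELETE]          # post-processing
    return " ".join(tokens)

print(normalize("Dr. Smith paid 25 % , etc."))
# -> "doctor Smith paid twenty five percent , etcetera"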
Abstract:
CaCu3(Ti4-xHfx)O12 ceramics (x = 0.04, 0.1 and 0.2) were prepared by conventional synthesis (CS) and through reactive sintering (RS), in which synthesis and sintering of the material take place in a single step. The microstructure and the dielectric properties of Hf-doped CCTO (CCTOHf) have been studied by XRD, FE-SEM, AFM, Raman and impedance spectroscopy (IS) in order to correlate the structure, the microstructure and the electrical properties. Samples prepared by reactive sintering show a slightly higher dielectric constant than those prepared by conventional synthesis, in the same way as pure CCTO. The dielectric constant and the dielectric losses decrease slightly with increasing Hf content. For CCTOHf ceramics with x > 0.04 for CS and x > 0.1 for RS, a secondary HfTiO4 phase appears. As expected, the reactive sintering processing method allows a higher incorporation of Hf into the CCTO lattice than conventional synthesis.
Abstract:
The advantages of fast-spectrum reactors consist not only of an efficient use of fuel, through the breeding of fissile material and the use of natural or depleted uranium, but also of the potential reduction of the amount of actinides such as americium and neptunium contained in the irradiated fuel. The first aspect means a guaranteed future nuclear fuel supply. The second is key for high-level radioactive waste management, because these elements are the main contributors to the radioactivity of the irradiated fuel in the long term. The present study aims to analyze the hypothetical deployment of a Gen-IV Sodium Fast Reactor (SFR) fleet in Spain. A nuclear fleet of fast reactors would enable a fuel cycle strategy different from the open cycle currently adopted by most countries with nuclear power. A transition from the current Gen-II fleet to a Gen-IV fleet is envisaged through an intermediate deployment of Gen-III reactors. Fuel reprocessing from the Gen-II and Gen-III Light Water Reactors (LWR) has been considered. In the so-called advanced fuel cycle, the reprocessed fuel used to produce energy will breed new fissile fuel and transmute minor actinides at the same time. A reference scenario has been postulated and further sensitivity studies have been performed to analyze the impact of the different parameters on the required reactor fleet. The potential capability of Spain to supply the required fleet for the reference scenario using national resources has been verified. Finally, some consequences for the final irradiated fuel inventory are assessed. Calculations are performed with the Monte Carlo transport-coupled depletion code SERPENT together with post-processing tools.
Abstract:
In this project, a MATLAB code has been developed to process 3D X-ray tomographic images of asphalt specimens. These 3D images were taken by a research team at the Lodz University of Technology (LUT). The aim of this project is to create a tool that can be used to study different 3D asphalt specimens and to analyze them after the stress tests the samples undergo in the laboratory, with the final goal of finding solutions to the degradation suffered by roads in Poland due to different causes, such as weather conditions. Road degradation is an issue that has been investigated for many years, owing to the strong deterioration caused by various factors such as climate, poor maintenance or, in some cases, excessive traffic.
In Poland, these three factors make the composition of many roads degrade rapidly, especially due to the weather conditions over the year, with temperatures ranging from 30 °C in summer to -20 °C in winter. This makes the roads suffer greatly and the asphalt lift shortly after being laid, increasing maintenance costs and road accidents. This project builds on the research taking place at the LUT, which aims to better analyze the asphalt specimens subjected to stress tests and to find solutions to improve the asphalt on Polish roads, which would remarkably decrease maintenance costs. Although this project does not go deeply into technical aspects of asphalt and its composition, a thorough study of all its features has been required in order to create a code able to obtain the best results. For these reasons, algorithms that allow the study of 3D asphalt specimens have been developed in Matlab. Matlab is a powerful mathematical tool that operates quickly on arrays, allowing the development of specific code for the treatment and processing of 3D images. Thus, these algorithms perform processes such as segmentation of the multidimensional matrix, pre- and post-processing, filtering, and microstructural analysis of the asphalt specimens being studied. All the algorithms and functions that have been developed are integrated into a visual tool built with the Matlab GUIDE. This tool has been created in collaboration with Jorge Vega and was developed in his final degree project, entitled: Microstructural segmentation of 3D images of asphalt specimens using Matlab. This tool uses all the functions programmed in this project and aims to provide an easy and intuitive graphical environment for the study of 3D asphalt samples. This project has been divided into four chapters plus the introduction. The second chapter introduces the state of the art of the three most important topics studied in this project: asphalt materials, the principles of X-ray tomography, and image processing. This is the basis for the third chapter, which outlines the methodology used in developing the code, explaining the Matlab working environment and all the image processing functions used. In addition, all the developed code is shown, as well as a theoretical description of the methods used for pre-processing and 3D image segmentation. Chapter 4 presents the results obtained from the study of one of the asphalt specimens and, finally, the last chapter draws the conclusions regarding the development of this project.
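As an illustration of the kind of segmentation step described above (the project itself is written in MATLAB), the Python sketch below applies a global Otsu threshold to a 3D volume and labels the connected air-void regions; the threshold choice, the connectivity and the synthetic data are assumptions for the example.

import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def segment_volume(volume):
    """Illustrative 3D segmentation sketch: global Otsu threshold to separate
    aggregate/mastic from air voids, then connected-component labelling so that
    each void can be studied individually."""
    t = threshold_otsu(volume)                  # automatic global threshold
    voids = volume < t                          # low attenuation = air voids
    labels, n = ndimage.label(voids)            # label connected void regions
    sizes = ndimage.sum(voids, labels, range(1, n + 1))   # voxels per void
    return labels, sizes

# Example with a synthetic 64^3 volume standing in for a tomographic scan
vol = np.random.normal(loc=1.0, scale=0.2, size=(64, 64, 64))
labels, sizes = segment_volume(vol)
print(f"{sizes.size} connected regions; largest has {int(sizes.max())} voxels")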