21 results for NORMALIZATION

at Universidad Politécnica de Madrid


Relevance: 20.00%

Abstract:

This paper proposes an architecture, based on statistical machine translation, for developing the text normalization module of a text-to-speech conversion system. The main target is to generate a language-independent text normalization module that is based on data and flexible enough to deal with all situations presented by this task. The proposed architecture is composed of three main modules: a tokenizer module for splitting the text input into a token graph (tokenization), a phrase-based translation module (token translation) and a post-processing module for removing some tokens. This paper presents initial experiments for numbers and abbreviations. The very good results obtained validate the proposed architecture.
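As a rough illustration of the three-module flow described in this abstract, the Python sketch below wires a tokenizer, a token translator and a post-processor together. The function names and the toy phrase table are invented for illustration; the actual system trains a phrase-based statistical translation model rather than using a hand-written lookup.

```python
# Minimal sketch of the tokenize -> translate -> post-process pipeline.

def tokenize(text):
    """Split raw input into candidate tokens (the paper builds a token
    graph; a flat list is used here for simplicity)."""
    return text.replace(",", " ,").split()

def translate_tokens(tokens, phrase_table):
    """Map each token to its spoken form via a learned phrase table."""
    return [phrase_table.get(tok, tok) for tok in tokens]

def post_process(tokens, droppable=(",",)):
    """Remove tokens that should not be spoken."""
    return [tok for tok in tokens if tok not in droppable]

# Toy phrase table standing in for the statistical translation model.
phrase_table = {"Dr.": "doctor", "3": "three", "kg": "kilograms"}
text = "Dr. Smith bought 3 kg"
print(" ".join(post_process(translate_tokens(tokenize(text), phrase_table))))
# -> "doctor Smith bought three kilograms"
```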

Relevance: 20.00%

Abstract:

Sedentarism has become one of the major concerns of our times. Nowadays people spend most of their time sitting down and moving by mechanical means instead of exercising. Younger generations do only a little more sport today than their counterparts did a decade ago. In other words, sedentary habits have become common in our society, especially among the young. What cultural mechanisms have contributed to this? What are the consequences of a sedentary lifestyle for our health and well-being? These are the questions we pose in this study. We conducted qualitative research among Spanish young people, and the results provide important clues that help us better understand how "active sedentarism" has become the norm among young people.

Relevance: 10.00%

Abstract:

Determining the isotopic content of spent nuclear fuel as accurately as possible is gaining importance due to its safety and economic implications. Since higher burnups are now achievable through increased initial enrichments, more efficient burnup strategies within the reactor cores and extended irradiation periods, establishing and improving computational methodologies is mandatory in order to carry out reliable criticality and isotopic prediction calculations. Several codes (WIMSD5, SERPENT 1.1.7, SCALE 6.0, MONTEBURNS 2.0 and MCNP-ACAB) and methodologies are tested here and compared against consolidated benchmarks (an OECD/NEA pin cell moderated with light water) with the purpose of validating them and reviewing the state of isotopic prediction capabilities. These preliminary comparisons suggest what can generally be expected of these codes when applied to real problems. In the present paper, SCALE 6.0 and MONTEBURNS 2.0 are used to model the same reported geometries, material compositions and burnup history of cycles 7-11 of the Spanish Vandellós II reactor and to reproduce measured isotopic compositions after irradiation and decay times. We compare the measurements with each code's results for several levels of geometric modeling detail, using different libraries and cross-section treatment methodologies. The power and flux normalization method implemented in MONTEBURNS 2.0 is discussed and a new normalization strategy is developed to deal with the selected and similar problems; further options are included to reproduce the temperature distributions of the materials within the fuel assemblies, and a new code is introduced to automate series of simulations and manage material information between them. In order to have a realistic confidence level in the prediction of spent fuel isotopic content, we have estimated uncertainties using our MCNP-ACAB system. This depletion code, which combines the neutron transport code MCNP and the inventory code ACAB, propagates uncertainties into the nuclide inventory, assessing the potential impact of uncertainties in the basic nuclear data: cross sections, decay data and fission yields.
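For readers unfamiliar with the power/flux normalization issue discussed in this abstract, the Python sketch below shows the generic idea: Monte Carlo flux tallies are reported per source neutron and must be rescaled to a requested thermal power before depletion. The constants and tallied values are illustrative assumptions, not figures from the paper or from the MONTEBURNS 2.0 implementation.

```python
# Generic power normalization of a per-source-neutron flux tally.

MEV_TO_J = 1.602e-13          # conversion factor, MeV -> J
ENERGY_PER_FISSION = 200.0    # MeV released per fission (typical value)

def flux_scaling_factor(power_watts, fissions_per_source_neutron):
    """Source strength (neutrons/s) that makes the tallied flux
    correspond to the requested thermal power."""
    joules_per_source_neutron = (fissions_per_source_neutron
                                 * ENERGY_PER_FISSION * MEV_TO_J)
    return power_watts / joules_per_source_neutron

# Example: 17.7 MW assembly, 0.85 fissions tallied per source neutron.
factor = flux_scaling_factor(17.7e6, 0.85)
unnormalized_flux = 2.4e-5    # flux tally per source neutron (illustrative)
print(f"absolute flux = {factor * unnormalized_flux:.3e} n/cm2/s")
```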

Relevance: 10.00%

Abstract:

For centuries, earth has been used as a construction material. Nevertheless, the relevant standards are very scattered, and in most developed countries carrying out a construction with this material raises numerous technical and legal problems. This article surveys the international regulatory landscape for building with raw earth, analyzing fifty-five standards and regulations from countries across the five continents, which represent the state of the art in the standardization of raw earth as a construction material. It is a referenced study of the standards and regulations in force developed by national standardization bodies or the corresponding authorities. The standards and the bodies that issue them are presented, and the structure and content of each are analyzed. The most relevant aspects, such as stabilization, soil selection, product requirements and existing tests, are studied and compared across the different standards. This work can be very useful for the development of future standards and as a reference for architects and engineers who work with earth.

Relevance: 10.00%

Abstract:

We show a procedure for constructing a probabilistic atlas based on affine moment descriptors. It uses a normalization procedure over the labeled atlas. The proposed linear registration is defined by closed-form expressions involving only geometric moments. The procedure applies to both atlas construction and atlas-based segmentation. We model the likelihood term for each voxel and each label using parametric or nonparametric distributions, and the prior term is determined by applying the voting rule. The probabilistic atlas is built from the variability of our linear registration. We consider two segmentation strategies: (a) applying the proposed affine registration to bring the target image into the coordinate frame of the atlas, or (b) non-rigidly aligning the probabilistic atlas with the target image, after first aligning it to the target with our affine registration. Finally, we adopt a graph-cut Bayesian framework for implementing the atlas-based segmentation.
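A minimal sketch of what a closed-form linear registration from geometric moments can look like: translate by the centroid (first-order moments) and whiten by the second-order moment matrix. This is a generic moment-based normalization under our own assumptions, not the authors' exact expressions.

```python
import numpy as np

def moment_normalization(points):
    """Return an affine map (A, t) that whitens a point cloud:
    x -> A @ (x - t) has zero mean and identity covariance."""
    t = points.mean(axis=0)                        # first-order moments
    centered = points - t
    M = centered.T @ centered / len(points)        # second-order central moments
    vals, vecs = np.linalg.eigh(M)
    A = vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T   # M^{-1/2}
    return A, t

# Registering two shapes then reduces to composing one normalization
# with the inverse of the other.
pts = np.random.default_rng(0).normal(size=(500, 3)) @ np.diag([3, 1, 0.5]) + 7
A, t = moment_normalization(pts)
normalized = (pts - t) @ A.T
print(np.round(normalized.mean(axis=0), 6))   # ~ zero mean
print(np.round(np.cov(normalized.T), 3))      # ~ identity covariance
```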

Relevance: 10.00%

Abstract:

We propose a level set based variational approach that incorporates shape priors into edge-based and region-based models. The evolution of the active contour depends on local and global information. It has been implemented using an efficient narrow band technique. For each boundary pixel we calculate its dynamics according to its gray level, its neighborhood and the geometric properties established by the training shapes. We also propose a criterion for shape alignment based on an affine transformation using an image normalization procedure. Finally, we illustrate the benefits of our approach on liver segmentation from CT images.
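The sketch below illustrates the narrow band idea: the level set function is updated only where it is close to zero. The speed term here is a uniform placeholder, whereas in the paper it would combine the gray level, the neighborhood and the shape prior; also, the band is simply made wide enough to contain the whole motion, while production implementations re-initialize the band as the front moves.

```python
import numpy as np

def evolve_narrow_band(phi, speed, band_width=8.0, dt=0.1, steps=50):
    """Evolve phi_t + speed * |grad(phi)| = 0, updating only pixels
    whose |phi| is below band_width (the narrow band)."""
    for _ in range(steps):
        band = np.abs(phi) < band_width
        gy, gx = np.gradient(phi)
        grad_norm = np.sqrt(gx ** 2 + gy ** 2) + 1e-8
        phi[band] -= dt * speed[band] * grad_norm[band]
    return phi

# Expanding circle example: uniform outward speed of 1 pixel per time unit.
y, x = np.mgrid[0:64, 0:64]
phi = np.sqrt((x - 32.0) ** 2 + (y - 32.0) ** 2) - 20.0   # radius-20 circle
phi = evolve_narrow_band(phi, speed=np.ones_like(phi))
print("radius after evolution ~", np.count_nonzero(phi[32] < 0) / 2)  # ~25
```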

Relevance: 10.00%

Abstract:

Improving the security of mobile phones is one of the crucial requirements for protecting the personal information they hold and the operations that can be performed from them. This article presents an authentication procedure that verifies a person's identity through a signature made in the air while holding the mobile phone. Different temporal distance algorithms have been proposed and evaluated on a database of 50 people making their signatures in the air, with 6 people trying to forge each signature after studying its recordings. Approaches based on DTW obtained better EER results than those based on LCS (2.80% against 3.34%). In addition, different signal normalization methods were evaluated, none of which yielded better EER results than when no normalization was carried out.
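A textbook dynamic time warping (DTW) distance, of the family evaluated in this article, can be sketched in a few lines of Python. The sequences below are synthetic sinusoids standing in for accelerometer traces; the paper's exact DTW variant and features are not reproduced.

```python
import numpy as np

def dtw_distance(a, b):
    """DTW distance between two 1-D sequences a and b."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Two recordings of the same gesture, one slower, plus a different gesture.
genuine = np.sin(np.linspace(0, 2 * np.pi, 60))
slower  = np.sin(np.linspace(0, 2 * np.pi, 90))
forgery = np.cos(np.linspace(0, 2 * np.pi, 60))
print(dtw_distance(genuine, slower), dtw_distance(genuine, forgery))
# The genuine pair scores a much smaller distance than the forgery pair.
```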

Relevance: 10.00%

Abstract:

This final-year project deals with the recognition and identification of license plate characters. Such systems are known worldwide as ANPR ("Automatic Number Plate Recognition") or LPR ("License Plate Recognition") systems. The enormous number of vehicles and volume of logistics moving every second all over the world must be registered for processing and control. It is therefore necessary to implement a system that can correctly identify these resources for further processing, thus building a useful, flexible and dynamic tool. This work is structured in several parts. The first presents the objectives and motivations of the project. The second develops the theoretical, technical and mathematical processes that make up a typical ANPR system, with the aim of implementing a practical application that demonstrates their usefulness in any situation. The third develops the practical part on which the theoretical basis rests: it describes the algorithms created to study and verify everything proposed so far and to observe their behavior. Several processes characteristic of character and pattern recognition are implemented, such as area and pattern detection, image rotation and transformation, edge detection, character and pattern segmentation, thresholding and normalization, feature and pattern extraction, neural networks, and finally optical character recognition (OCR). The last part presents the results obtained with the character recognition system implemented for this work and the conclusions drawn from it. Finally, future lines of improvement, development and research are proposed in order to build a more efficient and comprehensive system.
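Two of the pipeline stages listed in this abstract, thresholding and character segmentation, can be sketched in plain Python/NumPy as below: a global Otsu threshold followed by vertical-projection splitting. These are generic textbook versions, not the algorithms implemented in the thesis.

```python
import numpy as np

def otsu_threshold(gray):
    """Pick the global threshold maximizing between-class variance
    (gray: 2-D uint8 array)."""
    prob = np.bincount(gray.ravel(), minlength=256) / gray.size
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

def segment_characters(binary):
    """Split a binary plate image (True = character pixel) into one slice
    per character wherever the vertical projection drops to zero."""
    cols = binary.sum(axis=0)
    chars, start = [], None
    for x, c in enumerate(cols):
        if c and start is None:
            start = x
        elif not c and start is not None:
            chars.append(binary[:, start:x])
            start = None
    if start is not None:
        chars.append(binary[:, start:])
    return chars

# Usage (dark characters on a light plate, hence the inversion):
#   binary = gray < otsu_threshold(gray)
#   glyphs = segment_characters(binary)
```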

Relevance: 10.00%

Abstract:

We propose a linear regression method for estimating Weibull parameters from life tests. The method uses stochastic models of the unreliability at each failure instant. As a result, a heteroscedastic regression problem arises that is solved by weighted least squares minimization. The main feature of our method is an innovative s-normalization of the failure data models, used to obtain analytic expressions for the centers and weights of the regression. The method has been contrasted via Monte Carlo simulation with Benard's approximation and Maximum Likelihood Estimation, and it achieves the highest global scores for robustness and performance.
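For context, the baseline against which the method is benchmarked, median-rank regression with Benard's approximation, can be sketched in Python as follows. The paper's own contribution, the s-normalization that yields analytic centers and weights for the weighted least squares, is not reproduced here.

```python
import numpy as np

def weibull_median_rank_fit(failure_times):
    """Estimate (shape beta, scale eta) from complete failure data by
    linear regression on the linearized Weibull CDF."""
    t = np.sort(np.asarray(failure_times, dtype=float))
    n = len(t)
    ranks = np.arange(1, n + 1)
    F = (ranks - 0.3) / (n + 0.4)          # Benard's approximation
    x = np.log(t)
    y = np.log(-np.log(1.0 - F))           # ln(-ln(1-F)) = beta*ln t - beta*ln eta
    beta, intercept = np.polyfit(x, y, 1)
    eta = np.exp(-intercept / beta)
    return beta, eta

# Recover parameters from a synthetic Weibull(beta=2, eta=100) sample.
rng = np.random.default_rng(1)
sample = 100.0 * rng.weibull(2.0, size=50)
print(weibull_median_rank_fit(sample))     # ~ (2.0, 100.0)
```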

Relevance: 10.00%

Abstract:

Diabetes mellitus is a metabolic disorder caused by insufficient or null insulin secretion, or by reduced sensitivity to this hormone. It is a chronic disease with a higher prevalence in industrialized countries, mainly due to obesity, sedentary lifestyles and endocrine dysfunctions related to the pancreas. Type 1 diabetes is an autoimmune disease in which the beta cells of the pancreas, which produce insulin, are destroyed, so exogenous insulin delivery is necessary. A Type 1 diabetic patient must follow a therapy of subcutaneous insulin administration adjusted to his or her metabolic needs and lifestyle; this therapy tries to mimic the insulin profile of a non-pathological pancreas. Current technology makes it possible to develop the so-called artificial endocrine pancreas, which would provide accuracy, efficiency and safety to patients with respect to the normalization of glycemic control and the reduction of hypoglycemia risk, and would let patients be less preoccupied with their disease. The artificial pancreas consists of a continuous glucose sensor, an insulin infusion pump and a control algorithm that computes the insulin to infuse using the glucose level as its main input. This project presents a semi-closed-loop control method based on an expert rule-based fuzzy system. Fuzzy regulation builds on the ambiguity of human language: this uncertainty is used to form a set of rules that represent human reasoning while also controlling a process, in this case the glucoregulatory system. The project focuses on the design of a fuzzy controller that, using variables such as glucose, insulin and diet, is able to restore the endocrine function of the pancreas technologically. The algorithm was validated mainly through simulation experiments with a population of synthetic patients, evaluating the results with first-order statistics and more specific metrics such as the Kovatchev risk index, and then comparing them with results obtained by previous control methods. The results show that the fuzzy controller (FBPC) improves glycemic control with respect to a predictive expert system based on Boolean rules (pBRES). The FBPC always reduces the maximum glucose and raises the minimum glucose with respect to the pBRES, but it is under mis-adjusted therapies that the FBPC is especially robust: it lowers the maximum glucose by 8.64 mg/dl, uses 3.92 IU less insulin, raises the minimum glucose by 3.32 mg/dl and keeps 15.33 more samples within the 80-110 mg/dl glucose range. We therefore conclude that the FBPC achieves better glycemic control than the pBRES controller, making it especially effective, robust and safe under basal therapy mismatches, with great capacity for future improvement.
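A highly simplified sketch of rule-based fuzzy inference of the kind described in this abstract is given below. The membership functions, rules and output scale are invented for illustration only; they are not the clinical FBPC rules and must not be used for dosing decisions.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_insulin_rate(glucose_mgdl):
    # Fuzzify the glucose reading.
    low    = tri(glucose_mgdl,  40.0,  70.0, 100.0)
    normal = tri(glucose_mgdl,  80.0, 110.0, 140.0)
    high   = tri(glucose_mgdl, 120.0, 200.0, 300.0)
    # Rule consequents (insulin infusion rate, IU/h) with weighted-average
    # defuzzification: IF glucose IS low THEN rate IS 0.0, and so on.
    rates = {0.0: low, 1.0: normal, 3.0: high}
    total = sum(rates.values())
    return sum(r * w for r, w in rates.items()) / total if total else 1.0

for g in (65, 110, 180):
    print(g, "mg/dl ->", round(fuzzy_insulin_rate(g), 2), "IU/h")
```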

Relevance: 10.00%

Abstract:

Ultrasonic sound velocity measurements with hand-held equipment remain, due to their simplicity, among the most widely used methods for non-destructive grading of sawn timber, yet a dedicated standardization effort with respect to strength classes for Spanish species is still required. As part of an ongoing project aimed at defining standard testing methods, the effects on ultrasonic velocity of the dimensions of commonly tested Scots pine (Pinus sylvestris L.) timbers and of the equipment's testing frequency were investigated. Dedicated full-wave finite-difference time-domain software allowed simulation of pulse propagation through timbers of representative length and section combinations. Sound velocity measurements v_L were performed along the grain with the indirect method at 22 kHz and 45 kHz for grids of measurement points at specific distances. For sample sections larger than the cross-sectional wavelength λ_RT, the simulated sound velocity v_L converges to v_L = (C_L/ρ)^0.5. For smaller square sections the sound velocity drops to v_L = (E_L/ρ)^0.5, where C_L, E_L and ρ are the stiffness, Young's modulus and density, respectively. The experiments confirm a linear regression between time of flight and measurement distance even at distances of less than two wavelengths (2λ_L); the fitted sound speed values increased by 15% between the two tested frequencies.
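The time-of-flight regression mentioned at the end of the abstract can be sketched as below: fitting distance against time of flight makes the slope an estimate of sound speed while any constant trigger or coupling delay is absorbed by the intercept. All numbers are illustrative assumptions, not the paper's measurements.

```python
import numpy as np

rng = np.random.default_rng(3)
distances_m = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.2])
true_speed = 5000.0              # m/s, plausible along-grain magnitude
trigger_delay = 4e-6             # s, constant equipment offset
tof_s = distances_m / true_speed + trigger_delay + rng.normal(0, 1e-6, 6)

# Slope of distance vs. time of flight = sound speed.
speed_fit, intercept = np.polyfit(tof_s, distances_m, 1)
print(f"fitted speed = {speed_fit:.0f} m/s")
# Dividing distance by the raw time of flight instead would bias the
# estimate, most strongly at the shortest distances.
```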

Relevance: 10.00%

Abstract:

In this paper we propose a flexible Multi-Agent Architecture, together with a methodology for indoor location, which allows us to locate any mobile station (MS) such as a laptop, smartphone, tablet or robotic system in an indoor environment using wireless technology. Our technology is complementary to GPS location finding, as it allows us to locate an MS in a specific room on a specific floor using Wi-Fi networks. The idea is that any MS will have at its disposal an agent, known as a Fuzzy Location Software Agent (FLSA), with minimal processing requirements, which collects the power received from different Access Points distributed around the floor and establishes its location on a plan of the floor of the building. In order to do so it communicates with the Fuzzy Location Manager Software Agent (FLMSA). The FLMSAs are local agents that form part of the management infrastructure of the organization's Wi-Fi network. The FLMSA implements a location estimation methodology divided into three phases (measurement, calibration and estimation) for locating mobile stations. Our solution is a fingerprint-based positioning system that overcomes the problem of the relative effect of doors and walls on signal strength and is independent of the network device manufacturer. In the measurement phase, our system collects received signal strength indicator (RSSI) measurements from multiple access points. In the calibration phase, our system uses these measurements in a normalization process to create a radio map, a database of RSS patterns. Unlike traditional radio map-based methods, our methodology normalizes RSS measurements collected at different locations on a floor. In the third phase, we use fuzzy controllers to locate an MS on the floor plan of the building. Experimental results demonstrate the accuracy of the proposed method and make clear that the system is highly likely to be able to locate an MS in a room or an adjacent room.
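One plausible reading of the calibration-phase normalization, sketched below: z-scoring each RSSI fingerprint makes it comparable across devices before it is stored in the radio map. The exact normalization and the fuzzy estimation used by the FLMSA are not reproduced; nearest-neighbor matching stands in for the fuzzy controllers.

```python
import numpy as np

def normalize_fingerprint(rssi_dbm):
    """Z-score one vector of RSSI readings (one value per access point),
    removing per-device offset and scale."""
    v = np.asarray(rssi_dbm, dtype=float)
    return (v - v.mean()) / (v.std() + 1e-9)

def build_radio_map(samples_by_location):
    """Average the normalized fingerprints collected at each calibration point."""
    return {loc: np.mean([normalize_fingerprint(s) for s in samples], axis=0)
            for loc, samples in samples_by_location.items()}

def nearest_location(radio_map, rssi_dbm):
    probe = normalize_fingerprint(rssi_dbm)
    return min(radio_map, key=lambda loc: np.linalg.norm(radio_map[loc] - probe))

# Two calibration points, three visible access points, readings in dBm.
radio_map = build_radio_map({
    "room_101": [[-48, -67, -80], [-50, -66, -82]],
    "room_102": [[-75, -52, -61], [-73, -50, -63]],
})
print(nearest_location(radio_map, [-46, -69, -79]))   # -> room_101
```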

Relevance: 10.00%

Abstract:

Moment invariants have been thoroughly studied and repeatedly proposed as one of the most powerful tools for 2D shape identification. In this paper a set of such descriptors is proposed whose basis functions are discontinuous at a finite number of points. The goal of using discontinuous functions is to avoid the Gibbs phenomenon, and therefore to yield a better approximation capability for discontinuous signals such as images. Moreover, the proposed set of moments allows the definition of rotation invariants, this being the other main design concern. Translation and scale invariance are achieved by means of standard image normalization. Tests are conducted to evaluate the behavior of these descriptors in noisy environments, where images are corrupted with Gaussian noise up to different SNR values. Results are compared to those obtained using Zernike moments, showing that the proposed descriptor matches their performance in image retrieval tasks in noisy environments while demanding much less computational power at every stage of the query chain.
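The standard translation and scale normalization mentioned in the abstract corresponds to normalized central moments, sketched below. The paper's rotation invariants built on discontinuous basis functions are its own contribution and are not reproduced here.

```python
import numpy as np

def normalized_central_moment(img, p, q):
    """eta_pq: invariant to translation and uniform scaling of the image."""
    img = np.asarray(img, dtype=float)
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00   # centroid
    mu_pq = (((x - xc) ** p) * ((y - yc) ** q) * img).sum() # central moment
    return mu_pq / m00 ** (1 + (p + q) / 2)                 # scale normalization

# The value is essentially unchanged when the shape is shifted and scaled.
img = np.zeros((64, 64)); img[10:20, 15:35] = 1.0        # small rectangle
big = np.zeros((128, 128)); big[40:60, 50:90] = 1.0      # shifted, 2x scale
print(normalized_central_moment(img, 2, 0),
      normalized_central_moment(big, 2, 0))
```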

Relevance: 10.00%

Abstract:

Quantification of neurotransmission Single-Photon Emission Computed Tomography (SPECT) studies of the dopaminergic system can be used to track and stage disease and to facilitate early diagnosis. The aim of this study was to implement QuantiDOPA, semi-automatic quantification software for use in clinical routine to reconstruct and quantify neurotransmission SPECT studies using radioligands that bind the dopamine transporter (DAT). To this end, a workflow-oriented framework for biomedical imaging (GIMIAS) was employed. QuantiDOPA allows the user to perform a semi-automatic quantification of striatal uptake in three stages: reconstruction, normalization and quantification. QuantiDOPA is a useful tool for semi-automatic quantification in DAT SPECT imaging, and it has proved to be simple and flexible.
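As an illustration of the quantification stage, a specific binding ratio of the kind commonly reported for DAT SPECT can be computed as below. The region definitions and counts are invented for illustration; QuantiDOPA's actual reconstruction, normalization and quantification pipeline is described in the paper.

```python
import numpy as np

def specific_binding_ratio(striatal_counts, background_counts):
    """SBR = (mean striatal uptake - mean nonspecific uptake) / nonspecific."""
    s, b = np.mean(striatal_counts), np.mean(background_counts)
    return (s - b) / b

striatum = [812, 790, 845]     # mean counts/voxel in striatal ROIs (illustrative)
occipital = [265, 258, 270]    # reference (nonspecific) region (illustrative)
print(round(specific_binding_ratio(striatum, occipital), 2))
```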

Relevance: 10.00%

Abstract:

This paper describes the text normalization module of a fully trainable text-to-speech conversion system and its application to number transcription. The main target is to generate a language-independent text normalization module based on data instead of expert rules. This paper proposes a general architecture based on statistical machine translation techniques, composed of three main modules: a tokenizer for splitting the text input into a token graph, a phrase-based translation module for token translation, and a post-processing module for removing some tokens. This architecture has been evaluated for number transcription in several languages: English, Spanish and Romanian. Number transcription is an important aspect of the text normalization problem.
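To make the number-transcription step concrete, the toy sketch below replaces the trained phrase-based model with a hand-written lookup so the token flow is visible. A real system learns these mappings from parallel written/spoken data, per language.

```python
# Toy spoken-form expansion for 0-99 numeric tokens (English only).
UNITS = ["zero", "one", "two", "three", "four", "five",
         "six", "seven", "eight", "nine"]
TEENS = ["ten", "eleven", "twelve", "thirteen", "fourteen", "fifteen",
         "sixteen", "seventeen", "eighteen", "nineteen"]
TENS = {2: "twenty", 3: "thirty", 4: "forty", 5: "fifty",
        6: "sixty", 7: "seventy", 8: "eighty", 9: "ninety"}

def transcribe_number(token):
    """Spoken English form of a 0-99 numeric token."""
    n = int(token)
    if n < 10:
        return UNITS[n]
    if n < 20:
        return TEENS[n - 10]
    if n < 100:
        return TENS[n // 10] + ("" if n % 10 == 0 else " " + UNITS[n % 10])
    raise ValueError("toy model only covers 0-99")

print(transcribe_number("47"))   # -> "forty seven"
```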