850 results for wide area based information embedded power system (IEPS-W)


Relevance:

100.00%

Publisher:

Abstract:

This paper outlines an automatic computer-vision system for the identification of Avena sterilis, a weed that grows in cereal crops. The final goal is to reduce the quantity of herbicide to be sprayed, an important and necessary step towards precision agriculture, so that only areas where the presence of weeds is significant are sprayed. The main problems in identifying this kind of weed are its spectral signature, which is similar to that of the crop, and its irregular distribution in the field. A new strategy has been designed involving two processes: image segmentation and decision making. The image segmentation combines basic image-processing techniques to extract cells from the image as low-level units. Each cell is described by two area-based attributes measuring the relation between crop and weeds. The decision making is based on Support Vector Machines (SVM) and determines whether a cell must be sprayed. The main findings of this paper lie in the combination of the segmentation and SVM decision processes. Another important contribution of this approach is the system's minimal memory and computing-power requirements compared with previous works. The performance of the method is illustrated by comparative analysis against some existing strategies.
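As a rough illustration of the cell-level decision process described above, the sketch below trains a Support Vector Machine on two area-based attributes per cell and returns a spray decision. It is a minimal sketch under assumed data, not the authors' implementation: the crop/weed cover fractions, labels and threshold are hypothetical placeholders.

```python
# Minimal sketch of the cell-based spraying decision: each image cell is
# reduced to two area-based attributes (here, hypothetical crop-cover and
# weed-cover fractions) and an SVM decides whether to spray.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical training data: [crop_fraction, weed_fraction] per cell,
# labelled 1 = spray, 0 = do not spray (placeholder labelling rule).
X_train = rng.random((200, 2))
y_train = (X_train[:, 1] > 0.3 * X_train[:, 0] + 0.1).astype(int)

clf = SVC(kernel="rbf").fit(X_train, y_train)

def spray_decision(cells: np.ndarray) -> np.ndarray:
    """Return a 0/1 spray decision for each cell's attribute pair."""
    return clf.predict(cells)

print(spray_decision(np.array([[0.8, 0.05], [0.2, 0.5]])))  # e.g. [0 1]
```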

Relevance:

100.00%

Publisher:

Abstract:

This report presents an overview of our current work on the efficient parallel implementation of traditional logic programming systems. The work is based on the &-Prolog System, a system for the automatic parallelization and execution of logic programming languages within the Independent And-parallelism model, and on the global analysis and parallelization tools developed for it. To make the report self-contained, we first describe the "classical" tools of the &-Prolog system. We then explain in detail the work performed in improving and generalizing the global analysis and parallelization tools. Finally, we describe the objectives that will drive our future work in this area.

Relevance:

100.00%

Publisher:

Abstract:

The paper presents a method to analyze the robust stability and transient performance of a distributed power system consisting of commercial converter modules interconnected through a common input filter. The method is based on four transfer functions that are measurable from the converter input and output terminals. It is shown that these parameters provide important information on the power module's sensitivity to the interactions caused by the external impedances. A practical characterization of the described system structure is performed by introducing special transfer functions used to assess the interactions. Experimental results are provided to support the presented analysis procedure.
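The paper's analysis works with terminal-measurable transfer functions; a closely related, standard construct for such input-filter interactions is the impedance ratio (minor-loop gain) at the filter/converter interface. The sketch below is only a simplified stand-in for the paper's four-parameter method, with assumed filter values and an assumed constant-power-load input impedance.

```python
# Illustrative check of filter/converter interaction via the impedance ratio
# (minor-loop gain) Zout_filter / Zin_converter, evaluated over frequency.
# Simplified stand-in for the paper's four-parameter formulation.
import numpy as np

f = np.logspace(1, 5, 2000)          # 10 Hz .. 100 kHz
s = 2j * np.pi * f

# Hypothetical input filter: L-C with parasitic resistance (assumed values).
L, C, r = 100e-6, 47e-6, 0.05
Zout_filter = (s * L + r) / (1 + (s * L + r) * s * C)

# Hypothetical converter input impedance: a tightly regulated converter
# behaves as a negative incremental resistance at low frequency (-4 ohm).
Zin_converter = np.full_like(s, -4.0)

Tm = Zout_filter / Zin_converter      # minor-loop gain
print(f"max |Zout/Zin| = {np.max(np.abs(Tm)):.2f}")
# |Tm| << 1 at all frequencies implies negligible interaction; values near or
# above 1 call for a full Nyquist-based assessment of Tm.
```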

Relevance:

100.00%

Publisher:

Abstract:

This project consists of the creation of a graphical user interface (GUI) in the MATLAB environment for the graphical representation of an HRTF (Head-Related Transfer Function) database. The head-related transfer function is a very useful tool in the study of the human ability to perceive the surrounding sound environment and to localize sound sources in space. The binaural HRTF (the term for the pair formed by the left-ear and right-ear HRTFs) is of special interest in itself, since the differences between the HRTFs of the two ears carry the information that our auditory system uses to perceive the sound field; a tool for inspecting them is therefore of clear value in this field. The interaural differences are characterized in amplitude and in time, and vary with frequency. The inverse Fourier transform of the HRTF yields the head-related impulse response, the HRIR (Head-Related Impulse Response), which, besides being widely used in software and devices for surround-sound generation, serves to obtain the ITD (Interaural Time Difference) and ILD (Interaural Level Difference), commonly called the spatial localization parameters.
The HRTF database contains the binaural information for different sound-source positions, forming a grid of spherical coordinates surrounding the subject's head. According to the measurements carried out in the anechoic chamber of the EUITT (Escuela Universitaria de Ingeniería Técnica de Telecomunicación), this grid has a resolution of 10° in elevation and 5° in azimuth. The receivers are two microphones housed in the acoustic mannequin HATS (Head and Torso Simulator), Brüel & Kjær model 4100D, which reproduces the physical features that shape the perception of the environment: the pinna, head, neck and human torso. Interpolation must be computed for any point not contained in the HRTF database; this is essential not only to extend the database's coverage but also to enable comparison with other databases in this field. The chosen interpolation method is inverse distance weighting between points; depending on the values entered by the user, an interpolation of two or four points adjacent to the requested phi or theta value is carried out.
The GUI is conceived for simple, clear and predictable, yet interactive, operation. From the first outline of the program its philosophy has been clear, imposed by the needs of a user looking for a practical tool with intuitive handling. Its single-window design gathers both the data-retrieval components and those that display the HRTFs, the HRIRs and the spatial localization parameters ITD and ILD. The user can switch between graphical views while entering the coordinates of the points to display, defined by phi (elevation) and theta (azimuth), which makes the displayed information easy to access and to read. The user may enter values contained in the database or intermediate ones, in which case the interface performs the interpolation. For added versatility, the plotted graphs can be exported as image files, so that the user can extract the data of interest for any value of phi and theta.
The project is completed with a survey and comparative study of the role and application of HRTF databases in the scientific and research literature. Related work was gathered from research journals such as the JAES (Journal of the Audio Engineering Society) and those of the ASA (Acoustical Society of America), from the IEEE (Institute of Electrical and Electronics Engineers) and the Web of Knowledge, among others, as well as from more common sources such as Google Scholar and the university's "Ingenio" portal to all the electronic resources in its databases. The study broadens the picture of the practical uses of HRTFs. Most studies focus their efforts on improving the perception of a sound event by simulating it over stereo or multichannel playback; from the HRTFs this is possible through the analysis and computation of data such as regressions, which are very useful for predicting one measurement from the information of another. Another field of special interest is the generation of 3D sound: from an HRTF database a binaural signal can be simulated, and algorithms implemented on DSP devices use interaural delays and spectral differences to achieve convincing surround sound, in which reverberation effects remain essential for a credible result. Given the computational complexity this requires, much of the literature converges on developing more efficient systems, reaching goals such as the generation of 3D sound in real time.
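Two of the operations described above lend themselves to a compact illustration: obtaining the HRIR as the inverse Fourier transform of the HRTF, and extracting the ITD from the left/right pair. The sketch below uses synthetic placeholder HRTFs (a flat response with a pure interaural delay); a real use would read the measured database instead.

```python
# Sketch: HRIR = inverse FFT of the HRTF; ITD estimated as the lag of the
# peak of the cross-correlation between the left and right HRIRs.
import numpy as np

fs = 44100
n = 512
freqs = np.fft.rfftfreq(n, d=1 / fs)

# Hypothetical left/right HRTFs: flat magnitude with a pure interaural delay.
true_itd_samples = 13                        # ~0.29 ms at 44.1 kHz
H_left = np.ones_like(freqs, dtype=complex)
H_right = np.exp(-2j * np.pi * freqs * true_itd_samples / fs)

hrir_left = np.fft.irfft(H_left, n)          # HRIR = IFFT of HRTF
hrir_right = np.fft.irfft(H_right, n)

# ITD estimate: lag of the cross-correlation maximum.
xcorr = np.correlate(hrir_right, hrir_left, mode="full")
lag = np.argmax(xcorr) - (n - 1)
print(f"estimated ITD = {lag / fs * 1e6:.0f} us")   # ~295 us
```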

Relevance:

100.00%

Publisher:

Abstract:

This work studies the potential production of biomass from rye and triticale crops in the six agricultural regions of the Community of Madrid (CM) and the possibility of applying it to bioelectricity production in each of them. First, a literature review of the current situation of bioelectricity is carried out. One of the main figures to bear in mind is that the PER 2011-2020 estimates the total electric power installed from biomass in Spain by 2020 at 1,350 MW, about two and a half times the capacity existing at the end of 2010. The status of the incentives for using biomass from energy crops for electricity production, currently regulated by Real Decreto-ley 9/2013, of 12 July, which adopted urgent measures to guarantee the financial stability of the electrical system, is also discussed, and the sustainability criteria for the use of solid biofuels are considered.
The six agricultural regions of the Community of Madrid (Área Metropolitana, Campiña, Guadarrama, Lozoya-Somosierra, Sur-Occidental and Vegas) are characterized in two parts: a description of their climate and of the distribution of the area devoted to fallow and herbaceous crops. A bibliographic compilation is made of the most representative crop-growth simulation models (CERES and Cereal YES), of field trials with rye and triticale for biomass production, and of studies using GIS tools and multicriteria analysis techniques for siting bioelectricity plants and for biomass logistics.
A simulation model of rye and triticale biomass productivity in the CM is proposed, resulting from the combination of a grain-production model based on climate data with the average biomass/grain ratio of both crops obtained in a previous experiment. With TN = mean temperature normalized to 9.9 °C and PN = accumulated precipitation normalized to 496.7 mm, the models are:
- Rye biomass (t d.m./ha) = 2.785 * [1.078 * ln(TN + 2*PN) + 2.3256]
- Triticale biomass (t d.m./ha) = 2.595 * [2.4495 * ln(TN + 2*PN) + 2.6103]
Applying these models, the potential biomass production of rye and triticale is quantified for each agricultural region of the CM under scenarios defined by the share of the available rainfed fallow area that is used (25%, 50%, 75% and 100%). Using 100% of the rainfed fallow area, the potential biomass production in the CM from rye and triticale was estimated at 169,710.72; 149,811.59; 140,217.54; 101,583.01; 26,961.88 and 1,886.40 t per year for the regions of Campiña, Vegas, Sur-Occidental, Área Metropolitana, Lozoya-Somosierra and Guadarrama, respectively.
A multicriteria analysis based on compromise programming is performed to identify the agricultural regions best suited for siting bioelectricity plants, according to the criteria of biomass potential, electrical infrastructure, road network, protected areas and urban-core area. The resulting ranking is: Campiña, Sur-Occidental, Vegas, Área Metropolitana, Lozoya-Somosierra and Guadarrama. Using GIS techniques, the most suitable location for a 2.2 MW bioelectricity plant is studied in each agricultural region and for each fallow-use scenario (25%, 50%, 75% and 100%), wherever sufficient potential exists. Taking a lower heating value (PCI) of 3,500 kcal/kg for dry rye and triticale biomass, at least 17,298.28 t are needed to supply each 2.2 MW plant.
Finally, the maximum bioelectricity potential of each agricultural region based on rye and triticale as biomass producers is analyzed. Depending on whether 25% or 100% of the rainfed fallow is devoted to biomass production, the maximum bioelectricity capacity that could be installed ranges between 5.4 and 21.58 MW in Campiña, 4.76 and 19.05 MW in Vegas, 4.46 and 17.83 MW in Sur-Occidental, 3.23 and 12.92 MW in Área Metropolitana, 0.86 and 3.43 MW in Lozoya-Somosierra, and 0.06 and 0.24 MW in Guadarrama. The total capacity that could be installed in the CM from rye and triticale biomass would thus range between 18.76 and 75.06 MW, depending on whether 25% or 100% of the rainfed fallow land is used for these crops.
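The two productivity equations above, together with the stated requirement of 17,298.28 t of biomass for one 2.2 MW plant (taken here as an annual figure, consistent with the annual production totals), translate directly into code. The sketch below implements them; the fallow area in the example is hypothetical.

```python
# The two productivity models proposed above, plus the given conversion of
# biomass into 2.2 MW plants (17,298.28 t per plant, LHV of 3,500 kcal/kg).
import math

def rye_biomass(tn: float, pn: float) -> float:
    """Rye biomass (t d.m./ha); TN, PN are the normalized climate inputs."""
    return 2.785 * (1.078 * math.log(tn + 2 * pn) + 2.3256)

def triticale_biomass(tn: float, pn: float) -> float:
    """Triticale biomass (t d.m./ha)."""
    return 2.595 * (2.4495 * math.log(tn + 2 * pn) + 2.6103)

BIOMASS_PER_PLANT_T = 17298.28   # t/year needed by one 2.2 MW plant

def plants_supported(total_biomass_t: float) -> int:
    return int(total_biomass_t // BIOMASS_PER_PLANT_T)

# Example with normalized climate values of 1.0 (i.e. 9.9 degC and 496.7 mm)
# and a hypothetical 10,000 ha of rainfed fallow sown with rye:
biomass = rye_biomass(1.0, 1.0) * 10_000
print(f"{biomass:,.0f} t/year -> {plants_supported(biomass)} plant(s) of 2.2 MW")
```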

Relevance:

100.00%

Publisher:

Abstract:

Autonomous systems require, in most cases, reasoning and decision-making capabilities, and the decision process has to occur in real time. Real-time computing means that every situation or event has to be answered before a temporal deadline; in complex applications, these deadlines are usually on the order of milliseconds, or even microseconds if the application is very demanding. To comply with these timing requirements, computing tasks have to be performed as fast as possible. The problem arises when the computations are no longer simple but very time-consuming operations. A good example can be found in autonomous navigation systems with visual-tracking submodules, where Kalman filtering is the most widespread solution; in recent years, however, some interesting new approaches have been developed, and particle filtering, given its more general problem-solving features, has reached an important position in the field. The aim of this thesis is to design, implement and validate a hardware platform that constitutes, in itself, an embedded intelligent system. The proposed system combines particle filtering and evolutionary computation algorithms to generate intelligent behavior. Traditional approaches to particle filtering and evolutionary computation have been developed on software platforms, including parallel capabilities to some extent. In this work, an additional goal is to fully exploit the advantages of hardware implementation: by using the computational resources available in an FPGA device, better performance in terms of computation time is expected. These hardware resources will be in charge of the extensive, repetitive computations, and with this hardware-based implementation real-time features are also expected.
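As a point of reference for the kind of computation involved, the sketch below implements a minimal bootstrap particle filter for a 1-D tracking problem: the per-particle predict/update/resample loop is exactly the extensive, repetitive workload that the thesis proposes to move into FPGA hardware. The motion model, noise levels and measurements are assumed for illustration.

```python
# Minimal bootstrap particle filter for 1-D tracking.
import numpy as np

rng = np.random.default_rng(1)
N = 1000                           # particles
particles = rng.normal(0.0, 1.0, N)
weights = np.full(N, 1.0 / N)

def step(measurement: float, proc_std=0.5, meas_std=1.0):
    """One predict/update/resample cycle of the bootstrap filter."""
    global particles, weights
    # Predict: propagate every particle through the (random-walk) motion model.
    particles = particles + rng.normal(0.0, proc_std, N)
    # Update: weight each particle by the measurement likelihood.
    weights = np.exp(-0.5 * ((measurement - particles) / meas_std) ** 2)
    weights /= weights.sum()
    # Resample: multinomial resampling to avoid weight degeneracy.
    idx = rng.choice(N, size=N, p=weights)
    particles, weights[:] = particles[idx], 1.0 / N
    return particles.mean()        # state estimate

for z in [0.2, 0.9, 1.4, 2.1]:     # synthetic measurements
    print(f"estimate: {step(z):+.2f}")
```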

Relevance:

100.00%

Publisher:

Abstract:

The need for techniques to predict the vibro-acoustic response of space structures has been gaining importance in recent years. Current numerical techniques can reliably predict the vibro-acoustic behaviour of systems with high or low modal densities; however, the two ranges do not always overlap, which calls for methods specific to the intermediate range, known as the mid-frequency range. Space structures are in general very complex, and over this range the full system mixes sub-structures of low modal density with others of high modal density; developing methods that accurately describe the vibro-acoustic response in this range is the scope of this dissertation. For the structures studied here, the low and high modal-density ranges correspond, in general, to the low and high frequency ranges, respectively. The numerical methods for those ranges are well established: deterministic techniques, such as the Finite Element Method (FEM), for low frequencies, and statistical techniques, such as Statistical Energy Analysis (SEA), for high frequencies. In the mid-frequency range neither of these methods can be used with sufficient confidence and, in the absence of more specific proposals, hybrid methods have been developed that combine low- and high-frequency techniques so that each compensates the deficiencies of the other; otherwise, an undetermined gap is left between the low and high frequencies of the vibro-acoustic response function.
This work proposes two different solutions to the mid-frequency problem. The first, the Subsystem based High Frequency Limit (SHFL) procedure, is a multi-hybrid procedure in which each sub-structure of the full system is modelled with the appropriate numerical technique, depending on the frequency of study. To this end, the concept of the high-frequency limit of a sub-structure is introduced: the limit above which the sub-structure has enough modal density to be modelled by SEA. If the analysis frequency is below the sub-structure's high-frequency limit, the sub-structure is modelled with FEM; above it, with SEA. The procedure leads to the set of hybrid models required to cover the mid-frequency range, which can thereby be defined precisely as the range between the lowest and the highest of the high-frequency limits of the sub-structures composing the full system. The results obtained with this method show an improvement in the continuity of the vibro-acoustic response, with a smooth transition between the low- and high-frequency ranges.
The second proposed method is the Hybrid Substructuring method based on Component Mode Synthesis (HS-CMS). It classifies the system modal basis into sets of global modes (affecting the whole system or several of its parts) and local modes (affecting a single sub-structure), using a Component Mode Synthesis, in particular a Craig-Bampton transformation, to express the system modal basis in terms of the modal bases of the sub-structures. Global modes are associated with long-wavelength motion and local modes with short-wavelength motion, the high-frequency limit of each sub-structure again serving as the frequency frontier between the two sets; this classification also makes it possible to locate the modes of the full system spatially and to study its behaviour from the point of view of the sub-structures. From this classification, the global equations of motion, governed by the global modes, are derived, with the influence of the set of local modes introduced as corrections to the dynamic stiffness matrix and the force vector. The local equations of motion are solved through SEA, in a hybrid model that incorporates the additional input power contributed by the presence of the global modes. The method has been tested for computing the response of structures subjected to both structural and acoustic loads.
Both methods were first tested on simple structures to establish their bases and hypotheses of application. They were then applied to space structures, such as satellites and antenna reflectors, showing good results, as concluded from the comparison of the simulations with experimental data measured in both structural and acoustic tests. This work opens a wide field of research from which precise and efficient methodologies can be obtained to reproduce the vibro-acoustic behaviour of systems in the mid-frequency range.
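The SHFL partitioning rule described above reduces to a simple assignment per analysis frequency. The sketch below illustrates it with hypothetical sub-structures and high-frequency limits (the names and values are not taken from the thesis):

```python
# Sketch of the SHFL partitioning rule: at a given analysis frequency, each
# sub-structure is modelled with FEM below its high-frequency limit and with
# SEA above it; the mid-frequency range spans the lowest to the highest limit.

high_freq_limit_hz = {          # hypothetical high-frequency limit per sub-structure
    "panel": 400.0,
    "reflector": 900.0,
    "strut": 2500.0,
}

def modelling_technique(freq_hz: float) -> dict:
    """Assign FEM or SEA to every sub-structure at one analysis frequency."""
    return {name: ("SEA" if freq_hz >= limit else "FEM")
            for name, limit in high_freq_limit_hz.items()}

lo, hi = min(high_freq_limit_hz.values()), max(high_freq_limit_hz.values())
print(f"mid-frequency range: {lo:.0f}-{hi:.0f} Hz")
for f in (200.0, 600.0, 3000.0):
    print(f, modelling_technique(f))
```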

Relevance:

100.00%

Publisher:

Abstract:

The ever-growing demand for air transportation, together with new military intervention scenarios, is forcing an optimization of the use of airspace. The EU and the USA (through SESAR and NextGen, respectively) have thus laid the groundwork for a new air traffic management (ATM), intended to increase the capacity of airports and air routes and to give greater flexibility to the use of airspace without compromising the safety of its users. From a purely technical point of view, the key to this change of model lies in knowing the position of every aircraft at every instant. The trend in ATM is to rely on ADS-B as the main source of positioning; however, since this system broadcasts the position that the aircraft itself obtains from GPS, an independent surveillance system is needed. The current intention is to migrate from Secondary Surveillance Radar (SSR) towards Wide Area Multilateration (WAM) in order to improve positional integrity for en-route applications. Taking advantage of the fast deployment of ADS-B, its base stations are to be reused for WAM: every base station that receives an aircraft's ADS-B message also sends the measured Time of Arrival (TOA) of that message to the air traffic center. The aircraft position is then obtained through multilateration, a technique that combines the TOA measurements of the same ADS-B message obtained at the different base stations. The objective is to estimate each aircraft's position as accurately as possible.
Two basic areas must be studied in order to design a system that meets this objective. The first is the identification and subsequent characterization of the errors (both systematic and random) affecting the TOA measurement. The second is the study of tracking systems based on sophisticated versions of the Kalman filter (IMM, UKF). On these two pillars, this doctoral thesis proposes a system that tracks aircraft while correcting the effects of the main distortions affecting the TOA measurement: tropospheric refraction and synchronization error. The improvement in positioning accuracy has been assessed by simulating hypothetical WAM scenarios.
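The multilateration step itself can be sketched compactly: given the TOAs of one ADS-B message at several stations, solve for the emission position (and time) by nonlinear least squares on the predicted TOAs. The station layout, initial guess and noise level below are assumed for illustration; the thesis's actual contribution, the correction of tropospheric and synchronization errors, is not modelled here.

```python
# Sketch of the multilateration step: estimate an aircraft position from the
# TOAs of one message at several ground stations.
import numpy as np
from scipy.optimize import least_squares

C = 299_792_458.0                            # speed of light, m/s

stations = np.array([[0.0, 0.0, 0.0],        # hypothetical station positions (m)
                     [40e3, 0.0, 100.0],
                     [0.0, 45e3, 50.0],
                     [35e3, 38e3, 200.0]])

def toa_model(pos, t_emit):
    """Predicted TOA at every station for emission at `pos`, time `t_emit`."""
    return t_emit + np.linalg.norm(stations - pos, axis=1) / C

def solve(toas):
    """Solve for (x, y, z, t_emit) from the measured TOAs."""
    def residuals(p):
        return toa_model(p[:3], p[3]) - toas
    x0 = np.array([10e3, 10e3, 5e3, 0.0])    # rough initial guess
    return least_squares(residuals, x0).x

true_pos, t0 = np.array([20e3, 25e3, 9e3]), 0.0
meas = toa_model(true_pos, t0) + np.random.default_rng(2).normal(0, 5e-9, 4)
print(solve(meas)[:3])   # close to true_pos (z is the weakest direction with
                         # near-coplanar ground stations)
```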

Relevance:

100.00%

Publisher:

Abstract:

SRAM-based Field-Programmable Gate Arrays (FPGAs) are built on a configuration memory of Static RAM (SRAM) technology. They offer several features that make them very attractive for building complex embedded systems. First, they have low Non-Recurrent Engineering (NRE) costs, since the logic and routing elements are pre-implemented (the user design defines their interconnection). Unlike other FPGA technologies, they can be reconfigured (even in the field) an unlimited number of times. Moreover, Xilinx SRAM-based FPGAs support Dynamic Partial Reconfiguration (DPR), which allows part of the FPGA to be reconfigured without disrupting the application. Finally, they provide high logic density, high processing capability and a rich set of hard macros. One limitation of this technology, however, is its susceptibility to ionizing radiation, which increases with technology scaling (smaller geometries, lower voltages and higher frequencies). This is a first-order concern for applications in harsh radiation environments with high dependability requirements. Ionizing radiation causes long-term degradation and can also induce instantaneous faults, which in turn may be reversible or produce irreversible damage.
In SRAM-based FPGAs, radiation-induced faults can appear at two different architectural layers, which are physically overlaid on the silicon die. The Application Layer (A-Layer) contains the user-defined hardware, and the Configuration Layer (C-Layer) contains the (volatile) configuration memory and its support circuitry. Faults at either layer can cause a system failure, which may be more or less tolerable depending on the system's dependability requirements; in the general case, such faults must be managed in some way. This thesis deals with managing SRAM-based FPGA faults at system level, in the context of autonomous, dependable embedded systems operating in a radiative environment. The focus is mainly on space applications, but the same principles can be applied to ground applications; the main differences between the two are the radiation level and the possibility of maintenance. The different techniques for A-Layer and C-Layer fault management are classified, and their implications for system dependability are assessed. Several architectures are proposed, for both single-layer and dual-layer Fault Managers. For the latter, a novel, flexible and versatile architecture is proposed that manages both layers concurrently in a coordinated way and allows the redundancy level and the dependability to be balanced.
Two different solutions are developed to validate dynamic fault-management techniques. The first is a simulation framework for C-Layer Fault Managers, based on SystemC as modeling language and event-driven simulator. This framework and its associated methodology allow the Fault Manager design space to be explored while decoupling its design from the development of the target FPGA. The framework includes models of both the FPGA C-Layer and the Fault Manager, which can interact at different abstraction levels (at configuration-frame level and at the JTAG or SelectMAP physical level). It is configurable, scalable and versatile, and includes fault-injection capabilities. Simulation results for several scenarios are presented and discussed. The second is a validation platform for Xilinx Virtex FPGA Fault Managers. The hardware platform hosts three Xilinx Virtex-4 FX12 FPGA Modules and two general-purpose 32-bit Microcontroller Unit (MCU) Modules. The MCU Modules allow software-based C-Layer and A-Layer Fault Managers to be prototyped. Each FPGA Module implements an A-Layer Ethernet link (through an Ethernet switch) with one of the MCU Modules, and a C-Layer JTAG link with the other; in addition, the two MCU Modules exchange commands and data over an internal UART link. As in the simulation framework, fault-injection capabilities are included. Test results for several scenarios are also presented and discussed.
In summary, this thesis covers the whole process from describing the problem of radiation-induced faults in SRAM-based FPGAs, through identifying and classifying fault-management techniques and proposing Fault Manager architectures, to validating them by simulation and test. The proposed future work is mainly related to the implementation of radiation-hardened System Fault Managers.
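As a conceptual illustration of one basic duty of a C-Layer Fault Manager, the sketch below shows a scrubbing pass over the configuration memory: read back each frame, compare a CRC against a golden reference, and rewrite on mismatch. The readback/writeback primitives are hypothetical placeholders simulated with a dict; a real manager would drive the device's JTAG or SelectMAP configuration interface, and this is not the thesis's architecture, only a common baseline technique.

```python
# Conceptual C-Layer scrubbing loop with simulated (hypothetical) primitives.
import zlib

_config_mem: dict[int, bytearray] = {}        # simulated configuration memory

def read_frame(addr: int) -> bytes:
    """Stand-in for a JTAG/SelectMAP readback of one configuration frame."""
    return bytes(_config_mem[addr])

def write_frame(addr: int, data: bytes):
    """Stand-in for a configuration-frame writeback."""
    _config_mem[addr] = bytearray(data)

def scrub(golden: dict[int, bytes]) -> list[int]:
    """One scrubbing pass: repair every frame whose CRC disagrees with the
    golden reference; return the addresses that were repaired."""
    repaired = []
    for addr, ref in golden.items():
        if zlib.crc32(read_frame(addr)) != zlib.crc32(ref):
            write_frame(addr, ref)            # repair from the golden copy
            repaired.append(addr)
    return repaired

# Demo: load a golden image, inject a single-bit upset, scrub it away.
golden = {0: b"\x00" * 16, 1: b"\xff" * 16}
for a, d in golden.items():
    write_frame(a, d)
_config_mem[1][3] ^= 0x04                     # injected fault (bit flip)
print("repaired frames:", scrub(golden))      # -> [1]
```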

Relevance:

100.00%

Publisher:

Abstract:

A paradigm shift is taking place in geodesy in the conception of digital terrain models, moving from designing the model with the fewest possible points to building it from hundreds of thousands or millions of points. This change is a consequence of the introduction of new technologies such as laser scanning, radar interferometry and image processing, whose rapid acceptance is due mainly to their high data-capture speed, their accessibility as reflectorless techniques, and the high level of detail of the resulting models. Classic survey methods are based on discrete point measurements which, taken as a whole, form a model; their precision derives from the precision of each individual point measurement. Terrestrial laser scanning (TLS) takes a different approach to modelling the observed object: the point clouds produced by a TLS scan are treated as a whole by means of area-based analysis, so the final model is not an aggregation of points but the best surface fitted to the point clouds. When the precision of capturing individual points with tachymetric methods is compared with that of TLS equipment, the inferiority of the latter is clear; it is in the treatment of the point clouds with area-based analysis methods that acceptable precisions have been obtained, making it possible to fully consider the incorporation of this technology in studies of deformations and movements of structures. Notable TLS applications include heritage recording, recording of the construction phases of industrial plants and structures, accident reporting, and the monitoring of ground movements and structural deformations.
In dam monitoring, compared with tracking specific points inside the dam, on its crest or on its face, having a continuous model of the downstream face opens the possibility of introducing surface deformation-analysis methods and behaviour models that improve the understanding and forecasting of dam movements. Nevertheless, the application of TLS to dam monitoring should be considered a method complementary to the existing ones: while pendulums, and more recently the technique based on the differential global positioning system (DGPS), give continuous information on the movements of certain points of the dam, TLS allows the seasonal evolution to be followed and potential problem zones to be detected over the whole face.
This work analyzes the characteristics of TLS technology and the parameters involved in the final precision of the scans, and establishes the need to use equipment based on the direct measurement of time of flight, also called pulsed, for distances between 100 m and 300 m. The application of TLS to the modelling of structures and vertical walls is studied, analyzing the factors that influence the final precision, such as the registration of the point clouds, the target types, and the combined effect of scanning angle and distance. The use of pattern charts relating precision or accuracy to the scanning distance and angle of incidence is proposed and validated for the design of fieldwork; its application to the preparation of a scanning campaign aimed at monitoring the movements of a dam is described, and recommendations are made for applying TLS to large structures. The pattern chart of a specific mid-range TLS instrument was produced from two field tests under realistic working conditions, scanning over the instrument's full range of distances and angles; taking advantage of graphic semiology, a map-type plot was designed that combines isolines with points sized and grey-scaled in proportion to the precision values they represent, and the precisions obtained under different field conditions were compared with the specifications. Two methods for obtaining the precision of wall models and for detecting their movements are analyzed: the "plane of best fit" method and the proposed "simulated deformation" method, the latter showing improved performance. These results lead to a discussion and recommendations about optimal TLS operation in civil engineering works.
Finally, the seasonal movements of an arch-gravity dam registered by its direct pendulums are compared with those obtained from the analysis of the point clouds of several scanning campaigns of the same dam. The results show differences of millimetres, the best of them of the order of one millimetre. The methodology used is explained, and considerations are made regarding the density of the point clouds and the size of the triangular meshes.
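The "plane of best fit" evaluation mentioned above has a standard linear-algebra core: fit a plane to the registered point cloud and take the spread of the residuals as the precision figure. The sketch below does this by SVD on a synthetic wall with an assumed 2 mm of scanner noise; real input would be a registered TLS scan.

```python
# Sketch of the "plane of best fit" evaluation on a synthetic point cloud.
import numpy as np

rng = np.random.default_rng(3)

# Synthetic wall: points on a plane plus 2 mm (0.002 m) of scanner noise.
n = 5000
pts = np.column_stack([rng.uniform(0, 10, n),      # x (m)
                       rng.uniform(0, 10, n),      # y (m)
                       rng.normal(0, 0.002, n)])   # z = noise about the plane

def plane_of_best_fit(points: np.ndarray):
    """Return (centroid, unit normal, residual std) of the best-fit plane."""
    centroid = points.mean(axis=0)
    # The plane normal is the singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]
    residuals = (points - centroid) @ normal       # signed distances
    return centroid, normal, residuals.std()

_, normal, sigma = plane_of_best_fit(pts)
print(f"normal = {np.round(normal, 3)}, residual std = {sigma*1000:.2f} mm")
```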

Relevância:

100.00% 100.00%

Publicador:

Resumo:

A more natural, intuitive, user-friendly, and less intrusive human–computer interface for controlling an application by executing hand gestures is presented. For this purpose, a robust vision-based hand-gesture recognition system has been developed, and a new database has been created to test it. The system is divided into three stages: detection, tracking, and recognition. The detection stage searches every frame of a video sequence for potential hand poses, using a binary Support Vector Machine classifier with Local Binary Patterns as feature vectors. These detections are employed as input to a tracker to generate a spatio-temporal trajectory of hand poses. Finally, the recognition stage segments a spatio-temporal volume of data using the obtained trajectories and computes a video descriptor called Volumetric Spatiograms of Local Binary Patterns (VS-LBP), which is delivered to a bank of SVM classifiers to perform the gesture recognition. The VS-LBP is a novel video descriptor and one of the most important contributions of the paper: it provides much richer spatio-temporal information than other existing approaches in the state of the art, at a manageable computational cost. Excellent results have been obtained, outperforming other state-of-the-art approaches.
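The detection stage described above, an LBP feature vector scored by a binary SVM, can be sketched with off-the-shelf tools. The following is a minimal illustration assuming scikit-image and scikit-learn; the patch size, LBP parameters, and randomly generated training patches are placeholders, not the paper's actual configuration or data.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(patch, n_points=8, radius=1):
    """Normalized uniform-LBP histogram of a grayscale patch."""
    lbp = local_binary_pattern(patch, n_points, radius, method="uniform")
    n_bins = n_points + 2   # uniform codes 0..P+1 (last bin = non-uniform)
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

# Placeholder training set: random patches standing in for labeled
# hand / non-hand examples from a real database.
rng = np.random.default_rng(1)
patches = [rng.integers(0, 256, (32, 32)).astype(np.uint8) for _ in range(40)]
labels = rng.integers(0, 2, size=40)   # 1 = hand pose, 0 = background

X = np.vstack([lbp_histogram(p) for p in patches])
detector = SVC(kernel="linear").fit(X, labels)

# At run time, each candidate window of a frame would be scored the same way:
print(detector.predict(X[:3]))
```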

Relevância:

100.00% 100.00%

Publicador:

Resumo:

The power transformer is an important piece of equipment in the electric power system, responsible for transmitting electric energy or electric power from one circuit to another and for transforming the voltages and currents of an electric circuit. The power transformer has wide application and can be used in generation, transmission, and distribution substations. In this regard, recent changes in the Brazilian electric system, caused mainly by the considerable increase in load and by technological development, have led to the manufacture of transformers employing high technology, increasing the reliability of this equipment and, in parallel, reducing its overall cost. Traditionally, transformers are manufactured with an insulation system that combines solid insulating materials and cellulose, both immersed in insulating mineral oil, a constitution that sets a limit on the continuous operating temperature. However, by replacing this insulation system of cellulose paper and insulating mineral oil with a semi-hybrid insulation system, using NOMEX paper and insulating vegetable oil, the load capacity of the transformer can be increased, since it withstands higher temperatures. In this way, the aging of the insulation system can, in the long term, be significantly reduced. This technique of raising the thermal limits of the transformer can essentially eliminate the thermal restrictions associated with cellulosic insulation, providing an economical solution for optimizing the use of power transformers and increasing their operational reliability. In addition, fiber-optic sensors, replacing thermal-image sensors in the monitoring of the transformer's internal temperatures, present an important option for characterizing the thermal behavior of the transformer.
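The load-capacity argument rests on the strongly exponential dependence of insulation aging on hot-spot temperature. As a hedged illustration, the sketch below uses the aging acceleration factor in the form given by IEEE Std C57.91 for thermally upgraded paper, F_AA = exp(15000/383 − 15000/(θ_H + 273)); the abstract itself names no standard or constants, so this choice of model is an assumption made here.

```python
import math

def aging_acceleration_factor(hot_spot_c, ref_c=110.0, b=15000.0):
    """Relative aging rate versus a reference hot-spot temperature.

    Form taken from IEEE Std C57.91 for thermally upgraded paper
    (an assumption here; the abstract does not specify the model):
    F_AA = exp(B / (ref + 273) - B / (hot_spot + 273)).
    """
    return math.exp(b / (ref_c + 273.0) - b / (hot_spot_c + 273.0))

for t_c in (98, 110, 120, 130):
    rate = aging_acceleration_factor(t_c)
    print(f"hot spot {t_c:3d} degC -> relative aging rate {rate:5.2f}")
```

Under this model, every increase of roughly 6–7 °C about doubles the aging rate; an insulation system rated for a higher hot-spot temperature, as the NOMEX/vegetable-oil combination is, effectively shifts the reference point upward, which is why the same loading ages the semi-hybrid system far more slowly.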

Relevância:

100.00% 100.00%

Publicador:

Resumo:

High-quality software, delivered on time and on budget, constitutes a critical part of most products and services in modern society. Our government has invested billions of dollars to develop software assets, often redeveloping the same capability many times. Recognizing the waste involved in redeveloping these assets, in 1992 the Department of Defense issued the Software Reuse Initiative. Its vision was "to drive the DoD software community from its current 're-invent the software' cycle to a process-driven, domain-specific, architecture-centric, library-based way of constructing software." Twenty years after the initiative was issued, there is evidence of this vision beginning to be realized in nonembedded systems. However, virtually every large embedded system undertaken has incurred large cost and schedule overruns, and investigations into the root cause of these overruns implicate reuse. Why are we seeing improvements in the outcomes of large-scale nonembedded systems and worse outcomes in embedded systems? This question is the foundation for this research. The experiences of the aerospace industry have led to a number of questions about reuse and how the industry employs it in embedded systems. For example, does reuse in embedded systems yield the same outcomes as in nonembedded systems? Are the outcomes positive? If the outcomes differ, it may indicate that embedded systems should not use data from nonembedded systems for estimation. Do embedded systems use the same development approaches as nonembedded systems? Does the development approach make a difference? If embedded systems develop software differently from nonembedded systems, the same processes may not apply to both types of systems. What about the reuse of different artifacts? Perhaps certain artifacts, when reused, contribute more, or are more difficult to use, in embedded systems. Finally, what are the success factors and obstacles to reuse? Are they the same in embedded systems as in nonembedded systems? The research in this dissertation comprises a series of empirical studies using professionals in the aerospace and defense industry as its subjects. The main focus has been to investigate the reuse practices of embedded-systems professionals and nonembedded-systems professionals and to compare the methods and artifacts used against the outcomes. The research followed a combined qualitative and quantitative design. The qualitative data were collected by surveying software and systems engineers, interviewing senior developers, and reading numerous documents and other studies. The quantitative data were derived by converting survey and interview respondents' answers into codes that could be counted and measured. From the search of the existing empirical literature, we learned that reuse in embedded systems is in fact significantly different from that in nonembedded systems, particularly in effort under a model-based development approach, and in quality where the development approach was not specified. The questionnaire showed differences between the development approaches used in embedded and nonembedded projects; in particular, embedded systems were significantly more likely to use a heritage/legacy development approach. There was also a difference in the artifacts used, with embedded systems more likely to reuse hardware, test products, and test clusters.
Nearly all the projects reported reusing code, but the questionnaire showed that the reuse of code brought mixed results. One of the difficulties expressed by the respondents to the questionnaire was the reuse of code in embedded systems when the platform changed. The semistructured interviews were performed to explain why the phenomena seen in the literature review and the questionnaire were observed. We asked respected industry professionals, such as senior fellows, fellows, and distinguished members of technical staff, about their experiences with reuse. We learned that many embedded systems used heritage/legacy development approaches because their systems had been around for many years, before models and modeling tools became available. We learned that reuse of code is beneficial primarily when the code does not require modification; especially in embedded systems, once it has to be changed, reuse of code yields few benefits. Finally, while platform independence is a goal for many in nonembedded systems, it is certainly not a goal for embedded-systems professionals, and in many cases it is a detriment. However, both embedded- and nonembedded-systems professionals endorsed the idea of platform standardization. We conclude that while reuse in embedded and nonembedded systems differs today, the two are converging: as heritage embedded systems are phased out, models become more robust, and platforms are standardized, reuse in embedded systems will become more like that in nonembedded systems.
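The quantitative step described above, turning coded questionnaire answers into counts and testing whether embedded and nonembedded respondents differ, is the kind of comparison a contingency-table test supports. The sketch below is illustrative only: the counts are invented placeholders, not the dissertation's data, and the use of SciPy's chi-square test is an assumption about one reasonable way to test such a difference.

```python
from scipy.stats import chi2_contingency

# Placeholder 2x2 table (invented counts, NOT the study's data):
# rows    = embedded, nonembedded respondents
# columns = used a heritage/legacy approach: yes, no
table = [[28, 12],
         [13, 27]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
# A small p-value would support the questionnaire finding that embedded
# projects are significantly more likely to use a heritage approach.
```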

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Traditional visual servoing systems have been widely studied in recent years. These systems control the position of a camera attached to the robot end-effector, guiding it from any position to the desired one. Such controllers can be improved by using the event-based control paradigm. The system proposed in this paper is based on the idea of activating the visual controller only when something significant has occurred in the system (e.g., when a visual feature may be lost because it is about to leave the frame). Different event triggers have been defined in the image space in order to activate or deactivate the visual controller. The tests implemented to validate the proposal have shown that this new scheme prevents visual features from leaving the image while considerably reducing system complexity. In the future, events can be used to change different parameters of the visual servoing system.
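One natural image-space trigger of the kind described, reactivating the controller when a tracked feature approaches the frame border, can be sketched in a few lines. This is a minimal illustration under assumed names and a placeholder 20-pixel margin; the paper's actual trigger conditions are not reproduced here.

```python
import numpy as np

def near_border_event(features, width, height, margin=20):
    """Fire an event when any tracked image feature enters a safety band
    of `margin` pixels around the frame border (margin is a placeholder).

    features: (N, 2) array of pixel coordinates (u, v).
    """
    u, v = features[:, 0], features[:, 1]
    return bool(np.any((u < margin) | (u > width - margin) |
                       (v < margin) | (v > height - margin)))

# Event-based scheme: the visual controller runs only while the event holds.
feats = np.array([[320.0, 240.0], [628.0, 100.0]])
controller_active = near_border_event(feats, width=640, height=480)
print(controller_active)   # True: second feature is 12 px from the right edge
```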

Relevância:

100.00% 100.00%

Publicador:

Resumo:

In this paper we present a complete system for the treatment of both the geographical and the temporal dimensions in text, and its application to information retrieval. The system has been evaluated in the GeoTime task of the 8th and 9th NTCIR workshops, in 2010 and 2011 respectively, making it possible to compare it with contemporary approaches to the topic. In order to participate in this task we added the temporal dimension to our GIR system. The system proposed here has a modular architecture, so that features can easily be added or modified. In developing it, we followed a QA-based approach and used multiple search engines to improve the system's performance.
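The modular idea, layering geographic and temporal evidence on top of the scores returned by the base search engines, can be illustrated with a small re-ranking sketch. All names, the document structure, and the linear fusion weights below are assumptions for illustration, not the paper's actual modules or formula.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Doc:
    doc_id: str
    text_score: float   # relevance score from the base search engine(s)
    places: set         # geographic entities recognized in the document
    dates: list         # dates extracted by the temporal module

def geotemporal_rank(docs, query_place, t_start, t_end, w_geo=0.3, w_time=0.3):
    """Re-rank base results with geographic and temporal evidence.
    The linear weighting is a placeholder fusion scheme."""
    def score(d):
        geo = 1.0 if query_place in d.places else 0.0
        time = 1.0 if any(t_start <= t <= t_end for t in d.dates) else 0.0
        return (1 - w_geo - w_time) * d.text_score + w_geo * geo + w_time * time
    return sorted(docs, key=score, reverse=True)

docs = [
    Doc("d1", 0.9, {"Madrid"}, [date(2009, 5, 1)]),
    Doc("d2", 0.6, {"Tokyo"}, [date(2010, 3, 14)]),
]
ranked = geotemporal_rank(docs, "Tokyo", date(2010, 1, 1), date(2010, 12, 31))
print([d.doc_id for d in ranked])   # d2 first: matching place and date window
```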