917 results for Sensors and actuators


Relevance: 80.00%

Abstract:

Performing activity recognition using the information provided by the different sensors embedded in a smartphone faces limitations, due to the capabilities of those devices, when the computations are carried out in the terminal. In this work, a fuzzy inference module is implemented in order to decide which classifier is the most appropriate to use at a specific moment, given the application requirements and the device context, the latter characterized by its battery level, available memory and CPU load. The set of classifiers considered is composed of decision tables and decision trees trained using different numbers of sensors and features. In addition, some classifiers perform activity recognition regardless of the on-body device position, while others rely on the previous recognition of that position in order to use a classifier trained with measurements gathered with the mobile placed in that specific position. The implemented modules show that an evaluation of the classifiers allows sorting them so that the fuzzy inference module can periodically choose the one that best suits the device context and the application requirements.
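As a rough illustration of the selection step (not the paper's implementation), the following Python sketch scores a set of hypothetical classifier profiles against the current device context using triangular fuzzy memberships and picks the most suitable one:

```python
# Illustrative sketch: picking the activity-recognition classifier whose
# resource demands best match the device context. Membership functions
# and classifier profiles below are hypothetical.

def tri(x, a, b, c):
    """Triangular fuzzy membership of x over the support (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical classifier profiles: estimated accuracy vs. relative CPU cost.
CLASSIFIERS = {
    "tree_all_sensors": {"accuracy": 0.92, "cpu_cost": 0.9},
    "tree_accel_only":  {"accuracy": 0.85, "cpu_cost": 0.4},
    "table_accel_only": {"accuracy": 0.80, "cpu_cost": 0.2},
}

def pick_classifier(battery, cpu_load, mem_free):
    """battery, cpu_load, mem_free in [0, 1]; returns the best classifier name."""
    # Degree to which the device is 'constrained': low battery, busy CPU, low memory.
    constrained = max(tri(battery, -1, 0, 0.5),
                      tri(cpu_load, 0.5, 1, 2),
                      tri(mem_free, -1, 0, 0.5))
    best, best_score = None, -1.0
    for name, prof in CLASSIFIERS.items():
        # Accurate classifiers are penalized by their cost when constrained.
        score = prof["accuracy"] - constrained * prof["cpu_cost"]
        if score > best_score:
            best, best_score = name, score
    return best

print(pick_classifier(battery=0.15, cpu_load=0.8, mem_free=0.3))
# -> 'table_accel_only' (the cheapest classifier wins on a constrained device)
```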

Relevance: 80.00%

Abstract:

Over the last ten years, Salamanca has been considered among the most polluted cities in México. This paper presents an application of Self-Organizing Maps (SOM) neural networks to classify pollution data and automate the determination of air pollution levels for sulphur dioxide (SO2) in Salamanca. Meteorological parameters are well known to be important factors in air quality estimation and prediction. In order to observe the behavior and clarify the influence of wind parameters on SO2 concentrations, a SOM neural network has been applied to data collected over a year. The main advantage of the SOM is that it allows data from different sensors to be integrated and provides readily interpretable results. In particular, it is a powerful mapping and classification tool that organizes the information in an accessible way and facilitates the task of establishing an order of priority among the distinguished groups of concentrations, depending on their need for further research or remediation actions in subsequent management steps. The results show a significant correlation between pollutant concentrations and some environmental variables.
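For readers unfamiliar with the technique, the sketch below shows a minimal self-organizing map training loop in Python/numpy; the feature layout (e.g. an SO2 concentration plus wind parameters per sample) is an assumption, not the paper's actual dataset:

```python
# Minimal SOM sketch (numpy only). Each row of `data` is assumed to hold
# hypothetical features such as [SO2, wind_speed, wind_direction].
import numpy as np

def train_som(data, grid=(8, 8), epochs=50, lr0=0.5, sigma0=3.0, seed=0):
    rng = np.random.default_rng(seed)
    n_rows, n_cols = grid
    w = rng.random((n_rows, n_cols, data.shape[1]))     # codebook vectors
    # Grid coordinates, used to compute neighborhood distances between nodes.
    yy, xx = np.mgrid[0:n_rows, 0:n_cols]
    coords = np.stack([yy, xx], axis=-1).astype(float)
    n_steps, t = epochs * len(data), 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            lr = lr0 * (1 - t / n_steps)                # decaying learning rate
            sigma = sigma0 * (1 - t / n_steps) + 1e-3   # shrinking radius
            # Best-matching unit: the node whose weights are closest to x.
            d = np.linalg.norm(w - x, axis=2)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # A Gaussian neighborhood pulls the BMU and nearby nodes toward x.
            g = np.exp(-np.sum((coords - bmu) ** 2, axis=2) / (2 * sigma ** 2))
            w += lr * g[..., None] * (x - w)
            t += 1
    return w

# Usage: samples = np.loadtxt("salamanca.csv", delimiter=","); train_som(samples)
```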

Relevance: 80.00%

Abstract:

The aim of this project is the creation of an economical MIDI controller that uses current technology and is based on the idea of a classical instrument, the Theremin, developed by Lev Serguéievich Termen. The project is divided into two main sections: hardware and software. The hardware section explains why the Arduino Uno board has been chosen, sets out its technical specifications and describes the use of ultrasonic sensors, which enable the user to interact with the controller through hand gestures, like a classical Theremin. The assembly of the devices that make up the controller is described, as well as the improvement achieved by using four of these sensors to offer more ways of interacting with the MIDI controller. This section also covers the programming of the Arduino board, which performs the measurements with the sensors and sends them through the USB serial port. The software section introduces the Max/MSP programming environment and the plug-in developed in this language to connect the MIDI controller with professional audio software (Ableton Live). The blocks that build the sensor-control plug-in are explained in detail, along with the way the information delivered by the Arduino through the USB port is transformed into MIDI data. In addition, an explanation of the correct handling of the controller is given, focusing on how the user should move his hands above the sensors and where to place the instrument to avoid interference problems with the ultrasonic signals. Finally, a cost estimate of both the materials and the engineering work is provided.
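The following Python sketch illustrates the kind of mapping the plug-in performs, turning a distance reading into a 7-bit MIDI value; the serial line format, sensing window and scaling are hypothetical and do not reproduce the project's actual Max/MSP patch:

```python
# Illustrative sketch: mapping an ultrasonic distance reading to a 7-bit
# MIDI value. The sensing window and the serial format are hypothetical.

MIN_CM, MAX_CM = 5.0, 50.0   # assumed usable sensing window in centimetres

def distance_to_midi(distance_cm):
    """Clamp the distance into the window and scale it to 0..127."""
    d = min(max(distance_cm, MIN_CM), MAX_CM)
    return round((d - MIN_CM) / (MAX_CM - MIN_CM) * 127)

def parse_serial_line(line):
    """Assumes the board sends lines like 'S1:23.4' (sensor id, distance in cm)."""
    sensor, _, value = line.strip().partition(":")
    return sensor, distance_to_midi(float(value))

print(parse_serial_line("S1:23.4"))  # -> ('S1', 52)
```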

Relevance: 80.00%

Abstract:

In this project, noise analysis techniques are applied to characterize the dynamic response of several temperature sensors, both platinum resistance thermometers (RTDs) and thermocouples. These sensors are essential for the proper operation of nuclear power plants and need to be monitored to guarantee accurate measurements. Noise analysis techniques are passive, i.e. they do not affect plant operation and allow in situ monitoring of the sensors. Since temperature sensors can be treated as first-order systems, the main parameter to monitor is the response time, which can be obtained for each probe by means of techniques in the frequency domain (spectral analysis) or in the time domain (autoregressive models). Besides response time estimation, a statistical characterization of the probes is carried out. The goal is to understand the behavior of the sensors and monitor them so that faults can be diagnosed even at an incipient stage.
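As a minimal illustration of the time-domain approach, the sketch below estimates the time constant of a first-order sensor from its noise record by fitting an AR(1) model; the sampling rate and the simulated signal are assumptions, not plant data:

```python
# For a first-order system driven by process noise and sampled every dt
# seconds, x[k] = phi * x[k-1] + e[k] with phi = exp(-dt/tau), hence
# tau = -dt / ln(phi). We estimate phi by least squares from the record.
import numpy as np

def response_time(signal, dt):
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                       # remove the operating point
    # Least-squares AR(1) coefficient (lag-1 autocorrelation estimate).
    phi = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])
    if not 0 < phi < 1:
        raise ValueError("signal is not consistent with a first-order lag")
    return -dt / np.log(phi)

# Synthetic check: simulate a probe with tau = 2.0 s sampled at 10 Hz.
rng = np.random.default_rng(1)
dt, tau, n = 0.1, 2.0, 20000
x = np.zeros(n)
for k in range(1, n):
    x[k] = np.exp(-dt / tau) * x[k - 1] + rng.normal()
print(f"estimated tau = {response_time(x, dt):.2f} s")  # close to 2.0
```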

Relevance: 80.00%

Abstract:

This project develops an electronic system to vary the intake geometry of the engine of a single-seater that competes in Formula SAE. Formula SAE is a car design competition for students, organized by the Society of Automotive Engineers (SAE). The competition seeks technological innovation in the automotive field and engages students in a real project whose objective is to obtain competitive results while complying with a set of requirements. Varying the intake geometry improves the car's performance by raising the engine's output torque, and in a competition any improvement of the vehicle can be decisive in its outcome. The goal of the project is to perform this variation by controlling the length of the air intake pipes, or "runners", of the combustion engine using a linear stepper motor. The length of the runners is controlled from the information gathered by the engine speed sensor and the throttle position sensor, and the system is integrated into the vehicle's CAN bus so that the measured information is shared with the other modules. A preliminary study clarifies the general aspects of the work, the implementation options and the background needed for its development. The proposed solution is based on controlling the linear stepper motor with the PIC32MX795F512L, a 32-bit microcontroller from Microchip that provides an integrated CAN module and the peripherals used to read the sensors and to drive the stepper motor through the Texas Instruments DRV8805 driver. The work therefore proceeds along two lines: a software part, programming the control system with Microchip's MPLAB X IDE, and a hardware part, designing a PCB and the conditioning circuits that connect the microcontroller to the sensors, the driver, the stepper motor and the CAN bus. The PCB is designed with OrCAD 9.2/Layout. To evaluate the sensor measurements and check the CAN bus, the Microchip MCP2515 CAN Bus Monitor Demo Board development kit is used, which allows the information on the CAN bus to be inspected and new frames to be injected into it.
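The sketch below illustrates the kind of control law involved, interpolating a target runner length from engine speed and throttle position; the lookup table and blending rule are hypothetical and do not reproduce the project's firmware, which is programmed for the PIC32 with MPLAB X:

```python
# Illustrative sketch: choosing a runner length from engine speed and
# throttle position via a lookup table with linear interpolation.
# All table values are hypothetical.
import bisect

RPM_POINTS = [2000, 4000, 6000, 8000, 10000]
LENGTH_MM  = [320,  280,  240,  200,  170]   # longer runners favour low-rpm torque

def runner_length(rpm, throttle):
    """Interpolated target length (mm); throttle in [0, 1] biases toward short runners."""
    rpm = min(max(rpm, RPM_POINTS[0]), RPM_POINTS[-1])
    i = max(bisect.bisect_right(RPM_POINTS, rpm) - 1, 0)
    i = min(i, len(RPM_POINTS) - 2)
    f = (rpm - RPM_POINTS[i]) / (RPM_POINTS[i + 1] - RPM_POINTS[i])
    base = LENGTH_MM[i] + f * (LENGTH_MM[i + 1] - LENGTH_MM[i])
    # At wide-open throttle, bias slightly toward the shorter high-rpm setting.
    return base - 10.0 * throttle

print(runner_length(rpm=5000, throttle=0.7))  # -> 253.0 mm
```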

Relevance: 80.00%

Abstract:

The nonlinear optical properties of many materials and devices have been a major object of research as potential candidates for sensing in different settings. In most cases, just one of these properties has been the basis for the sensing operation, and as a consequence just one parameter can be detected. In this paper, although a single property is again employed, we show the possibility of sensing different parameters with just one type of sensor. The approach adopted in this work is the use of the optical bistability obtained from different photonic structures. Because this optical bistability depends strongly on many different parameters, the possibility of sensing different inputs appears. In our case, we report the use of some nonlinear optical devices, mainly semiconductor optical amplifiers, as sensing elements. Because their outputs depend on many parameters, such as the incident light wavelength, polarization, intensity and direction, the applied voltage and the feedback characteristics, they can be employed to detect different types of signals at the same time. This is possible because the ways these different signals affect the sensor response are also very different, and each appears under a different set of characteristics.

Relevance: 80.00%

Abstract:

Sensing systems in living bodies offer a large variety of different configurations and philosophies that can be emulated in artificial sensing systems. Motion detection is one of the areas where different animals adopt different solutions and, in most cases, these solutions take a very sophisticated form. One of them, the mammalian visual system, presents several advantages with respect to artificial ones. The main objective of this paper is to present a system, based on this biological structure, able to detect motion, its direction and its characteristics. The configuration adopted mirrors the internal structure of the mammalian retina, where just five types of cells arranged in five layers are able to differentiate a large number of characteristics of the image impinging on it. Its main advantage is that the detection of these properties is based purely on hardware. A simple unit, based on an optical logic cell previously employed in optical computing, is the basis for emulating the different behaviors of the biological neurons. No software is present and, in this way, no outside interference can affect the final behavior. Once the internal configuration is implemented, this type of structure is able to work without any further attention. The architecture presented offers several possibilities: detection of motion, of its direction and of its intensity. Moreover, some other characteristics, such as symmetry, may be obtained.

Relevance: 80.00%

Abstract:

The aim of this doctoral thesis is to develop a methodology for the automatic detection of anomalies from hyperspectral data, or imaging spectrometry, and for mapping them under different surface and terrain conditions. Hyperspectral technology offers the potential to characterize precisely the state of the materials that make up the various surfaces on the basis of their spectral response. This state is usually variable, whereas observations are available only in limited numbers and under particular illumination conditions. As the number of spectral bands grows, so does the number of samples needed to define the classes spectrally, in what is known as the curse of dimensionality or Hughes effect (Bellman, 1957); such samples are usually unavailable and costly to obtain, as a moment's thought about planetary exploration makes clear. Defining an anomaly, in its spectral sense, as the response of an image pixel that differs significantly from its surroundings, the central problem addressed in this thesis is, first, how to reduce the dimensionality of hyperspectral data while retaining the information that is most significant for detecting anomalous responses and, second, how to establish the relationship between the detected spectral anomalies and what we have called informational anomalies, i.e. anomalies that convey real information about the surfaces or materials that produce them. Anomaly detection assumes no prior knowledge of the targets, so pixels are separated automatically according to spectral information that differs significantly from a background estimated either globally, for the whole scene, or locally, by image segmentation. The methodology developed centers on the statistical definition of the spectral background, proposing a new approach that discriminates anomalies against backgrounds segmented into different groups of wavelengths and that exploits the potential separation between the reflective and emissive parts of the electromagnetic spectrum. The efficiency of the main anomaly detection algorithms has been studied, contrasting the results of the RX algorithm (Reed and Xiaoli, 1990), adopted as a standard by the scientific community, with the UTD (Uniform Targets Detector) method, its variant RXD-UTD, subspace-based methods such as SSRX (Subspace RX), and methods based on image subspace projections, such as OSPRX (Orthogonal Subspace Projection RX) and PP (Projection Pursuit). A new method, evaluated against and contrasted with the previous ones, has been developed as a variation of PP; it describes the spectral background through discriminant analysis of bands of the electromagnetic spectrum and separates the anomalies with an algorithm called DAFT (thermal-background anomaly detector), applicable to sensors that record data in the emissive spectrum. The different anomaly detection methods have been evaluated in the visible and near infrared (VNIR), shortwave infrared (SWIR), mid infrared (MIR) and thermal infrared (TIR) ranges of the electromagnetic spectrum.

The response of surfaces at the different wavelengths of the electromagnetic spectrum, together with their surroundings, influences the type and frequency of the spectral anomalies they may produce. For this reason, the research has used hyperspectral data cubes from airborne sensors that differ in their image spectrometry strategy and design: test data sets from AHS (Airborne Hyperspectral System), the HyMap Imaging Spectrometer, CASI (Compact Airborne Spectrographic Imager), AVIRIS (Airborne Visible Infrared Imaging Spectrometer), HYDICE (Hyperspectral Digital Imagery Collection Experiment) and MASTER (MODIS/ASTER Simulator) have been evaluated. Experiments were designed over natural, urban and semi-urban settings of different complexity, and the behavior of the anomaly detectors was evaluated through 23 tests corresponding to 15 study areas grouped into 6 scenarios: urban (E1), semi-urban/industrial/urban periphery (E2), forest (E3), agricultural (E4), geological/volcanic (E5), and other spaces such as water, clouds and shadows (E6). The sensors evaluated record images in a wide range of narrow, contiguous bands, and the thesis has focused on techniques that automatically separate and extract pixels, or groups of pixels, whose spectral signature differs discriminantly from those around them, taking as the sample space part or all of the spectral bands in which the hyperspectral sensor has recorded radiance. The measuring instrument itself has also been a factor in the research, i.e. the characterization of the different subsystems, imaging and auxiliary sensors, that take part in the process. In order to use the measured data quantitatively, it was necessary to define the spatial and spectral relationships between the sensor, the observed surface, and the potential anomalies and target detection patterns. The influence of the sensor type on anomaly detection has been analyzed, both in its spectral configuration and in the design strategies used to record the radiation coming from the surfaces, the two main types studied being whiskbroom (rotating-mirror) scanners and pushbroom scanners. The scenarios defined in the research cover a wide variety of geomorphological environments and cover types, in Mediterranean, mid-latitude and tropical settings.

Since anomaly detection is an unsupervised target detection technique that assumes no prior knowledge about the target or the statistical characteristics of the data, the only available option is to look for objects that are differentiated from the background. In supervised target detection, by contrast, algorithms rely on prior knowledge such as a spectral signature, and the matching process is not straightforward because of the difficulty of relating airborne sensor data to material spectra on the ground, the large number of possible objects of interest, and the uncertainty in the reflectance or emissivity of those objects and surfaces. An important objective of this research is therefore to establish relationships that link spectral anomalies with informational anomalies, identifying information related to anomalous responses rather than simply spotting differences from the background; this also matters for map updating, which is important for management and decision making because of the fast changes that usually occur in natural, urban and semi-urban areas.

In summary, this thesis presents an anomaly detection technique for hyperspectral data called DAFT, a variant of PP, based on dimensionality reduction by projecting the background onto a range of thermal wavelengths distinct from the projection of the anomalies or targets with unknown spectral signature. The proposed methodology has been tested with real hyperspectral images from different sensors and in different scenarios, and therefore against different spectral backgrounds; the results show the benefits of the approach in detecting a wide variety of objects whose spectral signatures deviate sufficiently from the background. The technique is automatic in the sense that no parameter tuning is needed, and it gives significant results in all cases. Even subpixel objects, which cannot be distinguished by the human eye in the original image, can be detected as anomalies. In addition, the proposed approach has been compared with the popular RX technique and other detectors, in both global and local modes, and it outperforms them in certain scenarios, demonstrating its capacity to reduce the false alarm rate. The results of the automatic DAFT algorithm show an improvement in the qualitative definition of the spectral anomalies that identify distinct entities on or below the surface, such as buried or semi-buried materials, or building covers of asbestos, cellular polycarbonate-PVC or metal composites, achieved by replacing the classical normal distribution model with a robust method that considers different alternatives from the moment of hyperspectral data acquisition. Achieving this required analyzing the relationship between biophysical parameters, such as the reflectance and emissivity of the materials, and the spatial distribution of the detected entities with respect to their surroundings. Finally, DAFT was chosen as the most suitable algorithm for sensors acquiring data in the TIR, since it shows the best agreement with the reference data and great computational efficiency, which facilitates its implementation in a mapping system that automatically projects the detected anomalies onto a geographic reference frame, a significant step toward what is called real-time mapping.
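For reference, the following numpy sketch implements the standard global RX detector that the thesis uses as its baseline; the synthetic cube and the implanted anomaly are illustrative only:

```python
# Minimal global RX anomaly detector (Reed & Xiaoli, 1990) for a
# hyperspectral cube of shape (rows, cols, bands); numpy only.
import numpy as np

def rx_scores(cube):
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    mu = X.mean(axis=0)                 # global background mean spectrum
    cov = np.cov(X, rowvar=False)       # background covariance (bands x bands)
    cov_inv = np.linalg.pinv(cov)       # pseudo-inverse for numerical stability
    diff = X - mu
    # Mahalanobis distance of each pixel's spectrum to the background.
    scores = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)
    return scores.reshape(rows, cols)

# Synthetic check: one implanted anomaly should receive the top score.
rng = np.random.default_rng(0)
cube = rng.normal(size=(64, 64, 20))
cube[10, 10] += 5.0                     # anomalous spectral signature
scores = rx_scores(cube)
print(np.unravel_index(np.argmax(scores), scores.shape))  # -> (10, 10)
```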

Relevance: 80.00%

Abstract:

In connection with the detection of weak magnetic fields, the anisotropic magnetoresistive (AMR) effect is widely used in sensor applications. Exchange coupling between an antiferromagnet (AF) and a ferromagnet (FM) is known to be a significant parameter in the field sensitivity of the magnetoresistance, because of the pinning effect that the bias field in the AF exerts on the magnetic domains in the FM layer. In this work we have studied the thermal evolution of the magnetization reversal processes in nanocrystalline exchange-biased Ni80Fe20/Ni-O bilayers with large training effects, and we report the anisotropic magnetoresistance ratio arising from field orientation in the bilayer.

Relevance: 80.00%

Abstract:

This paper describes a knowledge-based approach for summarizing and presenting the behavior of hydrologic networks, designed for visualizing data from sensors and simulations in the context of emergencies caused by floods. It follows a solution for event summarization that exploits physical properties of the dynamic system to automatically generate summaries of relevant data. The summarized information is presented in different modes such as text, 2D graphics and 3D animations on virtual terrains. The presentation is automatically generated using a hierarchical planner with abstract presentation fragments corresponding to discourse patterns, taking into account the characteristics of the user who receives the information and the constraints imposed by the communication devices (mobile phone, computer, fax, etc.). An application following this approach has been developed for Spain's national hydrologic information infrastructure.
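A toy illustration of the summarization idea (not the paper's hierarchical planner) is sketched below: a gauge time series is reduced to one sentence chosen from simple discourse patterns. The station name and the alert threshold are hypothetical:

```python
# Illustrative sketch: summarizing one gauge's readings into a single
# sentence selected from simple discourse patterns.

ALERT_LEVEL_M = 4.0   # hypothetical alert threshold

def summarize(station, levels_m):
    """Return a one-sentence summary for one gauge station."""
    latest, peak = levels_m[-1], max(levels_m)
    if peak >= ALERT_LEVEL_M:
        return (f"ALERT: {station} peaked at {peak:.1f} m, "
                f"above the {ALERT_LEVEL_M:.1f} m alert level.")
    if latest > levels_m[0]:
        return f"{station} is rising; the latest reading is {latest:.1f} m."
    return f"{station} shows no significant change (latest {latest:.1f} m)."

print(summarize("Ebro at Zaragoza", [2.1, 2.4, 3.0, 3.6]))
# -> 'Ebro at Zaragoza is rising; the latest reading is 3.6 m.'
```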

Relevance: 80.00%

Abstract:

The cup anemometer has been used widely by the wind energy industry since its early days, covering two fundamental needs: wind turbine performance control and wind energy production forecasting. Despite modern technological advances such as LIDAR and SODAR, the cup anemometer clearly remains the instrument most used by the wind energy industry. Together with the major advantages of this instrument (precision, robustness), some issues must be taken into account by scientists and researchers when using it: overspeeding, interaction with stream wakes due to mounting on masts and wind turbines, loss of performance due to wear and tear, changes of performance under different climatic conditions, checking of maintenance status, recalibration, etc. The present work reviews the research campaigns carried out at the IDR/UPM Institute to analyze cup anemometer performance. Several aspects of this instrument are examined: the calibration process, the loss of performance due to aging and wear and tear, the effect of changes in air density, the rotor aerodynamics, and the harmonic terms contained in the anemometer output signal and their possible relation to anemometer performance.
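As an illustration of the harmonic analysis mentioned above, the following numpy sketch extracts the dominant spectral components of a simulated anemometer output signal; the signal model and frequencies are assumptions, not the campaign data:

```python
# Illustrative sketch: extracting harmonic terms from an anemometer output
# signal with an FFT. A 3-cup rotor tends to leave a component at three
# times the rotation frequency, which the synthetic signal mimics.
import numpy as np

def harmonics(signal, fs, n=3):
    """Return (frequency_Hz, amplitude) of the n largest spectral peaks."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()
    amp = np.abs(np.fft.rfft(x)) * 2 / len(x)
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    top = np.argsort(amp)[-n:][::-1]
    return list(zip(freqs[top], amp[top]))

# Synthetic rotor signal: 10 Hz rotation plus a 3rd-harmonic ripple.
fs = 1000.0
t = np.arange(0, 2, 1 / fs)
sig = 1.0 * np.sin(2 * np.pi * 10 * t) + 0.3 * np.sin(2 * np.pi * 30 * t)
print(harmonics(sig, fs, n=2))  # peaks near 10 Hz (~1.0) and 30 Hz (~0.3)
```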

Relevance: 80.00%

Abstract:

This report presents the development of a project to create a system deployed in urban green spaces. Starting from a market study of Smart Cities, a system is proposed that improves the sustainability, efficiency and safety of irrigation systems, air quality meters and weather stations installed in the green spaces and gardens of cities. It serves as a demonstrator of how to improve the efficiency of water and energy consumption, and adds the ability to monitor in real time the information received from the sensors installed in these spaces. The purpose of the project is to create a demonstrator that serves as an example and inspiration for future projects based on the same tools, which are provided by Telefónica. The demonstrator covers the control, management and maintenance of these spaces, so that the devices and sensors installed at each location can be controlled and maintained, and the information they supply monitored through the M2M platform, using the sensor simulators that Telefónica offers for demonstrations. The application communicates with the elements provided by Telefónica using REST and web technologies. The stored sensor information is presented through reports and graphs that show the readings of the available measurement parameters, such as water consumption, temperature, humidity and air quality. In addition, there is a space for locating and managing the available devices, with the possibility of creating alerts and warnings according to the user's requirements. The demonstrator has two components developed by the student. The first is a web platform for managing in real time the devices installed in these green spaces and their location, for producing reports and graphs that show the information received from the sensors, and for managing rules that check limit values set by the user. The second is a REST service, accessed from the web platform, that communicates with the services provided by Telefónica, identifying the service provider to Telefónica through a private certificate. This component also handles user management, through an authentication service that restricts the resources exposed by the REST service to certain users.
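The sketch below illustrates, in a few lines of Python (Flask, chosen here purely for illustration), the shape of the second component: a REST endpoint that serves sensor readings only to authenticated callers. The routes, tokens and data are hypothetical and do not reflect Telefónica's actual M2M API:

```python
# Illustrative sketch of a token-protected REST endpoint for sensor data.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)
VALID_TOKENS = {"secret-demo-token"}                    # stand-in for real auth
READINGS = [{"sensor": "humidity-01", "value": 41.2}]   # stand-in data store

def require_token():
    """Reject the request unless it carries a known bearer token."""
    token = request.headers.get("Authorization", "").removeprefix("Bearer ")
    if token not in VALID_TOKENS:
        abort(401)

@app.route("/api/readings")
def readings():
    require_token()          # unauthenticated callers get HTTP 401
    return jsonify(READINGS)

if __name__ == "__main__":
    app.run(port=8080)
# Usage: curl -H "Authorization: Bearer secret-demo-token" localhost:8080/api/readings
```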

Relevance: 80.00%

Abstract:

Over the last few decades, technology has developed at a dizzying pace in very different areas. Its spread to every aspect of our daily lives seems almost inevitable, and consumer electronics have invaded our homes. Nevertheless, home automation has not reached the degree of integration that could be expected barely a decade ago. Autonomous devices with a certain degree of intelligence are making their way independently, but the digital home, as a system able to encompass and automate large sets of elements of a house (energy management, security, comfort, etc.), has not managed to reach the average home. This lack of integration is not due to an absence of technology; numerous studies and projects have emerged in this direction. However, it is only in the last few years that institutions and large companies have begun to take real interest in this field. It seems we are about to experience another change in our way of life, specifically in the way we interact with our home and with the comfort and information it can provide. This final degree project is developed as part of this trend, with the goal of providing a new approach to integrating the different devices of the digital home with artificial intelligence and, more importantly, to the way the user interacts with his home. More specifically, it aims to develop a system able to make decisions according to the context and the user's preferences. Through the use of several technologies, the digital home is given a certain autonomy in deciding which actions it should carry out on the devices it contains, by interpreting orders from the user (expressed colloquially) and studying the context at the moment of execution. For the interaction between the user and the digital home, a mobile application is developed through which the user can express, conversationally, the orders he wants to give to the system, which takes part in the conversation and carries out the appropriate actions. For all this, the system mainly uses ontologies, semantic analysis, Bayesian networks, UPnP and Android. Information from the user, the sensors and external sources is combined to determine, through these technologies, which operation should be performed to satisfy the user's needs. In short, the final goal of this project is to design and implement an innovative system that departs from the current interaction model of buttons, menus and forms to which we are so accustomed, and that allows the user, in a way, to talk to his home and express his needs, making technology a little more transparent and closer to people, and bringing us a little closer to the concept of the smart home we imagined at the end of the twentieth century.
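As a toy illustration of the decision step (not the project's actual Bayesian network), the sketch below scores candidate home actions against evidence extracted from an interpreted order and the context; all actions, evidence variables and probabilities are hypothetical:

```python
# Illustrative naive-Bayes-style sketch: combine a prior over actions with
# the likelihood of the observed cues under each action.

# P(cue | action): how likely each observation is if that action is wanted.
LIKELIHOOD = {
    "turn_on_heating": {"word_cold": 0.8, "temp_low": 0.7, "is_night": 0.5},
    "dim_lights":      {"word_cold": 0.1, "temp_low": 0.3, "is_night": 0.8},
}
PRIOR = {"turn_on_heating": 0.4, "dim_lights": 0.6}

def best_action(evidence):
    """Return the most probable action and the normalized posterior."""
    scores = {}
    for action, prior in PRIOR.items():
        p = prior
        for cue in evidence:
            p *= LIKELIHOOD[action].get(cue, 0.05)  # small default for unseen cues
        scores[action] = p
    total = sum(scores.values())
    return max(scores, key=scores.get), {a: s / total for a, s in scores.items()}

# "I'm cold", uttered at night with a low thermostat reading:
print(best_action({"word_cold", "temp_low", "is_night"}))
# -> ('turn_on_heating', {...})
```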

Relevance: 80.00%

Abstract:

Crowd-induced dynamic loading in large structures, such as gymnasiums or stadiums, is usually modelled as a series of harmonic loads defined in terms of their Fourier coefficients. Values of these Fourier coefficients obtained from full-scale measurements can be found in design codes. Recently, an alternative has been proposed, based on the random generation of load time histories that take into account the phase lags among the individuals in the crowd. Generally, such testing is performed on platforms or structures that can be considered rigid because their natural frequencies are higher than the excitation frequencies associated with crowd loading. In this paper we present tests carried out on a structure designed as a gymnasium, which has natural frequencies within that range. In these tests the gym slab was instrumented with acceleration sensors and different people jumped on a force plate installed on the floor. The test results have been compared with predictions based on the two above-mentioned load modelling alternatives, and a new methodology for modelling jumping loads is proposed in order to reduce the difference between experimental and numerical results in the high-frequency range.
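The harmonic load model mentioned above can be written as F(t) = G(1 + Σᵢ αᵢ sin(2πift + φᵢ)), with G the jumper's weight and f the jumping rate. The sketch below evaluates it with placeholder coefficients; the paper's measured values are not reproduced here:

```python
# Illustrative sketch of the harmonic jumping-load model. The Fourier
# coefficients (alphas) and phase lags (phis) below are placeholders.
import numpy as np

def jumping_load(t, G=700.0, f=2.0,
                 alphas=(1.6, 1.0, 0.2), phis=(0.0, -0.5, -1.0)):
    """Time history of a single jumper's force in newtons; t in seconds."""
    F = np.ones_like(t)
    for i, (a, phi) in enumerate(zip(alphas, phis), start=1):
        F += a * np.sin(2 * np.pi * i * f * t + phi)  # i-th harmonic of rate f
    return G * F

t = np.linspace(0.0, 2.0, 2000)
F = jumping_load(t)
print(F.max(), F.min())  # peak force and minimum (negative values signal contact loss)
```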

Relevance: 80.00%

Abstract:

The rapid adoption of automotive electronics has greatly improved driving safety and comfort. Since the beginning of the 20th century, research into active safety systems has resulted in the development of technologies such as ABS (Antilock Brake System), TCS (Traction Control System) and ESP (Electronic Stability Program). The deployment cost of these systems is critical: historically, they have only been widely adopted once the price of the sensors and electronics needed to build them fell to a marginal value. Nowadays, motor vehicles include a wide range of sensors to implement safety functions. The incorporation of systems capable of detecting the presence of water, ice or snow on the road is an additional factor that could help avoid risky situations. Some practical implementations capable of detecting wet, icy and snowy roads exist, although with important limitations. In this PhD thesis, a novel approach is proposed, based on the analysis of the tyre/road noise generated during driving. The tyre/road noise is captured and pre-processed, and then analyzed using a support vector machine (SVM) classifier to produce an estimate of the road status. All these operations are performed on board the vehicle. The proposed system has been developed and evaluated using Matlab, showing success rates of over 90%. A real-time implementation has been carried out on a DSP-based prototype. Several optimizations were then introduced to allow the system to run on a low-cost general-purpose microcontroller. Finally, a microcontroller-based hardware implementation was developed and tightly integrated with the vehicle's ECUs, allowing it to obtain data captured by the vehicle's sensors and to send the road status estimates. The resulting system has been patented, and is notable for its high hit rate together with its small size, low power consumption and low cost.
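The following scikit-learn sketch illustrates the classification stage on synthetic data: band energies of a noise frame feed an SVM that labels the road status. The features, labels and signal model are simulated, not the thesis's real recordings or feature set:

```python
# Illustrative sketch: SVM road-status classification from log band
# energies of simulated tyre/road noise frames.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def band_energies(frame, n_bands=8):
    """Log energy of the frame in n_bands equal-width frequency bands."""
    spec = np.abs(np.fft.rfft(frame)) ** 2
    bands = np.array_split(spec, n_bands)
    return np.log([b.sum() + 1e-12 for b in bands])

# Simulate labelled noise frames: 'wet' roads get extra high-frequency hiss.
rng = np.random.default_rng(0)
n = 1024
X, y = [], []
for label, hf_gain in (("dry", 0.2), ("wet", 1.0)):
    for _ in range(200):
        frame = 0.3 * rng.normal(size=n)
        # [1, -1] is a first-difference (high-pass) filter shaping the hiss.
        frame += hf_gain * np.convolve(rng.normal(size=n), [1, -1], "same")
        X.append(band_energies(frame))
        y.append(label)

Xtr, Xte, ytr, yte = train_test_split(np.array(X), y, random_state=0)
clf = SVC(kernel="rbf").fit(Xtr, ytr)
print("accuracy:", clf.score(Xte, yte))
```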