13 results for Interior point algorithm

at Universidad Politécnica de Madrid


Relevance:

80.00%

Abstract:

Resource Analysis (a.k.a. Cost Analysis) tries to approximate the cost of executing a program as a function of its input data sizes, without actually having to execute the program. While powerful resource analysis frameworks for object-oriented programs existed before this thesis, advanced aspects affecting the efficiency, the accuracy and the reliability of the analysis results still needed further investigation. This thesis tackles that need from the following four perspectives.

(1) Shared mutable data structures are the bane of formal reasoning and static analysis. Analyses which keep track of heap-allocated data are referred to as heap-sensitive. Recent work proposes locality conditions for soundly tracking field accesses by means of ghost non-heap-allocated variables. In this thesis we present two extensions to this approach: the first is to consider array accesses (in addition to object fields), while the second focuses on handling cases for which the locality conditions cannot be proven unconditionally, by finding aliasing preconditions under which tracking such heap locations is feasible.

(2) The aim of incremental analysis is, given a program, its analysis results and a series of changes to the program, to obtain the new analysis results as efficiently as possible and, ideally, without having to re-analyze fragments of code that are not affected by the changes. During software development, programs are modified constantly, yet most analyzers still read and analyze the entire program at once in a non-incremental way. This thesis presents an incremental resource usage analysis which, after a change to the program is made, is able to reconstruct the upper bounds of all affected methods incrementally. To this purpose, we propose (i) a multi-domain incremental fixed-point algorithm which can be used by all the global analyses required to infer the cost, and (ii) a novel form of cost summaries that allows us to incrementally reconstruct only those components of cost functions affected by the change.

(3) Resource guarantees that are automatically inferred by static analysis tools are generally not considered completely trustworthy unless the tool implementation or its results are formally verified. Performing full-blown verification of such tools is a daunting task, since they are large and complex. In this thesis we focus on developing a formal framework for verifying the resource guarantees obtained by the analyzers, instead of verifying the tools themselves. We have implemented this idea using COSTA, a state-of-the-art cost analyzer for Java programs, and KeY, a state-of-the-art verification tool for Java source code. COSTA derives upper bounds for Java programs while KeY proves the validity of these bounds and provides a certificate. The main contribution of our work is to show that the proposed tool cooperation can be used to automatically produce verified resource guarantees.

(4) Distribution and concurrency are mainstream today. Concurrent objects form a well-established model for distributed concurrent systems. In this model, objects are the concurrency units, and they communicate via asynchronous method calls. Distribution suggests that the analysis must infer the cost of the diverse distributed components separately. In this thesis we propose a novel object-sensitive cost analysis which, by using the results gathered by a points-to analysis, keeps the cost of the diverse distributed components separate.
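The multi-domain incremental fixed-point algorithm of point (2) is only described abstractly here. The sketch below shows the general shape of such an algorithm: a worklist that re-analyzes only the changed methods and their transitive callers. It is an illustration of the idea, not COSTA's implementation, and all names (`callers`, `analyze`, `summaries`) are hypothetical.

```python
# Minimal sketch of an incremental fixed-point over a call graph: after a
# change, only the edited methods and (transitively) their callers whose
# summaries actually change are re-analyzed.
from collections import deque

def incremental_fixpoint(callers, summaries, analyze, changed):
    """callers: method -> set of methods that call it.
    summaries: method -> current cost summary (updated in place).
    analyze: function(method) -> new summary, reading callee summaries.
    changed: methods whose code was edited."""
    work = deque(changed)
    while work:
        m = work.popleft()
        new = analyze(m)
        if new != summaries.get(m):          # summary changed:
            summaries[m] = new
            work.extend(callers.get(m, ()))  # schedule the callers
    return summaries
```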

Relevance:

80.00%

Abstract:

The aim of this work is to provide the necessary methods to register and fuse endo-epicardial signal intensity (SI) maps extracted from contrast-enhanced magnetic resonance imaging (ceMRI) with X-ray coronary angiograms, using an intrinsic registration-based algorithm to help pre-planning and guidance of catheterization procedures. Fusion of angiograms with SI maps was treated as a 2D-3D pose estimation problem, where each image point is projected to a Plücker line, and the screw representation for rigid motions is minimized using a gradient descent method. The resulting transformation is applied to the SI map, which is then projected and fused onto each angiogram. The proposed method was tested on clinical datasets from 6 patients with prior myocardial infarction. The registration procedure is optionally combined with an iterative closest point (ICP) algorithm that aligns the ventricular contours segmented from two ventriculograms.
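As an illustration of the contour-alignment step mentioned at the end, the following is a minimal 2D rigid ICP in Python (nearest-neighbour matching plus a Kabsch/SVD pose update). It is a generic textbook sketch, not the authors' code.

```python
import numpy as np

def icp_rigid_2d(src, dst, iters=50):
    """Minimal 2D rigid ICP: align point set src (N,2) to dst (M,2)."""
    R, t = np.eye(2), np.zeros(2)
    for _ in range(iters):
        cur = src @ R.T + t
        # nearest-neighbour correspondences (brute force)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        match = dst[d2.argmin(axis=1)]
        # best rigid transform for these correspondences (Kabsch/SVD)
        mu_s, mu_m = cur.mean(0), match.mean(0)
        H = (cur - mu_s).T @ (match - mu_m)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, np.linalg.det(Vt.T @ U.T)])  # avoid reflections
        R_step = Vt.T @ D @ U.T
        R, t = R_step @ R, R_step @ t + (mu_m - R_step @ mu_s)
    return R, t
```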

Relevance:

80.00%

Abstract:

This is an account of some aspects of the geometry of Kähler affine metrics, based on considering them as smooth metric measure spaces and applying the comparison geometry of Bakry-Émery Ricci tensors. Such techniques yield a version for Kähler affine metrics of Yau's Schwarz lemma for volume forms. By a theorem of Cheng and Yau, there is a canonical Kähler affine Einstein metric on a proper convex domain, and the Schwarz lemma gives a direct proof of its uniqueness up to homothety. The potential for this metric is a function canonically associated to the cone, characterized by the property that its level sets are hyperbolic affine spheres foliating the cone. It is shown that for an n-dimensional cone, a rescaling of the canonical potential is an n-normal barrier function in the sense of interior point methods for conic programming. It is also explained how to construct from the canonical potential Monge-Ampère metrics of both Riemannian and Lorentzian signatures, and a mean curvature zero conical Lagrangian submanifold of the flat para-Kähler space.
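For readers coming from optimization, the n-normal barrier property referred to above can be recalled as follows. This is a hedged summary of the standard Nesterov-Nemirovskii definitions, with D^kF denoting k-th directional derivatives: a barrier F on the interior of a cone is n-normal if it is self-concordant, has barrier parameter n, and is logarithmically homogeneous,

```latex
\bigl|D^{3}F(x)[h,h,h]\bigr| \le 2\bigl(D^{2}F(x)[h,h]\bigr)^{3/2},
\qquad
\bigl(DF(x)[h]\bigr)^{2} \le n\, D^{2}F(x)[h,h],
\qquad
F(tx) = F(x) - n\log t \quad (t>0).
```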

Relevance:

40.00%

Abstract:

Vector reconstruction of objects from an unstructured point cloud obtained with a LiDAR-based system (light detection and ranging) is one of the most promising methods to build three-dimensional models of orchards. The cylinder fitting method for woody structure reconstruction of leafless trees from point clouds obtained with a mobile terrestrial laser scanner (MTLS) has been analysed. The advantage of this method is that it performs the reconstruction in a single step. The most time-consuming part of the algorithm is the generation of the cylinder direction, which must be recalculated each time a point is added to the cylinder. The tree skeleton is obtained at the same time as the cluster of cylinders is formed. The method does not guarantee a unique convergence, and the reconstruction parameter values must be chosen carefully. A balanced processing of clusters has also been defined, which has proven very efficient in terms of processing time by following the hierarchy of branches, predecessors and successors. The algorithm was applied to simulated MTLS data of virtual orchard models and to MTLS data of real orchards. The constraints applied in the method have been reviewed to ensure better convergence and simpler use of parameters. The results obtained show a correct reconstruction of the woody structure of the trees, and the algorithm runs in linear-logarithmic time.
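The per-point recalculation of the cylinder direction mentioned above is the expensive step. One standard way to keep it cheap, assuming the axis is taken as the dominant principal component of the points accepted so far, is a running (Welford-style) update of the scatter matrix; the sketch below is an illustration of that idea, not the paper's exact procedure.

```python
import numpy as np

class IncrementalAxis:
    """Running estimate of a cylinder's direction as the dominant principal
    component of the points accepted so far (cheap to update per point)."""
    def __init__(self):
        self.n = 0
        self.mean = np.zeros(3)
        self.S = np.zeros((3, 3))  # scatter matrix: sum (x-mean)(x-mean)^T

    def add(self, p):
        self.n += 1
        d = p - self.mean
        self.mean += d / self.n
        self.S += np.outer(d, p - self.mean)  # Welford-style update

    def direction(self):
        w, v = np.linalg.eigh(self.S)  # ascending eigenvalues
        return v[:, -1]                # eigenvector of the largest one
```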

Relevance:

40.00%

Abstract:

Wave energy conversion has an essential difference from other renewable energies: the dependence between the device design and the energy resource is stronger. Dimensioning is therefore considered a key stage in any Wave Energy Converter (WEC) design project. Location, WEC concept, Power Take-Off (PTO) type, control strategy and hydrodynamic resonance considerations are some of the critical aspects to take into account to achieve good performance. The paper proposes an automatic dimensioning methodology to be applied at the initial design stages, and the following elements are described to carry out the study: an optimization design algorithm, its objective functions and restrictions, a PTO model, and a procedure to evaluate the WEC energy production. A parametric analysis is then included, considering different combinations of the key parameters previously introduced. A variety of study cases are analysed from the point of view of energy production for different design parameters, and all of them are compared with a reference case. Finally, a discussion is presented based on the results obtained, and some recommendations for the WEC design stage are given.
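A minimal sketch of the kind of dimensioning loop the abstract describes is shown below: a constrained optimizer chooses the design parameters that maximize estimated energy production under a cost restriction. The objective, the cost model and the parameter names are invented placeholders, not the paper's PTO or hydrodynamic models.

```python
import numpy as np
from scipy.optimize import minimize

def energy_production(x):          # x = [diameter_m, draft_m], toy model
    d, h = x
    return 50.0 * d**1.5 * np.exp(-((h - 6.0) / 4.0) ** 2)  # toy resonance peak

def cost(x):                       # toy structural cost (k-euro)
    d, h = x
    return 120.0 * d**2 + 40.0 * h

budget = 4000.0
res = minimize(lambda x: -energy_production(x),   # maximize production
               x0=[8.0, 5.0],
               bounds=[(2.0, 20.0), (2.0, 12.0)],
               constraints=[{"type": "ineq", "fun": lambda x: budget - cost(x)}])
print(res.x, -res.fun)             # chosen dimensions, estimated production
```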

Relevance:

30.00%

Abstract:

This paper presents the Expectation-Maximization (EM) algorithm applied to operational modal analysis of structures. The EM algorithm is a general-purpose method for maximum likelihood estimation (MLE) that in this work is used to estimate state-space models. As is well known, the MLE enjoys some optimal statistical properties, which make it very attractive in practice. However, the EM algorithm has two main drawbacks: its slow convergence and the dependence of the solution on the initial values used. This paper proposes two strategies to choose initial values for the EM algorithm when used for operational modal analysis: starting from the parameters estimated by the Stochastic Subspace Identification (SSI) method, and starting from random points. The effectiveness of the proposed identification method has been evaluated through numerical simulation and measured vibration data in the context of a benchmark problem. Modal parameters (natural frequencies, damping ratios and mode shapes) of the benchmark structure have been estimated using SSI and the EM algorithm. On the whole, the results show that applying the EM algorithm starting from the SSI solution is very useful for identifying the vibration modes of a structure, discarding the spurious modes that appear in high-order models and discovering other hidden modes. Similar results are obtained using random starting values, although this strategy also allows the solutions from several starting points to be compared, which overcomes the dependence on the initial values used.
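To make the EM iteration concrete, below is a toy single EM update for a scalar linear-Gaussian state-space model (Kalman filter and RTS smoother for the E-step, closed-form M-step). It is a textbook illustration, not the paper's implementation; in the paper's setting the initial (a, c, Q, R) would come either from SSI or from random draws, and `em_step` would be iterated until the likelihood stabilizes, keeping the best run.

```python
# Model: x[t+1] = a x[t] + w, y[t] = c x[t] + v, w ~ N(0,Q), v ~ N(0,R).
import numpy as np

def em_step(y, a, c, Q, R, x0=0.0, P0=1.0):
    T = len(y)
    xp = np.zeros(T); Pp = np.zeros(T)          # predicted mean/variance
    xf = np.zeros(T); Pf = np.zeros(T)          # filtered mean/variance
    for t in range(T):                          # E-step: Kalman filter
        xp[t] = a * (xf[t - 1] if t else x0)
        Pp[t] = a * a * (Pf[t - 1] if t else P0) + Q
        K = Pp[t] * c / (c * c * Pp[t] + R)
        xf[t] = xp[t] + K * (y[t] - c * xp[t])
        Pf[t] = (1.0 - K * c) * Pp[t]
    xs = xf.copy(); Ps = Pf.copy()
    Pcross = np.zeros(T)                        # cov(x[t], x[t-1] | all data)
    for t in range(T - 2, -1, -1):              # E-step: RTS smoother
        J = Pf[t] * a / Pp[t + 1]
        xs[t] += J * (xs[t + 1] - xp[t + 1])
        Ps[t] += J * J * (Ps[t + 1] - Pp[t + 1])
        Pcross[t + 1] = J * Ps[t + 1]
    Ex2 = xs**2 + Ps                            # smoothed second moments
    Exx = xs[1:] * xs[:-1] + Pcross[1:]
    a = Exx.sum() / Ex2[:-1].sum()              # M-step (closed form)
    Q = (Ex2[1:].sum() - a * Exx.sum()) / (T - 1)
    c = (y * xs).sum() / Ex2.sum()
    R = np.mean(y**2 - 2 * c * y * xs + c * c * Ex2)
    return a, c, Q, R
```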

Relevance:

30.00%

Abstract:

Division of labor is a widely studied aspect of colony behavior in social insects. Division of labor models describe how individuals distribute themselves in order to perform different tasks simultaneously. However, models that study division of labor from a dynamical-systems point of view cannot be found in the literature. In this paper, we define a division of labor model as a discrete-time dynamical system, in order to study its equilibrium points and their properties related to convergence and stability. By making use of this analytical model, an adaptive algorithm based on division of labor can be designed to satisfy dynamic criteria. In this way, we have designed and tested an algorithm that varies the response thresholds in order to modify the dynamic behavior of the system. This behavior modification allows the system to adapt to specific environmental and collective situations, making the algorithm a good candidate for distributed control applications. The variable-threshold algorithm is based on specialization mechanisms. It is able to achieve asymptotically stable behavior of the system in different environments and independently of the number of individuals. The algorithm has been successfully tested under several initial conditions and numbers of individuals.
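The response-threshold dynamics underlying such models can be sketched as below, assuming the classic Bonabeau-style threshold response function together with the threshold adaptation (specialization) described above; all rates and update rules are illustrative, not the paper's exact system.

```python
import numpy as np

rng = np.random.default_rng(0)
N, TASKS, STEPS = 20, 2, 200
theta = rng.uniform(1.0, 10.0, (N, TASKS))   # per-individual response thresholds
s = np.full(TASKS, 5.0)                      # task stimuli
xi, phi = 0.1, 0.2                           # learning / forgetting rates

for _ in range(STEPS):
    # probability of engaging in each task (threshold response function)
    p = s**2 / (s**2 + theta**2)
    engaged = rng.random((N, TASKS)) < p
    # stimulus grows at a fixed demand rate, decreases with work done
    s = np.clip(s + 1.0 - 0.5 * engaged.sum(axis=0) / N, 0.0, None)
    # specialization: doing a task lowers its threshold, idleness raises it
    theta = np.clip(theta - xi * engaged + phi * ~engaged, 0.1, 10.0)
```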

Relevance:

30.00%

Abstract:

Multiple-antenna technologies have evolved to support current and future wireless communication systems in their aim to provide the high signal quality and high data rates required by new voice, data and multimedia services. However, it is important to understand the spatial characteristics of the radio channel, since the channel itself largely limits the performance of current wireless communication systems. This raises the need to study the spatial structure of the propagation channel in order to design, assess and deploy multi-antenna technologies more efficiently in current and future wireless communication systems.

Multi-antenna technologies such as smart antennas and MIMO systems have generated great interest in the field of wireless communications, e.g. in cellular systems and more recently in WLANs (Wireless Local Area Networks), mainly because of the higher signal quality and the higher data rates they are able to provide. Their benefits rest on exploiting the spatial dimension to obtain spatial diversity gain, much as FDMA (Frequency Division Multiple Access), TDMA (Time Division Multiple Access) and CDMA (Code Division Multiple Access) obtain diversity in the frequency, time and code domains, respectively.

This thesis studies the spatial characteristics of the channel with multiple-antenna systems by estimating direction-of-arrival (DoA) profiles at the receiver side of the radio link, considering space, polarization and frequency diversity schemes. The first step is a review of smart antenna and MIMO systems, describing in detail the mathematical basis of the performance these systems offer. Then, a set of studies on the estimation of DoA profiles of radio channels with multi-antenna systems is presented, evaluating different aspects of antennas, estimation algorithms, polarization schemes, and far-field and near-field source conditions. Most of the results come from simulations of data models and from measurements with real multi-antenna prototypes. In addition, a self-calibrated MIMO-OFDM-SPAA3D measurement prototype in the 2.45 GHz ISM (Industrial, Scientific and Medical) band is presented, intended for experimentally characterizing the performance of MIMO systems and for spatially characterizing propagation channels under space, polarization and frequency diversity schemes. The studies reported are briefly described below.

Determining user position is a fundamental task for smart antenna systems. Since these systems are equipped with antenna arrays, they can provide enough spatial diversity to accurately draw the spatial characterization of the radio channel through DoA profiles, and therefore the source location. However, array manufacturing errors and certain signal parameters degrade the performance of such direction-finding systems. A parameterized narrowband signal model is therefore proposed to evaluate, through extensive Monte Carlo simulations, the influence of these factors on the DoA estimation errors, in both azimuth and elevation, for the best-known DoA estimation algorithms in the literature. From the resulting error curves, design parameters for array-based localization systems can be extracted.

A second study evaluates polarization diversity schemes in multi-antenna systems to improve the estimation of far-field DoA profiles in channels with depolarization losses. For this purpose, a polarization-sensitive array signal model is developed that accounts for the electromagnetic field of plane waves, and Monte Carlo simulations are used to study how the polarization orientation and the number of polarizations used at the transmitter and at the receiver affect the accuracy of the DoA profiles observed at the receiver. In addition, DoA profiles measured in quasi-static indoor scenarios with a narrowband 4x4 MIMO prototype at 2.45 GHz, equipped with single- and dual-polarized antennas, are presented, showing close agreement with the real propagation scenario. To obtain the DoA profiles, a method based on virtual arrays, which adds a phase reference to properly track the DoA, is proposed and validated with both simulated and experimental data.

Regarding 3D localization of near-field sources (in the Fresnel zone), a third study obtains the spatial structure of the propagation channel with high accuracy in controlled indoor environments (an anechoic chamber) using virtual arrays. Most DoA estimation algorithms assume far-field sources whose wavefronts are planar at the receive array; when sources are close to the array, this assumption no longer holds, as the wavefronts exhibit spherical behavior along the array. The study analyzes the influence of the array size and the radiation pattern on the estimation of the localization parameters, proposing a signal model based on a spherical wavefront steering vector (SWSV). Increasing the number of array antennas reduces the RMS estimation error and substantially improves the spatial representation of the channel. The localization parameters (azimuth, elevation and range) are estimated with a new adaptive multilevel search method, MUSA (multilevel MUSIC-based algorithm), proposed to drastically reduce the processing time demanded by other subspace-based multivariable algorithms, such as MUSIC, at the cost of higher memory requirements. Simulation results are validated with measurements and compared with the Cramér-Rao bound in terms of mean squared error; compensating for the radiation pattern brings the range estimation accuracy substantially closer to the Cramér-Rao bound. In general, better results are obtained with larger arrays and larger source distances, and including the antenna effects in the data model improves range estimation more than angle estimation.

Finally, the theoretical and experimental evaluation of MIMO-OFDM systems is equally important. The design and implementation of a self-calibrated MIMO-OFDM-SPAA3D measurement prototype at 2.45 GHz is therefore presented, developed on a Software-Defined Radio (SDR) platform and able to evaluate the capacity of MIMO systems in indoor and outdoor scenarios. The novelty of this prototype concerns two subsystems. The first is a 3D automatic antenna positioning system (SPAA3D), based on a scanner with three linear axes over which an antenna positioner moves independently under PC control; it reduces antenna positioning errors and allows a large number of channel measurements to be taken in local regions, which favors the statistical characterization of MIMO system parameters. The second is an RF self-calibration module at the transmitter and receiver that measures the frequency response of the RF chains, so that the phase response of the channel, and hence the spatial structure of indoor and outdoor channels in terms of DoA profiles, can be characterized more accurately. With this prototype, several measurement campaigns were carried out to evaluate the MIMO channel in terms of capacity, comparing two polarization schemes and taking into account the frequency diversity provided by OFDM modulation in different scenarios.
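As a reference point for the DoA estimators discussed above, a minimal narrowband MUSIC pseudospectrum for a uniform linear array is sketched below (the thesis' MUSA method adds an adaptive multilevel search on top of this subspace idea). This is a generic textbook implementation, not the thesis code.

```python
import numpy as np

def music_spectrum(X, n_sources, d=0.5, angles=np.linspace(-90, 90, 361)):
    """X: (n_antennas, n_snapshots) complex snapshots; d: spacing in wavelengths."""
    M = X.shape[0]
    Rxx = X @ X.conj().T / X.shape[1]        # sample covariance
    w, V = np.linalg.eigh(Rxx)               # ascending eigenvalues
    En = V[:, : M - n_sources]               # noise subspace
    k = np.arange(M)
    P = []
    for th in np.deg2rad(angles):
        a = np.exp(2j * np.pi * d * k * np.sin(th))  # ULA steering vector
        P.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return angles, np.asarray(P)             # peaks of P indicate the DoAs
```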

Relevance:

30.00%

Abstract:

In this paper the daily temporal and spatial behavior of electric vehicles (EVs) is modelled using an activity-based microsimulation model (ActBM) for the Flanders region (Belgium). Assuming that all EVs are fully charged at the beginning of the day, this mobility model is used to determine the percentage of Flemish vehicles that cannot cover their programmed daily trips and need to be recharged during the day. Assuming a variable electricity price, an optimization algorithm determines when and where EVs can be recharged at minimum cost for their owners. This optimization takes into account the individual mobility constraints of each vehicle, as it can only be charged when the car is stopped and the owner is performing an activity. From this information, the aggregated electric demand for Flanders is obtained, identifying the most overloaded areas at the critical hours. Finally, the activities EV owners are engaged in during the recharging period are also analyzed. From this analysis, different actions for public charging point deployment in different areas and for different activities are proposed.
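The per-vehicle cost-minimization step lends itself to a small linear program: choose the charging power in each time slot so that the daily energy deficit is met at minimum cost, charging only while the vehicle is parked. The sketch below uses invented prices, availability and power limits, and is not the paper's optimization algorithm.

```python
import numpy as np
from scipy.optimize import linprog

T = 24                                                        # hourly slots
price = 0.10 + 0.08 * np.sin(np.linspace(0, 2 * np.pi, T))    # EUR/kWh (toy)
parked = np.ones(T); parked[7:9] = 0; parked[17:19] = 0       # driving slots
p_max, energy_needed = 3.7, 12.0                              # kW, kWh

res = linprog(c=price,                                   # minimize sum(price*p)
              A_eq=np.ones((1, T)), b_eq=[energy_needed],  # meet the deficit
              bounds=[(0.0, p_max * a) for a in parked],   # only while parked
              method="highs")
print(res.x.round(2), res.fun)   # hourly charging schedule and its cost
```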

Relevance:

30.00%

Abstract:

The objective of this work is to carry out a realistic assessment of the uncertainty in measurements of noise transmitted into the interior of a room according to Annex IV of Royal Decree 1367/2007, considering all the possible causes that contribute to the uncertainty, so that the inaccuracies observed in intercomparison exercises are covered by a single test. Another fundamental part of the work is to quantify the source of uncertainty introduced by the choice of the measurement point. First, the A-weighted equivalent continuous level is measured in rooms of different sizes, reached by different types of noise emitted from outside. The next step is an analysis of the measured levels, both in their spatial distribution and in their temporal evolution, with emphasis on the maximum values, LAeq,5s. It is then checked whether the distribution of the measured levels follows a normal distribution, using the STATGRAPHICS statistical analysis software. Based on the size of the selected samples, statistical analysis is used to determine and quantify the contribution to the uncertainty generated by the choice of the measurement point. Finally, a measurement of activity noise according to Royal Decree 1367/2007 is carried out, providing an assessment of the uncertainty that takes into account all the sources that generate it, using the classical GUM approach.
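For reference, the classical GUM approach mentioned above combines the standard uncertainties u(x_i) of uncorrelated input quantities through the law of propagation of uncertainty and reports an expanded uncertainty U:

```latex
u_c^{2}(y) = \sum_{i=1}^{n} \left( \frac{\partial f}{\partial x_i} \right)^{\!2} u^{2}(x_i),
\qquad
U = k\,u_c(y),
```

where y = f(x_1, ..., x_n) is the measurand and k is the coverage factor (k of about 2 for roughly 95 % coverage).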

Relevance:

30.00%

Abstract:

This thesis focuses on developing technologies for human-robot interaction in nuclear fusion environments. The main problem in the nuclear fusion sector lies in the extreme environmental conditions inside the reactor, which impose very restrictive requirements on equipment that must withstand high levels of radiation, magnetism, ultra-high vacuum, temperature, and so on. Since it is not feasible for humans to carry out tasks directly, remote handling devices must be used for operation and maintenance processes.

In the ITER facilities it is mandatory to have a controlled environment of extreme safety and security with validated standards, and the definition and use of protocols is essential to govern its operation. Focusing on telemanipulation with a high degree of scaling, protocols must be defined for open systems that allow interaction among equipment and devices of different kinds. In this context, a Teleoperation Protocol is defined that enables interconnection between master and slave devices of different types, allowing them to communicate bilaterally and to use different control algorithms depending on the task to perform. This protocol and its interconnectivity have been tested on the Teleoperation Open Platform (T.O.P.), which has been developed and integrated at the ETSII UPM as a tool to test, validate and conduct telerobotics experiments. The protocol has been proposed, through AENOR, to the ISO Telerobotics group as a valid solution to the existing problem, and it is currently under review.

The protocol design links master and slave; however, with the radiation levels present in ITER, the controller electronics cannot enter the tokamak. It is therefore proposed that, through minimal suitably protected electronics, the control signals travelling through the umbilical cable from the controller to the robot base be multiplexed. This theoretical exercise demonstrates the utility and feasibility of this type of solution to reduce the volume and weight of the umbilical cabling by roughly 90%, although it requires developing specific electronics with RadHard certification to withstand the enormous radiation levels of ITER.

For this generic manipulator, and with the help of the Teleoperation Open Platform, an algorithm has been developed that, using a force/torque sensor and an IMU placed on the robot wrist and suitably protected against radiation, computes the forces and inertias produced by the load. This is needed to transmit scaled forces to the operator so that the load being manipulated can be felt, rather than other undesired forces acting on the remote slave, as happens with other force estimation techniques. Since the shielding of the sensors must not be large or heavy, this type of technology should be reserved for maintenance tasks during ITER's programmed shutdowns, when radiation levels are at their minimum.

In addition, so that the operator feels the load force as faithfully as possible, electronics have been developed that, through current control of the motors, allow force control based on the characterization of the master motors. To further increase the operator's perception, experiments were conducted showing that applying multimodal stimuli (visual, auditory and haptic) increases immersion and task performance, since they directly influence response time.

Finally, regarding the operator's visual feedback, ITER works with cameras placed at strategic locations, whereas humans manipulating objects use binocular vision, constantly changing the point of view according to the visual needs of each moment during the task. A three-dimensional reconstruction of the task space has therefore been developed from an RGB-D camera-sensor, making it possible to obtain a mobile virtual binocular point of view from a camera located at a fixed point, which can be projected on a 3D display device so that the operator can vary the stereoscopic point of view according to his or her preferences.

The successful integration of these human-robot interaction technologies in the T.O.P., validated through tests and experiments, verifies their usefulness in the practical application of telemanipulation with a high degree of scaling in nuclear fusion environments.
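The load-force computation from the wrist force/torque sensor and the IMU can be pictured as subtracting the gravity and inertial components of the (assumed known) load mass from the raw reading. The sketch below is only illustrative: the frame conventions, signs and names are assumptions, not the thesis' algorithm.

```python
import numpy as np

def load_force(f_sensor, R_world_to_sensor, accel_sensor, m_load,
               g=np.array([0.0, 0.0, -9.81])):
    """f_sensor: (3,) measured force in the sensor frame.
    R_world_to_sensor: (3,3) rotation from world to sensor frame (from the IMU).
    accel_sensor: (3,) linear acceleration of the load in the sensor frame."""
    f_gravity = R_world_to_sensor @ (m_load * g)  # weight seen by the sensor
    f_inertia = m_load * accel_sensor             # inertial contribution
    return f_sensor - f_gravity - f_inertia       # force to scale and reflect
```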

Relevance:

30.00%

Abstract:

3D crop reconstruction with high temporal resolution and using non-destructive measuring technologies can support the automation of plant phenotyping processes. The availability of such 3D data can give valuable information about plant development and the interaction of the plant genotype with the environment. This article presents a new methodology for georeferenced 3D reconstruction of maize plant structure. For this purpose a total station, an IMU and several 2D LiDARs with different orientations were mounted on an autonomous vehicle. With the multistep methodology presented, based on the application of the ICP algorithm for point cloud fusion, it was possible to overlap the georeferenced point clouds. The overlapping point cloud algorithm showed that the aerial points (corresponding mainly to plant parts) were reduced to 1.5%-9% of the total registered data; the remaining points were redundant or ground points. By including different LiDAR points of view of the scene, a more realistic representation of the surroundings is obtained through the incorporation of new useful information, but also of noise. Georeferenced 3D maize plant reconstruction at different growth stages, combined with the accuracy of the total station, could be highly useful for precision agriculture at the level of the individual crop plant.
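A common way to separate ground from plant ("aerial") points after the fusion step, sketched below as an illustration rather than the authors' procedure, is to fit the dominant ground plane with RANSAC and keep the points off the plane.

```python
import numpy as np

def split_ground(points, n_iter=200, tol=0.05, seed=0):
    """points: (N,3) georeferenced cloud. Returns a boolean ground mask."""
    rng = np.random.default_rng(seed)
    best_mask = np.zeros(len(points), bool)
    for _ in range(n_iter):
        # fit a candidate plane through 3 random points
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                       # degenerate (collinear) sample
        n /= norm
        dist = np.abs((points - p0) @ n)   # point-to-plane distances
        mask = dist < tol
        if mask.sum() > best_mask.sum():   # keep the plane with most inliers
            best_mask = mask
    return best_mask                       # True = ground; ~mask keeps plants
```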

Relevance:

30.00%

Abstract:

Development of a Sensorimotor Algorithm Able to Deal with Unforeseen Pushes and Its Implementation Based on VHDL is the title of my thesis, which concludes my Bachelor's Degree at the Escuela Técnica Superior de Ingeniería y Sistemas de Telecomunicación of the Universidad Politécnica de Madrid. It covers the work I did in the Neurorobotics Research Laboratory of the Beuth Hochschule für Technik Berlin during my ERASMUS year in 2015. This thesis is focused on the field of robotics, specifically on an electronic circuit called the Cognitive Sensorimotor Loop (CSL) and its control algorithm, written in the VHDL hardware description language. What makes the CSL special is its ability to operate a motor both as a sensor and as an actuator. This way, it is possible to achieve a balanced position in any of the robot joints (e.g. the robot manages to stand) without needing any conventional sensor: the back electromotive force (EMF) induced in the motor coils is measured, and the control algorithm responds depending on its magnitude. The CSL circuit contains mainly an analog-to-digital converter (ADC) and a driver. The ADC consists of a delta-sigma modulator which generates a series of bits with a certain percentage of 1's and 0's, proportional to the back EMF. The control algorithm, running on an FPGA, processes the bit frame and outputs a signal for the driver. This driver, which has an H-bridge topology, supplies the motor with the power it needs and gives it the ability to rotate in both directions. The objective of this thesis is to document the experiments and overall work done on push-ignoring contractive sensorimotor algorithms, that is, sensorimotor algorithms that ignore large-magnitude forces (compared to gravity) applied over a short time interval to a pendulum system. This main objective is divided into two sub-objectives: (1) developing a system based on parameterized thresholds and (2) developing a system based on a push-bypassing filter. System (1) contains a module that outputs a signal which blocks the main sensorimotor algorithm when a push is detected. This module takes several parameters as inputs, e.g. the back-EMF increment required to consider a force a push, or the time interval between samples. System (2) consists of a low-pass Infinite Impulse Response (IIR) digital filter that cuts any frequency considered faster than a certain push oscillation. This filter required an intensive study of how to implement some functions and data types (fixed- or floating-point data) not supported by standard VHDL packages. Once this was achieved, the next challenge was to simplify the solution as much as possible without using unofficial user-made packages. Both systems exhibited a series of interesting advantages and disadvantages that are discussed in the document: stability, reaction time, simplicity and computational load are some of the many factors studied in the designed systems. Finally, some additions to the systems are also documented: a VGA visual interface, a module that compensates the ADC offset, and the implementation of a bank of MIDI faders, among others.
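To make the two strategies concrete, here is a Python sketch of their signal-processing cores (the thesis implements them in VHDL on an FPGA): a threshold gate that blocks the sensorimotor loop on a fast back-EMF jump, and a one-pole low-pass IIR filter that cannot follow a short push. All constants are illustrative.

```python
def threshold_gate(emf, last_emf, jump=0.3):
    """Strategy (1): flag a push when the back-EMF increment between two
    samples is too large; a True result blocks the main sensorimotor loop."""
    return abs(emf - last_emf) > jump

def lowpass_step(emf, state, alpha=0.05):
    """Strategy (2): one-pole IIR low-pass, y[n] = y[n-1] + alpha*(x[n]-y[n-1]).
    A small alpha cuts anything faster than the expected push oscillation."""
    return state + alpha * (emf - state)
```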