918 results for Television -- Antennas -- Design and construction -- Data processing
Abstract:
This study had a retrospective design and used secondary data from the National Child Abuse and Neglect Data System (NCANDS), provided by the National Data Archive on Child Abuse and Neglect at the Family Life Development Center, Cornell University. The dataset contained information for the year 2005 on children from birth to 18 years of age. Child abuse and neglect of children with disabilities was evaluated in depth in the present study. Descriptive and statistical analyses were performed comparing children with and without disabilities. It was found that children with disabilities have a lower substantiation rate, which likely indicates that their disability interferes with reporting. The results of this research demonstrate the pressing need to teach professionals and laypersons alike how to recognize and substantiate abuse among children with disabilities.
Abstract:
This doctoral thesis, entitled "Contribution to the analysis, design and assessment of compact antenna test ranges at millimeter wavelengths", aims to deepen the knowledge of a particular antenna measurement system: the compact range operating in the millimeter-wavelength frequency bands. The thesis was developed at the Radiation Group (GR), an antenna laboratory belonging to the Signals, Systems and Radiocommunications Department (SSR) of the Technical University of Madrid (UPM). The Radiation Group has extensive experience in antenna measurements and currently runs four facilities with different configurations: a Gregorian compact antenna test range, a spherical near-field range, a planar near-field range and a semi-anechoic arch system. The research work carried out for this thesis contributes to the knowledge of the first of these configurations at higher frequencies, beyond the microwave region in which the Radiation Group already offers customer-level performance. To reach this high-level goal, a set of scientific tasks was carried out sequentially; they are succinctly described in the following paragraphs. The first step was a review of the state of the art. The study of the scientific literature covered measurement practices in compact antenna test ranges together with the particularities of millimeter-wavelength technologies. The joint study of both fields converged, where these measurement facilities are concerned, on a series of technological challenges that become serious bottlenecks at different stages: analysis, design and assessment. Second, after the overview study, the focus was set on electromagnetic analysis algorithms. These formulations make it possible to study certain electromagnetic features of interest, such as the phase of the field distribution or the stray signals of particular structures, when they interact with electromagnetic wave sources. Properly operated, a CATR facility relies on wave-collimating optics that are large in terms of wavelengths. Accordingly, the electromagnetic analysis introduces a very large number of mathematical unknowns, which grows with frequency following polynomial laws whose order depends on the algorithm used. In particular, the optics configuration of interest here is the reflection-type serrated-edge collimator. The analysis of these devices requires flexible handling of almost arbitrary scattering geometries, and this flexibility is the core of an algorithm's ability to support the subsequent design tasks. The contribution of this thesis to this field is a formulation that is powerful both in handling various analysis geometries and in computational terms. Two algorithms were developed; although based on the same hybridization principle, they achieve different orders of physical accuracy at different computational costs. Their CATR design capabilities were compared, reaching both qualitative and quantitative conclusions about their scope. Third, the interest shifted from analysis and design tasks towards range assessment. Millimeter wavelengths imply strict mechanical tolerances and fine setup adjustment. In addition, the large number of unknowns already faced in the analysis stage appears again in the in-chamber field probing stage.
The reduced dynamic range available from semiconductor millimeter-wave sources additionally requires longer integration times at each probing point. These peculiarities greatly increase the difficulty of assessment processes in CATR facilities beyond microwaves. The bottleneck becomes so tight that it compromises range characterization beyond a certain limit frequency, which typically lies in the lowest segment of the millimeter-wavelength band, whereas the value of range assessment lies, on the contrary, towards the highest segment. This thesis contributes to this technological scenario by developing quiet-zone probing techniques that achieve substantial data reduction ratios. As a side effect, they increase the robustness of the results to noise, which amounts to a virtual increase of the setup's available dynamic range. Fourth, the environmental sensitivity of millimeter wavelengths was addressed. It is well known that electromagnetic experiments drift because the results depend on the surrounding environment. At millimeter wavelengths, this relegates many practices that are industrial at microwave frequencies to the experimental stage. In particular, the evolution of the atmosphere, even within acceptable conditioning bounds, results in drift phenomena that completely mask the experimental results. The contribution of this thesis in this respect consists of electrically modeling the indoor atmosphere of a CATR as a function of the environmental variables that affect the range's performance. A simple model was developed that relates high-level phenomena, such as feed-probe phase drift, to low-level quantities that are easy to sample: relative humidity and temperature. With this model, environmental compensation can be performed and the effective chamber conditioning is extended towards higher frequencies. In summary, the purpose of this thesis is to deepen the knowledge of compact antenna test ranges at millimeter wavelengths. This knowledge is delivered through the sequential stages of a CATR's conception, from early low-level electromagnetic analysis to the assessment of an operational facility, at each of which bottlenecks currently exist that seriously compromise antenna measurement practices at millimeter wavelengths.
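The phase-drift model just described lends itself to a simple illustration. The sketch below (my own, not the thesis' code) relates feed-probe phase drift to relative humidity and temperature through the radio refractivity of the chamber air; the ITU-R-style refractivity formula, the 10 m path, the 100 GHz frequency and the ambient pressure are assumptions used only for illustration.

    import numpy as np

    def refractivity(temp_c, rh_percent, pressure_hpa=1013.25):
        """Radio refractivity N (ppm) from temperature, relative humidity and pressure
        (ITU-R P.453-style formula, used here only as an illustrative model)."""
        T = temp_c + 273.15
        # saturation water-vapour pressure (Magnus approximation, hPa)
        e_s = 6.1121 * np.exp(17.502 * temp_c / (240.97 + temp_c))
        e = rh_percent / 100.0 * e_s          # partial water-vapour pressure, hPa
        return 77.6 / T * (pressure_hpa + 4810.0 * e / T)

    def phase_drift_deg(freq_hz, path_m, t0_c, rh0, t1_c, rh1):
        """Feed-probe phase change (degrees) caused by a change of indoor conditions."""
        c = 299_792_458.0
        dN = refractivity(t1_c, rh1) - refractivity(t0_c, rh0)   # change in ppm
        return np.degrees(2 * np.pi * freq_hz * path_m / c * dN * 1e-6)

    # Example: 10 m feed-probe path at 100 GHz, chamber drifting from 22 C / 45 %RH to 23 C / 50 %RH
    print(phase_drift_deg(100e9, 10.0, 22.0, 45.0, 23.0, 50.0))

With these assumed numbers the drift is of the order of ten degrees of phase, which illustrates why even a well-conditioned chamber needs compensation at millimeter wavelengths.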
Abstract:
The manipulation and handling of an ever-increasing volume of data by current data-intensive applications require novel techniques for efficient data management. Despite recent advances in every aspect of data management (storage, access, querying, analysis, mining), future applications are expected to scale to even higher degrees, not only in terms of the volumes of data handled but also in terms of users and resources, often making use of multiple pre-existing, autonomous, distributed or heterogeneous resources.
Abstract:
Nowadays, more and more base stations are equipped with active conformal antennas. These antenna designs combine phase-shifting systems with multibeam networks, providing multi-beam capability and interference rejection, which optimizes multi-channel systems. GEODA is a conformal adaptive antenna system designed for satellite communications. Operating at 1.7 GHz with circular polarization, it can track and communicate with several satellites at once thanks to its adaptive beams. The antenna is based on a set of similar triangular arrays that are divided into subarrays of three elements called 'cells'. Transmit/receive (T/R) modules manage beam steering by shifting the phases. More accurate steering of the GEODA antenna could be achieved by using a multibeam network. Several multibeam network designs based on the Butler matrix are presented.
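As an aside to the abstract above (not taken from the paper), an idealized Butler matrix is mathematically equivalent to a DFT beamforming matrix: each input port produces a progressive phase across the array and therefore a distinct beam. The sketch below uses that equivalence for a uniform linear array; the four ports and the half-wavelength spacing are assumptions, not GEODA's actual three-element cells.

    import numpy as np

    N = 4                      # ports / radiating elements (assumed for illustration)
    d_over_lambda = 0.5        # element spacing in wavelengths (assumed)

    # Idealized Butler network modelled as a unitary DFT matrix:
    # port k drives element n with phase 2*pi*k*n/N
    butler = np.exp(1j * 2 * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)

    for k in range(N):
        # progressive phase between adjacent elements produced by port k
        alpha = np.angle(butler[1, k] / butler[0, k])
        # beam direction of a uniform linear array: sin(theta) = -alpha / (2*pi*d/lambda)
        sin_theta = -alpha / (2 * np.pi * d_over_lambda)
        print(f"port {k}: progressive phase {np.degrees(alpha):6.1f} deg -> "
              f"beam at {np.degrees(np.arcsin(sin_theta)):6.1f} deg")

A practical Butler matrix adds fixed phase offsets so the beams straddle broadside instead of including it; the DFT model above only illustrates the one-port-one-beam, orthogonal-beam principle.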
Abstract:
Communications-Based Train Control (CBTC) systems require high-quality radio data communications for train signaling and control. Currently, most of these systems use the 2.4 GHz band with proprietary radio transceivers and leaky feeders as the distribution system. All of them demand a high-QoS radio network to improve the efficiency of railway networks. We present narrowband, broadband and correlated data measurements taken in the Madrid underground with a 2.4 GHz transmission system in a 2 km test network in subway tunnels. The proposed architecture has a strong overlap between cells to improve reliability and QoS. The radio planning of the network is carefully described and modeled with narrowband and broadband measurements and statistics. The result is a network with 99.7% of packets transmitted correctly and an average propagation delay of 20 ms. These results fulfill the QoS specifications of CBTC systems.
Abstract:
It is easy to get frustrated with spoken conversational agents (SCAs), perhaps because they seem callous. By and large, the quality of human-computer interaction is affected by the inability of SCAs to recognise and adapt to the user's emotional state. With the mass appeal of artificially mediated communication, there is an increasing need for SCAs to be socially and emotionally intelligent, that is, to infer and adapt to their human interlocutors' emotions on the fly, in order to achieve an affective, empathetic and natural interaction. An enhanced quality of interaction would reduce users' frustration and consequently increase their satisfaction. These reasons have motivated the development of SCAs that include socio-emotional elements, turning them into affective and socially sensitive interfaces. One barrier to the creation of such interfaces has been the lack of methods for modelling emotions in a task-independent environment. Most emotion models for spoken dialog systems are task-dependent and thus cannot be used "as is" in different applications. This thesis focuses on improving this situation; it concerns the computational modelling of emotion, personality and their interrelationship for task-independent, autonomous SCAs. The generation of emotion is driven by needs, inspired by human motivational systems. The work in this thesis is organised in three stages, each with its own contribution. The first stage involved defining, integrating and quantifying the psychologically grounded motivational and emotional models on which the work is based. These were then turned into a computational model by implementing them as software entities. The computational model was incorporated into and tested with an existing SCA host, a HiFi-control agent. The second stage concerned the automatic prediction of affect, which has been the main challenge towards the greater aim of infusing social intelligence into the HiFi agent. In recent years, studies on affect detection from voice have moved towards realistic, non-acted data, in which emotions are subtler; perceiving subtler emotions is more challenging, as tasks such as labelling and machine prediction demonstrate. In this stage, we addressed part of this challenge by using user satisfaction ratings as the target and conversational/dialog features as the predictors for discriminating contentment from frustration, two emotions known to be prevalent in spoken human-computer interaction. The final stage concerned the evaluation of the emotional model through the HiFi agent. A series of user studies with 70 subjects was conducted in a real-time environment, each in a different phase and with its own conditions. All the studies compared the baseline, non-modified agent with the modified agent. The findings enhance our understanding of the utility of emotion in spoken dialog systems in several ways. First, an SCA should not express its emotions blindly, even if they are positive; rather, it should adapt its emotions to the user's state. Second, low performance in an SCA may be compensated for by the exploitation of emotion. Third, expressing emotion through prosody improves users' perceptions of an SCA more than expressing it through lexical content alone.
Taken together, these findings not only support the success of the emotional model, but also provide substantial evidence of the benefits of adding emotion to an SCA, especially in mitigating users' frustration and ultimately improving their satisfaction.
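As a hedged illustration of the affect-prediction stage described in this abstract (not the thesis' actual features, data or learner), dialog-level features can be used to discriminate contentment from frustration, with user satisfaction ratings supplying the labels. The feature names and the synthetic data below are assumptions, and scikit-learn's logistic regression merely stands in for whatever classifier was actually used.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Hypothetical dialog features per interaction: number of turns, number of
    # system errors / re-prompts, mean user response delay (s). Synthetic data.
    n = 200
    X = np.column_stack([
        rng.poisson(8, n),          # dialog length in turns
        rng.poisson(1.5, n),        # error / re-prompt count
        rng.gamma(2.0, 1.0, n),     # mean response delay
    ])
    # Label: 1 = frustrated, 0 = content, derived here from a noisy satisfaction rating
    satisfaction = 5 - 0.5 * X[:, 1] - 0.2 * X[:, 2] + rng.normal(0, 0.7, n)
    y = (satisfaction < 3.5).astype(int)

    clf = LogisticRegression(max_iter=1000)
    print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())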
Abstract:
The Simultaneous Multiple Surfaces (SMS) design method was born for nonimaging optics applications and is now also being applied to imaging optics. In this paper, the wave aberration function of a selected SMS design is studied. It was found that the SMS aberrations can be described with a small set of parameters, sometimes only two. The connection of this model with the conventional aberration expansion is also presented. To verify this mathematical model, two SMS designs were ray-traced and the data were analyzed with classical statistical methods: discrepancy plots and the quadratic average (RMS) error. Both tests show very good agreement with the model for our systems.
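A hedged sketch of that kind of check (not the paper's data or aberration model): fit a low-parameter model to ray-traced wave aberration samples, then examine the discrepancies and their quadratic average (RMS) error. The synthetic samples and the two-parameter polynomial form below are assumptions.

    import numpy as np

    # Synthetic ray-trace samples: wave aberration W(rho) on the normalized pupil (illustrative only)
    rho = np.linspace(0.0, 1.0, 50)
    w_raytrace = 0.8 * rho**4 + 0.1 * rho**2 + 0.01 * np.random.default_rng(0).normal(size=rho.size)

    # Two-parameter model: W(rho) ~ a*rho**2 + b*rho**4 (assumed form for this sketch)
    A = np.column_stack([rho**2, rho**4])
    coeffs, *_ = np.linalg.lstsq(A, w_raytrace, rcond=None)
    w_model = A @ coeffs

    discrepancy = w_raytrace - w_model                 # data for the discrepancy plot
    rms_error = np.sqrt(np.mean(discrepancy**2))       # quadratic average error
    print("fitted coefficients:", coeffs, " RMS error:", rms_error)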
Abstract:
Due to advances both in information technology in general and in databases in particular, data storage devices are becoming cheaper and data processing speeds are increasing. As a result, organizations tend to store large volumes of data holding great potential information. Decision Support Systems (DSS) try to use the stored data to obtain valuable information for organizations. In this paper, we use both data models and use cases to represent the functionality of data processing in DSS, following Software Engineering processes. We propose a methodology for developing DSS in the analysis phase, with respect to data processing modeling. As a starting point, we have used a data model adapted to the semantics of multidimensional databases, or data warehouses (DW). We have also taken an algorithm that provides all the possible ways to automatically cross-check multidimensional model data. Using these, we propose use case diagrams and descriptions that can be considered patterns representing the DSS functionality with regard to the processing of the DW data on which the DSS is based. We highlight the reusability and automation benefits that can be achieved, and we believe this study can serve as a guide in the development of DSS.
Abstract:
An innovative dissipative multi-beam network for triangular arrays of three radiating elements is proposed. This novel network provides three orthogonal beams at an elevation angle θ0 and a fourth one in the broadside steering direction. The network is composed of 90° hybrid couplers and fixed phase shifters. In this paper, the relation between the network components, the radiating element spacing and the beam steering directions is presented. The application of the proposed dissipative network to the triangular cells of three radiating elements that make up the intelligent GEODA antenna is also shown. This system works at 1.7 GHz, with a 60° single-element beamwidth and an element spacing of 0.57λ. Both the theoretical and the simulated beam patterns obtained with the network are depicted. Moreover, the whole system, the dissipative network together with a GEODA cell array, has been measured in the anechoic chamber of the Radiation Group at the Technical University of Madrid, demonstrating the expected performance.
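To make the relation between element spacing, excitation phases and steering direction concrete, here is a small sketch (my own, not from the paper) that computes the array-factor peak of a three-element triangular cell with the 1.7 GHz frequency and 0.57λ spacing quoted above; the equilateral geometry and the example excitation phases are assumptions.

    import numpy as np

    # Triangular 3-element cell (illustrative geometry, not the exact GEODA layout)
    f = 1.7e9
    lam = 299_792_458.0 / f
    d = 0.57 * lam                                   # element spacing quoted in the paper
    pos = np.array([[0.0, 0.0],
                    [d, 0.0],
                    [d / 2, d * np.sqrt(3) / 2]])    # equilateral triangle, metres

    def beam_peak(phases_deg):
        """Return (theta, phi) in degrees of the array-factor maximum for the given
        per-element phase excitation (uniform amplitude)."""
        w = np.exp(1j * np.radians(phases_deg))
        k = 2 * np.pi / lam
        theta = np.radians(np.linspace(0, 90, 181))
        phi = np.radians(np.linspace(0, 360, 361))
        TH, PH = np.meshgrid(theta, phi, indexing="ij")
        # direction cosines of the observation direction
        u, v = np.sin(TH) * np.cos(PH), np.sin(TH) * np.sin(PH)
        af = sum(w[n] * np.exp(1j * k * (pos[n, 0] * u + pos[n, 1] * v)) for n in range(3))
        i, j = np.unravel_index(np.argmax(np.abs(af)), af.shape)
        return np.degrees(theta[i]), np.degrees(phi[j])

    print(beam_peak([0, 0, 0]))          # broadside beam
    print(beam_peak([0, 120, 240]))      # one of the tilted beams (assumed excitation)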
Abstract:
The design, construction and operation of the tunnels of the M-30, the major ring road of the city of Madrid (Spain), represent a very interesting project in which a wide variety of situations (geometrical, topographical, etc.) had to be covered under variable traffic conditions. For these reasons, the M-30 project is a remarkable technical challenge which, after its completion, became an international reference. From the "design for safety" perspective, a holistic approach was used to deal with new technologies, the integration of systems and the development of procedures in order to reach the highest possible safety level. One of the primary goals, however, was to achieve reasonably homogeneous characteristics that allow a network of tunnels to be operated as a single infrastructure. In the case of the ventilation system, these goals required innovative solutions and coordination efforts of great interest. Consequently, this paper describes the main ideas underlying the conceptual solution developed, focusing on the principal peculiarities of the project.
Abstract:
This project consists of the design and construction of a synthesizer based on the 6581 Sound Interface Device (SID) chip. This chip was responsible for sound generation in the Commodore 64, a home computer launched in 1982, and was the first complex synthesizer built for a computer. The chip is a three-voice synthesizer, each voice capable of generating four different waveforms. Each voice has independent control of several parameters, allowing a relatively wide variety of sounds and effects, which made it very useful for video games. It also includes a programmable filter, allowing further timbre control via subtractive synthesis. The synthesizer has been built on Arduino, an open-source electronics prototyping platform consisting of a printed circuit board with a microcontroller that can be programmed from a PC to perform many different functions (lighting LEDs, controlling servomechanisms in robotics, data processing and transmission, etc.). The synthesizer is controlled via MIDI, for example from a piano-style keyboard. Over MIDI it receives information such as which notes to play or the values of the SID parameters that modify the properties of the sound. It can also receive that information from a PC over a USB connection. Two versions of the synthesizer have been built: a hardware version, which uses the SID chip for sound generation, and a software version, which replaces the SID with an emulator, that is, a program that behaves (as far as possible) in the same way as the SID. The emulator is implemented on an Atmel ATmega168 microcontroller, the same one used by Arduino.
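As an illustrative aside (not part of the project text), turning incoming MIDI notes into SID oscillator settings only requires the standard MIDI tuning formula and the SID's 16-bit frequency word, f_out = Fn * clock / 2^24. The sketch below assumes a PAL C64 clock; the helper names are hypothetical.

    # Hypothetical helpers: map a MIDI note number to the SID's 16-bit frequency register.
    PHI2_PAL = 985_248          # C64 PAL system clock in Hz (assumed target machine)

    def midi_to_freq_hz(note: int) -> float:
        """Equal-tempered pitch of a MIDI note (A4 = 69 = 440 Hz)."""
        return 440.0 * 2 ** ((note - 69) / 12)

    def sid_freq_register(note: int) -> int:
        """SID register value Fn such that f_out = Fn * clock / 2**24."""
        fn = round(midi_to_freq_hz(note) * 2 ** 24 / PHI2_PAL)
        return min(fn, 0xFFFF)  # clamp to the 16-bit register width

    # Example: middle C (MIDI 60) and A4 (MIDI 69)
    for n in (60, 69):
        print(n, midi_to_freq_hz(n), hex(sid_freq_register(n)))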
Abstract:
When we interact with the environment in our daily life (using a toothbrush, opening doors, using a cell phone, etc.) or in professional settings (medical interventions, manufacturing processes, etc.), we typically perform dexterous manipulations that involve multiple fingers of both hands. Multi-finger haptic methods can therefore provide a more natural and realistic human-machine interface, enhancing immersion when interacting with simulated or remote environments. However, most commercial haptic devices allow interaction with only one contact point, which may be sufficient for exploration or palpation tasks but is not enough to perform advanced manipulations such as grasping. This thesis investigates the mechanical design, control and applications of a modular haptic device that provides force feedback to the index finger, middle finger and thumb of the user. The mechanical design was optimized with multi-objective functions to achieve low inertia, a large workspace, high manipulability and force feedback above 3 N throughout the workspace; the bandwidth and stiffness of the device were assessed through simulation and real experimentation. One of the most important parts of such a device is the end-effector, since it is in contact with the user. A lightweight, user-adaptable and cost-effective thimble incorporating contact force sensors was designed, allowing estimation of the normal and tangential forces applied by the user during the manipulation of real and virtual objects. A real-time, modular control architecture for multi-finger haptic interaction was then designed. Its requirements include the acquisition, processing and exchange over the network of a large number of control and instrumentation signals, together with the computation of direct and inverse kinematics, the Jacobian, grasp-detection algorithms, etc., all of which must run in real time at a guaranteed rate of 1 kHz. The hardware architecture combines an FPGA for the low-level controller with a real-time controller for the more complex calculations, providing a compact and scalable solution. Set-ups for dexterous virtual and remote precision manipulation are described, together with a new algorithm, called iterative kinematic decoupling, for solving the inverse kinematics of a robotic manipulator, and its comparison with current methods. To understand the importance of multimodal interaction including haptics, a user study was carried out in collaboration with neuroscientists from the Technion Israel Institute of Technology to identify which sensory stimuli correlate with faster response times and enhanced accuracy. Comparing grasping response times for unimodal events (auditory, visual and haptic) with bimodal and trimodal combinations shows that the synchronized motion of the fingers used to generate the grasping response relies mainly on haptic cues. This processing-speed advantage of haptic cues suggests that virtual environments including a haptic component generate better motor contingencies and enhance the plausibility of events; systems that include haptic perception give users more time at the cognitive stages to fill in missing information creatively and form a richer experience. A major application of haptic devices is the design of new simulators to train manual skills in the medical sector. In collaboration with physical therapists from Griffith University in Australia, a simulator for hand rehabilitation exercises was developed. The non-linear stiffness of the metacarpophalangeal joint of the index finger was estimated using the designed end-effector, and these parameters were implemented in a scenario that simulates the behaviour of the human hand and allows haptic interaction through the designed interface. The potential applications of this simulator are related to the training and education of physiotherapy students. Finally, new methods were developed to simultaneously control the position and orientation of a robotic manipulator and the grasp of a robotic hand when interacting with large real environments. The reachable workspace of the haptic device is extended by automatically switching between position and rate control modes, and the user's hand gesture is recognized from the relative movements of the index finger, middle finger and thumb during the early stages of the approach to the object and mapped to the robotic hand actuators. Experiments on dexterous manipulation of objects with a manipulator and different robotic hands, carried out in collaboration with researchers from the Harvard BioRobotics Laboratory, show that the overall task time is reduced and that the methods allow dexterous manipulations to be completed accurately.
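A minimal sketch (my own illustration, not the thesis' code) of the workspace-extension idea mentioned above: while the haptic master stays inside a comfortable zone, the remote manipulator follows a scaled position command, and near the workspace boundary the excursion is reinterpreted as a velocity command so the operator can keep moving the remote arm. The radius, zone fraction and gains are assumptions.

    import numpy as np

    WORKSPACE_RADIUS = 0.10    # usable master workspace radius in metres (assumed)
    RATE_ZONE = 0.8            # fraction of the radius where rate control takes over (assumed)
    POS_GAIN = 2.0             # master-to-slave position scaling (assumed)
    RATE_GAIN = 0.5            # commanded speed per metre of excess excursion (assumed)

    def slave_command(master_pos, slave_pos, dt):
        """Return the new slave target position for one control cycle."""
        r = np.linalg.norm(master_pos)
        if r < RATE_ZONE * WORKSPACE_RADIUS:
            # inside the comfortable zone: scaled position control
            return POS_GAIN * master_pos
        # near the workspace boundary: excursion beyond the zone commands a velocity
        direction = master_pos / r
        excess = r - RATE_ZONE * WORKSPACE_RADIUS
        return slave_pos + RATE_GAIN * excess * direction * dt

    # one 1 kHz control step with the master pushed near its boundary
    print(slave_command(np.array([0.09, 0.0, 0.0]), np.array([0.18, 0.0, 0.0]), 1e-3))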
Abstract:
The propagation of nuclear data uncertainties in reactor calculations is of interest for design purposes and for library evaluation. Previous versions of the GRS XSUSA library propagated only neutron cross-section uncertainties. We have extended the XSUSA uncertainty assessment capabilities by including the propagation of fission yield and decay data uncertainties, given their relevance in depletion simulations. We apply this extended methodology to the UAM6 PWR Pin-Cell Burnup Benchmark, which involves uncertainty propagation through burnup.
Abstract:
The propagation of nuclear data uncertainties to calculated values is of interest for design purposes and for library evaluation. XSUSA, developed at GRS, propagates cross-section uncertainties to nuclear calculations. In depletion simulations, fission yields and decay data are also involved and represent a possible source of uncertainty that must be taken into account. We have developed tools to generate varied fission yield and decay libraries and to propagate the uncertainties through depletion, in order to complete the XSUSA uncertainty assessment capabilities. A simple test to check the methodology is defined and discussed.
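As a hedged sketch of the sampling approach these abstracts build on (not the actual XSUSA tools): nuclear data, here a single fission yield and a decay constant, are sampled from assumed uncertainty distributions, each sample is propagated through a one-nuclide depletion model, and the spread of the resulting inventory quantifies the propagated uncertainty. All numbers and the single-nuclide Bateman model are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(42)

    # Illustrative data: cumulative fission yield and decay constant of one fission product
    YIELD_MEAN, YIELD_RSD = 0.0619, 0.01      # ~6.19 % yield with 1 % relative std (assumed)
    LAMBDA_MEAN, LAMBDA_RSD = 2.9e-5, 0.005   # decay constant in 1/s with 0.5 % relative std (assumed)
    FISSION_RATE = 1.0e16                     # fissions per second (assumed constant)
    T_END = 5 * 24 * 3600                     # 5 days of irradiation

    def end_of_irradiation_atoms(y, lam):
        """Single-nuclide Bateman solution of dN/dt = y*F - lam*N with N(0) = 0."""
        return y * FISSION_RATE / lam * (1.0 - np.exp(-lam * T_END))

    # Sample the nuclear data and propagate each sample through the depletion model
    samples = 500
    yields = rng.normal(YIELD_MEAN, YIELD_RSD * YIELD_MEAN, samples)
    lambdas = rng.normal(LAMBDA_MEAN, LAMBDA_RSD * LAMBDA_MEAN, samples)
    n_end = end_of_irradiation_atoms(yields, lambdas)

    print(f"mean inventory = {n_end.mean():.3e} atoms, "
          f"relative uncertainty = {100 * n_end.std() / n_end.mean():.2f} %")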