973 results for Visual feedback


Relevance: 60.00%

Abstract:

In retinal surgery, surgeons face difficulties such as indirect visualization of surgical targets, physiological tremor, and lack of tactile feedback, which increase the risk of retinal damage caused by incorrect surgical gestures. In this context, intraocular proximity sensing has the potential to overcome current technical limitations and increase surgical safety. In this paper, we present a system for detecting unintentional collisions between surgical tools and the retina using the visual feedback provided by the ophthalmic stereo microscope. Using stereo images, proximity between surgical tools and the retinal surface can be detected when their relative stereo disparity is small. For this purpose, we developed a system comprising two modules. The first is a module for tracking the surgical tool position in both stereo images. The second is a disparity-tracking module for estimating a stereo disparity map of the retinal surface. Both modules were specially tailored to cope with the challenging visualization conditions in retinal surgery. The potential clinical value of the proposed method is demonstrated by extensive testing using a silicone phantom eye and recorded in vivo rabbit data.
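The disparity cue described above can be sketched in a few lines. In a calibrated stereo rig, disparity is inversely related to depth, so a small disparity gap between the tracked tool tip and the retinal surface behind it implies proximity. Function names and the pixel threshold are illustrative, not from the paper:

```python
# Hypothetical sketch of the paper's proximity cue: when the tool tip's
# disparity (from the tool-tracking module) nearly matches the retina's
# disparity at that pixel (from the disparity-tracking module), the tool
# is close to the surface and an alert should fire.

def proximity_alert(tool_disparity_px: float,
                    retina_disparity_px: float,
                    threshold_px: float = 2.0) -> bool:
    """Return True when the tool is dangerously close to the retina.

    tool_disparity_px   -- disparity of the tracked tool tip
    retina_disparity_px -- disparity of the retina at the same pixel
    threshold_px        -- alarm threshold; the value here is illustrative
    """
    return abs(tool_disparity_px - retina_disparity_px) < threshold_px
```

In practice both disparities would come from the two tracking modules the abstract describes; the threshold would be tuned to the microscope's geometry.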

Relevance: 60.00%

Abstract:

BACKGROUND: We evaluated the feasibility of an augmented robotics-assisted tilt table (RATT) for incremental cardiopulmonary exercise testing (CPET) and exercise training in dependent-ambulatory stroke patients. METHODS: Stroke patients (Functional Ambulation Category ≤ 3) underwent familiarization, an incremental exercise test (IET) and a constant load test (CLT) on separate days. A RATT equipped with force sensors in the thigh cuffs, a work rate estimation algorithm and real-time visual feedback to guide the exercise work rate was used. Feasibility assessment considered technical feasibility, patient tolerability, and cardiopulmonary responsiveness. RESULTS: Eight patients (4 female) aged 58.3 ± 9.2 years (mean ± SD) were recruited and all completed the study. For IETs, peak oxygen uptake (V'O2peak), peak heart rate (HRpeak) and peak work rate (WRpeak) were 11.9 ± 4.0 ml/kg/min (45 % of predicted V'O2max), 117 ± 32 beats/min (72 % of predicted HRmax) and 22.5 ± 13.0 W, respectively. Peak ratings of perceived exertion (RPE) were in the range "hard" to "very hard". All 8 patients reached their limit of functional capacity in terms of either their cardiopulmonary or neuromuscular performance. A ventilatory threshold (VT) was identified in 7 patients and a respiratory compensation point (RCP) in 6 patients: mean V'O2 at VT and RCP was 8.9 and 10.7 ml/kg/min, respectively, which represent 75 % (VT) and 85 % (RCP) of mean V'O2peak. Incremental CPET provided sufficient information to satisfy the responsiveness criteria and identification of key outcomes in all 8 patients. For CLTs, mean steady-state V'O2 was 6.9 ml/kg/min (49 % of V'O2 reserve), mean HR was 90 beats/min (56 % of HRmax), RPEs were > 2, and all patients maintained the active work rate for 10 min: these values meet recommended intensity levels for bouts of training.
CONCLUSIONS: The augmented RATT is deemed feasible for incremental cardiopulmonary exercise testing and exercise training in dependent-ambulatory stroke patients: the approach was found to be technically implementable, acceptable to the patients, and it showed substantial cardiopulmonary responsiveness. This work has clinical implications for patients with severe disability who otherwise are not able to be tested.
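The abstract does not describe the RATT's work-rate estimation algorithm, so the following is only a generic sketch: mechanical power as cuff force times leg speed, plus a simple target-band cue of the kind the real-time visual feedback might implement. All names and numbers are illustrative:

```python
# Hedged sketch of a work-rate feedback loop for a device like the RATT.
# The thigh-cuff force sensors give force; multiplying by leg speed gives
# instantaneous mechanical power, which is compared against a target band
# to drive the visual cue shown to the patient.

def work_rate_watts(cuff_force_n: float, leg_speed_m_s: float) -> float:
    """Instantaneous mechanical power exerted against the thigh cuff."""
    return cuff_force_n * leg_speed_m_s

def feedback(actual_w: float, target_w: float, tolerance_w: float = 2.0) -> str:
    """Cue to keep the patient's work rate inside the target band."""
    if actual_w < target_w - tolerance_w:
        return "push harder"
    if actual_w > target_w + tolerance_w:
        return "ease off"
    return "on target"
```

A real implementation would filter the force signal and average power over each movement cycle rather than using raw instantaneous values.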

Relevance: 60.00%

Abstract:

Due to the lack of exercise testing devices that can be employed in stroke patients with severe disability, the aim of this PhD research was to investigate the clinical feasibility of using a robotics-assisted tilt table (RATT) as a method for cardiopulmonary exercise testing (CPET) and exercise training in stroke patients. For this purpose, the RATT was augmented with force sensors, a visual feedback system and a work rate calculation algorithm. As the RATT had not been used previously for CPET, the first phase of this project focused on a feasibility study in 11 healthy able-bodied subjects. The results demonstrated substantial cardiopulmonary responses, no complications were found, and the method was deemed feasible. The second phase was to analyse the validity and test-retest reliability of the primary CPET parameters obtained from the RATT in 18 healthy able-bodied subjects and to compare the outcomes to those obtained from standard exercise testing devices (a cycle ergometer and a treadmill). The results demonstrated that peak oxygen uptake (V'O2peak) and oxygen uptake at the submaximal exercise thresholds on the RATT were ~20% lower than on the cycle ergometer and ~30% lower than on the treadmill. A very high correlation was found between the RATT and the cycle ergometer V'O2peak and between the RATT and the treadmill V'O2peak. Test-retest reliability of the CPET parameters obtained from the RATT was similarly high to that of the standard exercise testing devices. These findings suggested that the RATT is a valid and reliable device for CPET and that it has potential to be used in severely impaired patients. Thus, the third phase was to investigate using the RATT for CPET and exercise training in 8 severely disabled stroke patients. The method was technically implementable, well tolerated by the patients, and substantial cardiopulmonary responses were observed. Additionally, all patients could exercise at the recommended training intensity for 10 min bouts.
Finally, an investigation of test-retest reliability and four-week changes in cardiopulmonary fitness was carried out in 17 stroke patients with various degrees of disability. Good to excellent test-retest reliability and repeatability were found for the main CPET variables. There was no significant difference in most CPET parameters over four weeks. In conclusion, based on the demonstrated validity, reliability and repeatability, the RATT was found to be a feasible and appropriate alternative exercise testing and training device for patients who are unable to use standard devices.

Relevance: 60.00%

Abstract:

Purpose Selective retina laser treatment (SRT), a sub-threshold therapy method, avoids widespread damage to all retinal layers by targeting only a few. While this facilitates faster healing, the lack of visual feedback during treatment represents a considerable shortcoming, as induced lesions remain invisible with conventional imaging, which makes clinical use challenging. To overcome this, we present a new strategy to provide location-specific, contact-free automatic feedback on SRT laser applications. Methods We leverage time-resolved optical coherence tomography (OCT) to provide informative feedback to clinicians on the outcomes of location-specific treatment. By coupling an OCT system to the SRT treatment laser, we visualize structural changes in the retinal layers as they occur via time-resolved depth images. We then propose a novel strategy for automatic assessment of such time-resolved OCT images. To achieve this, we introduce novel image features that, when combined with standard machine learning classifiers, yield excellent treatment-outcome classification capabilities. Results Our approach was evaluated on both ex vivo porcine eyes and human patients in a clinical setting, yielding accuracies above 95 % for predicting patient treatment outcomes. In addition, we show that accurate outcomes for human patients can be estimated even when our method is trained using only ex vivo porcine data. Conclusion The proposed technique presents a much-needed strategy toward noninvasive, safe, reliable, and repeatable SRT applications. These results are encouraging for the broader use of new treatment options for neovascularization-based retinal pathologies.
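The paper's image features and classifiers are not specified in the abstract. As a hypothetical stand-in, the sketch below reduces a time-resolved OCT M-scan (time × depth intensity samples) to a single feature, temporal variance, on the reasoning that treatment-induced structural change shows up as intensity fluctuation over time, and thresholds it in place of a learned classifier:

```python
# Illustrative outcome check on a time-resolved OCT M-scan, given as a
# list of time frames, each a list of depth-intensity samples. The real
# system uses richer features plus trained classifiers; this is only a
# minimal sketch of the "feature -> decision" pipeline.

def temporal_variance(mscan: list[list[float]]) -> float:
    """Mean per-depth variance of intensity across time frames."""
    n_t, n_z = len(mscan), len(mscan[0])
    var_sum = 0.0
    for z in range(n_z):
        col = [mscan[t][z] for t in range(n_t)]
        mu = sum(col) / n_t
        var_sum += sum((x - mu) ** 2 for x in col) / n_t
    return var_sum / n_z

def lesion_formed(mscan: list[list[float]], threshold: float = 0.05) -> bool:
    """Hypothetical decision rule: high temporal change implies a lesion."""
    return temporal_variance(mscan) > threshold
```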

Relevance: 60.00%

Abstract:

Sound pressure measurement is an extremely important process in acoustic engineering, with applications in numerous subfields such as building acoustics and noise control; in the latter especially, it must be possible to measure sound pressure accurately under very diverse (and sometimes adverse) conditions. At the same time, the growing ubiquity of smart mobile devices such as smartphones and tablets, which combine processing power, connectivity, interactivity and an intuitive interface in a small package, makes it possible to use these devices as low-cost, quality measurement systems. This project aims to use the input/output, processing, wireless connectivity and geolocation capabilities of iOS-based mobile devices, in particular the iPhone, to implement an acoustic measurement system that matches or exceeds the performance of sound level meters on the market. With the addition of a suitable measurement microphone, SonoPhone allows measurements that comply with the technical standards in force to be carried out, as well as the programming, configuration, storage and transmission of the results; measurements are also geolocated using the device's integrated GPS and can be sent to remote cloud storage. The application has a modular structure: a data acquisition module reads the signal from the microphone, a back-end performs the necessary processing, and further modules handle device calibration, measurement programming and configuration, and storage and network transmission. A graphical user interface (GUI) displays the measurements and gives the user real-time control over the desired configuration.
In addition to implementing the application, a laboratory test was carried out to determine whether the iPhone hardware allows the whole system to comply with international regulations on sound level meters.
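The core computation in any sound level meter like the one described is converting calibrated pressure samples to sound pressure level in dB re 20 µPa. A minimal sketch, leaving out the calibration path and frequency weighting that a standards-compliant meter would also need:

```python
import math

# Sound pressure level of a block of calibrated microphone samples.
# Input samples are in pascals; output is dB SPL re 20 micropascals.

P_REF = 20e-6  # reference pressure, 20 uPa

def spl_db(pressure_pa: list[float]) -> float:
    """RMS sound pressure level of a block of samples, in dB SPL."""
    rms = math.sqrt(sum(p * p for p in pressure_pa) / len(pressure_pa))
    return 20.0 * math.log10(rms / P_REF)
```

As a sanity check, a 1 Pa RMS tone (the level of a standard 94 dB acoustic calibrator) comes out at 94.0 dB SPL.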

Relevance: 60.00%

Abstract:

This thesis focuses on developing technologies for human-robot interaction in nuclear fusion environments. The main challenge in the nuclear fusion sector lies in the extreme environmental conditions inside the reactor, which impose very restrictive requirements on equipment that must withstand high levels of radiation, magnetism, ultra-vacuum and temperature. Since it is not feasible for humans to carry out tasks directly, remote handling devices must be used for operation and maintenance processes. ITER facilities require a controlled environment of extreme safety and security, built on validated standards, and the definition and use of protocols is essential to govern their operation. For telemanipulation with a high degree of scaling, protocols must be defined for open systems that allow interaction among equipment and devices of diverse kinds. In this context, a Teleoperation Protocol is defined that enables interconnection between master and slave devices of different typologies, allowing them to communicate bilaterally and to use different control algorithms according to the task to be performed. The protocol and its interconnectivity have been tested on the Teleoperation Open Platform (T.O.P.), developed and integrated at the ETSII UPM as a tool for testing, validating and conducting telerobotics experiments. The protocol has been submitted through AENOR to the ISO Telerobotics group as a solution to the existing problem and is currently under review.
With this protocol design, master and slave have been linked; however, the radiation levels in ITER are so high that the controller electronics cannot enter the tokamak. It is therefore proposed that a minimal, suitably protected electronic board multiplex the control signals running through the umbilical cable from the controller to the robot base. This theoretical exercise demonstrates the utility and feasibility of such a solution for reducing the volume and weight of the umbilical cabling by approximately 90 %, although specific RadHard-certified electronics must be developed to withstand ITER's enormous radiation levels. Since generic manipulators cannot carry regular force-feedback sensors under ITER conditions, an algorithm was developed, with the help of the T.O.P., that uses a force/torque sensor and an IMU mounted on the robot wrist, both conveniently protected against radiation, to calculate the forces and inertias produced by the load. This makes it possible to transmit scaled forces to the operator, who feels the load being manipulated rather than other undesirable forces acting on the remote slave, as occurs with other force-estimation techniques. Because the sensor shielding must be neither large nor heavy, this technology should be reserved for maintenance tasks during ITER's programmed shutdowns, when radiation levels are at their lowest. Furthermore, so that the operator feels the load force as faithfully as possible, electronics were developed that perform force control of the master's motors via current control, based on a characterization of those motors. To increase operator perception, experiments were also carried out showing that applying multimodal stimuli (visual, auditory and haptic) increases immersion and task performance, since these stimuli directly influence the operator's response capability.
Finally, regarding the operator's visual feedback, ITER works with cameras at strategic locations, whereas a human manipulating objects uses binocular vision, constantly changing the viewpoint to suit the visual needs of each moment of the task. A three-dimensional reconstruction of the task space was therefore built from an RGB-D camera-sensor, providing a mobile virtual binocular viewpoint from a camera at a fixed point; this viewpoint can be projected on a 3D display device so the operator can vary the stereoscopic view according to their preferences. The successful integration of these technologies for human-robot interaction in the T.O.P. has been validated through tests and experiments, verifying their usefulness for the practical application of telemanipulation with a high degree of scaling in nuclear fusion environments.
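The load-force idea above can be sketched in one axis: the wrist force/torque reading includes the inertial force of the moving load, so subtracting mass times the IMU-measured acceleration isolates the force due to the load itself, which is then scaled before being reflected to the master. The single-axis simplification, the names and the 0.1 scale factor are assumptions for illustration, not the thesis's implementation:

```python
# One-axis sketch of force reflection with inertia compensation.
# f_measured_n comes from the wrist force/torque sensor; imu_accel_m_s2
# is the linear acceleration reported by the wrist IMU along that axis.

def load_force_n(f_measured_n: float, load_mass_kg: float,
                 imu_accel_m_s2: float) -> float:
    """Force attributable to the load, with the inertial term m*a removed."""
    return f_measured_n - load_mass_kg * imu_accel_m_s2

def force_to_master(f_load_n: float, scale: float = 0.1) -> float:
    """Scaled force command reflected to the operator's master arm."""
    return scale * f_load_n
```

A full implementation would do this per axis with the F/T sensor's 6-DoF reading, and would also remove the gravity component using the IMU's orientation estimate.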

Relevance: 60.00%

Abstract:

Small groups of athletes (maximum size 8) were taught to voluntarily control their finger temperature, in a test of the feasibility of thermal biofeedback as a tool for coaches. The objective was to decrease precompetitive anxiety among the 140 young competitive athletes (track and field, N=61; swimming, N=79), 66 females and 74 males, mean age 14.8 years, age range 8.9-20.5 years, from local high schools and swimming clubs. The biofeedback (visual and auditory) was provided by small, battery-powered devices connected to thermistors attached to the middle finger of the dominant hand. An easily readable digital LCD display, in 0.01 °C increments, provided visual feedback, while a musical tone, which descended in pitch as finger temperature increased, provided the audio component via small headphones. Eight twenty-minute sessions were scheduled, with 48 hours between sessions. The measures employed in this pretest-posttest study were Levenson's locus of control scale (IPC) and the Competitive Sport Anxiety Inventory (CSAI-2). The results indicated that, while significant control of finger temperature was achieved, F(1, 160)=5.30, p
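The audio feedback described above maps finger temperature to a tone whose pitch falls as the hand warms (warming indicates relaxation). The devices' actual mapping is not given, so the frequency range and anchor temperatures below are invented for illustration:

```python
# Hypothetical temperature-to-pitch map for the audio biofeedback:
# a linear ramp from a high tone at a "cold" anchor to a low tone at a
# "warm" anchor, clamped outside that range. All constants are assumed.

def feedback_pitch_hz(temp_c: float,
                      cold_c: float = 24.0, warm_c: float = 36.0,
                      high_hz: float = 880.0, low_hz: float = 220.0) -> float:
    """Tone frequency for a given finger temperature (descends as it warms)."""
    t = max(cold_c, min(warm_c, temp_c))          # clamp to anchor range
    frac = (t - cold_c) / (warm_c - cold_c)       # 0 = cold, 1 = warm
    return high_hz - frac * (high_hz - low_hz)
```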

Relevance: 60.00%

Abstract:

In reaction time (RT) tasks, presentation of a startling acoustic stimulus (SAS) together with a visual imperative stimulus can dramatically reduce RT while leaving response execution unchanged. It has been suggested that a prepared motor response program is triggered early by the SAS but is not otherwise affected. Movements aimed at intercepting moving targets are usually considered to be similarly governed by a prepared program. This program is triggered when visual stimulus information about the time to arrival of the moving target reaches a specific criterion. We investigated whether a SAS could also trigger such a movement. Human experimental participants were trained to hit moving targets with movements of a specific duration. This permitted an estimate of when movement would begin (expected onset time). Startling and sub-startle-threshold acoustic probe stimuli were delivered unexpectedly among control trials: 65, 85, 115 and 135 ms prior to expected onset (10:1 ratio of control to probe trials). Results showed that startling probe stimuli at 85 and 115 ms produced early response onsets, whereas those at 65 and 135 ms did not. Sub-threshold stimuli at 115 and 135 ms also produced early onsets. Startle probes led to increased response vigor, but sub-threshold probes had no detectable effects. These data can be explained by a simple model in which preparatory, response-related activation builds up in the circuits responsible for generating motor commands in anticipation of the GO command. If early triggering by the acoustic probes is the mechanism underlying the findings, then the data support the hypothesis that rapid interceptions are governed by a motor program. © 2006 Published by Elsevier Ltd on behalf of IBRO.
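The accumulator account sketched in the abstract can be written down as a toy model: response-related activation ramps linearly toward a trigger threshold ahead of the GO moment, a startling probe adds a burst of activation, and a triggered response still takes a fixed efferent delay to surface, so a probe very close to the expected onset yields no measurably earlier movement. All constants (ramp length, boost size, motor delay) are invented for illustration and merely reproduce the qualitative pattern of the probe timings reported above:

```python
# Toy rise-to-threshold model of early triggering by a startle probe.
# Activation ramps from 0 to threshold (1.0) over ramp_ms before the
# expected onset; the probe adds probe_boost; an onset counts as "early"
# only if triggering happens more than motor_delay_ms before expected
# onset, otherwise it is indistinguishable from a normal-time response.

def early_onset(ms_before_expected_onset: float,
                probe_boost: float = 0.6,
                ramp_ms: float = 200.0,
                motor_delay_ms: float = 70.0) -> bool:
    """True if a probe at this lead time produces a measurably early onset."""
    baseline = max(0.0, 1.0 - ms_before_expected_onset / ramp_ms)
    triggered = baseline + probe_boost >= 1.0
    return triggered and ms_before_expected_onset > motor_delay_ms
```

With these assumed constants, probes at 85 and 115 ms trigger early onsets while those at 65 ms (too close to surface early) and 135 ms (baseline activation too low) do not, matching the reported pattern qualitatively.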

Relevance: 60.00%

Abstract:

Emmetropization is dependent on visual feedback and presumably some measure of the optical and image quality of the eye. We investigated the effect of simple alterations to image contrast on eye growth and refractive development. A 1.6 cyc/deg square-wave-grating target was located at the end of a 3.3 cm cone, imaged by a +30 D lens and applied monocularly to the eyes of 8-day-old chicks. Eleven different contrast targets were tested: 95, 67, 47.5, 33.5, 24, 17, 12, 8.5, 4.2, 2.1, and 0%. Refractive error (RE), vitreous chamber depth (VC) and axial length (AL) varied with the contrast of the image (RE diff. F(10,86) = 12.420, p < 0.0005; VC diff. F(10,86) = 8.756, p < 0.0005; AL diff. F(10,86) = 9.240, p < 0.0005). Target contrasts 4.2% and lower produced relative myopia (4.2%: RE diff = -7.48 +/- 2.26 D, p = 0.987; 2.1%: RE diff = -7.22 +/- 2.77 D, p = 0.951) of similar amount to that observed in response to a featureless 0% contrast target (RE diff = -9.11 +/- 4.68 D). For target contrast levels 47.5% and greater, isometropia was maintained (95%: RE diff = 1.83 +/- 2.78 D; 67%: RE diff = 0.14 +/- 1.84 D; 47.5%: RE diff = 0.25 +/- 1.82 D). Contrasts in between produced an intermediate amount of myopia (33.5%: RE diff = -2.81 +/- 1.80 D; 24%: RE diff = -3.45 +/- 1.64 D; 17%: RE diff = -3.19 +/- 1.54 D; 12%: RE diff = -4.08 +/- 3.56 D; 8.5%: RE diff = -4.09 +/- 3.60 D). We conclude that image contrast provides important visual information for the eye growth control system or that contrast must reach a threshold value for some other emmetropization signal to function. (c) 2005 Elsevier Ltd. All rights reserved.

Relevance: 60.00%

Abstract:

By 24 months of age most children show mirror self-recognition. When surreptitiously marked on their forehead and then presented with a mirror, they explore their own head for the unexpected mark. Here we demonstrate that self-recognition in mirrors does not generalize to other visual feedback. We tested 80 children on mirror and live video versions of the task. Whereas 90% of 24-month-olds passed the mirror version, only 35% passed the video version. Seventy percent of 30-month-olds showed video self-recognition, and only by 36 months of age did the pass rate on the video version reach 90%. It remains to be

Relevance: 60.00%

Abstract:

We sought to determine the extent to which colour (and luminance) signals contribute towards the visuomotor localization of targets. To do so we exploited the movement-related illusory displacement a small stationary window undergoes when it has a continuously moving carrier grating behind it. We used drifting (1.0-4.2 Hz) red/green-modulated isoluminant gratings or yellow/black luminance-modulated gratings as carriers, each curtailed in space by a stationary, two-dimensional window. After each trial, the perceived location of the window was recorded with reference to an on-screen ruler (perceptual task) or the on-screen touch of a ballistic pointing movement made without visual feedback (visuomotor task). Our results showed that the perceptual displacement measures were similar for each stimulus type and weakly dependent on stimulus drift rate. However, while the visuomotor displacement measures were similar for each stimulus type at low drift rates (<4 Hz), they were significantly larger for luminance than colour stimuli at high drift rates (>4 Hz). We show that the latter cannot be attributed to differences in perceived speed between stimulus types. We assume, therefore, that our visuomotor localization judgements were more susceptible to the (carrier) motion of luminance patterns than colour patterns. We suggest that, far from being detrimental, this susceptibility may indicate the operation of mechanisms designed to counter the temporal asynchrony between perceptual experiences and the physical changes in the environment that give rise to them. We propose that perceptual localization is equally supported by both colour and luminance signals but that visuomotor localization is predominantly supported by luminance signals. We discuss the neural pathways that may be involved with visuomotor localization. © 2007 Springer-Verlag.

Relevance: 60.00%

Abstract:

The development of increasingly powerful computers, which has enabled the use of windowing software, has also opened the way for the computer study, via simulation, of very complex physical systems. In this study, the main issues related to the implementation of interactive simulations of complex systems are identified and discussed. Most existing simulators are closed in the sense that there is no access to the source code and, even if it were available, adaptation to interaction with other systems would require extensive code re-writing. This work aims to increase the flexibility of such software by developing a set of object-oriented simulation classes, which can be extended, by subclassing, at any level, i.e., at the problem domain, presentation or interaction levels. A strategy, which involves the use of an object-oriented framework, concurrent execution of several simulation modules, use of a networked windowing system and the re-use of existing software written in procedural languages, is proposed. A prototype tool which combines these techniques has been implemented and is presented. It allows the on-line definition of the configuration of the physical system and generates the appropriate graphical user interface. Simulation routines have been developed for the chemical recovery cycle of a paper pulp mill. The application, by creation of new classes, of the prototype to the interactive simulation of this physical system is described. Besides providing visual feedback, the resulting graphical user interface greatly simplifies the interaction with this set of simulation modules. This study shows that considerable benefits can be obtained by application of computer science concepts to the engineering domain, by helping domain experts to tailor interactive tools to suit their needs.
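The extensible-by-subclassing design described above can be sketched briefly: a base simulation-module class fixes the time-stepping contract, and problem-domain behaviour is added by subclassing it. The class and method names below are invented for illustration; the prototype's actual API is not given in the abstract:

```python
# Sketch of an object-oriented simulation framework extensible at the
# problem-domain level. The base class owns time and the stepping
# contract; subclasses supply the physics.

class SimulationModule:
    """Base class: holds simulated time and defines the stepping contract."""
    def __init__(self, name: str):
        self.name = name
        self.time = 0.0

    def step(self, dt: float) -> None:
        """Advance the module by one time step."""
        self.advance(dt)
        self.time += dt

    def advance(self, dt: float) -> None:
        raise NotImplementedError  # supplied by domain subclasses

class Tank(SimulationModule):
    """Domain subclass, e.g. a liquor tank in the chemical recovery cycle."""
    def __init__(self, name: str, level: float, inflow: float, outflow: float):
        super().__init__(name)
        self.level, self.inflow, self.outflow = level, inflow, outflow

    def advance(self, dt: float) -> None:
        # Simple mass balance: level changes with net flow.
        self.level += (self.inflow - self.outflow) * dt
```

In the thesis's architecture, further subclassing at the presentation and interaction levels would attach displays and user controls to such modules, and several modules would run concurrently.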

Relevance: 60.00%

Abstract:

This thesis describes work undertaken in order to fulfil a need experienced in the Department of Educational Enquiry at the University of Aston in Birmingham for speech analysis facilities suitable for use in teaching and research work within the Department. The hardware and software developed during the research project provides displays of speech fundamental frequency and intensity in real time. The system is suitable for the provision of visual feedback of these parameters of a subject's speech in a learning situation, and overcomes the inadequacies of equipment currently used for this task in that it provides a clear indication of fundamental frequency contours as the subject is speaking. The thesis considers the use of such equipment in several related fields, and the approaches that have been reported to one of the major problems of speech analysis, namely pitch-period estimation. A number of different systems are described, and their suitability for the present purposes is discussed. Finally, a novel method of pitch-period estimation is developed, and a speech analysis system incorporating this method is described. Comparison is made between the results produced by this system and those produced by a conventional speech spectrograph.
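The thesis develops its own novel pitch-period estimator, which is not reproduced in the abstract; the sketch below shows the conventional baseline it would be compared against, short-time autocorrelation, where the pitch period is the lag of the autocorrelation peak within a plausible range:

```python
import math

# Conventional autocorrelation pitch-period estimation: for voiced
# speech, the autocorrelation of a short frame peaks at a lag equal to
# the fundamental period (in samples).

def pitch_period_samples(frame: list[float],
                         min_lag: int, max_lag: int) -> int:
    """Lag of the autocorrelation peak within [min_lag, max_lag]."""
    best_lag, best_r = min_lag, float("-inf")
    for lag in range(min_lag, max_lag + 1):
        r = sum(frame[i] * frame[i + lag] for i in range(len(frame) - lag))
        if r > best_r:
            best_lag, best_r = lag, r
    return best_lag

# Example: a 100 Hz tone sampled at 8 kHz has a period of 80 samples.
fs = 8000
frame = [math.sin(2 * math.pi * 100 * n / fs) for n in range(400)]
```

Real speech needs more care than a pure tone: pre-processing (e.g. center clipping) and peak-picking heuristics are commonly added to suppress formant-induced false peaks.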

Relevância:

60.00%

Publicador:

Resumo:

Integrating information from multiple sources is a crucial function of the brain. Examples of such integration include multiple stimuli of different modalities, such as visual and auditory; multiple stimuli of the same modality, such as two concurrent auditory stimuli; and the integration of stimuli from the sensory organs (i.e., the ears) with stimuli delivered from brain-machine interfaces.

The overall aim of this body of work is to empirically examine stimulus integration in these three domains to inform our broader understanding of how and when the brain combines information from multiple sources.

First, I examine visually-guided auditory localization, a problem with implications for the general question of how, in learning, the brain determines which lessons to learn (and which not to learn). For example, sound localization is a behavior that is partially learned with the aid of vision. This process requires correctly matching a visual location to that of a sound. This is an intrinsically circular problem when sound location is itself uncertain and the visual scene is rife with possible visual matches. Here, we develop a simple paradigm using visual guidance of sound localization to gain insight into how the brain confronts this type of circularity. We tested two competing hypotheses. (1) The brain guides sound location learning based on the synchrony or simultaneity of auditory-visual stimuli, potentially involving a Hebbian associative mechanism. (2) The brain uses a ‘guess and check’ heuristic in which visual feedback obtained after an eye movement to a sound alters future performance, perhaps by recruiting the brain’s reward-related circuitry. We assessed the effects of exposure to visual stimuli spatially mismatched from sounds on performance of an interleaved auditory-only saccade task. We found that when humans and monkeys were provided the visual stimulus asynchronously with the sound, but as feedback to an auditory-guided saccade, they shifted their subsequent auditory-only performance toward the direction of the visual cue by 1.3-1.7 degrees, or 22-28% of the original 6-degree visual-auditory mismatch. In contrast, when the visual stimulus was presented synchronously with the sound but extinguished too quickly to provide this feedback, there was little change in subsequent auditory-only performance. Our results suggest that the outcome of our own actions is vital to localizing sounds correctly. Contrary to previous expectations, visual calibration of auditory space does not appear to require visual-auditory associations based on synchrony or simultaneity.
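The reported adaptation fraction follows directly from the measured shift and the imposed mismatch; a two-line check of that arithmetic:

```python
# Sanity check of the reported shift fraction: a 1.3-1.7 degree shift of a
# 6 degree visual-auditory mismatch corresponds to roughly 22-28% adaptation.
mismatch_deg = 6.0
for shift_deg in (1.3, 1.7):
    print(f"{shift_deg} / {mismatch_deg} = {shift_deg / mismatch_deg:.0%}")
```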

My next line of research examines how electrical stimulation of the inferior colliculus influences the perception of sounds in a nonhuman primate. The central nucleus of the inferior colliculus is the major ascending relay of auditory information: almost all auditory signals pass through it before reaching the forebrain. This makes the inferior colliculus an ideal structure for understanding the format of the inputs to the forebrain and, by extension, the processing of auditory scenes that occurs in the brainstem, and therefore an attractive target for studying stimulus integration in the ascending auditory pathway.

Moreover, understanding the relationship between the auditory selectivity of neurons and their contribution to perception is critical to the design of effective auditory brain prosthetics. These prosthetics seek to mimic natural activity patterns to achieve desired perceptual outcomes. We measured the contribution of inferior colliculus (IC) sites to perception using combined recording and electrical stimulation. Monkeys performed a frequency-based discrimination task, reporting whether a probe sound was higher or lower in frequency than a reference sound. Stimulation pulses were paired with the probe sound on 50% of trials (0.5-80 µA, 100-300 Hz, n=172 IC locations in 3 rhesus monkeys). Electrical stimulation tended to bias the animals’ judgments in a fashion that was coarsely but significantly correlated with the best frequency of the stimulation site in comparison to the reference frequency employed in the task. Although there was considerable variability in the effects of stimulation (including impairments in performance and shifts in performance away from the direction predicted based on the site’s response properties), the results indicate that stimulation of the IC can evoke percepts correlated with the frequency tuning properties of the IC. Consistent with the implications of recent human studies, the main avenue for improvement for the auditory midbrain implant suggested by our findings is to increase the number and spatial extent of electrodes, to increase the size of the region that can be electrically activated and provide a greater range of evoked percepts.
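The stimulation-induced bias in such a discrimination task is conventionally quantified as a shift in the point of subjective equality (PSE) of the psychometric curve. The sketch below uses entirely hypothetical response proportions; the study's actual analysis may differ.

```python
import numpy as np

def pse(probe_freqs, p_higher):
    # Point of subjective equality: the probe frequency at which the animal
    # reports "higher" on 50% of trials, by linear interpolation across the
    # psychometric curve (p_higher must be monotonically increasing here).
    return np.interp(0.5, p_higher, probe_freqs)

probes    = np.array([0.5, 0.7, 0.9, 1.1, 1.3, 1.5])      # probe freq, kHz
p_no_stim = np.array([0.05, 0.2, 0.4, 0.6, 0.8, 0.95])    # hypothetical
p_stim    = np.array([0.15, 0.35, 0.55, 0.75, 0.9, 0.98]) # biased "higher"

# Positive bias: with stimulation the probe is judged higher in frequency,
# as if stimulation added energy at a frequency above the reference.
bias = pse(probes, p_no_stim) - pse(probes, p_stim)
```

Correlating such per-site bias values with each site's best frequency (relative to the reference) is one way to arrive at the kind of result reported above.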

My next line of research employs a frequency-tagging approach to examine the extent to which multiple sound sources are combined (or segregated) in the nonhuman primate inferior colliculus. In the single-sound case, most inferior colliculus neurons respond and entrain to sounds in a very broad region of space, and many are entirely spatially insensitive, so it is unknown how the neurons will respond to a situation with more than one sound. I use multiple amplitude-modulated (AM) stimuli with different modulation frequencies, which the inferior colliculus represents using a spike timing code. This allows me to measure spike timing in the inferior colliculus to determine which sound source is responsible for neural activity in an auditory scene containing multiple sounds. Using this approach, I find that the same neurons that are tuned to broad regions of space in the single-sound condition become dramatically more selective in the dual-sound condition, preferentially entraining spikes to stimuli from a smaller region of space. I will examine the possibility that there may be a conceptual linkage between this finding and the finding of receptive field shifts in the visual system.
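The frequency-tagging logic can be illustrated with the standard vector-strength measure of phase locking: spikes entrained to one AM "tag" frequency yield a vector strength near 1 at that frequency and near 0 at the other tag. The spike times and tag frequencies below are invented for illustration.

```python
import numpy as np

def vector_strength(spike_times, mod_freq):
    # Phase-locking of spikes to a modulation frequency: each spike becomes a
    # unit vector at its phase within the AM cycle; vector strength is the
    # length of the mean vector (1 = perfect locking, ~0 = no locking).
    phases = 2 * np.pi * mod_freq * np.asarray(spike_times)
    return np.hypot(np.cos(phases).sum(), np.sin(phases).sum()) / len(phases)

# One spike per cycle of a 40 Hz tag over 1 s: strong entrainment at 40 Hz,
# essentially none at a competing 31 Hz tag.
spikes = np.arange(0, 1, 1 / 40) + 0.001
vs_40 = vector_strength(spikes, 40.0)
vs_31 = vector_strength(spikes, 31.0)
```

Comparing a neuron's vector strength at each source's tag frequency indicates which source is driving its spikes, which is what makes the tagging approach workable in a multi-sound scene.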

In chapter 5, I will comment on these findings more generally, compare them to existing theoretical models, and discuss what these results tell us about processing in the central nervous system in a multi-stimulus situation. My results suggest that the brain is flexible in its processing and can adapt its integration schema to fit the available cues and the demands of the task.

Relevância:

60.00%

Publicador:

Resumo:

The purpose of this paper is to examine the promising contributions of the Concept Maps for Learning (CMfL) website to assessment-for-learning practices. The CMfL website generates concept maps from the relatedness ratings of concept pairs through the Pathfinder Scaling Algorithm. The website also embodies established principles of effective assessment for learning: it can automatically assess students' higher-order knowledge, simultaneously identify strengths and weaknesses, immediately provide useful feedback, and it is user-friendly. According to the default assessment plan, students first create concept maps on a particular subject and are then given individualized visual feedback, followed by associated instructional material (e.g., videos, website links, examples, problems), based on a comparison of their concept map with a subject-matter expert's map. After students study the feedback and instructional material, teachers can monitor their progress by having them create revised concept maps. We therefore claim that the CMfL website may reduce the workload of teachers while providing both immediate and delayed feedback on students' weaknesses in different forms, such as graphics and multimedia. In the following study, we will examine whether these promising contributions to assessment for learning hold across a variety of subjects.
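For illustration, one common variant of Pathfinder network scaling, PFNET(r = infinity, q = n - 1), can be computed with a Floyd-Warshall-style relaxation that prunes any link for which a shorter indirect "minimax" path exists. Whether the CMfL site uses exactly this parameterization is an assumption; the toy distances below are invented.

```python
import numpy as np

def pfnet(dist):
    # PFNET(r=inf, q=n-1): keep a direct link only if no indirect path has a
    # smaller maximum-edge ("minimax") length than the direct distance.
    n = dist.shape[0]
    d = dist.astype(float).copy()
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d[i, j] = min(d[i, j], max(d[i, k], d[k, j]))
    return np.isclose(dist, d) & ~np.eye(n, dtype=bool)

# Toy relatedness distances (lower = more related) for concepts A, B, C:
# A-B and B-C are close, A-C is not, so the A-C link is pruned and the
# resulting concept map keeps only the chain A-B-C.
w = np.array([[0., 1., 3.],
              [1., 0., 1.],
              [3., 1., 0.]])
links = pfnet(w)
```

Comparing the link set of a student's pruned network against an expert's network is one straightforward way to localize the strengths and weaknesses the feedback targets.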