992 results for Object manipulation actions


Relevance: 30.00%

Abstract:

A form of three-dimensional X-ray imaging, called Object 3-D, is introduced, where the relevant subject material is represented as discrete ‘objects’. The surface of each such object is derived accurately from the projections of its outline, and of its other discontinuities, in about ten conventional X-ray views, distributed in solid angle. This technique is suitable for many applications, and permits dramatic savings in radiation exposure and in data acquisition and manipulation. It is well matched to user-friendly interactive displays.
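A rough way to picture the outline-based reconstruction described above is silhouette carving: voxels whose projections fall outside any view's outline are discarded, and the surviving boundary approximates the object surface. The sketch below is a hypothetical, simplified illustration under an assumed orthographic camera model, not the Object 3-D algorithm itself; all function names and parameters are placeholders.

```python
# Hypothetical sketch of silhouette-based surface recovery, loosely analogous to the
# "Object 3-D" idea of deriving an object's surface from its outline in a handful of
# views.  The orthographic camera model and all names are assumptions, not the paper's method.
import numpy as np

def carve_from_silhouettes(silhouettes, rotations, grid_size=64, extent=1.0):
    """Keep the voxels whose orthographic projection lands inside every silhouette.

    silhouettes : list of 2-D boolean arrays (H x W), one per view
    rotations   : list of 3x3 rotation matrices mapping world -> camera coordinates
    """
    # Regular voxel grid centred on the origin.
    axis = np.linspace(-extent, extent, grid_size)
    xs, ys, zs = np.meshgrid(axis, axis, axis, indexing="ij")
    voxels = np.stack([xs, ys, zs], axis=-1).reshape(-1, 3)   # (N, 3)
    occupied = np.ones(len(voxels), dtype=bool)

    for sil, R in zip(silhouettes, rotations):
        cam = voxels @ R.T                                    # world -> camera coordinates
        h, w = sil.shape
        # Orthographic projection of the camera x,y coordinates onto pixel indices.
        u = ((cam[:, 0] + extent) / (2 * extent) * (w - 1)).round().astype(int)
        v = ((cam[:, 1] + extent) / (2 * extent) * (h - 1)).round().astype(int)
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        keep = np.zeros(len(voxels), dtype=bool)
        keep[inside] = sil[v[inside], u[inside]]
        occupied &= keep                                      # carve away voxels outside any outline

    return voxels[occupied]
```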

Relevance: 30.00%

Abstract:

Research in the last four decades has brought a considerable advance in our understanding of how the brain synthesizes information arising from different sensory modalities. Indeed, many cortical and subcortical areas, beyond those traditionally considered to be ‘associative,’ have been shown to be involved in multisensory interaction and integration (Ghazanfar and Schroeder 2006). Visuo-tactile interaction is of particular interest, because of the prominent role played by vision in guiding our actions and anticipating their tactile consequences in everyday life. In this chapter, we focus on the functional role that visuo-tactile processing may play in driving two types of body-object interactions: avoidance and approach. We will first review some basic features of visuo-tactile interactions, as revealed by electrophysiological studies in monkeys. These will prove to be relevant for interpreting the subsequent evidence arising from human studies. A crucial point that will be stressed is that these visuo-tactile mechanisms have not only sensory, but also motor-related activity that qualifies them as multisensory-motor interfaces. Evidence will then be presented for the existence of functionally homologous processing in the human brain, both from neuropsychological research in brain-damaged patients and in healthy participants. The final part of the chapter will focus on some recent studies in humans showing that the human motor system is provided with a multisensory interface that allows for continuous monitoring of the space near the body (i.e., peripersonal space). We further demonstrate that multisensory processing can be modulated on-line as a consequence of interacting with objects. This indicates that, far from being passive, the monitoring of peripersonal space is an active process subserving actions between our body and objects located in the space around us.

Relevance: 30.00%

Abstract:

The existence of hand-centred visual processing has long been established in the macaque premotor cortex. These hand-centred mechanisms have been thought to play some general role in the sensory guidance of movements towards objects or, more recently, in the sensory guidance of object-avoidance movements. We suggest that these hand-centred mechanisms play a specific and prominent role in the rapid selection and control of manual actions following sudden changes in the properties of the objects relevant for hand-object interactions. We discuss recent anatomical and physiological evidence from human and non-human primates which indicates the existence of rapid processing of visual information for hand-object interactions. This new evidence demonstrates how several stages of the hierarchical visual processing system may be bypassed, feeding the motor system with hand-related visual inputs within just 70 ms of a sudden event. This time window is early enough, and this processing rapid enough, to allow the generation and control of rapid hand-centred avoidance and acquisitive actions, for aversive and desired objects respectively.

Relevance: 30.00%

Abstract:

This study represents a preliminary step towards data-driven computation of contact dynamics during the manipulation of deformable objects at two points of contact. A modeling approach is proposed that characterizes the individual interaction at each contact point, as well as the mutual effects of the two interactions on each other, via a set of parameters. Both global and local coordinate systems are tested for encoding the contact mechanics. Artificial neural networks are trained on simulated data to capture the object behavior. A comparison of test data with the output of the trained system reveals a mean squared error of between 1% and 3% for simple interactions.
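As a minimal sketch of the kind of pipeline described (a neural network fitted to simulated two-contact interaction data and scored by a relative mean squared error), the snippet below uses scikit-learn's MLPRegressor on a toy, made-up "simulator"; the feature layout, coupling model, and error metric are assumptions, not the study's actual parameterization.

```python
# Toy stand-in for the learning setup: a small network fitted to simulated two-contact data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Inputs: displacements at both contacts plus their relative offset (all invented).
X = rng.uniform(-1.0, 1.0, size=(5000, 6))           # [dx1, dy1, dx2, dy2, off_x, off_y]
k, coupling = 3.0, 0.5
F1 = -k * X[:, 0:2] - coupling * X[:, 2:4]            # force at contact 1, coupled to contact 2
F2 = -k * X[:, 2:4] - coupling * X[:, 0:2]            # force at contact 2, coupled to contact 1
y = np.hstack([F1, F2])

# Train/test split and a small fully connected network.
X_train, X_test, y_train, y_test = X[:4000], X[4000:], y[:4000], y[4000:]
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(X_train, y_train)

# Report error as a percentage of the target variance, in the spirit of the 1-3% figure.
mse = mean_squared_error(y_test, net.predict(X_test))
print(f"MSE as % of target variance: {100 * mse / y_test.var():.2f}%")
```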

Relevance: 30.00%

Abstract:

A powerful image editing system called OVIE is described, which provides fast and accurate creation, composition, rendering, and other manipulation of image content. The flexibility and convenience of the system are achieved by including two modules, image decomposition and image vectorization, which understand and represent an image respectively. To understand an image comprehensively, we propose integrating image segmentation, shape completion, and image completion techniques to ensure seamless image editing. For image representation, the array of pixels is replaced by vector data that can be edited geometrically, since geometry-based editing has a physical meaning and is thus more natural and intuitive for users. Compared to existing work, our system is more convenient and can generate higher-quality effects. © 2012 IEEE.
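The vectorization step can be pictured as turning a segmented raster region into an editable geometric outline. The snippet below is a stand-in illustration using plain OpenCV contour extraction and polygon simplification; it is not the OVIE pipeline, and the function names and tolerance are assumptions.

```python
# Minimal illustration of "vectorization": convert a segmented region of a raster image
# into an editable geometric outline using standard OpenCV primitives.
import cv2
import numpy as np

def vectorize_region(mask, epsilon_ratio=0.01):
    """Convert a binary segmentation mask into a list of simplified polygons.

    mask          : uint8 array, non-zero where the region lies
    epsilon_ratio : simplification tolerance as a fraction of the contour perimeter
    """
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    polygons = []
    for c in contours:
        eps = epsilon_ratio * cv2.arcLength(c, True)
        poly = cv2.approxPolyDP(c, eps, True)            # editable vertex list
        polygons.append(poly.reshape(-1, 2))
    return polygons

# Editing the vector data then becomes a geometric operation, e.g. translating a region:
def translate_polygon(poly, dx, dy):
    return poly + np.array([dx, dy])
```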

Relevance: 30.00%

Abstract:

The goals of this study were to examine how visual information influences body sway as a function of self- versus object-motion perception and of visual information quality. Participants who were aware (object-motion) or unaware (self-motion) of the movement of a moving room were asked to stand upright at five different distances from its frontal wall. The effect of visual information on body sway decreased when participants were aware of the sensory manipulation. Moreover, while the visual influence on body sway decreased as distance increased under self-motion perception, no distance effect was observed in the object-motion mode. The overall results indicate that the functioning of the postural control system can be altered by prior knowledge, and that adaptation to changes in sensory quality seems to occur in the self-motion but not the object-motion perception mode. (C) 2004 Elsevier B.V. All rights reserved.

Relevance: 30.00%

Abstract:

How do capuchin monkeys learn to use stones to crack open nuts? Perception-action theory posits that individuals explore producing varying spatial and force relations among objects and surfaces, thereby learning about affordances of such relations and how to produce them. Such learning supports the discovery of tool use. We present longitudinal developmental data from semifree-ranging tufted capuchin monkeys (Cebus apella) to evaluate predictions arising from Perception-action theory linking manipulative development and the onset of tool-using. Percussive actions bringing an object into contact with a surface appeared within the first year of life. Most infants readily struck nuts and other objects against stones or other surfaces from 6 months of age, but percussive actions alone were not sufficient to produce nut-cracking sequences. Placing the nut on the anvil surface and then releasing it, so that it could be struck with a stone, was the last element necessary for nut-cracking to appear in capuchins. Young chimpanzees may face a different challenge in learning to crack nuts: they readily place objects on surfaces and release them, but rarely vigorously strike objects against surfaces or other objects. Thus the challenges facing the two species in developing the same behavior (nut-cracking using a stone hammer and an anvil) may be quite different. Capuchins must inhibit a strong bias to hold nuts so that they can release them; chimpanzees must generate a percussive action rather than a gentle placing action. Generating the right actions may be as challenging as achieving the right sequence of actions in both species. Our analysis suggests a new direction for studies of social influence on young primates learning sequences of actions involving manipulation of objects in relation to surfaces.

Relevance: 30.00%

Abstract:

Recently, stable markerless 6-DOF video-based hand-tracking devices have become available. These devices simultaneously track the positions and orientations of both of the user's hands in different postures at a rate of at least 25 frames per second. Such hand tracking allows the human hands to be used as natural input devices. However, the absence of physical buttons for performing click actions and state changes poses severe challenges in designing an efficient and easy-to-use 3D interface on top of such a device. In particular, a solution has to be found for coupling and decoupling a virtual object's movements to and from the user's hand (i.e., grabbing and releasing). In this paper, we introduce a novel technique for efficiently grabbing and releasing objects with two hands and manipulating them intuitively in virtual space. This technique is integrated into a novel 3D interface for virtual manipulations. A user experiment shows the superior applicability of this new technique. Finally, we describe how the technique can be exploited in practice to improve interaction by integrating it with RTT DeltaGen, a professional CAD/CAS visualization and editing tool.
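The core problem the paper addresses, coupling and decoupling an object's pose to the hand without a physical button, can be illustrated with a simple gesture-driven state machine. The sketch below uses a hypothetical pinch-distance heuristic with hysteresis; it is an assumption-laden stand-in, not the technique proposed in the paper.

```python
# Hypothetical grab/release coupling: a gesture (here, a pinch) toggles whether an object's
# pose follows the hand.  Thresholds and data structures are illustrative assumptions.
from dataclasses import dataclass
import numpy as np

GRAB_THRESHOLD = 0.03     # metres between thumb tip and index tip (assumed)
RELEASE_THRESHOLD = 0.05  # larger release threshold gives hysteresis, so the grasp does not flicker

@dataclass
class VirtualObject:
    position: np.ndarray
    grabbed: bool = False
    offset: np.ndarray = None  # object position relative to the hand while grabbed

def update(obj: VirtualObject, hand_pos, thumb_tip, index_tip):
    pinch = np.linalg.norm(np.asarray(thumb_tip) - np.asarray(index_tip))
    if not obj.grabbed and pinch < GRAB_THRESHOLD:
        obj.grabbed = True
        obj.offset = obj.position - np.asarray(hand_pos)      # couple: remember the offset
    elif obj.grabbed and pinch > RELEASE_THRESHOLD:
        obj.grabbed = False                                    # decouple: object stays where it is
        obj.offset = None
    if obj.grabbed:
        obj.position = np.asarray(hand_pos) + obj.offset       # object follows the hand
    return obj
```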

Relevance: 30.00%

Abstract:

The current study investigates whether peripheral vision can be used to monitor multiple moving objects and to detect single-target changes. For this purpose, Experiment 1 first validated a modified MOT setup with a large projection and a constant-position centroid phase. Classical findings regarding the use of a virtual centroid to track multiple objects, and the dependency of tracking accuracy on target speed, were successfully replicated. The main experimental manipulations of the to-be-detected target changes were then introduced in Experiment 2. In addition to the button press used for the detection task, gaze behavior was assessed with an integrated eye-tracking system. The analysis of saccadic reaction times in relation to the motor response shows that peripheral vision is naturally used to detect motion and form changes in MOT, because the saccade to the target occurred after target-change offset. Furthermore, for changes of comparable task difficulty, motion changes are detected better by peripheral vision than form changes. The findings indicate that capabilities of the visual system (e.g., visual acuity) affect change-detection rates and that covert-attention processes may be affected by vision-related aspects such as spatial uncertainty. Moreover, it is argued that a centroid-MOT strategy might reduce saccade-related costs and that eye tracking is generally valuable for testing predictions derived from theories of MOT. Finally, implications for testing covert attention in applied settings are proposed.
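To make the "virtual centroid" strategy concrete, the toy sketch below computes the centroid of the target positions on each frame and two illustrative measures: the mean gaze-to-centroid distance and the mean eccentricity of the targets relative to gaze. The data layout and measures are assumptions for illustration, not the study's analysis code.

```python
# Toy illustration of centroid-based MOT: gaze is compared against the centroid of all
# targets rather than against individual targets.
import numpy as np

def centroid_strategy_metrics(target_positions, gaze_positions):
    """target_positions: (frames, n_targets, 2) array of x/y coordinates in degrees
       gaze_positions:   (frames, 2) array of gaze x/y in the same units"""
    centroids = target_positions.mean(axis=1)                        # virtual centroid per frame
    gaze_to_centroid = np.linalg.norm(gaze_positions - centroids, axis=1)
    # Eccentricity of each target relative to gaze, i.e. how far into the periphery it falls.
    target_eccentricity = np.linalg.norm(
        target_positions - gaze_positions[:, None, :], axis=2)
    return gaze_to_centroid.mean(), target_eccentricity.mean()
```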

Relevance: 30.00%

Abstract:

When we interact with our surroundings in everyday life (using a toothbrush, opening doors, using a mobile phone, etc.) or in professional settings (medical interventions, production processes, etc.), we typically perform dexterous manipulations that involve the fingers of both hands. Multi-finger haptic interaction methods therefore give rise to more natural and realistic human-machine interfaces. However, most commercially available haptic interfaces are based on a single contact point; this may suffice for exploring or palpating an environment, but it does not permit more advanced tasks such as grasping.

This thesis investigates the mechanical design, control, and applications of a modular haptic device that provides force feedback to the user's index, middle, and thumb fingers. The mechanical design was optimized with multi-objective functions to achieve low inertia, a large workspace, high manipulability, and force reflection of 3 N within the workspace; the bandwidth and stiffness of the device were evaluated through simulation and real experimentation. One of the most important aspects of such devices is the end-effector, since it is the part in contact with the user. A lightweight, user-adaptable, and cost-effective thimble incorporating four contact force sensors was designed, allowing normal and tangential forces to be estimated during interaction with real and virtual environments.

For the control architecture, the main requirements of these devices were studied: the acquisition, processing, and exchange over the network of numerous control and instrumentation signals, and the computation of the direct and inverse kinematics, the Jacobian, grasp-detection algorithms, and related quantities, all of which must run in real time at a guaranteed rate of 1 kHz. The hardware control architecture is modular, combining an FPGA for the low-level controller with a real-time controller for the complex calculations, providing a compact and scalable solution with the required computational capability. Set-ups for dexterous virtual and remote precision manipulation are described, together with a method called iterative kinematic decoupling for computing the inverse kinematics of robots and its comparison with current alternatives.

To understand the importance of multimodal interaction, a subject study was carried out, in collaboration with neuroscientists from the Technion Israel Institute of Technology, to determine which sensory stimuli correlate with faster and more accurate responses. Comparing grasping response times for unimodal events (auditory, visual, and haptic) with those for bimodal and trimodal combinations shows that the synchronized motion of the fingers that generates the grasping response relies mainly on haptic cues. This processing-speed advantage of haptic stimuli suggests that virtual environments that include a haptic component generate better motor contingencies and enhance the plausibility of events; systems that include haptic feedback give users more time at the cognitive stages to fill in missing information creatively and form a richer experience.

An interesting application of haptic devices is the design of new simulators for training manual skills in the medical sector. In collaboration with physical therapists from Griffith University in Australia, a simulator for hand rehabilitation exercises was developed. The non-linear stiffness properties of the metacarpophalangeal joint of the index finger were estimated using the designed end-effector, and these parameters were implemented in a scenario that simulates the behavior of the human hand and allows haptic interaction through the designed interface. The potential applications of this simulator relate to the training and education of physiotherapy students.

Finally, new methods were developed for simultaneously controlling a robotic manipulator and a robotic hand when interacting with real environments. The reachable workspace of the haptic device is extended by automatically switching between position and rate control modes, and the user's hand gesture is recognized from the relative movements of the index, middle, and thumb fingers during the early stages of the approach to the object and mapped to the robotic hand actuators. Experiments on dexterous manipulation of objects with a manipulator and different robotic hands, carried out in collaboration with researchers from the Harvard BioRobotics Laboratory, show that the overall task time is reduced and that the methods allow dexterous manipulations to be completed accurately.
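The workspace-extension scheme mentioned above, automatic switching between position and rate control, can be sketched as follows: inside a central region of the haptic workspace the robot command mirrors the device, and near the boundary the command drifts at a velocity proportional to the boundary penetration. The thresholds, gains, and class structure below are illustrative assumptions, not the thesis implementation.

```python
# Hedged sketch of hybrid position/rate mapping for workspace extension.
import numpy as np

INNER_RADIUS = 0.08   # m, radius of the position-control region (assumed)
RATE_GAIN = 0.5       # maps boundary penetration to commanded drift velocity (assumed)
SCALE = 1.0           # position scaling between device and robot (assumed)

class HybridPositionRateMapping:
    def __init__(self, robot_start):
        # Robot pose that corresponds to the device workspace origin.
        self.anchor = np.asarray(robot_start, dtype=float)

    def update(self, device_pos, dt):
        device_pos = np.asarray(device_pos, dtype=float)
        r = np.linalg.norm(device_pos)
        if r <= INNER_RADIUS:
            # Position mode: the robot mirrors the device around the current anchor.
            return self.anchor + SCALE * device_pos
        # Rate mode: drift the anchor in the direction the device is pushed past the boundary,
        # which extends the reachable robot workspace beyond the device workspace.
        direction = device_pos / r
        self.anchor += RATE_GAIN * (r - INNER_RADIUS) * direction * dt
        return self.anchor + SCALE * INNER_RADIUS * direction
```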

Relevance: 30.00%

Abstract:

Human motion monitoring techniques generally allow researchers to analyse kinematics, and especially motor strategies, in goal-oriented activities of daily living such as preparing food and drinks or grooming. In addition, evaluating human movement and behaviour is essential in the field of cognitive rehabilitation in order to understand the difficulties some people experience when performing everyday activities after a stroke. These difficulties are mainly associated with carrying out sequential steps and with recognizing the use of tools and objects. Interpreting data on the behaviour of such patients, in order to recognize and determine how successfully actions are executed and to broaden knowledge of brain diseases, their consequences, and their severity, depends entirely on the devices used to capture the data and on the quality of that data. Moreover, there is a real need to improve current cognitive rehabilitation techniques by contributing to the design of automatic systems, a kind of virtual therapist, that promote a more independent life for these patients and reduce the workload of the occupational therapists in charge of them. For this purpose, sensors and devices that provide real-time data on the execution and state of the rehabilitation task are essential, and such data can also contribute to the design and training of future algorithms that recognize errors automatically and inform the patient through different types of cues, such as still images, auditory messages, or videos. The technology and solutions currently adopted in this field do not offer a fully robust and effective way of obtaining real-time data: on the one hand, marker-based platforms that require sensors attached to the skin may influence the patient's own movement; on the other hand, the complexity or high cost of implementation makes it difficult to install such a system in a hospital, let alone in the patient's home.

This thesis presents research in the field of patient motion monitoring that provides a step forward in the detection, tracking, and recognition of hand movements, gestures, and the face in a non-invasive way, improving current cognitive rehabilitation techniques for the real-time acquisition of data on the patient's behaviour and execution of the task. To frame the scope of the thesis, a summary of the main cognitive diseases requiring rehabilitation and of their consequences for the execution of daily tasks is first presented, together with a review of current cognitive rehabilitation methodology. Since the hands are the main body parts involved in completing manual tasks of daily living, the existing technologies for capturing hand movement are also reviewed.

One of the main contributions of this thesis is the design and evaluation of a non-invasive approach to detecting and tracking the user's hands during the execution of manual activities of daily living that involve the manipulation of objects. This approach requires no additional markers, is based only on a low-cost depth camera, and is robust, accurate, and easy to install. Another contribution focuses on hand gesture recognition for detecting object grasping, based on a state-of-the-art infrared sensor complemented with a depth camera. This new, likewise non-invasive technique synchronizes both sensors to track specific objects and to recognize specific events related to grooming tasks. In addition, a preliminary assessment of facial expression recognition is carried out to analyse whether it is adequate for recognizing the patient's mood during the task. All the hardware and software components developed are integrated into a simple prototype intended as a monitoring platform. A technical evaluation of the performance of each device is carried out to analyse its suitability for acquiring real-time data during the execution of real daily tasks. Finally, interaction with real users and stroke patients is studied to obtain feedback on the usability of the prototype; this feedback is essential for considering home-based cognitive rehabilitation as well as installation of the system in the corresponding hospital.
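Two of the ideas above, pairing samples from an infrared hand tracker and a depth camera by timestamp and deciding grasp versus no-grasp from fingertip-to-palm distances, can be sketched as follows. The field names, thresholds, and synchronization tolerance are assumptions for illustration, not the thesis implementation.

```python
# Illustrative sketch: (1) nearest-timestamp pairing of two sensor streams, and
# (2) a simple grasp decision from fingertip-to-palm distances.
from bisect import bisect_left

GRASP_DISTANCE = 0.05   # m, mean fingertip-to-palm distance below which we call it a grasp (assumed)
SYNC_TOLERANCE = 0.02   # s, maximum timestamp difference accepted when pairing samples (assumed)

def pair_by_timestamp(ir_samples, depth_frames):
    """Both inputs are lists of dicts sorted by their 't' field (seconds). Returns matched pairs."""
    depth_times = [f["t"] for f in depth_frames]
    pairs = []
    for s in ir_samples:
        i = bisect_left(depth_times, s["t"])
        for j in (i - 1, i):
            if 0 <= j < len(depth_frames) and abs(depth_times[j] - s["t"]) <= SYNC_TOLERANCE:
                pairs.append((s, depth_frames[j]))
                break
    return pairs

def is_grasping(ir_sample):
    """ir_sample['fingertips'] is a list of 3-D points; ir_sample['palm'] is a 3-D point."""
    px, py, pz = ir_sample["palm"]
    dists = [((x - px) ** 2 + (y - py) ** 2 + (z - pz) ** 2) ** 0.5
             for x, y, z in ir_sample["fingertips"]]
    return sum(dists) / len(dists) < GRASP_DISTANCE
```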

Relevance: 30.00%

Abstract:

It is where buildings meet the ground that the ground as a physical reality is transformed into an architectural quality. This thesis studies the plane of the ground, making a critical review of architecture as a mechanism of design thinking. The analysis is framed from the 1960s onwards, when the break with the inheritance of the Modern Movement began to become evident. It is then that architecture marks a point of inflection and entirely new methodological attitudes towards the ground start to emerge: the classic meeting actions of resting, raising, or burying were gradually replaced by more complex ones such as folding, tilting, or hollowing out. Taking the meetings, and failed meetings, between the architectural object and the terrain as its frame, the thesis analyses the ground as an architectural strategy, seeking to show how its manipulation can be an effective tool for establishing specific relationships with the place. The capacity of the ground, as an architectural element, to explore and modify the characteristics of each environment makes this surface an efficient means of contextualization. The manipulation of the ground proposed here therefore operates by transcoding the specific elements of each place, acting as an architectural strategy that relates the building to its context by modifying the formal particularities of that plane.

Against the tendency to reduce architectural expression to a simple autonomous formal appearance, the manipulation of the ground plane is put forward as a mechanism for rooting the building in its place, emphasizing the earthbound condition of architecture: the ground plane is what binds the building, through gravity, to the earth's crust. The aim is to highlight the mediating character of the ground in architecture, establishing common elements between different realities and strengthening the value of the ground as a tool that can transcode the environment, transforming its data into concrete architectural elements. In this process of translating information, the ground ceases to be a passive object and becomes an active participant in the actions exerted upon it.

The thesis also sets out to show that the rapid evolution of the ground as an architectural strategy in recent years owes much to the expansion of the ground into other arts, such as sculpture and, more specifically, land art. New disciplines then emerged that propose an understanding of the place in each project, developing an integral vision of the natural world and turning it into a living, connective tissue that relates the activities it supports. In Utzon and his geological platforms we also find a precursor of the importance that would later be given to the ground plane in architecture: he initiated a critical attitude that advanced towards a more expressive architecture requiring new mechanisms to relate it to the ground supporting it, proposing with his platforms an infrastructural transformation of the ground. With his transcultural interpretation of archetypal Mayan, Chinese, and Japanese spatial structures, he enriched the architectural panorama, giving greater value to a context that would come to be understood in more complex ways.

Architectural projects have often become fertile territory for speculation on which to build architectural theory. Within this context, four ground strategies are analysed through the study of four architectural positions that are highly significant from the point of view of the manipulation of the ground plane and that construct an interesting design methodology with which to operate. The case studies are the Passenger Terminal (1996-2002) in Yokohama by FOA, the Casa da Música (1999-2005) by OMA in Porto, the Jewish Memorial (1998-2005) in Berlin by Peter Eisenman, and the MAXXI Museum (1998-2009) by Zaha Hadid in Rome. Uncovering the rules, references, and methodologies that each proposes reveals the main positions taken with respect to the project and its relationship with the place.

The proposals presented here address a new way of understanding the ground, one that moved architecture towards new modes of meeting the terrain. They also make it possible to establish the main architectural contributions of the ground as a strategy, contributions that have led to its reformulation: new ways of approaching architecture based on movement and functional flexibility, or on the superposition of flows of information and circulation; new paths that blur the figure against the background; and a reinforcement of the idea of the ground as an infrastructural platform, already put forward by Utzon. Ultimately, the thesis proposes the exploration of the surface of the ground as the most revealing element of the emerging forms of space.

Relevance: 30.00%

Abstract:

Knowledge of the stage composition and the temporal dynamics of human cognitive operations is critical for building theories of higher mental activity. This information has been difficult to acquire, even with different combinations of techniques such as refined behavioral testing, electrical recording/interference, and metabolic imaging studies. Verbal object comprehension was studied herein in a single individual, by using three tasks (object naming, auditory word comprehension, and visual word comprehension), two languages (English and Farsi), and four techniques (stimulus manipulation, direct cortical electrical interference, electrocorticography, and a variation of the technique of direct cortical electrical interference to produce time-delimited effects, called timeslicing), in a subject in whom indwelling subdural electrode arrays had been placed for clinical purposes. Electrical interference at a pair of electrodes on the left lateral occipitotemporal gyrus interfered with naming in both languages and with comprehension in the language tested (English). The naming and comprehension deficit resulted from interference with processing of verbal object meaning. Electrocorticography indices of cortical activation at this site during naming started 250–300 msec after visual stimulus presentation. By using the timeslicing technique, which varies the onset of electrical interference relative to the behavioral task, we found that completion of processing for verbal object meaning varied from 450 to 750 msec after current onset. This variability was found to be a function of the subject’s familiarity with the objects.

Relevance: 30.00%

Abstract:

Action selection and organization are very complex processes that need to exploit contextual information and the retrieval of previously memorized information, as well as the integration of these different types of data. On the basis of its anatomical connections with premotor and parietal areas involved in action-goal coding, and of data from the literature, it seems appropriate to suppose that one of the best candidates for the selection of the neuronal pools underlying the selection and organization of intentional actions is the prefrontal cortex. We recorded the activity of single ventrolateral prefrontal (VLPF) neurons while monkeys performed simple and complex manipulative actions aimed at distinct final goals, employing a modified and more strictly controlled version of the grasp-to-eat (a food pellet) / grasp-to-place (an object) paradigm used in previous studies on parietal (Fogassi et al., 2005) and premotor neurons (Bonini et al., 2010). With this task we were able both to evaluate the processing and integration of distinct, sequentially presented (visual and auditory) contextual information used to select the forthcoming action, and to examine the possible presence of goal-related activity in this portion of cortex. We also performed an observation task to clarify the possible contribution of VLPF neurons to the understanding of others' goal-directed actions.

Simple visuomotor task (sVMT). We found four main types of neurons: unimodal sensory-driven, motor-related, unimodal sensory-and-motor, and multisensory neurons. A substantial number of VLPF neurons showed both a motor-related discharge and a visual presentation response (sensory-and-motor neurons), with remarkable visuomotor congruence for the preferred target. Interestingly, the discharge of multisensory neurons reflected a behavioural decision independently of the sensory modality of the stimulus that allowed the monkey to make it: most encoded a decision to act or to refrain from acting, while others specified one of the four behavioural alternatives.

Complex visuomotor task (cVMT). The cVMT was similar to the sVMT but included a further grasping motor act (grasping a lid in order to remove it before grasping the target) and was run in two modalities: randomized and in blocks. Motor-related and sensory-and-motor neurons tested in the randomized cVMT were already active during the first grasping motor act, but their selectivity for one of the two graspable targets emerged only during the execution of the second grasping act. In contrast, when the cVMT was run in blocks, almost all of these neurons not only discharged during the first grasping motor act but also displayed the same target selectivity shown at the moment of hand contact with the target.

Observation task (OT). A large proportion of the neurons active during the OT modulated their firing rate in correspondence with the action performed by the experimenter. Among them, we found neurons significantly activated during the observation of the experimenter's action (action observation-related neurons) and neurons responding not only to action observation but also to the presented cue stimuli (sensory-and-action observation-related neurons). Among the neurons of the first set, almost half displayed target selectivity, with no clear difference between the two presented targets; concerning the second set, the sensory-and-action-related neurons, we found low target selectivity and no strict congruence between the selectivity exhibited in the visual response and that exhibited during action observation.

Relevance: 30.00%

Abstract:

Tactile sensors play an important role in robotic manipulation for performing dexterous and complex tasks. This paper presents a novel control framework for dexterous manipulation with multi-fingered robotic hands using feedback from tactile and visual sensors. The framework permits the definition of new visual controllers that track the path of the object's motion, taking into account both the dynamic model of the robot hand and the grasping force at the fingertips under a hybrid control scheme. In addition, the proposed general method employs optimal control to obtain the desired behaviour in the joint space of the fingers, based on a specified cost function that determines how the control effort is distributed over the joints of the robotic hand. Finally, the authors present experimental verification, on a real robotic manipulation system, of some of the controllers derived from the framework.
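One common way to let a cost function decide how control effort is spread over the joints, in the spirit of the framework described, is a weighted least-squares resolution of a task-space command into joint space. The sketch below is a generic illustration with a placeholder Jacobian and weights, not the paper's controller.

```python
# Weighted effort distribution: given a desired fingertip (task-space) velocity, pick joint
# velocities that achieve it while minimizing a weighted quadratic cost, so the weight matrix
# decides how much each joint moves.
import numpy as np

def weighted_joint_velocities(J, v_task, W):
    """Solve  min 0.5 * qd^T W qd   s.t.  J qd = v_task.

    J      : (m, n) fingertip Jacobian
    v_task : (m,)  desired fingertip velocity
    W      : (n, n) positive-definite weight; larger entries penalize a joint's motion more
    Closed form: qd = W^-1 J^T (J W^-1 J^T)^-1 v_task
    """
    W_inv = np.linalg.inv(W)
    JWJt = J @ W_inv @ J.T
    return W_inv @ J.T @ np.linalg.solve(JWJt, v_task)

# Example: a planar 3-joint finger where the distal joint is penalized most,
# so the solution shifts effort towards the proximal joints.
J = np.array([[1.0, 0.8, 0.3],
              [0.0, 0.5, 0.4]])
qd = weighted_joint_velocities(J, np.array([0.02, 0.01]), np.diag([1.0, 1.0, 4.0]))
print(qd)
```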