946 results for VISUAL INFORMATION
Abstract:
The aim of the present study was to analyze the effects of looking at targets located at different distances on body oscillation during tasks of distinct difficulty. In Experiment 1, ten participants in quiet stance fixated targets in three conditions: No object-far (fixation on the far target without the near target present), Object-near (fixation on the near target with the far target present), and Object-far (fixation on the far target with the near target present). Mean trunk oscillation along the anterior-posterior axis was smallest in the Object-near condition; the No object-far and Object-far conditions were similar. In Experiment 2, seven participants in kiba-dachi, a karate stance, were submitted to three conditions: Blindfolded, No object-far, and Object-near. Mean head and trunk oscillations along the anterior-posterior axis were smaller in the Object-near condition than in the Blindfolded condition, and the trunk oscillated more in the No object-far condition than in the Object-near condition. The results support the notion that a simple posture is not automatically regulated by optical flow; rather, different amounts of visual instability may be tolerated according to fixation distance, regardless of the presence of non-fixated objects. The control of a more difficult posture may also accommodate the effects of fixation distance.
Abstract:
From 1933 to 1944, the 21 Regional Offices of Education of the State of São Paulo produced inventory reports on the São Paulo schools inspected during that period. At least 68 of those reports have been preserved in the Public Archive of the State of São Paulo. The present paper presents part of that patrimony as an important research source for the history of education. Dividing the documentation into visual and written sources, the text focuses on the visual sources and discusses the methodological difficulties of using this kind of source. It then illustrates part of the visual information that those documents offer to researchers. The article concludes with a brief sampling of the textual information that the reports provide to historians.
Abstract:
Saccadic eye movements have been shown to affect posture by decreasing the magnitude of body sway in young adults. However, there is no evidence of how the search for visual information that occurs during eye movements affects postural control in older adults. The purpose of the present study was to determine the influence of saccadic eye movements on postural control in older adults while they stood on 2 different bases of support. Twelve older adults stood upright in 70-s trials under 2 stance conditions (wide and narrow) and 3 gaze conditions (fixation, saccadic eye movements at 0.5 Hz, and saccadic eye movements at 1.1 Hz). Head and trunk sway amplitude and mean sway frequency were measured in both the anterior/posterior (AP) and medial/lateral (ML) directions. The results showed that the amplitude of body sway was reduced during saccades compared with fixation, as previously observed in young adults. However, older adults exhibited similar sway amplitude and frequency in the AP direction under the wide and narrow stance conditions, which differs from observations in young adults, who display larger sway in a narrow stance than in a wide stance while performing saccades. These results suggest that although saccadic eye movements reduce the amplitude of body sway in older adults, as they do in young adults, older adults adopt a more rigid postural control strategy that does not allow larger sway in a more challenging stance condition.
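The abstract above does not state how sway amplitude and mean sway frequency were computed; the snippet below is only a minimal illustrative sketch of one common way to derive such measures from a single anterior-posterior sway time series (Python with NumPy; the sampling rate and the synthetic data are assumptions, not values from the study).

```python
# Illustrative only: peak-to-peak amplitude and power-weighted mean frequency
# of a detrended sway signal. Not the study's actual analysis pipeline.
import numpy as np

def sway_measures(signal, fs):
    """Return (amplitude, mean_frequency) for a 1-D sway series sampled at fs Hz."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                          # remove the mean offset
    amplitude = x.max() - x.min()             # peak-to-peak sway amplitude

    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2
    power[0] = 0.0                            # ignore the DC component
    mean_frequency = np.sum(freqs * power) / np.sum(power)
    return amplitude, mean_frequency

# Example with synthetic data: a 70-s trial sampled at a hypothetical 100 Hz.
fs = 100.0
t = np.arange(0, 70, 1.0 / fs)
ap_sway = 0.5 * np.sin(2 * np.pi * 0.3 * t) + 0.05 * np.random.randn(t.size)
amp, mf = sway_measures(ap_sway, fs)
print(f"amplitude = {amp:.2f}, mean frequency = {mf:.2f} Hz")
```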
Abstract:
The aim of this paper is to gather research and published studies on the importance of the social environment in the development of children's symbolic systems. Verbal, visual, and written languages are interrelated representation systems, although each one has its own characteristics and elements. The paper seeks to elucidate the importance of linking the various existing types of language and, therefore, the importance of a coordinated approach in teaching-learning situations involving these systems. The work also intends to show how thin the line is that separates writing from drawing, both of them being representational systems, and how visual information and language are intrinsically linked, since thought arises from the connections between them. The work includes the report of an activity with Japanese ideograms carried out during the author's teacher training in formal education. It also addresses the pictographic, creative, and poetic aspects of ideograms, which open up new ways of thinking and of representing symbolically, much like the methods of artistic language.
Abstract:
The main questions addressed in this work were whether and how adaptation to the suppression of visual information occurs in a free-fall paradigm, and the extent to which the availability of vision influences the control of landing movements. The prelanding modulation of EMG timing and amplitude in four lower-limb muscles was investigated. Participants performed six consecutive drop-landings from four different heights in two experimental conditions: with and without vision. The experimental design precluded participants from estimating the height of the drop. Once the cues provided by proprioceptive and vestibular information acquired during the first trials had been processed, the nervous system rapidly adapted to the lack of visual information and produced a motor output (i.e., prelanding EMG modulation) similar to that observed when the task was performed with vision available.
Abstract:
This study aimed to examine possible changes in the intrinsic dynamics of children and adults brought about by external information during a task of maintaining upright stance. Ten 8-year-old children and ten young adults of both genders took part in the study. They stood upright inside a moving room that was continuously moved forwards and backwards. Participants received information about the movement of the room and were asked either not to sway or to sway along with its movement. The results showed that the manipulation of visual information induced corresponding body sway (intrinsic dynamics) in both children and adults. Information about the movement of the room and the request for an action (behavioral information) altered the relationship between visual information and body sway. Children had more difficulty altering their intrinsic dynamics than adults, indicating that they are more dependent on intrinsic dynamics. These results have important implications for teaching-learning settings, as they indicate that learning activities involving children should be structured to provide more favorable conditions for changes in intrinsic dynamics so that the learning goals can be achieved.
Abstract:
Low-cost real-time depth cameras offer new sensing possibilities for a wide range of applications beyond the gaming world. Other active research scenarios, such as surveillance, can take advantage of the capabilities offered by this kind of sensor, which integrates depth and visual information. In this paper, we present a system that operates in a novel application context for these devices: troublesome scenarios where illumination conditions can change suddenly. We focus on the people counting problem with re-identification and trajectory analysis.
Abstract:
The automatic extraction of biometric descriptors of anonymous people is a challenging problem in camera networks. This task is typically accomplished by making use of visual information. Calibrated RGBD sensors make it possible to extract point cloud information. We present a novel approach for the semantic description and re-identification of people based on their individual point clouds. The proposal combines simple geometric features with point cloud features based on surface normals.
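As an illustration of the kind of descriptor this abstract refers to, the sketch below combines one simple geometric feature (body height) with a histogram of surface-normal orientations computed from a segmented person point cloud. It assumes the Open3D library and is not the paper's actual implementation; the feature choices and parameters are illustrative.

```python
# Illustrative sketch, not the paper's method: a compact person descriptor built
# from a segmented point cloud (geometric feature + surface-normal histogram).
import numpy as np
import open3d as o3d  # assumed library; the paper does not specify its tooling

def describe_person(points_xyz):
    """points_xyz: (N, 3) array with a person's segmented point cloud, z = up."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points_xyz)

    # Simple geometric feature: vertical extent of the cloud (~ person height).
    height = points_xyz[:, 2].max() - points_xyz[:, 2].min()

    # Surface-normal feature: estimate normals from local neighbourhoods and
    # summarise their vertical components in a small histogram.
    pcd.estimate_normals(search_param=o3d.geometry.KDTreeSearchParamKNN(knn=30))
    normals = np.asarray(pcd.normals)
    hist, _ = np.histogram(normals[:, 2], bins=8, range=(-1.0, 1.0), density=True)

    return np.concatenate(([height], hist))  # descriptor vector

# Descriptors from different RGBD sensors can then be compared (e.g. by
# Euclidean distance) to re-identify the same person across views.
```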
Abstract:
A single picture provides a largely incomplete representation of the scene one is looking at. Usually it reproduces only a limited spatial portion of the scene, according to the standpoint and the viewing angle, and it contains only instantaneous information. Thus very little can be understood about the geometric structure of the scene, and the position and orientation of the observer with respect to it also remain hard to infer. When multiple views, taken from different positions in space and time, observe the same scene, a much deeper knowledge is potentially achievable. Understanding the relations between views enables the construction of a collective representation by fusing the information contained in every single image. Visual reconstruction methods confront the formidable, and still open, challenge of delivering a comprehensive representation of the structure, motion, and appearance of a scene from visual information. Multi-view visual reconstruction deals with the inference of relations among multiple views and the exploitation of the revealed connections to attain the best possible representation. This thesis investigates novel methods and applications in the field of visual reconstruction from multiple views. Three main threads of research have been pursued: dense geometric reconstruction, camera pose reconstruction, and sparse geometric reconstruction of deformable surfaces.

Dense geometric reconstruction aims at delivering the appearance of a scene at every single point. The construction of a large panoramic image from a set of traditional pictures has been extensively studied in the context of image mosaicing techniques. An original algorithm for sequential registration suitable for real-time applications has been conceived. Its integration into a visual surveillance system has led to robust and efficient motion detection with Pan-Tilt-Zoom cameras. Moreover, an evaluation methodology for quantitatively assessing and comparing image mosaicing algorithms has been devised and made available to the community.

Camera pose reconstruction deals with the recovery of the camera trajectory across an image sequence. A novel mosaic-based pose reconstruction algorithm has been conceived that exploits image mosaics and traditional pose estimation algorithms to deliver more accurate estimates. An innovative markerless vision-based human-machine interface has also been proposed, allowing a user to interact with gaming applications by moving a hand-held consumer-grade camera in unstructured environments.

Finally, sparse geometric reconstruction refers to the computation of the coarse geometry of an object at a few preset points. In this thesis, an innovative shape reconstruction algorithm for deformable objects has been designed. A cooperation with the Solar Impulse project made it possible to deploy the algorithm in a very challenging real-world scenario, namely the accurate measurement of airplane wing deformations.
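To make the image-mosaicing thread more concrete: the thesis' own sequential registration algorithm is not described in this abstract, so the sketch below only shows a generic pairwise registration step along similar lines, using ORB features and a RANSAC homography in OpenCV. All names and parameters are illustrative assumptions, not the thesis' implementation.

```python
# Generic illustration of pairwise registration for mosaicing (not the thesis
# algorithm): warp a new frame into the mosaic via an ORB + RANSAC homography.
import cv2
import numpy as np

def register_frame(mosaic, frame):
    """Warp `frame` (3-channel image) into `mosaic` coordinates and blend it in."""
    orb = cv2.ORB_create(1000)
    kp_m, des_m = orb.detectAndCompute(mosaic, None)
    kp_f, des_f = orb.detectAndCompute(frame, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_f, des_m), key=lambda m: m.distance)[:200]

    src = np.float32([kp_f[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_m[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # frame -> mosaic

    warped = cv2.warpPerspective(frame, H, (mosaic.shape[1], mosaic.shape[0]))
    mask = warped.sum(axis=2) > 0          # pixels covered by the warped frame
    out = mosaic.copy()
    out[mask] = warped[mask]               # naive overwrite blending
    return out
```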
Abstract:
Ren and colleagues (2006) found that saccades to visual targets became less accurate when somatosensory information about hand location was added, suggesting that saccades rely mainly on vision. We conducted two kinematic experiments to examine whether or not reaching movements would also show such strong reliance on vision. In Experiment 1, subjects used their dominant right hand to perform reaches, with or without a delay, to an external visual target or to their own left fingertip positioned either by the experimenter or by the participant. Unlike saccades, reaches became more accurate and precise when proprioceptive information was available. In Experiment 2, subjects reached toward external or bodily targets with differing amounts of visual information. Proprioception improved performance only when vision was limited. Our results indicate that reaching movements, unlike saccades, are improved rather than impaired by the addition of somatosensory information.
Abstract:
The main aim of this thesis is strongly interdisciplinary: it involves and presumes knowledge of neurophysiology, to understand the mechanisms underlying the studied phenomena; knowledge and experience in electronics, necessary for the hardware experimental set-up used to acquire neuronal data; and knowledge of informatics and programming, needed to write the code that controls the subjects' behaviour during the experiments and the visual presentation of the stimuli. Finally, neuronal and statistical models must be well understood to help interpret the data. The project started with a careful bibliographic search: the mechanisms underlying the perception of heading (i.e., the direction of motion) are still poorly understood. The main interest is to understand how visual information about our own motion is integrated with eye position information. To investigate the cortical response to visual stimuli in motion and its integration with eye position, we decided to study an animal model, using optic flow expansions and contractions as visual stimuli.

The first chapter of the thesis presents the basic aims of the research project, together with the reasons why it is interesting and important to study the perception of motion. It also describes the methods my research group considered most adequate to contribute to the scientific community and highlights my personal contribution to the project. The second chapter gives an overview of the background needed to follow the main part of the thesis: it starts with a brief introduction to the central nervous system and cortical functions, and then presents in more depth the association areas, which are the main target of our study. Furthermore, it explains why studies on animal models are necessary to understand mechanisms at the cellular level that could not be addressed in any other way. The second part of the chapter presents the basics of electrophysiology and cellular communication, together with traditional methods of neuronal data analysis. The third chapter is intended as a helpful resource for future work in the laboratory: it presents the hardware used during the experimental sessions, how to control the animal's behaviour during the experiments by means of C routines and dedicated software, and how to present visual stimuli on a screen.

The fourth chapter is the core of the research project and of the thesis. The methods section presents the experimental paradigms, the visual stimuli, and the data analysis. The results show the responses of cells in area PEc to visual stimuli in motion combined with different eye positions. In brief, this study led to the identification of distinct cellular behaviours related to the focus of expansion (the direction of motion given by the optic flow pattern) and to eye position. The originality and importance of the results are pointed out in the conclusions: this is the first study aimed at investigating the perception of motion in this particular cortical area. The last paragraph presents a neural network model whose aim is to simulate the pre-saccadic and post-saccadic responses of neurons in area PEc during eye movement tasks. The data presented in chapter four are further analysed in chapter five. That analysis started from the observation of the neuronal responses during a 1-s period in which the visual stimulation was unchanged: it became clear that cell activity showed oscillations in time that had been neglected by the previous analysis based on mean firing frequency. The results distinguished two types of cellular behaviour by their response characteristics: some neurons showed oscillations that changed depending on eye and optic flow position, while others kept the same oscillation characteristics independently of the stimulus. The last chapter discusses the results of the research project, comments on the originality and interdisciplinarity of the study, and proposes some future developments.
Abstract:
The present work investigates various, in particular temporal, aspects of the gaze aftereffect. This effect describes how, after prolonged viewing of images showing people with averted gaze, the perception of gaze direction is shifted towards the adapted gaze: observers then mistakenly judge gaze directed at them as shifted in the opposite direction, and gaze in the adaptation direction as directed straight at them, i.e., they feel looked at although they are not. In this dissertation the gaze aftereffect is examined in four psychophysical experiments in which participants had to make simple categorical judgements about the gaze direction of test images.

The first experiment investigates the induction of the gaze aftereffect. It shows that no separate adaptation phase is necessary to induce the aftereffect: even the relatively brief presentation of the adapting stimulus alone (top-up display) before each test image leads, over repeated experimental presentations, to a shift of the overall gaze-direction tuning curve as well as to its broadening. A second experiment demonstrates that the size of the gaze aftereffect depends on the presentation time of the adapting stimulus: the aftereffect is stronger the longer the top-up display is shown, yet even at very short presentation times of one second the effect already arises, with a more locally restricted influence. Analysis of the time course shows that the effect builds up rapidly and completely, emerging within the first few presentations. The third experiment shows that the aftereffect rests both on short-term influences of the stimulation given immediately before the test image and on long-term memory effects that accumulate over the repetitions presented in the course of the experiment. At gaze angles of 5°, short-term and long-term influences are roughly balanced; at gaze angles of 10°, however, only just under 20% of the effect is due to short-term influences and about 80% to long-term influences. A fourth experiment examines the decay of the effect and shows that, in contrast to its rapid build-up, the gaze aftereffect dissipates slowly, over several minutes.

The discussion of the results concludes that the temporal dynamics of the gaze aftereffect found here point to adaptation processes at higher stages of visual information processing as the underlying mechanisms.
Abstract:
Goal-directed orientation enables organisms to accomplish tasks essential for survival, such as searching for resources, mating partners, and safe places. This requires perceiving the environment through the senses, storing and retrieving earlier experiences, and integrating this information and translating it into motor actions.

Which groups of neurons mediate goal-directed orientation in the brain of a fly? Which sensory information is relevant in a given context, and how is this information, together with stored prior knowledge, translated into motor actions? Where in the brain does the transition from sensory processing to motor control take place?

The central complex, an assembly of four neuropils in the central brain of Drosophila melanogaster, acts as an interface between visual information preprocessed in the optic lobes and premotor output. The neuropils are the protocerebral bridge, the fan-shaped body, the ellipsoid body, and the noduli.

The present work shows that fruit flies possess a spatial working memory. This memory can substitute for current visual information when sight of the target object is lost. It requires the sensory perception of target objects, the storage of their position, the continuous integration of self-position and object position, and the translation of the sensory information into goal-directed movement. Conditional expression of tetanus toxin by means of the GAL4/UAS/GAL80ts system showed that the ring neurons, which project into the ellipsoid body, are necessary for this orientation memory. It was further shown that flies lacking the ribosomal serine kinase S6KII lose their heading as soon as objects are no longer visible, and that partial rescue of this kinase exclusively in the ring neuron classes R3 and R4d is sufficient to restore the memory. This memory appears to be an idiothetic form of orientation.

While the spatial working memory becomes relevant once objects have disappeared, the present work also examined how goal-directed movement towards visible objects is mediated, addressing the central question of which groups of neurons mediate visual orientation. Using brain-structure mutants, it was shown that an intact protocerebral bridge is necessary to correctly mediate walking speed, walking activity, and targeting accuracy when approaching visual stimuli. The horizontal fiber system, which projects from the protocerebral bridge via the fan-shaped body onto the ventral bodies, neuropils associated with the central complex, appears to be necessary for locomotor control and accurate goal-directed movement. The latter was shown both by blocking synaptic transmission in the horizontal fiber system through conditional tetanus toxin expression using the GAL4/UAS/GAL80ts system and by partial rescue of the genes affected in the structure mutants.

Following the present results and earlier studies, a model emerges of how goal-directed movement towards visual stimuli might be mediated neuronally. According to this model, the protocerebral bridge maps the azimuth positions of objects, and the horizontal fiber system conveys the corresponding locomotor "where" information for goal-directed movements. Self-position relative to the target object is mediated via the ring neurons and the ellipsoid body. When the object disappears from view, the relative position can be determined idiothetically and integrated with prior information about the target object stored in the fan-shaped body ("what" information). The resulting information could then reach descending neurons in the ventral bodies via the horizontal fiber system and be relayed to the motor centers in the thorax.