935 results for Visual Object Identification Task
Abstract:
Visual analysis of social networks is usually based on graph drawing algorithms and tools. However, social networks are a special kind of graph, in the sense that interpretation of the displayed relationships is heavily dependent on context. Context, in its turn, is given by attributes associated with graph elements, such as individual nodes, edges, and groups of edges, as well as by the nature of the connections between individuals. In most systems, attributes of individuals and communities are not taken into consideration during graph layout, except to derive weights for force-based placement strategies. This paper proposes a set of novel tools for displaying and exploring social networks based on attribute and connectivity mappings. These properties are employed to lay out nodes on the plane via multidimensional projection techniques. For the attribute mapping, we show that node proximity in the layout corresponds to similarity in attributes, making it easy to locate similar groups of nodes. The projection based on connectivity yields an initial placement that forgoes force-based or graph analysis algorithms, reaching a meaningful layout in one pass. When a force algorithm is then applied to this initial mapping, the final layout presents better properties than conventional force-based approaches. Numerical evaluations show a number of advantages of pre-mapping points via projections. A user evaluation demonstrates that these tools promote ease of manipulation as well as fast identification of concepts and associations that cannot be easily expressed by conventional graph visualization alone. To allow better space usage for complex networks, a graph mapping onto the surface of a sphere is also implemented.
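The attribute mapping described in this abstract can be illustrated with a minimal sketch: project per-node attribute vectors onto the plane with a plain PCA-style linear projection (a simple stand-in for the multidimensional projection techniques the paper uses), so that similarity in attributes becomes spatial proximity in the layout. The attribute values below are invented for illustration only.

```python
import numpy as np

# Hypothetical attribute vectors for 6 social-network nodes
# (e.g. age, activity level, community score) -- illustrative data only.
attrs = np.array([
    [25, 0.9, 1.0],
    [26, 0.8, 1.1],
    [60, 0.1, 5.0],
    [58, 0.2, 4.8],
    [40, 0.5, 3.0],
    [41, 0.6, 2.9],
], dtype=float)

def project_to_plane(X):
    """Project attribute vectors to 2-D via PCA so that attribute
    similarity becomes spatial proximity in the node layout."""
    Xc = X - X.mean(axis=0)           # centre the data
    cov = np.cov(Xc, rowvar=False)    # attribute covariance matrix
    vals, vecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    top2 = vecs[:, -2:][:, ::-1]      # two principal directions
    return Xc @ top2                  # 2-D node positions

layout = project_to_plane(attrs)
# Nodes with similar attributes (0 and 1) land closer together than
# nodes with dissimilar attributes (0 and 2).
d_similar = np.linalg.norm(layout[0] - layout[1])
d_dissimilar = np.linalg.norm(layout[0] - layout[2])
```

Such an initial placement could then be refined by a force-directed step, as the abstract describes.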
Abstract:
We investigated the effects of texture gradient and of the position of the test stimulus relative to the horizon on the perception of relative size. Using the staircase method, 50 participants adjusted the size of a bar presented above, below, or on the horizon until it was perceived as the same size as a bar presented in the lower visual field. Stimuli were presented for 100 ms under five background conditions. Perspective gradient contributed more to the overestimation of relative size than compression gradient. The sizes of objects that intercepted the horizon line were overestimated. The visual system was very effective at extracting information from perspective depth cues, doing so even during very brief exposures.
Abstract:
Background: In epidemiological surveys, good reliability among the examiners regarding the caries detection method is essential. However, training and calibrating those examiners is an arduous task, because it involves several patients who are examined many times. To facilitate this step, we aimed to propose a laboratory methodology to simulate the examinations performed to detect caries lesions using the International Caries Detection and Assessment System (ICDAS) in epidemiological surveys. Methods: A benchmark examiner conducted all training sessions. A total of 67 exfoliated primary teeth, ranging from sound to extensively cavitated, were set in seven arch models to simulate complete mouths in the primary dentition. Sixteen examiners (graduate students) evaluated all surfaces of the teeth under illumination, using buccal mirrors and a ball-ended probe, on two occasions, using only the coronal primary caries scores of the ICDAS. As the reference standard, two different examiners assessed the proximal surfaces by direct visual inspection, classifying them as sound, with non-cavitated lesions, or with cavitated lesions. Afterwards, the teeth were sectioned in the bucco-lingual direction, and the examiners assessed the sections under a stereomicroscope, classifying the occlusal and smooth surfaces according to lesion depth. Inter-examiner reproducibility was evaluated using weighted kappa. Sensitivities and specificities were calculated at two thresholds: all lesions, and advanced lesions (cavitated lesions on proximal surfaces and lesions reaching the dentine on occlusal and smooth surfaces). Conclusion: The methodology proposed for the training and calibration of several examiners designated for epidemiological surveys of dental caries in preschool children using the ICDAS is feasible, permitting the assessment of the reliability and accuracy of the examiners prior to the survey's development.
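The reliability and accuracy statistics this abstract mentions (weighted kappa, sensitivity, specificity at a threshold) can be sketched as follows. The scores and the linear weighting scheme are illustrative assumptions, not the study's actual data.

```python
import numpy as np

def weighted_kappa(r1, r2, n_cat, weights="linear"):
    """Inter-examiner agreement for ordinal scores (e.g. ICDAS 0-6)."""
    O = np.zeros((n_cat, n_cat))
    for a, b in zip(r1, r2):            # observed agreement matrix
        O[a, b] += 1
    O /= O.sum()
    E = np.outer(O.sum(axis=1), O.sum(axis=0))   # chance agreement
    i, j = np.indices((n_cat, n_cat))
    W = np.abs(i - j) if weights == "linear" else (i - j) ** 2
    return 1 - (W * O).sum() / (W * E).sum()

def sens_spec(truth, test):
    """Sensitivity and specificity of lesion detection at a threshold."""
    truth, test = np.asarray(truth, bool), np.asarray(test, bool)
    tp = (truth & test).sum(); fn = (truth & ~test).sum()
    tn = (~truth & ~test).sum(); fp = (~truth & test).sum()
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical scores for four surfaces; perfect agreement gives kappa 1.
kappa_perfect = weighted_kappa([0, 1, 2, 3], [0, 1, 2, 3], n_cat=7)
# Hypothetical reference standard vs. examiner calls at one threshold.
sens, spec = sens_spec([1, 1, 0, 0], [1, 0, 0, 0])
```

The linear weights penalize a one-category disagreement less than a large one, which is why weighted kappa suits ordinal ICDAS scores.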
Abstract:
Abstract Background: Catching an object is a complex movement that involves not only programming but also effective motor coordination. Such behavior is related to the activation and recruitment of cortical regions that participate in the sensorimotor integration process. This study aimed to elucidate the cortical mechanisms involved in anticipatory actions during a task of catching an object in free fall. Methods: Quantitative electroencephalography (qEEG) was recorded using a 20-channel EEG system while 20 healthy right-handed participants performed the ball-catching task. We used EEG coherence analysis to investigate subdivisions of the alpha (8-12 Hz) and beta (12-30 Hz) bands, which are related to cognitive processing and sensorimotor integration. Results: We found main effects for the block factor: for alpha-1, coherence decreased from the first to the sixth block, while the opposite occurred for alpha-2 and beta-2, with coherence increasing across blocks. Conclusion: We conclude that, to perform our task successfully, which involved anticipatory processes (i.e. feedback mechanisms), subjects exhibited substantial involvement of sensorimotor and associative areas, possibly due to the organization of information needed to process visuospatial parameters and then catch the falling object.
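The band-limited EEG coherence measure used in this study can be sketched on synthetic data, assuming SciPy's `scipy.signal.coherence` (Welch-averaged magnitude-squared coherence). The two "electrode" signals, sampling rate, and band edges below are illustrative, not the study's recordings.

```python
import numpy as np
from scipy.signal import coherence

fs = 250.0                       # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)

# Two synthetic "electrode" signals sharing a 10 Hz (alpha-band) rhythm,
# plus independent noise at each site.
shared = np.sin(2 * np.pi * 10 * t)
x = shared + 0.5 * rng.standard_normal(t.size)
y = shared + 0.5 * rng.standard_normal(t.size)

f, Cxy = coherence(x, y, fs=fs, nperseg=512)

def band_mean(f, C, lo, hi):
    """Mean magnitude-squared coherence within a frequency band."""
    mask = (f >= lo) & (f <= hi)
    return C[mask].mean()

alpha = band_mean(f, Cxy, 8, 12)   # band containing the shared rhythm
beta = band_mean(f, Cxy, 20, 30)   # band containing only independent noise
```

Coherence is bounded in [0, 1]; the alpha band, where the two sites share a rhythm, shows higher coherence than the beta band here.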
Abstract:
Abstract Background: Despite new brain imaging techniques that have improved the study of the processes underlying human decision-making, to the best of our knowledge very few studies have attempted to investigate brain activity during medical diagnostic processing. We investigated brain electroencephalography (EEG) activity associated with diagnostic decision-making in the realm of veterinary medicine, using X-rays as a fundamental auxiliary test. EEG signals were analysed using Principal Component Analysis (PCA) and logistic regression. Results: The principal component analysis revealed patterns that accounted for 85% of the total variance in the EEG activity recorded while veterinary doctors read a clinical history, examined an X-ray image pertinent to a medical case, and selected among alternative diagnostic hypotheses. Two of these patterns are proposed to be associated with visual processing and the executive control of the task. The remaining patterns are proposed to be related to the reasoning process that occurs during diagnostic decision-making. Conclusions: The PCA was successful in disclosing the different patterns of brain activity associated with hypothesis triggering and handling (pattern P1), identification uncertainty and prevalence assessment (pattern P3), and hypothesis plausibility calculation (pattern P2). Logistic regression analysis was successful in disclosing the brain activity associated with clinical reasoning success, and together with regression analysis it showed that clinical practice reorganizes the neural circuits supporting clinical reasoning.
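The PCA step reported here (a small number of components accounting for 85% of the variance) can be sketched with a plain SVD-based PCA on a synthetic multichannel matrix. The channel count, sample count, and underlying sources are invented stand-ins for the recorded EEG.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "EEG": 1000 time samples x 8 channels, driven by 3 latent
# sources plus small sensor noise (illustrative stand-in for real data).
sources = rng.standard_normal((1000, 3))
mixing = rng.standard_normal((3, 8))
eeg = sources @ mixing + 0.1 * rng.standard_normal((1000, 8))

def pca_explained_variance(X, k):
    """Fraction of total variance captured by the first k components."""
    Xc = X - X.mean(axis=0)                       # centre channels
    _, s, _ = np.linalg.svd(Xc, full_matrices=False)
    var = s ** 2                                  # component variances
    return var[:k].sum() / var.sum()

frac = pca_explained_variance(eeg, 3)  # close to 1: few patterns dominate
```

When a few components explain most of the variance, as in the study, the component score time courses can then be interpreted as candidate patterns of task-related activity.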
Abstract:
Cognitive dysfunction is found in patients with brain tumors, and there is a need to determine whether it can be replicated in an experimental model. In the present study, the object recognition (OR) paradigm was used to investigate cognitive performance in nude mice, which represent one of the most important animal models available for studying human tumors in vivo. Mice with orthotopic xenografts of the human U87MG glioblastoma cell line were trained at 9, 14, and 18 days (D9, D14, and D18, respectively) after implantation of 5×10^5 cells. At D9, the mice showed normal behavior when tested 90 min or 24 h after training and compared to control nude mice. Animals at D14 were still able to discriminate between familiar and novel objects, but exhibited a lower performance than animals at D9. Total impairment of OR memory was observed when animals were evaluated at D18. These alterations were detected earlier than any other clinical symptoms, which were observed only 22-24 days after tumor implantation. There was a significant correlation between the discrimination index (d2) and time after tumor implantation, as well as between d2 and tumor volume. These data indicate that the OR task is a robust test to identify early behavioral alterations caused by glioblastoma in nude mice. In addition, these results suggest that the OR task can be a reliable tool to test the efficacy of new therapies against these tumors.
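The discrimination index d2 used in the object recognition task is conventionally computed from exploration times as (novel − familiar) / (novel + familiar). A minimal sketch, with hypothetical exploration times:

```python
def discrimination_index(t_novel, t_familiar):
    """d2 index of the object recognition task: ranges from -1 to 1;
    positive values indicate preference for the novel object
    (i.e. intact recognition memory)."""
    total = t_novel + t_familiar
    if total == 0:
        raise ValueError("no exploration recorded")
    return (t_novel - t_familiar) / total

# Hypothetical exploration times (seconds) in the test phase.
d2_intact = discrimination_index(18.0, 6.0)     # clear novelty preference
d2_impaired = discrimination_index(10.0, 10.0)  # no discrimination
```

A d2 near zero, as in the second case, corresponds to the total impairment the study observed at D18.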
Abstract:
Low-cost real-time depth cameras offer new sensors for a wide field of applications beyond the gaming world. Other active research scenarios, such as surveillance, can take advantage of the capabilities offered by this kind of sensor, which integrates depth and visual information. In this paper, we present a system that operates in a novel application context for these devices: troublesome scenarios where illumination conditions can suffer sudden changes. We focus on the people-counting problem with re-identification and trajectory analysis.
Abstract:
Re-identification is commonly accomplished using appearance features based on salient points and color information. In this paper, we present a study on the use of different features obtained exclusively from depth images captured with RGB-D cameras. The results achieved, using simple geometric features extracted in a top-view setup, seem to provide useful descriptors for the re-identification task.
Abstract:
The automatic extraction of biometric descriptors of anonymous people is a challenging scenario in camera networks. This task is typically accomplished by making use of visual information. Calibrated RGB-D sensors make possible the extraction of point cloud information. We present a novel approach for semantic description and re-identification of people using individual point cloud information. The proposal combines the use of simple geometric features with point cloud features based on surface normals.
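The "simple geometric features" idea running through these depth-camera abstracts can be sketched for a top-view setup: from a depth map, segment the person as the region significantly closer to the ceiling-mounted sensor than the floor, then derive height and top-view area. The depth map, floor distance, and threshold below are all hypothetical.

```python
import numpy as np

# Hypothetical top-view depth map (millimetres from a ceiling-mounted
# sensor); smaller values are closer to the camera, i.e. taller points.
floor_mm = 3000.0
depth = np.full((120, 160), floor_mm)
depth[40:80, 60:100] = 1250.0          # a person standing in the scene

def simple_geometric_features(depth, floor_mm, person_thresh_mm=300):
    """Extract toy geometric descriptors (height, top-view area) of the
    kind usable for depth-only re-identification in a top-view setup."""
    person = depth < floor_mm - person_thresh_mm   # segment the person
    height_mm = floor_mm - depth[person].min()     # tallest point
    area_px = int(person.sum())                    # occupied pixels
    return height_mm, area_px

height_mm, area_px = simple_geometric_features(depth, floor_mm)
```

Such depth-only descriptors are illumination-invariant, which is why they suit the sudden-lighting-change scenarios described above.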
Abstract:
In the collective imagination, a robot is a human-like machine, like the androids of science fiction. However, the robots you will encounter most frequently are machines that do work that is too dangerous, boring, or onerous. Most of the robots in the world are of this type. They can be found in the automotive, medical, manufacturing, and space industries. A robot, therefore, is a system that contains sensors, control systems, manipulators, power supplies, and software, all working together to perform a task. The development and use of such systems is an active area of research, and one of the main problems is the development of interaction skills with the surrounding environment, including the ability to grasp objects. To perform this task the robot needs to sense the environment and acquire information about the object: the physical attributes that may influence a grasp. Humans solve this grasping problem easily thanks to their past experience, which is why many researchers approach it from a machine learning perspective, finding a grasp for an object using information about already known objects. But humans can select the best grasp from a vast repertoire not only by considering the physical attributes of the object but also in order to obtain a certain effect. This is why our study in the area of robot manipulation focuses on grasping and on integrating symbolic tasks with data gained through sensors. The learning model is based on a Bayesian network that encodes the statistical dependencies between the data collected by the sensors and the symbolic task. This data representation has several advantages: it takes into account the uncertainty of the real world, allowing the model to deal with sensor noise; it encodes a notion of causality; and it provides a unified network for learning.
Since the current network is hand-built from human expert knowledge, it is very interesting to implement an automated method to learn its structure: as more tasks and object features are introduced in the future, a complex network design based only on human expert knowledge can become unreliable. Since structure learning algorithms present some weaknesses, the goal of this thesis is to analyse the real data used in the network modelled by the human expert, implement a feasible structure learning approach, and compare the results with the network designed by the expert in order to possibly enhance it.
Abstract:
Prehension is an act of coordinated reaching and grasping. The reaching component is concerned with bringing the hand to the object to be grasped (transport phase); the grasping component refers to the shaping of the hand according to the object's features (grasping phase) (Jeannerod, 1981). Reaching and grasping involve different muscles, proximal and distal respectively, and are controlled by different parietofrontal circuits (Jeannerod et al., 1995): a medial circuit, involving areas of the superior parietal lobule and dorsal premotor area 6 (PMd) (dorsomedial visual stream), is mainly concerned with reaching; a lateral circuit, involving the inferior parietal lobule and ventral premotor area 6 (PMv) (dorsolateral visual stream), with grasping. Area V6A is located in the caudalmost part of the superior parietal lobule, so it belongs to the dorsomedial visual stream; it contains neurons sensitive to visual stimuli (Galletti et al. 1993, 1996, 1999) as well as cells sensitive to the direction of gaze (Galletti et al. 1995) and cells showing saccade-related activity (Nakamura et al. 1999; Kutz et al. 2003). Area V6A also contains arm-reaching neurons, likely involved in the control of the direction of the arm during movements towards objects in the peripersonal space (Galletti et al. 1997; Fattori et al. 2001). The present results confirm this finding and demonstrate that during reach-to-grasp movements V6A neurons are also modulated by the orientation of the wrist. Experiments were approved by the Bioethical Committee of the University of Bologna and were performed in accordance with national laws on the care and use of laboratory animals and with the European Communities Council Directive of 24 November 1986 (86/609/EEC), recently revised by the Council of Europe guidelines (Appendix A of Convention ETS 123). Experiments were performed on two awake Macaca fascicularis.
Each monkey was trained to sit in a primate chair with the head restrained and to perform reaching and grasping arm movements in complete darkness while gazing at a small fixation point. The object to be grasped was a handle that could have different orientations. We recorded neural activity from 163 neurons of the anterior parietal sulcus; 116/163 (71%) neurons were modulated by the reach-to-grasp task during the execution of the forward movement toward the target (epoch MOV), 111/163 (68%) during the pulling of the handle (epoch HOLD), and 102/163 during the execution of the backward movement (epoch M2) (t-test, p ≤ 0.05). About 45% of the tested cells turned out to be sensitive to the orientation of the handle (one-way ANOVA, p ≤ 0.05). To study how the distal components of the movement, such as hand preshaping during the reach toward the handle, could influence the neuronal discharge, we compared the neuronal activity during reaching movements towards the same spatial location in reach-to-point and reach-to-grasp tasks. Both tasks required proximal arm movements; only the reach-to-grasp task required distal movements to orient the wrist and shape the hand to grasp the handle. 56% of V6A cells showed significant differences in neural discharge (one-way ANOVA, p ≤ 0.05) between the reach-to-point and reach-to-grasp tasks during MOV, 54% during HOLD, and 52% during M2. These data show that reaching and grasping are processed by the same population of neurons, providing evidence that the coordination of reaching and grasping takes place much earlier than previously thought, i.e., in the parieto-occipital cortex. The data reported here are in agreement with the results of lesions to the medial posterior parietal cortex in both monkeys and humans, and with recent imaging data in humans, all of which indicate a functional coupling in the control of reaching and grasping by the medial parietofrontal circuit.
Abstract:
The research activity carried out during the PhD course focused on the development of mathematical models of some cognitive processes and their validation against data in the literature, with a double aim: i) to achieve a better interpretation and explanation of the large amount of data obtained on these processes by different methodologies (electrophysiological recordings in animals; neuropsychological, psychophysical, and neuroimaging studies in humans); ii) to exploit model predictions and results to guide future research and experiments. In particular, the research activity focused on two projects: 1) the first concerns the development of networks of neural oscillators, in order to investigate the mechanisms of synchronization of neural oscillatory activity during cognitive processes such as object recognition, memory, language, and attention; 2) the second concerns the mathematical modelling of multisensory integration processes (e.g. visual-acoustic), which occur in several cortical and subcortical regions (in particular in a subcortical structure named the Superior Colliculus (SC)) and which are fundamental for orienting motor and attentive responses to stimuli in the external world. This activity was carried out in collaboration with the Center for Studies and Researches in Cognitive Neuroscience of the University of Bologna (in Cesena) and the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA). PART 1. The representation of objects in a number of cognitive functions, like perception and recognition, involves distributed processes across different cortical areas. One of the main neurophysiological questions concerns how the coordination between these disparate areas is realized, so as to group together the characteristics of the same object (the binding problem) and to keep segregated the properties belonging to different objects simultaneously present (the segmentation problem).
Different theories have been proposed to address these questions (Barlow, 1972). One of the most influential is the so-called "assembly coding" theory, postulated by Singer (2003), according to which 1) an object is well described by a few fundamental properties, processed in different, distributed cortical areas; 2) recognition of the object is realized by means of the simultaneous activation of the cortical areas representing its different features; 3) groups of properties belonging to different objects are kept separated in the time domain. In Chapter 1.1 and Chapter 1.2 we present two neural network models for object recognition based on the "assembly coding" hypothesis. These models are networks of Wilson-Cowan oscillators which exploit: i) two high-level "Gestalt rules" (the similarity and previous-knowledge rules) to realize the functional link between elements of different cortical areas representing properties of the same object (the binding problem); ii) the synchronization of neural oscillatory activity in the γ-band (30-100 Hz) to segregate in time the representations of different objects simultaneously present (the segmentation problem). These models are able to recognize and reconstruct multiple simultaneous external objects, even in difficult cases (some wrong or missing features, shared features, superimposed noise). In Chapter 1.3 the previous models are extended to realize a semantic memory, in which sensory-motor representations of objects are linked with words. To this aim, the previously developed network, devoted to the representation of objects as collections of sensory-motor features, is reciprocally linked with a second network devoted to the representation of words (the lexical network). Synapses linking the two networks are trained via a time-dependent Hebbian rule during a training period in which individual objects are presented together with the corresponding words.
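The Wilson-Cowan oscillator that forms the building block of these networks can be sketched as a single excitatory-inhibitory unit integrated with Euler's method. The sigmoid form and the parameter set below follow the classic Wilson and Cowan (1972) formulation; treat this as an illustrative single-unit sketch, not the thesis's full coupled network.

```python
import numpy as np

def sigmoid(x, a, theta):
    """Logistic response function of the Wilson-Cowan model (shifted so
    that zero input gives zero output)."""
    return 1 / (1 + np.exp(-a * (x - theta))) - 1 / (1 + np.exp(a * theta))

def wilson_cowan(T=50.0, dt=0.01, c1=16, c2=12, c3=15, c4=3, P=1.25, Q=0.0):
    """Euler integration of one excitatory (E) / inhibitory (I) unit."""
    n = int(T / dt)
    E = np.zeros(n)
    I = np.zeros(n)
    for k in range(n - 1):
        Se = sigmoid(c1 * E[k] - c2 * I[k] + P, a=1.3, theta=4.0)
        Si = sigmoid(c3 * E[k] - c4 * I[k] + Q, a=2.0, theta=3.7)
        E[k + 1] = E[k] + dt * (-E[k] + (1 - E[k]) * Se)
        I[k + 1] = I[k] + dt * (-I[k] + (1 - I[k]) * Si)
    return E, I

E, I = wilson_cowan()
```

In the thesis's models, many such units are coupled, and binding versus segmentation is read off from which units oscillate in phase with each other.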
Simulation results demonstrate that, during the retrieval phase, the network can deal with the simultaneous presence of objects (from sensory-motor inputs) and words (from linguistic inputs), can correctly associate objects with words, and can segment objects even in the presence of incomplete information. Moreover, the network can realize some semantic links among words representing objects with shared features. These results support the idea that semantic memory can be described as an integrated process whose content is retrieved by the co-activation of different multimodal regions. In perspective, extended versions of this model may be used to test conceptual theories and to provide a quantitative assessment of existing data (for instance, concerning patients with neural deficits). PART 2. The ability of the brain to integrate information from different sensory channels is fundamental to the perception of the external world (Stein et al., 1993). It is well documented that a number of extraprimary areas have neurons capable of such a task; one of the best known of these is the superior colliculus (SC). This midbrain structure receives auditory, visual, and somatosensory inputs from different subcortical and cortical areas, and is involved in the control of orientation to external events (Wallace et al., 1993). SC neurons respond to each of these sensory inputs separately, but are also capable of integrating them (Stein et al., 1993), so that the response to combined multisensory stimuli is greater than that to the individual component stimuli (enhancement). This enhancement is proportionately greater if the modality-specific paired stimuli are weaker (the principle of inverse effectiveness). Several studies have shown that the capability of SC neurons to engage in multisensory integration requires inputs from the cortex, primarily the anterior ectosylvian sulcus (AES) but also the rostral lateral suprasylvian sulcus (rLS).
If these cortical inputs are deactivated, the response of SC neurons to cross-modal stimulation is no different from that evoked by the most effective of its individual component stimuli (Jiang et al., 2001). This phenomenon can be better understood through mathematical models. The use of mathematical models and neural networks can place the mass of data that has been accumulated about this phenomenon and its underlying circuitry into a coherent theoretical structure. In Chapter 2.1 a simple neural network model of this structure is presented; the model is able to reproduce a large number of SC behaviours, such as multisensory enhancement, multisensory and unisensory depression, and inverse effectiveness. In Chapter 2.2 this model is improved by incorporating more neurophysiological knowledge about the neural circuitry underlying SC multisensory integration, in order to suggest possible physiological mechanisms through which it is effected. This endeavour was realized in collaboration with Professor B.E. Stein and Doctor B. Rowland during the six-month period spent at the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA), within the Marco Polo Project. The model includes four distinct unisensory areas devoted to a topological representation of external stimuli. Two of them represent subregions of the AES (i.e., FAES, an auditory area, and AEV, a visual area) and send descending inputs to the ipsilateral SC; the other two represent subcortical areas (one auditory and one visual) projecting ascending inputs to the same SC. Different competitive mechanisms, realized by means of populations of interneurons, are used in the model to reproduce the different behaviour of SC neurons under conditions of cortical activation and deactivation.
The model, with a single set of parameters, is able to mimic the behaviour of SC multisensory neurons in response to very different stimulus conditions (multisensory enhancement, inverse effectiveness, within- and cross-modal suppression of spatially disparate stimuli), with the cortex functional and deactivated, and with a particular type of membrane receptor (NMDA receptors) active or inhibited. All these results agree with the data reported in Jiang et al. (2001) and in Binns and Salt (1996). The model suggests that non-linearities in neural responses and in synaptic (excitatory and inhibitory) connections can explain the fundamental aspects of multisensory integration, and it provides a biologically plausible hypothesis about the underlying circuitry.
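The two SC phenomena recurring in this abstract, multisensory enhancement and inverse effectiveness, are conventionally quantified as the percent gain of the combined-stimulus response over the best unisensory response (the Meredith and Stein convention). A minimal sketch with hypothetical spike counts:

```python
def multisensory_enhancement(cm, best_uni):
    """Percent enhancement of the multisensory (combined) response over
    the best unisensory response: 100 * (CM - SMmax) / SMmax."""
    return 100.0 * (cm - best_uni) / best_uni

# Hypothetical mean spike counts per trial (illustrative values only).
weak = multisensory_enhancement(cm=6.0, best_uni=2.0)     # weak stimuli
strong = multisensory_enhancement(cm=24.0, best_uni=20.0)  # strong stimuli
```

The weak stimulus pair yields a proportionately larger gain than the strong pair, which is exactly the inverse-effectiveness pattern the model reproduces.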
Abstract:
Generic programming is likely to become a new challenge for a critical mass of developers. Therefore, it is crucial to refine the support for generic programming in mainstream object-oriented languages, both at the design and at the implementation level, as well as to suggest novel ways to exploit the additional degree of expressiveness made available by genericity. This study is meant to provide a contribution towards bringing Java genericity to a more mature stage with respect to mainstream programming practice, by increasing the effectiveness of its implementation and by revealing its full expressive power in real-world scenarios. With respect to the current research setting, the main contribution of the thesis is twofold. First, we propose a revised implementation of Java generics that greatly increases the expressiveness of the Java platform by adding reification support for generic types. Secondly, we show how Java genericity can be leveraged in a real-world case study in the context of multi-paradigm language integration. Several approaches have been proposed to overcome the lack of reification of generic types in the Java programming language. Existing approaches tackle the problem by defining new translation techniques that allow for a runtime representation of generics and wildcards. Unfortunately, most approaches suffer from several problems: heterogeneous translations are known to be problematic when considering reification of generic methods and wildcards; on the other hand, more sophisticated techniques requiring changes in the Java runtime support reified generics through a true language extension (where clauses), so that backward compatibility is compromised.
In this thesis we develop a sophisticated type-passing technique to address the problem of reification of generic types in the Java programming language; this approach (first pioneered by the so-called EGO translator) is here turned into a full-blown solution which reifies generic types inside the Java Virtual Machine (JVM) itself, thus overcoming both the performance penalties and the compatibility issues of the original EGO translator. Java-Prolog integration. Integrating object-oriented and declarative programming has been the subject of several research efforts and corresponding technologies. Such proposals come in two flavours: either attempting to join the two paradigms, or simply providing an interface library for accessing Prolog's declarative features from a mainstream object-oriented language such as Java. Both solutions have drawbacks: in the case of hybrid languages featuring both object-oriented and logic traits, the resulting language is typically too complex, making mainstream application development a harder task; in the case of library-based integration approaches there is no true language integration, and some "boilerplate code" has to be written to bridge the paradigm mismatch. In this thesis we develop a framework called PatJ which promotes seamless exploitation of Prolog programming in Java. A sophisticated usage of generics/wildcards makes it possible to define a precise mapping between object-oriented and declarative features. PatJ defines a hierarchy of classes in which the bidirectional semantics of Prolog terms is modelled directly at the level of the Java generic type system.
Abstract:
Visual search and oculomotor behaviour are believed to be very relevant for athletic performance, especially in sports requiring refined visuo-motor coordination skills. Modern coaches believe that a correct visuo-motor strategy may be part of advanced training programs. In this thesis, two experiments are reported in which the gaze behaviour of expert and novice athletes was investigated while they performed a real sport-specific task. The experiments concern two different sports: judo and soccer. In each experiment, the number of fixations, fixation locations, and mean fixation duration (ms) were considered. An observational analysis was done at the end of the paper to examine perceptual differences between near and far space. Purpose: The aim of the judo study was to delineate differences in gaze behaviour characteristics between a population of athletes and one of non-athletes. Aspects specifically investigated were: search rate, search order, and viewing time across different conditions in a real-world task. The second study was aimed at identifying gaze behaviour in varsity soccer goalkeepers while facing a penalty kick executed with the instep and the inside of the foot. An attempt was then made to compare the gaze strategies of expert judoka and soccer goalkeepers, in order to delineate possible differences related to the different conditions of reacting to events occurring in near (peripersonal) or far (extrapersonal) space. Judo Methods: A sample of 9 judoka (black belt) and 11 near-judoka (white belt) was studied. Eye movements were recorded at 500 Hz using a video-based eye tracker (EyeLink II). Each subject participated in 40 sessions of about 40 minutes. Gaze behaviour was characterized as the average number of locations fixated per trial, the average number of fixations per trial, and the mean fixation duration. Soccer Methods: Seven (n = 7) intermediate-level males volunteered for the experiment. The kickers and goalkeepers had at least varsity-level soccer experience.
The vision-in-action (VIA) system (Vickers 1996; Vickers 2007) was used to collect the coupled gaze and motor behaviours of the goalkeepers. This system integrated input from a mobile eye-tracking system (Applied Sciences Laboratories) with an external video of the goalkeeper's saving actions. The goalkeepers faced 30 penalty kicks on a synthetic pitch in accordance with FIFA (2008) laws. Judo Results: Results indicate that the expert group differed significantly from the near-expert group in fixation duration and number of fixations per trial. The expert judoka used a less exhaustive search strategy, involving fewer fixations of longer duration than their novice counterparts, and focused on central regions of the body. The results also showed that in both defence and attack situations the expert group made a greater number of transitions than their novice counterparts. Soccer Results: We found a significant main effect on the number of locations fixated across outcome (goal/save) but not across foot contact (instep/inside). Participants spent more time fixating the areas of interest in instep than in inside kicks, and in goal than in save situations. The means and standard errors of the search strategy as a function of foot contact and outcome indicate that most gaze behaviour started and finished on ball interest areas. Conclusions: Expert goalkeepers tend to spend more time in inside-save than in instep-save penalties, a difference that was reversed in scored penalty kicks. The judo results show that differences in visual behaviour related to the level of expertise appear mainly when the test presentation is continuous, lasts for a relatively long period of time, and presents a high level of uncertainty with regard to the chronology and nature of events. Expert judo performers "anchor" the fovea on central regions of the scene (lapel and face) while using peripheral vision to monitor opponents' limb movements.
The differences between judo and soccer gaze strategies are discussed in the light of physiological and neuropsychological differences between near- and far-space perception.
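The gaze metrics used across both experiments (number of fixations per trial, mean fixation duration, number of distinct locations fixated) can be sketched from a list of fixation records. The areas of interest and durations below are hypothetical, chosen to echo the judo study's lapel/face anchoring.

```python
# Hypothetical fixation records for one trial: (area_of_interest, duration_ms)
fixations = [
    ("face", 420), ("lapel", 610), ("face", 380),
    ("arm", 150), ("lapel", 530),
]

def gaze_metrics(fix):
    """Search-rate style metrics: fixation count, mean fixation
    duration (ms), and number of distinct locations fixated."""
    n = len(fix)
    mean_ms = sum(d for _, d in fix) / n
    locations = len({aoi for aoi, _ in fix})
    return n, mean_ms, locations

n, mean_ms, locations = gaze_metrics(fixations)
```

Fewer fixations of longer mean duration over fewer locations, as in this toy trial, is the "less exhaustive" expert search pattern the judo results describe.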
Abstract:
The ability to integrate into a unified percept sensory inputs deriving from different sensory modalities but related to the same external event is called multisensory integration, and it might represent an efficient mechanism of sensory compensation when a sensory modality is damaged by a cortical lesion. This hypothesis is discussed in the present dissertation. Experiment 1 explored the role of the superior colliculus (SC) in multisensory integration, testing patients with collicular lesions, patients with subcortical lesions not involving the SC, and healthy control subjects in a multisensory task. The results revealed that patients with collicular lesions, paralleling the evidence from animal studies, demonstrated a loss of multisensory enhancement, in contrast with control subjects, providing the first lesion evidence in humans of the essential role of the SC in mediating audio-visual integration. Experiment 2 investigated the role of the cortex in mediating multisensory integrative effects, inducing virtual lesions by inhibitory theta-burst stimulation over the temporo-parietal cortex, the occipital cortex, and the posterior parietal cortex, and demonstrating that only the temporo-parietal cortex was causally involved in modulating the integration of audio-visual stimuli at the same spatial location. Given the involvement of the retino-colliculo-extrastriate pathway in mediating audio-visual integration, the functional sparing of this circuit in hemianopic patients is extremely relevant in the perspective of a multisensory-based approach to the recovery of unisensory deficits. Experiment 3 demonstrated the spared functional activity of this circuit in a group of hemianopic patients, revealing the presence of implicit recognition of the fearful content of unseen visual stimuli (i.e. affective blindsight), an ability mediated by the retino-colliculo-extrastriate pathway and its connections with the amygdala.
Finally, Experiment 4 provided evidence that systematic audio-visual stimulation is effective in inducing long-lasting clinical improvements in patients with visual field defects, and revealed that the activity of the spared retino-colliculo-extrastriate pathway is responsible for the observed clinical amelioration, as suggested by the greater improvement, found in tasks highly demanding in terms of spatial orienting, in patients with cortical lesions limited to the occipital cortex compared to patients with lesions extending to other cortical areas. Overall, the present results indicate that multisensory integration is mediated by the retino-colliculo-extrastriate pathway and that systematic audio-visual stimulation, by activating this spared neural circuit, is able to affect orientation towards the blind field in hemianopic patients and therefore might constitute an effective and innovative approach to the rehabilitation of unisensory visual impairments.