19 results for Visual Object Identification Task

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance:

100.00%

Publisher:

Abstract:

Introduction and aims of the research
Nitric oxide (NO) and endocannabinoids (eCBs) are major retrograde messengers involved in synaptic plasticity (long-term potentiation, LTP, and long-term depression, LTD) in many brain areas (including the hippocampus and neocortex), as well as in learning and memory processes. NO is synthesized by NO synthase (NOS) in response to increased cytosolic Ca2+ and mainly exerts its functions through soluble guanylate cyclase (sGC) and cGMP production. The main target of cGMP is the cGMP-dependent protein kinase (PKG). Activity-dependent release of eCBs in the CNS leads to the activation of the Gαi/o-coupled cannabinoid receptor 1 (CB1) at both glutamatergic and inhibitory synapses. The perirhinal cortex (Prh) is a multimodal associative cortex of the temporal lobe, critically involved in visual recognition memory; LTD is proposed to be the cellular correlate underlying this form of memory. Cholinergic neurotransmission has been shown to play a critical role in both visual recognition memory and LTD in Prh. Moreover, visual recognition memory is one of the main cognitive functions impaired in the early stages of Alzheimer's disease. The main aim of my research was to investigate the role of NO and eCBs in synaptic plasticity in rat Prh and in visual recognition memory. Part of this research was dedicated to the study of synaptic transmission and plasticity in a murine model (Tg2576) of Alzheimer's disease.
Methods
Field potential recordings. Extracellular field potential recordings were carried out in horizontal Prh slices from Sprague-Dawley or Dark Agouti juvenile (p21-35) rats. LTD was induced with a single train of 3000 pulses delivered at 5 Hz (10 min), or via bath application of carbachol (Cch; 50 μM) for 10 min. LTP was induced by theta-burst stimulation (TBS). In addition, input/output curves and 5Hz-LTD were examined in Prh slices from 3-month-old Tg2576 mice and littermate controls.
Behavioural experiments. The spontaneous novel object exploration task was performed in intra-Prh bilaterally cannulated adult Dark Agouti rats. Drugs or vehicle (saline) were infused directly into the Prh 15 min before training to verify the role of nNOS and CB1 in visual recognition memory acquisition. Object recognition memory was tested 20 min and 24 h after the end of the training phase.
Results
Electrophysiological experiments in Prh slices from juvenile rats showed that 5Hz-LTD is due to activation of the NOS/sGC/PKG pathway, whereas Cch-LTD relies on NOS/sGC but not PKG activation. By contrast, NO does not appear to be involved in LTP in this preparation. Furthermore, I found that eCBs are involved in LTP induction, but not in basal synaptic transmission, 5Hz-LTD or Cch-LTD. Behavioural experiments demonstrated that blockade of nNOS impairs rat visual recognition memory tested at 24 hours, but not at 20 min; blockade of CB1, however, did not affect visual recognition memory acquisition at either time point. In 3-month-old Tg2576 mice, deficits in basal synaptic transmission and 5Hz-LTD were observed compared to littermate controls.
Conclusions
The results obtained in Prh slices from juvenile rats indicate that NO and CB1 play a role in the induction of LTD and LTP, respectively. These results are supported by the observation that nNOS, but not CB1, is involved in visual recognition memory acquisition. The preliminary results obtained in the murine model of Alzheimer's disease indicate that deficits in synaptic transmission and plasticity occur very early in Prh; further investigation is required to characterize the molecular mechanisms underlying these deficits.

Relevance:

100.00%

Publisher:

Abstract:

Crowding is defined as the impairment in identifying a central target caused by adding visual distractors around it. Some studies have suggested the presence of a marked crowding effect in developmental dyslexia (e.g. Atkinson, 1991; Spinelli et al., 2002). Inspired by Spinelli et al.'s (2002) experimental design, we explored the hypothesis that the crowding effect may affect dyslexics' response times (RTs) and accuracy in identification tasks involving words, pseudowords, illegal non-words and symbol strings. Moreover, our study aimed to clarify the relationship between the crowding phenomenon and the word-reading process from a cross-language perspective. For this purpose we studied twenty-two French and twenty-two Italian dyslexics (forty-four in total), compared with forty-four subjects matched for reading level (22 French and 22 Italian) and forty-four subjects matched for chronological age (22 French and 22 Italian). All children were tested on reading and cognitive abilities. Results showed no differences between French and Italian participants, suggesting that performance was homogeneous across languages. All dyslexic children were significantly impaired in word and pseudoword reading compared with their normal-reading controls. In the identification task used to assess the crowding effect, both accuracy and RTs showed a lexicality effect: words were recognized more accurately and faster than pseudowords, non-words and symbol strings. Moreover, compared with normal readers, dyslexics' RTs and accuracy were impaired only for verbal, not for non-verbal, material; these results are in line with the phonological hypothesis (Griffiths & Snowling, 2002; Snowling, 2000, 2006). RTs revealed a general crowding effect (RTs in the crowded condition were slower than those recorded in the isolated condition) affecting the performance of all subjects. This effect, however, turned out not to be specific to dyslexics. The data did not reveal a significant effect of language, allowing the results to be generalized. We also analyzed the performance of two subgroups of dyslexics, categorized according to their reading abilities. The two subgroups produced different results regarding the crowding effect and the type of material, suggesting that it is meaningful to also take into account the heterogeneity of the dyslexia disorder. Finally, we analyzed the relationship of the identification task with both reading and cognitive abilities. In conclusion, this study points out the importance of comparing the visual task performance of dyslexic participants with that of their reading-level-matched controls. This approach may improve our understanding of the potential causal link between crowding and reading (Goswami, 2003).

Relevance:

100.00%

Publisher:

Abstract:

This thesis was aimed at verifying the role of the superior colliculus (SC) in human spatial orienting. To do so, subjects performed two experimental tasks that have been shown to involve SC activation in animals: a multisensory integration task (Experiments 1 and 2) and a visual target selection task (Experiment 3). To investigate this topic in humans, we took advantage of the neurophysiological finding that retinal S-cones do not send projections to the collicular and magnocellular pathways. In Experiment 1, subjects performed a simple reaction-time task in which they were required to respond as quickly as possible to any sensory stimulus (visual, auditory or bimodal audio-visual). The visual stimulus could be an S-cone stimulus (invisible to the collicular and magnocellular pathways) or a long-wavelength stimulus (visible to the SC). Results showed that with S-cone stimuli, the reaction time (RT) distribution was explained simply by probability summation, indicating that the redundant auditory and visual channels are independent. Conversely, with red long-wavelength stimuli, visible to the SC, the RT distribution reflected nonlinear neural summation, which constitutes evidence of integration of different sensory information. We also demonstrated that when audio-visual stimuli were presented at fixation, so that the spatial orienting component of the task was reduced, neural summation was possible regardless of stimulus colour. Together, these findings support a pivotal role of the SC in mediating multisensory spatial integration in humans when behaviour involves spatial orienting responses. Since previous studies have shown an anatomical asymmetry of the fibres projecting to the SC from the two hemiretinas, Experiment 2 investigated temporo-nasal asymmetry in multisensory integration. To do so, subjects performed monocularly the same task as in Experiment 1. When spatially coincident audio-visual stimuli were visible to the SC (i.e. red stimuli), the redundant target effect (RTE) depended on a neural coactivation mechanism, suggesting integration of multisensory information. With stimuli invisible to the SC (i.e. purple stimuli), the RTE depended only on simple statistical facilitation, in which the two sensory stimuli are processed by independent channels. Finally, we demonstrated that the multisensory integration effect was stronger for stimuli presented to the temporal hemifield than to the nasal hemifield. Taken together, these findings suggest that multisensory stimulation can be differentially effective depending on specific stimulus parameters. Experiment 3 verified the role of the SC in target selection using a colour-oddity search task comprising stimuli either visible or invisible to the collicular and magnocellular pathways. Subjects were required to make a saccade toward a target that could be presented alone or with three distractors of another colour (either S-cone or long-wavelength). With S-cone distractors, invisible to the SC, localization errors were similar to those observed in the distractor-free condition. Conversely, with long-wavelength distractors, visible to the SC, saccadic localization error and variability were significantly greater than in either the distractor-free condition or the S-cone distractor condition. Our results clearly indicate that the SC plays a direct role in visual target selection in humans. Overall, our results indicate that the SC plays an important role in mediating spatial orienting responses, both when covert orienting (Experiments 1 and 2) and when overt orienting (Experiment 3) is required.
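The contrast drawn above between probability summation (independent channels) and neural coactivation is commonly assessed with Miller's race-model inequality: coactivation is inferred when the bimodal RT distribution exceeds the sum of the unimodal distributions at some time point. A minimal sketch in Python, using hypothetical RT samples rather than the thesis data:

```python
# Sketch of the race-model (probability-summation) test used to decide
# whether bimodal RTs reflect independent channels or neural coactivation.
# All RT values below are hypothetical illustrations.

def ecdf(samples, t):
    """Empirical cumulative probability P(RT <= t)."""
    return sum(1 for s in samples if s <= t) / len(samples)

def race_model_violation(rt_audio, rt_visual, rt_bimodal, t):
    """Miller's inequality: coactivation is implied when the bimodal CDF
    exceeds the sum of the unimodal CDFs (capped at 1) at time t."""
    bound = min(1.0, ecdf(rt_audio, t) + ecdf(rt_visual, t))
    return ecdf(rt_bimodal, t) > bound

# Hypothetical RT samples (ms); bimodal RTs much faster than either channel
rt_a = [320, 340, 360, 380, 400]
rt_v = [310, 330, 350, 370, 390]
rt_av = [250, 260, 270, 280, 290]

print(race_model_violation(rt_a, rt_v, rt_av, 300))
```

In practice the inequality is evaluated across many quantiles of the RT distributions, not at a single time point as in this toy check.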

Relevance:

100.00%

Publisher:

Abstract:

Although hysteroscopy with endometrial biopsy is the gold standard for diagnosing intrauterine pathology, the hysteroscopist's experience is crucial for a correct diagnosis. Deep Learning (DL), as an artificial intelligence technique, could help overcome this limitation. Few studies with preliminary results are available, and research evaluating the performance of DL models in identifying intrauterine lesions, and the possible contribution of clinical factors, is lacking.
Objective: To develop a DL model to identify and classify intrauterine pathologies from hysteroscopic images.
Methods: A single-centre observational retrospective cohort study was performed on a consecutive series of hysteroscopic cases from patients with intrauterine pathology confirmed by histological examination, carried out at Policlinico S. Orsola. The hysteroscopic images were used to build a DL model for the classification and identification of intracavitary lesions with and without the aid of clinical factors (age, menopause, AUB, hormone therapy and tamoxifen). As study outcomes we computed the diagnostic metrics of the DL model in classifying and identifying intrauterine lesions with and without the aid of clinical factors.
Results: We reviewed 1,500 images from 266 cases: 186 patients had benign focal lesions, 25 benign diffuse lesions and 55 preneoplastic/neoplastic lesions. For both classification and identification, the best performance was achieved with the aid of clinical factors: overall, for classification, precision of 80.11%, recall of 80.11%, specificity of 90.06%, F1 score of 80.11% and accuracy of 86.74%. For identification we obtained an overall detection rate of 85.82%, precision of 93.12%, recall of 91.63% and F1 score of 92.37%.
Conclusions: The DL model achieved low performance in identifying and classifying intrauterine lesions from hysteroscopic images. Although the best diagnostic performance was obtained with the aid of specific clinical factors, the improvement was modest.
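For reference, the diagnostic metrics quoted above (precision, recall, specificity, F1, accuracy) follow directly from confusion-matrix counts. A minimal sketch; the counts used here are hypothetical, not the study's data:

```python
# Standard binary-classification metrics from confusion-matrix counts.
# tp/fp/tn/fn values below are hypothetical illustrations.

def metrics(tp, fp, tn, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                 # also called sensitivity
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return precision, recall, specificity, f1, accuracy

p, r, s, f1, acc = metrics(tp=80, fp=20, tn=90, fn=10)
print(f"precision={p:.2f} recall={r:.2f} specificity={s:.2f} "
      f"f1={f1:.2f} accuracy={acc:.2f}")
```

In the multi-class setting of the study, such metrics would be computed per class and then averaged.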

Relevance:

40.00%

Publisher:

Abstract:

This thesis deals with Visual Servoing and the closely related disciplines of projective geometry, image processing, robotics and non-linear control. More specifically, the work addresses the problem of controlling a robotic manipulator through one of the most widely used Visual Servoing techniques: Image-Based Visual Servoing (IBVS). In Image-Based Visual Servoing the robot is driven by an on-line feedback control loop that is closed directly in the 2D space of the camera sensor. The work considers the case of a monocular system with a single camera mounted on the robot end effector (eye-in-hand configuration). Through IBVS the system can be positioned with respect to a fixed 3D target by minimizing the differences between its initial view and its goal view, corresponding respectively to the initial and goal system configurations: the robot's Cartesian motion is thus generated only by means of visual information. However, the execution of a positioning control task by IBVS is not straightforward, because singularity problems may occur and local minima may be reached where the current image is very close to the target one but the 3D positioning task is far from being fulfilled; this happens in particular for large camera displacements, when the initial and goal target views are markedly different. To overcome the singularity and local-minima drawbacks while maintaining the robustness of IBVS with respect to modelling and camera calibration errors, suitable image path planning can be exploited. This work deals with the problem of generating suitable image-plane trajectories for the tracked points of the servoing control scheme (a trajectory being a path plus a time law). The generated image-plane paths must be feasible, i.e. compliant with the rigid-body motion of the camera with respect to the object, so as to avoid image Jacobian singularities and local-minima problems. In addition, the planned image trajectories must generate camera velocity screws that are smooth and within the allowed bounds of the robot. We show that a scaled 3D motion planning algorithm can be devised to generate feasible image-plane trajectories. Since the image paths are generated off-line, it is also possible to tune the planning parameters so as to keep the target inside the camera field of view even when, in some unfortunate cases, the feature target points would otherwise leave the camera image due to the 3D robot motion. To test the validity of the proposed approach, both experimental and simulation results are reported, taking into account also the influence of noise on the path planning strategy. The experiments were carried out with a 6-DOF anthropomorphic manipulator with a FireWire camera installed on its end effector: the results demonstrate the good performance and feasibility of the proposed approach.
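The feedback loop described above is classically written as v = -λ L⁺ e, where e is the image-plane error, L the interaction matrix (image Jacobian) and L⁺ its pseudo-inverse. A minimal sketch of this control law for point features, with hypothetical feature coordinates and depths (not the thesis setup):

```python
import numpy as np

# Classical IBVS law for point features: the camera velocity screw is
# obtained from the image error through the pseudo-inverse of the
# stacked interaction matrix. Coordinates and depths are hypothetical.

def interaction_matrix(x, y, Z):
    """2x6 interaction matrix of a normalized image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

def ibvs_velocity(features, goal, depths, lam=0.5):
    """v = -lambda * pinv(L) @ e, stacking one 2x6 block per tracked point."""
    e = (features - goal).reshape(-1)                 # image-plane error
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    return -lam * np.linalg.pinv(L) @ e

# Four hypothetical tracked points, goal at the image centre
features = np.array([[0.1, 0.0], [0.0, 0.1], [-0.1, 0.0], [0.0, -0.1]])
goal = np.zeros((4, 2))
v = ibvs_velocity(features, goal, depths=[1.0] * 4)
print(v.shape)  # camera velocity screw (vx, vy, vz, wx, wy, wz)
```

The singularity and local-minima problems the thesis targets arise precisely when the stacked L loses rank or when e is small while the 3D pose error is not.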

Relevance:

40.00%

Publisher:

Abstract:

Visual correspondence is a key computer vision task that aims at identifying projections of the same 3D point in images taken either from different viewpoints or at different time instants. This task has been the subject of intense research activity in recent years in scenarios such as object recognition, motion detection, stereo vision, pattern matching and image registration. The approaches proposed in the literature typically aim at advancing the state of the art by increasing the reliability, accuracy or computational efficiency of visual correspondence algorithms. The research work carried out during the Ph.D. course and presented in this dissertation deals with three specific visual correspondence problems: fast pattern matching, stereo correspondence and robust image matching. The dissertation presents original contributions to the theory of visual correspondence, as well as applications dealing with 3D reconstruction and multi-view video surveillance.

Relevance:

40.00%

Publisher:

Abstract:

Salient stimuli, such as sudden changes in the environment or emotional stimuli, generate a priority signal that captures attention even when they are task-irrelevant. However, to achieve goal-driven behaviour, we need to ignore them and avoid being distracted. It is generally agreed that top-down factors can help us filter out distractors. A fundamental question is how, and at which stage of processing, the rejection of distractors is achieved. Two circumstances under which the allocation of attention to distractors is thought to be prevented are when distractors occur at an unattended location (as determined by the deployment of endogenous spatial attention) and when the amount of visual working memory resources is reduced by an ongoing task. The present thesis focuses on the impact of these factors on three sources of distraction, namely auditory and visual onsets (Experiments 1 and 2, respectively) and pleasant scenes (Experiment 3). In the first two studies we recorded neural correlates of distractor processing (i.e., event-related potentials), whereas in the last study we used interference effects on behaviour (i.e., a slowing of response times in a simultaneous task) to index distraction. Endogenous spatial attention reduced distraction by auditory stimuli and eliminated distraction by visual onsets. By contrast, visual working memory load affected only the processing of visual onsets. Emotional interference persisted even when scenes always occurred at unattended locations and when visual working memory was loaded. Altogether, these findings indicate that the ability to detect the location of salient task-irrelevant sounds and to identify the affective significance of natural scenes is preserved even when visual working memory resources are reduced by an ongoing task and endogenous attention is directed elsewhere.
However, these results also indicate that the processing of auditory and visual distractors is not entirely automatic.

Relevance:

30.00%

Publisher:

Abstract:

Prehension is an act of coordinated reaching and grasping. The reaching component is concerned with bringing the hand to the object to be grasped (transport phase); the grasping component refers to the shaping of the hand according to the object's features (grasping phase) (Jeannerod, 1981). Reaching and grasping involve different muscles (proximal and distal, respectively) and are controlled by different parietofrontal circuits (Jeannerod et al., 1995): a medial circuit, involving areas of the superior parietal lobule and dorsal premotor area 6 (PMd) (the dorsomedial visual stream), is mainly concerned with reaching; a lateral circuit, involving the inferior parietal lobule and ventral premotor area 6 (PMv) (the dorsolateral visual stream), with grasping. Area V6A is located in the caudalmost part of the superior parietal lobule, so it belongs to the dorsomedial visual stream; it contains neurons sensitive to visual stimuli (Galletti et al. 1993, 1996, 1999) as well as cells sensitive to the direction of gaze (Galletti et al. 1995) and cells showing saccade-related activity (Nakamura et al. 1999; Kutz et al. 2003). Area V6A also contains arm-reaching neurons, likely involved in the control of the direction of the arm during movements towards objects in peripersonal space (Galletti et al. 1997; Fattori et al. 2001). The present results confirm this finding and demonstrate that during reach-to-grasp V6A neurons are also modulated by the orientation of the wrist. Experiments were approved by the Bioethical Committee of the University of Bologna and were performed in accordance with national laws on the care and use of laboratory animals and with the European Communities Council Directive of 24 November 1986 (86/609/EEC), recently revised by the Council of Europe guidelines (Appendix A of Convention ETS 123). Experiments were performed in two awake Macaca fascicularis. Each monkey was trained to sit in a primate chair with the head restrained and to perform reaching and grasping arm movements in complete darkness while gazing at a small fixation point. The object to be grasped was a handle that could have different orientations. We recorded neural activity from 163 neurons of the anterior parietal sulcus; 116/163 (71%) neurons were modulated by the reach-to-grasp task during the execution of forward movements toward the target (epoch MOV), 111/163 (68%) during the pulling of the handle (epoch HOLD) and 102/163 during the execution of backward movements (epoch M2) (t-test, p ≤ 0.05). About 45% of the tested cells turned out to be sensitive to the orientation of the handle (one-way ANOVA, p ≤ 0.05). To study how the distal components of the movement, such as hand preshaping during the reach toward the handle, influence the neuronal discharge, we compared neuronal activity during reaching movements towards the same spatial location in reach-to-point and reach-to-grasp tasks. Both tasks required proximal arm movements; only the reach-to-grasp task required distal movements to orient the wrist and shape the hand to grasp the handle. 56% of V6A cells showed significant differences in neural discharge (one-way ANOVA, p ≤ 0.05) between the reach-to-point and reach-to-grasp tasks during MOV, 54% during HOLD and 52% during M2. These data show that reaching and grasping are processed by the same population of neurons, providing evidence that the coordination of reaching and grasping takes place much earlier than previously thought, i.e., in the parieto-occipital cortex. The data reported here are in agreement with the results of lesions to the medial posterior parietal cortex in both monkeys and humans, and with recent imaging data in humans, all indicating a functional coupling in the control of reaching and grasping by the medial parietofrontal circuit.
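The orientation sensitivity reported above rests on a one-way ANOVA across handle orientations. A minimal sketch of the F statistic such a test computes, on hypothetical firing rates rather than the recorded data:

```python
# One-way ANOVA F statistic: ratio of between-group to within-group
# mean squares. Spike rates below are hypothetical illustrations.

def one_way_anova_F(groups):
    """F = (SS_between / df_between) / (SS_within / df_within)."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    df_between, df_within = k - 1, n - k
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical firing rates (spikes/s) of one cell for three handle orientations
rates = [[12, 14, 13, 15], [22, 24, 23, 25], [17, 19, 18, 20]]
print(one_way_anova_F(rates))
```

A large F (relative to the F distribution with the given degrees of freedom) corresponds to the p ≤ 0.05 criterion used in the study.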

Relevance:

30.00%

Publisher:

Abstract:

The research activity carried out during the PhD course focused on the development of mathematical models of some cognitive processes and their validation against data in the literature, with a double aim: i) to achieve a better interpretation and explanation of the great amount of data obtained on these processes with different methodologies (electrophysiological recordings in animals; neuropsychological, psychophysical and neuroimaging studies in humans); ii) to exploit model predictions and results to guide future research and experiments. In particular, the research activity focused on two projects: 1) the development of networks of neural oscillators, in order to investigate the mechanisms of synchronization of neural oscillatory activity during cognitive processes such as object recognition, memory, language and attention; 2) the mathematical modelling of multisensory integration processes (e.g. visual-acoustic), which occur in several cortical and subcortical regions (in particular in a subcortical structure named the Superior Colliculus (SC)) and which are fundamental for orienting motor and attentive responses to stimuli in the external world. This activity was carried out in collaboration with the Center for Studies and Researches in Cognitive Neuroscience of the University of Bologna (in Cesena) and the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA). PART 1. The representation of objects in a number of cognitive functions, such as perception and recognition, involves distributed processes in different cortical areas. One of the main neurophysiological questions concerns how the correlation between these disparate areas is realized, in order to succeed in grouping together the characteristics of the same object (the binding problem) and in keeping segregated the properties belonging to different objects simultaneously present (the segmentation problem).
Different theories have been proposed to address these questions (Barlow, 1972). One of the most influential is the so-called "assembly coding" theory, postulated by Singer (2003), according to which 1) an object is well described by a few fundamental properties, processed in different and distributed cortical areas; 2) recognition of the object is realized by means of the simultaneous activation of the cortical areas representing its different features; 3) groups of properties belonging to different objects are kept separated in the time domain. In Chapter 1.1 and Chapter 1.2 we present two neural network models for object recognition, based on the "assembly coding" hypothesis. These models are networks of Wilson-Cowan oscillators which exploit: i) two high-level "Gestalt rules" (the similarity and prior-knowledge rules), to realize the functional link between elements of different cortical areas representing properties of the same object (binding problem); ii) the synchronization of neural oscillatory activity in the γ-band (30-100 Hz), to segregate in time the representations of different objects simultaneously present (segmentation problem). These models are able to recognize and reconstruct multiple simultaneous external objects, even in difficult cases (some wrong or missing features, shared features, superimposed noise). In Chapter 1.3 the previous models are extended to realize a semantic memory, in which sensory-motor representations of objects are linked with words. To this aim, the previously developed network, devoted to the representation of objects as collections of sensory-motor features, is reciprocally linked with a second network devoted to the representation of words (lexical network). Synapses linking the two networks are trained via a time-dependent Hebbian rule, during a training period in which individual objects are presented together with the corresponding words.
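As a point of reference, a single Wilson-Cowan excitatory/inhibitory unit (the building block of such oscillator networks) can be integrated with a simple Euler scheme. The coupling parameters below are illustrative, not those of the thesis models, and in suitable parameter ranges such a unit produces the rhythmic activity the networks exploit:

```python
import math

# Euler integration of one Wilson-Cowan excitatory/inhibitory unit:
#   dE/dt = -E + S(a*E - b*I + P),   dI/dt = -I + S(c*E - d*I)
# Parameter values are illustrative, not the thesis models'.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def simulate(steps=2000, dt=0.01, P=1.25):
    """Return the excitatory activity trace E(t); E, I stay within [0, 1]."""
    E, I = 0.1, 0.1
    a, b, c, d = 16.0, 12.0, 15.0, 3.0   # illustrative coupling strengths
    trace = []
    for _ in range(steps):
        dE = -E + sigmoid(a * E - b * I + P)
        dI = -I + sigmoid(c * E - d * I)
        E += dt * dE
        I += dt * dI
        trace.append(E)
    return trace

trace = simulate()
print(min(trace), max(trace))
```

In the thesis models, many such units are coupled so that their γ-band oscillations synchronize for features of the same object and desynchronize across objects.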
Simulation results demonstrate that, during the retrieval phase, the network can deal with the simultaneous presence of objects (from sensory-motor inputs) and words (from linguistic inputs), can correctly associate objects with words, and can segment objects even in the presence of incomplete information. Moreover, the network can realize some semantic links among words representing objects with shared features. These results support the idea that semantic memory can be described as an integrated process whose content is retrieved by the co-activation of different multimodal regions. In perspective, extended versions of this model may be used to test conceptual theories and to provide a quantitative assessment of existing data (for instance concerning patients with neural deficits). PART 2. The ability of the brain to integrate information from different sensory channels is fundamental to perception of the external world (Stein et al., 1993). It is well documented that a number of extraprimary areas have neurons capable of such a task; one of the best known of these is the superior colliculus (SC). This midbrain structure receives auditory, visual and somatosensory inputs from different subcortical and cortical areas, and is involved in the control of orientation to external events (Wallace et al., 1993). SC neurons respond to each of these sensory inputs separately, but the SC is also capable of integrating them (Stein et al., 1993), so that the response to combined multisensory stimuli is greater than that to the individual component stimuli (enhancement). This enhancement is proportionately greater when the modality-specific paired stimuli are weaker (the principle of inverse effectiveness). Several studies have shown that the capability of SC neurons to engage in multisensory integration requires inputs from cortex, primarily the anterior ectosylvian sulcus (AES), but also the rostral lateral suprasylvian sulcus (rLS).
If these cortical inputs are deactivated, the response of SC neurons to cross-modal stimulation is no different from that evoked by the most effective of its individual component stimuli (Jiang et al., 2001). This phenomenon can be better understood through mathematical models: the use of mathematical models and neural networks can place the mass of data that has been accumulated about this phenomenon and its underlying circuitry into a coherent theoretical structure. In Chapter 2.1 a simple neural network model of this structure is presented; this model is able to reproduce a large number of SC behaviours, such as multisensory enhancement, multisensory and unisensory depression, and inverse effectiveness. In Chapter 2.2 this model is improved by incorporating more neurophysiological knowledge about the neural circuitry underlying SC multisensory integration, in order to suggest possible physiological mechanisms through which it is effected. This endeavour was realized in collaboration with Professor B.E. Stein and Doctor B. Rowland during the six-month period spent at the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA), within the Marco Polo Project. The model includes four distinct unisensory areas devoted to a topological representation of external stimuli. Two of them represent subregions of the AES (i.e., FAES, an auditory area, and AEV, a visual area) and send descending inputs to the ipsilateral SC; the other two represent subcortical areas (one auditory and one visual) projecting ascending inputs to the same SC. Different competitive mechanisms, realized by means of populations of interneurons, are used in the model to reproduce the different behaviour of SC neurons under cortical activation and deactivation. The model, with a single set of parameters, is able to mimic the behaviour of SC multisensory neurons in response to very different stimulus conditions (multisensory enhancement, inverse effectiveness, within- and cross-modal suppression of spatially disparate stimuli), with cortex functional or deactivated, and with a particular type of membrane receptor (NMDA receptors) active or inhibited. All these results agree with the data reported in Jiang et al. (2001) and in Binns and Salt (1996). The model suggests that non-linearities in neural responses and in synaptic (excitatory and inhibitory) connections can explain the fundamental aspects of multisensory integration, and it provides a biologically plausible hypothesis about the underlying circuitry.
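The role of output non-linearities invoked above can be illustrated with a toy model: a saturating input-output function applied to summed unisensory drives yields superadditive responses whose relative gain shrinks as the inputs grow, i.e. multisensory enhancement with inverse effectiveness. The nonlinearity and numbers below are illustrative, not the thesis model:

```python
import math

# Toy demonstration of multisensory enhancement and inverse effectiveness
# via a saturating (sigmoidal) input-output function of a model SC neuron.
# Parameter values are hypothetical illustrations.

def response(drive, gain=1.0, theta=3.0):
    """Saturating firing-rate function of the summed input drive."""
    return 1.0 / (1.0 + math.exp(-gain * (drive - theta)))

def enhancement(a, v):
    """Percent gain of the multisensory response over the best
    unisensory response (the standard enhancement index)."""
    multi = response(a + v)
    best = max(response(a), response(v))
    return 100.0 * (multi - best) / best

weak = enhancement(1.0, 1.0)    # weak inputs: large superadditive gain
strong = enhancement(3.0, 3.0)  # strong inputs: smaller relative gain
print(weak > strong)
```

Weak paired stimuli fall on the steep part of the sigmoid, so their summed drive is amplified far more, relative to the best unisensory response, than strong stimuli near saturation.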

Relevance:

30.00%

Publisher:

Abstract:

Generic programming is likely to become a new challenge for a critical mass of developers. Therefore, it is crucial to refine the support for generic programming in mainstream Object-Oriented languages — both at the design and at the implementation level — as well as to suggest novel ways to exploit the additional degree of expressiveness made available by genericity. This study is meant to provide a contribution towards bringing Java genericity to a more mature stage with respect to mainstream programming practice, by increasing the effectiveness of its implementation, and by revealing its full expressive power in real world scenario. With respect to the current research setting, the main contribution of the thesis is twofold. First, we propose a revised implementation for Java generics that greatly increases the expressiveness of the Java platform by adding reification support for generic types. Secondly, we show how Java genericity can be leveraged in a real world case-study in the context of the multi-paradigm language integration. Several approaches have been proposed in order to overcome the lack of reification of generic types in the Java programming language. Existing approaches tackle the problem of reification of generic types by defining new translation techniques which would allow for a runtime representation of generics and wildcards. Unfortunately most approaches suffer from several problems: heterogeneous translations are known to be problematic when considering reification of generic methods and wildcards. On the other hand, more sophisticated techniques requiring changes in the Java runtime, supports reified generics through a true language extension (where clauses) so that backward compatibility is compromised. 
In this thesis we develop a sophisticated type-passing technique to address the problem of reification of generic types in the Java programming language; this approach — first pioneered by the so-called EGO translator — is here turned into a full-blown solution that reifies generic types inside the Java Virtual Machine (JVM) itself, thus overcoming both the performance penalties and the compatibility issues of the original EGO translator. Java-Prolog integration: integrating Object-Oriented and declarative programming has been the subject of several research efforts and corresponding technologies. Such proposals come in two flavours: either attempting to join the two paradigms, or simply providing an interface library for accessing Prolog's declarative features from a mainstream Object-Oriented language such as Java. Both solutions, however, have drawbacks: hybrid languages featuring both Object-Oriented and logic traits are typically too complex, making mainstream application development harder; library-based integration approaches offer no true language integration, and “boilerplate code” has to be written to bridge the paradigm mismatch. In this thesis we develop a framework called PatJ which promotes seamless exploitation of Prolog programming in Java. A sophisticated use of generics/wildcards allows a precise mapping between Object-Oriented and declarative features to be defined. PatJ defines a hierarchy of classes in which the bidirectional semantics of Prolog terms is modelled directly at the level of the Java generic type system.

Relevância:

30.00%

Publicador:

Resumo:

Visual search and oculomotor behaviour are believed to be highly relevant to athlete performance, especially in sports requiring refined visuo-motor coordination skills. Modern coaches believe that a correct visuo-motor strategy may be part of advanced training programs. This thesis reports two experiments in which the gaze behaviour of expert and novice athletes was investigated while they performed a real, sport-specific task. The experiments concern two different sports: judo and soccer. In each experiment, the number of fixations, fixation locations, and mean fixation duration (ms) were considered. An observational analysis was then performed to examine perceptual differences between near and far space. Purpose: The aim of the judo study was to delineate differences in gaze behaviour between a population of athletes and one of non-athletes. The aspects specifically investigated were search rate, search order, and viewing time across different conditions in a real-world task. The second study aimed to identify gaze behaviour in varsity soccer goalkeepers facing a penalty kick executed with the instep and with the inside of the foot. An attempt was then made to compare the gaze strategies of expert judoka and soccer goalkeepers, in order to delineate possible differences related to the different conditions of reacting to events occurring in near (peripersonal) or far (extrapersonal) space. Judo Methods: A sample of 9 expert judoka (black belt) and 11 near-expert judoka (white belt) was studied. Eye movements were recorded at 500 Hz using a video-based eye tracker (EyeLink II). Each subject participated in 40 sessions for about 40 minutes. Gaze behaviour was quantified as the average number of locations fixated per trial, the average number of fixations per trial, and the mean fixation duration. Soccer Methods: Seven (n = 7) intermediate-level males volunteered for the experiment. The kickers and goalkeepers had at least varsity-level soccer experience.
The vision-in-action (VIA) system (Vickers 1996; Vickers 2007) was used to collect the coupled gaze and motor behaviours of the goalkeepers. This system integrated input from a mobile eye-tracking system (Applied Sciences Laboratories) with an external video of the goalkeeper's saving actions. The goalkeepers took 30 penalty kicks on a synthetic pitch in accordance with FIFA (2008) laws. Judo Results: The expert group differed significantly from the near-expert group in fixation duration and in the number of fixations per trial. The expert judoka used a less exhaustive search strategy, involving fewer fixations of longer duration than their novice counterparts, and focused on central regions of the body. In both defence and attack situations, the expert group made a greater number of gaze transitions than their novice counterparts. Soccer Results: We found a significant main effect for the number of locations fixated across outcome (goal/save), but not across foot contact (instep/inside). Participants spent more time fixating the areas of interest in instep than in inside kicks, and in goal than in save situations. Means and standard errors of the search strategy as a function of foot contact and outcome indicate that most gaze sequences started and finished on the ball interest area. Conclusions: Expert goalkeepers tended to spend more time fixating in inside-save than in instep-save penalties, a difference that was reversed for scored penalty kicks. The judo results show that differences in visual behaviour related to the level of expertise appear mainly when the test presentation is continuous, lasts for a relatively long period of time, and presents a high level of uncertainty with regard to the chronology and nature of events. Expert judoka “anchor” the fovea on central regions of the scene (lapel and face) while using peripheral vision to monitor the opponent's limb movements.
The differences between judo and soccer gaze strategies are discussed in the light of the physiological and neuropsychological differences between near- and far-space perception.
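The fixation measures used in both studies (number of fixations per trial and mean fixation duration) are conventionally derived from raw gaze samples with a dispersion-threshold algorithm (I-DT). The following sketch assumes 500 Hz (t_ms, x_px, y_px) samples such as EyeLink output; the dispersion and duration thresholds are illustrative, not those used in the thesis.

```python
def fixation_stats(samples, max_dispersion=30.0, min_duration=100.0):
    """Dispersion-threshold (I-DT) fixation detection.

    samples: list of (t_ms, x_px, y_px) gaze samples, e.g. from a 500 Hz tracker.
    Returns (number of fixations, mean fixation duration in ms).
    Thresholds (pixels, ms) are illustrative assumptions.
    """
    durations, win = [], []
    for s in samples:
        win.append(s)
        xs = [p[1] for p in win]
        ys = [p[2] for p in win]
        # dispersion = horizontal spread + vertical spread of the window
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
            # dispersion exceeded: close the window just before this sample
            if len(win) > 1 and win[-2][0] - win[0][0] >= min_duration:
                durations.append(win[-2][0] - win[0][0])
            win = [s]
    # close any fixation still open at the end of the trial
    if len(win) > 1 and win[-1][0] - win[0][0] >= min_duration:
        durations.append(win[-1][0] - win[0][0])
    n = len(durations)
    return n, (sum(durations) / n if n else 0.0)
```

A trial containing two stable gaze positions separated by a saccade yields two fixations, and the per-trial averages reported in the studies follow by averaging these statistics across trials.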

Relevância:

30.00%

Publicador:

Resumo:

The ability to integrate into a unified percept sensory inputs deriving from different sensory modalities, but related to the same external event, is called multisensory integration, and it might represent an efficient mechanism of sensory compensation when a sensory modality is damaged by a cortical lesion. This hypothesis is discussed in the present dissertation. Experiment 1 explored the role of the superior colliculus (SC) in multisensory integration, testing patients with collicular lesions, patients with subcortical lesions not involving the SC, and healthy control subjects in a multisensory task. The results revealed that patients with collicular lesions, paralleling the evidence from animal studies, showed a loss of multisensory enhancement, in contrast with control subjects, providing the first lesion evidence in humans of the essential role of the SC in mediating audio-visual integration. Experiment 2 investigated the role of the cortex in mediating multisensory integrative effects by inducing virtual lesions with inhibitory theta-burst stimulation over temporo-parietal, occipital, and posterior parietal cortex, demonstrating that only the temporo-parietal cortex is causally involved in modulating the integration of audio-visual stimuli at the same spatial location. Given the involvement of the retino-colliculo-extrastriate pathway in mediating audio-visual integration, the functional sparing of this circuit in hemianopic patients is extremely relevant from the perspective of a multisensory-based approach to the recovery of unisensory deficits. Experiment 3 demonstrated the spared functional activity of this circuit in a group of hemianopic patients, revealing implicit recognition of the fearful content of unseen visual stimuli (i.e. affective blindsight), an ability mediated by the retino-colliculo-extrastriate pathway and its connections with the amygdala.
Finally, Experiment 4 provided evidence that systematic audio-visual stimulation is effective in inducing long-lasting clinical improvements in patients with visual field defects, and that the activity of the spared retino-colliculo-extrastriate pathway is responsible for the observed clinical amelioration: in tasks highly demanding in terms of spatial orienting, greater improvement was observed in patients with cortical lesions limited to the occipital cortex than in patients with lesions extending to other cortical areas. Overall, the present results indicate that multisensory integration is mediated by the retino-colliculo-extrastriate pathway and that systematic audio-visual stimulation, by activating this spared neural circuit, is able to affect orientation towards the blind field in hemianopic patients and might therefore constitute an effective and innovative approach to the rehabilitation of unisensory visual impairments.

Relevância:

30.00%

Publicador:

Resumo:

Reaching and grasping an object is an action that can be performed in the light, under visual guidance, as well as in darkness, under proprioceptive control only. Area V6A is a visuomotor area involved in the control of reaching movements. Besides neurons activated by the execution of reaching movements, V6A shows passive somatosensory and visual responses. This suggests for V6A a multimodal capability of integrating sensory and motor-related information. We wanted to know whether this integration occurs during reaching movements, and in the present study we tested whether visual feedback influences the reaching activity of V6A neurons. To better address this question, we interpreted the neural data in the light of the kinematics of reaching performance. We used an experimental paradigm that examined V6A responses against two different visual backgrounds, light and dark. In these conditions, the monkey performed an instructed-delay reaching task, moving the hand towards different target positions located in the peripersonal space. During the execution of the reaching task, visual feedback produced a variety of modulation patterns, some of them unexpected. Since reach-related discharges had already been demonstrated in V6A in the absence of visual feedback, we expected two types of neural modulation: 1) the addition of light to the environment enhancing the reach-related discharges recorded in the dark; or 2) light leaving the neural response unmodified. Unexpectedly, the results show a complex pattern of modulation that argues against a simple additive interaction between visual and motor-related signals.

Relevância:

30.00%

Publicador:

Resumo:

We usually perform actions in a dynamic environment, and changes in the location of the target of an upcoming action require both covert shifts of attention and an update of the motor plan. In this study we tested whether, similarly to oculomotor areas that provide signals for overt and covert attention shifts, covert attention shifts modulate activity in cortical area V6A, which provides a bridge between visual signals and arm-motor control. We performed single-cell recordings in monkeys trained to fixate straight ahead while shifting attention outward to a peripheral cue and inward again to the fixation point. We found that neurons in V6A are influenced by spatial attention, demonstrating that visual, motor, and attentional responses can occur in combination in single V6A neurons. This modulation, in an area primarily involved in visuo-motor transformations for reaching, suggests that reach-related regions may also contribute directly to the shifts of spatial attention necessary to plan and control goal-directed arm movements. Moreover, to test whether V6A is causally involved in these processes, we performed a human study using on-line repetitive transcranial magnetic stimulation over the putative human V6A (pV6A) during an attention task and a reaching task requiring covert shifts of attention and reaching movements towards cued targets in space. We demonstrate that pV6A is causally involved in the reorienting of attention for target detection, and that this process interferes with the execution of reaching movements towards unattended targets. The current findings suggest a direct involvement of the action-related dorso-medial visual stream in attentional processes, and a more specific role of V6A in attention reorienting. We therefore propose that attention signals are used by V6A to rapidly update the current motor plan, or the ongoing action, when a behaviourally relevant object unexpectedly appears at an unattended location.

Relevância:

30.00%

Publicador:

Resumo:

Nowadays robotic applications are widespread and most manipulation tasks are solved efficiently. However, Deformable Objects (DOs) still represent a major limitation for robots. The main difficulty in DO manipulation is dealing with shape and dynamics uncertainties, which makes model-based approaches excessively computationally complex and sensory data difficult to interpret. This thesis reports research activities aimed at addressing several applications in the robotic manipulation and sensing of Deformable Linear Objects (DLOs), with particular focus on electric wires. Throughout this work, significant effort was devoted to effective strategies for analysing sensory signals with various machine learning algorithms. The first part of the thesis concerns the wire terminals, i.e. their detection, grasping, and insertion. First, a pipeline that integrates vision and tactile sensing is developed; further improvements are then proposed for each module. A novel procedure is proposed to gather and label massive amounts of training images for object detection with minimal human intervention, and a generic Convolutional Neural Network object detector is extended to predict orientation. The insertion task is also extended by developing a closed-loop controller capable of guiding a longer, curved segment of wire through a hole, where the contact forces are estimated by means of a Recurrent Neural Network. In the second part of the thesis, the interest shifts to the DLO shape. Robotic reshaping of a DLO is addressed by means of a sequence of pick-and-place primitives, in which a decision-making process driven by visual data learns optimal grasping locations via Deep Q-learning and selects the best release point. The success of the solution relies on a reliable interpretation of the DLO shape; for this reason, further developments address the visual segmentation of DLOs.
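The grasp-selection idea — a value function over candidate pick-and-place primitives, learned from shape feedback — can be sketched with tabular Q-learning on a toy 1-D "shape". The thesis uses Deep Q-learning on visual input; everything below (the state encoding, the error-reduction reward, and all parameters) is an illustrative assumption, not the thesis method.

```python
import random

def shape_error(shape, target):
    """L1 distance between the current DLO shape and the desired one."""
    return sum(abs(s - t) for s, t in zip(shape, target))

def step(shape, target, i):
    """Pick-and-place primitive: grasp point i and release it on the target.
    Reward is the resulting reduction in shape error (an assumption)."""
    new = list(shape)
    new[i] = target[i]
    return new, shape_error(shape, target) - shape_error(new, target)

def train(start, target, episodes=300, alpha=0.5, gamma=0.9, eps=0.2):
    """Tabular Q-learning over (shape tuple, grasp index) pairs."""
    random.seed(0)
    n, Q = len(start), {}
    for _ in range(episodes):
        shape = list(start)
        for _ in range(n):
            s = tuple(shape)
            a = (random.randrange(n) if random.random() < eps
                 else max(range(n), key=lambda i: Q.get((s, i), 0.0)))
            shape, r = step(shape, target, a)
            nxt = max(Q.get((tuple(shape), i), 0.0) for i in range(n))
            Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (r + gamma * nxt - Q.get((s, a), 0.0))
    return Q

def greedy_reshape(Q, start, target):
    """Roll out the learned greedy policy for len(start) primitives."""
    shape = list(start)
    for _ in range(len(start)):
        s = tuple(shape)
        a = max(range(len(start)), key=lambda i: Q.get((s, i), 0.0))
        shape, _ = step(shape, target, a)
    return shape

start, target = [4.0, 2.0, 5.0, 1.0], [0.0, 1.0, 2.0, 3.0]
Q = train(start, target)
result = greedy_reshape(Q, start, target)  # shape error strictly decreases
```

With discounting, the learned policy tends to fix the largest-error points first, mirroring the intuition that the decision process should grasp where the shape deviates most.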