924 results for Long Visual Fibres
Abstract:
Because of attentional limitations, the human visual system can process for awareness and response only a fraction of the input received. Lesion and functional imaging studies have identified frontal, temporal, and parietal areas as playing a major role in the attentional control of visual processing, but very little is known about how these areas interact to form a dynamic attentional network. We hypothesized that the network communicates by means of neural phase synchronization, and we used magnetoencephalography to study transient long-range interarea phase coupling in a well studied attentionally taxing dual-target task (attentional blink). Our results reveal that communication within the fronto-parieto-temporal attentional network proceeds via transient long-range phase synchronization in the beta band. Changes in synchronization reflect changes in the attentional demands of the task and are directly related to behavioral performance. Thus, we show how attentional limitations arise from the way in which the subsystems of the attentional network interact.

The human brain faces an inestimable task of reducing a potentially overloading amount of input into a manageable flow of information that reflects both the current needs of the organism and the external demands placed on it. This task is accomplished via a ubiquitous construct known as "attention," whose mechanism, although well characterized behaviorally, is far from understood at the neurophysiological level. Whereas attempts to identify particular neural structures involved in the operation of attention have met with considerable success (1-5) and have resulted in the identification of frontal, parietal, and temporal regions, far less is known about the interaction among these structures in a way that can account for the task-dependent successes and failures of attention.
The goal of the present research was, thus, to unravel the means by which the subsystems making up the human attentional network communicate and to relate the temporal dynamics of their communication to observed attentional limitations in humans. A prime candidate for communication among distributed systems in the human brain is neural synchronization (for review, see ref. 6). Indeed, a number of studies provide converging evidence that long-range interarea communication is related to synchronized oscillatory activity (refs. 7-14; for review, see ref. 15). To determine whether neural synchronization plays a role in attentional control, we placed humans in an attentionally demanding task and used magnetoencephalography (MEG) to track interarea communication by means of neural synchronization. In particular, we presented 10 healthy subjects with two visual target letters embedded in streams of 13 distractor letters, appearing at a rate of seven per second. The targets were separated in time by a single distractor. This condition leads to the “attentional blink” (AB), a well studied dual-task phenomenon showing the reduced ability to report the second of two targets when an interval <500 ms separates them (16-18). Importantly, the AB does not prevent perceptual processing of missed target stimuli but only their conscious report (19), demonstrating the attentional nature of this effect and making it a good candidate for the purpose of our investigation. Although numerous studies have investigated factors, e.g., stimulus and timing parameters, that manipulate the magnitude of a particular AB outcome, few have sought to characterize the neural state under which “standard” AB parameters produce an inability to report the second target on some trials but not others. 
We hypothesized that the different attentional states leading to different behavioral outcomes (second target reported correctly or not) are characterized by specific patterns of transient long-range synchronization between brain areas involved in target processing. Showing the hypothesized correspondence between states of neural synchronization and human behavior in an attentional task entails two demonstrations. First, it needs to be demonstrated that cortical areas that are suspected to be involved in visual-attention tasks, and the AB in particular, interact by means of neural synchronization. This demonstration is particularly important because previous brain-imaging studies (e.g., ref. 5) only showed that the respective areas are active within a rather large time window in the same task and not that they are concurrently active and actually create an interactive network. Second, it needs to be demonstrated that the pattern of neural synchronization is sensitive to the behavioral outcome, specifically to whether the second of two rapidly succeeding visual targets is identified correctly.
Abstract:
Diabetes mellitus (DM) is a metabolic disorder characterised by hyperglycaemia resulting from defects in insulin secretion, insulin action or both. The long-term specific effects of DM include the development of retinopathy, nephropathy and neuropathy. Cardiac disease, peripheral arterial disease and cerebrovascular disease are also known to be linked with DM. Type 1 diabetes mellitus (T1DM) accounts for approximately 10% of all individuals with DM, and insulin therapy is the only available treatment. Type 2 diabetes mellitus (T2DM) accounts for 90% of all individuals with DM. Diet, exercise, oral hypoglycaemic agents and occasionally exogenous insulin are used to manage T2DM. The diagnosis of DM is made when the glycated haemoglobin (HbA1c) percentage is greater than 6.5%. Pattern-reversal visual evoked potential (PVEP) testing is an objective means of evaluating impulse conduction along the central nervous pathways. Increased peak time of the visual P100 waveform is an expression of structural damage at the level of myelinated optic nerve fibres. This was an observational cross-sectional study. The participants were grouped into two phases. Phase 1, the control group, consisted of 30 healthy non-diabetic participants. Phase 2 comprised 104 diabetic participants, of whom 52 had an HbA1c greater than 10% (poorly controlled DM) and 52 had an HbA1c of 10% or less (moderately controlled DM). The aim of this study was, firstly, to observe the possible association between glycated haemoglobin levels and P100 peak time of pattern-reversal visual evoked potentials in DM and, secondly, to assess whether the central nervous system (CNS), and in particular visual function, is affected by type and/or duration of DM. The cut-off value used to define P100 peak time delay was calculated as the mean P100 peak time plus 2.5 × the standard deviation as measured for the non-diabetic control group, and was 110.64 ms for the right eye.
The proportion of delayed P100 peak times amounted to 38.5% in both diabetic groups; thus the poorly controlled group (HbA1c > 10%) did not pose an increased risk for delayed P100 peak time relative to the moderately controlled group (HbA1c ≤ 10%). The P100 PVEP results of this study do, however, reflect a significant delay (p < 0.001) in the DM group as compared to the non-diabetic group; thus, subclinical neuropathy of the CNS occurred in 38.5% of cases. The duration of DM and type of DM had no influence on the P100 peak time measurements.
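The delay criterion described above (control-group mean plus 2.5 standard deviations) can be sketched in a few lines. The control values below are illustrative only, not the study's data:

```python
import statistics

def p100_cutoff(control_peak_times_ms, k=2.5):
    """Cut-off for a delayed P100 peak time: control mean + k standard deviations."""
    mean = statistics.mean(control_peak_times_ms)
    sd = statistics.pstdev(control_peak_times_ms)  # population SD; use stdev() for sample SD
    return mean + k * sd

def is_delayed(peak_time_ms, cutoff_ms):
    """A peak time is classified as delayed when it exceeds the cut-off."""
    return peak_time_ms > cutoff_ms

# Illustrative control peak times (ms), not the study's measurements:
controls = [100.2, 102.5, 98.9, 101.1, 103.0]
cutoff = p100_cutoff(controls)
```

Whether the population or the sample standard deviation was used is not stated in the abstract; the choice shifts the cut-off slightly for small control groups.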
Abstract:
Visual localization systems that are practical for autonomous vehicles in outdoor industrial applications must perform reliably in a wide range of conditions. Changing outdoor conditions cause difficulty by drastically altering the information available in the camera images. To confront the problem, we have developed a visual localization system that uses a surveyed three-dimensional (3D)-edge map of permanent structures in the environment. The map has the invariant properties necessary to achieve long-term robust operation. Previous 3D-edge map localization systems usually maintain a single pose hypothesis, making it difficult to initialize without an accurate prior pose estimate and also making them susceptible to misalignment with unmapped edges detected in the camera image. A multihypothesis particle filter is employed here to perform the initialization procedure with significant uncertainty in the vehicle's initial pose. A novel observation function for the particle filter is developed and evaluated against two existing functions. The new function is shown to further improve the abilities of the particle filter to converge given a very coarse estimate of the vehicle's initial pose. An intelligent exposure control algorithm is also developed that improves the quality of the pertinent information in the image. Results gathered over an entire sunny day and also during rainy weather illustrate that the localization system can operate in a wide range of outdoor conditions. The conclusion is that an invariant map, a robust multihypothesis localization algorithm, and an intelligent exposure control algorithm all combine to enable reliable visual localization through challenging outdoor conditions.
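The multihypothesis particle filter referred to above maintains many candidate poses at once, weighting each by an observation function and resampling, so that it can converge from a very coarse initial estimate. The following is a generic one-dimensional sketch of that weight-and-resample cycle, not the paper's observation function, and the likelihood used is a placeholder:

```python
import math, random

def resample(particles, weights):
    """Systematic resampling: draw a new particle set proportional to weight."""
    n = len(particles)
    total = sum(weights)
    step = total / n
    u = random.uniform(0, step)
    new, cum, i = [], weights[0], 0
    for _ in range(n):
        while u > cum:
            i += 1
            cum += weights[i]
        new.append(particles[i])
        u += step
    return new

def update(particles, observe_likelihood):
    """One measurement update: weight each pose hypothesis, then resample."""
    weights = [observe_likelihood(p) for p in particles]
    return resample(particles, weights)

# Illustrative example: a coarse prior over [0, 10] converges toward the
# location (x = 2) favoured by the placeholder observation likelihood.
random.seed(0)
particles = [random.uniform(0, 10) for _ in range(500)]
likelihood = lambda x: math.exp(-(x - 2.0) ** 2)
for _ in range(3):
    particles = update(particles, likelihood)
```

In the localization system described above, each particle would be a full vehicle pose and the likelihood would score the alignment between the projected 3D-edge map and edges detected in the camera image.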
Abstract:
The Simultaneous Localisation And Mapping (SLAM) problem is one of the major challenges in mobile robotics. Probabilistic techniques using high-end range-finding devices are well established in the field, but recent work has investigated vision-only approaches. We present an alternative approach to the leading existing techniques, which extracts approximate rotational and translational velocity information from a vehicle-mounted consumer camera, without tracking landmarks. When coupled with an existing SLAM system, the vision module is able to map a 45-metre-long indoor loop and a 1.6-km-long outdoor road loop, without any parameter or system adjustment between tests. The work serves as a promising pilot study into ground-based vision-only SLAM, with minimal geometric interpretation of the environment.
Abstract:
Purpose. To investigate how temporal processing is altered in myopia and during myopic progression. Methods. In backward visual masking, a target's visibility is reduced by a mask presented quickly after the target. Thirty emmetropes, 40 low myopes, and 22 high myopes aged 18 to 26 years completed location and resolution masking tasks. The location task examined the ability to detect letters with low contrast and large stimulus size. The resolution task involved identifying a small letter and tested resolution and color discrimination. Target and mask stimuli were presented at nine short interstimulus intervals (12 to 259 ms) and at 1000 ms (long interstimulus interval condition). Results. In comparison with emmetropes, myopes had reduced ability in both locating and identifying briefly presented stimuli but were more affected by backward masking for a low contrast location task than for a resolution task. Performances of low and high myopes, as well as stable and progressing myopes, were similar for both masking tasks. Task performance was not correlated with myopia magnitude. Conclusions. Myopes were more affected than emmetropes by masking stimuli for the location task. This was not affected by magnitude or progression rate of myopia, suggesting that myopes have the propensity for poor performance in locating briefly presented low contrast objects at an early stage of myopia development.
Abstract:
In this video, a male voice recites a script composed entirely of jokes. Words flash on screen in time with the spoken words. Sometimes the two sets of words match, and sometimes they differ. This work examines processes of signification. It emphasizes disruption and disconnection as fundamental and generative operations in making meaning. Building on post-structural and deconstructionist ideas, this work questions the relationship between written and spoken words. By deliberately confusing the signifying structures of jokes and narratives, it questions the sites and mechanisms of comprehension, humour and signification.
Abstract:
A whole tradition is said to be based on the hierarchical distinction between the perceptual and the conceptual. In art, Niklas Luhmann argues, this schism is played out and repeated in conceptual art. This paper complicates this depiction by examining Ian Burn's last writings, in which, I argue, the artist-writer reviews the challenge of minimal-conceptual art in terms of its perceptual preoccupations. Burn revisits his own work and the legacy of minimal-conceptual art by moving away from the kind of ideology critique he is best known for internationally in order to reassert the long-overlooked visual-perceptual preoccupations of the conceptual in art.
Abstract:
Visual sea-floor mapping is a rapidly growing application for Autonomous Underwater Vehicles (AUVs). AUVs are well-suited to the task as they remove humans from a potentially dangerous environment, can reach depths human divers cannot, and are capable of long-term operation in adverse conditions. The output of sea-floor maps generated by AUVs has a number of applications in scientific monitoring: from classifying coral in high biological value sites to surveying sea sponges to evaluate marine environment health.
Abstract:
In this paper we demonstrate passive vision-based localization in environments more than two orders of magnitude darker than the current benchmark, using a $100 webcam and a $500 camera. Our approach uses the camera's maximum exposure duration and sensor gain to achieve appropriately exposed images even in unlit night-time environments, albeit with extreme levels of motion blur. Using the SeqSLAM algorithm, we first evaluate the effect of variable motion blur caused by simulated exposures of 132 ms to 10,000 ms duration on localization performance. We then use actual long-exposure camera datasets to demonstrate day-night localization in two different environments. Finally, we perform a statistical analysis that compares the baseline performance of matching unprocessed greyscale images to using patch normalization and local neighbourhood normalization – the two key SeqSLAM components. Our results and analysis show for the first time why the SeqSLAM algorithm is effective, and demonstrate the potential for cheap camera-based localization systems that function across extreme perceptual change.
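Patch normalization, one of the two SeqSLAM components analysed above, rescales each local image patch to zero mean and unit variance so that global brightness and contrast differences between day and night images largely cancel. A minimal sketch of the idea (patch size and the non-overlapping grid are illustrative choices, not necessarily SeqSLAM's exact parameters):

```python
import numpy as np

def patch_normalize(image, patch=8):
    """Normalize each non-overlapping patch to zero mean and unit variance,
    suppressing local illumination and contrast differences."""
    img = image.astype(float).copy()
    h, w = img.shape
    for y in range(0, h - h % patch, patch):
        for x in range(0, w - w % patch, patch):
            block = img[y:y + patch, x:x + patch]  # view into img: edits in place
            std = block.std()
            block -= block.mean()
            if std > 0:
                block /= std
    return img

# A patch-wise affine brightness/contrast change between two images of the
# same place vanishes after normalization:
rng = np.random.default_rng(1)
day = rng.uniform(0, 1, (32, 32))
night = day * 0.2 + 0.05  # darker, lower-contrast version of the same scene
diff = np.abs(patch_normalize(day) - patch_normalize(night)).max()
```

This invariance to affine intensity change is why the matcher can compare heavily blurred night-time long exposures against daytime frames at all.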
Abstract:
Stereo-based visual odometry algorithms are heavily dependent on an accurate calibration of the rigidly fixed stereo pair. Even small shifts in the rigid transform between the cameras can impact on feature matching and 3D scene triangulation, adversely affecting pose estimates and applications dependent on long-term autonomy. In many field-based scenarios where vibration, knocks and pressure change affect a robotic vehicle, maintaining an accurate stereo calibration cannot be guaranteed over long periods. This paper presents a novel method of recalibrating overlapping stereo camera rigs from online visual data while simultaneously providing an up-to-date and up-to-scale pose estimate. The proposed technique implements a novel form of partitioned bundle adjustment that explicitly includes the homogeneous transform between a stereo camera pair to generate an optimal calibration. Pose estimates are computed in parallel to the calibration, providing online recalibration which seamlessly integrates into a stereo visual odometry framework. We present results demonstrating accurate performance of the algorithm on both simulated scenarios and real data gathered from a wide-baseline stereo pair on a ground vehicle traversing urban roads.
Abstract:
This paper presents a long-term experiment where a mobile robot uses adaptive spherical views to localize itself and navigate inside a non-stationary office environment. The office contains seven members of staff and experiences a continuous change in its appearance over time due to their daily activities. The experiment runs as an episodic navigation task in the office over a period of eight weeks. The spherical views are stored in the nodes of a pose graph and they are updated in response to the changes in the environment. The updating mechanism is inspired by the concepts of long- and short-term memories. The experimental evaluation is done using three performance metrics which evaluate the quality of both the adaptive spherical views and the navigation over time.
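The long- and short-term memory idea mentioned above can be illustrated with a toy two-store update: newly observed features of a place enter a short-term store and are promoted to the node's long-term view only after repeated observation. The promotion threshold, the feature representation, and the policy below are all illustrative assumptions, not the paper's mechanism:

```python
def update_view(node, observed_features, promote_after=3):
    """Toy two-store update: features seen repeatedly in a place are promoted
    from short-term memory into the node's long-term view. (Illustrative
    policy only; no forgetting of long-term features is modelled here.)"""
    for f in observed_features:
        if f in node["long_term"]:
            continue  # already part of the stable view
        node["short_term"][f] = node["short_term"].get(f, 0) + 1
        if node["short_term"][f] >= promote_after:
            node["long_term"].add(f)
            del node["short_term"][f]
    return node

# A transient feature ("poster") becomes stable after three sightings:
node = {"long_term": {"door", "window"}, "short_term": {}}
for _ in range(3):
    update_view(node, {"door", "poster"})
```

In the actual system the stored representation is a spherical view at a pose-graph node rather than a symbolic feature set, but the promotion-by-persistence principle is the same.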
Abstract:
In this paper we present a novel place recognition algorithm inspired by recent discoveries in human visual neuroscience. The algorithm combines intolerant but fast low-resolution whole-image matching with highly tolerant sub-image patch matching processes. The approach does not require prior training and works on single images (although we use a cohort normalization score to exploit temporal frame information), alleviating the need for either a velocity signal or an image sequence, differentiating it from current state-of-the-art methods. We demonstrate the algorithm on the challenging Alderley sunny day – rainy night dataset, which has previously been solved only by integrating over 320-frame-long image sequences. The system is able to achieve 21.24% recall at 100% precision, matching drastically different day and night-time images of places while successfully rejecting match hypotheses between highly aliased images of different places. The results provide a new benchmark for single-image, condition-invariant place recognition.
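The cohort normalization score mentioned above expresses a candidate match score relative to the distribution of competing candidates, so that only matches standing well clear of their cohort are accepted. A z-score is one common way to realise this; the exact formulation in the paper may differ:

```python
import statistics

def cohort_normalized_score(raw_score, cohort_scores):
    """Score a match candidate relative to a cohort of competitors:
    (raw score - cohort mean) / cohort standard deviation.
    A sketch of the general idea, not necessarily the paper's formula."""
    mean = statistics.mean(cohort_scores)
    sd = statistics.stdev(cohort_scores)
    return (raw_score - mean) / sd if sd > 0 else 0.0

# A candidate far above its cohort yields a high normalized score, while a
# candidate inside the cohort's range does not:
strong = cohort_normalized_score(0.90, [0.20, 0.25, 0.30, 0.22])
weak = cohort_normalized_score(0.28, [0.20, 0.25, 0.30, 0.22])
```

Thresholding such a normalized score is what allows aliased images of different places, whose raw scores sit inside the cohort distribution, to be rejected.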
Abstract:
Long-term autonomy in robotics requires perception systems that are resilient to unusual but realistic conditions that will eventually occur during extended missions. For example, unmanned ground vehicles (UGVs) need to be capable of operating safely in adverse and low-visibility conditions, such as at night or in the presence of smoke. The key to a resilient UGV perception system lies in the use of multiple sensor modalities, e.g., operating at different frequencies of the electromagnetic spectrum, to compensate for the limitations of a single sensor type. In this paper, visual and infrared imaging are combined in a Visual-SLAM algorithm to achieve localization. We propose to evaluate the quality of data provided by each sensor modality prior to data combination. This evaluation is used to discard low-quality data, i.e., data most likely to induce large localization errors. In this way, perceptual failures are anticipated and mitigated. An extensive experimental evaluation is conducted on data sets collected with a UGV in a range of environments and adverse conditions, including the presence of smoke (obstructing the visual camera), fire, extreme heat (saturating the infrared camera), low-light conditions (dusk), and at night with sudden variations of artificial light. A total of 240 trajectory estimates are obtained using five different variations of data sources and data combination strategies in the localization method. In particular, the proposed approach for selective data combination is compared to methods using a single sensor type or combining both modalities without preselection. We show that the proposed framework allows for camera-based localization resilient to a large range of low-visibility conditions.