422 results for visual diagnosis
Abstract:
Background: Screening tests of basic cognitive status or ‘mental state’ have been shown to predict mortality and functional outcomes in adults. This study examined the relationship between mental state and outcomes in children with type 1 diabetes. Objective: We aimed to determine whether mental state at diagnosis predicts the longer term cognitive function of children with a new diagnosis of type 1 diabetes. Methods: The mental state of 87 patients presenting with newly diagnosed type 1 diabetes was assessed using the School-Years Screening Test for the Evaluation of Mental Status. Cognitive abilities were assessed 1 week and 6 months post-diagnosis using standardized tests of attention, memory, and intelligence. Results: Thirty-seven children (42.5%) had reduced mental state at diagnosis. Children with impaired mental state had poorer attention and memory in the week following diagnosis and, after controlling for possible confounding factors, significantly lower IQ at 6 months compared to those with unimpaired mental state (p < 0.05). Conclusions: Cognition is impaired acutely in a significant number of children presenting with newly diagnosed type 1 diabetes. Mental state screening is an effective method of identifying children at risk of ongoing cognitive difficulties in the days and months following diagnosis. Clinicians may consider mental state screening for all newly diagnosed diabetic children to identify those at risk of cognitive sequelae.
Abstract:
Rapid prototyping environments can speed up research on visual control algorithms. We have designed and implemented a software framework for fast prototyping of visual control algorithms for Micro Aerial Vehicles (MAVs). We have applied a combination of a proxy-based network communication architecture and a custom Application Programming Interface. This allows multiple experimental configurations, such as drone swarms or distributed processing of a drone's video stream. Currently, the framework supports a low-cost MAV: the Parrot AR.Drone. Real tests have been performed on this platform, and the results show that the extra communication delay introduced by the framework is comparatively low, while the framework adds new functionality and flexibility to the selected drone. This implementation is open-source and can be downloaded from www.vision4uav.com/?q=VC4MAV-FW
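As a rough illustration of the proxy-based communication idea (not the framework's actual API; the class name, ports, and message format below are invented for the sketch), a client could relay commands and telemetry through a local proxy process like this:

```python
# Minimal sketch of a proxy-style client that relays velocity commands to a
# drone process over UDP and polls for telemetry. Hostnames, ports, and the
# JSON message format are illustrative assumptions, not the framework's API.
import json
import socket

class DroneProxy:
    def __init__(self, host="127.0.0.1", cmd_port=5556, telemetry_port=5554):
        self.cmd_addr = (host, cmd_port)
        self.telemetry_addr = (host, telemetry_port)
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.sock.settimeout(0.1)

    def send_velocity(self, vx, vy, vz, yaw_rate):
        """Forward a body-frame velocity command through the proxy."""
        msg = json.dumps({"vx": vx, "vy": vy, "vz": vz, "yaw": yaw_rate})
        self.sock.sendto(msg.encode(), self.cmd_addr)

    def read_telemetry(self):
        """Return the latest telemetry packet, or None on timeout."""
        try:
            data, _ = self.sock.recvfrom(4096)
            return json.loads(data.decode())
        except socket.timeout:
            return None

if __name__ == "__main__":
    proxy = DroneProxy()
    proxy.send_velocity(0.2, 0.0, 0.0, 0.0)  # gentle forward motion
    print(proxy.read_telemetry())
```

A proxy of this kind is what makes configurations such as swarms or off-board video processing easy to assemble: each component only needs to speak to the proxy, not directly to the drone.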
Abstract:
Process-aware information systems, ranging from generic workflow systems to dedicated enterprise information systems, use work-lists to offer so-called work items to users. In real scenarios, users can be confronted with a very large number of work items that stem from multiple cases of different processes. In this jungle of work items, users may find it hard to choose the right item to work on next. The system cannot autonomously decide which is the right work item, since the decision is also dependent on conditions that are somehow outside the system. For instance, what is “best” for an organisation should be mediated with what is “best” for its employees. Current work-list handlers show work items as a simple sorted list and therefore do not provide much decision support for choosing the right work item. Since the work-list handler is the dominant interface between the system and its users, it is worthwhile to provide an intuitive graphical interface that uses contextual information about work items and users to provide suggestions about prioritisation of work items. This paper uses the so-called map metaphor to visualise work items and resources (e.g., users) in a sophisticated manner. Moreover, based on distance notions, the work-list handler can suggest the next work item by considering different perspectives. For example, urgent work items of a type that suits the user may be highlighted. The underlying map and distance notions may be of a geographical nature (e.g., a map of a city or office building), but may also be based on process designs, organisational structures, social networks, due dates, calendars, etc. The framework proposed in this paper is generic and can be applied to any process-aware information system. Moreover, in order to show its practical feasibility, the paper discusses a full-fledged implementation developed in the context of the open-source workflow environment YAWL, together with two real examples stemming from two very different scenarios. The results of an initial usability evaluation of the implementation are also presented, which provide a first indication of the validity of the approach.
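To make the distance-based suggestion idea concrete, here is a minimal sketch (not the YAWL implementation; the fields and weights are illustrative assumptions) that ranks work items by a weighted combination of two distance notions:

```python
# Illustrative sketch: rank work items by a weighted combination of "distance"
# notions (time to deadline, organisational distance between the item's
# required role and the user). Field names and weights are assumptions.
from dataclasses import dataclass

@dataclass
class WorkItem:
    name: str
    hours_to_deadline: float
    role_distance: int      # e.g. hops in the organisational structure

def suggest_next(items, w_deadline=1.0, w_role=2.0):
    """Return work items sorted so the most urgent, best-suited item comes first."""
    def score(item):
        return w_deadline * item.hours_to_deadline + w_role * item.role_distance
    return sorted(items, key=score)

items = [
    WorkItem("approve invoice", hours_to_deadline=2, role_distance=0),
    WorkItem("review contract", hours_to_deadline=48, role_distance=1),
    WorkItem("archive case", hours_to_deadline=1, role_distance=3),
]
print([i.name for i in suggest_next(items)])
```

The map metaphor in the paper generalises exactly this idea: any notion of distance (geographic, organisational, social, temporal) can feed the ranking, and the result is visualised rather than shown as a flat sorted list.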
Abstract:
This paper presents an image-based visual servoing system that was used to track the atmospheric Earth re-entry of Hayabusa. The primary aim of this ground-based tracking platform was to record the emission spectrum radiating from the superheated gas of the shock layer and the surface of the heat shield during re-entry. To the authors' knowledge, this is the first time that a visual servoing system has successfully tracked a super-orbital re-entry of a spacecraft and recorded its spectral signature. Furthermore, we improved the system by including a simplified dynamic model for feed-forward control and demonstrate improved tracking performance on the International Space Station (ISS). We present comparisons between simulation and experimental results on different target trajectories, including tracking results from Hayabusa and the ISS. The required tracking performance for both spacecraft is demanding when combined with a narrow field of view (FOV). We also briefly discuss the preliminary results obtained from the spectroscopy of Hayabusa's heat shield during re-entry.
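A minimal sketch of the feedback-plus-feed-forward tracking idea (the gain, focal length, and predicted rate below are illustrative assumptions, not the paper's parameters):

```python
# Sketch: image-based tracking error drives a proportional pan/tilt rate
# command, augmented with a feed-forward term from a predicted target angular
# rate (e.g. from a simplified dynamic model of the spacecraft trajectory).
def pantilt_rate_command(target_px, image_center_px, focal_length_px,
                         predicted_rate, kp=2.0):
    """Return (pan_rate, tilt_rate) in rad/s for a pan/tilt tracking mount."""
    ex = (target_px[0] - image_center_px[0]) / focal_length_px  # approx. bearing error (rad)
    ey = (target_px[1] - image_center_px[1]) / focal_length_px  # approx. elevation error (rad)
    pan_rate = kp * ex + predicted_rate[0]    # feedback + feed-forward
    tilt_rate = kp * ey + predicted_rate[1]
    return pan_rate, tilt_rate

# Example: target 40 px right and 10 px above centre, slow apparent motion.
print(pantilt_rate_command((680, 350), (640, 360), 1200.0, (0.01, 0.0)))
```

The feed-forward term is what allows the mount to keep up with a fast, predictable target even when the narrow field of view leaves little margin for pure feedback correction.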
Abstract:
The authors present a Cause-Effect fault diagnosis model, which utilises the Root Cause Analysis approach and takes into account the technical features of a digital substation. Dempster-Shafer evidence theory is used to integrate different types of fault information in the diagnosis model so as to implement a hierarchical, systematic and comprehensive diagnosis based on the logical relationships between parent and child nodes such as transformer/circuit-breaker/transmission-line, and between root and child causes. A real fault scenario is investigated in the case study to demonstrate the developed approach in diagnosing malfunctions of protective relays and/or circuit breakers, missing or false alarms, and other faults commonly encountered at a modern digital substation.
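For readers unfamiliar with the evidence-fusion step, the following is a small, self-contained illustration of Dempster's rule of combination; the frame of discernment and mass values are invented for the example and are not taken from the case study:

```python
# Dempster's rule of combination for two bodies of evidence over a small
# frame of discernment of candidate fault locations.
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic probability assignments keyed by frozenset focal elements."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("Total conflict: sources cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Evidence from protective-relay alarms vs. circuit-breaker states (made-up masses).
m_relay = {frozenset({"transformer"}): 0.7, frozenset({"transformer", "line"}): 0.3}
m_breaker = {frozenset({"transformer"}): 0.6, frozenset({"line"}): 0.4}
print(dempster_combine(m_relay, m_breaker))
```

Combining the two sources sharpens belief in the transformer fault while discounting the conflicting mass, which is the mechanism the diagnosis model relies on when alarms are missing or false.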
Abstract:
In this paper we use a sequence-based visual localization algorithm to reveal surprising answers to the question of how much visual information is actually needed to conduct effective navigation. The algorithm actively searches for the best local image matches within a sliding window of short route segments or 'sub-routes', and matches sub-routes by searching for coherent sequences of local image matches. In contrast to many existing techniques, our approach requires no pre-training or camera parameter calibration. We compare the algorithm's performance to the state-of-the-art FAB-MAP 2.0 algorithm on a 70 km benchmark dataset. Performance matches or exceeds the state-of-the-art feature-based localization technique using images as small as 4 pixels, fields of view reduced by a factor of 250, and pixel bit depths reduced to 2 bits. We present further results demonstrating the system localizing in an office environment with near 100% precision using two 7-bit Lego light sensors, as well as using 16- and 32-pixel images from a motorbike race and a mountain rally car stage. By demonstrating how little image information is required to achieve localization along a route, we hope to stimulate future 'low fidelity' approaches to visual navigation that complement probabilistic feature-based techniques.
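A compact sketch of the sequence-matching idea under simplifying assumptions (tiny synthetic images, a fixed one-to-one frame alignment, and sum-of-absolute-differences as the image distance):

```python
# Sketch: compare heavily downsampled grayscale images by sum of absolute
# differences, then score each candidate alignment of a short query sequence
# (sub-route) against the reference route and keep the best one.
import numpy as np

def image_distance(a, b):
    """SAD between two tiny grayscale images (already resized/normalised)."""
    return np.abs(a.astype(float) - b.astype(float)).sum()

def localise(query_seq, reference_seq):
    """Return the reference index whose aligned sub-route best matches the query."""
    n, m = len(query_seq), len(reference_seq)
    best_idx, best_score = None, np.inf
    for start in range(m - n + 1):
        score = sum(image_distance(q, reference_seq[start + k])
                    for k, q in enumerate(query_seq))
        if score < best_score:
            best_idx, best_score = start, score
    return best_idx, best_score

# Toy data: 4x4 "images" standing in for drastically reduced camera frames.
rng = np.random.default_rng(0)
reference = [rng.integers(0, 4, (4, 4)) for _ in range(50)]
query = [img + rng.integers(0, 2, (4, 4)) for img in reference[20:25]]  # noisy revisit
print(localise(query, reference))
```

Even with 16-pixel images, requiring a whole short sequence to agree makes a false match far less likely than matching any single frame, which is what allows the extreme reductions in resolution, field of view, and bit depth reported above.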
Abstract:
Learning and then recognizing a route, whether travelled during the day or at night, in clear or inclement weather, and in summer or winter, is a challenging task for state-of-the-art algorithms in computer vision and robotics. In this paper, we present a new approach to visual navigation under changing conditions dubbed SeqSLAM. Instead of calculating the single most likely location given the current image, our approach calculates the best candidate matching location within every local navigation sequence. Localization is then achieved by recognizing coherent sequences of these “local best matches”. This approach removes the need for global matching performance by the vision front-end; instead, it must only pick the best match within any short sequence of images. The approach is applicable over environment changes that render traditional feature-based techniques ineffective. Using two car-mounted camera datasets we demonstrate the effectiveness of the algorithm and compare it to one of the most successful feature-based SLAM algorithms, FAB-MAP. The perceptual change in the datasets is extreme: repeated traverses through environments during the day and then in the middle of the night, at times separated by months or years and in opposite seasons, and in clear weather and extremely heavy rain. While the feature-based method fails, the sequence-based algorithm is able to match trajectory segments at 100% precision with recall rates of up to 60%.
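The step that lets SeqSLAM avoid relying on global matching performance can be illustrated with a local contrast enhancement of the image-difference matrix; the window size and synthetic data below are assumptions for the sketch, not the published parameters:

```python
# Sketch of the local-best-match idea: each column of the image-difference
# matrix is contrast-enhanced within a sliding window of reference rows, so
# only the locally best matches stand out even when appearance change makes
# all raw differences large.
import numpy as np

def local_contrast_enhance(diff_matrix, window=10):
    """Normalise each element against its local row neighbourhood within its column."""
    enhanced = np.zeros_like(diff_matrix, dtype=float)
    rows = diff_matrix.shape[0]
    for i in range(rows):
        lo, hi = max(0, i - window), min(rows, i + window + 1)
        patch = diff_matrix[lo:hi, :]
        enhanced[i, :] = (diff_matrix[i, :] - patch.mean(axis=0)) / (patch.std(axis=0) + 1e-9)
    return enhanced

# Toy difference matrix: rows = reference images, columns = query images.
rng = np.random.default_rng(1)
D = rng.random((100, 20)) + 2.0              # uniformly "bad" raw matches...
D[np.arange(40, 60), np.arange(20)] -= 1.0   # ...with a true match diagonal
print(local_contrast_enhance(D).argmin(axis=0))  # most columns resolve to rows ~40-59
```

After enhancement, a coherent diagonal of local best matches can be searched for across the query sequence, which is the recognition step described above.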
Abstract:
Traditional analytic models for power system fault diagnosis are usually formulated as an unconstrained 0–1 integer programming problem. The key issue in these models is to seek the fault hypothesis that minimizes the discrepancy between the actual and the expected states of the protective relays and circuit breakers concerned. The temporal information in alarm messages has not been well utilized in these methods, and as a result, the diagnosis results may not be unique and hence may be indefinite, especially when complicated and multiple faults occur. In order to solve this problem, this paper presents a novel analytic model employing the temporal information of alarm messages along with the concept of related path. The temporal relationships among the actions of protective relays and circuit breakers, and the different protection configurations in a modern power system, can be reasonably represented by the developed model, and therefore the diagnosis results will be more definite under different fault circumstances. Finally, an actual power system fault case is used to verify the proposed method.
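The classic analytic formulation can be illustrated with a toy brute-force search over 0-1 fault hypotheses; the protection logic below is deliberately simplified and does not include the temporal extension proposed in the paper:

```python
# Toy illustration of the traditional 0-1 formulation: enumerate fault
# hypotheses for a small set of sections and pick the one whose expected relay
# and breaker actions best explain the observed alarms.
from itertools import product

SECTIONS = ["line_L1", "transformer_T1"]
OBSERVED = {"relay_L1": 1, "breaker_CB1": 1, "relay_T1": 0, "breaker_CB2": 0}

def expected_states(hypothesis):
    """Simplified protection logic: each faulted section trips its own relay and breaker."""
    fault_L1, fault_T1 = hypothesis
    return {"relay_L1": fault_L1, "breaker_CB1": fault_L1,
            "relay_T1": fault_T1, "breaker_CB2": fault_T1}

def discrepancy(hypothesis):
    exp = expected_states(hypothesis)
    return sum(abs(exp[k] - OBSERVED[k]) for k in OBSERVED)

best = min(product([0, 1], repeat=len(SECTIONS)), key=discrepancy)
print(dict(zip(SECTIONS, best)))  # -> {'line_L1': 1, 'transformer_T1': 0}
```

The paper's contribution is to add the ordering of alarm timestamps and the related-path concept to this objective, so that hypotheses which are indistinguishable under a purely state-based discrepancy can still be told apart.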
Abstract:
Audio-visual speech recognition, or the combination of visual lip-reading with traditional acoustic speech recognition, has previously been shown to provide a considerable improvement over acoustic-only approaches in noisy environments, such as that present in an automotive cabin. The research presented in this paper extends the established audio-visual speech recognition literature to show that further improvements in speech recognition accuracy can be obtained when multiple frontal or near-frontal views of a speaker's face are available. A series of visual speech recognition experiments using a four-stream visual synchronous hidden Markov model (SHMM) is conducted on the four-camera AVICAR automotive audio-visual speech database. We study the relative contributions of the side and centrally oriented cameras in improving visual speech recognition accuracy. Finally, combining the four visual streams with a single audio stream in a five-stream SHMM demonstrates a relative improvement of over 56% in word recognition accuracy compared to the acoustic-only approach in the noisiest conditions of the AVICAR database.
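The multi-stream fusion idea behind a synchronous HMM can be sketched as a weighted combination of per-stream log-likelihoods; the stream weights and scores below are made-up numbers, not values from the AVICAR experiments:

```python
# Sketch of multi-stream decision fusion: per-stream log-likelihoods for each
# candidate word are combined with stream weights before picking the best word.
import numpy as np

def fuse_streams(log_likelihoods, stream_weights):
    """log_likelihoods: (words x streams) array; returns index of the best word."""
    weighted = log_likelihoods @ np.asarray(stream_weights)
    return int(np.argmax(weighted))

# Five streams: audio + four cameras (left, centre-left, centre-right, right).
scores = np.array([
    [-110.0, -42.0, -40.0, -41.0, -43.0],   # word "stop"
    [-108.0, -55.0, -57.0, -54.0, -56.0],   # word "go"
])
weights = [0.2, 0.2, 0.2, 0.2, 0.2]  # de-weighting audio would suit noisier cabins
best = fuse_streams(scores, weights)
print(["stop", "go"][best])
```

In a full SHMM the fusion happens per state during decoding rather than per word, and the stream weights are typically tuned to the noise condition, but the combination principle is the same.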
Abstract:
Visual sea-floor mapping is a rapidly growing application for Autonomous Underwater Vehicles (AUVs). AUVs are well-suited to the task as they remove humans from a potentially dangerous environment, can reach depths human divers cannot, and are capable of long-term operation in adverse conditions. The sea-floor maps generated by AUVs have a number of applications in scientific monitoring, from classifying coral in sites of high biological value to surveying sea sponges to evaluate marine environmental health.
Abstract:
Purpose: To investigate the correlations of the global flash multifocal electroretinogram (MOFO mfERG) with common clinical visual assessments – Humphrey perimetry and Stratus circumpapillary retinal nerve fiber layer (RNFL) thickness measurement – in type II diabetic patients. Methods: Forty-two diabetic patients participated in the study: ten were free from diabetic retinopathy (DR) while the remainder suffered from mild to moderate non-proliferative diabetic retinopathy (NPDR). Fourteen age-matched controls were recruited for comparison. MOFO mfERG measurements were made under high and low contrast conditions. Humphrey central 30-2 perimetry and Stratus OCT circumpapillary RNFL thickness measurements were also performed. Correlations between local values of implicit time and amplitude of the mfERG components (direct component (DC) and induced component (IC)), and perimetric sensitivity and RNFL thickness, were evaluated by mapping the localized responses for the three subject groups. Results: MOFO mfERG was superior to perimetry and RNFL assessments in showing differences between the diabetic groups (with and without DR) and the controls. All the MOFO mfERG amplitudes (except IC amplitude at high contrast) correlated better with perimetry findings (Pearson’s r ranged from 0.23 to 0.36, p<0.01) than did the mfERG implicit time at both high and low contrasts across all subject groups. No consistent correlation was found between the mfERG and RNFL assessments for any group or contrast condition. The responses of the local MOFO mfERG correlated with local perimetric sensitivity but not with RNFL thickness. Conclusion: Early functional changes in the diabetic retina seem to occur before morphological changes in the RNFL.
Abstract:
Aims/hypothesis: Impaired central vision has been shown to predict diabetic peripheral neuropathy (DPN). Several studies have demonstrated diffuse retinal neurodegenerative changes in diabetic patients prior to retinopathy development, raising the prospect that non-central vision may also be compromised by primary neural damage. We hypothesise that type 2 diabetic patients with DPN exhibit visual sensitivity loss in a distinctive pattern across the visual field, compared with a control group of type 2 diabetic patients without DPN. Methods: Increment light sensitivity was measured by standard perimetry in the central 30° of visual field for two age-matched groups of type 2 diabetic patients, with and without neuropathy (n = 40 and 30, respectively). Neuropathy status was assigned using the neuropathy disability score. Mean visual sensitivity values were calculated globally, for each quadrant and for three eccentricities (0-10°, 11-20° and 21-30°). Data were analysed using a generalised additive mixed model (GAMM). Results: Global and quadrant mean visual sensitivity was marginally but consistently lower (by about 1 dB) in the neuropathy cohort compared with controls. Between-group mean differences increased from 0.36 to 1.81 dB with increasing eccentricity. GAMM analysis, after adjustment for age, showed these differences to be significant beyond 15° eccentricity and monotonically increasing. Retinopathy levels and disease duration were not significant factors within the model (p=0.90). Conclusions/interpretation: Visual sensitivity reduces disproportionately with increasing eccentricity in type 2 diabetic patients with peripheral neuropathy. This sensitivity reduction within the central 30° of visual field may be indicative of more consequential loss in the far periphery.
Abstract:
This paper presents a reactive Sense and Avoid approach using spherical image-based visual servoing. Avoidance of point targets in the lateral or vertical plane is achieved without requiring an estimate of range. Simulated results for static and dynamic targets are provided using a realistic model of a small fixed-wing unmanned aircraft.
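As a very rough stand-in for the behaviour described (this is a generic range-free reactive avoidance heuristic, not the spherical image-based visual servoing law from the paper; the gain and field-of-view threshold are assumptions):

```python
# Sketch of a range-free reactive avoidance rule: steer away from an obstacle
# in proportion to how close its bearing is to the flight direction, with no
# range estimate required.
import math

def lateral_avoidance_rate(obstacle_bearing_rad, fov_rad=math.radians(60), k=1.5):
    """Return a yaw-rate command (rad/s): zero outside the FOV, stronger near the centre."""
    if abs(obstacle_bearing_rad) > fov_rad / 2:
        return 0.0
    # Push away from the obstacle: positive bearing (to the right) -> turn left.
    return -k * math.copysign(fov_rad / 2 - abs(obstacle_bearing_rad), obstacle_bearing_rad)

print(lateral_avoidance_rate(math.radians(5)))   # obstacle slightly right -> turn left
```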