866 results for Camera vision system
Abstract:
Universität Magdeburg, Dissertation, 2016
Abstract:
Vita: p. 141.
Abstract:
Shipping list no.: 2001-0096-P.
Resumo:
Molecular investigation of the origin of colour vision has discovered five visual pigment (opsin) genes, all of which are expressed in an agnathan (jawless) fish, the lamprey Geotria australis. Lampreys are extant representatives of an ancient group of vertebrates whose origins are thought to date back to at least the early Cambrian, approximately 540 million years ago [1]. Phylogenetic analysis has identified the visual pigment opsin genes of G. australis as orthologues of the major classes of vertebrate opsin genes. Therefore, multiple opsin genes must have originated very early in vertebrate evolution, prior to the separation of the jawed and jawless vertebrate lineages, and thereby provided the genetic basis for colour vision in all vertebrate species. The southern hemisphere lamprey Geotria australis (Figure 1A,B) possesses a predominantly cone-based visual system designed for photopic (bright light) vision [2,3]. Previous work identified multiple cone types, suggesting that the potential for colour vision may have been present in the earliest members of this group. In order to trace the molecular evolution and origins of vertebrate colour vision, we have examined the genetic complement of visual pigment opsins in G. australis.
Abstract:
Large and powerful ocean predators such as swordfishes, some tunas, and several shark species are unique among fishes in that they are capable of maintaining elevated body temperatures (endothermy) when hunting for prey in deep and cold water [1-3]. In these animals, warming the central nervous system and the eyes is the one common feature of this energetically costly adaptation [4]. In the swordfish (Xiphias gladius), a highly specialized heating system located in an extraocular muscle specifically warms the eyes and brain up to 10°C–15°C above ambient water temperatures [2, 5]. Although the function of neural warming in fishes has been the subject of considerable speculation [1, 6, 7], the biological significance of this unusual ability has until now remained unknown. We show here that warming the retina significantly improves temporal resolution, and hence the detection of rapid motion, in fast-swimming predatory fishes such as the swordfish. Depending on diving depth, temporal resolution can be more than ten times greater in these fishes than in fishes with eyes at the same temperature as the surrounding water. The enhanced temporal resolution allowed by heated eyes provides warm-blooded and highly visual oceanic predators, such as swordfishes, tunas, and sharks, with a crucial advantage over their agile, cold-blooded prey.
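The temperature dependence of temporal resolution described in this abstract can be illustrated with a simple Q10 temperature-coefficient calculation. This is a hedged sketch only: the Q10 value, baseline flicker fusion frequency, and temperatures below are illustrative assumptions, not figures taken from the abstract.

```python
# Sketch: scaling of temporal resolution (here proxied by flicker fusion
# frequency, FFF) with retinal temperature, using an assumed Q10
# temperature coefficient. All numeric values are illustrative.

def fff_at_temperature(fff_ref: float, t_ref: float, t: float, q10: float = 2.3) -> float:
    """Estimate FFF (Hz) at temperature t (deg C), given a reference
    value fff_ref measured at t_ref, under a Q10 scaling assumption."""
    return fff_ref * q10 ** ((t - t_ref) / 10.0)

# Cold ambient water at depth (~3 deg C) vs a retina warmed by ~12 deg C.
cold = fff_at_temperature(fff_ref=5.0, t_ref=3.0, t=3.0)   # eye at water temperature
warm = fff_at_temperature(fff_ref=5.0, t_ref=3.0, t=15.0)  # heated eye

print(f"cold eye: {cold:.1f} Hz, warm eye: {warm:.1f} Hz, gain: {warm / cold:.1f}x")
```

With these assumed values a ~12 °C warming yields roughly a 2.7-fold gain; larger Q10 values or comparisons against colder, deeper water would produce the larger gains the abstract reports.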
Abstract:
Invasive vertebrate pests together with overabundant native species cause significant economic and environmental damage in the Australian rangelands. Access to artificial watering points, created for the pastoral industry, has been a major factor in the spread and survival of these pests. Existing methods of controlling watering points are mechanical and cannot discriminate between target species. This paper describes an intelligent system for controlling watering points based on machine vision technology. Initial test results clearly demonstrate proof of concept for machine vision in this application. These initial experiments were carried out as part of a 3-year project using machine vision software to manage all large vertebrates in the Australian rangelands. Concurrent work is testing the use of automated gates and innovative laneway and enclosure design. The system will have application in any habitat throughout the world where a resource is limited and can be enclosed for the management of livestock or wildlife.
Abstract:
A reliable perception of the real world is a key feature for an autonomous vehicle and for Advanced Driver Assistance Systems (ADAS). Obstacle detection (OD) is one of the main components for the correct reconstruction of the dynamic world. Historical approaches based on stereo vision and other 3D perception technologies (e.g. LIDAR) have been adapted first to ADAS and later to autonomous ground vehicles, providing excellent results. Obstacle detection is a very broad field, and many works have appeared in this domain in recent years. Academic research has clearly established the essential role of these systems in realizing active safety systems for accident prevention, reflecting the innovative systems introduced by industry. These systems need to accurately assess situational criticalities and simultaneously assess the driver's awareness of these criticalities; this requires obstacle detection algorithms that are reliable and accurate, providing real-time output, a stable and robust representation of the environment, and an estimation independent of lighting and weather conditions. Initial systems relied on a single exteroceptive sensor (e.g. radar or laser for ACC and camera for LDW) in addition to proprioceptive sensors such as wheel speed and yaw rate sensors. Current systems, however, such as ACC operating over the entire speed range or autonomous braking for collision avoidance, require multiple sensors, since individually they cannot meet these requirements. This has led the community to combine sensors in order to exploit the benefits of each. Pedestrian and vehicle detection is one of the major thrusts in the assessment of situational criticalities and remains an active area of research; ADASs are its most prominent use case.
Vehicles should be equipped with sensing capabilities able to detect and act on objects in dangerous situations, where the driver would not be able to avoid a collision. A full ADAS or autonomous vehicle, with regard to pedestrians and vehicles, would not only include detection but also tracking, orientation, intent analysis, and collision prediction. The system detects obstacles using a probabilistic occupancy grid built from a multi-resolution disparity map. Obstacle classification is based on an AdaBoost SoftCascade trained on Aggregate Channel Features. A final stage of tracking and fusion guarantees stability and robustness to the result.
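The occupancy-grid stage described above can be sketched in a minimal form. This is an assumed illustration, not the authors' implementation: each pixel with a valid stereo disparity votes for a (disparity, column) cell, and cells with enough votes are marked as likely obstacles, since vertical structures produce many pixels at the same disparity within the same image column. The function name and thresholds are hypothetical.

```python
import numpy as np

def occupancy_grid(disparity: np.ndarray, n_disp: int, min_votes: int) -> np.ndarray:
    """Build a boolean (disparity x image-column) obstacle grid by voting:
    a cell is True when enough pixels in that column share that disparity."""
    h, w = disparity.shape
    grid = np.zeros((n_disp, w), dtype=int)
    for u in range(w):
        col = disparity[:, u]
        valid = col[(col >= 0) & (col < n_disp)]        # drop invalid disparities
        grid[:, u] = np.bincount(valid.astype(int), minlength=n_disp)
    return grid >= min_votes                             # obstacle mask

# Synthetic example: flat background at disparity 5, plus a vertical
# obstacle occupying columns 10-14 at disparity 20.
disp = np.full((60, 40), 5, dtype=int)
disp[10:50, 10:15] = 20
mask = occupancy_grid(disp, n_disp=32, min_votes=30)
print(mask[20, 12])  # True: column 12 has many pixels at disparity 20
```

A real pipeline would additionally filter out the ground plane (whose pixels also accumulate along constant-disparity cells) before classification and tracking.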
Abstract:
In emergency situations, where time for blood transfusion is reduced, O negative blood (the universal donor) is administered. However, sometimes even the universal donor can cause transfusion reactions that can be fatal to the patient. As commercial systems do not provide fast results and are not suitable for emergency situations, this paper presents the steps taken in the development and validation of a prototype able to determine blood type compatibilities even in emergency situations. The developed system thus makes it possible to administer a compatible blood type from the first blood unit transfused. To increase the system's reliability, the prototype uses two different approaches to classify blood types: the first based on decision trees and the second on support vector machines. The features used to evaluate these classifiers are the standard deviation values, histogram, Histogram of Oriented Gradients and fast Fourier transform, computed on different regions of interest. The main characteristics of the presented prototype are small size, light weight, easy transportation, ease of use, fast results, high reliability and low cost. These features are perfectly suited for the emergency scenarios in which the prototype is expected to be used.
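The feature-extraction stage this abstract describes can be sketched as follows. This is a hedged illustration under assumptions: for a region of interest (ROI) of the test image, it computes the standard deviation, an intensity histogram, and low-frequency FFT magnitudes, and concatenates them into one feature vector (HOG is omitted for brevity). The resulting vector would feed the decision-tree or SVM classifier, which is not shown; function and parameter names are illustrative, not from the paper's code.

```python
import numpy as np

def roi_features(roi: np.ndarray, n_bins: int = 16, n_fft: int = 8) -> np.ndarray:
    """Concatenate simple intensity statistics of an 8-bit grayscale ROI
    into one feature vector for a downstream classifier."""
    std = np.array([roi.std()])                                   # spread of intensities
    hist, _ = np.histogram(roi, bins=n_bins, range=(0, 256), density=True)
    # Lowest-frequency FFT magnitudes of the column-mean profile,
    # as a coarse texture/periodicity descriptor.
    mags = np.abs(np.fft.rfft(roi.mean(axis=0)))[:n_fft]
    return np.concatenate([std, hist, mags])

rng = np.random.default_rng(0)
roi = rng.integers(0, 256, size=(32, 32))   # stand-in for an agglutination ROI
vec = roi_features(roi)
print(vec.shape)  # (25,) = 1 std + 16 histogram bins + 8 FFT magnitudes
```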
Abstract:
The perception of an object as a single entity within a visual scene requires that its features are bound together and segregated from the background and/or other objects. Here, we used magnetoencephalography (MEG) to assess the hypothesis that coherent percepts may arise from the synchronized high frequency (gamma) activity between neurons that code features of the same object. We also assessed the role of low frequency (alpha, beta) activity in object processing. The target stimulus (i.e. object) was a small patch of a concentric grating of 3c/°, viewed eccentrically. The background stimulus was either a blank field or a concentric grating of 3c/° periodicity, viewed centrally. With patterned backgrounds, the target stimulus emerged, through rotation about its own centre, as a circular subsection of the background. Data were acquired using a 275-channel whole-head MEG system and analyzed using Synthetic Aperture Magnetometry (SAM), which allows one to generate images of task-related cortical oscillatory power changes within specific frequency bands. Significant oscillatory activity across a broad range of frequencies was evident at the V1/V2 border, and subsequent analyses were based on a virtual electrode at this location. When the target was presented in isolation, we observed that: (i) contralateral stimulation yielded a sustained power increase in gamma activity; and (ii) both contra- and ipsilateral stimulation yielded near identical transient power changes in alpha (and beta) activity. When the target was presented against a patterned background, we observed that: (i) contralateral stimulation yielded an increase in high-gamma (>55 Hz) power together with a decrease in low-gamma (40-55 Hz) power; and (ii) both contra- and ipsilateral stimulation yielded a transient decrease in alpha (and beta) activity, though the reduction tended to be greatest for contralateral stimulation.
The opposing power changes across different regions of the gamma spectrum with 'figure/ground' stimulation suggest a possible dual role for gamma rhythms in visual object coding, and provide general support for the binding-by-synchronization hypothesis. As the power changes in alpha and beta activity were largely independent of the spatial location of the target, however, we conclude that their role in object processing may relate principally to changes in visual attention.
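The band-limited power comparisons this abstract reports can be illustrated with a minimal FFT-based sketch. This is an assumed illustration of band-power estimation only, not the SAM beamformer itself; the sampling rate, band edges, and synthetic signals are hypothetical.

```python
import numpy as np

def band_power(signal: np.ndarray, fs: float, lo: float, hi: float) -> float:
    """Sum of FFT power in the [lo, hi] Hz band of a 1-D signal."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    return float(spectrum[(freqs >= lo) & (freqs <= hi)].sum())

fs = 600.0
t = np.arange(0, 1.0, 1.0 / fs)
# Baseline window: alpha-dominated (10 Hz). Active window: an assumed
# high-gamma (65 Hz) increase accompanied by an alpha decrease, mimicking
# the figure/ground result described above.
baseline = np.sin(2 * np.pi * 10 * t)
active = 0.3 * np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 65 * t)

print(band_power(active, fs, 55, 90) > band_power(baseline, fs, 55, 90))  # True
print(band_power(active, fs, 8, 13) < band_power(baseline, fs, 8, 13))    # True
```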