883 results for Compton Camera
Abstract:
This paper describes a real-time multi-camera surveillance system that can be applied to a range of application domains. This integrated system is designed to observe crowded scenes and has mechanisms to improve tracking of objects that are in close proximity. The four component modules described in this paper are (i) motion detection using a layered background model, (ii) object tracking based on local appearance, (iii) hierarchical object recognition, and (iv) fused multisensor object tracking using multiple features and geometric constraints. This integrated approach to complex scene tracking is validated against a number of representative real-world scenarios to show that robust, real-time analysis can be performed. Copyright (C) 2007 Hindawi Publishing Corporation. All rights reserved.
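The abstract gives no implementation detail for the layered background model; as a point of reference only, the minimal Python sketch below illustrates the simplest per-pixel background-subtraction idea on which such motion detectors build (all parameter values are assumptions; the paper's layered model maintains multiple layers per pixel and is considerably more sophisticated).

import numpy as np

class RunningAverageBackground:
    """Minimal per-pixel running-average background model (grayscale)."""

    def __init__(self, first_frame, alpha=0.05, threshold=25.0):
        self.model = first_frame.astype(np.float64)
        self.alpha = alpha          # background adaptation rate (assumed)
        self.threshold = threshold  # foreground difference threshold (assumed)

    def apply(self, frame):
        """Return a boolean foreground mask and update the background."""
        frame = frame.astype(np.float64)
        mask = np.abs(frame - self.model) > self.threshold
        # Update the model only where pixels look like background, so
        # moving objects do not bleed into it.
        self.model = np.where(
            mask, self.model,
            (1 - self.alpha) * self.model + self.alpha * frame)
        return mask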
Abstract:
The populations of many species are structured such that mating is not random and occurs between members of local patches. When patches are founded by a single female and all matings occur between siblings, brothers may compete with each other for matings with their sisters. This local mate competition (LMC) selects for a female-biased sex ratio, especially in species where females have control over offspring sex, as in the parasitic Hymenoptera. Two factors are predicted to decrease the degree of female bias: (1) an increase in the number of foundress females in the patch and (2) an increase in the fraction of individuals mating after dispersal from the natal patch. Pollinating fig wasps are well known as classic examples of species where all matings occur in the local patch. We studied non-pollinating fig wasps, which are more diverse than the pollinating fig wasps and also provide natural experimental groups of species with different male morphologies that are linked to different mating structures. In this group of wasps, species with wingless males mate in the local patch (i.e. the fig fruit) while winged male species mate after dispersal. Species with both kinds of male have a mixture of local and non-local mating. Data from 44 species show that sex ratios (defined as the proportion of males) are in accordance with theoretical predictions: wingless male species < wing-dimorphic male species < winged male species. These results are also supported by a formal comparative analysis that controls for phylogeny. The foundress number is difficult to estimate directly for non-pollinating fig wasps but a robust indirect method leads to the prediction that foundress number, and hence sex ratio, should increase with the proportion of patches occupied in a crop. This result is supported strongly across 19 species with wingless males, but not across 8 species with winged males. The mean sex ratios for species with winged males are not significantly different from 0.5, and the absence of the correlation observed across species with wingless males may reflect weak selection to adjust the sex ratio in species whose population mating structure tends not to be subdivided. The same relationship is also predicted to occur within species if individual females adjust their sex ratios facultatively. This final prediction was not supported by data from a wingless male species, a wing-dimorphic male species or a winged male species.
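For reference, the theoretical predictions invoked above trace back to Hamilton's (1967) local mate competition model; the classic diploid result (not restated in this abstract) for the unbeatable sex ratio with n foundresses per patch is

\[ s^{*} = \frac{n - 1}{2n} \]

which is 0 for a single foundress and rises towards the Fisherian 1/2 as foundress number increases; mating after dispersal likewise moves the optimum towards 1/2, in line with the two factors listed above.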
Abstract:
This article is a close analysis of The Cry of the Owl (Thraves, 2009). It is also part of a larger project to bring together traditions of detailed criticism with those of production history, which culminates in a second article on the film due to be published in 2011. The detail of the argument concerns analysing a range of the film's key signifying systems, with a particular interest in the way the film explores the gap between images/impressions and characters' realities; engages in a complex way with generic traditions and modes of address; and establishes complex patterns of connection and contrast through blocking, camera strategies and narrative structure.
Abstract:
The project investigated whether it would be possible to remove the main technical hindrance to precision application of herbicides to arable crops in the UK, namely creating geo-referenced weed maps for each field. The ultimate goal is an information system so that agronomists and farmers can plan precision weed control and create spraying maps. The project focussed on black-grass in wheat, but research was also carried out on barley and beans and on wild-oats, barren brome, rye-grass, cleavers and thistles, which form stable patches in arable fields. Farmers may also make special efforts to control them. Using cameras mounted on farm machinery, the project explored the feasibility of automating the process of mapping black-grass in fields. Geo-referenced images were captured from June to December 2009 from sprayers, a tractor and combine harvesters, and on foot. Cameras were mounted on the sprayer boom, on windows or on top of tractor and combine cabs, and images were captured with a range of vibration levels and at speeds up to 20 km h⁻¹. For acceptability to farmers, it was important that every image containing black-grass was classified as containing black-grass; false negatives are highly undesirable. The software algorithms recorded no false negatives in sample images analysed to date, although some black-grass heads were unclassified and there were also false positives. The density of black-grass heads per unit area estimated by machine vision increased as a linear function of the actual density with a mean detection rate of 47% of black-grass heads in sample images at T3 within a density range of 13 to 1230 heads m⁻². A final part of the project was to create geo-referenced weed maps using software written in previous HGCA-funded projects and two examples show that geo-location by machine vision compares well with manually-mapped weed patches. The consortium therefore demonstrated for the first time the feasibility of using a GPS-linked computer-controlled camera system mounted on farm machinery (tractor, sprayer or combine) to geo-reference black-grass in winter wheat between black-grass head emergence and seed shedding.
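As a worked illustration of the reported figures (a hypothetical Python sketch, not the project's software), the 47% mean detection rate can be inverted to turn machine-vision head counts into an estimate of actual density:

MEAN_DETECTION_RATE = 0.47  # mean fraction of heads detected at T3, from the summary above

def estimate_true_density(detected_heads_per_m2: float) -> float:
    """Estimate actual black-grass heads per square metre from detected heads."""
    return detected_heads_per_m2 / MEAN_DETECTION_RATE

print(estimate_true_density(100.0))  # 100 detected heads -> ~213 actual heads per square metre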
Abstract:
Many weeds occur in patches but farmers frequently spray whole fields to control the weeds in these patches. Given a geo-referenced weed map, technology exists to confine spraying to these patches. Adoption of patch spraying by arable farmers has, however, been negligible partly due to the difficulty of constructing weed maps. Building on previous DEFRA and HGCA projects, this proposal aims to develop and evaluate a machine vision system to automate the weed mapping process. The project thereby addresses the principal technical stumbling block to widespread adoption of site-specific weed management (SSWM). The accuracy of weed identification by machine vision based on a single field survey may be inadequate to create herbicide application maps. We therefore propose to test the hypothesis that sufficiently accurate weed maps can be constructed by integrating information from geo-referenced images captured automatically at different times of the year during normal field activities. Accuracy of identification will also be increased by utilising a priori knowledge of weeds present in fields. To prove this concept, images will be captured from arable fields on two farms and processed offline to identify and map the weeds, focussing especially on black-grass, wild oats, barren brome, couch grass and cleavers. As advocated by Lutman et al. (2002), the approach uncouples the weed mapping and treatment processes and builds on the observation that patches of these weeds are quite stable in arable fields. There are three main aspects to the project. 1) Machine vision hardware. Hardware component parts of the system are one or more cameras connected to a single-board computer (Concurrent Solutions LLC) and interfaced with an accurate Global Positioning System (GPS) supplied by Patchwork Technology. The camera(s) will take separate measurements for each of the three primary colours of visible light (red, green and blue) in each pixel. The basic proof of concept can be achieved in principle using a single camera system, but in practice systems with more than one camera may need to be installed so that larger fractions of each field can be photographed. Hardware will be reviewed regularly during the project in response to feedback from other work packages and updated as required. 2) Image capture and weed identification software. The machine vision system will be attached to toolbars of farm machinery so that images can be collected during different field operations. Images will be captured at different ground speeds, in different directions and at different crop growth stages as well as in different crop backgrounds. Once geo-referenced images have been captured in the field, image analysis software to identify weed species will be developed by Murray State and Reading Universities with advice from The Arable Group. A wide range of pattern recognition techniques, in particular Bayesian Networks, will be used to advance the state of the art in machine vision-based weed identification and mapping. Weed identification algorithms used by others are inadequate for this project as we intend to collect and correlate images captured at different growth stages. Plants grown for this purpose by Herbiseed will be used in the first instance. In addition, our image capture and analysis system will include plant characteristics such as leaf shape, size, vein structure, colour and textural pattern, some of which are not detectable by other machine vision systems or are omitted by their algorithms.
From such a list of features observable with our machine vision system, we will determine those that can be used to distinguish weed species of interest. 3) Weed mapping. Geo-referenced maps of weeds in arable fields (Reading University and Syngenta) will be produced with advice from The Arable Group and Patchwork Technology. Natural infestations will be mapped in the fields but we will also introduce specimen plants in pots to facilitate more rigorous system evaluation and testing. Manual weed maps of the same fields will be generated by Reading University, Syngenta and Peter Lutman so that the accuracy of automated mapping can be assessed. The principal hypothesis and concept to be tested is that by combining maps from several surveys, a weed map with acceptable accuracy for end-users can be produced. If the concept is proved and can be commercialised, systems could be retrofitted at low cost onto existing farm machinery. The outputs of the weed mapping software would then link with the precision farming options already built into many commercial sprayers, allowing their use for targeted, site-specific herbicide applications. Immediate economic benefits would, therefore, arise directly from reducing herbicide costs. SSWM will also reduce the overall pesticide load on the crop and so may reduce pesticide residues in food and drinking water, and reduce adverse impacts of pesticides on non-target species and beneficials. Farmers may even choose to leave unsprayed some non-injurious, environmentally-beneficial, low-density weed infestations. These benefits fit very well with the anticipated legislation emerging in the new EU Thematic Strategy for Pesticides, which will encourage more targeted use of pesticides and greater uptake of Integrated Crop (Pest) Management approaches, and also with the requirements of the Water Framework Directive to reduce levels of pesticides in water bodies. The greater precision of weed management offered by SSWM is therefore a key element in preparing arable farming systems for the future, where policy makers and consumers want to minimise pesticide use and the carbon footprint of farming while maintaining food production and security. The mapping technology could also be used on organic farms to identify areas of fields needing mechanical weed control, thereby reducing both carbon footprints and damage to crops by, for example, spring tines. Objectives: (i) to develop a prototype machine vision system for automated image capture during agricultural field operations; (ii) to prove the concept that images captured by the machine vision system over a series of field operations can be processed to identify and geo-reference specific weeds in the field; and (iii) to generate weed maps from the geo-referenced weed plants/patches identified in objective (ii).
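The proposal names Bayesian Networks as its classification machinery; as a much simpler, hypothetical stand-in (not the project's algorithm), a Gaussian naive Bayes over a few illustrative leaf features shows the probabilistic classification step in Python. Feature names and training values below are invented for illustration.

import numpy as np
from sklearn.naive_bayes import GaussianNB

# Each row: [leaf aspect ratio, green chromaticity, texture energy] (all hypothetical)
X_train = np.array([
    [8.0, 0.45, 0.20],   # black-grass: long, narrow leaves
    [7.5, 0.43, 0.22],
    [2.0, 0.50, 0.35],   # cleavers: rounder, whorled leaves
    [2.2, 0.52, 0.33],
])
y_train = ["black-grass", "black-grass", "cleavers", "cleavers"]

clf = GaussianNB().fit(X_train, y_train)

# Classify the features extracted from a new geo-referenced plant image.
print(clf.predict([[7.8, 0.44, 0.21]]))        # -> ['black-grass']
print(clf.predict_proba([[7.8, 0.44, 0.21]]))  # posterior class probabilities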
Abstract:
This solo exhibition featured Ballet, a filmed performance and video installation by Szuper Gallery and an installation of original archive films. Ballet engages with recent histories of rural filmmaking, with movement and dance, linking everyday farming movements with the aesthetics of dance. The starting point for this new work was a series of archival films from the MERL collection, which were made for British farmers as a means both for information and for propaganda, to provide warnings of contagion and nuclear catastrophe, describing procedure and instruction in the case of emergency. Gestural performances and movements of background actors observed in those films were re-scripted into a new choreography of movement for camera to form a playful assemblage.
Abstract:
Video: 35 mins, 2006. The video shows a group of performers in a studio and seminar situation. Individually addressing the camera, they offer personal views and experiences of their own art production in relation to the institution, while reflecting on their role as teachers. The performance scripts mainly originate from a series of real interviews with a diverse group of artist teachers, who emphasise the collaborative, performative and subversive nature of teaching. These views may seem symptomatic of contemporary art practices, but are ultimately antagonistic to the ongoing commodification of the system of art education.
Abstract:
An overview is given of a vision system for locating, recognising and tracking multiple vehicles, using an image sequence taken by a single camera mounted on a moving vehicle. The camera motion is estimated by matching features on the ground plane from one image to the next. Vehicle detection and hypothesis generation are performed using template correlation and a 3D wire frame model of the vehicle is fitted to the image. Once detected and identified, vehicles are tracked using dynamic filtering. A separate batch mode filter obtains the 3D trajectories of nearby vehicles over an extended time. Results are shown for a motorway image sequence.
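The abstract does not specify the dynamic filter; a constant-velocity Kalman filter over measured positions, sketched below in Python with assumed noise levels, is the standard choice for this kind of vehicle tracking.

import numpy as np

dt = 0.04  # frame interval in seconds (assumed 25 fps)

F = np.array([[1, 0, dt, 0],   # state transition; state = [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]])
H = np.array([[1, 0, 0, 0],    # measurement model: position only
              [0, 1, 0, 0]])
Q = np.eye(4) * 0.01           # process noise covariance (assumed)
R = np.eye(2) * 1.0            # measurement noise covariance (assumed)

def kalman_step(x, P, z):
    """One predict/update cycle given a position measurement z = [x, y]."""
    x = F @ x                  # predict the state forward one frame
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)    # correct with the new measurement
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Example: initialise from a first detection, then track frame by frame.
x, P = np.array([320.0, 240.0, 0.0, 0.0]), np.eye(4) * 10.0
x, P = kalman_step(x, P, np.array([322.0, 241.0]))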
Abstract:
This paper presents a review of the design and development of the Yorick series of active stereo camera platforms and their integration into real-time closed loop active vision systems, whose applications span surveillance, navigation of autonomously guided vehicles (AGVs), and inspection tasks for teleoperation, including immersive visual telepresence. The mechatronic approach adopted for the design of the first system, including head/eye platform, local controller, vision engine, gaze controller and system integration, proved to be very successful. The design team comprised researchers with experience in parallel computing, robot control, mechanical design and machine vision. The success of the project has generated sufficient interest to sanction a number of revisions of the original head design, including the design of a lightweight compact head for use on a robot arm, and the further development of a robot head to look specifically at increasing visual resolution for visual telepresence. The controller and vision processing engines have also been upgraded, to include the control of robot heads on mobile platforms and control of vergence through tracking of an operator's eye movement. This paper details the hardware development of the different active vision/telepresence systems.
Abstract:
Within the context of active vision, scant attention has been paid to the execution of motion saccades—rapid re-adjustments of the direction of gaze to attend to moving objects. In this paper we first develop a methodology for, and give real-time demonstrations of, the use of motion detection and segmentation processes to initiate capture saccades towards a moving object. The saccade is driven by both position and velocity of the moving target under the assumption of constant target velocity, using prediction to overcome the delay introduced by visual processing. We next demonstrate the use of a first order approximation to the segmented motion field to compute bounds on the time-to-contact in the presence of looming motion. If the bound falls below a safe limit, a panic saccade is fired, moving the camera away from the approaching object. We then describe the use of image motion to realize smooth pursuit, tracking using velocity information alone, where the camera is moved so as to null a single constant image motion fitted within a central image region. Finally, we glue together capture saccades with smooth pursuit, thus effecting changes in both what is being attended to and how it is being attended to. To couple the different visual activities of waiting, saccading, pursuing and panicking, we use a finite state machine which provides inherent robustness outside of visual processing and provides a means of making repeated exploration. We demonstrate in repeated trials that the transition from saccadic motion to tracking is more likely to succeed using position and velocity control, than when using position alone.
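A hypothetical Python sketch of the state coupling described above (the paper's actual controller, thresholds and interfaces are not given here, so all names and constants below are assumptions):

SAFE_TTC = 1.5          # seconds; fire a panic saccade below this bound (assumed)
PROCESSING_DELAY = 0.1  # visual-processing latency compensated by prediction (assumed)

def predict_target(position, velocity, delay=PROCESSING_DELAY):
    """Constant-velocity prediction used to overcome processing delay."""
    return position + velocity * delay

def next_state(state, target, ttc):
    """Transition between the waiting/saccade/pursuit/panic behaviours."""
    if ttc is not None and ttc < SAFE_TTC:
        return "PANIC"                       # move the camera away from the loomer
    if state == "PANIC":
        return "WAITING"                     # danger passed: resume exploration
    if state == "WAITING" and target is not None:
        return "SACCADE"                     # capture saccade towards moving object
    if state == "SACCADE" and target is not None and target.get("foveated"):
        return "PURSUIT"                     # hand over to smooth pursuit
    if state in ("SACCADE", "PURSUIT") and target is None:
        return "WAITING"                     # target lost
    return state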
Abstract:
The objective of a Visual Telepresence System is to provide the operator with a high-fidelity image from a remote stereo camera pair linked to a pan/tilt device such that the operator may reorient the camera position by use of head movement. Systems such as these, which utilise virtual-reality-style helmet-mounted displays, have a number of limitations. The geometry of the camera positions and of the displays is generally fixed and is most suitable only for viewing elements of a scene at a particular distance. To address such limitations, a prototype system has been developed where the geometry of the displays and cameras is dynamically controlled by the eye movement of the operator. This paper explores why it is necessary to actively adjust the display system as well as the cameras and justifies the use of mechanical adjustment of the displays as an alternative to adjustment by electronic or image processing methods. The electronic and mechanical design is described, including optical arrangements and control algorithms. The performance and accuracy of the system are assessed with respect to eye movement.
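The distance dependence can be made explicit with standard stereo geometry (this formula is background, not taken from the paper): for camera separation b and fixation distance d, the required convergence angle is

\[ \theta = 2\arctan\!\left(\frac{b}{2d}\right) \]

so any fixed camera/display geometry is exactly correct at only one viewing distance, which is why the prototype slaves both the cameras and the displays to the operator's eye movements.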
Abstract:
A robot-mounted camera is useful in many machine vision tasks as it allows control over view direction and position. In this paper we report a technique for calibrating both the robot and the camera using only a single corresponding point. All existing head-eye calibration systems we have encountered rely on using pre-calibrated robots, pre-calibrated cameras, special calibration objects or combinations of these. Our method avoids using large-scale non-linear optimizations by recovering the parameters in small dependent groups. This is done by performing a series of planned, but initially uncalibrated, robot movements. Many of the kinematic parameters are obtained using only camera views in which the calibration feature is at, or near, the image center, thus avoiding errors which could be introduced by lens distortion. The calibration is shown to be both stable and accurate. The robotic system we use consists of a camera with pan-tilt capability mounted on a Cartesian robot, providing a total of 5 degrees of freedom.
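The paper's procedure is not reproduced here, but the core idea of taking measurements only with the feature near the image centre can be illustrated with a hypothetical proportional servo in Python (the head/camera interfaces, gains and tolerances are all invented for the sketch):

def centre_feature(head, camera, gain=0.001, tol_px=2, max_iters=100):
    """Servo the pan-tilt head until the calibration feature reaches the
    image centre, where lens distortion is negligible, then return the
    joint angles as one calibration measurement."""
    for _ in range(max_iters):
        u, v = camera.detect_feature()       # pixel location of the feature
        du, dv = u - camera.width / 2, v - camera.height / 2
        if abs(du) <= tol_px and abs(dv) <= tol_px:
            return head.read_joint_angles()
        head.move_relative(pan=-gain * du, tilt=-gain * dv)
    raise RuntimeError("feature did not converge to the image centre")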
Abstract:
A visual telepresence system has been developed at the University of Reading which utilizes eye tracking to adjust the horizontal orientation of the cameras and display system according to the convergence state of the operator's eyes. Slaving the cameras to the operator's direction of gaze enables the object of interest to be centered on the displays. The advantage of this is that the camera field of view may be decreased to maximize the achievable depth resolution. An active camera system requires an active display system if appropriate binocular cues are to be preserved. For some applications, which critically depend upon the veridical perception of the object's location and dimensions, it is imperative that the contribution of binocular cues to these judgements be ascertained because they are directly influenced by camera and display geometry. Using the active telepresence system, we investigated the contribution of ocular convergence information to judgements of size, distance and shape. Participants performed an open-loop reach and grasp of the virtual object under reduced-cue conditions where the orientation of the cameras and the displays was either matched or unmatched. Inappropriate convergence information produced weak perceptual distortions and caused problems in fusing the images.
Abstract:
While the Cluster spacecraft were located near the high-latitude magnetopause between 1010 and 1040 UT on 16 January 2004, three typical flux transfer event (FTE) signatures were observed. During this interval, simultaneous conjugate all-sky camera measurements recorded at Yellow River Station, Svalbard, are available at 630.0 and 557.7 nm; they show poleward-moving auroral forms (PMAFs), consistent with magnetic reconnection at the dayside magnetopause. The FTEs seen simultaneously at the magnetopause mainly move northward, but with duskward (eastward) and tailward velocity components, roughly consistent with the observed direction of motion of the PMAFs in the all-sky images. Between the PMAFs, meridional keograms extracted from the all-sky images show intervals of lower-intensity aurora that migrate equatorward just before the PMAFs intensify. This is strong evidence for an equatorward-eroding and poleward-moving open-closed boundary associated with a variable magnetopause reconnection rate under variable IMF conditions. From the durations of the PMAFs, we infer that the evolution time of an FTE, from its origin on the magnetopause to its addition to the polar cap, is 5–11 minutes.
Abstract:
Patterns of substitution in chloroplast-encoded trnL-F regions were compared between species of Actaea (Ranunculales), Digitalis (Scrophulariales), Drosera (Caryophyllales), Panicoideae (Poales), the small chromosome species clade of Pelargonium (Geraniales), each representing a different order of flowering plants, and Huperzia (Lycopodiales). In total, the study included 265 taxa, each with >900-bp sequences, totaling 0.24 Mb. Both pairwise and phylogeny-based comparisons were used to assess nucleotide substitution patterns. In all six groups, we found that transition/transversion ratios, as estimated by maximum likelihood on most-parsimonious trees, ranged between 0.8 and 1.0 for ingroups. These values occurred both at low sequence divergences, where substitutional saturation, i.e. multiple substitutions having occurred at the same (homologous) nucleotide position, was not expected, and at higher levels of divergence. This suggests that the angiosperm trnL-F regions evolve in a pattern different from that generally observed for nuclear and animal mtDNA (transition/transversion ratio ≥ 2). Transition/transversion ratios in the intron and the spacer region differed in all alignments compared, yet base compositions between the regions were highly similar in all six groups.
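For orientation, a minimal Python sketch of how a pairwise transition/transversion count is formed over two aligned sequences (the study's ratios were estimated by maximum likelihood on most-parsimonious trees, so this naive count is illustrative only):

PURINES = {"A", "G"}
PYRIMIDINES = {"C", "T"}

def ti_tv_ratio(seq1: str, seq2: str) -> float:
    """Count transitions (A<->G, C<->T) and transversions between two
    aligned sequences and return their ratio."""
    transitions = transversions = 0
    for a, b in zip(seq1.upper(), seq2.upper()):
        if a == b or a not in "ACGT" or b not in "ACGT":
            continue  # skip matches, gaps and ambiguity codes
        if {a, b} <= PURINES or {a, b} <= PYRIMIDINES:
            transitions += 1
        else:
            transversions += 1
    return transitions / transversions if transversions else float("inf")

print(ti_tv_ratio("ACGT", "GCTT"))  # one transition, one transversion -> 1.0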