45 results for Cameras
in CentAUR: Central Archive at the University of Reading - UK
Abstract:
Capsule: Avian predators are principally responsible. Aims: To document the fate of Spotted Flycatcher nests and to identify the species responsible for nest predation. Methods: During 2005-06, purpose-built, remote, digital nest-cameras were deployed at 65 out of 141 Spotted Flycatcher nests monitored in two study areas, one in south Devon and the second on the border of Bedfordshire and Cambridgeshire. Results: Of the 141 nests monitored, 90 were successful (non-camera nests, 49 out of 76 successful; camera nests, 41 out of 65). Fate was determined for 63 of the 65 nests monitored by camera, with 20 predation events documented, all of which occurred during daylight hours. Avian predators carried out 17 of the 20 predations, with the principal nest predator identified as Eurasian Jay Garrulus glandarius. The only mammal recorded predating nests was the Domestic Cat Felis catus; the study therefore provides no evidence that Grey Squirrels Sciurus carolinensis are an important predator of Spotted Flycatcher nests. There was no evidence of differences in nest survival rates at nests with and without cameras. Nest remains following predation events gave little clue as to the identity of the predator species responsible. Conclusions: Nest-cameras can be useful tools in the identification of nest predators, and may be deployed with no subsequent effect on nest survival. The majority of predation of Spotted Flycatcher nests in this study was by avian predators, principally the Jay. There was little evidence of predation by mammalian predators. Identification of specific nest predators enhances studies of breeding productivity and predation risk.
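The abstract reports raw success counts for camera and non-camera nests (41 of 65 versus 49 of 76). A minimal sketch of how such a comparison of proportions might be checked is given below; it applies a simple contingency-table test to the raw counts, assumes SciPy is available, and is purely illustrative rather than the nest-survival analysis the authors may have used (e.g. Mayfield-type daily survival rates).

```python
# Illustrative comparison of nest success at camera vs. non-camera nests.
# Counts are taken from the abstract; the statistical test here (Fisher's
# exact test on raw outcomes) is an assumption for illustration only.
from scipy.stats import fisher_exact

camera = (41, 65 - 41)        # (successful, failed) nests with cameras
no_camera = (49, 76 - 49)     # (successful, failed) nests without cameras

table = [list(camera), list(no_camera)]
odds_ratio, p_value = fisher_exact(table)

print(f"camera success rate:     {camera[0] / 65:.2f}")
print(f"non-camera success rate: {no_camera[0] / 76:.2f}")
print(f"Fisher's exact test p-value: {p_value:.3f}")
```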
Abstract:
This paper presents an image motion model for airborne three-line-array (TLA) push-broom cameras. Both aircraft velocity and attitude instability are taken into account in modeling image motion. Effects of aircraft pitch, roll, and yaw on image motion are analyzed based on geometric relations in designated coordinate systems. The image motion is mathematically modeled as image motion velocity multiplied by exposure time. Quantitative analysis of image motion velocity is then conducted in simulation experiments. The results show that image motion caused by aircraft velocity is space invariant, while image motion caused by aircraft attitude instability is more complicated. Pitch, roll and yaw all contribute to image motion to different extents: pitch dominates the along-track image motion, and both roll and yaw contribute strongly to the cross-track image motion. These results provide a valuable basis for image motion compensation to ensure high-accuracy imagery in aerial photogrammetry.
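The abstract models image motion as image motion velocity multiplied by exposure time, with contributions from forward velocity and from pitch, roll and yaw rates. The sketch below illustrates that idea with standard first-order pinhole approximations; the specific expressions (f·V/H for the along-track term, focal length times the angular rates for the attitude terms) are textbook approximations assumed here for illustration, not the authors' exact model.

```python
import numpy as np

def image_motion_px(f_mm, pixel_mm, V_ms, H_m, t_exp_s,
                    pitch_rate, roll_rate, yaw_rate, x_img_mm=0.0):
    """First-order image motion (pixels) during one exposure.

    f_mm       : focal length [mm]
    pixel_mm   : pixel pitch [mm]
    V_ms, H_m  : aircraft ground speed [m/s] and height above ground [m]
    t_exp_s    : exposure time [s]
    *_rate     : pitch/roll/yaw angular rates [rad/s]
    x_img_mm   : cross-track image coordinate [mm] (yaw effect grows off-axis)

    The decomposition below is an illustrative approximation, not the
    paper's derivation.
    """
    # Along-track: forward motion plus pitch rotation.
    v_along = f_mm * V_ms / H_m + f_mm * pitch_rate          # mm/s
    # Cross-track: roll rotation plus yaw acting on off-axis points.
    v_cross = f_mm * roll_rate + x_img_mm * yaw_rate          # mm/s
    d_along = v_along * t_exp_s / pixel_mm
    d_cross = v_cross * t_exp_s / pixel_mm
    return d_along, d_cross

# Hypothetical example: 100 mm lens, 9 um pixels, 70 m/s at 2000 m, 2 ms exposure.
print(image_motion_px(100, 0.009, 70, 2000, 0.002,
                      pitch_rate=0.01, roll_rate=0.01, yaw_rate=0.005,
                      x_img_mm=30))
```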
Abstract:
In the last decade, several research results have presented formulations for the auto-calibration problem. Most of these have relied on the evaluation of vanishing points to extract the camera parameters. Normally, vanishing points are evaluated using pedestrians or the Manhattan World assumption, i.e. it is assumed that the scene is necessarily composed of orthogonal planar surfaces. In this work, we present a robust framework for auto-calibration, with improved results and generalisability for real-life situations. This framework is capable of handling problems such as occlusions and the presence of unexpected objects in the scene. In our tests, we compare our formulation with the state of the art in auto-calibration using pedestrians and Manhattan World-based assumptions. This paper reports on experiments conducted using publicly available datasets; the results show that our formulation represents an improvement over the state of the art.
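As background to the vanishing-point approach mentioned above, the sketch below shows one standard way a focal length can be recovered from two vanishing points of orthogonal scene directions when the principal point is assumed to lie at the image centre. This is a generic textbook construction under common simplifying assumptions (square pixels, zero skew), not the formulation proposed in the paper.

```python
import numpy as np

def focal_from_orthogonal_vps(v1, v2, principal_point):
    """Estimate focal length from two vanishing points of orthogonal
    directions, assuming square pixels, zero skew and a known principal
    point (a textbook simplification, not the paper's method).

    For orthogonal directions: (v1 - c) . (v2 - c) + f^2 = 0.
    """
    c = np.asarray(principal_point, dtype=float)
    d = np.dot(np.asarray(v1, float) - c, np.asarray(v2, float) - c)
    if d >= 0:
        raise ValueError("vanishing points inconsistent with orthogonality")
    return np.sqrt(-d)

# Hypothetical vanishing points in a 1920x1080 image.
print(focal_from_orthogonal_vps((2500.0, 560.0), (-300.0, 530.0),
                                (960.0, 540.0)))
```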
Abstract:
Model-based vision allows prior knowledge of the shape and appearance of specific objects to be used in the interpretation of a visual scene; it provides a powerful and natural way to enforce the view consistency constraint. A model-based vision system has been developed within ESPRIT VIEWS: P2152 which is able to classify and track moving objects (cars and other vehicles) in complex, cluttered traffic scenes. The fundamental basis of the method has been previously reported. This paper presents recent developments which have extended the scope of the system to include (i) multiple cameras, (ii) variable camera geometry, and (iii) articulated objects. All three enhancements have been accommodated easily within the original model-based approach.
Abstract:
The paper reports an interactive tool for calibrating a camera, suitable for use in outdoor scenes. The motivation for the tool was the need to obtain an approximate calibration for images taken with no explicit calibration data. Such images are frequently presented to research laboratories, especially in surveillance applications, with a request to demonstrate algorithms. The method decomposes the calibration parameters into intuitively simple components, and relies on the operator interactively adjusting the parameter settings to achieve a visually acceptable agreement between a rectilinear calibration model and his own perception of the scene. Using the tool, we have been able to calibrate images of unknown scenes, taken with unknown cameras, in a matter of minutes. The standard of calibration has proved to be sufficient for model-based pose recovery and tracking of vehicles.
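The tool described above decomposes the calibration into intuitively simple components that an operator adjusts until a projected calibration model lines up with the scene. A minimal sketch of that kind of decomposition (pan, tilt, roll, camera height, focal length; all parameter names are hypothetical) projecting a ground-plane grid into the image is given below. It illustrates the general idea only and is not the authors' tool.

```python
import numpy as np

def project_ground_grid(pan, tilt, roll, height, f_px, cx, cy,
                        grid_size=5, spacing=1.0):
    """Project a square ground-plane grid (z = 0) into the image for a
    camera described by a few intuitive parameters. The parameter names
    and the exact decomposition are illustrative assumptions only."""
    def rot(axis, a):
        c, s = np.cos(a), np.sin(a)
        if axis == 'z':   # pan / roll
            return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
        if axis == 'x':   # tilt
            return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

    R = rot('z', roll) @ rot('x', tilt) @ rot('z', pan)
    t = R @ np.array([0.0, 0.0, -height])      # camera placed at given height
    K = np.array([[f_px, 0, cx], [0, f_px, cy], [0, 0, 1]])

    pts = []
    for i in range(grid_size):
        for j in range(grid_size):
            Xw = np.array([i * spacing, j * spacing, 0.0])
            Xc = R @ Xw + t
            if Xc[2] > 0:                      # keep points in front of the camera
                u = K @ (Xc / Xc[2])
                pts.append((u[0], u[1]))
    return pts

# An operator would tweak these values until the grid overlays the scene.
print(project_ground_grid(pan=0.1, tilt=1.2, roll=0.0,
                          height=6.0, f_px=900, cx=640, cy=360)[:3])
```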
Abstract:
We have designed and implemented a low-cost digital system using closed-circuit television cameras coupled to a digital acquisition system for recording in vivo behavioral data in rodents, allowing observation and recording of more than 10 animals simultaneously at a reduced cost compared with commercially available solutions. This system has been validated using two experimental rodent models: one involving chemically induced seizures and one assessing appetite and feeding. We present observational results showing comparable or improved levels of accuracy and observer consistency between this new system and traditional methods in these experimental models, discuss advantages of the presented system over conventional analog systems and commercially available digital systems, and propose possible extensions to the system and applications to non-rodent studies.
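The system described above couples several closed-circuit cameras to one acquisition computer so that many animals can be recorded simultaneously. Below is a minimal sketch of simultaneous multi-camera capture using OpenCV; the device indices, codec, frame size and file names are illustrative assumptions, and this is not the authors' acquisition software.

```python
import cv2

def record_cameras(device_indices, out_prefix="cage", fps=15.0,
                   frame_size=(640, 480), n_frames=100):
    """Grab frames from several capture devices and write one video file
    per camera. Device indices, codec and frame counts are illustrative."""
    caps = [cv2.VideoCapture(i) for i in device_indices]
    fourcc = cv2.VideoWriter_fourcc(*"MJPG")
    writers = [cv2.VideoWriter(f"{out_prefix}_{i}.avi", fourcc, fps, frame_size)
               for i in device_indices]
    try:
        for _ in range(n_frames):
            for cap, writer in zip(caps, writers):
                ok, frame = cap.read()
                if ok:
                    writer.write(cv2.resize(frame, frame_size))
    finally:
        for cap in caps:
            cap.release()
        for writer in writers:
            writer.release()

# e.g. record_cameras([0, 1, 2])  # three USB capture devices
```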
Abstract:
Plasma parcels are observed propagating from the Sun out to the large coronal heights monitored by the Heliospheric Imager (HI) instruments onboard the NASA STEREO spacecraft during September 2007. The source region of these out-flowing parcels is found to corotate with the Sun and to be rooted near the western boundary of an equatorial coronal hole. These plasma enhancements evolve during their propagation through the HI cameras' fields of view, only becoming fully developed in the outer camera's field of view. We provide evidence that HI is observing the formation of a Corotating Interaction Region (CIR), where fast solar wind from the equatorial coronal hole is interacting with the slow solar wind of the streamer belt located on the western edge of that coronal hole. A dense plasma parcel is also observed near the footpoint of the observed CIR, at a distance of less than 0.1 AU from the Sun, where fast wind would not have had time to catch up with slow wind. We suggest that this low-lying plasma enhancement is a plasma parcel which has been disconnected from a helmet streamer and subsequently becomes embedded inside the corotating interaction region.
Abstract:
Optical data are compared with EISCAT radar observations of multiple Naturally Enhanced Ion-Acoustic Line (NEIAL) events in the dayside cusp. This study uses narrow field-of-view cameras to observe small-scale, short-lived auroral features. Using multiple-wavelength optical observations, a direct link between NEIAL occurrences and low-energy (about 100 eV) optical emissions is shown. This is consistent with the Langmuir wave decay interpretation of NEIALs being driven by streams of low-energy electrons. Modelling work connected with this study shows that, for the measured ionospheric conditions and precipitation characteristics, growth of unstable Langmuir (electron plasma) waves can occur, which decay into ion-acoustic wave modes. The link with low-energy optical emissions shown here will enable future studies of the shape, extent, lifetime, grouping and motions of NEIALs.
Abstract:
Carbendazim is highly toxic to earthworms and is used as a standard control substance when running field-based trials of pesticides, but results using carbendazim are highly variable. In the present study, the impact of the timing of rainfall events following carbendazim application on earthworms was investigated. Lumbricus terrestris were maintained in soil columns to which carbendazim and then deionized water (a rainfall substitute) were applied. Carbendazim was applied at 4 kg/ha, the rate recommended in pesticide field trials. Three rainfall regimes were investigated: initial and delayed heavy rainfall 24 h and 6 d after carbendazim application, and frequent rainfall every 48 h. Earthworm mortality and movement of carbendazim through the soil were assessed 14 d after carbendazim application. No detectable movement of carbendazim occurred through the soil in any of the treatments or controls. Mortality in the initial heavy and frequent rainfall treatments was significantly higher (approximately 55%) than in the delayed rainfall treatment (approximately 25%). This reflected the reduced bioavailability of carbendazim in the delayed treatment, in which there was a prolonged period of sorption of carbendazim to soil particles before rainfall events. The impact of carbendazim application on earthworm surface activity was assessed using video cameras. Carbendazim applications significantly reduced surface activity because of avoidance behavior by the earthworms. Reductions in surface activity were smallest in the delayed rainfall treatment, owing to the reduced bioavailability of the carbendazim. Rainfall events therefore shape the response of earthworms to carbendazim applications, and details of rainfall events preceding and following applications during field trials should be recorded at a higher level of resolution than is currently practiced under standard International Organization for Standardization protocols.
Abstract:
This paper describes the crowd image analysis challenge that forms part of the PETS 2009 workshop. The aim of this challenge is to use new or existing systems for i) crowd count and density estimation, ii) tracking of individual(s) within a crowd, and iii) detection of separate flows and specific crowd events, in a real-world environment. The dataset scenarios were filmed from multiple cameras and involve multiple actors.
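The challenge tasks above include crowd count and density estimation from multi-camera footage. As a purely illustrative baseline (not one of the challenge systems), the sketch below counts foreground blobs per frame with background subtraction in OpenCV; the area threshold, morphology kernel and video path are assumptions, not values from the PETS dataset.

```python
import cv2

def rough_person_count(video_path, min_area=400):
    """Very rough per-frame blob count using MOG2 background subtraction.
    Illustrative baseline only; min_area and the kernel are arbitrary."""
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    counts = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        counts.append(sum(1 for c in contours
                          if cv2.contourArea(c) > min_area))
    cap.release()
    return counts

# e.g. rough_person_count("pets2009_view001.avi")  # hypothetical file name
```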
Abstract:
Calibrated cameras are an extremely useful resource for computer vision scenarios. Typically, cameras are calibrated using calibration targets or measurements of the observed scene, or are self-calibrated from features matched between cameras with overlapping fields of view. This paper considers an approach to camera calibration based on observations of a pedestrian, and compares the resulting calibration to a commonly used approach requiring that measurements be made of the scene.
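Pedestrian-based calibration typically relies on the fact that the image lines joining a walking person's feet and head all converge at the vertical vanishing point. A minimal sketch of that single step, using a homogeneous-coordinates least-squares intersection, is given below; the sample detections are hypothetical, and this is only one ingredient of such a calibration, not the paper's full method.

```python
import numpy as np

def vertical_vanishing_point(foot_pts, head_pts):
    """Least-squares intersection of foot->head lines in homogeneous
    coordinates. Each line is the cross product of the two image points;
    the vanishing point minimises |L v| over all stacked lines L."""
    lines = []
    for f, h in zip(foot_pts, head_pts):
        fh = np.array([f[0], f[1], 1.0])
        hh = np.array([h[0], h[1], 1.0])
        lines.append(np.cross(fh, hh))
    L = np.array(lines)
    # Right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(L)
    v = vt[-1]
    return v[:2] / v[2]

# Hypothetical foot/head detections of one pedestrian at three positions.
feet  = [(324, 700), (501, 660), (710, 640)]
heads = [(300, 480), (500, 470), (730, 465)]
print(vertical_vanishing_point(feet, heads))
```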
Abstract:
The project investigated whether it would be possible to remove the main technical hindrance to precision application of herbicides to arable crops in the UK, namely creating geo-referenced weed maps for each field. The ultimate goal is an information system so that agronomists and farmers can plan precision weed control and create spraying maps. The project focussed on black-grass in wheat, but research was also carried out on barley and beans and on wild-oats, barren brome, rye-grass, cleavers and thistles, which form stable patches in arable fields. Farmers may also make special efforts to control them. Using cameras mounted on farm machinery, the project explored the feasibility of automating the process of mapping black-grass in fields. Geo-referenced images were captured from June to December 2009, using sprayers, a tractor, combine harvesters and on foot. Cameras were mounted on the sprayer boom, on windows or on top of tractor and combine cabs, and images were captured with a range of vibration levels and at speeds up to 20 km h⁻¹. For acceptability to farmers, it was important that every image containing black-grass was classified as containing black-grass; false negatives are highly undesirable. The software algorithms recorded no false negatives in sample images analysed to date, although some black-grass heads were unclassified and there were also false positives. The density of black-grass heads per unit area estimated by machine vision increased as a linear function of the actual density, with a mean detection rate of 47% of black-grass heads in sample images at T3 within a density range of 13 to 1230 heads m⁻². A final part of the project was to create geo-referenced weed maps using software written in previous HGCA-funded projects, and two examples show that geo-location by machine vision compares well with manually-mapped weed patches. The consortium therefore demonstrated for the first time the feasibility of using a GPS-linked computer-controlled camera system mounted on farm machinery (tractor, sprayer or combine) to geo-reference black-grass in winter wheat between black-grass head emergence and seed shedding.
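The project found that machine-vision head counts increased linearly with the true density, with a mean detection rate of 47%. A small sketch of how such a linear calibration could be fitted and then inverted to estimate true density from machine counts is shown below; the paired sample values are invented for illustration and are not the project's measurements.

```python
import numpy as np

# Hypothetical paired samples: manually counted density (heads per m^2)
# and the corresponding machine-vision count for the same quadrats.
actual  = np.array([13, 50, 120, 300, 620, 1230], dtype=float)
machine = np.array([7, 22, 60, 135, 300, 570], dtype=float)

# Fit machine = a * actual + b (detection behaves roughly linearly).
a, b = np.polyfit(actual, machine, 1)
print(f"fitted detection rate ~ {a:.2f}")   # close to 0.47 for these made-up data

def estimate_actual_density(machine_count):
    """Invert the fitted linear model to correct a machine-vision count."""
    return (machine_count - b) / a

print(estimate_actual_density(200.0))
```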
Abstract:
Many weeds occur in patches but farmers frequently spray whole fields to control the weeds in these patches. Given a geo-referenced weed map, technology exists to confine spraying to these patches. Adoption of patch spraying by arable farmers has, however, been negligible, partly due to the difficulty of constructing weed maps. Building on previous DEFRA and HGCA projects, this proposal aims to develop and evaluate a machine vision system to automate the weed mapping process. The project thereby addresses the principal technical stumbling block to widespread adoption of site-specific weed management (SSWM). The accuracy of weed identification by machine vision based on a single field survey may be inadequate to create herbicide application maps. We therefore propose to test the hypothesis that sufficiently accurate weed maps can be constructed by integrating information from geo-referenced images captured automatically at different times of the year during normal field activities. Accuracy of identification will also be increased by utilising a priori knowledge of weeds present in fields. To prove this concept, images will be captured from arable fields on two farms and processed offline to identify and map the weeds, focussing especially on black-grass, wild oats, barren brome, couch grass and cleavers. As advocated by Lutman et al. (2002), the approach uncouples the weed mapping and treatment processes and builds on the observation that patches of these weeds are quite stable in arable fields.

There are three main aspects to the project.

1) Machine vision hardware. Hardware component parts of the system are one or more cameras connected to a single-board computer (Concurrent Solutions LLC) and interfaced with an accurate Global Positioning System (GPS) supplied by Patchwork Technology. The camera(s) will take separate measurements for each of the three primary colours of visible light (red, green and blue) in each pixel. The basic proof of concept can be achieved in principle using a single camera system, but in practice systems with more than one camera may need to be installed so that larger fractions of each field can be photographed. Hardware will be reviewed regularly during the project in response to feedback from other work packages and updated as required.

2) Image capture and weed identification software. The machine vision system will be attached to toolbars of farm machinery so that images can be collected during different field operations. Images will be captured at different ground speeds, in different directions and at different crop growth stages, as well as in different crop backgrounds. Having captured geo-referenced images in the field, image analysis software will be developed to identify weed species by Murray State and Reading Universities, with advice from The Arable Group. A wide range of pattern recognition techniques, and in particular Bayesian Networks, will be used to advance the state of the art in machine vision-based weed identification and mapping. Weed identification algorithms used by others are inadequate for this project as we intend to collect and correlate images collected at different growth stages. Plants grown for this purpose by Herbiseed will be used in the first instance. In addition, our image capture and analysis system will include plant characteristics such as leaf shape, size, vein structure, colour and textural pattern, some of which are not detectable by other machine vision systems or are omitted by their algorithms. Using such a list of features observable with our machine vision system, we will determine those that can be used to distinguish the weed species of interest.

3) Weed mapping. Geo-referenced maps of weeds in arable fields (Reading University and Syngenta) will be produced with advice from The Arable Group and Patchwork Technology. Natural infestations will be mapped in the fields, but we will also introduce specimen plants in pots to facilitate more rigorous system evaluation and testing. Manual weed maps of the same fields will be generated by Reading University, Syngenta and Peter Lutman so that the accuracy of automated mapping can be assessed. The principal hypothesis and concept to be tested is that, by combining maps from several surveys, a weed map with acceptable accuracy for end-users can be produced.

If the concept is proved and can be commercialised, systems could be retrofitted at low cost onto existing farm machinery. The outputs of the weed mapping software would then link with the precision farming options already built into many commercial sprayers, allowing their use for targeted, site-specific herbicide applications. Immediate economic benefits would, therefore, arise directly from reducing herbicide costs. SSWM will also reduce the overall pesticide load on the crop and so may reduce pesticide residues in food and drinking water, and reduce adverse impacts of pesticides on non-target species and beneficials. Farmers may even choose to leave unsprayed some non-injurious, environmentally beneficial, low-density weed infestations. These benefits fit very well with the anticipated legislation emerging in the new EU Thematic Strategy for Pesticides, which will encourage more targeted use of pesticides and greater uptake of Integrated Crop (Pest) Management approaches, and also with the requirements of the Water Framework Directive to reduce levels of pesticides in water bodies. The greater precision of weed management offered by SSWM is therefore a key element in preparing arable farming systems for the future, where policy makers and consumers want to minimise pesticide use and the carbon footprint of farming while maintaining food production and security. The mapping technology could also be used on organic farms to identify areas of fields needing mechanical weed control, thereby reducing both carbon footprints and damage to crops by, for example, spring tines.

Objectives: i. To develop a prototype machine vision system for automated image capture during agricultural field operations; ii. To prove the concept that images captured by the machine vision system over a series of field operations can be processed to identify and geo-reference specific weeds in the field; iii. To generate weed maps from the geo-referenced weed plants/patches identified in objective (ii).
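The software work package above proposes classifying weed species from colour, shape and texture features using Bayesian methods. As a highly simplified sketch of that idea, the example below trains a Gaussian naive Bayes classifier (a much simpler model than the Bayesian networks proposed, used here only for illustration) on made-up per-plant feature vectors.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Invented per-plant feature vectors: [mean green chromaticity,
# leaf elongation (length/width), texture energy]. Values and species
# labels are illustrative only, not project data.
X = np.array([
    [0.45, 8.0, 0.20],   # black-grass (long, narrow leaves)
    [0.47, 7.5, 0.22],
    [0.50, 2.0, 0.35],   # cleavers (rounder leaves, whorled)
    [0.52, 1.8, 0.33],
    [0.44, 6.0, 0.15],   # wild oats
    [0.46, 6.4, 0.17],
])
y = ["black-grass", "black-grass", "cleavers", "cleavers",
     "wild oats", "wild oats"]

model = GaussianNB().fit(X, y)

# Classify a new, hypothetical plant observation.
print(model.predict([[0.46, 7.1, 0.19]]))        # likely black-grass
print(model.predict_proba([[0.46, 7.1, 0.19]]))
```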