865 results for streak camera
Abstract:
Plasma parcels are observed propagating from the Sun out to the large coronal heights monitored by the Heliospheric Imager (HI) instruments onboard the NASA STEREO spacecraft during September 2007. The source region of these out-flowing parcels is found to corotate with the Sun and to be rooted near the western boundary of an equatorial coronal hole. These plasma enhancements evolve during their propagation through the HI cameras' fields of view, only becoming fully developed in the outer camera's field of view. We provide evidence that HI is observing the formation of a Corotating Interaction Region (CIR), where fast solar wind from the equatorial coronal hole interacts with the slow solar wind of the streamer belt located on the western edge of that coronal hole. A dense plasma parcel is also observed near the footpoint of the observed CIR, at a distance of less than 0.1 AU from the Sun, where the fast wind would not have had time to catch up with the slow wind. We suggest that this low-lying plasma enhancement is a plasma parcel that was disconnected from a helmet streamer and subsequently became embedded inside the corotating interaction region.
Abstract:
The third episode of lava dome growth at Soufrière Hills Volcano began on 1 August 2005 and ended on 20 April 2007. Volumes of the dome and talus produced were measured using a photo-based method with a calibrated camera for increased accuracy. The total dense rock equivalent (DRE) volume of extruded andesite magma (306 ± 51 Mm3) was similar within error to that produced in the earlier episodes, but the average extrusion rate, 5.6 ± 0.9 m3s−1 (DRE), was higher than in the previous episodes. Extrusion rates varied in a pulsatory manner from <0.5 m3s−1 to ∼20 m3s−1. By 18 May 2006, the lava dome had reached a volume of 85 Mm3 DRE; it was removed in its entirety during a massive dome collapse on 20 May 2006. Extrusion began again almost immediately and built a dome of 170 Mm3 DRE with a summit height 1047 m above sea level by 4 April 2007. There were few moderate-sized dome collapses (1–10 Mm3) during this extrusive episode, in contrast to the first episode of dome growth in 1995–8 when they were numerous. The first and third episodes of dome growth showed a similar pattern of low (<0.5 m3s−1) but increasing magma flux during the early stages, with steady high flux after extrusion of ∼25 Mm3.
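As a quick plausibility check, the episode dates and total DRE volume quoted in this abstract reproduce the reported average flux; the snippet below is illustrative arithmetic only, not part of the original study.

```python
from datetime import date

# episode dates and total DRE volume as reported in the abstract
start, end = date(2005, 8, 1), date(2007, 4, 20)
volume_m3 = 306e6                     # 306 Mm^3 dense rock equivalent

rate = volume_m3 / ((end - start).days * 86400)
print(f"average extrusion rate: {rate:.1f} m^3/s")   # ~5.6, matching 5.6 ± 0.9
```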
Abstract:
Capsule: Avian predators are principally responsible. Aims: To document the fate of Spotted Flycatcher nests and to identify the species responsible for nest predation. Methods: During 2005–06, purpose-built, remote, digital nest-cameras were deployed at 65 of the 141 Spotted Flycatcher nests monitored in two study areas, one in south Devon and the second on the border of Bedfordshire and Cambridgeshire. Results: Of the 141 nests monitored, 90 were successful (non-camera nests: 49 of 76 successful; camera nests: 41 of 65). Fate was determined for 63 of the 65 nests monitored by camera, with 20 predation events documented, all of which occurred during daylight hours. Avian predators carried out 17 of the 20 predations, with the principal nest predator identified as Eurasian Jay Garrulus glandarius. The only mammal recorded predating nests was the Domestic Cat Felis catus; the study therefore provides no evidence that Grey Squirrels Sciurus carolinensis are an important predator of Spotted Flycatcher nests. There was no evidence of differences in nest survival rates between nests with and without cameras. Nest remains following predation events gave little clue as to the identity of the predator species responsible. Conclusions: Nest-cameras can be useful tools in the identification of nest predators, and may be deployed with no subsequent effect on nest survival. The majority of predation of Spotted Flycatcher nests in this study was by avian predators, principally the Jay. There was little evidence of predation by mammalian predators. Identification of specific nest predators enhances studies of breeding productivity and predation risk.
Abstract:
Thermal non-destructive testing (NDT) is commonly used for assessing aircraft structures. This research work evaluates the potential of pulsed (transient) thermography for locating fixtures beneath aircraft skins in order to facilitate accurate automated assembly operations. Representative aluminium and carbon fibre aircraft skin-fixture assemblies were modelled using thermal modelling software. The assemblies were also experimentally investigated with an integrated pulsed thermographic evaluation system, as well as with a custom-built system incorporating a miniature uncooled camera. Modelling showed that the presence of an air gap between skin and fixture significantly reduced the thermal contrast developed, especially in aluminium. Experimental results show that fixtures can be located to accuracies of 0.5 mm.
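A hedged illustration of the modelling result above: the 1D finite-difference sketch below compares post-flash surface cooling for a skin bonded to a fixture, separated from it by a thin air gap, and with no fixture at all. All layer thicknesses and material properties are rough assumptions, not values from the study; the sketch shows only the qualitative effect that the gap suppresses the thermal contrast revealing the fixture.

```python
import numpy as np

def surface_cooling(layers, t_end=2.0, dx=0.1e-3):
    """Final surface temperature of a 1D layer stack after an instantaneous,
    normalised surface heat pulse. layers: (thickness, k, rho, cp), surface first."""
    k, rc = [], []
    for th, kk, rho, cp in layers:
        n = int(round(th / dx))
        k += [kk] * n
        rc += [rho * cp] * n
    k, rc = np.array(k), np.array(rc)
    T = np.zeros(len(k))
    T[0] = 1.0                                   # flash heating of the surface node
    dt = 0.2 * dx**2 * (rc / k).min()            # explicit-scheme stability margin
    kf = 2 * k[:-1] * k[1:] / (k[:-1] + k[1:])   # harmonic-mean interface conductivity
    for _ in range(int(t_end / dt)):
        flux = kf * (T[1:] - T[:-1]) / dx        # conductive flux between nodes
        dT = np.zeros_like(T)
        dT[:-1] += flux
        dT[1:] -= flux
        T += dt * dT / (rc * dx)                 # insulated outer boundaries
    return T[0]

# rough, assumed properties: (thickness m, k W/mK, rho kg/m3, cp J/kgK)
skin    = (2e-3, 160.0, 2700.0, 900.0)    # aluminium skin
fixture = (5e-3, 160.0, 2700.0, 900.0)    # aluminium fixture behind it
air_gap = (0.2e-3, 0.026, 1.2, 1005.0)    # thin air layer

t_bare   = surface_cooling([skin])                     # no fixture beneath
t_bonded = surface_cooling([skin, fixture])            # fixture in full contact
t_gapped = surface_cooling([skin, air_gap, fixture])   # fixture behind an air gap
print("contrast, bonded fixture:", t_bare - t_bonded)  # clear thermal signature
print("contrast, gapped fixture:", t_bare - t_gapped)  # far smaller: gap hides it
```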
Abstract:
Accurate calibration of a head-mounted display (HMD) is essential both for research on the visual system and for realistic interaction with virtual objects. Yet existing calibration methods are time consuming and depend on human judgements, making them error prone. The methods are also limited to optical see-through HMDs. Building on our existing HMD calibration method [1], we show here how it is possible to calibrate a non-see-through HMD. A camera is placed inside an HMD displaying an image of a regular grid, which is captured by the camera. The HMD is then removed and the camera, which remains fixed in position, is used to capture images of a tracked calibration object in various positions. The locations of image features on the calibration object are then re-expressed in relation to the HMD grid. This allows established camera calibration techniques to be used to recover estimates of the display's intrinsic parameters (width, height, focal length) and extrinsic parameters (optic centre and orientation of the principal ray). We calibrated an HMD in this manner in both see-through and non-see-through modes and report the magnitude of the errors between real image features and reprojected features. Our calibration method produces low reprojection errors and involves no error-prone human measurements.
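The final step described above, recovering intrinsics and extrinsics with established camera calibration techniques, maps onto standard tooling. The sketch below is not the authors' code: it synthesises correspondences for a hypothetical planar grid (standing in for the tracked calibration object re-expressed in HMD-grid coordinates) and recovers the camera model with OpenCV's calibrateCamera.

```python
import numpy as np
import cv2

# planar grid standing in for the calibration-object features (hypothetical geometry)
objp = np.zeros((6 * 8, 3), np.float32)
objp[:, :2] = np.mgrid[0:8, 0:6].T.reshape(-1, 2) * 0.03   # 30 mm spacing

# ground-truth camera, used only to synthesise the "observed" image points
K_true = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
dist_true = np.zeros(5)

obj_pts, img_pts = [], []
rng = np.random.default_rng(0)
for _ in range(8):                    # several poses of the tracked object
    rvec = rng.normal(0, 0.2, 3)
    tvec = np.array([0.0, 0.0, 0.8]) + rng.normal(0, 0.05, 3)
    proj, _ = cv2.projectPoints(objp, rvec, tvec, K_true, dist_true)
    obj_pts.append(objp)
    img_pts.append(proj.astype(np.float32))

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, (640, 480), None, None)
print("reprojection RMS:", rms)       # near zero for noiseless synthetic data
print("recovered intrinsics:\n", K)
```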
Abstract:
This paper describes a new method for reconstructing a 3D surface using a small number, e.g. 10, of 2D photographic images. The images are taken from different viewing directions by a perspective camera with full prior knowledge of the camera configurations. The reconstructed object's surface is represented as a set of triangular facets. We empirically demonstrate that if the viewing directions are uniformly distributed around the object's viewing sphere, then the reconstructed 3D points automatically cluster closely on highly curved parts of the surface and are widely spread on smooth or flat parts. The advantage of this property is that the reconstructed points along a surface or a contour generator are not undersampled or underrepresented, because surfaces or contours should be sampled or represented more densely where their curvatures are high. The more complex the contour's shape, the greater the number of points required, and the greater the number of points automatically generated by the proposed method. Given that the viewing directions are uniformly distributed, the number and distribution of the reconstructed points depend on the shape or the curvature of the surface, regardless of the size of the surface or the size of the object.
Abstract:
Urban surveillance footage can be of poor quality, partly due to the low quality of the camera and partly due to harsh lighting and heavily reflective scenes. For some computer surveillance tasks very simple change detection is adequate, but sometimes a more detailed change detection mask is desirable, e.g. for accurately tracking identity when faced with multiple interacting individuals, and in pose-based behaviour recognition. We present a novel technique for enhancing a low-quality change detection into a better segmentation using an image combing estimator in an MRF-based model.
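The image-combing estimator itself is not reproduced here; as a generic illustration of refining a noisy change mask under an MRF model, the sketch below runs iterated conditional modes on a simple Ising-style energy. The weights, neighbourhood and data term are all illustrative assumptions, not the paper's method.

```python
import numpy as np

def icm_refine(noisy, beta=1.2, lam=2.0, iters=5):
    """Refine a binary change mask under a 4-neighbour Ising MRF via ICM.
    Per-pixel energy: lam * [label != observation] + beta * #disagreeing neighbours."""
    labels = noisy.copy()
    for _ in range(iters):
        # count neighbours currently labelled 1 (out-of-image neighbours act as 0)
        p = np.pad(labels, 1)
        ones = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
        # energy of assigning 0 vs 1 at every pixel (parallel ICM sweep)
        e0 = lam * (noisy == 1) + beta * ones
        e1 = lam * (noisy == 0) + beta * (4 - ones)
        labels = (e1 < e0).astype(noisy.dtype)
    return labels

# toy demo: a square of true change corrupted by salt-and-pepper noise
rng = np.random.default_rng(1)
truth = np.zeros((64, 64), np.uint8)
truth[20:44, 20:44] = 1
noisy = np.where(rng.random(truth.shape) < 0.1, 1 - truth, truth)
clean = icm_refine(noisy)
print("noisy errors:", int((noisy != truth).sum()),
      "refined errors:", int((clean != truth).sum()))
```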
Abstract:
In this paper, we evaluate the Probabilistic Occupancy Map (POM) pedestrian detection algorithm on the PETS 2009 benchmark dataset. POM is a multi-camera generative detection method, which estimates ground plane occupancy from multiple background subtraction views. Occupancy probabilities are iteratively estimated by fitting a synthetic model of the background subtraction to the binary foreground motion. Furthermore, we test the integration of this algorithm into a larger framework designed for understanding human activities in real environments. We demonstrate accurate detection and localization on the PETS dataset, despite suboptimal calibration and foreground motion segmentation input.
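POM proper fits its synthetic image by minimising a KL divergence over calibrated multi-camera geometry; the toy sketch below keeps only the flavour of that generative loop: a 1D "ground line" whose cells each project to an interval of a binary foreground signal, with occupancy values re-fitted by coordinate descent on a squared-error objective. The geometry, objective and update rule are all simplifications, not the published algorithm.

```python
import numpy as np

W, CELLS = 100, 10
# each ground cell projects to a fixed interval of the 1D "image" (toy geometry)
masks = np.zeros((CELLS, W))
for i in range(CELLS):
    masks[i, 10 * i:10 * i + 15] = 1.0

def synthetic(q):
    """Probabilistic union of cell silhouettes: P(pixel is foreground | q)."""
    return 1.0 - np.prod(1.0 - q[:, None] * masks, axis=0)

# simulate an observed foreground from two truly occupied cells, plus noise
rng = np.random.default_rng(2)
truth = np.zeros(CELLS)
truth[[2, 6]] = 1.0
observed = np.where(rng.random(W) < 0.05, 1 - synthetic(truth), synthetic(truth))

# coordinate descent: re-fit each cell's occupancy in turn against the fit error
q = np.full(CELLS, 0.1)
for _ in range(20):
    for i in range(CELLS):
        candidates = np.linspace(0.0, 1.0, 21)
        errs = []
        for v in candidates:
            q[i] = v
            errs.append(np.sum((synthetic(q) - observed) ** 2))
        q[i] = candidates[int(np.argmin(errs))]
print(np.round(q, 2))   # occupancy should concentrate on cells 2 and 6
```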
Abstract:
This paper describes a new method for reconstructing 3D surface points and a wireframe on the surface of a freeform object using a small number, e.g. 10, of 2D photographic images. The images are taken from different viewing directions by a perspective camera with full prior knowledge of the camera configurations. The reconstructed surface points are frontier points and the wireframe is a network of contour generators. Both are reconstructed by pairing apparent contours in the 2D images. Unlike previous works, we empirically demonstrate that if the viewing directions are uniformly distributed around the object's viewing sphere, then the reconstructed 3D points automatically cluster closely on highly curved parts of the surface and are widely spread on smooth or flat parts. The advantage of this property is that the reconstructed points along a surface or a contour generator are not under-sampled or under-represented, because surfaces or contours should be sampled or represented more densely where their curvatures are high. The more complex the contour's shape, the greater the number of points required, and the greater the number of points automatically generated by the proposed method. Given that the viewing directions are uniformly distributed, the number and distribution of the reconstructed points depend on the shape or the curvature of the surface, regardless of the size of the surface or the size of the object. The unique pattern of the reconstructed points and contours may be used in 3D object recognition and measurement without computationally intensive full surface reconstruction. The results are obtained from both computer-generated and real objects. (C) 2007 Elsevier B.V. All rights reserved.
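The sampling property above hinges on viewing directions being uniformly distributed over the viewing sphere. The abstract does not prescribe how such directions are generated; a common construction is the Fibonacci spiral lattice, sketched below.

```python
import numpy as np

def fibonacci_sphere(n):
    """n approximately uniformly distributed unit vectors (viewing directions)."""
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i    # golden-angle azimuth increments
    z = 1.0 - 2.0 * (i + 0.5) / n             # uniform in z => uniform on the sphere
    r = np.sqrt(1.0 - z * z)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

views = fibonacci_sphere(10)   # matching the paper's example of ten images
print(np.round(views, 3))
```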
Abstract:
This paper describes a real-time multi-camera surveillance system that can be applied to a range of application domains. This integrated system is designed to observe crowded scenes and has mechanisms to improve tracking of objects that are in close proximity. The four component modules described in this paper are (i) motion detection using a layered background model, (ii) object tracking based on local appearance, (iii) hierarchical object recognition, and (iv) fused multisensor object tracking using multiple features and geometric constraints. This integrated approach to complex scene tracking is validated against a number of representative real-world scenarios to show that robust, real-time analysis can be performed. Copyright (C) 2007 Hindawi Publishing Corporation. All rights reserved.
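Module (i) rests on background subtraction; the system's own layered background model is not given here, so the sketch below substitutes OpenCV's stock mixture-of-Gaussians subtractor as a stand-in, reading a hypothetical input file.

```python
import cv2

# stand-in for the layered background model: per-pixel Gaussian mixtures
subtractor = cv2.createBackgroundSubtractorMOG2(
    history=500, varThreshold=16, detectShadows=True)

cap = cv2.VideoCapture("surveillance.avi")    # hypothetical input file
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)            # 0 background, 127 shadow, 255 foreground
    mask = cv2.medianBlur(mask, 5)            # suppress speckle before tracking
    cv2.imshow("foreground", mask)
    if cv2.waitKey(1) == 27:                  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```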
Abstract:
This article is a close analysis of The Cry of the Owl (Thraves, 2009). It is also part of a larger project to bring together traditions of detailed criticism with those of production history, which culminates in a second article on the film due to be published in 2011. The detail of the argument concerns analysing a range of the film's key signifying systems, with a particular interest in the way the film explores the gap between images/impressions and characters' realities; engages in a complex way with generic traditions and modes of address; and establishes complex patterns of connection and contrast through blocking, camera strategies and narrative structure.
Abstract:
The project investigated whether it would be possible to remove the main technical hindrance to precision application of herbicides to arable crops in the UK, namely creating geo-referenced weed maps for each field. The ultimate goal is an information system with which agronomists and farmers can plan precision weed control and create spraying maps. The project focussed on black-grass in wheat, but research was also carried out on barley and beans and on wild-oats, barren brome, rye-grass, cleavers and thistles, which form stable patches in arable fields and which farmers may make special efforts to control. Using cameras mounted on farm machinery, the project explored the feasibility of automating the process of mapping black-grass in fields. Geo-referenced images were captured from June to December 2009, using sprayers, a tractor, combine harvesters and on foot. Cameras were mounted on the sprayer boom, on windows or on top of tractor and combine cabs, and images were captured with a range of vibration levels and at speeds up to 20 km h-1. For acceptability to farmers, it was important that every image containing black-grass was classified as containing black-grass; false negatives are highly undesirable. The software algorithms recorded no false negatives in the sample images analysed to date, although some black-grass heads were unclassified and there were also false positives. The density of black-grass heads per unit area estimated by machine vision increased as a linear function of the actual density, with a mean detection rate of 47% of black-grass heads in sample images at T3 within a density range of 13 to 1230 heads m-2. A final part of the project was to create geo-referenced weed maps using software written in previous HGCA-funded projects, and two examples show that geo-location by machine vision compares well with manually mapped weed patches. The consortium therefore demonstrated for the first time the feasibility of using a GPS-linked, computer-controlled camera system mounted on farm machinery (tractor, sprayer or combine) to geo-reference black-grass in winter wheat between black-grass head emergence and seed shedding.
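The project's black-grass detection algorithms are not given here; as a hedged illustration of the kind of first step such systems typically take, the sketch below segments green vegetation from soil with the excess-green index, on a synthetic image so that it runs stand-alone. The index choice and thresholding are generic assumptions, not the project's method.

```python
import numpy as np
import cv2

# synthetic stand-in for a geo-referenced field image: brown soil + green blobs
img = np.full((120, 160, 3), (40, 70, 110), np.uint8)   # BGR soil colour
cv2.circle(img, (50, 60), 18, (40, 160, 50), -1)        # vegetation patches
cv2.circle(img, (110, 40), 12, (35, 150, 45), -1)

b, g, r = [c.astype(np.float32) / 255.0 for c in cv2.split(img)]
total = b + g + r + 1e-6
exg = 2 * (g / total) - (r / total) - (b / total)        # excess-green index

# Otsu threshold on the index image to obtain a vegetation mask
exg8 = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
_, mask = cv2.threshold(exg8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print("vegetation fraction:", (mask > 0).mean())
```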
Abstract:
Many weeds occur in patches but farmers frequently spray whole fields to control the weeds in these patches. Given a geo-referenced weed map, technology exists to confine spraying to these patches. Adoption of patch spraying by arable farmers has, however, been negligible, partly due to the difficulty of constructing weed maps. Building on previous DEFRA and HGCA projects, this proposal aims to develop and evaluate a machine vision system to automate the weed mapping process, thereby addressing the principal technical stumbling block to widespread adoption of site-specific weed management (SSWM). The accuracy of weed identification by machine vision based on a single field survey may be inadequate to create herbicide application maps. We therefore propose to test the hypothesis that sufficiently accurate weed maps can be constructed by integrating information from geo-referenced images captured automatically at different times of the year during normal field activities. Accuracy of identification will also be increased by utilising a priori knowledge of weeds present in fields. To prove this concept, images will be captured from arable fields on two farms and processed offline to identify and map the weeds, focussing especially on black-grass, wild oats, barren brome, couch grass and cleavers. As advocated by Lutman et al. (2002), the approach uncouples the weed mapping and treatment processes and builds on the observation that patches of these weeds are quite stable in arable fields. There are three main aspects to the project.

1) Machine vision hardware. Hardware component parts of the system are one or more cameras connected to a single-board computer (Concurrent Solutions LLC) and interfaced with an accurate Global Positioning System (GPS) supplied by Patchwork Technology. The camera(s) will take separate measurements for each of the three primary colours of visible light (red, green and blue) in each pixel. The basic proof of concept can be achieved in principle using a single-camera system, but in practice systems with more than one camera may need to be installed so that larger fractions of each field can be photographed. Hardware will be reviewed regularly during the project in response to feedback from other work packages and updated as required.

2) Image capture and weed identification software. The machine vision system will be attached to toolbars of farm machinery so that images can be collected during different field operations. Images will be captured at different ground speeds, in different directions and at different crop growth stages, as well as in different crop backgrounds. Having captured geo-referenced images in the field, image analysis software will be developed by Murray State and Reading Universities, with advice from The Arable Group, to identify weed species. A wide range of pattern recognition techniques, and in particular Bayesian networks, will be used to advance the state of the art in machine vision-based weed identification and mapping (a toy sketch of the Bayesian classification idea follows this abstract). Weed identification algorithms used by others are inadequate for this project, as we intend to collect and correlate images collected at different growth stages. Plants grown for this purpose by Herbiseed will be used in the first instance. In addition, our image capture and analysis system will include plant characteristics such as leaf shape, size, vein structure, colour and textural pattern, some of which are not detectable by other machine vision systems or are omitted by their algorithms. Using such a list of features observable by our machine vision system, we will determine those that can be used to distinguish the weed species of interest.

3) Weed mapping. Geo-referenced maps of weeds in arable fields (Reading University and Syngenta) will be produced with advice from The Arable Group and Patchwork Technology. Natural infestations will be mapped in the fields, but we will also introduce specimen plants in pots to facilitate more rigorous system evaluation and testing. Manual weed maps of the same fields will be generated by Reading University, Syngenta and Peter Lutman so that the accuracy of automated mapping can be assessed. The principal hypothesis and concept to be tested is that, by combining maps from several surveys, a weed map with acceptable accuracy for end-users can be produced.

If the concept is proved and can be commercialised, systems could be retrofitted at low cost onto existing farm machinery. The outputs of the weed mapping software would then link with the precision farming options already built into many commercial sprayers, allowing their use for targeted, site-specific herbicide applications. Immediate economic benefits would therefore arise directly from reducing herbicide costs. SSWM will also reduce the overall pesticide load on the crop and so may reduce pesticide residues in food and drinking water, and reduce adverse impacts of pesticides on non-target species and beneficials. Farmers may even choose to leave unsprayed some non-injurious, environmentally beneficial, low-density weed infestations. These benefits fit very well with the anticipated legislation emerging in the new EU Thematic Strategy for Pesticides, which will encourage more targeted use of pesticides and greater uptake of Integrated Crop (Pest) Management approaches, and also with the requirements of the Water Framework Directive to reduce levels of pesticides in water bodies. The greater precision of weed management offered by SSWM is therefore a key element in preparing arable farming systems for a future where policy makers and consumers want to minimise pesticide use and the carbon footprint of farming while maintaining food production and security. The mapping technology could also be used on organic farms to identify areas of fields needing mechanical weed control, thereby reducing both carbon footprints and damage to crops by, for example, spring tines.

Objectives: i. To develop a prototype machine vision system for automated image capture during agricultural field operations; ii. To prove the concept that images captured by the machine vision system over a series of field operations can be processed to identify and geo-reference specific weeds in the field; iii. To generate weed maps from the geo-referenced weed plants/patches identified in objective (ii).
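The Bayesian-network identification named above is not specified in the abstract; the toy sketch below illustrates the general idea by scoring hypothetical leaf features against invented per-species likelihood tables with a naive Bayes combination. All species priors, features and probabilities are placeholders.

```python
import math

# hypothetical conditional probability tables: P(feature value | species)
cpt = {
    "black-grass": {"leaf_shape": {"linear": 0.9, "broad": 0.1},
                    "vein": {"parallel": 0.95, "net": 0.05},
                    "colour": {"dark": 0.7, "light": 0.3}},
    "cleavers":    {"leaf_shape": {"linear": 0.05, "broad": 0.95},
                    "vein": {"parallel": 0.1, "net": 0.9},
                    "colour": {"dark": 0.4, "light": 0.6}},
}
prior = {"black-grass": 0.5, "cleavers": 0.5}   # a priori knowledge of the field

def classify(observation):
    """Naive-Bayes posterior over species given observed image features."""
    logp = {sp: math.log(prior[sp]) + sum(
        math.log(cpt[sp][f][v]) for f, v in observation.items()) for sp in cpt}
    z = max(logp.values())
    odds = {sp: math.exp(l - z) for sp, l in logp.items()}
    s = sum(odds.values())
    return {sp: o / s for sp, o in odds.items()}

print(classify({"leaf_shape": "linear", "vein": "parallel", "colour": "dark"}))
```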
Abstract:
This solo exhibition featured Ballet, a filmed performance and video installation by Szuper Gallery, alongside an installation of original archive films. Ballet engages with recent histories of rural filmmaking and with movement and dance, linking everyday farming movements with the aesthetics of dance. The starting point for this new work was a series of archival films from the MERL collection, which were made for British farmers as a means of both information and propaganda, providing warnings of contagion and nuclear catastrophe and describing procedure and instruction in the case of emergency. Gestural performances and movements of background actors observed in those films were re-scripted into a new choreography of movement for camera, to form a playful assemblage.
Abstract:
Video: 35 mins, 2006. The video shows a group of performers in a studio and seminar situation. Individually addressing the camera, they offer personal views and experiences of their own art production in relation to the institution, while reflecting on their role as teachers. The performance scripts mainly originate from a series of real interviews with a diverse group of artist-teachers, who emphasise the collaborative, performative and subversive nature of teaching. These views may seem symptomatic of contemporary art practices, but are ultimately antagonistic to the ongoing commodification of the system of art education.