73 results for cameras and camera accessories
in CentAUR: Central Archive at the University of Reading - UK
Abstract:
The stylistic strategies, in particular those concerning camera placement and movement, of The Shield (FX, 2002-08) seem to fit directly into an aesthetic tradition developed by US cop dramas like Hill Street Blues (NBC, 1981-87), Homicide: Life on the Street (NBC, 1993-99) and NYPD Blue (ABC, 1993-2005). In these precinct dramas, decisions concerning spatial arrangements of camera and performer foreground a desire to present and react to action while it is happening, and with a minimum of apparent construction. As Jonathan Bignell (2009) has argued, the intimacy and immediacy of this stylistic approach, which has at its core an attempt at a documentary-like realism, are important to the police drama as a genre, and are tendencies that have also been taken as specific characteristics of television more generally. I explore how The Shield develops this tradition of a reactive camera style in its strategy of shooting with two cameras rather than one, with specific attention to how this shapes the presentation of performance. Through a detailed examination of the relationship between performer and camera(s), the chapter considers the way the series establishes access to the fictional world, which is crucial to the manner of police investigation central to its drama, and the impact of this on how we engage with performance. The cameras’ placement appears to balance various impulses, including the demands of attending to an ensemble cast, a spontaneous performance style, and action that is physically dynamic and involving. In a series whose stylistic decisions render the presentation of the body on-screen deliberately close yet obstructive, involving yet fleeting, the chapter explores the effect of this on the watching experience.
Abstract:
A visual telepresence system has been developed at the University of Reading which utilizes eye tracking to adjust the horizontal orientation of the cameras and display system according to the convergence state of the operator's eyes. Slaving the cameras to the operator's direction of gaze enables the object of interest to be centered on the displays. The advantage of this is that the camera field of view may be decreased to maximize the achievable depth resolution. An active camera system requires an active display system if appropriate binocular cues are to be preserved. For some applications, which critically depend upon the veridical perception of the object's location and dimensions, it is imperative that the contribution of binocular cues to these judgements be ascertained, because they are directly influenced by camera and display geometry. Using the active telepresence system, we investigated the contribution of ocular convergence information to judgements of size, distance and shape. Participants performed an open-loop reach and grasp of the virtual object under reduced cue conditions where the orientations of the cameras and the displays were either matched or unmatched. Inappropriate convergence information produced weak perceptual distortions and caused problems in fusing the images.
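As a rough illustration of the geometry underlying such convergence-driven control (a sketch only, not the Reading system's implementation; all names and values are hypothetical), the convergence angle of a symmetric camera pair fixating a point at distance D on the median plane, with baseline b, is θ = 2·atan(b/2D), and the relation can be inverted to recover the fixation distance implied by the operator's measured eye convergence:

    import math

    def vergence_angle(baseline_m, fixation_dist_m):
        # Total convergence angle (radians) for a symmetric camera pair
        # fixating a point at the given distance on the median plane.
        return 2.0 * math.atan2(baseline_m / 2.0, fixation_dist_m)

    def fixation_distance(baseline_m, vergence_rad):
        # Invert the relation: estimate the fixation distance implied
        # by a measured convergence state (e.g. of the operator's eyes).
        return (baseline_m / 2.0) / math.tan(vergence_rad / 2.0)

    # Example: slave a 65 mm camera pair to an operator fixating at 0.5 m.
    theta = vergence_angle(0.065, 0.5)   # ~0.13 rad (~7.4 degrees)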
Abstract:
This paper describes the crowd image analysis challenge that forms part of the PETS 2009 workshop. The aim of this challenge is to use new or existing systems for i) crowd count and density estimation, ii) tracking of individual(s) within a crowd, and iii) detection of separate flows and specific crowd events, in a real-world environment. The dataset scenarios were filmed from multiple cameras and involve multiple actors.
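As context for task (i), a crude baseline from which entrants might start is to treat the fraction of foreground pixels as a density proxy; a minimal sketch using OpenCV (illustrative only, not part of the challenge; the filename is hypothetical):

    import cv2

    # Foreground/background segmentation; the foreground pixel ratio
    # serves as a crude crowd-density proxy.
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)

    def density_estimate(frame):
        # Fraction of the view occupied by moving foreground (0..1).
        mask = subtractor.apply(frame)
        return cv2.countNonZero(mask) / mask.size

    cap = cv2.VideoCapture('pets2009_view1.avi')  # hypothetical file
    ok, frame = cap.read()
    while ok:
        print(density_estimate(frame))
        ok, frame = cap.read()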
Abstract:
The objective of a visual telepresence system is to provide the operator with a high fidelity image from a remote stereo camera pair linked to a pan/tilt device, such that the operator may reorient the camera position by use of head movement. Systems such as these, which utilise virtual reality style helmet-mounted displays, have a number of limitations. The geometry of the camera positions and of the displays is generally fixed and is most suitable only for viewing elements of a scene at a particular distance. To address such limitations, a prototype system has been developed where the geometry of the displays and cameras is dynamically controlled by the eye movement of the operator. This paper explores why it is necessary to actively adjust the display system as well as the cameras, and justifies the use of mechanical adjustment of the displays as an alternative to adjustment by electronic or image processing methods. The electronic and mechanical design is described, including optical arrangements and control algorithms. The performance and accuracy of the system are assessed with respect to eye movement.
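By way of illustration of the head-slaving idea (a sketch under assumed hardware, not the prototype's actual control code; limits and gains are hypothetical), tracked head orientation can be clamped to the platform's mechanical range and approached at a bounded rate on each control tick:

    def head_to_pan_tilt(head_yaw_deg, head_pitch_deg,
                         pan_limit=90.0, tilt_limit=45.0):
        # Map tracked head orientation onto pan/tilt setpoints, clamped
        # to the assumed mechanical limits of the camera platform.
        pan = max(-pan_limit, min(pan_limit, head_yaw_deg))
        tilt = max(-tilt_limit, min(tilt_limit, head_pitch_deg))
        return pan, tilt

    def step_toward(current_deg, target_deg, max_step_deg=2.0):
        # Rate-limited move toward the setpoint, one tick at a time,
        # so the command never exceeds what the actuators can deliver.
        delta = max(-max_step_deg, min(max_step_deg, target_deg - current_deg))
        return current_deg + delta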
Abstract:
Visual telepresence systems which utilize virtual reality style helmet-mounted displays have a number of limitations. The geometry of the camera positions and of the display is fixed, and is most suitable only for viewing elements of a scene at a particular distance. In such a system, the operator's ability to gaze around without use of head movement is severely limited. A trade-off must be made between poor viewing resolution and a narrow field of view. To address these limitations, a prototype system has been developed in which the geometry of the displays and cameras is dynamically controlled by the eye movement of the operator. This paper explores why it is necessary to actively adjust both the display system and the cameras, and justifies the use of mechanical adjustment of the displays as an alternative to adjustment by electronic or image processing methods. The electronic and mechanical design is described, including optical arrangements and control algorithms. The performance of the system is assessed against a fixed camera/display system when operators are assigned basic tasks involving depth and distance/size perception. The sensitivity to variations in the transient performance of the display and camera vergence is also assessed.
Abstract:
Reliable techniques for screening large numbers of plants for root traits are still being developed, but include aeroponic, hydroponic and agar plate systems. Coupled with digital cameras and image analysis software, these systems permit the rapid measurement of root numbers, length and diameter in moderate (typically <1000) numbers of plants. Usually such systems are employed with relatively small seedlings, and information is recorded in 2D. Recent developments in X-ray microtomography have facilitated 3D non-invasive measurement of small root systems grown in solid media, allowing angular distributions to be obtained in addition to numbers and length. However, because of the time taken to scan samples, only a small number can be screened (typically <10 per day, not including analysis time of the large spatial datasets generated) and, depending on sample size, limited resolution may mean that fine roots remain unresolved. Although agar plates allow differences between lines and genotypes to be discerned in young seedlings, the rank order may not be the same when the same materials are grown in solid media. For example, root length of dwarfing wheat (Triticum aestivum L.) lines grown on agar plates was increased by ∼40% relative to wild-type and semi-dwarfing lines, but in a sandy loam soil under well watered conditions it was decreased by 24-33%. Such differences in ranking suggest that significant soil environment-genotype interactions are occurring. Developments in instruments and software mean that a combination of high-throughput simple screens and more in-depth examination of root-soil interactions is becoming viable.
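As a sketch of the kind of 2D measurement such image analysis software automates (illustrative only, not the software used in the studies cited; the pixel scale is an assumed calibration input), total root length can be approximated by skeletonizing a binary root mask and counting centreline pixels:

    import numpy as np
    from skimage.morphology import skeletonize

    def total_root_length_mm(binary_mask, mm_per_pixel):
        # Skeletonization reduces each root to a one-pixel-wide centreline,
        # so the pixel count scales with total length (diagonal steps are
        # slightly undercounted; a correction factor can be applied).
        skeleton = skeletonize(binary_mask.astype(bool))
        return np.count_nonzero(skeleton) * mm_per_pixel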
Abstract:
The Solar TErrestrial RElations Observatory (STEREO) provides high-cadence, high-resolution images of the structure and morphology of coronal mass ejections (CMEs) in the inner heliosphere. CME directions and propagation speeds have often been estimated through the use of time-elongation maps obtained from the STEREO Heliospheric Imager (HI) data. Many of these CMEs have been identified by citizen scientists working within the SolarStormWatch project (www.solarstormwatch.com) as they work towards providing robust real-time identification of Earth-directed CMEs. The wide field of view of HI allows scientists to directly observe the two-dimensional (2D) structures, while the relative simplicity of time-elongation analysis means that it can be easily applied to many such events, thereby enabling a much deeper understanding of how CMEs evolve between the Sun and the Earth. For events with certain orientations, both the rear and front edges of the CME can be monitored at varying heliocentric distances (R) between the Sun and 1 AU. Here we take four example events with measurable position angle widths that were identified by the citizen scientists. These events were chosen for the clarity of their structure within the HI cameras and their long track lengths in the time-elongation maps. We show a linear dependency on R for the growth of the radial width (W) and the 2D aspect ratio (χ) of these CMEs, which are measured out to ≈0.7 AU. We estimated the radial width from a linear best fit for the average of the four CMEs, obtaining the relationships W = 0.14R + 0.04 for the width and χ = 2.5R + 0.86 for the aspect ratio (W and R in units of AU).
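As a worked example, evaluating the fitted relations at the outer edge of the measured range, R = 0.7 AU, gives W = 0.14 × 0.7 + 0.04 ≈ 0.14 AU and χ = 2.5 × 0.7 + 0.86 ≈ 2.6.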
Abstract:
The current state of the art and direction of research in computer vision aimed at automating the analysis of CCTV images is presented. This includes low-level identification of objects within the field of view of cameras, following those objects over time and between cameras, and the interpretation of those objects’ appearance and movements with respect to models of behaviour (and hence the intentions inferred from them). The potential ethical problems (and some potential opportunities) such developments may pose if and when deployed in the real world are presented, and suggestions are made as to the new regulations that will be needed if such systems are not to further enhance the power of the surveillers over the surveilled.
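A minimal sketch of the first two of these stages (blob detection by background subtraction plus greedy nearest-centroid association; purely illustrative, with all thresholds assumed, and far simpler than the systems surveyed):

    import cv2
    import numpy as np

    subtractor = cv2.createBackgroundSubtractorMOG2()
    tracks = {}    # track id -> last known centroid
    next_id = 0

    def update_tracks(frame, max_jump=50.0, min_area=200.0):
        # Detect moving blobs and associate each with the nearest
        # existing track; otherwise start a new track.
        global next_id
        mask = cv2.medianBlur(subtractor.apply(frame), 5)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) < min_area:   # ignore noise blobs
                continue
            x, y, w, h = cv2.boundingRect(c)
            centroid = np.array([x + w / 2.0, y + h / 2.0])
            best = min(tracks, default=None,
                       key=lambda t: np.linalg.norm(tracks[t] - centroid))
            if best is not None and np.linalg.norm(tracks[best] - centroid) < max_jump:
                tracks[best] = centroid        # continue an existing track
            else:
                tracks[next_id] = centroid     # start a new track
                next_id += 1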
Abstract:
This work presents a method of information fusion involving data captured by both a standard CCD camera and a ToF camera to be used in the detection of the proximity between a manipulator robot and a human. Both cameras are assumed to be located above the work area of an industrial robot. The fusion of colour images and time-of-flight information makes it possible to know the 3D localization of objects with respect to a world coordinate system, while at the same time making their colour information available. Considering that the ToF information given by the range camera contains inaccuracies, including distance error, border error, and pixel saturation, some corrections to the ToF information are proposed and developed to improve the results. The proposed fusion method uses the calibration parameters of both cameras to reproject 3D ToF points, expressed in a common coordinate system for both cameras and a robot arm, into 2D colour images. In addition, using the 3D information, motion detection in an industrial robot environment is achieved, and the fusion of information is applied to the foreground objects previously detected. This combination of information results in a matrix that links colour and 3D information, giving the possibility of characterising an object by its colour in addition to its 3D localization. Further development of these methods will make it possible to identify objects and their position in the real world, and to use this information to prevent possible collisions between the robot and such objects.
Abstract:
This work presents a method of information fusion involving data captured by both a standard charge-coupled device (CCD) camera and a time-of-flight (ToF) camera to be used in the detection of the proximity between a manipulator robot and a human. Both cameras are assumed to be located above the work area of an industrial robot. The fusion of colour images and time-of-flight information makes it possible to know the 3D localization of objects with respect to a world coordinate system, while at the same time making their colour information available. Considering that the ToF information given by the range camera contains inaccuracies, including distance error, border error, and pixel saturation, some corrections to the ToF information are proposed and developed to improve the results. The proposed fusion method uses the calibration parameters of both cameras to reproject 3D ToF points, expressed in a common coordinate system for both cameras and a robot arm, into 2D colour images. In addition, using the 3D information, motion detection in an industrial robot environment is achieved, and the fusion of information is applied to the foreground objects previously detected. This combination of information results in a matrix that links colour and 3D information, giving the possibility of characterising an object by its colour in addition to its 3D localisation. Further development of these methods will make it possible to identify objects and their position in the real world and to use this information to prevent possible collisions between the robot and such objects.
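A minimal sketch of the reprojection step described above (standard pinhole model; the intrinsic matrix K, rotation R and translation t stand in for the calibration parameters, and the values shown are hypothetical, not those of the paper):

    import numpy as np

    def reproject_tof_points(points_3d, K, R, t):
        # Project Nx3 ToF points (world coordinates, metres) into the
        # colour camera's image plane: p ~ K (R X + t).
        cam = points_3d @ R.T + t        # world -> colour-camera frame
        uv = cam @ K.T                   # apply intrinsics
        return uv[:, :2] / uv[:, 2:3]    # perspective divide -> pixels

    # Hypothetical calibration, for illustration only.
    K = np.array([[525.0,   0.0, 320.0],
                  [  0.0, 525.0, 240.0],
                  [  0.0,   0.0,   1.0]])
    R = np.eye(3)
    t = np.zeros(3)
    pixels = reproject_tof_points(np.array([[0.1, 0.2, 1.5]]), K, R, t)

Each reprojected pixel can then be paired with the colour value at that image location, which is one way of building the kind of matrix linking colour and 3D information that the abstract describes.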
Abstract:
We present combined observations made near midnight by the EISCAT radar, all-sky cameras and the Combined Release and Radiation Effects Satellite (CRRES) shortly before and during a substorm. In particular, we study a discrete, equatorward-drifting auroral arc, seen several degrees poleward of the onset region. The arc passes through the field-aligned beam of the EISCAT radar and is seen to be associated with a considerable upflow of ionospheric plasma. During the substorm, the CRRES satellite observed two major injections, 17 min apart, the second of which was dominated by O+ ions. We show that the observed arc was in a suitable location in both latitude and MLT to have fed O+ ions into the second injection and that the upward flux of ions associated with it was sufficient to explain the observed injection. We interpret these data as showing that arcs in the nightside plasma-sheet boundary layer could be the source of O+ ions energised by a dipolarisation of the mid- and near-Earth tail, as opposed to ions ejected from the dayside ionosphere in the cleft ion fountain.
Abstract:
Optical observations of a dayside auroral brightening sequence, by means of all-sky TV cameras and meridian scanning photometers, have been combined with EISCAT ion drift observations within the same invariant latitude-MLT sector. The observations were made during a January 1989 campaign by utilizing the high F region ion densities during the maximum phase of the solar cycle. The characteristic intermittent optical events, covering ∼300 km in east-west extent, move eastward (antisunward) along the poleward boundary of the persistent background aurora at velocities of ∼1.5 km s−1 and are associated with ion flows which swing from eastward to westward, with a subsequent return to eastward, during the interval of a few minutes when there is enhanced auroral emission within the radar field of view. The breakup of discrete auroral forms occurs at the reversal (negative potential) that forms between eastward plasma flow, maximizing near the persistent arc poleward boundary, and strong transient westward flow to the south. The reported events, covering a 35 min interval around 1400 MLT, are embedded within a longer period of similar auroral activity between 0830 (1200 MLT) and 1300 UT (1600 MLT). These observations are discussed in relation to recent models of boundary layer plasma dynamics and the associated magnetosphere-ionosphere coupling. The ionospheric events may correspond to large-scale wavelike motions of the low-latitude boundary layer (LLBL)/plasma sheet (PS) boundary. On the basis of this interpretation, the observed spot size, speed and repetition period (∼10 min) give a wavelength (the distance between spots) of ∼900 km in the present case. The events can also be explained as ionospheric signatures of newly opened flux tubes associated with reconnection bursts at the magnetopause near 1400 MLT. We also discuss these data in relation to random, patchy reconnection (as has recently been invoked to explain the presence of sheathlike plasma on closed field lines in the LLBL). In view of the lack of IMF data, and the existing uncertainty in the location of the open-closed field line boundary relative to the optical events, an unambiguous discrimination between the different alternatives is not easily obtained.
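The quoted spacing follows directly from the measured quantities: an antisunward drift of v ≈ 1.5 km s−1 and a repetition period of T ≈ 10 min give a wavelength λ = vT ≈ 1.5 km s−1 × 600 s ≈ 900 km.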
Abstract:
This audiovisual essay was created at the wonderful NEH-funded workshop in videographic criticism at Middlebury College, ‘Scholarship in Sound and Image’. The essay provides an analysis of the orchestration of long takes and camera movement in the opening of Caught (Ophuls, 1949), and develops a comparison with the opening of Madame de… (Ophuls, 1953 – U.S. release title The Earrings of Madame de…), not least through a series of juxtapositions, which can be directly presented and compared in an audiovisual essay. The openings share a concern with the subjectivity of the female protagonists and our relationship toward it, evoking the women’s experience while balancing this with other kinds of perspective. As has been noted in the critical literature on Ophuls, and on melodramas of passion more generally, such views enable us to perceive the women concerned to be caught in material and ideological frameworks of which they are at best partially aware. Among the interests of this particular comparison, however, is the extent to which the dynamic around female subjectivity is played in relation to luxury goods, imagined, owned or admired. Tensions between on- and off-screen spaces and sounds are critical to the interest of the long takes under discussion. Camera movements subtly inflect the extent to which we are aligned (or otherwise) with the characters and the ways in which their material circumstances are revealed to us.
Abstract:
Prêt-à-Médiatiser by House of POLLYFIBRE is a performance/film that takes the fashion show catwalk as a site for exploration, with a focus on the dialogue between liveness and mediatisation. The performance showcases a clothing collection that has been designed to be documented and is thus challenged in the context of the live event. Motivated by the 2-dimensionality and biased perspective of mediated images such as magazine photography, social network profile images and the surfaces of digital interfaces, the garments are one-sided and obstruct the models in their attempts to play out familiar fashion poses, unless they align themselves 'correctly' for the lens. There is material metaphor and wordplay throughout; for example, the clothing pieces are made from interfacing fabrics that are physically cut, pasted and layered to create the rigid flat silhouettes. The performance is accompanied by live sound created by tools of the fashion industry (including scissors and camera clicks) that have been adapted and amplified for use as instruments. The audience and press are invited to document the live event, and the subsequent film is made using footage collated from the crew, the audience and the official press.