964 results for "choreography for the camera"


Relevance: 90.00%

Publisher:

Abstract:

Eyes with refractive error have reduced visual acuity and are rarely found in the wild. Vertebrate eyes possess a visually guided emmetropisation process within the retina that detects the sign of defocus and regulates eye growth to align the retina at the focal plane of the eye's optical components, avoiding the development of refractive error such as myopia, an increasing problem in humans [1]. However, the vertebrate retina is complex, and it is not known which of its many classes of neurons are involved [2]. We investigated whether the camera-type eye of an invertebrate, the squid, displays visually guided emmetropisation, despite squid eyes having a simple photoreceptor-only retina [3]. We exploited inherent longitudinal chromatic aberration (LCA) to create disparate focal lengths within squid eyes. We found that squid raised under orange light had proportionately longer eyes and more myopic refractions than those raised under blue light, and when switched between wavelengths, eye size and refractive status changed appropriately within a few days. This demonstrates that squid eye growth is visually guided, and suggests that the complex retina seen in vertebrates may not be required for emmetropisation.

Relevance: 90.00%

Publisher:

Abstract:

This article addresses several areas of research. First, it proposes that a dance experience can translate into another discipline such as visual art. In my visual art practice I combine photography, traditionally seen as a still medium, with performance in order to create a new form of embodiment. By acknowledging the inter-relationship between the body and the camera, my project seeks to challenge a perceived separation between the disciplines. Fly Rhythm, an exhibition of 13 photographs and one video projection, was conceived through a performative somatic process. I have developed the term ‘somatic photography’ to articulate subjective experiences in the context of my process of imaging movement in stillness. My thinking has been informed by visual art practice exploring movement and meaning using video, and by an older history of performance as a dancer and choreographer. I am primarily interested in movement initiated by a bodily response to light through still rather than moving imagery, although artists such as Maya Deren, whose films explore themes of time and space, have influenced me. In my practice the term ‘somatic photography’ helps articulate the act of taking photographs, which is where meaning is created, rather than purely in the finished artworks; the term puts emphasis on the action of taking the image. Using a custom-made camera I was able to negotiate time and space as a dancer and create a visual drawing that spoke to both choreography and fine art practice. This article engages with the following ideas: somatic photography, photography as choreography, body memory, the ageing body, technology as collaborator, the gallery interface, the screen interface and movement.

Relevance: 90.00%

Publisher:

Abstract:

NOGUEIRA, Marcelo B. ; MEDEIROS, Adelardo A. D. ; ALSINA, Pablo J. Pose Estimation of a Humanoid Robot Using Images from an Mobile Extern Camera. In: IFAC WORKSHOP ON MULTIVEHICLE SYSTEMS, 2006, Salvador, BA. Anais... Salvador: MVS 2006, 2006.

Relevance: 90.00%

Publisher:

Abstract:

New formulations, techniques and devices have made dental whitening safer and more effective. Despite this, whitening levels are still verified by visual comparison, an empirical, subjective method that is error-prone and dependent on individual interpretation. Normally the result of whitening is expressed as the displacement between the initial and final color, taking as reference the tonalities of a color scale ordered from darkest to lightest. Although it is the most widely used scale, the ordering of the Vita Classical (R) (Vita) scale recommended by the manufacturer proves inadequate for evaluating whitening. Using digital images and the OER algorithm (ordering of the reference scale), developed specifically for ScanWhite (C), the tonalities of the Vita Classical (R) scale were reordered. For this, the mean values of the R, G and B color channels over the middle portion of the crowns were adopted as the reference for evaluation. The images were taken with a Sony Cybershot DSC F828 camera. The results of the computational ordering were compared with the sequence proposed by the manufacturer and with that obtained by visual evaluation, carried out by 10 volunteers under standardized illumination conditions. Statistical analysis showed significant differences between the orderings.
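The ordering step described above amounts to sorting shade tabs by the mean of their R, G and B channel values. A minimal sketch in Python (the tab names are real Vita Classical shades, but the channel values are invented for illustration, not measurements from the study):

```python
# Hypothetical mean R, G, B values measured over the middle portion of
# each shade tab (values invented for illustration).
tabs = {
    "A1": (200, 185, 160),
    "C4": (140, 120, 100),
    "B2": (180, 165, 140),
}

def brightness(rgb):
    """Mean of the three color channels, used as the ordering criterion."""
    r, g, b = rgb
    return (r + g + b) / 3.0

# Order the scale from darkest to lightest, as the OER-style ordering does
# with the channel means of each crown image.
ordered = sorted(tabs, key=lambda t: brightness(tabs[t]))
print(ordered)  # darkest first: ['C4', 'B2', 'A1']
```

With per-tab channel means in hand, comparing two orderings (computational vs. manufacturer vs. visual) reduces to comparing two permutations of the same tab list.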

Relevance: 90.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 90.00%

Publisher:

Abstract:

This article discusses a proposal for displacement measurement using a single digital camera, aiming to exploit its feasibility for modal analysis applications. The proposal is a non-contact measuring approach able to measure multiple points simultaneously with a single digital camera. A modal analysis of a reduced-scale laboratory building structure, based only on the responses of the structure measured with the camera, is presented. The focus is on the feasibility and advantages of using a simple, ordinary camera to perform output-only modal analysis of structures. The modal parameters of the structure are estimated from the camera data and also by ordinary experimental modal analysis based on Frequency Response Functions (FRFs) obtained with the usual sensors, such as an accelerometer and a force cell. The comparison of the two analyses showed that the technique is a promising non-contact measuring tool, relatively simple and effective for use in structural modal analysis.
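The output-only idea of estimating modal parameters from camera-measured responses can be sketched by peak-picking the spectrum of a displacement signal. The sampling rate, mode frequency and signal below are invented, and a naive DFT stands in for a proper FFT:

```python
import math

# Synthetic displacement signal of one measured point: a single 5 Hz mode
# sampled at 100 Hz (frequency and rate are illustrative, not from the paper).
fs = 100.0
n = 200
signal = [math.sin(2 * math.pi * 5.0 * k / fs) for k in range(n)]

def dft_magnitude(x, bin_k):
    """Magnitude of the k-th DFT bin (naive DFT, fine for short signals)."""
    re = sum(v * math.cos(-2 * math.pi * bin_k * i / len(x)) for i, v in enumerate(x))
    im = sum(v * math.sin(-2 * math.pi * bin_k * i / len(x)) for i, v in enumerate(x))
    return math.hypot(re, im)

# Peak-pick over the first half of the spectrum to estimate the natural frequency.
mags = [dft_magnitude(signal, k) for k in range(n // 2)]
peak_bin = max(range(1, n // 2), key=lambda k: mags[k])
natural_freq = peak_bin * fs / n
print(natural_freq)  # 5.0
```

In a real test each tracked point yields one such signal, and the set of spectral peaks across points gives the natural frequencies and mode shapes.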

Relevance: 90.00%

Publisher:

Abstract:

There is increasing interest in the diving behavior of marine mammals. However, identifying foraging among recorded dives often requires several assumptions. The simultaneous acquisition of images of the prey encountered, together with records of diving behavior will allow researchers to more fully investigate the nature of subsurface behavior. We tested a novel digital camera linked to a time-depth recorder on Antarctic fur seals (Arctocephalus gazella). During the austral summer 2000-2001, this system was deployed on six lactating female fur seals at Bird Island, South Georgia, each for a single foraging trip. The camera was triggered at depths greater than 10 m. Five deployments recorded still images (640 x 480 pixels) at 3-sec intervals (total 8,288 images), the other recorded movie images at 0.2-sec intervals (total 7,598 frames). Memory limitation (64 MB) restricted sampling to approximately 1.5 d of 5-7 d foraging trips. An average of 8.5% of still pictures (2.4%-11.6%) showed krill (Euphausia superba) distinctly, while at least half the images in each deployment were empty, the remainder containing blurred or indistinct prey. In one deployment krill images were recorded within 2.5 h (16 km, assuming 1.8 m/sec travel speed) of leaving the beach. Five of the six deployments also showed other fur seals foraging in conjunction with the study animal. This system is likely to generate exciting new avenues for interpretation of diving behavior.

Relevance: 90.00%

Publisher:

Abstract:

Coexistence of sympatric species is mediated by resource partitioning. Pumas occur sympatrically with jaguars throughout most of the jaguar's range, but few studies have investigated space partitioning between the two species. Here, camera trapping and occupancy models accounting for imperfect detection were employed in a Bayesian framework to investigate space partitioning between the jaguar and puma in Emas National Park (ENP), central Brazil. Jaguars were estimated to occupy 54.1% and pumas 39.3% of the sample sites. Jaguar occupancy was negatively correlated with distance to water and positively correlated with the amount of dense habitat surrounding the camera trap. Puma occupancy showed only a weak negative correlation with distance to water and with jaguar presence. Both species were less often present at the same site than expected under independent distributions. Jaguars had a significantly higher detection probability at cameras on roads than at off-road locations; for pumas, detection was similar on and off roads. Results indicate that both differences in habitat use and active avoidance shape space partitioning between jaguars and pumas in ENP. Considering its size, the jaguar is likely the dominant competitor of the two species. Owing to its habitat preferences, suitable jaguar habitat outside the park is probably sparse. Consequently, the jaguar population is likely largely confined to the park, while the puma population is known to extend into ENP's surroundings. (C) 2011 Deutsche Gesellschaft für Säugetierkunde. Published by Elsevier GmbH. All rights reserved.
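The role of imperfect detection in such models can be illustrated with the standard occupancy-modelling identity: with occupancy probability psi and per-survey detection probability p, the chance of recording a species at least once in J surveys is psi(1 - (1 - p)^J). Only the 54.1% jaguar occupancy figure comes from the abstract; the detection probabilities and survey count below are invented:

```python
# Sketch of the occupancy/detection distinction that the models account for.
def prob_detected(psi, p, surveys):
    """Probability the species is recorded at least once at an occupied site."""
    return psi * (1.0 - (1.0 - p) ** surveys)

# Higher detection on roads (as found for jaguars) raises the chance that an
# occupied site is actually recorded; p values here are illustrative only.
on_road = prob_detected(0.541, 0.4, 10)
off_road = prob_detected(0.541, 0.1, 10)
print(round(on_road, 3), round(off_road, 3))
```

This is why naive presence/absence counts understate occupancy: the models jointly estimate psi and p rather than treating non-detection as absence.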

Relevance: 90.00%

Publisher:

Abstract:

Optical transition radiation (OTR) plays an important role in beam diagnostics for high-energy particle accelerators. Its intensity, linear in beam current, is a great advantage compared to fluorescent screens, which are subject to saturation. Moreover, measuring the angular distribution of the emitted radiation enables the determination of many beam parameters at a single observation point. However, few works deal with the application of OTR to monitor low-energy beams. In this work we describe the design of an OTR-based beam monitor used to measure the transverse beam charge distribution of the 1.9-MeV electron beam of the linac injector of the IFUSP microtron, using a standard machine-vision camera. The average beam current in pulsed operation mode is of the order of tens of nanoamps. Low energy and low beam current make OTR observation difficult. To improve sensitivity, the beam incidence angle on the target was chosen to maximize the photon flux in the camera field of view. Measurements that confirm OTR observation (linearity with beam current, polarization, and spectrum shape) are presented, as well as a typical 1.9-MeV electron beam charge distribution obtained from OTR. Some aspects of emittance measurement using this device are also discussed. (C) 2012 American Institute of Physics. [http://dx.doi.org/10.1063/1.4748519]

Relevance: 90.00%

Publisher:

Abstract:

In this paper we study a variational problem derived from a computer vision application: video camera calibration with a smoothing constraint. By video camera calibration we mean estimating the location, orientation and lens zoom setting of the camera for each video frame, taking into account visible image features. To simplify the problem we assume that the camera is mounted on a tripod; in that case, for each frame captured at time t, the calibration is given by 3 parameters: (1) P(t) (PAN), the rotation about the tripod's vertical axis, (2) T(t) (TILT), the rotation about the tripod's horizontal axis, and (3) Z(t) (CAMERA ZOOM), the camera lens zoom setting. The calibration function t -> u(t) = (P(t), T(t), Z(t)) is obtained as a minimum of an energy functional I[u]. In this paper we study the existence of minima of this energy functional as well as the solutions of the associated Euler-Lagrange equations.
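The abstract does not state the explicit form of I[u]. A common choice for a calibration-with-smoothing problem of this shape (an assumption here, not the paper's stated functional) is a data-fidelity term plus a quadratic smoothing penalty:

```latex
% Hypothetical energy for the calibration function u(t) = (P(t), T(t), Z(t)):
% a data term D measuring disagreement between projected and observed image
% features, plus a smoothing term weighted by \lambda > 0.
I[u] = \int_0^T D\bigl(u(t), t\bigr)\,dt
     + \lambda \int_0^T \lVert u'(t) \rVert^2\,dt
% The associated Euler--Lagrange equations for a minimiser then read
\nabla_u D\bigl(u(t), t\bigr) - 2\lambda\, u''(t) = 0 .
```

Under such a form, existence of minima is typically argued via coercivity of the smoothing term and lower semicontinuity of D, which is consistent with the analysis the abstract announces.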

Relevance: 90.00%

Publisher:

Abstract:

In this paper, we present a vascular tree model made of synthetic materials which allows us to obtain images for a 3D reconstruction. To create this model, we used PVC tubes of several diameters and lengths that let us evaluate the accuracy of our 3D reconstruction. We performed the 3D reconstruction from a series of images of our model, after calibrating the camera; for the calibration we used a corner detector. We also used optical flow techniques to track points through the image sequence, forward and backward. Once we have the set of images in which a point has been located, we perform the 3D reconstruction by choosing a pair of images at random and computing the projection error. After several repetitions, we find the best 3D location for the point.
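The projection-error criterion used to select the best 3D location can be sketched with a toy pinhole model; the camera geometry and the point below are invented for illustration, not taken from the paper:

```python
# Minimal pinhole projection and reprojection-error check for one 3D point
# seen in two views. Cameras look down +z and differ by a translation along x.
def project(point, cam_t, f=1.0):
    """Project a 3D point with a pinhole camera translated by cam_t along x."""
    x, y, z = point[0] - cam_t, point[1], point[2]
    return (f * x / z, f * y / z)

point = (0.5, 0.2, 4.0)
# (camera translation, observed 2D feature) pairs, as tracking would provide.
views = [(0.0, project(point, 0.0)), (1.0, project(point, 1.0))]

def reprojection_error(candidate, views):
    """Sum of squared differences between projected and observed positions."""
    err = 0.0
    for cam_t, observed in views:
        u, v = project(candidate, cam_t)
        err += (u - observed[0]) ** 2 + (v - observed[1]) ** 2
    return err

print(reprojection_error(point, views))                 # 0.0 for the true point
print(reprojection_error((0.5, 0.2, 4.5), views) > 0)   # True for a wrong depth
```

Repeating this over randomly chosen image pairs and keeping the candidate with the lowest error mirrors the repetition-and-selection step described in the abstract.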

Relevance: 90.00%

Publisher:

Abstract:

When depicting both virtual and physical worlds, the viewer's impression of presence in these worlds is strongly linked to camera motion. Plausible and artist-controlled camera movement can substantially increase scene immersion. While physical camera motion exhibits subtle details of position, rotation, and acceleration, these details are often missing from virtual camera motion. In this work, we analyze camera movement using signal theory. Our system allows us to stylize a smooth user-defined virtual base camera motion by enriching it with plausible details. A key component of our system is a database of videos filmed by physical cameras. These videos are analyzed with a camera-motion estimation algorithm (structure-from-motion) and labeled manually with a specific style. By considering spectral properties of location, orientation and acceleration, our solution learns camera motion details. Consequently, an arbitrary virtual base motion, defined in any conventional animation package, can be automatically modified according to a user-selected style. In an animation package the camera motion base path is typically defined by the user via function curves; another possibility is to obtain the camera path using a mixed-reality camera in a motion-capture studio. As shown in our experiments, the resulting shots are still fully artist-controlled, but appear richer and more physically plausible.
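The idea of enriching a smooth base path with small, style-dependent detail can be sketched as follows. A single sine component stands in for the motion spectra the system learns from real footage, and all parameters here are invented:

```python
import math

# Smooth user-defined base motion: a linear pan (one rotation channel).
n = 100
base = [0.5 * t / n for t in range(n)]

# A toy "handheld" style: one high-frequency, low-amplitude detail component.
# A real style would be a full spectrum learned from physical camera footage.
detail_freq, detail_amp = 8.0, 0.01

styled = [b + detail_amp * math.sin(2 * math.pi * detail_freq * t / n)
          for t, b in enumerate(base)]

# The added detail perturbs the path without overriding the artist's base motion.
max_dev = max(abs(s - b) for s, b in zip(styled, base))
print(max_dev <= detail_amp + 1e-12)  # True
```

Because the detail is additive and band-limited, the artist's function-curve path survives intact, which matches the abstract's claim that stylized shots remain fully artist-controlled.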

Relevance: 90.00%

Publisher:

Abstract:

Defocus blur is an indicator for the depth structure of a scene. However, given a single input image from a conventional camera one cannot distinguish between blurred objects lying in front or behind the focal plane, as they may be subject to exactly the same amount of blur. In this paper we address this limitation by exploiting coded apertures. Previous work in this area focuses on setups where the scene is placed either entirely in front or entirely behind the focal plane. We demonstrate that asymmetric apertures result in unique blurs for all distances from the camera. To exploit asymmetric apertures we propose an algorithm that can unambiguously estimate scene depth and texture from a single input image. One of the main advantages of our method is that, within the same depth range, we can work with less blurred data than in other methods. The technique is tested on both synthetic and real images.
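Why an asymmetric aperture yields unique blurs for all distances can be seen in a 1D toy model in which defocus behind the focal plane corresponds to convolving with the mirrored point-spread function (a simplification of the optics; the signal and kernels below are invented):

```python
# 1D illustration of the defocus-sign ambiguity and how asymmetry removes it.
def convolve(signal, kernel):
    """Naive same-size 1D convolution with zero padding at the borders."""
    half = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = i + j - half
            if 0 <= idx < len(signal):
                acc += k * signal[idx]
        out.append(acc)
    return out

signal = [0, 0, 1, 0, 0, 0, 1, 1, 0, 0]
symmetric = [0.25, 0.5, 0.25]   # conventional aperture: mirror-invariant PSF
asymmetric = [0.6, 0.3, 0.1]    # invented coded-aperture profile

front = convolve(signal, asymmetric)
behind = convolve(signal, asymmetric[::-1])
print(front != behind)  # True: front/behind blurs are distinguishable
print(convolve(signal, symmetric) == convolve(signal, symmetric[::-1]))  # True: ambiguous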

Relevance: 90.00%

Publisher:

Abstract:

Purpose: Ophthalmologists are confronted with a set of different image modalities to diagnose eye tumors, e.g., fundus photography, CT and MRI. However, these images are often complementary and represent pathologies differently; some aspects of tumors can only be seen in a particular modality. A fusion of modalities would improve the contextual information for diagnosis. The presented work attempts to register color fundus photography with MRI volumes, complementing the low-resolution 3D information in the MRI with high-resolution 2D fundus images. Methods: MRI volumes were acquired from 12 infants under the age of 5 with unilateral retinoblastoma. The contrast-enhanced T1-FLAIR sequence was performed with an isotropic resolution of less than 0.5mm. Fundus images were acquired with a RetCam camera. For healthy eyes, two landmarks were used: the optic disk and the fovea. The eyes were detected and extracted from the MRI volume using a 3D adaptation of the Fast Radial Symmetry Transform (FRST). The cropped volume was automatically segmented using the Split Bregman algorithm. The optic nerve was enhanced by a Frangi vessel filter, and the optic disk was found by intersecting the nerve with the retina. The fovea position was estimated by constraining it with the angle between the optic and visual axes as well as the distance from the optic disk. The optical axis was detected automatically by fitting a paraboloid to the lens surface. On the fundus, the optic disk and the fovea were detected using the method of Budai et al. Finally, the image was projected onto the segmented surface using the lens position as the camera center. In tumor-affected eyes, the manually segmented tumors were used instead of the optic disk and macula for the registration. Results: In all 12 MRI volumes tested, the 24 eyes were found correctly, including healthy and pathological cases.
In healthy eyes the optic nerve head was found in all of the tested eyes with an error of 1.08 +/- 0.37mm. A successful registration can be seen in figure 1. Conclusions: The presented method is a step toward automatic fusion of modalities in ophthalmology. The combination enhances the MRI volume with higher resolution from the color fundus on the retina. Tumor treatment planning is improved by avoiding critical structures, and disease progression monitoring is made easier.
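The fovea-position constraint (a fixed distance from the optic disk at a fixed angle relative to the optical axis) can be sketched in 2D; the distance and angle values below are illustrative, not the paper's:

```python
import math

# Toy version of the fovea-position constraint: starting from the optic-disk
# location, move a fixed distance at a fixed angular offset from the optical
# axis. Distance (mm) and offset angle are invented for illustration.
def estimate_fovea(disk_xy, optical_axis_angle, disk_fovea_dist, offset_angle):
    theta = optical_axis_angle + offset_angle
    return (disk_xy[0] + disk_fovea_dist * math.cos(theta),
            disk_xy[1] + disk_fovea_dist * math.sin(theta))

fovea = estimate_fovea((0.0, 0.0), 0.0, 4.5, math.radians(15))
print(fovea)
```

The real method applies this constraint in 3D on the segmented retinal surface, but the geometry (disk position, axis angle, distance) is the same.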

Relevance: 90.00%

Publisher:

Abstract:

A three-level satellite-to-ground monitoring scheme for conservation easement monitoring has been implemented in which high-resolution imagery serves as an intermediate step for inspecting high-priority sites. A digital vertical aerial camera system was developed to fulfill the need for an economical source of imagery for this intermediate step. A method for attaching the camera system to small aircraft was designed, and the camera system was calibrated and tested. To ensure that the images obtained were of suitable quality for use in Level 2 inspections, rectified imagery was required to provide positional accuracy of 5 meters or less, comparable to current commercially available high-resolution satellite imagery. Focal length calibration was performed to determine the infinity focal length at two lens settings (24mm and 35mm) with a precision of 0.1mm. A known focal length is required for creating navigation points representing locations to be photographed (waypoints). Photographing an object of known size at several distances on a test range allowed estimates of focal lengths of 25.1mm and 35.4mm for the 24mm and 35mm lens settings, respectively. Constants required for distortion removal were obtained using analytical plumb-line calibration procedures for both lens settings, with mild distortion at the 24mm setting and virtually no distortion found at the 35mm setting. The system was designed to operate in a series of stages: mission planning, mission execution, and post-mission processing. During mission planning, waypoints were created using custom tools in geographic information system (GIS) software. During mission execution, the camera is connected to a laptop computer with a global positioning system (GPS) receiver attached. Customized mobile GIS software accepts position information from the GPS receiver, provides information for navigation, and automatically triggers the camera upon reaching the desired location.
Post-mission processing (rectification) of imagery, consisting of removal of lens distortion effects, correction for horizontal displacement due to terrain variations (relief displacement), and relating the images to ground coordinates, was performed with no more than a second-order polynomial warping function. Accuracy testing was performed to verify the positional accuracy capabilities of the system in an ideal-case scenario as well as a real-world case. Using many well-distributed and highly accurate control points on flat terrain, the rectified images yielded a median positional accuracy of 0.3 meters. Imagery captured over commercial forestland with varying terrain in eastern Maine, rectified to digital orthophoto quadrangles, yielded median positional accuracies of 2.3 meters, with accuracies of 3.1 meters or better in 75 percent of measurements made. These accuracies were well within performance requirements. The images from the digital camera system are of high quality, displaying significant detail at common flying heights, at which the ground resolution of the camera system ranges between 0.07 meters and 0.67 meters per pixel, satisfying the requirement that imagery be of comparable resolution to current high-resolution satellite imagery. Due to the high resolution of the imagery, the positional accuracy attainable, and the convenience with which it is operated, the digital aerial camera system developed is a potentially cost-effective solution for use in the intermediate step of a satellite-to-ground conservation easement monitoring scheme.
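A second-order polynomial warp of the kind used in the rectification step maps image coordinates to ground coordinates with six coefficients per axis. The coefficients below are invented for illustration; in practice they come from a least-squares fit to ground control points:

```python
# Second-order polynomial warp: image (x, y) -> ground coordinate.
# Six coefficients per output axis: constant, x, y, xy, x^2, y^2.
def warp(coeffs, x, y):
    a0, a1, a2, a3, a4, a5 = coeffs
    return a0 + a1 * x + a2 * y + a3 * x * y + a4 * x * x + a5 * y * y

# Invented coefficient sets for the ground X and Y axes; real values would be
# fit to control points such as those measured on the flat-terrain test site.
cx = (100.0, 0.5, 0.0, 0.001, 0.0, 0.0)
cy = (200.0, 0.0, 0.5, 0.0, 0.0, 0.001)

X = warp(cx, 10.0, 20.0)
Y = warp(cy, 10.0, 20.0)
print(X, Y)  # 105.2 210.4
```

Dropping the xy and squared terms reduces this to an affine transform; the second-order terms are what absorb residual distortion and mild relief displacement.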