8 results for camera motion estimation
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
Introduction: Spinal fusion is a widely and successfully performed strategy for the treatment of spinal deformities and degenerative diseases. The general approach has been to stabilize the spine with implants so that a solid bony fusion between the vertebrae can develop. However, new implant designs have emerged that aim at preserving or restoring the motion of the spinal segment. In addition to static load-sharing principles, these designs also require a profound knowledge of kinematic and dynamic properties to properly characterise the in vivo performance of the implants. Methods: To address this, an apparatus was developed that enables the intraoperative determination of the load–displacement behavior of spinal motion segments. The apparatus consists of a sensor-equipped distractor to measure the applied force between the transverse processes, and an optoelectronic camera to track the motion of the vertebrae and the distractor. In this intraoperative trial, measurements were made at four motion segments each in two patients with adolescent idiopathic scoliosis with right thoracic curves. Results: At a lateral bending moment of 5 N·m, the mean flexibility of all eight motion segments was 0.18 ± 0.08°/N·m on the convex side and 0.24 ± 0.11°/N·m on the concave side. Discussion: The results agree with published data obtained from cadaver studies with and without axial preload. Intraoperatively acquired data from this method may serve as input for mathematical models and contribute to the development of new implants and treatment strategies.
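The reported flexibility is simply the measured segmental rotation divided by the applied bending moment. A minimal sketch with a hypothetical rotation value (the 0.9° below is illustrative, chosen to reproduce the reported convex-side mean, not a value from the trial):

```python
# Minimal sketch: segmental flexibility as rotation per applied moment.
# The 0.9 deg rotation is a hypothetical value chosen to reproduce the
# reported convex-side mean of 0.18 deg/(N*m) at a 5 N*m bending moment.
def flexibility_deg_per_Nm(rotation_deg, moment_Nm):
    """Flexibility = measured rotation / applied bending moment."""
    return rotation_deg / moment_Nm

print(round(flexibility_deg_per_Nm(0.9, 5.0), 2))  # 0.18
```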
Abstract:
BACKGROUND: To investigate whether non-rigid image registration reduces motion artifacts in triggered and non-triggered diffusion tensor imaging (DTI) of native kidneys. A secondary aim was to determine if improvements through registration allow respiratory triggering to be omitted. METHODS: Twenty volunteers underwent coronal DTI of the kidneys with nine b-values (10–700 s/mm²) at 3 Tesla. Image registration was performed using a multimodal non-rigid registration algorithm. Data processing yielded the apparent diffusion coefficient (ADC), the contribution of perfusion (FP), and the fractional anisotropy (FA). To compare data stability, the root mean square error (RMSE) of the fitting and the standard deviations within the regions of interest (SDROI) were evaluated. RESULTS: RMSEs decreased significantly after registration for triggered and also for non-triggered scans (P < 0.05). SDROI for ADC, FA, and FP were significantly lower after registration in both medulla and cortex of triggered scans (P < 0.01). Similarly, the SDROI of FA and FP decreased significantly in non-triggered scans after registration (P < 0.05). RMSEs were significantly lower in triggered than in non-triggered scans, both with and without registration (P < 0.05). CONCLUSION: Respiratory motion correction by registration of individual echo-planar images clearly reduces signal variations in renal DTI for both triggered and, particularly, non-triggered scans. Secondarily, the results suggest that respiratory triggering still seems advantageous. J. Magn. Reson. Imaging 2014. (c) 2014 Wiley Periodicals, Inc.
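The stability comparison rests on two metrics named in the abstract: the RMSE of the signal fit and the standard deviation within a region of interest. A hedged sketch with hypothetical signal values (not data from the study):

```python
import numpy as np

# Sketch of the two stability metrics named in the abstract: RMSE of the
# signal fit and the standard deviation within an ROI. All values below
# are illustrative, not from the study.
def rmse(measured, fitted):
    """Root mean square error between measured and fitted signals."""
    d = np.asarray(measured) - np.asarray(fitted)
    return float(np.sqrt(np.mean(d ** 2)))

def sd_roi(param_map, roi_mask):
    """Standard deviation of a parameter map inside a boolean ROI mask."""
    return float(np.std(np.asarray(param_map)[roi_mask]))

signal = [1.00, 0.80, 0.65, 0.52]   # hypothetical diffusion signal decay
fit    = [0.98, 0.82, 0.63, 0.53]   # hypothetical model fit
print(round(rmse(signal, fit), 4))
```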
Abstract:
During November 2010–February 2011, we used camera traps to estimate the population density of Eurasian lynx Lynx lynx in Ciglikara Nature Reserve, Turkey, an isolated population in southwest Asia. Lynx density was calculated through spatial capture-recapture models. In a sampling effort of 1093 camera trap days, we identified 15 independent individuals and estimated a density of 4.20 independent lynx per 100 km², a density higher than previously reported for this species. Camera trap results also indicated that the lynx is likely preying on brown hare Lepus europaeus, which accounted for 63% of the non-target species pictured. As lagomorph populations tend to fluctuate, the high lynx density recorded in Ciglikara may be temporary and may decline with prey fluctuations. We therefore recommend surveying other protected areas in southwestern Turkey where the lynx is known or assumed to exist, and continuously monitoring these populations with reliable methods in order to understand their structure and dynamics and to define sensible measures and management plans to conserve this important species.
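The reported density comes from spatial capture-recapture modelling, which is not reproduced here. As a hedged sketch, a naive (non-spatial) estimate divides the number of identified individuals by an effective sampled area; the 357 km² area below is a purely illustrative assumption chosen so the numbers match the reported value:

```python
# Naive (non-spatial) density sketch for comparison with the spatial
# capture-recapture estimate reported above. The effective sampled area
# is a hypothetical value, not taken from the study.
def density_per_100km2(n_individuals, effective_area_km2):
    """Individuals per 100 km2 of effectively sampled area."""
    return n_individuals / effective_area_km2 * 100.0

print(round(density_per_100km2(15, 357.0), 2))  # ~4.2 lynx per 100 km2
```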
Abstract:
Real cameras have a limited depth of field. The resulting defocus blur is a valuable cue for estimating the depth structure of a scene. Using coded apertures, depth can be estimated from a single frame. For optical flow estimation between frames, however, the depth dependent degradation can introduce errors. These errors are most prominent when objects move relative to the focal plane of the camera. We incorporate coded aperture defocus blur into optical flow estimation and allow for piecewise smooth 3D motion of objects. With coded aperture flow, we can establish dense correspondences between pixels in succeeding coded aperture frames. We compare several approaches to compute accurate correspondences for coded aperture images showing objects with arbitrary 3D motion.
Abstract:
The finite depth of field of a real camera can be used to estimate the depth structure of a scene. The distance of an object from the plane in focus determines the defocus blur size. The shape of the blur depends on the shape of the aperture. The blur shape can be designed by masking the main lens aperture. In fact, aperture shapes different from the standard circular aperture give improved accuracy of depth estimation from defocus blur. We introduce an intuitive criterion to design aperture patterns for depth from defocus. The criterion is independent of a specific depth estimation algorithm. We formulate our design criterion by imposing constraints directly in the data domain and optimize the amount of depth information carried by blurred images. Our criterion is a quadratic function of the aperture transmission values. As such, it can be numerically evaluated to estimate optimized aperture patterns quickly. The proposed mask optimization procedure is applicable to different depth estimation scenarios. We use it for depth estimation from two images with different focus settings, for depth estimation from two images with different aperture shapes as well as for depth estimation from a single coded aperture image. In this work we show masks obtained with this new evaluation criterion and test their depth discrimination capability using a state-of-the-art depth estimation algorithm.
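The abstract states that the design criterion is a quadratic function of the aperture transmission values, so candidate masks can be scored cheaply as a quadratic form. A hedged sketch only: the criterion matrix Q and the 3×3 binary mask below are illustrative placeholders, not the paper's actual criterion.

```python
import numpy as np

# Scoring an aperture pattern with a quadratic criterion x^T Q x, as the
# abstract describes. Q and the candidate mask are placeholders.
def score_aperture(transmissions, Q):
    """Evaluate a quadratic design criterion on a transmission pattern."""
    x = np.asarray(transmissions, dtype=float).ravel()
    return float(x @ Q @ x)

Q = np.eye(9)                                       # illustrative criterion matrix
mask = np.array([[1, 0, 1], [0, 1, 0], [1, 0, 1]])  # candidate 3x3 binary aperture
print(score_aperture(mask, Q))  # 5.0: five open cells under an identity Q
```

Because the score is quadratic, many candidate patterns can be evaluated numerically without running a depth estimation algorithm, which is what makes the optimization fast.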
Abstract:
Pressure–Temperature–time (P–T–t) estimates of the syn-kinematic strain at the peak-pressure conditions reached during shallow underthrusting of the Briançonnais Zone in the Alpine subduction zone were made by thermodynamic modelling and ⁴⁰Ar/³⁹Ar dating in the Plan-de-Phasy unit (SE of the Pelvoux Massif, Western Alps). The dated phengite minerals crystallized syn-kinematically in a shear zone indicating top-to-the-N motion. By combining X-ray mapping with multi-equilibrium calculations, we estimate the phengite crystallization conditions at 270 ± 50 °C and 8.1 ± 2 kbar at an age of 45.9 ± 1.1 Ma. Combining this P–T–t estimate with data from the literature allows us to constrain the timing and geometry of Alpine continental subduction. We propose that the Briançonnais units were scalped on top of the slab during ongoing continental subduction and exhumed continuously until collision.
Abstract:
This paper describes a general workflow for the registration of terrestrial radar interferometric data with 3D point clouds derived from terrestrial photogrammetry and structure from motion. After determination of the intrinsic and extrinsic orientation parameters, data obtained by terrestrial radar interferometry were projected onto the point clouds and then onto the initial photographs. Visualising slope deformation measurements on photographs provides an easily understandable and distributable information product, especially for inaccessible target areas such as steep rock walls or rockfall run-out zones. The suitability and error propagation of the referencing steps and the final visualisation are compared for four approaches: (a) the classic approach using a metric camera and stereo-image photogrammetry; (b) images acquired with a metric camera, automatically processed using structure from motion; (c) images acquired with a digital compact camera, processed with structure from motion; and (d) a markerless approach, using images acquired with a digital compact camera and processed with structure from motion without artificial ground control points. The completely markerless approach supports the production of easily interpretable visualisation products from high-resolution radar interferometry.
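Projecting radar measurements onto photographs, as described above, requires the intrinsic and extrinsic orientation parameters. A minimal pinhole-camera sketch of that projection step; the K, R, t values are illustrative, not from the paper:

```python
import numpy as np

# Minimal pinhole projection sketch: map a 3D world point into pixel
# coordinates given intrinsics K and extrinsics R, t (all illustrative).
def project(point_3d, K, R, t):
    """Project a 3D world point to 2D pixel coordinates."""
    p_cam = R @ point_3d + t          # world -> camera coordinates
    u, v, w = K @ p_cam               # perspective projection
    return np.array([u / w, v / w])   # divide by depth for pixels

K = np.array([[1000.0, 0.0, 320.0],   # focal length and principal point
              [0.0, 1000.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)         # camera at world origin, no rotation

print(project(np.array([0.0, 0.0, 5.0]), K, R, t))  # lands at the image centre
```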
Abstract:
In this paper we propose a solution to blind deconvolution of a scene with two layers (foreground/background). We show that reconstructing the support of these two layers from a single image of a conventional camera is not possible. As a solution, we propose to use a light field camera. We demonstrate that a single light field image captured with a Lytro camera can be successfully deblurred. More specifically, we consider the case of space-varying motion blur, where the blur magnitude depends on the depth changes in the scene. Our method employs a layered model that handles occlusions and partial transparencies due to both motion blur and out-of-focus blur of the plenoptic camera. We reconstruct each layer support, the corresponding sharp textures, and motion blurs via an optimization scheme. The performance of our algorithm is demonstrated on synthetic as well as real light field images.