982 results for depth image


Relevance: 30.00%

Abstract:

The SES_GR1-Mesozooplankton faecal pellet production rates dataset is based on samples taken during April 2008 in the north-eastern Aegean Sea. Mesozooplankton is collected by vertical tows within the Black Sea water mass layer in the NE Aegean, using a WP-2 200 µm net equipped with a large non-filtering cod-end (10 l). Macrozooplankton organisms are removed using a 2000 µm net. A few unsorted animals (approximately 100) are placed inside several 250 ml glass beakers filled with GF/F or 0.2 µm Nuclepore filtered seawater, with a 100 µm net placed 1 cm above the beaker bottom. The beakers are then placed in an incubator at natural light and in situ temperature. After 1 hour, pellets are separated from the animals, placed in separate flasks and preserved with formalin. Pellets are counted and measured using an inverted microscope; animals are scanned and counted using an image analysis system. Carbon-specific faecal pellet production is calculated from a) faecal pellet production; b) individual carbon: animals are scanned and their body area is measured using an image analysis system, body volume is then calculated as an ellipsoid using the major and minor axes of an ellipse of the same area as the body, and individual carbon is calculated from a carbon to total body volume relationship (obtained for the Mediterranean Sea by Alcaraz et al. (2003)) divided by the total number of individuals scanned; and c) faecal pellet carbon: faecal pellet length and width are measured using an inverted microscope, faecal pellet volume is calculated from length and width assuming a cylindrical shape, and pellet volume is converted to carbon using values obtained in the Mediterranean: a faecal pellet density of 1.29 g cm**-3 (or pg µm**-3) from Komar et al. (1981), a faecal pellet DW/WW of 0.23 from Elder and Fowler (1977), and a faecal pellet C%DW of 25.5 from Marty et al. (1994).
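
The volume-to-carbon conversion above is a short arithmetic chain; a minimal sketch in Python using the cited Mediterranean constants (the example pellet dimensions are hypothetical):

```python
import math

# Conversion constants cited in the dataset description (Mediterranean values).
PELLET_DENSITY = 1.29   # g cm**-3, equivalently pg µm**-3 (Komar et al., 1981)
DW_OVER_WW = 0.23       # dry weight / wet weight (Elder and Fowler, 1977)
C_FRACTION_DW = 0.255   # carbon as a fraction of dry weight (Marty et al., 1994)

def pellet_carbon_pg(length_um: float, width_um: float) -> float:
    """Carbon content (pg C) of one faecal pellet, assumed cylindrical."""
    volume_um3 = math.pi * (width_um / 2.0) ** 2 * length_um  # cylinder volume
    wet_weight_pg = volume_um3 * PELLET_DENSITY               # density is 1 pg per µm**3 unit
    return wet_weight_pg * DW_OVER_WW * C_FRACTION_DW

# Hypothetical pellet, 200 µm long and 40 µm wide:
print(f"{pellet_carbon_pg(200.0, 40.0):.0f} pg C")  # ≈ 19000 pg C
```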

Relevance: 30.00%

Abstract:

AIM To report the finding of extension of the 4th hyper-reflective band and retinal tissue into the optic disc in patients with cavitary optic disc anomalies (CODAs). METHODS In this observational study, 10 patients (18 eyes) with sporadic or autosomal dominant CODA were evaluated with enhanced depth imaging optical coherence tomography (EDI-OCT) and colour fundus images for the presence of 4th hyper-reflective band extension into the optic disc. RESULTS Of the 10 CODA patients (18 eyes), five patients (8 eyes) showed a definite 4th hyper-reflective band (presumed retinal pigment epithelium (RPE)) extension into the optic disc. In these five patients (seven eyes), the inner retinal layers also extended with the 4th hyper-reflective band into the optic disc. Best corrected visual acuity ranged from 20/20 to 20/200. In three patients (four eyes), retinal splitting/schisis was present, and in two patients (two eyes), the macula was involved. In all cases, the 4th hyper-reflective band extended far beyond the termination of the choroid into the optic disc. The RPE extension was found either temporally or nasally in areas of optic nerve head excavation, most often adjacent to peripapillary pigment. Compared with eyes without RPE extension, eyes with RPE extension were more myopic (mean -0.9±2.6 vs -8.8±5.0 dioptres, p=0.043). CONCLUSIONS The RPE usually stops near the optic nerve border, separated from it by border tissue. In CODA, this hyper-reflective band and retinal tissue can extend into the disc, which is best evaluated using EDI-OCT or analogous imaging modalities. Whether this finding is specific to CODA, linked to specific gene loci, or also seen in patients with other optic disc abnormalities needs further evaluation.

Relevance: 30.00%

Abstract:

In this paper, we consider a scenario where 3D scenes are modeled through a View+Depth representation, which is used at the rendering side to generate synthetic views for free viewpoint video. The encoding of both types of data (view and depth) is carried out using two H.264/AVC encoders. In this scenario we address the reduction of the encoding complexity of the depth data. First, an analysis of the Mode Decision and Motion Estimation processes is conducted for both view and depth sequences, in order to capture the correlation between them. Taking advantage of this correlation, we propose a fast mode decision and motion estimation algorithm for depth encoding. Results show that the proposed algorithm reduces the computational burden with a negligible loss in the quality of the rendered synthetic views. Quality measurements have been conducted using the Video Quality Metric.
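
A minimal sketch of this kind of correlation-based pruning (the mode names and the view-to-depth mapping below are illustrative assumptions, not the paper's actual decision tables): the mode chosen for a view macroblock restricts the candidate modes evaluated for the co-located depth macroblock, and the view motion vector seeds a reduced depth motion search.

```python
# Illustrative H.264/AVC-style macroblock modes; the candidate mapping is a
# hypothetical example of the pruning idea, not the paper's rules.
ALL_MODES = ["SKIP", "16x16", "16x8", "8x16", "8x8", "INTRA"]

VIEW_TO_DEPTH_CANDIDATES = {
    "SKIP":  ["SKIP", "16x16"],
    "16x16": ["SKIP", "16x16"],
    "16x8":  ["16x16", "16x8"],
    "8x16":  ["16x16", "8x16"],
    "8x8":   ["16x8", "8x16", "8x8"],
    "INTRA": ["16x16", "INTRA"],
}

def depth_candidates(view_mode: str) -> list[str]:
    """Prune the depth MB mode search using the co-located view MB mode."""
    return VIEW_TO_DEPTH_CANDIDATES.get(view_mode, ALL_MODES)

def depth_search_window(view_mv: tuple[int, int], reduced_range: int = 4):
    """Centre a reduced motion search on the co-located view motion vector."""
    vx, vy = view_mv
    return (vx - reduced_range, vy - reduced_range,
            vx + reduced_range, vy + reduced_range)
```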

Relevance: 30.00%

Abstract:

In this paper we present an efficient hole filling strategy that improves the quality of the depth maps obtained with the Microsoft Kinect device. The proposed approach is based on a joint-bilateral filtering framework that includes spatial and temporal information. The missing depth values are obtained by iteratively applying a joint-bilateral filter to their neighboring pixels. The filter weights are selected considering three different factors: visual data, depth information and a temporal-consistency map. Video and depth data are combined to improve depth map quality in the presence of edges and homogeneous regions. Finally, the temporal-consistency map is generated in order to track the reliability of the depth measurements near the hole regions. The obtained depth values are included iteratively in the filtering process of the successive frames, and the accuracy of the depth values in the hole regions increases as new samples are acquired and filtered.
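
A minimal sketch of one such filtering step for a single hole pixel, assuming Gaussian spatial and colour-range weights (the temporal-consistency weight from the paper is omitted here, and the sigma values are illustrative):

```python
import numpy as np

def fill_hole_pixel(depth, color, y, x, radius=5, sigma_s=3.0, sigma_c=10.0):
    """Joint-bilateral estimate of a missing depth value at pixel (y, x).

    `depth` is (H, W) with holes marked as 0 (Kinect convention); `color`
    is the registered (H, W, 3) image.  Only valid-depth neighbours vote,
    weighted by spatial distance and colour similarity to the centre pixel.
    """
    h, w = depth.shape
    y0, y1 = max(0, y - radius), min(h, y + radius + 1)
    x0, x1 = max(0, x - radius), min(w, x + radius + 1)

    patch_d = depth[y0:y1, x0:x1].astype(np.float64)
    patch_c = color[y0:y1, x0:x1].astype(np.float64)
    centre_c = color[y, x].astype(np.float64)
    yy, xx = np.mgrid[y0:y1, x0:x1]

    valid = patch_d > 0
    w_spatial = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
    w_colour = np.exp(-np.sum((patch_c - centre_c) ** 2, axis=-1)
                      / (2 * sigma_c ** 2))
    weights = w_spatial * w_colour * valid
    if weights.sum() == 0:
        return 0.0  # no valid neighbours yet; retry on a later iteration
    return float((weights * patch_d).sum() / weights.sum())
```

Applied iteratively over all hole pixels (and across frames), filled values become valid neighbours for the next pass, which is how a hole shrinks from its borders inward.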

Relevance: 30.00%

Abstract:

In this paper, we present a depth-color scene modeling strategy for indoor 3D content generation. It combines depth and visual information provided by a low-cost active depth camera to improve the accuracy of the acquired depth maps, taking into account the different dynamic nature of the scene elements. Accurate depth and color models of the scene background are iteratively built and used to detect moving elements in the scene. The acquired depth data is continuously processed with an innovative joint-bilateral filter that efficiently combines depth and visual information thanks to the analysis of an edge-uncertainty map and the detected foreground regions. The main advantages of the proposed approach are: removing spatial noise and temporal random fluctuations from the depth maps; refining depth data at object boundaries; and iteratively generating a robust depth and color background model together with an accurate moving-object silhouette.
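
A minimal sketch of the iterative background-model update described above (the running-average rule, the learning rate, and the foreground threshold are illustrative assumptions; the paper's edge-uncertainty analysis is not reproduced):

```python
import numpy as np

def update_background(bg_depth, bg_color, depth, color, fg_mask, alpha=0.05):
    """Blend a new frame into float background models, frozen on foreground.

    `fg_mask` is a boolean (H, W) mask of detected moving elements; `alpha`
    is a hypothetical learning rate.  Pixels with invalid depth (0) or
    flagged as foreground keep their current background estimate.
    """
    upd = (~fg_mask) & (depth > 0)
    bg_depth[upd] = (1 - alpha) * bg_depth[upd] + alpha * depth[upd]
    bg_color[upd] = (1 - alpha) * bg_color[upd] + alpha * color[upd]
    return bg_depth, bg_color

def detect_foreground(bg_depth, depth, tau=30.0):
    """Flag pixels whose depth deviates from the background by more than tau."""
    return (depth > 0) & (np.abs(depth - bg_depth) > tau)
```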

Relevance: 30.00%

Abstract:

The readout procedure of charge-coupled device (CCD) cameras is known to generate some image degradation in different scientific imaging fields, especially in astrophysics. In the particular field of particle image velocimetry (PIV), widely extended in the scientific community, the readout procedure of the interline CCD sensor induces a bias in the registered position of particle images. This work proposes simple procedures to predict the magnitude of the associated measurement error. Generally, the position bias differs between the images of a given particle in the two PIV frames. This leads to a substantial bias error in the PIV velocity measurement (~0.1 pixels), the order of magnitude that other typical PIV errors, such as peak-locking, may reach. Based on modern CCD technology and architecture, this work offers a description of the readout phenomenon and proposes a model for the magnitude of the CCD readout bias error. This bias, in turn, generates a velocity measurement bias error when there is an illumination difference between two successive PIV exposures. The model predictions match experiments performed with two 12-bit interline CCD cameras (MegaPlus ES 4.0/E, incorporating the 4-megapixel Kodak KAI-4000M CCD sensor). For different cameras, only two constant values are needed to fit the proposed calibration model and predict the error from the readout procedure. Tests by different researchers using different cameras would allow verification of the model, which can then be used to optimize acquisition setups. Simple procedures to obtain these two calibration values are also described.
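
The abstract does not reproduce the model itself, so the following is only a loosely hedged sketch of how a two-constant calibration of this kind could be fitted; the linear dependence on the inter-exposure illumination ratio is an assumption made purely for illustration, not the published functional form:

```python
import numpy as np

def fit_readout_bias(illum_ratio: np.ndarray, bias_px: np.ndarray):
    """Least-squares fit of two calibration constants (a, b).

    Hypothetical model: position bias [px] = a * illumination_ratio + b,
    where illumination_ratio is the intensity ratio between the two PIV
    exposures.  The functional form is assumed here for illustration.
    """
    A = np.column_stack([illum_ratio, np.ones_like(illum_ratio)])
    (a, b), *_ = np.linalg.lstsq(A, bias_px, rcond=None)
    return a, b

def predict_bias(illum_ratio, a, b):
    """Predicted readout position bias (px) for a given illumination ratio."""
    return a * illum_ratio + b
```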

Relevance: 30.00%

Abstract:

A depth-based face recognition algorithm specially adapted to the high range resolution data acquired by the new Microsoft Kinect 2 sensor is presented. A novel descriptor, called the Depth Local Quantized Pattern descriptor, has been designed to make use of the extended range resolution of the new sensor. This descriptor is a substantial modification of the popular Local Binary Pattern algorithm. One of its main contributions is the introduction of a quantification step, which increases its capacity to distinguish different depth patterns. The proposed descriptor has been used to train and test a Support Vector Machine classifier, which has proven able to accurately recognize the faces of different people across a wide range of poses. In addition, a new depth-based face database acquired with the Kinect 2 sensor has been created and made public to evaluate the proposed face recognition system.
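
A minimal sketch of the quantification idea (the neighbourhood, bin width, and number of levels below are hypothetical; the published descriptor differs in its details): instead of LBP's binary sign test, each neighbour-centre depth difference is quantized into several levels, so small depth variations that LBP would collapse into one bit remain distinguishable.

```python
import numpy as np

def local_quantized_pattern(depth, y, x, step=5.0, levels=4):
    """LBP-style code with quantized depth differences at pixel (y, x).

    Each of the 8 neighbour-centre differences is mapped to one of
    `levels` bins of width `step` (depth units) rather than LBP's single
    sign bit.  Bin width and level count are illustrative assumptions.
    """
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for dy, dx in offsets:
        diff = float(depth[y + dy, x + dx]) - float(depth[y, x])
        q = int(np.clip(diff // step + levels // 2, 0, levels - 1))
        code = code * levels + q  # one base-`levels` digit per neighbour
    return code
```

Histograms of such codes over face regions would then feed the SVM classifier, mirroring the usual LBP recognition pipeline.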

Relevance: 30.00%

Abstract:

Image Based Visual Servoing (IBVS) is a robotic control scheme based on vision. This scheme uses only the visual information obtained from a camera to guide a robot from any pose to a desired one. However, IBVS requires the estimation of different parameters that cannot be obtained directly from the image. These parameters range from the intrinsic camera parameters (which can be obtained from a previous camera calibration) to the distance along the optical axis between the camera and the visual features, that is, the depth. This paper presents a comparative study of IBVS performance when the depth is estimated in three different ways using a low-cost RGB-D sensor such as the Kinect. The visual servoing system has been developed on ROS (Robot Operating System), a meta-operating system for robots. The experiments prove that computing the depth value for each visual feature improves the system performance.
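
The role of the depth estimate is easy to see in the standard point-feature interaction matrix used by IBVS; a minimal sketch (textbook form, with normalized image coordinates x, y and feature depth Z; the gain value is a hypothetical example):

```python
import numpy as np

def interaction_matrix(x: float, y: float, Z: float) -> np.ndarray:
    """2x6 interaction (image Jacobian) matrix of one point feature.

    x, y are normalized image coordinates and Z is the feature depth:
    every translational column depends on Z, which is why the quality
    of the depth estimate directly affects IBVS behaviour.
    """
    return np.array([
        [-1.0 / Z,  0.0,      x / Z,  x * y,      -(1 + x * x),  y],
        [ 0.0,     -1.0 / Z,  y / Z,  1 + y * y,  -x * y,       -x],
    ])

# Classical IBVS control law for stacked features: v = -lam * pinv(L) @ e,
# where e stacks the feature errors and lam is a (hypothetical) gain.
def ibvs_velocity(L: np.ndarray, error: np.ndarray, lam: float = 0.5):
    return -lam * np.linalg.pinv(L) @ error
```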

Relevance: 30.00%

Abstract:

This layer is a georeferenced raster image of the historic paper map entitled: A plan of Bombay harbour : principally illustrative of the entrance, constructed from measured bases, and a series of angles, taken in 1803 & 4 by James Horsburgh. It was published by James Horsburgh in 1806. Scale [ca. 1:37,820]. The image inside the map neatline is georeferenced to the surface of the earth and fit to the Kalianpur 1975 India Zone III projected coordinate system. All map collar and inset information is also available as part of the raster image, including any inset maps, profiles, statistical tables, directories, text, illustrations, index maps, legends, or other information associated with the principal map. This map shows features such as drainage, cities and other human settlements, fortification, shoreline features (rocks, shoals, anchorage points, ports, inlets, lighthouses, etc.), and more. Depths shown by soundings. Also includes profile views and navigational notes. This layer is part of a selection of digitally scanned and georeferenced historic maps from the Harvard Map Collection. These maps typically portray both natural and manmade features. The selection represents a range of originators, ground condition dates, scales, and map purposes.

Relevance: 30.00%

Abstract:

This layer is a georeferenced raster image of the historic paper map entitled: Insvla Zeilan, olim Taprobana, nunc incolis Tenarisim, [by] Joannes Janssonius. It was published by J. Jansson, ca. 1650. Scale [ca. 1:1,000,000]. Covers Sri Lanka. Map in Latin. The image inside the map neatline is georeferenced to the surface of the earth and fit to the Universal Transverse Mercator (UTM Zone 44N, meters, WGS 1984) projected coordinate system. All map collar and inset information is also available as part of the raster image, including any inset maps, profiles, statistical tables, directories, text, illustrations, index maps, legends, or other information associated with the principal map. This map shows features such as drainage, cities and other human settlements, roads, shoreline features, and more. Relief shown pictorially; depths shown by soundings. This layer is part of a selection of digitally scanned and georeferenced historic maps from the Harvard Map Collection. These maps typically portray both natural and manmade features. The selection represents a range of originators, ground condition dates, scales, and map purposes.

Relevance: 30.00%

Abstract:

This layer is a georeferenced raster image of the United States Geological Survey sheet map set entitled: Los Angeles and vicinity, East [and West], California. Edition 1953. It was published in 1956. Compiled from 1:24,000 scale maps of the Burbank 1953, Van Nuys 1953, Canoga Park 1952, Topanga 1952, Beverly Hills 1950, Hollywood 1953, Inglewood 1952, and Venice 1950 7.5 minute quadrangles. Hydrography compiled from USC&GS Chart 5144. Scale 1:24,000. This layer is image 2 of 2 of the two-sheet source map set, representing the western portion of the map set. The image inside the map neatline is georeferenced to the surface of the earth and fit to the California State Plane Zone V Coordinate System NAD27 (in Feet) (Fipszone 0405). All map collar and inset information is also available as part of the raster image, including any inset maps, profiles, statistical tables, directories, text, illustrations, index maps, legends, or other information associated with the principal map. USGS maps are typical topographic maps portraying both natural and manmade features. They show and name works of nature, such as mountains, valleys, lakes, rivers, vegetation, etc. They also identify the principal works of humans, such as roads, railroads, boundaries, transmission lines, major buildings, etc. Relief is shown with standard contour intervals of 5 and 25 feet; depth curves in feet. Please pay close attention to map collar information on projections, spheroid, sources, dates, and keys to grid numbering and other numbers which appear inside the neatline. This layer is part of a selection of digitally scanned and georeferenced historic maps from the Harvard Map Collection as part of the Imaging the Urban Environment project. Maps selected for this project represent major urban areas and cities of the world at various time periods. These maps typically portray both natural and manmade features at a large scale. The selection represents a range of regions, originators, ground condition dates, scales, and purposes.