995 results for National Image


Relevance: 30.00%

Abstract:

The Olympic Coast National Marine Sanctuary (OCNMS) continues to invest significant resources in seafloor mapping along Washington's outer coast (Intelmann and Cochrane 2006; Intelmann et al. 2006; Intelmann 2006). Results from these annual mapping efforts offer a snapshot of current ground conditions, help guide research and management activities, and provide a baseline for assessing the impacts of various threats to important habitat. During August 2004 and May and July 2005, we used side scan sonar to image several regions of the sea floor in the northern OCNMS, and the data were mosaicked at 1-meter pixel resolution. Video from a towed camera sled, bathymetry data, sedimentary samples, and side scan sonar mapping were integrated to describe geological and biological aspects of habitat. Polygon features were created and attributed with a hierarchical deep-water marine benthic classification scheme (Greene et al. 1999). For three small areas mapped with both side scan sonar and multibeam echosounder, we compared output from the classified images and found little difference between the two methods. Given these considerations, backscatter derived from multibeam bathymetry is currently a cost-efficient and safe method for seabed imaging in the shallow (<30 meters) rocky waters of OCNMS: the image quality is sufficient for classification purposes, the associated depths provide further descriptive value, and risks to gear are minimized. In shallow waters (<30 meters) that do not have a high incidence of dangerous rock pinnacles, a towed multi-beam side scan sonar could be a better option for obtaining seafloor imagery because of its high acquisition speed and high image quality; however, the high probability of losing or damaging such a costly system when towed through the extremely rugose nearshore zones within OCNMS makes it a financially risky proposition.
The development of newer technologies such as interferometric multibeam systems and bathymetric side scan systems also holds great potential for mapping these nearshore rocky areas: they allow high-speed data acquisition, produce side scan imagery precisely geo-referenced to the bathymetry, and do not suffer the angular depth dependency of multibeam echosounders, allowing larger range scales to be used in shallower water. Further investigation of these systems is needed to assess their efficiency and utility in these environments compared with traditional side scan sonar and multibeam bathymetry. (PDF contains 43 pages.)

Relevance: 30.00%

Abstract:

In September 2002, side scan sonar was used to image a portion of the sea floor in the northern OCNMS and was mosaicked at 1-meter pixel resolution using 100 kHz data collected at 300-meter range scale. Video from a remotely operated vehicle (ROV), bathymetry data, sedimentary samples, and sonar mapping have been integrated to describe geological and biological aspects of habitat, and polygon features have been created and attributed with a hierarchical deep-water marine benthic classification scheme (Greene et al. 1999). The data can be used with geographic information system (GIS) software for display, query, and analysis. Textural analysis of the sonar images provided a relatively automated method for delineating substrate into three broad classes representing soft, mixed sediment, and hard bottom. Microhabitat and the presence of certain biologic attributes were also populated into the polygon features, but strictly limited to areas where video groundtruthing occurred. Further groundtruthing work in specific areas would improve confidence in the classified habitat map. (PDF contains 22 pages.)
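The texture-based delineation this abstract describes can be sketched as local-statistics thresholding of the backscatter mosaic. The window statistics and the two thresholds below are illustrative assumptions, not values from the report:

```python
def local_variance(tile):
    """Variance of pixel intensities in a mosaic tile (a simple texture proxy)."""
    n = len(tile)
    mean = sum(tile) / n
    return sum((p - mean) ** 2 for p in tile) / n

def classify_tile(tile, soft_max=50.0, mixed_max=400.0):
    """Map texture variance to one of three broad substrate classes.

    Low variance -> smooth soft bottom; intermediate -> mixed sediment;
    high variance -> rough hard bottom. Thresholds are hypothetical.
    """
    v = local_variance(tile)
    if v <= soft_max:
        return "soft"
    if v <= mixed_max:
        return "mixed"
    return "hard"
```

In practice each classified tile would become (part of) a polygon feature carrying the Greene et al. (1999) attribution.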

Relevance: 30.00%

Abstract:

In the sinusoidal phase-modulating interferometry technique, a high-speed CCD is necessary to detect the interference signals. The reasons for an ordinary CCD's low frame rate were analyzed, and a novel high-speed image sensing technique with adjustable frame rate, based on an ordinary CCD, was proposed; the principle of the image sensor was analyzed. With the maximum frequency and channel bandwidth held constant, a custom high-speed sensor was designed using an ordinary CCD under the control of a special driving circuit. The frame rate of the ordinary CCD was enhanced by controlling the number of pixels in each frame, so an ordinary CCD can be used as a high-frame-rate image sensor with a small number of pixels. Multi-output high-speed image sensors suffer from low accuracy and high cost, while the small-pixel-count high-speed image sensor obtained with this technique overcomes these faults. The light intensity varying with time was measured using the image sensor. The frame rate was up to 1600 frames per second (f/s), and both the frame size and the frame rate were adjustable. The correlation coefficient between the measurement results and the standard values was higher than 0.98026, and the relative error was lower than 0.53%. The experimental results show that this sensor is fit for measurements with the sinusoidal phase-modulating interferometry technique. (c) 2007 Elsevier GmbH. All rights reserved.
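The frame-rate gain from reading fewer pixels per frame follows from simple readout arithmetic: with a fixed pixel clock, frames per second is the clock rate divided by pixels read per frame. The clock rate and frame sizes below are illustrative assumptions, not the paper's device parameters:

```python
def frame_rate(pixel_clock_hz, pixels_per_frame, overhead_pixels=0):
    """Achievable frame rate when readout is limited by a fixed pixel clock.

    overhead_pixels models per-frame blanking/transfer time expressed in
    pixel-clock ticks (assumed zero here for simplicity).
    """
    return pixel_clock_hz / (pixels_per_frame + overhead_pixels)

# A hypothetical 10 MHz pixel clock: a full 500 x 500 frame reads out at
# 40 f/s, while a 100 x 62 sub-frame exceeds the 1600 f/s figure reported.
full_fps = frame_rate(10_000_000, 500 * 500)
sub_fps = frame_rate(10_000_000, 100 * 62)
```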

Relevance: 30.00%

Abstract:

A programmable vision chip with variable resolution and row-pixel-mixed parallel image processors is presented. The chip consists of a CMOS sensor array with row-parallel 6-bit algorithmic ADCs, row-parallel gray-scale image processors, a pixel-parallel SIMD processing element (PE) array, and an instruction controller. The resolution of the image in the chip is variable: high resolution for a focused area and low resolution for the general view. It implements gray-scale and binary mathematical morphology algorithms in series to carry out low-level and mid-level image processing and sends out features of the image for various applications. It can perform image processing at over 1,000 frames/s (fps). A prototype chip with 64 x 64 pixel resolution and 6-bit gray scale was fabricated in a 0.18 µm standard CMOS process. The chip area is 1.5 mm x 3.5 mm; each pixel measures 9.5 µm x 9.5 µm and each processing element 23 µm x 29 µm. The experimental results demonstrate that the chip can perform low-level and mid-level image processing and can be applied to real-time vision applications such as high-speed target tracking.
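The gray-scale mathematical morphology the chip implements reduces to windowed min/max operations, which the PE array evaluates in parallel. A minimal software sketch of the same operators (3x3 window for k=1; the image layout is an assumption, not the chip's architecture):

```python
def erode(img, k=1):
    """Gray-scale erosion: minimum over a (2k+1) x (2k+1) neighborhood."""
    h, w = len(img), len(img[0])
    return [[min(img[yy][xx]
                 for yy in range(max(0, y - k), min(h, y + k + 1))
                 for xx in range(max(0, x - k), min(w, x + k + 1)))
             for x in range(w)] for y in range(h)]

def dilate(img, k=1):
    """Gray-scale dilation: maximum over the same neighborhood."""
    h, w = len(img), len(img[0])
    return [[max(img[yy][xx]
                 for yy in range(max(0, y - k), min(h, y + k + 1))
                 for xx in range(max(0, x - k), min(w, x + k + 1)))
             for x in range(w)] for y in range(h)]

def opening(img, k=1):
    """Erosion followed by dilation: removes bright specks smaller than the window."""
    return dilate(erode(img, k), k)
```

Chaining such operators in series is how the chip moves from low-level filtering to mid-level feature extraction.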

Relevance: 30.00%

Abstract:

A full-ring PET insert device should be able to enhance the image resolution of existing small-animal PET scanners. Methods: The device consists of 18 high-resolution PET detectors in a cylindrical enclosure. Each detector contains a cerium-doped lutetium oxyorthosilicate array (12 x 12 crystals, 0.72 x 1.51 x 3.75 mm each) coupled to a position-sensitive photomultiplier tube via an optical fiber bundle made of 8 x 16 square multiclad fibers. Signals from the insert detectors are connected to the scanner through the electronics of the disabled first ring of detectors, which permits coincidence detection between the 2 systems. Energy resolution of a detector was measured using a Ge-68 point source, and a calibrated Ge-68 point source stepped across the axial field of view (FOV) provided the sensitivity profile of the system. A Na-22 point source imaged at different offsets from the center characterized the in-plane resolution of the insert system. Imaging was then performed with a Derenzo phantom filled with 19.5 MBq of F-18-fluoride and imaged for 2 h; a 24.3-g mouse injected with 129.5 MBq of F-18-fluoride and imaged in 5 bed positions at 3.5 h after injection; and a 22.8-g mouse injected with 14.3 MBq of F-18-FDG and imaged for 2 h with electrocardiogram gating. Results: The energy resolution of a typical detector module at 511 keV is 19.0% +/- 3.1%. The peak sensitivity of the system is approximately 2.67%. The image resolution of the system ranges from 1.0- to 1.8-mm full width at half maximum near the center of the FOV, depending on the type of coincidence events used for image reconstruction. Derenzo phantom and mouse bone images showed significant improvement in transaxial image resolution using the insert device. Mouse heart images demonstrated the gated imaging capability of the device. Conclusion: We have built a prototype full-ring insert device for a small-animal PET scanner to provide higher-resolution PET images within a reduced imaging FOV. Development of additional correction techniques is needed to achieve quantitative imaging with such an insert.
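The percent energy resolution quoted above is conventionally the full width at half maximum (FWHM) of the 511 keV photopeak divided by the peak energy. A sketch of that arithmetic, assuming a Gaussian photopeak (the sigma value below is chosen to reproduce the ~19% figure, not a measured quantity):

```python
import math

def energy_resolution_pct(sigma_kev, peak_kev=511.0):
    """Percent energy resolution: Gaussian FWHM over photopeak energy.

    FWHM = 2 * sqrt(2 * ln 2) * sigma, approximately 2.355 * sigma.
    """
    fwhm = 2.0 * math.sqrt(2.0 * math.log(2.0)) * sigma_kev
    return 100.0 * fwhm / peak_kev

# A photopeak sigma of roughly 41 keV corresponds to the ~19% resolution
# reported for a typical detector module.
res = energy_resolution_pct(41.2)
```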

Relevance: 30.00%

Abstract:

Poster is based on the following paper: C. Kwan and M. Betke. Camera Canvas: Image editing software for people with disabilities. In Proceedings of the 14th International Conference on Human Computer Interaction (HCI International 2011), Orlando, Florida, July 2011.

Relevance: 30.00%

Abstract:

A method is proposed that can generate a ranked list of plausible three-dimensional hand configurations that best match an input image. Hand pose estimation is formulated as an image database indexing problem, where the closest matches for an input hand image are retrieved from a large database of synthetic hand images. In contrast to previous approaches, the system can function in the presence of clutter, thanks to two novel clutter-tolerant indexing methods. First, a computationally efficient approximation of the image-to-model chamfer distance is obtained by embedding binary edge images into a high-dimensional Euclidean space. Second, a general-purpose, probabilistic line matching method identifies those line segment correspondences between model and input images that are the least likely to have occurred by chance. The performance of this clutter-tolerant approach is demonstrated in quantitative experiments with hundreds of real hand images.
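The chamfer distance the paper approximates can be stated directly on edge-point sets: the mean distance from each edge point in one image to its nearest edge point in the other, symmetrized. A brute-force reference sketch (the embedding-based approximation in the paper replaces the inner nearest-point search):

```python
import math

def directed_chamfer(points_a, points_b):
    """Mean distance from each edge point in A to its nearest point in B."""
    return sum(min(math.dist(p, q) for q in points_b)
               for p in points_a) / len(points_a)

def chamfer(a, b):
    """Symmetric chamfer distance between two edge-point sets."""
    return directed_chamfer(a, b) + directed_chamfer(b, a)
```

This O(|A| x |B|) form is what makes a fast approximation worthwhile at database scale.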

Relevance: 30.00%

Abstract:

ImageRover is a search-by-image-content navigation tool for the World Wide Web. To gather images expediently, the image collection subsystem utilizes a distributed fleet of WWW robots running on different computers. The image robots gather information about the images they find, computing the appropriate image decompositions and indices, and store this extracted information in vector form for searches based on image content. At search time, users can iteratively guide the search through the selection of relevant examples. Search performance is made efficient through the use of an approximate, optimized k-d tree algorithm. The system employs a novel relevance feedback algorithm that selects the distance metrics appropriate for a particular query.
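The k-d tree underlying the search can be sketched in a few lines. This is a plain exact nearest-neighbour version on 2-d vectors; ImageRover's variant is approximate and optimized for its high-dimensional index vectors, which this sketch does not reproduce:

```python
import math

def build_kdtree(points, depth=0):
    """Recursively build a 2-d tree; the splitting axis alternates with depth."""
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"point": points[mid], "axis": axis,
            "left": build_kdtree(points[:mid], depth + 1),
            "right": build_kdtree(points[mid + 1:], depth + 1)}

def nearest(node, query, best=None):
    """Exact nearest-neighbour search with branch-and-bound pruning."""
    if node is None:
        return best
    if best is None or math.dist(node["point"], query) < math.dist(best, query):
        best = node["point"]
    axis = node["axis"]
    diff = query[axis] - node["point"][axis]
    near, far = ((node["left"], node["right"]) if diff < 0
                 else (node["right"], node["left"]))
    best = nearest(near, query, best)
    # Descend the far side only if it could hold a closer point.
    if abs(diff) < math.dist(best, query):
        best = nearest(far, query, best)
    return best
```

Bounding how far the search may backtrack turns this exact procedure into the approximate variant that makes web-scale retrieval fast.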

Relevance: 30.00%

Abstract:

ImageRover is a search-by-image-content navigation tool for the World Wide Web. The staggering size of the WWW dictates certain strategies and algorithms for image collection, digestion, indexing, and user interface. This paper describes two key components of the ImageRover strategy: image digestion and relevance feedback. Image digestion occurs during image collection; robots digest the images they find, computing image decompositions and indices, and storing this extracted information in vector form for searches based on image content. Relevance feedback occurs during index search; users can iteratively guide the search through the selection of relevant examples. ImageRover employs a novel relevance feedback algorithm to determine the weighted combination of image similarity metrics appropriate for a particular query. ImageRover is available and running on the web site.
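One common way to derive such metric weights from relevant examples is to upweight the features on which the user's selections agree. This is a generic sketch of that idea, not ImageRover's published weighting rule:

```python
def metric_weights(relevant, features):
    """Weight each feature inversely to the spread of the relevant examples.

    relevant: list of example items the user marked relevant.
    features: list of callables mapping an item to a scalar feature value.
    Features on which the relevant examples agree closely get high weight.
    """
    raw = []
    for f in features:
        vals = [f(x) for x in relevant]
        mean = sum(vals) / len(vals)
        spread = sum(abs(v - mean) for v in vals) / len(vals)
        raw.append(1.0 / (spread + 1e-9))  # small epsilon avoids division by zero
    total = sum(raw)
    return [w / total for w in raw]
```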

Relevance: 30.00%

Abstract:

We propose a novel image registration framework which uses classifiers trained from examples of aligned images to achieve registration. Our approach is designed to register images of medical data where the physical condition of the patient has changed significantly and image intensities are drastically different. We use two boosted classifiers for each degree of freedom of image transformation. These two classifiers can both identify when two images are correctly aligned and provide an efficient means of moving towards correct registration for misaligned images. The classifiers capture local alignment information using multi-pixel comparisons and can therefore achieve correct alignments where approaches like correlation and mutual information, which rely on only pixel-to-pixel comparisons, fail. We test our approach using images from CT scans acquired in a study of acute respiratory distress syndrome. We show a significant increase in registration accuracy in comparison to an approach using mutual information.
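The search structure can be illustrated on one degree of freedom: score candidate transformations and keep the best. The pixel-agreement scorer below is a deliberately simple stand-in for the paper's boosted classifiers, which both score alignment and indicate a direction to move:

```python
def overlap_score(fixed, moving, shift):
    """Fraction of overlapping positions whose intensities agree (1-D)."""
    hits = total = 0
    for i in range(len(fixed)):
        j = i + shift
        if 0 <= j < len(moving):
            total += 1
            hits += fixed[i] == moving[j]
    return hits / total if total else 0.0

def best_shift(fixed, moving, score=overlap_score, max_shift=5):
    """Exhaustively score integer shifts and return the highest-rated one."""
    return max(range(-max_shift, max_shift + 1),
               key=lambda s: score(fixed, moving, s))
```

A learned scorer replaces `overlap_score` unchanged; that plug-in point is what lets the framework handle intensity changes that defeat pixel-to-pixel measures.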

Relevance: 30.00%

Abstract:

Some WWW image engines allow the user to form a query in terms of text keywords. To build the image index, keywords are extracted heuristically from HTML documents containing each image, and/or from the image URL and file headers. Unfortunately, text-based image engines have merely retro-fitted standard SQL database query methods, and it is difficult to include image cues within such a framework. On the other hand, visual statistics (e.g., color histograms) are often insufficient for helping users find desired images in a vast WWW index. By truly unifying textual and visual statistics, one would expect to get better results than with either used separately. In this paper, we propose an approach that allows the combination of visual statistics with textual statistics in the vector space representation commonly used in query-by-image-content systems. Text statistics are captured in vector form using latent semantic indexing (LSI). The LSI index for an HTML document is then associated with each of the images contained therein. Visual statistics (e.g., color, orientedness) are also computed for each image. The LSI and visual statistic vectors are then combined into a single index vector that can be used for content-based search of the resulting image database. By using an integrated approach, we are able to take advantage of possible statistical couplings between the topic of the document (latent semantic content) and the contents of images (visual statistics). This allows improved performance in conducting content-based search. This approach has been implemented in a WWW image search engine prototype.
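The combination into a single index vector can be sketched as weighted concatenation of the normalised LSI and visual vectors, ranked by cosine similarity. The blending weight `alpha` is an assumption for illustration, not a parameter from the paper:

```python
import math

def unit(v):
    """L2-normalise a vector (a zero vector is returned unchanged)."""
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def combined_index(lsi_vec, visual_vec, alpha=0.5):
    """Concatenate normalised text and visual vectors into one index vector.

    alpha blends the two modalities: 1.0 is text-only, 0.0 visual-only.
    """
    return ([alpha * x for x in unit(lsi_vec)] +
            [(1.0 - alpha) * x for x in unit(visual_vec)])

def cosine(a, b):
    """Cosine similarity used to rank index vectors at query time."""
    num = sum(x * y for x, y in zip(a, b))
    return num / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))
```

Because both halves live in one vector space, a single nearest-neighbour search exploits text-visual couplings without a separate fusion step.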

Relevance: 30.00%

Abstract:

Classifying novel terrain or objects from sparse, complex data may require the resolution of conflicting information from sensors working at different times, locations, and scales, and from sources with different goals and situations. Information fusion methods can help resolve inconsistencies, as when evidence variously suggests that an object's class is car, truck, or airplane. The methods described here consider a complementary problem, supposing that information from sensors and experts is reliable though inconsistent, as when evidence suggests that an object's class is car, vehicle, and man-made. Underlying relationships among objects are assumed to be unknown to the automated system or the human user. The ARTMAP information fusion system uses distributed code representations that exploit the neural network's capacity for one-to-many learning in order to produce self-organizing expert systems that discover hierarchical knowledge structures. The system infers multi-level relationships among groups of output classes, without any supervised labeling of these relationships.

Relevance: 30.00%

Abstract:

Air Force Office of Scientific Research (F49620-01-1-0423); National Geospatial-Intelligence Agency (NMA 201-01-1-2016); National Science Foundation (SBE-035437, DEG-0221680); Office of Naval Research (N00014-01-1-0624)

Relevance: 30.00%

Abstract:

The What-and-Where filter forms part of a neural network architecture for spatial mapping, object recognition, and image understanding. The Where filter responds to an image figure that has been separated from its background. It generates a spatial map whose cell activations simultaneously represent the position, orientation, and size of all the figures in a scene (where they are). This spatial map may be used to direct spatially localized attention to these image features. A multiscale array of oriented detectors, followed by competitive and interpolative interactions between position, orientation, and size scales, is used to define the Where filter. This analysis discloses several issues that need to be dealt with by a spatial mapping system based upon oriented filters, such as the role of cliff filters with and without normalization, the double-peak problem of maximum orientation across size scale, and the different self-similar interpolation properties across orientation than across size scale. Several computationally efficient Where filters are proposed. The Where filter may be used for parallel transformation of multiple image figures into invariant representations that are insensitive to the figures' original position, orientation, and size. These invariant figural representations form part of a system devoted to attentive object learning and recognition (what it is). Unlike some alternative models in which serial search for a target occurs, a What-and-Where representation can be used to rapidly search in parallel for a desired target in a scene. Such a representation can also be used to learn multidimensional representations of objects and their spatial relationships for purposes of image understanding.
The What-and-Where filter is inspired by neurobiological data showing that a Where processing stream in the cerebral cortex is used for attentive spatial localization and orientation, whereas a What processing stream is used for attentive object learning and recognition.