68 results for segmentation and reverberation

in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast


Relevance:

90.00%

Abstract:

Grey Level Co-occurrence Matrix (GLCM), one of the best-known tools for texture analysis, estimates image properties related to second-order statistics. These image properties, commonly known as Haralick texture features, can be used for image classification, image segmentation, and remote sensing applications. However, their computation is highly intensive, especially for very large images such as medical ones. Therefore, methods to accelerate their computation are highly desired. This paper proposes the use of programmable hardware to accelerate the calculation of GLCM and Haralick texture features. Further, as an example of the speedup offered by programmable logic, a multispectral computer vision system for automatic diagnosis of prostatic cancer has been implemented. The performance is then compared against a microprocessor-based solution.
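As a rough illustration of the computation the hardware accelerates, the sketch below builds a GLCM and evaluates one Haralick feature (contrast) in plain Python/NumPy. The function names, the 8-level quantisation and the (dy, dx) offset are illustrative choices, not the paper's FPGA implementation.

```python
import numpy as np

def glcm(image, levels=8, dx=1, dy=0):
    """Joint probability of grey-level pairs at offset (dy, dx)."""
    q = (image * levels // (image.max() + 1)).astype(int)  # quantise to `levels` bins
    m = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):            # count co-occurring level pairs
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m / m.sum()                 # normalise counts to probabilities

def contrast(p):
    """Haralick contrast: sum of (i - j)^2 * p(i, j)."""
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())

img = np.random.randint(0, 256, (64, 64))
print(contrast(glcm(img)))
```

The doubly nested counting loop is what makes the computation expensive on large images, and also what makes it a natural candidate for parallel hardware: each pixel pair can be counted independently.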

Relevance:

90.00%

Abstract:

The importance and use of text extraction from camera-based coloured scene images is rapidly increasing with time. Text within a camera-grabbed image can contain a huge amount of metadata about that scene. Such metadata can be useful for identification, indexing and retrieval purposes. While the segmentation and recognition of text from document images is quite successful, detection of coloured scene text is a new challenge for all camera-based images. Common problems for text extraction from camera-based images are the lack of prior knowledge of any kind of text features such as colour, font, size and orientation, as well as the location of the probable text regions. In this paper, we document the development of a fully automatic and extremely robust text segmentation technique that can be used for any type of camera-grabbed frame, be it a single image or video. A new algorithm is proposed which can overcome the current problems of text segmentation. The algorithm exploits text appearance in terms of colour and spatial distribution. When the new text extraction technique was tested on a variety of camera-based images it was found to outperform existing techniques. The proposed technique also overcomes any problems that can arise due to an unconstrained complex background. The novelty of the work arises from the fact that this is the first time that colour and spatial information have been used simultaneously for the purpose of text extraction.
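One generic way to picture "colour and spatial information used simultaneously" is to cluster pixels on joint colour-plus-position features. The sketch below does this with a plain k-means over (R, G, B, x, y) vectors on an invented toy scene; it illustrates the idea only and is not the paper's algorithm.

```python
import numpy as np

def kmeans(features, init, iters=10):
    """Plain k-means; `init` gives the indices of the starting centres."""
    centres = features[np.array(init)].astype(float)
    for _ in range(iters):
        d = ((features[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)                      # nearest centre per pixel
        for c in range(len(init)):
            if (labels == c).any():
                centres[c] = features[labels == c].mean(0)
    return labels

# Toy "scene": a bright block (standing in for text) on a dark background.
h, w = 8, 8
img = np.zeros((h, w, 3))
img[2:6, 2:6] = 255.0
ys, xs = np.mgrid[0:h, 0:w]
# Each pixel becomes a 5-vector: colour (R, G, B) plus position (x, y).
features = np.column_stack([img.reshape(-1, 3), xs.ravel(), ys.ravel()])
labels = kmeans(features, init=[0, 27]).reshape(h, w)  # one seed per region
```

Because position enters the feature vector alongside colour, pixels that are similar in colour but far apart can still fall into different clusters, which is the intuition behind combining the two cues.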

Relevance:

90.00%

Abstract:

We consider the problem of segmenting text documents that have a two-part structure such as a problem part and a solution part. Documents of this genre include incident reports that typically involve description of events relating to a problem followed by those pertaining to the solution that was tried. Segmenting such documents into the two component parts would render them usable in knowledge reuse frameworks such as Case-Based Reasoning. This segmentation problem presents a hard case for traditional text segmentation due to the lexical inter-relatedness of the segments. We develop a two-part segmentation technique that can harness a corpus of similar documents to model the behavior of the two segments and their inter-relatedness using language models and translation models respectively. In particular, we use separate language models for the problem and solution segment types, whereas the inter-relatedness between segment types is modeled using an IBM Model 1 translation model. We model documents as being generated starting from the problem part, which comprises words sampled from the problem language model, followed by the solution part, whose words are sampled either from the solution language model or from a translation model conditioned on the words already chosen in the problem part. We show, through an extensive set of experiments on real-world data, that our approach outperforms state-of-the-art text segmentation algorithms in the accuracy of segmentation, and that such improved accuracy translates well to improved usability in Case-Based Reasoning systems. We also analyze the robustness of our technique to varying amounts and types of noise and empirically illustrate that our technique is quite noise tolerant and degrades gracefully with increasing amounts of noise.
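The generative story described above can be sketched with toy vocabularies and probabilities. All the words and probability values below are invented for illustration; in the paper these models are learned from a corpus of similar documents.

```python
import random

# Toy unigram language models and an IBM Model 1 style translation table
# (all probabilities invented for illustration).
problem_lm  = {"printer": 0.5, "jams": 0.3, "error": 0.2}
solution_lm = {"replace": 0.4, "tray": 0.3, "restart": 0.3}
translation = {"printer": {"tray": 0.7, "restart": 0.3},   # P(sol word | prob word)
               "jams":    {"tray": 0.9, "replace": 0.1},
               "error":   {"restart": 1.0}}

def sample(dist):
    """Draw one item from a {item: probability} dict."""
    return random.choices(list(dist), weights=list(dist.values()))[0]

def generate(n_problem=4, n_solution=4, lam=0.5):
    """Problem words come from the problem LM; each solution word comes from
    the solution LM with probability lam, else by translating a problem word."""
    problem = [sample(problem_lm) for _ in range(n_problem)]
    solution = [sample(solution_lm) if random.random() < lam
                else sample(translation[random.choice(problem)])
                for _ in range(n_solution)]
    return problem, solution

random.seed(0)
print(generate())
```

Segmentation then runs this story in reverse: given an unsegmented document, the split point is chosen so that the two parts are best explained by their respective models.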

Relevance:

80.00%

Abstract:

One habitat management requirement forced by 21st-century relative sea-level rise (RSLR) will be the need to re-comprehend the dimensions of long-term transgressive behaviour of coastal systems being forced by such RSLR. Fresh approaches to the conceptual modelling and subsequent implementation of new coastal and peri-marine habitats will be required. There is concern that existing approaches to forecasting coastal systems development (and by implication their associated scarce coastal habitats) over the next century depend on a certain premise of orderly spatial succession of habitats. This assumption is shown to be questionable given the possible future rates of RSLR, the magnitude of shoreline retreat and the lack of coastal sediment to maintain the protective morphologies of low-energy coastal habitats. Of these issues, sediment deficiency is regarded as one of the major problems for future habitat development. Examples of contemporary behaviour of UK coasts show evidence of coastal sediment starvation resulting from relatively stable RSLR, anthropogenic sealing of coastal sources, and intercepted coastal sediment pathways, which together force segmentation of coastal systems. From these examples key principles are deduced which may prejudice the existence of future habitats: accelerated future sediment demand due to RSLR may not be met by supply and, if short- to medium-term hold-the-line policies predominate, long-term strategies for managed realignment and habitat enhancement may prove impossible goals. Methods of contemporary sediment husbandry may help sustain some habitats in place, but otherwise, instead of integrated coastal organization, managers may need to consider coastal breakdown, segmentation and habitat reduction as the basis of 21st-century coastal evolution and planning.

Relevance:

80.00%

Abstract:

Oscillations in network bright points (NBPs) are studied at a variety of chromospheric heights. In particular, the three-dimensional variation of NBP oscillations is studied using image segmentation and cross-correlation analysis between images taken in the light of Ca II K3, Hα core, Mg I b2, and Mg I b1 − 0.4 Å. Wavelet analysis is used to isolate wave packets in time and to search for height-dependent time delays that result from upward- or downward-directed traveling waves. In each NBP studied, we find evidence for kink-mode waves (1.3, 1.9 mHz) traveling up through the chromosphere and coupling with sausage-mode waves (2.6, 3.8 mHz). This provides a means for depositing energy in the upper chromosphere. We also find evidence for other upward- and downward-propagating waves in the 1.3-4.6 mHz range. Some oscillations do not correspond to traveling waves, and we attribute these to waves generated in neighboring regions.
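The height-dependent time delays mentioned above come from cross-correlating time series observed at different heights. A minimal sketch of that lag estimation, on synthetic data (a noise series and a copy delayed by 30 frames, standing in for intensity series at two chromospheric heights; not the paper's data or its wavelet machinery):

```python
import numpy as np

rng = np.random.default_rng(1)
lower = rng.standard_normal(600)    # intensity series at the lower height
upper = np.roll(lower, 30)          # same signal arriving 30 frames later

def lag_of_peak_xcorr(a, b):
    """Lag (in frames) at which the cross-correlation of a and b peaks;
    positive means b lags a (i.e. an upward-traveling disturbance)."""
    xc = np.correlate(b - b.mean(), a - a.mean(), mode="full")
    return int(np.argmax(xc)) - (len(a) - 1)

print(lag_of_peak_xcorr(lower, upper))  # → 30
```

A positive lag between a lower and an upper chromospheric line is the signature of an upward-directed traveling wave; a negative lag would indicate downward propagation.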

Relevance:

80.00%

Abstract:

This paper addresses the pose recovery problem of a particular articulated object: the human body. In this model-based approach, the 2D shape is associated with the corresponding stick figure, allowing the joint segmentation and pose recovery of the subject observed in the scene. The main disadvantage of 2D models is their dependence on the viewpoint. To cope with this limitation, local spatio-temporal 2D models corresponding to many views of the same sequences are trained, concatenated and sorted in a global framework. Temporal and spatial constraints are then considered to build the probabilistic transition matrix (PTM) that gives a frame-to-frame estimation of the most probable local models to use during the fitting procedure, thus limiting the feature space. This approach takes advantage of 3D information while avoiding the use of a complex 3D human model. The experiments carried out on both indoor and outdoor sequences have demonstrated the ability of this approach to adequately segment pedestrians and estimate their poses independently of the direction of motion during the sequence.
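A toy sketch of how a PTM can limit the search over local models: given the model fitted at the previous frame, only the few most probable successors are tried at the next frame. The matrix values below are invented for illustration; in the paper they are built from temporal and spatial constraints.

```python
import numpy as np

# ptm[i, j] = P(local model j at frame t | local model i at frame t-1);
# values are illustrative only.
ptm = np.array([[0.7, 0.2, 0.1],
                [0.1, 0.8, 0.1],
                [0.2, 0.3, 0.5]])

def candidate_models(prev_model, k=2):
    """The k most probable local models to try when fitting the next
    frame, which is what limits the feature space during fitting."""
    return [int(i) for i in np.argsort(ptm[prev_model])[::-1][:k]]

print(candidate_models(0))  # → [0, 1]
```

Restricting the fit to these candidates is cheaper than evaluating every trained local model at every frame, at the cost of assuming the pose evolves smoothly between frames.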

Relevance:

80.00%

Abstract:

Recent renewed interest in computational writer identification has resulted in an increased number of publications. In relation to historical musicology its application has so far been limited. One of the obstacles seems to be that the clarity of the scanned images available for computational analysis is often not sufficient. In this paper, the use of the Hinge feature is proposed to avoid segmentation and staff-line removal for effective feature extraction from low-quality scans. The use of an autoencoder in Hinge feature space is suggested as an alternative to staff-line removal by image processing, and their performance is compared. The result of the experiment shows an accuracy of 87% for the dataset containing 84 writers' samples, and the superiority of our segmentation- and staff-line-removal-free approach. Practical analysis of Bach's autograph manuscript of the Well-Tempered Clavier II (Additional MS. 35021 in the British Library, London) is also presented, and the wide applicability of our approach is demonstrated.

Relevance:

40.00%

Abstract:

We have examined the ability of observers to parse bimodal local-motion distributions into two global motion surfaces, either overlapping (yielding transparent motion) or spatially segregated (yielding a motion boundary). The stimuli were random dot kinematograms in which the direction of motion of each dot was drawn from one of two rectangular probability distributions. A wide range of direction distribution widths and separations was tested. The ability to discriminate the direction of motion of one of the two motion surfaces from the direction of a comparison stimulus was used as an objective test of the perception of two discrete surfaces. Performance for both transparent and spatially segregated motion was remarkably good, being only slightly inferior to that achieved with a single global motion surface. Performance was consistently better for segregated motion than for transparency. Whereas transparent motion was only perceived with direction distributions which were separated by a significant gap, segregated motion could be seen with abutting or even partially overlapping direction distributions. For transparency, the critical gap increased with the range of directions in the distribution. This result does not support models in which transparency depends on detection of a minimum size of gap defining a bimodal direction distribution. We suggest, instead, that the operations which detect bimodality are scaled (in the direction domain) with the overall range of distributions. This yields a flexible, adaptive system that determines whether a gap in the direction distribution serves as a segmentation cue or is smoothed as part of a unitary computation of global motion.
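The stimulus construction described above (each dot's direction drawn from one of two rectangular distributions of a given width and separation) can be sketched as follows; the parameter names and default values are illustrative, not those of the experiment.

```python
import random

def bimodal_directions(n_dots, centre=90.0, width=20.0, gap=40.0):
    """Direction (deg) for each dot, drawn from one of two rectangular
    (uniform) bands of the given width whose inner edges are `gap` apart."""
    dirs = []
    for _ in range(n_dots):
        if random.random() < 0.5:   # lower band
            dirs.append(random.uniform(centre - gap / 2 - width,
                                       centre - gap / 2))
        else:                       # upper band
            dirs.append(random.uniform(centre + gap / 2,
                                       centre + gap / 2 + width))
    return dirs

random.seed(0)
dirs = bimodal_directions(200)
```

Varying `width` and `gap` independently is what lets the experiment test whether the critical gap for perceiving two surfaces scales with the range of directions in each distribution.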

Relevance:

40.00%

Abstract:

The mechanisms underlying the parsing of a spatial distribution of velocity vectors into two adjacent (spatially segregated) or overlapping (transparent) motion surfaces were examined using random dot kinematograms. Parsing might occur using either of two principles. Surfaces might be defined on the basis of similarity of motion vectors and then sharp perceptual boundaries drawn between different surfaces (continuity-based segmentation). Alternatively, detection of a high gradient of direction or speed separating the motion surfaces might drive the process (discontinuity-based segmentation). To establish which method is used, we examined the effect of blurring the motion direction gradient. In the case of a sharp direction gradient, each dot had one of two directions differing by 135°. With a shallow gradient, most dots had one of two directions but the directions of the remainder spanned the range between one motion-defined surface and the other. In the spatial segregation case the gradient defined a central boundary separating two regions. In the transparent version the dots were randomly positioned. In both cases all dots moved with the same speed and existed for only two frames before being randomly replaced. The ability of observers to parse the motion distribution was measured in terms of their ability to discriminate the direction of one of the two surfaces. Performance was hardly affected by spreading the gradient over at least 25% of the dots (corresponding to a 1° strip in the segregation case). We conclude that detection of sharp velocity gradients is not necessary for distinguishing different motion surfaces.