941 results for Enunciation scene


Relevance: 20.00%

Abstract:

This paper describes a new method for color text localization in generic scene images containing text of different scripts and with arbitrary orientations. A representative set of colors is first identified using edge information to initialize an unsupervised clustering algorithm. Text components are identified in each color layer using a combination of a support vector machine and a neural network classifier trained on a set of low-level features derived from geometric, boundary, stroke and gradient information. Experiments on camera-captured images containing variable fonts, sizes, colors, irregular layouts, non-uniform illumination and multiple scripts illustrate the robustness of the method. The proposed method yields a precision of 0.8 and a recall of 0.86 on a database of 100 images. The method is also compared with others in the literature using the ICDAR 2003 robust reading competition dataset.
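
As an illustration of the edge-seeded color clustering step, the sketch below builds color layers with k-means; OpenCV, scikit-learn, the choice of k-means and the number of layers are assumptions, and the SVM/neural-network component classification stage is left out.

```python
# Minimal sketch of edge-seeded color-layer decomposition.
# Assumptions (not from the paper): OpenCV/scikit-learn, k-means as the
# unsupervised clustering step, and a fixed number of color layers.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def color_layers(image_bgr, n_layers=6):
    """Split an image into color layers seeded by colors found on edges."""
    edges = cv2.Canny(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY), 100, 200)
    edge_colors = image_bgr[edges > 0].astype(np.float32)

    # Cluster only the colors sampled along edges to get representative colors.
    km = KMeans(n_clusters=n_layers, n_init=10).fit(edge_colors)

    # Assign every pixel to its nearest representative color.
    labels = km.predict(image_bgr.reshape(-1, 3).astype(np.float32))
    labels = labels.reshape(image_bgr.shape[:2])

    # One binary layer per representative color; candidate text components
    # would then be extracted per layer and passed to the trained classifiers.
    return [(labels == k).astype(np.uint8) * 255 for k in range(n_layers)]
```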

Relevance: 20.00%

Abstract:

In this paper, we describe a method for feature extraction and classification of characters manually isolated from scene or natural images. Characters in a scene image may be affected by low resolution, uneven illumination or occlusion. We propose a novel method to binarize grayscale images by minimizing an energy functional. The Discrete Cosine Transform and the Angular Radial Transform are used to extract features from the characters after normalization for scale and translation. We have evaluated our method on the complete test set of the Chars74k dataset for the English and Kannada scripts, consisting of handwritten and synthesized characters as well as characters extracted from camera-captured images. We use only the synthesized and handwritten characters from this dataset as the training set. Nearest-neighbor classification is used in our experiments.
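
A minimal sketch of the DCT feature stage and the nearest-neighbor classifier follows; the energy-minimization binarization and the Angular Radial Transform are omitted, and the 32x32 normalization size and 8x8 coefficient block are illustrative values rather than ones taken from the paper.

```python
# Sketch of DCT-based character features plus 1-NN classification.
import cv2
import numpy as np

def dct_features(char_gray, size=32, block=8):
    """Scale-normalize a character image and keep low-frequency DCT coefficients."""
    norm = cv2.resize(char_gray, (size, size)).astype(np.float32)
    coeffs = cv2.dct(norm)
    return coeffs[:block, :block].flatten()

def nearest_neighbor(query, train_feats, train_labels):
    """1-NN classification over precomputed training features."""
    dists = np.linalg.norm(train_feats - query, axis=1)
    return train_labels[np.argmin(dists)]
```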

Relevance: 20.00%

Abstract:

In this paper, we report a breakthrough result on the difficult task of segmentation and recognition of colored text from the word image dataset of the ICDAR robust reading competition challenge 2: reading text in scene images. We split the word image into individual color, gray and lightness planes and enhance the contrast of each of these planes independently by a power-law transform. The discrimination factor of each plane is computed as the maximum between-class variance used in Otsu thresholding, and the plane with the maximum discrimination factor is selected for segmentation. The trial version of Omnipage OCR is then used on the binarized words for recognition. Our recognition results on the ICDAR 2011 and ICDAR 2003 word datasets are compared with those reported in the literature. As a baseline, images binarized by simple global and local thresholding techniques were also recognized. The word recognition rate obtained by our non-linear enhancement and plane selection method is 72.8% and 66.2% for the ICDAR 2011 and 2003 word datasets, respectively. We have created pixel-level ground truth for each image to benchmark these datasets, using a toolkit developed by us. The recognition rate on the benchmarked images is 86.7% and 83.9% for the ICDAR 2011 and 2003 datasets, respectively.
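
The plane-selection idea can be sketched as follows, assuming OpenCV; the gamma value and the specific set of planes are illustrative, and the paper's enhancement details may differ.

```python
# Sketch: power-law enhance each plane, score each by the between-class
# variance reached at its Otsu threshold, and binarize the best plane.
import cv2
import numpy as np

def best_plane_binarization(word_bgr, gamma=1.5):
    b, g, r = cv2.split(word_bgr)
    gray = cv2.cvtColor(word_bgr, cv2.COLOR_BGR2GRAY)
    lightness = cv2.cvtColor(word_bgr, cv2.COLOR_BGR2LAB)[:, :, 0]

    best_score, best_binary = -1.0, None
    for plane in (r, g, b, gray, lightness):
        # Power-law (gamma) contrast enhancement.
        enhanced = np.uint8(255 * (plane / 255.0) ** gamma)
        # Otsu's threshold maximizes the between-class variance; recompute that
        # variance at the returned threshold and use it as the plane's score.
        t, binary = cv2.threshold(enhanced, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        fg, bg = enhanced[enhanced > t], enhanced[enhanced <= t]
        if fg.size == 0 or bg.size == 0:
            continue
        w1, w2 = fg.size / enhanced.size, bg.size / enhanced.size
        score = w1 * w2 * (fg.mean() - bg.mean()) ** 2
        if score > best_score:
            best_score, best_binary = score, binary
    return best_binary  # fed to the OCR engine in the pipeline above
```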

Relevance: 20.00%

Abstract:

The report looks at trends developing in the area of the Lancashire River Authority which will, by the turn of the century, bring tremendous pressure to bear on its natural resources, particularly land and water. It examines the difficulties of maintaining an environment suitable for all, human or otherwise, in the face of the construction of energy plants and an increasing population. It explores the scheme for harnessing water on Morecambe Bay, including its advantages and disadvantages for fisheries. The report also looks at fish deaths and diseases in Morecambe Bay and the Lancashire area, providing statistics.

Relevance: 20.00%

Abstract:

This study aims to raise questions about the triad of politics, curriculum and technology in the public schools of the Municipality of Rio de Janeiro. The objects of study are an online platform called Educopédia and the teachers of the municipal network who apply for the role of Educopédia Ambassadors (Embaixadores da Educopédia). I discuss the platform and the Ambassador teachers as foreigners, in the sense in which Bhabha (2013) discusses diasporic subjects and the irruption of the new in processes of cultural translation. Thus, in dialogue with the authors Stephen Ball, Homi Bhabha, Jacques Derrida, Arjun Appadurai and Ernesto Laclau, I discuss Educopédia and its Ambassadors, taking them as foreigners in the processes of curriculum production who contribute to the irruption of the new, which is characterized not by the novelty of meanings, ideas and conceptions, but by hybrid meanings in a political context marked by the articulation between curriculum and technology as an indicator of quality. I understand this movement of articulation, performed with the participation of these foreigners, as techno-curricular, in which technology opens fissures in conceptions of the curriculum. I defend a perspective of curriculum understood as a process of cultural enunciation, a production of meanings that hybridize as a result of disputes over signification, within a policy that is cyclical rather than top-down, that circulates and is translated in all spaces, producing multiple meanings about the possibilities of technology for the teaching-learning process.

Relevance: 20.00%

Abstract:

Holistic representations of natural scenes are an effective and powerful source of information for semantic classification and analysis of arbitrary images. Recently, the frequency domain has been successfully exploited to holistically encode the content of natural scenes in order to obtain a robust representation for scene classification. In this paper, we present a new approach to naturalness classification of scenes using the frequency domain. The proposed method is based on the ordering of the Discrete Fourier Power Spectra. Features extracted from this ordering are shown to be sufficient to build a robust holistic representation for Natural vs. Artificial scene classification. Experiments show that the proposed frequency-domain method matches the accuracy of other state-of-the-art solutions. © 2008 Springer Berlin Heidelberg.
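
A rough sketch of a power-spectrum-based holistic descriptor in this spirit is given below; the paper's exact ordering-based features are not reproduced, and the sorted-spectrum binning used here is an illustrative stand-in.

```python
# Sketch: a coarse descriptor of the ordered discrete Fourier power spectrum.
import numpy as np

def power_spectrum_descriptor(gray_image, n_bins=64):
    """Return a coarse summary of the ordered Fourier power spectrum."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray_image))) ** 2
    ordered = np.sort(spectrum.ravel())[::-1]          # descending power ordering
    # Summarize the ordering with the log-energy in a fixed number of chunks.
    chunks = np.array_split(ordered, n_bins)
    return np.log1p(np.array([c.sum() for c in chunks]))
```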

Relevance: 20.00%

Abstract:

Automating the model generation process for infrastructure can substantially reduce modeling time and cost. This paper presents a method for generating a sparse point cloud of an infrastructure scene using a single video camera under practical constraints. It is the first step towards establishing an automatic framework for object-oriented as-built modeling. Motion blur and key-frame selection criteria are considered, and structure from motion and bundle adjustment are explored. The method is demonstrated in a case study in which the scene of a reinforced concrete bridge is videotaped, reconstructed, and metrically validated. The result indicates the applicability, efficiency, and accuracy of the proposed method.
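
The two-view core of such a pipeline can be sketched with OpenCV as follows; key-frame selection, blur filtering and bundle adjustment are omitted, and the intrinsic matrix K is assumed to be known rather than taken from the paper.

```python
# Sketch: triangulate a sparse point cloud from two key frames.
import cv2
import numpy as np

def two_view_points(frame1, frame2, K):
    """Estimate relative pose and triangulate matched features from two frames."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(frame1, None)
    kp2, des2 = orb.detectAndCompute(frame2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Relative camera motion from the essential matrix (structure from motion).
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # Triangulate inlier correspondences into 3D points.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    inl = mask.ravel() > 0
    pts4d = cv2.triangulatePoints(P1, P2, pts1[inl].T, pts2[inl].T)
    return (pts4d[:3] / pts4d[3]).T   # N x 3 sparse point cloud
```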

Relevance: 20.00%

Abstract:

We demonstrate a new method for extracting high-level scene information from the type of data available from simultaneous localisation and mapping systems. We model the scene with a collection of primitives (such as bounded planes), and make explicit use of both visible and occluded points in order to refine the model. Since our formulation allows for different kinds of primitives and an arbitrary number of each, we use Bayesian model evidence to compare very different models on an even footing. Additionally, by making use of Bayesian techniques we can also avoid explicitly finding the optimal assignment of map landmarks to primitives. The results show that explicit reasoning about occlusion improves model accuracy and yields models which are suitable for aiding data association. © 2011. The copyright of this document resides with its authors.
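
As a rough sketch of evidence-based model comparison, the snippet below scores a set of bounded-plane models with a BIC-style approximation; the Gaussian point-to-plane residual model and the use of BIC as a stand-in for the full Bayesian model evidence are assumptions, not the paper's formulation.

```python
# Sketch: approximate log model evidence for a scene model made of planes.
import numpy as np

def log_evidence_bic(points, planes, sigma=0.05):
    """Score a plane-based scene model on map landmarks.

    points : (N, 3) landmark positions; planes : list of (normal, offset) pairs.
    """
    n = len(points)
    # Each point is explained by its best-fitting plane (hard assignment here;
    # the paper avoids committing to an explicit assignment).
    residuals = np.min(
        [np.abs(points @ normal - d) for normal, d in planes], axis=0)
    log_lik = -0.5 * np.sum((residuals / sigma) ** 2) \
              - n * np.log(sigma * np.sqrt(2 * np.pi))
    k = 4 * len(planes)                    # parameters: normal (3) + offset per plane
    return log_lik - 0.5 * k * np.log(n)   # BIC-style complexity penalty
```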

Relevance: 20.00%

Abstract:

Broadcast soccer video is usually recorded by one main camera that constantly gazes at the part of the playfield where a highlight event is happening. The camera parameters and their variation therefore have a close relationship with the semantic content of soccer video, and camera calibration for soccer video has attracted much interest. Previous calibration methods either deal only with the goal scene or impose strict calibration conditions and high complexity, so they do not properly handle non-goal scenes such as the midfield or center-forward scene. In this paper, based on a new soccer field model, a field symbol extraction algorithm is proposed to extract the calibration information. A two-stage calibration approach is then developed which can calibrate the camera not only for the goal scene but also for non-goal scenes. Preliminary experimental results demonstrate its robustness and accuracy. (c) 2010 Elsevier B.V. All rights reserved.
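
Once field marks are matched to a metric field model, the basic calibration step common to such approaches is a playfield-to-image homography; the sketch below assumes OpenCV and leaves out the field symbol extraction, which is the part addressed by the paper.

```python
# Sketch: estimate the homography mapping field-model coordinates to pixels.
import cv2
import numpy as np

def field_homography(model_points, image_points):
    """Fit a field-to-image homography from N >= 4 point correspondences.

    model_points, image_points : corresponding N x 2 arrays.
    """
    H, inliers = cv2.findHomography(np.float32(model_points),
                                    np.float32(image_points), cv2.RANSAC, 3.0)
    return H, inliers

# Example: project the center spot of a 105 m x 68 m field model into the image.
# H, _ = field_homography(model_pts, image_pts)
# center = cv2.perspectiveTransform(np.float32([[[52.5, 34.0]]]), H)
```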