926 results for "Image recognition and processing"


Relevance: 100.00%

Abstract:

The main aim of this project is to apply image processing and segmentation techniques for computer vision, through an omnidirectional vision system, to agricultural mobile robots (AMRs) facing trajectory navigation and localization problems. Computational methods based on the JSEG algorithm were used to classify and characterize these problems, together with Artificial Neural Networks (ANNs) for image recognition. It was therefore possible to run simulations and analyse the performance of the JSEG image segmentation technique on the Matlab/Octave computational platforms, along with a customized back-propagation Multilayer Perceptron (MLP) algorithm and statistical methods applied as structured heuristics in a Simulink environment. With these procedures in place, it was possible to classify and characterize the HSV color-space segments and to recognize the segmented images, with reasonably accurate results. © 2010 IEEE.
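A minimal sketch of the segment-then-classify idea described above, not the authors' Matlab/Simulink pipeline: a simple HSV colour clustering (k-means) stands in for JSEG, and scikit-learn's MLPClassifier stands in for the customized back-propagation MLP. All data, class labels and parameter values below are hypothetical.

```python
# Sketch only: k-means in HSV space stands in for JSEG segmentation, and
# MLPClassifier stands in for the customised back-propagation MLP. Data is random.
import numpy as np
from matplotlib.colors import rgb_to_hsv
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

def segment_hsv(image_rgb, n_segments=6):
    """Crude colour segmentation in HSV space (stand-in for JSEG)."""
    hsv = rgb_to_hsv(image_rgb.astype(float) / 255.0)          # (H, W, 3) in [0, 1]
    pixels = hsv.reshape(-1, 3)
    labels = KMeans(n_clusters=n_segments, n_init=10).fit_predict(pixels)
    return labels.reshape(image_rgb.shape[:2]), hsv

def segment_features(hsv, labels):
    """Mean HSV per segment: the feature vector fed to the classifier."""
    return np.array([hsv[labels == k].mean(axis=0) for k in np.unique(labels)])

# Hypothetical training data: per-segment HSV means with class labels
# (e.g. 0 = crop row, 1 = soil, 2 = obstacle).
X_train = np.random.rand(60, 3)
y_train = np.random.randint(0, 3, size=60)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X_train, y_train)

image = (np.random.rand(48, 64, 3) * 255).astype(np.uint8)     # placeholder frame
labels, hsv = segment_hsv(image)
print(clf.predict(segment_features(hsv, labels)))               # one class per segment
```

The design point carried over from the abstract is the split between colour-space segmentation and a separately trained classifier operating on per-segment features.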

Relevance: 100.00%

Abstract:

DNA interstrand crosslinks (ICLs) are among the most toxic types of damage to a cell. Many ICL-inducing agents, e.g. cisplatin and psoralen, are widely used as therapeutic agents. A better understanding of the cellular mechanisms that eliminate ICLs is important for the improvement of human health; however, ICL repair is still poorly understood in mammals. Using a triplex-directed site-specific ICL model, we studied the roles of mismatch repair (MMR) proteins in ICL repair in human cells. We are also interested in using psoralen-conjugated triplex-forming oligonucleotides (TFOs) to direct ICLs to a specific site in targeted DNA and in mammalian genomes.

MSH2 protein is the common subunit of the two MMR recognition complexes, MutSα and MutSβ. We showed that MSH2 deficiency renders human cells hypersensitive to psoralen ICLs. MMR recognition complexes bind specifically to triplex-directed psoralen ICLs in vitro. Together with the fact that psoralen ICL-induced repair synthesis is dramatically decreased in MSH2-deficient cell extracts, this demonstrates that MSH2 function is critical for the recognition and processing of psoralen ICLs in human cells. Interestingly, lack of MSH2 does not reduce the level of psoralen ICL-induced mutagenesis in human cells, suggesting that MSH2 does not contribute to error-generating repair of psoralen ICLs and may therefore be part of a novel error-free mechanism for repairing ICLs. We also studied the role of MLH1, another key MMR protein, in the processing of psoralen ICLs. MLH1-deficient human cells are more resistant to psoralen plus UVA treatment. Importantly, MLH1 function is not required for the mutagenic repair of psoralen ICLs, suggesting that it is not involved in the error-generating repair of this type of DNA damage in human cells.

These are the first data indicating that mismatch repair proteins may participate in a relatively error-free mechanism for processing psoralen ICLs in human cells. Enhancing MMR protein function relative to nucleotide excision repair may reduce the mutagenesis caused by DNA ICLs in humans.

To specifically target ICLs to mammalian genes, we identified novel TFO target sequences in the mouse and human genomes. Using this information, many critical mammalian genes can now be targeted by TFOs.

Relevance: 100.00%

Abstract:

The What-and-Where filter forms part of a neural network architecture for spatial mapping, object recognition, and image understanding. The Where filter responds to an image figure that has been separated from its background. It generates a spatial map whose cell activations simultaneously represent the position, orientation, and size of all the figures in a scene (where they are). This spatial map may be used to direct spatially localized attention to these image features. A multiscale array of oriented detectors, followed by competitive and interpolative interactions between position, orientation, and size scales, is used to define the Where filter. This analysis discloses several issues that need to be dealt with by a spatial mapping system that is based upon oriented filters, such as the role of cliff filters with and without normalization, the double peak problem of maximum orientation across size scale, and the different self-similar interpolation properties across orientation than across size scale. Several computationally efficient Where filters are proposed. The Where filter may be used for parallel transformation of multiple image figures into invariant representations that are insensitive to the figures' original position, orientation, and size. These invariant figural representations form part of a system devoted to attentive object learning and recognition (what it is). Unlike some alternative models where serial search for a target occurs, a What-and-Where representation can be used to rapidly search in parallel for a desired target in a scene. Such a representation can also be used to learn multidimensional representations of objects and their spatial relationships for purposes of image understanding. The What-and-Where filter is inspired by neurobiological data showing that a Where processing stream in the cerebral cortex is used for attentive spatial localization and orientation, whereas a What processing stream is used for attentive object learning and recognition.
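The sketch below is not the Where filter itself (which uses a multiscale array of oriented detectors with competitive and interpolative interactions); it only illustrates, via image moments on a hypothetical binary figure, the three quantities the resulting spatial map represents: position, orientation and size.

```python
# Illustrative read-out of position, orientation and size for one binary figure.
# Image moments are a stand-in; the Where filter's oriented-detector machinery
# described above is not reproduced here.
import numpy as np

def where_readout(figure):
    """Return (row, col) position, orientation in radians, and size (pixel area)."""
    ys, xs = np.nonzero(figure)
    cy, cx = ys.mean(), xs.mean()                     # position (centroid)
    dy, dx = ys - cy, xs - cx
    cov = np.cov(np.stack([dx, dy]))                  # second central moments
    # The principal axis of the covariance gives the figure's dominant orientation.
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]
    orientation = np.arctan2(major[1], major[0])
    return (cy, cx), orientation, len(ys)

figure = np.zeros((64, 64), dtype=bool)
figure[20:30, 10:50] = True                           # an elongated horizontal bar
# Expected: centre ≈ (24.5, 29.5), horizontal axis (0 or π rad), area 400.
print(where_readout(figure))
```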

Relevance: 100.00%

Abstract:

Data registration refers to a series of techniques for matching or bringing similar objects or datasets into alignment. These techniques enjoy widespread use in a wide variety of applications, such as video coding, tracking, object and face detection and recognition, surveillance and satellite imaging, medical image analysis, and structure from motion. Registration methods are as numerous as their manifold uses, ranging from pixel-level and block- or feature-based methods to Fourier-domain methods.

This book focuses on algorithms and techniques for image and video registration, together with quality-performance metrics. The authors provide various assessment metrics for measuring registration quality alongside analyses of registration techniques, introducing and explaining both familiar and state-of-the-art registration methodologies used in a variety of targeted applications.

Key features:
- Provides a state-of-the-art review of image and video registration techniques, allowing readers to develop an understanding of how well the techniques perform by using specific quality assessment criteria
- Addresses a range of applications from familiar image and video processing domains to satellite and medical imaging among others, enabling readers to discover novel methodologies with utility in their own research
- Discusses quality evaluation metrics for each application domain with an interdisciplinary approach from different research perspectives
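As a concrete illustration of the Fourier-domain registration methods mentioned above, the sketch below implements basic phase correlation to estimate a pure translation between two images; the synthetic images and the known circular shift are illustrative and not taken from the book.

```python
# Phase correlation: estimate the (row, col) translation of `moving` relative to
# `fixed` from the phase of the cross-power spectrum. Synthetic data only.
import numpy as np

def phase_correlation(fixed, moving):
    """Return the (row, col) shift of `moving` relative to `fixed`."""
    F = np.fft.fft2(fixed)
    M = np.fft.fft2(moving)
    cross_power = M * np.conj(F)
    cross_power /= np.abs(cross_power) + 1e-12        # keep phase information only
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image size to negative values.
    return tuple(int(p) if p <= s // 2 else int(p) - s for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(0)
fixed = rng.random((128, 128))
moving = np.roll(fixed, shift=(5, -9), axis=(0, 1))   # known circular shift
print(phase_correlation(fixed, moving))                # expected: (5, -9)
```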

Relevance: 100.00%

Abstract:

Data registration refers to a series of techniques for matching or bringing similar objects or datasets into alignment. These techniques enjoy widespread use in a wide variety of applications, such as video coding, tracking, object and face detection and recognition, surveillance and satellite imaging, medical image analysis, and structure from motion. Registration methods are as numerous as their manifold uses, ranging from pixel-level and block- or feature-based methods to Fourier-domain methods. This book focuses on algorithms and techniques for image and video registration, together with quality-performance metrics. The authors provide various assessment metrics for measuring registration quality alongside analyses of registration techniques, introducing and explaining both familiar and state-of-the-art registration methodologies used in a variety of targeted applications.
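To make the notion of a registration quality metric concrete, the sketch below computes two common measures, PSNR and normalized cross-correlation, between a reference image and a synthetically perturbed "registered" image; the data and noise level are illustrative only and not drawn from the book.

```python
# Two simple registration-quality measures on synthetic data: PSNR and NCC.
import numpy as np

def psnr(reference, registered, peak=1.0):
    """Peak signal-to-noise ratio in dB between two images in [0, peak]."""
    mse = np.mean((reference - registered) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def ncc(reference, registered):
    """Normalized cross-correlation (1.0 means a perfect linear match)."""
    a = reference - reference.mean()
    b = registered - registered.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

rng = np.random.default_rng(1)
reference = rng.random((64, 64))
registered = np.clip(reference + 0.05 * rng.standard_normal((64, 64)), 0, 1)
print(f"PSNR = {psnr(reference, registered):.1f} dB, NCC = {ncc(reference, registered):.3f}")
```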

Relevance: 100.00%

Abstract:

Vision extracts useful information from images. Reconstructing the three-dimensional structure of our environment and recognizing the objects that populate it are among the most important functions of our visual system. Computer vision researchers study the computational principles of vision and aim at designing algorithms that reproduce these functions. Vision is difficult: the same scene may give rise to very different images depending on illumination and viewpoint. Typically, an astronomical number of hypotheses exist that in principle have to be analyzed to infer a correct scene description. Moreover, image information might be extracted at different levels of spatial and logical resolution depending on the image-processing task. Knowledge of the world allows the visual system to limit the amount of ambiguity and to greatly simplify visual computations. We discuss how simple properties of the world are captured by the Gestalt rules of grouping, how the visual system may learn and organize models of objects for recognition, and how one may control the complexity of the description that the visual system computes.
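As a toy illustration of one Gestalt grouping cue discussed above, proximity, the sketch below clusters hypothetical feature points so that nearby points end up in the same group; DBSCAN and its parameter values are merely a convenient stand-in, not a mechanism proposed in the text.

```python
# Proximity grouping on synthetic feature points: nearby points form one group.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(2)
# Two spatially separated clouds of feature points plus a few isolated outliers.
points = np.vstack([
    rng.normal(loc=(10, 10), scale=1.0, size=(30, 2)),
    rng.normal(loc=(40, 35), scale=1.0, size=(30, 2)),
    rng.uniform(0, 50, size=(5, 2)),
])
labels = DBSCAN(eps=3.0, min_samples=4).fit_predict(points)
print(sorted(set(labels)))   # e.g. [-1, 0, 1]: two proximity groups plus noise
```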

Relevance: 100.00%

Abstract:

While researchers strive to improve automatic face recognition performance, the relationship between image resolution and face recognition performance has not received much attention. This relationship is examined systematically, and a framework is developed in which results from super-resolution techniques can be compared. Three super-resolution techniques are compared using the Eigenface and Elastic Bunch Graph Matching face recognition engines. The parameter ranges over which these techniques provide better recognition performance than interpolated images are determined.
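A minimal sketch of the comparison framework described above: the same Eigenface-style recognizer (PCA plus nearest neighbour) is scored on interpolated test images and on images produced by some super-resolution routine. Random arrays stand in for the face images and the two test sets, so the printed accuracies are meaningless placeholders; only the structure of the comparison is illustrated.

```python
# Score one Eigenface-style recogniser on two versions of the test set
# (interpolated vs. super-resolved). All image data below is random placeholder.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

def eigenface_accuracy(train_imgs, train_ids, test_imgs, test_ids, n_components=20):
    """PCA 'eigenfaces' + 1-nearest-neighbour matching, returning test accuracy."""
    flat_train = train_imgs.reshape(len(train_imgs), -1)
    flat_test = test_imgs.reshape(len(test_imgs), -1)
    pca = PCA(n_components=n_components).fit(flat_train)
    clf = KNeighborsClassifier(n_neighbors=1).fit(pca.transform(flat_train), train_ids)
    return accuracy_score(test_ids, clf.predict(pca.transform(flat_test)))

rng = np.random.default_rng(3)
train = rng.random((100, 32, 32)); train_ids = np.repeat(np.arange(20), 5)
interp = rng.random((40, 32, 32)); test_ids = np.repeat(np.arange(20), 2)
sr = rng.random((40, 32, 32))      # stand-in for super-resolved test images

print("interpolated:  ", eigenface_accuracy(train, train_ids, interp, test_ids))
print("super-resolved:", eigenface_accuracy(train, train_ids, sr, test_ids))
```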

Relevance: 100.00%

Abstract:

Huelse, M., Barr, D. R. W., Dudek, P.: Cellular Automata and non-static image processing for embodied robot systems on a massively parallel processor array. In: Adamatzky, A., et al. (eds) AUTOMATA 2008: Theory and Applications of Cellular Automata. Luniver Press, 2008, pp. 504-510. Sponsorship: EPSRC.

Relevance: 100.00%

Abstract:

British Petroleum (89A-1204); Defense Advanced Research Projects Agency (N00014-92-J-4015); National Science Foundation (IRI-90-00530); Office of Naval Research (N00014-91-J-4100); Air Force Office of Scientific Research (F49620-92-J-0225)

Relevance: 100.00%

Abstract:

In this paper we present an improved model for line and edge detection in cortical area V1. This model is based on responses of simple and complex cells, and it is multi-scale with no free parameters. We illustrate the use of the multi-scale line/edge representation in different processes: visual reconstruction or brightness perception, automatic scale selection and object segregation. A two-level object categorization scenario is tested in which pre-categorization is based on coarse scales only and final categorization on coarse plus fine scales. We also present a multi-scale object and face recognition model. Processing schemes are discussed in the framework of a complete cortical architecture. The fact that brightness perception and object recognition may be based on the same symbolic image representation is an indication that the entire (visual) cortex is involved in consciousness.
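A minimal sketch of the simple- and complex-cell responses the model above builds on: quadrature Gabor pairs play the role of even and odd simple cells, and their local energy plays the role of complex cells, computed at a few scales. The kernel sizes, wavelengths and test image are illustrative choices, not the model's actual parameters.

```python
# Quadrature Gabor pair (simple cells) and local energy (complex cell) at
# several scales, applied to a synthetic vertical step edge. Parameters illustrative.
import numpy as np
from scipy.signal import fftconvolve

def gabor_pair(size, wavelength, theta, sigma):
    """Even (cosine) and odd (sine) Gabor kernels: a quadrature simple-cell pair."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    phase = 2 * np.pi * xr / wavelength
    return envelope * np.cos(phase), envelope * np.sin(phase)

def complex_cell_response(image, wavelength, theta):
    even, odd = gabor_pair(size=4 * int(wavelength) + 1, wavelength=wavelength,
                           theta=theta, sigma=0.5 * wavelength)
    re = fftconvolve(image, even, mode="same")    # even simple-cell response
    im = fftconvolve(image, odd, mode="same")     # odd simple-cell response
    return np.sqrt(re ** 2 + im ** 2)             # complex cell = local energy

image = np.zeros((128, 128)); image[:, 64:] = 1.0   # a vertical step edge
for wavelength in (4, 8, 16):                        # fine-to-coarse scales
    energy = complex_cell_response(image, wavelength, theta=0.0)
    print(wavelength, round(float(energy[64, 64]), 3))   # energy at the edge
```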

Relevance: 100.00%

Abstract:

Traditional methods of object recognition rely on shape and so are very difficult to apply in cluttered, wide-angle and low-detail views such as surveillance scenes. To address this, a method of indirect object recognition is proposed, in which human activity is used to infer both the location and the identity of objects. No shape analysis is necessary. The concept is dubbed 'interaction signatures', since the premise is that a human will interact with an object in ways characteristic of that object's function: for example, a person sits in a chair and drinks from a cup. The human-centred approach means that recognition is possible in low-detail views and is largely invariant to the shape of objects within the same functional class. This paper implements a Bayesian network for classifying region patches with object labels, building upon our previous work in automatically segmenting and recognising a human's interactions with the objects. Experiments show that interaction signatures can successfully find and label objects in low-detail views and are equally effective at recognising test objects that differ markedly in appearance from the training objects.
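A minimal sketch of the interaction-signature idea: object labels are inferred from observed human interactions rather than from shape. The paper uses a Bayesian network over region patches; the naive Bayes classifier, the interaction vocabulary and the counts below are simplified, hypothetical stand-ins.

```python
# Infer an object label for a region from counts of observed human interactions.
# Naive Bayes is a simplified stand-in for the paper's Bayesian network; the
# interaction vocabulary and training counts are entirely hypothetical.
import numpy as np
from sklearn.naive_bayes import MultinomialNB

INTERACTIONS = ["sit", "drink", "type", "walk_past"]   # hypothetical vocabulary
OBJECTS = ["chair", "cup", "keyboard", "floor"]

# Each row: counts of interactions observed at one image region over time.
X_train = np.array([
    [5, 0, 0, 1],   # mostly sat on      -> chair
    [0, 4, 0, 0],   # drunk from         -> cup
    [1, 0, 6, 0],   # typed on           -> keyboard
    [0, 0, 0, 9],   # only walked past   -> floor
] * 5)
y_train = np.array([0, 1, 2, 3] * 5)

model = MultinomialNB().fit(X_train, y_train)
region = np.array([[4, 0, 0, 2]])                      # sat on, occasionally passed
print(OBJECTS[int(model.predict(region)[0])])          # expected: "chair"
```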