925 results for Visual Speech Recognition, Multiple Views, Frontal View, Profile View


Relevance: 100.00%

Abstract:

A method for reconstructing 3D rational B-spline surfaces from multiple views is proposed. The method takes advantage of the projective invariance properties of rational B-splines. Given feature correspondences in multiple views, the 3D surface is reconstructed via a four-step framework. First, corresponding features in each view are given an initial surface parameter value (s, t), and a 2D B-spline is fitted in each view. After this initialization, an iterative minimization procedure alternates between updating the 2D B-spline control points and re-estimating each feature's (s, t). Next, a non-linear minimization method is used to upgrade the 2D B-splines to 2D rational B-splines and obtain a better fit. Finally, a factorization method is used to reconstruct the 3D B-spline surface from the 2D B-splines in each view. This surface recovery method can be applied in both the perspective and orthographic cases; the orthographic case allows the use of additional constraints in the recovery. Experiments with real and synthetic imagery demonstrate the efficacy of the approach for the orthographic case.
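
A minimal sketch of the first two steps under stated assumptions (a clamped-knot, tensor-product patch; all function and variable names are ours, not the authors'): given each feature's current (s, t) estimate, the 2D B-spline control points for one view follow from a linear least-squares fit. The full method then alternates this fit with re-estimation of each (s, t), upgrades to rational B-splines, and factorizes across views.

```python
import numpy as np

def bspline_basis(u, knots, k):
    """All degree-k B-spline basis functions at parameter u (Cox-de Boor)."""
    m = len(knots) - 1
    N = np.array([1.0 if knots[i] <= u < knots[i + 1] else 0.0 for i in range(m)])
    if u >= knots[-1]:                      # close the last span at the right end
        N[m - k - 1] = 1.0
    for d in range(1, k + 1):
        for i in range(m - d):
            left = 0.0 if knots[i + d] == knots[i] else \
                (u - knots[i]) / (knots[i + d] - knots[i]) * N[i]
            right = 0.0 if knots[i + d + 1] == knots[i + 1] else \
                (knots[i + d + 1] - u) / (knots[i + d + 1] - knots[i + 1]) * N[i + 1]
            N[i] = left + right
    return N[:m - k]                        # one value per control point

def fit_2d_patch(features, params, knots_s, knots_t, k=3):
    """Least-squares fit of a tensor-product 2D B-spline patch to image
    features, given the current (s, t) parameter estimate of each feature.
    features: (F, 2) image points; params: (F, 2) their (s, t) values.
    Returns control points of shape (ns, nt, 2)."""
    rows = [np.kron(bspline_basis(s, knots_s, k), bspline_basis(t, knots_t, k))
            for s, t in params]
    A = np.asarray(rows)                    # (F, ns*nt) design matrix
    ctrl, *_ = np.linalg.lstsq(A, features, rcond=None)
    ns, nt = len(knots_s) - k - 1, len(knots_t) - k - 1
    return ctrl.reshape(ns, nt, 2)
```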

Relevance: 100.00%

Abstract:

A method for reconstruction of 3D rational B-spline surfaces from multiple views is proposed. Given corresponding features in multiple views, though not necessarily visible in all views, the surface is reconstructed. First, 2D B-spline patches are fitted to each view. The 3D B-splines and projection matrices can then be extracted from the 2D B-splines using factorization methods. The surface fit is then further refined via an iterative procedure. Finally, a hierarchical fitting scheme is proposed to allow modeling of complex surfaces by means of knot insertion. Experiments with real imagery demonstrate the efficacy of the approach.
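
Because B-splines are preserved under affine maps, the 2D control points fitted in each view can be treated (in the orthographic/affine setting) as projections of the 3D control points, so a Tomasi-Kanade-style factorization of the stacked control points recovers both the projection matrices and the 3D control points. The sketch below illustrates that step only, with our own naming and data layout, not the paper's exact algorithm.

```python
import numpy as np

def factorize_controls(W):
    """Affine factorization over B-spline control points.
    W: (2*V, C) matrix whose rows 2v and 2v+1 hold the x and y coordinates
    of the C control points of the 2D B-spline fitted in view v.
    Returns stacked 2x3 projections M, 3D control points X (3, C), and the
    per-view translations t, all up to a common 3x3 affine ambiguity."""
    t = W.mean(axis=1, keepdims=True)       # remove per-view translation
    U, S, Vt = np.linalg.svd(W - t, full_matrices=False)
    sqrt_S = np.sqrt(S[:3])
    M = U[:, :3] * sqrt_S                   # rows 2v:2v+2 = projection of view v
    X = sqrt_S[:, None] * Vt[:3]            # 3D control points
    return M, X, t
```

The recovered 3D control points are then attached to the original knot vectors to give the 3D B-spline surface; metric constraints can resolve the affine ambiguity in the orthographic case.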

Relevance: 100.00%

Abstract:

We propose a multi-object multi-camera framework for tracking large numbers of tightly-spaced objects that rapidly move in three dimensions. We formulate the problem of finding correspondences across multiple views as a multidimensional assignment problem and use a greedy randomized adaptive search procedure to solve this NP-hard problem efficiently. To account for occlusions, we relax the one-to-one constraint that one measurement corresponds to one object and iteratively solve the relaxed assignment problem. After correspondences are established, object trajectories are estimated by stereoscopic reconstruction using an epipolar-neighborhood search. We embedded our method into a tracker-to-tracker multi-view fusion system that not only obtains the three-dimensional trajectories of closely-moving objects but also accurately settles track uncertainties that could not be resolved from single views due to occlusion. We conducted experiments to validate our greedy assignment procedure and our technique to recover from occlusions. We successfully track hundreds of flying bats and provide an analysis of their group behavior based on 150 reconstructed 3D trajectories.
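
The GRASP named above combines a randomized greedy construction over a restricted candidate list with a local-search improvement phase. As a simplified illustration (an ordinary two-frame assignment with a given cost matrix, rather than the paper's multidimensional, occlusion-relaxed formulation; parameter names and values are ours), a sketch:

```python
import numpy as np

def grasp_assignment(cost, alpha=0.3, iters=50, seed=None):
    """GRASP sketch for a square assignment problem: greedy randomized
    construction over a restricted candidate list, then 2-swap local search;
    the best solution over `iters` restarts is kept."""
    rng = np.random.default_rng(seed)
    n = cost.shape[0]
    best, best_cost = None, np.inf
    for _ in range(iters):
        # --- construction: repeatedly pick a pair from the candidate list ---
        rows, cols, assign = set(range(n)), set(range(n)), {}
        while rows:
            cands = [(cost[r, c], r, c) for r in rows for c in cols]
            cmin = min(c0 for c0, _, _ in cands)
            cmax = max(c0 for c0, _, _ in cands)
            rcl = [(r, c) for c0, r, c in cands if c0 <= cmin + alpha * (cmax - cmin)]
            r, c = rcl[rng.integers(len(rcl))]
            assign[r] = c
            rows.remove(r); cols.remove(c)
        # --- local search: accept 2-swaps that lower the total cost ---
        keys, improved = list(assign), True
        while improved:
            improved = False
            for i in range(n):
                for j in range(i + 1, n):
                    a, b = keys[i], keys[j]
                    if cost[a, assign[b]] + cost[b, assign[a]] < \
                       cost[a, assign[a]] + cost[b, assign[b]] - 1e-12:
                        assign[a], assign[b] = assign[b], assign[a]
                        improved = True
        total = sum(cost[r, c] for r, c in assign.items())
        if total < best_cost:
            best, best_cost = dict(assign), total
    return best, best_cost
```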

Relevance: 100.00%

Abstract:

Working memory neural networks are characterized which encode the invariant temporal order of sequential events. Inputs to the networks, called Sustained Temporal Order REcurrent (STORE) models, may be presented at widely differing speeds, durations, and interstimulus intervals. The STORE temporal order code is designed to enable all emergent groupings of sequential events to be stably learned and remembered in real time, even as new events perturb the system. Such a competence is needed in neural architectures which self-organize learned codes for variable-rate speech perception, sensory-motor planning, or 3-D visual object recognition. Using such a working memory, a self-organizing architecture for invariant 3-D visual object recognition is described. The new model is based on the model of Seibert and Waxman (1990a), which builds a 3-D representation of an object from a temporally ordered sequence of its 2-D aspect graphs. The new model, called an ARTSTORE model, consists of the following cascade of processing modules: Invariant Preprocessor --> ART 2 --> STORE Model --> ART 2 --> Outstar Network.
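
As a loose, toy illustration only (not the published STORE shunting equations), the sketch below shows the core property claimed above: a stored activity pattern that depends on the order in which items arrive but not on presentation speed or interstimulus interval, because the memory is updated only when a new item arrives.

```python
def toy_order_memory(items, gain=0.8):
    """Toy order-coding working memory (illustrative only, not STORE itself).
    Each new item is stored with unit activity and all previously stored
    activities are scaled by `gain`: gain < 1 yields a recency gradient,
    gain > 1 a primacy gradient. Items are assumed distinct."""
    memory = {}
    for item in items:
        for k in memory:
            memory[k] *= gain
        memory[item] = 1.0
    total = sum(memory.values())
    return {k: v / total for k, v in memory.items()}   # normalized activity pattern

print(toy_order_memory(["A", "B", "C", "D"]))  # activities encode arrival order
```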

Relevance: 100.00%

Abstract:

In this thesis, three main questions were addressed using event-related potentials (ERPs): (1) the timing of lexical semantic access, (2) the influence of "top-down" processes on visual word processing, and (3) the influence of "bottom-up" factors on visual word processing. The timing of lexical semantic access was investigated in two studies using different designs. In Study 1, 14 participants completed two tasks: a standard lexical decision (LD) task, which required a word/nonword decision for each target stimulus, and a semantically primed version (LS) of it using the same category of words (e.g., animal) within each block, after which participants made a category judgment. In Study 2, another 12 participants performed a standard semantic priming task, where target stimulus words (e.g., nurse) could be either semantically related or unrelated to their primes (e.g., doctor, tree) but the order of presentation was randomized. We found evidence in both ERP studies that lexical semantic access might occur early, within the first 200 ms (at about 170 ms for Study 1 and at about 160 ms for Study 2). Our results are consistent with more recent ERP and eye-tracking studies and contrast with the traditional research focus on the N400 component. "Top-down" processes, such as a person's expectation and strategic decisions, were possible in Study 1 because of the blocked design, but not in Study 2, which used a randomized design. Comparing results from the two studies, we found that visual word processing could be affected by a person's expectation and that the effect occurred early, at a sensory/perceptual stage: a semantic task effect in the P1 component at about 100 ms in the ERP was found in Study 1, but not in Study 2. Furthermore, we found that such "top-down" influence on visual word processing might be mediated through separate mechanisms depending on whether the stimulus was a word or a nonword. "Bottom-up" factors involve inherent characteristics of particular words, such as bigram frequency (the total frequency of two-letter combinations of a word), word frequency (the frequency of the written form of a word), and neighborhood density (the number of words that can be generated by changing one letter of an original word or nonword). A bigram frequency effect was found when comparing the results from Studies 1 and 2, and it was examined more closely in Study 3. Fourteen participants performed a similar standard lexical decision task, but the words and nonwords were selected systematically to provide a greater range in the aforementioned factors. As a result, a total of 18 word conditions were created, with 18 nonword conditions matched on neighborhood density and neighborhood frequency. Using multiple regression analyses, we found that the P1 amplitude was significantly related to bigram frequency for both words and nonwords, consistent with results from Studies 1 and 2. In addition, word frequency and neighborhood frequency also influenced the P1 amplitude, separately for words and for nonwords, and there appeared to be a spatial dissociation between the two effects: for words, the word frequency effect in P1 was found at the left electrode site; for nonwords, the neighborhood frequency effect in P1 was found at the right electrode site. The implications of our findings are discussed.
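
For illustration, a minimal sketch of the kind of multiple regression analysis described for Study 3: predicting condition-level P1 amplitude from bigram frequency, word frequency, and neighborhood frequency via ordinary least squares. The variable names and data below are hypothetical, not the thesis data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_conditions = 18                              # 18 word conditions, as in Study 3
bigram_freq = rng.normal(size=n_conditions)    # hypothetical standardized predictors
word_freq = rng.normal(size=n_conditions)
neigh_freq = rng.normal(size=n_conditions)
p1_amplitude = 0.5 * bigram_freq + 0.3 * word_freq + rng.normal(0.0, 0.2, n_conditions)

# ordinary least-squares multiple regression: P1 ~ intercept + predictors
X = np.column_stack([np.ones(n_conditions), bigram_freq, word_freq, neigh_freq])
beta, *_ = np.linalg.lstsq(X, p1_amplitude, rcond=None)
print("intercept and regression weights:", beta)
```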

Relevance: 100.00%

Abstract:

The HMAX model has recently been proposed by Riesenhuber & Poggio as a hierarchical model of position- and size-invariant object recognition in visual cortex. It has also turned out to model successfully a number of other properties of the ventral visual stream (the visual pathway thought to be crucial for object recognition in cortex), and particularly of (view-tuned) neurons in macaque inferotemporal cortex, the brain area at the top of the ventral stream. The original modeling study only used ``paperclip'' stimuli, as in the corresponding physiology experiment, and did not explore systematically how model units' invariance properties depended on model parameters. In this study, we aimed at a deeper understanding of the inner workings of HMAX and its performance for various parameter settings and ``natural'' stimulus classes. We examined HMAX responses for different stimulus sizes and positions systematically and found a dependence of model units' responses on stimulus position for which a quantitative description is offered. Interestingly, we find that scale invariance properties of hierarchical neural models are not independent of stimulus class, as opposed to translation invariance, even though both are affine transformations within the image plane.
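
HMAX alternates template-matching ("S") stages with MAX-pooling ("C") stages, and it is the MAX over local position (and, across bands, scale) that produces the position and size tolerance examined above. A generic sketch of these two operations follows; the filter bank and pooling size are placeholders, not the model's actual tuning parameters.

```python
import numpy as np
from scipy.signal import correlate2d
from scipy.ndimage import maximum_filter

def s_layer(image, filters):
    """Template-matching ('S') stage: correlate the image with a bank of
    feature templates (standing in for the tuned filters used in HMAX)."""
    return np.stack([correlate2d(image, f, mode="same") for f in filters])

def c_layer(s_maps, pool_size):
    """MAX-pooling ('C') stage: take the maximum over a local spatial
    neighborhood (and, in the full model, over scales); this MAX operation
    is what yields position- and size-tolerant responses."""
    return np.stack([maximum_filter(m, size=pool_size) for m in s_maps])

# toy usage: four oriented bar templates, one pooling step
rng = np.random.default_rng(0)
image = rng.random((64, 64))
filters = [np.eye(5), np.fliplr(np.eye(5)), np.ones((1, 5)), np.ones((5, 1))]
c1 = c_layer(s_layer(image, filters), pool_size=8)
```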

Relevance: 100.00%

Abstract:

This workshop paper reports recent developments to a vision system for traffic interpretation which relies extensively on the use of geometrical and scene context. Firstly, a new approach to pose refinement is reported, based on forces derived from prominent image derivatives found close to an initial hypothesis. Secondly, a parameterised vehicle model is reported, able to represent different vehicle classes. This general vehicle model has been fitted to sample data, and subjected to a Principal Component Analysis to create a deformable model of common car types having 6 parameters. We show that the new pose recovery technique is also able to operate on the PCA model, to allow the structure of an initial vehicle hypothesis to be adapted to fit the prevailing context. We report initial experiments with the model, which demonstrate significant improvements to pose recovery.
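
The deformable model described above is obtained by applying PCA to fitted instances of the parameterised vehicle model and keeping the leading six modes of variation. A minimal sketch of that construction, with our own names and data layout:

```python
import numpy as np

def build_deformable_model(shapes, n_modes=6):
    """PCA over example vehicle shapes: `shapes` is (num_examples, dim), each
    row a flattened instance of the parameterised vehicle model. Returns the
    mean shape and the first n_modes deformation modes (principal components)."""
    mean = shapes.mean(axis=0)
    _, _, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
    return mean, Vt[:n_modes]

def instantiate(mean, modes, params):
    """A vehicle hypothesis is the mean shape deformed by a vector of
    mode weights (6 parameters for the model reported above)."""
    return mean + params @ modes
```

Pose refinement can then adjust the mode weights together with the pose so that the structure of an initial hypothesis adapts to the prevailing image context.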

Relevance: 100.00%

Abstract:

In this letter, a speech recognition algorithm based on the least-squares method is presented. In particular, the intention is to exemplify how such a traditional numerical technique can be applied to solve a signal processing problem that is usually treated with more elaborate formulations.
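
The letter's exact formulation is not reproduced here, so the following is only a generic sketch of how a least-squares criterion can drive recognition: each word class is represented by a matrix of training feature vectors, and a test utterance is assigned to the class whose least-squares reconstruction leaves the smallest residual.

```python
import numpy as np

def ls_classify(test_vec, class_templates):
    """Least-squares classification sketch: for each class, express the test
    feature vector as a linear combination of that class's training feature
    vectors and pick the class with the smallest residual.
    class_templates: dict mapping label -> (dim, num_templates) matrix."""
    best_label, best_res = None, np.inf
    for label, A in class_templates.items():
        coeffs, *_ = np.linalg.lstsq(A, test_vec, rcond=None)
        res = np.linalg.norm(A @ coeffs - test_vec)
        if res < best_res:
            best_label, best_res = label, res
    return best_label
```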

Relevance: 100.00%

Abstract:

In recent years we have developed several methods for 3D reconstruction. We began with the problem of reconstructing a 3D scene from a stereoscopic pair of images, developing methods based on energy functionals that produce dense disparity maps while preserving discontinuities at image boundaries. We then moved to the problem of reconstructing a 3D scene from multiple views (more than two). The multiple-view reconstruction method relies on the stereoscopic one: for every pair of consecutive images we estimate a disparity map, and then we apply a robust method that searches for good correspondences through the sequence of images. Recently we have proposed several methods for 3D surface regularization, a postprocessing step needed to smooth the final surface, which may be affected by noise or mismatched correspondences. These regularization methods are interesting because they use information from the reconstruction process and not only from the 3D surface. We have tackled all these problems with an energy minimization approach: we derive the Euler-Lagrange equation associated with the energy functional and approach the solution of the underlying partial differential equation (PDE) using a gradient descent method.
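
A minimal sketch of the energy-minimization recipe described above, for an illustrative quadratic energy E(d) = 0.5*||d - d0||^2 + 0.5*lam*||grad d||^2 over a disparity map d: its Euler-Lagrange equation is (d - d0) - lam*laplacian(d) = 0, and gradient descent on E smooths the map while keeping it close to the initial estimate d0. The specific energy, boundary handling, and step sizes are ours, not the authors'.

```python
import numpy as np

def regularize_disparity(d0, lam=0.1, step=0.2, iters=200):
    """Gradient descent on E(d) = 0.5*||d - d0||^2 + 0.5*lam*||grad d||^2.
    The gradient of E is (d - d0) - lam*laplacian(d), so each step smooths
    the disparity map while pulling it back toward the initial estimate d0.
    Periodic boundaries (np.roll) are used for brevity."""
    d = d0.copy()
    for _ in range(iters):
        lap = (np.roll(d, 1, 0) + np.roll(d, -1, 0) +
               np.roll(d, 1, 1) + np.roll(d, -1, 1) - 4 * d)
        grad_E = (d - d0) - lam * lap
        d -= step * grad_E
    return d
```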

Relevance: 100.00%

Abstract:

We used magnetoencephalography (MEG) to map the spatiotemporal evolution of cortical activity for visual word recognition. We show that for five-letter words, activity in the left hemisphere (LH) fusiform gyrus expands systematically in both the posterior-anterior and medial-lateral directions over the course of the first 500 ms after stimulus presentation. Contrary to what would be expected from cognitive models and hemodynamic studies, the component of this activity that spatially coincides with the visual word form area (VWFA) is not active until around 200 ms post-stimulus, and critically, this activity is preceded by and co-active with activity in parts of the inferior frontal gyrus (IFG, BA44/6). The spread of activity in the VWFA for words does not appear in isolation but is co-active in parallel with spread of activity in anterior middle temporal gyrus (aMTG, BA 21 and 38), posterior middle temporal gyrus (pMTG, BA37/39), and IFG.

Relevance: 100.00%

Abstract:

Speech perception routinely takes place in noisy or degraded listening environments, leading to ambiguity in the identity of the speech token. Here, I present one review paper and two experimental papers that highlight cognitive and visual speech contributions to the listening process, particularly in challenging listening environments. First, I survey the literature linking audiometric age-related hearing loss and cognitive decline and review the four proposed causal mechanisms underlying this link. I argue that future research in this area requires greater consideration of the functional overlap between hearing and cognition. I also present an alternative framework for understanding causal relationships between age-related declines in hearing and cognition, with emphasis on the interconnected nature of hearing and cognition and likely contributions from multiple causal mechanisms. I also provide a number of testable hypotheses to examine how impairments in one domain may affect the other. In my first experimental study, I examine the direct contribution of working memory (through a cognitive training manipulation) on speech in noise comprehension in older adults. My results challenge the efficacy of cognitive training more generally, and also provide support for the contribution of sentence context in reducing working memory load. My findings also challenge the ubiquitous use of the Reading Span test as a pure test of working memory. In a second experimental (fMRI) study, I examine the role of attention in audiovisual speech integration, particularly when the acoustic signal is degraded. I demonstrate that attentional processes support audiovisual speech integration in the middle and superior temporal gyri, as well as the fusiform gyrus. My results also suggest that the superior temporal sulcus is sensitive to intelligibility enhancement, regardless of how this benefit is obtained (i.e., whether it is obtained through visual speech information or speech clarity). In addition, I also demonstrate that both the cingulo-opercular network and motor speech areas are recruited in difficult listening conditions. Taken together, these findings augment our understanding of cognitive contributions to the listening process and demonstrate that memory, working memory, and executive control networks may flexibly be recruited in order to meet listening demands in challenging environments.