20 results for Émotion

in Boston University Digital Common


Relevance: 20.00%

Abstract:

We consider the motion of ballistic electrons within a superlattice miniband under the influence of an alternating electric field. We show that the interaction of electrons with the self-consistent electromagnetic field generated by the electron current may lead to the transition from regular to chaotic dynamics. We estimate the conditions for experimental observation of this deterministic chaos and discuss the similarities between the superlattice system and other condensed matter and quantum optical systems.
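For background, the standard semiclassical miniband picture that such analyses typically start from is the textbook tight-binding dispersion plus the acceleration theorem (sketched below; this is general background, not necessarily the exact model used in this work):

\[
\varepsilon(k) = \frac{\Delta}{2}\left(1 - \cos kd\right), \qquad
\hbar\,\dot{k} = -eE(t), \qquad
v(k) = \frac{1}{\hbar}\frac{\partial \varepsilon}{\partial k} = \frac{\Delta d}{2\hbar}\sin kd,
\]

where \(\Delta\) is the miniband width, \(d\) the superlattice period, and \(E(t)\) the total (applied plus self-consistent) electric field acting on the electrons.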

Relevance: 20.00%

Abstract:

Nonrigid motion can be described as morphing or blending between extremal shapes, e.g., heart motion can be described as transitioning between the systole and diastole states. Using physically-based modeling techniques, shape similarity can be measured in terms of forces and strain. This provides a physically-based coordinate system in which motion is characterized in terms of physical similarity to a set of extremal shapes. Having such a low-dimensional characterization of nonrigid motion allows for the recognition and the comparison of different types of nonrigid motion.
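As a rough illustration of the blending idea only (not the physically-based force/strain formulation described above; all names below are hypothetical), blend weights relative to a set of extremal shapes can be recovered by least squares:

import numpy as np

def blend_weights(shape, extremal_shapes):
    """Least-squares weights w such that shape ~ sum_i w[i] * extremal_shapes[i].
    shape: (n_points, dim) array; extremal_shapes: list of arrays of the same
    size, in point-to-point correspondence with `shape` (e.g. systole and
    diastole meshes for heart motion)."""
    A = np.stack([e.ravel() for e in extremal_shapes], axis=1)  # (n*dim, k)
    b = shape.ravel()
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

The recovered weight vector then serves as a low-dimensional coordinate for the observed frame, which is the kind of characterization the abstract describes.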

Relevance: 20.00%

Abstract:

Malignant or benign tumors may be ablated with high-intensity focused ultrasound (HIFU). This technique, known as focused ultrasound surgery (FUS), has been actively investigated for decades but has been slow to be implemented and difficult to control due to the lack of real-time feedback during ablation. Two methods of imaging and monitoring HIFU lesions during formation were implemented simultaneously, in order to investigate the efficacy of each and to increase confidence in the detection of the lesion. The first, Acousto-Optic Imaging (AOI), detects the increasing optical absorption and scattering in the lesion: the intensity of a diffuse optical field in illuminated tissue is mapped at the spatial resolution of an ultrasound focal spot using the acousto-optic effect. The second, Harmonic Motion Imaging (HMI), detects the changing stiffness in the lesion: the HIFU beam is modulated to force oscillatory motion in the tissue, and the amplitude of this motion, measured by ultrasound pulse-echo techniques, is influenced by the stiffness. Experiments were performed on store-bought chicken breast and freshly slaughtered bovine liver. The AOI results correlated with the onset and relative size of forming lesions much better than prior knowledge of the HIFU power and duration did. For HMI, a significant artifact due to acoustic nonlinearity was discovered; it was mitigated by adjusting the relative phase of the HIFU and imaging pulses. A more detailed finite element model of the HMI process than previously published was developed. The model showed that the amplitude of harmonic motion was affected primarily by increases in acoustic attenuation and stiffness as the lesion formed, and that these effects interacted in complex ways and often counteracted each other. Furthermore, biological variability in tissue properties meant that changes in motion were masked by sample-to-sample variation. The HMI experiments predicted lesion formation in only about a quarter of the lesions made. In simultaneous AOI/HMI experiments, AOI appeared to be the more robust method for lesion detection.
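To make the HMI readout concrete, here is a minimal, illustrative sketch (assumed variable names, not the instrumentation code used in this work) of estimating the oscillation amplitude at the HIFU modulation frequency from a pulse-echo displacement trace by lock-in style demodulation:

import numpy as np

def harmonic_amplitude(displacement, fs, f_mod):
    """Amplitude of the displacement component at the modulation frequency.
    displacement: 1-D displacement-vs-time trace from pulse-echo tracking
    fs: sampling rate in Hz; f_mod: HIFU amplitude-modulation frequency in Hz."""
    t = np.arange(len(displacement)) / fs
    # project the trace onto a complex exponential at f_mod (lock-in detection)
    phasor = np.mean(displacement * np.exp(-2j * np.pi * f_mod * t))
    return 2.0 * np.abs(phasor)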

Relevance: 20.00%

Abstract:

A new approach is proposed for clustering time-series data. The approach can be used to discover groupings of similar object motions that were observed in a video collection. A finite mixture of hidden Markov models (HMMs) is fitted to the motion data using the expectation-maximization (EM) framework. Previous approaches for HMM-based clustering employ a k-means formulation, where each sequence is assigned to only a single HMM. In contrast, the formulation presented in this paper allows each sequence to belong to more than a single HMM with some probability, and the hard decision about the sequence class membership can be deferred until a later time when such a decision is required. Experiments with simulated data demonstrate the benefit of using this EM-based approach when there is more "overlap" in the processes generating the data. Experiments with real data show the promising potential of HMM-based motion clustering in a number of applications.
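A minimal sketch of the soft E-step that this formulation implies (hypothetical names, not the authors' code): each sequence receives a membership probability under every HMM in the mixture instead of a hard k-means style assignment.

import numpy as np
from scipy.special import logsumexp

def soft_memberships(sequences, hmms, weights):
    """Membership probabilities of each sequence under each HMM.
    sequences: list of (T_i, n_features) arrays
    hmms: list of fitted HMMs exposing .score(seq) -> log-likelihood
          (e.g. hmmlearn's GaussianHMM)
    weights: mixture priors pi_k, summing to 1."""
    resp = np.zeros((len(sequences), len(hmms)))
    for i, seq in enumerate(sequences):
        # log pi_k + log p(seq | HMM_k) for each mixture component k
        log_post = np.array([np.log(w) + h.score(seq)
                             for h, w in zip(hmms, weights)])
        resp[i] = np.exp(log_post - logsumexp(log_post))  # normalize in log space
    return resp

In a full EM loop, the M-step would re-estimate each HMM and its weight from these responsibilities; a hard clustering, if eventually needed, reduces to taking the argmax per row.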

Relevance: 20.00%

Abstract:

This technical report presents a combined solution to two problems: first, tracking objects in 3D space and estimating their trajectories; and second, computing the similarity between previously estimated trajectories and clustering them using those similarities. For the first problem, trajectories are estimated using an EKF formulation that provides the 3D trajectory up to a constant factor. To improve accuracy when occlusions appear, multiple hypotheses are followed. For the second problem, we compute distances between trajectories using a similarity measure based on the longest common subsequence (LCSS) formulation. Similarities are computed between projections of the trajectories on the coordinate axes. Finally, we group trajectories based on the previously computed distances using a clustering algorithm. To check the validity of our approach, several experiments using real data were performed.
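A minimal sketch of an LCSS-based similarity between two 1-D trajectory projections (the thresholds eps and delta are the usual LCSS spatial and temporal matching parameters; names are illustrative, not taken from the report):

import numpy as np

def lcss_length(a, b, eps, delta):
    """Longest common subsequence length under a spatial threshold eps
    and a temporal matching window delta."""
    n, m = len(a), len(b)
    dp = np.zeros((n + 1, m + 1), dtype=int)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if abs(a[i - 1] - b[j - 1]) < eps and abs(i - j) <= delta:
                dp[i, j] = dp[i - 1, j - 1] + 1
            else:
                dp[i, j] = max(dp[i - 1, j], dp[i, j - 1])
    return dp[n, m]

def lcss_similarity(a, b, eps, delta):
    return lcss_length(a, b, eps, delta) / min(len(a), len(b))  # in [0, 1]

A distance for clustering can then be taken as 1 - lcss_similarity(a, b, ...), computed separately on each coordinate-axis projection of the trajectories.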

Relevance: 20.00%

Abstract:

A system is described that tracks moving objects in a video dataset so as to extract a representation of the objects' 3D trajectories. The system then finds hierarchical clusters of similar trajectories in the video dataset. Objects' motion trajectories are extracted via an EKF formulation that provides each object's 3D trajectory up to a constant factor. To increase accuracy when occlusions occur, multiple tracking hypotheses are followed. For trajectory-based clustering and retrieval, a modified version of edit distance, called the longest common subsequence (LCSS), is employed. Similarities are computed between projections of trajectories on the coordinate axes. Trajectories are then grouped using an agglomerative clustering algorithm. To check the validity of the approach, experiments using real data were performed.
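A minimal sketch of the agglomerative grouping step, starting from a precomputed trajectory distance matrix (for example, one minus the LCSS similarity); function and variable names are illustrative:

import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_trajectories(D, n_clusters):
    """D: symmetric (n, n) distance matrix with a zero diagonal."""
    condensed = squareform(D, checks=False)      # condensed form scipy expects
    Z = linkage(condensed, method="average")     # hierarchical merge tree
    return fcluster(Z, t=n_clusters, criterion="maxclust")

# Example with three trajectories, two of which are close to each other:
D = np.array([[0.0, 0.1, 0.9],
              [0.1, 0.0, 0.8],
              [0.9, 0.8, 0.0]])
print(cluster_trajectories(D, 2))   # two clusters: {0, 1} and {2}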

Relevance: 20.00%

Abstract:

A novel technique to detect and localize periodic movements in video is presented. The distinctive feature of the technique is that it requires neither feature tracking nor object segmentation. Intensity patterns along linear sample paths in space-time are used to estimate the period of object motion in a given sequence of frames. Sample paths are obtained by connecting (in space-time) sample points from regions of high motion magnitude in the first and last frames. Oscillations in intensity values are induced at time instants when an object intersects the sample path. The locations of peaks in intensity are determined by parameters of both the cyclic object motion and the orientation of the sample path with respect to the object motion. The information about peaks is used in a least-squares framework to obtain an initial estimate of these parameters. The estimate is further refined using the full intensity profile. The best estimate for the period of cyclic object motion is obtained by looking for consensus among estimates from many sample paths. The proposed technique is evaluated on synthetic videos, where ground truth is known, and on American Sign Language videos, where the goal is to detect periodic hand motions.
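As an illustration of the least-squares step and the consensus across sample paths (simplified; the full estimator also refines against the intensity profile), an initial period estimate can be obtained from the peak times detected along each path:

import numpy as np

def period_from_peaks(peak_times):
    """Fit t_i ~ t0 + i*T by least squares; the slope is the period estimate T."""
    idx = np.arange(len(peak_times))
    T, _t0 = np.polyfit(idx, peak_times, 1)
    return T

def consensus_period(per_path_peak_times):
    """Median of per-path estimates, as a simple consensus across sample paths."""
    estimates = [period_from_peaks(p) for p in per_path_peak_times if len(p) >= 2]
    return float(np.median(estimates))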

Relevance: 20.00%

Abstract:

Hand signals are commonly used in applications such as giving take-off instructions to an airplane pilot or directing a crane operator by a foreman on the ground. A new algorithm for recognizing hand signals from a single camera is proposed. Typically, tracked 2D feature positions of hand signals are matched to 2D training images. In contrast, our approach matches the 2D feature positions to an archive of 3D motion capture sequences. The method avoids explicit reconstruction of the 3D articulated motion from 2D image features. Instead, the matching between the 2D and 3D sequences is done by backprojecting the 3D motion capture data onto 2D. Experiments demonstrate the effectiveness of the approach in an example application: recognizing six classes of basketball referee hand signals in video.
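A minimal sketch of the backproject-and-compare idea (assumed pinhole camera model and hypothetical names; not the paper's implementation):

import numpy as np

def project(points_3d, K, R, t):
    """Pinhole projection x ~ K (R X + t); points_3d is (n, 3)."""
    cam = points_3d @ R.T + t           # 3D points in camera coordinates
    uv = cam @ K.T                      # homogeneous image coordinates
    return uv[:, :2] / uv[:, 2:3]

def reprojection_cost(tracked_2d, mocap_3d, K, R, t):
    """Mean distance between tracked 2D features and projected mocap joints;
    the best-matching motion-capture sequence minimizes this cost over frames."""
    proj = project(mocap_3d, K, R, t)
    return float(np.mean(np.linalg.norm(tracked_2d - proj, axis=1)))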

Relevance: 20.00%

Abstract:

Intelligent assistive technology can greatly improve the daily lives of people with severe paralysis, who have limited communication abilities. People with motion impairments often prefer camera-based communication interfaces, because these are customizable, comfortable, and do not require user-borne accessories that could draw attention to their disability. We present an overview of assistive software that we specifically designed for camera-based interfaces such as the Camera Mouse, which serves as a mouse-replacement input system. The applications include software for text-entry, web browsing, image editing, animation, and music therapy. Using this software, people with severe motion impairments can communicate with friends and family and have a medium to explore their creativity.

Relevance: 20.00%

Abstract:

A specialized formulation of Azarbayejani and Pentland's framework for recursive recovery of motion, structure and focal length from feature correspondences tracked through an image sequence is presented. The specialized formulation addresses the case where all tracked points lie on a plane. This planarity constraint reduces the dimension of the original state vector, and consequently the number of feature points needed to estimate the state. Experiments with synthetic data and real imagery illustrate the system performance. The experiments confirm that the specialized formulation provides improved accuracy, stability to observation noise, and rate of convergence in estimation for the case where the tracked points lie on a plane.
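For background (the standard planar two-view relation from multi-view geometry, not necessarily the exact parameterization of this formulation), coplanarity is what removes the per-point depth unknowns: image points of a world plane with unit normal \( \mathbf{n} \) at distance \( d \) from the first camera are related by a single homography,

\[
\mathbf{x}' \;\sim\; K\left(R + \frac{\mathbf{t}\,\mathbf{n}^{\top}}{d}\right)K^{-1}\,\mathbf{x},
\]

so every tracked point constrains the same low-dimensional state (rotation, translation, plane parameters, focal length) rather than adding its own depth parameter.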

Relevance: 20.00%

Abstract:

How do visual form and motion processes cooperate to compute object motion when each process separately is insufficient? A 3D FORMOTION model specifies how 3D boundary representations, which separate figures from backgrounds within cortical area V2, capture motion signals at the appropriate depths in MT; how motion signals in MT disambiguate boundaries in V2 via MT-to-V1-to-V2 feedback; how sparse feature tracking signals are amplified; and how a spatially anisotropic motion grouping process propagates across perceptual space via MT-MST feedback to integrate feature-tracking and ambiguous motion signals to determine a global object motion percept. Simulated data include: the degree of motion coherence of rotating shapes observed through apertures, the coherent vs. element motion percepts separated in depth during the chopsticks illusion, and the rigid vs. non-rigid appearance of rotating ellipses.

Relevance: 20.00%

Abstract:

Log-polar image architectures, motivated by the structure of the human visual field, have long been investigated in computer vision for use in estimating motion parameters from an optical flow vector field. Practical problems with this approach have been: (i) dependence on an assumed alignment of the visual and motion axes; (ii) sensitivity to occlusion from moving and stationary objects in the central visual field, where much of the numerical sensitivity is concentrated; and (iii) inaccuracy of the log-polar architecture (which is an approximation to the central 20°) for wide-field biological vision. In the present paper, we show that an algorithm based on a generalization of the log-polar architecture, termed the log-dipolar sensor, provides a large improvement in performance relative to the usual log-polar sampling. Specifically, our algorithm: (i) is tolerant of large misalignment of the optical and motion axes; (ii) is insensitive to significant occlusion by objects of unknown motion; and (iii) represents a more correct analogy to the wide-field structure of human vision. Using the Helmholtz-Hodge decomposition to estimate the optical flow vector field on a log-dipolar sensor, we demonstrate these advantages using synthetic optical flow maps as well as natural image sequences.
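For concreteness, a minimal sketch of plain log-polar sampling (nearest-neighbour, illustrative names; the log-dipolar sensor proposed in the paper generalizes this layout and is not reproduced here):

import numpy as np

def log_polar_sample(img, center, n_rings=64, n_wedges=128, r_min=1.0):
    """Resample a grayscale image onto an (n_rings, n_wedges) log-polar grid."""
    h, w = img.shape[:2]
    cx, cy = center
    r_max = np.hypot(max(cx, w - 1 - cx), max(cy, h - 1 - cy))
    rho = np.exp(np.linspace(np.log(r_min), np.log(r_max), n_rings))
    theta = np.linspace(0.0, 2.0 * np.pi, n_wedges, endpoint=False)
    R, T = np.meshgrid(rho, theta, indexing="ij")
    x = np.clip(np.round(cx + R * np.cos(T)).astype(int), 0, w - 1)
    y = np.clip(np.round(cy + R * np.sin(T)).astype(int), 0, h - 1)
    return img[y, x]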

Relevance: 20.00%

Abstract:

How does the brain make decisions? Speed and accuracy of perceptual decisions covary with certainty in the input, and correlate with the rate of evidence accumulation in parietal and frontal cortical "decision neurons." A biophysically realistic model of interactions within and between Retina/LGN and cortical areas V1, MT, MST, and LIP, gated by the basal ganglia, simulates dynamic properties of decision-making in response to the ambiguous visual motion stimuli used by Newsome, Shadlen, and colleagues in their neurophysiological experiments. The model clarifies how brain circuits that solve the aperture problem interact with a recurrent competitive network with self-normalizing choice properties to carry out probabilistic decisions in real time. Some scientists claim that perception and decision-making can be described using Bayesian inference or related general statistical ideas, which estimate the optimal interpretation of the stimulus given priors and likelihoods. However, such concepts do not propose the neocortical mechanisms that enable perception and decision-making. The present model explains behavioral and neurophysiological decision-making data without an appeal to Bayesian concepts and, unlike other existing models of these data, generates perceptual representations and choice dynamics in response to the experimental visual stimuli. Quantitative model simulations include the time course of LIP neuronal dynamics, as well as behavioral accuracy and reaction time properties, during both correct and error trials at different levels of input ambiguity in both fixed-duration and reaction-time tasks. Model MT/MST interactions compute the global direction of random dot motion stimuli, while model LIP computes the stochastic perceptual decision that leads to a saccadic eye movement.
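For reference, a generic recurrent competitive field with shunting (self-normalizing) dynamics of the kind the abstract alludes to can be written in the standard form below (a textbook form, not necessarily the model's exact equations):

\[
\frac{dx_i}{dt} \;=\; -A\,x_i \;+\; (B - x_i)\bigl[f(x_i) + I_i\bigr] \;-\; x_i \sum_{k \neq i}\bigl[f(x_k) + I_k\bigr],
\]

where the shunting terms keep total activity bounded (self-normalization) and the recurrent on-center/off-surround signals \( f(\cdot) \) implement the competition that yields a choice.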

Relevance: 20.00%

Abstract:

How do brain mechanisms carry out the motion integration and segmentation processes that compute unambiguous global motion percepts from ambiguous local motion signals? Consider, for example, a deer running at variable speeds behind forest cover. The forest cover is an occluder that creates apertures through which fragments of the deer's motion signals are intermittently experienced. The brain coherently groups these fragments into a trackable percept of the deer and its trajectory. Form and motion processes are needed to accomplish this, using feedforward and feedback interactions both within and across cortical processing streams. All of the cortical areas V1, V2, MT, and MST are involved in these interactions. Figure-ground processes in the form stream through V2, such as the separation of the occluding boundaries of the forest cover from the boundaries of the deer, select the motion signals which determine global object motion percepts in the motion stream through MT. Sparse, but unambiguous, feature tracking signals are amplified before they propagate across position and are integrated with far more numerous ambiguous motion signals. Figure-ground and integration processes together determine the global percept. A neural model predicts the processing stages that embody these form and motion interactions. Model concepts and data are summarized about motion grouping across apertures in response to a wide variety of displays, and about probabilistic decision-making in parietal cortex in response to random dot displays.

Relevance: 20.00%

Abstract:

Studies of perceptual learning have focused on aspects of learning that are related to early stages of sensory processing. However, conclusions that perceptual learning results in low-level sensory plasticity remain highly controversial, largely because such learning can often be attributed to plasticity in later stages of sensory processing or in the decision processes. To address this controversy, we developed a novel random dot motion (RDM) stimulus that targets motion cells selective to contrast polarity, by ensuring that the motion direction information arises only from signal dot onsets and not from their offsets, and used these stimuli in conjunction with the paradigm of task-irrelevant perceptual learning (TIPL). In TIPL, learning of a stimulus is achieved by subliminally pairing that stimulus with the targets of an unrelated training task. In this manner, we are able to probe learning for an aspect of motion processing thought to be a function of directional V1 simple cells with a learning procedure that dissociates the learned stimulus from the decision processes relevant to the training task. Our results show learning for the exposed contrast polarity, and this learning does not transfer to the unexposed contrast polarity. These results suggest that TIPL for motion stimuli may occur at the stage of directional V1 simple cells.