931 results for TIME-MOTION
Abstract:
Malignant or benign tumors may be ablated with high-intensity focused ultrasound (HIFU). This technique, known as focused ultrasound surgery (FUS), has been actively investigated for decades but has been slow to be implemented and difficult to control due to the lack of real-time feedback during ablation. Two methods of imaging and monitoring HIFU lesions during formation were implemented simultaneously, in order to investigate the efficacy of each and to increase confidence in the detection of the lesion. The first, Acousto-Optic Imaging (AOI), detects the increasing optical absorption and scattering in the lesion. The intensity of a diffuse optical field in illuminated tissue is mapped at the spatial resolution of an ultrasound focal spot, using the acousto-optic effect. The second, Harmonic Motion Imaging (HMI), detects the changing stiffness in the lesion. The HIFU beam is modulated to force oscillatory motion in the tissue, and the amplitude of this motion, measured by ultrasound pulse-echo techniques, is influenced by the stiffness. Experiments were performed on store-bought chicken breast and freshly slaughtered bovine liver. The AOI results correlated with the onset and relative size of forming lesions much better than prior knowledge of the HIFU power and duration did. For HMI, a significant artifact due to acoustic nonlinearity was discovered. The artifact was mitigated by adjusting the phase of the HIFU and imaging pulses. A more detailed model of the HMI process than previously published was constructed using finite element analysis. The model showed that the amplitude of harmonic motion was affected primarily by the increases in acoustic attenuation and stiffness as the lesion formed, and that these two effects interacted in complex ways and often counteracted each other. Furthermore, biological variability in tissue properties meant that changes in motion were masked by sample-to-sample variation. The HMI experiments predicted lesion formation in only about a quarter of the lesions made. In simultaneous AOI/HMI experiments, AOI appeared to be the more robust method for lesion detection.
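As a hedged illustration of the pulse-echo motion-estimation step described above (not code from the thesis, and with invented parameter values), the sketch below tracks displacement between synthetic RF echo lines by cross-correlation with parabolic sub-sample refinement, then reads off the oscillation amplitude at an assumed HIFU modulation frequency.

```python
# Hypothetical sketch of an HMI motion-estimation step (not the thesis code):
# displacement between pulse-echo RF lines is tracked by cross-correlation,
# and the harmonic-motion amplitude is read off at the modulation frequency.
# All parameters below are assumed values for illustration.
import numpy as np

fs = 40e6          # RF sampling rate [Hz]
c = 1540.0         # speed of sound [m/s]
prf = 2000.0       # pulse repetition frequency [Hz]
f_mod = 50.0       # HIFU amplitude-modulation frequency [Hz]
n_lines, n_samples = 200, 1024

rng = np.random.default_rng(0)
scatterers = rng.standard_normal(n_samples)

# Synthesize RF lines whose echoes oscillate with a 10-micrometre amplitude.
t_slow = np.arange(n_lines) / prf
true_disp = 10e-6 * np.sin(2 * np.pi * f_mod * t_slow)     # [m]
shift = true_disp * 2 * fs / c                              # round-trip shift [samples]
lines = np.array([np.interp(np.arange(n_samples) - s,
                            np.arange(n_samples), scatterers) for s in shift])

def lag_of(ref, line):
    """Peak of the cross-correlation, refined by parabolic interpolation."""
    xc = np.correlate(line, ref, mode="full")
    k = int(np.argmax(xc))
    if 0 < k < xc.size - 1:
        y0, y1, y2 = xc[k - 1], xc[k], xc[k + 1]
        k = k + 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    return k - (ref.size - 1)

est_disp = np.array([lag_of(lines[0], ln) for ln in lines]) * c / (2 * fs)

# Amplitude of the tracked motion at the modulation frequency.
spec = np.fft.rfft(est_disp - est_disp.mean())
freqs = np.fft.rfftfreq(n_lines, d=1.0 / prf)
amp = 2 * np.abs(spec[np.argmin(np.abs(freqs - f_mod))]) / n_lines
print(f"estimated harmonic-motion amplitude ~ {amp * 1e6:.1f} um")
```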
Abstract:
The goal of this work is to learn a parsimonious and informative representation for high-dimensional time series. Conceptually, this comprises two distinct yet tightly coupled tasks: learning a low-dimensional manifold and modeling the dynamical process. These two tasks have a complementary relationship as the temporal constraints provide valuable neighborhood information for dimensionality reduction and conversely, the low-dimensional space allows dynamics to be learnt efficiently. Solving these two tasks simultaneously allows important information to be exchanged mutually. If nonlinear models are required to capture the rich complexity of time series, then the learning problem becomes harder as the nonlinearities in both tasks are coupled. The proposed solution approximates the nonlinear manifold and dynamics using piecewise linear models. The interactions among the linear models are captured in a graphical model. By exploiting the model structure, efficient inference and learning algorithms are obtained without oversimplifying the model of the underlying dynamical process. Evaluation of the proposed framework with competing approaches is conducted in three sets of experiments: dimensionality reduction and reconstruction using synthetic time series, video synthesis using a dynamic texture database, and human motion synthesis, classification and tracking on a benchmark data set. In all experiments, the proposed approach provides superior performance.
Abstract:
The goal of this work is to learn a parsimonious and informative representation for high-dimensional time series. Conceptually, this comprises two distinct yet tightly coupled tasks: learning a low-dimensional manifold and modeling the dynamical process. These two tasks have a complementary relationship as the temporal constraints provide valuable neighborhood information for dimensionality reduction and conversely, the low-dimensional space allows dynamics to be learnt efficiently. Solving these two tasks simultaneously allows important information to be exchanged mutually. If nonlinear models are required to capture the rich complexity of time series, then the learning problem becomes harder as the nonlinearities in both tasks are coupled. The proposed solution approximates the nonlinear manifold and dynamics using piecewise linear models. The interactions among the linear models are captured in a graphical model. The model structure setup and parameter learning are done using a variational Bayesian approach, which enables automatic Bayesian model structure selection, hence solving the problem of over-fitting. By exploiting the model structure, efficient inference and learning algorithms are obtained without oversimplifying the model of the underlying dynamical process. Evaluation of the proposed framework with competing approaches is conducted in three sets of experiments: dimensionality reduction and reconstruction using synthetic time series, video synthesis using a dynamic texture database, and human motion synthesis, classification and tracking on a benchmark data set. In all experiments, the proposed approach provides superior performance.
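One concrete (assumed) instance of the piecewise-linear idea described in the two abstracts above is a switching linear dynamical system, in which a discrete regime variable selects which local linear model drives a low-dimensional latent state. The sketch below is not the proposed model itself; it simply generates such a sequence and recovers the regimes with a crude one-step prediction-error rule.

```python
# Minimal sketch of a piecewise-linear (switching) dynamical model, given as
# an assumed concrete form rather than the model proposed in the work above.
import numpy as np

rng = np.random.default_rng(1)

def rot(theta):
    """2-D rotation: a simple local linear dynamics matrix."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

A = [rot(0.05), rot(0.30)]               # two local linear models (slow/fast rotation)
C = rng.standard_normal((10, 2))         # linear map from the 2-D manifold to 10-D data
Q, R = 1e-3, 1e-2                        # process / observation noise variances
P = np.array([[0.98, 0.02],              # regime transition probabilities
              [0.02, 0.98]])

# Generate latent states x, discrete regimes s, and high-dimensional observations y.
T = 300
x, s = np.array([1.0, 0.0]), 0
X, S, Y = [], [], []
for _ in range(T):
    s = rng.choice(2, p=P[s])
    x = A[s] @ x + np.sqrt(Q) * rng.standard_normal(2)
    X.append(x); S.append(s); Y.append(C @ x + np.sqrt(R) * rng.standard_normal(10))
X, S = np.array(X), np.array(S)

# Crude regime recovery: pick the local model with the smaller one-step
# prediction error in the latent space (a stand-in for proper inference).
pred_err = np.stack([np.linalg.norm(X[1:] - X[:-1] @ Ak.T, axis=1) for Ak in A], axis=1)
s_hat = np.argmin(pred_err, axis=1)
print("fraction of regimes recovered:", round(float(np.mean(s_hat == S[1:])), 3))
```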
Abstract:
How does the brain make decisions? Speed and accuracy of perceptual decisions covary with certainty in the input, and correlate with the rate of evidence accumulation in parietal and frontal cortical "decision neurons." A biophysically realistic model of interactions within and between Retina/LGN and cortical areas V1, MT, MST, and LIP, gated by basal ganglia, simulates dynamic properties of decision-making in response to ambiguous visual motion stimuli used by Newsome, Shadlen, and colleagues in their neurophysiological experiments. The model clarifies how brain circuits that solve the aperture problem interact with a recurrent competitive network with self-normalizing choice properties to carry out probabilistic decisions in real time. Some scientists claim that perception and decision-making can be described using Bayesian inference or related general statistical ideas, which estimate the optimal interpretation of the stimulus given priors and likelihoods. However, such concepts do not propose the neocortical mechanisms that enable perception and decision-making. The present model explains behavioral and neurophysiological decision-making data without an appeal to Bayesian concepts and, unlike other existing models of these data, generates perceptual representations and choice dynamics in response to the experimental visual stimuli. Quantitative model simulations include the time course of LIP neuronal dynamics, as well as behavioral accuracy and reaction time properties, during both correct and error trials at different levels of input ambiguity in both fixed duration and reaction time tasks. Model MT/MST interactions compute the global direction of random dot motion stimuli, while model LIP computes the stochastic perceptual decision that leads to a saccadic eye movement.
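A minimal, hedged sketch of the kind of recurrent competitive, self-normalizing dynamics mentioned above (not the published model, and with invented parameters): two shunting populations integrate noisy motion evidence for a fixed viewing duration, and the more active population at stimulus offset determines the choice.

```python
# Hedged sketch (not the published model; parameters invented): a two-population
# recurrent shunting competitive field integrating noisy motion evidence in a
# fixed-duration task; the more active population at stimulus offset is the choice.
import numpy as np

def run_trial(coherence, rng, dt=1e-3, tau=0.1, B=1.0, noise=0.02, t_dur=1.0):
    x = np.zeros(2)
    drive = np.array([0.5 * (1 + coherence), 0.5 * (1 - coherence)])
    f = lambda v: np.maximum(v, 0.0) ** 2          # faster-than-linear recurrent signal
    for _ in range(int(t_dur / dt)):
        I = drive + noise * rng.standard_normal(2) / np.sqrt(dt)
        exc = f(x) + I                             # self-excitation plus bottom-up input
        inh = exc[::-1]                            # competition from the rival population
        # Shunting dynamics keep activities bounded in [0, B] (self-normalization).
        x += dt * (-x + (B - x) * exc - x * inh) / tau
        x = np.clip(x, 0.0, B)
    return int(np.argmax(x))                       # 0 = choice in the signal direction

rng = np.random.default_rng(2)
for coh in (0.0, 0.032, 0.128, 0.512):
    choices = np.array([run_trial(coh, rng) for _ in range(300)])
    print(f"coherence {coh:5.3f}: proportion 'correct' {np.mean(choices == 0):.2f}")
```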
Abstract:
This article describes neural network models for adaptive control of arm movement trajectories during visually guided reaching and, more generally, a framework for unsupervised real-time error-based learning. The models clarify how a child, or untrained robot, can learn to reach for objects that it sees. Piaget has provided basic insights with his concept of a circular reaction: As an infant makes internally generated movements of its hand, the eyes automatically follow this motion. A transformation is learned between the visual representation of hand position and the motor representation of hand position. Learning of this transformation eventually enables the child to accurately reach for visually detected targets. Grossberg and Kuperstein have shown how the eye movement system can use visual error signals to correct movement parameters via cerebellar learning. Here it is shown how endogenously generated arm movements lead to adaptive tuning of arm control parameters. These movements also activate the target position representations that are used to learn the visuo-motor transformation that controls visually guided reaching. The AVITE model presented here is an adaptive neural circuit based on the Vector Integration to Endpoint (VITE) model for arm and speech trajectory generation of Bullock and Grossberg. In the VITE model, a Target Position Command (TPC) represents the location of the desired target. The Present Position Command (PPC) encodes the present hand-arm configuration. The Difference Vector (DV) population continuously computes the difference between the PPC and the TPC. A speed-controlling GO signal multiplies DV output. The PPC integrates the (DV)·(GO) product and generates an outflow command to the arm. Integration at the PPC continues at a rate dependent on GO signal size until the DV reaches zero, at which time the PPC equals the TPC. The AVITE model explains how self-consistent TPC and PPC coordinates are autonomously generated and learned. Learning of AVITE parameters is regulated by activation of a self-regulating Endogenous Random Generator (ERG) of training vectors. Each vector is integrated at the PPC, giving rise to a movement command. The generation of each vector induces a complementary postural phase during which ERG output stops and learning occurs. Then a new vector is generated and the cycle is repeated. This cyclic, biphasic behavior is controlled by a specialized gated dipole circuit. ERG output autonomously stops in such a way that, across trials, a broad sample of workspace target positions is generated. When the ERG shuts off, a modulator gate opens, copying the PPC into the TPC. Learning of a transformation from TPC to PPC occurs using the DV as an error signal that is zeroed due to learning. This learning scheme is called a Vector Associative Map, or VAM. The VAM model is a general-purpose device for autonomous real-time error-based learning and performance of associative maps. The DV stage serves the dual function of reading out new TPCs during performance and reading in new adaptive weights during learning, without a disruption of real-time operation. VAMs thus provide an on-line unsupervised alternative to the off-line properties of supervised error-correction learning algorithms. VAMs and VAM cascades for learning motor-to-motor and spatial-to-motor maps are described.
VAM models and Adaptive Resonance Theory (ART) models exhibit complementary matching, learning, and performance properties that together provide a foundation for designing a total sensory-cognitive and cognitive-motor autonomous system.
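The VITE kinematics spelled out in the abstract (DV = TPC - PPC; the PPC integrates the (DV)·(GO) product until the DV reaches zero) can be written down directly. The sketch below uses an illustrative, assumed form for the GO signal and arbitrary target coordinates.

```python
# Direct sketch of the VITE kinematics described above: the Difference
# Vector (DV) compares the Target Position Command (TPC) with the Present
# Position Command (PPC), and the PPC integrates the (DV)*(GO) product
# until DV reaches zero.  Parameter values here are illustrative only.
import numpy as np

def vite_trajectory(tpc, ppc0, go_amplitude=1.0, dt=1e-3, t_max=2.0):
    ppc = np.array(ppc0, dtype=float)
    tpc = np.array(tpc, dtype=float)
    traj = [ppc.copy()]
    for k in range(int(t_max / dt)):
        t = k * dt
        go = go_amplitude * t / (t + 0.2)      # slowly rising GO signal (assumed form)
        dv = tpc - ppc                         # Difference Vector
        ppc = ppc + dt * go * dv               # PPC integrates (DV)*(GO)
        traj.append(ppc.copy())
    return np.array(traj)

# Example: a planar "hand" moving from the origin toward a target.
traj = vite_trajectory(tpc=[0.3, 0.1], ppc0=[0.0, 0.0], go_amplitude=8.0)
print("final PPC:", traj[-1].round(3), "(approaches the TPC as DV -> 0)")
```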
Abstract:
How do human observers perceive a coherent pattern of motion from a disparate set of local motion measures? Our research has examined how ambiguous motion signals along straight contours are spatially integrated to obtain a globally coherent perception of motion. Observers viewed displays containing a large number of apertures, with each aperture containing one or more contours whose orientations and velocities could be independently specified. The total pattern of the contour trajectories across the individual apertures was manipulated to produce globally coherent motions, such as rotations, expansions, or translations. For displays containing only straight contours extending to the circumferences of the apertures, observers' reports of global motion direction were biased whenever the sampling of contour orientations was asymmetric relative to the direction of motion. Performance was improved by the presence of identifiable features, such as line ends or crossings, whose trajectories could be tracked over time. The reports of our observers were consistent with a pooling process involving a vector average of measures of the component of velocity normal to contour orientation, rather than with the predictions of the intersection-of-constraints analysis in velocity space.
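The two pooling rules compared above can be made concrete with a small worked example: for contours translating rigidly with a common velocity, each aperture supplies only the component of velocity normal to its contour. The vector average of those components is biased whenever the orientation sample is asymmetric, whereas the intersection-of-constraints (least-squares) solution recovers the true motion. The numbers below are illustrative.

```python
# Worked comparison of the two pooling rules discussed above: the vector
# average of normal velocity components versus the intersection-of-constraints
# (IOC) estimate, for an asymmetric sample of contour orientations moving
# rigidly with a common 2-D velocity.
import numpy as np

v_true = np.array([1.0, 0.0])                     # true global velocity
# Asymmetric sampling of contour normals (degrees), biased to one side.
normal_angles = np.deg2rad([-10.0, 0.0, 15.0, 30.0, 45.0])
normals = np.stack([np.cos(normal_angles), np.sin(normal_angles)], axis=1)

# Each contour only specifies the component of velocity along its normal.
speeds = normals @ v_true                         # normal speeds (the "aperture" data)
normal_components = speeds[:, None] * normals     # velocity components normal to contours

# Rule 1: vector average of the normal components.
v_va = normal_components.mean(axis=0)

# Rule 2: intersection of constraints -- least-squares solution of
# normals @ v = speeds (exact when at least two orientations differ).
v_ioc, *_ = np.linalg.lstsq(normals, speeds, rcond=None)

def direction_deg(v):
    return np.degrees(np.arctan2(v[1], v[0]))

print("true direction:           0.0 deg")
print(f"vector-average direction: {direction_deg(v_va):6.1f} deg (biased by the sampling)")
print(f"IOC direction:            {direction_deg(v_ioc):6.1f} deg (recovers the true motion)")
```

With this asymmetric sample the vector-average estimate is pulled toward the mean of the sampled normal directions, which mirrors the bias reported in the abstract for displays containing only straight contours.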
Abstract:
New compensation methods are presented that can greatly reduce the slit errors (i.e. transition location errors) and interval errors induced by non-idealities in optical incremental (square-wave) encoders. An M/T-type, constant sample-time digital tachometer (CSDT) is selected for measuring the velocity of the sensor drives. Using this data, three encoder compensation techniques (two pseudoinverse-based methods and an iterative method) are presented that improve velocity measurement accuracy. The methods do not require precise knowledge of shaft velocity. During the initial learning stage of the compensation algorithm (possibly performed in-situ), slit errors/interval errors are calculated through pseudoinverse-based solutions of simple approximate linear equations, which can provide fast solutions, or through an iterative method that requires very little memory storage. Subsequent operation of the motion system utilizes the adjusted slit positions for more accurate velocity calculation. In the theoretical analysis of the compensation of encoder errors, error sources such as random electrical noise and error in the estimated reference velocity are considered. Initially, the proposed learning compensation techniques are validated by implementing the algorithms in MATLAB, showing a 95% to 99% improvement in velocity measurement. However, it is also observed that the effectiveness of the algorithm decreases as the level of non-repetitive random noise increases and/or as errors in the reference velocity calculation grow. The performance improvement in velocity measurement is also demonstrated experimentally using motor-drive systems, each of which includes a field-programmable gate array (FPGA) for CSDT counting/timing purposes and a digital signal processor (DSP). Results from open-loop velocity measurement and closed-loop servo-control applications, on three optical incremental square-wave encoders and two motor drives, are compiled. While implementing these algorithms experimentally on different drives (with and without a flywheel) and on encoders of different resolutions, slit error reductions of 60% to 86% (typically approximately 80%) are obtained.
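A generic, hedged sketch of the pseudoinverse-based learning stage (not the paper's exact equations or notation, and with invented numbers): at near-constant speed, each edge interval deviates from the nominal pitch by the difference of adjacent slit errors, giving an approximate linear system that a pseudoinverse solves for per-slit corrections even when the reference velocity is only roughly known.

```python
# Hedged sketch of a pseudoinverse-based slit-error learning stage (a generic
# formulation with invented numbers, not the paper's exact algorithm): at
# near-constant speed the interval deviations obey e[i+1]-e[i] ~ w_ref*t_i - pitch,
# a linear system solved with the Moore-Penrose pseudoinverse.
import numpy as np

rng = np.random.default_rng(3)
N = 64                                   # slits per revolution
pitch = 2 * np.pi / N                    # nominal slit spacing [rad]
e_true = 0.05 * pitch * rng.standard_normal(N)
e_true -= e_true.mean()                  # slit errors are defined up to an offset

w_true, w_ref = 120.0, 119.5             # actual vs (imperfect) reference speed [rad/s]
revs = 20

# Simulate measured edge-interval times over several revolutions, with jitter.
slit_pos = np.arange(N) * pitch + e_true
edges = np.concatenate([slit_pos + 2 * np.pi * r for r in range(revs + 1)])
t_meas = np.diff(edges)[: revs * N] / w_true + 2e-7 * rng.standard_normal(revs * N)
t_avg = t_meas.reshape(revs, N).mean(axis=0)      # average interval per slit pair

# Circular difference operator: (D e)[i] = e[(i+1) % N] - e[i].
D = -np.eye(N) + np.roll(np.eye(N), 1, axis=1)
e_hat = np.linalg.pinv(D) @ (w_ref * t_avg - pitch)

rms = 100 * np.std(e_hat - e_true) / pitch
print(f"residual slit error after learning: {rms:.2f}% of pitch")
```

In this sketch an error in the reference speed mostly rescales the recovered error pattern rather than corrupting it, which is in line with the abstract's statement that precise knowledge of shaft velocity is not required.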
Abstract:
Context. This paper is the last in a series devoted to the analysis of the binary content of the Hipparcos Catalogue. Aims. The comparison of the proper motions constructed from positions spanning a short (Hipparcos) or long time (Tycho-2) makes it possible to uncover binaries with periods of the order of, or somewhat larger than, the short time span (in this case, the 3 yr duration of the Hipparcos mission), since the unrecognised orbital motion will then add to the proper motion. Methods. A list of candidate proper-motion binaries is constructed from a carefully designed χ2 test evaluating the statistical significance of the difference between the Tycho-2 and Hipparcos proper motions for 103 134 stars in common between the two catalogues (excluding components of visual systems). Since similar lists of proper-motion binaries have already been constructed, the present paper focuses on the evaluation of the detection efficiency of proper-motion binaries, using different kinds of control data (mostly radial velocities). The detection rate is evaluated for entries from the Ninth Catalogue of Spectroscopic Binary Orbits (SB9), for stars such as barium stars, which are all known to be binaries, and finally for spectroscopic binaries identified from radial velocity data in the Geneva-Copenhagen survey of F and G dwarfs in the solar neighbourhood. Results. Proper-motion binaries are efficiently detected for systems with parallaxes in excess of ∼20 mas and periods in the range 1000-30 000 d. The shortest periods in this range (1000-2000 d, i.e. once to twice the duration of the Hipparcos mission) may appear only as DMSA/G binaries (accelerated proper motion in the Hipparcos Double and Multiple System Annex). Proper-motion binaries detected among SB9 systems having periods shorter than about 400 d hint at triple systems, the proper-motion binary involving a component with a longer orbital period. A list of 19 candidate triple systems is provided. Binaries suspected of having low-mass (brown-dwarf-like) companions are listed as well. Among the 37 barium stars with parallaxes larger than 5 mas, only 7 exhibit no evidence whatsoever for duplicity (be it spectroscopic or astrometric). Finally, the fraction of proper-motion binaries shows no significant variation among the various (regular) spectral classes when due account is taken of the detection biases. © ESO 2007.
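An illustrative form of such a test (not the paper's exact statistic): the proper-motion difference vector between the two catalogues, weighted by the combined covariance, follows a χ2 distribution with two degrees of freedom for single stars, so a star is flagged when the statistic exceeds the chosen significance threshold. The numbers below are invented.

```python
# Illustrative sketch (not the paper's exact statistic): flag a star as a
# candidate proper-motion binary when the Tycho-2 vs Hipparcos proper-motion
# difference is significant under a chi-squared test with 2 degrees of freedom.
import numpy as np
from scipy import stats

def pm_binary_flag(pm_hip, cov_hip, pm_tyc2, cov_tyc2, alpha=0.01):
    """pm_* are (mu_alpha*, mu_delta) in mas/yr; cov_* their 2x2 covariances."""
    d = np.asarray(pm_tyc2, float) - np.asarray(pm_hip, float)
    cov = np.asarray(cov_hip, float) + np.asarray(cov_tyc2, float)
    chi2 = float(d @ np.linalg.solve(cov, d))
    threshold = stats.chi2.ppf(1.0 - alpha, df=2)
    return chi2, chi2 > threshold

# Example with made-up numbers: a ~3 mas/yr discrepancy with sub-mas/yr errors.
chi2_val, is_candidate = pm_binary_flag(
    pm_hip=(-12.4, 5.1), cov_hip=np.diag([0.6**2, 0.7**2]),
    pm_tyc2=(-15.3, 6.0), cov_tyc2=np.diag([0.7**2, 0.8**2]))
print(f"chi2 = {chi2_val:.1f}, candidate proper-motion binary: {is_candidate}")
```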
Abstract:
Solder is often used as an adhesive to attach optical fibers to a circuit board. In this paper we discuss efforts to model the motion of an optical fiber during the wetting and solidification of the adhesive solder droplet. The extent of motion is determined by several competing forces acting during three “stages” of solder joint formation. First, capillary forces of the liquid phase control the fiber position. Second, during solidification, the presence of the liquid-solid-vapor triple line as well as a reduced liquid solder volume leads to a change in the net capillary force on the optical fiber. Finally, the solidification front itself impinges on the fiber. Publicly available finite element models are used to calculate the time-dependent position of the solidification front and the shape of the free surface.
Abstract:
Vacuum arc remelting (VAR) aims at the production of high-quality, segregation-free alloys. The quality of the produced ingots depends on the operating conditions, which can be monitored and analyzed using numerical modelling. The uniformity of the remelting process is controlled by critical medium-scale time variations of the order of 1-100 s, which are physically initiated by droplet detachment and the large-scale arc motion at the top of the liquid pool [1,2]. The newly developed numerical modelling tools address the three-dimensional magnetohydrodynamic and thermal behaviour in the liquid zone and the adjacent ingot, electrode and crucible.
Abstract:
First-order time remaining until a moving observer will pass an environmental element is optically specified in two different ways. The specification provided by global tau (based on the pattern of change of angular bearing) requires that the element is stationary and that the direction of motion is accurately detected, whereas the specification provided by composite tau (based on the patterns of change of optical size and optical distance) does not require either of these. We obtained converging evidence for our hypothesis that observers are sensitive to composite tau in four experiments involving relative judgments of time to passage with forced-choice methodology. Discrimination performance was enhanced in the presence of a local expansion component, while being unaffected when the detection of the direction of heading was impaired. Observers relied on the information carried in composite tau rather than on the information carried in its constituent components. Finally, performance was similar under conditions of observer motion and conditions of object motion. Because composite tau specifies first-order time remaining for a large number of situations, the different ways in which it may be detected are discussed.
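For reference, the generic first-order tau of any optical variable is its current value divided by its rate of change. The sketch below applies this to optical size for a constant-speed approach; the specific composite-tau combination studied in the article is not reproduced, and all numbers are illustrative.

```python
# Generic worked example of a first-order tau variable (the ratio of an
# optical quantity to its rate of change), applied to optical size for an
# observer approaching an object at constant speed.  Illustrative values only.
import numpy as np

def optical_size(distance, object_width=1.0):
    """Visual angle subtended by an object of the given physical width."""
    return 2.0 * np.arctan(object_width / (2.0 * distance))

speed, d0, dt = 10.0, 50.0, 0.01           # m/s, m, s
t = np.arange(0.0, 3.0, dt)
distance = d0 - speed * t                   # distance shrinks linearly

theta = optical_size(distance)
theta_dot = np.gradient(theta, dt)
tau = theta / theta_dot                     # first-order time-remaining estimate

true_remaining = distance / speed
for k in (0, 100, 200):
    print(f"t={t[k]:.1f}s  tau={tau[k]:5.2f}s  true time remaining={true_remaining[k]:5.2f}s")
```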
Abstract:
Few-cycle laser pulses are used for "pump and probe" imaging of the vibrational wavepacket dynamics of an HD+ molecular ion. The quantum dephasing and revival structure of the wavepacket are mapped experimentally with time-resolved photodissociation imaging. The motion of the molecule is simulated using a quantum-mechanical model that predicts the observed structure. The coherence of the wavepacket is controlled by varying the duration of the intense laser pulses. By means of a Fourier transform analysis, both the periodicity and the relative populations of the vibrational states of the excited molecular ion have been characterized.
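A toy illustration of the Fourier-analysis step (invented level spacings and weights, not the measured HD+ values): a pump-probe signal built from beats between neighbouring vibrational levels, whose FFT peak positions give the level spacings and whose peak heights reflect the relative weights of the contributing states.

```python
# Toy sketch of the Fourier analysis described above, with assumed (not
# measured) level spacings and weights for a slightly anharmonic ladder.
import numpy as np

spacings_thz = np.array([57.0, 55.0, 53.0, 51.0])   # assumed beat frequencies [THz]
weights = np.array([0.40, 0.30, 0.20, 0.10])         # assumed relative weights

dt_fs = 2.0                                    # probe-delay step [fs]
t = np.arange(0, 4000, dt_fs) * 1e-15          # delays up to 4 ps [s]
signal = sum(w * np.cos(2 * np.pi * f * 1e12 * t)
             for w, f in zip(weights, spacings_thz))

spec = np.abs(np.fft.rfft(signal - signal.mean()))
freqs_thz = np.fft.rfftfreq(t.size, d=dt_fs * 1e-15) / 1e12

# Report the strongest spectral components (the recovered beat frequencies).
top = np.argsort(spec)[-4:][::-1]
for k in top:
    print(f"beat at {freqs_thz[k]:5.1f} THz, relative amplitude {spec[k] / spec[top[0]]:.2f}")
```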
Abstract:
It is by mapping an area that the geographer comes to understand the contours and formations of a place. The “place” in this case is the prison world. This article serves to map moments in prison, demonstrating how “old” female bodies are performed under the prison gaze. In this article I will illustrate how older women subvert, negotiate, or invoke discourse as a means of reinscribing the normalizing discourses that serve to confine and define older women's experiences in prison. Female elders in prison become defined and confined by regimes of femininity and ageism. They have to endure symbolic and actual intrusions of physical privacy, which serve to remind them of what they were, where they are, and what they have become. This article will critically explore the complexity and contradictions of time use in prison and how they impact on embodied identities. By incorporating the voices of elders, I hope to draw out the contradictions and dilemmas which they experience, thereby illustrating the relationship between time, their involvement in doing time, and the performance of time in a total institution (see Goffman, 1961), and the relationship between temporality and existence. The stories of the women show how their identities are caught within the movement and motion of time and space, both in terms of the time of “the real” on the outside and within prison time. This is the in-between space of carceral time within which women live and which they negotiate. It is by being caught in this network of carceral time that they are constantly being “remade” as their body/performance of identities alters within it. While only a small percentage of the female prison population in the United Kingdom is in later life, one has to question why the criminological and gerontological literatures fail to address the needs of a significant and growing minority.
Abstract:
A novel, fast automatic motion segmentation approach is presented. It differs from conventional pixel- or edge-based motion segmentation approaches in that the proposed method uses labelled regions (facets) to segment various video objects from the background. Facets are clustered into objects based on their motion and proximity details using Bayesian logic. Because the number of facets is usually much lower than the number of edges and points, using facets can greatly reduce the computational complexity of motion segmentation. The proposed method can efficiently tackle the complexity of video object motion tracking, and offers potential for real-time content-based video annotation.
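A hedged sketch of the facet-clustering idea follows: a simple greedy grouping by motion similarity and spatial proximity, standing in for the paper's Bayesian formulation; all numbers are invented.

```python
# Hedged sketch of the core clustering idea (a simple greedy grouping by
# motion similarity and proximity; the paper's Bayesian formulation is not
# reproduced).  Each facet is summarised by a centroid and a motion vector.
import numpy as np

facets = [
    # centroid (x, y), motion (dx, dy)
    {"centroid": (20, 30),  "motion": (5.0, 0.1)},
    {"centroid": (28, 34),  "motion": (4.8, -0.2)},   # same moving object
    {"centroid": (120, 90), "motion": (0.1, 0.0)},    # background
    {"centroid": (130, 95), "motion": (-0.1, 0.1)},   # background
    {"centroid": (60, 70),  "motion": (5.1, 0.3)},    # farther away, same motion
]

def dissimilarity(a, b, w_motion=1.0, w_space=0.02):
    dm = np.linalg.norm(np.subtract(a["motion"], b["motion"]))
    ds = np.linalg.norm(np.subtract(a["centroid"], b["centroid"]))
    return w_motion * dm + w_space * ds

# Greedy agglomeration: merge facets whose dissimilarity falls below a threshold.
labels = list(range(len(facets)))
THRESH = 2.5
for i in range(len(facets)):
    for j in range(i + 1, len(facets)):
        if dissimilarity(facets[i], facets[j]) < THRESH:
            old, new = labels[j], labels[i]
            labels = [new if lb == old else lb for lb in labels]

print("facet labels:", labels)   # facets sharing a label form one object
```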
Abstract:
A new kind of photographic representation, called the movement-image, is proposed and discussed as a way to record the visual experience of the journey through urban highways. It consists of taking long-exposure photographs while the route is traversed, thus registering a time-panorama that includes landscape signs and the interior spaces of the roads involved. This proposal responds to the limitations of representing these expressways when they are understood as structures of instrumental origin, where the resulting experience comes from moving at high speed through the territory. In almost all cases the aesthetic approach, or urban integration with the city and landscape, is excluded. In this sense, although such structures may be an opportunity to collect, build and colonize the urban landscape, the lack of adequate representation of the phenomenon makes it difficult to understand and transform. Photography is adopted as the means of representation, drawing on its particular tradition of long exposures to express movement and a multiple, divided, or weakened visual attention.