975 results for 3D motion trajectory
Abstract:
Two new metal-organic polymeric complexes, [Cu4(O2CCH2CO2)4(L)]·7H2O (1) and [Co2(O2CCH2CO2)2(L)]·2H2O (2) [L = hexamethylenetetramine (urotropine)], have been synthesized and characterized by X-ray crystal structure determination and magnetic studies. Complex 1 is a 1D coordination polymer comprising a carboxylato-bridged Cu4 moiety linked by a tetradentate bridging urotropine. Complex 2 is a 3D coordination polymer made of pseudo-two-dimensional layers of Co(II) ions linked by malonate anions in syn-anti conformation, which are bridged by bidentate urotropine in trans fashion. Complex 1 crystallizes in the orthorhombic system, space group Pmmn, with a = 14.80(2) Å, b = 14.54(2) Å, c = 7.325(10) Å, beta = 90°, and Z = 4. Complex 2 crystallizes in the orthorhombic system, space group Imm2, with a = 7.584(11) Å, b = 15.80(2) Å, c = 6.939(13) Å, beta = 90.10(1)°, and Z = 4. Variable-temperature (300-2 K) magnetic measurements reveal the existence of both ferro- and antiferromagnetic interactions in 1 and only antiferromagnetic interactions in 2. The best-fit parameters for complex 1 are J = 13.5 cm⁻¹, J = -18.1 cm⁻¹, and g = 2.14, considering only intra-Cu4 interactions through the carboxylate and urotropine pathways. In the case of complex 2, fitting the magnetic data with intralayer interactions through the carboxylate pathway as well as interlayer interactions via the urotropine pathway gave no satisfactory result with any known model, owing to the considerable orbital contribution of the Co(II) ions to the magnetic moment and the complicated structure. Assuming isolated Co(II) ions (without any coupling, J = 0), the shape of the χMT curve fits the experimental data well except at very low temperatures.
Abstract:
Information technology in construction (ITC) has been gaining wide acceptance and is being implemented in the construction research domains as a tool to assist decision makers. Most of the research into visualization technologies (VT) has been on the wide range of 3D and simulation applications suitable for construction processes. Despite its development with interoperability and standardization of products, VT usage has remained very low when it comes to communicating and addressing the needs of building end-users (BEU). This paper argues that building end-users are a source of experience and expertise that can be brought into the briefing stage for the evaluation of design proposals. It also suggests that the end-user is a source of new ideas promoting innovation. In this research a positivistic methodology that includes the comparison of 3D models and the traditional 2D methods is proposed. It will help to identify "how much", if anything, a non-spatial specialist can gain in terms of "understanding" of a particular design proposal presented using both methods.
Abstract:
Do we view the world differently if it is described to us in figurative rather than literal terms? An answer to this question would reveal something about both the conceptual representation of figurative language and the scope of top-down influences on scene perception. Previous work has shown that participants will look longer at a path region of a picture when it is described with a type of figurative language called fictive motion (The road goes through the desert) rather than without (The road is in the desert). The current experiment provided evidence that such fictive motion descriptions affect eye movements by evoking mental representations of motion. If participants heard contextual information that would hinder actual motion, it influenced how they viewed a picture when it was described with fictive motion. Inspection times and eye movements scanning along the path increased during fictive motion descriptions when the terrain was first described as difficult (The desert is hilly) as compared to easy (The desert is flat); there were no such effects for descriptions without fictive motion. It is argued that fictive motion evokes a mental simulation of motion that is immediately integrated with visual processing, and hence figurative language can have a distinct effect on perception. (c) 2005 Elsevier B.V. All rights reserved.
Abstract:
Selecting a stimulus as the target for a goal-directed movement involves inhibiting other competing possible responses. Both target and distractor stimuli activate populations of neurons in topographic oculomotor maps such as the superior colliculus. Local inhibitory interconnections between these populations ensure only one saccade target is selected. Suppressing saccades to distractors may additionally involve inhibiting corresponding map regions to bias the local competition. Behavioral evidence of these inhibitory processes comes from the effects of distractors on oculomotor and manual trajectories. Individual saccades may initially deviate either toward or away from a distractor, but the source of this variability has not been investigated. Here we investigate the relation between distractor-related deviation of trajectory and saccade latency. Targets were presented with, or without, distractors, and the deviation of saccade trajectories arising from the presence of distractors was measured. A fixation gap paradigm was used to manipulate latency independently of the influence of competing distractors. Shorter-latency saccades deviated toward distractors and longer-latency saccades deviated away from distractors. The transition between deviation toward or away from distractors occurred at a saccade latency of around 200 ms. This shows that the time course of the inhibitory process involved in distractor-related suppression is relatively slow.
Abstract:
Static movement aftereffects (MAEs) were measured after adaptation to vertical square-wave luminance gratings drifting horizontally within a central window in a surrounding stationary vertical grating. The relationship between the stationary test grating and the surround was manipulated by varying the alignment of the stationary stripes in the window and those in the surround, and the type of outline separating the window and the surround [no outline, black outline (invisible on black stripes), and red outline (visible throughout its length)]. Offsetting the stripes in the window significantly increased both the duration and ratings of the strength of MAEs. Manipulating the outline had no significant effect on either measure of MAE strength. In a second experiment, in which the stationary test fields alone were presented, participants judged how segregated the test field appeared from its surround. In contrast to the MAE measures, outline as well as offset contributed to judged segregation. In a third experiment, in which test-stripe offset was systematically manipulated, segregation ratings rose with offset. However, MAE strength was greater at medium than at either small or large (180 degrees phase shift) offsets. The effects of these manipulations on the MAE are interpreted in terms of a spatial mechanism which integrates motion signals along collinear contours of the test field and surround, and so causes a reduction of motion contrast at the edges of the test field.
Abstract:
As we move through the world, our eyes acquire a sequence of images. The information from this sequence is sufficient to determine the structure of a three-dimensional scene, up to a scale factor determined by the distance that the eyes have moved [1, 2]. Previous evidence shows that the human visual system accounts for the distance the observer has walked [3,4] and the separation of the eyes [5-8] when judging the scale, shape, and distance of objects. However, in an immersive virtual-reality environment, observers failed to notice when a scene expanded or contracted, despite having consistent information about scale from both distance walked and binocular vision. This failure led to large errors in judging the size of objects. The pattern of errors cannot be explained by assuming a visual reconstruction of the scene with an incorrect estimate of interocular separation or distance walked. Instead, it is consistent with a Bayesian model of cue integration in which the efficacy of motion and disparity cues is greater at near viewing distances. Our results imply that observers are more willing to adjust their estimate of interocular separation or distance walked than to accept that the scene has changed in size.
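The Bayesian cue-integration account this abstract invokes is usually formalised as reliability-weighted (inverse-variance) averaging: each cue's weight grows as its noise shrinks, so at near viewing distances the more reliable disparity and motion cues dominate. As a generic illustration only (this is not the authors' model, and the cue values and noise levels below are invented):

```python
import numpy as np

def integrate_cues(estimates, sigmas):
    """Combine noisy estimates by inverse-variance weighting.

    Cue i gets weight w_i = (1/sigma_i**2) / sum_j (1/sigma_j**2), so the
    less noisy cue dominates; the combined variance is 1 / sum_j (1/sigma_j**2).
    """
    estimates = np.asarray(estimates, dtype=float)
    reliabilities = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    weights = reliabilities / reliabilities.sum()
    combined = float(weights @ estimates)
    combined_sigma = float(np.sqrt(1.0 / reliabilities.sum()))
    return combined, combined_sigma

# Hypothetical numbers: disparity says the object is 100 cm away with
# sigma 5 cm (reliable at near range); walking-based motion parallax says
# 140 cm with sigma 10 cm, so it receives a quarter of the weight.
depth, sigma = integrate_cues([100.0, 140.0], [5.0, 10.0])
```

On these made-up numbers the combined estimate sits four times closer to the disparity cue than to the motion cue, which is the qualitative pattern the abstract describes.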
Abstract:
An increasing number of neuroscience experiments are using virtual reality to provide a more immersive and less artificial experimental environment. This is particularly useful to navigation and three-dimensional scene perception experiments. Such experiments require accurate real-time tracking of the observer's head in order to render the virtual scene. Here, we present data on the accuracy of a commonly used six degrees of freedom tracker (Intersense IS900) when it is moved in ways typical of virtual reality applications. We compared the reported location of the tracker with its location computed by an optical tracking method. When the tracker was stationary, the root mean square error in spatial accuracy was 0.64 mm. However, we found that errors increased over ten-fold (up to 17 mm) when the tracker moved at speeds common in virtual reality applications. We demonstrate that the errors we report here are predominantly due to inaccuracies of the IS900 system rather than the optical tracking against which it was compared. (c) 2006 Elsevier B.V. All rights reserved.
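The 0.64 mm figure quoted above is a root-mean-square error between the tracker's reported positions and the reference positions from the optical method. A minimal sketch of that computation (the coordinates used in any test call are invented, not the paper's data):

```python
import numpy as np

def rms_position_error(reported, reference):
    """Root-mean-square Euclidean error between two (N, 3) position tracks.

    `reported` and `reference` are time-aligned sequences of 3D positions;
    the result is in the same units as the inputs (e.g. mm).
    """
    reported = np.asarray(reported, dtype=float)
    reference = np.asarray(reference, dtype=float)
    per_sample = np.linalg.norm(reported - reference, axis=1)  # error per frame
    return float(np.sqrt(np.mean(per_sample ** 2)))
```

In practice the two tracks would first be aligned in time and expressed in a common coordinate frame before this comparison.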
Abstract:
The perceived displacement of motion-defined contours in peripheral vision was examined in four experiments. In Experiment 1, in line with Ramachandran and Anstis' finding [Ramachandran, V. S., & Anstis, S. M. (1990). Illusory displacement of equiluminous kinetic edges. Perception, 19, 611-616], the border between a field of drifting dots and a static dot pattern was apparently displaced in the same direction as the movement of the dots. When a uniform dark area was substituted for the static dots, a similar displacement was found, but this was smaller and statistically insignificant. In Experiment 2, the border between two fields of dots moving in opposite directions was displaced in the direction of motion of the dots in the more eccentric field, so that the location of a boundary defined by a diverging pattern is perceived as more eccentric, and that defined by a converging pattern as less eccentric. Two explanations for this effect (that the displacement reflects a greater weight given to the more eccentric motion, or that the region containing stronger centripetal motion components expands perceptually into that containing centrifugal motion) were tested in Experiment 3, by varying the velocity of the more eccentric region. The results favoured the explanation based on the expansion of an area in centripetal motion. Experiment 4 showed that the difference in perceived location was unlikely to be due to differences in the discriminability of contours in diverging and converging patterns, and confirmed that this effect is due to a difference between centripetal and centrifugal motion rather than motion components in other directions. Our results provide new evidence for a bias towards centripetal motion in human vision, and suggest that the direction of motion-induced displacement of edges is not always in the direction of an adjacent moving pattern. (C) 2008 Elsevier Ltd. All rights reserved.
Abstract:
We report rates of regression and associated findings in a population derived group of 255 children aged 9-14 years, participating in a prevalence study of autism spectrum disorders (ASD); 53 with narrowly defined autism, 105 with broader ASD and 97 with non-ASD neurodevelopmental problems, drawn from those with special educational needs within a population of 56,946 children. Language regression was reported in 30% with narrowly defined autism, 8% with broader ASD and less than 3% with developmental problems without ASD. A smaller group of children were identified who underwent a less clear setback. Regression was associated with higher rates of autistic symptoms and a deviation in developmental trajectory. Regression was not associated with epilepsy or gastrointestinal problems.
Abstract:
This paper describes the SIMULINK implementation of a constrained predictive control algorithm based on quadratic programming and linear state space models, and its application to a laboratory-scale 3D crane system. The algorithm is compatible with Real-Time Windows Target and, in the case of the crane system, it can be executed with a sampling period of 0.01 s and a prediction horizon of up to 300 samples, using a linear state space model with 3 inputs, 5 outputs and 13 states.
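A predictive controller of this kind solves a quadratic program at every sample; in the unconstrained case that QP reduces to a least-squares problem over the prediction horizon, which is enough to show the structure. A hedged sketch of the standard condensed formulation for x_{k+1} = A x_k + B u_k (the matrix names and weight are illustrative, not the paper's; a real implementation would add the input and state constraints and hand the resulting QP to a solver):

```python
import numpy as np

def prediction_matrices(A, B, horizon):
    """Stack the model so that x_pred = Phi @ x0 + Gamma @ u_seq."""
    n, m = B.shape
    Phi = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(horizon)])
    Gamma = np.zeros((horizon * n, horizon * m))
    for i in range(horizon):
        for j in range(i + 1):
            Gamma[i * n:(i + 1) * n, j * m:(j + 1) * m] = (
                np.linalg.matrix_power(A, i - j) @ B)
    return Phi, Gamma

def mpc_first_move(A, B, x0, ref, horizon, u_weight=0.1):
    """One receding-horizon step: minimise
    ||Gamma u + Phi x0 - ref||^2 + u_weight * ||u||^2 over the input
    sequence u (unconstrained case) and return only the first move."""
    m = B.shape[1]
    Phi, Gamma = prediction_matrices(A, B, horizon)
    H = Gamma.T @ Gamma + u_weight * np.eye(horizon * m)   # QP Hessian
    g = Gamma.T @ (ref - Phi @ x0)                         # QP gradient term
    u = np.linalg.solve(H, g)
    return u[:m]
```

With box constraints on the inputs and states this same (H, g) pair becomes the QP that is re-solved each sampling period, which is what makes the 0.01 s sample time with a 300-sample horizon a demanding real-time target.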
Abstract:
This paper describes a new method for reconstructing a 3D surface using a small number, e.g. 10, of 2D photographic images. The images are taken from different viewing directions by a perspective camera with full prior knowledge of the camera configurations. The reconstructed object's surface is represented as a set of triangular facets. We empirically demonstrate that if the viewing directions are uniformly distributed around the object's viewing sphere, then the reconstructed 3D points cluster closely on highly curved parts of the surface and are widely spread on smooth or flat parts. The advantage of this property is that the reconstructed points along a surface or a contour generator are not undersampled or underrepresented, because surfaces or contours should be sampled more densely where their curvature is high. The more complex the contour's shape, the greater the number of points required, and the greater the number of points automatically generated by the proposed method. Given that the viewing directions are uniformly distributed, the number and distribution of the reconstructed points depend on the shape, or curvature, of the surface regardless of the size of the surface or of the object.
Abstract:
This paper presents a paralleled Two-Pass Hexagonal (TPA) algorithm constituted by the Linear Hashtable Motion Estimation Algorithm (LHMEA) and Hexagonal Search (HEXBS) for motion estimation. In the TPA, Motion Vectors (MV) are generated from the first-pass LHMEA and are used as predictors for the second-pass HEXBS motion estimation, which searches only a small number of Macroblocks (MBs). We introduced the hashtable into video processing and completed a parallel implementation. We propose and evaluate parallel implementations of the LHMEA of TPA on clusters of workstations for real-time video compression. We discuss how parallel video coding on load-balanced multiprocessor systems can help, especially with motion estimation. The effect of load balancing on performance is discussed. The performance of the algorithm is evaluated using standard video sequences and the results are compared to current algorithms.
Abstract:
This paper presents a novel two-pass algorithm constituted by the Linear Hashtable Motion Estimation Algorithm (LHMEA) and Hexagonal Search (HEXBS) for block-based motion compensation. On the basis of research from previous algorithms, especially an on-the-edge motion estimation algorithm called hexagonal search (HEXBS), we propose the LHMEA and the Two-Pass Algorithm (TPA). We introduce the hashtable into video compression. In this paper we employ LHMEA for the first-pass search over all the Macroblocks (MB) in the picture. Motion Vectors (MV) are then generated from the first pass and used as predictors for the second-pass HEXBS motion estimation, which searches only a small number of MBs. The evaluation of the algorithm considers three important metrics: time, compression rate and PSNR. The performance of the algorithm is evaluated using standard video sequences and the results are compared to current algorithms. Experimental results show that the proposed algorithm can offer the same compression rate as the Full Search. LHMEA with TPA improves significantly on HEXBS and shows a direction for improving other fast motion estimation algorithms, for example Diamond Search.
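For readers unfamiliar with HEXBS: it repeatedly evaluates a large hexagon of candidate motion vectors around the current best match, moves to any improving point, and finishes with a small-pattern refinement; in the TPA the starting vector would be the LHMEA prediction. A minimal single-block sketch (the search patterns and function names follow the common textbook description of hexagon search, not the authors' code):

```python
import numpy as np

# Large-hexagon and final small-pattern offsets (dy, dx); illustrative,
# per the usual HEXBS description, not the authors' exact patterns.
HEX_POINTS = [(-2, 0), (2, 0), (-1, 2), (1, 2), (-1, -2), (1, -2)]
SQUARE_POINTS = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def sad(cur, ref, y, x, dy, dx, bs):
    """Sum of absolute differences for one candidate motion vector (dy, dx)."""
    ry, rx = y + dy, x + dx
    if ry < 0 or rx < 0 or ry + bs > ref.shape[0] or rx + bs > ref.shape[1]:
        return np.inf  # candidate block falls outside the reference frame
    cur_blk = cur[y:y + bs, x:x + bs].astype(int)   # int to avoid uint8 wrap
    ref_blk = ref[ry:ry + bs, rx:rx + bs].astype(int)
    return float(np.abs(cur_blk - ref_blk).sum())

def hexbs(cur, ref, y, x, bs=16, start=(0, 0)):
    """Hexagon search for the block at (y, x); `start` is the predicted MV
    (in the TPA it would come from the first-pass LHMEA)."""
    dy, dx = start
    cost = sad(cur, ref, y, x, dy, dx, bs)
    while True:  # walk the large hexagon while it keeps improving
        best = min((sad(cur, ref, y, x, dy + a, dx + b, bs), dy + a, dx + b)
                   for a, b in HEX_POINTS)
        if best[0] >= cost:
            break
        cost, dy, dx = best
    # one small-pattern refinement step around the hexagon minimum
    best = min((sad(cur, ref, y, x, dy + a, dx + b, bs), dy + a, dx + b)
               for a, b in SQUARE_POINTS)
    if best[0] < cost:
        cost, dy, dx = best
    return dy, dx
```

Because only a handful of candidates are evaluated per step, a good predicted start vector (the point of the first LHMEA pass) sharply reduces both the number of steps and the risk of stopping in a local minimum.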
Abstract:
This paper presents an improved Two-Pass Hexagonal (TPA) algorithm constituted by the Linear Hashtable Motion Estimation Algorithm (LHMEA) and Hexagonal Search (HEXBS) for motion estimation. In the TPA, Motion Vectors (MV) are generated from the first-pass LHMEA and are used as predictors for the second-pass HEXBS motion estimation, which searches only a small number of Macroblocks (MBs). The hashtable structure of the LHMEA is improved compared to the original TPA and LHMEA. The evaluation of the algorithm considers three important metrics: processing time, compression rate and PSNR. The performance of the algorithm is evaluated using standard video sequences and the results are compared to current algorithms.