2 results for Augmented reality systems

in Abertay Research Collections - Abertay University’s repository


Relevance:

80.00%

Abstract:

Artist David Lyons and computer scientist David Flatla work collaboratively to create art that intentionally targets audiences of varying visual abilities, mediated through smart device interfaces. Conceived as an investigation into theories and practices of visual perception, the work explores the idea that artwork can be intentionally created to be experienced differently depending on the viewer’s visual abilities. They have created motion graphics and supporting recolouring and colour vision deficiency (CVD) simulation software. Some of the motion graphics communicate details specifically to those with colour blindness/CVD by containing moving imagery seen only by those with CVD. Others contain moving images that those with typical colour vision can experience but that appear unchanging to people with CVD. All the artwork is revealed to both audiences through the use of specially programmed smart devices fitted with augmented reality recolouring and CVD simulation software. The visual elements come from various sources, including the Ishihara colour blindness test, movie marquees, and game shows. The software reflects the perceptual capabilities of most individuals with reduced colour vision. The development of the simulation software and the motion graphic series are examined and discussed from both computer science and artistic positions.
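
As a rough illustration of the kind of CVD simulation software the abstract mentions, the Python sketch below (assuming NumPy is available) applies the widely cited Viénot, Brettel and Mollon (1999) deuteranopia transform to linear RGB values via LMS cone space. It is a generic example of the technique with approximate published coefficients, not the authors’ own recolouring or simulation software.

import numpy as np

# Illustrative sketch of a standard colour vision deficiency (CVD) simulation:
# linear RGB is mapped into LMS cone space, the response of the missing cone
# class is replaced by a projection of the remaining two, and the result is
# mapped back to RGB. Coefficients follow Vienot, Brettel & Mollon (1999).
RGB_TO_LMS = np.array([[17.8824,    43.5161,   4.11935],
                       [ 3.45565,   27.1554,   3.86714],
                       [ 0.0299566,  0.184309, 1.46709]])
LMS_TO_RGB = np.linalg.inv(RGB_TO_LMS)

# Deuteranopia: the M-cone response is reconstructed from L and S.
DEUTAN = np.array([[1.0,      0.0, 0.0],
                   [0.494207, 0.0, 1.24827],
                   [0.0,      0.0, 1.0]])

def simulate_deuteranopia(rgb_linear):
    """rgb_linear: array of shape (..., 3) holding *linear* RGB in [0, 1]."""
    lms = rgb_linear @ RGB_TO_LMS.T
    lms_sim = lms @ DEUTAN.T
    return np.clip(lms_sim @ LMS_TO_RGB.T, 0.0, 1.0)

if __name__ == "__main__":
    # Pure red and pure green collapse onto similar yellows, which is why
    # imagery can be designed to read differently for CVD and non-CVD viewers.
    print(simulate_deuteranopia(np.array([[1.0, 0.0, 0.0],
                                          [0.0, 1.0, 0.0]])))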

Relevance:

30.00%

Abstract:

Fully articulated hand tracking promises to enable fundamentally new interactions with virtual and augmented worlds, but the limited accuracy and efficiency of current systems have prevented widespread adoption. Today's dominant paradigm uses machine learning for initialization and recovery followed by iterative model-fitting optimization to achieve a detailed pose fit. We follow this paradigm, but make several changes to the model fitting, namely using: (1) a more discriminative objective function; (2) a smooth-surface model that provides gradients for non-linear optimization; and (3) joint optimization over both the model pose and the correspondences between observed data points and the model surface. While each of these changes may actually increase the cost per fitting iteration, we find a compensating decrease in the number of iterations. Further, the wide basin of convergence means that fewer starting points are needed for successful model fitting. Our system runs in real time on the CPU only, which frees up the commonly over-burdened GPU for experience designers. The hand tracker is efficient enough to run on low-power devices such as tablets. We can track up to several meters from the camera to provide a large working volume for interaction, even using the noisy data from current-generation depth cameras. Quantitative assessments on standard datasets show that the new approach exceeds the state of the art in accuracy. Qualitative results take the form of live recordings of a range of interactive experiences enabled by this new approach.
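
To make the model-fitting idea above concrete, the toy Python sketch below (assuming NumPy and SciPy are available; an illustration, not the paper's implementation) jointly optimises a model pose and the per-point correspondences in a single smooth non-linear least-squares problem, instead of alternating between closest-point search and pose updates as in classic ICP. The smooth model here is just a circle; a real hand tracker uses a smooth articulated surface, but the structure of the optimisation is analogous.

import numpy as np
from scipy.optimize import least_squares

def residuals(params, data):
    """params = [cx, cy, theta_1 .. theta_N]; returns flattened 2-D residuals."""
    centre = params[:2]            # model "pose" (translation only, for brevity)
    thetas = params[2:]            # one correspondence variable per data point
    model = centre + np.stack([np.cos(thetas), np.sin(thetas)], axis=-1)
    return (model - data).ravel()

def fit(data):
    # Crude initialisation from the data itself (the paper instead uses a
    # machine-learned initialiser before the model-fitting stage).
    centre0 = data.mean(axis=0)
    thetas0 = np.arctan2(data[:, 1] - centre0[1], data[:, 0] - centre0[0])
    x0 = np.concatenate([centre0, thetas0])
    # Pose and correspondences are refined together by one non-linear solve.
    return least_squares(residuals, x0, args=(data,)).x[:2]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    angles = rng.uniform(0.0, 2.0 * np.pi, 80)
    pts = np.array([0.5, -0.3]) + np.stack([np.cos(angles), np.sin(angles)], axis=-1)
    pts += rng.normal(scale=0.02, size=pts.shape)   # simulated depth-sensor noise
    print("recovered centre:", fit(pts))            # close to [0.5, -0.3]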