761 results for Virtual and Augmented Reality
Abstract:
In this paper, we propose the use of a specific system architecture, based on mobile devices, for navigation in urban environments. The aim of this work is to assess how virtual and augmented reality interface paradigms can provide enhanced location-based services using real-time techniques in the context of these two different technologies. The virtual reality interface is based on a faithful graphical representation of the localities of interest, coupled with sensory information on the location and orientation of the user, while the augmented reality interface uses computer vision techniques to capture patterns from the real environment and overlay additional way-finding information, aligned with the real imagery, in real time. The knowledge obtained from the evaluation of the virtual reality navigational experience has been used to inform the design of the augmented reality interface. Initial results of the user testing of the experimental augmented reality system for navigation are presented.
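The abstract does not name the computer vision method used for pattern capture. Purely as an illustration, the following minimal Python sketch shows the general shape of such a pipeline using OpenCV's ArUco markers as a stand-in, with a hypothetical way-finding label overlaid on the live camera image; the marker dictionary, label text and any resemblance to the paper's actual system are assumptions.

```python
# Illustrative sketch only: marker detection plus a way-finding overlay,
# in the spirit of the AR navigation interface described above.
# Requires opencv-contrib-python; uses the OpenCV >= 4.7 ArUco API
# (older versions expose cv2.aruco.detectMarkers instead).
import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())
cap = cv2.VideoCapture(0)  # live feed standing in for the mobile device camera

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = detector.detectMarkers(gray)  # capture patterns in the scene
    if ids is not None:
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)
        # Overlay hypothetical way-finding information anchored to the first marker.
        x, y = corners[0][0][0].astype(int)
        cv2.putText(frame, "Turn left: station 200 m", (int(x), int(y) - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    cv2.imshow("AR navigation sketch", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```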
Abstract:
The Multimedia Interactive Book (miBook) reflects the development of a new concept of virtual interpretation of traditional text books and audio-visual content. By encompassing new technological approaches and using augmented reality technology, it allows the end user to experience a variety of sensory stimuli while enjoying and interacting with the content, thereby enhancing the learning process. miBook stands for a global educational intention to enable people not only to access but also to appropriate intellectually valuable content coming from different linguistic and cultural contexts.
Abstract:
Climate change and land-use pressures are making environmental monitoring increasingly important. As environmental health is degrading at an alarming rate, ecologists have tried to tackle the problem by monitoring the composition and condition of the environment. However, traditional monitoring methods using experts are manual and expensive; to address this issue, government organisations designed a simpler and faster surrogate-based assessment technique for consultants, landholders and ordinary citizens. It nevertheless remains complex, subjective and error prone, which makes the collected data difficult to interpret and compare. In this paper we describe a work-in-progress mobile application designed to address these shortcomings through the use of augmented reality and multimedia smartphone technology.
Abstract:
Augmented Reality (AR) systems that use optical tracking with fiducial markers for registration have had an important role in popularizing this technology, since only a personal computer with a conventional webcam is required. However, in most of these applications the virtual elements are shown only in the foreground: a real element does not occlude a virtual one. The method presented enables AR environments based on fiducial markers to support mutual occlusion between a real element and many virtual ones, according to the elements' positions (depth) in the environment. © 2012 IEEE.
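As a reading aid, the following NumPy sketch illustrates the general idea of depth-based mutual occlusion: a virtual pixel is composited only where it is closer to the camera than the real surface. It is a minimal illustration under assumed inputs (per-pixel real and virtual depth maps), not the authors' implementation.

```python
# Minimal sketch of depth-based compositing: virtual content is drawn only
# where its depth is smaller than the estimated depth of the real scene,
# so real objects in front occlude virtual ones, and vice versa.
import numpy as np

def composite_with_occlusion(real_rgb, real_depth, virtual_rgb, virtual_depth, virtual_mask):
    """real_depth / virtual_depth: per-pixel camera distances in the same units.
    virtual_mask: boolean array marking pixels covered by rendered virtual content."""
    out = real_rgb.copy()
    # Virtual pixels are visible only where they exist AND are closer than the real surface.
    visible = virtual_mask & (virtual_depth < real_depth)
    out[visible] = virtual_rgb[visible]
    return out

# Hypothetical usage with a 480x640 frame: the virtual object lies behind the
# real surface, so it is fully occluded in the composited result.
h, w = 480, 640
real_rgb = np.zeros((h, w, 3), np.uint8)
virtual_rgb = np.full((h, w, 3), 255, np.uint8)
real_depth = np.full((h, w), 2.0)      # real surface ~2 m away
virtual_depth = np.full((h, w), 3.0)   # virtual object ~3 m away
virtual_mask = np.zeros((h, w), bool)
virtual_mask[100:200, 100:200] = True
frame = composite_with_occlusion(real_rgb, real_depth, virtual_rgb, virtual_depth, virtual_mask)
```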
Abstract:
Real-time remote sales assistance is an underdeveloped component of online sales services. Solutions involving web-page text chat, telephony and video support prove problematic when seeking to remotely guide customers in their sales processes, especially with configurations of physically complex artefacts. Recently, there has been great interest in the application of virtual worlds and augmented reality to create synthetic environments for remote sales of physical artefacts. However, there is a lack of analysis and development of appropriate software services to support these processes. We extend our previous work with the detailed design of configuration context services to support the management of an interactive sales session using augmented reality. We detail the context and configuration services required, presenting a novel data service that streams configuration information to the vendor for business analytics. We expect that a fully implemented configuration management service, based on our design, will improve the remote sales experience for customers and vendors alike via analysis of the streamed information.
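The abstract does not specify the format of the streamed configuration information. The sketch below illustrates, with entirely hypothetical field names, the kind of configuration event such a data service might serialise and push to the vendor for analytics.

```python
# Hypothetical sketch of a configuration event streamed to the vendor;
# field names and values are illustrative, not taken from the paper.
import json
import time
from dataclasses import dataclass, asdict, field

@dataclass
class ConfigurationEvent:
    session_id: str      # interactive AR sales session
    customer_id: str
    product_sku: str
    action: str          # e.g. "add_component", "remove_component", "swap_option"
    component: str
    timestamp: float = field(default_factory=time.time)

def stream_to_vendor(event: ConfigurationEvent) -> str:
    """Serialise the event; a real service would push this to an analytics endpoint."""
    return json.dumps(asdict(event))

print(stream_to_vendor(ConfigurationEvent(
    session_id="s-042", customer_id="c-17",
    product_sku="HOME-THEATRE-X", action="add_component", component="soundbar")))
```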
Abstract:
Identifying, modelling and documenting business processes usually requires the collaboration of many stakeholders that may be spread across companies in inter-organizational business settings. While there are many process modelling tools available, the support they provide for remote collaboration is still limited. This demonstration showcases a novel prototype application that combines collaborative virtual environment and augmented reality technologies to improve remote collaborative process modelling, with the aim of assisting common collaboration tasks by providing an increased sense of immersion in an intuitive shared work and task space. Our tool is easily deployed using open-source software and commodity hardware, and is expected to reduce travel costs for large-scale process modelling projects covering national and international centres within an enterprise.
Abstract:
Real-time sales assistance is a problematic component of the remote delivery of sales support to customers. Solutions involving web pages, telephony and video support prove problematic when seeking to remotely guide customers in their sales processes, especially with transactions revolving around physically complex artefacts. The process involves a number of services that are often complex in nature, ranging from physical compatibility and configuration factors to availability and credit services. We propose the application of a combination of virtual worlds and augmented reality to create synthetic environments suitable for remote sales of physical artefacts, right in the home of the purchaser. A high-level description of the service structure involved is presented, along with a use case involving the sale of electronic goods and services within an example augmented reality application. We expect this work to have application in many sales domains involving physical objects sold over the Internet.
Abstract:
Identifying, modelling and documenting business processes usually requires the collaboration of many stakeholders that may be spread across companies in inter-organizational business settings. While there are many process modelling tools available, the support they provide for remote collaboration is still limited. This paper investigates the application of virtual environment and augmented reality technologies to remote business process modelling, with the aim of assisting common collaboration tasks by providing an increased sense of immersion in a shared workspace. We report on the evaluation of a prototype system with five key informants. The results indicate that this approach to business process modelling is suited to remote collaborative task settings, and that stakeholders may indeed benefit from using augmented reality interfaces.
Abstract:
Video presented as part of the BPM2011 demonstration (France). In this video we show a prototype BPMN process modelling tool which uses Augmented Reality techniques to increase the sense of immersion when editing a process model. The avatar represents a remotely logged-in user and provides greater insight into the editing actions of the collaborator than current 2D web-based approaches to collaborative process modelling. We modified the Second Life client to integrate the ARToolkit in order to support pattern-based AR.
Abstract:
Due to the popularity of modern Collaborative Virtual Environments, there has been a related increase in their size and complexity. Developers therefore need visualisations that expose usage patterns from logged data, to understand the structures and dynamics of these complex environments. This chapter presents a new framework for the process of visualising virtual environment usage data. Major components, such as an event model, designer task model and data acquisition infrastructure are described. Interface and implementation factors are also developed, along with example visualisation techniques that make use of the new task and event model. A case study is performed to illustrate a typical scenario for the framework, and its benefits to the environment development team.
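The chapter's event and task models are not detailed in this abstract. Purely as an illustration of the kind of usage-data visualisation it refers to, the sketch below bins hypothetical logged avatar positions into a 2D occupancy grid with NumPy; the event shape and environment extent are assumptions.

```python
# Illustrative only: assumes simple logged movement events and bins them into
# a 2D occupancy grid, one typical usage-pattern visualisation for a virtual
# environment development team.
import numpy as np

# Hypothetical logged events: (user_id, x, y) positions sampled over time.
events = [("u1", 10.2, 4.1), ("u2", 10.8, 4.5), ("u1", 55.0, 30.3), ("u3", 11.1, 3.9)]

xs = np.array([e[1] for e in events])
ys = np.array([e[2] for e in events])

# 2D histogram over the environment's floor plan (assumed 0-100 m on each axis).
heatmap, xedges, yedges = np.histogram2d(xs, ys, bins=20, range=[[0, 100], [0, 100]])
print(heatmap.max(), "visits in the busiest cell")
```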
Abstract:
In this paper, the software architecture of a framework that simplifies the development of applications in the area of Virtual and Augmented Reality is presented. It is based on VRML/X3D to enable rendering of audio-visual information. We extended our VRML rendering system with a device management system based on the concept of a data-flow graph. The aim of the system is to create Mixed Reality (MR) applications simply by plugging together small prefabricated software components, instead of compiling monolithic C++ applications. The flexibility and advantages of the presented framework are explained on the basis of an exemplary implementation of a classic Augmented Reality application and its extension to a collaborative remote-expert scenario.
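To make the data-flow idea concrete, here is a minimal Python sketch, not the framework's actual API: small prefabricated components expose fields that are routed into one another, so an application is assembled by wiring nodes together rather than compiling monolithic code. The node and field names below are hypothetical.

```python
# Minimal data-flow-graph sketch: components propagate field values along routes.
class Node:
    def __init__(self, name):
        self.name = name
        self.routes = []            # (out_field, target_node, in_field)
        self.fields = {}

    def route(self, out_field, target, in_field):
        """Connect one of this node's output fields to another node's input field."""
        self.routes.append((out_field, target, in_field))

    def set_field(self, field, value):
        self.fields[field] = value
        self.on_field(field, value)
        for out_field, target, in_field in self.routes:
            if out_field == field:
                target.set_field(in_field, value)

    def on_field(self, field, value):
        pass                        # prefabricated components override this

class MarkerTracker(Node):
    pass                            # would emit a "pose" field from camera frames

class Transform(Node):
    def on_field(self, field, value):
        print(f"{self.name}: applying pose {value} to the rendered subgraph")

# Wiring replaces compilation: the tracker's pose drives a scene-graph transform.
tracker = MarkerTracker("tracker")
transform = Transform("objectTransform")
tracker.route("pose", transform, "translation")
tracker.set_field("pose", (0.1, 0.0, 0.5))
```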
Abstract:
In recent years, the continuous incorporation of new technologies into the learning process has been an important factor in education (1). The Technical University of Madrid (UPM) promotes educational innovation processes and develops projects related to the improvement of education quality. The experience we present fits into the Educational Innovation Project (EIP) of the E.U. of Agricultural Engineering of Madrid. One of the main objectives of the EIP is to take advantage of the new opportunities offered by Learning and Knowledge Technologies in order to enrich the educational processes and teaching management (2).
Abstract:
In this work, we propose the use of the neural gas (NG), a neural network that uses an unsupervised Competitive Hebbian Learning (CHL) rule, to develop a reverse engineering process. This is a simple and accurate method to reconstruct objects from point clouds obtained from multiple overlapping views using low-cost sensors. In contrast to other methods that may need several stages, including downsampling, noise filtering and many other tasks, the NG automatically obtains the 3D model of the scanned objects. To demonstrate the validity of our proposal, we tested our method with several models and performed a study of the neural network parameterization, computing the quality of representation and comparing results with other neural methods, such as the growing neural gas and Kohonen maps, and with classical methods such as the Voxel Grid. We also reconstructed models acquired by low-cost sensors that can be used in virtual and augmented reality environments for redesign or manipulation purposes. Since the NG algorithm has a high computational cost, we propose its acceleration. We have redesigned and implemented the NG learning algorithm to fit it onto Graphics Processing Units using CUDA, obtaining a speed-up of 180× compared to the sequential CPU version.
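For readers unfamiliar with the method, the following NumPy sketch shows one neural gas adaptation step with a Competitive Hebbian Learning edge update, following the standard NG formulation. The parameter values and random input points are illustrative; the decay schedules, edge aging and the paper's CUDA acceleration are omitted.

```python
# Minimal neural gas sketch: rank-based adaptation plus a CHL edge between the
# two closest units. Parameters (eps, lam) are illustrative, not the paper's.
import numpy as np

rng = np.random.default_rng(0)
n_units, dim = 50, 3
weights = rng.uniform(-1, 1, (n_units, dim))   # unit positions in 3D
edges = set()                                   # CHL topology (pairs of unit indices)

def ng_step(x, weights, edges, eps=0.1, lam=5.0):
    dists = np.linalg.norm(weights - x, axis=1)
    ranking = np.argsort(dists)                 # rank 0 = best matching unit
    ranks = np.empty(n_units)
    ranks[ranking] = np.arange(n_units)
    # Neighbourhood-ranked adaptation: closer-ranked units move more towards x.
    weights += eps * np.exp(-ranks / lam)[:, None] * (x - weights)
    # Competitive Hebbian Learning: connect the two closest units.
    edges.add(tuple(sorted((int(ranking[0]), int(ranking[1])))))

# Feed point-cloud samples (random stand-ins here for scanned points).
for x in rng.uniform(-1, 1, (1000, dim)):
    ng_step(x, weights, edges)
```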