923 results for Augmented Reality, Location Awareness, CSCW, Cooperation, Distributed System


Relevance:

100.00%

Publisher:

Abstract:

This thesis presents an overview of augmented, virtual, and mixed reality, describing their characteristics and how applications for them are developed. Microsoft HoloLens is analysed as a case study, covering its conceptual, hardware, and software characteristics. For applications on this device, the management and the very concept of the hologram within a holographic application are redesigned, and the motivations and advantages of this redesign are analysed. An overview of the implementation details of the redesign is provided in order to clarify every aspect of the application.

Relevance:

100.00%

Publisher:

Abstract:

Mobile Cloud Computing promises to overcome the physical limitations of mobile devices by executing demanding mobile applications on cloud infrastructure. In practice, implementing this paradigm is difficult; network disconnection often occurs, bandwidth may be limited, and a large power draw is required from the battery, resulting in a poor user experience. This thesis presents a mobile cloud middleware solution, Context Aware Mobile Cloud Services (CAMCS), which provides cloud-based services to mobile devices in a disconnected fashion. An integrated user experience is delivered by designing for anticipated network disconnection and low data-transfer requirements. CAMCS achieves this by means of the Cloud Personal Assistant (CPA): each user of CAMCS is assigned their own CPA, which can complete user-assigned tasks, received as descriptions from the mobile device, by using existing cloud services. Service execution is personalised to the user's situation with contextual data, and task execution results are stored with the CPA until the user can connect with his/her mobile device to obtain them. Requirements for an integrated user experience are outlined, along with the design and implementation of CAMCS. The operation of CAMCS and CPAs with cloud-based services is presented, specifically in terms of service description, discovery, and task execution, as is the use of contextual awareness to personalise service discovery and service consumption to the user's situation. Resource management by CAMCS is studied and compared with existing solutions, and additional application models that CAMCS can provide are described. Evaluation is performed with CAMCS deployed on the Amazon EC2 cloud, the resource usage of the CAMCS Client running on Android-based mobile devices is measured, and a user study with volunteers using CAMCS on their own mobile devices is reported. Results show that CAMCS meets the requirements outlined for an integrated user experience.
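The abstract itself contains no code; the minimal Python sketch below (not part of the thesis) illustrates the disconnection-tolerant pattern it describes: the mobile client queues task descriptions while offline, a personal assistant running in the cloud executes them against services, and results are held server-side until the device reconnects. All class and method names (CloudPersonalAssistant, MobileClient, submit_task) are illustrative assumptions, not the CAMCS API.

```python
import uuid

class CloudPersonalAssistant:
    """Illustrative stand-in for a CAMCS-style CPA: accepts task descriptions,
    runs them against some cloud service, and holds the results until the
    mobile client reconnects to collect them."""

    def __init__(self):
        self._results = {}          # task_id -> result, kept until collected

    def submit_task(self, description):
        task_id = str(uuid.uuid4())
        # A real CPA would discover and invoke an existing cloud service here;
        # we simulate that with a placeholder computation.
        self._results[task_id] = f"result for: {description}"
        return task_id

    def collect(self, task_id):
        # Results stay with the assistant until explicitly collected,
        # so a disconnected client loses nothing.
        return self._results.pop(task_id, None)

class MobileClient:
    """Queues tasks while offline and flushes them when connectivity returns."""

    def __init__(self, assistant):
        self.assistant = assistant
        self.pending = []           # task descriptions queued while offline
        self.submitted = []         # task ids awaiting results

    def request(self, description, online):
        if online:
            self.submitted.append(self.assistant.submit_task(description))
        else:
            self.pending.append(description)   # defer until reconnection

    def reconnect(self):
        for description in self.pending:
            self.submitted.append(self.assistant.submit_task(description))
        self.pending.clear()
        return [self.assistant.collect(tid) for tid in self.submitted]

cpa = CloudPersonalAssistant()
client = MobileClient(cpa)
client.request("translate itinerary", online=False)   # queued locally
print(client.reconnect())                             # submitted and collected later
```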

Relevance:

100.00%

Publisher:

Abstract:

Emerging technologies have opened up a new dimension of the self, the 'technoself', driven by socio-technical innovation, and have taken an important step forward in pervasive learning. Technology Enhanced Learning (TEL) research has increasingly focused on emergent technologies such as Augmented Reality (AR) for augmented learning, mobile learning, and game-based learning, in order to improve the self-motivation and self-engagement of learners in enriched multimodal learning environments. This research takes advantage of hardware and software innovations across platforms and devices, including tablets, phablets, and even game consoles, and of their growing popularity for pervasive learning, together with the development of personalization processes that place the student at the centre of the learning process. In particular, augmented reality research has matured to a level that facilitates augmented learning, defined as an on-demand learning technique in which the learning environment adapts to the needs of and inputs from learners. In this paper we first study the role of the Technology Acceptance Model (TAM), one of the most influential theories applied in TEL to how learners come to accept and use a new technology. We then present the design methodology of the technoself approach for pervasive learning and introduce technoself-enhanced learning as a novel pedagogical model for improving student engagement by shaping personal learning focus and setting. Furthermore, we describe the design and development of an AR-based interactive digital interpretation system for augmented learning and discuss its key features. By incorporating mobile devices, game simulation, voice recognition, and multimodal interaction through augmented reality, learning content can be geared toward learners' needs, stimulating discovery and deeper understanding. The system demonstrates that augmented reality can provide a rich contextual learning environment and content tailored to individuals. Augmented learning via AR can bridge the gap between theoretical and practical learning, focusing on how the real and the virtual can be combined to meet different learning objectives, requirements, and even environments. Finally, we validate and evaluate the AR-based technoself-enhanced learning approach to improving student motivation and engagement through experimental learning practices. The results show that augmented reality aligns well with constructivist learning strategies, as learners can control their own learning and manipulate virtual objects in the augmented environment to acquire understanding and knowledge across a broad range of learning practices, including constructive and analytical activities.

Relevance:

100.00%

Publisher:

Abstract:

The paper describes the design and implementation of a novel low-cost virtual rugby decision-making interactive for use in a visitor centre. Original laboratory-based experimental work on decision making in rugby, using a virtual reality headset [1], is adapted for use in a public visitor centre, with consideration given to usability, cost, practicality, and health and safety. The movement of professional rugby players was captured and animated within a virtually recreated stadium. Users then interact with these virtual representations via a low-cost sensor (Microsoft Kinect), attempting to block them. Retaining the principles of perception and action, egocentric viewpoint, immersion, sense of presence, representative design, and game design, the system delivers an engaging and effective interactive that illustrates the underlying scientific principles of deceptive movement. User testing highlighted the need for usability, system robustness, fair and accurate scoring, an appropriate level of difficulty, and enjoyment.
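As a rough illustration of the interaction described above, and not the paper's actual implementation, the sketch below shows how a block attempt might be scored from Kinect-style skeleton data, by checking whether the user's tracked hand ever comes within reach of the virtual runner. The coordinates, threshold, and function names are assumptions.

```python
import math

def block_succeeds(hand_positions, runner_path, reach=0.5):
    """Illustrative scoring rule: the block counts if, at any sampled frame,
    the user's tracked hand is within `reach` metres of the virtual runner.
    hand_positions and runner_path are parallel lists of (x, y, z) points."""
    for hand, runner in zip(hand_positions, runner_path):
        if math.dist(hand, runner) <= reach:
            return True
    return False

# Dummy frame sequence: the hand converges on the runner by the third frame.
hand = [(0.0, 1.2, 2.0), (0.4, 1.2, 1.5), (0.8, 1.1, 1.0)]
runner = [(2.0, 1.0, 2.0), (1.5, 1.0, 1.5), (1.0, 1.0, 1.0)]
print(block_succeeds(hand, runner))   # True
```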

Relevance:

100.00%

Publisher:

Abstract:

Technological innovation and the conveniences it generates have had a growing impact in many areas, including medicine. The rapid evolution of some technologies, such as Augmented Reality (AR), creates excellent opportunities, particularly for laparoscopic surgical interventions, which are especially problematic in terms of the patient's exposure to radiation. This document details the entire research and development process carried out with the aim of creating an AR navigation system to assist the laparoscopic surgical procedure for kidney stone removal. With this goal in view, and in partnership with the company ECmedica LTD, four functional prototypes were developed. In order to understand the best practices for the input systems, interface, and registration system to be applied, these prototypes incorporated innovative aspects such as the use of an ultrasound probe as a substitute for X-ray, and registration performed through magnetic sensors. Supported by a user-centred design methodology and analysis instruments such as interviews and natural observation, the prototypes were tested, yielding clear answers regarding their objectives. It was observed that AR is seen by physicians as a solution with potential, with the proposed input, interface, and registration solutions being well received. The two-dimensional projection offered by the ultrasound image was, however, considered insufficient, and its replacement by a three-dimensional augmentation capable of facilitating correct needle insertion was suggested.

Relevance:

100.00%

Publisher:

Abstract:

This thesis describes the development and architecture of the software that makes up the Miradouro Virtual@, more specifically its interface component. The Miradouro Virtual@ is a device whose purpose, like that of traditional tourist binoculars, is to observe the landscape, but whose interaction is not limited to simple individual observation. It uses augmented reality to superimpose computer-generated images onto real images captured by an image-acquisition device (typically a video camera) and displays them on a touchscreen, thereby combining virtual and multimedia elements with the real scenery. The final, composited image gives the user a new dimension of the surrounding space, letting them explore a layer of information not previously visible. Being sensitive to the orientation of the Miradouro Virtual@, the virtual and multimedia elements adapt to the movements of the device. The Miradouro Virtual@ is a product composed of several hardware and software elements; the focus of this thesis is only on the software components, more specifically the interface. It sets out the limitations of the previous software version and presents the solutions found to overcome some of those limitations. ABSTRACT: This thesis focuses on the design and development of the Virtual Sightseeing™ software, more specifically on the interface component. The Virtual Sightseeing™ is a device, similar to traditional scenic viewers, that takes advantage of their general familiarity and popularity to build an innovative system. It uses augmented reality to superimpose, in real time, computer-generated images onto a live stream captured by a video camera, displaying the result on a touchscreen. Multimedia elements are added to the real scenery by compositing them into the image presented to the user. The multimedia information and virtual elements displayed are sensitive to the orientation and position of the device, and they change as the user manually changes its orientation. The Virtual Sightseeing™ comprises several hardware and software components; the focus of this thesis is on the software, more specifically on the interface component. It describes the known limitations of the previous software version and how they were overcome in the new version.
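Neither version of the abstract includes implementation details; the following sketch (an assumption, not the Virtual Sightseeing™ code) illustrates the core idea of orientation-sensitive overlays: a point of interest at a known bearing and elevation is mapped to screen coordinates from the device's current pan and tilt, and hidden when it falls outside the field of view.

```python
def overlay_position(poi_bearing, poi_elevation, device_pan, device_tilt,
                     screen_w=1280, screen_h=720, fov_h=60.0, fov_v=40.0):
    """Illustrative mapping from device orientation to overlay pixel position:
    a point of interest at (bearing, elevation) in degrees is drawn where it
    falls inside the camera's field of view, or not drawn at all when outside."""
    dx = poi_bearing - device_pan        # horizontal offset from view centre
    dy = poi_elevation - device_tilt     # vertical offset from view centre
    if abs(dx) > fov_h / 2 or abs(dy) > fov_v / 2:
        return None                      # outside the visible frame
    px = screen_w / 2 + (dx / fov_h) * screen_w
    py = screen_h / 2 - (dy / fov_v) * screen_h
    return int(px), int(py)

# A POI 10 degrees to the right of and 5 degrees above the current view direction.
print(overlay_position(poi_bearing=100.0, poi_elevation=5.0,
                       device_pan=90.0, device_tilt=0.0))   # (853, 270)
```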

Relevance:

100.00%

Publisher:

Abstract:

Augmented Reality (AR) applications often require knowledge of the user's position in some global coordinate system in order to draw the augmented content at its correct position on the screen. The most common method for coarse positioning is the Global Positioning System (GPS), and one of its advantages is that GPS receivers can be found in almost every modern mobile device. This research was conducted to determine the accuracy of different GPS receivers. The tests included seven consumer-grade tablets, three external GPS modules, and one professional-grade GPS receiver; all of the devices were tested with both static and mobile measurements. It was concluded that even the cheaper external GPS receivers were notably more accurate than the GPS receivers of the tested tablets. The absolute accuracy of the tablets is difficult to determine from the test results, since the results vary by a large margin between measurements; the accuracy of the tested tablets in static measurements was between 0.30 meters and 13.75 meters.
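The abstract does not state how horizontal error was computed; a common approach, shown in this illustrative sketch, is to compare each recorded fix against a surveyed reference point using the haversine distance and summarise the errors. The coordinates below are made up for the example.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 positions."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def static_accuracy(fixes, ref_lat, ref_lon):
    """Mean and maximum horizontal error of recorded fixes against a known
    reference point, as in a static accuracy test."""
    errors = [haversine_m(lat, lon, ref_lat, ref_lon) for lat, lon in fixes]
    return sum(errors) / len(errors), max(errors)

# Hypothetical fixes scattered around a reference point.
fixes = [(60.45001, 22.28502), (60.45003, 22.28498), (60.44998, 22.28505)]
print(static_accuracy(fixes, 60.45000, 22.28500))   # (mean error, max error) in metres
```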

Relevance:

100.00%

Publisher:

Abstract:

International audience

Relevance:

100.00%

Publisher:

Abstract:

Nowadays, new generations of computers provide performance high enough to build computationally expensive computer vision applications for mobile robotics. Building a map of the environment is a common robotic task and an essential prerequisite for moving through that environment. Traditionally, mobile robots have combined several sensors based on different technologies: lasers, sonars, and contact sensors have typically been used in mobile robotic architectures. Color cameras are nevertheless an important sensor, since we want robots to use the same information humans use to sense and move through their environments. Color cameras are cheap and flexible, but a great deal of work is needed to give robots sufficient visual understanding of a scene. Computer vision algorithms are computationally complex, but robots now have access to powerful architectures that can be used for mobile robotics. The advent of low-cost RGB-D sensors such as the Microsoft Kinect, which provide colored 3D point clouds at high frame rates, has made computer vision even more relevant to mobile robotics. The combination of visual and 3D data lets systems apply both computer vision and 3D processing, and therefore be aware of more details of the surrounding environment. The research described in this thesis was motivated by the need for scene mapping. Awareness of the surrounding environment is a key feature in many mobile robotics applications, from simple navigation to complex surveillance. In addition, acquiring a 3D model of a scene is useful in many areas, such as video game scene modeling, where well-known places are reconstructed and added to games, or advertising, where, once the 3D model of a room is obtained, the system can add pieces of furniture using augmented reality techniques. In this thesis we perform an experimental study of state-of-the-art registration methods to find which one best fits our scene-mapping purposes. Different methods are tested and analyzed on scenes with different distributions of visual and geometric appearance. In addition, this thesis proposes two methods for 3D data compression and representation of 3D maps. Our 3D representation proposal is based on the Growing Neural Gas (GNG) method. This type of Self-Organizing Map (SOM) has been used successfully for clustering, pattern recognition, and topology representation of various kinds of data. Until now, Self-Organizing Maps have mainly been computed offline, and their application to 3D data has focused on noise-free models without considering time constraints. Self-organising neural models can provide a good representation of the input space; in particular, GNG is a suitable model because of its flexibility, rapid adaptation, and excellent quality of representation. However, this type of learning is time-consuming, especially for high-dimensional input data. Since real applications often work under time constraints, the learning process must be adapted so that it completes within a predefined time. This thesis proposes an implementation that leverages the computing power of modern GPUs through the paradigm known as General-Purpose Computing on Graphics Processing Units (GPGPU). Our proposed geometric 3D compression method seeks to reduce the 3D information by using plane detection as the basic structure for compressing the data. This is because our target environments are man-made, so many points belong to planar surfaces, and our method achieves good compression results in such scenarios. The detected and compressed planes can also be used in other applications, such as surface reconstruction or plane-based registration algorithms. Finally, we have also demonstrated the value of GPU technologies by producing a high-performance implementation of a common CAD/CAM technique called Virtual Digitizing.
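As a rough sketch of the plane-based compression idea described above (not the thesis implementation), the code below detects the dominant plane in a point cloud with a minimal RANSAC loop and replaces its inliers with four plane coefficients. It assumes NumPy is available; the thresholds and the synthetic scene are illustrative.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through three or more points: returns a unit
    normal n and offset d such that n . x + d = 0."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return normal, -normal.dot(centroid)

def dominant_plane(cloud, iterations=200, threshold=0.02, seed=0):
    """Minimal RANSAC: repeatedly fit a plane to 3 random points and keep
    the one supported by the most inliers within `threshold` metres."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(cloud), dtype=bool)
    best_plane = None
    for _ in range(iterations):
        sample = cloud[rng.choice(len(cloud), 3, replace=False)]
        normal, d = fit_plane(sample)
        inliers = np.abs(cloud @ normal + d) < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane, best_inliers

# Synthetic scene: a flat floor plus scattered clutter above it.
rng = np.random.default_rng(1)
floor = np.c_[rng.uniform(0, 4, 500), rng.uniform(0, 4, 500), rng.normal(0, 0.005, 500)]
clutter = rng.uniform(0, 4, (100, 3))
cloud = np.vstack([floor, clutter])

plane, inliers = dominant_plane(cloud)
# Compression: the floor points collapse to four plane coefficients (n, d).
print(f"{inliers.sum()} of {len(cloud)} points replaced by plane n={plane[0].round(2)}, d={plane[1]:.3f}")
```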

Relevance:

100.00%

Publisher:

Abstract:

Augmented reality (AR) is a new technology adopted in prostate surgery with the aim of improving preservation of the neurovascular bundles (NVB) and avoiding positive surgical margins (PSM). We prospectively enrolled patients diagnosed with prostate cancer (PCa) on the basis of targeted fusion biopsy with positive mpMRI. Before surgery, the enrolled patients underwent reconstruction of a 3D virtual model based on preoperative mpMRI images. The surgeon then performed RARP with the aid of the 3D model projected in AR inside the robotic console (AR-3D guided RARP). Patients undergoing AR RARP were compared with those undergoing "standard RARP" in the same period. Overall, PSM rates were comparable between the two groups; PSM at the level of the index lesion were significantly lower in patients in the AR-3D group (5%) than in those in the control group (20%; p = 0.01). The new AR-3D guidance technique for IFS analysis may allow PSM at the index lesion level to be reduced.
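The abstract reports only the PSM percentages and the p-value; as an illustration of how such a comparison of two proportions can be tested, the sketch below uses a plain two-proportion z-test with placeholder group sizes (the actual cohort sizes are not given in the abstract, so the counts are purely hypothetical).

```python
import math

def two_proportion_p_value(x1, n1, x2, n2):
    """Two-sided z-test p-value for a difference between two proportions
    (normal approximation; with small counts a Fisher exact test would
    usually be preferred)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return math.erfc(abs(z) / math.sqrt(2))

# Placeholder group sizes chosen only to reproduce the quoted 5% vs 20%
# index-lesion PSM rates; they are not the study's data.
print(round(two_proportion_p_value(3, 60, 12, 60), 3))   # ~0.013
```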

Relevance:

100.00%

Publisher:

Abstract:

From the very beginning we have been used to interacting with the environment around us, using the physical objects at hand to meet our needs, but what if there were more than this? What if we could have around us objects that are not strictly physical bodies, yet behave consistently with the surrounding environment, so that no difference is perceived between them and a real object? This is what is today called Mixed Reality: a mixed reality made visible through dedicated devices, in which it is possible to interact simultaneously with physical objects and with digital objects called holograms. A fundamental aspect of this kind of system is collaboration. This thesis surveys the state-of-the-art technologies that enable Collaborative Mixed Reality experiences, but above all it concentrates on the design of a local-network architecture that allows a shared system to be realised. After applying various strategies, the results obtained from rigorous measurements are evaluated in order to determine, scientifically, the performance of the designed architecture and to draw conclusions, considering analogies and differences with respect to other possible solutions.
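The abstract does not describe the architecture's interfaces; the sketch below is an illustrative, in-process stand-in for a local-network relay of the kind described: devices publish hologram transform updates, the session keeps a last-writer-wins copy, and updates are rebroadcast to the other participants. The class names and timestamp scheme are assumptions.

```python
import time

class HologramSession:
    """Illustrative stand-in for a shared-scene relay: devices publish
    transform updates for holograms, and the session keeps a
    last-writer-wins copy that is pushed to every other participant."""

    def __init__(self):
        self.state = {}        # hologram id -> (timestamp, transform)
        self.devices = []      # callbacks registered by connected devices

    def join(self, on_update):
        self.devices.append(on_update)
        return on_update

    def publish(self, sender, hologram_id, transform, timestamp):
        previous = self.state.get(hologram_id)
        if previous and previous[0] >= timestamp:
            return                                # stale update, drop it
        self.state[hologram_id] = (timestamp, transform)
        for device in self.devices:
            if device is not sender:
                device(hologram_id, transform)    # rebroadcast to the others

session = HologramSession()
hololens = session.join(lambda hid, t: print("HoloLens sees", hid, t))
magicleap = session.join(lambda hid, t: print("Magic Leap sees", hid, t))
session.publish(hololens, "cube-1", {"pos": (0.0, 1.0, 2.0)}, time.monotonic())
```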

Relevance:

100.00%

Publisher:

Abstract:

The goal of this thesis is to analyse which frameworks are currently the best for developing Mixed Reality software and to study the design patterns most useful to a developer in this field. The first chapter introduces the concepts of extended, virtual, augmented, and mixed reality and the differences between them. It also describes the various devices that enable mixed reality, in particular the two most widely used headsets: Microsoft Hololens 2 and Magic Leap 1. The same chapter also presents the key aspects of mixed reality development, that is, all the elements that make a Mixed Reality experience possible. The second chapter describes the frameworks and kits useful for developing cross-platform mixed reality applications, in particular the two most widely used development environments, Unity and Unreal Engine, which already exist and are not specific to MR development but become suitable when integrated with dedicated kits such as the Mixed Reality ToolKit. The third chapter covers design patterns, both general-purpose and native to extended reality applications, that are useful for sound MR development. In addition, some of the main patterns used in object-oriented programming are examined to verify whether and how they can be implemented correctly in Unity in a mixed reality scenario. This analysis helps to establish whether the use of development frameworks, the most common approach, limits the developer's freedom.
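As an example of the kind of pattern analysis the third chapter describes, the sketch below shows the classic Observer pattern applied to hologram selection. It is written in Python for brevity (actual Unity scripts would be in C#), and the names are illustrative rather than taken from the thesis.

```python
class HologramSelectable:
    """Subject in an Observer arrangement: a hologram notifies registered
    listeners when the user selects it, so UI, audio, and networking code
    stay decoupled from the interaction logic."""

    def __init__(self, name):
        self.name = name
        self._observers = []

    def attach(self, observer):
        self._observers.append(observer)

    def select(self):
        for observer in self._observers:
            observer(self.name)       # notify every registered listener

highlight = lambda name: print(f"highlight {name}")
narrate = lambda name: print(f"play voice cue for {name}")

cube = HologramSelectable("cube-1")
cube.attach(highlight)
cube.attach(narrate)
cube.select()
```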

Relevance:

100.00%

Publisher:

Abstract:

The idea of Grid Computing originated in the nineties and found concrete application in contexts like the SETI@home project, where many computers offered by volunteers cooperated within a Grid environment, performing distributed computations that analyzed radio signals in search of extraterrestrial life. The Grid was composed of traditional personal computers but, with the emergence of the first mobile devices such as Personal Digital Assistants (PDAs), researchers started theorizing about including mobile devices in Grid Computing; although impressive theoretical work was done, the idea was discarded because of the (mainly technological) limitations of the mobile devices available at the time. Decades have passed, and mobile devices are now far more powerful and numerous than before, leaving a great amount of resources on smartphones and tablets untapped. Here we propose a solution for performing distributed computations over a Grid Computing environment that utilizes both desktop and mobile devices, exploiting resources from everyday mobile users that would otherwise go unused. The work starts with an introduction to what Grid Computing is, the evolution of mobile devices, the idea of integrating such devices into the Grid, and how to convince device owners to participate. The discussion then becomes more technical, starting with an explanation of how Grid Computing actually works, followed by the technical challenges of integrating mobile devices into the Grid. Next, the model that constitutes the solution offered by this study is explained, followed by a chapter on the realization of a prototype that proves the feasibility of distributed computations over a Grid composed of both mobile and desktop devices. To conclude, future developments and ideas for improving the project are presented.
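The abstract stays at a high level; the following sketch (an assumption, not the prototype's code) illustrates the basic coordinator/worker division of a Grid computation in which desktop and mobile nodes pull differently sized batches of tasks from a shared queue.

```python
from queue import Queue

def coordinator(work_items):
    """Fill a shared task queue that heterogeneous workers pull from."""
    tasks = Queue()
    for item in work_items:
        tasks.put(item)
    return tasks

def worker(name, tasks, batch_size):
    """A desktop or mobile node: pulls at most `batch_size` tasks (a proxy
    for its available compute and battery budget) and returns partial results."""
    results = []
    for _ in range(batch_size):
        if tasks.empty():
            break
        n = tasks.get()
        results.append((n, n * n))     # placeholder computation
    return name, results

tasks = coordinator(range(10))
print(worker("desktop", tasks, batch_size=6))
print(worker("smartphone", tasks, batch_size=4))
```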

Relevance:

100.00%

Publisher:

Abstract:

Nowadays, there is a trend toward reorganizing industry into geographically dispersed systems that carry out their activities autonomously. These systems must maintain coordinated relationships among themselves in order to assure the expected performance of the overall system. Thus, a manufacturing system based on web services is proposed to assure effective orchestration of services in order to produce final products. In addition, it considers special functions such as teleoperation, remote monitoring, and users' online requests, among others. Considering the proposed system as a discrete event system (DES), techniques derived from Petri nets (PN), including the Production Flow Schema (PFS), can be used in a PFS/PN approach for modeling. The system is approached at different levels of abstraction: a conceptual model obtained by applying the PFS technique and a functional model obtained by applying PN. Finally, a particular example of the proposed system is presented.
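To make the PFS/PN modelling concrete, here is a minimal, generic place/transition Petri net sketch (not the paper's model): a transition is enabled when all of its input places hold tokens, and firing moves tokens from inputs to outputs, as in a simple assembly step.

```python
class PetriNet:
    """Minimal place/transition net: a transition is enabled when every
    input place holds at least one token, and firing moves one token
    from each input place to each output place."""

    def __init__(self, marking):
        self.marking = dict(marking)               # place -> token count
        self.transitions = {}                      # name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"transition {name!r} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# Toy functional model: a part and a free machine are consumed;
# a finished product and the freed machine are produced.
net = PetriNet({"part_waiting": 2, "machine_free": 1, "product_done": 0})
net.add_transition("assemble", ["part_waiting", "machine_free"],
                   ["product_done", "machine_free"])
net.fire("assemble")
print(net.marking)   # {'part_waiting': 1, 'machine_free': 1, 'product_done': 1}
```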

Relevance:

100.00%

Publisher:

Abstract:

After a long period of discussion on the reform of the State, France implemented, in 1990, an interdepartmental system for the evaluation of public policies. The interesting aspect of this reform is the way in which a small group of people, from different institutions and with distinct political experiences, became convinced of the importance of public policy evaluation and aware of the various problems surrounding this issue. Finally, acting in a bipartisan spirit, this group was able to create a reality that profoundly modifies the legislative system, both in the decision-making process and in the system for implementing policies. In this paper, the author describes the steps of the debate and the features of the different proposals that eventually became the reform project and its implementation.