861 results for Unity, Mixed Reality, Extended Reality, Augmented Reality, Virtual Reality, Design pattern
Abstract:
In the health domain, the field of rehabilitation suffers from a lack of specialized staff while hospital costs keep increasing. Worse, almost no tools are dedicated to motivating patients or helping personnel monitor therapeutic exercises. This paper demonstrates the high potential that virtual reality can bring through a platform of serious games for lower-limb rehabilitation involving a head-mounted display and haptic robot devices. We first introduce serious game (SG) principles and the current context of rehabilitation interventions, followed by the description of an original haptic device called the Lambda Health System. The architecture of the platform is then detailed, including communication specifications showing that lag is imperceptible to the user (60 Hz update rate). Finally, four serious games for rehabilitation using haptic robots and/or an HMD were tested by 33 health specialists.
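As a rough illustration of the kind of fixed-rate exchange implied by the 60 Hz figure above, the sketch below runs a 60 Hz loop that polls a device pose and sends it over UDP. The port, packet layout and read_device_position() are hypothetical placeholders, not the paper's actual protocol.

```python
import socket
import struct
import time

# Hypothetical 60 Hz state exchange between a game engine and a haptic device
# controller; port, packet layout and read_device_position() are placeholders.
UPDATE_RATE_HZ = 60
PERIOD = 1.0 / UPDATE_RATE_HZ

def read_device_position():
    # Placeholder for the call that polls the haptic robot's end-effector pose.
    return (0.0, 0.0, 0.0)

def run_loop(host="127.0.0.1", port=9000):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    next_tick = time.perf_counter()
    while True:
        x, y, z = read_device_position()
        # At 60 Hz each packet has ~16.7 ms, below typical perceptual lag thresholds.
        sock.sendto(struct.pack("!3d", x, y, z), (host, port))
        next_tick += PERIOD
        sleep_for = next_tick - time.perf_counter()
        if sleep_for > 0:
            time.sleep(sleep_for)
```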
Abstract:
Stroke is a leading cause of disability, particularly affecting older people. Although the causes of stroke are well known and it is possible to reduce these risks, there is still a need to improve rehabilitation techniques. Early studies in the literature suggest that early intensive therapies can enhance a patient's recovery. According to the physiotherapy literature, attention and motivation are key factors for motor relearning following stroke. Machine-mediated therapy offers the potential to improve the outcome of stroke patients engaged in rehabilitation for upper limb motor impairment. Haptic interfaces are a particular group of robots that are attractive due to their ability to safely interact with humans. They can enhance traditional therapy tools, provide therapy "on demand" and can provide accurate, objective measurements of a patient's progression. Our recent studies suggest that the use of tele-presence and VR-based systems can potentially motivate patients to exercise for longer periods of time. The creation of human-like trajectories is essential for retraining upper limb movements of people who have lost manipulation functions following stroke. By coupling models of human arm movement with haptic interfaces and VR technology, it is possible to create a new class of robot-mediated neuro-rehabilitation tools. This paper provides an overview of different approaches to robot-mediated therapy and describes a system based on haptics and virtual reality visualisation techniques, where particular emphasis is given to different control strategies for interaction derived from minimum-jerk theory and to the aid of virtual and mixed reality based exercises.
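The control strategies above are derived from minimum-jerk theory. The sketch below implements the standard textbook minimum-jerk point-to-point profile, not the paper's specific controller; the example distance and duration are illustrative.

```python
import numpy as np

def minimum_jerk(x0, xf, T, n=100):
    """Standard minimum-jerk profile between positions x0 and xf over duration T.

    Position follows x(t) = x0 + (xf - x0) * (10 s^3 - 15 s^4 + 6 s^5) with
    s = t / T, giving zero velocity and acceleration at both endpoints.
    Textbook formulation only; a sketch, not the paper's controller.
    """
    t = np.linspace(0.0, T, n)
    s = t / T
    shape = 10 * s**3 - 15 * s**4 + 6 * s**5
    return t, x0 + (xf - x0) * shape

# Example: a 0.4 m reaching movement lasting 1.5 s, usable as a reference
# trajectory for a haptic-interface controller.
t, x = minimum_jerk(0.0, 0.4, 1.5)
```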
Abstract:
In this work, we propose a solution to the scalability problem found in large-scale collaborative, virtual and mixed reality environments that use the hierarchical client-server model. Basically, we use a hierarchy of servers: when the capacity of a server is reached, a new server is created as a child of the first one, and the system load is distributed between them (parent and child). We propose efficient tools and techniques for solving problems inherent to the client-server model, such as the definition of clusters of users, the distribution and redistribution of users across the servers, and mixing and filtering operations that are necessary to reduce the flow between servers. The new model was tested in simulation, in emulation and in interactive applications that were implemented. The results of these experiments show improvements over the traditional, previous models, indicating the usability of the proposal for problems involving all-to-all communication. This is the case of interactive games and other Internet-oriented applications (including multi-user environments) and of interactive applications for the Brazilian Digital Television System, to be developed by the research group. Keywords: large scale virtual environments, interactive digital tv, distributed
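As a rough illustration of the server-splitting idea described above (not the authors' implementation), the sketch below spawns a child server when a parent reaches capacity and migrates part of its users to it; class names, the capacity value and the halving policy are hypothetical.

```python
# Hedged sketch: when a server reaches capacity, a child server is created and
# part of the load migrates to it. Names and policies are illustrative only.
class Server:
    def __init__(self, name, capacity=64, parent=None):
        self.name = name
        self.capacity = capacity
        self.parent = parent
        self.children = []
        self.users = []

    def add_user(self, user):
        if len(self.users) < self.capacity:
            self.users.append(user)
            return self
        # Capacity reached: create a child and hand half of the current users
        # (one "cluster") over to it, then place the new user there.
        child = Server(f"{self.name}.{len(self.children)}", self.capacity, parent=self)
        self.children.append(child)
        half = len(self.users) // 2
        child.users, self.users = self.users[half:], self.users[:half]
        return child.add_user(user)

root = Server("root", capacity=4)
for i in range(10):
    root.add_user(f"user{i}")
```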
Abstract:
The advent of the Internet stimulated the appearance of several services, for example the communication services present in users' day-to-day lives. Services such as chat and e-mail reach an increasing number of users, a fact that is turning the Net into a powerful communication medium. This work explores the use of conventional communication services on top of the Net infrastructure. We introduce the concept of social communication protocols applied to a shared virtual environment and argue that communication tools have to be adapted to the Internet's potential. To do that, we draw on theories from the Communication field and their applicability in a virtual environment context. We define a multi-agent architecture to support the offering of these services, as well as a software and hardware platform to support experiments using Mixed Reality. Finally, we present the obtained results, experiments and products.
Abstract:
In this work, we propose methodologies and computer tools to insert robots into cultural environments. The basic idea is to have a robot in a real context (a cultural space) that can represent a user connected to the system through the Internet (a visitor avatar in the real space), while the robot also has its own representation in a Mixed Reality space (a robot avatar in the virtual space). In this way, robot and avatar are not simply real and virtual objects: they play a more important role in the scene, interfering in the process and taking decisions. In order to have this service running, we developed a module composed of a robot, communication tools and means of integrating these with the virtual environment. We also implemented a set of behaviors for controlling the robot in the real space. We studied the available software and hardware tools for the robotics platform used in the experiments and developed test routines to determine their capabilities. Finally, we studied the behavior-based control model and planned and implemented all the behaviors necessary to integrate the robot into the real and virtual cultural spaces. Several experiments were conducted in order to validate the developed methodologies and tools.
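As a hedged sketch of behavior-based control in the spirit of the abstract above, the snippet below uses a fixed-priority arbiter that runs the highest-priority behavior whose trigger fires. Behavior names, sensor fields and thresholds are illustrative, not the thesis' actual behavior set.

```python
# Fixed-priority behavior arbitration (subsumption-style) sketch.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Behavior:
    name: str
    priority: int                      # lower number = higher priority
    triggered: Callable[[Dict], bool]  # does this behavior want control?
    act: Callable[[Dict], str]         # returns a motor command

def arbitrate(behaviors, sensors):
    for b in sorted(behaviors, key=lambda b: b.priority):
        if b.triggered(sensors):
            return b.act(sensors)
    return "stop"

behaviors = [
    Behavior("avoid_obstacle", 0,
             lambda s: s["range_m"] < 0.5,
             lambda s: "turn_away"),
    Behavior("follow_visitor_goal", 1,
             lambda s: s["goal"] is not None,
             lambda s: f"move_towards {s['goal']}"),
    Behavior("wander", 2, lambda s: True, lambda s: "wander"),
]

print(arbitrate(behaviors, {"range_m": 1.2, "goal": (3.0, 1.5)}))
```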
Abstract:
The representation of real objects in virtual environments has applications in many areas, such as cartography, mixed reality and reverse engineering. These objects can be generated in two ways: manually, with CAD (Computer Aided Design) tools, or automatically, by means of surface reconstruction techniques. The simpler the 3D model, the easier it is to process and store. However, reconstruction methods can generate very detailed virtual elements, which can cause problems when processing the resulting mesh, because it has many edges and polygons that have to be handled at visualization time. In this context, simplification algorithms can be applied to remove polygons from the resulting mesh, without changing its topology, generating a lighter mesh with fewer irrelevant details. The project aimed at the study, implementation and comparative testing of simplification algorithms applied to meshes generated by a point-cloud-based reconstruction pipeline. This work proposes the simplification step as a complement to the pipeline developed by (ONO et al., 2012), which performed reconstruction from point clouds obtained with a Microsoft Kinect and then applied the Poisson algorithm.
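As a hedged illustration of such a reconstruction-plus-simplification step, the sketch below uses Open3D (a stand-in library, not the implementation used in the original project) to build a mesh with the Poisson algorithm and then decimate it; the file name and parameter values are placeholders.

```python
import open3d as o3d

# Placeholder input: a point cloud such as one captured with a Kinect sensor.
pcd = o3d.io.read_point_cloud("scan.ply")
pcd.estimate_normals()

# Poisson surface reconstruction produces a dense, highly detailed mesh.
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=8)

# Quadric-error decimation removes polygons while preserving topology,
# yielding a lighter mesh that is cheaper to render and store.
simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=10_000)
print(len(mesh.triangles), "->", len(simplified.triangles), "triangles")
```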
Abstract:
The representation of real objects in virtual environments has applications in many areas, such as cartography, mixed reality and reverse engineering. These objects can be generated in two ways: manually, with CAD (Computer Aided Design) tools, or automatically, by means of surface reconstruction techniques. The simpler the 3D model, the easier it is to process and store. Multiresolution reconstruction methods can generate polygonal meshes at different levels of detail and, to improve the response time of a program, distant objects can be represented with few details while more detailed models are used for closer objects. This work presents a new approach to multiresolution surface reconstruction, particularly suited to noisy, low-definition data such as point clouds captured with the Kinect sensor.
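As a minimal sketch of the distance-based level-of-detail selection described above (not the work's actual method), the snippet below picks a coarser mesh for more distant objects; the thresholds and mesh identifiers are illustrative placeholders.

```python
# Distance-based LOD selection: closer objects get the detailed mesh.
LODS = [
    (5.0, "mesh_high"),          # closer than 5 m: full-detail mesh
    (15.0, "mesh_medium"),       # 5-15 m: intermediate resolution
    (float("inf"), "mesh_low"),  # beyond 15 m: coarsest reconstruction
]

def select_lod(distance_m: float) -> str:
    for max_distance, mesh in LODS:
        if distance_m < max_distance:
            return mesh
    return LODS[-1][1]

for d in (2.0, 8.0, 40.0):
    print(d, "m ->", select_lod(d))
```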
Abstract:
Mixed Reality proposes scenes that combine the virtual and real worlds, offering the user an intuitive way of interacting according to a specific application. This tutorial paper aims at presenting the fundamental concepts of this emerging kind of human-computer interface.
Abstract:
The proliferation of video games and other applications of computer graphics in everyday life demands a much easier way to create animatable virtual human characters. Traditionally, this has been the job of highly skilled artists and animators who painstakingly model, rig and animate their avatars, and usually have to tune them for each application and transmission/rendering platform. The emergence of virtual/mixed reality environments also calls for practical and cost-effective ways to produce custom models of actual people. The purpose of the present dissertation is to bring 3D human scanning closer to the average user. For this, two different techniques are presented, one passive and one active. The first one is a fully automatic system for generating statically multi-textured avatars of real people captured with several standard cameras. Our system uses a state-of-the-art shape from silhouette technique to retrieve the shape of the subject. However, to deal with the lack of detail that is common in the facial region for this kind of technique, which does not handle concavities correctly, our system proposes an approach to improve the quality of this region. This face enhancement technique uses a generic facial model which is transformed according to the specific facial features of the subject. Moreover, this system features a novel technique for generating view-independent texture atlases computed from the original images. This static multi-texturing system yields a seamless texture atlas calculated by combining the color information from several photos. We suppress the color seams due to image misalignments and irregular lighting conditions that multi-texturing approaches typically suffer from, while minimizing the blurring effect introduced by color blending techniques. The second technique features a system to retrieve a fully animatable 3D model of a human using a commercial depth sensor. Unlike other approaches in the current state of the art, our system does not require the user to be completely still throughout the scanning process, nor does it require the depth sensor to be moved around the subject to cover its whole surface. Instead, the depth sensor remains static and the skeleton tracking information is used to compensate for the user's movements during the scanning stage.
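As a hedged sketch of the shape-from-silhouette idea underlying the first technique, the snippet below carves a voxel set by keeping only points whose projections fall inside every camera's silhouette mask; the pinhole camera model and data layout are simplified placeholders, not the dissertation's pipeline.

```python
import numpy as np

def carve_visual_hull(voxels, cameras, silhouettes):
    """Keep voxels whose projection lies inside every silhouette (visual hull).

    voxels:      (N, 3) array of candidate 3D points.
    cameras:     list of 3x4 projection matrices (placeholder pinhole model).
    silhouettes: list of boolean masks (H, W), True where the subject appears.
    """
    keep = np.ones(len(voxels), dtype=bool)
    homog = np.hstack([voxels, np.ones((len(voxels), 1))])  # (N, 4)
    for P, mask in zip(cameras, silhouettes):
        proj = homog @ P.T                       # (N, 3) homogeneous pixel coords
        u = (proj[:, 0] / proj[:, 2]).astype(int)
        v = (proj[:, 1] / proj[:, 2]).astype(int)
        h, w = mask.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        inside[inside] &= mask[v[inside], u[inside]]
        keep &= inside                           # outside any silhouette -> carve away
    return voxels[keep]
```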
Abstract:
Editorial: The 2015 BCLA annual conference was another fantastic affair. It was the first time the conference was held in the beautiful city of Liverpool. The venue was great and the programme was excellent. The venue overlooked the River Mersey and many of the hotels were local boutique hotels. I stayed in one which was formerly the offices of the White Star Line—where the RMS Titanic was originally registered. The hotel decor was consistent with its historic significance. The BCLA gala dinner was held in the hugely impressive Anglican Cathedral with entertainment from a Beatles tribute band. That will certainly be a hard act to follow at the next conference in 2017. Brian Tompkins took the reins as the new BCLA president. Professor Fiona Stapleton was the recipient of the BCLA Gold Medal Award. The winner of the poster competition was Dorota Szczesna-Iskander with a poster entitled ‘Dry Contact lens poor wettability and visual performance’. Second place was Renee Reeder with her poster entitled ‘Abnormal Rosacea as a differential diagnosis in corneal scarring’. And third place was Maria Jesus Gonzalez-Garcia with her poster entitled ‘Dry Effect of the Environmental Conditions on Tear Inflammatory Mediators Concentration in Contact Lens Wearers’. The photographic competition winner was Professor Wolfgang Sickenberger from Jena in Germany. The Editorial Panel of CLAE met at the BCLA conference for their first biannual meeting, where the journal metrics were discussed. In terms of the number of new paper submissions, CLAE seems to have plateaued after rapid growth over the last few years. That increase could be attributed to the fact that CLAE was awarded an impact factor for the first time in 2012. This year it seems that impact factors across nearly all ophthalmic-related journals have dropped. This could in part be because last year was a ‘Research Excellence Framework’ (REF) year for UK universities, where they are judged on the quality of their research output. The next REF is in 2020, so we may see changes nearing that time. Looking at article downloads, there seems to be a continued rise in figures. Currently CLAE attracts around 85,000 downloads per year (an increase of around 10,000 per year for the last few years) and the 2015 prediction is 120,000! With this in mind, and with other contributing factors too, the BCLA has decided to move to online delivery of CLAE to its members starting from issue 5 of 2015. Some members do like to flick through the pages of a hard copy of the journal, so members will still have the option of receiving a hard copy through the post, but the default journal delivery method will now be online. The BCLA office will send various alerts and content details to members' email addresses. To access CLAE online you will need to log in via the BCLA web page; currently you then click on ‘Resources’ and, under ‘Free and Discounted Publications’, you will see CLAE. This actually takes you to CLAE's own webpage (www.contactlensjournal.com), but you need to log in via the BCLA web page. The BCLA plans to change these weblinks so that from the BCLA web page you can reach the journal website much more easily, with the choice of going directly to the general CLAE website or straight to the current issue. In 2016 you will see an even easier way of accessing CLAE online, as the BCLA will launch a CLAE application for mobile devices where the journal can be downloaded as a ‘flick-book’.
This is a great way of bringing CLAE into the modern era, where people access their information in newer ways. For many, the BCLA conference was part of a very busy conference week, as it was preceded by the International Association of Contact Lens Educators' (IACLE) Third World Congress, held in Manchester over the 4 days before the BCLA conference. The first and second IACLE World Congresses were held in Waterloo, Canada in 1994 and 2000 respectively and were hosted by Professor Des Fonn. Professor Fonn was the recipient of the first ever IACLE lifetime achievement award. The Third IACLE World Congress saw more than 100 contact lens educators and industry representatives from around 30 countries gather in the UK for the four-day event, hosted by The University of Manchester. Delegates gained hands-on experience of innovations in teaching, such as learning delivery systems, the use of iPads in the classroom and for creating ePub content, and augmented and virtual reality technologies. IACLE members around the world also took part via a live online broadcast. The Third IACLE World Congress was made possible by the generous support of sponsors Alcon, CooperVision and Johnson & Johnson Vision Care. For more information, see the IACLE web page (www.iacle.org).
Abstract:
A major and growing problem faced by modern society is the high production of waste and the related effects it produces, such as environmental degradation and pollution of various ecosystems, with direct effects on quality of life. Thermal treatment technologies have been widely used to treat these wastes, and thermal plasma is gaining importance in this kind of processing. This work focuses on developing an optimized supervision and control system applied to a plant for processing petrochemical waste and effluents using thermal plasma. The system is basically composed of an inductive plasma torch, reactors, a washing/exhaust-gas system and the RF power supply used to generate the plasma. Supervision and control of the plant is of paramount importance to the ultimate goal. For this reason, several supporting tools were created in the search for greater process efficiency, including event generation, plotting, distribution and storage of data for each subsystem of the plant, process execution, control, and 3D visualization of each subsystem, among others. A communication platform between the virtual 3D plant architecture and the real control structure (hardware) was created. The goal is to use mixed reality concepts and develop strategies for different types of control that allow the 3D plant to be manipulated without restrictions or fixed schedules, optimizing the actual process. Studies have shown that one of the best ways to implement control of inductively coupled plasma generation is to use intelligent control, both for its efficiency in the results and for its low implementation cost, without requiring a specific model. A control strategy using Fuzzy Logic (Fuzzy-PI) was developed and implemented, and the results showed satisfactory response time and viability.
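As a minimal, hedged sketch of a Fuzzy-PI scheme in the spirit of the abstract above (a generic gain-scheduling formulation, not the plant's tuned controller), the snippet below fuzzifies the error with triangular membership functions and blends PI gains by weighted-average defuzzification; breakpoints, gains and the sampling period are illustrative placeholders.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_pi_step(error, integral, dt=0.1):
    # Fuzzify |error| into "small", "medium", "large" (normalized units).
    e = abs(error)
    mu = {"small": tri(e, -0.5, 0.0, 0.5),
          "medium": tri(e, 0.2, 0.6, 1.0),
          "large": tri(e, 0.7, 1.5, 10.0)}
    # Rule base: larger error -> stronger proportional, weaker integral action.
    kp_table = {"small": 0.5, "medium": 1.0, "large": 2.0}
    ki_table = {"small": 0.2, "medium": 0.1, "large": 0.05}
    total = sum(mu.values()) or 1.0
    kp = sum(mu[k] * kp_table[k] for k in mu) / total  # weighted-average defuzzification
    ki = sum(mu[k] * ki_table[k] for k in mu) / total
    integral += error * dt
    return kp * error + ki * integral, integral

u, acc = fuzzy_pi_step(error=0.8, integral=0.0)
```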
Abstract:
Televisions (TVs) and VR Head-Mounted Displays (VR HMDs) are used in shared and social spaces in the home. This thesis posits that these displays do not sufficiently reflect the collocated, social contexts in which they reside, nor do they sufficiently support shared experiences at-a-distance. This thesis explores how the role of TVs and VR HMDs can go beyond presenting a single entertainment experience, instead supporting social and shared use in both collocated and at-a-distance contexts. For collocated TV, this thesis demonstrates that the TV can be augmented to facilitate multi-user interaction, support shared and independent activities and multi-user use through multi-view display technology, and provide awareness of the multi-screen activity of those in the room, allowing the TV to reflect the social context in which it resides. For at-a-distance TV, existing smart TVs are shown to be capable of supporting synchronous at-a-distance activity, broadening the scope of media consumption beyond the four walls of the home. For VR HMDs, collocated proximate persons can be seamlessly brought into mixed reality VR experiences based on engagement, improving VR HMD usability. Applied to at-a-distance interactions, these shared mixed reality VR experiences can enable more immersive social experiences that approximate viewing together as if in person, compared to at-a-distance TV. Through an examination of TVs and VR HMDs, this thesis demonstrates that consumer display technology can better support users to interact, and share experiences and activities, with those they are close to.
Abstract:
Human activity is very dynamic and subtle, and most physical environments are also highly dynamic, supporting a vast range of social practices that do not map directly into any immediate ubiquitous computing functionality. Identifying what is valuable to people is very hard and obviously leads to great uncertainty regarding the type of support needed and the type of resources needed to create such support. We have addressed the issues of system development through the adoption of a crowdsourced software development model [13]. We have designed and developed Anywhere places, an open and flexible system support infrastructure for Ubiquitous Computing that is based on a balanced combination of global services and applications and situated devices. Evaluation, however, is still an open problem. The characteristics of ubiquitous computing environments make their evaluation very complex: there are no globally accepted metrics and it is very difficult to evaluate large-scale and long-term environments in real contexts. In this paper, we describe a first proposal for a hybrid 3D simulated prototype of Anywhere places that combines simulated and real components to generate a mixed reality which can be used to assess the envisaged ubiquitous computing environments [17].
Abstract:
As ubiquitous systems have moved out of the lab and into the world, the need to think more systematically about how they are realised has grown. This talk will present intradisciplinary work I have been engaged in with other computing colleagues on how we might develop more formal models and understanding of ubiquitous computing systems. The formal modelling of computing systems has proved valuable in areas as diverse as reliability, security and robustness. However, the emergence of ubiquitous computing raises new challenges for formal modelling due to the contextual nature of these systems and their dependence on unreliable sensing systems. In this work we undertook an exploration of modelling an example ubiquitous system called the Savannah game using the approach of bigraphical rewriting systems. This required an unusual intradisciplinary dialogue between formal computing and human-computer interaction researchers to model systematically four perspectives on Savannah: computational, physical, human and technical. Each perspective in turn drew upon a range of different modelling traditions. For example, the human perspective built upon previous work on proxemics, which uses physical distance as a means to understand interaction. In this talk I hope to show how our model explains observed inconsistencies in Savannah and extend it to resolve these. I will then reflect on the need for intradisciplinary work of this form and the importance of the bigraph diagrammatic form in supporting this form of engagement. Speaker biography: Tom Rodden (rodden.info) is a Professor of Interactive Computing at the University of Nottingham. His research brings together a range of human and technical disciplines, technologies and techniques to tackle the human, social, ethical and technical challenges involved in ubiquitous computing and the increasing use of personal data. He leads the Mixed Reality Laboratory (www.mrl.nott.ac.uk), an interdisciplinary research facility that is home to a team of over 40 researchers. He founded and currently co-directs the Horizon Digital Economy Research Institute (www.horizon.ac.uk), a university-wide interdisciplinary research centre focusing on ethical use of our growing digital footprint. He previously directed the EPSRC Equator IRC (www.equator.ac.uk), a national interdisciplinary research collaboration exploring the place of digital interaction in our everyday world. He is a fellow of the British Computer Society and the ACM and was elected to the ACM SIGCHI Academy in 2009 (http://www.sigchi.org/about/awards/).
Abstract:
This thesis describes the development of a cross-platform application for collecting data on urban accessibility. A Pervasive GWAP (Game With A Purpose) was created, structured to collect, through a mixed-reality game, geolocation data on architectural barriers/facilities in the urban environment, in order to map the territory. The game is aimed at children accompanied by teachers/parents and involves the use of mobile devices such as tablets and smartphones running the Android operating system. The devices' GPS function was used to geolocate the players and the reported barriers/facilities, and the camera was used to scan the QR codes employed as an incentive to keep users engaged. The application is written using web technologies such as HTML, CSS, JavaScript, PHP and JSON, and cross-platform development was made possible by the Apache Cordova framework. This tool allows web technologies to be used to develop mobile applications, generating native code supported by operating systems such as Android, iOS and BlackBerry.