852 results for HumanComputer-Interaction Wearable Hands-free HealthCare Augmented-Reality Moverio Thalmic-Myo
Abstract:
This thesis describes the development and architecture of the software that makes up the Miradouro Virtual@ (Virtual Sightseeing™), more specifically its interface component. The Miradouro Virtual@ is a device whose purpose, like that of traditional tourist binoculars, is to observe the landscape, but whose interaction is not limited to simple individual observation. It uses augmented reality to superimpose computer-generated images onto real images captured by an image-acquisition device (typically a video camera) and shows the result on a touchscreen, thereby combining virtual and multimedia elements with the real landscape. The final, composited image gives the user a new dimension of the surrounding space, allowing a previously invisible layer of information to be explored. Because they are sensitive to the orientation of the Miradouro Virtual@, the virtual and multimedia elements adapt to the movements of the device. The Miradouro Virtual@ is a product composed of several hardware and software elements. The focus of this thesis is solely on the software components, more specifically the interface. It presents the known limitations of the previous version of the software and shows the solutions found to overcome some of those limitations.
ABSTRACT: This thesis focuses on the design and development of the Virtual Sightseeing™ software, more specifically on the interface component. The Virtual Sightseeing™ is a device similar to traditional scenic viewers that takes advantage of their familiarity and popularity to build an innovative system. It uses augmented reality to superimpose, in real time, computer-generated images onto a live stream captured by a video camera, and displays them on a touchscreen. It allows multimedia elements to be added to the real scenery by compositing them into the image presented to the user. The multimedia information and virtual elements that are displayed are sensitive to the orientation and position of the device, and they change as the user manually changes the orientation of the device. The Virtual Sightseeing™ is comprised of several hardware and software components. The focus of this thesis is on the software, more specifically on the interface component. It presents the known limitations of the previous software version and how they were overcome in this new version.
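As a rough illustration of the orientation-sensitive overlay described in this abstract, the sketch below places a point of interest at a horizontal screen position from the viewer's current yaw and the camera's field of view. All names and values (poi_bearing, camera_yaw, hfov_deg, screen_w) are hypothetical; this is a minimal sketch of the general technique, not the Miradouro Virtual@ implementation.

```python
# Minimal sketch (not the Miradouro Virtual@ code): place a point-of-interest
# label on screen from the viewer's orientation.
def wrap_deg(angle):
    """Wrap an angle to the range [-180, 180) degrees."""
    return (angle + 180.0) % 360.0 - 180.0

def poi_screen_x(poi_bearing, camera_yaw, hfov_deg, screen_w):
    """Horizontal pixel position of a POI, or None if it is outside the view."""
    offset = wrap_deg(poi_bearing - camera_yaw)   # degrees from the view centre
    if abs(offset) > hfov_deg / 2.0:
        return None                               # POI not inside the camera frustum
    return screen_w / 2.0 + (offset / hfov_deg) * screen_w

# Example: a landmark at bearing 95 deg, viewer facing 80 deg, 60 deg FOV, 1920 px wide
print(poi_screen_x(95.0, 80.0, 60.0, 1920))       # -> 1440.0
```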
Abstract:
This document describes a real plant equipped with an intelligent vehicle that allows it to navigate indoor environments, respond to stimuli from its surroundings, interact with humans through augmented reality, detect the presence of fire, and request help via Twitter. The experiments show that there are no false positives in fire detection, and that fire is detected in more than 50% of the sensor readings at distances under 5 m with a line of sight between the sensor and the flame. Communication over XBee radios in indoor environments is effective up to at least 25 m between the radios.
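A hedged sketch of the kind of alert loop this abstract describes: flame-sensor readings arriving over an XBee serial link are polled, and a help request is raised when fire is seen. The port name, line format, window size, and post_help_request() are assumptions, not the thesis's firmware; the 50% trigger fraction merely echoes the detection rate reported above.

```python
# Hypothetical alert loop: poll flame-sensor readings from an XBee serial link
# and ask for help when a fire is consistently detected.
import serial  # pyserial

FLAME_THRESHOLD = 0.5   # fraction of recent readings that must report a flame

def post_help_request(message):
    # Placeholder for the Twitter call (e.g. via the tweepy library).
    print("HELP REQUEST:", message)

def monitor(port="/dev/ttyUSB0", baud=9600, window=20):
    link = serial.Serial(port, baud, timeout=1)
    recent = []
    while True:
        line = link.readline().decode(errors="ignore").strip()
        if not line:
            continue
        recent.append(1 if line == "FLAME" else 0)
        recent = recent[-window:]
        if len(recent) == window and sum(recent) / window >= FLAME_THRESHOLD:
            post_help_request("Fire detected near the plant, please help!")
            recent.clear()

# monitor()  # requires an XBee receiver attached at /dev/ttyUSB0
```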
Abstract:
Augmented Reality (AR) applications often require knowledge of the user’s position in some global coordinate system in order to draw the augmented content at its correct position on the screen. The most common method for coarse positioning is the Global Positioning System (GPS). One of the advantages of GPS is that GPS receivers can be found in almost every modern mobile device. This research was conducted in order to determine the accuracy of different GPS receivers. The tests included seven consumer-grade tablets, three external GPS modules and one professional-grade GPS receiver. All of the devices were tested with both static and mobile measurements. It was concluded that even the cheaper external GPS receivers were notably more accurate than the GPS receivers of the tested tablets. The absolute accuracy of the tablets is difficult to determine from the test results, since the results vary by a large margin between different measurements. The accuracy of the tested tablets in static measurements was between 0.30 meters and 13.75 meters.
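For illustration, a static-accuracy figure like those quoted above can be computed as the mean horizontal distance between logged GPS fixes and a surveyed reference point. The sketch below uses the standard haversine distance; the fix list and reference coordinates are made up and do not come from the study.

```python
# Mean horizontal error of static GPS fixes against a known reference point.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS-84 points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def static_accuracy(fixes, ref_lat, ref_lon):
    """Mean horizontal error, in metres, of a list of (lat, lon) fixes."""
    errors = [haversine_m(lat, lon, ref_lat, ref_lon) for lat, lon in fixes]
    return sum(errors) / len(errors)

fixes = [(60.16092, 24.95010), (60.16094, 24.95006), (60.16090, 24.95013)]
print(round(static_accuracy(fixes, 60.16091, 24.95008), 2), "m")
```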
Abstract:
Chemical admixtures, when properly selected and dosed, play an important role in obtaining adequate slurry systems for quality primary cementing operations. They ensure the proper operation of a well and reduce costs attributed to corrective cementing jobs. Controlling the volume lost by filtration from the slurry into permeable zones is one of the most important requirements in an operation, and it is commonly controlled with chemical admixtures such as carboxymethylcellulose (CMC). However, problems related to temperature, salt tolerance and a secondary retarding effect are commonly reported in the literature. In this context, the use of an aqueous dispersion of non-ionic polyurethane was proposed to control filtrate loss, given its low ionic interaction with the free ions present in slurries in the wet state. This study therefore aims to assess the efficiency of polyurethane in reducing filtrate loss under different temperature and pressure conditions, as well as its synergistic effect with other admixtures. The temperatures and pressures used in the laboratory tests simulate the conditions of oil wells with depths of 500 to 1200 m. The polyurethane showed resistance to thermal degradation and stability in the presence of salts. With the increase in polymer concentration there was a considerable decrease in the volume lost by filtration, and this remained effective even as the temperature increased.
Abstract:
The general framework of this research is an interest in analysing urban spaces through mobile communication devices (digital media), with a view to their transformation through the collaborative participation of the population. Artistic practices, complemented by methods for studying cities (capable of addressing their complexity in a multi-operative and flexible way), were integrated into the research in light of the marked development these practices have undergone as tools for understanding the current urban condition. The result focuses on the hybridisation of processes that add collective knowledge about urban structures, expressed in dynamic mappings that promote augmented readings of urban realities. Building on the theoretical framework and the corresponding literature review (covering the disciplinary areas of locative media art, urban analysis and dynamic mapping), two complementary artefacts were developed in three phases: i) pre-production; ii) production; iii) post-production. In the first phase, the artefacts were conceptualised, criteria were defined and analysis parameters were specified; the software to be used was also considered, as was the selection of the urban actors who took part in experiences associated with the research. In the production phase, several actions of appropriation and apprehension of selected urban spaces in the town of Caminha were undertaken, using the technique of technology-mediated walking. In the third phase, the resulting information was analysed, compared and systematised in a final reflection. Marks and appropriations were recorded in Caminha, integrating approaches from locative media art, urban morphology and digital technologies. Through this methodology, the response to the research objectives helps to fill the gap identified in the state of the art, by demonstrating the relevance of the operative convergence between urban apprehension and informational and communicational flows for revealing everyday urban spatial experiences (past and present) that take place on the physical structure of cities. In short, artistic practices drawn from locative media art express, in dynamic mappings and through augmented reality, the collective becoming of individual experiences in city spaces, adding stories to their memory and urban history.
Abstract:
Artist David Lyons and computer scientist David Flatla work collaboratively to create art that intentionally targets audiences of varying visual abilities, mediated through smart-device interfaces. Conceived as an investigation into theories and practices of visual perception, they explore the idea that artwork can be intentionally created to be experienced differently depending on one’s visual abilities. They have created motion graphics and supporting recolouring and colour vision deficiency (CVD) simulation software. Some of the motion graphics communicate details specifically to those with colour blindness/CVD by containing moving imagery seen only by those with CVD. Others contain moving images that those with typical colour vision can experience but that appear unchanging to people with CVD. All the artwork is revealed to both audiences through the use of specially programmed smart devices fitted with augmented reality recolouring and CVD simulation software. The visual elements come from various sources, including the Ishihara Colour Blind Test, movie marquees, and game shows. The software created reflects the perceptual capabilities of most individuals with reduced colour vision. The development of the simulation software and the motion graphic series are examined and discussed from both computer science and artistic positions.
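For readers unfamiliar with CVD simulation, the sketch below shows the usual structure of such a pass: decode sRGB to linear RGB, apply a 3x3 dichromacy-simulation matrix per pixel, and re-encode. This is not Lyons and Flatla's software; the identity matrix is only a placeholder, and real matrices would be taken from published models such as Machado et al. (2009) or Brettel/Viénot-style simulations.

```python
# Structural sketch of a CVD simulation pass (placeholder matrix, illustrative only).
import numpy as np

def srgb_to_linear(c):
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(c):
    return np.where(c <= 0.0031308, 12.92 * c, 1.055 * c ** (1 / 2.4) - 0.055)

def simulate_cvd(img_srgb, sim_matrix):
    """img_srgb: HxWx3 float array in [0, 1]; sim_matrix: 3x3 simulation matrix."""
    lin = srgb_to_linear(img_srgb)
    sim = lin @ sim_matrix.T               # apply the matrix to every pixel
    return np.clip(linear_to_srgb(sim), 0.0, 1.0)

PLACEHOLDER_MATRIX = np.eye(3)             # substitute a published deuteranopia matrix here
frame = np.random.rand(4, 4, 3)
print(simulate_cvd(frame, PLACEHOLDER_MATRIX).shape)
```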
Abstract:
Technology is increasingly infiltrating all aspects of our lives, and the rapid uptake of devices that live near, on or in our bodies is facilitating radical new ways of working, relating and socialising. This distribution of technology into the very fabric of our everyday life creates new possibilities, but also raises questions regarding our future relationship with data and the quantified self. By embedding technology into the fabric of our clothes and accessories, it becomes ‘wearable’. Such ‘wearables’ enable the acquisition of, and connection to, vast amounts of data about people and environments in order to provide life-augmenting levels of interactivity. Wearable sensors, for example, offer the potential for significant benefits in the future management of our wellbeing. Fitness trackers such as ‘Fitbit’ and ‘Garmin’ give wearers the ability to monitor their personal fitness indicators, while other wearables provide healthcare professionals with information that improves diagnosis.

While the rapid uptake of wearables may offer unique and innovative opportunities, there are also concerns surrounding the high levels of data sharing that come as a consequence of these technologies. As more ‘smart’ devices connect to the Internet, and as connectivity becomes increasingly available (e.g. via Wi-Fi, Bluetooth), more products, artefacts and things are becoming interconnected. This digital connection of devices is called the ‘Internet of Things’ (IoT). The IoT is spreading rapidly, with many traditionally non-online devices becoming increasingly connected: products such as mobile phones, fridges, pedometers, coffee machines, video cameras, cars and clothing. It is growing at a rapid rate, with estimates indicating that by 2020 there will be over 25 billion connected things globally. As the number of devices connected to the Internet increases, so too does the amount of data collected and the type of information that is stored and potentially shared.

The ability to collect massive amounts of data, known as ‘big data’, can be used to better understand and predict behaviours across all areas of research, from the societal and economic to the environmental and biological. With this kind of information at our disposal, we have a more powerful lens with which to perceive the world, and the resulting insights can be used to design more appropriate products, services and systems. It can, however, also be used as a method of surveillance, suppression and coercion by governments or large organisations. This is becoming particularly apparent in advertising that targets audiences based on the individual preferences revealed by the data collected from social media and online devices such as GPS systems or pedometers. This type of technology also provides fertile ground for public debates around future fashion, identity and broader social issues such as culture, politics and the environment.

The potential implications of these types of technological interactions via wearables, through and with the IoT, have never been more real or more accessible. But, as highlighted, this interconnectedness also brings with it complex technical, ethical and moral challenges. Data security and the protection of privacy and personal information will become ever more present in the current and future ethical and moral debates of the 21st century. This type of technology is also a stepping stone to a future that includes implantable technology, biotechnologies, interspecies communication and augmented humans (cyborgs).
Technologies that live symbiotically and perpetually in our bodies, the built environment and the natural environment are no longer the stuff of science fiction; they are in fact a reality. So, where next?... The works exhibited in Wear Next_ provide a snapshot of the broad spectrum of wearables in design and in development internationally. This exhibition has been curated to serve as a platform for broader debate around future technology, our mediated future selves and the evolution of human interactions. As you explore the exhibition, may we ask that you pause and think to yourself: what might we... Wear Next_?

WEARNEXT ONLINE LISTINGS AND MEDIA COVERAGE:
http://indulgemagazine.net/wear-next/
http://www.weekendnotes.com/wear-next-exhibition-gallery-artisan/
http://concreteplayground.com/brisbane/event/wear-next_/
http://www.nationalcraftinitiative.com.au/news_and_events/event/48/wear-next
http://bneart.com/whats-on/wear-next_/
http://creativelysould.tumblr.com/post/124899079611/creative-weekend-art-edition
http://www.abc.net.au/radionational/programs/breakfast/smartly-dressed-the-future-of-wearable-technology/6744374
http://couriermail.newspaperdirect.com/epaper/viewer.aspx

RADIO COVERAGE:
http://www.abc.net.au/radionational/programs/breakfast/wear-next-exhibition-whats-next-for-wearable-technology/6745986

TELEVISION COVERAGE:
http://www.abc.net.au/radionational/programs/breakfast/wear-next-exhibition-whats-next-for-wearable-technology/6745986
https://au.news.yahoo.com/video/watch/29439742/how-you-could-soon-be-wearing-smart-clothes/#page1
Abstract:
Having to carry input devices can be inconvenient when interacting with wall-sized, high-resolution tiled displays. Such displays are typically driven by a cluster of computers. Running existing games on a cluster is non-trivial, and the performance attained using software solutions like Chromium is not good enough. This paper presents a touch-free, multi-user, human-computer interface for wall-sized displays that enables completely device-free interaction. The interface is built using 16 cameras and a cluster of computers, and is integrated with the games Quake 3 Arena (Q3A) and Homeworld. The two games were parallelized using two different approaches in order to run on a 7x4-tile, 21-megapixel display wall with good performance. The touch-free interface enables interaction with a latency of 116 ms, of which 81 ms are due to the camera hardware. The rendering performance of the games is compared to that of their sequential counterparts running on the display wall using Chromium. Parallel Q3A’s frame rate is an order of magnitude higher than when using Chromium. The parallel version of Homeworld performed on par with its sequential counterpart, which did not run at all using Chromium. Informal use of the touch-free interface indicates that it works better for controlling Q3A than Homeworld.
Abstract:
Mobile and wearable computers present input/output problems due to limited screen space and interaction techniques. When mobile, users typically focus their visual attention on navigating their environment, making visually demanding interface designs hard to operate. This paper presents two multimodal interaction techniques designed to overcome these problems and allow truly mobile, 'eyes-free' device use. The first is a 3D audio radial pie menu that uses head gestures for selecting items. An evaluation of a range of different audio designs showed that egocentric sounds reduced task completion time and perceived annoyance, and allowed users to walk closer to their preferred walking speed. The second is a sonically enhanced 2D gesture recognition system for use on a belt-mounted PDA. An evaluation of the system with and without audio feedback showed that users' gestures were more accurate when dynamically guided by audio feedback. These novel interaction techniques demonstrate effective alternatives to visual-centric interface designs on mobile devices.
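To make the radial-menu idea concrete, the sketch below maps a head-yaw angle to one of N pie sectors, in the spirit of the egocentric audio menu described above. The item names, sector layout, and lack of dwell/selection logic are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch: select a pie-menu item from head yaw (0 deg = straight ahead, clockwise).
ITEMS = ["play", "pause", "next", "previous", "volume up", "volume down"]

def item_for_yaw(yaw_deg, items=ITEMS):
    """Map a head yaw in degrees to the menu item whose sector contains it."""
    sector = 360.0 / len(items)
    index = int(((yaw_deg % 360.0) + sector / 2.0) // sector) % len(items)
    return items[index]

print(item_for_yaw(5.0))     # -> 'play'  (near straight ahead)
print(item_for_yaw(125.0))   # -> 'next'  (head turned well to the right)
```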
Abstract:
Effective interaction with personal computers is a basic requirement for many of the functions that we perform in our daily lives. With the rapid emergence of the Internet and the World Wide Web, computers have become one of the premier means of communication in our society. Unfortunately, these advances have not become equally accessible to physically handicapped individuals. In reality, a significant number of individuals with severe motor disabilities, due to a variety of causes such as Spinal Cord Injury (SCI) and Amyotrophic Lateral Sclerosis (ALS), may not be able to use the computer mouse as a vital input device for computer interaction. The purpose of this research was to further develop and improve an existing alternative input device for computer cursor control to be used by individuals with severe motor disabilities. This thesis describes the development and the underlying principle of a practical hands-off human-computer interface based on Electromyogram (EMG) signals and Eye Gaze Tracking (EGT) technology, compatible with the Microsoft Windows operating system (OS). Results of the software developed in this thesis show a significant improvement in the performance and usability of the EMG/EGT cursor-control HCI.
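As a rough illustration of the EMG/EGT idea, the sketch below uses eye-gaze coordinates to position the cursor and an EMG burst (RMS above a threshold) as the click. The functions read_gaze() and read_emg_window() stand in for real tracker and amplifier APIs, and the threshold value is invented; none of this is the thesis's actual software.

```python
# Illustrative control cycle: gaze moves the pointer, a muscle contraction clicks.
import math
import pyautogui  # sends OS-level mouse events

EMG_RMS_THRESHOLD = 0.35   # illustrative value; tuned per user in practice

def rms(samples):
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def update_cursor(read_gaze, read_emg_window):
    """One cycle: move the cursor to the gaze point, click on EMG activation."""
    x, y = read_gaze()                       # screen coordinates from the eye tracker
    pyautogui.moveTo(x, y)
    if rms(read_emg_window()) > EMG_RMS_THRESHOLD:
        pyautogui.click()

# Example with fake data sources:
update_cursor(lambda: (640, 360), lambda: [0.5, -0.4, 0.6, -0.5])
```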
A New Method for Modeling Free Surface Flows and Fluid-structure Interaction with Ocean Applications
Abstract:
The computational modeling of ocean waves and ocean-faring devices poses numerous challenges. Among these are the need to stably and accurately represent both the fluid-fluid interface between water and air as well as the fluid-structure interfaces arising between solid devices and one or more fluids. As techniques are developed to stably and accurately balance the interactions between fluid and structural solvers at these boundaries, a similarly pressing challenge is the development of algorithms that are massively scalable and capable of performing large-scale three-dimensional simulations on reasonable time scales. This dissertation introduces two separate methods for approaching this problem, with the first focusing on the development of sophisticated fluid-fluid interface representations and the second focusing primarily on scalability and extensibility to higher-order methods.
We begin by introducing the narrow-band gradient-augmented level set method (GALSM) for incompressible multiphase Navier-Stokes flow. This is the first use of the high-order GALSM for a fluid flow application, and its reliability and accuracy in modeling ocean environments are tested extensively. The method demonstrates numerous advantages over the traditional level set method, among them improved conservation of fluid volume and the representation of subgrid structures.
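For context, these are the standard equations the gradient-augmented level set idea rests on (textbook formulation, not quoted from the dissertation): the interface is the zero set of the level set function, the level set function is advected with the flow, and differentiating that transport equation yields an evolution equation for its gradient, which gradient-augmented schemes advect alongside it.

```latex
% Standard level set transport and the induced equation for its gradient
\begin{align}
  \frac{\partial \phi}{\partial t} + \mathbf{u}\cdot\nabla\phi &= 0,
  \qquad \Gamma(t) = \{\mathbf{x} : \phi(\mathbf{x},t) = 0\},\\
  \frac{\partial \boldsymbol{\psi}}{\partial t}
    + (\mathbf{u}\cdot\nabla)\boldsymbol{\psi}
    + (\nabla\mathbf{u})^{\mathsf T}\boldsymbol{\psi} &= 0,
  \qquad \boldsymbol{\psi} = \nabla\phi .
\end{align}
```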
Next, we present a finite-volume algorithm for solving the incompressible Euler equations in two and three dimensions in the presence of a flow-driven free surface and a dynamic rigid body. In this development, the chief concerns are efficiency, scalability, and extensibility (to higher-order and truly conservative methods). These priorities informed a number of important choices: The air phase is substituted by a pressure boundary condition in order to greatly reduce the size of the computational domain, a cut-cell finite-volume approach is chosen in order to minimize fluid volume loss and open the door to higher-order methods, and adaptive mesh refinement (AMR) is employed to focus computational effort and make large-scale 3D simulations possible. This algorithm is shown to produce robust and accurate results that are well-suited for the study of ocean waves and the development of wave energy conversion (WEC) devices.
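As background for the formulation sketched above (again a standard statement, not text from the dissertation), the water phase satisfies the incompressible Euler equations, and substituting the air phase by a pressure boundary condition amounts to imposing atmospheric pressure on the free surface:

```latex
% Incompressible Euler equations for the water phase with an atmospheric
% pressure condition on the free surface (standard formulation)
\begin{align}
  \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}
    &= -\frac{1}{\rho}\nabla p + \mathbf{g}, \\
  \nabla\cdot\mathbf{u} &= 0, \\
  p &= p_{\mathrm{atm}} \quad \text{on the free surface } \Gamma(t).
\end{align}
```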
Abstract:
Quantum mechanics predicts that our physical reality is influenced by events that can potentially happen but factually do not occur. Interaction-free measurements (IFMs) exploit this counterintuitive influence to detect the presence of an object without requiring any interaction with it. Here we propose and realize an IFM concept based on an unstable many-particle system. In our experiments, we employ an ultracold gas in an unstable spin configuration, which can undergo a rapid decay. The object, realized by a laser beam, prevents this decay because of the indirect quantum Zeno effect, and thus its presence can be detected without interacting with a single atom. Contrary to existing proposals, our IFM does not require single-particle sources and is only weakly affected by losses and decoherence. We demonstrate confidence levels of 90%, well beyond previous optical experiments.