962 results for augmented, reality, metaio, framework
Abstract:
This document describes a real plant fitted with an intelligent vehicle that allows it to navigate indoor environments, respond to stimuli from its surroundings, interact with humans through augmented reality, detect the presence of fire, and request help via Twitter. The experiments show no false positives in fire detection, and that fire is detected in more than 50% of the sensor readings at distances under 5 m when there is a line of sight between the sensor and the flame. Communication over XBee radios in indoor environments is effective up to at least 25 m between the radios.
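As an illustration of the alert logic summarized above (a minimal sketch, not the authors' implementation; the sensor driver and the send_tweet helper are hypothetical stand-ins):

import random
import time

FLAME_THRESHOLD = 0.5   # assumed normalized reading above which a flame is suspected
CONFIRM_READINGS = 3    # require consecutive positives to avoid false alarms

def read_flame_sensor():
    # Stand-in for the real flame-sensor driver; here it only simulates sensor noise.
    return random.random() * 0.3

def send_tweet(message):
    # Stand-in for a Twitter client call (e.g. via a library such as tweepy).
    print("TWEET:", message)

def monitor():
    positives = 0
    while True:
        if read_flame_sensor() > FLAME_THRESHOLD:
            positives += 1
        else:
            positives = 0
        if positives >= CONFIRM_READINGS:
            send_tweet("Fire detected near the plant, please send help!")
            positives = 0
        time.sleep(1.0)  # poll the sensor once per second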
Abstract:
Augmented Reality (AR) applications often require knowledge of the user's position in some global coordinate system in order to draw the augmented content at its correct position on the screen. The most common method for coarse positioning is the Global Positioning System (GPS). One of the advantages of GPS is that GPS receivers can be found in almost every modern mobile device. This research was conducted in order to determine the accuracies of different GPS receivers. The tests included seven consumer-grade tablets, three external GPS modules and one professional-grade GPS receiver. All of the devices were tested with both static and mobile measurements. It was concluded that even the cheaper external GPS receivers were notably more accurate than the GPS receivers of the tested tablets. The absolute accuracy of the tablets is difficult to determine from the test results, since the results vary by a large margin between measurements. The accuracy of the tested tablets in static measurements was between 0.30 meters and 13.75 meters.
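For reference, a minimal sketch of how static horizontal accuracy figures like those above can be derived from logged fixes against a known reference point (illustrative Python with made-up coordinates, using a small-area equirectangular approximation; not the study's actual processing):

import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius in meters

def horizontal_error_m(lat, lon, ref_lat, ref_lon):
    # Distance in meters from a GPS fix to a known reference point,
    # using an equirectangular approximation valid over small areas.
    dlat = math.radians(lat - ref_lat)
    dlon = math.radians(lon - ref_lon) * math.cos(math.radians(ref_lat))
    return EARTH_RADIUS_M * math.hypot(dlat, dlon)

def static_accuracy(fixes, ref_lat, ref_lon):
    # Mean and RMS horizontal error over a list of (lat, lon) fixes.
    errors = [horizontal_error_m(lat, lon, ref_lat, ref_lon) for lat, lon in fixes]
    mean_err = sum(errors) / len(errors)
    rms_err = math.sqrt(sum(e * e for e in errors) / len(errors))
    return mean_err, rms_err

# Made-up fixes clustered around a made-up reference point:
fixes = [(60.16952, 24.93545), (60.16953, 24.93548), (60.16950, 24.93541)]
print(static_accuracy(fixes, 60.169515, 24.935450))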
Abstract:
Nowadays, the new generation of computers provides high performance that makes it possible to build computationally expensive computer vision applications for mobile robotics. Building a map of the environment is a common task for a robot and an essential part of allowing robots to move through these environments. Traditionally, mobile robots have used a combination of several sensors based on different technologies. Lasers, sonars and contact sensors have typically been used in mobile robotic architectures; however, color cameras are an important sensor because we want robots to use the same information that humans use to sense and move through different environments. Color cameras are cheap and flexible, but a lot of work needs to be done to give robots enough visual understanding of scenes. Computer vision algorithms are computationally complex problems, but nowadays robots have access to different and powerful architectures that can be used for mobile robotics purposes. The advent of low-cost RGB-D sensors like Microsoft Kinect, which provide 3D colored point clouds at high frame rates, has made computer vision even more relevant in the mobile robotics field. The combination of visual and 3D data allows systems to use both computer vision and 3D processing and therefore to be aware of more details of the surrounding environment. The research described in this thesis was motivated by the need for scene mapping. Being aware of the surrounding environment is a key feature in many mobile robotics applications, from simple robotic navigation to complex surveillance applications. In addition, the acquisition of a 3D model of a scene is useful in many areas, such as video game scene modeling, where well-known places are reconstructed and added to game systems, or advertising, where once the 3D model of a room is obtained the system can add furniture pieces using augmented reality techniques. In this thesis we perform an experimental study of state-of-the-art registration methods to find which one best fits our scene mapping purposes. Different methods are tested and analyzed on scenes with different distributions of visual and geometric features. In addition, this thesis proposes two methods for 3D data compression and representation of 3D maps. Our 3D representation proposal is based on the use of the Growing Neural Gas (GNG) method. This self-organizing map (SOM) has been successfully used for clustering, pattern recognition and topology representation of various kinds of data. Until now, self-organizing maps have been computed primarily offline, and their application to 3D data has mainly focused on noise-free models without considering time constraints. Self-organizing neural models have the ability to provide a good representation of the input space. In particular, the Growing Neural Gas (GNG) is a suitable model because of its flexibility, rapid adaptation and excellent quality of representation. However, this type of learning is time consuming, especially for high-dimensional input data. Since real applications often work under time constraints, it is necessary to adapt the learning process in order to complete it within a predefined time. This thesis proposes a hardware implementation that leverages the computing power of modern GPUs, taking advantage of the paradigm known as General-Purpose Computing on Graphics Processing Units (GPGPU). Our proposed geometric 3D compression method seeks to reduce the 3D information by using plane detection as the basic structure for compressing the data. This is because our target environments are man-made and therefore contain many points that belong to planar surfaces. Our proposed method is able to achieve good compression results in such man-made scenarios. The detected and compressed planes can also be used in other applications such as surface reconstruction or plane-based registration algorithms. Finally, we have also demonstrated the benefits of GPU technologies by obtaining a high-performance implementation of a common CAD/CAM technique called virtual digitizing.
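As a rough illustration of the plane-based compression idea described above (a generic RANSAC-style plane detector in Python with NumPy; a sketch under common assumptions, not the thesis's actual algorithm):

import random
import numpy as np

def fit_plane(p1, p2, p3):
    # Plane through three points, returned as (unit normal n, offset d) with n.x + d = 0.
    n = np.cross(p2 - p1, p3 - p1)
    norm = np.linalg.norm(n)
    if norm < 1e-9:
        return None  # degenerate (collinear) sample
    n = n / norm
    return n, -float(np.dot(n, p1))

def detect_dominant_plane(points, iters=200, tol=0.01):
    # points: (N, 3) array. Returns the best plane model and a boolean inlier mask.
    best_model, best_mask = None, None
    for _ in range(iters):
        idx = random.sample(range(len(points)), 3)
        model = fit_plane(*points[idx])
        if model is None:
            continue
        n, d = model
        mask = np.abs(points @ n + d) < tol
        if best_mask is None or mask.sum() > best_mask.sum():
            best_model, best_mask = (n, d), mask
    return best_model, best_mask

Each detected plane can then be stored as its parameters plus a compact 2D footprint of its inliers, while non-planar points are kept as raw data; repeating the detection on the remaining points yields the set of planes used for compression.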
Abstract:
Doctoral thesis, Universidade de Brasília, Instituto de Artes, Programa de Pós-Graduação em Artes, 2015.
Abstract:
Artist David Lyons and computer scientist David Flatla work collaboratively to create art that intentionally targets audiences of varying visual abilities, mediated through smart device interfaces. Conceived as an investigation into theories and practices of visual perception, they explore the idea that artwork can be intentionally created to be experienced differently depending on one's visual abilities. They have created motion graphics and supporting recolouring and colour vision deficiency (CVD) simulation software. Some of the motion graphics communicate details specifically to those with colour blindness/CVD by containing moving imagery only seen by those with CVD. Others contain moving images that those with typical colour vision can experience but that appear unchanging to people with CVD. All the artwork is revealed to both audiences through specially programmed smart devices fitted with augmented reality recolouring and CVD simulation software. The visual elements come from various sources, including the Ishihara Colour Blind Test, movie marquees, and game shows. The software created reflects the perceptual capabilities of most individuals with reduced colour vision. The development of the simulation software and the motion graphic series are examined and discussed from both computer science and artistic positions.
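A minimal sketch of the kind of CVD simulation such software builds on, here a single linear transform in linear RGB using approximate protanopia coefficients from Machado et al. (2009); this is an illustrative stand-in, not the authors' recolouring or simulation code:

import numpy as np

# Approximate protanopia (severity 1.0) simulation matrix from Machado et al. (2009),
# applied to linear RGB values.
PROTANOPIA = np.array([
    [ 0.152286,  1.052583, -0.204868],
    [ 0.114503,  0.786281,  0.099216],
    [-0.003882, -0.048116,  1.051998],
])

def srgb_to_linear(c):
    # sRGB in [0, 1] to linear RGB.
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(c):
    # Linear RGB back to sRGB in [0, 1].
    c = np.clip(c, 0.0, 1.0)
    return np.where(c <= 0.0031308, c * 12.92, 1.055 * c ** (1.0 / 2.4) - 0.055)

def simulate_protanopia(image):
    # image: (H, W, 3) sRGB array with values in [0, 1].
    lin = srgb_to_linear(image)
    return linear_to_srgb(lin @ PROTANOPIA.T)

Imagery meant to be seen only by viewers with CVD (or only by viewers with typical colour vision) can then be checked by comparing an original frame against its simulated counterpart.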
Abstract:
Augmented reality (AR) is a new technology adopted in prostate surgery with the aim of improving preservation of the neurovascular bundles (NVB) and avoiding positive surgical margins (PSM). We prospectively enrolled patients diagnosed with prostate cancer (PCa) on the basis of targeted fusion biopsy with positive mpMRI. Before surgery, the enrolled patients were referred for reconstruction of a 3D virtual model based on the preoperative mpMRI images. Finally, the surgeon performed RARP with the aid of the 3D model projected in AR inside the robotic console (AR-3D guided RARP). Patients undergoing AR RARP were compared with those undergoing "standard RARP" in the same period. Overall, PSM rates were comparable between the two groups; PSMs at the level of the index lesion were significantly lower in patients in the AR-3D group (5%) than in those in the control group (20%; p = 0.01). The new AR-3D guidance technique for IFS analysis may allow a reduction of PSMs at the level of the index lesion.
Abstract:
Image-to-image (i2i) translation networks can generate fake images beneficial for many applications in augmented reality, computer graphics, and robotics. However, they require large-scale datasets and high contextual understanding to be trained correctly. In this thesis, we propose strategies for solving these problems, improving the performance of i2i translation networks by using domain- or physics-related priors. The thesis is divided into two parts. In Part I, we exploit human abstraction capabilities to identify existing relationships in images, thus defining domains that can be leveraged to improve data usage efficiency. We use additional domain-related information to train networks on web-crawled data, hallucinate scenarios unseen during training, and perform few-shot learning. In Part II, we instead rely on physics priors. First, we combine realistic physics-based rendering with generative networks to boost the realism and controllability of the outputs. Then, we exploit naive physical guidance to drive a manifold reorganization, which allows generating continuous conditions such as timelapses.
Abstract:
This thesis sets out to explore a new line of user experience and interaction through augmented reality technology, contextualized in the world of music production. The thesis first analyzes the technology as an interaction tool, its history and its evolution up to the present day, with an excursus on the fields of application and the devices needed for a complete experience. The analysis continues with careful research on the state of the art and on the augmented reality applications for music available on the market, leading to a detailed investigation of the tools that guided the project concept. The project output consists of a 2D interface for the parameterization of some fundamental settings and, finally, a simplified augmented reality interface. The latter is mainly composed of sliders with which it is possible to modify parameters of the audio track, moving the music production experience toward a democratic, simplified and playful conception. The goal of the project was to create a system that is easy to use even for users with little experience of the DAW software currently on the market.
Abstract:
Electric cars are increasingly popular due to the transition of mobility toward more sustainable forms. From the perspective of greener mobility and pollution reduction, there are more and more incentives encouraging customers to invest in electric cars. Using the Industrial Design and Structure (IDeS) research method, this project aims to design a new electric compact SUV suitable for people who live in the city as well as for people who move outside urban areas. In order to achieve the goal of developing a new car in the industrial automotive environment, the compact SUV segment was chosen because it is a vehicle in high demand among customers and is successful in the market due to its versatility. IDeS is a combination of innovative and advanced systematic approaches used to set up a new industrial project. The IDeS methodology is sequentially composed of Quality Function Deployment (QFD), Benchmarking (BM), Top-Flop Analysis (TFA), Stylistic Design Engineering (SDE), Design for X, Prototyping, Testing, Budgeting, and Planning. The work is based on a series of steps whose sequence must be meticulously scheduled, imposing deadlines along the way. Starting from an analysis of the market and competitors, the best and worst existing parameters in the competitors' offerings are studied, arriving at the idea of a better product in terms of numbers and innovation. After identifying the characteristics that the new car should have, the next step is styling, with the definition of the style and the design of the vehicle in 3D CAD. Finally, the work moves to the prototyping and testing phase to verify that the product works. Ultimately, with the intention of placing the car on the market, it is essential to estimate the budget necessary for a possible investment in this project.
Abstract:
Gaze estimation has gained interest in recent years for being an important cue for obtaining information about the internal cognitive state of humans. Regardless of whether it is the 3D gaze vector or the point of gaze (PoG), gaze estimation has been applied in various fields, such as human-robot interaction, augmented reality, medicine, aviation and automotive. In the latter field, as part of Advanced Driver-Assistance Systems (ADAS), it allows the development of cutting-edge systems capable of mitigating road accidents by monitoring driver distraction. Gaze estimation can also be used to enhance the driving experience, for instance in autonomous driving. It can also improve comfort through augmented reality components that can be commanded with the driver's eyes. Although several high-performance real-time inference works already exist, only a few are capable of working with just an RGB camera on computationally constrained devices, such as microcontrollers. This work aims to develop a low-cost, efficient and high-performance embedded system capable of estimating the driver's gaze using deep learning and an RGB camera. The proposed system has achieved near-SOTA performance with about 90% less memory footprint. Its ability to generalize to unseen environments has been evaluated through a live demonstration, where high performance and near real-time inference were obtained using a webcam and a Raspberry Pi 4.
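A rough sketch of the inference loop such an embedded system might run (generic OpenCV plus TensorFlow Lite runtime code in Python; the model file name, input size and output format are hypothetical assumptions, not the thesis's network):

import cv2
import numpy as np
from tflite_runtime.interpreter import Interpreter  # on a desktop, tf.lite.Interpreter works the same way

# Hypothetical converted model; the actual architecture is defined by the thesis.
interpreter = Interpreter(model_path="gaze_model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
_, in_h, in_w, _ = inp["shape"]

cap = cv2.VideoCapture(0)  # webcam, as in the live demonstration
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Naive preprocessing: a real pipeline would first detect and crop the face/eye region.
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    x = cv2.resize(rgb, (in_w, in_h)).astype(np.float32) / 255.0
    interpreter.set_tensor(inp["index"], np.expand_dims(x, 0))
    interpreter.invoke()
    yaw, pitch = interpreter.get_tensor(out["index"])[0][:2]  # assumed gaze-angle output
    print(f"gaze yaw={yaw:.2f} pitch={pitch:.2f}")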
Abstract:
Background: Currently, several rehabilitation approaches are proposed for improving pain and function following shoulder surgery. Telerehabilitation has proven to be a valid alternative for delivering rehabilitation services remotely: thanks to the use of telecommunication technologies, it is possible to provide consultation, assessment, monitoring, intervention and education, overcoming geographical, temporal, social and economic barriers. Objective: The aim of this review is to evaluate the evidence in the literature on the use of telerehabilitation for the functional improvement of patients who have undergone shoulder surgery. Methods: The review was drafted according to the PRISMA statement checklist. The search was conducted from April to September 2022, consulting the Cochrane Library, PubMed and PEDro databases. The search was limited to primary experimental and quasi-experimental studies, with full text available in Italian or English and with no time limits, involving subjects who had undergone shoulder surgery and were treated with different modalities of telerehabilitation. Any type of conventional rehabilitation intervention was included as a comparator. Methodological quality was assessed with the PEDro scale. Results: Four RCTs and one CCT were included, investigating outcome measures related to pain, joint mobility, strength, functional ability and quality of life. The qualitative analysis of the studies' results showed significant improvements in the outcomes of interest in all experimental groups; two studies showed a statistically significant superiority of telerehabilitation. Conclusions: Although the review did not reach generalizable results of absolute validity, telerehabilitation proved to be a safe and effective modality for the clinical and functional improvement of subjects who have undergone shoulder surgery.
Abstract:
The aim of this study, conducted in collaboration with Lawrence Technological University in Detroit, is to create, through the Industrial Design Structure (IDeS) method, a new concept for a sport-coupe car based on a restyling of a retro model (Ford Mustang 1967). To date, vintage car models still arouse great interest, both for the history behind them and for their classic, elegant style. Designing a vehicle that combines the charm of retro style with the innovation and comfort of modern cars would make it possible to meet the needs and desires of a large segment of the market that today is forced to choose between past and future. Thanks to a well-conceived concept car, an automaker is able to express its future policy and make a statement of intent, as such a prototype ticks all the boxes, from glamour and visual wow factor to technical intrigue and design fascination. IDeS is an approach that uses many engineering tools to carry out a study developed over several steps, which must be meticulously organized and timed. With a deep analysis of the trends dominating the automotive industry, it is possible to identify a series of product requirements using quality function deployment (QFD). The considerations from this first evaluation lead to the definition of the technical specifications via benchmarking (BM) and top-flop analysis (TFA). Then, the structured methodology of stylistic design engineering (SDE) is applied through six phases: (1) stylistic trends analysis; (2) sketches; (3) 2D CAD drawings; (4) 3D CAD models; (5) virtual prototyping; (6) solid stylistic model. Finally, developing the IDeS method up to the final stages of prototyping and testing yields a product as close as possible to the ideal vehicle conceptualized in the initial analysis.
Abstract:
Mixed Reality (MR) aims to link virtual entities with the real world and has many applications, for example in the military and medical domains [JBL+00, NFB07]. In many MR systems, and more precisely in augmented scenes, the application needs to render the virtual part accurately at the right time. To achieve this, such systems acquire data related to the real world from a set of sensors before rendering virtual entities. A suitable system architecture should minimize the delays so as to keep the overall system delay (also called end-to-end latency) within the requirements for real-time performance. In this context, we propose a compositional modeling framework for MR software architectures in order to specify, simulate and formally validate the time constraints of such systems. Our approach is first based on a functional decomposition of such systems into generic components. The obtained elements, as well as their typical interactions, give rise to generic representations in terms of timed automata. A whole system is then obtained as a composition of such defined components. To write specifications, a textual language named MIRELA (MIxed REality LAnguage) is proposed along with the corresponding compilation tools. The generated output contains timed automata in UPPAAL format for simulation and verification of time constraints. These automata may also be used to generate source code skeletons for an implementation on an MR platform. The approach is illustrated first on a small example. A realistic case study is also developed. It is modeled by several timed automata synchronizing through channels and including a large number of time constraints. Both systems have been simulated in UPPAAL and checked against the required behavioral properties.
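As a back-of-the-envelope illustration of the end-to-end latency constraint such models are meant to verify (a simple worst-case sum over a hypothetical sensing-processing-rendering pipeline in Python; this is not the MIRELA/UPPAAL formalism, where the constraints are expressed as clock guards and invariants on timed automata):

# Assumed worst-case delays, in milliseconds, for each stage of a hypothetical MR pipeline.
stage_delays_ms = {
    "sensor_acquisition": 12,
    "tracking": 8,
    "scene_update": 5,
    "rendering": 16,
}

LATENCY_BUDGET_MS = 50  # assumed real-time requirement for the augmented scene

end_to_end = sum(stage_delays_ms.values())
print(f"worst-case end-to-end latency: {end_to_end} ms")
assert end_to_end <= LATENCY_BUDGET_MS, "latency budget exceeded; the architecture must be refined"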