35 results for Auditory-visual Interaction

at Universidade do Minho


Relevance: 30.00%

Abstract:

Master's dissertation in Communication Sciences (specialization in Audiovisual and Multimedia)

Relevance: 20.00%

Abstract:

Nowadays, road accidents are a major public health problem, and the situation is forecast to worsen if road safety is not addressed properly: about 1.2 million people die every year around the globe. In 2012, Portugal recorded 573 on-site fatalities in road accidents, which, together with Denmark, represented the largest decrease in the European Union relative to 2011. Beyond the impact caused by fatalities, the economic and social costs of road accidents were estimated at about 1.17% of the Portuguese gross domestic product in 2010. Visual Analytics combines data analysis techniques with interactive visualizations, which facilitates the process of knowledge discovery in large and complex data sets, while Geovisual Analytics facilitates the exploration of spatio-temporal data through maps with the different variables and parameters under analysis. In Portugal, the identification of road accident accumulation zones, named black spots in this work, has been restricted to fixed annual windows. This work presents a dynamic approach based on Visual Analytics techniques that is able to identify the displacement of black spots over sliding windows of 12 months. Moreover, by using different parameterizations of the formula usually applied to detect black spots, it is possible to identify zones that are close to becoming black spots. Through the proposed visualizations, the study and identification of countermeasures to this social and economic problem can gain new ground, thus supporting and improving the decision-making process.
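As an illustration of the sliding-window idea described above, the sketch below flags black-spot zones using an accident-count and severity threshold over a 12-month window. The record type, severity weights, and thresholds are placeholders for this sketch, not the exact formula the work parameterizes:

```python
from collections import namedtuple

# Hypothetical record type; the actual data model is richer.
Accident = namedtuple("Accident", "month zone deaths serious light")

def severity(acc, w_deaths=100, w_serious=10, w_light=3):
    # Weighted severity of a single accident (illustrative weights).
    return w_deaths * acc.deaths + w_serious * acc.serious + w_light * acc.light

def black_spots(accidents, start, min_accidents=5, min_severity=20):
    """Flag zones whose accidents in the sliding 12-month window
    [start, start + 12) meet both a count and a severity threshold."""
    zones = {}
    for a in accidents:
        if start <= a.month < start + 12:
            count, sev = zones.get(a.zone, (0, 0))
            zones[a.zone] = (count + 1, sev + severity(a))
    return {z for z, (c, s) in zones.items()
            if c >= min_accidents and s >= min_severity}
```

Sliding the window month by month (start = 0, 1, 2, ...) shows how flagged zones appear, move, and disappear over time, which is the displacement the visualizations track.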

Relevance: 20.00%

Abstract:

Due to advances in information technology (e.g., digital video cameras, ubiquitous sensors), the automatic detection of human behaviors from video has become a very active and recent research topic. In this paper, we perform a systematic review of the recent literature on this topic, from 2000 to 2014, covering a selection of 193 papers retrieved from six major scientific publishers. The selected papers were classified into three main subjects: detection techniques, datasets and applications. The detection techniques were divided into four categories (initialization, tracking, pose estimation and recognition). The list of datasets includes eight examples (e.g., Hollywood action). Finally, several application areas were identified, including human detection, abnormal activity detection, action recognition, player modeling and pedestrian detection. Our analysis provides a road map to guide future research on designing automatic visual human behavior detection systems.

Relevance: 20.00%

Abstract:

The aim of this study is to evaluate the interaction between several base penetration grade asphalt binders (35/50, 50/70, 70/100, 160/220) and two different plastic wastes (EVA and HDPE) in a set of new polymer modified binders (PMBs) produced with different amounts of both plastic wastes. After analysing the results obtained for the several polymer modified binders evaluated in this study, including a commercial modified binder, it can be concluded that the new PMBs produced with the base bitumen 70/100 and 5% of either plastic waste (HDPE or EVA) result in binders with very good performance, similar to that of the commercial modified binder.

Relevance: 20.00%

Abstract:

Hand gesture recognition, being a natural way of human-computer interaction, is an area of active research in computer vision and machine learning. This is an area with many different possible applications, giving users a simpler and more natural way to communicate with robots and system interfaces, without the need for extra devices. So, the primary goal of gesture recognition research is to create systems which can identify specific human gestures and use them to convey information or for device control. For that, vision-based hand gesture interfaces require fast and extremely robust hand detection, and gesture recognition in real time. In this study we try to identify hand features that, in isolation, respond better in various situations of human-computer interaction. The extracted features are used to train a set of classifiers with the help of RapidMiner in order to find the best learner. A dataset with our own gesture vocabulary, consisting of 10 gestures recorded from 20 users, was created for later processing. Experimental results show that the radial signature and the centroid distance are the features that, when used separately, obtain better results, with accuracies of 91% and 90.1% respectively, obtained with a Neural Network classifier. These two methods also have the advantage of being simple in terms of computational complexity, which makes them good candidates for real-time hand gesture recognition.
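For illustration, a minimal version of the centroid-distance feature mentioned above: distances from the hand contour's centroid to points sampled along the contour, normalized for scale invariance. The sampling count and normalization here are assumptions of this sketch, not necessarily the paper's exact choices:

```python
import math

def centroid_distance(contour, n_samples=16):
    """Centroid-distance signature: distances from the shape centroid to
    points sampled along the contour, normalized by the maximum distance
    for scale invariance. `contour` is a list of (x, y) points."""
    cx = sum(p[0] for p in contour) / len(contour)
    cy = sum(p[1] for p in contour) / len(contour)
    step = len(contour) / n_samples
    dists = [math.hypot(contour[int(i * step)][0] - cx,
                        contour[int(i * step)][1] - cy)
             for i in range(n_samples)]
    m = max(dists) or 1.0  # avoid division by zero for degenerate shapes
    return [d / m for d in dists]
```

The resulting fixed-length vector can be fed directly to any of the classifiers compared in the study; its low cost is what makes it attractive for real-time use.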

Relevance: 20.00%

Abstract:

Hand gestures are a powerful means of human communication, with many potential applications in the area of human-computer interaction. Vision-based hand gesture recognition techniques have many proven advantages compared with traditional devices, giving users a simpler and more natural way to communicate with electronic devices. This work proposes a generic system architecture based on computer vision and machine learning, able to be used with any interface for human-computer interaction. The proposed solution is mainly composed of three modules: a pre-processing and hand segmentation module, a static gesture interface module and a dynamic gesture interface module. The experiments showed that the core of vision-based interaction systems can be the same for all applications and thus facilitate implementation. For hand posture recognition, an SVM (Support Vector Machine) model was trained and used, achieving a final accuracy of 99.4%. For dynamic gestures, an HMM (Hidden Markov Model) was trained for each gesture the system could recognize, with a final average accuracy of 93.7%. The proposed solution has the advantage of being generic, with the trained models able to work in real time, allowing its application in a wide range of human-machine applications. To validate the proposed framework, two applications were implemented. The first is a real-time system able to interpret Portuguese Sign Language. The second is an online system able to help a robotic soccer referee judge a game in real time.
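The static-gesture stage described above can be sketched as an SVM trained on fixed-length hand feature vectors. This is a minimal illustration assuming scikit-learn and synthetic stand-in features; the real system extracts its features from segmented hand images:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Toy stand-in for extracted hand features: 3 postures, 16-dim vectors,
# 40 samples per posture, drawn around well-separated cluster centers.
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(40, 16))
               for c in (0.0, 1.0, 2.0)])
y = np.repeat([0, 1, 2], 40)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)  # classifier family named in the abstract
accuracy = clf.score(X_te, y_te)
```

Dynamic gestures, by contrast, are sequences, which is why the work trains one HMM per gesture and scores an observed sequence against each model.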

Relevance: 20.00%

Abstract:

Hand gestures are a powerful means of human communication, with many potential applications in the area of human-computer interaction. Vision-based hand gesture recognition techniques have many proven advantages compared with traditional devices, giving users a simpler and more natural way to communicate with electronic devices. This work proposes a generic system architecture based on computer vision and machine learning, able to be used with any interface for human-computer interaction. The proposed solution is mainly composed of three modules: a pre-processing and hand segmentation module, a static gesture interface module and a dynamic gesture interface module. The experiments showed that the core of vision-based interaction systems can be the same for all applications and thus facilitate implementation. In order to test the proposed solutions, three prototypes were implemented. For hand posture recognition, an SVM model was trained and used, achieving a final accuracy of 99.4%. For dynamic gestures, an HMM model was trained for each gesture the system could recognize, with a final average accuracy of 93.7%. The proposed solution has the advantage of being generic, with the trained models able to work in real time, allowing its application in a wide range of human-machine applications.

Relevance: 20.00%

Abstract:

Novel input modalities such as touch, tangibles or gestures try to exploit humans' innate skills rather than imposing new learning processes. However, despite the recent boom of different natural interaction paradigms, it has not been systematically evaluated how these interfaces influence a user's performance, or whether each interface is more or less appropriate for: 1) different age groups; and 2) different basic operations, such as data selection, insertion or manipulation. This work presents the first step of an exploratory evaluation of whether or not users' performance is indeed influenced by the different interfaces. The key point is to understand how different interaction paradigms affect specific target audiences (children, adults and older adults) when dealing with a selection task. Sixty participants took part in this study to assess how different interfaces may influence the interaction of specific groups of users with regard to their age. Four input modalities were used to perform a selection task, and the methodology was based on usability testing (speed, accuracy and user preference). The study suggests a statistically significant difference between mean selection times for each group of users, and also raises new issues regarding the "old" mouse input versus the "new" input modalities.
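The kind of group comparison reported above can be sketched as a one-way ANOVA on mean selection times per age group. The data below are synthetic placeholders invented for this sketch, not the study's measurements, and the study's actual statistical test may differ:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
# Synthetic selection times (seconds) for three age groups, 20 users each.
children = rng.normal(2.5, 0.4, 20)
adults = rng.normal(1.8, 0.3, 20)
older_adults = rng.normal(3.2, 0.5, 20)

stat, p = f_oneway(children, adults, older_adults)
significant = p < 0.05  # reject the hypothesis of equal means across groups
```

A significant omnibus result like this is what motivates the follow-up question of which modality drives the difference for which age group.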

Relevance: 20.00%

Abstract:

Master's dissertation (integrated master's) in Psychology

Relevance: 20.00%

Abstract:

One of the major challenges in the development of an immersive system is handling the delay between the tracking of the user's head position and the updated projection of a 3D image or auralised sound, also called end-to-end delay. Excessive end-to-end delay can result in a general decrement of the "feeling of presence", the occurrence of motion sickness, and poor performance in perception-action tasks. These latencies must be known in order to provide insights into the technological (hardware/software optimization) or psychophysical (recalibration sessions) strategies to deal with them. Our goal was to develop a new measurement method of end-to-end delay that is both precise and easily replicated. We used a Head and Torso Simulator (HATS) as an auditory signal sensor, a fast-response photo-sensor to detect a visual stimulus response from a motion capture system, and a voltage input trigger as the real-time event. The HATS was mounted on a turntable, which allowed us to precisely change the 3D sound relative to the head position. When the virtual sound source was at 90° azimuth, the corresponding HRTF would set all the intensity values to zero, and at the same time a trigger would register the real-time event of turning the HATS to 90° azimuth. Furthermore, with the HATS turned 90° to the left, the motion capture marker visualization would fall exactly on the photo-sensor receptor. This method allowed us to precisely measure the delay from tracking to displaying. Moreover, our results show that the tracking method, its tracking frequency, and the rendering of the sound reflections are the main predictors of end-to-end delay.
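Once the trigger and sensor events are timestamped on a common clock, the core computation is simple: pair each trigger with the first response that follows it. A sketch, with hypothetical timestamps standing in for the voltage trigger and the photo-sensor/audio onsets:

```python
def end_to_end_delays(trigger_ts, response_ts):
    """Pair each trigger timestamp with the first sensor response at or
    after it and return the delays (same time units as the inputs).
    Both lists are assumed sorted and recorded on a common clock."""
    delays = []
    j = 0
    for t in trigger_ts:
        while j < len(response_ts) and response_ts[j] < t:
            j += 1
        if j < len(response_ts):
            delays.append(response_ts[j] - t)
            j += 1
    return delays

triggers = [0.000, 1.000, 2.000]    # turntable reaches 90° azimuth (s)
responses = [0.045, 1.052, 2.048]   # sensor detects the rendered change (s)
delays = end_to_end_delays(triggers, responses)
```

Repeating the measurement over many trials yields the delay distribution from which predictors such as tracking frequency can be analysed.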

Relevance: 20.00%

Abstract:

[Excerpt] Synchronization of periodic movements like side-by-side walking [7] is frequently modeled by coupled oscillators [5], and the coupling strength is defined quantitatively [3]. In contrast, in most studies on sensorimotor synchronization (SMS), simple movements like finger taps are synchronized with simple stimuli like metronomes [4]. While the latter paradigm simplifies matters and allows for the assessment of the relative weights of sensory modalities through systematic variation of the stimuli [1], it might lack ecological validity. Conversely, using more complex movements and stimuli might complicate the specification of the mechanisms underlying coupling. We merged the positive aspects of both approaches to study the contribution of auditory and visual information to synchronization during side-by-side walking. As stimuli, we used Point Light Walkers (PLWs) and auralized step sounds; both were constructed from previously captured walking individuals [2][6]. PLWs were retro-projected on a screen and matched according to gender, hip height, and velocity. The participant walked 7.20 m side by side with 1) a PLW, 2) step sounds, or 3) both displayed in temporal congruence. Participants were instructed to synchronize with the available stimuli. [...]

Relevance: 20.00%

Abstract:

The use of buffers to maintain the pH within a desired range is a very common practice in chemical, biochemical and biological studies. Among them, zwitterionic N-substituted aminosulfonic acids, usually known as Good's buffers, although widely used, can complex metals and interact with biological systems. The present work reviews, discusses and updates the metal complexation characteristics of thirty-one commercially available buffers. In addition, their impact on biological systems is also presented. The influence of these buffers on the results obtained in biological, biochemical and environmental studies, with special focus on their interaction with metal ions, is highlighted and critically reviewed. Using chemical speciation simulations based on the current knowledge of metal-buffer stability constants, a proposal of the most adequate buffer to employ for a given metal ion is presented.

Relevance: 20.00%

Abstract:

Purpose: To evaluate the impact of eye and head rotation on the measurement of peripheral refraction with an open-field autorefractometer in myopic eyes wearing two different center-distance designs of multifocal contact lenses (MFCLs). Methods: Nineteen right eyes from 19 myopic patients (average central M ± SD = −2.67 ± 1.66 D) aged 20–27 years (mean ± SD = 23.2 ± 3.3 years) were evaluated using a Grand-Seiko autorefractometer. Patients were fitted with one multifocal aspheric center-distance contact lens (Biofinity Multifocal D®) and with one multi-concentric MFCL (Acuvue Oasys for Presbyopia). Axial and peripheral refraction were evaluated by eye rotation and by head rotation under the naked eye condition and with each MFCL, fitted randomly and in independent sessions. Results: For the naked eye, the refractive pattern (M, J0 and J45) across the central 60° of the horizontal visual field did not show significant changes whether measured by rotating the eye or rotating the head (p > 0.05). Similar results were obtained wearing the Biofinity D for both testing methods, with no significant differences in M, J0 and J45 values (p > 0.05). For the Acuvue Oasys for Presbyopia, no differences were found either when comparing measurements obtained by eye and head rotation (p > 0.05). Multivariate analysis did not show a significant interaction between testing method and lens type, nor with measuring locations (MANOVA, p > 0.05). There were significant differences in M and J0 values between naked eyes and each MFCL. Conclusion: Measurements of peripheral refraction obtained by rotating the eye or rotating the head are comparable in myopic patients wearing dominant-design or multi-concentric multifocal silicone hydrogel contact lenses.

Relevance: 20.00%

Abstract:

METHODS: Refractive lens exchange was performed with implantation of an AT Lisa 839M (trifocal) or 909MP (bifocal toric) IOL, the latter if corneal astigmatism was more than 0.75 diopter (D). The postoperative visual and refractive outcomes were evaluated. A prototype light-distortion analyzer was used to quantify the postoperative light-distortion indices. A control group of eyes in which a Tecnis ZCB00 1-piece monofocal IOL was implanted had the same examinations. RESULTS: A trifocal or bifocal toric IOL was implanted in 66 eyes. The control IOL was implanted in 18 eyes. All 3 groups obtained a significant improvement in uncorrected distance visual acuity (UDVA) (P < .001) and corrected distance visual acuity (CDVA) (P = .001). The mean uncorrected near visual acuity (UNVA) was 0.123 logMAR with the trifocal IOL and 0.130 logMAR with the bifocal toric IOL. The residual refractive cylinder was less than 1.00 D in 86.7% of cases with the toric IOL. The mean light-distortion index was significantly higher in the multifocal IOL groups than in the monofocal group (P < .001), although no correlation was found between the light-distortion index and CDVA. CONCLUSIONS: The multifocal IOLs provided excellent UDVA and functional UNVA despite increased light-distortion indices. The light-distortion analyzer reliably quantified a subjective component of vision distinct from visual acuity; it may become a useful adjunct in the evaluation of visual quality obtained with multifocal IOLs.