52 results for Video conferencing
in CentAUR: Central Archive University of Reading - UK
Abstract:
Collaborative software is usually thought of as providing audio-video conferencing services, application/desktop sharing, and access to large content repositories. However, mobile device usage is characterized by users carrying out short and intermittent tasks, sometimes referred to as 'micro-tasking'. Micro-collaborations are not well supported by traditional groupware systems, and the work in this paper seeks to address this. Mico is a system that provides a set of application-level peer-to-peer services for the ad-hoc formation and facilitation of collaborative groups across a diverse mobile device domain. The system builds on the Java ME bindings of the JXTA P2P protocols, and is designed around the lowest common denominator of services required for collaboration between devices of varying capability. To demonstrate how our platform facilitates application development, we built a set of demonstration applications and include code examples here to illustrate the ease and speed afforded when developing collaborative software with Mico.
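The "lowest common denominator" design the abstract describes can be illustrated with a minimal sketch — in Python rather than the paper's Java ME/JXTA, and with entirely hypothetical names (`PeerGroup`, `join`, `publish` are stand-ins, not the Mico API): an in-memory peer group offering only ad-hoc membership and message delivery, the kind of service any capable device could support.

```python
# Hypothetical sketch of an ad-hoc collaborative group (NOT the Mico API):
# peers join a named group and publish messages delivered to every other
# member. A real P2P system would replace the in-memory inboxes with
# network transports negotiated per device capability.
from collections import defaultdict

class PeerGroup:
    def __init__(self, name):
        self.name = name
        self.members = set()
        self.inboxes = defaultdict(list)   # peer_id -> [(sender, message)]

    def join(self, peer_id):
        self.members.add(peer_id)

    def publish(self, sender, message):
        # Deliver to every member except the sender.
        for peer in self.members:
            if peer != sender:
                self.inboxes[peer].append((sender, message))

group = PeerGroup("ad-hoc-whiteboard")
group.join("phone-a")
group.join("phone-b")
group.publish("phone-a", "stroke: (10,20)->(30,40)")
```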
Abstract:
In collaborative situations, eye gaze is a critical element of behavior which supports and fulfills many activities and roles. In current computer-supported collaboration systems, eye gaze is poorly supported. Even in a state-of-the-art video conferencing system such as the Access Grid, although one can see the face of the user, much of the communicative power of eye gaze is lost. This article gives an overview of some preliminary work that looks towards integrating eye gaze into an immersive collaborative virtual environment and assessing the impact that this would have on interaction between the users of such a system. Three experiments were conducted to assess the efficacy of eye gaze within immersive virtual environments. In each experiment, subjects observed on a large screen the eye-gaze behavior of an avatar. The eye-gaze behavior of that avatar had previously been recorded from a user with the use of a head-mounted eye tracker. The first experiment assessed the difference between users' abilities to judge what objects an avatar is looking at with only head gaze being viewed and with both eye- and head-gaze data being displayed. The results show that eye gaze is of vital importance to subjects' ability to correctly identify what a person is looking at in an immersive virtual environment. The second experiment examined whether a monocular or binocular eye-tracker would be required, by testing subjects' ability to identify where an avatar was looking from eye direction alone, or from eye direction combined with convergence. This experiment showed that convergence had a significant impact on the subjects' ability to identify where the avatar was looking. The final experiment looked at the effects of stereo and mono viewing of the scene, with the subjects again asked to identify where the avatar was looking.
This experiment showed that there was no difference in the subjects' ability to detect where the avatar was gazing. This is followed by a description of how the eye-tracking system has been integrated into an immersive collaborative virtual environment and some preliminary results from the use of such a system.
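The convergence tested in the second experiment reduces to a geometry problem: each eye contributes a gaze ray, and the fixation point can be estimated as the midpoint of the rays' closest approach. A minimal sketch (illustrative geometry, not the paper's implementation):

```python
# Estimate a binocular convergence point as the midpoint of closest
# approach between two gaze rays. Pure vector algebra; degenerate
# (parallel-ray) cases are not handled in this sketch.
def convergence_point(p1, d1, p2, d2):
    """Midpoint of closest approach between rays p1 + t*d1 and p2 + s*d2."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    w0 = tuple(u - v for u, v in zip(p1, p2))
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b              # zero only for parallel rays
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1 = tuple(p + t * v for p, v in zip(p1, d1))   # closest point on ray 1
    q2 = tuple(p + s * v for p, v in zip(p2, d2))   # closest point on ray 2
    return tuple((u + v) / 2 for u, v in zip(q1, q2))

# Eyes 6 cm apart, both fixating a point 1 m straight ahead:
left, right = (-0.03, 0.0, 0.0), (0.03, 0.0, 0.0)
target = convergence_point(left, (0.03, 0.0, 1.0), right, (-0.03, 0.0, 1.0))
```

With converging rays the recovered point carries depth information that a single eye direction cannot, which is why convergence mattered in the experiment.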
Abstract:
For efficient collaboration between participants, eye gaze is seen as critical for interaction. Video conferencing either does not attempt to support eye gaze (e.g. AccessGrid) or only approximates it in round-table conditions (e.g. life-size telepresence). Immersive collaborative virtual environments represent remote participants through avatars that follow their tracked movements. By additionally tracking people's eyes and representing their movement on their avatars, the line of gaze can be faithfully reproduced rather than approximated. This paper presents the results of initial work that tested whether the focus of gaze could be more accurately gauged if tracked eye movement was added to that of the head of an avatar observed in an immersive VE. An experiment was conducted to assess the difference between users' abilities to judge what objects an avatar is looking at with only head movements displayed, while the eyes remained static, and with both eye-gaze and head-movement information displayed. The results show that eye gaze is of vital importance to subjects' ability to correctly identify what a person is looking at in an immersive virtual environment. This is followed by a description of the work now being undertaken following the positive results from the experiment. We discuss the integration of an eye tracker more suitable for immersive mobile use, and the software and techniques developed to integrate the user's real-world eye movements into calibrated eye gaze in an immersive virtual world. This is to be used in the creation of an immersive collaborative virtual environment supporting eye gaze and its ongoing experiments. Copyright (C) 2009 John Wiley & Sons, Ltd.
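Adding tracked eye movement to head movement, as in the experiment above, amounts to rotating the eye-in-head direction by the head pose. A yaw-only sketch (real systems use full 3-D rotations or quaternions; this is illustrative only):

```python
# Compose head pose and eye-in-head direction to get a world-space gaze
# direction. Restricted to yaw (rotation about the vertical axis) so the
# composition is a simple 2-D rotation in the horizontal plane.
import math

def rotate_yaw(v, yaw_rad):
    """Rotate the horizontal direction (x, z) by yaw_rad about vertical."""
    x, z = v
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    return (c * x + s * z, -s * x + c * z)

def world_gaze(head_yaw_deg, eye_yaw_deg):
    # Eye direction expressed in the head frame, then taken into the
    # world frame via the head rotation.
    eye_dir = rotate_yaw((0.0, 1.0), math.radians(eye_yaw_deg))
    return rotate_yaw(eye_dir, math.radians(head_yaw_deg))

# Head turned 30 degrees, eyes a further 15 degrees: 45 degrees total.
gx, gz = world_gaze(30.0, 15.0)
```

A head-only condition corresponds to forcing `eye_yaw_deg` to zero, which is exactly the information loss the experiment measured.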
Abstract:
This article explores the way users of an online gay chat room negotiate the exchange of photographs and the conduct of video conferencing sessions and how this negotiation changes the way participants manage their interactions and claim and impute social identities. Different modes of communication provide users with different resources for the control of information, affecting not just what users are able to reveal, but also what they are able to conceal. Thus, the shift from a purely textual mode for interacting to one involving visual images fundamentally changes the kinds of identities and relationships available to users. At the same time, the strategies users employ to negotiate these shifts of mode can alter the resources available in different modes. The kinds of social actions made possible through different modes, it is argued, are not just a matter of the modes themselves but also of how modes are introduced into the ongoing flow of interaction.
Abstract:
Students may have difficulty in understanding some of the complex concepts they are taught in the general areas of science and engineering. Whilst practical work, such as a laboratory-based examination of the performance of structures, has an important role in knowledge construction, it does have some limitations. Blended learning supports different learning styles and hence further benefits knowledge building. This research is an empirical study of how vodcasts (video-podcasts) can be used to enrich the learning experience in the structural properties of materials laboratory of an undergraduate course. Students were given the opportunity to download and view vodcasts on the theory before and after the experimental work; they could choose when (before, after, or both) and how many times to view them. In blended learning, the combination of face-to-face teaching, vodcasts, printed materials, practical experiments, report writing and instructors' feedback benefits learners with different learning styles. In preparation for the practical, students were informed about the availability of the vodcasts prior to the practical session. After the practical work, students submitted an individual laboratory report for the assessment of the structures laboratory. The data collection consisted of a questionnaire completed by the students, follow-up semi-structured interviews and the practical reports submitted for assessment. The questionnaire results were analysed quantitatively, whilst the data from the assessment reports were analysed qualitatively. The analysis shows that most of the students who had not fully grasped the theory after the practical managed to gain the required knowledge by viewing the vodcasts. According to their feedback, students felt that they had control over how to use the material and could view it as many times as they wished.
Some students who had already understood the theory chose to view the vodcasts once or not at all. Their understanding was demonstrated by the explanations in their reports, and illustrated by the approach they took to explicate the results of their experimental work. The research findings are valuable to instructors who design, develop and deliver different types of blended learning, and beneficial to learners who try different blended approaches. Recommendations are made on the role of the innovative application of vodcasts in knowledge construction for the structures laboratory and to guide future work in this area of research.
Abstract:
Students may have difficulty in understanding some of the complex concepts they are taught in the general areas of science and engineering. Whilst practical work, such as a laboratory-based examination of the performance of structures, has an important role in knowledge construction, it does have some limitations. Blended learning supports different learning styles and hence further benefits knowledge building. This research is an empirical study of how an innovative use of vodcasts (video-podcasts) can enrich the learning experience in the structural properties of materials laboratory of an undergraduate course. Students were given the opportunity to download and view vodcasts on the theory before and after the experimental work; they could choose when (before, after, or both) and how many times to view them. In blended learning, the combination of face-to-face teaching, vodcasts, printed materials, practical experiments, report writing and instructors' feedback benefits learners with different learning styles. In preparation for the practical laboratory work, students were informed about the availability of the vodcasts prior to the practical session. After the practical work, students submitted an individual laboratory report for the assessment of the structures laboratory. The data collection consisted of a questionnaire completed by the students and the practical reports submitted for assessment. The questionnaire results were analysed quantitatively, whilst the data from the assessment reports were analysed qualitatively. The analysis shows that students who had not fully grasped the theory after the practical succeeded in gaining the required knowledge by viewing the vodcasts. Some students who had already understood the theory chose to view the vodcasts once or not at all. Their understanding was demonstrated by the quality of the explanations in their reports.
This is illustrated by the approach they took to explicate the results of their experimental work; for example, they could explain how to calculate Young's modulus properly and provided the correct value for it. The research findings are valuable to instructors who design, develop and deliver different types of blended learning, and beneficial to learners who try different blended approaches. Recommendations are made on the role of the innovative application of vodcasts in knowledge construction for the structures laboratory and to guide future work in this area of research.
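The Young's modulus calculation the reports assessed is simply the slope of the elastic (linear) part of the stress-strain curve. A sketch with hypothetical sample values (not data from the study):

```python
# Young's modulus E = stress / strain for a specimen in the elastic
# region. Inputs: applied force, cross-sectional area, measured
# extension and original length. The numbers below are illustrative.
def youngs_modulus(force_N, area_m2, extension_m, length_m):
    stress = force_N / area_m2          # Pa
    strain = extension_m / length_m     # dimensionless
    return stress / strain              # Pa

# A 1 m rod of 1e-4 m^2 cross-section, stretched 0.5 mm by a 10 kN load:
E = youngs_modulus(10e3, 1e-4, 0.5e-3, 1.0)   # 2e11 Pa, i.e. 200 GPa
```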
Abstract:
This paper presents a parallel Two-Pass Hexagonal (TPA) algorithm constituted by the Linear Hashtable Motion Estimation Algorithm (LHMEA) and Hexagonal Search (HEXBS) for motion estimation. In the TPA, Motion Vectors (MV) are generated by the first-pass LHMEA and used as predictors for the second-pass HEXBS motion estimation, which searches only a small number of Macroblocks (MBs). We introduce hashtables into video processing and complete a parallel implementation. We propose and evaluate parallel implementations of the LHMEA of TPA on clusters of workstations for real-time video compression. The paper discusses how parallel video coding on load-balanced multiprocessor systems can help, especially with motion estimation, and the effect of load balancing on performance. The performance of the algorithm is evaluated using standard video sequences and the results are compared to current algorithms.
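The hexagon-based second pass can be sketched compactly: from a predicted motion vector, repeatedly move to the cheapest point of a six-point hexagon, then refine with a small four-point pattern. The sketch below follows the generic HEXBS pattern with a toy cost function standing in for block SAD; it is not the paper's code.

```python
# Generic hexagon-based search (HEXBS) sketch. `cost` maps a candidate
# motion vector (dx, dy) to a matching cost; in a real encoder this
# would be the SAD between the current macroblock and the reference
# block displaced by (dx, dy).
LARGE_HEX = [(2, 0), (1, 2), (-1, 2), (-2, 0), (-1, -2), (1, -2)]
SMALL = [(1, 0), (0, 1), (-1, 0), (0, -1)]

def hexbs(cost, start=(0, 0)):
    best = start
    while True:
        candidates = [best] + [(best[0] + dx, best[1] + dy)
                               for dx, dy in LARGE_HEX]
        nxt = min(candidates, key=cost)
        if nxt == best:          # centre is cheapest: stop coarse search
            break
        best = nxt
    # Final small-pattern refinement around the hexagon centre.
    return min([best] + [(best[0] + dx, best[1] + dy) for dx, dy in SMALL],
               key=cost)

# Toy cost: true motion is (5, -3); cost grows with L1 distance to it.
mv = hexbs(lambda p: abs(p[0] - 5) + abs(p[1] + 3))
```

Seeding `start` with the first-pass LHMEA prediction is what lets the second pass examine so few macroblocks.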
Abstract:
This paper presents a parallel Linear Hashtable Motion Estimation Algorithm (LHMEA). Most parallel video compression algorithms focus on the Group of Pictures (GOP). Based on the LHMEA we proposed earlier [1][2], we developed a parallel motion estimation algorithm that parallelises within a frame. We divide each reference frame into equally sized regions, which are processed in parallel to increase the encoding speed significantly. The theoretical and measured speed-ups of the parallel LHMEA as a function of the number of PCs in the cluster are compared and discussed. Motion Vectors (MV) are generated by the first-pass LHMEA and used as predictors for the second-pass Hexagonal Search (HEXBS) motion estimation, which searches only a small number of Macroblocks (MBs). We evaluated a distributed parallel implementation of the LHMEA of TPA for real-time video compression.
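The intra-frame parallelisation described above can be sketched as a row partition plus a worker pool. Here a thread pool stands in for the cluster of PCs, and the per-region function is a placeholder for the actual motion search:

```python
# Split a frame's rows into near-equal regions and process each region
# independently. In the paper's setting each region would be shipped to
# a cluster node; here a thread pool illustrates the same decomposition.
from concurrent.futures import ThreadPoolExecutor

def split_rows(height, n_regions):
    """Partition rows [0, height) into n_regions near-equal slices."""
    base, extra = divmod(height, n_regions)
    bounds, start = [], 0
    for i in range(n_regions):
        stop = start + base + (1 if i < extra else 0)
        bounds.append((start, stop))
        start = stop
    return bounds

def estimate_region(bounds):
    start, stop = bounds
    # Placeholder: real code would run LHMEA/HEXBS over the macroblocks
    # whose rows fall in [start, stop).
    return (start, stop)

regions = split_rows(288, 4)                      # CIF-height frame, 4 workers
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(estimate_region, regions))
```

Because regions are independent within a frame, the speed-up scales with worker count until boundary macroblocks and load imbalance start to dominate.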
Abstract:
This paper presents an improved parallel Two-Pass Hexagonal (TPA) algorithm constituted by the Linear Hashtable Motion Estimation Algorithm (LHMEA) and Hexagonal Search (HEXBS) for motion estimation. Motion Vectors (MV) are generated by the first-pass LHMEA and used as predictors for the second-pass HEXBS motion estimation, which searches only a small number of Macroblocks (MBs). We introduce hashtables into video processing and complete a parallel implementation. The hashtable structure of the LHMEA is improved compared to the original TPA and LHMEA. We propose and evaluate parallel implementations of the LHMEA of TPA on clusters of workstations for real-time video compression. The implementation contains spatial and temporal approaches. The performance of the algorithm is evaluated using standard video sequences and the results are compared to current algorithms.
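The hashtable first pass can be sketched as follows: each reference-frame block is keyed by a cheap linear feature, so a current block can look up similarly featured reference blocks in constant time and turn their positions into motion-vector predictors. The feature and data layout below are simplified stand-ins, not the paper's hashtable structure:

```python
# Toy linear-hashtable motion estimation: index reference blocks by a
# cheap feature (here the block's pixel sum), then look up the current
# block's feature to obtain candidate motion-vector predictors.
from collections import defaultdict

def block_key(block):
    return sum(sum(row) for row in block)   # simplified linear feature

def build_table(ref_blocks):
    """ref_blocks: {(x, y): 2-D pixel block} for the reference frame."""
    table = defaultdict(list)
    for pos, block in ref_blocks.items():
        table[block_key(block)].append(pos)
    return table

def predictors(table, cur_block, cur_pos):
    """Motion vectors from matching reference positions to cur_pos."""
    cx, cy = cur_pos
    return [(x - cx, y - cy) for x, y in table.get(block_key(cur_block), [])]

ref = {(0, 0): [[1, 2], [3, 4]], (2, 0): [[5, 5], [5, 5]]}
table = build_table(ref)
mvs = predictors(table, [[1, 2], [3, 4]], (2, 2))   # matches block at (0, 0)
```

Each predictor then seeds the second-pass HEXBS search, which only refines locally.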
Abstract:
In this paper, a forward-looking infrared (FLIR) video surveillance system is presented for avoiding collisions of moving ships with bridge piers. An image pre-processing algorithm is proposed to reduce clutter noise by multi-scale fractal analysis, in which the blanket method is used to compute the fractal features. A moving-ship detection algorithm is then developed from differentials of the fractal feature in the region of surveillance between frames at regular intervals. Experimental results show that the approach is feasible and effective: it achieves real-time, reliable alerts to help avoid collisions of moving ships with bridge piers.
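The blanket method mentioned above treats the image intensity as a surface and grows an upper and lower "blanket" around it; the surface area at scale e is the blanket volume divided by 2e, and fractal texture features come from how that area changes with scale. A toy 4-neighbour sketch (illustrative of the method, not the paper's implementation):

```python
# Blanket-method surface areas A(e) for a small grayscale image given
# as a list of rows. upper/lower blankets are dilated/eroded one step
# per scale; the volume between them yields the area estimate.
def blanket_areas(img, scales):
    h, w = len(img), len(img[0])

    def dilate(s, pick, step):
        out = [[0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                vals = [s[y][x] + step]
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w:
                        vals.append(s[ny][nx])
                out[y][x] = pick(vals)
        return out

    upper = [row[:] for row in img]
    lower = [row[:] for row in img]
    areas = []
    for e in range(1, max(scales) + 1):
        upper = dilate(upper, max, 1)    # blanket grows upward...
        lower = dilate(lower, min, -1)   # ...and downward
        volume = sum(upper[y][x] - lower[y][x]
                     for y in range(h) for x in range(w))
        if e in scales:
            areas.append(volume / (2 * e))
    return areas

# A perfectly flat surface: blanket area stays constant across scales.
areas = blanket_areas([[0] * 4 for _ in range(4)], {1, 2})
```

Rough (ship-edge) regions keep a large area as e grows, while smooth water does not, which is what makes the frame-to-frame feature differential usable for detection.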