3 results for work time tracking

in Repositório Digital da UNIVERSIDADE DA MADEIRA - Portugal


Relevance:

30.00%

Publisher:

Abstract:

Image stitching is the process of joining several images to obtain a bigger view of a scene. It is used, for example, in tourism to transmit to the viewer the sensation of being in another place. I present an inexpensive solution for automatic real-time video and image stitching with two web cameras as the video/image sources. The proposed solution relies on the use of several markers in the scene as reference points for the stitching algorithm. The implemented algorithm is divided into four main steps: marker detection, camera pose determination (relative to the markers), video/image resizing and 3D transformation, and image translation. Wii remote controllers are used to support several steps in the process: the built-in IR camera provides clean marker detection, which facilitates the camera pose determination. The only restriction of the algorithm is that the markers have to be in the field of view when capturing the scene. Several tests were made to evaluate the final algorithm. The algorithm is able to perform video stitching at a frame rate between 8 and 13 fps. The joining of the two videos/images is good, with minor misalignments in objects at the same depth as the markers; misalignments in the background and foreground are larger. The capture process is simple enough that anyone can perform a stitching after a very short explanation. Although real-time video stitching can be achieved by this affordable approach, there are a few shortcomings in the current version. For example, contrast inconsistency along the stitching line could be reduced by applying a color correction algorithm to every source video. In addition, the misalignments in stitched images due to camera lens distortion could be eased by an optical correction algorithm. The work was developed in Apple's Quartz Composer, a visual programming environment, and a library of extended functions was developed using Xcode tools, also from Apple.
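As a rough illustration of the four-step pipeline the abstract describes, the sketch below shows how marker positions shared by two views can drive a homography-based warp and composite. It is not the thesis implementation (which used Quartz Composer and Wii remote IR cameras); the OpenCV calls, function names, and the assumption that marker centroids are already detected and matched are illustrative only.

```python
# Minimal sketch of marker-driven stitching: marker detection (assumed done
# upstream) -> camera-pose/homography estimation -> transformation -> translation.
import numpy as np
import cv2

def stitch_pair(frame_left, frame_right, markers_left, markers_right):
    """Stitch two frames using >= 4 shared marker positions (pixel coordinates)."""
    # Step 1: marker detection is assumed to have produced matched Nx2 centroids.
    src = np.asarray(markers_right, dtype=np.float32)
    dst = np.asarray(markers_left, dtype=np.float32)

    # Step 2: estimate the planar transform relating the two camera views.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC)

    # Step 3: warp the right frame into the left frame's coordinate system,
    # on a canvas wide enough to hold both views.
    h, w = frame_left.shape[:2]
    canvas_w = w + frame_right.shape[1]
    warped = cv2.warpPerspective(frame_right, H, (canvas_w, h))

    # Step 4: translate/composite the left frame onto the canvas.
    warped[:, :w] = frame_left
    return warped
```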

Relevance:

30.00%

Publisher:

Abstract:

This thesis aimed at designing and developing a system that can a) infer individuals' need for a break from sedentary behaviour in the workplace, and b) persuade them to take a break through the use of different techniques from persuasive psychology. We postulated three predictor variables: individuals' posture, stress level, and involvement in their computer-mediated activity. We developed and field-studied a system that could infer these using a web camera and a log of key presses and mouse clicks. We found that the system could predict posture from viewing depth and stress from the movement detected. We then created a general formula that predicts individuals' need for a break using only the posture and stress predictors. Once the first objective was met, we built and field-studied a system that used three ways to communicate a break recommendation to a user: implicit, just-in-time, and ambient feedback. The implicit feedback was operationalized through changes in the user's computer wallpaper that provided subtle visual cues. The just-in-time feedback employed a prompt at the bottom-right corner of the user's screen; in addition, we implemented an intuitive behind-screen interaction technique with which people can snooze a notification using simple gestures. The ambient feedback mechanism employed an origami sculpture sitting on the user's desk. This prototype continuously reflected the user's posture and performed rhythmic movements to recommend breaks. A field study demonstrated the overall success of the system, with 69% of the break recommendations received by users being accepted. The study further revealed the strengths and weaknesses of the three persuasive mechanisms.
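The abstract mentions a general formula combining the posture and stress predictors into a need-for-a-break score, but does not state it. The sketch below is one plausible shape for such a score; the linear combination, weights, and threshold are hypothetical assumptions, not the thesis's actual formula.

```python
# Hedged sketch: combine a posture estimate (from webcam viewing depth) and a
# stress estimate (from detected movement) into a single break-need score.
# The weights and threshold below are illustrative assumptions only.
def break_need_score(posture_slouch, stress_level, w_posture=0.6, w_stress=0.4):
    """Both inputs are assumed normalised to [0, 1]; returns a score in [0, 1]."""
    return w_posture * posture_slouch + w_stress * stress_level

def should_recommend_break(posture_slouch, stress_level, threshold=0.7):
    # Recommend a break once the combined score crosses a (hypothetical) threshold.
    return break_need_score(posture_slouch, stress_level) >= threshold

# Example: pronounced slouching with moderate stress triggers a recommendation.
print(should_recommend_break(posture_slouch=0.9, stress_level=0.5))  # True
```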

Relevance:

30.00%

Publisher:

Abstract:

This thesis reports on research done for the integration of eye tracking technology into virtual reality environments, with the goal of using it in the rehabilitation of patients who suffered a stroke. For the last few years, eye tracking has been a focus of medical research, used as an assistive tool to help people with disabilities interact with new technologies and as an assessment tool to track eye gaze during computer interactions. However, tracking more complex gaze behaviors and relating them to motor deficits in people with disabilities is an area that has not been fully explored; it therefore became the focal point of this research. During the research, two exploratory studies were performed in which eye tracking technology was integrated into a newly created virtual reality task to assess the impact of stroke. Using an eye tracking device and a custom virtual task, the system developed is able to monitor changes in eye gaze patterns over time in patients with stroke, as well as to allow their eye gaze to function as an input for the task. Based on neuroscientific hypotheses of upper limb motor control, the studies aimed to verify the differences in gaze patterns during the observation and execution of the virtual goal-oriented task in stroke patients (N=10), and to assess normal gaze behavior in healthy participants (N=20). The results were consistent and supported the formulated hypotheses, showing that eye gaze can be used as a valid assessment tool for these patients. However, the findings of this first exploratory approach are not sufficient to fully understand the effect of stroke on eye gaze behavior. Therefore, a novel model-driven paradigm is proposed to further understand the relation between the neuronal mechanisms underlying goal-oriented actions and eye gaze behavior.
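To illustrate how eye gaze can act as an input for a goal-oriented virtual task, the sketch below maps normalised tracker coordinates to screen positions and uses dwell-time selection over targets. The thesis abstract does not specify the selection mechanism; dwell selection, the display resolution, and the timing constants here are assumptions chosen for illustration.

```python
# Hedged sketch of gaze-as-input: map normalised gaze samples to screen pixels
# and select a target after a sustained fixation (dwell) on it.
import math

SCREEN_W, SCREEN_H = 1920, 1080   # assumed display resolution
DWELL_FRAMES = 30                 # assumed dwell threshold (~0.5 s at 60 Hz)
TARGET_RADIUS = 80                # assumed target size in pixels

def gaze_to_pixels(gx, gy):
    """Map normalised tracker coordinates ([0,1] x [0,1]) to screen pixels."""
    return gx * SCREEN_W, gy * SCREEN_H

def dwell_select(gaze_samples, targets):
    """Return the first target fixated for DWELL_FRAMES consecutive samples."""
    counts = {name: 0 for name in targets}
    for gx, gy in gaze_samples:
        px, py = gaze_to_pixels(gx, gy)
        for name, (tx, ty) in targets.items():
            if math.hypot(px - tx, py - ty) <= TARGET_RADIUS:
                counts[name] += 1
                if counts[name] >= DWELL_FRAMES:
                    return name
            else:
                counts[name] = 0   # fixation interrupted, reset the dwell count
    return None
```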