813 results for Computer and Video Games
Abstract:
The aim of this Master's thesis is the analysis, design and development of a robust and reliable human-computer interaction interface based on visual hand-gesture recognition. The implementation of the required functions is oriented towards the simulation of a classical hardware interaction device, the mouse, by recognizing a specific hand-gesture vocabulary in color video sequences. For this purpose, a prototype hand-gesture recognition system has been designed and implemented, composed of three stages: detection, tracking and recognition. The system is based on machine learning methods and pattern recognition techniques, integrated with other image processing approaches to achieve high recognition accuracy at a low computational cost. Regarding pattern recognition techniques, several algorithms and strategies applicable to color images and video sequences have been designed and implemented. These algorithms extract spatial and spatio-temporal features from static and dynamic hand gestures in order to identify the gestures in a robust and reliable way. Finally, a visual database containing the vocabulary of gestures necessary for interacting with the computer has been created.
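The detection stage of a pipeline like the one above usually starts by segmenting candidate skin regions. As a minimal sketch, a classic rule-based RGB skin classifier can serve as the first step; this is a common baseline heuristic, not necessarily the detector used in the thesis:

```python
import numpy as np

def skin_mask(rgb: np.ndarray) -> np.ndarray:
    """Per-pixel skin classification with a classic rule-based RGB test
    (a baseline heuristic; the thesis's actual detector is ML-based)."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    spread = rgb.max(axis=-1).astype(int) - rgb.min(axis=-1).astype(int)
    return ((r > 95) & (g > 40) & (b > 20) & (spread > 15)
            & (np.abs(r - g) > 15) & (r > g) & (r > b))

# A skin-toned pixel passes the rule; a pure blue pixel does not.
img = np.array([[[220, 170, 140], [0, 0, 255]]], dtype=np.uint8)
mask = skin_mask(img)
```

Connected components of such a mask would then feed the tracking stage; in practice a learned classifier replaces the fixed thresholds.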
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
It is proposed that games, which are designed to generate positive affect, are most successful when they facilitate flow (Csikszentmihalyi 1992). Flow is a state of concentration, deep enjoyment, and total absorption in an activity. The study of games, and a resulting understanding of flow in games, can inform the design of non-leisure software for positive affect. The paper considers the ways in which computer games contravene Nielsen's guidelines for heuristic evaluation (Nielsen and Molich 1990) and how these contraventions impact flow. The paper also explores the implications for research that stem from the differences between games played on a personal computer and games played on a dedicated console. This research takes important initial steps towards defining how flow in computer games can inform affective design.
Abstract:
Objectives: To validate the WOMAC 3.1 in a touch screen computer format, which presents each question as a cartoon, in writing and in speech (QUALITOUCH method), and to assess patient acceptance of the computer touch screen version. Methods: The paper and computer formats of WOMAC 3.1 were applied in random order to 53 subjects with hip or knee osteoarthritis. The mean age of the subjects was 64 years (range 45 to 83); 60% were male, 53% were 65 years or older, and 53% used computers at home or at work. Agreement between formats was assessed by intraclass correlation coefficients (ICCs). Preferences were assessed with a supplementary questionnaire. Results: ICCs between formats were 0.92 (95% confidence interval, 0.87 to 0.96) for pain, 0.94 (0.90 to 0.97) for stiffness, and 0.96 (0.94 to 0.98) for function. ICCs were similar in men and women, in subjects with or without previous computer experience, and in subjects below or above age 65. The computer format was found easier to use by 26% of the subjects, the paper format by 8%, and 66% were undecided. Overall, 53% of subjects preferred the computer format, 9% preferred the paper format, and 38% were undecided. Conclusion: The computer format of the WOMAC 3.1 is a reliable assessment tool. Agreement between computer and paper formats was independent of computer experience, age, or sex. Thus the computer format may help improve patient follow up by meeting patients' preferences and providing immediate results.
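The agreement analysis above relies on the intraclass correlation coefficient, which can be computed directly from two-way ANOVA mean squares. A minimal sketch of ICC(2,1) (two-way random effects, absolute agreement, single measures; the abstract does not state which ICC variant was used, so this is one common choice):

```python
import numpy as np

def icc2_1(scores: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single measures.
    `scores` is an (n subjects x k raters/formats) matrix."""
    n, k = scores.shape
    grand = scores.mean()
    msr = k * np.sum((scores.mean(axis=1) - grand) ** 2) / (n - 1)  # subjects
    msc = n * np.sum((scores.mean(axis=0) - grand) ** 2) / (k - 1)  # formats
    sse = np.sum((scores - grand) ** 2) - (n - 1) * msr - (k - 1) * msc
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Two formats in perfect agreement give an ICC of (numerically) 1.
paper_scores = np.array([3.0, 5.0, 7.0, 2.0])
icc = icc2_1(np.column_stack([paper_scores, paper_scores]))
```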
Abstract:
With the rapid increase in both centralized video archives and distributed WWW video resources, content-based video retrieval is gaining importance. To support such applications efficiently, content-based video indexing must be addressed. Typically, each video is represented by a sequence of frames. Due to the high dimensionality of frame representation and the large number of frames, video indexing introduces an additional degree of complexity. In this paper, we address the problem of content-based video indexing and propose an efficient solution, called the Ordered VA-File (OVA-File), based on the VA-file. OVA-File is a hierarchical structure with two novel features: 1) the whole file is partitioned into slices such that only a small number of slices are accessed and checked during k Nearest Neighbor (kNN) search, and 2) insertions of new vectors into the OVA-File are handled efficiently, such that the average distance between the new vectors and the approximations near that position is minimized. To facilitate search, we present an efficient approximate kNN algorithm named Ordered VA-LOW (OVA-LOW) based on the proposed OVA-File. OVA-LOW first chooses candidate OVA-Slices by ranking the distances between their centers and the query vector, and then visits all approximations in the selected OVA-Slices to compute the approximate kNN. The number of candidate OVA-Slices is controlled by a user-defined parameter delta; by adjusting delta, OVA-LOW provides a trade-off between query cost and result quality. Query by video clip, consisting of multiple frames, is also discussed. Extensive experimental studies using real video data sets show that our methods yield a significant speed-up over an existing VA-file-based method and iDistance, with high query result quality. Furthermore, by incorporating the temporal correlation of video content, our methods achieve considerably better efficiency.
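The slice-pruning idea behind OVA-LOW can be illustrated with a toy sketch: rank slices by the distance from the query to their centers, then search only the nearest `delta` slices. The VA-file's bit-level vector approximations are omitted here for brevity:

```python
import numpy as np

def approx_knn(query, centers, slices, k, delta):
    """OVA-LOW-style pruning: visit only the `delta` slices whose centers
    are closest to the query, then do exact kNN among their vectors."""
    order = np.argsort([np.linalg.norm(query - c) for c in centers])
    candidates = np.vstack([slices[i] for i in order[:delta]])
    dists = np.linalg.norm(candidates - query, axis=1)
    return candidates[np.argsort(dists)[:k]]

# Three well-separated slices; with delta=1 only the nearest is visited.
rng = np.random.default_rng(0)
slices = [rng.normal(c, 0.1, size=(20, 2)) for c in ([0, 0], [5, 5], [10, 10])]
centers = [s.mean(axis=0) for s in slices]
query = np.array([0.2, 0.1])
nn = approx_knn(query, centers, slices, k=3, delta=1)
```

Raising `delta` visits more slices, trading query cost for result quality, which mirrors the paper's tuning parameter.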
Abstract:
Processor emulators are a software tool for allowing legacy computer programs to be executed on a modern processor. In the past, emulators have been used in trivial applications such as the maintenance of video games. Now, however, processor emulation is being applied to safety-critical control systems, including military avionics. These applications demand utmost guarantees of correctness, but no verification techniques exist for proving that an emulated system preserves the original system's functional and timing properties. Here we show how this can be done by combining concepts previously used for reasoning about real-time program compilation with an understanding of the new and old software architectures. In particular, we show how both the old and new systems can be given a common semantics, allowing their behaviours to be compared directly.
Abstract:
This paper describes methods used to support collaboration and communication between practitioners, designers and engineers when designing ubiquitous computing systems. We tested methods such as "Wizard of Oz" and design games in a real domain, the dental surgery, in an attempt to create a system that is affordable, minimally disruptive of the natural flow of work, and improves human-computer interaction. In doing so we found that such activities allowed the practitioners to be on a 'level playing ground' with designers and engineers. The findings we present suggest that dentists are willing to engage in detailed exploration and constructive critique of technical design possibilities if the design ideas and prototypes are presented in the context of their work practice and are of a resolution and relevance that allow them to jointly explore and question with the design team. This paper is an extension of a short paper submitted to the Participatory Design Conference, 2004.
Abstract:
As an anomaly on the market of military shooters of the 21st century, Spec Ops: The Line entails a journey of undetermined realities and modern warfare consequences. In this study, the narrative is analyzed from the perspective of Jean Baudrillard's idea that simulations have replaced our conception of reality. Both the protagonist and the player of Spec Ops unavoidably descend into a state of the hyperreal: they experience multiple possible realities within the game narrative and end up unable to comprehend what has transpired. The hyperreal is defined as the state in which it is impossible to discern reality from simulation; the simulation of reality has proliferated to the point of becoming reality itself, and the original has been lost. The excessive use of violence, the direct address of the player through a break with the fourth wall, and a deceitful narrator all contribute to this loss of reality within the game. Although the game represents simulacra, being a simulation in itself, the object of study is the coexisting state of the hyperreal shared between protagonist and player when comprehending events in the game. In the end, neither party can understand or discern with any certainty what transpired within the game.
Abstract:
This article aims to gain a greater understanding of relevant and successful methods of stimulating an ICT culture and skills development in rural areas. The paper distils good practice activities from a range of relevant initiatives and programmes, utilizing criteria derived from a review of the rural dimensions of ICT learning. These good practice activities cover: community resource centres providing opportunities for 'tasting' ICTs; video games and Internet cafés as tools removing 'entry barriers'; emphasis on 'user management' as a means of creating ownership; service delivery beyond fixed locations; use of ICT capacities in the delivery of general services; and selected use of financial support.
Abstract:
The aim was to develop an archive containing detailed descriptions of church bells. As an object of cultural heritage, each bell has general properties such as geometric dimensions, weight, its sound, and the pitch of its tone, as well as acoustical diagrams obtained using contemporary equipment. The audio, photo and video archive is developed using advanced technologies for analysis, preservation and data protection.
Abstract:
In this work we deal with video streams over TCP networks and propose an alternative measure to the widely used and accepted peak signal-to-noise ratio (PSNR), owing to the limitations of this metric in the presence of temporal errors. A test-bed was created to simulate buffer under-runs in scalable video streams, and the pauses produced as a result of the buffer under-runs were inserted into the video before it was employed as the subject of subjective testing. The pause intensity metric proposed in [1] was compared with the subjective results, and it was shown that, in spite of reductions in frame rate and resolution, a correlation with pause intensity still exists. Given these conclusions, the metric may be employed for layer selection in scalable video streams. © 2011 IEEE.
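For reference, the PSNR baseline whose limitations motivate the work is computed frame-by-frame from the mean squared error, which is exactly why it is blind to temporal artefacts such as playback pauses:

```python
import numpy as np

def psnr(ref: np.ndarray, dist: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between a reference frame and a
    distorted frame; identical frames give infinity. Computed per frame,
    so temporal errors (e.g. pauses) leave it unchanged."""
    mse = np.mean((ref.astype(float) - dist.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((4, 4), dtype=np.uint8)
noisy = np.ones((4, 4), dtype=np.uint8)   # every pixel off by 1
identical = psnr(ref, ref)                # infinite PSNR
degraded = psnr(ref, noisy)               # 10*log10(255^2), about 48.1 dB
```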
Abstract:
The main challenges of multimedia data retrieval lie in the effective mapping between low-level features and high-level concepts, and in individual users' subjective perceptions of multimedia content.
The objective of this dissertation is to develop an integrated multimedia indexing and retrieval framework with the aim of bridging the gap between semantic concepts and low-level features. To achieve this goal, a set of core techniques has been developed, including image segmentation, content-based image retrieval, object tracking, video indexing, and video event detection. These core techniques are integrated in a systematic way to enable semantic search for images and videos, and can be tailored to solve problems in other multimedia-related domains. In image retrieval, two new methods of bridging the semantic gap are proposed: (1) for general content-based image retrieval, a stochastic mechanism is utilized to enable the long-term learning of high-level concepts from a set of training data, such as user access frequencies and access patterns of images; (2) in addition to whole-image retrieval, a novel multiple instance learning framework is proposed for object-based image retrieval, by which a user can more effectively search for images that contain multiple objects of interest. An enhanced image segmentation algorithm is developed to extract object information from images. This segmentation algorithm is further used in video indexing and retrieval, where a robust video shot/scene segmentation method is developed based on low-level visual feature comparison, object tracking, and audio analysis. Based on shot boundaries, a novel data mining framework is proposed to detect events in soccer videos, fully utilizing the multi-modality features and object information obtained through video shot/scene detection.
Another contribution of this dissertation is the potential of the above techniques to be tailored and applied to other multimedia applications. This is demonstrated by their utilization in traffic video surveillance applications. The enhanced image segmentation algorithm, coupled with an adaptive background learning algorithm, improves the performance of vehicle identification. A sophisticated object tracking algorithm is proposed to track individual vehicles, while the spatial and temporal relationships of vehicle objects are modeled by an abstract semantic model.
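The shot-boundary step described above can be illustrated with a simple histogram-difference detector; this is a stand-in for the dissertation's method, which additionally fuses object tracking and audio analysis:

```python
import numpy as np

def shot_boundaries(frames, threshold=0.5):
    """Declare a cut at frame i when the normalized histogram distance
    (half the L1 difference) to frame i-1 exceeds `threshold`."""
    cuts, prev = [], None
    for i, frame in enumerate(frames):
        hist, _ = np.histogram(frame, bins=16, range=(0, 256))
        hist = hist / hist.sum()
        if prev is not None and 0.5 * np.abs(hist - prev).sum() > threshold:
            cuts.append(i)
        prev = hist
    return cuts

# Two dark frames followed by two bright frames: one cut, at index 2.
dark = np.full((8, 8), 10, dtype=np.uint8)
bright = np.full((8, 8), 240, dtype=np.uint8)
cuts = shot_boundaries([dark, dark, bright, bright])
```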
Abstract:
Today, the development of domain-specific communication applications is both time-consuming and error-prone, because the low-level communication services provided by existing systems and networks are primitive and often heterogeneous. Multimedia communication applications are typically built on top of low-level network abstractions such as TCP/UDP sockets and the SIP (Session Initiation Protocol) and RTP (Real-time Transport Protocol) APIs. The User-centric Communication Middleware (UCM) is proposed to encapsulate the networking complexity and heterogeneity of basic multimedia and multi-party communication for upper-layer communication applications. UCM provides a unified user-centric communication service to diverse communication applications, ranging from simple phone calls and video conferencing to specialized applications such as disaster management and telemedicine, thereby simplifying the development of domain-specific communication applications. The UCM abstraction and API are proposed to achieve these goals. The dissertation also integrates formal methods into the UCM development process. A formal model of UCM is created using the SAM methodology; several design errors were found during model creation because the formal method forces a precise description of UCM. Using the SAM tool, the formal UCM model is translated into a Promela model. Several system properties are defined as temporal logic formulas, which are manually translated into Promela, individually integrated with the Promela model of UCM, and verified using the SPIN tool. The formal analysis helps verify system properties (for example, the multiparty multimedia protocol) and uncover bugs in the system.
Abstract:
The number of overweight people has increased in recent years. Factors such as attention to diet and changes in lifestyle are crucial in the prevention and control of obesity and related diseases. Experts believe that such actions are most effective when initiated during childhood, and that children raised in an environment that encourages physical activity ultimately become healthier adults. However, arousing and maintaining interest in such activities represents a major challenge, as they are initially perceived as repetitive and boring and are thus soon abandoned. Computer games, traditionally seen as encouraging a sedentary lifestyle, are changing this perception through non-conventional controls that require constant movement by the player. Applications that combine the playfulness of such games with physical activity through devices like the Microsoft Kinect might become interesting tools in this scenario, using the familiarity of natural user interfaces along with the challenge and fun of video games to make exercise routines attractive to schoolchildren. The project carried out consists of an exergame composed of several activities designed and implemented with the participation of a physical educator, aimed at children between eight and ten years old, whose performance and progress can be remotely monitored by a professional via a web interface. The resulting application was tested with a group of graduating Physical Education students from the University of Rio Verde, GO, and subsequently validated through questionnaires, whose results are presented in this work.
Abstract:
Allocating resources optimally is a nontrivial task, especially when multiple self-interested agents with conflicting goals are involved. This dissertation uses techniques from game theory to study two classes of such problems: allocating resources to catch agents that attempt to evade them, and allocating payments to agents in a team in order to stabilize it. Besides discussing which allocations are optimal from various game-theoretic perspectives, we also study how to compute them efficiently and, where no such algorithms are found, what computational hardness results can be proved.
The first class of problems is inspired by real-world applications such as the TOEFL iBT test, course final exams, driver's license tests, and airport security patrols. We call them test games and security games. This dissertation first studies test games separately, and then proposes a framework of Catcher-Evader games (CE games) that generalizes both test games and security games. We show that the optimal test strategy can be efficiently computed for scored test games, but is hard to compute for many binary test games. Optimal Stackelberg strategies are hard to compute for CE games, but we give an empirically efficient algorithm for computing their Nash equilibria. We also prove that the Nash equilibria of a CE game are interchangeable.
The second class of problems involves how to split a reward that is collectively obtained by a team: for example, how should a startup distribute its shares, and what salary should an enterprise pay its employees? Several stability-based solution concepts in cooperative game theory, such as the core, the least core, and the nucleolus, are well suited to this purpose when the goal is to prevent coalitions of agents from breaking off. We show that some of these solution concepts can be justified as the most stable payments under noise. Moreover, by adjusting the noise models (to be arguably more realistic), we obtain new solution concepts including the partial nucleolus, the multiplicative least core, and the multiplicative nucleolus. We then study the computational complexity of these solution concepts under the constraint of superadditivity. Our result is based on what we call Small-Issues-Large-Team games, and it applies to popular representation schemes such as MC-nets.
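The least core mentioned above picks the payments that minimize the strongest incentive of any coalition to break off. A brute-force sketch for a toy three-player game (the characteristic-function values are made up for illustration; real solvers formulate this as a linear program rather than enumerating a grid):

```python
from itertools import combinations

def max_excess(v, payoff):
    """Largest excess v(S) - x(S) over proper coalitions S: how strongly
    the most dissatisfied coalition wants to break off under `payoff`."""
    n = len(payoff)
    return max(v[S] - sum(payoff[i] for i in S)
               for r in range(1, n)
               for S in combinations(range(n), r))

# Toy 3-player game; the grand coalition's value of 6 is split as payments.
v = {(0,): 0.0, (1,): 0.0, (2,): 0.0,
     (0, 1): 4.0, (0, 2): 4.0, (1, 2): 2.0}

# Brute-force the least core over payments in thirds summing to 6.
best = min(((a / 3, b / 3, (18 - a - b) / 3)
            for a in range(19) for b in range(19 - a)),
           key=lambda x: max_excess(v, x))
```

Here `best` comes out to (10/3, 4/3, 4/3) with a maximum excess of -2/3, i.e. strictly inside the core; the nucleolus refines this by lexicographically minimizing the sorted vector of excesses.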