863 results for Games and entertainment
Abstract:
The proliferation of video games and other applications of computer graphics in everyday life demands a much easier way to create animatable virtual human characters. Traditionally, this has been the job of highly skilled artists and animators who painstakingly model, rig and animate their avatars, and usually have to tune them for each application and transmission/rendering platform. The emergence of virtual/mixed reality environments also calls for practical and cost-effective ways to produce custom models of actual people. The purpose of the present dissertation is to bring 3D human scanning closer to the average user. To this end, two different techniques are presented, one passive and one active. The first is a fully automatic system for generating statically multi-textured avatars of real people captured with several standard cameras. Our system uses a state-of-the-art shape-from-silhouette technique to retrieve the shape of the subject. However, to deal with the lack of detail that is common in the facial region for this kind of technique, which does not handle concavities correctly, our system proposes an approach to improve the quality of this region. This face enhancement technique uses a generic facial model that is transformed according to the specific facial features of the subject. Moreover, the system features a novel technique for generating view-independent texture atlases computed from the original images. This static multi-texturing system yields a seamless texture atlas calculated by combining the color information from several photos. We suppress the color seams caused by image misalignments and irregular lighting conditions that multi-texturing approaches typically suffer from, while minimizing the blurring effect introduced by color blending techniques. The second technique is a system that retrieves a fully animatable 3D model of a human using a commercial depth sensor. Unlike other approaches in the current state of the art, our system does not require the user to remain completely still throughout the scanning process, nor does it move the depth sensor around the subject to cover its entire surface. Instead, the depth sensor remains static and the skeleton tracking information is used to compensate for the user's movements during the scanning stage.
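As a rough illustration of the view-dependent weighting that multi-texturing systems of this kind rely on (the cosine-power weighting and the function below are illustrative assumptions, not the exact blending scheme of the dissertation), a texel's color can be blended from several photos while favouring the cameras that see the surface most frontally:

```python
import numpy as np

def blend_texel(colors, view_dirs, normal, visible, sharpness=4.0):
    """Blend one texel's color from K calibrated cameras (illustrative sketch).

    colors    : (K, 3) colors sampled from each photo
    view_dirs : (K, 3) unit vectors from the surface point towards each camera
    normal    : (3,)  unit surface normal at the point
    visible   : (K,)  True where the point is unoccluded in that view
    """
    cos = np.clip(view_dirs @ normal, 0.0, 1.0)   # how frontally each camera sees the point
    weights = (cos ** sharpness) * visible        # discard occluded views, favour frontal ones
    if weights.sum() == 0.0:
        return np.zeros(3)                        # texel not observed by any camera
    weights = weights / weights.sum()
    return weights @ colors                       # seam-reducing weighted average
```

Raising the (assumed) `sharpness` parameter trades blur from blending against visible seams, which is the tension the static multi-texturing system described above is designed to resolve.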
Abstract:
Background: Studies suggest that expert performance in sport is the result of long-term engagement in a highly specialized form of training termed deliberate practice. The relationship between accumulated deliberate practice and performance predicts that those who begin deliberate practice at a young age accumulate more practice hours over time and would, therefore, have a significant performance advantage. However, qualitative studies have shown that a large amount of sport-specific practice at a young age may lead to negative consequences, such as dropout, and is not necessarily the only path to expert performance in sport. Studies have yet to investigate the activity context, such as the amount of early sport participation, deliberate play and deliberate practice, within which dropout occurs. Purpose: To determine whether the nature and amount of childhood organized sport, deliberate play and deliberate practice participation influence athletes' subsequent decisions to drop out or invest in organized sport. It was hypothesized that young athletes who drop out would have sampled fewer sports, spent less time in deliberate play activities and spent more time in deliberate practice activities during childhood sport involvement. Participants: The parents of eight current, high-level, male, minor ice hockey players formed an active group. The parents of four high-level, male, minor ice hockey players who had recently withdrawn from competitive hockey formed a dropout group. Data collection: Parents completed a structured retrospective survey designed to assess their sons' involvement in organized sport, deliberate play and deliberate practice activities from ages 6 to 13. Data analysis: A complete data set was available for ages 6 through 13, resulting in a longitudinal data set spanning eight years. This eight-year range was divided into three levels of development corresponding to the players' progress through the youth ice hockey system. Level one encompassed ages 6–9, level two included ages 10–11 and level three covered ages 12–13. Descriptive statistics were used to report the ages at which the active and dropout players first engaged in select hockey activities. ANOVA with repeated measures across the three levels of development was used to compare the number of sports the active and dropout players were involved in outside of hockey, the number of hours spent in these sports, and involvement in various hockey-related activities. Findings: Results indicated that both the active and dropout players enjoyed a diverse and playful introduction to sport. Furthermore, the active and dropout players invested similar amounts of time in organized hockey games, organized hockey practices, specialized hockey training activities (e.g. hockey camps) and hockey play. However, analysis revealed that the dropout players began off-ice training at a younger age and invested significantly more hours per year in off-ice training at ages 12–13, indicating that engaging in off-ice training activities at a younger age may have negative implications for long-term ice hockey participation. Conclusion: These results are consistent with previous research that has found that early diversification does not hinder sport-specific skill development and may, in fact, be preferable to early specialization. The active and dropout players differed in one important aspect of deliberate practice: off-ice training activities.
The dropout players began off-ice training at a younger age, and participated in more off-ice training at ages 12 and 13 than their active counterparts. This indicates a form of early specialization and supports the postulate that early involvement in practice activities that are not enjoyable may ultimately undermine the intrinsic motivation to continue in sport. Youth sport programs should not focus on developing athletic fitness through intense and routine training, but rather on sport-specific practice, games and play activities that foster fun and enjoyment.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
It is proposed that games, which are designed to generate positive affect, are most successful when they facilitate flow (Csikszentmihalyi 1992). Flow is a state of concentration, deep enjoyment, and total absorption in an activity. The study of games, and a resulting understanding of flow in games, can inform the design of non-leisure software for positive affect. The paper considers the ways in which computer games contravene Nielsen's guidelines for heuristic evaluation (Nielsen and Molich 1990) and how these contraventions impact on flow. The paper also explores the implications for research that stem from the differences between games played on a personal computer and games played on a dedicated console. This research takes important initial steps towards defining how flow in computer games can inform affective design.
Abstract:
This article aims to gain a greater understanding of relevant and successful methods of stimulating an ICT culture and skills development in rural areas. The paper distils good practice activities, utilizing criteria derived from a review of the rural dimensions of ICT learning, from a range of relevant initiatives and programmes. These good practice activities cover: community resource centres providing opportunities for ‘tasting’ ICTs; video games and Internet cafés as tools removing ‘entry barriers’; emphasis on ‘user management’ as a means of creating ownership; service delivery beyond fixed locations; use of ICT capacities in the delivery of general services; and selected use of financial support.
Abstract:
Agent-based technology is playing an increasingly important role in today’s economy. Usually a multi-agent system is needed to model an economic system such as a market, in which heterogeneous trading agents interact with each other autonomously. Two questions often need to be answered regarding such systems: 1) how to design an interaction mechanism that facilitates efficient resource allocation among usually self-interested trading agents, and 2) how to design an effective strategy for an agent to maximise its economic returns under a specific market mechanism. For automated market systems, the auction is the most popular mechanism for solving resource allocation problems among participants. However, auctions come in hundreds of different formats, some of which are better than others in terms of not only allocative efficiency but also other properties, e.g., whether they generate high revenue for the auctioneer or induce stable bidder behaviour. In addition, different strategies yield very different performance under the same auction rules. Against this background, we investigate auction mechanism and strategy design for agent-based economics. The international Trading Agent Competition (TAC) Ad Auction (AA) competition provides a very useful platform for developing and testing agent strategies in the Generalised Second Price (GSP) auction. AstonTAC, the runner-up of TAC AA 2009, is a successful advertiser agent designed for GSP-based keyword auctions. In particular, AstonTAC generates adaptive bid prices according to the Market-based Value Per Click and selects the set of keyword queries with the highest expected profit to bid on, maximising its expected profit under its conversion-capacity limit. Through evaluation experiments, we show that AstonTAC performs well and stably not only in the competition but also across a broad range of environments. The TAC CAT tournament provides an environment for investigating the optimal design of mechanisms for double auction markets. AstonCAT-Plus is the post-tournament version of the specialist developed for CAT 2010. In our experiments, AstonCAT-Plus not only outperforms most specialist agents designed by other institutions but also achieves high allocative efficiency, transaction success rates and average trader profits. Moreover, we reveal some insights into CAT markets: 1) successful markets should maintain a stable and high market share of intra-marginal traders; 2) a specialist’s performance depends on the distribution of trading strategies. However, typical double auction models assume that trading agents have a fixed trading direction, either buy or sell. With this limitation they cannot directly reflect the fact that traders in financial markets (the most popular application of the double auction) decide their trading directions dynamically. To address this issue, we introduce the Bi-directional Double Auction (BDA) market, which is populated by two-way traders. Experiments are conducted under both dynamic and static settings of the continuous BDA market. We find that the allocative efficiency of a continuous BDA market mainly comes from the rational selection of trading directions. Furthermore, we introduce a high-performance Kernel trading strategy for the BDA market, which uses a kernel probability density estimator built on historical transaction data to decide optimal order prices.
The Kernel trading strategy outperforms some popular intelligent double auction trading strategies, including ZIP, GD and RE, in the continuous BDA market by making the highest profit in static games and obtaining the best wealth in dynamic games.
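A minimal sketch of the idea behind such a kernel-based pricing rule (the Gaussian kernel, fixed bandwidth and mode-picking rule below are assumptions for illustration, not the thesis' exact Kernel strategy):

```python
import numpy as np

def kernel_order_price(past_prices, bandwidth=1.0, grid_size=200):
    """Choose an order price from a kernel density estimate of past transaction prices."""
    prices = np.asarray(past_prices, dtype=float)
    grid = np.linspace(prices.min(), prices.max(), grid_size)
    # Unnormalized Gaussian kernel density evaluated on a price grid
    diffs = (grid[:, None] - prices[None, :]) / bandwidth
    density = np.exp(-0.5 * diffs ** 2).sum(axis=1)
    return grid[np.argmax(density)]  # quote where transactions have historically been densest

# Example: recent trades clustered near 100 suggest quoting close to 100
print(kernel_order_price([97.0, 99.5, 100.0, 100.5, 101.0, 103.0]))
```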
Abstract:
Purpose - To study how the threats of terrorism are being handled by a variety of UK companies in the travel and leisure sector in the post-9/11 era. Design/methodology/approach - A review of the literature on risk management in a world that is perceived to be riskier as a result of the terrorist attacks on the US on 11 September 2001 (9/11) is presented. Describes the application of theories of organizational resilience and institutions to frame an understanding of how managers make sense of terrorism risk and comprehend uncertainty. Reports a qualitative analysis of themes in interviews conducted with 25 managers from 6 unnamed organizations in the aviation industry (3 organizations) and the UK travel and leisure industry (3 organizations), representing a catering supplier, an airport, an airline, a tour company, a convention centre, and an arts and entertainment centre. Findings - The results indicated that the three organizations in the aviation industry prioritize threats from terrorism, whilst the three organizations in the leisure and travel sector do not, suggesting that the managers in the travel and leisure industry apply a probabilistic type of thinking and believe the likelihood of terrorism to be low. Reports that they give precedence to economic concerns and numerous other threats to the industry. Concludes that managers fall prey to the 'ludic fallacy', which conceives of all odds as calculable, and hence perceive the terrorism risk as low while also expecting institutional factors to pre-empt and control terrorism threats, a reaction the authors believe to be rather complacent and dangerous. Originality/value - Contributes to the research literature on risk management by revealing the gap in the ability of existing management tools and methodologies to deal with the current and uncertain threats facing organizations due to terrorism.
Abstract:
Educational games such as quizzes, quests, puzzles, mazes and logical problems may be modeled as multimedia board games. Within the scope of the ADOPTA project, under development at the Faculty of Mathematics and Informatics at Sofia University, a formal model for the presentation of such educational board games was invented and elaborated. Educational games can be modeled as special board mini-games, with a board of any form and any types of positions. Figures (objects) with certain properties are placed over defined positions, and formal rules are then defined for the manipulation of these figures and their resulting effects. The model has been found to be general enough to allow the description and execution control of more complex logical problems, solved through several actions delivered to/by the player according to formal rules and context conditions, and, in general, of any learning activities and their workflow. It is used as the basis of a software platform providing facilities for the easy construction of multimedia board games and their execution. The platform consists of a game designer (i.e., a game authoring tool) and a game run-time controller communicating with each other through a game repository. Many examples of educational board games appropriate for didactic purposes, self-evaluation, etc. have been created and modeled; they are intended to be designed easily by authors with no IT skills or experience. By means of game metadata descriptions, these games are going to be included in narrative storyboards and then delivered to learners with an appropriate profile according to their learning style, preferences, etc. Moreover, the use of artificial intelligence agents is planned as well, either as virtual opponents of the player or as virtual advisers helping the gamer find the right solution to a problem within a given domain, such as discovering a treasure using a location map, finding the best tour in a virtual museum, guessing an unknown word in a hangman game, and many others.
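A minimal sketch of how such a board/figure/rule model could be represented (the class and field names below are hypothetical illustrations, not the actual ADOPTA model):

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional

@dataclass
class Figure:
    name: str
    properties: Dict[str, str] = field(default_factory=dict)

@dataclass
class Position:
    kind: str                          # e.g. "start", "question", "treasure"
    figure: Optional[Figure] = None    # figure currently placed on this position

@dataclass
class Rule:
    condition: Callable[["Board", int, int], bool]  # is this move allowed in the current context?
    effect: Callable[["Board", int, int], None]     # resulting change to the board

@dataclass
class Board:
    positions: List[Position]
    rules: List[Rule] = field(default_factory=list)

    def move(self, src: int, dst: int) -> bool:
        """Apply the first rule whose condition permits moving a figure from src to dst."""
        for rule in self.rules:
            if rule.condition(self, src, dst):
                rule.effect(self, src, dst)
                return True
        return False
```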
Abstract:
The use of digital games and gamification has demonstrated potential to improve many aspects of how businesses provide training to staff and communicate with consumers. However, there is still a need for a better understanding of how the adoption of games and gamification would influence the process of decision-making in organisations across different industries. This article provides a structured review of the existing literature on the use of games in the business environment, seeks to consolidate findings to address research questions regarding their perception and proven efficacy, and identifies key areas for future work. The findings highlight that serious games can have positive and effective impacts in multiple areas of a business, including training, decision support, and consumer outreach. They also emphasise the challenges and pitfalls of applying serious games and gamification principles within a business context, and discuss the implications of development and evaluation methodologies for the success of a game-based solution.
Abstract:
Changing demographics impact our schools as children come from more linguistically and culturally diverse backgrounds. The various social, cultural, and economic backgrounds of the students affect their early language learning experiences, which expose them to the academic language needed to succeed in school. Teachers can help students acquire academic language by introducing words that are within their Zone of Proximal Development and by increasing exposure to and use of academic language. This study investigated the effects of increasing structured activities for students to orally interact with informational text on their scientific academic language development and comprehension of expository text. The Academic Text Talk activities, designed to scaffold verbalization of new words and ideas, included discussion, retelling, games, and sentence walls. This study also evaluated whether there were differences in scientific language proficiency and comprehension between boys and girls, and between English language learners and native English speakers. A quasi-experimental design was used to determine the relationship between increasing students' oral practice with academic language and their academic language proficiency. Second graders (n = 91) from an urban public school participated in two science units over an 8-week period and were pre- and post-tested using the Woodcock Muñoz Language Survey-Revised and vocabulary tests from the National Energy Education Development Project. Analysis of covariance was performed on the pre-to-post scores by treatment group to determine differences in academic language proficiency for students taught using Academic Text Talk compared to students taught using a text-centered method, using the initial Florida Assessment for Instruction in Reading test as a covariate. Students taught using Academic Text Talk multimodal strategies showed significantly greater increases in their pre-to-posttest means on the Woodcock Muñoz Language Survey-Revised Oral Language Totals and National Energy Education Development Project Vocabulary tests than students taught using the text-centered method, ps < .05. Boys did not show significantly greater increases than girls, nor did English language learners show significantly greater increases than native English speakers. This study informs the field of reading research by evaluating the effectiveness of a multimodal combination of strategies emphasizing discourse to build academic language.
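For readers unfamiliar with the analysis, an analysis of covariance of this shape (group differences on post-test scores adjusted for a baseline covariate) can be expressed as in the sketch below; the data frame and variable names are hypothetical, not the study's data:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical scores: posttest modeled by treatment group,
# adjusting for the baseline reading score used as covariate.
df = pd.DataFrame({
    "group":    ["text_talk", "text_talk", "control", "control"] * 3,
    "baseline": [18, 22, 20, 19, 25, 17, 21, 23, 16, 24, 20, 22],
    "posttest": [34, 38, 27, 25, 41, 30, 28, 31, 24, 39, 29, 33],
})

# ANCOVA: post scores by group, controlling for the initial test
model = smf.ols("posttest ~ baseline + C(group)", data=df).fit()
print(model.summary())
```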
Abstract:
Allocating resources optimally is a nontrivial task, especially when multiple self-interested agents with conflicting goals are involved. This dissertation uses techniques from game theory to study two classes of such problems: allocating resources to catch agents that attempt to evade them, and allocating payments to agents in a team in order to stabilize it. Besides discussing what allocations are optimal from various game-theoretic perspectives, we also study how to efficiently compute them, and if no such algorithms are found, what computational hardness results can be proved.

The first class of problems is inspired by real-world applications such as the TOEFL iBT test, course final exams, driver's license tests, and airport security patrols. We call them test games and security games. This dissertation first studies test games separately, and then proposes a framework of Catcher-Evader games (CE games) that generalizes both test games and security games. We show that the optimal test strategy can be efficiently computed for scored test games, but it is hard to compute for many binary test games. Optimal Stackelberg strategies are hard to compute for CE games, but we give an empirically efficient algorithm for computing their Nash equilibria. We also prove that the Nash equilibria of a CE game are interchangeable.

The second class of problems involves how to split a reward that is collectively obtained by a team. For example, how should a startup distribute its shares, and what salary should an enterprise pay to its employees. Several stability-based solution concepts in cooperative game theory, such as the core, the least core, and the nucleolus, are well suited to this purpose when the goal is to avoid coalitions of agents breaking off. We show that some of these solution concepts can be justified as the most stable payments under noise. Moreover, by adjusting the noise models (to be arguably more realistic), we obtain new solution concepts including the partial nucleolus, the multiplicative least core, and the multiplicative nucleolus. We then study the computational complexity of those solution concepts under the constraint of superadditivity. Our result is based on what we call Small-Issues-Large-Team games and it applies to popular representation schemes such as MC-nets.
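For reference, the stability-based solution concepts named above have standard definitions; for a cooperative game $(N, v)$ they can be written as follows (the partial and multiplicative variants introduced in the dissertation modify this standard setup):

\[
\operatorname{Core}(v) \;=\; \Bigl\{\, x \in \mathbb{R}^{N} \;:\; \sum_{i \in N} x_i = v(N), \;\; \sum_{i \in S} x_i \ge v(S) \;\; \forall\, S \subseteq N \,\Bigr\},
\]
\[
\varepsilon\text{-core}(v) \;=\; \Bigl\{\, x \;:\; \sum_{i \in N} x_i = v(N), \;\; \sum_{i \in S} x_i \ge v(S) - \varepsilon \;\; \forall\, \emptyset \neq S \subsetneq N \,\Bigr\},
\]

where the least core is the $\varepsilon$-core for the smallest $\varepsilon$ that leaves it non-empty, and the nucleolus lexicographically minimizes the non-increasingly sorted vector of excesses $v(S) - \sum_{i \in S} x_i$.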
Abstract:
This paper examines the remarkable and unexplored correspondence between games (and board games in particular) and what is commonly understood as theory in the social sciences. It argues that games exhibit many if not most of the attributes of theory, but that theory is missing some of the features of games. As such, games provide a way of rethinking what we mean by theory and theorizing. Specifically, games and their relationship with the ‘real’ world provide a way of thinking about theory and theorizing that is consistent with recent calls to frame social inquiry around the concept of phrónēsis.
Abstract:
The selected publications focus on the relations between users, eGames and the educational context, and on how they interact so that both learning and user performance are improved through feedback provision. A key part of this analysis is the identification of behavioural, anthropological patterns, so that users can be clustered based on their actions and the steps they take in the system (e.g. a social network, online community, or virtual campus). In doing so, we can analyse large data sets of information produced by a broad user sample, which provides more accurate statistical reports and readings. Furthermore, this research focuses on how users can be clustered based on individual and group behaviour, so that personalized support through feedback is provided and the personal learning process is improved, as well as the group interaction. We take inputs from every person and from the group they belong to, cluster the contributions, find behavioural patterns and provide personalized feedback to the individual and the group, based on personal and group findings. And we do all this in the context of educational games integrated in learning communities and learning management systems. To carry out this research we pose a set of research questions across the ten years of published work presented in this thesis. We ask whether users can be clustered together based on the inputs provided by them and their groups; whether and how these data are useful for improving learner performance and group interaction; whether and how feedback becomes a useful tool for such a pedagogical goal; whether and how eGames become a powerful context in which to deploy the pedagogical methodology and the various research methods and activities that make use of that feedback to encourage learning and interaction; and whether and how a game design and a learning design must be defined and implemented to achieve these objectives and to facilitate the productive authoring and integration of eGames in pedagogical contexts and frameworks. We conclude that educational games are a resourceful tool for providing a user experience that leads to better personalized learning performance and enhanced group interaction along the way. To do so, eGames, while integrated in an educational context, must follow a specific set of user and technical requirements, so that the playful context supports the underlying pedagogical model. We also conclude that, while playing, users can be clustered based on their personal behaviour and interaction with others, thanks to pattern identification. Based on this information, a set of recommendations is provided to the user and the group in the form of personalized feedback, timely managed for optimum impact on learning performance and group interaction. In this research, Digital Anthropology is introduced as a concept at a late stage to provide a backbone across various academic fields including social science, cognitive science, behavioural science, educational games and, of course, technology-enhanced learning. Although only recently described as an evolution of traditional anthropology, this approach to digital behaviour and social structure facilitates understanding amongst fields and a comprehensive view towards a combined approach.
This research takes forward the existing work and published research on users and eGames for learning, and turns the focus onto the next step: clustering users based on their behaviour and offering proper, personalized feedback to the user based on that clustering, rather than just on isolated inputs from every user. Indeed, this pattern recognition in the described context of eGames in educational settings, aimed at personalized counselling for the user and the group through feedback, is something that has not been accomplished before.
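A minimal sketch of the behaviour-based clustering step described above (the feature set and the choice of k-means are illustrative assumptions, not the exact method of the published work):

```python
import numpy as np
from sklearn.cluster import KMeans

# Each row is one learner's in-game behaviour: hypothetical features such as
# moves made, mean response time (s), hints requested, messages sent to the group.
interactions = np.array([
    [120, 4.2, 1, 15],
    [ 40, 9.8, 7,  2],
    [110, 3.9, 0, 20],
    [ 35, 8.5, 6,  1],
])

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(interactions)

# Feedback would then be tailored per behaviour cluster rather than per isolated input
for learner, cluster in enumerate(clusters):
    print(f"learner {learner} -> cluster {cluster}")
```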
Abstract:
Ageing of the population is a worldwide phenomenon. Numerous ICT-based solutions have been developed for elderly care, but mainly in connection with the physiological and nursing aspects of services for the elderly. Social work is a profession that should pay attention to the comprehensive wellbeing and social needs of the elderly. Many people experience loneliness and depression in their old age, either as a result of living alone or due to a lack of close family ties and reduced connections with their culture of origin, which results in an inability to participate actively in community activities (Singh & Misra, 2009). Participation in society would enhance their quality of life. With the development of information technology, the use of technology in social work practice has risen dramatically. The aim of this literature review is to map out the state of the art of knowledge about the usage of ICT in elderly care and to identify research-based knowledge about the usability of ICT for the prevention of loneliness and social isolation among elderly people. The data for the current research come from the core collection of the Web of Science, and the searching was performed using Boolean operators. The search resulted in 216 published English-language articles. After going through the topics and abstracts, 34 articles were selected for the data analysis, which is based on a multi-approach framework. The analysis of the research approach is categorized according to several aspects of the use of ICT by older adults, from the adoption of ICT to the impact of its usage, and the social services provided for them. This literature review focused on the function of communication, excluding applications that mainly relate to physical nursing. The results show that the so-called ‘digital divide’ still exists, but older adults are willing to learn and utilise ICT in daily life, especially for communication. The data show that the usage of ICT can prevent the loneliness and social isolation of older adults, and that they are eager for technical support in using ICT. The results of the data analysis on theoretical frames and concepts show that this research field applies different theoretical frames from various scientific fields, while a social work approach is lacking. A synergic frame of applied theories is therefore suggested from the perspective of social work.
Abstract:
Learning Analytics is an emerging field focused on analyzing learners’ interactions with educational content. One of the key open issues in learning analytics is the standardization of the data collected. This is a particularly challenging issue in serious games, which generate a diverse range of data. This paper reviews the current state of learning analytics, data standards and serious games, studying how serious games track their players’ interactions and the metrics that can be distilled from them. Based on this review, we propose an interaction model that establishes a basis for applying Learning Analytics to serious games. The paper then analyzes the current standards and specifications used in the field. Finally, it presents an implementation of the model with one of the most promising specifications: Experience API (xAPI). The Experience API relies on Communities of Practice developing profiles that cover different use cases in specific domains. This paper presents the Serious Games xAPI Profile: a profile developed to align with the most common use cases in the serious games domain. The profile is applied to a case study (a demo game), which explores the technical practicalities of standardizing data acquisition in serious games. In summary, the paper presents a new interaction model for tracking serious games and its implementation with the xAPI specification.
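To make the tracking format concrete, an xAPI statement is essentially an actor-verb-object record with optional result and context fields; the sketch below (written as a Python dictionary) uses illustrative activity IRIs and is not necessarily what the Serious Games xAPI Profile prescribes:

```python
# Illustrative xAPI statement a serious game might emit when a player completes a level.
statement = {
    "actor": {
        "objectType": "Agent",
        "name": "player-42",
        "account": {"homePage": "https://example.org/lms", "name": "player-42"},
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "https://example.org/games/demo/level-3",
        "definition": {"name": {"en-US": "Level 3"}},
    },
    "result": {"score": {"scaled": 0.85}, "success": True, "duration": "PT4M20S"},
}
```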