890 results for Game on circle


Relevance: 30.00%

Publisher:

Abstract:

Perhaps no other patient safety intervention depends more acutely on effective interprofessional teamwork for patient survival than the hospital rapid response system (RRS). Yet little is known about nurse-physician relationships when rescuing at-risk patients. This study compared nursing and medical staff perceptions of a mature RRS at a large tertiary hospital. Findings indicate the RRS may be failing to address a hierarchical culture and systems-level barriers to early recognition of, and response to, patient deterioration.

Relevance: 30.00%

Publisher:

Abstract:

This paper explores novel driving experiences that make use of gamification and augmented reality in the car. We discuss our design considerations, which are grounded in road safety psychology and video game design theory. We aim to address the tension between safe driving practices and player engagement. Specifically, we propose a holistic, iterative thinking process inspired by game design cognition and share our insights generated through the application of this process. We present preliminary game concepts that blend digital components with physical elements from the driving environment. We further highlight how this design process helped us to iteratively evolve these concepts towards being safer while maintaining fun. These insights and game design cognition itself will be useful to the AutomotiveUI community investigating similar novel driving experiences.

Relevance: 30.00%

Publisher:

Abstract:

This study uses the reverse salient methodology to contrast subsystems in video game consoles in order to discover, characterize, and forecast the most significant technology gap. We build on the current methodologies for measuring the magnitude of reverse salience (Performance Gap and Time Gap) by showing the effectiveness of the Performance Gap Ratio (PGR). The three subject subsystems in this analysis are CPU Score, GPU core frequency, and video memory bandwidth. CPU Score is a metric developed for this project: the product of core frequency, number of parallel cores, and instruction size. We measure the Performance Gap of each subsystem against concurrently available PC hardware on the market. Using PGR, we normalize the evolution of these technologies for comparative analysis. The results indicate that while CPU performance has historically been the reverse salient, video memory bandwidth has taken over as the fastest-growing technology gap in the current generation. Finally, we create a technology forecasting model that shows how much the video memory bandwidth gap will grow through 2019 should the current trend continue. This analysis can assist console developers in assigning resources to the next generation of platforms, ultimately resulting in longer hardware life cycles.
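The two metrics central to this abstract can be sketched in a few lines. The CPU Score below follows the definition given above (the product of core frequency, number of parallel cores, and instruction size); the direction of the gap and the choice of normalising by the PC value are assumptions made for illustration, as are all numeric figures.

```python
# Illustrative sketch of CPU Score and the Performance Gap Ratio (PGR).
# The normalisation used in the study is not spelled out in the abstract,
# so dividing the gap by the PC benchmark value is an assumption here,
# and every number below is hypothetical.

def cpu_score(core_freq_ghz, num_cores, instruction_bits):
    """CPU Score: product of core frequency, parallel cores and instruction size."""
    return core_freq_ghz * num_cores * instruction_bits

def performance_gap(pc_value, console_value):
    """Absolute gap between concurrently available PC hardware and the console."""
    return pc_value - console_value

def performance_gap_ratio(pc_value, console_value):
    """Gap normalised by the PC value, enabling comparison across subsystems."""
    return performance_gap(pc_value, console_value) / pc_value

# Hypothetical figures for one console generation vs. contemporary PC parts.
console = cpu_score(1.6, 8, 64)
pc = cpu_score(3.5, 8, 64)
print(performance_gap_ratio(pc, console))
```

Because PGR is dimensionless, the same calculation can be repeated for GPU core frequency or memory bandwidth and the resulting curves compared directly, which is the point of the normalisation.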

Relevance: 30.00%

Publisher:

Abstract:

In his 1987 book, The Media Lab: Inventing the Future at MIT, Stewart Brand provides an insight into the visions of the future of the media in the 1970s and 1980s. He notes that Nicholas Negroponte made a compelling case for the foundation of a media laboratory at MIT with diagrams detailing the convergence of three sectors of the media: the broadcast and motion picture industry; the print and publishing industry; and the computer industry. Brand commented: ‘If Negroponte was right and communications technologies really are converging, you would look for signs that technological homogenisation was dissolving old boundaries out of existence, and you would expect an explosion of new media where those boundaries used to be’.

Two decades later, technology developers, media analysts and lawyers have become excited about the latest phase of media convergence. In 2006, the faddish Time Magazine heralded the arrival of various Web 2.0 social networking services:

You can learn more about how Americans live just by looking at the backgrounds of YouTube videos—those rumpled bedrooms and toy‐strewn basement rec rooms—than you could from 1,000 hours of network television. And we didn’t just watch, we also worked. Like crazy. We made Facebook profiles and Second Life avatars and reviewed books at Amazon and recorded podcasts. We blogged about our candidates losing and wrote songs about getting dumped. We camcordered bombing runs and built open‐source software. America loves its solitary geniuses—its Einsteins, its Edisons, its Jobses—but those lonely dreamers may have to learn to play with others. Car companies are running open design contests. Reuters is carrying blog postings alongside its regular news feed. Microsoft is working overtime to fend off user‐created Linux. We’re looking at an explosion of productivity and innovation, and it’s just getting started, as millions of minds that would otherwise have drowned in obscurity get backhauled into the global intellectual economy.

The magazine announced that Time’s Person of the Year was ‘You’, the everyman and everywoman consumer ‘for seizing the reins of the global media, for founding and framing the new digital democracy, for working for nothing and beating the pros at their own game’.

This review essay considers three recent books which have explored the legal dimensions of new media. In contrast to the unbridled exuberance of Time Magazine, this series of legal works displays an anxious trepidation about the legal ramifications associated with the rise of social networking services. In his tour de force, The Future of Reputation: Gossip, Rumor, and Privacy on the Internet, Daniel Solove considers the implications of social networking services, such as Facebook and YouTube, for the legal protection of reputation under privacy law and defamation law. Andrew Kenyon’s edited collection, TV Futures: Digital Television Policy in Australia, explores the intersection between media law and copyright law in the regulation of digital television and Internet videos. In The Future of the Internet and How to Stop It, Jonathan Zittrain explores the impact of ‘generative’ technologies and ‘tethered applications’, considering everything from the Apple Mac and the iPhone to the One Laptop per Child programme.

Relevance: 30.00%

Publisher:

Abstract:

This paper reports on the current field of narrative-based game design through case study analysis with a particular focus on balancing high narrative agency with low production resources.

Relevance: 30.00%

Publisher:

Abstract:

When a puzzle game is created, its design parameters must be chosen to allow solvable and interesting challenges to be created for the player. We investigate the use of random sampling as a computationally inexpensive means of automated game analysis, to evaluate the BoxOff family of puzzle games. This analysis reveals useful insights into the game, such as the surprising fact that almost 100% of randomly generated challenges have a solution, but less than 10% will be solved using strictly random play, validating the inventor’s design choices. We show the 1D game to be trivial and the 3D game to be viable.
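The sampling harness described above can be sketched briefly. Since the BoxOff rules are not reproduced in the abstract, the stand-in puzzle below (remove matching pairs of coloured tokens) is purely hypothetical; only the method itself mirrors the study: sample random challenges, attempt each with strictly random play, and report the fraction solved.

```python
import random

# Random-sampling game analysis, sketched with a trivial stand-in puzzle
# in place of BoxOff (whose rules are not given here). A challenge is a
# multiset of coloured tokens; a move removes a randomly chosen matching
# pair; the challenge is solved if the board empties.

def random_challenge(rng, size=6, colours=3):
    """Generate a hypothetical random challenge."""
    return [rng.randrange(colours) for _ in range(size)]

def random_playout(challenge, rng, max_moves=50):
    """Attempt the challenge using strictly random play."""
    board = list(challenge)
    for _ in range(max_moves):
        pairs = [(i, j) for i in range(len(board))
                 for j in range(i + 1, len(board)) if board[i] == board[j]]
        if not pairs:
            break
        i, j = rng.choice(pairs)
        del board[j], board[i]  # delete the higher index first
    return not board  # solved when the board is empty

def solved_fraction(n_samples=1000, seed=0):
    """Estimate the fraction of random challenges solved by random play."""
    rng = random.Random(seed)
    wins = sum(random_playout(random_challenge(rng), rng)
               for _ in range(n_samples))
    return wins / n_samples

print(solved_fraction())
```

In a real analysis the same harness would be run once per design-parameter setting (board size, colour count, and so on), which is what makes random sampling a computationally cheap way to compare candidate rule sets.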

Relevance: 30.00%

Publisher:

Abstract:

Background: The Circle of Willis (CoW) is the most important collateral pathway of the cerebral arteries. The present study aims to investigate the collateral capacity of CoW variants when the unilateral internal carotid artery (ICA) is occluded. Methods: Based on MRI data, we reconstructed eight 3D models with variations in the posterior circulation of the CoW and set four different degrees of stenosis in the right ICA: 24%, 43%, 64% and 79%. In total, 40 models were analysed with computational fluid dynamics simulations. All simulations share the same static-pressure boundary condition, and the volume flow rate (VFR) at each outlet is obtained to evaluate collateral capacity. Results: For the middle cerebral artery (MCA) and the anterior cerebral artery (ACA), the transitional-type model possesses the best collateral capacity, whereas for the posterior cerebral artery (PCA), unilateral ICA stenosis has the weakest influence on the model with a unilaterally absent posterior communicating artery (PCoA). We also find that the full fetal-type posterior circle of Willis is a particularly dangerous variation that warrants close attention. Conclusion: The results demonstrate that different models have different collateral capacities in coping with stenosis of the unilateral ICA, and these differences are reflected at different outlets. The study could serve as a reference for neurosurgeons in choosing the best treatment strategy.

Relevance: 30.00%

Publisher:

Abstract:

The 48 hour game making challenge has been running since 2007. In recent years, we have not only been running a 'game jam' for the local community but have also been exploring the way in which the event itself, and the place of the event, has the potential to create its own stories. The 2015 challenge is part of a series of data collection opportunities focussed on the game jam itself and the meaning making that the participants engage in about the event. We are continuing the data collection commenced in 2012: "Game jams are the creative festivals of the game development community and a game jam is very much an event or performance; its stories are those of subjective experience. Participants return year after year and recount personal stories from previous challenges; arrival in the 48hr location typically inspires instances of individual memory and narration more in keeping with those of a music festival or an oft-frequented holiday destination. Since its inception, the 48hr has been heavily documented, from the photo-blogging of our first jam and the Twitter streams of more recent events to more formal interviews and documentaries (see Anderson, 2012). We have even had our own moments of Gonzo journalism, with an on-site press room one year and an 'embedded' journalist another year (Keogh, 2011). In the last two years of the 48hr we have started to explore ways and means to collect more abstract data during the event, that is, empirical data about movement and activity. The intent behind this form of data collection was to explore graphic and computer generated visualisations of the event, not for the purpose of formal analysis but in the service of further storytelling."
[excerpt from truna aka j.turner, Thomas & Owen, 2013] See: truna aka j.turner, Thomas & Owen (2013) Living the indie life: mapping creative teams in a 48 hour game jam and playing with data, Proceedings of the 9th Australasian Conference on Interactive Entertainment, IE'2013, September 30 - October 1, 2013, Melbourne, VIC, Australia.

Relevance: 30.00%

Publisher:

Abstract:

This thesis is a comparative case study in Japanese video game localization for the video games Sairen, Sairen 2 and Sairen Nyûtoransurêshon, and the English-language localized versions of the same games as published in Scandinavia and Australia/New Zealand. All games are developed by Sony Computer Entertainment Inc. and published exclusively for Playstation2 and Playstation3 consoles. The fictional world of the Sairen games draws much influence from Japanese history, as well as from popular and contemporary culture, and in doing so caters mainly to a Japanese audience. For localization, i.e. the adaptation of a product to make it accessible to users outside the market for which it was originally intended, this is a challenging issue. Video games are media of entertainment, and localization practice must therefore preserve the games' effects on the players' emotions. Further, video games are digital products comprised of a multitude of distinct elements, some of which are part of the game world, while others regulate the connection between the player as part of the real world and the game as digital medium. As a result, video game localization is also a practice that has to cope with the technical restrictions inherent to the medium. The main theory used throughout the thesis is Anthony Pym's framework for localization studies, which considers the user of the localized product a defining part of the localization process. This concept presupposes that localization is an adaptation performed to make a product better suited for use in a specific reception situation. Pym also addresses the possibility that certain products may resist distribution into certain reception situations because of their content, and that certain aspects of localization aim to reduce this resistance through significant alterations of the original product.

While Pym developed his ideas with mainly conventional software in mind, they can also be adapted well to study video games from a localization angle. Since modern video games are highly complex entities that often switch between interactive and non-interactive modes, Pym's ideas are adapted throughout the thesis to suit the particular elements being studied. Instances analyzed in this thesis include menu screens, video clips, in-game action and websites. The main research questions focus on how the games' rules influence localization, and how the games' fictional domain influences localization. Because there are so many peculiarities inherent to the medium of the video game, other theories are introduced as well to complement the research at hand. These include Lawrence Venuti's discussions of foreignizing and domesticating translation methods for literary translation, and Jesper Juul's definition of games. Additionally, knowledge gathered from interviews with video game localization professionals in Japan during September and October 2009 is also utilized for this study. Apart from answering the aforementioned research questions, one of this thesis' aims is to enrich the still rather small field of game localization studies, and the study of Japanese video games in particular, one of Japan's most successful cultural exports.

Relevance: 30.00%

Publisher:

Abstract:

Game strategies have been developed over past decades and used in economics, engineering, computer science and biology owing to their efficiency in solving design optimisation problems. In addition, research on Multi-Objective (MO) and Multidisciplinary Design Optimisation (MDO) has focused on developing robust and efficient optimisation methods that produce quality solutions in less computational time. In this paper, a new optimisation method, Hybrid Game Strategy, for MO problems is introduced and compared to a CMA-ES based optimisation approach. Numerical results obtained from both optimisation methods are compared in terms of computational expense and model quality. The benefits of using game strategies are demonstrated.
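To make the game-strategy idea concrete, here is a minimal Nash-game sketch for a two-objective problem: each "player" owns one design variable and one objective, and the players alternate best responses until the design settles. This is a generic illustration only, not the paper's Hybrid Game method or its CMA-ES baseline; the quadratic objectives and the crude stochastic local search are assumptions chosen for brevity.

```python
import random

# Nash game strategy sketch for a two-objective design problem.
# Player 1 controls x and minimises f1; player 2 controls y and
# minimises f2. The coupling term 0.1*(x - y)**2 makes the players
# interact, so neither can optimise in isolation.

def f1(x, y):
    return (x - 1.0) ** 2 + 0.1 * (x - y) ** 2

def f2(x, y):
    return (y + 1.0) ** 2 + 0.1 * (x - y) ** 2

def best_response(obj, fixed, own, rng, step=0.5, iters=200):
    """Crude stochastic local search over one player's variable,
    with the other player's variable held fixed."""
    best = own
    for _ in range(iters):
        cand = best + rng.uniform(-step, step)
        if obj(cand, fixed) < obj(best, fixed):
            best = cand
    return best

def nash_game(rounds=20, seed=0):
    """Alternate best responses until (approximately) at equilibrium."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    for _ in range(rounds):
        x = best_response(lambda a, b: f1(a, b), y, x, rng)  # player 1
        y = best_response(lambda b, a: f2(a, b), x, y, rng)  # player 2
    return x, y

x, y = nash_game()
print(x, y)  # settles near the Nash equilibrium (5/6, -5/6)
```

For these objectives the equilibrium can be checked by hand: setting the two partial derivatives to zero gives 2.2x - 0.2y = 2 and -0.2x + 2.2y = -2, whose solution is x = 5/6, y = -5/6, which the alternating search approaches.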

Relevance: 30.00%

Publisher:

Abstract:

The Queensland Great Barrier Reef line fishery in Australia is regulated via a range of input and output controls including minimum size limits, daily catch limits and commercial catch quotas. As a result of these measures a substantial proportion of the catch is released or discarded. The fate of these released fish is uncertain, but hook-related mortality can potentially be decreased by using hooks that reduce the rates of injury, bleeding and deep hooking. There is also the potential to reduce the capture of non-target species through gear selectivity. A total of 1053 individual fish representing five target species and three non-target species were caught using six hook types comprising three hook patterns (non-offset circle, J and offset circle), each in two sizes (small, 4/0 or 5/0, and large, 8/0). Catch rates for each of the hook patterns and sizes varied between species, with no consistent results for target or non-target species. When data for all of the fish species were aggregated there was a trend for larger hooks, J hooks and offset circle hooks to cause a greater number of injuries. Using larger hooks was more likely to result in bleeding, although this trend was not statistically significant. Larger hooks were also more likely to foul-hook fish or hook fish in the eye. There was a reduction in the rates of injuries and bleeding for both target and non-target species when using the smaller hook sizes. For a number of species included in our study the incidence of deep hooking decreased when using non-offset circle hooks; however, these results were not consistent across all species. Our results highlight the variability in hook performance across a range of tropical demersal finfish species. The most obvious conservation benefits for both target and non-target species arise from using smaller sized hooks and non-offset circle hooks. Fishers should be encouraged to use these hook configurations to reduce the potential for post-release mortality of released fish.

Relevance: 30.00%

Publisher:

Abstract:

This study examined post-release survival in sand flathead (Platycephalus bassensis) and whether there were survival benefits from the use of circle hooks over conventional hook patterns. Anatomical hooking location was the major factor contributing to mortality, with an almost 100% survival rate for fish hooked in the lip, mouth or eye (shallow-hooked) compared with around 64% for fish hooked in the throat or gut (deep-hooked). Mortality in deep-hooked fish was generally associated with injuries to vital organs (gills, heart, liver) and survival was significantly lower if bleeding was associated with injury (54% compared with 85% for non-bleeders). Circle hooks resulted in significantly lower deep-hooking rates (1%) compared with conventional hook types (4-9%) and, based on catch rates, were at least as effective as conventional hook patterns. Estimated survival rates for line-caught sand flathead were high, over 99% for circle hooks and between 94 and 97% for conventional hooks. These findings support the efficacy of management strategies based on size and bag limits and the practice of catch-and-release fishing for sand flathead, as well as a potential conservation benefit from the use of circle hooks.

Relevance: 30.00%

Publisher:

Abstract:

The problem of learning correct decision rules to minimize the probability of misclassification is a long-standing problem of supervised learning in pattern recognition. The learning of such optimal discriminant functions is considered here for the class of problems where the statistical properties of the pattern classes are completely unknown. The problem is posed as a game with common payoff played by a team of mutually cooperating learning automata. This essentially results in a probabilistic search through the space of classifiers. The approach is inherently capable of learning discriminant functions that are also nonlinear in their parameters. A learning algorithm is presented for the team and its convergence is established. It is proved that the team can obtain the optimal classifier to an arbitrary approximation. Simulation results are presented for a few examples in which the team learns the optimal classifier.
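The common-payoff team idea can be sketched in miniature. Below, two learning automata each pick one parameter of a 1-D threshold classifier and both receive the same payoff (1 if a randomly drawn sample is classified correctly). The linear reward-inaction (L_R-I) update is one classical automaton scheme, used here for illustration; the toy pattern classes (class 1 iff x > 0.6) and the discrete action sets are assumptions made so the example runs end to end, not the paper's simulation setup.

```python
import random

# A team of two learning automata playing a common-payoff game to learn
# a discriminant function sign(s * (x - t)) on a toy 1-D problem whose
# true rule (class 1 iff x > 0.6) is unknown to the team.

class Automaton:
    def __init__(self, actions, lr=0.05):
        self.actions = actions
        self.p = [1.0 / len(actions)] * len(actions)  # action probabilities
        self.lr = lr
        self.last = 0

    def choose(self, rng):
        self.last = rng.choices(range(len(self.actions)), weights=self.p)[0]
        return self.actions[self.last]

    def update(self, reward):
        # L_R-I: on reward, shift probability mass toward the chosen
        # action; on penalty, do nothing.
        if reward:
            for i in range(len(self.p)):
                if i == self.last:
                    self.p[i] += self.lr * (1.0 - self.p[i])
                else:
                    self.p[i] -= self.lr * self.p[i]

def train(steps=5000, seed=0):
    rng = random.Random(seed)
    sign_a = Automaton([-1.0, 1.0])             # orientation of the rule
    thresh_a = Automaton([0.2, 0.4, 0.6, 0.8])  # decision threshold
    for _ in range(steps):
        s, t = sign_a.choose(rng), thresh_a.choose(rng)
        x = rng.random()
        label = 1 if x > 0.6 else -1            # hidden class rule
        pred = 1 if s * (x - t) > 0 else -1
        reward = pred == label                  # common payoff for the team
        sign_a.update(reward)
        thresh_a.update(reward)
    return sign_a, thresh_a

sign_a, thresh_a = train()
print(sign_a.p, thresh_a.p)
```

Because both automata share one payoff signal, the team performs a probabilistic search over the joint parameter space, which is the mechanism the abstract describes; with more automata the same scheme extends to multi-parameter (including nonlinear-in-parameters) discriminants.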

Relevance: 30.00%

Publisher:

Abstract:

To improve the rehabilitation program of individuals with transfemoral amputation fitted with a bone-anchored prosthesis using data from direct measurements of the load applied on the residuum, we first need to understand the load applied on the fixation. The load applied on the residuum was therefore first measured directly during standardized activities of daily living, such as straight-line level walking, ascending and descending stairs and a ramp, and walking around a circle. Building on these measurements, the load was also measured during different phases of the rehabilitation program, such as walking with walking aids and load bearing exercises (LBE).[1-15] The rehabilitation program for individuals with a transfemoral amputation fitted with an OPRA implant relies on a combination of dynamic and static LBE.[16-20] This presentation focuses on the study of a set of experimental static LBE.[1] A group of eleven individuals with unilateral transfemoral amputation fitted with an OPRA implant participated in this study. The load on the implant during the static LBE was measured using a portable system including a commercial transducer embedded in a short pylon, a laptop and a customized software package. This apparatus was previously shown to be effective in a proof-of-concept study published by Prof. Frossard.[1-9] The analysis of the static LBE covered both loading reliability and loading compliance. The reliability analysis showed high reliability between loading sessions, indicating correct repetition of the LBE by the participants.[1, 5] The compliance analysis showed a significant lack of axial compliance, leading to systematic underloading of the long axis of the implant during the proposed experimental static LBE.