905 results for Digital Library
Abstract:
In contemporary game development circles, the ‘game making jam’ has become an important rite of passage and baptism event, an exploration space, and a central indie lifestyle affirmation and community event. Game jams have recently become a focus for design researchers interested in the creative process. In this paper we tell the story of an established local game jam and our various documentation and data collection methods. We present the beginnings of the current project, which seeks to map the creative teams and their process in the space of the challenge, and which aims to enable participants to be more than the objects of the data collection. A perceived issue is that typical documentation approaches are ‘about’ the event as opposed to ‘made by’ the participants; they are thus at odds with the spirit of the jam as a phenomenon and do not really access the rich playful potential of participant experience. In the data collection and visualisation projects described here, we focus on using collected data to re-include the participants in telling stories about their experiences of the event as a place-based experience. Our goal is to find a means to encourage production of ‘anecdata’ - data based on individual storytelling that is subjective, malleable, and resists collection via formal mechanisms - and to enable mimesis, or active narrating, on the part of the participants. We present a concept design for data as game based on the logic of early medieval maps, and we reflect on how we could enable participation in the data collection itself.
Abstract:
The Land Of Ludos is a design concept for a game that re-imagines the recorded Bluetooth device movements from the 2011 48 Hour Game Making Challenge as an interactive narrative experience. As game developers, the most interesting elements of the 48 Hour challenge data visualisation project are not the measurement or analysis of process, but the relationships and narratives created during the experience. [excerpt: truna aka j.turner, Thomas & Owen, 2013] See: truna aka j.turner, Thomas & Owen (2013) Living the indie life: mapping creative teams in a 48 hour game jam and playing with data, in Proc. IE'2013, 9th Australasian Conference on Interactive Entertainment, September 30 - October 01 2013, Melbourne, VIC, Australia
Abstract:
The interaction of Au particles with few-layer graphene is of interest for the formation of the next generation of sensing devices(1). In this paper we investigate the coupling of single gold nanoparticles to a graphene sheet, and of multiple gold nanoparticles with a graphene sheet, using COMSOL Multiphysics. Using these simulations, we are able to determine the electric field strength and associated hot-spots for various gold nanoparticle-graphene systems. The Au nanoparticles were modelled as 8 nm diameter spheres on 1.5 nm thick (5 layer) graphene, with the properties of graphene obtained from the refractive index data of Weber(2) and the Au refractive index data from Palik(3). The field was incident along the plane of the sheet, with both s and p polarisation tested. The study showed strong localised interaction between the Au and graphene with limited spread; however, the double-particle case, where the graphene sheet separated two Au nanoparticles, showed distinct interaction between the particles and the graphene. An offset of up to 4 nm was introduced, resulting in much reduced coupling between the opposed particles as the distance apart increased. Findings currently suggest that the graphene layer has limited interaction with incident fields when a single particle is present, whilst reducing the coupling region to a very fine area when opposing particles are involved. It is hoped that the results of this research will provide insight into graphene-plasmon interactions and spur the development of the next generation of sensing devices.
Abstract:
As support grows for greater access to information and data held by governments, so does awareness of the need for appropriate policy, technical and legal frameworks to achieve the desired economic and societal outcomes. Since the late 2000s numerous international organizations, inter-governmental bodies and governments have issued open government data policies, which set out key principles underpinning access to, and the release and reuse of data. These policies reiterate the value of government data and establish the default position that it should be openly accessible to the public under transparent and non-discriminatory conditions, which are conducive to innovative reuse of the data. A key principle stated in open government data policies is that legal rights in government information must be exercised in a manner that is consistent with and supports the open accessibility and reusability of the data. In particular, where government information and data is protected by copyright, access should be provided under licensing terms which clearly permit its reuse and dissemination. This principle has been further developed in the policies issued by Australian Governments into a specific requirement that Government agencies are to apply the Creative Commons Attribution licence (CC BY) as the default licensing position when releasing government information and data. A wide-ranging survey of the practices of Australian Government agencies in managing their information and data, commissioned by the Office of the Australian Information Commissioner in 2012, provides valuable insights into progress towards the achievement of open government policy objectives and the adoption of open licensing practices. The survey results indicate that Australian Government agencies are embracing open access and a proactive disclosure culture and that open licensing under Creative Commons licences is increasingly prevalent. 
However, the finding that ‘[t]he default position of open access licensing is not clearly or robustly stated, nor properly reflected in the practice of Government agencies’ points to the need to further develop the policy framework and the principles governing information access and reuse, and to provide practical guidance tools on open licensing if the broadest range of government information and data is to be made available for innovative reuse.
Abstract:
The design of applications for dynamic ridesharing or carpooling is often formulated as a matching problem: connecting people with an aligned set of transport needs within a reasonable interval of time and space. This problem formulation relegates social connections to being secondary factors. Technology-assisted ridesharing applications that put the matching problem first have proven unable to address the factor of social comfort, even after adding friend features or piggybacking on social networking sites. This research aims to understand the fabric of social interactions through which ridesharing happens. We take an online observation approach, studying ridesharing as it happens in highly subscribed online groups of local residents. This understanding will help researchers to identify design challenges and opportunities to support ridesharing in local communities. This paper contributes a fundamental understanding of how social interactions and social comfort precede rideshare requests in local communities.
Abstract:
There are a number of pressing issues facing contemporary online environments that are causing disputes among participants and platform operators and increasing the likelihood of external regulation. A number of solutions have been proposed, including industry self-governance, top-down regulation and emergent self-governance such as EVE Online’s “Council of Stellar Management”. However, none of these solutions seems entirely satisfying: each faces challenges from developers who fear regulators will not understand their platforms, or from players who feel they are not sufficiently empowered to influence the platform, while many authors have raised concerns over the implementation of top-down regulation and argued that the industry may be well served to pre-empt such action. This paper considers case studies of EVE Online and the offshore gambling industry, and asks whether a version of self-governance may be suitable for the future of the industry.
Abstract:
EVE Online, released in 2003 by CCP Games, is a space-themed Massively Multiplayer Online Game (MMOG). This sandbox style MMOG has a reputation for being a difficult game with a punishing learning curve that is fairly impenetrable to new players. This has led to the widely held belief among the larger MMOG community that “EVE players are different”, as only a very particular type of player would be dedicated to learning how to play a game this challenging. Taking a critical approach to the claim that “EVE players are different”, this paper complicates the idea that only a certain type of player capable of playing the most hardcore of games will be attracted to this particular MMOG. Instead, we argue that EVE’s “exceptionalism” is actually the result of conscious design decisions on the part of CCP Games, which in turn compel particular behaviours that are continually reinforced as the norm by the game’s relatively homogenous player community.
Abstract:
An accumulator based on bilinear pairings was proposed at CT-RSA'05. Here, it is first demonstrated that the security model proposed by Lan Nguyen leads to a cryptographic accumulator that is not collision resistant. Secondly, it is shown that collision resistance can be provided by updating the adversary model appropriately. Finally, an improvement on Nguyen's identity escrow scheme with membership revocation based on the accumulator is proposed, which removes the trusted third party.
Abstract:
This chapter presents a novel control strategy, designed using port-Hamiltonian theory, for trajectory tracking of underwater marine vehicles. A model for neutrally buoyant underwater vehicles is formulated as a port-Hamiltonian system (PHS), and a tracking controller is then designed for the horizontal plane: surge, sway and yaw. The control design formulates the error dynamics as a set-point regulation port-Hamiltonian control problem and proceeds in two steps. In the first step, a static-feedback tracking controller is designed; in the second step, integral action is added. The global asymptotic stability of the closed-loop system is proved, and the performance of the controller is illustrated using a model of an open-frame offshore underwater vehicle.
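The abstract does not reproduce the model, but the standard input-state-output port-Hamiltonian form it builds on (the vehicle-specific matrices and Hamiltonian are left unspecified here) is:

```latex
\dot{\mathbf{x}} \;=\; \bigl[\,\mathbf{J}(\mathbf{x}) - \mathbf{R}(\mathbf{x})\,\bigr]\,
\frac{\partial H}{\partial \mathbf{x}}(\mathbf{x}) \;+\; \mathbf{g}(\mathbf{x})\,\mathbf{u},
\qquad
\mathbf{y} \;=\; \mathbf{g}^{\top}(\mathbf{x})\,\frac{\partial H}{\partial \mathbf{x}}(\mathbf{x}),
```

where $\mathbf{J} = -\mathbf{J}^{\top}$ is the interconnection matrix, $\mathbf{R} \succeq 0$ the dissipation matrix, and $H(\mathbf{x})$ the total stored energy. Tracking designs of the kind described recast the error dynamics in this same form and then shape $H$ and the damping of the closed loop.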
Abstract:
A comparison of relay power minimisation subject to received signal-to-noise ratio (SNR) at the receiver and SNR maximisation subject to the total transmitted power of relays for a typical wireless network with distributed beamforming is presented. It is desirable to maximise receiver quality-of-service (QoS) and also to minimise the cost of transmission in terms of power. Hence, these two optimisation problems are very common and have been addressed separately in the literature. It is shown that SNR maximisation subject to power constraint and power minimisation subject to SNR constraint yield the same results for a typical wireless network. It proves that either one of the optimisation approaches is sufficient.
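In generic distributed-beamforming notation (our symbols, not necessarily the paper's), the two problems being compared can be written as:

```latex
\min_{\mathbf{w}}\ \mathbf{w}^{H}\mathbf{D}\,\mathbf{w}
\quad \text{s.t.} \quad \mathrm{SNR}(\mathbf{w}) \ge \gamma
\qquad\text{and}\qquad
\max_{\mathbf{w}}\ \mathrm{SNR}(\mathbf{w})
\quad \text{s.t.} \quad \mathbf{w}^{H}\mathbf{D}\,\mathbf{w} \le P,
```

where $\mathbf{w}$ collects the complex relay weights and $\mathbf{w}^{H}\mathbf{D}\mathbf{w}$ is the total relay transmit power. At the optimum of either problem the constraint is active, so when the thresholds correspond ($\gamma$ equal to the maximal SNR achievable at power $P$) the two problems yield the same optimal weights, which is the equivalence the abstract states.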
Abstract:
In the TREC Web Diversity track, novelty-biased cumulative gain (α-NDCG) is one of the official measures used to assess the retrieval performance of IR systems. The measure is characterised by a parameter, α, whose effect has not been thoroughly investigated. We find that the common setting of α, i.e. α=0.5, may prevent the measure from behaving as desired when evaluating result diversification, because it excessively penalises systems that cover many intents while rewarding those that redundantly cover only a few intents. This issue is crucial since it highly influences systems at top ranks. We revisit our previously proposed threshold, suggesting that α be set on a per-query basis. The intuitiveness of the measure is then studied by examining actual rankings from TREC 09-10 Web track submissions. By varying α according to our query-based threshold, the discriminative power of α-NDCG is not harmed; in fact, our approach improves α-NDCG's robustness. Experimental results show that the threshold for α can make the measure more intuitive than its common settings.
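The measure itself is compact enough to sketch. Below is a minimal Python implementation of α-(N)DCG as defined for the Diversity track: each document is represented as the set of intents it covers, the gain for an intent decays by a factor of (1 − α) per previously seen covering document, and the ideal ranking is found here by brute force (feasible only for tiny judged pools; TREC uses a greedy approximation). The common α=0.5 setting discussed above is simply the `alpha` argument.

```python
import math
from itertools import permutations

def alpha_dcg(ranking, alpha, depth=None):
    """alpha-DCG of a ranking, where each item is the set of intents it
    covers; repeated coverage of an intent is discounted by (1 - alpha)."""
    depth = depth or len(ranking)
    seen = {}  # intent -> number of earlier documents covering it
    score = 0.0
    for rank, intents in enumerate(ranking[:depth], start=1):
        gain = sum((1 - alpha) ** seen.get(i, 0) for i in intents)
        score += gain / math.log2(rank + 1)
        for i in intents:
            seen[i] = seen.get(i, 0) + 1
    return score

def alpha_ndcg(ranking, pool, alpha, depth=None):
    """Normalise by the best ordering of the judged pool (exact search;
    only practical for very small pools)."""
    ideal = max(alpha_dcg(list(p), alpha, depth) for p in permutations(pool))
    return alpha_dcg(ranking, alpha, depth) / ideal
```

With α=0.5, a diversified ranking such as `[{1}, {2}, {1}]` scores strictly higher than the redundant `[{1}, {1}, {2}]`, which is the behaviour the parameter is meant to control.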
Abstract:
To authenticate card holders while they conduct electronic transactions using mobile devices, VISA and MasterCard independently proposed two electronic payment protocols: Visa 3D Secure and MasterCard Secure Code. The protocols use pre-registered passwords to provide card holder authentication, Secure Socket Layer/Transport Layer Security (SSL/TLS) for data confidentiality over wired networks, and Wireless Transport Layer Security (WTLS) between a wireless device and a Wireless Application Protocol (WAP) gateway. The paper presents our analysis of the security properties of the proposed protocols using the formal method tools Casper and FDR2. We also highlight issues concerning payment security in the proposed protocols.
Abstract:
Many websites offer customers the opportunity to rate items and then use those ratings to generate item reputations, which can later be used by other users for decision-making purposes. The aggregated value of the ratings per item represents the reputation of that item. The accuracy of the reputation scores is important, as they are used to rank items. Most aggregation methods do not consider the frequency of distinct ratings, and their accuracy has not been tested over datasets with different sparsity. In this work we propose a new aggregation method, which can be described as a weighted average where the weights are generated using the normal distribution. The evaluation results show that the proposed method outperforms state-of-the-art methods over datasets of different sparsity.
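The abstract does not give the weighting formula, but one plausible reading of a frequency-aware, normal-distribution-weighted average can be sketched as follows. The specific choice below (weighting each distinct rating level by its frequency times a normal density centred on the sample mean, which damps sparse outlying ratings) is our illustration, not necessarily the authors' method.

```python
import math
from collections import Counter

def normal_weighted_reputation(ratings):
    """Illustrative reputation score: a weighted average over distinct
    rating levels, with weights = frequency * normal pdf centred on the
    sample mean. Assumption: this mirrors, but need not match, the
    paper's aggregation method."""
    n = len(ratings)
    mean = sum(ratings) / n
    var = sum((r - mean) ** 2 for r in ratings) / n or 1.0  # avoid zero variance
    counts = Counter(ratings)

    def pdf(x):
        return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

    weights = {level: c * pdf(level) for level, c in counts.items()}
    total = sum(weights.values())
    return sum(level * w for level, w in weights.items()) / total
```

On `[5, 5, 5, 1]` this yields a score above the plain mean of 4.0, because the lone outlying rating of 1 sits far from the mean and so receives a small weight.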
Abstract:
Twitter is a very popular social network website that allows users to publish short posts called tweets. Users on Twitter can follow other users, called followees, and see the posts of their followees on their Twitter home page. As the number of followees grows, so does the number of tweets on the user's page, creating an information overload problem. Twitter, like other social network websites, attempts to elevate the tweets a user is expected to be interested in, to increase overall user engagement; however, Twitter still ranks tweets in chronological order. The tweet ranking problem has been addressed in much recent research; a sub-problem is to rank the tweets of a single followee. In this paper we represent tweets using several features and propose a weighted version of the well-known Borda Count (BC) voting system to combine several ranked lists into one. A gradient descent method and a collaborative filtering method are employed to learn the optimal weights. We also employ the Baldwin voting system for blending features (or predictors). Finally, we use a greedy feature selection algorithm to select the combination of features that ensures the best results.
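The weighted Borda Count combination step can be sketched directly. In standard Borda scoring, each per-feature ranked list awards a candidate (n − position) points; in the weighted variant those points are scaled by the list's weight before summing. The per-feature weights here are taken as given (the paper learns them via gradient descent or collaborative filtering).

```python
def weighted_borda(rankings, weights):
    """Combine several ranked lists of the same candidates into one list
    using a weighted Borda Count. `rankings` is a list of ranked lists
    (best first); `weights` gives one non-negative weight per list."""
    scores = {}
    for ranking, w in zip(rankings, weights):
        n = len(ranking)
        for pos, cand in enumerate(ranking):
            # Borda points for this list, scaled by the list's weight.
            scores[cand] = scores.get(cand, 0.0) + w * (n - pos)
    return sorted(scores, key=lambda c: -scores[c])
```

For example, two feature rankings `["a", "b", "c"]` and `["b", "a", "c"]` produce `["a", "b", "c"]` when the first list is weighted more heavily, and `["b", "a", "c"]` when the second is.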
Abstract:
Recommender systems provide personalized advice to customers online based on their own preferences, while reputation systems generate community advice on the quality of items on the Web. Both systems use users’ ratings to generate their output. In this paper, we propose to combine reputation models with recommender systems to enhance the accuracy of recommendations. The main contributions include two methods for merging two ranked item lists, generated from recommendation scores and reputation scores respectively, and a personalized reputation method that generates item reputations based on users’ interests. The proposed merging methods are applicable to any recommendation and reputation methods, i.e., they are independent of how the recommendation and reputation scores are generated. The experiments we conducted showed that the proposed methods can enhance the accuracy of existing recommender systems.