Abstract:
We consider the problem of prediction with expert advice in the setting where a forecaster is presented with several online prediction tasks. Instead of competing against the best expert separately on each task, we assume the tasks are related, and thus we expect that a few experts will perform well on the entire set of tasks. That is, our forecaster would like, on each task, to compete against the best expert chosen from a small set of experts. While we describe the "ideal" algorithm and its performance bound, we show that the computation required for this algorithm is as hard as computation of a matrix permanent. We present an efficient algorithm based on mixing priors, and prove a bound that is nearly as good for the sequential task presentation case. We also consider a harder case where the task may change arbitrarily from round to round, and we develop an efficient approximate randomized algorithm based on Markov chain Monte Carlo techniques.
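The paper's multitask construction is not reproduced here, but as background, a minimal sketch of the standard exponentially weighted (Hedge-style) forecaster for single-task prediction with expert advice is shown below; the learning rate and squared loss are illustrative assumptions, not choices made in the paper.

```python
import numpy as np

def hedge_forecaster(expert_predictions, outcomes, eta=0.5):
    """Exponentially weighted average forecaster (single-task baseline sketch).

    expert_predictions: array of shape (T, K), prediction of each of K experts per round
    outcomes: array of shape (T,), observed outcome per round
    Returns the forecaster's predictions and the final normalised expert weights.
    """
    T, K = expert_predictions.shape
    log_w = np.zeros(K)                  # log-weights, start uniform
    forecasts = np.empty(T)
    for t in range(T):
        w = np.exp(log_w - log_w.max())
        w /= w.sum()                     # normalised expert weights
        forecasts[t] = w @ expert_predictions[t]            # weighted average prediction
        losses = (expert_predictions[t] - outcomes[t]) ** 2  # squared loss (illustrative)
        log_w -= eta * losses            # exponential weight update
    w = np.exp(log_w - log_w.max())
    return forecasts, w / w.sum()
```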
Abstract:
Unusual event detection in crowded scenes remains challenging because of the diversity of events and noise. In this paper, we present a novel approach for unusual event detection via sparse reconstruction of dynamic textures over an overcomplete basis set, with the dynamic texture described by local binary patterns from three orthogonal planes (LBP-TOP). The overcomplete basis set is learnt from training data in which only normal items are observed. In the detection process, given a new observation, we compute the sparse coefficients using the Dantzig Selector algorithm, which was proposed in the compressed sensing literature. We then compute the reconstruction errors, on the basis of which we detect the abnormal items. The approach can be used to detect both local and global abnormal events. We evaluate our algorithm on the UCSD Abnormality Datasets for local anomaly detection, where it outperforms current state-of-the-art approaches, and we also obtain promising results for rapid escape detection on the PETS2009 dataset.
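As a rough illustration of the reconstruction-error idea (not the paper's pipeline), the sketch below learns a dictionary from normal-only feature vectors and scores new observations by their sparse-reconstruction error; it uses scikit-learn's Lasso-based sparse coding in place of the Dantzig Selector and assumes generic feature vectors rather than LBP-TOP descriptors.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning, SparseCoder

def fit_normal_dictionary(normal_features, n_atoms=64):
    """Learn a basis from features of normal events only.

    For an overcomplete basis, n_atoms should exceed the feature dimension.
    """
    dl = DictionaryLearning(n_components=n_atoms,
                            transform_algorithm="lasso_lars",
                            random_state=0)
    dl.fit(normal_features)
    return dl.components_                      # shape (n_atoms, n_features)

def reconstruction_error(dictionary, features, alpha=0.1):
    """Sparse-code new observations and return their reconstruction errors."""
    coder = SparseCoder(dictionary=dictionary,
                        transform_algorithm="lasso_lars",
                        transform_alpha=alpha)
    codes = coder.transform(features)          # sparse coefficients
    recon = codes @ dictionary                 # reconstruction from the basis
    return np.linalg.norm(features - recon, axis=1)   # large error -> likely unusual event
```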
Abstract:
The increasing capability of mobile devices and social networks to gather contextual and social data has led to increased interest in context-aware computing for mobile applications. This paper explores ways of reconciling two different viewpoints of context, representational and interactional, that have arisen respectively from technical and social science perspectives on context-aware computing. Through a case study in agile ridesharing, the importance of dynamic context control, historical context and broader context is discussed. We build upon earlier work that has sought to address the divide by further explicating the problem in the mobile context and expanding on the design approaches.
Abstract:
This paper presents a comprehensive study to find the most efficient bitrate requirement for delivering mobile video that optimizes bandwidth while maintaining a good user viewing experience. In the study, forty participants were asked to choose the lowest-quality video that would still provide a comfortable, long-term viewing experience, knowing that higher video quality is more expensive and bandwidth intensive. The paper proposes the lowest pleasing bitrates and corresponding encoding parameters for five content types: cartoon, movie, music, news and sports. It also explores how the lowest pleasing quality is influenced by content type, image resolution, and bitrate, as well as by user gender, prior viewing experience, and preference. In addition, it analyzes the trajectory of users’ progression while selecting the lowest pleasing quality. The findings reveal that the lowest bitrate requirement for a pleasing viewing experience is much higher than that for the lowest acceptable quality. Users’ criteria for the lowest pleasing video quality relate to the video’s content features, as well as its usage purpose and the user’s personal preferences. These findings can give video providers guidance on what quality they should offer to please mobile users.
Abstract:
Recent surveys of information technology management professionals show that understanding business domains in terms of business productivity and cost-reduction potential, knowledge of different vertical industry segments and their information requirements, understanding of business processes, and client-facing skills are more critical for Information Systems personnel than ever before. In restructuring the information systems curriculum accordingly, our view is that information systems students need to develop an appreciation for organizational work systems in order to understand the operation and significance of information systems within such work systems.
Abstract:
Two-party key exchange (2PKE) protocols have been rigorously analyzed under various models considering different adversarial actions. However, the analysis of group key exchange (GKE) protocols has not been as extensive as that of 2PKE protocols. In particular, an important security attribute called key compromise impersonation (KCI) resilience has been completely ignored for the case of GKE protocols. Informally, a protocol is said to provide KCI resilience if the compromise of the long-term secret key of a protocol participant A does not allow the adversary to impersonate an honest participant B to A. In this paper, we argue that KCI resilience for GKE protocols is at least as important as it is for 2PKE protocols. Our first contribution is revised definitions of security for GKE protocols considering KCI attacks by both outsider and insider adversaries. We also give a new proof of security for an existing two-round GKE protocol under the revised security definitions, assuming random oracles. We then show how to achieve insider KCI resilience in a generic way using a known compiler from the literature. As one may expect, this additional security assurance comes at the cost of an extra round of communication. Finally, we show that a few existing protocols are not secure against outsider KCI attacks. The attacks on these protocols illustrate the necessity of considering KCI resilience for GKE protocols.
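To make the KCI notion concrete, the toy sketch below shows why a static Diffie-Hellman key exchange is vulnerable: once A's long-term secret is compromised, the adversary can compute the key A expects to share with B and therefore impersonate B to A. This is an illustrative two-party example only, not one of the GKE protocols analysed in the paper.

```python
# Toy KCI illustration with *static* Diffie-Hellman, where the shared key
# depends only on the long-term keys. Parameters are not secure; they exist
# only to make the arithmetic runnable.
import secrets

p = 2**127 - 1                              # toy prime modulus (illustration only)
g = 3                                       # toy generator

a = secrets.randbelow(p - 2) + 2            # A's long-term secret key
b = secrets.randbelow(p - 2) + 2            # B's long-term secret key
A_pub, B_pub = pow(g, a, p), pow(g, b, p)   # long-term public keys

k_A = pow(B_pub, a, p)                      # key A derives for a session "with B"
k_B = pow(A_pub, b, p)                      # key B would derive
k_adv = pow(B_pub, a, p)                    # adversary: needs only the stolen a and public B_pub
assert k_A == k_B == k_adv                  # adversary can now impersonate B to A
```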
Abstract:
The design of artificial intelligence in computer games is an important component of a player's game play experience. As games become more life-like and interactive, the need for more realistic game AI will increase. This is particularly the case with respect to AI that simulates how human players act, behave and make decisions. The purpose of this research is to establish a model of player-like behavior that may be effectively used to inform the design of artificial intelligence that more accurately mimics a player's decision-making process. The research uses a qualitative analysis of player opinions and reactions while playing a first-person shooter video game, together with recordings of their in-game actions, speech and facial characteristics. The initial studies provide player data that has been used to design a model of how a player behaves.
Abstract:
Local governments struggle to engage time-poor and seemingly apathetic citizens, as well as the city’s young digital natives, the digital locals. This project aims to provide a lightweight, technological contribution towards removing the hierarchy between those who build the city and those who use it. We aim to narrow this gap by enhancing people’s experience of physical spaces with digital, civic technologies that are directly accessible within that space. This paper presents the findings of a design trial allowing users to interact with a public screen via their mobile phones. The screen facilitated a feedback platform about a concrete urban planning project by posing specific questions and encouraging direct, in-situ, real-time responses via SMS and Twitter. This new mechanism offers additional benefits for civic participation, as it gives voice to residents who otherwise would not be heard. It also promotes a positive attitude towards local governments and gathers information different from that obtained through more traditional public engagement tools.
Abstract:
Privacy issues have hindered the evolution of e-health since its emergence. Patients demand better solutions for the protection of private information. Health professionals demand open access to patient health records. Existing e-health systems find it difficult to fulfill these competing requirements. In this paper, we present an information accountability framework (IAF) for e-health systems. The IAF is intended to address privacy issues and the competing concerns around them in e-health. The capabilities of the IAF adhere to information accountability principles and e-health requirements. Policy representation and policy reasoning are key capabilities introduced in the IAF. We investigate the feasibility of these capabilities using Semantic Web technologies. Using a case scenario, we discuss how the different types of policies in the IAF can be represented using the Open Digital Rights Language (ODRL).
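As a rough illustration of policy representation (the exact encoding used in the paper is not reproduced here), the sketch below expresses an ODRL 2.x-style permission as a Python dictionary and performs a naive policy check; the record and party identifiers are hypothetical.

```python
# Minimal sketch, assuming an ODRL 2.x-style vocabulary. The record URI and
# party URIs below are hypothetical placeholders, not taken from the paper.
policy = {
    "@context": "http://www.w3.org/ns/odrl.jsonld",
    "@type": "Set",
    "uid": "http://example.com/policy/ehr-001",
    "permission": [{
        "target": "http://example.com/ehr/patient-123/record",
        "action": "read",
        "assignee": "http://example.com/party/treating-clinician",
        "duty": [{"action": "inform"}]   # accountability duty: access must be disclosed
    }]
}

def permits(policy, assignee, action, target):
    """Naive policy check: does any permission grant (assignee, action, target)?"""
    return any(p["assignee"] == assignee and p["action"] == action and p["target"] == target
               for p in policy.get("permission", []))

print(permits(policy,
              "http://example.com/party/treating-clinician",
              "read",
              "http://example.com/ehr/patient-123/record"))   # True
```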
Abstract:
Recognising that creativity is a major driving force in the post-industrial economy, the Chinese government has recently established a range of "creative clusters" – industrial parks devoted to media industries, and arts districts – in order to promote the development of the creative industries. This book examines these new creative clusters, outlining their nature and purpose, and assessing their effectiveness. Drawing on case studies of a range of cluster models, and comparing them with international examples, the book demonstrates that creativity, both in China and internationally, is in fact a process of fitting new ideas to existing patterns, models and formats. It shows how large and exceptionally impressive creative clusters have been successfully established, but raises the important questions of whether profit or culture is the driving force, and of whether the bringing together of independent-minded, creative people, entrepreneurial businessmen, preferential policies and foreign investment may in time lead to unintended changes in social and political attitudes in China, including a weakening of state bureaucratic power. An important contribution to the existing literature on the subject, this book will be of great interest to scholars of urban studies, cultural geography, cultural economics and Asian studies.
Abstract:
Central to multi-stakeholder processes of participatory innovation is generating knowledge about ‘users’ and identifying business opportunities accordingly. In these processes of collaborative analysis and synthesis, conflicting perceptions within and about a field of interest are likely to surface. Rather than following the natural tendency to avoid these tensions, we demonstrate how tensions can be utilized by embodying them in provocative prototypes (provotypes). Provotypes expose and embody tensions that surround a field of interest in order to support collaborative analysis and collaborative design explorations across stakeholders. In this paper we map how provotyping contributes to four related areas of contemporary Interaction Design practice. Through a case study that brings together stakeholders from the field of indoor climate, we provide characteristics of design provocations and design guidelines for provotypes for participatory innovation.
Abstract:
This working paper reflects upon the opportunities and challenges of designing a form of digital noticeboard system with a remote Aboriginal community that supports their aspirations for both internal and external communication. The project itself has evolved from a relationship built through ecological work between scientists and the local community on the Groote Eylandt archipelago to study native populations of animal species over the long term. In the course of this work the aspiration has emerged to explore how digital noticeboards might support communication on the island and externally. This paper introduces the community, the context and the history of the project. We then reflect upon the science project, its outcomes and a framework empowering the Aboriginal viewpoint, in order to draw lessons for extending what we see as a pragmatic and relationship-based approach towards cross-cultural design.
Abstract:
Although mobile phones are often used in public urban places to interact with one’s geographically dispersed social circle, they can also facilitate interactions with people in the same public urban space. The PlaceTagz study investigates how physical artefacts in public urban places can be utilised and combined with mobile phone technologies to facilitate such interactions. Printed on stickers, PlaceTagz are QR codes linking to a digital message board that enables collocated users to interact with each other over time, resulting in a place-based digital memory. This exploratory project set out to investigate whether and how PlaceTagz are used by urban dwellers in a real-world deployment. We present findings from analysing content received through PlaceTagz and interview data from application users. The QR codes, which do not contain any contextual information, piqued the curiosity of users wondering about the embedded link’s destination and provoked comments regarding people, place and technology.
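For illustration only, a sticker-ready tag of this kind could be produced with a few lines of Python using the third-party qrcode package (with Pillow installed for image output); the thread URL below is hypothetical and not the actual PlaceTagz deployment.

```python
# Illustrative sketch: encode a (hypothetical) message-board thread URL as a
# QR code image that could be printed on a sticker, similar in spirit to a PlaceTagz tag.
import qrcode

tag_url = "https://example.com/placetagz/board/42"   # hypothetical thread URL
img = qrcode.make(tag_url)                           # encode the URL as a QR code
img.save("placetag_42.png")                          # save the image for printing
```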
Abstract:
Building Web 2.0 sites does not necessarily ensure the success of the site. We aim to better understand what makes a site successful by drawing insight from biologically inspired design patterns. Web 2.0 sites provide a mechanism for human interaction, enabling powerful intercommunication between massive volumes of users. Early Web 2.0 site providers that were previously dominant are being succeeded by newer sites providing innovative social interaction mechanisms. Understanding which site traits contribute to this success drives research into Web site mechanics, using models to describe the associated social networking behaviour. Some of these models attempt to show how the volume of users enables self-organisation and self-contextualisation of content. One model describing such coordinated environments is stigmergy, a term originally used to describe coordinated insect behaviour. This paper explores how exploiting stigmergy can provide a valuable mechanism for identifying and analysing online user behaviour, particularly given that users’ freedom of choice is restricted by the functionality the web site provides. This will aid in building better collaborative Web sites and improving their collaborative processes.
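A toy simulation (not a model from the paper) can illustrate the stigmergic reinforcement idea: when users are more likely to act on content that already carries traces of earlier activity, a small number of items self-organise into dominant positions.

```python
# Toy stigmergy simulation: each interaction leaves a "trace" on an item, and the
# probability of choosing an item grows with the traces already left on it.
import random

def simulate_stigmergy(n_items=10, n_users=5000, seed=1):
    random.seed(seed)
    traces = [1] * n_items                                  # every item starts with one trace
    for _ in range(n_users):
        item = random.choices(range(n_items), weights=traces, k=1)[0]
        traces[item] += 1                                   # the interaction reinforces the item
    return traces

print(sorted(simulate_stigmergy(), reverse=True))           # heavily skewed towards a few items
```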
Abstract:
Automated feature extraction and correspondence determination is an extremely important problem in the face recognition community as it often forms the foundation of the normalisation and database construction phases of many recognition and verification systems. This paper presents a completely automatic feature extraction system based upon a modified volume descriptor. These features form a stable descriptor for faces and are utilised in a reversible jump Markov chain Monte Carlo correspondence algorithm to automatically determine correspondences which exist between faces. The developed system is invariant to changes in pose and occlusion and results indicate that it is also robust to minor face deformations which may be present with variations in expression.
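The reversible jump MCMC correspondence algorithm itself is not sketched here; the simplified example below instead runs a plain Metropolis-Hastings search over one-to-one correspondences between two synthetic point sets, which conveys the flavour of sampling-based correspondence determination under stated assumptions (fixed model dimension, random stand-in features).

```python
# Simplified sketch only: plain Metropolis-Hastings over one-to-one feature
# correspondences, with random 3D points standing in for real face features.
import numpy as np

rng = np.random.default_rng(0)
feats_a = rng.normal(size=(20, 3))                                   # stand-in features, face A
feats_b = feats_a[rng.permutation(20)] + 0.01 * rng.normal(size=(20, 3))  # shuffled, noisy copy

def cost(assign):
    """Total distance between matched features under a candidate correspondence."""
    return np.sum(np.linalg.norm(feats_a - feats_b[assign], axis=1))

assign = rng.permutation(20)                 # initial guess at the correspondence
c, temp = cost(assign), 0.05
for _ in range(20000):
    i, j = rng.integers(20, size=2)
    prop = assign.copy()
    prop[i], prop[j] = prop[j], prop[i]      # propose swapping two correspondences
    c_new = cost(prop)
    if c_new <= c or rng.random() < np.exp((c - c_new) / temp):   # Metropolis acceptance rule
        assign, c = prop, c_new
print(c)                                     # a low residual indicates recovered correspondences
```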