225 results for Credit Spreads
Abstract:
This paper describes a method for analysing videogames based on game activities. It examines the impact of these activities on the player experience. The research approach applies heuristic checklists that deconstruct games in terms of the cognitive processes that players engage in during gameplay (e.g., addressing goals, interpreting feedback). For this study we examined three puzzle games: Portal 2, I-Fluid and Braid. The Player Experience of Need Satisfaction (PENS) survey was used to measure player experience following gameplay. The cognitive action provided within games is examined in light of reported player experiences to determine the extent to which these activities influence players’ feelings of competence, autonomy, intuitive control and presence. Findings indicate that positive experiences are directly influenced by game activity design. Our study also demonstrates the value of expert review in deconstructing gameplay activity as a means of providing direction for game design that enhances the player experience.
Abstract:
Whether using electronic banking, paying with credit cards, or synchronising a mobile telephone to an in-car system via Bluetooth, humans are a critical part of many cryptographic protocols every day. We reduce the gap that exists between the theory and the reality of the security of these cryptographic protocols involving humans by creating tools and techniques for proofs and implementations of human-followable security. Drawing on three human research studies, we present a model for capturing human recognition; we provide a tool for generating values called Computer-HUman Recognisable Nonces (CHURNs); and we provide a model for capturing human-perceptible freshness.
Abstract:
In professions such as teaching, health sciences (medicine, nursing), and the built environment, significant work-based learning through practica is an essential element before graduation. However, there is no such requirement in professional accounting education. This paper reports the findings of an exploratory qualitative case study of the implementation of a Workplace Learning Experience Program in Accountancy at the Queensland University of Technology (QUT) in Australia. The interview-based study documents the responses of university students and graduates to this program. The study demonstrates that a 100-hour work placement in Accountancy can enhance student learning. It highlights the potential value of the application of sociocultural theories of learning, especially the concept of situated learning involving legitimate peripheral participation (Lave and Wenger 1991). This research adds to a small body of empirical accounting education literature relating to the benefits of work placements prior to graduation. The effectiveness of this short, for-credit, unpaid program should encourage other universities to implement a similar work placement program as a form of pre-graduation learning in professional accounting education.
Abstract:
In contemporary game development circles the ‘game making jam’ has become an important rite of passage and baptism event, an exploration space and a central indie lifestyle affirmation and community event. Game jams have recently become a focus for design researchers interested in the creative process. In this paper we tell the story of an established local game jam and our various documentation and data collection methods. We present the beginnings of the current project, which seeks to map the creative teams and their process in the space of the challenge, and which aims to enable participants to be more than the objects of the data collection. A perceived issue is that typical documentation approaches are ‘about’ the event as opposed to ‘made by’ the participants, and are thus at odds with the spirit of the jam as a phenomenon and fail to access the rich playful potential of participant experience. In the data collection and visualisation projects described here, we focus on using collected data to re-include the participants in telling stories about their experiences of the event as a place-based experience. Our goal is to find a means to encourage production of ‘anecdata’ - data based on individual storytelling that is subjective, malleable, and resists collection via formal mechanisms - and to enable mimesis, or active narrating, on the part of the participants. We present a concept design for data as game based on the logic of early medieval maps, and we reflect on how we could enable participation in the data collection itself.
Abstract:
Regrowing forests on cleared land is a key strategy for achieving both biodiversity conservation and climate change mitigation globally. Maximizing these co-benefits, however, remains theoretically and technically challenging because of the complex relationship between carbon sequestration and biodiversity in forests, the strong influence of climate variability and landscape position on forest development, the large number of possible restoration strategies, and the long time-frames needed to declare success. Through the synthesis of three decades of knowledge on forest dynamics and plant functional traits combined with decision science, we demonstrate that we cannot always maximize carbon sequestration by simply increasing the functional trait diversity of the trees planted. The relationships between plant functional diversity and carbon sequestration rates above-ground and in the soil depend on climate and landscape position. We show how to manage ‘identities’ and ‘complementarities’ between plant functional traits in order to systematically achieve maximal co-benefits in various climate and landscape contexts. We provide examples of optimal planting and thinning rules that satisfy this ecological strategy and guide the restoration of forests that are rich in both carbon and plant functional diversity. Our framework provides the first mechanistic approach for generating decision-making rules that can be used to manage forests for multiple objectives, and supports joint carbon credit and biodiversity conservation initiatives, such as Reducing Emissions from Deforestation and forest Degradation (REDD+). The decision framework can also be linked to species distribution models and socio-economic models in order to find restoration solutions that simultaneously maximize biodiversity, carbon stocks and other ecosystem services across landscapes.
Our study provides the foundation for developing and testing cost-effective and adaptable forest management rules to achieve biodiversity, carbon sequestration and other socio-economic co-benefits under global change.
Abstract:
Business Process Management (BPM) is accepted globally as an organizational approach to enhance productivity and drive cost efficiencies. Studies confirm a shortage of BPM-skilled professionals, with limited opportunities to develop the required BPM expertise. This study investigates this gap, starting from a critical analysis of BPM courses offered by Australian universities and training institutions. These courses were analyzed and mapped against a leading BPM capability framework to determine how well current BPM education and training offerings in Australia address the core capabilities required by BPM professionals globally. To determine the BPM skill-sets sought by industry, online recruitment advertisements were collated, analyzed, and mapped against this BPM capability framework. The outcomes provide a detailed overview of the alignment between available BPM education/training and industry demand. These insights are useful for BPM professionals and their employers in building awareness of the BPM capabilities required for a BPM-mature organization. Universities and other training institutions will benefit from these results by understanding where demand is, where the gaps are, and what other BPM education providers are supplying. This structured comparison method could continue to provide a common ground for future discussion across university-industry boundaries and for continuous alignment of their respective practices.
Abstract:
Ureaplasmas are the microorganisms most frequently isolated from the amniotic fluid of pregnant women and can cause chronic intrauterine infections. These tiny bacteria are thought to undergo rapid evolution and exhibit a hypermutable phenotype; however, little is known about how ureaplasmas respond to selective pressures in utero. Using an ovine model of chronic intra-amniotic infection, we investigated whether exposure of ureaplasmas to sub-inhibitory concentrations of erythromycin could induce phenotypic or genetic indicators of macrolide resistance. At 55 days gestation, 12 pregnant ewes received an intra-amniotic injection of a non-clonal, clinical U. parvum strain, followed by: (i) erythromycin treatment (IM, 30 mg/kg/day, n=6); or (ii) saline (IM, n=6) at 100 days gestation. Fetuses were then delivered surgically at 125 days gestation. Despite injecting the same inoculum into all ewes, significant differences between amniotic fluid and chorioamnion ureaplasmas were detected following chronic intra-amniotic infection. Numerous polymorphisms were observed in domain V of the 23S rRNA gene of ureaplasmas isolated from the chorioamnion (but not the amniotic fluid), resulting in a mosaic-like sequence. Chorioamnion isolates also harboured the macrolide resistance genes erm(B) and msr(D) and were associated with variable roxithromycin minimum inhibitory concentrations. Remarkably, this variability occurred independently of exposure of ureaplasmas to erythromycin, suggesting that low-level erythromycin exposure does not induce ureaplasmal macrolide resistance in utero. Rather, the significant differences observed between amniotic fluid and chorioamnion ureaplasmas suggest that different anatomical sites may select for ureaplasma sub-types within non-clonal, clinical strains. This may have implications for the treatment of intrauterine ureaplasma infections.
Abstract:
Due to the demand for better and deeper analysis in sports, organizations (both professional teams and broadcasters) are looking to use spatiotemporal data in the form of player tracking information to gain an advantage over their competitors. However, due to the large volume of data, its unstructured nature, and the lack of associated team activity labels (e.g. strategic/tactical), effective and efficient strategies for dealing with such data have yet to be deployed. A bottleneck restricting such solutions is the lack of a suitable representation (i.e. ordering of players) that is immune to the potentially enormous number of possible permutations of player orderings, in addition to the high dimensionality of the temporal signal (e.g. a game of soccer lasts for 90 minutes). We leverage a recent method which utilizes a "role representation", as well as a feature reduction strategy that uses a spatiotemporal bilinear basis model, to form a compact spatiotemporal representation. Using this representation, we find the most likely formation patterns of a team associated with match events across nearly 14 hours of continuous player and ball tracking data in soccer. Additionally, we show that we can accurately segment a match into distinct game phases and detect highlights (i.e. shots, corners, free-kicks, etc.) completely automatically using a decision-tree formulation.
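The decision-tree formulation mentioned above can be illustrated with a minimal sketch: windows of tracking data are summarized into a few features and a shallow tree separates game phases. The features, phase labels, and synthetic numbers below are invented for illustration and are not the paper's actual feature set.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
# Hypothetical per-window features: [mean ball speed, team spread, ball x-position]
in_play  = rng.normal([8.0, 30.0, 0.0], [1.0, 3.0, 20.0], size=(60, 3))
stoppage = rng.normal([0.5, 15.0, 0.0], [0.3, 3.0, 20.0], size=(60, 3))
X = np.vstack([in_play, stoppage])
y = np.array([0] * 60 + [1] * 60)  # 0 = in-play window, 1 = stoppage window

# A shallow tree is enough when the phase-discriminating features are simple.
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
```

In practice the tree would be trained on labelled windows from real tracking data and its splits inspected for interpretable phase boundaries.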
Abstract:
Over the past decade, vision-based tracking systems have been successfully deployed in professional sports such as tennis and cricket for enhanced broadcast visualizations as well as for aiding umpiring decisions. Despite the high level of accuracy of the tracking systems and the sheer volume of spatiotemporal data they generate, the use of this high-quality data for quantitative player performance analysis and prediction has been lacking. In this paper, we present a method which predicts the location of a future shot based on the spatiotemporal parameters of the incoming shots (i.e. shot speed, location, angle and feet location) from such a vision system. Having the ability to accurately predict future short-term events has enormous implications for automatic sports broadcasting in addition to the coaching and commentary domains. Using Hawk-Eye data from the 2012 Australian Open Men's draw, we utilize a Dynamic Bayesian Network to model player behaviors and use an online model adaptation method to match the player's behavior and enhance shot predictability. To show the utility of our approach, we analyze the shot predictability of the top three seeds in the tournament (Djokovic, Federer and Nadal), as they played the most games.
Abstract:
Efficient and effective feature detection and representation is an important consideration when processing videos, and a large number of applications such as motion analysis, 3D scene understanding and tracking depend on it. Amongst several feature description methods, local features are becoming increasingly popular for representing videos because of their simplicity and efficiency. While they achieve state-of-the-art performance with low computational complexity, their performance is still too limited for real-world applications. Furthermore, rapid increases in the uptake of mobile devices have increased the demand for algorithms that can run with reduced memory and computational requirements. In this paper we propose a semi-binary feature detector-descriptor based on the BRISK detector, which can detect and represent videos with significantly reduced computational requirements, while achieving performance comparable to state-of-the-art spatio-temporal feature descriptors. First, the BRISK feature detector is applied on a frame-by-frame basis to detect interest points; then the detected key points are compared against consecutive frames for significant motion. Key points with significant motion are encoded with the BRISK descriptor in the spatial domain and the Motion Boundary Histogram in the temporal domain. This descriptor is not only lightweight but also has lower memory requirements because of the binary nature of the BRISK descriptor, allowing the possibility of applications on hand-held devices. We evaluate the combined detector-descriptor performance in the context of action classification with a standard, popular bag-of-features with SVM framework. Experiments are carried out on two popular datasets of varying complexity, and we demonstrate performance comparable to other descriptors with reduced computational complexity.
Abstract:
At the highest level of competitive sport, nearly all performances of athletes (both training and competitive) are chronicled using video. Video is then often viewed by expert coaches/analysts who manually label important performance indicators to gauge performance. Stroke rate and pacing are important performance measures in swimming, and these have previously been digitised manually by a human. This is problematic as annotating large volumes of video can be costly and time-consuming. Further, since it is difficult to accurately estimate the position of the swimmer at each frame, measures such as stroke rate are generally aggregated over an entire swimming lap. Vision-based techniques which can automatically, objectively and reliably track the swimmer and their location can potentially solve these issues and allow for large-scale analysis of a swimmer across many videos. However, the aquatic environment is challenging due to fluctuations in the scene from splashes and reflections, and because swimmers are frequently submerged at different points in a race. In this paper, we temporally segment races into distinct and sequential states, and propose a multimodal approach which employs individual detectors tuned to each race state. Our approach allows the swimmer to be located and tracked smoothly in each frame despite a diverse range of constraints. We test our approach on a video dataset compiled at the 2012 Australian Short Course Swimming Championships.
Abstract:
A new community and communication type of social network - online dating - is gaining momentum. With many people joining the dating network, users become overwhelmed by the choices for an ideal partner. A solution to this problem is to provide users with partner recommendations based on their interests and activities. Traditional recommendation methods ignore users' differing needs and provide recommendations equally to all users. In this paper, we propose a recommendation approach that applies different recommendation strategies to different groups of members. A segmentation method using the Gaussian Mixture Model (GMM) is proposed to capture users' needs. A targeted recommendation strategy is then applied to each identified segment. Empirical results show that the proposed approach outperforms several existing recommendation methods.
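The segmentation step described in the abstract can be sketched as follows: users are represented by activity features, a GMM assigns each user to a segment, and each segment is then served by its own strategy. The feature choices, group sizes and strategy names below are purely illustrative assumptions, not the paper's actual setup.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical activity features: [messages sent per week, profile views per week]
users = np.vstack([
    rng.normal([2, 5], 1.0, size=(50, 2)),    # a low-activity group
    rng.normal([20, 40], 3.0, size=(50, 2)),  # a high-activity group
])

# Fit a two-component GMM and assign each user to a segment.
gmm = GaussianMixture(n_components=2, random_state=0).fit(users)
segments = gmm.predict(users)

# Each segment would then be served by its own (placeholder) strategy.
strategy_for_segment = {0: "popularity-based", 1: "activity-matched"}
```

The number of components would normally be selected with a criterion such as BIC rather than fixed in advance.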
Abstract:
The rapid development of the World Wide Web has created massive amounts of information, leading to the information overload problem. Under these circumstances, personalization techniques have been developed to help users find content that meets their personalized interests or needs amid this massively increasing information. User profiling techniques play the core role in this research. Traditionally, most user profiling techniques create user representations in a static way. However, in real-world applications users' interests may change over time. In this research we develop algorithms for mining user interests by integrating time decay mechanisms into topic-based user interest profiling. Time forgetting functions are integrated into the calculation of topic interest measurements at an in-depth level. The experimental study shows that accounting for the temporal effects of user interests by integrating time forgetting mechanisms yields better recommendation performance.
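A time forgetting function of the kind described can be sketched with exponential decay: each observed topic interaction is weighted by its age, so older interests fade from the profile. The half-life value and the example topics are assumptions for illustration, not the paper's parameters.

```python
from collections import defaultdict

HALF_LIFE_DAYS = 30.0  # assumed half-life; a tunable parameter in practice

def decay_weight(age_days: float) -> float:
    """Exponential forgetting: the weight halves every HALF_LIFE_DAYS."""
    return 0.5 ** (age_days / HALF_LIFE_DAYS)

def topic_profile(observations):
    """Build a topic-interest profile from (topic, age_in_days) pairs."""
    profile = defaultdict(float)
    for topic, age in observations:
        profile[topic] += decay_weight(age)
    return dict(profile)

# One recent "politics" interaction outweighs a 90-day-old "sports" one.
obs = [("sports", 1), ("sports", 90), ("politics", 2)]
profile = topic_profile(obs)
```

Other forgetting shapes (linear, sliding-window) plug into the same structure by swapping `decay_weight`.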
Abstract:
Most recommender systems use collaborative filtering, content-based filtering or a hybrid approach to recommend items to new users. Collaborative filtering recommends items to new users based on their similar neighbours, while content-based filtering tries to recommend items that are similar to new users' profiles. The fundamental issues include how to profile new users and how to deal with over-specialization in content-based recommender systems. Indeed, the terms used to describe items can be organized into a concept hierarchy. We therefore aim to describe user profiles or information needs using concept vectors. This paper presents a new method for acquiring users' information needs, which allows new users to describe their preferences on a concept hierarchy rather than by rating items. It also develops a new ranking function to recommend items to new users based on their information needs. The proposed approach is evaluated on Amazon book datasets. The experimental results demonstrate that the proposed approach can significantly improve the effectiveness of recommender systems.
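The concept-vector idea can be illustrated with a minimal sketch: a new user weights concepts directly instead of rating items, and items described by the same concepts are ranked by similarity. The cosine ranking function, concept names and item descriptions below are illustrative assumptions, not the paper's actual ranking function.

```python
import math

def cosine(u: dict, v: dict) -> float:
    """Cosine similarity between two sparse concept vectors."""
    dot = sum(w * v.get(c, 0.0) for c, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# A new user's stated preferences over (hypothetical) hierarchy concepts.
user_needs = {"science_fiction": 0.9, "history": 0.3}

# Items described by concept vectors instead of rating histories.
items = {
    "Dune":     {"science_fiction": 1.0},
    "SPQR":     {"history": 1.0},
    "Cookbook": {"cooking": 1.0},
}

ranked = sorted(items, key=lambda i: cosine(user_needs, items[i]), reverse=True)
```

Because the user never rates an item, this sidesteps the cold-start problem the abstract targets; the hierarchy would also allow preferences on a parent concept to propagate to its children.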
Abstract:
Different reputation models are used on the web to generate reputation values for products from users' review data. Most current reputation models use review ratings and neglect users' textual reviews, because text is more difficult to process. However, we argue that an overall reputation score for an item does not reflect the actual reputation of all of its features, which is why using users' textual reviews is necessary. In our work we introduce a new reputation model that defines a new aggregation method for users' opinions about product features extracted from review text. Our model uses a feature ontology to define the general features and sub-features of a product. It also reflects the frequencies of positive and negative opinions. We provide a case study to show how our results compare with other reputation models.
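One simple way to aggregate opinion frequencies into feature-level reputation, in the spirit of the model described, is to score each feature by its share of positive opinions and weight the overall score by how often each feature is discussed. This aggregation rule, the feature names and the counts are assumptions for illustration, not the paper's actual model.

```python
# Hypothetical extracted opinion counts: feature -> (positive, negative)
opinions = {
    "battery": (40, 10),
    "screen":  (25, 25),
    "camera":  (5, 15),
}

def feature_score(pos: int, neg: int) -> float:
    """Reputation of one feature: fraction of its opinions that are positive."""
    return pos / (pos + neg)

def item_reputation(opinions: dict) -> float:
    """Overall score: feature scores weighted by how often each is mentioned."""
    total = sum(p + n for p, n in opinions.values())
    return sum(feature_score(p, n) * (p + n) / total
               for p, n in opinions.values())

scores = {f: feature_score(p, n) for f, (p, n) in opinions.items()}
overall = item_reputation(opinions)
```

The feature ontology would extend this by rolling sub-feature scores (e.g. "battery life" under "battery") up into their parent features before the final aggregation.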