949 results for Open book decompositions
Abstract:
My concern in this commentary is the discrepancy between cultural psychologists' theoretical claims that meanings are co-constructed by, with, and for individuals in ongoing social interaction, and their research practices, in which the researcher's and the research participant's meaning-making processes are separated in time into sequential turns. I argue for the need to live up to these theoretical assumptions by making both the initial research encounter and the researcher's later interpretation process more co-constructive. I suggest making the initial research encounter more co-constructive by paying attention to those moments when the negotiated flow of interaction between researcher and research participant breaks down, for this allows the research participant's meaning-making to be traced and makes the researcher's efforts towards meaning more explicit. I propose making the later interpretation process more co-constructive by adopting a more open-ended and dialogical way of writing that is specifically addressed to research participants and invites them to actively engage with the researcher's meaning-making.
Abstract:
In this chapter, we explore methods for automatically generating game content—and games themselves—adapted to individual players in order to improve their playing experience or achieve a desired effect. This goes beyond notions of mere replayability and involves modeling player needs to maximize their enjoyment, involvement, and interest in the game being played. We identify three main aspects of this process: generation of new content and rule sets, measurement of this content and the player, and adaptation of the game to change player experience. This process forms a feedback loop of constant refinement, as games are continually improved while being played. Framed within this methodology, we present an overview of our recent and ongoing research in this area. This is illustrated by a number of case studies that demonstrate these ideas in action over a variety of game types, including 3D action games, arcade games, platformers, board games, puzzles, and open-world games. We draw together some of the lessons learned from these projects to comment on the difficulties, the benefits, and the potential for personalized gaming via adaptive game design.
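The generate-measure-adapt feedback loop described above can be sketched in a few lines. Everything below is a hypothetical illustration, not the authors' implementation: the single difficulty parameter, the enemy-count content model, and the one-peak player model are all invented for the example.

```python
def generate(difficulty):
    # Candidate content: enemy count scales with a single difficulty parameter.
    return {"enemies": round(difficulty * 5)}

def measure(level, skill):
    # Toy player model: enjoyment peaks when challenge matches player skill.
    challenge = level["enemies"] / 5
    return max(0.0, 1.0 - abs(challenge - skill))

def adapt(difficulty, level, skill, rate=0.5):
    # Close the loop: nudge difficulty toward the player's preferred challenge.
    challenge = level["enemies"] / 5
    return difficulty + rate * (skill - challenge)

difficulty, skill = 1.0, 3.0
for _ in range(20):
    level = generate(difficulty)      # generation
    enjoyment = measure(level, skill)  # measurement
    difficulty = adapt(difficulty, level, skill)  # adaptation
```

After a handful of iterations the loop settles on content matched to the modeled player, which is the "constant refinement while being played" the chapter describes, reduced to its simplest possible form.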
Abstract:
Many nations are highlighting the need for a renaissance in the mathematical sciences as essential to the well-being of all citizens (e.g., Australian Academy of Science, 2006; 2010; The National Academies, 2009). Indeed, the first recommendation of The National Academies’ Rising Above the Gathering Storm (2007) was to vastly improve K–12 science and mathematics education. The subsequent report, Rising Above the Gathering Storm Two Years Later (2009), highlighted again the need to target mathematics and science from the earliest years of schooling: “It takes years or decades to build the capability to have a society that depends on science and technology . . . You need to generate the scientists and engineers, starting in elementary and middle school” (p. 9). Such pleas reflect the rapidly changing nature of the problem solving and reasoning needed in today’s world, beyond the classroom. As The National Academies (2009) reported, “Today the problems are more complex than they were in the 1950s, and more global. They’ll require a new educated workforce, one that is more open, collaborative, and cross-disciplinary” (p. 19). The implications for the problem-solving experiences we implement in schools are far-reaching. In this chapter, I consider problem solving and modelling in the primary school, beginning with the need to rethink the experiences we provide in the early years. I argue for a greater awareness of the learning potential of young children and the need to provide stimulating learning environments. I then focus on data modelling as a powerful means of advancing children’s statistical reasoning abilities, which they increasingly need as they navigate their data-drenched world.
Abstract:
A number of online algorithms have been developed that have small additional loss (regret) compared to the best “shifting expert”. In this model, there is a set of experts and the comparator is the best partition of the trial sequence into a small number of segments, where the expert of smallest loss is chosen in each segment. The regret is typically defined for worst-case data / loss sequences. There has been a recent surge of interest in online algorithms that combine good worst-case guarantees with much improved performance on easy data. A practically relevant class of easy data is the case when the loss of each expert is iid and the best and second-best experts have a gap between their mean losses. In the full information setting, the FlipFlop algorithm by De Rooij et al. (2014) combines the best of the iid-optimal Follow-The-Leader (FTL) and the worst-case-safe Hedge algorithms, whereas in the bandit information case SAO by Bubeck and Slivkins (2012) competes with the iid-optimal UCB and the worst-case-safe EXP3. We ask the same question for the shifting expert problem. First, we ask what simple and efficient algorithms exist for the shifting experts problem when the loss sequence in each segment is iid with respect to a fixed but unknown distribution. Second, we ask how to efficiently unite the performance of such algorithms on easy data with worst-case robustness. A particularly intriguing open problem is the case when the comparator shifts within a small subset of experts from a large set under the assumption that the losses in each segment are iid.
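As a rough illustration of why iid segments with a gap remain nontrivial once the comparator can shift, the sketch below runs plain Follow-The-Leader on two invented iid Bernoulli segments (the loss means, segment lengths, and seed are all assumptions for the example, not from the paper). FTL locks onto the first segment's leader and adapts only slowly after the change point, which is exactly the deficit a shifting-expert algorithm must close:

```python
import random

def follow_the_leader(loss_rounds, n_experts):
    # FTL: each round, play the expert with the smallest cumulative loss so far.
    cum = [0.0] * n_experts
    total = 0.0
    for losses in loss_rounds:
        leader = min(range(n_experts), key=lambda i: cum[i])
        total += losses[leader]
        for i in range(n_experts):
            cum[i] += losses[i]
    return total

random.seed(0)
# Two segments of iid Bernoulli losses; in each segment the best expert's mean
# is separated from the second best by a gap (the "easy data" regime above).
segment_means = [(0.2, 0.6, 0.6), (0.6, 0.6, 0.2)]
rounds = [tuple(float(random.random() < m) for m in means)
          for means in segment_means for _ in range(500)]

ftl_loss = follow_the_leader(rounds, 3)

# Loss of the best shifting comparator: best expert within each segment.
seg1, seg2 = rounds[:500], rounds[500:]
best_shifting = (min(sum(r[i] for r in seg1) for i in range(3)) +
                 min(sum(r[i] for r in seg2) for i in range(3)))
# ftl_loss far exceeds best_shifting: plain FTL does not track the shift.
```

Here expert 0's large cumulative lead from the first segment keeps it the leader for most of the second, so FTL's regret against the shifting comparator grows linearly with the segment length.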
Abstract:
This paper addresses the development of trust in the use of Open Data through the incorporation of appropriate authentication and integrity parameters, for use by end-user Open Data application developers, in an architecture for trustworthy Open Data Services. The advantage of this architecture is that it is far more scalable: it is not another certificate-based hierarchy, with the attendant problems of certificate revocation management. With the use of a Public File, if a key is compromised it is a simple matter for the single responsible entity to replace the key pair with a new one and re-perform the data file signing process. Under the proposed architecture, the Open Data environment does not interfere with any internal security schemes that an entity might employ. However, the architecture incorporates, when needed, parameters from the entity, e.g. the person who authorized publishing as Open Data, at the time that datasets are created or added.
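A minimal sketch of the sign-and-verify idea with a Public File of current keys. The toy RSA parameters, the `public_file` dict, and the entity name are all invented for illustration; a real deployment would use a vetted cryptographic library with full-size keys, not hand-rolled RSA.

```python
import hashlib

# Toy RSA key pair (illustrative only; insecure key size).
p, q = 61, 53
n = p * q        # modulus 3233
e = 17           # public exponent
d = 2753         # private exponent: inverse of e mod (p-1)(q-1)

def sign(data: bytes) -> int:
    # Hash the dataset, then sign the hash with the private key.
    h = int.from_bytes(hashlib.sha256(data).digest(), "big") % n
    return pow(h, d, n)

def verify(data: bytes, signature: int, public_key) -> bool:
    # Consumers recompute the hash and check it against the signature.
    h = int.from_bytes(hashlib.sha256(data).digest(), "big") % n
    exp, mod = public_key
    return pow(signature, exp, mod) == h

# The "Public File": one well-known mapping from entity to its current key.
public_file = {"stats-bureau": (e, n)}

dataset = b"year,value\n2024,42\n"
sig = sign(dataset)
ok = verify(dataset, sig, public_file["stats-bureau"])

# Key compromise: the single responsible entity publishes a new key pair in
# the Public File and re-signs its datasets; no revocation hierarchy needed.
```

The point of the sketch is the last comment: because consumers always read the current key from the Public File, recovery from compromise is a local re-keying and re-signing step rather than a certificate-revocation process.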
Abstract:
It is rare to find an anthology that realizes the possibilities of the form. We tend to regard our edited collections as lesser siblings, and forget their special value. But at times, a subject seems to require an edited collection much more than it does a classic monograph. So it is with the subject showcased here, which concerns the global circulation, performance and consumption of heavy metal. This is a relatively new and emerging body of work, hitherto scattered disparately across the broader popular music studies, but quickly gaining status as a “studies” with the establishment of a global conference, a journal, and publication of this anthology, all in recent years. Metal Rules the Globe took the editors a decade to compile. That they have thought deeply about how they want the collection to speak shows through in the book’s thoughtful arrangement and design, and in the way in which they draw on the contributions herein to develop for the field a research agenda that will take it forward...
Abstract:
For more than 15 years, QUT’s Visual Arts discipline has employed a teaching model known as the ‘open studio’ in its undergraduate BFA program. Distinct from other models of studio degrees in Australia, the open studio approach emphasizes individual practice by focusing on experimentation, collaboration and cross-disciplinary activities. However, while this activity proves to be highly relevant to exploring and participating in the ‘post-medium’ nature of much contemporary art, the open studio also presents a complex set of challenges to the artist-teacher. The open studio, it can be argued, produces a different type of student than traditional, discipline-specific art programs – but it also produces a different kind of artist-teacher. In this paper, the authors provide a reflection on their own experiences as artists and studio lecturers involved with the two ‘bookends’ of the QUT studio program – first year and third year. Using these separate contexts as case studies, the authors discuss the transformative qualities of the open studio as it is adapted to the particularities of each cohort and the curricular needs of each year level. In particular, the authors explore the way the teaching experience has influenced and positively challenged their individual (and, paradoxically, discipline-focussed) studio practices. It is generally accepted that the teaching of art needs to be continually reconceptualised in response to the changing conditions of contemporary art, culture and technology. This paper articulates how the authors have worked at that reconceptualisation within both their teaching and studio practices, and so practically demonstrates the complex dialogic processes inherent to the teaching of the visual arts studio.
Abstract:
Asking why is an important foundation of inquiry and fundamental to the development of reasoning skills and learning. Despite this, and despite the relentless and often disruptive nature of innovations in information and communications technology (ICT), sophisticated tools that directly support this basic act of learning appear to be undeveloped, not yet recognized, or in the very early stages of development. Why is this so? To this question, there is no single factual answer. In response, however, plausible explanations and further questions arise, and such responses are shown to be typical consequences of why-questioning. A range of contemporary scenarios are presented to highlight the problem. Consideration of the various inputs into the evolution of digital learning is introduced to provide historical context and this serves to situate further discussion regarding innovation that supports inquiry-based learning. This theme is further contextualized by narratives on openness in education, in which openness is also shown to be an evolving construct. Explanatory and descriptive contents are differentiated in order to scope out the kinds of digital tools that might support inquiry instigated by why-questioning and which move beyond the search paradigm. Probing why from a linguistic perspective reveals versatile and ambiguous semantics. The why dimension—asking, learning, knowing, understanding, and explaining why—is introduced as a construct that highlights challenges and opportunities for ICT innovation. By linking reflective practice and dialogue with cognitive engagement, this chapter points to specific frontiers for the design and development of digital learning tools, frontiers in which inquiry may find new openings for support.
Abstract:
The control of environmental factors in open-office environments, such as lighting and temperature, is becoming increasingly automated. This development means that office inhabitants are losing the ability to manually adjust environmental conditions according to their needs. In this paper, we describe the design, use and evaluation of MiniOrb, a system that employs ambient and tangible interaction mechanisms to allow inhabitants of office environments to maintain awareness of environmental factors, report on their own subjectively perceived office comfort levels and see how these compare to group average preferences. The system is complemented by a mobile application, which enables users to see and set the same sensor values and preferences, but using a screen-based interface. We give an account of the system’s design and outline the results of an in-situ trial and user study. Our results show that devices that combine ambient and tangible interaction approaches are well suited to the task of recording indoor climate preferences and afford a rich set of possible interactions that can complement those enabled by more conventional screen-based interfaces.
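The core data flow, reporting a subjective comfort level and comparing it to the group average, can be sketched as below. All names and the rating scale are hypothetical; the actual MiniOrb hardware, ambient display, and mobile app are not modeled here.

```python
from collections import defaultdict
from statistics import mean

# Per-user history of subjectively reported comfort levels (hypothetical scale 1-7).
reports = defaultdict(list)

def report_comfort(user: str, level: float) -> None:
    # A report could arrive from the tangible device or the mobile app alike.
    reports[user].append(level)

def group_average() -> float:
    # Average of each inhabitant's most recent report.
    return mean(levels[-1] for levels in reports.values())

report_comfort("alice", 3.0)
report_comfort("bob", 5.0)
report_comfort("alice", 4.0)

# What an ambient display might show alice: her latest report vs. the group.
delta = reports["alice"][-1] - group_average()
```

The same two functions could back both the tangible and the screen-based interface, which mirrors the paper's point that the two modalities complement rather than replace each other.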
Abstract:
Javanese Performances on an Indonesian Stage: Contesting Culture, Embracing Change is Barbara Hatley’s first book about the performing arts in Indonesia, a topic that piqued her interest while undertaking a master’s program at Yale University in the late 1960s. In this sense, it is a landmark study, for Hatley has since become very well known in Indonesianist circles, especially among those with an interest in matters of culture, popular and elite. Until recently, her writings on Indonesian performing arts have only been available in the form of journal articles and book chapters...
Abstract:
Had it been published a decade earlier, Hip-hop Japan might have been cited as a good example of the kind of multi-sited ethnography George Marcus (1998) proposes. Hip-hop Japan is a critical study of cultural globalisation. It presents as much theoretical interpretation, discussions of Japanese popular culture in general, and reviews of formulations of the Japanese self by Japanese scholars, as it does of Japanese hip-hop per se. In fact, the latter is relatively thinly described, as Condry’s project is to demonstrate how Japanese hip-hop’s particularities are made up from a mix of US hip-hop, Japanese modes of fandom, contestatory uses of the Japanese language and the specific logics of the Japanese popular music recording industry. The book journeys into these worlds as much as it does into the world of Japanese hip-hop.
Abstract:
This is the third (but first edited) volume in Sen and Hill’s corpus on Indonesian media. An anthology built from contributions to a 2006 workshop, it is necessarily more fragmented than the editors’ earlier monographs. While this fragmented character helps to evoke a fractured context, it also makes for unwieldiness...
Abstract:
Vicki Mayer’s book is unusual in that, despite its title, it is not about television producers at all, or at least not in the sense that scholars and the television industry itself have traditionally understood the role. Rather than referring to those in creative, managerial or financial control, or those with substantial intellectual input into a program, Mayer uses the term in a deliberately broad sense to mean, essentially, anyone ‘whose labor, however small, contributes to [television] production’ (179).