969 results for Computer art
Abstract:
This wall-mounted sculpture features eight photographic prints displayed on a computer monitor mounting system. The eight panels each swivel and articulate separately. Together, they combine to create an abstract landscape based on a desktop background image of an idyllic tropical setting. Recalling the workstation of a futures trader, logistics manager, or design guru, the screen armature draws out the simultaneously romantic and vacuous characteristics of the imagery. By combining the visual languages of both physical and on-screen desktop environments with the pictorial traditions of landscape and abstraction, this work questions how and where we deploy nature, desire and wonderment in our increasingly technologised lives.
Abstract:
Grant Stevens is ambivalent. The young Brisbane artist made his name with a series of computer-generated animated-text videos that explore clichés but seem undecided as to whether they are trivial and vacuous, profound and authentic or somehow both at once. Stevens plunders mass-media sources (the familiar image repertoire dished up by Hollywood, television, pop music and the Internet) as readymade content. He explores this everyday language, sometimes for its ambiguity, but more often for its almost uncanny lucidity. Resembling meditation and relaxation guides, his recent videos raise the question: what made us so anxious? This book examines Stevens' artistic output over the first ten years of his practice. It includes essays by Mark Pennings and Chris Kraus.
Abstract:
Christoph Schlingensief: Art Without Borders, edited by Tara Forrest and Anna Teresa Scheer, is the first English-language collection of essays about this extraordinary German artist. As Forrest and Scheer suggest in their introduction, ‘access to Schlingensief’s highly challenging productions has been hampered by the fact that very little has been published on his oeuvre in the English-speaking world’. This collection aims to introduce English-speaking artists, scholars and academics to Schlingensief’s extensive, experimental, and at times highly controversial body of work across film, theatre, television, live art and activism...
Abstract:
Appearance-based loop closure techniques, which leverage the high information content of visual images and can be used independently of pose, are now widely used in robotic applications. The current state of the art in the field is Fast Appearance-Based Mapping (FAB-MAP), which has been demonstrated in several seminal robotic mapping experiments. In this paper, we describe OpenFABMAP, a fully open source implementation of the original FAB-MAP algorithm. Beyond the benefits of full user access to the source code, OpenFABMAP provides a number of configurable options, including rapid codebook training and interest point feature tuning. We demonstrate the performance of OpenFABMAP on a number of published datasets and show the advantages of quick algorithm customisation. We present results from OpenFABMAP’s application in a highly varied range of robotics research scenarios.
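As a rough illustration of the kind of appearance-based place matching that FAB-MAP builds on, the sketch below quantises local image descriptors against a trained codebook and flags visually similar frame pairs as loop-closure candidates. This is a simplified bag-of-visual-words stand-in, not OpenFABMAP's probabilistic model or API; the descriptor choice, codebook size and similarity threshold are all assumptions.

```python
# Illustrative bag-of-visual-words place matching, a simplified stand-in for the
# front end that FAB-MAP-style appearance-based loop closure builds on.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def orb_descriptors(image_paths, n_features=500):
    """Extract ORB descriptors from each image (one array per image)."""
    orb = cv2.ORB_create(nfeatures=n_features)
    per_image = []
    for path in image_paths:
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, desc = orb.detectAndCompute(img, None)
        per_image.append(desc if desc is not None else np.empty((0, 32), np.uint8))
    return per_image

def build_codebook(per_image, n_words=200):
    """Cluster all descriptors into a visual vocabulary ('codebook training')."""
    stacked = np.vstack(per_image).astype(np.float32)
    return KMeans(n_clusters=n_words, n_init=4, random_state=0).fit(stacked)

def bow_histogram(desc, codebook):
    """Quantise one image's descriptors into a normalised word histogram."""
    words = codebook.predict(desc.astype(np.float32))
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

def loop_closure_candidates(hists, threshold=0.8, min_gap=30):
    """Flag frame pairs whose appearance similarity suggests a revisited place."""
    H = np.array(hists)
    norms = np.linalg.norm(H, axis=1)
    sims = H @ H.T / (norms[:, None] * norms[None, :] + 1e-9)  # cosine similarity
    # min_gap skips trivially similar neighbouring frames
    return [(i, j) for i in range(len(H))
            for j in range(i + min_gap, len(H)) if sims[i, j] > threshold]
```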
Abstract:
It would be a rare thing to visit an early years setting or classroom in Australia that does not display examples of young children’s artworks. This practice gives schools a particular ‘look’, but is no guarantee of quality art education. The Australian National Review of Visual Arts Education (NRVE) (2009) has called for changes to visual art education in schools. The planned new National Curriculum includes the arts (music, dance, drama, media and visual arts) as one of the five learning areas. Research shows that it is the classroom teacher who makes the difference, and teacher education has a large part to play in reforms to art education. This paper provides an account of one foundation unit of study (Unit 1) for first-year university students enrolled in a 4-year Bachelor degree program who are preparing to teach in the early years (0–8 years). To prepare pre-service teachers to meet the needs of children in the 21st century, Unit 1 blends old and new ways of seeing art, child and pedagogy. Claims for the effectiveness of this model are supported with evidence-based research conducted over six years of iterations and ongoing development of Unit 1.
Abstract:
This research explores the function of entrepreneurship in nonprofit art museums. Traditionally, entrepreneurship literature features debates on customer orientation and innovation. This paper reviews a tension in entrepreneurship: the relationship between limited funding and the need to innovate in nonprofit art museums. The paper develops a construct to explain the structure of entrepreneurship in nonprofit art museums in Australia and New Zealand since 1975. From this discussion, the paper highlights the different strategies nonprofit art museum directors have used and the tensions they have faced. These dynamics are explored in ten large art museums, and the managerial implications are developed.
Abstract:
I put my abstract in for this conference back in March, based on some evaluation work I had been doing in 2010 with my colleague Professor Greg Hearn for the 3C Regional Writing NeoGeography Project. I had been swapping notes with a colleague from the Smithsonian’s Centre for Folklife and Cultural Heritage about their evaluation work, and, stuck inside during the rains of January, I decided to apply for a Qld Smithsonian fellowship based on the quandary of evaluation, particularly in public histories (oral histories) and digital storytelling. In July I was awarded the fellowship, so I have tweaked my presentation to talk about what we hope to do with this collaboration, and to promote the importance placed on evaluation in public arts programs in Qld and beyond.
Abstract:
At the core of our uniquely human cognitive abilities is the capacity to see things from different perspectives, or to place them in a new context. We propose that this was made possible by two cognitive transitions. First, the large brain of Homo erectus facilitated the onset of recursive recall: the ability to string thoughts together into a stream of potentially abstract or imaginative thought. This hypothesis is supported by a set of computational models in which an artificial society of agents evolved to generate more diverse and valuable cultural outputs under conditions of recursive recall. We propose that the capacity to see things in context arose much later, following the appearance of anatomically modern humans. This second transition was brought on by the onset of contextual focus: the capacity to shift between a minimally contextual analytic mode of thought and a highly contextual associative mode of thought, conducive to combining concepts in new ways and ‘breaking out of a rut’. When contextual focus is implemented in an art-generating computer program, the resulting artworks are seen as more creative and appealing. We summarize how both transitions can be modeled using a theory of concepts which highlights the manner in which different contexts can lead modern humans to attribute very different meanings to one concept.
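To make the recursive-recall idea concrete, here is a deliberately toy simulation (not the authors' model): agents either perturb a single remembered idea or, under recursive recall, chain remembered ideas together, and we compare the diversity of the resulting outputs. Population size, idea dimensionality, noise levels and the diversity measure are all arbitrary choices for illustration.

```python
# Toy agent society: does chaining remembered ideas ("recursive recall")
# produce more diverse outputs than simply perturbing a single memory?
import numpy as np

rng = np.random.default_rng(0)

def run_society(n_agents=50, n_steps=200, recursive_recall=False):
    # each agent starts with one random 8-dimensional "idea"
    memories = [[rng.normal(size=8)] for _ in range(n_agents)]
    for _ in range(n_steps):
        for mem in memories:
            if recursive_recall and len(mem) > 1:
                # chain two remembered ideas into a new, potentially novel one
                a, b = rng.choice(len(mem), size=2, replace=False)
                new_idea = 0.5 * (mem[a] + mem[b]) + rng.normal(scale=0.3, size=8)
            else:
                # simple recall: slightly perturb one remembered idea
                new_idea = mem[rng.integers(len(mem))] + rng.normal(scale=0.05, size=8)
            mem.append(new_idea)
    outputs = np.vstack([mem[-1] for mem in memories])
    return outputs.std(axis=0).mean()  # crude diversity measure across agents

print("diversity without recursive recall:", run_society(recursive_recall=False))
print("diversity with recursive recall:   ", run_society(recursive_recall=True))
```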
Abstract:
We have always felt that “something very special” was happening in the 48hr and other similar game jams. This “something” is more than the intensity and challenge of the experience, although this certainly has appeal for the participants. We had an intuition that these intense 48 hour game jams exposed something pertinent to the changing shape of the Australian games industry, where we see the demise of the late 20th century large studio (the “Night Elf” model) and the growth of the small independent model. There are a large number of wider economic and cultural factors around this evolution, but our interest is specifically in the change from “industry” to “creative industry” and the growth of games as a cultural medium and art practice. If we are correct in our intuition, then illuminating this something also has important ramifications for those courses which teach game and interaction design and development. Rather than undertake a formal ethnomethodological approach, we decided to track as many of the actors in the event as possible. We documented the experience (Keith Novak’s beautiful B&W photography), the individual and their technology (IOGraph mouse tracking), the teams as a group (time lapse photography) and movement throughout the whole space (Bluetooth phone tracking). The raw data collected has given us the opportunity to start a commentary on the “something special” happening in the 48hr.
Abstract:
The integration of unmanned aircraft into civil airspace is a complex issue. One key question is whether unmanned aircraft can operate just as safely as their manned counterparts. The absence of an on-board human pilot means that unmanned aircraft lack an inherent see-and-avoid capability. To date, regulators have mandated that an “equivalent level of safety” be demonstrated before UAVs are permitted to routinely operate in civil airspace. This chapter proposes techniques, methods, and hardware integrations that describe a “sense-and-avoid” system designed to address the lack of a see-and-avoid capability in UAVs.
Abstract:
We report and reflect upon the early stages of a research project that endeavours to establish a culture of critical design thinking in a tertiary game design course. We first discuss the current state of the Australian game industry and consider some perceived issues in game design courses and graduate outcomes. The second section presents our response to these issues: a project in progress which uses techniques originally exploited by Augusto Boal in his work, Theatre of the Oppressed. We appropriate Boal’s method to promote critical design thinking in a games design class. Finally, we reflect on the project and the ontology of design thinking from the perspective of Bruce Archer’s call to reframe design as a ‘third academic art’.
Abstract:
Modelling activities in crowded scenes is very challenging, as object tracking is not robust in complicated scenes and optical flow does not capture long-range motion. We propose a novel approach to analysing activities in crowded scenes using a “bag of particle trajectories”. Particle trajectories are extracted from foreground regions within short video clips using particle video, which estimates long-range motion, in contrast to optical flow, which only captures inter-frame motion. Our applications include temporal video segmentation and anomaly detection, and we perform our evaluation on several real-world datasets containing complicated scenes. We show that our approaches achieve state-of-the-art performance for both tasks.
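A minimal sketch of the bag-of-trajectories representation (an illustration under stated assumptions, not the paper's implementation): trajectory extraction via particle video is assumed to have already produced, for each clip, a list of (T x 2) position arrays; each trajectory is summarised by a small fixed-length motion descriptor, descriptors are clustered into a codebook, and a clip's anomaly score is the distance from its histogram to the nearest normal training clip. The descriptor, codebook size and scoring rule are illustrative choices.

```python
# Bag-of-trajectories sketch: cluster trajectory descriptors into a codebook,
# represent each clip as a word histogram, and score anomalies by histogram distance.
import numpy as np
from sklearn.cluster import KMeans

def trajectory_descriptor(traj):
    """Summarise one trajectory (T x 2 positions, T >= 2) by the mean and spread
    of its frame-to-frame displacements (fixed-length, 4-dimensional)."""
    disp = np.diff(traj, axis=0)
    return np.concatenate([disp.mean(axis=0), disp.std(axis=0)])

def clip_histogram(trajectories, codebook):
    """Normalised bag-of-trajectories histogram for one video clip."""
    descs = np.array([trajectory_descriptor(t) for t in trajectories])
    words = codebook.predict(descs)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

def fit_model(normal_clips, n_words=50):
    """Learn the codebook and the histograms of normal (training) clips."""
    all_descs = np.vstack([[trajectory_descriptor(t) for t in clip]
                           for clip in normal_clips])
    codebook = KMeans(n_clusters=n_words, n_init=4, random_state=0).fit(all_descs)
    normal_hists = np.array([clip_histogram(clip, codebook) for clip in normal_clips])
    return codebook, normal_hists

def anomaly_score(clip, codebook, normal_hists):
    """Distance from the clip's histogram to its nearest normal training histogram."""
    h = clip_histogram(clip, codebook)
    return np.min(np.linalg.norm(normal_hists - h, axis=1))
```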
Abstract:
The ability to detect unusual events in surveillance footage as they happen is a highly desirable feature for a surveillance system. However, this problem remains challenging in crowded scenes due to occlusions and the clustering of people. In this paper, we propose using the Distributed Behavior Model (DBM), which has been widely used in computer graphics, for video event detection. Our approach does not rely on object tracking and is robust to camera movements. We use sparse coding for classification, and test our approach on various datasets. Our proposed approach outperforms a state-of-the-art method which uses the social force model and Latent Dirichlet Allocation.
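The sparse-coding step can be illustrated as follows (a sketch under assumptions, not the paper's pipeline): learn a dictionary from features of normal footage, then flag test features that the dictionary reconstructs poorly. The DBM-derived features themselves are not computed here, and the dictionary size, sparsity level and threshold are placeholders.

```python
# Sparse-coding anomaly detection sketch: a dictionary learned on normal features
# should reconstruct normal events well and anomalous events poorly.
import numpy as np
from sklearn.decomposition import DictionaryLearning

def train_dictionary(normal_features, n_atoms=64, sparsity=5):
    """Learn a sparse dictionary from features extracted from normal clips."""
    dico = DictionaryLearning(
        n_components=n_atoms,
        transform_algorithm="omp",
        transform_n_nonzero_coefs=sparsity,
        random_state=0,
    )
    dico.fit(normal_features)
    return dico

def reconstruction_error(dico, features):
    """Per-sample error when re-synthesising features from their sparse codes."""
    codes = dico.transform(features)        # sparse coefficients per sample
    recon = codes @ dico.components_        # back-projection through the dictionary
    return np.linalg.norm(features - recon, axis=1)

def detect_anomalies(dico, features, threshold):
    """Events the 'normal' dictionary cannot represent well are flagged."""
    return reconstruction_error(dico, features) > threshold
```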
Abstract:
This paper explores the art and craft of teaching in higher education. It presents a model of the relationship between art and craft drawn from the author’s theoretical and empirical work, and provides examples from the higher education context to illustrate the model. It discusses the characteristics of teaching as art and craft and critiques the move towards standardisation and conformity in favour of originality, creativity and innovation. It suggests that to see teaching as art is more holistic, satisfying and transformative than to see it as craft. It argues for reclaiming the art of teaching and provides strategies for encouraging and supporting artistic teaching.
Abstract:
This paper investigates the effects of limited speech data in the context of speaker verification using a probabilistic linear discriminant analysis (PLDA) approach. Being able to reduce the length of the required speech data is important to the development of automatic speaker verification systems in real-world applications. When sufficient speech is available, previous research has shown that heavy-tailed PLDA (HTPLDA) modeling of speakers in the i-vector space provides state-of-the-art performance; however, the robustness of HTPLDA to limited speech resources in development, enrolment and verification is an important issue that has not yet been investigated. In this paper, we analyze speaker verification performance with regard to the duration of utterances used for speaker evaluation (enrolment and verification) and for score normalization and PLDA modeling during development. Two different approaches to total-variability representation are analyzed within the PLDA approach to show improved performance in short-utterance mismatched evaluation conditions and in conditions for which insufficient speech resources are available for adequate system development. The results presented within this paper, using the NIST 2008 Speaker Recognition Evaluation dataset, suggest that the HTPLDA system can continue to achieve better performance than Gaussian PLDA (GPLDA) as evaluation utterance lengths are decreased. We also highlight the importance of matching durations for score normalization and PLDA modeling to the expected evaluation conditions. Finally, we found that a pooled total-variability approach to PLDA modeling can achieve better performance than the traditional concatenated total-variability approach for short utterances in mismatched evaluation conditions and conditions for which insufficient speech resources are available for adequate system development.
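As a rough illustration of the scoring machinery involved (a simplified Gaussian two-covariance PLDA sketch, not the heavy-tailed variant the paper evaluates, and not its total-variability front end): between- and within-speaker covariances are estimated naively from labelled development i-vectors, and each trial is scored as a log-likelihood ratio between the same-speaker and different-speaker hypotheses. The data, dimensions and the assumption of full-rank covariance estimates are all placeholders.

```python
# Simplified Gaussian two-covariance PLDA scoring on length-normalised i-vectors.
# Illustrative only: assumes enough development speakers for full-rank covariances.
import numpy as np
from scipy.stats import multivariate_normal

def length_normalise(ivectors):
    """Project i-vectors onto the unit sphere (common PLDA preprocessing)."""
    return ivectors / np.linalg.norm(ivectors, axis=1, keepdims=True)

def estimate_plda(ivectors, speaker_ids):
    """Naive global mean, between-speaker (B) and within-speaker (W) covariances."""
    ids = np.asarray(speaker_ids)
    mu = ivectors.mean(axis=0)
    centred = ivectors - mu
    speaker_means = np.array([centred[ids == s].mean(axis=0) for s in np.unique(ids)])
    residuals = np.vstack([centred[ids == s] - centred[ids == s].mean(axis=0)
                           for s in np.unique(ids)])
    B = np.cov(speaker_means, rowvar=False)  # between-speaker variability
    W = np.cov(residuals, rowvar=False)      # within-speaker (session) variability
    return mu, B, W

def plda_llr(w_enrol, w_test, mu, B, W):
    """Log-likelihood ratio of 'same speaker' vs. 'different speakers' for one trial."""
    w1, w2 = w_enrol - mu, w_test - mu
    T = B + W                                 # total covariance of a single i-vector
    same = multivariate_normal.logpdf(
        np.concatenate([w1, w2]),
        mean=np.zeros(2 * len(w1)),
        cov=np.block([[T, B], [B, T]]),       # shared speaker factor couples the pair
    )
    diff = (multivariate_normal.logpdf(w1, mean=np.zeros(len(w1)), cov=T)
            + multivariate_normal.logpdf(w2, mean=np.zeros(len(w2)), cov=T))
    return same - diff
```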