679 results for Computer art
Abstract:
This paper deals with the development of ‘art clusters’ and their relocation in the city of Shanghai. It first looks at the revival of the city’s old inner-city industrial area (along the banks of the Suzhou River) through ‘organic’ or ‘alternative’ artist-led cultural production; second, it describes the impact on these activities of the industrial restructuring of the wider city, reliant on large-scale real estate development, business services and global finance; and finally, it outlines the relocation of these arts (and related) cultural industries to dispersed CBD locations as a result of those spatial, industrial and policy changes.
Abstract:
“Turtle Twilight” is a two-screen video installation. Paragraphs of text adapted from a travel blog type across the left-hand screen. A computer-generated image of a tropical sunset is slowly animated on the right-hand screen. The two screens are accompanied by an atmospheric stock music track. This work examines how we construct, represent and deploy ‘nature’ in our contemporary lives. It mixes cinematic codes with image, text and sound gleaned from online sources. By extending Nicolas Bourriaud’s understanding of ‘postproduction’ and the creative and critical strategies of ‘editing’, it questions the relationship between contemporary screen culture, nature, desire and contemplation.
Abstract:
Bystander is a multi-user, immersive, interactive environment intended for public display in a museum or art gallery. It is designed to make heritage collections available in novel and culturally responsible ways. We use its development as a case study to examine the role played in that process by a range of tools and techniques from participatory design traditions. We describe how different tools were used within the design process, specifically: the ways in which potential audience members were both included and represented; the prototypes that were constructed as a way of envisioning how the final work might be experienced; and how these tools have been brought together in ongoing design and evaluation. We close the paper with some reflections on the extension of participatory commitments into still-emerging areas of technology design that prioritise the design of spaces for human experience and reflective interaction.
Abstract:
‘Grounded Media’ is a form of art practice focused on the understanding that our ecological crisis is also a cultural crisis, perpetuated by our sense of separation from the material and immaterial ecologies upon which we depend. This misunderstanding of relationships manifests not only as environmental breakdown, but also in the hemorrhaging of our social fabric. ‘Grounded Media’ is consistent with an approach to media art making that I name ‘ecosophical’ and ‘praxis-led’ – one which seeks, through a range of strategies, to draw attention to the integrity, diversity and efficacy of the biophysical, social and electronic environments of which we are an integral part. It undertakes this through particular choices of location, interaction design, participative strategies and performative direction. This form of working emerged out of the production of two major projects, Grounded Light [8] and Shifting Intimacies [9], and is evident in a recent prototypical wearable art project called In_Step [6]. The following analysis and reflections will assist in promoting new, sustainable roles for media artists who are similarly interested in attuning their practices.
Abstract:
Internet and computer addiction has been a popular research area since the 1990s. Studies of Internet and computer addiction have usually been conducted in the US, so investigating computer and Internet addiction in other countries is an interesting area of research. This study investigates computer and Internet addiction among teenagers and Internet cafe visitors in Turkey. We administered a survey to 983 visitors in Internet cafes. The results show that Internet cafe visitors are usually teenagers, mostly middle- and high-school students, who typically occupy themselves with computer and Internet applications such as chat, e-mail, browsing and games. The teenagers come to the Internet cafe to spend time with friends and the computers. In addition, about 30% of cafe visitors admit to having an Internet addiction, and about 20% specifically mention problems that they are having with the Internet. The types of activities that teenagers perform in an Internet cafe, their reasons for being there, the degree of self-awareness about Internet addiction, and the lack of control over applications in the cafe are all rather alarming.
Abstract:
In keeping with the proliferation of free software development initiatives and the increased interest in the business process management domain, many open source workflow and business process management systems have appeared during the last few years and are now under active development. This upsurge gives rise to two important questions: what are the capabilities of these systems, and how do they compare to each other and to their closed source counterparts? In other words, what is the state of the art in the area? To gain insight into these questions, we have conducted an in-depth analysis of three of the major open source workflow management systems – jBPM, OpenWFE, and Enhydra Shark – the results of which are reported here. This analysis is based on the workflow patterns framework and continues the series of evaluations performed using the same framework on closed source systems, business process modelling languages, and web-service composition standards. The results from the evaluations of the three open source systems are compared with each other and also with the results from evaluations of three representative closed source systems: Staffware, WebSphere MQ, and Oracle BPEL PM. The overall conclusion is that the open source systems are targeted more toward developers than business analysts. They generally provide less support for the patterns than the closed source systems, particularly with respect to the resource perspective, i.e. the various ways in which work is distributed amongst business users and managed through to completion.
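To make the evaluation basis concrete: the workflow patterns framework catalogues recurring control-flow constructs. The sketch below is an illustrative rendering of two of the most basic patterns, Parallel Split and Synchronization, in plain Python; it is not tied to any of the evaluated systems, and the task functions are hypothetical.

    from concurrent.futures import ThreadPoolExecutor

    def check_stock(order):   # hypothetical task
        return f"stock ok for {order}"

    def check_credit(order):  # hypothetical task
        return f"credit ok for {order}"

    def process(order):
        # Parallel Split: fork the flow into concurrent branches.
        with ThreadPoolExecutor() as pool:
            branches = [pool.submit(task, order)
                        for task in (check_stock, check_credit)]
            # Synchronization: continue only when all branches complete.
            return [branch.result() for branch in branches]

    print(process("order-42"))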
Abstract:
Everything (2008) is a looped three-channel digital video (extracted from a 3D computer animation) that appropriates a range of media including photography, drawing, painting, and pre-shot video. The work departs from traditional time-based video, which is generally based on a recording of an external event. Instead, “Everything” constructs an event and space more as a painting or drawing might. The work combines constructed events (including space, combinations of objects, and the aesthetic relationship of forms) with pre-recorded video footage and pre-made paintings and drawings. The result is a montage of objects, images – both still and moving – and abstracted ‘painterly’ gestures. This technique creates a complex temporal displacement: 'past' refers to pre-recorded media such as painting and photography, and 'future' refers to a possible virtual space, not in the present, that these objects may occupy together. Through this simultaneity between the real and the virtual, the work comments on a disembodied sense of space and time, while also puncturing the virtual with a sense of materiality through the tactility of drawing and painting forms and processes. In so doing, the work challenges the perspectival Cartesian space synonymous with the virtual. In this work the disembodied wandering virtual eye is met with an uncanny combination of scenes, where scale and the relationships between objects are disrupted and changed. Everything is one of the first international examples of 3D animation technology being utilised in contemporary art. The work won the inaugural $75,000 Premier of Queensland National New Media Art Award and was subsequently acquired by the Queensland Art Gallery. The work has been exhibited and reviewed nationally and internationally.
Abstract:
Relics is a single-channel video derived from a 3D computer animation that combines a range of media including photography, drawing, painting, and pre-shot video. It is constructed around a series of pictorial stills which become interlinked by the more traditionally filmic processes of panning, zooming and crane shots. In keeping with these ideas, the work revolves around a series of static architectural forms within the strangely menacing enclosure of a geodesic dome. These clinical aspects of the work are complemented by a series of elements that evoke fluidity: fireworks, mirrored biomorphic forms and oscillating projections. The visual dimension of the work is complemented by a soundtrack of rainforest bird calls. Through its ambiguous combination of recorded and virtual imagery, Relics explores the indeterminate boundaries between real and virtual space. On the one hand, it represents actual events and spaces drawn from the artist’s studio and image archive; on the other, it represents the highly idealised spaces of drawing and 3D animation. In this work the disembodied wandering virtual eye is met with an uncanny combination of scenes, where scale and the relationships between objects are disrupted and changed. Through this simultaneity between the real and the virtual, the work conveys a disembodied sense of space and time that carries a powerful sense of affect. Relics was among the first international examples of 3D animation technology in contemporary art. It was originally exhibited in the artist’s solo show, ‘Places That Don’t Exist’ (2007, George Petelin Gallery, Gold Coast), and went on to be included in the group shows ‘d/Art 07/Screen: The Post Cinema Experience’ (2007, Chauvel Cinema, Sydney), ‘Experimenta Utopia Now: International Biennial of Media Art’ (2010, Arts Centre, Melbourne and national touring venues) and ‘Move on Asia’ (2009, Alternative Space Loop, Seoul and Para-site Art Space, Hong Kong), and was broadcast on Souvenirs from Earth (video art cable channel, Germany and France). The work was analysed in catalogue texts for ‘Places That Don’t Exist’ (2007), ‘d/Art 07’ (2007) and ‘Experimenta Utopia Now’ (2010) and on the ‘Souvenirs from Earth’ website.
Abstract:
Prevailing video adaptation solutions change the quality of the video uniformly throughout the whole frame in the bitrate adjustment process, while region-of-interest (ROI)-based solutions selectively retain quality in the areas of the frame to which viewers are more likely to pay attention. ROI-based coding can improve perceptual quality and viewer satisfaction while trading off some bandwidth. However, there has so far been no comprehensive study measuring the bitrate vs. perceptual quality trade-off. This paper proposes an ROI detection scheme for videos, characterised by low computational complexity and robustness, and measures the bitrate vs. quality trade-off for ROI-based encoding using a state-of-the-art H.264/AVC encoder, to justify the viability of this type of encoding method. The results from the subjective quality test reveal that ROI-based encoding achieves a significant perceptual quality improvement over encoding with uniform quality, at the cost of slightly more bits. Based on the bitrate measurements and subjective quality assessments, bitrate and perceptual quality estimation models for non-scalable ROI-based video coding (AVC) are developed, and these are found to be similar to the models for scalable video coding (SVC).
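As a concrete, simplified illustration of ROI-based H.264/AVC encoding (not the authors' scheme, which detects ROIs automatically), the sketch below encodes a clip twice with libx264: once uniformly, and once with quality biased toward a fixed, hypothetical ROI rectangle via FFmpeg's addroi filter. Comparing the two output bitrates reproduces the trade-off in miniature; filenames and coordinates are placeholders.

    import subprocess

    def encode(src, dst, roi=None, crf=28):
        """Encode with libx264; optionally bias quality toward an ROI."""
        cmd = ["ffmpeg", "-y", "-i", src]
        if roi is not None:
            x, y, w, h = roi
            # A negative qoffset steers the rate control toward higher
            # quality (more bits) inside the region.
            cmd += ["-vf", f"addroi=x={x}:y={y}:w={w}:h={h}:qoffset=-1/5"]
        cmd += ["-c:v", "libx264", "-crf", str(crf), dst]
        subprocess.run(cmd, check=True)

    encode("clip.mp4", "uniform.mp4")                       # uniform quality
    encode("clip.mp4", "roi.mp4", roi=(200, 80, 320, 240))  # ROI-biased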
Abstract:
Feature extraction and selection are critical processes in developing facial expression recognition (FER) systems. While many algorithms have been proposed for these processes, a direct comparison between texture features, geometric features and their fusion, as well as between multiple selection algorithms, has not been reported for spontaneous FER. This paper addresses this issue by proposing a unified framework for a comparative study of the widely used texture (LBP, Gabor and SIFT) and geometric (FAP) features, using the Adaboost, mRMR and SVM feature selection algorithms. Our experiments on the Feedtum and NVIE databases demonstrate the benefits of fusing geometric and texture features, where SIFT+FAP shows the best performance, while mRMR outperforms Adaboost and SVM. In terms of computational time, LBP and Gabor perform better than SIFT. The optimal combination of SIFT+FAP+mRMR also exhibits state-of-the-art performance.
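The fusion-plus-selection pipeline being compared can be sketched in a few lines. The toy example below is not the paper's framework: LBP histograms stand in for the texture channel, a random vector stands in for the geometric (FAP-like) channel, fusion is simple concatenation, mutual-information ranking is a crude stand-in for mRMR, and an SVM does the classification. All data are synthetic placeholders.

    import numpy as np
    from skimage.feature import local_binary_pattern
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    def lbp_histogram(face, P=8, R=1):
        """Uniform-LBP histogram of a grayscale face crop (texture feature)."""
        codes = local_binary_pattern(face, P, R, method="uniform")
        hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
        return hist

    def fuse(face, geometric_vec):
        """Feature-level fusion: concatenate texture and geometric vectors."""
        return np.concatenate([lbp_histogram(face), geometric_vec])

    rng = np.random.default_rng(0)
    faces = (rng.random((40, 64, 64)) * 255).astype(np.uint8)  # synthetic "faces"
    geo = rng.random((40, 6))                                  # FAP-like vectors
    y = rng.integers(0, 2, size=40)                            # two classes
    X = np.stack([fuse(f, g) for f, g in zip(faces, geo)])

    # Rank fused features by mutual information (stand-in for mRMR),
    # keep the top 8, and classify with an RBF-kernel SVM.
    clf = make_pipeline(SelectKBest(mutual_info_classif, k=8), SVC(kernel="rbf"))
    clf.fit(X, y)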
Abstract:
Across post-industrial societies worldwide, the creative industries are increasingly seen as a key economic driver. These industries - including fields as diverse as advertising, art, computer games, crafts, design, fashion, film, museums, music, performing arts, publishing, radio, theatre and TV - are built upon individual creativity and innovation and have the potential to create wealth and employment through the mechanism of intellectual property. Creative Industries: Critical Readings brings together the key writings - drawing on both journals and books - to present an authoritative and wide-ranging survey of this emerging field of study. The set is presented with an introduction and the writings are divided into four volumes, organized thematically: Volume 1: Concepts - focuses on the concept of creativity and the development of government and industry interest in creative industries; Volume 2: Economy - maps the role and function of creative industries in the economy at large; Volume 3: Organization - examines the ways in which creative institutions organize themselves; and Volume 4: Work - addresses issues of creative work, labour and careers. This major reference work will be invaluable to scholars in economics, cultural studies, sociology, media studies and organization studies.
Abstract:
The research field was curatorship of the Machinima genre - a film-making practice that uses real-time 3D computer graphics engines to create cinematic productions. The context was the presentation of gallery non-specific work for large-scale exhibition, as an investigation in thinking beyond traditional white cube strategies. Strongly influenced by Christiane Paul's seminal edited text 'New Media in the White Cube and Beyond: Curatorial Models for Digital Art', the project repositioned a genre traditionally focussed on delivery through small-screen, indoor, personal spaces to large exhibition hall spaces. Beyond the core questions of collecting, documenting, expanding and rethinking the place of Machinima within the history of contemporary digital arts, the curatorial premise asked how best to invert the relationship between the exhibition and the context of media production within the gaming domain, using novel presentational strategies that might best promote the 'take-home' impulse. The exhibition was used not as the ultimate destination for work but rather as a place to experience, sort and choose from a high volume of possible works for subsequent investigation by audiences within their own game-ready, domestic environments. In pursuit of this core aim, the exhibition intentionally promoted 'sensory overload'. The exhibition also included a gaming lab experience where audiences could begin to learn the DIY concepts of the medium, and be stimulated to revisit, consider and re-make their own relationship to this genre. The research was predominantly practice-led and collaborative (in close concert with the Machinima community), and ethnographic in that it sought to work with, understand and promote the medium in a contemporary art context. This benchmark exhibition, building on the 15-year history of the medium, was warmly received by the global Machinima community, as evidenced by the significant debate, feedback and general interest recorded. The exhibition has recently begun an ongoing Australian touring schedule. To date, the exhibition has received critical attention nationally and internationally in Das Superpaper, the Courier Mail, Machinimart, 4ZZZ-FM, the Sydney Morning Herald, Games and Business, Australian Gamer, Kotaku Australia, and the Age.
Abstract:
Tabernacle is an experimental game world-building project which explores the relationship between the map and the 3-dimensional visualisation enabled by high-end game engines. The project is named after the 6th-century tabernacle maps of Cosmas Indicopleustes in his Christian Topography. These maps articulate a cultural or metaphoric, rather than measured, view of the world, contravening Alpers' distinction that “maps are measurement, art is experience”. The project builds on previous research into the use of game engines and 3D navigable representation to enable cultural experience, particularly non-Western cultural experiences and ways of seeing. Like the earlier research, Tabernacle highlights the problematic disjuncture between the modern Cartesian map structures of the engine and the mapping traditions of non-Western cultures. Tabernacle represents a practice-based research provocation. The project exposes assumptions about the maps which underpin 3D game worlds, and the autocratic tendencies of world construction software. This research is of critical importance as game engines and simulation technologies become more popular in the recreation of culture and history. A key learning from the Tabernacle project was the way in which available game engines – technologies with roots in the Enlightenment – constrained the team’s ability to represent a very different culture with a different conceptualisation of space and maps. Understanding the cultural legacies of the software itself is critical as we are tempted by the opportunities for representing culture and history that these technologies seem to offer. The project was presented at Perth Digital Arts and Culture in 2007 and reiterated using a different game engine in 2009. Further reflections were discussed in a conference paper presented at OZCHI 2009 and a peer-reviewed journal article, and insights gained from the experience continue to inform the author’s research.
Abstract:
It is a big challenge to acquire correct user profiles for personalized text classification, since users may be uncertain when describing their interests. Traditional approaches to user profiling adopt machine learning (ML) to automatically discover classification knowledge from explicit user feedback describing personal interests. However, the accuracy of ML-based methods cannot be significantly improved in many cases, due to the term independence assumption and the uncertainties associated with these methods. This paper presents a novel relevance feedback approach for personalized text classification. It applies data mining to discover knowledge from relevant and non-relevant text, and constrains that specific knowledge with reasoning rules to eliminate conflicting information. We also developed a Dempster-Shafer (DS) approach as the means to utilise the specific knowledge to build high-quality data models for classification. Experimental results on Reuters Corpus Volume 1 and TREC topics show that the proposed technique achieves encouraging performance compared with state-of-the-art relevance feedback models.
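The Dempster-Shafer step can be illustrated with a toy example (not the authors' model): two mass functions over the frame {relevant, non-relevant}, say evidence derived from two discovered term patterns, are merged with Dempster's rule of combination, which multiplies masses conjunctively and renormalises away the conflicting mass.

    FRAME = frozenset({"rel", "nonrel"})

    def combine(m1, m2):
        """Dempster's rule: conjunctive pooling, renormalised for conflict."""
        out, conflict = {}, 0.0
        for a, x in m1.items():
            for b, y in m2.items():
                inter = a & b
                if inter:
                    out[inter] = out.get(inter, 0.0) + x * y
                else:
                    conflict += x * y          # mass on empty intersections
        return {s: v / (1.0 - conflict) for s, v in out.items()}

    # Hypothetical evidence from two discovered term patterns.
    m1 = {frozenset({"rel"}): 0.6, FRAME: 0.4}
    m2 = {frozenset({"rel"}): 0.5, frozenset({"nonrel"}): 0.2, FRAME: 0.3}
    print(combine(m1, m2))   # most of the combined mass lands on {'rel'}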
Abstract:
Cities accumulate and distribute vast sets of digital information. Many decision-making and planning processes in councils, local governments and organisations are based on both real-time and historical data. Until recently, only a small, carefully selected subset of this information was released to the public, usually for specific purposes (e.g. train timetables or the release of planning applications through websites, to name just a few). This situation is, however, changing rapidly. Regulatory frameworks such as freedom of information legislation in the US, the UK, the European Union and many other countries guarantee public access to data held by the state. One result of this legislation, and of changing attitudes towards open data, has been the widespread release of public information as part of recent Government 2.0 initiatives. This includes the creation of public data catalogues such as data.gov (US), data.gov.uk (UK) and data.gov.au (Australia) at the federal level, and datasf.org (San Francisco) and data.london.gov.uk (London) at the municipal level. The release of this data has opened up the possibility of a wide range of future applications and services which are now the subject of intensified research efforts. Previous research endeavours have explored the creation of specialised tools to aid decision-making by urban citizens, councils and other stakeholders (Calabrese, Kloeckl & Ratti, 2008; Paulos, Honicky & Hooker, 2009). While these initiatives represent an important step towards open data, they too often result in mere collections of data repositories. Proprietary database formats and the lack of an open application programming interface (API) limit the full potential achievable by allowing these data sets to be cross-queried. Our research, presented in this paper, looks beyond the pure release of data. It is concerned with three essential questions. First, how can data from different sources be integrated into a consistent framework and made accessible? Second, how can ordinary citizens be supported in easily composing data from different sources in order to address their specific problems? Third, what interfaces make it easy for citizens to interact with data in an urban environment, and how can that data be accessed and collected?
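As a minimal sketch of what cross-querying might look like (illustrative only, not the framework developed in the paper): several of the catalogues named above are built on CKAN, whose open 'action' API lets the same dataset search be issued against multiple portals and the results merged. The endpoints below are assumptions to verify against each portal's current documentation.

    import json
    import urllib.parse
    import urllib.request

    # Assumed CKAN action-API endpoints; confirm per portal before use.
    CATALOGUES = {
        "AU": "https://data.gov.au/api/3/action/package_search",
        "US": "https://catalog.data.gov/api/3/action/package_search",
    }

    def search(base_url, query, rows=5):
        """Run a CKAN package_search and return the matching dataset records."""
        url = f"{base_url}?{urllib.parse.urlencode({'q': query, 'rows': rows})}"
        with urllib.request.urlopen(url, timeout=10) as resp:
            return json.load(resp)["result"]["results"]

    # Issue the same query against each portal and merge the titles.
    for name, base in CATALOGUES.items():
        for dataset in search(base, "public transport"):
            print(name, dataset["title"])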