969 results for Computer art
Abstract:
Everything (2008) is a looped three-channel digital video (extracted from a 3D computer animation) that appropriates a range of media including photography, drawing, painting, and pre-shot video. The work departs from traditional time-based video, which is generally based on a recording of an external event. Instead, “Everything” constructs an event and space more like a painting or drawing might. The work combines constructed events (including space, combinations of objects, and aesthetic relationships of forms) with pre-recorded video footage and pre-made paintings and drawings. The result is a montage of objects, images – both still and moving – and abstracted ‘painterly’ gestures. This technique creates a complex temporal displacement: 'past' refers to pre-recorded media such as painting and photography, and 'future' refers to a possible virtual space, not in the present, that these objects may occupy together. Through this simultaneity between the real and the virtual, the work comments on a disembodied sense of space and time, while also puncturing the virtual with a sense of materiality through the tactility of drawing and painting forms and processes. In so doing, the work challenges the perspectival Cartesian space synonymous with the virtual. In this work the disembodied wandering virtual eye is met with an uncanny combination of scenes, where scale and the relationships between objects are disrupted and changed. “Everything” is one of the first international examples of 3D animation technology being utilised in contemporary art. The work won the inaugural $75,000 Premier of Queensland National New Media Art Award and was subsequently acquired by the Queensland Art Gallery. The work has been exhibited and reviewed nationally and internationally.
Abstract:
Relics is a single-channel video derived from a 3D computer animation that combines a range of media including photography, drawing, painting, and pre-shot video. It is constructed around a series of pictorial stills which become interlinked by the more traditionally filmic processes of panning, zooming and crane shots. In keeping with these ideas, the work revolves around a series of static architectural forms within the strangely menacing enclosure of a geodesic dome. These clinical aspects of the work are complemented by a series of elements that evoke fluidity: fireworks, mirrored biomorphic forms and oscillating projections. The visual dimension of the work is complemented by a soundtrack of rainforest bird calls. Through its ambiguous combination of recorded and virtual imagery, Relics explores the indeterminate boundaries between real and virtual space. On the one hand, it represents actual events and spaces drawn from the artist’s studio and image archive; on the other, it represents the highly idealised spaces of drawing and 3D animation. In this work the disembodied wandering virtual eye is met with an uncanny combination of scenes, where scale and the relationships between objects are disrupted and changed. Through this simultaneity between the real and the virtual, the work conveys a disembodied sense of space and time that carries a powerful sense of affect. Relics was among the first international examples of 3D animation technology in contemporary art. It was originally exhibited in the artist’s solo show, ‘Places That Don’t Exist’ (2007, George Petelin Gallery, Gold Coast) and went on to be included in the group shows ‘d/Art 07/Screen: The Post Cinema Experience’ (2007, Chauvel Cinema, Sydney), ‘Experimenta Utopia Now: International Biennial of Media Art’ (2010, Arts Centre, Melbourne and national touring venues) and ‘Move on Asia’ (2009, Alternative Space Loop, Seoul and Para-site Art Space, Hong Kong), and was broadcast on Souvenirs from Earth (Video Art Cable Channel, Germany and France). The work was analysed in catalogue texts for ‘Places That Don’t Exist’ (2007), ‘d/Art 07’ (2007) and ‘Experimenta Utopia Now’ (2010), and on the ‘Souvenirs from Earth’ website.
Abstract:
Prevailing video adaptation solutions change the quality of the video uniformly throughout the whole frame during bitrate adjustment, while region-of-interest (ROI)-based solutions selectively retain quality in the areas of the frame to which viewers are more likely to pay attention. ROI-based coding can improve perceptual quality and viewer satisfaction at the cost of some bandwidth. However, there has so far been no comprehensive study measuring this bitrate vs. perceptual quality trade-off. This paper proposes an ROI detection scheme for videos, characterized by low computational complexity and robustness, and measures the bitrate vs. quality trade-off for ROI-based encoding using a state-of-the-art H.264/AVC encoder, to establish the viability of this type of encoding method. The results of the subjective quality test reveal that ROI-based encoding achieves a significant perceptual quality improvement over encoding with uniform quality at the cost of slightly more bits. Based on the bitrate measurements and subjective quality assessments, bitrate and perceptual quality estimation models are developed for non-scalable ROI-based video coding (AVC), and these are found to be similar to the corresponding models for scalable video coding (SVC).
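The core mechanism of such a scheme is easiest to see as a per-macroblock quantisation parameter (QP) map. The sketch below is a minimal illustration of that idea, not the paper's actual detector or parameters: the base QP and the two offsets are assumed values, and only the 0–51 clamp reflects the real H.264 QP range. Real encoders such as x264 expose broadly similar per-region controls.

```python
# Minimal sketch (not the paper's scheme): lower the QP inside the ROI
# so it is coded at higher quality, and raise it slightly elsewhere to
# recover bits. Offsets and base QP are illustrative assumptions.
import numpy as np

def roi_qp_map(roi_mask: np.ndarray, base_qp: int = 30,
               roi_offset: int = -6, bg_offset: int = 2) -> np.ndarray:
    """Return a per-macroblock QP map from a boolean ROI mask."""
    qp = np.full(roi_mask.shape, base_qp + bg_offset, dtype=int)
    qp[roi_mask] = base_qp + roi_offset
    return np.clip(qp, 0, 51)  # H.264 QP range is 0..51

# Example: a 4x6 frame of macroblocks with a centred 2x2 ROI
mask = np.zeros((4, 6), dtype=bool)
mask[1:3, 2:4] = True
print(roi_qp_map(mask))
```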
Abstract:
Feature extraction and selection are critical processes in developing facial expression recognition (FER) systems. While many algorithms have been proposed for these processes, a direct comparison between texture features, geometric features and their fusion, as well as between multiple selection algorithms, has not previously been reported for spontaneous FER. This paper addresses this issue by proposing a unified framework for a comparative study of the widely used texture (LBP, Gabor and SIFT) and geometric (FAP) features, using the Adaboost, mRMR and SVM feature selection algorithms. Our experiments on the Feedtum and NVIE databases demonstrate the benefits of fusing geometric and texture features, with SIFT+FAP showing the best performance, while mRMR outperforms Adaboost and SVM. In terms of computational time, LBP and Gabor perform better than SIFT. The optimal combination, SIFT+FAP+mRMR, also exhibits state-of-the-art performance.
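As a rough illustration of the fusion-then-selection pipeline the abstract describes, the sketch below concatenates hypothetical texture and geometric feature vectors and ranks the fused features before classification. scikit-learn has no built-in mRMR, so mutual_info_classif stands in here as a relevance-only simplification of it; the data, dimensions and feature counts are all made up.

```python
# Sketch of early fusion of texture + geometric features, followed by a
# mutual-information ranking (a simplified stand-in for mRMR) and an SVM.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.svm import SVC

rng = np.random.default_rng(0)
texture = rng.normal(size=(200, 59))    # e.g. an LBP histogram per sample
geometry = rng.normal(size=(200, 17))   # e.g. FAP-style geometric features
labels = rng.integers(0, 6, size=200)   # 6 basic expression classes

fused = np.hstack([texture, geometry])  # simple early fusion

# Rank fused features by mutual information with the class label
mi = mutual_info_classif(fused, labels, random_state=0)
top_k = np.argsort(mi)[::-1][:30]       # keep the 30 most informative

clf = SVC(kernel="linear").fit(fused[:, top_k], labels)
print("train accuracy:", clf.score(fused[:, top_k], labels))
```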
Abstract:
Across post-industrial societies worldwide, the creative industries are increasingly seen as a key economic driver. These industries – including fields as diverse as advertising, art, computer games, crafts, design, fashion, film, museums, music, performing arts, publishing, radio, theatre and TV – are built upon individual creativity and innovation and have the potential to create wealth and employment through the mechanism of intellectual property. Creative Industries: Critical Readings brings together the key writings – drawing on both journals and books – to present an authoritative and wide-ranging survey of this emerging field of study. The set opens with an introduction, and the writings are divided into four volumes, organized thematically: Volume 1: Concepts focuses on the concept of creativity and the development of government and industry interest in creative industries; Volume 2: Economy maps the role and function of creative industries in the economy at large; Volume 3: Organization examines the ways in which creative institutions organize themselves; and Volume 4: Work addresses issues of creative work, labour and careers. This major reference work will be invaluable to scholars in economics, cultural studies, sociology, media studies and organization studies.
Abstract:
The research field was curatorship of the Machinima genre – a film-making practice that uses real-time 3D computer graphics engines to create cinematic productions. The context was the presentation of gallery-non-specific work for large-scale exhibition, as an investigation into thinking beyond traditional white-cube strategies. Strongly influenced by Christiane Paul's seminal edited text, 'New Media in the White Cube and Beyond: Curatorial Models for Digital Art', the project repositioned a genre traditionally focussed on delivery through small-screen, indoor, personal spaces into large exhibition hall spaces. Beyond the core questions of collecting, documenting, expanding and rethinking the place of Machinima within the history of contemporary digital arts, the curatorial premise asked how best to invert the relationship between the exhibition and the context of media production within the gaming domain, using novel presentational strategies that might best promote the 'take-home' impulse. The exhibition was used not as the ultimate destination for the work but rather as a place to experience, sort and choose from a high volume of possible works for subsequent investigation by audiences within their own game-ready, domestic environments. In pursuit of this core aim, the exhibition intentionally promoted 'sensory overload'. The exhibition also included a gaming-lab experience where audiences could begin to learn the DIY concepts of the medium, and be stimulated to revisit, consider and re-make their own relationship to this genre. The research was predominantly practice-led and collaborative (in close concert with the Machinima community), and ethnographic in that it sought to work with, understand and promote the medium in a contemporary art context. This benchmark exhibition, building on the 15-year history of the medium, was warmly received by the global Machinima community, as evidenced by the significant debate, feedback and general interest recorded. The exhibition has recently begun an ongoing Australian touring schedule. To date, it has received critical attention nationally and internationally in Das Superpaper, the Courier Mail, Machinimart, 4ZZZ-FM, the Sydney Morning Herald, Games and Business, Australian Gamer, Kotaku Australia, and the Age.
Abstract:
Tabernacle is an experimental game world-building project which explores the relationship between the map and the three-dimensional visualisation enabled by high-end game engines. The project is named after the 6th-century tabernacle maps of Cosmas Indicopleustes in his Christian Topography. These maps articulate a cultural or metaphoric, rather than measured, view of the world, contravening Alpers' distinction that “maps are measurement, art is experience”. The project builds on previous research into the use of game engines and 3D navigable representation to enable cultural experience, particularly non-Western cultural experiences and ways of seeing. Like the earlier research, Tabernacle highlights the problematic disjuncture between the modern Cartesian map structures of the engine and the mapping traditions of non-Western cultures. Tabernacle represents a practice-based research provocation. The project exposes assumptions about the maps which underpin 3D game worlds, and the autocratic tendencies of world-construction software. This research is of critical importance as game engines and simulation technologies become more popular in the recreation of culture and history. A key learning from the Tabernacle project was the way in which the available game engines – technologies with roots in the Enlightenment – constrained the team's ability to represent a very different culture with a different conceptualisation of space and maps. Understanding the cultural legacies of the software itself is critical as we are tempted by the opportunities for the representation of culture and history that such technologies seem to offer. The project was presented at Perth Digital Arts and Culture in 2007 and reiterated using a different game engine in 2009. Further reflections were discussed in a conference paper presented at OZCHI 2009 and in a peer-reviewed journal article, and insights gained from the experience continue to inform the author's research.
Abstract:
It is a big challenge to acquire correct user profiles for personalized text classification, since users may be unsure when describing their interests. Traditional approaches to user profiling adopt machine learning (ML) to automatically discover classification knowledge from explicit user feedback describing personal interests. However, the accuracy of ML-based methods cannot be significantly improved in many cases, due to the term-independence assumption and the uncertainties associated with such feedback. This paper presents a novel relevance feedback approach for personalized text classification. It applies data mining to discover knowledge from relevant and non-relevant text, and constrains that specific knowledge with reasoning rules to eliminate conflicting information. We also developed a Dempster-Shafer (DS) approach as the means to utilise the specific knowledge to build high-quality data models for classification. Experimental results on Reuters Corpus Volume 1 and TREC topics show that the proposed technique achieves encouraging performance compared with state-of-the-art relevance feedback models.
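The DS machinery at the heart of such an approach is Dempster's rule of combination. The sketch below is a generic implementation of that rule over a two-hypothesis frame ({'relevant', 'non-relevant'}); the frame and the mass values are illustrative assumptions, not the paper's actual model.

```python
# Dempster's rule of combination. Mass functions are dicts mapping
# frozenset hypotheses to belief mass; uncommitted mass sits on the
# whole frame. The example evidence values below are made up.
from itertools import product

def combine(m1: dict, m2: dict) -> dict:
    """Combine two mass functions with Dempster's rule."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y  # mass assigned to contradictory evidence
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {h: m / (1.0 - conflict) for h, m in combined.items()}

R, N = frozenset({"relevant"}), frozenset({"non-relevant"})
either = R | N  # the whole frame of discernment

# Two pieces of evidence about a document, e.g. from two term patterns
m1 = {R: 0.6, N: 0.1, either: 0.3}
m2 = {R: 0.5, N: 0.2, either: 0.3}
print(combine(m1, m2))
```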
Abstract:
Cities accumulate and distribute vast sets of digital information. Many decision-making and planning processes in councils, local governments and organisations are based on both real-time and historical data. Until recently, only a small, carefully selected subset of this information was released to the public, usually for specific purposes (e.g. train timetables or the release of planning applications through websites, to name just a few). This situation is, however, changing rapidly. Regulatory frameworks, such as Freedom of Information legislation in the US, the UK, the European Union and many other countries, guarantee public access to data held by the state. One result of this legislation, and of changing attitudes towards open data, has been the widespread release of public information as part of recent Government 2.0 initiatives. This includes the creation of public data catalogues such as data.gov (US), data.gov.uk (UK) and data.gov.au (Australia) at federal government levels, and datasf.org (San Francisco) and data.london.gov.uk (London) at municipal levels. The release of this data has opened up the possibility of a wide range of future applications and services which are now the subject of intensified research efforts. Previous research endeavours have explored the creation of specialised tools to aid decision-making by urban citizens, councils and other stakeholders (Calabrese, Kloeckl & Ratti, 2008; Paulos, Honicky & Hooker, 2009). While these initiatives represent an important step towards open data, they too often result in mere collections of data repositories. Proprietary database formats and the lack of an open application programming interface (API) limit the full potential achievable by allowing these data sets to be cross-queried. Our research, presented in this paper, looks beyond the pure release of data. It is concerned with three essential questions: First, how can data from different sources be integrated into a consistent framework and made accessible? Second, how can ordinary citizens be supported in easily composing data from different sources in order to address their specific problems? Third, what interfaces make it easy for citizens to interact with, access and collect data in an urban environment?
Abstract:
Existing secure software development principles tend to focus on coding vulnerabilities, such as buffer or integer overflows, that apply to individual program statements, or on issues associated with the run-time environment, such as component isolation. Here we instead consider software security from the perspective of potential information flow through a program's object-oriented module structure. In particular, we define a set of quantifiable "security metrics" which allow programmers to quickly and easily assess the overall security of a given source code program or object-oriented design. Although measuring quality attributes of object-oriented programs for properties such as maintainability and performance has been well covered in the literature, metrics which measure the quality of information security have received little attention. Moreover, existing security-relevant metrics assess a system either at a very high level, i.e., the whole system, or at a fine level of granularity, i.e., with respect to individual statements. These approaches make it hard and expensive to recognise a secure system from an early stage of development. Instead, our security metrics are based on well-established compositional properties of object-oriented programs (i.e., data encapsulation, cohesion, coupling, composition, extensibility, inheritance and design size), combined with data flow analysis principles that trace potential information flow between high- and low-security system variables. We first define a set of metrics to assess the security quality of a given object-oriented system based on its design artifacts, allowing defects to be detected at an early stage of development. We then extend these metrics to produce a second set applicable to object-oriented program source code. The resulting metrics make it easy to compare the relative security of functionally equivalent system designs or source code programs so that, for instance, the security of two different revisions of the same system can be compared directly. This capability is further used to study the impact of specific refactoring rules on system security more generally, at both the design and code levels. By measuring the relative security of various programs refactored using different rules, we thus provide guidelines for the safe application of refactoring steps to security-critical programs. Finally, to make it easy and efficient to measure a system design or program's security, we have also developed a stand-alone software tool which automatically analyses and measures the security of UML designs and Java program code. The tool's capabilities are demonstrated by applying it to a number of security-critical system designs and Java programs. Notably, the validity of the metrics is demonstrated empirically through measurements that confirm our expectation that program security typically improves as bugs are fixed, but worsens as new functionality is added.
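To give a feel for what a design-level metric of this kind can look like, here is an illustrative toy metric, not the thesis's actual definitions: the fraction of security-critical attributes in a class design that are exposed through non-private access, where lower values indicate better encapsulation of classified data. All names and values are hypothetical.

```python
# Toy design-level security metric: ratio of classified (high-security)
# attributes that are not declared private. Illustrative only.
from dataclasses import dataclass

@dataclass
class Attribute:
    name: str
    classified: bool   # holds high-security data
    private: bool      # declared private in the design

def classified_exposure(attrs: list[Attribute]) -> float:
    """Return the exposed-classified-attribute ratio for one class design."""
    classified = [a for a in attrs if a.classified]
    if not classified:
        return 0.0
    exposed = [a for a in classified if not a.private]
    return len(exposed) / len(classified)

design = [
    Attribute("password_hash", classified=True, private=True),
    Attribute("session_token", classified=True, private=False),
    Attribute("display_name", classified=False, private=False),
]
print(classified_exposure(design))  # 0.5: one of two classified attrs exposed
```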
Abstract:
At St Thomas' Hospital, we have developed a computer program on a Titan graphics supercomputer to plan the stereotactic implantation of iodine-125 seeds for the palliative treatment of recurrent malignant gliomas. Use of the Gill-Thomas-Cosman relocatable frame allows planning and surgery to be carried out at different hospitals on different days. Stereotactic computed tomography (CT) and positron emission tomography (PET) scans are performed and the images transferred to the planning computer. The head, tumour and frame fiducials are outlined on the relevant images, and a three-dimensional model is generated. Structures which could interfere with the surgery or radiotherapy, such as major vessels or shunt tubing, can also be outlined and included in the display. Catheter target and entry points are set using a three-dimensional cursor controlled by a set of dials attached to the computer. The program calculates and displays the radiation dose distribution within the target volume for various catheter and seed arrangements. The CT co-ordinates of the fiducial rods are used to convert catheter co-ordinates from CT space to frame space, and to calculate the catheter insertion angles and depths. The surgically implanted catheters are afterloaded the next day and the seeds left in place for between 4 and 6 days, giving a nominal dose of 50 Gy to the edge of the target volume. To date, 25 patients have been treated.
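The CT-space to frame-space conversion mentioned above is, in general form, a rigid-body registration over matched fiducial points. The sketch below uses the standard least-squares (Kabsch/SVD) solution for such problems; the fiducial coordinates and catheter point are made-up values, and the program described may well use a different formulation.

```python
# Estimate the rigid transform frame = R @ ct + t from matched fiducial
# points using the Kabsch/SVD method. All coordinates are illustrative.
import numpy as np

def rigid_transform(ct_pts: np.ndarray, frame_pts: np.ndarray):
    """Return rotation R and translation t mapping CT space to frame space."""
    ct_c, fr_c = ct_pts.mean(axis=0), frame_pts.mean(axis=0)
    H = (ct_pts - ct_c).T @ (frame_pts - fr_c)    # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                            # proper rotation only
    t = fr_c - R @ ct_c
    return R, t

# Fiducial positions in CT space (made-up values)
ct = np.array([[10.0, 0, 0], [0, 10, 0], [0, 0, 10], [5, 5, 0]])
# Same fiducials measured in frame space (here: rotated 7 deg + shifted)
theta = np.radians(7)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
frame = ct @ Rz.T + np.array([2.0, -1.0, 3.0])

R, t = rigid_transform(ct, frame)
catheter_ct = np.array([4.0, 2.0, 6.0])
print("catheter in frame space:", R @ catheter_ct + t)
```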
Abstract:
This paper describes a new system, dubbed Continuous Appearance-based Trajectory Simultaneous Localisation and Mapping (CAT-SLAM), which augments sequential appearance-based place recognition with local metric pose filtering to improve the frequency and reliability of appearance-based loop closure. As in other approaches to appearance-based mapping, loop closure is performed without calculating global feature geometry or performing 3D map construction. Loop-closure filtering uses a probabilistic distribution of possible loop closures along the robot's previous trajectory, represented as a linked list of previously visited locations connected by odometric information. Sequential appearance-based place recognition and local metric pose filtering are evaluated simultaneously using a Rao–Blackwellised particle filter, which weights particles based on appearance matching over sequential frames and on the similarity of robot motion along the trajectory. The particle filter explicitly models both the likelihood of revisiting previous locations and that of exploring new locations. A modified resampling scheme counters particle deprivation and allows loop-closure updates to be performed in constant time for a given environment. We compare the performance of CAT-SLAM with FAB-MAP (a state-of-the-art appearance-only SLAM algorithm) on multiple real-world datasets, demonstrating an increase in the number of correct loop closures detected by CAT-SLAM.
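The generic particle-filter machinery underlying this approach, weighting particles by a likelihood and resampling to counter particle deprivation, can be sketched as follows. This is textbook low-variance (systematic) resampling, not CAT-SLAM's modified constant-time scheme, and the likelihood values are made up.

```python
# Systematic (low-variance) resampling for a particle filter: one random
# draw stratified into n slots, so high-weight particles are duplicated
# and low-weight particles tend to be dropped.
import numpy as np

def systematic_resample(weights: np.ndarray, rng=None) -> np.ndarray:
    """Return indices of resampled particles."""
    if rng is None:
        rng = np.random.default_rng()
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n   # one draw, n strata
    return np.searchsorted(np.cumsum(weights), positions)

# Toy example: 8 particles weighted by (made-up) appearance likelihoods
likelihoods = np.array([0.9, 0.1, 0.05, 0.6, 0.02, 0.3, 0.8, 0.15])
weights = likelihoods / likelihoods.sum()
print("survivors:", systematic_resample(weights))
```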
Abstract:
In this video, a thumping house-music track is accompanied by lines of rotating text which resemble computer screen-savers. The text is sourced from websites offering tips for dating and seducing potential lovers. The work engages with the language of online forums: it reworks text from online advice forums and mixes it with the visual codes of computer graphics. By extending some of Nicolas Bourriaud’s ideas around ‘postproduction’ and the creative and critical strategies of ‘editing’, it offers new speculative perspectives on the relationship between screen realities, desire and romance.
Abstract:
When does 1960s art begin and end? Certainly, aside from a few affinities, the decade’s artistic output does not exactly correspond to its popular conception as the ‘Swinging Sixties’. While it was rare that psychedelic art was truly challenging, the decade saw a number of perceptions change regarding the aims, boundaries and possibilities of experiencing art. Thus, this era has come to represent a watershed or crisis in modernist art. While in the Australian context many of these nascent trends were properly realised in the 1970s – with the full force and impact of post-object art – other challenges were first articulated in the 1950s. So, like any other demarcation of a decade, its limits and boundaries are porous.
Abstract:
This chapter’s interest in fiction’s relationship to truth, lies, and secrecy is not so much a matter of how closely fiction resembles or mirrors the world (its mimetic quality), or what we can learn from fiction (its epistemological value). Rather, the concern is both literary and philosophical: a literary concern that takes into account how texts that thematise secrecy work to withhold and to disclose their secrets as part of the process of narrating and sequencing; and a philosophical concern that considers how survival is contingent on secrets and other forms of concealment such as lies, deception, and half-truths. The texts selected for examination are: Secrets (2002), Skim (2008), and Persepolis: The Story of a Childhood (2003). These texts draw attention to the ways in which the lies and secrets of the female protagonists are part of the intricate mechanism of survival, and demonstrate the ways in which fiction relies upon concealment and revelation as forms of truth-telling.