969 results for Computer art
Abstract:
Genomic sequences are fundamentally text documents, admitting various representations according to need and tokenization. Gene expression depends crucially on the binding of enzymes to the DNA sequence at small, poorly conserved binding sites, limiting the utility of standard pattern search. However, one may exploit the regular syntactic structure of the enzyme's component proteins and the corresponding binding sites, framing the problem as one of detecting grammatically correct genomic phrases. In this paper we propose new kernels based on weighted tree structures, traversing the paths within them to capture the features which underpin the task. Experimentally, we find that these kernels provide performance comparable with state-of-the-art approaches for this problem, while offering significant computational advantages over earlier methods. The methods proposed may be applied to a broad range of sequence or tree-structured data in molecular biology and other domains.
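The abstract does not give the kernel's exact form. As an illustrative sketch only (the tree encoding and the `decay` down-weighting are assumptions, not the authors' construction), a path-traversal kernel over labelled trees can be written as a weighted count of shared root-to-node label paths:

```python
from collections import Counter

def paths(tree, prefix=()):
    """Enumerate root-to-node label paths in a tree given as (label, [children])."""
    label, children = tree
    path = prefix + (label,)
    yield path
    for child in children:
        yield from paths(child, path)

def path_kernel(t1, t2, decay=0.5):
    """Weighted count of label paths shared by two trees; longer paths
    contribute less via the decay factor."""
    c1, c2 = Counter(paths(t1)), Counter(paths(t2))
    return sum(c1[p] * c2[p] * decay ** len(p) for p in c1.keys() & c2.keys())

# Toy parse trees over a nucleotide-like alphabet (hypothetical example data)
a = ("S", [("A", []), ("T", [("G", [])])])
b = ("S", [("A", []), ("T", [("C", [])])])
print(path_kernel(a, b))
```

Because shared paths are counted multiplicatively, a tree compared with itself always scores at least as high as against any other tree, which is the behaviour a similarity kernel needs.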
Abstract:
An Interview with John Rajchman, Department of Art History, Columbia University, on Architecture, Deleuze and Foucault at his apartment, Riverside Drive, New York City, February 10, 2003.
Abstract:
TEXTA runs as a type of book club that we have previously labelled as ‘bespoke’ (Ellison, Holliday and Van Luyn 2012). We visualise TEXTA as a meeting place between the community and the university, as a space for discussion and engagement with both visual art forms and written texts. In today’s presentation, we shall briefly establish the ‘bespoke’ book club. We then want to introduce the idea of TEXTA as an example of a book club that negotiates Edward Soja’s Thirdspace (1996) – a space that incorporates and extends concepts of First and Secondspace (or perceived and conceived spaces). In doing so, we showcase two recent sessions of TEXTA as case studies. We will then illustrate some ideas we have for expanding TEXTA beyond the boundaries of Brisbane city, and invite feedback on how to further extend the opportunities for community engagement that TEXTA can offer in regional areas.
Abstract:
This thesis developed a method for real-time and handheld 3D temperature mapping using a combination of off-the-shelf devices and efficient computer algorithms. It contributes a new sensing and data processing framework to the science of 3D thermography, unlocking its potential for application areas such as building energy auditing and industrial monitoring. New techniques for the precise calibration of multi-sensor configurations were developed, along with several algorithms that ensure both accurate and comprehensive surface temperature estimates can be made for rich 3D models as they are generated by a non-expert user.
Abstract:
Since Canada’s colonial beginnings, it has become increasingly riddled with classism, racism, sexism, and other damaging outcomes of structured social inequality. In 2006, however, many types of social injustice were turbo‐charged under the federal leadership of the Harper government. For example, a recent southern Ontario study shows that less than half of working people between the ages of 25 and 65 have full‐time jobs with benefits. The main objective of this paper is to critique the dominant Canadian political economic order and the pain and suffering it has caused for millions of people. Informed by left realism and other progressive ways of knowing, I also suggest some ways of turning the tide.
Abstract:
This paper presents a novel framework for the unsupervised alignment of an ensemble of temporal sequences. This approach draws inspiration from the axiom that an ensemble of temporal signals stemming from the same source/class should have lower rank when "aligned" rather than "misaligned". Our approach shares similarities with recent state-of-the-art methods for unsupervised image ensemble alignment (e.g. RASL), which break the problem into a set of image alignment problems (which have well-known solutions, i.e. the Lucas-Kanade algorithm). Similarly, we propose a strategy for decomposing the problem of temporal ensemble alignment into a similar set of independent sequence problems which we claim can be solved reliably through Dynamic Time Warping (DTW). We demonstrate the utility of our method using the Cohn-Kanade+ dataset, to align expression onset across multiple sequences, which allows us to automate the rapid discovery of event annotations.
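The per-pair subproblem the abstract delegates to DTW is the standard dynamic program over cumulative alignment cost. A minimal textbook sketch (this is generic DTW, not the paper's full ensemble method) looks like this:

```python
import numpy as np

def dtw(x, y):
    """Classic DTW: minimal cumulative cost of aligning two 1-D sequences,
    allowing match, insertion, and deletion steps."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A signal aligned with a time-shifted copy of itself warps to zero cost,
# which is the intuition behind aligning, e.g., expression onsets.
a = [0, 0, 1, 2, 1, 0]
b = [0, 1, 2, 1, 0, 0]
print(dtw(a, b))
```

The ensemble method described above would repeatedly solve such pairwise problems while driving the rank of the stacked, warped signals down.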
Abstract:
Proxy re-encryption (PRE) is a highly useful cryptographic primitive whereby Alice and Bob can endow a proxy with the capacity to change ciphertext recipients from Alice to Bob, without the proxy itself being able to decrypt, thereby providing delegation of decryption authority. Key-private PRE (KP-PRE) specifies an additional level of confidentiality, requiring pseudo-random proxy keys that leak no information on the identity of the delegators and delegatees. In this paper, we propose a CPA-secure KP-PRE scheme in the standard model (which we then transform into a CCA-secure scheme in the random oracle model). Both schemes enjoy highly desirable properties such as uni-directionality and multi-hop delegation. Unlike (the few) prior constructions of PRE and KP-PRE that typically rely on bilinear maps under ad hoc assumptions, security of our construction is based on the hardness of the standard Learning-With-Errors (LWE) problem, itself reducible from worst-case lattice hard problems that are conjectured immune to quantum cryptanalysis, or “post-quantum”. Of independent interest, we further examine the practical hardness of the LWE assumption, using Kannan’s exhaustive search algorithm coupled with pruning techniques. This leads to state-of-the-art parameters not only for our scheme, but also for a number of other primitives based on LWE published in the literature.
Abstract:
Term-based approaches can extract many features in text documents, but most include noise. Many popular text-mining strategies have been adapted to reduce noisy information from extracted features; however, these techniques suffer from the low-frequency problem. The key issue is how to discover relevance features in text documents to fulfil user information needs. To address this issue, we propose a new method to extract specific features from user relevance feedback. The proposed approach includes two stages. The first stage extracts topics (or patterns) from text documents to focus on interesting topics. In the second stage, topics are deployed to lower-level terms to address the low-frequency problem and find specific terms. The specific terms are determined based on their appearances in relevance feedback and their distribution in topics or high-level patterns. We test our proposed method with extensive experiments on the Reuters Corpus Volume 1 dataset and TREC topics. Results show that our proposed approach significantly outperforms the state-of-the-art models.
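As a loose illustration of the two-stage idea only (not the authors' actual algorithm), one might take frequently co-occurring term pairs in the relevance-feedback documents as "topics", then weight individual terms by their feedback frequency boosted by topic participation; the pair-based patterns and the `min_support` threshold here are assumptions for the sketch:

```python
from collections import Counter
from itertools import combinations

def frequent_patterns(docs, min_support=2):
    """Stage 1 (sketch): treat term pairs co-occurring in at least
    min_support relevant documents as discovered 'topics'/patterns."""
    counts = Counter()
    for doc in docs:
        counts.update(combinations(sorted(set(doc.split())), 2))
    return [set(p) for p, c in counts.items() if c >= min_support]

def specific_terms(docs, patterns):
    """Stage 2 (sketch): weight each term by its document frequency in the
    feedback set, boosted by the number of patterns it participates in."""
    df = Counter(t for doc in docs for t in set(doc.split()))
    return {t: df[t] * (1 + sum(t in p for p in patterns)) for t in df}

# Hypothetical relevance-feedback documents
feedback = ["data mining text", "text mining patterns", "weather report"]
topics = frequent_patterns(feedback)
weights = specific_terms(feedback, topics)
print(sorted(weights.items(), key=lambda kv: -kv[1])[:2])
```

Terms that both recur in the feedback and sit inside discovered patterns ("mining", "text") rise to the top, while incidental terms ("weather") stay low, which is the intended effect of deploying topics down to specific terms.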
Abstract:
Addressing possibilities for authentic combinations of diverse media within an installation setting, this research tested hybrid blends of the physical, digital and temporal to explore liminal space and image. The practice led research reflected on creation of artworks from three perspectives – material, immaterial and hybrid – and in doing so, developed a new methodological structure that extends conventional forms of triangulation. This study explored how physical and digital elements each sought hierarchical presence, yet simultaneously coexisted, thereby extending the visual and conceptual potential of the work. Outcomes demonstrated how utilising and recording transitional processes of hybrid imagery achieved a convergence of diverse, experiential forms. "Hybrid authority" – an authentic convergence of disparate elements – was articulated in the creation and public sharing of processual works and the creation of an innovative framework for hybrid art practice.
Abstract:
My practice-led research explores and maps workflows for generating experimental creative work involving inertia-based motion capture technology. Motion capture has often been used as a way to bridge animation and dance, resulting in abstracted visual outcomes. In early works this process was largely done through rotoscoping, reference footage and mechanical forms of motion capture. With the evolution of technology, optical and inertial forms of motion capture are now more accessible and able to accurately capture a larger range of complex movements. The creative work titled “Contours in Motion” was the first in a series of studies on captured motion data used to generate experimental visual forms that reverberate in space and time. The source or ‘seed’ comes from using an Xsens MVN inertial motion capture system to capture spontaneous dance movements, with the visual generation conducted through a customised dynamics simulation. The aim of the creative work was to diverge from the standard practice of using particle systems and/or a simple re-targeting of the motion data to drive a 3D character as a means to produce abstracted visual forms. To facilitate this divergence, a virtual dynamic object was tethered to a selection of data points from a captured performance. The properties of the dynamic object were then adjusted to balance the influence of the human movement data with the influence of computer-based randomisation. The resulting outcome was a visual form that surpassed simple data visualisation, projecting the intent of the performer’s movements into the visual shape itself. The reported outcomes from this investigation have contributed to a larger study on the use of motion capture in the generative arts, furthering the understanding of, and generating theories on, practice.
Abstract:
Review(s) of: An opening: Twelve love stories about art, by Stephanie Radok 2012, Wakefield Press, Adelaide, xiv, 168 p., ISBN 9781743050415 (pbk).
Abstract:
A long query provides more useful hints for searching relevant documents, but it is likely to introduce noise which affects retrieval performance. In order to mitigate this adverse effect, it is important to reduce noisy terms and to introduce and boost additional relevant terms. This paper presents a comprehensive framework, called Aspect Hidden Markov Model (AHMM), which integrates query reduction and expansion, for retrieval with long queries. It optimizes the probability distribution of query terms by utilizing intra-query term dependencies as well as the relationships between query terms and words observed in relevance feedback documents. Empirical evaluation on three large-scale TREC collections demonstrates that our approach, which is automatic, achieves salient improvements over various strong baselines, and also reaches a comparable performance to a state-of-the-art method based on the user’s interactive query term reduction and expansion.
Abstract:
Because of their limited number of senior positions and fewer alternative career paths, small businesses have a more difficult time attracting and retaining skilled information systems (IS) staff and are thus dependent upon external expertise. Small businesses are particularly dependent on outside expertise when first computerizing. Because small businesses suffer from severe financial constraints, it is often difficult to justify the cost of custom software. Hence, for many small businesses, engaging a consultant to help with identifying suitable packaged software and related hardware is their first critical step toward computerization. This study explores the importance of proactive client involvement when engaging a consultant to assist with computer system selection in small businesses. Client involvement throughout consultant engagement is found to be integral to project success and frequently lacking due to misconceptions of small businesses regarding their role. Small businesses often overestimate the impact of consultant and vendor support in achieving successful computer system selection and implementation. For consultant engagement to be successful, the process must be viewed as being directed toward the achievement of specific organizational results where the client accepts responsibility for direction of the process.
Abstract:
From the earliest human creative expressions there has been a relationship between art, technology and science. In Western history this relationship is often seen as drawing from the advances in both art and science that occurred during the Renaissance, and as captured in the polymath figure of da Vinci. The 20th century development of computer technology, and the more recent emergence of creative practice-led research as a recognised methodology, has led to a renewed appreciation of the relationship between art, science and technology. This chapter focuses on transdisciplinary practices that bring together arts, science and technology in imaginative ways, showing how such combinations have led to changes in both practice and forms of creative expression for artists and their partners across disciplines. The aim of this chapter is to sketch an outline of the types of transdisciplinary creative research projects that currently signify best practice in the field, which is done in reference to key literature and exemplars drawn from the Australian context.