Abstract:
Computer vision is much more than a technique to sense and recover environmental information from a UAV. It should play a main role in a UAV's functionality because of the large amount of information that can be extracted, its possible uses and applications, and its natural connection to human-driven tasks, given that vision is our main interface to understanding the world. Our current research focus lies in the development of techniques that allow UAVs to maneuver in spaces using visual information as their main input source. This task involves the creation of techniques that allow a UAV to maneuver towards features of interest whenever a GPS signal is not reliable or sufficient, e.g. when signal dropouts occur (as usually happens in urban areas, when flying through terrestrial urban canyons, or when operating on remote planetary bodies), or when tracking or inspecting visual targets (including moving ones) without knowing their exact UTM coordinates. This paper also investigates visual servoing control techniques that use the velocity and position of suitable image features to compute the references for flight control. This paper aims to give a global view of the main aspects of the research field of computer vision for UAVs, clustered into four main active research lines: visual servoing and control, stereo-based visual navigation, image processing algorithms for detection and tracking, and visual SLAM. Finally, the results of applying these techniques in several applications are presented and discussed: this study encompasses power line inspection, mobile target tracking, stereo distance estimation, mapping and positioning.
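The visual servoing idea described above is often formalised as the textbook image-based control law v = -λ L⁺ e, which maps image-feature error to a camera velocity reference. The following is a minimal illustrative sketch under that standard formulation; the function name, gain value, and feature layout are assumptions, not the paper's actual controllers.

```python
import numpy as np

def ibvs_velocity(features, desired, L, gain=0.5):
    """Image-based visual servoing: map feature error to camera velocity.

    features, desired : (N,) current and target image-feature vectors
    L                 : (N, 6) interaction (image Jacobian) matrix
    Returns a 6-vector (vx, vy, vz, wx, wy, wz) velocity reference.
    """
    error = features - desired
    # The Moore-Penrose pseudo-inverse maps image error back to camera motion
    return -gain * np.linalg.pinv(L) @ error
```

A flight controller would then track this velocity reference, which is the sense in which image features "compute the references for flight control".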
Abstract:
Computer forensics is the process of gathering and analysing evidence from computer systems to aid in the investigation of a crime. Typically, such investigations are undertaken by human forensic examiners using purpose-built software to discover evidence from a computer disk. This process is a manual one, and the time it takes a forensic examiner to conduct such an investigation is proportional to the storage capacity of the computer's disk drives. The heterogeneity and complexity of the various data formats stored on modern computer systems compound the problems posed by the sheer volume of data. The decision to undertake a computer forensic examination of a computer system is a decision to commit significant quantities of a human examiner's time. Where there is no prior knowledge of the information contained on a computer system, this commitment of time and energy occurs with little idea of its potential benefit to the investigation. The key contribution of this research is the design and development of an automated process to describe a computer system and its activity for the purposes of a computer forensic investigation. The term proposed for this process is computer profiling. A model of a computer system and its activity has been developed over the course of this research. Using this model, a computer system which is the subject of investigation can be automatically described in terms useful to a forensic investigator. The computer profiling process is resilient to attempts to disguise malicious computer activity. This resilience is achieved by detecting inconsistencies in the information used to infer the apparent activity of the computer. The practicality of the computer profiling process has been demonstrated by a proof-of-concept software implementation. The model and the prototype implementation utilising the model were tested with data from real computer systems.
The resilience of the process to attempts to disguise malicious activity has also been demonstrated with practical experiments conducted with the same prototype software implementation.
Abstract:
Digital forensics investigations aim to find evidence that helps confirm or disprove a hypothesis about an alleged computer-based crime. However, the ease with which computer-literate criminals can falsify computer event logs makes the prosecutor's job highly challenging. Given a log which is suspected to have been falsified or tampered with, a prosecutor is obliged to provide a convincing explanation for how the log may have been created. Here we focus on showing how a suspect computer event log can be transformed into a hypothesised actual sequence of events, consistent with independent, trusted sources of event orderings. We present two algorithms which allow the effort involved in falsifying logs to be quantified, as a function of the number of 'moves' required to transform the suspect log into the hypothesised one, thus allowing a prosecutor to assess the likelihood of a particular falsification scenario. The first algorithm always produces an optimal solution but, for reasons of efficiency, is suitable for short event logs only. To deal with the massive amount of data typically found in computer event logs, we also present a second heuristic algorithm which is considerably more efficient but may not always generate an optimal outcome.
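One concrete way to read the 'moves' metric described above is as the minimum number of adjacent transpositions (an inversion count) needed to reorder the suspect log into the hypothesised sequence. The sketch below illustrates that reading only; it is an assumption made for illustration, not either of the paper's two algorithms.

```python
def falsification_moves(suspect, hypothesised):
    """Count the adjacent swaps needed to reorder `suspect` into
    `hypothesised`.  Both logs must hold the same unique event IDs."""
    rank = {event: i for i, event in enumerate(hypothesised)}
    seq = [rank[event] for event in suspect]
    moves = 0
    # O(n^2) inversion count -- adequate only for short logs, which is
    # the regime the exact (non-heuristic) algorithm is said to target
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            if seq[i] > seq[j]:
                moves += 1
    return moves
```

A low count suggests a plausible falsification scenario (little effort required), while a high count makes the scenario correspondingly harder to credit.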
Abstract:
Games and related virtual environments have been a much-hyped area of the entertainment industry. The classic quote is that games are now approaching the size of Hollywood box office sales [1]. Books are now appearing that talk up the influence of games on business [2], and games are one of the key drivers of present hardware development. Some of this 3D technology is now embedded right down at the operating system level via the Windows Presentation Foundation – hit Windows/Tab on your Vista box to find out... In addition to this continued growth in the area of games, a number of factors impact its development in the business community. Firstly, the average age of gamers is approaching the mid-thirties, so a number of people in management positions in large enterprises are experienced in using 3D entertainment environments. Secondly, due to the pressure of demand for more computational power in both CPUs and Graphical Processing Units (GPUs), the average desktop, and indeed any decent laptop, can run a game or virtual environment. In fact, the demonstrations at the end of this paper were developed at the Queensland University of Technology (QUT) on a standard Software Operating Environment, with an Intel Dual Core CPU and a basic Intel graphics option. This means the potential exists for easy uptake of such technology, because (1) a broad range of workers are regularly exposed to 3D virtual environment software via games, and (2) present desktop computing power is now strong enough to potentially roll out a virtual environment solution across an entire enterprise. We believe such visual simulation environments can have a great impact in the area of business process modeling.
Accordingly, in this article we outline the communication capabilities of such environments and the possibilities they open up for business process modeling applications, where enterprises need to create, manage, and improve their business processes, and then communicate those processes to stakeholders, both process-cognizant and non-process-cognizant. The article concludes with a demonstration of the work we are doing in this area at QUT.
Abstract:
This paper discusses the use of models in automatic computer forensic analysis, and proposes and elaborates on a novel model for use in computer profiling, the computer profiling object model. The computer profiling object model is an information model which models a computer as objects with various attributes and inter-relationships. These together provide the information necessary for a human investigator or an automated reasoning engine to make judgements as to the probable usage and evidentiary value of a computer system. The computer profiling object model can be implemented so as to support automated analysis to provide an investigator with the information needed to decide whether manual analysis is required.
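As a rough illustration of "objects with various attributes and inter-relationships", the sketch below models profile objects and one simple automated cross-check. Every class, field, and rule name here is hypothetical, chosen for illustration rather than taken from the model's actual vocabulary.

```python
from dataclasses import dataclass, field

@dataclass
class ProfileObject:
    """One modelled element of the computer (user, application, file...)."""
    name: str
    attributes: dict = field(default_factory=dict)
    related: list = field(default_factory=list)

    def link(self, other):
        # record an inter-relationship with another object
        self.related.append(other)

def unsupported_activity(objects):
    """Flag objects claiming activity with no supporting relationship,
    mimicking how automated analysis could direct an investigator's
    attention before any manual examination begins."""
    return [o.name for o in objects
            if o.attributes.get("active") and not o.related]
```

An investigator (or reasoning engine) could use such flags to decide whether the probable usage and evidentiary value of the system justify a full manual analysis.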
Abstract:
Network Jamming systems provide real-time collaborative media performance experiences for novice or inexperienced users. In this paper we will outline the theoretical and developmental drivers for our Network Jamming software, called jam2jam. jam2jam employs generative algorithmic techniques with particular implications for accessibility and learning. We will describe how theories of engagement have directed the design and development of jam2jam and show how iterative testing cycles in numerous international sites have informed the evolution of the system and its educational potential. Generative media systems present an opportunity for users to leverage computational systems to make sense of complex media forms through interactive and collaborative experiences. Generative music and art are relatively new phenomena that use procedural invention as a creative technique to produce music and visual media. These kinds of systems present a range of affordances that can facilitate new kinds of relationships with music and media performance and production. Early systems have demonstrated the potential to provide access to collaborative ensemble experiences to users with little formal musical or artistic expertise. This presentation examines the educational affordances of these systems, evidenced by field data drawn from the Network Jamming Project. These generative performance systems enable access to a unique kind of music/media ensemble performance with very little musical/media knowledge or skill, and they further offer the possibility of unique interactive relationships with artists and creative knowledge through collaborative performance. Through the process of observing, documenting and analysing young people interacting with the generative media software jam2jam, a theory of meaningful engagement has emerged from the need to describe and codify how users experience creative engagement with music/media performance and the locations of meaning.
In this research we observed that the musical metaphors and practices of 'ensemble' or collaborative performance and improvisation as a creative process for experienced musicians can be made available to novice users. The relational meanings of these musical practices afford access to high-level personal, social and cultural experiences. Within the creative process of collaborative improvisation lies a series of modes of creative engagement that move from appreciation through exploration, selection and direction toward embodiment. The expressive sounds and visions made in real time by collaborating improvisers are immediate and compelling. Generative media systems let novices access these experiences with simple interfaces that allow them to make highly professional and expressive sonic and visual content simply by using gestures and being attentive and perceptive to their collaborators. These kinds of experiences present the potential for highly complex expressive interactions with sound and media as a performance. Evidence that has emerged from this research suggests that collaborative performance with generative media is transformative and meaningful. In this presentation we draw out these ideas around an emerging theory of meaningful engagement that has evolved from the development of network jamming software. Primarily we focus on demonstrating how these experiences might lead to understandings that may be of educational and social benefit.
Abstract:
SCAPE is an interactive simulation that allows teachers and students to experiment with sustainable urban design. The project is based on the Kelvin Grove Urban Village, Brisbane. Groups of students role-play as political, retail, elderly, student, council and builder characters to negotiate game decisions around land use, density, housing types and transport in order to design a sustainable urban community. As they do so, the 3D simulation reacts in real time to illustrate what the village would look like, as well as provide statistical information about the community they are creating. SCAPE brings together education, urban professional and technology expertise, helping it achieve educational outcomes, reflect real-world scenarios and include sophisticated logic, decision-making processes and effects.

The research methodology was primarily practice-led, underpinned by action research methods, resulting in innovative approaches and techniques in adapting digital games and simulation technologies to create dynamic and engaging experiences in pedagogical contexts. It also illustrates the possibilities for urban designers to engage a variety of communities in the processes, complexities and possibilities of urban development and sustainability.
Abstract:
The (dis)orientation of thought in its encounter with art can be understood as the direct result of an encounter with indeterminacy as a lack in meaning. As an artist I am aware of how this indeterminacy impacts on the perceived value and authority of the artistic voice, and in particular its value as a research voice. This paper explores this indeterminacy of meaning as a profound and disturbing unknowing characteristic of the sublime, and argues for its value to advanced thought and to any methodological understanding of practice-led research. Lyotard described the sublime as an ‘understanding’ through which art and its associated practices may be able to resist an all-too-easy assimilation by the public as just a consumer commodity. His thought represents an attempt to understand, both politically and philosophically, the affect of art, and particularly of abstract painting, as a state of profound and positive unknowing. To talk of the sublime in art is to speak of the suspension of any comfortable certainty in being, and instead to engage with the real as a limit to meaning and knowing. It is to talk of the presentation of the unpresentable as a momentary but significant dissolution of representation. This understanding of the sublime is then further explored through the cultural phenomenon of the monochrome painting and applied to the work of two contemporary artists, Franz Erhard Walther and Günter Umberg. Initially the monochrome was understood as an attempt to go beyond traditional representation and present the unpresentable. In the hundred years or so since that initial move this understanding has broadened: the monochrome now presents itself as a genre, or even a project, within visual art, but it still has much to teach us.
In the concretely abstract and performative artworks of Franz Erhard Walther and Günter Umberg, traces of this ambition remain, and their work can be seen to pose questions probing our understandings and experiences of artistic meaning, its value and the real.
Abstract:
The ways in which the "traditional" tension between words and artwork can be perceived has huge implications for understanding the relationship between critical or theoretical interpretation, art and practice, and research. Within the practice-led PhD this can generate a strange sense of disjuncture for the artist-researcher, particularly when engaged in writing the exegesis. This paper aims to explore this tension through an introductory investigation of the work of the philosopher Andrew Benjamin. For Benjamin, criticism completes the work of art. Criticism is, with the artwork, at the centre of our experience and theoretical understanding of art; in this way the work of art and criticism are co-productive. The reality of this co-productivity can be seen in three related articles on the work of American painter Marcia Hafif. In each of these articles there are critical negotiations of just how the work of art operates as art and theoretically, within the field of art. This focus has important ramifications for the writing and reading of the exegesis within the practice-led research higher degree. By including art as a significant part of the research reporting process, the artist-researcher is also staking a claim as to the critical value of their work. Rather than resisting the tension between word and artwork, the practice-led artist-researcher needs to embrace the co-productive nature of critical word and creative work to more completely articulate their practice and its significance as research. The ideal venue and opportunity for this is the exegesis.
Abstract:
We propose to design a Custom Learning System that responds to the unique needs and potentials of individual students, regardless of their location, abilities, attitudes, and circumstances. This project is intentionally provocative and future-looking but it is not unrealistic or unfeasible. We propose that by combining complex learning databases with a learner’s personal data, we could provide all students with a personal, customizable, and flexible education. This paper presents the initial research undertaken for this project of which the main challenges were to broadly map the complex web of data available, to identify what logic models are required to make the data meaningful for learning, and to translate this knowledge into simple and easy-to-use interfaces. The ultimate outcome of this research will be a series of candidate user interfaces and a broad system logic model for a new smart system for personalized learning. This project is student-centered, not techno-centric, aiming to deliver innovative solutions for learners and schools. It is deliberately future-looking, allowing us to ask questions that take us beyond the limitations of today to motivate new demands on technology.
Abstract:
This paper presents a retrospective view of a game design practice that recently switched from the development of complex learning games to the development of simple authoring tools for students to design their own learning games for each other. We introduce how our ‘10% Rule’, a premise that only 10% of what is learnt during a game design process is ultimately appreciated by the player, became a major contributor to the evolving practice. We use this rule primarily as an analytical and illustrative tool to discuss the learning involved in designing and playing learning games, rather than as a scientifically and empirically proven rule. The 10% Rule was prompted by our experience as designers and allows us to explore the often overlooked and valuable learning processes involved in designing learning games, and mobile games in particular. This discussion highlights that in designing mobile learning games, students are not only reflecting on their own learning processes through setting up structures for others to enquire and investigate, they are also engaging in high levels of independent inquiry and critical analysis in authentic learning settings. We conclude the paper with a discussion of the importance of these types of learning processes and skills of enquiry in 21st-century learning.
Abstract:
User-based intelligent systems are already commonplace in a student’s online digital life. Each time they browse, search, buy, join, comment, play, travel, upload, or download, a system collects, analyses and processes data in an effort to customise content and further improve services. This panel session will explore how intelligent systems, particularly those that gather data from mobile devices, can offer new possibilities to assist in the delivery of customised, personal and engaging learning experiences. The value of intelligent systems for education lies in their ability to formulate authentic and complex learner profiles that bring together and systematically integrate a student’s personal world with a formal curriculum framework. As we well know, a mobile device can collect data relating to a student’s interests (gathered from search history, applications and communications), location, surroundings and proximity to others (GPS, Bluetooth). However, what has been less explored is the opportunity for a mobile device to map the movements and activities of a student from moment to moment and over time. This longitudinal data provides a holistic profile of a student, their state and surroundings. Analysing this data may allow us to identify patterns that reveal a student’s learning processes: when and where they work best, and for how long. By revealing a student’s state and surroundings outside of school hours, this longitudinal data may also highlight opportunities to transform a student’s everyday world into an inventory for learning, punctuating their surroundings with learning recommendations. This would in turn lead to new ways to acknowledge, validate and foster informal learning, making it legitimate within a formal curriculum.
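A toy version of the longitudinal analysis suggested above: group study activity by hour of day to surface when a student works best. The (hour, focus score) record format is an assumption made purely for illustration, not a format described in the session.

```python
from collections import defaultdict

def best_study_hours(sessions, top=3):
    """Rank hours of the day by mean focus score.

    sessions : iterable of (hour_of_day, focus_score) pairs
    Returns the `top` hours with the highest average score.
    """
    by_hour = defaultdict(list)
    for hour, score in sessions:
        by_hour[hour].append(score)
    # sort hours by their mean score, best first
    ranked = sorted(by_hour,
                    key=lambda h: sum(by_hour[h]) / len(by_hour[h]),
                    reverse=True)
    return ranked[:top]
```

A real learner-profiling system would of course fuse many more signals (location, proximity, application use), but even this single-signal sketch shows how moment-to-moment data can be turned into a recommendation.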
Abstract:
An Alternate Reality Game (ARG) is a unique experience that blurs the edges between our everyday lives and imagined game worlds. Players are invited to interact with each other and with fictional characters using familiar tools such as emails, websites, telephones, and sometimes newspapers, radio and television. ARGs come in all shapes and sizes, tell a variety of different stories and inspire all kinds of interactions between people, their networks and the very streets in which they live. Some ARGs simply immerse you in fictional scenarios and indulge you in quirky challenges, while others reveal hidden histories of a city or teach us about important political causes. But the most exciting thing about ARGs is that they have the potential to inspire participants to imagine their everyday tools and places as resources for their own creative endeavors. Deb Polson will be presenting some of the most inspiring ARGs of recent years and revealing some of the design techniques that were used to create them. Most significantly, Deb will discuss ways in which educators can imagine using ARGs as rich teaching tools that inspire collaborative learning and motivate students to engage in all kinds of subject matter.