902 results for Digital techniques
Abstract:
Design as seen from the designer's perspective is a series of amazing imaginative jumps or creative leaps. But design as seen by the design historian is a smooth progression or evolution of ideas that seems self-evident and inevitable after the event. For the artist/creator/inventor/designer stuck at the point just before the creative leap, however, the next step is anything but obvious. They know where they have come from and have a general sense of where they are going, but often do not have a precise target or goal. This is why it is misleading to talk of design as a problem-solving activity; it is better defined as a problem-finding activity. This has been very frustrating for those trying to assist the design process with computer-based, problem-solving techniques: by the time the problem has been defined, it has been solved. Indeed, the solution is often the very definition of the problem. Design must be creative, or it is mere imitation. But since this crucial creative leap seems inevitable after the event, the question arises: can we find some way of searching the space ahead? Of course, there are serious problems of knowing what we are looking for and of the vastness of the search space. It may be better to discard the term "searching" altogether in the context of the design process. Conceptual analogies such as search, search spaces and fitness landscapes aim to elucidate the design process, but the vastness of the multidimensional spaces involved makes these analogies misguided, and they thereby further confound the issue. "Search" becomes a misnomer, since it implies that it is possible to find what you are looking for; in such vast spaces the term must be discarded. Any attempt to search for the highest peak in the fitness landscape as an optimal solution is therefore also meaningless. Furthermore, the very existence of a fitness landscape is fallacious.
Although alternatives in the same region of the vast space can be compared to one another, distant alternatives will stem from radically different roots and will therefore not be comparable in any straightforward manner (Janssen 2000). Nevertheless, we still have the tantalizing possibility that, if a creative idea seems inevitable after the event, the process might somehow be reversed. This may be as improbable as attempting to reverse time. A more helpful analogy is from nature, where it is generally assumed that the process of evolution is not long-term goal-directed or teleological. Dennett points out a common misunderstanding of Darwinism: the idea that evolution by natural selection is a procedure for producing human beings. Evolution can have produced humankind by an algorithmic process without its being true that evolution is an algorithm for producing us. If we were to wind the tape of life back and run this algorithm again, the likelihood of "us" being created again is infinitesimally small (Gould 1989; Dennett 1995). Nevertheless, Mother Nature has proved a remarkably successful, resourceful, and imaginative inventor, generating a constant flow of incredible new design ideas to fire our imagination. Hence the current interest in the potential of the evolutionary paradigm in design. These evolutionary methods are frequently based on techniques such as evolutionary algorithms, which are usually thought of as search algorithms. It is necessary to abandon such connections with searching and see the evolutionary algorithm as a direct analogy with the evolutionary processes of nature. The process of natural selection can generate a wealth of alternative experiments, and the better ones survive. There is no one solution, there is no optimal solution, but there is continuous experiment. Nature is profligate with her prototyping and ruthless in her elimination of less successful experiments.
Most importantly, nature has all the time in the world. As designers we can afford neither such profligate prototyping and ruthless experiment, nor the time scale of the natural design process. Instead we can use the computer to compress space and time and to perform virtual prototyping and evaluation before committing ourselves to actual prototypes. This is the hypothesis underlying the evolutionary paradigm in design (1992, 1995).
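The generate-evaluate-select cycle that the abstract borrows from natural selection can be sketched as a minimal evolutionary loop. This is a hypothetical illustration only, not the authors' system: the "design" is a stand-in vector of ten parameters, and the fitness function is an arbitrary placeholder.

```python
import random

TARGET = [0.5] * 10  # an arbitrary illustrative "goal" region of the design space

def fitness(design):
    # Higher is better: negative squared distance to the target.
    return -sum((d - t) ** 2 for d, t in zip(design, TARGET))

def mutate(design, rate=0.2, scale=0.1):
    # Small random perturbations stand in for nature's variation.
    return [d + random.gauss(0, scale) if random.random() < rate else d
            for d in design]

def evolve(pop_size=20, generations=50, seed=42):
    random.seed(seed)
    population = [[random.random() for _ in range(10)] for _ in range(pop_size)]
    for _ in range(generations):
        # Profligate prototyping: generate many variants...
        offspring = [mutate(random.choice(population)) for _ in range(pop_size)]
        # ...and ruthless elimination: keep only the better survivors.
        population = sorted(population + offspring, key=fitness,
                            reverse=True)[:pop_size]
    return population[0]

best = evolve()
```

Note that, in keeping with the abstract's argument, the loop never "finds" an optimum; it simply runs continuous experiment for as many generations as the designer allows.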
Abstract:
I invented YouTube. Well, not YouTube exactly, but something close – something called YIRN; and not by myself exactly, but with a team. In 2003-5 I led a research project designed to link geographically dispersed young people, to allow them to post their own photos, videos and music, and to comment on the same from various points of view – peer to peer, author to public, or impresario to audience. We wanted to find a way to take the individual creative productivity that is associated with the Internet and combine it with the easy accessibility and openness to other people’s imagination that is associated with broadcasting; especially, in the context of young people, listening to the radio. So we called it the Youth Internet Radio Network, or YIRN.
Abstract:
There are two aspects to the problem of digital scholarship and pedagogy. One is to do with scholarship; the other with pedagogy. In scholarship, the association of knowledge with its printed form remains dominant. In pedagogy, the desire to abandon print for ‘new’ media is urgent, at least in some parts of the academy. Film and media studies are thus at the intersection of opposing forces – pulling the field ‘back’ to print and ‘forward’ to digital media. These tensions may be especially painful in a field whose own object of study is another form of communication, neither print nor digital but broadcast. Although print has been overtaken in the popular marketplace by audio-visual forms, this was never achieved in the domain of scholarship. Even when it is digitally distributed, the output of research is still a ‘paper.’ But meanwhile, in the realm of teaching, production- and practice-based pedagogy has become firmly established. Nevertheless a disjunction remains, between high-end scholarship in research universities and vocational training in teaching institutions; but neither is well equipped to deal with the digital challenge.
Abstract:
The growth of direct marketing has been attributed to rapid advances in technology and the changing market context. The fundamental ability of direct marketers to communicate with consumers and to elicit a response, combined with the ubiquitous nature and power of mobile digital technology, provides a synergy that will increase the potential for the success of direct marketing. The aim of this paper is to provide an analytical framework identifying the developments in the digital environment from e-marketing to m-marketing, and to alert direct marketers to the enhanced capabilities available to them.
Abstract:
Designers and artists have integrated recent advances in interactive, tangible and ubiquitous computing technologies to create new forms of interactive environments in the domains of work, recreation, culture and leisure. Many designs of technology systems begin with the workplace in mind, and with function, ease of use, and efficiency high on the list of priorities. [1] These priorities do not fit well with works designed for an interactive art environment, where the aims are many and where utility and functionality serve instead to support a playful, ambiguous or even experimental experience for the participants. To evaluate such works requires an integration of art-criticism techniques with more recent Human Computer Interaction (HCI) methods, and an understanding of the different nature of engagement in these environments. This paper begins a process of mapping a set of priorities for amplifying engagement in interactive art installations. I first define the concept of ludic engagement and its usefulness as a lens for both design and evaluation in these settings. I then detail two fieldwork evaluations I conducted within two exhibitions of interactive artworks, and discuss their outcomes and the future directions of this research.
Abstract:
In this paper we explore what is required of a User Interface (UI) design in order to encourage participation around playing and creating Location-Based Games (LBGs). To base our research in practice, we present Cipher Cities, a web-based system. Through the design of this system, we investigate how UI design can provide tools for complex content creation to complement and encourage the use of mobile phones for designing, distributing, and playing LBGs. Furthermore, we discuss how UI design can promote and support socialisation around LBGs through the design of functional interface components and services such as groups, user profiles, and player status listings.
Abstract:
Many governments worldwide are attempting to increase accountability, transparency, and the quality of services by adopting information and communications technologies (ICTs) to modernize and change the way their administrations work. Meanwhile, e-government is becoming a significant decision-making and service tool at local, regional and national government levels. The vast majority of users of these government online services see significant benefits from being able to access services online. The rapid pace of technological development has created increasingly more powerful ICTs that are capable of radically transforming public institutions and private organizations alike. These technologies have proven to be extraordinarily useful instruments in enabling governments to enhance the quality, speed of delivery and reliability of services to citizens and to business (VanderMeer & VanWinden, 2003). However, just because the technology is available does not mean it is accessible to all. The term digital divide has been used since the 1990s to describe patterns of unequal access to ICTs—primarily computers and the Internet—based on income, ethnicity, geography, age, and other factors. Over time it has evolved to more broadly define disparities in technology usage, resulting from a lack of access, skills, or interest in using technology. This article provides an overview of recent literature on e-government and the digital divide, and includes a discussion on the potential of e-government in addressing the digital divide.
Abstract:
This thesis focuses on the volatile and hygroscopic properties of mixed aerosol species, in particular the influence that organic species of varying solubility have upon seed aerosols. Aerosol studies were conducted at the Paul Scherrer Institut Laboratory for Atmospheric Chemistry (PSI-LAC, Villigen, Switzerland) and at the Queensland University of Technology International Laboratory for Air Quality and Health (QUT-ILAQH, Brisbane, Australia). The primary measurement tool employed in this program was the Volatilisation and Hygroscopicity Tandem Differential Mobility Analyser (VHTDMA; Johnson et al. 2004). This system was initially developed at QUT within the ILAQH and was completely re-developed as part of this project (see Section 1.4 for a description of this process). The new VHTDMA was deployed to the PSI-LAC, where an analysis of the volatile and hygroscopic properties of ammonium sulphate seeds coated with organic species formed from the photo-oxidation of α-pinene was conducted. This investigation was driven by a desire to understand the influence of atmospherically prevalent organics upon water uptake by material with cloud-forming capabilities. Of particular note from this campaign were the observed influences of partially soluble organic coatings upon inorganic ammonium sulphate seeds above and below their deliquescence relative humidity (DRH). Above the DRH of the seed, increasing the volume fraction of the organic component was shown to reduce the water uptake of the mixed particle. Below the DRH, the organic was shown to activate the water uptake of the seed. This was the first time this effect had been observed for α-pinene-derived SOA. In contrast with the simulated aerosols generated at the PSI-LAC, a case study of the volatile and hygroscopic properties of diesel emissions was undertaken. During this stage of the project, ternary nucleation was shown, for the first time, to be one of the processes involved in the formation of diesel particulate matter.
Furthermore, these particles were shown to be coated with a volatile hydrophobic material which prevented the water uptake of the highly hygroscopic material below. This result was a first and indicated that previous studies into the hygroscopicity of diesel emissions had erroneously reported the particles to be hydrophobic. Both of these results contradict the previously upheld Zdanovskii-Stokes-Robinson (ZSR) additive rule for water uptake by mixed species. This is an important contribution, as it adds to the weight of evidence that limits the validity of this rule.
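For context, the ZSR additive rule that these results contradict predicts the hygroscopic growth factor of a mixed particle from the volume fractions and pure-component growth factors of its constituents. The sketch below uses illustrative values, not data from the thesis:

```python
# ZSR (Zdanovskii-Stokes-Robinson) additive rule: the diameter growth factor
# of a mixed particle follows from the volume fractions (eps_i) and pure
# component growth factors (gf_i). Values below are illustrative only.

def zsr_growth_factor(volume_fractions, growth_factors):
    assert abs(sum(volume_fractions) - 1.0) < 1e-9  # fractions must sum to 1
    return sum(eps * gf ** 3
               for eps, gf in zip(volume_fractions, growth_factors)) ** (1 / 3)

# Example: a particle of 70% ammonium sulphate (GF ~1.5 at high RH) with a
# 30% nearly non-hygroscopic organic coating (GF ~1.1, assumed value).
gf_mixed = zsr_growth_factor([0.7, 0.3], [1.5, 1.1])
```

The rule assumes each component takes up water independently; the thesis results (organic coatings suppressing uptake above the DRH and activating it below) are deviations from exactly this independence assumption.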
Abstract:
Selecting an appropriate business process modelling technique forms an important task within the methodological challenges of a business process management project. While a plethora of available techniques has been developed over the last decades, there is an obvious shortage of well-accepted reference frameworks that can be used to evaluate and compare the capabilities of the different techniques. Academic progress has been made at least in the area of representational analyses that use ontology as a benchmark for such evaluations. This paper reflects on the comprehensive experiences with the application of a model based on the Bunge ontology in this context. A brief overview of the underlying research model characterizes the different steps in such a research project. A comparative summary of previous representational analyses of process modelling techniques over time gives insights into the relative maturity of selected process modelling techniques. Based on these experiences suggestions are made as to where ontology-based representational analyses could be further developed and what limitations are inherent to such analyses.
Abstract:
In this paper we propose a method for vision-only topological simultaneous localisation and mapping (SLAM). Our approach does not use motion or odometric information but a sequence of colour histograms from visited places. In particular, we address the perceptual aliasing problem which occurs when using only external observations in topological navigation. We propose a Bayesian inference method to incrementally build a topological map by inferring spatial relations from the sequence of observations while simultaneously estimating the robot's location. The algorithm aims to build a small map which is consistent with local adjacency information extracted from the sequence of measurements. Local adjacency information is incorporated to disambiguate places which otherwise would appear to be the same. Experiments in an indoor environment show that the proposed technique is capable of dealing with perceptual aliasing using visual observations only and successfully performs topological SLAM.
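The appearance-only place comparison underlying such an approach can be sketched as follows. This is a hypothetical illustration, not the paper's algorithm: function names, the similarity measure (histogram intersection), and the threshold are all assumptions.

```python
import numpy as np

def colour_histogram(image, bins=8):
    # image: H x W x 3 uint8 array; returns a normalised joint RGB histogram
    # summarising the appearance of a place.
    hist, _ = np.histogramdd(image.reshape(-1, 3),
                             bins=(bins, bins, bins), range=[(0, 256)] * 3)
    return hist.ravel() / hist.sum()

def similarity(h1, h2):
    # Histogram intersection: 1.0 for identical histograms, 0.0 for disjoint.
    return np.minimum(h1, h2).sum()

def same_place(h1, h2, threshold=0.8):
    # Perceptual aliasing: two distinct places can still exceed the
    # threshold, which is why local adjacency information is needed
    # to disambiguate them.
    return similarity(h1, h2) >= threshold

# Synthetic stand-in images for two observations.
rng = np.random.default_rng(0)
img_a = rng.integers(0, 256, size=(48, 64, 3), dtype=np.uint8)
img_b = rng.integers(0, 256, size=(48, 64, 3), dtype=np.uint8)
```

A Bayesian map builder would treat `same_place` matches only as candidate loop closures, weighing them against the adjacency of places in the observation sequence.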
Abstract:
In architecture courses, instilling a wider understanding of the industry-specific representations practiced in the building industry is normally done under the auspices of Technology and Science subjects. Traditionally, building industry professionals communicated their design intentions using industry-specific representations. Originally these mainly two-dimensional representations, such as plans, sections, elevations and schedules, were produced manually, using a drawing board. Currently, this manual process has been digitised in the form of Computer Aided Design and Drafting (CADD) or, ubiquitously, simply CAD. While CAD has significant productivity and accuracy advantages over the earlier manual method, it still only produces industry-specific representations of the design intent. Essentially, CAD is a digital version of the drawing board, and it remains the main tool used for the production of these representations in industry. This is also the approach taken in most traditional university courses and mirrors the reality of the situation in the building industry. A successor to CAD, in the form of Building Information Modelling (BIM), is presently evolving in the construction industry. CAD is mostly a technical tool that conforms to existing industry practices. BIM, on the other hand, is revolutionary both as a technical tool and as an industry practice. Rather than producing representations of design intent, BIM produces an exact virtual prototype of any building that, in an ideal situation, is centrally stored and freely exchanged between the project team. Essentially, BIM builds any building twice: once in the virtual world, where any faults are resolved, and finally, in the real world. There is, however, no established model for learning through the use of this technology in architecture courses.
Queensland University of Technology (QUT), a tertiary institution that maintains close links with industry, recognises the importance of equipping its graduates with skills that are relevant to industry. BIM skills are in increasing demand throughout the construction industry as industry practices evolve. As such, during the second half of 2008, QUT 4th year architectural students were formally introduced for the first time to BIM, as both a technology and an industry practice. This paper will outline the teaching team’s experiences and methodologies in offering a BIM unit (Architectural Technology and Science IV) at QUT for the first time and provide a description of the learning model. The paper will present the results of a survey on the learners’ perspectives of both BIM and their learning experiences as they learn about and through this technology.
Abstract:
In the future we will have a detailed ecological model of the whole planet with capabilities to explore and predict the consequences of alternative futures. However, such a planetary eco-model will take time to develop, time to populate with data, and time to validate - time the planet doesn't have. In the interim, we can model the major concentrations of energy use and pollution - our cities - and connect them to form a "talking cities network". Such a networked city model would be much quicker to build and validate. And the advantage of this approach is that it is safer and more effective for us to interfere with the operation of our cities than to tamper directly with the behaviour of natural systems. Essentially, it could be thought of as providing the planet with a nervous system and would empower us to better develop and manage sustainable cities.
Abstract:
This chapter discusses digital storytelling as a methodology for participatory public history through a detailed reflection on an applied research project that integrated both public history and digital storytelling in the context of a new master-planned urban development: the Kelvin Grove Urban Village Sharing Stories project.
Abstract:
Sleeper is an 18'00" musical work for live performer and laptop computer which exists as both a live performance work and a recorded work for audio CD. The work has been presented at a range of international performance events and survey exhibitions. These include the 2003 International Computer Music Conference (Singapore) where it was selected for CD publication, Variable Resistance (San Francisco Museum of Modern Art, USA), and i.audio, a survey of experimental sound at the Performance Space, Sydney. The source sound materials are drawn from field recordings made in acoustically resonant spaces in the Australian urban environment, amplified and acoustic instruments, radio signals, and sound synthesis procedures. The processing techniques blur the boundaries between, and exploit, the perceptual ambiguities of de-contextualised and processed sound. The work thus challenges the arbitrary distinctions between sound, noise and music and attempts to reveal the inherent musicality in so-called non-musical materials via digitally re-processed location audio. Thematically the work investigates Paul Virilio’s theory that technology ‘collapses space’ via the relationship of technology to speed. Technically this is explored through the design of a music composition process that draws upon spatially and temporally dispersed sound materials treated using digital audio processing technologies. One of the contributions to knowledge in this work is a demonstration of how disparate materials may be employed within a compositional process to produce music through the establishment of musically meaningful morphological, spectral and pitch relationships. This is achieved through the design of novel digital audio processing networks and a software performance interface. 
The work explores, tests and extends the music perception theories of ‘reduced listening’ (Schaeffer, 1967) and ‘surrogacy’ (Smalley, 1997) by demonstrating how, through specific audio processing techniques, sounds may be shifted away from ‘causal’ listening contexts towards abstract aesthetic listening contexts. In doing so, it demonstrates how various time and frequency domain processing techniques may be used to achieve this shift.
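One frequency-domain technique of the general kind described, sketched here purely as a hypothetical illustration (it is not the processing network used in Sleeper), is phase randomisation: retaining a sound's magnitude spectrum while scrambling its phases, which preserves the spectral colour but destroys the temporal cues that let a listener identify the source.

```python
import numpy as np

def randomise_phase(signal, seed=0):
    # Keep the magnitude spectrum but replace all phases with random values,
    # shifting the sound away from a 'causal' listening context.
    rng = np.random.default_rng(seed)
    spectrum = np.fft.rfft(signal)
    phases = rng.uniform(0, 2 * np.pi, size=spectrum.shape)
    scrambled = np.abs(spectrum) * np.exp(1j * phases)
    return np.fft.irfft(scrambled, n=len(signal))

# A one-second 440 Hz test tone at an 8 kHz sample rate stands in for
# field-recording material.
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
processed = randomise_phase(tone)
```

The processed signal keeps the tone's spectral energy distribution but loses its waveform identity, a small-scale analogue of de-contextualising location audio while preserving musically meaningful spectral relationships.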