805 results for digital material representation
Abstract:
Live coding performances provide a context with particular demands and limitations for music making. In this paper we discuss how, as the live coding duo aa-cell, we have responded to these challenges, and what this experience has revealed about the computational representation of music and approaches to interactive computer music performance. In particular, we have identified several effective and efficient processes that underpin our practice, including probability, linearity, periodicity, set theory, and recursion, and we describe how these are applied and combined to build sophisticated musical structures. In addition, we outline aspects of our performance practice that respond to the improvisational, collaborative and communicative requirements of musical live coding.
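Two of the processes the abstract names, probability and recursion, combine naturally in code. The sketch below is purely illustrative (aa-cell perform in Scheme-based live coding environments, not Python, and the function name, scale and phrase shape are invented for this example): a recursive phrase builder that makes a probabilistic note choice at each leaf and repeats sub-phrases to give periodic structure.

```python
import random

def phrase(depth, scale=(60, 62, 64, 67, 69)):
    """Toy sketch: probability + recursion building a nested musical
    phrase, returned as a flat list of MIDI note numbers."""
    if depth == 0:
        return [random.choice(scale)]          # probabilistic note choice
    notes = []
    for _ in range(2):                          # periodic repetition of sub-phrases
        notes.extend(phrase(depth - 1, scale))  # recursive descent
    return notes

random.seed(1)
melody = phrase(3)  # 2**3 = 8 notes drawn from the pentatonic set above
```

Raising `depth` doubles the phrase length, so nested repetition and local randomness coexist, which is the kind of compact generative structure the abstract describes.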
Abstract:
Diffusion is the process that leads to the mixing of substances as a result of spontaneous and random thermal motion of individual atoms and molecules. It was first detected by the English botanist Robert Brown in 1827, and the phenomenon became known as ‘Brownian motion’. More specifically, the motion observed by Brown was translational diffusion – thermal motion resulting in random variations of the position of a molecule. This type of motion was given a correct theoretical interpretation in 1905 by Albert Einstein, who derived the relationship between temperature, the viscosity of the medium, the size of the diffusing molecule, and its diffusion coefficient. It is translational diffusion that is indirectly observed in MR diffusion-tensor imaging (DTI). The relationship obtained by Einstein provides the physical basis for using translational diffusion to probe the microscopic environment surrounding the molecule.
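The relationship Einstein derived is today usually written as the Stokes–Einstein relation, which makes explicit the dependence on temperature, viscosity and molecular size mentioned above (assuming a spherical diffusing particle):

```latex
D = \frac{k_B T}{6 \pi \eta r}, \qquad \langle x^2 \rangle = 2 D t
```

Here \(D\) is the translational diffusion coefficient, \(k_B\) the Boltzmann constant, \(T\) the absolute temperature, \(\eta\) the viscosity of the medium, and \(r\) the hydrodynamic radius of the molecule; the second expression gives the mean squared displacement along one axis after time \(t\), which is the quantity DTI probes indirectly.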
Abstract:
For several reasons, the Fourier phase domain is less favored than the magnitude domain in signal processing and modeling of speech. To correctly analyze the phase, several factors must be considered and compensated for, including the effect of the step size, windowing function and other processing parameters. Building on a review of these factors, this paper investigates a spectral representation based on the Instantaneous Frequency Deviation, but in which the step size between processing frames is used in calculating phase changes, rather than the traditional single sample interval. Reflecting these longer intervals, the term delta-phase spectrum is used to distinguish this from instantaneous derivatives. Experiments show that mel-frequency cepstral coefficient features derived from the delta-phase spectrum (termed Mel-Frequency delta-phase features) can produce broadly similar performance to equivalent magnitude domain features for both voice activity detection and speaker recognition tasks. Further, it is shown that the fusion of the magnitude and phase representations yields performance benefits over either in isolation.
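The step-size compensation the abstract describes can be sketched as follows. This is a generic illustration, not the paper's implementation: the function name, FFT size, hop length and Hann window are assumptions for the example. The per-bin phase change between successive frames is measured over the hop interval and the phase advance expected for that bin over that interval is subtracted, leaving the deviation, which is then wrapped to the principal value.

```python
import numpy as np

def delta_phase_spectrum(x, n_fft=512, hop=128):
    """Sketch of a delta-phase spectrum: per-bin phase change between
    successive STFT frames, compensated for the phase advance expected
    over the hop (step) interval, wrapped to (-pi, pi]."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop : i * hop + n_fft] * window
                       for i in range(n_frames)])
    phase = np.angle(np.fft.rfft(frames, axis=1))
    # Expected phase advance of bin k over one hop: 2*pi*k*hop/n_fft.
    bins = np.arange(phase.shape[1])
    expected = 2 * np.pi * bins * hop / n_fft
    dphi = np.diff(phase, axis=0) - expected
    # Principal argument (wrap to (-pi, pi]).
    return np.angle(np.exp(1j * dphi))
```

For a stationary sinusoid sitting exactly on a bin centre the deviation is near zero, which is what makes the representation informative: departures from zero encode how the signal's instantaneous frequency differs from the bin frequency.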
Abstract:
Digital modelling tools are the next generation of computer aided design (CAD) tools for the construction industry. They allow a designer to build a virtual model of the building project before the building is constructed. This supports a whole range of analysis, and the identification and resolution of problems before they arise on-site, in ways that were previously not feasible.
Abstract:
IEC Technical Committee 57 (TC57) published a series of standards and technical reports for “Communication networks and systems for power utility automation” as the IEC 61850 series. Sampled value (SV) process buses allow for the removal of potentially lethal voltages and damaging currents from inside substation control rooms and marshalling kiosks, reduce the amount of cabling required in substations, and facilitate the adoption of non-conventional instrument transformers. IEC 61850-9-2 provides an inter-operable solution to support multi-vendor process bus solutions. A time synchronisation system is required for an SV process bus; however, the details are not defined in IEC 61850-9-2. IEEE Std 1588-2008, Precision Time Protocol version 2 (PTPv2), provides the greatest accuracy of network-based time transfer systems, with timing errors of less than 100 ns achievable. PTPv2 is proposed by the IEC Smart Grid Strategy Group to synchronise IEC 61850 based substation automation systems. IEC 61850-9-2, PTPv2 and Ethernet are three complementary protocols that together define the future of sampled value digital process connections in substations. The suitability of PTPv2 for use with SV is evaluated, with preliminary results indicating that steady state performance is acceptable (jitter < 300 ns), and that extremely stable grandmaster oscillators are required to ensure SV timing requirements are met when recovering from loss of external synchronisation (such as GPS).
Abstract:
Children often have difficulties in learning spatial representations. This study investigated the effect of four different instructional formats on learning outcomes and strategies used when dealing with spatial tasks such as assembly procedures. It was hypothesised that instructional material that imposed the least extraneous cognitive load would facilitate enhanced learning. Forty secondary students were presented with four types of instruction: orthographic drawing, isometric drawing, physical model, and isometric drawing and physical model together. The findings provide evidence to suggest that working from physical models caused the least extraneous cognitive load compared to the isometric and orthographic groups. The model group took less time, had more correctly completed models, required fewer extra looks, spent less time studying the instruction and made fewer errors. The strategies analysed were problem decomposition, forward working, and attending to information in the foreground of the graphical representation.
Abstract:
Decentralised sensor networks typically consist of multiple processing nodes supporting one or more sensors. These nodes are interconnected via wireless communication. Practical applications of Decentralised Data Fusion have generally been restricted to using Gaussian-based approaches such as the Kalman or Information Filter. This paper proposes the use of Parzen window estimates as an alternative representation to perform Decentralised Data Fusion. It is required that the common information between two nodes be removed from any received estimates before local data fusion may occur; otherwise, estimates may become overconfident due to data incest. A closed-form approximation to the division of two estimates is described to enable conservative assimilation of incoming information to a node in a decentralised data fusion network. A simple example of tracking a moving particle with Parzen density estimates is shown to demonstrate how this algorithm allows conservative assimilation of network information.
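The Parzen window representation itself is compact. The sketch below is a generic one-dimensional Gaussian-kernel estimate (the function name and bandwidth value are assumptions for illustration), not the paper's decentralised division operator: the density at each query point is the average of kernels centred on the stored samples.

```python
import numpy as np

def parzen_estimate(samples, query, bandwidth=0.5):
    """Parzen window (kernel density) estimate with a Gaussian kernel.
    samples: 1-D array of observed points; query: 1-D array of points
    at which to evaluate the density."""
    diffs = query[:, None] - samples[None, :]           # (Q, S) pairwise offsets
    kernels = (np.exp(-0.5 * (diffs / bandwidth) ** 2)
               / (bandwidth * np.sqrt(2 * np.pi)))      # unit-mass Gaussians
    return kernels.mean(axis=1)                         # average over samples
```

Because each kernel integrates to one, the averaged estimate is itself a valid density, which is the property that lets such estimates represent the non-Gaussian posteriors the abstract targets.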
Abstract:
The aim of this paper is to demonstrate the validity of using Gaussian mixture models (GMM) for representing probabilistic distributions in a decentralised data fusion (DDF) framework. GMMs are a powerful and compact stochastic representation allowing efficient communication of feature properties in large scale decentralised sensor networks. It will be shown that GMMs provide a basis for analytical solutions to the update and prediction operations for general Bayesian filtering. Furthermore, a variant on the Covariance Intersect algorithm for Gaussian mixtures will be presented, ensuring a conservative update for the fusion of correlated information between two nodes in the network. In addition, purely visual sensory data will be used to show that decentralised data fusion and tracking of non-Gaussian states observed by multiple autonomous vehicles is feasible.
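The single-Gaussian Covariance Intersect update that the paper's mixture variant builds on is standard and can be sketched directly. This is the textbook form, not the paper's mixture extension; the fixed weight `omega=0.5` is an assumption for the example (in practice `omega` is usually chosen to minimise the trace or determinant of the fused covariance).

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, omega=0.5):
    """Covariance Intersect fusion of two Gaussian estimates (x, P)
    with unknown cross-correlation: convex combination of the two
    information matrices, guaranteeing a conservative fused estimate."""
    I1 = np.linalg.inv(P1)
    I2 = np.linalg.inv(P2)
    P = np.linalg.inv(omega * I1 + (1 - omega) * I2)
    x = P @ (omega * I1 @ x1 + (1 - omega) * I2 @ x2)
    return x, P
```

Unlike the naive information-filter update, the fused covariance never understates uncertainty regardless of how correlated the two inputs are, which is exactly the conservatism needed to avoid data incest in a decentralised network.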
Abstract:
Road surface macro-texture is an indicator used to determine skid resistance levels in pavements. Existing methods of quantifying macro-texture include the sand patch test and the laser profilometer. These methods utilise the 3D information of the pavement surface to extract the average texture depth. Recently, interest has arisen in image processing techniques as a quantifier of macro-texture, mainly using the Fast Fourier Transform (FFT). This paper reviews the FFT method, and then proposes two new methods, one using the autocorrelation function and the other using wavelets. The methods are tested on images obtained from a pavement surface extending over more than 2 km. About 200 images were acquired from the surface at approximately 10 m intervals, from a height of 80 cm above the ground. The results obtained from the image analysis methods using the FFT, the autocorrelation function and wavelets are compared with sensor measured texture depth (SMTD) data obtained from the same paved surface. The results indicate that coefficients of determination (R²) exceeding 0.8 are obtained when up to 10% of outliers are removed.
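The intuition behind an autocorrelation-based texture measure can be sketched as follows. This is an illustrative metric, not the paper's formulation (the function name and 0.5 threshold are assumptions): coarser surface texture produces intensity profiles that decorrelate more slowly, so the lag at which the normalised autocorrelation drops below a threshold acts as a texture-scale proxy.

```python
import numpy as np

def autocorr_texture_width(profile, threshold=0.5):
    """Illustrative texture measure: first lag at which the normalised
    autocorrelation of a zero-mean intensity profile falls below
    `threshold`. Larger width = coarser texture."""
    p = profile - profile.mean()
    acf = np.correlate(p, p, mode="full")[len(p) - 1:]  # non-negative lags
    acf = acf / acf[0]                                  # normalise to acf[0] = 1
    below = np.nonzero(acf < threshold)[0]
    return int(below[0]) if below.size else len(acf)
```

A fine (noise-like) profile decorrelates within a lag or two, while a slowly varying profile yields a much larger width, so the statistic separates texture scales in the way the abstract's autocorrelation method exploits.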
Abstract:
Community engagement with time-poor and seemingly apathetic citizens continues to challenge local governments. Capturing the attention of a digitally literate community who are technologically and socially savvy adds a new quality to this challenge. Community engagement is resource and time intensive, yet local governments have to manage on continually tightened budgets. The benefits of assisting citizens in taking ownership in making their community and city a better place to live, in collaboration with planners and local governments, are well established. This study investigates a new collaborative form of civic participation and engagement for urban planning that employs in-place digital augmentation. It enhances people’s experience of physical spaces with digital technologies that are directly accessible within that space, in particular through interaction with mobile phones and public displays. The study developed and deployed a system called Discussions in Space (DIS) in conjunction with a major urban planning project in Brisbane. Planners used the system to ask local residents planning-related questions via a public screen, and passers-by sent responses via SMS or Twitter onto the screen for others to read and reflect on, hence encouraging in-situ, real-time, civic discourse. The low barrier of entry proved to be successful in engaging a wide range of residents who are generally not heard due to their lack of time or interest. The system also reflected positively on the local government for reaching out in this way. Challenges and implications of the short-texted and ephemeral nature of this medium were evaluated in two focus groups with urban planners. The paper concludes with an analysis of the planners’ feedback, evaluating the merits of the data generated by the system to better engage with Australia’s new digital locals.
Abstract:
While Business Process Management (BPM) is an established discipline, the increased adoption of BPM technology in recent years has introduced new challenges. One challenge concerns dealing with the ever-growing complexity of business process models. Mechanisms for dealing with this complexity can be classified into two categories: 1) those that are solely concerned with the visual representation of the model and 2) those that change its inner structure. While significant attention is paid to the latter category in the BPM literature, this paper focuses on the former category. It presents a collection of patterns that generalize and conceptualize various existing mechanisms to change the visual representation of a process model. Next, it provides a detailed analysis of the degree of support for these patterns in a number of state-of-the-art languages and tools. This paper concludes with the results of a usability evaluation of the patterns conducted with BPM practitioners.
Abstract:
Australian queer (GLBTIQ) university student media is an important site of cultural and political self-representation. These groups exist within university student unions, and the unions provide them with space, financial support and resources. Community media is a significant site for the development of queer identity and community, and a key part of queer politics. This paper reviews my research into queer community media, which is grounded in a Queer Theoretical perspective of identity performativity. Cover argues that Queer Theoretical approaches that study media products fail to consider the material contexts which contribute to their construction. I use an ethnographic approach combined with discourse analysis in order to reveal queer student activists’ media representations of queer, and the production contexts which shape them. My research contributes to queer media scholarship by using a methodology that addresses the gap that Cover identifies.
Abstract:
The field of literacy studies has always been challenged by the changing technologies that humans have used to express, represent and communicate their feelings, ideas, understandings and knowledge. However, while the written word has remained central to literacy processes over a long period, it is generally accepted that there have been significant changes to what constitutes ‘literate’ practice. In particular, the status of the printed word has been challenged by the increasing dominance of the image, along with the multimodal meaning-making systems facilitated by digital media. For example, Gunther Kress and other members of the New London Group have argued that the second half of the twentieth century saw a significant cultural shift from the linguistic to the visual as the dominant semiotic mode. This, in turn, they suggest, was accompanied by a cultural shift ‘from page to screen’ as a dominant space of representation (e.g. Cope & Kalantzis, 2000; Kress, 2003; New London Group, 1996). In a similar vein, Bill Green has noted that we have witnessed a shift from the regime of the print apparatus to a regime of the digital electronic apparatus (Lankshear, Snyder and Green, 2000). For these reasons, the field of literacy education has been challenged to find new ways to conceptualise what is meant by ‘literacy’ in the twenty-first century and to rethink the conditions under which children might best be taught to be fully literate so that they can operate with agency in today’s world.
Abstract:
The pervasiveness of technology in the 21st Century has meant that adults and children live in a society where digital devices are integral to their everyday lives and participation in society. How we communicate, learn, work, entertain ourselves, and even shop is influenced by technology. Therefore, before children begin school they are potentially exposed to a range of learning opportunities mediated by digital devices. These devices include microwaves, mobile phones, computers, and console games such as PlayStations® and iPods®. In Queensland preparatory classrooms and in the homes of these children, teachers and parents support and scaffold young children’s experiences, providing them with access to a range of tools that promote learning and provide entertainment. This paper examines teachers’ and parents’ perspectives and considers whether they are techno-optimists who advocate for and promote the inclusion of digital technology, or whether they are techno-pessimists who prefer to exclude digital devices from young children’s everyday experiences. An exploratory, single case study design was utilised to gather data from three teachers and ten parents of children in the preparatory year. Teacher data was collected through interviews and email correspondence. Parent data was collected from questionnaires and focus groups. All parents who responded to the research invitation were mothers. The results of data analysis identified a misalignment among adults’ perspectives. Teachers were identified as techno-optimists and parents were identified as techno-pessimists, with further emergent themes particular to each category being established. This is concerning because both teachers and mothers influence young children’s experiences and numeracy knowledge; thus, a shared understanding and a common commitment to supporting young children’s use of technology would be beneficial.
Further research must investigate fathers’ perspectives of digital devices and the beneficial and detrimental roles that a range of digital devices, tools, and entertainment gadgets play in 21st Century children’s lives.
Abstract:
The use of Performance Capture techniques in the creation of games that involve Motion Capture is a relatively new phenomenon. To date there is no prescribed methodology that prepares actors for the rigors of this new industry, and as such there are many questions to be answered about how actors navigate these environments successfully when all available training and theoretical material is focused on performance for theatre and film. This article proposes that through the deployment of an Ecological Approach to Visual Perception we may begin to chart this territory for actors and contend with the demands of performing in the motion-captured gaming scenario.