947 results for digital communications
Abstract:
This paper examines three functions of music technology in the study of music: first as a tool, second as an instrument and, lastly, as a medium for thinking. As our societies become increasingly immersed in digital media for representation and communication, our philosophies of music education need to adapt to integrate these developments while maintaining the essence of music. The foundation of music technology in the 1990s is the digital representation of sound. It is this fundamental shift to a new medium with which to represent sound that carries with it the challenge to address digital technology and its multiple effects on music creation and presentation. In this paper I suggest that music institutions should take a broad and integrated approach to the place of music technology in their courses, based on an understanding of the digital representation of sound and the three functions it can serve. Educators should reconsider digital technologies such as synthesizers and computers as musical instruments and cognitive amplifiers, not simply as efficient tools.
Abstract:
Currently, the Bachelor of Design is the generic degree offered to the four disciplines of Architecture, Landscape Architecture, Industrial Design, and Interior Design within the School of Design at the Queensland University of Technology. Regardless of discipline, Digital Communication is a core unit taken by the 600 first-year students entering the Bachelor of Design degree. Within the design disciplines the communication of the designer's intentions is achieved primarily through the use of graphic images, with written information considered supportive or secondary. As such, Digital Communication attempts to educate learners in the fundamentals of this graphic design communication, using a generic digital or software tool. Past iterations of the unit have not acknowledged the subtle differences in design communication among the design disciplines involved, and have used a single generic software tool. Following a review of the unit in 2008, it was decided that a single generic software tool was no longer entirely sufficient. This decision was based on the recognition that discipline-specific digital tools were increasingly emerging, and on an expressed student desire and apparent aptitude to learn these discipline-specific tools. As a result, the unit was restructured in 2009 to offer both discipline-specific and generic software instruction, as elected by the student. This paper, apart from offering the general context and pedagogy of the existing and restructured units, will more importantly offer research data that validates the changes made to the unit. Most significant among these new data are the results of surveys that gauge actual student aptitude against desire in learning discipline-specific tools. This is done by examining student self-efficacy in problem resolution and technological prowess, both generally and specifically within the unit. More traditional means of validation are also presented, including the results of the generic university-wide Learning Experience Survey of the unit, as well as a comparison between the assessment results of the restructured unit and those of the previous year.
Abstract:
This study explores young people's creative practice through using Information and Communications Technologies (ICTs) in one particular learning area: Drama. The study focuses on school-based contexts and the impact of ICT-based interventions within two drama education case studies. The first pilot study involved the use of online spaces to complement a co-curricular performance project. The second focus case was a curriculum-based project with online spaces and digital technologies being used to create a cyberdrama. Each case documents the activity systems, participant experiences and meaning making in specific institutional and technological contexts. The nature of creative practice and learning is analysed, using frameworks drawn from Vygotsky's socio-historical theory (including his work on creativity) and from activity theory. Case study analysis revealed the nature of the contradictions encountered, and these required an analysis of institutional constraints and the dynamics of power. Cyberdrama offers young people opportunities to explore drama through new modes, and the use of ICTs can be seen as contributing different tools, spaces and communities for creative activity. To be able to engage in creative practice using ICTs requires a focus on a range of cultural tools and social practices beyond those of the purely technological. Cybernetic creative practice requires flexibility in the negotiation of tool use and subjects, and a system that responds to feedback and can adapt. Classroom-based dramatic practice may allow for the negotiation of power and tool use in the development of collaborative works of the imagination. However, creative practice using ICTs in schools is typically restricted by authoritative power structures and access issues. The research identified participant engagement and meaning making emerging from different factors, with some students showing preferences for embodied creative practice in Drama that did not involve ICTs. The findings of the study suggest that ICT-based interventions need to focus not only on different applications for the technology but also on embodied experience, the negotiation of power, identity and human interactions.
Abstract:
Digital rights management allows information owners to control the use and dissemination of electronic documents via a machine-readable licence. Documents are distributed in a protected form such that they may only be used within trusted environments, and only in accordance with the terms and conditions stated in the licence. Digital rights management has found uses in protecting copyrighted audio-visual productions, private personal information, and companies' trade secrets and intellectual property. This chapter describes a general model of digital rights management together with the technologies used to implement each component of a digital rights management system, and describes how digital rights management can be applied to secure the distribution of electronic information in a variety of contexts.
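Purely as an illustration of the enforcement idea sketched in this abstract, the following Python fragment shows a trusted environment evaluating a machine-readable licence before granting a requested use. The Licence structure and its field names are assumptions made for the example, not the components of the chapter's model.

```python
# Minimal sketch: evaluate licence terms before releasing a protected document.
# The Licence structure and checks below are illustrative assumptions only.
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class Licence:
    """Machine-readable licence: who may do what with a document, until when."""
    document_id: str
    licensee: str
    permitted_uses: set = field(default_factory=set)  # e.g. {"view", "print"}
    expires: datetime = datetime.max


def is_permitted(licence: Licence, user: str, use: str, now: datetime) -> bool:
    """A trusted rendering environment would run checks like these before
    releasing the document from its protected form."""
    return (
        licence.licensee == user
        and use in licence.permitted_uses
        and now <= licence.expires
    )


if __name__ == "__main__":
    lic = Licence("doc-42", "alice", {"view"}, datetime(2030, 1, 1))
    print(is_permitted(lic, "alice", "view", datetime(2025, 6, 1)))   # True
    print(is_permitted(lic, "alice", "print", datetime(2025, 6, 1)))  # False
```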
Abstract:
How does a digitally mediated environment work towards the ongoing support of the Hip Hop landscape present in the work of Jonzi D Productions during the UK National Tour of "Markus the Sadist"?
Abstract:
In the early part of 2008, a major political upset was pulled off in the Southeast Asian nation of Malaysia when the ruling coalition, Barisan Nasional (National Front), lost its long-held parliamentary majority after the general elections. Given the astonishingly high profile of political bloggers and the relatively well-established alternative online news sites within the nation, it was not surprising that many new media proponents saw the result as a major triumph of the medium. Through a brief account of the Hindraf (Hindu Rights Action Force) saga and the socio-political dissent nursed, in part, through new media in contemporary Malaysia, this paper seeks to lend context to the events that preceded and surrounded the election as an example of the relationship between media and citizenship in praxis. In so doing, it argues that the political turnaround, if indeed it proves to be one, cannot be considered the consequence of new media alone; rather, to comprehensively assess the implications of new media for citizenship is to take into account the specific histories, conditions and actions (or lack thereof) of the various social actors involved.
Abstract:
Presentation of research projects
Abstract:
This thesis investigates aspects of encoding the speech spectrum at low bit rates, with extensions to the effect of such coding on automatic speaker identification. Vector quantization (VQ) is a technique for jointly quantizing a block of samples at once, in order to reduce the bit rate of a coding system. The major drawback in using VQ is the complexity of the encoder. Recent research has indicated the potential applicability of the VQ method to speech when product-code vector quantization (PCVQ) techniques are utilized. The focus of this research is the efficient representation, calculation and utilization of the speech model as stored in the PCVQ codebook. In this thesis, several VQ approaches are evaluated, and the efficacy of two training algorithms is compared experimentally. It is then shown that these product-code vector quantization algorithms may be augmented with lossless compression algorithms, thus yielding an improved overall compression rate. An approach using a statistical model of the vector codebook indices for subsequent lossless compression is introduced. This coupling of lossy and lossless compression enables further compression gain. It is demonstrated that this approach is able to reduce the bit rate requirement from the current 24 bits per 20-millisecond frame to below 20, using a standard spectral distortion metric for comparison. Several fast-search VQ methods for use in speech spectrum coding have been evaluated. The usefulness of fast-search algorithms is highly dependent upon the source characteristics and, although previous research has been undertaken for coding of images using VQ codebooks trained with the source samples directly, the product-code structured codebooks for speech spectrum quantization place new constraints on the search methodology. The second major focus of the research is an investigation of the effect of low-rate spectral compression methods on the task of automatic speaker identification. The motivation for this aspect of the research arose from a need to simultaneously preserve speech quality and intelligibility and to provide for machine-based automatic speaker recognition using the compressed speech. This is important because there are several emerging applications of speaker identification where compressed speech is involved. Examples include mobile communications where the speech has been highly compressed, or where a database of speech material has been assembled and stored in compressed form. Although these two application areas have the same objective, that of maximizing the identification rate, the starting points are quite different. On the one hand, the speech material used for training the identification algorithm may or may not be available in compressed form. On the other hand, the new test material on which identification is to be based may only be available in compressed form. Using the spectral parameters which have been stored in compressed form, two main classes of speaker identification algorithm are examined. Some studies have been conducted in the past on bandwidth-limited speaker identification, but the use of short-term spectral compression deserves separate investigation. Combining the major aspects of the research, some important design guidelines for the construction of an identification model based on the use of compressed speech are put forward.
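As a rough illustration of the product-code (split) vector quantization idea referred to above, the sketch below quantizes sub-vectors of a spectral vector against separate codebooks and reassembles the quantized vector from the selected codewords. The random codebooks, dimensions and bit allocation are invented for the example; they are not the trained codebooks, distortion metric or fast-search methods evaluated in the thesis.

```python
# Illustrative split (product-code) VQ with toy random codebooks.
import numpy as np


def split_vq_encode(vector, codebooks):
    """Quantize each sub-vector against its own codebook; return the indices."""
    indices, offset = [], 0
    for cb in codebooks:                        # cb: (num_codewords, sub_dim)
        sub = vector[offset:offset + cb.shape[1]]
        dists = np.sum((cb - sub) ** 2, axis=1)
        indices.append(int(np.argmin(dists)))   # nearest codeword
        offset += cb.shape[1]
    return indices


def split_vq_decode(indices, codebooks):
    """Reassemble the quantized vector from the selected codewords."""
    return np.concatenate([cb[i] for i, cb in zip(indices, codebooks)])


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # A 10-dimensional spectral vector split into two 5-dimensional parts,
    # each with a 32-entry codebook -> 2 * 5 = 10 bits per vector.
    codebooks = [rng.standard_normal((32, 5)), rng.standard_normal((32, 5))]
    x = rng.standard_normal(10)
    idx = split_vq_encode(x, codebooks)
    x_hat = split_vq_decode(idx, codebooks)
    print(idx, float(np.mean((x - x_hat) ** 2)))
```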
Abstract:
This dissertation develops the model of a prototype system for the digital lodgement of spatial data sets with statutory bodies responsible for the registration and approval of land-related actions under the Torrens Title system. Spatial data pertain to the location of geographical entities together with their spatial dimensions, and are classified as point, line, area or surface. This dissertation deals with a sub-set of spatial data: land boundary data that result from the activities performed by surveying and mapping organisations for the development of land parcels. The prototype system has been developed, utilising an event-driven paradigm for the user interface, to exploit the potential of digital spatial data being generated through the use of electronic techniques. The system provides for the creation of a digital model of the cadastral network and dependent data sets for an area of interest from hard copy records. This initial model is calibrated on registered control and updated by field survey to produce an amended model. The field-calibrated model is then electronically validated to ensure it complies with standards of format and content. The prototype system was designed specifically to create a database of land boundary data for subsequent retrieval by land professionals for surveying, mapping and related activities. Data extracted from this database are utilised for subsequent field survey operations without the need to create an initial digital model of an area of interest. Statistical reporting of the differences found when subsequent initial and calibrated models are compared replaces the traditional checking operations on spatial data performed by a land registry office. Digital lodgement of survey data is fundamental to the creation of the database of accurate land boundary data. The creation of this database is also fundamental to the efficient integration of accurate spatial data about land being generated by modern technology, such as global positioning systems and remote sensing and imaging, with land boundary information and other information held in Government databases. The prototype system developed provides for the delivery of accurate, digital land boundary data for the land registration process to ensure the continued maintenance of the integrity of the cadastre. Such data should also meet the more general and encompassing requirements of, and prove to be of tangible, longer-term benefit to, the developing electronic land information industry.
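A minimal sketch of the kind of statistical difference reporting mentioned above, comparing corresponding points of an initial and a field-calibrated boundary model, might look as follows; the point pairs and the reported statistics are assumptions for illustration only, not the dissertation's validation procedure.

```python
# Illustrative comparison of corresponding points in two boundary models.
import numpy as np


def difference_report(initial_pts, calibrated_pts):
    """Report mean, RMS and maximum planar difference (metres) between models."""
    d = np.linalg.norm(np.asarray(calibrated_pts) - np.asarray(initial_pts), axis=1)
    return {
        "mean_m": float(d.mean()),
        "rms_m": float(np.sqrt((d ** 2).mean())),
        "max_m": float(d.max()),
    }


if __name__ == "__main__":
    # Invented easting/northing pairs for three boundary corners.
    initial = [(1000.00, 2000.00), (1050.10, 2000.05), (1050.20, 2050.00)]
    calibrated = [(1000.02, 1999.98), (1050.05, 2000.08), (1050.24, 2050.03)]
    print(difference_report(initial, calibrated))
```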
Abstract:
This thesis deals with the problem of the instantaneous frequency (IF) estimation of sinusoidal signals. This topic plays a significant role in signal processing and communications. Depending on the type of signal, two major approaches are considered. For IF estimation of single-tone or digitally modulated sinusoidal signals (such as frequency shift keying signals) the approach of digital phase-locked loops (DPLLs) is considered, and this is Part-I of this thesis. For FM signals the approach of time-frequency analysis is considered, and this is Part-II of the thesis. In Part-I we have utilized sinusoidal DPLLs with a non-uniform sampling scheme, as this type is widely used in communication systems. The digital tanlock loop (DTL) has introduced significant advantages over other existing DPLLs. In the last 10 years many efforts have been made to improve DTL performance. However, this loop and all of its modifications utilize a Hilbert transformer (HT) to produce a signal-independent 90-degree phase-shifted version of the input signal. The Hilbert transformer can be realized approximately using a finite impulse response (FIR) digital filter. This realization introduces further complexity in the loop, in addition to approximations and frequency limitations on the input signal. We have tried to avoid the practical difficulties associated with the conventional tanlock scheme while keeping its advantages. A time-delay is utilized in the tanlock scheme of the DTL to produce a signal-dependent phase shift. This gave rise to the time-delay digital tanlock loop (TDTL). Fixed point theorems are used to analyze the behavior of the new loop. As such, the TDTL combines the two major approaches in DPLLs: the non-linear approach of the sinusoidal DPLL based on fixed point analysis, and the linear tanlock approach based on arctan phase detection. The TDTL preserves the main advantages of the DTL despite its reduced structure. An application of the TDTL in FSK demodulation is also considered. This idea of replacing the HT by a time-delay may be of interest in other signal processing systems. Hence we have analyzed and compared the behaviors of the HT and the time-delay in the presence of additive Gaussian noise. Based on the above analysis, the behavior of first- and second-order TDTLs has been analyzed in additive Gaussian noise. Since DPLLs need time for locking, they are normally not efficient in tracking the continuously changing frequencies of non-stationary signals, i.e. signals with time-varying spectra. Non-stationary signals are of importance in synthetic and real-life applications. An example is the frequency-modulated (FM) signals widely used in communication systems. Part-II of this thesis is dedicated to the IF estimation of non-stationary signals. For such signals the classical spectral techniques break down, due to the time-varying nature of their spectra, and more advanced techniques should be utilized. For the purpose of instantaneous frequency estimation of non-stationary signals there are two major approaches: parametric and non-parametric. We chose the non-parametric approach, which is based on time-frequency analysis. This approach is computationally less expensive and more effective in dealing with multicomponent signals, which are the main aim of this part of the thesis. A time-frequency distribution (TFD) of a signal is a two-dimensional transformation of the signal to the time-frequency domain. Multicomponent signals can be identified by multiple energy peaks in the time-frequency domain.
Many real-life and synthetic signals are of a multicomponent nature and there is little in the literature concerning IF estimation of such signals. This is why we have concentrated on multicomponent signals in Part-II. An adaptive algorithm for IF estimation using the quadratic time-frequency distributions has been analyzed. A class of time-frequency distributions that are more suitable for this purpose has been proposed. The kernels of this class are time-only, or one-dimensional, rather than the time-lag (two-dimensional) kernels. Hence this class has been named the T-class. If the parameters of these TFDs are properly chosen, they are more efficient than the existing fixed-kernel TFDs in terms of resolution (energy concentration around the IF) and artifact reduction. The T-distributions have been used in the adaptive IF algorithm and proved to be efficient in tracking rapidly changing frequencies. They also enable direct amplitude estimation for the components of a multicomponent signal.
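A minimal, hedged sketch of the non-parametric idea, reading the IF off the peak of a time-frequency representation in each time slice, is given below. It uses a plain spectrogram and an invented linear FM test signal as stand-ins; it is not the adaptive algorithm or the T-class distributions developed in the thesis.

```python
# Illustrative non-parametric IF estimate: peak of a spectrogram per time slice.
import numpy as np


def spectrogram_if(signal, fs, frame=256, hop=64):
    """Estimate instantaneous frequency as the peak frequency in each frame."""
    window = np.hanning(frame)
    freqs = np.fft.rfftfreq(frame, d=1.0 / fs)
    estimates = []
    for start in range(0, len(signal) - frame + 1, hop):
        spectrum = np.abs(np.fft.rfft(signal[start:start + frame] * window))
        estimates.append(freqs[np.argmax(spectrum)])  # dominant energy peak
    return np.array(estimates)


if __name__ == "__main__":
    fs = 8000.0
    t = np.arange(0, 1.0, 1.0 / fs)
    # Invented linear FM (chirp) test signal sweeping 500 Hz -> 1500 Hz.
    x = np.cos(2 * np.pi * (500 * t + 0.5 * 1000 * t ** 2))
    print(spectrogram_if(x, fs)[:5])  # estimates near the start of the sweep
```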