897 results for Digital Informational Environment
Abstract:
Signal Processing (SP) is a subject of central importance in engineering and the applied sciences. Signals are information-bearing functions, and SP deals with the analysis and processing of signals (by dedicated systems) to extract or modify information. Signal processing is necessary because signals normally contain information that is not readily usable or understandable, or which might be disturbed by unwanted sources such as noise. Although many signals are non-electrical, it is common to convert them into electrical signals for processing. Most natural signals (such as acoustic and biomedical signals) are continuous functions of time and are referred to as analog signals. Prior to the advent of digital computers, Analog Signal Processing (ASP) and analog systems were the only tools for dealing with analog signals. Although ASP and analog systems are still widely used, Digital Signal Processing (DSP) and digital systems are attracting more attention, due in large part to the significant advantages of digital systems over their analog counterparts. These advantages include superiority in performance, speed, reliability, efficiency of storage, size and cost. In addition, DSP can solve problems that cannot be solved using ASP, such as the spectral analysis of multicomponent signals, adaptive filtering, and operations at very low frequencies. Following the developments in engineering that occurred in the 1980s and 1990s, DSP became one of the world's fastest growing industries. Since that time DSP has not only impacted traditional areas of electrical engineering, but has also had far-reaching effects on other domains that deal with information, such as economics, meteorology, seismology, bioengineering, oceanology, communications, astronomy, radar engineering, control engineering and various other applications. This book is based on the Lecture Notes of Associate Professor Zahir M. Hussain at RMIT University (Melbourne, 2001-2009), the research of Dr. 
Amin Z. Sadik (at QUT & RMIT, 2005-2008), and the notes of Professor Peter O'Shea at Queensland University of Technology. Part I of the book addresses the representation of analog and digital signals and systems in the time domain and in the frequency domain. The core topics covered are convolution, transforms (Fourier, Laplace, Z, Discrete-Time Fourier, and Discrete Fourier), filters, and random signal analysis. There is also a treatment of some important applications of DSP, including signal detection in noise, radar range estimation, banking and financial applications, and audio effects production. The design and implementation of digital systems (such as integrators, differentiators, resonators and oscillators) are also considered, along with the design of conventional digital filters. Part I is suitable for an elementary course in DSP. Part II, which is suitable for an advanced signal processing course, considers selected signal processing systems and techniques. Core topics covered are the Hilbert transformer, binary signal transmission, phase-locked loops, sigma-delta modulation, noise shaping, quantization, adaptive filters, and non-stationary signal analysis. Part III presents some selected advanced DSP topics. We hope that this book will contribute to the advancement of engineering education and that it will serve as a general reference book on digital signal processing.
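One of the advantages cited above, the spectral analysis of multicomponent signals, is easy to illustrate. The following sketch is not from the book; it is a plain O(N²) discrete Fourier transform in Python, with an illustrative two-component test signal, showing how the DFT separates the components into distinct frequency bins:

```python
import cmath
import math

def dft(x):
    """Direct O(N^2) discrete Fourier transform of a real sequence."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

N = 64
# Two-component signal: a 5-cycle and a 12-cycle sinusoid over the window
x = [math.sin(2 * math.pi * 5 * n / N) + 0.5 * math.sin(2 * math.pi * 12 * n / N)
     for n in range(N)]
X = dft(x)
mags = [abs(v) for v in X[:N // 2]]
# The spectrum resolves the two components into bins k = 5 and k = 12,
# with magnitudes N/2 times each amplitude (32 and 16 here)
print(round(mags[5], 1), round(mags[12], 1))  # → 32.0 16.0
```

For integer-cycle sinusoids the bins are exact; in practice windowing is needed for non-integer frequencies, which is one reason spectral analysis is treated at length in DSP texts.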
Abstract:
The emergence of mobile and ubiquitous computing has created what is referred to as a hybrid space – a virtual layer of digital information and interaction opportunities that sits on top of, and augments, the physical environment. The increasing connectedness through such media, from anywhere to anybody at any time, makes us less dependent on being physically present somewhere in particular. But what is the role of ubiquitous computing in making physical presence at a particular place more attractive? Acknowledging historic context and identity as important attributes of place, this work embarks on a ‘global sense of place’ in which the cultural diversity, multiple identities, backgrounds, skills and experiences of people traversing a place are regarded as social assets of that place. The aim is to explore how the physical architecture and infrastructure of a place can be mediated towards making invisible social assets visible, thus augmenting people’s situated social experience. The focus is thereby on embodied media, i.e. media that materialise digital information as observable and sometimes interactive parts of the physical environment and hence amplify people’s real-world experience, rather than substituting it or moving it to virtual spaces.
Abstract:
In this paper I identify specific historical trajectories that are directly contingent upon the deployment and use of new media, but which are actually hidden by a focus on the purely technological. They are: the increasingly abstract and alienated nature of economic value; the subsumption of all labour – material and intellectual – under systemic capital; and the convergence of formerly distinct spheres of analysis – the spheres of production, circulation, and consumption. This paper examines the implications of the knowledge economy from an historical materialist perspective. I synthesise the systemic views of Marx (1846/1972, 1875/1972, 1970, 1973, 1976, 1978, 1981), Adorno (1951/1974, 1964/1973, 1991; Horkheimer and Adorno, 1944/1998; Jarvis, 1998), and Bourdieu (1991, 1998) to argue for a language-focused approach to new media research, and suggest aspects of Marxist thought which might be useful in researching emergent socio-technical domains. I also identify specific categories in the Marxist tradition which may no longer be analytically useful for researching the effects of new media.
Abstract:
This chapter examines how a change in school leadership can successfully address competencies in complex situations and thus create a positive learning environment in which Indigenous students can excel in their learning rather than accept a culture that inhibits school improvement. Mathematics has long been an area that has failed to assist Indigenous students in improving their learning outcomes, as it is a Eurocentric subject (Rothbaum, Weisz, Pott, Miyake & Morelli, 2000; De Plevitz, 2007) and does not contextualize pedagogy with Indigenous culture and perspectives (Matthews, Cooper & Baturo, 2007). The chapter explores the work of a team of Indigenous and non-Indigenous academics from the YuMi Deadly Centre who are turning the tide on improving Indigenous mathematical outcomes in schools and in communities with high numbers of Aboriginal and Torres Strait Islander students.
Abstract:
This chapter will begin by considering some of the distinctive features of media as creative industries, including their assessment of risk and return on investment, team-based production, the management of creativity, the value chain of production, distribution and circulation, and the significance of intellectual property in their revenue strategies. It will then critically appraise three strategies to capture new markets and revenue streams in the context of the rise of the Internet, digital media and globally networked distribution. The three strategies to be considered are conglomeration, networking and globalization, and the focus will be on media giants such as News Corporation, Disney and Time-Warner. It will be argued that all three present considerable challenges in their application, and that digital media technologies are weakening rather than strengthening their capacity to control the global media environment. The chapter will conclude with consideration of some implications of this analysis for questions of media power.
Abstract:
We describe a scaling method for templating digital radiographs using conventional acetate templates, independent of template magnification and without the need for a calibration marker. The mean magnification factor for the radiology department was determined (119.8%, range 117%–123.4%). This fixed magnification factor was used to scale the radiographs by the method described. Thirty-two femoral heads on postoperative THR radiographs were then measured and compared to their actual size. The mean absolute accuracy was within 0.5% of actual head size (range 0–3%), with a mean absolute difference of 0.16 mm (range 0–1 mm, SD 0.26 mm). The Intraclass Correlation Coefficient (ICC) showed excellent reliability for both inter- and intraobserver measurements, with ICC scores of 0.993 (95% CI 0.988–0.996) for interobserver measurements and intraobserver measurements ranging between 0.990 and 0.993 (95% CI 0.980–0.997).
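The scaling arithmetic behind the method is straightforward. A minimal sketch, assuming the departmental mean magnification factor of 119.8% reported above (the function name and sample measurement are illustrative, not from the study):

```python
def true_size(measured_mm, magnification_pct=119.8):
    """Correct a measurement taken from a magnified digital radiograph.

    The image is larger than the anatomy by the fixed departmental
    magnification factor, so dividing by it recovers the true size.
    """
    return measured_mm * 100.0 / magnification_pct

# A femoral head measuring 33.5 mm on the radiograph (hypothetical value)
# corresponds to an actual head size of about:
print(round(true_size(33.5), 1))  # → 28.0
```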
Abstract:
This paper presents a key-based generic model for digital image watermarking. The model aims at addressing an identified gap in the literature by providing a basis for assessing different watermarking requirements in various digital image applications. We start with a formulation of a basic watermarking system, and define system inputs and outputs. We then proceed to incorporate the use of keys in the design of various system components. Using the model, we also define a few fundamental design and evaluation parameters. To demonstrate the significance of the proposed model, we provide an example of how it can be applied to formally define common attacks.
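As an illustration of the key-based embed/extract idea, the following is a generic least-significant-bit sketch, not the formal model proposed in the paper; all names and values are hypothetical:

```python
import random

def embed(pixels, bits, key):
    """Embed watermark bits into pseudo-randomly keyed pixel LSBs."""
    out = list(pixels)
    rng = random.Random(key)                     # key seeds the PRNG
    positions = rng.sample(range(len(out)), len(bits))
    for pos, bit in zip(positions, bits):
        out[pos] = (out[pos] & ~1) | bit         # overwrite the LSB
    return out

def extract(pixels, n_bits, key):
    """Recover the bits; only the correct key regenerates the positions."""
    rng = random.Random(key)
    positions = rng.sample(range(len(pixels)), n_bits)
    return [pixels[pos] & 1 for pos in positions]

cover = [120, 37, 200, 15, 88, 240, 63, 9]       # toy 8-pixel "image"
marked = embed(cover, [1, 0, 1], key=42)
print(extract(marked, 3, key=42))  # → [1, 0, 1]
```

The key plays the role the model assigns to it: it parameterises both embedding and extraction, so an attacker without the key cannot locate the watermarked samples.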
Abstract:
The Queensland University of Technology (QUT) allows the presentation of a thesis for the Degree of Doctor of Philosophy in the format of published or submitted papers, where such papers have been published, accepted or submitted during the period of candidature. This thesis is composed of seven published/submitted papers, of which one has been published, three have been accepted for publication and the other three are under review. This project is financially supported by an Australian Research Council (ARC) Discovery Grant, with the aim of proposing strategies for the performance control of Distributed Generation (DG) systems with digital estimation of power system signal parameters. Distributed Generation has recently been introduced as a new concept for the generation of power and the enhancement of conventionally produced electricity. The global warming issue calls for renewable energy resources in electricity production. Distributed generation based on solar energy (photovoltaic and solar thermal), wind, biomass and mini-hydro, along with the use of fuel cells and micro turbines, will gain substantial momentum in the near future. Technically, DG can be a viable solution to the issue of integrating renewable or non-conventional energy resources. Basically, DG sources can be connected to the local power system through power electronic devices, i.e. inverters or ac-ac converters. The interconnection of DG systems to the power system as a compensator or a power source with high-quality performance is the main aim of this study. Source and load unbalance, load non-linearity, interharmonic distortion, supply voltage distortion, distortion at the point of common coupling in weak-source cases, source current power factor, and synchronism of generated currents or voltages are the issues of concern. The interconnection of DG sources is carried out using power electronic switching devices, which inject high-frequency components alongside the desired current. 
Also, noise and harmonic distortions can impact the performance of the control strategies. To mitigate the negative effects of high-frequency, harmonic and noise distortion and achieve satisfactory performance of DG systems, new methods of signal parameter estimation are proposed in this thesis. These methods are based on processing the digital samples of power system signals. Thus, proposing advanced techniques for the digital estimation of signal parameters, and methods for the generation of DG reference currents using the estimates provided, is the targeted scope of this thesis. An introduction to this research – including a description of the research problem, the literature review and an account of the research progress linking the research papers – is presented in Chapter 1. One of the main parameters of a power system signal is its frequency. The Phasor Measurement (PM) technique is one of the renowned and advanced techniques used for the estimation of power system frequency. Chapter 2 presents an in-depth analysis of the PM technique to reveal its strengths and drawbacks. The analysis is followed by a new technique proposed to enhance the speed of the PM technique when the input signal is free of even-order harmonics. The novel techniques proposed in this thesis are compared comprehensively with the PM technique studied in Chapter 2. An algorithm based on the concept of Kalman filtering is proposed in Chapter 3. The algorithm is intended to estimate signal parameters such as amplitude, frequency and phase angle in online mode. The Kalman filter is modified to operate on the output signal of a Finite Impulse Response (FIR) filter designed as a plain summation. The frequency estimation unit is independent of the Kalman filter and uses the samples refined by the FIR filter. The frequency estimate is passed to the Kalman filter to be used in building the transition matrices. 
The initial settings for the modified Kalman filter are obtained through a trial-and-error exercise. Another algorithm, again based on the concept of Kalman filtering, is proposed in Chapter 4 for the estimation of signal parameters. The Kalman filter is also modified to operate on the output signal of the same FIR filter explained above. Nevertheless, the frequency estimation unit, unlike the one proposed in Chapter 3, is not segregated and interacts with the Kalman filter. The frequency estimate is given to the Kalman filter, and other parameters, such as the amplitudes and phase angles estimated by the Kalman filter, are passed to the frequency estimation unit. Chapter 5 proposes another algorithm based on the concept of Kalman filtering. This time, the state parameters are obtained through matrix arrangements in which the noise level is reduced on the sample vector. The purified state vector is used to obtain a new measurement vector for a basic Kalman filter. The Kalman filter used has a structure similar to a basic Kalman filter, except that the initial settings are computed through an extensive mathematical derivation with regard to the matrix arrangement utilized. Chapter 6 proposes another algorithm based on the concept of Kalman filtering, similar to that of Chapter 3. However, this time the initial settings required for the better performance of the modified Kalman filter are calculated instead of being guessed through trial-and-error exercises. The simulation results for the estimated signal parameters are enhanced due to the correct settings applied. Moreover, an enhanced Least Error Square (LES) technique is proposed to take over the estimation when a critical transient is detected in the input signal. In fact, some large, sudden changes in the parameters of the signal at these critical transients are not tracked very well by Kalman filtering. However, the proposed LES technique is found to be much faster in tracking these changes. 
Therefore, an appropriate combination of the LES technique and modified Kalman filtering is proposed in Chapter 6. Also, this time the ability of the proposed algorithm is verified on real data obtained from a prototype test object. Chapter 7 proposes another algorithm based on the concept of Kalman filtering, similar to those of Chapters 3 and 6. However, this time an optimal digital filter is designed instead of the simple summation FIR filter. New initial settings for the modified Kalman filter are calculated based on the coefficients of the digital filter applied. Also, the ability of the proposed algorithm is verified on real data obtained from a prototype test object. Chapter 8 uses the estimation algorithm proposed in Chapter 7 in the interconnection scheme of a DG to the power network. Robust estimates of the signal amplitudes and phase angles obtained by the estimation approach are used in the reference generation of the compensation scheme. Several simulation tests provided in this chapter show that the proposed scheme can handle source and load unbalance, load non-linearity, interharmonic distortion, supply voltage distortion, and synchronism of generated currents or voltages very well. The proposed compensation scheme also prevents voltage distortion at the point of common coupling in weak-source cases, balances the source currents, and brings the supply-side power factor to a desired value.
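The underlying idea of Kalman-based estimation of power signal amplitude and phase can be sketched as follows. This is a plain linear Kalman filter that assumes the frequency is already known, not the modified filters of Chapters 3-7; the noise setting, sampling rate and test signal are all illustrative:

```python
import math

def estimate_amplitude_phase(samples, freq_hz, fs_hz, r=0.01):
    """Linear Kalman filter for z[n] = a*cos(w n) + b*sin(w n) + noise.

    The state [a, b] is constant, so the prediction step is the identity
    and only the measurement update is needed. Returns (amplitude, phase).
    """
    w = 2 * math.pi * freq_hz / fs_hz
    a, b = 0.0, 0.0                      # state estimate
    p = [[1.0, 0.0], [0.0, 1.0]]         # state covariance
    for n, z in enumerate(samples):
        h = (math.cos(w * n), math.sin(w * n))       # measurement row
        ph = (p[0][0]*h[0] + p[0][1]*h[1], p[1][0]*h[0] + p[1][1]*h[1])
        s = h[0]*ph[0] + h[1]*ph[1] + r              # innovation variance
        k = (ph[0]/s, ph[1]/s)                       # Kalman gain
        innov = z - (h[0]*a + h[1]*b)
        a += k[0]*innov
        b += k[1]*innov
        # Covariance update P = (I - K H) P
        p = [[p[0][0]-k[0]*ph[0], p[0][1]-k[0]*ph[1]],
             [p[1][0]-k[1]*ph[0], p[1][1]-k[1]*ph[1]]]
    return math.hypot(a, b), math.atan2(-b, a)

# 50 Hz test signal, amplitude 1.5, phase 0.3 rad, sampled at 1 kHz
fs, f = 1000.0, 50.0
z = [1.5 * math.cos(2*math.pi*f*n/fs + 0.3) for n in range(200)]
a_est, ph_est = estimate_amplitude_phase(z, f, fs)
print(round(a_est, 3), round(ph_est, 3))  # → 1.5 0.3
```

Estimating the frequency itself jointly with amplitude and phase is what makes the problem non-linear, which is why the thesis pairs the Kalman filter with separate or interacting frequency estimation units.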
Abstract:
This thesis explores the business environment for self-publishing musicians at the end of the 20th century and the start of the 21st century from theoretical and empirical standpoints. The exploration begins by asking three research questions: what are the factors affecting the sustainability of an Independent music business; how many of those factors can be directly influenced by an Independent musician in the day-to-day operations of their musical enterprise; and how can those factors be best manipulated to maximise the benefit generated from digital music assets? It answers these questions by considering the nature of value in the music business in light of theories of political economy, then quantitative and qualitative examinations of the nature of participation in the music business, and then auto-ethnographic approaches to the application of two technologically enabled tools available to Independent musicians. By analyzing the results of five different examinations of the topic it answers each research question with reference to four sets of recurring issues that affect the operations of a 21st century music business: the musicians’ personal characteristics, their ability to address their business’s informational needs; their ability to manage the relationships upon which their business depends; and their ability to resolve the remaining technological problems that confront them. It discusses ways in which Independent self-publishing musicians can and cannot deal with these four issues on a day-to-day basis and highlights aspects for which technological solutions do not exist as well as ways in which technology is not as effective as has been claimed. It then presents a self-critique and proposes some directions for further study before concluding by suggesting some common features of 21st century Independent music businesses. This thesis makes three contributions to knowledge. 
First, it provides a new understanding of the sources of musical value, shows how this explains changes in the music industries over the past 30 years, and provides a framework for predicting future developments in those industries. Second, it shows how the technological discontinuity that has occurred around the start of the 21st century has and has not affected the production and distribution of digital cultural artefacts and thus the attitudes, approaches, and business prospects of Independent musicians. Third, it argues for new understandings of two methods by which self-publishing musicians can grow a business using production methods that are only beginning to be more broadly understood: home studio recording and fan-sourced production. Developed from the perspective of working musicians themselves, this thesis identifies four sets of issues that determine the probable success of musicians’ efforts to adopt new technologies to capture the value of the musicians’ creativity and thereby foster growth that will sustain an Independent music business in the 21st century.
Abstract:
Virtual prototyping has emerged as a new technology to replace existing physical prototypes for product evaluation, which are costly and time-consuming to manufacture. Virtualization technology allows engineers and ergonomists to perform virtual builds and different ergonomic analyses on a product. Digital Human Modelling (DHM) software packages such as Siemens Jack often integrate with CAD systems to provide a virtual environment which allows investigation of operator and product compatibility. Although the integration between DHM and CAD systems allows for the ergonomic analysis of anthropometric design, human musculoskeletal multi-body modelling software packages such as the AnyBody Modelling System (AMS) are required to support physiologic design. They provide muscular force analysis, estimate human musculoskeletal strain and help address human comfort assessment. However, the independent characteristics of the modelling systems Jack and AMS constrain engineers and ergonomists in conducting a complete ergonomic analysis. AMS is a stand-alone programming system without the capability to integrate into CAD environments. Jack provides CAD-integrated human-in-the-loop capability, but without considering musculoskeletal activity. Consequently, engineers and ergonomists need to perform many redundant tasks during product and process design. Moreover, the existing biomechanical model in AMS uses a simplified estimation of body proportions, based on a scaling approach derived from segment mass ratios. This is insufficient to represent user populations in an anthropometrically correct manner in AMS. In addition, sub-models are derived from different sources of morphologic data and are therefore anthropometrically inconsistent. Therefore, an interface between the biomechanical AMS and the virtual human model Jack was developed to integrate a musculoskeletal simulation with Jack posture modelling. 
This interface provides direct data exchange between the two man-models, based on a consistent data structure and a common body model. The study assesses the kinematic and biomechanical model characteristics of Jack and AMS, and defines an appropriate biomechanical model. The information content for interfacing the two systems is defined and a protocol is identified. The interface program is developed and implemented in Tcl and Jack-script (Python), and interacts with the AMS console application to operate AMS procedures.
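As a purely illustrative sketch of what a consistent exchange record between two man-models might look like (the field names and values below are assumptions for illustration, not the actual Jack/AMS data structure or protocol):

```python
import json
from dataclasses import asdict, dataclass, field

@dataclass
class BodyModel:
    """Hypothetical common body model shared between a posture tool
    and a musculoskeletal solver, so neither needs re-entry of data."""
    stature_mm: float
    body_mass_kg: float
    segment_lengths_mm: dict = field(default_factory=dict)
    joint_angles_deg: dict = field(default_factory=dict)

def to_exchange_record(model: BodyModel) -> str:
    """Serialise the shared model so either system can consume it."""
    return json.dumps(asdict(model), sort_keys=True)

posture = BodyModel(
    stature_mm=1750.0,
    body_mass_kg=75.0,
    segment_lengths_mm={"forearm": 265.0, "upper_arm": 315.0},
    joint_angles_deg={"elbow_flexion": 90.0, "shoulder_abduction": 30.0},
)
record = to_exchange_record(posture)
print(json.loads(record)["joint_angles_deg"]["elbow_flexion"])  # → 90.0
```

The point of such a shared structure is the one made in the abstract: both systems read the same anthropometry and posture, avoiding redundant modelling work and inconsistent sub-models.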
Abstract:
This report discusses the findings of a case study into "CADD, BIM and IPD" undertaken as a part of the retrospective analysis component of Sustainable Built Environment National Research Centre (SBEnrc) Project 2.7, Leveraging R&D Investment for the Australian Built Environment. This case study investigated the evolution that has taken place in the Queensland Department of Public Works Division of Project Services during the last 20 years: from the initial implementation of computer-aided design and documentation (CADD); to experimentation with building information modelling (BIM) from the mid-2000s; to embedding integrated practice (IP); to current steps towards integrated project delivery (IPD), with the integration of contractors in the design/delivery process. This case study should be read in conjunction with Part 1 of this suite of reports.
Abstract:
This paper presents a feasible spatial collision avoidance approach for fixed-wing unmanned aerial vehicles (UAVs). The proposed strategy aims to achieve a desired relative bearing in the horizontal plane and a desired relative elevation in the vertical plane so that the host aircraft is able to avoid collision with the intruder aircraft in 3D. The host aircraft follows a desired trajectory during the collision avoidance course and resumes the pre-arranged trajectory after the collision is avoided. The stopping condition for the approach is determined, in terms of measured heading, for the host aircraft to trigger an evasive maneuver. A switching controller is designed to implement the spatial collision avoidance strategy. Simulation results demonstrate that the proposed approach can effectively avoid spatial collisions, making it suitable for integration into UAV flight control systems.
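The relative bearing and relative elevation that the strategy targets are simple geometric quantities. A hedged sketch of how they could be computed (this is standard geometry, not the paper's switching controller; the coordinate convention and names are assumptions):

```python
import math

def relative_bearing_elevation(host, intruder, host_heading_deg):
    """Bearing of the intruder relative to the host's heading (horizontal
    plane) and relative elevation angle (vertical plane), in degrees.

    Positions are (x_east, y_north, z_up) in metres; heading is measured
    clockwise from north.
    """
    dx, dy, dz = (i - h for i, h in zip(intruder, host))
    bearing = math.degrees(math.atan2(dx, dy))            # from north
    rel_bearing = (bearing - host_heading_deg + 180.0) % 360.0 - 180.0
    elevation = math.degrees(math.atan2(dz, math.hypot(dx, dy)))
    return rel_bearing, elevation

# Intruder 1000 m due east of, and 100 m above, a north-flying host
rb, el = relative_bearing_elevation((0, 0, 0), (1000, 0, 100), 0.0)
print(round(rb, 1), round(el, 1))  # → 90.0 5.7
```

A controller of the kind described would drive these two angles toward their desired avoidance values, then switch back once the stopping condition on measured heading is met.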
Abstract:
This chapter describes a university/high school partnership focused on digital storytelling. It also explains the multi-stage process used to establish this successful partnership and project. The authors discuss the central role that technology played in developing this university/high school partnership, a collaboration that extended the impact of a digital storytelling project to reach high school students, university students, educators, high school administrators, and the local community. Valuing a reflective process that can lead to the creation of a powerful final product, the authors describe the impact of digital storytelling on multiple stakeholders, including the 13 university students and 33 culturally and linguistically diverse high school youth who participated during the fall of 2009. In addition, the chapter includes reflections from university and high school student participants expressed during focus groups conducted throughout the project. While most participants had a positive experience with the project, complications with the technology component often caused frustrations and additional challenges. Goals for sharing this project are to critically evaluate digital storytelling, describe lessons learned, and recommend good practices for others working within a similar context or with parallel goals.
Abstract:
The Moon appears to be much larger close to the horizon than when it is higher in the sky. This is called the ‘Moon Illusion’, since the observed size of the Moon is not actually larger when the Moon is just above the horizon. This article describes a technique for verifying that the observed size of the Moon is not larger on the horizon. The technique can be easily performed in a high school teaching environment. Moreover, the technique demonstrates the surprising fact that the observed size of the Moon is actually smaller on the horizon due to atmospheric refraction. For the purposes of this paper, several images of the Moon were taken with the Moon close to the horizon and close to the zenith. The images were processed using a free program called ImageJ. The Moon was found to be 5.73 ± 0.04% smaller in area on the horizon than at the zenith.
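The final comparison behind the reported figure is simple arithmetic on the two measured areas. A sketch with hypothetical pixel counts (not the paper's measured data):

```python
def percent_smaller(area_horizon_px, area_zenith_px):
    """Percentage by which the horizon Moon's measured area is smaller
    than the zenith Moon's, from pixel areas measured in e.g. ImageJ."""
    return 100.0 * (area_zenith_px - area_horizon_px) / area_zenith_px

# Hypothetical pixel areas measured from two images
print(round(percent_smaller(47150, 50000), 2))  # → 5.7
```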