832 results for Deployment of Federal Institutes
Abstract:
English has long been the subject where print text has reigned supreme. Increasingly, however, in our networked and electronically connected world, we can use digital technologies to create and respond to texts studied in English classrooms. The current approach to English includes the concept of 'multiliteracies', which suggests that print texts alone are 'necessary but not sufficient' (E.Q, 2000) and that literacy includes the flexible and sustainable mastery of a repertoire of practices, including the decoding and deployment of media technologies (E.Q, 2000). This has become more possible in Australia as secondary students gain increasing access to computers and online platforms at home and at school. With the advent of Web 2.0, with its interactive platforms and free media-making software, teachers and students can access information and emerging online literature in English covering a range of text types and new forms for authentic audiences and contexts. This chapter is concerned with responding to literary and mediated texts through the use of technologies. If we remain open to trying out new textual forms and see our 'digital native' students (Prensky, 2007) as our best resource, we can move beyond technophobia, become 'digital travellers' ourselves and embrace new digital forms in our classrooms.
Abstract:
Jonzi D, one of the leading Hip Hop voices in the UK, creates contemporary theatrical works that merge dance, street art, original scored music and contemporary rap poetry to create theatrical events that expand a thriving sense of a Hip Hop nation with citizens in the UK, throughout southern Africa and the rest of the world. In recent years Hip Hop has evolved as a performance genre in and of itself that not only borrows from other forms but now vitally contributes back to the body of contemporary practice in the performing arts. As part of this work Jonzi's company, Jonzi D Productions, is committed to creating and touring original Hip Hop theatre that promotes the continuing development and awareness of a nation with its own language, culture and currency that exists without borders. Through the deployment of a universal voice from the local streets of Johannesburg and the East End of London, Jonzi D creates a form of highly energised performance that elevates Hip Hop as a great democratiser between the highly developed global and the under-resourced local. It is the staging of this democratised and technologised future (and present), and the associated deprogramming and translation of the artist's particular filmic vision to the stage, that pose the greatest challenge for the scenographer working with Jonzi and his company, and that this discussion will explore. This paper interrogates not only how a scenographic strategy can support the existence of this work but also how the scenographer, as outsider, can enter and influence this nation.
Abstract:
This paper discusses how internet services can be brought one step closer to rural, dispersed communities by improving wireless broadband communications in those areas. To accomplish this objective we describe the use of an innovative Multi-User Single-Antenna MIMO (MUSA-MIMO) technology using the spectrum currently allocated to analogue TV. MUSA-MIMO technology can be considered a special case of MIMO technology, which is beneficial when provisioning reliable and high-speed communication channels. This paper describes channel modelling techniques to characterise the MUSA-MIMO system, enabling effective deployment of this technology. In particular, it describes the development of a novel MUSA-MIMO channel model that takes into account temporal variations in the rural wireless environment. This can be considered a novel approach, tailor-made for rural Australia, for provisioning efficient wireless broadband communications.
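Illustrative sketch only (this is not the authors' MUSA-MIMO model): one common way to simulate a temporally varying MIMO channel is a Rayleigh-fading channel matrix whose step-to-step correlation follows a first-order Gauss-Markov recursion with a Jakes-style Doppler parameter. The function name, array dimensions and Doppler/sampling values below are assumptions chosen for the example.

```python
import numpy as np
from scipy.special import j0

def simulate_time_varying_mimo(n_rx=4, n_tx=4, n_steps=100,
                               doppler_hz=5.0, sample_period=1e-3,
                               rng=None):
    """Temporally correlated Rayleigh-fading MIMO channel (illustrative only).

    A first-order Gauss-Markov recursion approximates temporal correlation;
    the step-to-step correlation rho follows Jakes' model via the Bessel
    function J0(2*pi*fd*T).
    """
    rng = np.random.default_rng() if rng is None else rng
    rho = j0(2 * np.pi * doppler_hz * sample_period)   # correlation between steps

    def complex_gaussian():
        return (rng.standard_normal((n_rx, n_tx)) +
                1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)

    h = complex_gaussian()          # initial i.i.d. Rayleigh-fading channel matrix
    channels = [h]
    for _ in range(n_steps - 1):
        h = rho * h + np.sqrt(1 - rho ** 2) * complex_gaussian()
        channels.append(h)
    return np.stack(channels)       # shape: (n_steps, n_rx, n_tx)

H = simulate_time_varying_mimo()
print(H.shape)                      # (100, 4, 4)
```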
Abstract:
Acoustically, car cabins are extremely noisy and, as a consequence, existing audio-only speech recognition systems, for voice-based control of vehicle functions such as the GPS-based navigator, perform poorly. Audio-only speech recognition systems fail to make use of the visual modality of speech (e.g., lip movements). As the visual modality is immune to acoustic noise, utilising this visual information in conjunction with an audio-only speech recognition system has the potential to improve the accuracy of the system. The field of recognising speech using both auditory and visual inputs is known as Audio Visual Speech Recognition (AVSR). Research in the AVSR field has been ongoing for the past twenty-five years, with notable progress being made. However, the practical deployment of AVSR systems for use in a variety of real-world applications has not yet emerged. The main reason is that most research to date has neglected to address variabilities in the visual domain, such as illumination and viewpoint, in the design of the visual front-end of the AVSR system. In this paper we present an AVSR system in a real-world car environment using the AVICAR database [1], a publicly available in-car database, and we show that using visual speech in conjunction with the audio modality improves the robustness and effectiveness of voice-only recognition systems in car cabin environments.
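For illustration only (this is not the system described in the paper): a minimal sketch of decision-level audio-visual fusion, in which per-word scores from an audio-only and a visual-only recogniser are combined with a weight that would normally be lowered as the cabin gets noisier. The function and the toy scores are assumptions.

```python
import numpy as np

def late_fusion_avsr(audio_log_likelihoods, visual_log_likelihoods, audio_weight):
    """Weighted decision-level fusion of audio-only and visual-only scores.

    audio_weight in [0, 1] would typically be reduced as acoustic noise
    (e.g. in a car cabin) increases; here it is simply supplied by the caller.
    Returns the index of the best-scoring word and the fused scores.
    """
    lam = float(np.clip(audio_weight, 0.0, 1.0))
    fused = lam * np.asarray(audio_log_likelihoods, dtype=float) + \
            (1.0 - lam) * np.asarray(visual_log_likelihoods, dtype=float)
    return int(np.argmax(fused)), fused

# Toy example with three candidate words: noisy audio weakly favours word 2,
# the visual (lip) stream clearly favours word 0; with a low audio weight the
# visual stream dominates the decision.
best, scores = late_fusion_avsr(audio_log_likelihoods=[-4.0, -3.5, -3.2],
                                visual_log_likelihoods=[-1.0, -2.5, -3.0],
                                audio_weight=0.3)
print(best, scores)   # 0, fused scores per word
```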
Abstract:
Emerging data streaming applications in Wireless Sensor Networks require reliable and energy-efficient transport protocols. Our recent Wireless Sensor Network deployment in the Burdekin delta, Australia, for water monitoring [T. Le Dinh, W. Hu, P. Sikka, P. Corke, L. Overs, S. Brosnan, Design and deployment of a remote robust sensor network: experiences from an outdoor water quality monitoring network, in: Second IEEE Workshop on Practical Issues in Building Sensor Network Applications (SenseApp 2007), Dublin, Ireland, 2007] is one such example. This application involves streaming sensed data such as pressure, water flow rate, and salinity periodically from many scattered sensors to the sink node, which in turn relays them via an IP network to a remote site for archiving, processing, and presentation. While latency is not a primary concern in this class of application (the sampling rate is usually in terms of minutes or hours), energy-efficiency is. Continuous long-term operation and reliable delivery of the sensed data to the sink are also desirable. This paper proposes ERTP, an Energy-efficient and Reliable Transport Protocol for Wireless Sensor Networks. ERTP is designed for data streaming applications, in which sensor readings are transmitted from one or more sensor sources to a base station (or sink). ERTP uses a statistical reliability metric which ensures that the number of data packets delivered to the sink exceeds the defined threshold. Our extensive discrete event simulations and experimental evaluations show that ERTP is significantly more energy-efficient than current approaches, reducing energy consumption by more than 45%. Consequently, sensor nodes are more energy-efficient and the lifespan of the unattended WSN is increased.
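As a hedged illustration of the kind of statistical reliability calculation such a protocol can rest on (this is not ERTP's actual formulation), the sketch below estimates how many transmission attempts each hop needs so that a multi-hop path meets a target end-to-end delivery probability, assuming independent and identical per-hop loss.

```python
import math

def retransmissions_per_hop(per_hop_loss, n_hops, target_reliability):
    """Smallest number of transmission attempts per hop such that
    (1 - per_hop_loss ** r) ** n_hops >= target_reliability,
    assuming independent, identically distributed losses on every hop.
    """
    per_hop_target = target_reliability ** (1.0 / n_hops)
    r = math.log(1.0 - per_hop_target) / math.log(per_hop_loss)
    return max(1, math.ceil(r))

# Example: 10% per-hop loss over 6 hops, with 95% of readings required
# to reach the sink.
print(retransmissions_per_hop(0.10, 6, 0.95))   # 3 attempts per hop
```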
Abstract:
This paper proposes a security architecture for the basic cross-indexing systems emerging as foundational structures in current health information systems. In these systems, unique identifiers are issued to healthcare providers and consumers. In most cases, such numbering schemes are national in scope and must therefore be used, via an indexing system, to identify records contained in pre-existing local, regional or national health information systems. Most large-scale electronic health record systems envisage that such correlation between national healthcare identifiers and pre-existing identifiers will be performed by some centrally administered cross-referencing, or index, system. This paper is concerned with the security architecture for such indexing servers and the manner in which they interface with pre-existing health systems (including both workstations and servers). The paper proposes two structures required to achieve the goal of national-scale, secure exchange of electronic health information: (a) the employment of high-trust computer systems to perform the indexing function, and (b) the development and deployment of an appropriate high-trust interface module, a Healthcare Interface Processor (HIP), to be integrated into the connected workstations or servers of healthcare service providers. This proposed architecture is specifically oriented toward the requirements identified in the Connectivity Architecture for Australia's e-health scheme as outlined by NEHTA and the national e-health strategy released by the Australian Health Ministers.
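Purely to make the cross-indexing idea concrete (the class name, identifier formats and access path below are hypothetical, not NEHTA's or the paper's design): a minimal sketch of an index service that maps a national identifier to record identifiers held by pre-existing local systems. In the proposed architecture such lookups would only be reachable through a high-trust interface module such as the HIP.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class CrossIndexService:
    """Hypothetical cross-referencing index: maps a national healthcare
    identifier to the record identifiers held by pre-existing local systems.
    """
    # national_id -> {local_system_id: local_record_id}
    index: Dict[str, Dict[str, str]] = field(default_factory=dict)

    def register(self, national_id: str, system_id: str, local_record_id: str) -> None:
        self.index.setdefault(national_id, {})[system_id] = local_record_id

    def resolve(self, national_id: str, system_id: str) -> Optional[str]:
        # In the proposed architecture this lookup would only be reachable
        # through an authenticated, high-trust interface module (the HIP).
        return self.index.get(national_id, {}).get(system_id)

service = CrossIndexService()
service.register("NAT-000123", "hospital-A-PAS", "MRN-004521")
print(service.resolve("NAT-000123", "hospital-A-PAS"))   # MRN-004521
```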
Abstract:
While close-talking microphones give the best signal quality and produce the highest accuracy from current Automatic Speech Recognition (ASR) systems, the speech signal enhanced by a microphone array has been shown to be an effective alternative in a noisy environment. The use of microphone arrays, in contrast to close-talking microphones, alleviates the feeling of discomfort and distraction for the user. For this reason, microphone arrays are popular and have been used in a wide range of applications such as teleconferencing, hearing aids, speaker tracking, and as the front-end to speech recognition systems. With advances in sensor and sensor network technology, there is considerable potential for applications that employ ad-hoc networks of microphone-equipped devices collaboratively as a virtual microphone array. By allowing such devices to be distributed throughout the users' environment, the microphone positions are no longer constrained to traditional fixed geometrical arrangements. This flexibility in data acquisition allows different audio scenes to be captured to give a complete picture of the working environment. In such ad-hoc deployments of microphone sensors, however, the lack of information about the location of devices and active speakers poses technical challenges for array signal processing algorithms, which must be addressed to allow deployment in real-world applications. While not an ad-hoc sensor network, conditions approaching this have in effect been imposed in recent National Institute of Standards and Technology (NIST) ASR evaluations on distant microphone recordings of meetings. The NIST evaluation data come from multiple sites, each with different and often loosely specified distant microphone configurations. This research investigates how microphone array methods can be applied to ad-hoc microphone arrays. A particular focus is on devising methods that are robust to unknown microphone placements in order to improve the overall speech quality and recognition performance provided by the beamforming algorithms. In ad-hoc situations, microphone positions and likely source locations are not known and beamforming must be achieved blindly. There are two general approaches that can be employed to blindly estimate the steering vector for beamforming. The first is direct estimation without regard to the microphone and source locations. The alternative is to first determine the unknown microphone positions through array calibration methods and then to use the traditional geometrical formulation for the steering vector. Building on these two major approaches, this thesis proposes a novel clustered approach which clusters the microphones and selects clusters based on their proximity to the speaker. Novel experiments demonstrate that the proposed method of automatically selecting clusters of microphones (i.e., a subarray) located close both to each other and to the desired speech source may in fact provide more robust speech enhancement and recognition than the full array.
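As a hedged illustration of the "direct estimation" route mentioned above (a textbook blind delay-and-sum sketch, not the thesis's clustered method), the code below time-aligns an ad-hoc set of channels by cross-correlating each against a reference channel, so no microphone or source geometry is required. The function name, the 20 ms delay bound and the toy broadband source are assumptions.

```python
import numpy as np

def delay_and_sum_blind(signals, fs, reference=0, max_delay_s=0.02):
    """Blind delay-and-sum over an ad-hoc set of microphone channels.

    Per-channel delays are estimated by cross-correlating each channel with a
    reference channel, so no microphone or source geometry is needed.
    signals has shape (n_mics, n_samples); returns the enhanced signal.
    """
    signals = np.asarray(signals, dtype=float)
    n_mics, n_samples = signals.shape
    max_lag = int(max_delay_s * fs)
    centre = n_samples - 1                      # zero-lag index of a 'full' correlation
    ref = signals[reference]
    aligned = np.zeros_like(signals)
    for m in range(n_mics):
        corr = np.correlate(signals[m], ref, mode="full")
        window = corr[centre - max_lag: centre + max_lag + 1]
        lag = int(np.argmax(window)) - max_lag  # samples by which channel m lags the reference
        aligned[m] = np.roll(signals[m], -lag)  # undo the estimated delay
    return aligned.mean(axis=0)                 # equal-weight sum of aligned channels

# Toy usage: three delayed copies of a broadband source are re-aligned and summed.
fs = 16000
rng = np.random.default_rng(1)
source = rng.standard_normal(4000)
mics = np.stack([np.roll(source, d) for d in (0, 12, 30)])
enhanced = delay_and_sum_blind(mics, fs)
print(np.allclose(enhanced, source))   # True: alignment recovers the source
```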
Abstract:
Uninhabited aerial vehicles (UAVs) are a cutting-edge technology at the forefront of aviation/aerospace research and development worldwide. Many consider their current military and defence applications as just a token of their enormous potential. Unlocking and fully exploiting this potential will see UAVs in a multitude of civilian applications and routinely operating alongside piloted aircraft. The key to realising the full potential of UAVs lies in addressing a host of regulatory, public relations, and technological challenges never encountered before. Aircraft collision avoidance is considered to be one of the most important issues to be addressed, given its safety-critical nature. The collision avoidance problem can be roughly organised into three areas: 1) Sense; 2) Detect; and 3) Avoid. Sensing is concerned with obtaining accurate and reliable information about other aircraft in the air; detection involves identifying potential collision threats based on available information; avoidance deals with the formulation and execution of appropriate manoeuvres to maintain safe separation. This thesis tackles the detection aspect of collision avoidance, via the development of a target detection algorithm that is capable of real-time operation onboard a UAV platform. One of the key challenges of the detection problem is the need to provide early warning. This translates to detecting potential threats whilst they are still far away, when their presence is likely to be obscured and hidden by noise. Another important consideration is the choice of sensors to capture target information, which has implications for the design and practical implementation of the detection algorithm. The main contributions of the thesis are: 1) the proposal of a dim target detection algorithm combining image morphology and hidden Markov model (HMM) filtering approaches; 2) the novel use of relative entropy rate (RER) concepts for HMM filter design; 3) the characterisation of algorithm detection performance based on simulated data as well as real in-flight target image data; and 4) the demonstration of the proposed algorithm's capacity for real-time target detection. We also consider the extension of HMM filtering techniques and the application of RER concepts to target heading angle estimation. In this thesis we propose a computer-vision based detection solution, due to the commercial-off-the-shelf (COTS) availability of camera hardware and the hardware's relatively low cost, power, and size requirements. The proposed target detection algorithm adopts a two-stage processing paradigm that begins with an image enhancement pre-processing stage followed by a track-before-detect (TBD) temporal processing stage that has been shown to be effective in dim target detection. We compare the performance of two candidate morphological filters for the image pre-processing stage, and propose a multiple hidden Markov model (MHMM) filter for the TBD temporal processing stage. The role of the morphological pre-processing stage is to exploit the spatial features of potential collision threats, while the MHMM filter serves to exploit the temporal characteristics or dynamics. The problem of optimising our proposed MHMM filter has been examined in detail. Our investigation has produced a novel design process for the MHMM filter that exploits information theory and entropy-related concepts. The filter design process is posed as a mini-max optimisation problem based on a joint RER cost criterion.
We provide proof that this joint RER cost criterion provides a bound on the conditional mean estimate (CME) performance of our MHMM filter, and this in turn establishes a strong theoretical basis connecting our filter design process to filter performance. Through this connection we can intelligently compare and optimise candidate filter models at the design stage, rather than having to resort to time-consuming Monte Carlo simulations to gauge the relative performance of candidate designs. Moreover, the underlying entropy concepts are not constrained to any particular model type. This suggests that the RER concepts established here may be generalised to provide a useful design criterion for multiple-model filtering approaches outside the class of HMM filters. In this thesis we also evaluate the performance of our proposed target detection algorithm under realistic operating conditions, and give consideration to the practical deployment of the detection algorithm onboard a UAV platform. Two fixed-wing UAVs were engaged to recreate various collision-course scenarios to capture highly realistic vision (from an onboard camera perspective) of the moments leading up to a collision. Based on this collected data, our proposed detection approach was able to detect targets out to distances ranging from about 400 m to 900 m. These distances (with some assumptions about closing speeds and aircraft trajectories) translate to an advance warning ahead of impact that approaches the 12.5 second response time recommended for human pilots. Furthermore, readily available graphics processing unit (GPU) based hardware is exploited for its parallel computing capabilities to demonstrate the practical feasibility of the proposed target detection algorithm. A prototype hardware-in-the-loop system has been found to be capable of achieving data processing rates sufficient for real-time operation. There is also scope for further improvement in performance through code optimisations. Overall, our proposed image-based target detection algorithm offers UAVs a cost-effective real-time target detection capability that is a step forward in addressing the collision avoidance issue that is currently one of the most significant obstacles preventing widespread civilian applications of uninhabited aircraft. We also highlight that the algorithm development process has led to the discovery of a powerful multiple HMM filtering approach and a novel RER-based multiple filter design process. The utility of our multiple HMM filtering approach and RER concepts, however, extends beyond the target detection problem. This is demonstrated by our application of HMM filters and RER concepts to a heading angle estimation problem.
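To make the two-stage idea concrete (morphological spatial enhancement followed by temporal track-before-detect), here is a deliberately simplified sketch: a close-minus-open (CMO) filter highlights small point-like features, and an exponential evidence accumulator stands in for the thesis's multiple-HMM temporal filter. Function names, the structuring-element size and the toy target are assumptions, not the thesis's implementation.

```python
import numpy as np
from scipy import ndimage

def enhance_dim_targets(frame, structure_size=5):
    """Close-minus-open (CMO) morphological enhancement: small point-like
    features stand out while smooth background structure is suppressed.
    """
    footprint = np.ones((structure_size, structure_size))
    closed = ndimage.grey_closing(frame, footprint=footprint)
    opened = ndimage.grey_opening(frame, footprint=footprint)
    return closed - opened

def temporal_accumulate(enhanced_frames, forgetting=0.8):
    """Greatly simplified track-before-detect: exponentially accumulate the
    enhanced per-pixel evidence so persistent targets grow while transient
    noise averages out. (A stand-in for the thesis's HMM-based filtering.)
    """
    score = np.zeros_like(enhanced_frames[0], dtype=float)
    for frame in enhanced_frames:
        score = forgetting * score + (1.0 - forgetting) * frame
    return score

# Toy usage: a dim (3-sigma) target buried in Gaussian noise.
rng = np.random.default_rng(0)
frames = [rng.normal(0.0, 1.0, (64, 64)) for _ in range(20)]
for f in frames:
    f[32, 40] += 3.0                      # persistent dim target at (32, 40)
scores = temporal_accumulate([enhance_dim_targets(f) for f in frames])
print(np.unravel_index(np.argmax(scores), scores.shape))   # expected near (32, 40)
```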
Abstract:
Keyword Spotting is the task of detecting keywords of interest within continuous speech. The applications of this technology range from call centre dialogue systems to covert speech surveillance devices. Keyword spotting is particularly well suited to data mining tasks such as real-time keyword monitoring and unrestricted vocabulary audio document indexing. However, to date, many keyword spotting approaches have suffered from poor detection rates, high false alarm rates, or slow execution times, thus reducing their commercial viability. This work investigates the application of keyword spotting to data mining tasks. The thesis makes a number of major contributions to the field of keyword spotting. The first major contribution is the development of a novel keyword verification method named Cohort Word Verification. This method combines high-level linguistic information with cohort-based verification techniques to obtain dramatic improvements in verification performance, in particular for the problematic short-duration target word class. The second major contribution is the development of a novel audio document indexing technique named Dynamic Match Lattice Spotting. This technique augments lattice-based audio indexing principles with dynamic sequence matching techniques to provide robustness to erroneous lattice realisations. The resulting algorithm obtains significant improvement in detection rate over lattice-based audio document indexing while still maintaining extremely fast search speeds. The third major contribution is the study of multiple verifier fusion for the task of keyword verification. The reported experiments demonstrate that substantial improvements in verification performance can be obtained through the fusion of multiple keyword verifiers. The research focuses on combinations of speech background model based verifiers and cohort word verifiers. The final major contribution is a comprehensive study of the effects of limited training data for keyword spotting. This study is performed with consideration as to how these effects impact the immediate development and deployment of speech technologies for non-English languages.
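As an illustrative sketch of the dynamic sequence matching idea behind lattice-based spotting (uniform edit costs are an assumption here; the thesis's cost model and lattice handling are more sophisticated), the code below scores candidate phone sequences against a target keyword's phone sequence and keeps those within a match threshold.

```python
def phone_edit_distance(target, hypothesis, sub_cost=1.0, ins_cost=1.0, del_cost=1.0):
    """Dynamic-programming edit distance between two phone sequences.
    Costs here are uniform and purely illustrative.
    """
    n, m = len(target), len(hypothesis)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i * del_cost
    for j in range(1, m + 1):
        d[0][j] = j * ins_cost
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = d[i - 1][j - 1] + (0.0 if target[i - 1] == hypothesis[j - 1] else sub_cost)
            d[i][j] = min(match, d[i - 1][j] + del_cost, d[i][j - 1] + ins_cost)
    return d[n][m]

def spot_keyword(target_phones, lattice_phone_sequences, threshold=1.0):
    """Return the lattice hypotheses whose dynamic-match cost is acceptable."""
    return [seq for seq in lattice_phone_sequences
            if phone_edit_distance(target_phones, seq) <= threshold]

# Toy usage: 'data' /d ey t ax/ is still spotted despite one substituted phone.
hits = spot_keyword(["d", "ey", "t", "ax"],
                    [["d", "ae", "t", "ax"], ["k", "ae", "t"]])
print(hits)   # [['d', 'ae', 't', 'ax']]
```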
Abstract:
Aim: This paper is a report of a study conducted to validate an instrument for measuring advanced practice nursing role delineation in an international contemporary health service context using the Delphi technique. Background: Although most countries now have clear definitions and competency standards for nurse practitioners, no such clarity exists for many advanced practice nurse roles, leaving healthcare providers uncertain whether their service needs can or should be met by an advanced practice nurse or a nurse practitioner. The validation of a tool depicting advanced practice nursing is essential for the appropriate deployment of advanced practice nurses. This paper is the second in a three-phase study to develop an operational framework for assigning advanced practice nursing roles. Method: An expert panel was established to review the activities in the Strong Model of Advanced Practice Role Delineation tool. Using the Delphi technique, data were collected via an online survey through a series of iterative rounds in 2008. Feedback and statistical summaries of responses were distributed to the panel until the 75% consensus cut-off was obtained. Results: After three rounds and modification of five activities, consensus was obtained for validation of the content of this tool. Conclusion: The Strong Model of Advanced Practice Role Delineation tool is valid for depicting the dimensions of practice of the advanced practice role in an international contemporary health service context, thereby having the potential to optimize the utilization of the advanced practice nursing workforce.
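Only as a toy illustration of the 75% consensus cut-off described above (the response scale and data structure are assumptions, not the study's instrument), the snippet below checks whether a panel item reaches consensus in a given round.

```python
def consensus_reached(responses, agree_values=("agree", "strongly agree"), cutoff=0.75):
    """Check whether a Delphi panel item meets the consensus cut-off.
    The 75% threshold mirrors the study's description; the response labels
    and data structure are illustrative.
    """
    agreeing = sum(1 for r in responses if r.lower() in agree_values)
    return agreeing / len(responses) >= cutoff

panel = ["agree", "strongly agree", "agree", "disagree",
         "agree", "agree", "agree", "strongly agree"]
print(consensus_reached(panel))   # True: 7 of 8 panel members (87.5%) agree
```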
Abstract:
This paper presents a framework for performing real-time recursive estimation of landmarks' visual appearance. Imaging data in its original high-dimensional space is probabilistically mapped to a compressed low-dimensional space through the definition of likelihood functions. The likelihoods are subsequently fused with prior information using a Bayesian update. This process produces a probabilistic estimate of the low-dimensional representation of the landmark's visual appearance. The overall filtering provides information complementary to the conventional position estimates, which is used to enhance data association. In addition to robotic observations, the filter integrates human observations into the appearance estimates. The appearance tracks computed by the filter allow landmark classification. The set of labels involved in the classification task is thought of as an observation space in which human observations are made by selecting a label. The low-dimensional appearance estimates returned by the filter allow for low-cost communication in low-bandwidth sensor networks. Deployment of the filter in such a network is demonstrated in an outdoor mapping application involving a human operator, a ground vehicle and an air vehicle.
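As a minimal, hedged sketch of the recursive Bayesian fusion step described above (the discrete appearance classes, label names and likelihood values are assumptions; the paper works with a compressed appearance space rather than these toy labels), the code below fuses a prior belief with a robot-derived likelihood and then with a human label observation.

```python
import numpy as np

def bayes_update(prior, likelihood):
    """Single Bayesian fusion step over a discrete appearance space:
    posterior is proportional to prior * likelihood, then renormalised.
    """
    posterior = np.asarray(prior, dtype=float) * np.asarray(likelihood, dtype=float)
    return posterior / posterior.sum()

# Appearance classes a landmark might be mapped to (hypothetical labels).
classes = ["tree", "car", "person"]
belief = np.array([1 / 3, 1 / 3, 1 / 3])          # uninformative prior
belief = bayes_update(belief, [0.7, 0.2, 0.1])    # robot image likelihood
belief = bayes_update(belief, [0.9, 0.05, 0.05])  # human operator selects "tree"
print(dict(zip(classes, belief.round(3))))
```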
Abstract:
The use of Performance Capture techniques in the creation of games that involve Motion Capture is a relatively new phenomenon. To date there is no prescribed methodology that prepares actors for the rigours of this new industry, and as such there are many questions to be answered about how actors navigate these environments successfully when all available training and theoretical material is focused on performance for theatre and film. This article proposes that, through the deployment of an Ecological Approach to Visual Perception, we may begin to chart this territory for actors and begin to contend with the demands of performing for the motion-captured gaming scenario.
Abstract:
This paper reports on the challenges faced during the design and deployment of educationally-focused cultural probes with children. The aim of the project was to use cultural probes to discover insights into children's interests and ideas within an educational context. The deployment of a cultural probe pack with children aged between 11 and 13 has demonstrated the method's effectiveness as a tool for design inspiration. Children's responses to the cultural probe have provided a valuable insight into the attributes of successful probe activities, the nature of contextual information which may be gathered and the limitations of the method.
Abstract:
The naturally low stream salinity in the Nebine-Mungallala Catchment, the extent of vegetation retention, relatively low rainfall and high evaporation indicate that there is a relatively low risk of rising shallow groundwater tables in the catchment. Scalding caused by wind and water erosion exposing highly saline sub-soils is a more important regional issue, such as in the Homeboin area. Local salinisation associated with evaporation of bore water from free-flowing bore drains and bores is also an important land degradation issue, particularly in the lower Nebine, Wallam and Mungallala Creeks. The replacement of free-flowing artesian bores and bore drains with capped bores and piped water systems under the Great Artesian Basin bore rehabilitation program is addressing local salinisation and scalding in the vicinity of bore drains and preventing the discharge of saline bore water to streams. Three principles for the prevention and control of salinity in the Nebine-Mungallala catchment have been identified in this review:
• Avoid salinity through avoiding scalds – i.e. not exposing the near-surface salt in the landscape through land degradation;
• Riparian zone management: scalding often occurs within 200 m or so of watering lines. Natural drainage lines are most likely to be overstocked, and thus have potential for scalding. Scalding begins when vegetation is removed, and without that binding cover, wind and water erosion exposes the subsoil; and
• Monitoring of exposed or grazed soil areas.
Based on the findings of the study, we make the following recommendations:
1. Undertake a geotechnical study of existing maps and other data to help identify and target areas most at risk of rising water tables causing salinity. Selected monitoring should then be established using piezometers as an early warning system.
2. SW NRM should financially support scald reclamation activity through its various funding programs. However, for this to have any validity in the overall management of salinity risk, it is critical that such funding require the landholder to undertake a salinity hazard/risk assessment of his or her holding.
3. A staged approach to funding may be appropriate. In the first instance, it would be reasonable to commence funding some pilot scald reclamation work with a view to further developing and piloting the farm hazard/risk assessment tools, and exploring how subsequent grazing management strategies could be incorporated within other extension and management activities. Once the details of the necessary farm-level activities have been more clearly defined, and following the outcomes of the geotechnical review recommended above, a more comprehensive funding package could be rolled out to priority areas.
4. We recommend that the best-practice grazing management training currently on offer be enhanced with information about salinity risk in scald-prone areas, and ways of minimising the likelihood of scald formation.
5. We recommend that course material be developed for local students in Years 6 and 7, and that arrangements be made with local schools to present this information. Given the constraints of existing syllabi, we envisage that negotiations may have to be undertaken with the Department of Education in order for this material to be permitted to be used. We have contact with key people who could help in this if required.
6. We recommend that SW NRM continue to support existing extension activities such as Grazing Land Management and the Monitoring Made Easy tools. These aids should be easily expandable to incorporate techniques for monitoring, addressing and preventing salinity and scalding. At the time of writing, staff of SW NRM were actively involved in this process. It is important that these activities are adequately resourced so that landholders come to see salinity as an issue that needs to be addressed as part of everyday management.
7. We recommend that SW NRM consider investing in the development and deployment of a scenario-modelling learning support tool as part of the awareness-raising and education activities. Secondary salinity is a dynamic process that results from ongoing human activity which mobilises and/or exposes salt occurring naturally in the landscape. Time scales can be short to very long, and the benefits of management actions can similarly have immediate or very long time frames. One way to help explain the dynamics of these processes is through scenario modelling.
Abstract:
The presence of High Speed Rail (HSR) systems influences the market shares of road and air transport, and the development of the cities and regions they serve. With the deployment of HSR infrastructure, changes in accessibility have occurred. These changes have led researchers to investigate effects on economic and spatial derived variables. Contention exists when managing the trade-off between efficiency and access points, which are usually in the range of hundreds of kilometres apart. In short, it is argued that intermediate cities, bypassed by HSR services, suffer a decline in their accessibility and developmental opportunities. The present chapter will analyse possible impacts derived from the presence of HSR infrastructure. In particular, it will consider small and medium agglomerations in the vicinity of HSR corridors, which are not always served by HSR stations. Thus, a methodology is developed to quantify accessibility benefits and their distribution. These benefits will be investigated in relation to different rail transit strategies integrating HSR infrastructure where an HSR station cannot be positioned. These strategies are selected principally for the type of service offered: (i) cadenced, (ii) express, (iii) frequent or (iv) non-stopping. Furthermore, to ground the theoretical approach linking accessibility and competitiveness, a case study in the North-Eastern Italian regions will be used to apply the accessibility distribution patterns between the HSR infrastructure and the selected strategies. Results indicate that benefits derive from well-informed decisions on HSR station positioning and the appropriate blend of complementary services in the whole region to interface with the HSR infrastructure. The results are significant for all countries, in Europe and worldwide, not only for investing in HSR infrastructure, but mostly in terms of building territorial cohesion, while seeking international recognition for developing successful new technology and systems.
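To illustrate the general idea of quantifying accessibility (a generic potential-accessibility formulation with exponential travel-time decay; the opportunity counts, travel times and decay parameter below are invented for the example and are not the chapter's methodology or data), the sketch compares one intermediate city's accessibility under two service scenarios.

```python
import math

def potential_accessibility(opportunities, travel_times_min, beta=0.03):
    """Generic potential-accessibility measure for one origin:
    A_i = sum_j O_j * exp(-beta * t_ij).
    All values are illustrative; this is not the chapter's specific method.
    """
    return sum(o * math.exp(-beta * t)
               for o, t in zip(opportunities, travel_times_min))

# A bypassed intermediate city: jobs reachable in three destinations under
# (a) the status quo and (b) a frequent regional service feeding the HSR station.
jobs = [50_000, 120_000, 300_000]
baseline = potential_accessibility(jobs, travel_times_min=[40, 90, 150])
with_feeder = potential_accessibility(jobs, travel_times_min=[40, 60, 95])
print(round(with_feeder / baseline, 2))   # relative accessibility gain
```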