406 results for Speech-processing technologies
Abstract:
The requirement to monitor the rapid pace of environmental change due to global warming and human development is producing large volumes of data but placing much stress on the capacity of ecologists to store, analyse and visualise those data. To date, much of the data has been provided by low-level sensors monitoring soil moisture, dissolved nutrients, light intensity, gas composition and the like. However, a significant part of an ecologist's work is to obtain information about species diversity, distributions and relationships. This task typically requires the physical presence of an ecologist in the field, listening and watching for species of interest. It is an extremely difficult task to automate because of the higher-order difficulties in bandwidth, data management and intelligent analysis if one wishes to emulate the highly trained eyes and ears of an ecologist. This paper is concerned with just one part of the bigger challenge of environmental monitoring: the acquisition and analysis of acoustic recordings of the environment. Our intention is to provide helpful tools to ecologists, tools that apply information and computational technologies to all aspects of the acoustic environment. The online system which we are building in conjunction with ecologists offers an integrated approach to recording, data management and analysis. The ecologists we work with have different requirements, and therefore we have adopted a toolbox approach; that is, we offer a number of different web services that can be concatenated according to need. In particular, one group of ecologists is concerned with identifying the presence or absence of species and their distributions in time and space. Another group, motivated by legislative requirements for measuring habitat condition, is interested in summary indices of environmental health. In both cases, the key issues are scalability and automation.
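To make the toolbox idea concrete, here is a minimal sketch of how independent analysis services might be concatenated according to an ecologist's needs; all service names, signatures and values are hypothetical illustrations, not the actual system's API.

```python
# Hypothetical sketch of the "toolbox" approach: independent analysis
# services chained according to need. Names and values are illustrative.
from typing import Callable, List

AudioAnalysis = dict  # analysis results accumulated as a simple dict

def segment_recording(analysis: AudioAnalysis) -> AudioAnalysis:
    """Hypothetical service: split a long recording into acoustic events."""
    analysis["segments"] = [(0.0, 4.2), (10.5, 13.1)]  # placeholder events (s)
    return analysis

def detect_species(analysis: AudioAnalysis) -> AudioAnalysis:
    """Hypothetical service: label each segment with a candidate species."""
    analysis["species"] = ["species-A"] * len(analysis.get("segments", []))
    return analysis

def summarise_condition(analysis: AudioAnalysis) -> AudioAnalysis:
    """Hypothetical service: compute a summary index of habitat condition."""
    analysis["acoustic_index"] = len(analysis.get("species", [])) / 10.0
    return analysis

def run_pipeline(recording_id: str, services: List[Callable]) -> AudioAnalysis:
    """Concatenate whichever services a particular ecologist needs."""
    analysis: AudioAnalysis = {"recording": recording_id}
    for service in services:
        analysis = service(analysis)
    return analysis

# One group chains presence/absence services; the other adds a summary index.
presence = run_pipeline("site-42.wav", [segment_recording, detect_species])
condition = run_pipeline("site-42.wav",
                         [segment_recording, detect_species, summarise_condition])
```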
Abstract:
The innovation diffusion and knowledge management literature strongly supports the importance of communities of practice (COPs) for developing knowledge about how to adopt and use innovation initiatives. One of the most powerful tools for innovation diffusion is word-of-mouth wisdom from committed individuals who mentor and support each other. Close proximity for face-to-face interaction is highly effective; however, many organisations are geographically dispersed, with projects being virtually linked sub-organisations using ICT to communicate. ICT has also introduced a useful facilitating technology for developing knowledge networks. This paper presents findings from a research program concentrating on ICT innovation diffusion in the Australian construction industry. One way in which ICT diffusion was found to take place is through within-company communities of practice. We undertook in-depth unstructured interviews with three of the 10 to 15 major contractors in Australia to discuss their ICT diffusion strategies. We discovered that in all three cases, within-company networked communities of practice were a central strategy. Further, effective diffusion of ICT groupware tools can be critical in developing COPs whose members are geographically dispersed.
Abstract:
The progress of a nationally representative sample of 3632 children was followed from early childhood through to primary school, using data from the Longitudinal Study of Australian Children (LSAC). The aim was to examine the predictive effects of different aspects of communicative ability, and of early vs. sustained identification of speech and language impairment, on children's achievement and adjustment at school. Four indicators identified speech and language impairment: parent-rated expressive language concern; parent-rated receptive language concern; use of speech-language pathology services; and below-average scores on the adapted Peabody Picture Vocabulary Test-III. School outcomes were assessed by teachers' ratings of language/literacy ability, numeracy/mathematical thinking and approaches to learning. Comparison of group differences, using ANOVA, provided clear evidence that children who were identified as having speech and language impairment in their early childhood years did not perform as well at school, two years later, as their non-impaired peers on all three outcomes: Language and Literacy, Mathematical Thinking, and Approaches to Learning. The effects of early speech and language status on literacy, numeracy and approaches-to-learning outcomes were similar in magnitude to the effects of family socio-economic factors, after controlling for child characteristics. Additionally, early identification of speech and language impairment (at age 4-5) was found to be a better predictor of school outcomes than sustained identification (at ages 4-5 and 6-7 years). Parent reports of speech and language impairment in early childhood are therefore useful in foreshadowing later difficulties at school and in guiding early intervention and targeted support from speech-language pathologists and specialist teachers.
Abstract:
This thesis is a documented energy audit and long-term study of energy and water reduction in a ghee factory. Global production of ghee exceeds 4 million tonnes annually. The factory in this study refines dairy products by non-traditional centrifugal separation and produces 99.9% pure, canned, crystallised anhydrous milk fat (ghee). Ghee is traditionally made by batch processing methods, which are less efficient than centrifugal separation. An in-depth, systematic investigation was conducted of each item of major equipment; the ammonia refrigeration plant, steam boiler, canning equipment, pumps, heat exchangers and compressed air system were all fine-tuned. Continuous monitoring of electrical usage showed that not every initiative worked; others had payback periods of less than a year. In 1994-95 energy consumption was 6,582 GJ; in 2003-04 it was 5,552 GJ, down 16% for a similar output. A significant reduction in water usage was achieved by reducing the airflow in the refrigeration evaporative condensers to match the refrigeration load. Water usage fell 68%, from 18 ML in 1994-95 to 5.78 ML in 2003-04. The methods reported in this thesis could be applied to other industries with similar equipment, and to other ghee manufacturers.
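As a worked check, the reported percentage reductions follow directly from the consumption figures quoted above:

```latex
\[
\frac{6582\,\mathrm{GJ} - 5552\,\mathrm{GJ}}{6582\,\mathrm{GJ}} \approx 0.16
\quad (16\%\ \text{energy reduction}),
\qquad
\frac{18\,\mathrm{ML} - 5.78\,\mathrm{ML}}{18\,\mathrm{ML}} \approx 0.68
\quad (68\%\ \text{water reduction}).
\]
```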
Abstract:
This work aims to take advantage of recent developments in joint factor analysis (JFA) in the context of a phonetically conditioned GMM speaker verification system. Previous work has shown performance advantages through phonetic conditioning, but this has not to date been shown within the JFA framework. Our focus is particularly on strategies for combining the phone-conditioned systems. We show that classic score-level fusion is suboptimal when using multiple GMM systems. We investigate several combination strategies in the model space, and demonstrate improvement over score-level combination as well as over a non-phonetic baseline system. This work was conducted during the 2008 CLSP Workshop at Johns Hopkins University.
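For orientation, a minimal sketch of the classic score-level fusion that the abstract identifies as suboptimal: each phone-conditioned subsystem produces a verification score for a trial, and the scores are combined linearly. The phone classes, scores, weights and threshold below are illustrative placeholders, not the paper's configuration.

```python
# Sketch of score-level fusion over phone-conditioned GMM subsystems,
# the baseline the paper argues is outperformed by model-space combination.
# All numbers and names here are illustrative assumptions.

def fuse_scores(phone_scores: dict, weights: dict) -> float:
    """Linear fusion: weighted sum of per-phone-class verification scores."""
    return sum(weights[p] * s for p, s in phone_scores.items())

# Hypothetical log-likelihood-ratio scores from three phone-conditioned systems.
trial_scores = {"vowels": 1.8, "nasals": 0.6, "fricatives": -0.3}
fusion_weights = {"vowels": 0.5, "nasals": 0.3, "fricatives": 0.2}

llr = fuse_scores(trial_scores, fusion_weights)
decision = "accept" if llr > 0.0 else "reject"   # threshold is illustrative
print(f"fused LLR = {llr:.2f} -> {decision}")
```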
Abstract:
Synthetic polymers have attracted much attention in tissue engineering due to their ability to modulate biomechanical properties. This study investigated the feasibility of processing poly(ε-caprolactone) (PCL) homopolymer, PCL-poly(ethylene glycol) (PEG) diblock, and PCL-PEG-PCL triblock copolymers into three-dimensional porous scaffolds. Properties of the various polymers were investigated by dynamic thermal analysis. The scaffolds were manufactured using the desktop robot-based rapid prototyping technique. Gross morphology and internal three-dimensional structure of scaffolds were identified by scanning electron microscopy and micro-computed tomography, which showed excellent fusion at the filament junctions, high uniformity, and complete interconnectivity of pore networks. The influences of process parameters on scaffolds' morphological and mechanical characteristics were studied. Data confirmed that the process parameters directly influenced the pore size, porosity, and, consequently, the mechanical properties of the scaffolds. The in vitro cell culture study was performed to investigate the influence of polymer nature and scaffold architecture on the adhesion of the cells onto the scaffolds using rabbit smooth muscle cells. Light, scanning electron, and confocal laser microscopy showed cell adhesion, proliferation, and extracellular matrix formation on the surface as well as inside the structure of both scaffold groups. The completely interconnected and highly regular honeycomb-like pore morphology supported bridging of the pores via cell-to-cell contact as well as production of extracellular matrix at later time points. The results indicated that the incorporation of hydrophilic PEG into hydrophobic PCL enhanced the overall hydrophilicity and cell culture performance of PCL-PEG copolymer. However, the scaffold architecture did not significantly influence the cell culture performance in this study.
Abstract:
Following an early claim by Nelson & McEvoy suggesting that word associations can display 'spooky action at a distance' behaviour, a serious investigation of the potentially quantum nature of such associations is currently underway. In this paper quantum theory is proposed as a framework suitable for modelling the mental lexicon, specifically the results obtained from both intralist and extralist word association experiments. Some initial models exploring this hypothesis are discussed, and they appear to be capable of substantial agreement with pre-existing experimental data. The paper concludes with a discussion of some experiments that will be performed in order to test these models.
Abstract:
Real-Time Kinematic (RTK) positioning is a technique used to provide precise positioning services at centimetre-level accuracy in the context of Global Navigation Satellite Systems (GNSS). While a Network-based RTK (NRTK) system involves multiple continuously operating reference stations (CORS), the simplest form of an NRTK system is single-base RTK. In Australia there are several NRTK services operating in different states, and over 1000 single-base RTK systems, to support precise positioning applications for surveying, mining, agriculture and civil construction in regional areas. Additionally, future-generation GNSS constellations with multiple frequencies, including modernised GPS, Galileo, GLONASS and Compass, have either been developed or will become fully operational in the next decade. A trend in the future development of RTK systems is to make use of the various isolated operating networks, single-base RTK systems and multiple GNSS constellations for extended service coverage and improved performance. Several computational challenges have been identified for future NRTK services:
• Multiple GNSS constellations and multiple frequencies
• Large-scale, wide-area NRTK services with a network of networks
• Complex computation algorithms and processes
• A greater part of positioning processing shifting from the user end to the network centre, with the ability to cope with hundreds of simultaneous users' requests (reverse RTK)
These four challenges imply two major requirements for NRTK data processing: expandable computing power and scalable data sharing/transfer capability. This research explores new approaches to addressing these future NRTK challenges and requirements using Grid Computing, in particular for large data-processing burdens and complex computation algorithms. A Grid Computing based NRTK framework is proposed in this research, a layered framework consisting of: 1) a client layer in the form of a Grid portal; 2) a service layer; and 3) an execution layer. The user's request is passed through these layers, and scheduled to different Grid nodes in the network infrastructure. A proof-of-concept demonstration of the proposed framework was performed in a five-node Grid environment at QUT and also on Grid Australia. The Networked Transport of RTCM via Internet Protocol (Ntrip) open-source software was adopted to download real-time RTCM data from multiple reference stations through the Internet, followed by job scheduling and simplified RTK computing. The system performance has been analysed, and the results preliminarily demonstrate the concepts and functionality of the new Grid Computing based NRTK framework, while some aspects of the system's performance are yet to be improved in future work.
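A minimal sketch of the layered request flow described above (Grid portal, service layer, execution layer), assuming a simple round-robin scheduler; all class and method names are hypothetical, and the real framework relies on Grid middleware and Ntrip data streams rather than this in-process toy.

```python
# Hypothetical sketch of the three-layer NRTK request flow:
# client (portal) -> service layer (scheduler) -> execution nodes.
from dataclasses import dataclass

@dataclass
class RTKJob:
    user_id: str
    rover_position: tuple   # approximate rover coordinates
    stations: list          # reference stations streaming RTCM data

class ExecutionNode:
    """Execution layer: one Grid node running the RTK computation."""
    def __init__(self, name: str):
        self.name = name
    def process(self, job: RTKJob) -> str:
        # Placeholder for downloading RTCM streams and computing corrections.
        return f"{self.name}: corrections computed for user {job.user_id}"

class ServiceLayer:
    """Service layer: schedules incoming jobs across available nodes."""
    def __init__(self, nodes):
        self.nodes = nodes
        self._next = 0
    def schedule(self, job: RTKJob) -> str:
        node = self.nodes[self._next % len(self.nodes)]  # round-robin choice
        self._next += 1
        return node.process(job)

class GridPortal:
    """Client layer: the entry point to which users submit requests."""
    def __init__(self, service: ServiceLayer):
        self.service = service
    def submit(self, job: RTKJob) -> str:
        return self.service.schedule(job)

# Five nodes, mirroring the five-node proof-of-concept environment.
portal = GridPortal(ServiceLayer([ExecutionNode(f"node-{i}") for i in range(5)]))
print(portal.submit(RTKJob("user-1", (-27.48, 153.03), ["CORS-A", "CORS-B"])))
```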
Abstract:
This chapter analyses the affordances and constraints of an online literacy program designed for Indigenous Australian youth through a partnership between the Indigenous community, university staff and local schools. The after-school program sought to build on the cultural resources and experiences of the young people through a dialogic process of planning, negotiating, implementing, reflecting on, and renegotiating the program with participants and a range of stakeholders. In the majority of cases, students presented themselves as part of pervasive global popular cultures, often hot-linking their webpages to pop icons and local sports stars. Elders regarded the young people's competency as a potential cultural tool and community resource.
Abstract:
In Web service based systems, new value-added Web services can be constructed by integrating existing Web services. A Web service may have many implementations which are functionally identical but have different Quality of Service (QoS) attributes, such as response time, price, reputation, reliability, availability and so on. Thus, a significant research problem in Web service composition is how to select an implementation for each of the component Web services so that the overall QoS of the composite Web service is optimal. This is the so-called QoS-aware Web service composition problem. In some composite Web services there are dependencies and conflicts between the Web service implementations, constraints that existing approaches cannot handle. This paper tackles the QoS-aware Web service composition problem with inter-service dependencies and conflicts using a penalty-based genetic algorithm (GA). Experimental results demonstrate the effectiveness and scalability of the penalty-based GA.
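A minimal sketch of the penalty idea, under assumed QoS attributes (cost and reliability) and hypothetical dependency/conflict constraints: infeasible chromosomes are not discarded but have their fitness reduced in proportion to the number of violated constraints. The GA loop here is deliberately crude and is not the paper's actual algorithm.

```python
# Penalty-based fitness for QoS-aware composition: a chromosome assigns one
# implementation index to each component service; constraint violations are
# penalised rather than rejected. All data below are illustrative assumptions.
import random

# Each component service has candidate implementations with (cost, reliability).
CANDIDATES = {
    "search":  [(5.0, 0.95), (3.0, 0.90)],
    "payment": [(4.0, 0.99), (2.0, 0.92)],
    "ship":    [(6.0, 0.97), (4.0, 0.93)],
}
SERVICES = list(CANDIDATES)

# Hypothetical inter-service constraints on chosen implementation indices.
DEPENDENCIES = [("search", 0, "payment", 0)]   # search[0] requires payment[0]
CONFLICTS    = [("payment", 1, "ship", 0)]     # payment[1] conflicts with ship[0]
PENALTY = 10.0

def fitness(chrom):
    """Aggregate QoS (reliability minus cost) minus constraint penalties."""
    cost = sum(CANDIDATES[s][g][0] for s, g in zip(SERVICES, chrom))
    rel = 1.0
    for s, g in zip(SERVICES, chrom):
        rel *= CANDIDATES[s][g][1]
    violations = sum(1 for a, i, b, j in DEPENDENCIES
                     if chrom[SERVICES.index(a)] == i
                     and chrom[SERVICES.index(b)] != j)
    violations += sum(1 for a, i, b, j in CONFLICTS
                      if chrom[SERVICES.index(a)] == i
                      and chrom[SERVICES.index(b)] == j)
    return rel * 100 - cost - PENALTY * violations

# Crude GA loop: keep the fitter half, create mutated children from survivors.
pop = [[random.randrange(2) for _ in SERVICES] for _ in range(20)]
for _ in range(50):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]
    children = [[g ^ (random.random() < 0.1) for g in random.choice(survivors)]
                for _ in range(10)]
    pop = survivors + children
best = max(pop, key=fitness)
print("best:", best, "fitness:", round(fitness(best), 2))
```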
Abstract:
What are the ethical and political implications when the very foundations of life, things of awe and spiritual significance, are translated into products accessible to few people? This book critically analyses this historic recontextualisation. Through mediation, when meaning moves 'from one text to another, from one discourse to another', biotechnology is transformed into analysable data and into public discourses. The book uniquely links biotechnology with media and citizenship. Like any other commodity, biological products have been commodified. Because enormous speculative investment rests on this, risk will be understated and benefit overstated, and benefits will be unfairly distributed. Already, the bioprospecting of Southern megadiverse nations, legally sanctioned by U.S. property rights conventions, has led to wealth and health benefits in the North. Crucial to this development are biotechnological discourses that shift meanings from a 'language of life' into technocratic discourses infused with neo-liberal economic assumptions that promise progress and benefits for all. Central here is the mass media's representation of biotechnology for an audience with poor scientific literacy. Yet even apparently benign biotechnology spawned by the Human Genome Project, such as prenatal screening, has eugenic possibilities, and genetic codes for illness are eagerly sought by insurance companies seeking to exclude certain people. These issues raise important questions about a citizenship that is founded on moral responsibility for the wellbeing of society now and into the future. After all, biotechnology is very much concerned with the essence of life itself. This book provides a space for alternative and dissident voices beyond the hype that surrounds biotechnology.
Abstract:
English has long been the subject where print text has reigned supreme. Increasingly in our networked and electronically connected world, however, we can use digital technologies to create and respond to texts studied in English classrooms. The current approach to English includes the concept of 'multiliteracies', which suggests that print texts alone are 'necessary but not sufficient' (EQ, 2000) and that literacy includes the flexible and sustainable mastery of a repertoire of practices, including the decoding and deployment of media technologies (EQ, 2000). This has become more possible in Australia as secondary students gain increasing access to computers and online platforms at home and at school. With the advent of Web 2.0, with its interactive platforms and free media-making software, teachers and students can access information and emerging online literature in English covering a range of text types and new forms, for authentic audiences and contexts. This chapter is concerned with responding to literary and mediated texts through the use of technologies. If we remain open to trying out new textual forms and see our 'digital native' students (Prensky, 2007) as our best resource, we can move beyond technophobia, become 'digital travellers' ourselves, and embrace new digital forms in our classrooms.
Abstract:
Structural health monitoring (SHM) is the term applied to the procedure of monitoring a structure's performance, assessing its condition and carrying out appropriate retrofitting so that it performs reliably, safely and efficiently. Bridges form an important part of a nation's infrastructure. They deteriorate due to age and changing load patterns, and hence early detection of damage helps prolong their service lives and prevent catastrophic failures. Monitoring of bridges has traditionally been done by means of visual inspection. With recent developments in sensor technology and the availability of advanced computing resources, newer techniques have emerged for SHM. Acoustic emission (AE) is one such technology, attracting the attention of engineers and researchers around the world. This paper discusses the use of AE technology in health monitoring of bridge structures, with a special focus on analysis of recorded data. AE waves are stress waves generated by mechanical deformation of material and can be recorded by means of sensors attached to the surface of the structure. Analysis of the AE signals provides vital information regarding the nature of the source of emission. Signal processing of AE waveform data can be carried out in several ways and is predominantly based on the time and frequency domains. The short-time Fourier transform and wavelet analysis have proved to be superior alternatives to traditional frequency-based analysis in extracting information from recorded waveforms. Some preliminary results of the application of these analysis tools in signal processing of recorded AE data are presented in this paper.
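As an illustration of the time-frequency analysis mentioned above, here is a minimal sketch applying the short-time Fourier transform to a synthetic AE-like burst; the sampling rate, burst parameters and noise level are assumptions, not recorded bridge data.

```python
# STFT of a synthetic acoustic emission burst, one of the two time-frequency
# approaches the paper highlights. Real AE data would come from surface-mounted
# sensors; all signal parameters here are illustrative.
import numpy as np
from scipy.signal import stft

fs = 1_000_000                     # 1 MHz sampling rate (illustrative)
t = np.arange(0, 0.002, 1 / fs)   # 2 ms record

# Synthetic AE burst: a decaying 150 kHz tone starting at 0.5 ms, plus noise.
burst = np.where(t > 0.0005,
                 np.exp(-(t - 0.0005) * 8000) * np.sin(2 * np.pi * 150e3 * t),
                 0.0)
signal = burst + 0.02 * np.random.randn(t.size)

# The window length trades time resolution against frequency resolution.
f, seg_t, Z = stft(signal, fs=fs, nperseg=256)
peak_f = f[np.argmax(np.abs(Z).max(axis=1))]
print(f"dominant frequency near {peak_f / 1e3:.0f} kHz")
```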
Abstract:
This study investigated human visual processing of simple two-colour patterns using a delayed match-to-sample paradigm with positron emission tomography (PET). The study is unique in that the authors specifically designed the visual stimuli to be the same for both pattern and colour recognition, with all patterns being abstract shapes, not easily verbally coded, composed of two-colour combinations. The authors did this to explore the brain regions required for both colour and pattern processing and to separate those areas of activation required for one or the other. Ten right-handed male volunteers aged 18–35 years were recruited. The authors found that both tasks activated similar occipital regions, the major difference being more extensive activation in pattern recognition. A right-sided network involving the inferior parietal lobule, the head of the caudate nucleus, and the pulvinar nucleus of the thalamus was common to both paradigms. Pattern recognition also activated the left temporal pole and right lateral orbital gyrus, whereas colour recognition activated the left fusiform gyrus and several right frontal regions.