960 results for Syntactic Projection
Abstract:
Participatory design has the moral and pragmatic tenet of including those who will be most affected by a design in the design process. However, good participation is hard to achieve, and results linking project success and degree of participation are inconsistent. Through three case studies examining some of the challenges that different properties of knowledge – novelty, difference, dependence – can impose on the participatory endeavour, we examine some of the consequences for the participatory process of failing to bridge across knowledge boundaries – syntactic, semantic, and pragmatic. One pragmatic consequence, disrupting the user’s feeling of involvement in the project, has been suggested as a possible explanation for the inconsistent results linking participation and project success. To help address these issues, a new form of participatory research, called embedded research, is proposed and examined within the framework of the case studies and the knowledge framework, with a call for future research into its possibilities.
Abstract:
This paper presents Scatter Difference Nuisance Attribute Projection (SD-NAP) as an enhancement to NAP for SVM-based speaker verification. While standard NAP may inadvertently remove desirable speaker variability, SD-NAP explicitly de-emphasises this variability by incorporating a weighted version of the between-class scatter into the NAP optimisation criterion. Experimental evaluation of SD-NAP with a variety of SVM systems on the 2006 and 2008 NIST SRE corpora demonstrates that SD-NAP provides improved verification performance over standard NAP in most cases, particularly at the EER operating point.
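The core NAP operation that SD-NAP builds on can be sketched as follows. This is a minimal illustration of standard NAP only (not the SD-NAP variant, whose modified scatter criterion the abstract describes): supervectors are projected onto the complement of a low-dimensional nuisance subspace via P = I - UUᵀ. The dimensions and the random nuisance basis here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k = 10, 2
# Hypothetical orthonormal basis for the nuisance (e.g. channel) subspace;
# in practice U is learned from multi-session training data.
U, _ = np.linalg.qr(rng.standard_normal((d, k)))

def nap_project(x, U):
    """Remove the component of supervector x lying in the nuisance subspace U."""
    return x - U @ (U.T @ x)

x = rng.standard_normal(d)
y = nap_project(x, U)

# After projection, no component remains along the nuisance directions.
print(np.allclose(U.T @ y, 0.0))  # True
```

The projector is idempotent, so applying it a second time leaves the supervector unchanged.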
Abstract:
Mapping the physical world, the arrangement of continents and oceans, cities and villages, mountains and deserts, while not without its own contentious aspects, can at least draw upon centuries of previous work in cartography and discovery. To map virtual spaces is another challenge altogether. Are cartographic conventions applicable to depictions of the blogosphere, or the internet in general? Is a more mathematical approach required to even start to make sense of the shape of the blogosphere, to understand the network created by and between blogs? With my research comparing information flows in Australian and French political blogs, visualising the data obtained is important, as it can demonstrate the spread of ideas and topics across blogs. However, how best to depict the flows, links, and the spaces between is still unclear. Are network theory and systems of hubs and nodes more relevant than mass communication theories to the research at hand, influencing the nature of any map produced? Is it even a good idea to try to apply boundaries like ‘Australian’ and ‘French’ to parts of a map that do not reflect international borders or the Mercator projection? While drawing upon some of my work-in-progress, this paper will also evaluate previous maps of the blogosphere and approaches to depicting networks of blogs. As such, the paper will provide a greater awareness of the tools available and the strengths and limitations of mapping methodologies, helping to shape the direction of my research in a field still very much under development.
Abstract:
As climate change will entail new conditions for the built environment, the thermal behaviour of air-conditioned office buildings may also change. Using building computer simulations, the impact of warmer weather is evaluated on the design and performance of air-conditioned office buildings in Australia, including the increased cooling loads and probable indoor temperature increases due to a possibly undersized air-conditioning system, as well as the possible change in energy use. It is found that existing office buildings would generally be able to adapt to the increased warmth of the year 2030 Low and High scenario projections and the year 2070 Low scenario projection. However, for the 2070 High scenario, the study indicates that existing office buildings in all capital cities of Australia would suffer from overheating problems. For existing buildings designed for current climate conditions, it is shown that there is a nearly linear correlation between the increase of average external air temperature and the increase of building cooling load. For new buildings designed for warmer scenarios, a 28-59% increase of cooling capacity under the 2070 High scenario would be required.
Abstract:
X-ray computed tomography (CT) is a medical imaging technique that produces images of trans-axial planes through the human body. When compared with a conventional radiograph, which is an image of many planes superimposed on each other, a CT image exhibits significantly improved contrast, although this is at the expense of reduced spatial resolution. A CT image is reconstructed mathematically from a large number of one-dimensional projections of the chosen plane. These projections are acquired electronically using a linear array of solid-state detectors and an x-ray source that rotates around the patient. X-ray computed tomography is used routinely in radiological examinations. It has also been found useful in special applications such as radiotherapy treatment planning and three-dimensional imaging for surgical planning.
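The one-dimensional projections described above can be illustrated with a toy example. Each projection is a set of line integrals (here, simple pixel sums) of the image plane along a fixed direction; the two axis-aligned angles below reduce to row and column sums, whereas a real scanner acquires projections at many angles before mathematical reconstruction. The tiny 2×2 image is purely illustrative.

```python
import numpy as np

# A toy 2x2 "attenuation" image of the chosen plane.
image = np.array([[1.0, 2.0],
                  [3.0, 4.0]])

proj_0deg = image.sum(axis=0)   # projection at 0 degrees: sum down columns
proj_90deg = image.sum(axis=1)  # projection at 90 degrees: sum across rows

print(proj_0deg)   # [4. 6.]
print(proj_90deg)  # [3. 7.]

# Every projection integrates the same total attenuation in the plane.
print(proj_0deg.sum() == proj_90deg.sum())  # True
```

This conservation property (every projection sums to the same total) is one of the consistency conditions exploited by reconstruction algorithms.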
Abstract:
This paper reports on a study in which 29 Year 6 students (selected from the top 30% of 176 Year 6 students) were individually interviewed to explore their ability to reunitise hundredths as tenths (Behr, Harel, Post & Lesh, 1992) when represented by prototypic (PRO) and nonprototypic (NPRO) models. The results showed that 55.2% of the students were able to unitise both models and that reunitising was more successful with the PRO model. The interviews revealed that many of these students had incomplete, fragmented or non-existent structural knowledge of the reunitising process and often relied on syntactic clues to complete the tasks. The implication for teaching is that instruction should not be limited to PRO representations of the part/whole notion of fraction, and that the basic structures (equal parts, link between name and number of equal parts) of the part/whole notion need to be revisited often.
Abstract:
The main objective of this PhD was to further develop Bayesian spatio-temporal models (specifically the Conditional Autoregressive (CAR) class of models), for the analysis of sparse disease outcomes such as birth defects. The motivation for the thesis arose from problems encountered when analyzing a large birth defect registry in New South Wales. The specific components and related research objectives of the thesis were developed from gaps in the literature on current formulations of the CAR model, and health service planning requirements. Data from a large probabilistically-linked database from 1990 to 2004, consisting of fields from two separate registries: the Birth Defect Registry (BDR) and Midwives Data Collection (MDC) were used in the analyses in this thesis. The main objective was split into smaller goals. The first goal was to determine how the specification of the neighbourhood weight matrix would affect the smoothing properties of the CAR model, and this is the focus of chapter 6. Secondly, I hoped to evaluate the usefulness of incorporating a zero-inflated Poisson (ZIP) component as well as a shared-component model in terms of modeling a sparse outcome, and this is carried out in chapter 7. The third goal was to identify optimal sampling and sample size schemes designed to select individual level data for a hybrid ecological spatial model, and this is done in chapter 8. Finally, I wanted to put together the earlier improvements to the CAR model, and along with demographic projections, provide forecasts for birth defects at the SLA level. Chapter 9 describes how this is done. For the first objective, I examined a series of neighbourhood weight matrices, and showed how smoothing the relative risk estimates according to similarity by an important covariate (i.e. maternal age) helped improve the model’s ability to recover the underlying risk, as compared to the traditional adjacency (specifically the Queen) method of applying weights.
Next, to address the sparseness and excess zeros commonly encountered in the analysis of rare outcomes such as birth defects, I compared a few models, including an extension of the usual Poisson model to encompass excess zeros in the data. This was achieved via a mixture model, which also encompassed the shared component model to improve on the estimation of sparse counts through borrowing strength across a shared component (e.g. latent risk factor/s) with the referent outcome (caesarean section was used in this example). Using the Deviance Information Criteria (DIC), I showed how the proposed model performed better than the usual models, but only when both outcomes shared a strong spatial correlation. The next objective involved identifying the optimal sampling and sample size strategy for incorporating individual-level data with areal covariates in a hybrid study design. I performed extensive simulation studies, evaluating thirteen different sampling schemes along with variations in sample size. This was done in the context of an ecological regression model that incorporated spatial correlation in the outcomes, as well as accommodating both individual and areal measures of covariates. Using the Average Mean Squared Error (AMSE), I showed how a simple random sample of 20% of the SLAs, followed by selecting all cases in the SLAs chosen, along with an equal number of controls, provided the lowest AMSE. The final objective involved combining the improved spatio-temporal CAR model with population (i.e. women) forecasts, to provide 30-year annual estimates of birth defects at the Statistical Local Area (SLA) level in New South Wales, Australia. The projections were illustrated using sixteen different SLAs, representing the various areal measures of socio-economic status and remoteness. A sensitivity analysis of the assumptions used in the projection was also undertaken. 
By the end of the thesis, I will show how challenges in the spatial analysis of rare diseases such as birth defects can be addressed, by specifically formulating the neighbourhood weight matrix to smooth according to a key covariate (i.e. maternal age), incorporating a ZIP component to model excess zeros in outcomes and borrowing strength from a referent outcome (i.e. caesarean counts). An efficient strategy to sample individual-level data and sample size considerations for rare disease will also be presented. Finally, projections in birth defect categories at the SLA level will be made.
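The zero-inflated Poisson (ZIP) component used above to handle excess zeros can be sketched with its probability mass function: with probability pi a count is a structural zero, and otherwise it follows a Poisson(lam) distribution, so zeros arise from both sources. This is a generic illustration of the ZIP form only; the values of pi and lam below are arbitrary, not estimates from the birth-defect data.

```python
import math

def zip_pmf(y, pi, lam):
    """P(Y = y) under a zero-inflated Poisson model with inflation pi."""
    poisson = math.exp(-lam) * lam**y / math.factorial(y)
    if y == 0:
        # Structural zero plus a "sampling" zero from the Poisson part.
        return pi + (1.0 - pi) * poisson
    return (1.0 - pi) * poisson

# Zero inflation raises P(Y = 0) above the plain Poisson probability.
print(zip_pmf(0, 0.3, 2.0) > math.exp(-2.0))  # True
```

With pi = 0 the model reduces exactly to the ordinary Poisson, which is why the ZIP is a natural extension when sparse outcomes show more zeros than a Poisson model predicts.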
Abstract:
Automatic Speech Recognition (ASR) has matured into a technology which is becoming more common in our everyday lives, and is emerging as a necessity to minimise driver distraction when operating in-car systems such as navigation and infotainment. In “noise-free” environments, word recognition performance of these systems has been shown to approach 100%; however, this performance degrades rapidly as the level of background noise is increased. Speech enhancement is a popular method for making ASR systems more robust. Single-channel spectral subtraction was originally designed to improve human speech intelligibility, and many attempts have been made to optimise this algorithm in terms of signal-based metrics such as maximised Signal-to-Noise Ratio (SNR) or minimised speech distortion. Such metrics assess enhancement performance for intelligibility, not speech recognition, making them sub-optimal for ASR applications. This research investigates two methods for closely coupling subtractive-type enhancement algorithms with ASR: (a) a computationally-efficient Mel-filterbank noise subtraction technique based on likelihood-maximisation (LIMA), and (b) introducing phase spectrum information to enable spectral subtraction in the complex frequency domain. Likelihood-maximisation uses gradient descent to optimise parameters of the enhancement algorithm to best fit the acoustic speech model given a word sequence known a priori. Whilst this technique is shown to improve ASR word accuracy, it is also identified to be particularly sensitive to non-noise mismatches between the training and testing data. Phase information has long been ignored in spectral subtraction as it is deemed to have little effect on human intelligibility. In this work it is shown that phase information is important in obtaining highly accurate estimates of the clean speech magnitudes which are typically used in ASR feature extraction.
Phase Estimation via Delay Projection is proposed based on the stationarity of sinusoidal signals, and demonstrates the potential to produce improvements in ASR word accuracy in a wide range of SNR. Throughout the dissertation, consideration is given to practical implementation in vehicular environments which resulted in two novel contributions – a LIMA framework which takes advantage of the grounding procedure common to speech dialogue systems, and a resource-saving formulation of frequency-domain spectral subtraction for realisation in field-programmable gate array hardware. The techniques proposed in this dissertation were evaluated using the Australian English In-Car Speech Corpus which was collected as part of this work. This database is the first of its kind within Australia and captures real in-car speech of 50 native Australian speakers in seven driving conditions common to Australian environments.
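The baseline that both proposed methods build on, single-channel spectral subtraction, can be sketched in a few lines: an estimated noise power spectrum is subtracted from the noisy power spectrum, with a spectral floor to avoid negative power. This is the textbook form only, not the dissertation's LIMA or phase-estimation techniques, and the over-subtraction factor alpha, floor beta, and toy spectra are illustrative assumptions.

```python
import numpy as np

def spectral_subtract(noisy_power, noise_power, alpha=1.0, beta=0.01):
    """Estimate the clean power spectrum by subtracting the noise estimate,
    flooring the result at a small fraction of the noisy power."""
    clean = noisy_power - alpha * noise_power
    return np.maximum(clean, beta * noisy_power)  # spectral floor

# Toy 3-bin power spectra (noisy observation and noise estimate).
noisy = np.array([4.0, 1.0, 9.0])
noise = np.array([1.0, 2.0, 1.0])
print(spectral_subtract(noisy, noise))  # bins: 3.0, floored 0.01, 8.0
```

The middle bin shows why the floor matters: the noise estimate exceeds the observed power there, and without the floor the "clean" power would go negative.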
Abstract:
A promenade performance. This research produced a unique combination of performance using electronically augmented costuming, site-specific discrete electronic lighting and video projection, and sustained mountainside/top choreography. The work was examined and expanded in two subsequent peer-reviewed papers which scoped out the emerging field of ‘Grounded Media’. Curator and writer Kevin Murray took up and extended these ideas in subsequent critical writing, and the work was also featured in a major two-page profile in Realtime. The work was commissioned by the long-established Floating Land Festival and involved extensive on-site work as well as a residency, production and artist talk series at the Noosa Art Gallery. A documentary film of the work was subsequently presented in the three-month exhibition ‘Lines of Sight’ for the Nishi Ogi Machi Media Festival, Nishiogikubo Station Platform 1, Tokyo, Japan, curated by Youkobo Art Space.
Abstract:
An Interactive Installation with holographic 3D projections, satellite imagery, surround sound and intuitive body driven interactivity. Remnant (v.1) was commissioned by the 2010 TreeLine ecoArt event - an initiative of the Sunshine Coast Council and presented at a remnant block of subtropical rainforest called ‘Mary Cairncross Scenic Reserve’ - located 100kms north of Brisbane near the township of Maleny. V2 was later commissioned for KickArts Gallery, Cairns, re-presenting the work in a new open format which allowed audiences to both experience the original power of the work and to also understand the construction of the work's powerful illusory, visual spaces. This art-science project focused upon the idea of remnant landscapes - isolated blocks of forest (or other vegetation types) typically set within a patchwork quilt of surrounding farmed land. Participants peer into a mysterious, long tunnel of imagery whilst navigating entirely through gentle head movements - allowing them to both 'steer' in three dimensions and also 'alight', as a butterfly might, upon a sector of landscape - which in turn reveals an underlying 'landscape of mind'. The work challenges audiences to re-imagine our conceptions of country in ways that will lead us to better reconnect and sustain today’s heavily divided landscapes. The research field involved developing new digital image projection methods, alternate embodied interaction and engagement strategies for eco-political media arts practice. The context was the creation of improved embodied and improvisational experiences for participants, further informed by ‘eco-philosophical’ and sustainment theories. By engaging with deep conceptions of connectivity between apparently disparate elements, the work considered novel strategies for fostering new desires, for understanding and re-thinking the requisite physical and ecological links between ‘things’ that have been historically shattered. 
The methodology was primarily practice-led and in concert with underlying theories. The work’s knowledge contribution was to question how new media interactive experience and embodied interaction might prompt participants to reflect upon appropriate resources and knowledges required to generate this substantive desire for new approaches to sustainment. This was accentuated through the power of learning implied by the work's strongly visual and kinaesthetic interface (i.e. the tunnel of imagery and the head- and torso-operated navigation). The work was commissioned by the 2010 TreeLine ecoArt event - an initiative of the Sunshine Coast Council - and the second version was commissioned by KickArts Gallery, Cairns, specifically funded by a national optometrist chain. It was also funded in development by Arts Queensland and reviewed in Realtime.
Abstract:
The Pedestrian Interaction Patch Project (PIPP) seeks to exert influence over and encourage abnormal pedestrian behavior. By placing an unadvertised (and non-recording) interactive video manipulation system and projection source in a high-traffic public area, the PIPP allows pedestrians to privately (and publicly) re-engage with a previously inactive physical environment, like a commonly used walkway or corridor. This system, the results of which are projected in real time on the architectural surface, inadvertently confronts pedestrians with questions around preconceived notions of self and space. In an attempt to re-activate our relationship with the physical surrounds we occupy each day, the PIPP creates a new set of memories to be recalled as we re-enter known environments once PIPP has moved on, and as such re-enlivens our relationship with the everyday architecture we stroll past every day. The PIPP environment is controlled using the software program Isadora, devised by Mark Coniglio at Troika Ranch, and contains a series of video manipulation patches that are designed not only to grab the pedestrians' attention but also to encourage a sense of play and interaction between the architecture, the digital environment, the initially unsuspecting participant(s) and the pedestrian audience. The PIPP was included as part of the planned walking tour for the “Playing in Urban Spaces” seminar day, and was an installation that ran for the length of the symposium in a reclaimed pedestrian space that was encountered by both the participants and general public during the course of the day-long event. Ideally, once discovered, PIPP encouraged pedestrians to return through the course of the seminar day to see if the environmental patches had changed or altered, and to change their standard route to include the PIPP installation or to avoid it; either way, encouraging an active response to the pathways normally traveled or newly discovered each day.
Abstract:
Robust image hashing seeks to transform a given input image into a shorter hashed version using a key-dependent non-invertible transform. These image hashes can be used for watermarking, image integrity authentication or image indexing for fast retrieval. This paper introduces a new method of generating image hashes based on extracting Higher Order Spectral features from the Radon projection of an input image. The feature extraction process is non-invertible and non-linear, and different hashes can be produced from the same image through the use of random permutations of the input. We show that the transform is robust to typical image transformations such as JPEG compression, noise, scaling, rotation, smoothing and cropping. We evaluate our system using a verification-style framework based on calculating false match and false non-match likelihoods, using the publicly available Uncompressed Colour Image Database (UCID) of 1320 images. We also compare our results to Swaminathan’s Fourier-Mellin based hashing method, achieving at least a 1% EER improvement under noise, scaling and sharpening.
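The overall pipeline described above (projection features, a key-dependent randomisation, a non-invertible quantisation) can be sketched in miniature. This is a generic illustration under stated assumptions, not the authors' Higher Order Spectra method: it uses crude axis-aligned projection sums in place of the Radon/HOS features, a keyed permutation for the key dependence, and binary thresholding as the non-invertible step.

```python
import numpy as np

def image_hash(image, key):
    """Toy key-dependent image hash: projection features, keyed permutation,
    then binary quantisation (which makes the mapping non-invertible)."""
    rng = np.random.default_rng(key)            # key-dependent randomness
    feats = np.concatenate([image.sum(axis=0),  # stand-in projection features
                            image.sum(axis=1)])
    feats = feats[rng.permutation(feats.size)]  # keyed permutation of features
    return (feats > feats.mean()).astype(np.uint8)  # threshold to bits

img = np.arange(16, dtype=float).reshape(4, 4)
h1 = image_hash(img, key=42)
h2 = image_hash(img, key=7)
print(h1.tolist())
print(h2.tolist())  # same image, different key: generally a different hash
```

The same image always hashes identically under the same key, while different keys reorder the feature bits, which is the property that makes such hashes usable for keyed authentication.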
Abstract:
To date, studies have focused on the acquisition of alphabetic second languages (L2s) in alphabetic first language (L1) users, demonstrating significant transfer effects. The present study examined the process from a reverse perspective, comparing logographic (Mandarin-Chinese) and alphabetic (English) L1 users in the acquisition of an artificial logographic script, in order to determine whether similar language-specific advantageous transfer effects occurred. English monolinguals, English-French bilinguals and Chinese-English bilinguals learned a small set of symbols in an artificial logographic script and were subsequently tested on their ability to process this script in regard to three main perspectives: L2 reading, L2 working memory (WM), and inner processing strategies. In terms of L2 reading, a lexical decision task on the artificial symbols revealed markedly faster response times in the Chinese-English bilinguals, indicating a logographic transfer effect suggestive of a visual processing advantage. A syntactic decision task evaluated the degree to which the new language was mastered beyond the single word level. No L1-specific transfer effects were found for artificial language strings. In order to investigate visual processing of the artificial logographs further, a series of WM experiments were conducted. Artificial logographs were recalled under concurrent auditory and visuo-spatial suppression conditions to disrupt phonological and visual processing, respectively. No L1-specific transfer effects were found, indicating no visual processing advantage of the Chinese-English bilinguals. However, a bilingual processing advantage was found indicative of a superior ability to control executive functions. In terms of L1 WM, the Chinese-English bilinguals outperformed the alphabetic L1 users when processing L1 words, indicating a language experience-specific advantage. 
Questionnaire data on the cognitive strategies that were deployed during the acquisition and processing of the artificial logographic script revealed that the Chinese-English bilinguals rated their inner speech as lower than the alphabetic L1 users, suggesting that they were transferring their phonological processing skill set to the acquisition and use of an artificial script. Overall, evidence was found to indicate that language learners transfer specific L1 orthographic processing skills to L2 logographic processing. Additionally, evidence was also found indicating that a bilingual history enhances cognitive performance in L2.