Abstract:
The aim of this paper is to provide a comparison of various algorithms and parameters for building reduced semantic spaces. The effect of dimension reduction, the stability of the representation and the effect of word order are examined in the context of five algorithms for building semantic vectors: random projection (RP), singular value decomposition (SVD), non-negative matrix factorization (NMF), permutations and holographic reduced representations (HRR). The quality of the semantic representation was tested by means of a synonym-finding task, using the TOEFL test on the TASA corpus. Dimension reduction was found to improve the quality of the semantic representation, but it is hard to find the optimal parameter settings. Even though dimension reduction by RP was found to be more generally applicable than SVD, the semantic vectors produced by RP are somewhat unstable. Encoding word order into the semantic vector representation via HRR did not lead to any increase in scores over vectors constructed from word co-occurrence in context information. In this regard, very small context windows resulted in better semantic vectors for the TOEFL test.
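A minimal sketch (not the paper's implementation) of the random-projection step described above, applied to a toy term-context co-occurrence matrix and used for a TOEFL-style synonym choice; the matrix sizes, dimensionality k and seed are illustrative assumptions.

```python
# Minimal sketch: reduce a toy term-context co-occurrence matrix with random
# projection (RP) and compare words by cosine similarity, as in a TOEFL-style
# synonym choice. Sizes, k and the random seed are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_terms, n_contexts, k = 1000, 5000, 300

# Toy co-occurrence counts standing in for counts gathered from a corpus.
C = rng.poisson(0.05, size=(n_terms, n_contexts)).astype(float)

# Random projection matrix; sparse ternary matrices are a common alternative.
R = rng.normal(0.0, 1.0 / np.sqrt(k), size=(n_contexts, k))
V = C @ R  # reduced semantic vectors, one row per term

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Synonym choice: pick the candidate whose vector is closest to the probe word.
probe, candidates = 0, [10, 20, 30, 40]
best = max(candidates, key=lambda i: cosine(V[probe], V[i]))
print("chosen candidate:", best)
```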
Abstract:
This chapter provides an overview of the contribution of feminist criminologies to understandings of the complex intersections between sex, gender and crime. Dozens of scholars and activists have participated in these debates over the past four decades. For our contribution to this handbook, we interviewed ten distinguished scholars whose contributions are recognized internationally. Through the commentary provided by these scholars, this chapter examines some of the distinctive contributions of feminism to our knowledge about sex, gender, and crime, as well as some of the challenges it continues to face in the field of criminology. We conclude that feminist work within criminology continues to face a number of lingering challenges, most notably in relation to the struggle to maintain relevance in a world where concerns about gender inequality are marginalized and treated as historical relics rather than contemporary issues; where there are ongoing tensions around the best strategies for change, as well as difficulties in challenging distorted representations of female crime and violence; and where a backlash of anti-feminist politics seeks to discredit explanations that draw a link between sex, gender, and crime. This chapter critically reviews these lingering challenges, locating feminist approaches (of which there are many) at the centre rather than the periphery of advancing knowledge about gender, sex, and crime.
Abstract:
At common law, a corporation may be vicariously liable for the conduct of its appointed agents, employees or directors. This generally requires the agent or employee to be acting in the course of his or her agency or employment and, in the case of representations, to have actual or implied authority to make the representations. The circumstances in which a corporation may be liable for the conduct of its agents, employees or directors are broadened under the Australian Consumer Law (ACL) to where one of these parties engages in conduct “on behalf of” the corporation. As the decision in Bennett v Elysium Noosa Pty Ltd (in liq) demonstrates, this may extend liability for the misleading conduct of a salesperson for the joint venture to parties who are not formal members of the joint venture, but where the joint venture activities are within the course of the entity’s “business, affairs or activities”.
Abstract:
This paper presents a graph-based method to weight medical concepts in documents for the purposes of information retrieval. Medical concepts are extracted from free-text documents using a state-of-the-art technique that maps n-grams to concepts from the SNOMED CT medical ontology. In our graph-based concept representation, concepts are vertices in a graph built from a document, and edges represent associations between concepts. This representation naturally captures dependencies between concepts, an important requirement for interpreting medical text, and a feature lacking in bag-of-words representations. We apply existing graph-based term weighting methods to weight medical concepts. Using concepts rather than terms addresses vocabulary mismatch and encapsulates terms belonging to a single medical entity within a single concept. In addition, we further extend previous graph-based approaches by injecting domain knowledge that estimates the importance of a concept within the global medical domain. Retrieval experiments on the TREC Medical Records collection show our method outperforms both term and concept baselines. More generally, this work provides a means of integrating background knowledge contained in medical ontologies into data-driven information retrieval approaches.
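As a hedged illustration of the general idea, and not the paper's exact method, the sketch below builds a small concept graph from a document's extracted concepts (placeholder strings standing in for SNOMED CT concept identifiers) and weights each concept with PageRank, one common graph-based weighting scheme; the co-occurrence window and the domain-knowledge prior are assumptions.

```python
# Illustrative sketch: build a concept graph from a document's extracted
# concepts and weight each concept with PageRank. Concept IDs, the window
# size and the domain-knowledge prior are placeholders, not the paper's data.
import networkx as nx

doc_concepts = ["C1", "C2", "C3", "C1", "C4", "C2"]  # extracted, in order
window = 2

G = nx.Graph()
G.add_nodes_from(doc_concepts)
for i, c in enumerate(doc_concepts):
    for d in doc_concepts[i + 1:i + 1 + window]:
        if c != d:
            G.add_edge(c, d)  # edge = association between co-occurring concepts

# Optional domain-knowledge prior over concepts (hypothetical values).
prior = {"C1": 0.4, "C2": 0.3, "C3": 0.2, "C4": 0.1}
weights = nx.pagerank(G, personalization=prior)
print(weights)  # concept weights usable in place of tf-idf in retrieval
```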
Abstract:
The challenge of persistent appearance-based navigation and mapping is to develop an autonomous robotic vision system that can simultaneously localize, map and navigate over the lifetime of the robot. However, the computation time and memory requirements of current appearance-based methods typically scale not only with the size of the environment but also with the operation time of the platform; also, repeated revisits to locations will develop multiple competing representations which reduce recall performance. In this paper we present a solution to the persistent localization, mapping and global path planning problem in the context of a delivery robot in an office environment over a one-week period. Using a graphical appearance-based SLAM algorithm, CAT-Graph, we demonstrate constant time and memory loop closure detection with minimal degradation during repeated revisits to locations, along with topological path planning that improves over time without using a global metric representation. We compare the localization performance of CAT-Graph to openFABMAP, an appearance-only SLAM algorithm, and the path planning performance to occupancy-grid based metric SLAM. We discuss the limitations of the algorithm with regard to environment change over time and illustrate how the topological graph representation can be coupled with local movement behaviors for persistent autonomous robot navigation.
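The following is a generic illustration of topological path planning over an appearance graph, not CAT-Graph itself: stored edge costs are blended with newly measured traversal costs so that routes improve over time without any global metric map. Node names, costs and the blending factor are invented for the example.

```python
# Generic illustration (not CAT-Graph): plan paths over a topological
# appearance graph whose edge costs are refined from traversal experience.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("dock", "corridor", 5.0),
    ("corridor", "office_A", 3.0),
    ("corridor", "office_B", 4.0),
    ("office_A", "office_B", 6.0),
])

def record_traversal(g, u, v, measured_cost, alpha=0.3):
    """Blend a newly measured traversal cost into the stored edge weight."""
    g[u][v]["weight"] = (1 - alpha) * g[u][v]["weight"] + alpha * measured_cost

record_traversal(G, "corridor", "office_A", 2.0)  # corridor was quicker than expected
print(nx.shortest_path(G, "dock", "office_B", weight="weight"))
```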
Abstract:
Understanding complex systems within the human body presents a unique challenge for medical engineers and health practitioners. One significant issue is the ability to communicate their research findings to audiences with limited medical knowledge or understanding of the behaviour and composition of such structures. Much of what is known about the human body is currently communicated through abstract representations, which include raw data sets, hand-drawn illustrations or cellular automata. The development of 3D Computer Graphics Animation has provided a new medium for communicating these abstract concepts to audiences. This paper presents an approach for the visualisation of human articular cartilage deterioration using 3D Computer Graphics Animation. The animated outcome of this research introduces the complex interior structure of human cartilage to audiences with limited medical engineering knowledge.
Abstract:
Image representations derived from simplified models of the primary visual cortex (V1), such as HOG and SIFT, elicit good performance in a myriad of visual classification tasks including object recognition/detection, pedestrian detection and facial expression classification. A central question in the vision, learning and neuroscience communities regards why these architectures perform so well. In this paper, we offer a unique perspective to this question by subsuming the role of V1-inspired features directly within a linear support vector machine (SVM). We demonstrate that a specific class of such features in conjunction with a linear SVM can be reinterpreted as inducing a weighted margin on the Kronecker basis expansion of an image. This new viewpoint on the role of V1-inspired features allows us to answer fundamental questions on the uniqueness and redundancies of these features, and offer substantial improvements in terms of computational and storage efficiency.
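The core observation can be checked numerically: for any linear feature map x -> Bx, a linear SVM score w^T(Bx) + b equals (B^T w)^T x + b, so the features can be folded into an equivalent weight vector on the raw image (its pixel/Kronecker basis). The sketch below uses random stand-ins for B, w and x; it illustrates the identity, not the authors' specific construction.

```python
# Numerical check: a linear SVM on linearly filtered features is equivalent
# to an SVM with remapped weights acting directly on the raw image.
import numpy as np

rng = np.random.default_rng(1)
d_img, d_feat = 64, 128
B = rng.normal(size=(d_feat, d_img))  # linear feature extractor (e.g. a filter bank)
w = rng.normal(size=d_feat)           # SVM weights learned on the features
x = rng.normal(size=d_img)            # a raw image, flattened

score_feature_space = w @ (B @ x)
w_equiv = B.T @ w                     # equivalent weights on raw pixels
score_pixel_space = w_equiv @ x
print(np.allclose(score_feature_space, score_pixel_space))  # True
```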
Abstract:
Real-world AI systems have recently been deployed which can automatically analyze the plans and tactics of tennis players. As the game-state is updated regularly at short intervals (i.e. point-level), a library of successful and unsuccessful plans of a player can be learnt over time. Given the relative strengths and weaknesses of a player’s plans, a set of proven plans or tactics from the library that characterize a player can be identified. For low-scoring, continuous team sports like soccer, such analysis for multi-agent teams does not exist, as the game is not segmented into “discretized” plays (i.e. plans), making it difficult to obtain a library that characterizes a team’s behavior. Additionally, as player tracking data is costly and difficult to obtain, we only have partial team tracings in the form of ball actions, which makes this problem even more difficult. In this paper, we propose a method to overcome these issues by representing team behavior via play-segments, which are spatio-temporal descriptions of ball movement over fixed windows of time. Using these representations we can characterize team behavior from entropy maps, which give a measure of the predictability of team behaviors across the field. We show the efficacy and applicability of our method on the 2010-2011 English Premier League soccer data.
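As a rough sketch of the entropy-map idea, and not the authors' pipeline, the example below discretises the pitch into zones and computes the Shannon entropy of the play-segment distribution observed in each zone; the zone grid and the toy event list are assumptions.

```python
# Illustrative entropy map: higher entropy in a zone means the team's
# play-segments starting there are less predictable. Data are invented.
from collections import Counter, defaultdict
from math import log2

# (zone, play_segment_label) pairs derived from ball-action data.
events = [(0, "short_pass"), (0, "short_pass"), (0, "long_ball"),
          (1, "cross"), (1, "short_pass"), (1, "dribble"), (1, "cross")]

by_zone = defaultdict(Counter)
for zone, segment in events:
    by_zone[zone][segment] += 1

def entropy(counter):
    total = sum(counter.values())
    return -sum((c / total) * log2(c / total) for c in counter.values())

entropy_map = {zone: entropy(c) for zone, c in by_zone.items()}
print(entropy_map)  # per-zone predictability of team behaviour
```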
Abstract:
In recent years, sparse representation based classification (SRC) has received much attention in face recognition with multiple training samples of each subject. However, it cannot be easily applied to a recognition task with insufficient training samples under uncontrolled environments. On the other hand, cohort normalization, as a way of measuring the degradation effect under challenging environments in relation to a pool of cohort samples, has been widely used in the area of biometric authentication. In this paper, for the first time, we introduce cohort normalization to SRC-based face recognition with insufficient training samples. Specifically, a user-specific cohort set is selected to normalize the raw residual, which is obtained from comparing the test sample with its sparse representations corresponding to the gallery subject, using polynomial regression. Experimental results on AR and FERET databases show that cohort normalization can bring SRC much robustness against various forms of degradation factors for undersampled face recognition.
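One plausible reading of the normalization step is sketched below, under stated assumptions (the residual values, cohort size and polynomial order are illustrative, and this is not necessarily the authors' exact scheme): fit a low-order polynomial to the sorted cohort residuals and use the fitted cohort profile to rescale the raw SRC residual.

```python
# Hedged sketch of cohort normalization by polynomial regression; all numbers
# are illustrative and the scheme is an assumed reading of the abstract.
import numpy as np

raw_residual = 0.62  # ||y - A_subject x_subject|| from the SRC comparison
cohort_residuals = np.sort(np.array([0.55, 0.58, 0.64, 0.70, 0.74, 0.81]))

ranks = np.arange(len(cohort_residuals))
coeffs = np.polyfit(ranks, cohort_residuals, deg=2)  # polynomial fit to the cohort profile
cohort_level = np.polyval(coeffs, ranks).mean()      # expected residual under current conditions

normalized = raw_residual / cohort_level             # < 1 suggests a genuine match
print(round(normalized, 3))
```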
Abstract:
To recognize faces in video, face appearances have been widely modeled as piece-wise local linear models which linearly approximate the smooth yet non-linear low-dimensional face appearance manifolds. The choice of representation for the local models is crucial. Most existing methods learn each local model individually, meaning that they only anticipate variations within each class. In this work, we propose to represent local models as Gaussian distributions which are learned simultaneously using heteroscedastic probabilistic linear discriminant analysis (PLDA). Each gallery video is therefore represented as a collection of such distributions. With the PLDA, not only are the within-class variations estimated during training, but the separability between classes is also maximized, leading to improved discrimination. The heteroscedastic PLDA itself is adapted from the standard PLDA to approximate face appearance manifolds more accurately. Instead of assuming a single global within-class covariance, the heteroscedastic PLDA learns different within-class covariances specific to each local model. In the recognition phase, a probe video is matched against gallery samples through the fusion of point-to-model distances. Experiments on the Honda and MoBo datasets have shown the merit of the proposed method, which achieves better performance than the state-of-the-art technique.
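A hedged sketch of the matching stage follows: each gallery video is represented as a collection of Gaussian local models, a probe frame is scored by its Mahalanobis distance to each model, and per-frame scores are fused to rank identities. The fusion rule, dimensions and random data are assumptions, not details taken from the paper.

```python
# Illustrative point-to-model matching with Gaussian local models; the fusion
# rule (mean of per-frame minimum distances) and the toy data are assumptions.
import numpy as np

def mahalanobis(x, mean, cov):
    diff = x - mean
    return float(np.sqrt(diff @ np.linalg.solve(cov, diff)))

rng = np.random.default_rng(2)
d = 5
# Two gallery identities, each a collection of (mean, covariance) local models.
gallery = {
    "person_A": [(rng.normal(0, 1, d), np.eye(d)), (rng.normal(0.5, 1, d), 2 * np.eye(d))],
    "person_B": [(rng.normal(3, 1, d), np.eye(d))],
}
probe_frames = rng.normal(0, 1, size=(10, d))  # frames from a probe video

scores = {}
for name, models in gallery.items():
    per_frame = [min(mahalanobis(f, m, S) for m, S in models) for f in probe_frames]
    scores[name] = float(np.mean(per_frame))   # lower = better match
print(min(scores, key=scores.get))
```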
Abstract:
“Supermassive” is a synchronised four-channel video installation with sound. Each video channel shows a different camera view of an animated three-dimensional scene, which visually references galactic or astral imagery. This scene comprises forty-four separate clusters of slowly orbiting white text. Each cluster refers to a different topic that has been sourced online. The topics are diverse with recurring subjects relating to spirituality, science, popular culture, food and experiences of contemporary urban life. The slow movements of the text and camera views are reinforced through a rhythmic, contemplative soundtrack. As an immersive installation, “Supermassive” operates somewhere between a meditational mind map and a representation of a contemporary data stream. “Supermassive” contributes to studies in the field of contemporary art. It is particularly concerned with the ways that graphic representations of language can operate in the exploration of contemporary lived experiences, whether actual or virtual. Artists such as Ed Ruscha and Christopher Wool have long explored the emotive and psychological potentials of graphic text. Other artists such as Doug Aitken and Pipilotti Rist have engaged with the physical and spatial potentials of audio-visual installations to create emotive and symbolic experiences for their audiences. Using a practice-led research methodology, “Supermassive” extends these creative inquiries. By creating a reflective atmosphere in which divergent textual subjects are pictured together, the work explores not only how we navigate information, but also how such navigations inform understandings of our physical and psychological realities. “Supermassive” has been exhibited internationally at LA Louver Gallery, Venice, California in 2013 and nationally with GBK as part of Art Month Sydney, also in 2013. It has been critically reviewed in The Los Angeles Times.
Abstract:
“Tranquility Falls” depicts a computer-generated waterfall set to sentimental stock music. As the water gushes, text borrowed from a popular talk show host’s self-help advice fades in and out graphically down the screen. As the animated phrases increase in tempo, the sounds of the waterfall begin to overwhelm the tender music. By creating overtly fabricated sensations of inspiration and awe, the work questions how and where we experience contemplation, wonderment and guidance in a contemporary context. “Tranquility Falls” contributes to studies in the field of contemporary art. It is particularly concerned with representations of spirituality and nature. These have been important themes in art practice for some time. For example, artists such as Olafur Eliasson and James Turrell have created artificial insertions in nature in order to question contemporary experiences of the natural environment. Other artists such as Nam Jun Paik have more directly addressed the changing relationship between spirituality and popular culture. Using a practice-led research methodology, “Tranquility Falls” extends these creative inquiries. By presenting an overtly synthetic but strangely evocative pun on a ‘fountain of knowledge’, it questions whether we are informed less by traditional engagements with organised religions and natural wonder, and are instead increasingly reliant on the mechanisms of popular culture for moments of insight and reflection. “Tranquility Falls” has been exhibited internationally at LA Louver Gallery, Venice, California in 2013 and nationally with GBK as part of Art Month Sydney, also in 2013. It has been critically reviewed in The Los Angeles Times.
Abstract:
Australian dramatic literature of the 1950s and 1960s heralded a new wave in theatre and canonised a unique Australian identity on local and international stages. In previous decades, Australian theatre had abounded with the mythology of the wide brown land and the outback hero. This rural setting proved remote to audiences and sat uneasily within the conventions of the naturalist theatre. It was the suburban home that provided the backdrop for this postwar evolution in Australian drama. While there were a number of factors that contributed to this watershed in Australian theatre, little has been written about how the spatial context may have influenced this movement. With the combined effects of postwar urbanization and shifting ideologies around domesticity, a new literary landscape had been created for playwrights to explore. Australian playwrights such as Dorothy Hewett, Ray Lawler and David Williamson transcended the outback hero by relocating him inside the postwar home. The Australian home of the 1960s slowly started subscribing to a new aesthetic of continuous living spaces and patios that extended from the exterior to the interior. These mass-produced homes employed diluted spatial principles of houses designed by the architects Le Corbusier, Ludwig Mies van der Rohe and Adolf Loos in the 1920s and 1930s. In writing about Adolf Loos’ architecture, Beatriz Colomina described the “house as a stage for the family theatre”. She also wrote that the inhabitants of Loos’ houses were “both actors and spectators of the family scene involved”. It has not been investigated whether this new capacity to spectate within the home was a catalyst for playwrights to reflect upon, and translate, the domestic environment to the stage. Audiences were also accustomed to being spectators of domesticity and could relate to the representations of home in the theatre. Additionally, the domestic setting provided a space for gender discourse; a space in which contestations of masculine and feminine identities could be played out. This research investigates whether spectating within the domestic setting contributed to the revolution in Australian dramatic literature of the 1950s and 1960s. The concept of the spectator in domesticity is underpinned by the work of Beatriz Colomina and Mark Wigley. An understanding of how playwrights may have been influenced by spectatorship within the home is ascertained through interviews and biographical research. The paper explores playwrights’ own domestic experiences and those that have influenced the plays they wrote, and endeavours to determine whether seeing into the home played a vital role in canonising the Australian identity on the stage.
Abstract:
Using Monte Carlo simulation for radiotherapy dose calculation can provide more accurate results when compared to the analytical methods usually found in modern treatment planning systems, especially in regions with a high degree of inhomogeneity. These more accurate results acquired using Monte Carlo simulation, however, often require orders of magnitude more calculation time so as to attain high precision, thereby reducing its utility within the clinical environment. This work aims to improve the utility of Monte Carlo simulation within the clinical environment by developing techniques which enable faster Monte Carlo simulation of radiotherapy geometries. This is achieved principally through the use of new high-performance computing environments and simpler, alternative yet equivalent representations of complex geometries. Firstly, the use of cloud computing technology and its application to radiotherapy dose calculation is demonstrated. As with other supercomputer-like environments, the time to complete a simulation decreases as 1/n as n cloud-based computers perform the calculation in parallel. Unlike traditional supercomputer infrastructure, however, there is no initial outlay of cost, only modest ongoing usage fees; the simulations described in the following are performed using this cloud computing technology. The definition of geometry within the chosen Monte Carlo simulation environment, Geometry & Tracking 4 (GEANT4) in this case, is also addressed in this work. At the simulation implementation level, a new computer-aided design interface is presented for use with GEANT4, enabling direct coupling between manufactured parts and their equivalent in the simulation environment, which is of particular importance when defining linear accelerator treatment head geometry. Further, a new technique for navigating tessellated or meshed geometries is described, allowing for up to three orders of magnitude performance improvement with the use of tetrahedral meshes in place of complex triangular surface meshes. The technique has application in the definition of both mechanical parts in a geometry and patient geometry. Static patient CT datasets like those found in typical radiotherapy treatment plans are often very large and present a significant performance penalty on a Monte Carlo simulation. By extracting the regions of interest in a radiotherapy treatment plan, and representing them in a mesh-based form similar to those used in computer-aided design, the above-mentioned optimisation techniques can be used to reduce the time required to navigate the patient geometry in the simulation environment. Results presented in this work show that these equivalent yet much simplified patient geometry representations enable significant performance improvements over simulations that consider raw CT datasets alone. Furthermore, this mesh-based representation allows for direct manipulation of the geometry, enabling, for example, motion augmentation for time-dependent dose calculation. Finally, an experimental dosimetry technique is described which allows the validation of time-dependent Monte Carlo simulations, like the ones made possible by the aforementioned patient geometry definition. A bespoke organic plastic scintillator dose rate meter is embedded in a gel dosimeter, thereby enabling simultaneous 3D dose distribution and dose rate measurement.
This work demonstrates the effectiveness of applying alternative and equivalent geometry definitions to complex geometries for the purposes of Monte Carlo simulation performance improvement. Additionally, these alternative geometry definitions allow for manipulations to be performed on otherwise static and rigid geometry.
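As a minimal illustration of the kind of geometric query that underlies tetrahedral-mesh navigation (a generic barycentric point-in-tetrahedron test, not GEANT4's actual navigator), the sketch below shows how a particle position can be located within a tetrahedron; the vertex coordinates are arbitrary.

```python
# Generic point-in-tetrahedron test via barycentric coordinates; this is the
# kind of cheap containment query that makes tetrahedral meshes faster to
# track through than complex triangular surface meshes.
import numpy as np

def point_in_tetrahedron(p, v0, v1, v2, v3, eps=1e-12):
    """True if point p lies inside the tetrahedron (v0, v1, v2, v3)."""
    T = np.column_stack((v1 - v0, v2 - v0, v3 - v0))
    b = np.linalg.solve(T, p - v0)  # barycentric coordinates w.r.t. v1, v2, v3
    return bool(np.all(b >= -eps) and b.sum() <= 1 + eps)

v0, v1, v2, v3 = map(np.array, ([0., 0, 0], [1., 0, 0], [0., 1, 0], [0., 0, 1]))
print(point_in_tetrahedron(np.array([0.2, 0.2, 0.2]), v0, v1, v2, v3))  # True
print(point_in_tetrahedron(np.array([0.9, 0.9, 0.9]), v0, v1, v2, v3))  # False
```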
Abstract:
Parametric and generative modelling methods are ways of making computer models more flexible and of formalising domain-specific knowledge. At present, no open standard exists for the interchange of parametric and generative information. The Industry Foundation Classes (IFC), an open standard for interoperability in building information models, are presented as the basis for an open standard in parametric modelling. The advantage of allowing parametric and generative representations is that the early design process can allow for more iteration, and changes can be implemented more quickly than with traditional models. This paper begins with a formal definition of what constitutes parametric and generative modelling methods and then proceeds to describe an open standard in which the interchange of components could be implemented. As an illustrative example of generative design, Frazer’s ‘Reptiles’ project from 1968 is reinterpreted.
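As a toy illustration of what a parametric component carries beyond static geometry, the sketch below pairs named parameters with a rule that regenerates the geometry when a parameter changes; this is a generic example of the idea, not IFC syntax or any schema proposed in the paper.

```python
# Toy parametric component: parameters plus a regeneration rule, the kind of
# information a parametric interchange format would need to capture.
from dataclasses import dataclass

@dataclass
class ParametricBox:
    width: float = 2.0
    depth: float = 1.0
    height_ratio: float = 0.5  # height derived from width: a generative rule

    def geometry(self):
        """Regenerate explicit geometry from the current parameters."""
        height = self.width * self.height_ratio
        return {"width": self.width, "depth": self.depth, "height": height,
                "volume": self.width * self.depth * height}

box = ParametricBox()
print(box.geometry())   # initial geometry derived from defaults
box.width = 4.0         # change one parameter...
print(box.geometry())   # ...and the dependent geometry updates
```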