408 results for Orthographic representations
Abstract:
Image representations derived from simplified models of the primary visual cortex (V1), such as HOG and SIFT, elicit good performance in a myriad of visual classification tasks including object recognition/detection, pedestrian detection and facial expression classification. A central question in the vision, learning and neuroscience communities regards why these architectures perform so well. In this paper, we offer a unique perspective on this question by subsuming the role of V1-inspired features directly within a linear support vector machine (SVM). We demonstrate that a specific class of such features in conjunction with a linear SVM can be reinterpreted as inducing a weighted margin on the Kronecker basis expansion of an image. This new viewpoint on the role of V1-inspired features allows us to answer fundamental questions on the uniqueness and redundancies of these features, and to offer substantial improvements in computational and storage efficiency.
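The core of this reinterpretation rests on linearity: a linear SVM scored on linearly filtered features, w . (B x), is identical to a reweighted linear SVM, (B^T w) . x, applied to the raw pixels. A minimal numerical sketch, with a hypothetical toy filter bank B, image x and weight vector w (not the paper's actual Kronecker construction):

```python
def matvec(M, v):
    """Multiply matrix M (list of rows) by vector v."""
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def transpose(M):
    return [list(col) for col in zip(*M)]

# Toy 3x4 "filter bank" B and a 4-pixel image x (illustrative values only).
B = [[1, 0, -1, 0], [0, 2, 0, -2], [1, 1, 1, 1]]
x = [0.5, -1.0, 2.0, 0.25]
w = [0.3, -0.7, 0.1]  # SVM weights learnt in feature space

score_feature_space = dot(w, matvec(B, x))            # SVM on V1-style features
score_pixel_space = dot(matvec(transpose(B), w), x)   # equivalent pixel-space SVM
assert abs(score_feature_space - score_pixel_space) < 1e-9
```

Because the two scores agree for every image, questions about the features (uniqueness, redundancy, storage) can be asked of the single folded weight vector instead.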
Abstract:
Real-world AI systems have been recently deployed which can automatically analyze the plan and tactics of tennis players. As the game-state is updated regularly at short intervals (i.e. point-level), a library of successful and unsuccessful plans of a player can be learnt over time. Given the relative strengths and weaknesses of a player’s plans, a set of proven plans or tactics from the library that characterize a player can be identified. For low-scoring, continuous team sports like soccer, such analysis for multi-agent teams does not exist as the game is not segmented into “discretized” plays (i.e. plans), making it difficult to obtain a library that characterizes a team’s behavior. Additionally, as player tracking data is costly and difficult to obtain, we only have partial team tracings in the form of ball actions which makes this problem even more difficult. In this paper, we propose a method to overcome these issues by representing team behavior via play-segments, which are spatio-temporal descriptions of ball movement over fixed windows of time. Using these representations we can characterize team behavior from entropy maps, which give a measure of predictability of team behaviors across the field. We show the efficacy and applicability of our method on the 2010-2011 English Premier League soccer data.
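The entropy-map idea can be sketched in a few lines: discretize the field into zones, collect the distribution of where the ball goes next from each zone, and score each zone by the Shannon entropy of that distribution (low entropy = predictable play). The zone names and transitions below are hypothetical, and the paper's play-segments are richer than single transitions:

```python
import math
from collections import Counter, defaultdict

# Hypothetical ball-movement transitions (zone, next_zone) extracted from
# play-segments; purely illustrative data.
transitions = [
    ("midfield", "left_wing"), ("midfield", "left_wing"),
    ("midfield", "box"), ("midfield", "left_wing"),
    ("box", "goal"), ("box", "goal"), ("box", "goal"),
]

def entropy_map(transitions):
    """Shannon entropy (bits) of the next-zone distribution per field zone."""
    by_zone = defaultdict(Counter)
    for zone, nxt in transitions:
        by_zone[zone][nxt] += 1
    emap = {}
    for zone, counts in by_zone.items():
        total = sum(counts.values())
        emap[zone] = -sum((c / total) * math.log2(c / total)
                          for c in counts.values())
    return emap

emap = entropy_map(transitions)
# Play from the "box" is fully predictable (entropy 0); midfield play is not.
assert emap["box"] == 0.0
assert emap["midfield"] > 0.0
```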
Abstract:
In recent years, sparse representation based classification (SRC) has received much attention in face recognition with multiple training samples of each subject. However, it cannot be easily applied to a recognition task with insufficient training samples under uncontrolled environments. On the other hand, cohort normalization, as a way of measuring the degradation effect under challenging environments in relation to a pool of cohort samples, has been widely used in the area of biometric authentication. In this paper, for the first time, we introduce cohort normalization to SRC-based face recognition with insufficient training samples. Specifically, a user-specific cohort set is selected to normalize the raw residual, which is obtained from comparing the test sample with its sparse representations corresponding to the gallery subject, using polynomial regression. Experimental results on the AR and FERET databases show that cohort normalization can bring SRC much robustness against various forms of degradation factors for undersampled face recognition.
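A minimal sketch of the cohort-normalization step, assuming a degree-1 polynomial for brevity (the paper uses polynomial regression generally) and entirely hypothetical residual values: a regression fitted to the cohort residuals predicts how large a residual "should" be under the current degradation, and the raw residual is normalized against that prediction.

```python
def fit_line(xs, ys):
    """Least-squares degree-1 polynomial fit; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical cohort-sample ranks and their residuals under the same
# degradation as the test sample.
cohort_ranks = [1.0, 2.0, 3.0, 4.0, 5.0]
cohort_residuals = [0.42, 0.49, 0.55, 0.62, 0.68]
slope, intercept = fit_line(cohort_ranks, cohort_residuals)

def normalize(raw_residual, rank):
    expected = slope * rank + intercept  # cohort-predicted residual
    return raw_residual - expected       # normalized matching score

# A raw residual well below the cohort prediction suggests a genuine match;
# one above it suggests an impostor.
assert normalize(0.30, 3.0) < 0.0
assert normalize(0.80, 3.0) > 0.0
```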
Abstract:
To recognize faces in video, face appearances have been widely modeled as piece-wise local linear models which linearly approximate the smooth yet non-linear low dimensional face appearance manifolds. The choice of representations of the local models is crucial. Most of the existing methods learn each local model individually, meaning that they only anticipate variations within each class. In this work, we propose to represent local models as Gaussian distributions which are learned simultaneously using the heteroscedastic probabilistic linear discriminant analysis (PLDA). Each gallery video is therefore represented as a collection of such distributions. With the PLDA, not only are the within-class variations estimated during training, but the separability between classes is also maximized, leading to improved discrimination. The heteroscedastic PLDA itself is adapted from the standard PLDA to approximate face appearance manifolds more accurately. Instead of assuming a single global within-class covariance, the heteroscedastic PLDA learns different within-class covariances specific to each local model. In the recognition phase, a probe video is matched against gallery samples through the fusion of point-to-model distances. Experiments on the Honda and MoBo datasets have shown the merit of the proposed method, which achieves better performance than the state-of-the-art technique.
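The recognition phase can be sketched as follows, with each gallery video reduced to a set of Gaussian local models (diagonal covariances here for brevity) and per-frame point-to-model distances fused by summation. Subject names, feature dimensions and all numbers are hypothetical:

```python
def mahalanobis_sq(x, mean, var):
    """Squared Mahalanobis distance to a diagonal-covariance Gaussian."""
    return sum((xi - mi) ** 2 / vi for xi, mi, vi in zip(x, mean, var))

# Each gallery video: a collection of (mean, variance) local models.
gallery = {
    "subject_A": [([0.0, 0.0], [1.0, 1.0]), ([2.0, 2.0], [0.5, 0.5])],
    "subject_B": [([5.0, 5.0], [1.0, 1.0])],
}

def match(probe_frames, gallery):
    """Fuse per-frame minimum point-to-model distances; lowest total wins."""
    scores = {}
    for name, models in gallery.items():
        scores[name] = sum(min(mahalanobis_sq(f, m, v) for m, v in models)
                           for f in probe_frames)
    return min(scores, key=scores.get)

assert match([[0.1, -0.2], [1.9, 2.1]], gallery) == "subject_A"
```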
Abstract:
“Supermassive” is a synchronised four-channel video installation with sound. Each video channel shows a different camera view of an animated three-dimensional scene, which visually references galactic or astral imagery. This scene comprises forty-four separate clusters of slowly orbiting white text. Each cluster refers to a different topic that has been sourced online. The topics are diverse, with recurring subjects relating to spirituality, science, popular culture, food and experiences of contemporary urban life. The slow movements of the text and camera views are reinforced through a rhythmic, contemplative soundtrack. As an immersive installation, “Supermassive” operates somewhere between a meditational mind map and a representation of a contemporary data stream. “Supermassive” contributes to studies in the field of contemporary art. It is particularly concerned with the ways that graphic representations of language can operate in the exploration of contemporary lived experiences, whether actual or virtual. Artists such as Ed Ruscha and Christopher Wool have long explored the emotive and psychological potentials of graphic text. Other artists such as Doug Aitken and Pipilotti Rist have engaged with the physical and spatial potentials of audio-visual installations to create emotive and symbolic experiences for their audiences. Using a practice-led research methodology, “Supermassive” extends these creative inquiries. By creating a reflective atmosphere in which divergent textual subjects are pictured together, the work explores not only how we navigate information, but also how such navigations inform understandings of our physical and psychological realities. “Supermassive” has been exhibited internationally at LA Louver Gallery, Venice, California in 2013 and nationally with GBK as part of Art Month Sydney, also in 2013. It has been critically reviewed in The Los Angeles Times.
Abstract:
“Tranquility Falls” depicts a computer-generated waterfall set to sentimental stock music. As the water gushes, text borrowed from a popular talk show host’s self-help advice fades in and out graphically down the screen. As the animated phrases increase in tempo, the sounds of the waterfall begin to overwhelm the tender music. By creating overtly fabricated sensations of inspiration and awe, the work questions how and where we experience contemplation, wonderment and guidance in a contemporary context. “Tranquility Falls” contributes to studies in the field of contemporary art. It is particularly concerned with representations of spirituality and nature. These have been important themes in art practice for some time. For example, artists such as Olafur Eliasson and James Turrell have created artificial insertions in nature in order to question contemporary experiences of the natural environment. Other artists such as Nam June Paik have more directly addressed the changing relationship between spirituality and popular culture. Using a practice-led research methodology, “Tranquility Falls” extends these creative inquiries. By presenting an overtly synthetic but strangely evocative pun on a ‘fountain of knowledge’, it questions whether we are informed less by traditional engagements with organised religions and natural wonder, and instead increasingly reliant on the mechanisms of popular culture for moments of insight and reflection. “Tranquility Falls” has been exhibited internationally at LA Louver Gallery, Venice, California in 2013 and nationally with GBK as part of Art Month Sydney, also in 2013. It has been critically reviewed in The Los Angeles Times.
Abstract:
Australian dramatic literature of the 1950s and 1960s heralded a new wave in theatre and canonised a unique Australian identity on local and international stages. In previous decades, Australian theatre had abounded with the mythology of the wide brown land and the outback hero. This rural setting proved remote to audiences and sat uneasily within the conventions of the naturalist theatre. It was the suburban home that provided the backdrop for this postwar evolution in Australian drama. While there were a number of factors that contributed to this watershed in Australian theatre, little has been written about how the spatial context may have influenced this movement. With the combined effects of postwar urbanization and shifting ideologies around domesticity, a new literary landscape had been created for playwrights to explore. Australian playwrights such as Dorothy Hewett, Ray Lawler and David Williamson transcended the outback hero by relocating him inside the postwar home. The Australian home of the 1960s slowly started subscribing to a new aesthetic of continuous living spaces and patios that extended from the exterior to the interior. These mass-produced homes employed diluted spatial principles of houses designed by the architects Le Corbusier, Ludwig Mies van der Rohe and Adolf Loos in the 1920s and 1930s. In writing about Adolf Loos’ architecture, Beatriz Colomina described the “house as a stage for the family theatre”. She also wrote that the inhabitants of Loos’ houses were “both actors and spectators of the family scene involved”. It has not been investigated whether this new capacity to spectate within the home was a catalyst for playwrights to reflect upon, and translate, the domestic environment to the stage. Audiences were also accustomed to being spectators of domesticity and could relate to the representations of home in the theatre.
Additionally, the domestic setting provided a space for gender discourse; a space in which contestations of masculine and feminine identities could be played out. This research investigates whether spectating within the domestic setting contributed to the revolution in Australian dramatic literature of the 1950s and 1960s. The concept of the spectator in domesticity is underpinned by the work of Beatriz Colomina and Mark Wigley. An understanding of how playwrights may have been influenced by spectatorship within the home is ascertained through interviews and biographical research. The paper explores playwrights’ own domestic experiences and those that have influenced the plays they wrote and endeavours to determine whether seeing into the home played a vital role in canonising the Australian identity on the stage.
Abstract:
Using Monte Carlo simulation for radiotherapy dose calculation can provide more accurate results than the analytical methods usually found in modern treatment planning systems, especially in regions with a high degree of inhomogeneity. These more accurate results, however, often require orders of magnitude more calculation time to attain high precision, reducing the method's utility within the clinical environment. This work aims to improve the utility of Monte Carlo simulation within the clinical environment by developing techniques which enable faster Monte Carlo simulation of radiotherapy geometries. This is achieved principally through the use of new high performance computing environments and of simpler alternative, yet equivalent, representations of complex geometries. Firstly, the use of cloud computing technology and its application to radiotherapy dose calculation is demonstrated. As with other supercomputer-like environments, the time to complete a simulation decreases as 1/n with n cloud-based computers performing the calculation in parallel. Unlike traditional supercomputer infrastructure, however, there is no initial outlay of cost, only modest ongoing usage fees; the simulations described in the following are performed using this cloud computing technology. The definition of geometry within the chosen Monte Carlo simulation environment - Geometry & Tracking 4 (GEANT4) in this case - is also addressed in this work. At the simulation implementation level, a new computer aided design interface is presented for use with GEANT4, enabling direct coupling between manufactured parts and their equivalents in the simulation environment, which is of particular importance when defining linear accelerator treatment head geometry.
Further, a new technique for navigating tessellated or meshed geometries is described, allowing for up to 3 orders of magnitude performance improvement through the use of tetrahedral meshes in place of complex triangular surface meshes. The technique has application in the definition of both mechanical parts in a geometry and patient geometry. Static patient CT datasets like those found in typical radiotherapy treatment plans are often very large and impose a significant performance penalty on a Monte Carlo simulation. By extracting the regions of interest in a radiotherapy treatment plan and representing them in a mesh-based form similar to those used in computer aided design, the above-mentioned optimisation techniques can be used to reduce the time required to navigate the patient geometry in the simulation environment. Results presented in this work show that these equivalent yet much simplified patient geometry representations enable significant performance improvements over simulations that consider raw CT datasets alone. Furthermore, this mesh-based representation allows for direct manipulation of the geometry, enabling, for example, motion augmentation for time-dependent dose calculation. Finally, an experimental dosimetry technique is described which allows the validation of time-dependent Monte Carlo simulations, like those made possible by the aforementioned patient geometry definition. A bespoke organic plastic scintillator dose rate meter is embedded in a gel dosimeter, thereby enabling simultaneous 3D dose distribution and dose rate measurement. This work demonstrates the effectiveness of applying alternative and equivalent geometry definitions to complex geometries for the purpose of improving Monte Carlo simulation performance. Additionally, these alternative geometry definitions allow manipulations to be performed on otherwise static and rigid geometry.
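The basic query a tetrahedral-mesh navigator must answer is point location: which tetrahedron contains the current particle position? A standard way to test this, sketched below with an illustrative unit tetrahedron (the thesis's actual GEANT4 navigation code is not reproduced here), is an orientation test via signed volumes against each face:

```python
def signed_volume(a, b, c, d):
    """6x the signed volume of tetrahedron (a, b, c, d)."""
    ab = [b[i] - a[i] for i in range(3)]
    ac = [c[i] - a[i] for i in range(3)]
    ad = [d[i] - a[i] for i in range(3)]
    return (ab[0] * (ac[1] * ad[2] - ac[2] * ad[1])
            - ab[1] * (ac[0] * ad[2] - ac[2] * ad[0])
            + ab[2] * (ac[0] * ad[1] - ac[1] * ad[0]))

def point_in_tet(p, a, b, c, d):
    """True if p lies inside tetrahedron (a, b, c, d): substituting p for
    each vertex in turn must preserve the orientation sign."""
    signs = [signed_volume(a, b, c, p), signed_volume(a, b, p, d),
             signed_volume(a, p, c, d), signed_volume(p, b, c, d)]
    return all(s >= 0 for s in signs) or all(s <= 0 for s in signs)

unit_tet = ([0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1])
assert point_in_tet([0.1, 0.1, 0.1], *unit_tet)
assert not point_in_tet([1.0, 1.0, 1.0], *unit_tet)
```

Tetrahedra make this test constant-time and local, whereas locating a point relative to a closed triangular surface mesh requires a global ray-casting or distance query, which is one intuition for the reported speed-up.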
Abstract:
Parametric and generative modelling methods are ways of making computer models more flexible and of formalising domain-specific knowledge. At present, no open standard exists for the interchange of parametric and generative information. The Industry Foundation Classes (IFC), an open standard for interoperability in building information models, are presented as the base for an open standard in parametric modelling. The advantage of allowing parametric and generative representations is that the early design process can accommodate more iteration, and changes can be implemented more quickly than with traditional models. This paper begins with a formal definition of what constitutes a parametric or generative modelling method and then proceeds to describe an open standard in which the interchange of components could be implemented. As an illustrative example of generative design, Frazer’s ‘Reptiles’ project from 1968 is reinterpreted.
Abstract:
Topic modeling has been widely utilized in the fields of information retrieval, text mining and text classification. Most existing statistical topic modeling methods, such as LDA and pLSA, generate a term-based representation of a topic by selecting single words from the multinomial word distribution over that topic. There are two main shortcomings: firstly, popular or common words occur very often across different topics, bringing ambiguity to the understanding of topics; secondly, single words lack the coherent semantic meaning needed to accurately represent topics. To overcome these problems, in this paper we propose a two-stage model that combines text mining and pattern mining with statistical modeling to generate more discriminative and semantically rich topic representations. Experiments show that the optimized topic representations generated by the proposed methods outperform the typical statistical topic modeling method LDA in terms of accuracy and certainty.
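The two-stage idea can be sketched as follows: after a statistical model groups documents under a topic (stage one), frequent word *patterns* are mined from those documents (stage two) and used as the topic representation instead of ranked single words. The tiny corpus and support threshold below are illustrative, not the paper's method in full:

```python
from collections import Counter
from itertools import combinations

# Hypothetical documents already assigned to one topic (stage one assumed).
topic_docs = [
    {"data", "mining", "pattern", "discovery"},
    {"data", "mining", "association", "pattern"},
    {"data", "mining", "pattern", "rules"},
    {"text", "data", "stream"},
]

def frequent_patterns(docs, min_support):
    """Word pairs co-occurring in at least min_support documents."""
    counts = Counter(pair for doc in docs
                     for pair in combinations(sorted(doc), 2))
    return {pair for pair, c in counts.items() if c >= min_support}

patterns = frequent_patterns(topic_docs, min_support=3)
# Patterns such as ("data", "mining") carry coherent semantics, whereas the
# common word "data" alone would be ambiguous across topics.
assert ("data", "mining") in patterns
assert ("data", "stream") not in patterns
```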
Abstract:
Paris 1947 is the site of one of twentieth century fashion’s fictive highpoints. The New Look combined drama and poetics through an abiding rhetoric of elegance. In doing so it employed traditional modes of femininity, casting the woman of fashion in the guise of an ambiguous ‘new’ figure: half fairytale princess, half evil witch. This fashionable ideal was widely disseminated through key photographic representations, Willy Maywald’s 1947 image of the Bar Suit being a case in point. It was precisely such mythic formulations of ‘woman’ which Simone de Beauvoir was to take to task just two years later with the publication of The Second Sex. Driven by frustration with the status quo of real women, de Beauvoir recognised the role of fictive representations, both textual and visual, in defining women. This paper reads key sections of The Second Sex through a comparative analysis of two iconic images of French women from 1947: Cartier-Bresson’s classic portrait of de Beauvoir and Willy Maywald’s spectacular evocation of Christian Dior’s New Look. Cued by a compelling range of similarities between these images, this paper explores links between fashion, feminism and fiction in mid-century French culture.
Abstract:
This paper focuses on Australian texts with Asian representations, which will be discussed in terms of Ethical Intelligence (Weinstein, 2011) explored through drama. This approach aligns with the architecture of the Australian Curriculum: English (AC:E, v5, 2013), in particular the general capabilities of 'ethical understanding' and 'intercultural understanding'. It also addresses one aspect of the Cross-Curriculum Priorities, which is to include texts about peoples from Asia. The selected texts not only show the struggles undergone by the authors and protagonists, but also the positive contributions that diverse writers from Asian and Middle Eastern countries have made to Australia.
Abstract:
In this study we develop a theorization of an Internet dating site as a cultural artifact. The site, Gaydar, is targeted at gay men. We argue that contemporary received representations of their sexuality figure heavily in the site’s focus by providing a cultural logic for the apparently ad hoc development trajectories of its varied commercial and non-commercial services. More specifically, we suggest that the growing sets of services related to the website are heavily enmeshed within current social practices and meanings. These practices and meanings are, in turn, shaped by the interactions and preferences of a variety of diverse groups involved in what is routinely seen within the mainstream literature as a singularly specific sexuality and cultural project. Thus, we attend to two areas: the influence of the various social engagements associated with Gaydar, together with the further extension of its trajectory ‘beyond the web’. Through the case of Gaydar, we contribute a study that recognizes the need for attention to sexuality in information systems research and one which illustrates sexuality as a pivotal aspect of culture. We also draw from anthropology to theorize ICTs as cultural artifacts and provide insights into the contemporary phenomena of ICT-enabled social networking.
Abstract:
The representation of business process models has been a continuing research topic for many years now. However, many process model representations have not developed beyond minimally interactive 2D icon-based representations of directed graphs and networks, with little or no annotation for information overlays. In addition, very few of these representations have undergone a thorough analysis or design process with reference to psychological theories on data and process visualization. This dearth of visualization research, we believe, has led to problems with BPM uptake in some organizations, as the representations can be difficult for stakeholders to understand; how best to represent process models thus remains an open research question for the BPM community. In addition, business analysts and process modeling experts themselves need visual representations that are able to assist with key BPM life cycle tasks in the process of generating optimal solutions. With the rise of desktop computers and commodity mobile devices capable of supporting rich interactive 3D environments, we believe that much of the research performed in human-computer interaction, virtual reality, games and interactive entertainment has great potential in areas of BPM: to engage, provide insight, and promote collaboration amongst analysts and stakeholders alike. We believe this is a timely topic, with research emerging in a number of places around the globe relevant to this workshop. This is the second TAProViz workshop being run at BPM. The intention this year is to consolidate the results of last year's successful workshop by further developing this important topic and identifying the key research topics of interest to the BPM visualization community.
Abstract:
The output harmonic quality of N series connected full-bridge dc-ac inverters is investigated. The inverters are pulse width modulated using a common reference signal but randomly phased carrier signals. Through analysis and simulation, probability distributions for inverter output harmonics and vector representations of N carrier phases are combined and assessed. It is concluded that a low total harmonic distortion is most likely to occur and will decrease further as N increases.
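The statistical mechanism can be sketched with a Monte Carlo experiment: treat one carrier-band harmonic from each of the N inverters as a unit phasor with a uniformly random phase, while the fundamental (driven by the common reference) adds coherently and so scales with N. The relative harmonic content then tends to fall as N grows. All parameters below are illustrative, not the paper's analysis:

```python
import cmath
import random

random.seed(42)

def mean_relative_harmonic(n_inverters, trials=2000):
    """Average magnitude of the summed random-phase harmonic phasor,
    normalized by the coherently-added fundamental (proportional to N)."""
    total = 0.0
    for _ in range(trials):
        harmonic = sum(cmath.exp(1j * random.uniform(0, 2 * cmath.pi))
                       for _ in range(n_inverters))
        total += abs(harmonic) / n_inverters
    return total / trials

# Distortion relative to the fundamental shrinks as N increases: the random
# phasor sum grows only as sqrt(N) while the fundamental grows as N.
assert mean_relative_harmonic(16) < mean_relative_harmonic(2)
```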