764 results for Labels.
Abstract:
This paper presents a framework for performing real-time recursive estimation of landmarks’ visual appearance. Imaging data in its original high-dimensional space is probabilistically mapped to a compressed low-dimensional space through the definition of likelihood functions. The likelihoods are subsequently fused with prior information using a Bayesian update. This process produces a probabilistic estimate of the low-dimensional representation of the landmark's visual appearance. The overall filtering provides information complementary to the conventional position estimates, which is used to enhance data association. In addition to robotic observations, the filter integrates human observations into the appearance estimates. The appearance tracks computed by the filter allow landmark classification. The set of labels involved in the classification task is treated as an observation space in which human observations are made by selecting a label. The low-dimensional appearance estimates returned by the filter allow for low-cost communication in low-bandwidth sensor networks. Deployment of the filter in such a network is demonstrated in an outdoor mapping application involving a human operator, a ground vehicle and an air vehicle.
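Over a discrete low-dimensional appearance space, the recursive update described above reduces to a standard Bayesian fusion step. The sketch below is only illustrative: the appearance classes and likelihood values are hypothetical, and it does not reproduce the paper's actual representation of the compressed space.

```python
import numpy as np

def appearance_update(prior, likelihood):
    """One recursive Bayesian update over a discrete appearance space:
    the posterior is the normalized product of prior and likelihood."""
    posterior = prior * likelihood
    return posterior / posterior.sum()

# Hypothetical example: fuse a human label observation with the current estimate.
prior = np.array([0.5, 0.3, 0.2])            # assumed appearance classes, e.g. {tree, rock, car}
human_obs = np.array([0.8, 0.1, 0.1])        # likelihood implied by the selected label
print(appearance_update(prior, human_obs))   # updated appearance estimate
```

The same update accepts either a likelihood computed from imaging data or one implied by a human-selected label, which is how robotic and human observations can be mixed in one filter.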
Abstract:
The XML Document Mining track was launched to explore two main ideas: (1) identifying key problems and new challenges of the emerging field of mining semi-structured documents, and (2) studying and assessing the potential of Machine Learning (ML) techniques for dealing with generic ML tasks in the structured domain, i.e., classification and clustering of semi-structured documents. The track ran for six editions during INEX 2005, 2006, 2007, 2008, 2009 and 2010. The first five editions have been summarized in previous reports, and we focus here on the 2010 edition. INEX 2010 included two tasks in the XML Mining track: (1) an unsupervised clustering task and (2) a semi-supervised classification task where documents are organized in a graph. The clustering task requires the participants to group the documents into clusters, without any knowledge of category labels, using an unsupervised learning algorithm. The classification task, on the other hand, requires the participants to label the documents in the dataset with known categories using a supervised learning algorithm and a training set. This report gives the details of the clustering and classification tasks.
Abstract:
Women and Representation in Local Government opens up an opportunity to critique and move beyond suppositions and labels in relation to women in local government. Presenting a wealth of new empirical material, this book brings together international experts to examine and compare the presence of women at this level and features case studies on the US, UK, France, Germany, Spain, Finland, Uganda, China, Australia and New Zealand. Divided into four main sections, each exploring a key theme related to the subject of women and representation in local government, the book engages with contemporary gender theory and the broader literature on women and politics. The contributors explore local government as a gendered environment, critique strategies to address the limited number of elected female members in local government, and examine the impact of significant recent changes on local government through a gender lens. Addressing key questions of how gender equality can be achieved in this sector, it will be of strong interest to students and academics working in the fields of gender studies, local government and international politics.
Abstract:
We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical VC dimension, empirical VC entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and the second half, and the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
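The flipped-label trick mentioned at the end of the abstract lends itself to a simple Monte Carlo estimate of the expected maximal discrepancy. The sketch below assumes binary 0/1 labels and uses scikit-learn's LogisticRegression as a stand-in empirical risk minimizer; the function names and the `fit_erm` plug-in are hypothetical, not part of the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def expected_maximal_discrepancy(X, y, fit_erm=None, n_trials=10, seed=0):
    """Monte Carlo estimate of the expected maximal discrepancy penalty.

    Maximizing the gap between the error on one half of the data and the
    other half is carried out by empirical risk minimization with the
    labels of the first half flipped (assumes 0/1 labels)."""
    X, y = np.asarray(X), np.asarray(y)
    rng = np.random.default_rng(seed)
    if fit_erm is None:                        # hypothetical plug-in learner
        fit_erm = lambda A, b: LogisticRegression(max_iter=1000).fit(A, b)
    n = len(y) // 2
    gaps = []
    for _ in range(n_trials):
        idx = rng.permutation(len(y))[:2 * n]
        first, second = idx[:n], idx[n:]
        y_flipped = y.copy()
        y_flipped[first] = 1 - y_flipped[first]            # flip first-half labels
        clf = fit_erm(X[idx], y_flipped[idx])              # ERM on the relabelled sample
        err_first = np.mean(clf.predict(X[first]) != y[first])
        err_second = np.mean(clf.predict(X[second]) != y[second])
        gaps.append(err_first - err_second)                # approx. maximal discrepancy
    return float(np.mean(gaps))
```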
Abstract:
Kernel-based learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information is contained in the so-called kernel matrix, a symmetric and positive semidefinite matrix that encodes the relative positions of all points. Specifying this matrix amounts to specifying the geometry of the embedding space and inducing a notion of similarity in the input space: classical model selection problems in machine learning. In this paper we show how the kernel matrix can be learned from data via semidefinite programming (SDP) techniques. When applied to a kernel matrix associated with both training and test data this gives a powerful transductive algorithm: using the labeled part of the data one can learn an embedding also for the unlabeled part. The similarity between test points is inferred from training points and their labels. Importantly, these learning problems are convex, so we obtain a method for learning both the model class and the function without local minima. Furthermore, this approach leads directly to a convex method for learning the 2-norm soft margin parameter in support vector machines, solving an important open problem.
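As a rough illustration of learning a kernel matrix from labels, the sketch below restricts the search to nonnegative combinations of fixed base kernels and maximizes alignment with the training labels using cvxpy. This is a simplified variant for intuition only, not the paper's full soft-margin SDP formulation, and all names and parameters are assumptions.

```python
import numpy as np
import cvxpy as cp

def learn_kernel_combination(kernels, y_train, n_train, trace_budget=1.0):
    """Learn K = sum_i mu_i * K_i (mu_i >= 0, so K stays PSD) over the full
    train+test Gram matrices by maximizing alignment of the training block
    with y_train * y_train^T, subject to a trace constraint."""
    mu = cp.Variable(len(kernels), nonneg=True)
    K = sum(mu[i] * kernels[i] for i in range(len(kernels)))
    K_train = K[:n_train, :n_train]
    target = np.outer(y_train, y_train)                    # labels assumed in {-1, +1}
    objective = cp.Maximize(cp.sum(cp.multiply(K_train, target)))
    problem = cp.Problem(objective, [cp.trace(K) == trace_budget])
    problem.solve()
    # The learned combination also defines similarities between unlabeled
    # test points, which is what makes the approach transductive.
    return sum(float(w) * k for w, k in zip(mu.value, kernels))
```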
Abstract:
We consider the problem of structured classification, where the task is to predict a label y from an input x, and y has meaningful internal structure. Our framework includes supervised training of Markov random fields and weighted context-free grammars as special cases. We describe an algorithm that solves the large-margin optimization problem defined in [12], using an exponential-family (Gibbs distribution) representation of structured objects. The algorithm is efficient—even in cases where the number of labels y is exponential in size—provided that certain expectations under Gibbs distributions can be calculated efficiently. The method for structured labels relies on a more general result, specifically the application of exponentiated gradient updates [7, 8] to quadratic programs.
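The core numerical step, exponentiated gradient updates applied to a quadratic program over a probability distribution, can be made concrete on a small, explicit problem as below. This is only a sketch: the paper never materializes the full vector of dual variables, representing it instead through a Gibbs distribution over structured labels, and the step size and iteration count here are arbitrary assumptions.

```python
import numpy as np

def eg_simplex_qp(Q, b, eta=0.05, n_iters=500):
    """Exponentiated-gradient minimization of 0.5 * a^T Q a + b^T a over the
    probability simplex via multiplicative updates and renormalization."""
    d = len(b)
    a = np.full(d, 1.0 / d)                 # start from the uniform distribution
    for _ in range(n_iters):
        grad = Q @ a + b                    # gradient of the quadratic objective
        a = a * np.exp(-eta * grad)         # multiplicative (EG) step
        a /= a.sum()                        # stay on the simplex
    return a
```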
Abstract:
Kernel-based learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information is contained in the so-called kernel matrix, a symmetric and positive definite matrix that encodes the relative positions of all points. Specifying this matrix amounts to specifying the geometry of the embedding space and inducing a notion of similarity in the input space -- classical model selection problems in machine learning. In this paper we show how the kernel matrix can be learned from data via semi-definite programming (SDP) techniques. When applied to a kernel matrix associated with both training and test data this gives a powerful transductive algorithm -- using the labelled part of the data one can learn an embedding also for the unlabelled part. The similarity between test points is inferred from training points and their labels. Importantly, these learning problems are convex, so we obtain a method for learning both the model class and the function without local minima. Furthermore, this approach leads directly to a convex method to learn the 2-norm soft margin parameter in support vector machines, solving another important open problem. Finally, the novel approach presented in the paper is supported by positive empirical results.
Abstract:
In this paper we describe a body of work aimed at extending the reach of mobile navigation and mapping. We describe how running topological and metric mapping and pose estimation processes concurrently, using vision and laser ranging, has produced a full six-degree-of-freedom outdoor navigation system. It is capable of producing intricate three-dimensional maps over many kilometers and in real time. We consider issues concerning the intrinsic quality of the built maps and describe our progress towards adding semantic labels to maps via scene de-construction and labeling. We show how our choices of representation, inference methods and use of both topological and metric techniques naturally allow us to fuse maps built from multiple sessions with no need for manual frame alignment or data association.
Abstract:
A system to segment and recognize Australian 4-digit postcodes from address labels on parcels is described. Images of address labels are preprocessed and adaptively thresholded to reduce noise. Projections are used to segment the line and then the characters comprising the postcode. Individual digits are recognized using bispectral features extracted from their parallel beam projections. These features are insensitive to translation, scaling and rotation, and robust to noise. Results on scanned images are presented. The system is currently being improved and implemented to work on-line.
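Projection-based character segmentation of the kind described can be sketched as follows: sum the binarized pixels along one axis and split wherever the profile drops to zero. The function name and the zero-gap criterion are illustrative assumptions, not the exact pipeline of the system.

```python
import numpy as np

def segment_by_projection(binary_img, axis=0):
    """Split a thresholded label image into character regions by finding
    gaps (zero runs) in its projection profile; axis=0 sums columns."""
    profile = np.asarray(binary_img).sum(axis=axis)
    ink = profile > 0
    segments, start = [], None
    for i, has_ink in enumerate(ink):
        if has_ink and start is None:
            start = i                        # a new character region begins
        elif not has_ink and start is not None:
            segments.append((start, i))      # region ends at the first gap column
            start = None
    if start is not None:
        segments.append((start, len(ink)))
    return segments                          # list of (start, end) column ranges
```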
Abstract:
Manually constructing domain-specific sentiment lexicons is extremely time consuming, and it may not even be feasible for domains where linguistic expertise is not available. Research on the automatic construction of domain-specific sentiment lexicons has therefore become a hot topic in recent years. The main contribution of this paper is the illustration of a novel semi-supervised learning method which exploits both term-to-term and document-to-term relations hidden in a corpus for the construction of domain-specific sentiment lexicons. More specifically, the proposed two-pass pseudo-labeling method combines shallow linguistic parsing and corpus-based statistical learning to make domain-specific sentiment extraction scalable with respect to the sheer volume of opinionated documents archived on the Internet these days. Another novelty of the proposed method is that it can utilize the readily available user-contributed labels of opinionated documents (e.g., the user ratings of product reviews) to bootstrap the performance of sentiment lexicon construction. Our experiments show that the proposed method can generate high-quality domain-specific sentiment lexicons, as directly assessed by human experts. Moreover, the system-generated domain-specific sentiment lexicons improve polarity prediction tasks at the document level by 2.18% when compared to other well-known baseline methods. Our research opens the door to the development of practical and scalable methods for domain-specific sentiment analysis.
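The bootstrapping idea, using user ratings as pseudo labels for the terms of a document, can be sketched roughly as below. This covers only the corpus-statistics side; the shallow linguistic parsing pass and the two-pass refinement of the proposed method are not reproduced, and the rating threshold and scoring rule are assumed for illustration.

```python
from collections import Counter

def bootstrap_lexicon(docs, ratings, pos_threshold=3):
    """Pseudo-label terms by the polarity of the rated documents they occur
    in and score each term in [-1, 1] by its relative positive frequency."""
    pos_counts, neg_counts = Counter(), Counter()
    for doc, rating in zip(docs, ratings):
        bucket = pos_counts if rating > pos_threshold else neg_counts
        bucket.update(set(doc.lower().split()))
    lexicon = {}
    for term in set(pos_counts) | set(neg_counts):
        p, n = pos_counts[term], neg_counts[term]
        lexicon[term] = (p - n) / (p + n)    # > 0 leans positive, < 0 negative
    return lexicon

# Hypothetical usage with star-rated reviews:
# lex = bootstrap_lexicon(["great battery life", "awful screen"], [5, 1])
```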
Abstract:
In this paper, I outline a new approach towards media and diaspora using the concept of the ‘franchise nation’. It is my contention that current theories on migration, media and diaspora, with their emphasis on exile, multiple belongings, hybrid identities and their representations, are inadequate to the task of explaining the emergence of a new trend in the diaspora, home and host nation relationship. This, I suggest, is a recent shift most notable in the attitudes of the Chinese and Indian governments toward their diasporas. From earlier eras where Chinese sojourners were regarded as disloyal and Indians overseas were left to fend for themselves, Chinese and Indian migrants are today directly addressed and wooed by their nations of origin. This change is motivated in part by the realisation that diasporic populations are, in fact, resources that can bring significant influence to bear on home nation interests within host nations. Such sway in foreign lands gains greater importance as China and India are, by virtue of their economic rise and prominence on the world stage, subject to ever more intense international scrutiny. Members of these diasporas have willingly responded to these changes by claiming and cultivating pivotal roles for themselves within host nations as spokespersons, informants and representatives, trading on their assumed familiarity with home cultures, language and commerce. As a result, China and India have initiated a number of statecraft strategies in recent years to (re)engage their diasporas. Both nations have identified media as amongst the key instruments of their strategies. New media enhances the ability of all parties—home and host states, institutions and individuals—to participate, interact and reciprocate. While China’s centralised government has utilised the notion of soft power (ruan shili) to describe its practices, India’s efforts are diffused along the lines of nation branding via myriad labels like India Inc. and the Global Indian. To explain this emergent trend, I propose a new framework, franchise nation, defined as a reciprocal relationship between nation and diaspora that is characterised by mutual obligations and benefits. In appropriating this phrase from Stephenson, I liken contemporary statecraft operating in China and India to a business franchising system wherein benefits may be economic or cultural, and those thus connected signal their willingness for mutual exchange and concede a sense of obligation. As such, franchise nation is not concerned with remote, unidirectional interference in home nation affairs a la Anderson’s ‘long-distance nationalism’. Rather, it is a framework that seeks to reflect more closely the dynamism of the relationship between diaspora, home and host nations.
Abstract:
Australia’s mass market fashion labels have traditionally benefitted from their peripheral location to the world’s fashion centres. Operating a season behind, Australian mass market designers and buyers were well-placed to watch trends play out overseas before testing them in the Australian marketplace. For this reason, often a designer’s role was to source and oversee the manufacture of ‘knock-offs’, or close copies of Northern hemisphere mass market garments. Both Weller (2007) and Walsh (2009) have commented on this practice. The knock-on effect from this continues to be a cautious, derivative fashion sensibility within Australian mass market fashion design, where any new trend or product is first tested and proved overseas months earlier. However, there is evidence that this is changing. The rapid online dissemination of global fashion trends, coupled with the Australian consumer’s willingness to shop online, has meant that the ‘knock-off’ is less viable. For this reason, a number of mass market companies are moving away from the practice of direct sourcing and are developing product in-house under a Northern hemisphere model. This shift is also witnessed in the trend for mass market companies to develop collections in partnership with independent Australian designers. This paper explores the current and potential effects of these shifts within Australian mass market design practice, and discusses how they may impact on designers, consumers and on the wider culture of Australian fashion.
Abstract:
In ‘me as al, you as bobby, me as bobby, you as al’, appropriated footage is looped and supplemented with superimposed text, creating a scenario where Robert De Niro and Al Pacino endlessly stalk each other, with their readied-guns chased by hovering words. These titans of Hollywood screen acting represent opposing approaches to the construction of filmic identity, and as the text labels loosely adhere to one weapon and the next, the action on screen becomes an investigation of the subjective and objective potential within screen surrogate constructions of personalized identity. The work was included in the group show 'Vernacular Terrain' (curated by Lubi Thomas and Steven Danzig) for the Songzhuang Art Museum, Beijing.
Abstract:
Creative practice exhibited at the Brisbane Square Library Illustrating Fashion exhibition. It accompanied works from the acclaimed fashion labels Easton Pearson, Julie Tengdhal and Dogstar.
Abstract:
Process modeling is an important design practice in organizational improvement projects. In this paper, we examine the design of business process diagrams in contexts where novice analysts only have basic design tools such as paper and pencils available, and little to no understanding of formalized modeling approaches. Based on a quasi-experimental study with 89 BPM students, we identify five distinct process design archetypes ranging from textual to hybrid and graphical representation forms. We examine the quality of the designs and identify which representation formats enable an analyst to articulate business rules, states, events, activities, temporal and geospatial information in a process model. We found that the quality of the process designs decreases with the increased use of graphics and that hybrid designs featuring appropriate text labels and abstract graphical forms appear well-suited to describe business processes. We further examine how process design preferences predict formalized process modeling ability. Our research has implications for practical process design work in industry as well as for academic curricula on process design.