901 results for Page name matching


Relevance: 20.00%

Abstract:

State-of-the-art image-set matching techniques typically implicitly model each image-set with a Gaussian distribution. Here, we propose to go beyond these representations and model image-sets as probability distribution functions (PDFs) using kernel density estimators. To compare and match image-sets, we exploit Csiszár f-divergences, which bear strong connections to the geodesic distance defined on the space of PDFs, i.e., the statistical manifold. Furthermore, we introduce valid positive definite kernels on the statistical manifold, which let us make use of more powerful classification schemes to match image-sets. Finally, we introduce a supervised dimensionality reduction technique that learns a latent space where f-divergences reflect the class labels of the data. Our experiments on diverse problems, such as video-based face recognition and dynamic texture classification, evidence the benefits of our approach over the state-of-the-art image-set matching methods.
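
As a rough sketch of the core comparison step only, assuming per-image feature vectors and using the KL divergence as one representative Csiszár f-divergence (the paper treats the family more generally), two image sets can be compared through the divergence between their kernel density estimates:

    import numpy as np
    from scipy.stats import gaussian_kde

    def kl_divergence(set_a, set_b, n_samples=2000):
        """Monte Carlo estimate of KL(p_a || p_b) between the KDEs of two image sets.
        set_a, set_b: arrays of shape (n_images, n_features), one feature vector per image."""
        p_a = gaussian_kde(set_a.T)         # KDE over the first image set
        p_b = gaussian_kde(set_b.T)         # KDE over the second image set
        samples = p_a.resample(n_samples)   # draw samples from p_a
        log_ratio = p_a.logpdf(samples) - p_b.logpdf(samples)
        return float(np.mean(log_ratio))    # E_{x ~ p_a}[log p_a(x) - log p_b(x)]

    def symmetric_kl(set_a, set_b):
        # Symmetrised divergence, usable as a dissimilarity for nearest-neighbour
        # image-set matching (illustrative usage, not taken from the paper).
        return 0.5 * (kl_divergence(set_a, set_b) + kl_divergence(set_b, set_a))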

Relevance: 20.00%

Abstract:

This paper discusses the art practice of Australian artists Catherine Sagin and Kate Woodcroft, who have been working collaboratively under various monikers since 2008. The duo define their artworks in terms of winning and losing, and play out the division of labour in an artistic practice that employs video, performance, photography and sculpture. Catherine or Kate utilise combative and comparative processes, which challenge notions of artistic collaboration and highlight the inherent tensions and competitive nature of working together.

Relevance: 20.00%

Abstract:

In this paper, we present a new feature-based approach for mosaicing of camera-captured document images. A novel block-based scheme is employed to ensure that corners can be reliably detected over a wide range of images. The 2-D discrete cosine transform is computed for image blocks defined around each of the detected corners, and a small subset of the coefficients is used as a feature vector. A 2-pass feature matching is performed to establish point correspondences from which the homography relating the input images can be computed. The algorithm is tested on a number of complex document images casually captured with a hand-held camera, yielding convincing results.
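
The pipeline can be sketched roughly as follows; the block size, the number of DCT coefficients kept, and the ratio-test stand-in for the 2-pass matching are assumptions for illustration, not the paper's exact parameters:

    import cv2
    import numpy as np

    BLOCK = 16   # half-size of the block taken around each corner (assumption)
    N_COEF = 10  # number of low-frequency DCT coefficients kept per axis (assumption)

    def dct_descriptors(gray, corners):
        descs, kept = [], []
        for x, y in corners:
            x, y = int(x), int(y)
            patch = gray[y - BLOCK:y + BLOCK, x - BLOCK:x + BLOCK]
            if patch.shape != (2 * BLOCK, 2 * BLOCK):
                continue                                   # skip corners too close to the border
            coeffs = cv2.dct(np.float32(patch))            # 2-D DCT of the block
            descs.append(coeffs[:N_COEF, :N_COEF].flatten())  # low-frequency subset as feature vector
            kept.append((x, y))
        return np.array(descs, dtype=np.float32), np.array(kept, dtype=np.float32)

    def mosaic_homography(img1, img2):
        g1, g2 = (cv2.cvtColor(i, cv2.COLOR_BGR2GRAY) for i in (img1, img2))
        c1 = cv2.goodFeaturesToTrack(g1, 500, 0.01, 10).reshape(-1, 2)
        c2 = cv2.goodFeaturesToTrack(g2, 500, 0.01, 10).reshape(-1, 2)
        d1, p1 = dct_descriptors(g1, c1)
        d2, p2 = dct_descriptors(g2, c2)
        # The paper's 2-pass matching is approximated here by a ratio-test filter.
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        pairs = matcher.knnMatch(d1, d2, k=2)
        good = [m for m, n in pairs if m.distance < 0.75 * n.distance]
        src = np.float32([p1[m.queryIdx] for m in good])
        dst = np.float32([p2[m.trainIdx] for m in good])
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        return H   # maps points in img1 into img2 for stitching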

Relevance: 20.00%

Abstract:

A promotional brochure celebrating the completion of the Seagram Building in spring 1957 features on its cover intense portraits of seven men bisected by a single line of bold text that asks, “Who are these Men?” The answer appears on the next page: “They Dreamed of a Tower of Light” (Figures 1, 2). Each photograph is reproduced with the respective man’s name and project credit: architects, Mies van der Rohe and Philip Johnson; associate architect, Ely Jacques Kahn; electrical contractor, Harry F. Fischbach; lighting consultant, Richard Kelly; and electrical engineer, Clifton E. Smith. To the right, a rendering of the new Seagram Tower anchors the composition, standing luminous against a star-speckled night sky; its glass walls and bronze mullions are transformed into a gossamer skin that reveals the tower’s structural skeleton. Lightolier, the contract lighting manufacturer, produced the brochure to promote its role in the lighting of the Seagram Building, but Lightolier’s promotional copy was not far from the truth.

Relevance: 20.00%

Abstract:

The core aim of machine learning is to make a computer program learn from experience. Learning from data is usually defined as a task of learning regularities or patterns in data in order to extract useful information, or to learn the underlying concept. An important sub-field of machine learning is called multi-view learning, where the task is to learn from multiple data sets or views describing the same underlying concept. A typical example of such a scenario would be to study a biological concept using several biological measurements like gene expression, protein expression and metabolic profiles, or to classify web pages based on their content and the contents of their hyperlinks. In this thesis, novel problem formulations and methods for multi-view learning are presented. The contributions include a linear data fusion approach during exploratory data analysis, a new measure to evaluate different kinds of representations for textual data, and an extension of multi-view learning for novel scenarios where the correspondence of samples in the different views or data sets is not known in advance. In order to infer the one-to-one correspondence of samples between two views, a novel concept of multi-view matching is proposed. The matching algorithm is completely data-driven and is demonstrated in several applications, such as the matching of metabolites between humans and mice and the matching of sentences between documents in two languages.
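
As an illustration of the general idea only (the thesis's own matching algorithm is not reproduced here), a data-driven one-to-one matching between two views can be sketched by projecting both views into a shared space and solving an assignment problem:

    import numpy as np
    from sklearn.cross_decomposition import CCA
    from scipy.optimize import linear_sum_assignment
    from scipy.spatial.distance import cdist

    def match_views(X, Y, n_components=5):
        """X, Y: (n_samples, n_features_x / n_features_y) views of the same objects,
        given in unknown (shuffled) sample order. Returns matched index pairs (i, j)."""
        cca = CCA(n_components=n_components)
        # CCA normally needs paired samples; here it is bootstrapped with the given
        # (possibly wrong) order purely to obtain a shared projection -- a crude
        # stand-in for the iterative matching procedure the thesis proposes.
        Xc, Yc = cca.fit_transform(X, Y)
        cost = cdist(Xc, Yc)                        # cross-view distances in the shared space
        rows, cols = linear_sum_assignment(cost)    # optimal one-to-one assignment
        return list(zip(rows, cols))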

Relevance: 20.00%

Abstract:

This thesis analyzes how matching takes place in the Finnish labor market from three different angles. The Finnish labor market underwent severe structural changes following the economic crisis of the early 1990s, has had difficulty adjusting to them, and has consequently suffered high and persistent unemployment. In this thesis I analyze whether matching problems, and in particular changes in matching, can explain some of this persistence. The thesis consists of three essays. In the first essay, Finnish Evidence of Changes in the Labor Market Matching Process, the matching process in the Finnish labor market is analyzed. The key finding is that the matching process changed fundamentally between the booming 1980s and the post-crisis period: the importance of the number of unemployed, and in particular of the long-term unemployed, for the matching process has vanished. A larger pool of unemployed does not increase matching, as theory predicts, but rather the opposite. In the second essay, The Aggregate Matching Function and Directed Search – Finnish Evidence, stock-flow matching is studied as a potential micro foundation of the aggregate matching function. I show that the newly unemployed match mainly with the stock of vacancies, while the longer-term unemployed match with the inflow of vacancies. When aggregating, I still find evidence of the traditional aggregate matching function, which could explain the broad support the aggregate matching function has received despite its odd randomness assumption. The third essay, How do Registered Job Seekers really match? – Finnish occupational level Evidence, studies matching for nine occupational groups and finds that very different matching problems exist for different occupations. This essay also deals with misspecification stemming from non-corresponding variables by introducing a completely new set of variables: the new outflow measure is vacancies filled by registered job seekers, and it is matched on the supply side by the number of registered job seekers.
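
For reference, the aggregate matching function these essays build on is commonly specified in Cobb-Douglas form and estimated in logs (a textbook formulation, not necessarily the exact specification used in the thesis):

    M_t = A\, U_t^{\alpha} V_t^{\beta}
    \qquad\Longrightarrow\qquad
    \ln M_t = \ln A + \alpha \ln U_t + \beta \ln V_t + \varepsilon_t

where M_t denotes the number of matches (hires), U_t the stock of unemployed job seekers, and V_t the stock of vacancies in period t.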

Relevance: 20.00%

Abstract:

Extraction of text areas from document images with complex content and layout is a challenging task. A few texture-based techniques have already been proposed for extracting such text blocks, but most of them are computationally expensive and hence far from being realizable in real time. In this work, we propose a modification to two of the existing texture-based techniques to reduce the computation, accomplished with Harris corner detectors. The efficiency of these two texture-based algorithms, one based on Gabor filters and the other on the log-polar wavelet signature, is compared. Gabor-feature-based texture classification performed on a smaller set of Harris corner points is observed to deliver both accuracy and efficiency.
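
A rough illustration of the idea (filter-bank parameters are assumptions for the sketch, not the paper's values): Gabor texture features are evaluated only at Harris corner points instead of densely over the whole document image.

    import cv2
    import numpy as np

    def gabor_features_at_corners(gray, n_orientations=4, ksize=21):
        # Harris corner points via OpenCV's goodFeaturesToTrack.
        corners = cv2.goodFeaturesToTrack(gray, maxCorners=1000, qualityLevel=0.01,
                                          minDistance=5, useHarrisDetector=True)
        corners = corners.reshape(-1, 2).astype(int)
        # Small bank of Gabor filters at a few orientations.
        responses = []
        for i in range(n_orientations):
            theta = i * np.pi / n_orientations
            kern = cv2.getGaborKernel((ksize, ksize), sigma=4.0, theta=theta,
                                      lambd=10.0, gamma=0.5)
            responses.append(cv2.filter2D(np.float32(gray), cv2.CV_32F, kern))
        # Feature vector per corner: filter responses sampled at that pixel only,
        # so classification cost scales with the number of corners, not pixels.
        feats = np.stack([r[corners[:, 1], corners[:, 0]] for r in responses], axis=1)
        return corners, feats   # feats would feed a text / non-text classifier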

Relevance: 20.00%

Abstract:

We propose to compress weighted graphs (networks), motivated by the observation that large networks of social, biological, or other relations can be complex to handle and visualize. In the process also known as graph simplification, nodes and (unweighted) edges are grouped to supernodes and superedges, respectively, to obtain a smaller graph. We propose models and algorithms for weighted graphs. The interpretation (i.e. decompression) of a compressed, weighted graph is that a pair of original nodes is connected by an edge if their supernodes are connected by one, and that the weight of an edge is approximated to be the weight of the superedge. The compression problem now consists of choosing supernodes, superedges, and superedge weights so that the approximation error is minimized while the amount of compression is maximized. In this paper, we formulate this task as the 'simple weighted graph compression problem'. We then propose a much wider class of tasks under the name of 'generalized weighted graph compression problem'. The generalized task extends the optimization to preserve longer-range connectivities between nodes, not just individual edge weights. We study the properties of these problems and propose a range of algorithms to solve them, with different balances between complexity and quality of the result. We evaluate the problems and algorithms experimentally on real networks. The results indicate that weighted graphs can be compressed efficiently with relatively little compression error.
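
A small sketch of the simple weighted graph compression idea, under assumed interfaces (the grouping of nodes into supernodes is taken as given, the superedge weight is the mean of the original edge weights it replaces, and the error is measured as the squared difference between original and decompressed weights):

    from collections import defaultdict

    def compress(edges, groups):
        """edges: {(u, v): weight}; groups: {node: supernode id} -> {superedge: weight}."""
        buckets = defaultdict(list)
        for (u, v), w in edges.items():
            key = tuple(sorted((groups[u], groups[v])))
            buckets[key].append(w)
        # Superedge weight = mean weight of the original edges it summarises.
        return {k: sum(ws) / len(ws) for k, ws in buckets.items()}

    def approximation_error(edges, groups, superedges):
        err = 0.0
        for (u, v), w in edges.items():
            key = tuple(sorted((groups[u], groups[v])))
            err += (w - superedges[key]) ** 2   # decompressed weight vs. original weight
        return err

    # Example: three nodes compressed into two supernodes.
    edges = {("a", "b"): 1.0, ("a", "c"): 3.0, ("b", "c"): 2.0}
    groups = {"a": 0, "b": 0, "c": 1}
    se = compress(edges, groups)
    print(se, approximation_error(edges, groups, se))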
