875 results for texture segmentation
Abstract:
PatchCity is a new approach to the procedural generation of city models. The algorithm uses texture synthesis to create a city layout in the visual style of one or more input examples. Data is provided in vector graphic form from either real or synthetic city definitions. The paper describes the PatchCity algorithm, illustrates its use, and identifies its strengths and limitations. The technique provides a greater range of features and styles of city layout than existing generative methods, thereby achieving results that are more realistic. An open source implementation of the algorithm is available.
Abstract:
We consider the problem of segmenting text documents that have a two-part structure, such as a problem part and a solution part. Documents of this genre include incident reports, which typically describe events relating to a problem followed by those pertaining to the solution that was tried. Segmenting such documents into their two component parts would render them usable in knowledge-reuse frameworks such as Case-Based Reasoning. This segmentation problem presents a hard case for traditional text segmentation due to the lexical inter-relatedness of the segments. We develop a two-part segmentation technique that harnesses a corpus of similar documents to model the behavior of the two segments and their inter-relatedness, using language models and translation models respectively. In particular, we use separate language models for the problem and solution segment types, whereas the inter-relatedness between segment types is modeled using an IBM Model 1 translation model. We model documents as being generated starting from the problem part, which comprises words sampled from the problem language model, followed by the solution part, whose words are sampled either from the solution language model or from a translation model conditioned on the words already chosen in the problem part. We show, through an extensive set of experiments on real-world data, that our approach outperforms state-of-the-art text segmentation algorithms in segmentation accuracy, and that this improved accuracy translates into improved usability in Case-Based Reasoning systems. We also analyze the robustness of our technique to varying amounts and types of noise, and empirically show that it is quite noise tolerant and degrades gracefully with increasing amounts of noise.
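The generative story in this abstract can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: the vocabulary, the probability tables, and the `mix` parameter are all assumptions made up for the example.

```python
import random

# Toy unigram language models for the two segment types (illustrative only).
problem_lm = {"error": 0.5, "crash": 0.3, "timeout": 0.2}
solution_lm = {"restart": 0.4, "patch": 0.4, "upgrade": 0.2}
# Toy IBM Model 1 style translation table: p(solution word | problem word).
translation = {
    "error": {"patch": 0.7, "restart": 0.3},
    "crash": {"restart": 0.8, "patch": 0.2},
    "timeout": {"upgrade": 0.6, "restart": 0.4},
}

def sample(dist, rng):
    """Draw one word from a {word: probability} distribution."""
    r, acc = rng.random(), 0.0
    for word, p in dist.items():
        acc += p
        if r < acc:
            return word
    return word  # guard against floating-point rounding

def generate_document(n_problem, n_solution, mix=0.5, seed=0):
    """Generate a two-part document: problem words come from the problem
    language model; each solution word comes either from the solution
    language model or, with probability `mix`, from the translation model
    conditioned on a word already chosen in the problem part."""
    rng = random.Random(seed)
    problem = [sample(problem_lm, rng) for _ in range(n_problem)]
    solution = []
    for _ in range(n_solution):
        if rng.random() < mix:
            src = rng.choice(problem)  # condition on a problem word
            solution.append(sample(translation[src], rng))
        else:
            solution.append(sample(solution_lm, rng))
    return problem, solution
```

Segmentation then amounts to finding the split point that maximizes the likelihood of the document under this generative process.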
Abstract:
Biological colonization of stone is a major concern in the preservation and presentation of cultural heritage. Colonization is typically associated with unpleasant soiling and varying degrees of biodeterioration. A better understanding of why organisms grow where they do will aid in developing preventative and treatment methods for the biosoiling of cultural heritage. Sandstone exposure trials were set up at nine locations across Northern Ireland to investigate the influences of local climate, local environmental, and micro-climatic factors on the early stages (up to 21 months) of biological colonization. Results showed that green and yellow soiling occurred on tooled stone surfaces, whereas darkening occurred preferentially on smooth surfaces. It is likely that different populations of organisms occur on these surfaces, with green algae occurring on tooled surfaces due to slower drying rates (i.e. prolonged moisture retention), and cyanobacteria and fungi thriving on smooth surfaces due to their ability to withstand moisture fluctuation.
Abstract:
The algorithm developed uses an octree pyramid in which noise is reduced at the expense of spatial resolution. At a certain level, an unsupervised clustering without spatial connectivity constraints is applied. After the classification, isolated voxels and insignificant regions are removed by assigning them to their neighbours. The spatial resolution is then increased by down-projection of the regions, level by level. At each level, the uncertainty of the boundary voxels is minimised by dynamically selecting and classifying them using an adaptive 3D filter. The algorithm is tested on different data sets, including NMR data.
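The first step of the pyramid construction can be illustrated concretely: each coarser level averages 2x2x2 blocks of voxels, which suppresses noise while halving the resolution along every axis. This is a sketch of that one step only, assuming a cubic volume with even side length; it is not the authors' implementation.

```python
def pyramid_level(volume):
    """Build the next (coarser) octree-pyramid level by averaging each
    2x2x2 block of voxels: noise is reduced at the cost of halving the
    spatial resolution along every axis."""
    n = len(volume)  # assume a cubic volume with an even side length
    half = n // 2
    out = [[[0.0] * half for _ in range(half)] for _ in range(half)]
    for z in range(half):
        for y in range(half):
            for x in range(half):
                s = 0.0
                for dz in range(2):
                    for dy in range(2):
                        for dx in range(2):
                            s += volume[2 * z + dz][2 * y + dy][2 * x + dx]
                out[z][y][x] = s / 8.0
    return out
```

Repeating this step yields the full pyramid; the down-projection described in the abstract runs in the opposite direction, refining region labels back toward full resolution.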
Abstract:
The analysis of seabed structure is important in a wide variety of scientific and industrial applications. In this paper, underwater acoustic data produced by bottom-penetrating sonar (Topas) are analyzed using unsupervised volumetric segmentation based on a three-dimensional Gibbs-Markov model. The result is a concise and accurate description of the seabed in which key structures are emphasized. This description is also very well suited to further operations, such as the enhancement and automatic recognition of important structures. Experimental results demonstrating the effectiveness of this approach are shown, using Topas data gathered in the North Sea off Horten, Norway.
Abstract:
We compare the effect of different text segmentation strategies on speech-based passage retrieval of video. Passage retrieval has mainly been studied to improve document retrieval and to enable question answering. In these domains, the best results were obtained using passages defined by the paragraph structure of the source documents or by using arbitrary overlapping passages. For the retrieval of relevant passages in a video using speech transcripts, no author-defined segmentation is available. We compare retrieval results for four different types of segments based on the speech channel of the video: fixed-length segments, a sliding window, semantically coherent segments, and prosodic segments. We evaluated the methods on the corpus of the MediaEval 2011 Rich Speech Retrieval task. Our main conclusion is that retrieval results depend highly on the right choice of segment length. However, results using segmentation into semantically coherent parts depend much less on segment length. In particular, the quality of fixed-length and sliding-window segmentation drops quickly as segment length increases, while the quality of semantically coherent segments is much more stable. Thus, if coherent segments are defined, longer segments can be used, and consequently fewer segments have to be considered at retrieval time.
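The first two segmentation strategies compared above are simple to state precisely. The sketch below shows the difference between non-overlapping fixed-length segments and overlapping sliding-window segments over a word-level transcript; the function names and parameters are illustrative assumptions, not the evaluated system.

```python
def fixed_length_segments(words, length):
    """Split a transcript into consecutive, non-overlapping segments
    of `length` words each (the last segment may be shorter)."""
    return [words[i:i + length] for i in range(0, len(words), length)]

def sliding_window_segments(words, length, step):
    """Split a transcript into overlapping segments: a window of
    `length` words advanced by `step` words at a time."""
    if len(words) <= length:
        return [words]
    return [words[i:i + length]
            for i in range(0, len(words) - length + 1, step)]
```

The sliding window produces more (overlapping) candidate passages than the fixed-length split, which is one reason coherent segmentation is attractive: fewer, longer segments need to be scored at retrieval time.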
Abstract:
Thesis (Ph.D.)--University of Washington, 2015
Abstract:
Towards a holistic perspective on CRM, this project aims to diagnose and propose a strategy and market segmentation for Siemens Healthcare. The main underlying principle is to apply a fully customer-centric outlook that takes the company's own business properties into consideration while preserving Siemens Healthcare's culture and vision. Focused mainly on market segmentation, this project goes beyond established boundaries by employing an unbiased perspective on CRM while challenging the current strategy, goals, processes, tools, initiatives, and KPIs. In order to promote a sustainable business-excellence strategy, this project aspires to streamline the strategic importance of CRM and drive the company one step forward.
Abstract:
A sample of 445 consumers resident in distinct Lisbon areas was analyzed through direct observation in order to discover the current proportion of each lifestyle, applying the Whitaker Lifestyle™ Method. The hypothesis tests conducted on the population proportions reveal that the Neo-Traditional and Modern Whitaker lifestyles have the significantly highest proportions, while the overall presence of the different lifestyles varies across neighborhoods. The research further demonstrates the validity of the Whitaker observation techniques, differences in media consumption among lifestyles, and the importance of style and aesthetics when segmenting consumers by lifestyle. Finally, market opportunities are identified for firms operating in Lisbon.
Abstract:
This dissertation consists of three essays on the labour-market impact of firing and training costs. The modelling framework draws on the search-and-matching literature. The first chapter introduces firing costs, both linear and non-linear, into a New Keynesian model, analysing business-cycle effects for different degrees of wage rigidity. The second chapter adds training costs to a model of a segmented labour market, assessing the interaction between these two features and the skill composition of the labour force. Finally, the third chapter empirically analyses some of the issues raised in the second chapter.
Abstract:
The long-term goal of this research is to develop a program able to produce an automatic segmentation and categorization of textual sequences into discourse types. In this preliminary contribution, we present the construction of an algorithm that takes a segmented text as input and attempts to categorize sequences as narrative, argumentative, descriptive, and so on. This work also aims to investigate a possible convergence between unsupervised statistical learning and the typological approach developed, in particular, in the field of French text and discourse analysis by Adam (2008) and Bronckart (1997).