960 results for swd: Image segmentation
Abstract:
Magdeburg, Univ., Fak. für Elektrotechnik und Informationstechnik, Diss., 2013
Abstract:
[s.c.]
Abstract:
Magdeburg, Univ., Fak. für Informatik, Diss., 2015
Abstract:
Michael Friebe, editor ; Otto-von-Guericke-Universität Magdeburg, Institut für Medizintechnik, Lehrstuhl Kathetertechnologie und bildgesteuerte Therapie (INKA - Intelligente Katheter), Forschungscampus STIMULATE (Solution Centre for Image Guided Local Therapies)
Abstract:
This article describes the procedures used to register two images geometrically and automatically, taking the first image as the reference. The results obtained with three methods are compared. The first method is classical registration in the spatial domain by maximizing the cross-correlation (MCC) [1]. The second method applies MCC registration together with a multiscale analysis based on wavelet transforms [2]. The third method is a variant of the second that lies halfway between the other two. Each method yields an estimate of the coefficients of the transformation relating the two images. The second image is then transformed in each case and georeferenced with respect to the first. Finally, quantitative measures are proposed that allow the results obtained with each method to be discussed and compared.
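The core idea of the first (classical) method, registration by maximizing the cross-correlation, can be sketched for the simplest case of a pure translation. The function name and the FFT-based formulation below are illustrative assumptions — the article estimates full transformation coefficients, not just a shift — offered as a minimal sketch of the MCC principle:

```python
import numpy as np

def register_by_cross_correlation(ref, moving):
    """Estimate the integer (dy, dx) displacement of `moving` relative to
    `ref` by locating the peak of the circular cross-correlation, computed
    in the Fourier domain. Rolling `ref` by (dy, dx) reproduces `moving`."""
    # Cross-correlation theorem: IFFT(F(moving) * conj(F(ref)))
    f = np.fft.fft2(moving) * np.conj(np.fft.fft2(ref))
    cc = np.fft.ifft2(f).real
    # The location of the correlation peak gives the shift
    dy, dx = np.unravel_index(np.argmax(cc), cc.shape)
    # Wrap shifts larger than half the image size to negative values
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

# Synthetic example: circularly shift an image and recover the offset
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
moving = np.roll(ref, shift=(5, -3), axis=(0, 1))
print(register_by_cross_correlation(ref, moving))  # → (5, -3)
```

In the wavelet-based variants described in the abstract, the same correlation peak would be sought at successively finer scales rather than once on the full-resolution images.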
Abstract:
The long-term goal of this research is to develop a program able to produce an automatic segmentation and categorization of textual sequences into discourse types. In this preliminary contribution, we present the construction of an algorithm which takes a segmented text as input and attempts to categorize its sequences as, for example, narrative, argumentative, or descriptive. This work also aims at investigating a possible convergence between unsupervised statistical learning and the typological approach developed, in particular in the field of French text and discourse analysis, by Adam (2008) and Bronckart (1997).
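As a toy illustration of the input/output contract such an algorithm might have — emphatically not the authors' method, and with hand-picked cue words as a purely hypothetical feature set — one could score each pre-segmented sequence against lexical cues for a few discourse types:

```python
# Hypothetical cue-word lists; a real system would learn such features.
CUES = {
    "narrative":     {"then", "afterwards", "once", "happened"},
    "argumentative": {"therefore", "however", "because", "thus"},
    "descriptive":   {"is", "has", "consists", "located"},
}

def categorize(segments):
    """Assign each segment the discourse type whose cue words it
    matches most often (ties broken by dictionary order)."""
    labels = []
    for seg in segments:
        words = set(seg.lower().split())
        scores = {t: len(words & cues) for t, cues in CUES.items()}
        labels.append(max(scores, key=scores.get))
    return labels

segments = [
    "Once upon a time the storm happened and then the village rebuilt",
    "The theory is wrong because the data contradict it; therefore we reject it",
]
print(categorize(segments))  # → ['narrative', 'argumentative']
```

The abstract's actual proposal replaces such hand-crafted cues with unsupervised statistical learning over the segmented input.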
Abstract:
This paper presents a semisupervised support vector machine (SVM) that efficiently integrates the information of both labeled and unlabeled pixels. The method's performance is illustrated on the relevant problem of very high resolution (VHR) image classification of urban areas. The SVM is trained with a linear combination of two kernels: a base kernel working only with labeled examples is deformed by a likelihood kernel encoding similarities between labeled and unlabeled examples. Results obtained on VHR multispectral and hyperspectral images show the relevance of the method in the context of urban image classification. Moreover, its simplicity and the few parameters involved make the method versatile and usable by inexperienced users.
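The kernel-combination idea can be sketched with a precomputed kernel in scikit-learn. The particular form of the "likelihood" kernel below (inner products of similarity profiles to the unlabeled pool), the weight `mu`, and the synthetic data are all assumptions for illustration, not the paper's exact construction:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

# Illustrative stand-ins for labeled pixels (X, y) and unlabeled pixels X_unl.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 4)), rng.normal(3, 1, (20, 4))])
y = np.array([0] * 20 + [1] * 20)
X_unl = rng.normal(1.5, 1, (100, 4))

# Base kernel: similarities among labeled examples only.
K_base = rbf_kernel(X, X, gamma=0.1)

# Likelihood-style kernel (assumed form): compare labeled examples through
# their similarity profiles to the unlabeled set, so the structure of the
# unlabeled data deforms the geometry seen by the SVM.
S = rbf_kernel(X, X_unl, gamma=0.1)   # labeled-vs-unlabeled similarities
K_unl = S @ S.T / X_unl.shape[0]      # inner products of the profiles (PSD)

# Linear combination of the two kernels; mu is a free trade-off parameter.
mu = 0.5
K = mu * K_base + (1 - mu) * K_unl

clf = SVC(kernel="precomputed", C=10.0).fit(K, y)
print(clf.score(K, y))  # training accuracy on the toy data
```

Since both summands are positive semidefinite, their convex combination is again a valid kernel, which is what makes this deformation safe to feed to a standard SVM solver.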
Abstract:
We consider a general equilibrium model à la Bhaskar (Review of Economic Studies, 2002): there are complementarities across sectors, each of which comprises (many) heterogeneous monopolistically competitive firms. Bhaskar's model is extended in two directions: production requires capital, and labour markets are segmented. Labour market segmentation models the difficulties of labour migrating across international barriers (in a trade context) or from a poor region to a richer one (in a regional context), whilst the assumption of a single capital market means that capital flows freely between countries or regions. The model is solved analytically and a closed-form solution is provided. Adding labour market segmentation to Bhaskar's two-tier industrial structure allows us to study, inter alia, the impact of competition regulations on wages and financial flows in both the regional and the international context, and the output, welfare and financial implications of relaxing immigration laws. The analytical approach adopted allows us not only to sign the effect of policies but also to quantify their effects. Introducing capital as a factor of production improves the realism of the model and refines its empirically testable implications.
Abstract:
"See the abstract at the beginning of the document in the attached file."
Abstract:
The investigation of perceptual and cognitive functions with non-invasive brain imaging methods critically depends on the careful selection of stimuli for use in experiments. For example, it must be verified that any observed effects follow from the parameter of interest (e.g. semantic category) rather than from other low-level physical features (e.g. luminance, or spectral properties); otherwise, the interpretation of results is confounded. Often, researchers circumvent this issue by including additional control conditions or tasks, both of which are flawed and also prolong experiments. Here, we present some new approaches for controlling classes of stimuli intended for use in cognitive neuroscience; these methods can, however, be readily extrapolated to other applications and stimulus modalities. Our approach comprises two levels. The first level aims at equalizing individual stimuli in terms of their mean luminance: each data point in a stimulus is adjusted to a standardized value computed across the stimulus battery. The second level analyzes two populations of stimuli along their spectral properties (i.e. spatial frequency), using a dissimilarity metric that equals the root mean square of the distance between the two populations of objects as a function of spatial frequency along the x- and y-dimensions of the image. Randomized permutations are used to obtain a minimal value between the populations, minimizing, in a completely data-driven manner, the spectral differences between the image sets. While another paper in this issue applies these methods to acoustic stimuli (Aeschlimann et al., Brain Topogr 2008), we illustrate the approach here in detail for complex visual stimuli.
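The two levels described above can be sketched as follows. The function names, the exact form of the spectral metric (RMS distance between mean amplitude spectra), and the random toy stimuli are assumptions for illustration; the published procedure should be consulted for the authors' precise formulation:

```python
import numpy as np

def equalize_mean_luminance(images, target=None):
    """Level 1: offset each image so its mean luminance equals a
    standard value computed across the whole stimulus battery."""
    if target is None:
        target = np.mean([img.mean() for img in images])
    return [img + (target - img.mean()) for img in images]

def spectral_dissimilarity(set_a, set_b):
    """Level 2 (assumed formulation): RMS distance between the mean
    amplitude spectra of two stimulus populations, taken over spatial
    frequencies along the x- and y-dimensions."""
    spec_a = np.mean([np.abs(np.fft.fft2(img)) for img in set_a], axis=0)
    spec_b = np.mean([np.abs(np.fft.fft2(img)) for img in set_b], axis=0)
    return np.sqrt(np.mean((spec_a - spec_b) ** 2))

rng = np.random.default_rng(0)
set_a = [rng.random((32, 32)) for _ in range(10)]
set_b = [rng.random((32, 32)) + 0.2 for _ in range(10)]

# Level 1: equalize mean luminance across the full battery.
battery = equalize_mean_luminance(set_a + set_b)
set_a, set_b = battery[:10], battery[10:]

# Level 2: randomized permutations of set membership, keeping the
# assignment that minimizes the spectral difference between the sets.
best_d, best_a, best_b = spectral_dissimilarity(set_a, set_b), set_a, set_b
for _ in range(200):
    idx = rng.permutation(len(battery))
    a = [battery[i] for i in idx[:10]]
    b = [battery[i] for i in idx[10:]]
    d = spectral_dissimilarity(a, b)
    if d < best_d:
        best_d, best_a, best_b = d, a, b
```

The permutation loop is the data-driven step: it searches over reassignments of stimuli to the two sets and retains the split whose populations are closest in spectral terms.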