991 results for Segmentation methods


Relevance:

30.00%

Abstract:

Texture segmentation techniques are diverse, with several competing approaches. In this paper, we propose fuzzy features for the segmentation of texture images. For this purpose, a membership function is constructed to represent the effect of the neighbouring pixels on the current pixel within a window. Using these membership values, a feature is computed for the current pixel by a weighted-average method. This is repeated for all pixels in the window, treating each pixel in turn as the current pixel. From these fuzzy features, we derive three descriptors for each window: maximum, entropy, and energy. To segment the texture image, we use modified mountain clustering, which is unsupervised, together with fuzzy c-means clustering. The performance of the proposed features is compared with that of fractal features.
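
A minimal NumPy sketch of the window-descriptor computation may help. The paper does not specify its membership function, so a Gaussian of the intensity difference between the current pixel and its neighbours is assumed here, and the function name is illustrative:

```python
import numpy as np

def fuzzy_window_descriptors(window, sigma=10.0):
    """Sketch: fuzzy maximum/entropy/energy descriptors for one window.

    Assumes a Gaussian membership of intensity differences, one common
    choice; the paper's actual membership function may differ.
    """
    feats = []
    h, w = window.shape
    for i in range(h):
        for j in range(w):
            diff = window - window[i, j]
            mu = np.exp(-(diff ** 2) / (2.0 * sigma ** 2))  # membership of each neighbour
            # weighted average of the window, weighted by membership
            feats.append((mu * window).sum() / mu.sum())
    f = np.asarray(feats)
    p = f / f.sum()  # normalise to a distribution for entropy/energy
    return {
        "maximum": f.max(),
        "entropy": -(p * np.log(p + 1e-12)).sum(),
        "energy": (p ** 2).sum(),
    }

# Example: descriptors for a random 9x9 texture window
desc = fuzzy_window_descriptors(np.random.randint(0, 256, (9, 9)).astype(float))
```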

Relevance:

30.00%

Abstract:

We are concerned with the problem of image segmentation, in which each pixel is assigned to one of a predefined finite number of classes. In Bayesian image analysis, this requires fusing local predictions for the class labels with a prior model of segmentations. Markov Random Fields (MRFs) have been used to incorporate some of this prior knowledge, but this is not entirely satisfactory, as inference in MRFs is NP-hard. The multiscale quadtree model of Bouman and Shapiro (1994) is an attractive alternative, as it is a tree-structured belief network in which inference can be carried out in linear time (Pearl 1988). It is a hierarchical model in which the bottom-level nodes are pixels and higher levels correspond to downsampled versions of the image. The conditional-probability tables (CPTs) in the belief network encode the knowledge of how the levels interact. In this paper we discuss two methods of learning the CPTs given training data: (a) maximum likelihood via the EM algorithm, and (b) conditional maximum likelihood (CML). Segmentations obtained using networks trained by CML show a statistically significant improvement in performance on synthetic images. We also demonstrate the methods on a real-world outdoor-scene segmentation task.
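
For reference, the two training criteria differ only in what is conditioned on. Writing x for the label image, y for the observed image and theta for the CPTs, the standard objectives (the paper's exact notation may differ) are:

```latex
\theta_{\mathrm{ML}}  = \arg\max_{\theta} \sum_{n} \log P\!\left(x^{(n)}, y^{(n)} \mid \theta\right),
\qquad
\theta_{\mathrm{CML}} = \arg\max_{\theta} \sum_{n} \log P\!\left(x^{(n)} \mid y^{(n)}, \theta\right).
```

CML devotes the model's capacity to the label-given-image mapping that segmentation actually requires, which is one way to understand the reported improvement.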

Relevance:

30.00%

Abstract:

This thesis introduces a flexible visual data exploration framework which combines advanced projection algorithms from the machine learning domain with visual representation techniques developed in the information visualisation domain, to help a user explore and understand large multi-dimensional datasets effectively. The advantage of such a framework over other techniques currently available to domain experts is that the user is directly involved in the data mining process, and advanced machine learning algorithms are employed for better projection. A hierarchical visualisation model guided by a domain expert allows them to obtain an informed segmentation of the input space. Two further components of this thesis exploit properties of these principled probabilistic projection algorithms: a guided mixture of local experts algorithm which provides robust prediction, and a model that estimates feature saliency simultaneously with the training of a projection algorithm.

Local models are useful because a single global model cannot capture the full variability of a heterogeneous data space such as chemical space. Probabilistic hierarchical visualisation techniques provide an effective soft segmentation of an input space by a visualisation hierarchy whose leaf nodes represent different regions of the input space. We use this soft segmentation to develop a guided mixture of local experts (GME) algorithm appropriate for the heterogeneous datasets found in chemoinformatics problems. Moreover, in this approach the domain experts are more involved in the model development process, which suits an intuition- and domain-knowledge-driven task such as drug discovery. We also derive a generative topographic mapping (GTM) based data visualisation approach which estimates feature saliency simultaneously with the training of the visualisation model.
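
As a sketch of how the soft segmentation drives prediction in a guided mixture of local experts, the following assumes a `responsibilities` function returning the hierarchy's leaf-node weights and a list of trained local `experts`; both names are illustrative, not the thesis's actual interfaces:

```python
import numpy as np

def gme_predict(x, responsibilities, experts):
    """Sketch of a guided-mixture-of-local-experts prediction.

    `responsibilities(x)` is assumed to return the soft-segmentation
    weights r_k(x) produced by the visualisation hierarchy's leaf nodes,
    and `experts` is a list of local regression models, one per leaf.
    """
    r = responsibilities(x)                            # shape (K,), sums to 1
    preds = np.array([e.predict(x) for e in experts])  # one prediction per local expert
    return np.dot(r, preds)                            # responsibility-weighted combination
```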

Relevance:

30.00%

Abstract:

This paper addresses the problem of automatically obtaining the object/background segmentation of a rigid 3D object observed in a set of images that have been calibrated for camera pose and intrinsics. Such segmentations can be used to obtain a shape representation of a potentially texture-less object by computing a visual hull. We propose an automatic approach in which the object to be segmented is identified by the pose of the cameras rather than by user input such as 2D bounding rectangles or brush strokes. The key to our method is a pairwise MRF framework that combines (a) foreground/background appearance models, (b) epipolar constraints and (c) weak stereo correspondence into a single segmentation cost function that can be efficiently solved by graph cuts. The segmentation thus obtained is further improved using silhouette coherency and then used to update the foreground/background appearance models, which are fed into the next graph-cut computation. These two steps are iterated until the segmentation converges. Our method can automatically provide a 3D surface representation even in texture-less scenes where multi-view stereo (MVS) methods might fail. Furthermore, it confers improved performance in images where the object is not readily separable from the background in colour space, an area that previous segmentation approaches have found challenging. © 2011 IEEE.
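
A minimal sketch of the graph-cut core, using only the appearance unaries (the paper's full cost also includes the epipolar and weak-stereo terms) and the PyMaxflow library, which is an assumption since the paper names no implementation:

```python
import numpy as np
import maxflow  # PyMaxflow; an assumption -- the paper does not name a library

def binary_segmentation(unary_fg, unary_bg, smoothness=1.0):
    """Sketch: binary fg/bg graph cut with appearance unaries only.

    `unary_*` are per-pixel negative log-likelihoods under the
    foreground/background appearance models.
    """
    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes(unary_fg.shape)
    g.add_grid_edges(nodes, smoothness)           # pairwise Potts smoothness term
    g.add_grid_tedges(nodes, unary_fg, unary_bg)  # data terms to source/sink
    g.maxflow()
    return g.get_grid_segments(nodes)             # boolean label per pixel

# Example on random unaries for a 64x64 image
seg = binary_segmentation(np.random.rand(64, 64), np.random.rand(64, 64))
```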

Relevance:

30.00%

Abstract:

Measurement of lung ventilation is one of the most reliable techniques for diagnosing pulmonary diseases. The time-consuming and bias-prone traditional methods using hyperpolarized ³He and ¹H magnetic resonance images have recently been improved by an automated technique based on 'multiple active contour evolution'. This method involves the simultaneous evolution of multiple initial contours, called 'snakes', which eventually merge, and it is entirely independent of the shapes and sizes of the snakes or other parametric details. The objective of this paper is to show, through a theoretical analysis, that the functional dynamics of merging as depicted in the active contour method has a direct analogue in statistical physics, which explains its 'universality'. We show that the multiple active contour method has a universal scaling behaviour akin to that of classical nucleation in two spatial dimensions. We prove our point by comparing the numerically evaluated exponents with those of an equivalent thermodynamic model. © IOP Publishing Ltd and Deutsche Physikalische Gesellschaft.
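
The exponent comparison reduces to a log-log fit; a toy sketch with synthetic data (placeholder quantities, not the paper's measurements) illustrates the procedure:

```python
import numpy as np

# Sketch: estimating a scaling exponent from simulated merging data.
# The quantities and the exponent are placeholders chosen by construction;
# they only illustrate the log-log fit used to compare contour-merging
# dynamics against a nucleation-style scaling law.
t = np.linspace(1, 100, 50)                                       # time (arbitrary units)
merged_area = 3.0 * t ** 2.0 * (1 + 0.05 * np.random.randn(50))   # synthetic power-law data

slope, intercept = np.polyfit(np.log(t), np.log(merged_area), 1)
print(f"estimated scaling exponent: {slope:.2f}")  # recovers the exponent used above (2)
```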

Relevance:

30.00%

Abstract:

This paper deals with the preliminary processing of color images, namely the suppression of interference (local artefacts and noise) and the extraction of the object from the background at the stage preceding contour extraction. It was long considered inadmissible to apply smoothing when segmenting via boundary extraction, but the methods described here and the results obtained demonstrate the expedience of using noise-suppression methods.
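
A sketch of the advocated ordering, noise suppression before contour extraction, using OpenCV; the filter choices and parameters are illustrative assumptions rather than the paper's specific methods:

```python
import cv2

# Sketch: suppress local interference and noise first, extract contours after.
img = cv2.imread("input.png")                        # hypothetical input path
denoised = cv2.medianBlur(img, 5)                    # removes local impulsive interference
smoothed = cv2.bilateralFilter(denoised, 9, 75, 75)  # edge-preserving noise control
gray = cv2.cvtColor(smoothed, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)                     # boundary extraction on the cleaned image
```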

Relevance:

30.00%

Abstract:

This thesis is concerned with understanding how Emergency Management Agencies (EMAs) influence public preparedness for mass evacuation across seven countries. Given the lack of cross-national research (Tierney et al., 2001), little is known about EMAs' perspectives on, and approaches to, the governance of public preparedness. This thesis seeks to address this gap through cross-national research that explores and contributes towards understanding the governance of public preparedness. The research draws upon the risk communication (Wood et al., 2011; Tierney et al., 2001), social marketing (Marshall et al., 2007; Kotler and Lee, 2008; Ramaprasad, 2005), risk governance (Walker et al., 2010, 2013; Kuhlicke et al., 2011; IRGC, 2005, 2007; Renn et al., 2011; Klinke and Renn, 2012), risk society (Beck, 1992, 1999, 2002) and governmentality (Foucault, 1978, 2003, 2009) literature to explain this governance and how EMAs responsibilize the public for their preparedness. EMAs from seven countries (Belgium, Denmark, Germany, Iceland, Japan, Sweden and the United Kingdom) explain how they prepare their public for mass evacuation in response to different types of risk. A cross-national (Hantrais, 1999), interpretive research approach, using qualitative methods including semi-structured interviews, documents and observation, was used to collect data. The data analysis process (Miles and Huberman, 1999) identified the concepts of risk, knowledge and responsibility as critical for theorising how EMAs influence public preparedness for mass evacuation. The key findings grounded in these concepts include:

- Theoretically, risk is multi-functional in the governance of public preparedness: it regulates behaviour, enables surveillance and acts as a technique of exclusion.
- EMAs' knowledge, how it shapes their assessment of risk, and how they share the responsibility for public preparedness between institutions and the public are key to the governance of public preparedness for mass evacuation. This results in a form of public segmentation common to all countries, whereby the public are prepared unequally.
- EMAs use their prior knowledge and assessments of risk to target public preparedness at particular known hazards. However, this strategy places the non-targeted public at greater risk from unknown hazards, such as man-made disasters.
- A cross-national conceptual framework of four distinctive governance practices (exclusionary, informing, involving and influencing) captures how EMAs influence public preparedness.
- The uncertainty associated with particular types of risk limits the application of social marketing as a strategy for influencing the public to take responsibility, and can potentially increase the risk to the public.

Relevance:

30.00%

Abstract:

One of the most pressing demands on electrophysiology applied to the diagnosis of epilepsy is the non-invasive localization of the neuronal generators responsible for brain electrical and magnetic fields (the so-called inverse problem). These neuronal generators produce primary currents in the brain, which together with passive currents give rise to the EEG signal. Unfortunately, the signal we measure on the scalp surface does not directly indicate the location of the active neuronal assemblies. This reflects the ambiguity of the underlying static electromagnetic inverse problem, due in part to the relatively limited number of independent measurements available: a given electric potential distribution recorded at the scalp can be explained by the activity of infinitely many different configurations of intracranial sources. In contrast, the forward problem has a unique solution; it consists of computing the potential field at the scalp from known source locations and strengths, given the geometry and conductivity properties of the brain and its layers (CSF/meninges, skin and skull), i.e. the head model. Head models range from computationally simpler spherical models (three or four concentric spheres) to realistic models based on the segmentation of anatomical images obtained using magnetic resonance imaging (MRI). Realistic models, though computationally intensive and difficult to implement, can separate different tissues of the head and account for the convoluted geometry of the brain and the significant inter-individual variability. In real-life applications, if the assumptions about the statistical, anatomical or functional properties of the signal and the volume in which it is generated are meaningful, a true three-dimensional tomographic representation of the sources of brain electrical activity is possible in spite of the 'ill-posed' nature of the inverse problem (Michel et al., 2004). The techniques used to achieve this are now referred to as electrical source imaging (ESI) or magnetic source imaging (MSI). The first issue influencing reconstruction accuracy is spatial sampling, i.e. the number of EEG electrodes; it has been shown that this relationship is not linear, reaching a plateau at about 128 electrodes provided the spatial distribution is uniform. The second factor relates to the different properties of the source localization strategies used with respect to the hypothesized source configuration.
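
As a toy illustration of why the forward problem is well posed, the following computes the potential of a current dipole in an unbounded homogeneous conductor, the crudest possible 'head model'; realistic spherical or MRI-based models add layer geometry and conductivities on top of this core formula:

```python
import numpy as np

def dipole_potential(r_electrodes, r_dipole, p, sigma=0.33):
    """Sketch of the forward problem in its simplest form: the potential of a
    current dipole p at r0 in an unbounded homogeneous conductor,

        V(r) = p . (r - r0) / (4 * pi * sigma * |r - r0|^3),

    with sigma a typical brain conductivity in S/m. Known sources yield a
    unique potential field, unlike the inverse direction.
    """
    d = r_electrodes - r_dipole          # vectors from dipole to each electrode
    dist = np.linalg.norm(d, axis=1)
    return (d @ p) / (4.0 * np.pi * sigma * dist ** 3)

# Example: 3 electrodes on a 9 cm scalp sphere, a dipole 7 cm from the centre
elecs = np.array([[0.09, 0, 0], [0, 0.09, 0], [0, 0, 0.09]])
V = dipole_potential(elecs, np.array([0, 0, 0.07]), np.array([0, 0, 1e-8]))
```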

Relevance:

30.00%

Abstract:

A certain type of bacterial inclusion, known as a bacterial microcompartment, was recently identified and imaged through cryo-electron tomography. A 3D object reconstructed from single-axis, limited-angle tilt-series cryo-electron tomography contains missing regions; this is known as the missing wedge problem. Because of these missing regions, analyzing the 3D structures of the reconstructed images is challenging. Existing methods overcome this problem by aligning and averaging several similarly shaped objects. These schemes work well if the objects are symmetric and several objects of nearly identical shape and size are available. Since the bacterial inclusions studied here are not symmetric, are deformed, and show a wide range of shapes and sizes, the existing approaches are not appropriate. This research develops new statistical methods for analyzing geometric properties, such as volume, symmetry, aspect ratio and polyhedral structure, of these bacterial inclusions in the presence of missing data. These methods work with deformed, non-symmetric objects of varied shape and do not require multiple objects to handle the missing wedge problem. The developed methods and contributions include: (a) an improved method for manual image segmentation, (b) a new approach to 'complete' the segmented and reconstructed incomplete 3D images, (c) a polyhedral structural distance model to predict the polyhedral shapes of these microstructures, (d) a new shape descriptor for polyhedral shapes, named the polyhedron profile statistic, and (e) classifiers based on the Bayes rule, linear discriminant analysis and support vector machines for supervised classification of incomplete polyhedral shapes. Finally, the predicted 3D shapes of these bacterial microstructures belong to the Johnson solids family, and these shapes, along with their other geometric properties, are important for a better understanding of their chemical and biological characteristics.
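
A sketch of the supervised classification step (e) using scikit-learn; the feature matrix stands in for the polyhedron-profile statistics, and the data are random placeholders, not real microcompartment measurements:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Placeholder shape descriptors: 60 objects, 16 descriptor dimensions,
# 3 candidate polyhedral classes (all hypothetical sizes).
X = np.random.rand(60, 16)
y = np.random.randint(0, 3, 60)

# Compare two of the classifier families named in the thesis.
for clf in (LinearDiscriminantAnalysis(), SVC(kernel="rbf")):
    scores = cross_val_score(clf, X, y, cv=5)
    print(type(clf).__name__, scores.mean())
```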

Relevance:

30.00%

Abstract:

This work aims to define a typology of the trawler fleet in Sète, the main fishing harbour on the French Mediterranean coast, using several multivariate analysis methods. Each fishing vessel considered is represented by an annual profile of the species composition of its landings. Five fishing strategies have been identified. A segmentation method using symbolic objects allows a formal characterisation of the different strategies. These strategies are studied according to several general characteristics usually used in elaborating management rules (power, length, vessel age). The typological analysis characterises two main modes of exploitation: one directed at the catch of a few species (Engraulis encrasicolus, Sardina pilchardus), the other characterised by the exploitation of a great diversity of species. In this way, it is possible to estimate how the catch of poorly represented species can contribute significantly to the exploitation of a resource.
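
A sketch of the typology step, clustering vessels by the species composition of their annual landings; k-means is used here purely for illustration, whereas the paper applies several multivariate methods, including a symbolic-object segmentation:

```python
import numpy as np
from sklearn.cluster import KMeans

# Placeholder landings matrix: 80 vessels x 12 species (hypothetical sizes).
n_vessels, n_species = 80, 12
landings = np.random.rand(n_vessels, n_species)
profiles = landings / landings.sum(axis=1, keepdims=True)  # per-vessel composition profiles

# Five clusters, matching the five fishing strategies identified in the paper.
strategies = KMeans(n_clusters=5, n_init=10).fit_predict(profiles)
```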

Relevance:

30.00%

Abstract:

In this thesis, we propose to infer pixel-level labelling in video by utilising only object-category information, exploiting the intrinsic structure of video data. Our motivation is the observation that image-level labels are much easier to acquire than pixel-level labels, and it is natural to seek a link between image-level recognition and pixel-level classification in video data, which would transfer learned recognition models from one domain to the other. To this end, this thesis proposes two domain adaptation approaches that adapt a deep convolutional neural network (CNN) image-recognition model trained on labelled image data to the target domain, exploiting both the semantic evidence learned by the CNN and the intrinsic structure of unlabelled video data. Our proposed approaches explicitly model and compensate for the domain shift from the source domain to the target domain, which in turn underpins a robust semantic object segmentation method for natural videos. We demonstrate the superior performance of our methods through extensive evaluations on challenging datasets, comparing with state-of-the-art methods.
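
One generic way to realise the abstract's idea, obtaining coarse per-pixel class evidence from an image-recognition CNN and encouraging consistency between neighbouring frames, is sketched below; this is a schematic stand-in, not the thesis's actual architecture or losses, and the class count is hypothetical:

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Reuse an image-recognition backbone as a fully convolutional labeller
# (requires torchvision >= 0.13 for the weights API).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
features = torch.nn.Sequential(*list(backbone.children())[:-2])  # drop avgpool + fc
classifier = torch.nn.Conv2d(512, 21, kernel_size=1)             # 21 hypothetical classes

def frame_logits(frame):                        # frame: (1, 3, H, W)
    f = features(frame)                         # coarse feature map from the recognition model
    return F.interpolate(classifier(f), size=frame.shape[-2:], mode="bilinear")

def temporal_consistency_loss(frame_t, frame_t1):
    # Exploit video structure: penalise label-distribution changes
    # between consecutive frames.
    p_t = F.softmax(frame_logits(frame_t), dim=1)
    p_t1 = F.softmax(frame_logits(frame_t1), dim=1)
    return F.mse_loss(p_t, p_t1)
```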