864 results for Segmentation algorithms
Abstract:
Introduction: Gamma Knife surgery (GKS) is a noninvasive neurosurgical stereotactic procedure, increasingly used as an alternative to open functional procedures. This includes the targeting of the ventrointermediate nucleus of the thalamus (Vim) for tremor. Objective: To enhance anatomic imaging for Vim GKS using high-field (7 T) MRI and Diffusion Weighted Imaging (DWI). Methods: Five young healthy subjects and two patients were scanned on both 3 T and 7 T MRI. The protocol was the same in all cases and included T1-weighted (T1w) imaging and DWI at 3 T, and susceptibility-weighted imaging (SWI) at 7 T for the visualization of thalamic subparts. SWI was further integrated into the Gamma Plan Software® (LGP, Elekta Instruments, AB, Sweden) and co-registered with 3 T images. A simulation of Vim targeting was performed using the quadrilatere of Guyot. Furthermore, the position of the found target was correlated with its position on SWI and on DWI (after clustering of the different thalamic nuclei). Results: For the 5 healthy subjects, there was a good correlation between the position of the Vim on SWI, on DWI, and in the GKS targeting. For the patients, on the pretherapeutic acquisitions, SWI helped in positioning the target. On posttherapeutic sequences, the supposed position of the Vim on SWI matched the corresponding contrast enhancement seen at follow-up MRI. Additionally, on the patients' follow-up T1w images, we could observe a small area of contrast enhancement corresponding to the target used in GKS (the Vim), which belongs to the Ventral-Lateral-Ventral (VLV) nuclei group. Our clustering method resulted in seven thalamic groups. Conclusion: The use of SWI provided us with a superior resolution and an improved image contrast within the central gray matter, enabling us to directly visualize the Vim. We additionally propose a novel robust method for segmenting the thalamus into seven anatomical groups based on DWI.
The localization of the GKS target on the follow-up T1w images, as well as the position of the Vim on 7 T, were used as a gold standard for validating the placement of the VLV cluster. The contrast enhancement corresponding to the targeted area was always localized inside the expected cluster, providing strong evidence of the accuracy of the VLV segmentation. The anatomical correlation between the direct visualization on 7 T and the current targeting methods on 3 T (the quadrilatere of Guyot, histological atlases, DWI) shows very good anatomical agreement.
A new approach to segmentation based on fusing circumscribed contours, region growing and clustering
Abstract:
One of the major problems in machine vision is the segmentation of images of natural scenes. This paper presents a new proposal for the image segmentation problem based on the integration of edge and region information. The main contours of the scene are detected and used to guide the subsequent region growing process. The algorithm places a number of seeds at both sides of a contour, allowing a set of concurrent growing processes to be started. A prior analysis of the seeds permits the homogeneity criterion to be adjusted to each region's characteristics. A new homogeneity criterion based on clustering analysis and convex hull construction is proposed.
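The seeded growing step described above can be illustrated with a minimal sketch. This is not the paper's algorithm (which uses contour-guided seed placement and a clustering/convex-hull homogeneity criterion); it is a plain region-growing loop with a running-mean homogeneity test, and all names and values are illustrative.

```python
from collections import deque

def region_grow(image, seed, threshold):
    """Grow a region from `seed` over 4-connected pixels whose intensity
    stays within `threshold` of the running region mean."""
    rows, cols = len(image), len(image[0])
    region = {seed}
    total = image[seed[0]][seed[1]]
    frontier = deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in region:
                mean = total / len(region)
                if abs(image[nr][nc] - mean) <= threshold:
                    region.add((nr, nc))
                    total += image[nr][nc]
                    frontier.append((nr, nc))
    return region

# Two flat patches separated by a sharp edge; one seed on each side,
# mimicking the paper's idea of seeding both sides of a contour.
img = [[10, 10, 10, 200, 200],
       [10, 12, 11, 198, 202],
       [11, 10, 10, 201, 199]]
left = region_grow(img, (1, 1), threshold=15)
right = region_grow(img, (1, 3), threshold=15)
```

Grown concurrently from both sides of the contour, the two regions meet at the edge without crossing it, since the intensity jump exceeds the homogeneity threshold.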
Abstract:
In this paper, a colour texture segmentation method which unifies region and boundary information is proposed. The algorithm uses a coarse detection of the perceptual (colour and texture) edges of the image to adequately place and initialise a set of active regions. The colour texture of regions is modelled by combining non-parametric kernel density estimation (which estimates the colour behaviour) with classical co-occurrence-matrix-based texture features. Region information is thereby defined, and accurate boundary information can be extracted to guide the segmentation process. Regions concurrently compete for the image pixels in order to segment the whole image, taking both information sources into account. Experimental results are shown which demonstrate the performance of the proposed method.
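The co-occurrence-matrix texture features mentioned above can be sketched briefly. This is a generic gray-level co-occurrence matrix (GLCM) with one derived statistic, not the paper's full colour-texture model; the function names and the contrast measure chosen here are illustrative.

```python
def glcm(image, levels, offset=(0, 1)):
    """Gray-level co-occurrence matrix for one pixel offset,
    normalized to joint probabilities."""
    dr, dc = offset
    rows, cols = len(image), len(image[0])
    counts = [[0.0] * levels for _ in range(levels)]
    n = 0
    for r in range(rows):
        for c in range(cols):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                counts[image[r][c]][image[nr][nc]] += 1
                n += 1
    return [[v / n for v in row] for row in counts]

def contrast(p):
    """Classic GLCM contrast: expected squared gray-level difference."""
    k = len(p)
    return sum(p[i][j] * (i - j) ** 2 for i in range(k) for j in range(k))

# A uniform patch has zero contrast; a checkerboard has high contrast.
flat = [[1, 1], [1, 1]]
check = [[0, 1], [1, 0]]
```

Statistics such as contrast, homogeneity, or energy computed per region from the GLCM give the texture half of the region descriptor; the colour half would come from the kernel density estimate.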
Abstract:
An unsupervised approach to image segmentation which fuses region and boundary information is presented. The proposed approach takes advantage of the combined use of three different strategies: the guidance of seed placement, the control of the decision criterion, and boundary refinement. The new algorithm uses the boundary information to initialize a set of active regions which compete for the pixels in order to segment the whole image. The method is implemented on a multiresolution representation which ensures noise robustness as well as computational efficiency. The accuracy of the segmentation results has been demonstrated through an objective comparative evaluation of the method.
Abstract:
Image segmentation of natural scenes constitutes a major problem in machine vision. This paper presents a new proposal for the image segmentation problem based on the integration of edge and region information. The approach begins by detecting the main contours of the scene, which are later used to guide a concurrent set of growing processes. A prior analysis of the seed pixels permits adjustment of the homogeneity criterion to the region's characteristics during the growing process. Since the high variability of regions in outdoor scenes makes classical homogeneity criteria useless, a new homogeneity criterion based on clustering analysis and convex hull construction is proposed. Experimental results have demonstrated the reliability of the proposed approach.
Abstract:
Network virtualisation is considerably gaining attention as a solution to the ossification of the Internet. However, the success of network virtualisation will depend in part on how efficiently the virtual networks utilise substrate network resources. In this paper, we propose a machine learning-based approach to virtual network resource management. We propose to model the substrate network as a decentralised system and introduce a learning algorithm in each substrate node and substrate link, providing self-organization capabilities. We propose a multiagent learning algorithm that carries out the substrate network resource management in a coordinated and decentralised way. The task of these agents is to use evaluative feedback to learn an optimal policy so as to dynamically allocate network resources to virtual nodes and links. The agents ensure that while the virtual networks have the resources they need at any given time, only the required resources are reserved for this purpose. Simulations show that our dynamic approach significantly improves the virtual network acceptance ratio and the maximum number of accepted virtual network requests at any time, while ensuring that virtual network quality of service requirements such as packet drop rate and virtual link delay are not affected.
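The per-node "learn from evaluative feedback" idea can be sketched with a toy single-state learner. This is not the paper's coordinated multiagent algorithm; it is one hypothetical node agent with a made-up reward that penalizes both unserved demand and idle reserved capacity, illustrating how reservation converges to the actual load.

```python
import random

class NodeAgent:
    """One learning agent per substrate node: picks how much capacity to
    reserve for a hosted virtual node, learning from scalar feedback."""
    def __init__(self, actions, alpha=0.5, epsilon=0.2):
        self.q = {a: 0.0 for a in actions}   # value estimate per action
        self.alpha, self.epsilon = alpha, epsilon
    def act(self):
        # epsilon-greedy: mostly exploit the best-known reservation level
        if random.random() < self.epsilon:
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)
    def learn(self, action, reward):
        # incremental value update toward the observed feedback
        self.q[action] += self.alpha * (reward - self.q[action])

random.seed(0)
demand = 3                          # actual load of the virtual node (assumed)
agent = NodeAgent(actions=range(1, 6))
for _ in range(500):
    a = agent.act()
    reward = -abs(a - demand)       # shortage and over-reservation both cost
    agent.learn(a, reward)
agent.epsilon = 0.0                 # switch to pure exploitation
```

After training, the greedy action matches the demand, i.e. only the required resources are reserved, which is the behaviour the abstract describes at network scale.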
Abstract:
In the literature on housing market areas, different approaches to defining them can be found, for example using travel-to-work areas and, more recently, migration data. Here we propose a simple exercise to shed light on which approach performs better. Using regional data from Catalonia, Spain, we have computed housing market areas with both commuting data and migration data. To decide which procedure performs better, we have looked at the uniformity of prices within areas. The main finding is that commuting algorithms produce more homogeneous areas in terms of housing prices.
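The price-uniformity comparison can be made concrete with a small sketch. The delineations and prices below are entirely hypothetical; the criterion shown (size-weighted mean of within-area variances, lower is better) is one reasonable reading of "uniformity of prices within areas", not necessarily the paper's exact statistic.

```python
from statistics import pvariance

def within_area_variance(areas):
    """Size-weighted mean of within-area price variances;
    lower values indicate more homogeneous areas."""
    total = sum(len(prices) for prices in areas)
    return sum(len(prices) * pvariance(prices) for prices in areas) / total

# Hypothetical prices grouped under two alternative delineations
commuting_areas = [[100, 105, 98], [220, 210, 215]]
migration_areas = [[100, 220, 98], [105, 210, 215]]

better = ("commuting"
          if within_area_variance(commuting_areas) < within_area_variance(migration_areas)
          else "migration")
```

In this toy setup the commuting partition keeps similarly priced dwellings together, so its within-area variance is lower, mirroring the paper's finding.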
Abstract:
In this work we study the classification of forest types using mathematics-based image analysis of satellite data. We are interested in improving the classification of forest segments when information from two or more different satellites is combined. The experimental part is based on real satellite data originating from Canada. This thesis summarizes the mathematical basics of image analysis and supervised learning, the methods used in the classification algorithm. Three data sets and four feature sets were investigated. The considered feature sets were 1) histograms (quantiles), 2) variance, 3) skewness and 4) kurtosis. Good overall performance was achieved when a combination of the ASTERBAND and RADARSAT2 data sets was used.
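The four feature sets listed above can be sketched as one per-segment feature extractor. This is a generic illustration of computing quantile, variance, skewness and kurtosis features from a segment's pixel intensities; the function name, quantile scheme and normalizations are assumptions, not the thesis's exact definitions.

```python
from statistics import mean, pvariance

def segment_features(values, n_quantiles=4):
    """Quantile, variance, skewness and kurtosis features for one
    forest segment's pixel intensities."""
    xs = sorted(values)
    n = len(xs)
    # interior quantiles (e.g. quartiles for n_quantiles=4)
    quantiles = [xs[int(q * (n - 1) / n_quantiles)] for q in range(1, n_quantiles)]
    m, var = mean(xs), pvariance(xs)
    sd = var ** 0.5 or 1.0            # avoid division by zero on flat segments
    skew = sum((x - m) ** 3 for x in xs) / (n * sd ** 3)
    kurt = sum((x - m) ** 4 for x in xs) / (n * var ** 2) if var else 0.0
    return quantiles + [var, skew, kurt]

# A small symmetric segment: zero skewness, quartiles 2, 3, 4
feats = segment_features([1, 2, 3, 4, 5])
```

Stacking such vectors from two sensors (e.g. an optical and a radar image of the same segment) gives the combined feature representation that a supervised classifier is then trained on.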
Abstract:
Identification of the order of an Autoregressive Moving Average (ARMA) model by the usual graphical method is subjective. Hence, there is a need to develop a technique that identifies the order without graphical investigation of the series autocorrelations. To avoid subjectivity, this thesis focuses on determining the order of the ARMA model using Reversible Jump Markov Chain Monte Carlo (RJMCMC). RJMCMC selects the model from a set of candidate models suggested by goodness of fit, the standard deviation of the errors, and the frequency of accepted data. Alongside a deep analysis of the classical Box-Jenkins modeling methodology, the integration with MCMC algorithms has been studied through parameter estimation and model fitting of ARMA models. This helps to verify how well the MCMC algorithms can treat ARMA models, by comparing the results with the graphical method. The MCMC approach produced better results than the classical time series approach.
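The between-model sampling idea can be sketched in miniature. This toy sampler jumps between candidate (p, q) orders and accepts with the Metropolis ratio of supplied log posteriors; it omits the dimension-matching parameter moves that a full RJMCMC performs, and the candidate orders and scores below are invented for illustration.

```python
import math
import random

def order_sampler(log_post, orders, steps=10000, seed=1):
    """Toy between-model sampler: propose a candidate (p, q) order
    uniformly and accept with the Metropolis ratio of the supplied
    log posteriors. Returns visit counts per model."""
    random.seed(seed)
    current = orders[0]
    visits = {o: 0 for o in orders}
    for _ in range(steps):
        proposal = random.choice(orders)       # symmetric proposal
        if math.log(random.random()) < log_post(proposal) - log_post(current):
            current = proposal
        visits[current] += 1
    return visits

# Hypothetical log posteriors for ARMA(p, q) candidates: (1, 1) fits best.
scores = {(1, 0): -12.0, (1, 1): -9.0, (2, 1): -11.0}
freq = order_sampler(lambda o: scores[o], list(scores))
best_order = max(freq, key=freq.get)
```

The chain spends time in each model in proportion to its posterior weight, so the most-visited order is the selected one, which is the selection principle the thesis applies with the full reversible-jump machinery.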
Abstract:
This study presents an automatic, computer-aided analytical method called Comparison Structure Analysis (CSA), which can be applied to different dimensions of music. The aim of CSA is first and foremost practical: to produce dynamic and understandable representations of musical properties by evaluating the prevalence of a chosen musical data structure through a musical piece. Such a comparison structure may refer to a mathematical vector, a set, a matrix or another type of data structure, or even a combination of data structures. CSA depends on an abstract systematic segmentation that allows for a statistical or mathematical survey of the data. To choose a comparison structure is to tune the apparatus to be sensitive to an exclusive set of musical properties. CSA settles somewhere between traditional music analysis and computer-aided music information retrieval (MIR). Theoretically defined musical entities, such as pitch-class sets, set-classes and particular rhythm patterns, are detected in compositions using pattern extraction and pattern comparison algorithms that are typical within the field of MIR. In principle, the idea of comparison structure analysis can be applied to any time-series type data and, in the music analytical context, to polyphonic as well as homophonic music. Tonal trends, set-class similarities, invertible counterpoints, voice-leading similarities, short-term modulations, rhythmic similarities and multiparametric changes in musical texture were studied. Since CSA allows for a highly accurate classification of compositions, its methods may be applicable to symbolic music information retrieval as well. The strength of CSA lies especially in the possibility of making comparisons between observations concerning different musical parameters and of combining it with statistical and perhaps other music analytical methods. The results of CSA depend on the adequacy of the similarity measure.
New similarity measures for tonal stability, rhythmic similarity and set-class similarity were proposed. The most advanced results were attained by employing automated function generation, comparable with so-called genetic programming, to search for an optimal model for set-class similarity measurements. However, the results of CSA agree strongly regardless of the type of similarity function employed in the analysis.
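One simple instance of a set-class similarity measure can be sketched from standard pitch-class set theory. This is a textbook interval-class-vector distance, not one of the thesis's proposed measures; the function names are illustrative.

```python
from itertools import combinations

def interval_vector(pcset):
    """Interval-class vector of a pitch-class set: counts of interval
    classes 1..6 over all unordered pairs of pitch classes."""
    vec = [0] * 6
    for a, b in combinations(sorted(set(pcset)), 2):
        ic = min((b - a) % 12, (a - b) % 12)   # interval class, 1..6
        vec[ic - 1] += 1
    return vec

def icv_distance(s1, s2):
    """A simple set-class dissimilarity: L1 distance between interval
    vectors (0 = identical interval content)."""
    return sum(abs(a - b) for a, b in
               zip(interval_vector(s1), interval_vector(s2)))

major = {0, 4, 7}   # C major triad
minor = {0, 3, 7}   # C minor triad
cluster = {0, 1, 2} # chromatic cluster
```

Major and minor triads share the same interval content (vector [0, 0, 1, 1, 1, 0]) and so are maximally similar under this measure, while a chromatic cluster is far from both; a similarity function of this kind is what CSA evaluates segment by segment through a piece.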
Abstract:
Segmentation has traditionally been a tool of consumer marketing in particular, but the shift from products to services has increased the need for segmentation in industrial markets as well. The goal of this study is to find clearly distinguishable customer groups based on case material provided by the Finnish management consulting firm Synocus Group. Using k-means clustering, three potential market segments are found, based on which offering elements 105 selected customers in the Finnish machinery and metal products industry have named as the most important. The first cluster consists of price-conscious customers who calculate unit costs. The second cluster consists of service-oriented customers who calculate hourly costs and maximize the operating hours of their machine fleet; technical services and maintenance contracts might be worth marketing to this target group. The third cluster consists of productivity-oriented customers who are interested in performance improvement and calculate costs per tonne. They aim for lower total costs through increased performance, longer service life and lower maintenance costs.
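The k-means step can be sketched in a few lines. The customer vectors below are invented (the study's data comes from Synocus Group case material), and the initial centroids are passed in explicitly for reproducibility; in practice one would use a seeding scheme such as k-means++.

```python
def kmeans(points, init, iters=20):
    """Plain k-means: assign each point to its nearest centroid, then
    move each centroid to the mean of its cluster; repeat."""
    centroids = [list(p) for p in init]
    k = len(centroids)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda i: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[i])))
            clusters[i].append(p)
        # empty clusters keep their previous centroid
        centroids = [[sum(col) / len(c) for col in zip(*c)] if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return clusters

# Hypothetical customers scored on (price, service, productivity) emphasis
customers = [(9, 1, 1), (8, 2, 1), (1, 9, 2), (2, 8, 1), (1, 2, 9), (2, 1, 8)]
groups = kmeans(customers, init=[customers[0], customers[2], customers[4]])
```

With well-separated emphasis profiles, the algorithm recovers the three groups, analogous to the price-conscious, service-oriented and productivity-oriented segments found in the study.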
Abstract:
Speaker diarization is the process of sorting speeches according to the speaker. Diarization helps to search and retrieve what a certain speaker uttered in a meeting. Applications of diarization systemsextend to other domains than meetings, for example, lectures, telephone, television, and radio. Besides, diarization enhances the performance of several speech technologies such as speaker recognition, automatic transcription, and speaker tracking. Methodologies previously used in developing diarization systems are discussed. Prior results and techniques are studied and compared. Methods such as Hidden Markov Models and Gaussian Mixture Models that are used in speaker recognition and other speech technologies are also used in speaker diarization. The objective of this thesis is to develop a speaker diarization system in meeting domain. Experimental part of this work indicates that zero-crossing rate can be used effectively in breaking down the audio stream into segments, and adaptive Gaussian Models fit adequately short audio segments. Results show that 35 Gaussian Models and one second as average length of each segment are optimum values to build a diarization system for the tested data. Uniting the segments which are uttered by same speaker is done in a bottom-up clustering by a newapproach of categorizing the mixture weights.
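The zero-crossing-rate segmentation idea can be sketched directly. This is a generic illustration, not the thesis's tuned system: frame length, threshold and the synthetic signal below are all assumptions, and a real system would operate on sampled speech.

```python
import math

def zero_crossing_rate(frame):
    """Fraction of consecutive sample pairs whose sign changes."""
    return sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0) / (len(frame) - 1)

def segment_by_zcr(samples, frame_len, threshold):
    """Split the stream into fixed frames and mark the start of every
    frame whose ZCR exceeds `threshold` as a candidate boundary."""
    return [start
            for start in range(0, len(samples) - frame_len + 1, frame_len)
            if zero_crossing_rate(samples[start:start + frame_len]) > threshold]

# A low-frequency stretch followed by a rapidly alternating stretch:
# the ZCR jump marks where the signal character changes.
signal = [math.sin(2 * math.pi * i / 50) for i in range(200)]
signal += [(-1) ** i * 0.5 for i in range(200)]
cuts = segment_by_zcr(signal, frame_len=50, threshold=0.5)
```

Frames in the slowly varying part have a near-zero ZCR while the alternating part crosses zero at almost every sample, so the detector flags exactly the frames after the change; the resulting segments would then be merged by bottom-up clustering as described above.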