938 results for Graph-based segmentation


Relevance: 30.00%

Abstract:

The purpose of this master's thesis was to analyse the competitive advantage of an Internet-based marketing research company from a competitive strategy perspective. First, the Internet panel was compared with the most widely used marketing research method, the telephone interview. Second, fourteen potential clients were interviewed in person. The intention was to find out what the potential clients think about Zapera Finland Ltd and what kind of competitive strategy could be chosen considering costs, product differentiation, competition, research method, segmentation of the business line, and substitution. Finally, the interviews were analysed and some strategic suggestions were made based on the competitive advantage(s). The conclusion was that Zapera Finland Ltd can choose a competitive strategy based on both cost advantage and product differentiation within a narrow competitive scope.

Relevance: 30.00%

Abstract:

This paper presents the evaluation results of the methods submitted to Challenge US: Biometric Measurements from Fetal Ultrasound Images, a segmentation challenge held at the IEEE International Symposium on Biomedical Imaging 2012. The challenge was set up to compare and evaluate current fetal ultrasound image segmentation methods. It consisted of automatically segmenting fetal anatomical structures to measure standard obstetric biometric parameters from 2D fetal ultrasound images taken of fetuses at different gestational ages (21, 28, and 33 weeks) and with varying image quality, to reflect data encountered in real clinical environments. Four independent sub-challenges were proposed, according to the objects of interest measured in clinical practice: abdomen, head, femur, and whole fetus. Five teams participated in the head sub-challenge and two teams in the femur sub-challenge, including one team that tackled both. No team attempted the abdomen and whole fetus sub-challenges. The challenge goals were twofold: participants were asked to submit both the segmentation results and the measurements derived from the segmented objects. Extensive quantitative (region-based, distance-based, and Bland-Altman measurements) and qualitative evaluation was performed to compare the results of a representative selection of current methods submitted to the challenge. Several experts (three for the head sub-challenge and two for the femur sub-challenge), with different degrees of expertise, manually delineated the objects of interest to define the ground truth used within the evaluation framework. For the head sub-challenge, several groups produced results that could potentially be used in clinical settings, with performance comparable to manual delineations. The femur sub-challenge showed inferior performance to the head sub-challenge because it is a harder segmentation problem and the techniques presented relied more on the femur's appearance.
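The region-based overlap evaluation mentioned above is commonly computed with the Dice similarity coefficient. A minimal sketch of how such a score is obtained from two binary masks (the toy masks below are illustrative, not challenge data):

```python
import numpy as np

def dice_coefficient(seg, gt):
    """Dice similarity coefficient (DSC) between two binary masks."""
    seg = np.asarray(seg, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    intersection = np.logical_and(seg, gt).sum()
    total = seg.sum() + gt.sum()
    if total == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * intersection / total

# Toy 2D masks standing in for an automatic segmentation and its ground truth.
seg = np.zeros((10, 10), dtype=bool); seg[2:8, 2:8] = True
gt  = np.zeros((10, 10), dtype=bool); gt[3:9, 3:9] = True
print(round(dice_coefficient(seg, gt), 3))  # prints 0.694
```

A DSC of 1.0 means perfect overlap with the manual delineation; values in the mid-0.9 range are typical of the clinically usable results reported for the head sub-challenge.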

Relevance: 30.00%

Abstract:

Paper surface roughness is one of the quality criteria of paper. It is measured with devices that physically probe the paper surface and with optical devices. These measurements require laboratory conditions, but faster, on-line measurements would be needed in the paper industry. Paper surface roughness can be expressed as a single roughness value for a sample. In this work, the sample is divided into significant regions, and a separate roughness value is computed for each region. Several methods have been used for measuring roughness; here, a generally accepted statistical method is used alongside the distance transform. In paper surface roughness measurement there has been a need to divide the analysed sample into regions based on roughness, since region division makes it possible to delimit the clearly rougher areas of the sample. The distance transform produces regions, which are then analysed. These regions are merged into connected regions with different segmentation methods: algorithms based on the PNN (Pairwise Nearest Neighbor) method and on merging neighbouring regions are used, and an approach based on splitting and merging regions is also examined. Validation of segmented images has usually been done by human inspection. The approach of this work is to compare a generally accepted statistical method with the segmentation results; a high correlation between these results indicates successful segmentation. The results of the different experiments were compared with hypothesis testing. Two sample sets, measured with OptiTopo and with a profilometer, were analysed. The initial parameters of the distance transform that were varied during the experiments were the number and the locations of the seed points, and the same parameter changes were made for all the algorithms used for region merging. After the distance transform, the correlation was stronger for the samples measured with the profilometer than for those measured with OptiTopo. For the segmented OptiTopo samples, the correlation improved more strongly than for the profilometer samples. The results produced by the PNN method showed the best correlation.
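The PNN merging step can be illustrated with a small sketch: starting from one cluster per region, the pair of clusters whose (size-weighted) mean roughness values are closest is merged repeatedly until the requested number of clusters remains. The roughness values below are illustrative, not measured data:

```python
def pnn_merge(values, n_clusters):
    """Pairwise Nearest Neighbor merging of scalar per-region roughness values."""
    clusters = [[v] for v in values]  # start with one cluster per region
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                mi = sum(clusters[i]) / len(clusters[i])
                mj = sum(clusters[j]) / len(clusters[j])
                # size-weighted merge cost, as in the classical PNN criterion
                cost = (len(clusters[i]) * len(clusters[j])
                        / (len(clusters[i]) + len(clusters[j]))) * (mi - mj) ** 2
                if best is None or cost < best[0]:
                    best = (cost, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]  # merge the cheapest pair
        del clusters[j]
    return clusters

regions = [0.11, 0.12, 0.35, 0.37, 0.80]   # per-region roughness values
merged = pnn_merge(regions, 3)
print([sorted(c) for c in merged])
```

The smooth, medium, and clearly rougher regions end up in separate clusters, which is exactly the delimitation of rougher areas the work aims at.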

Relevance: 30.00%

Abstract:

PURPOSE: Proper delineation of ocular anatomy in 3-dimensional (3D) imaging is a major challenge, particularly when developing treatment plans for ocular diseases. Magnetic resonance imaging (MRI) is presently used in clinical practice for diagnosis confirmation and treatment planning of retinoblastoma in infants, where it serves as a source of information complementary to fundus or ultrasonographic imaging. Here we present a framework to fully automatically segment the eye anatomy in MRI based on 3D active shape models (ASM); we validate the results and present a proof of concept for automatically segmenting pathological eyes. METHODS AND MATERIALS: Manual and automatic segmentation were performed on 24 images of healthy children's eyes (3.29 ± 2.15 years of age). Imaging was performed using a 3-T MRI scanner. The ASM comprises the lens, the vitreous humor, the sclera, and the cornea. The model was fitted by first automatically detecting the position of the eye center, the lens, and the optic nerve, and then aligning the model and fitting it to the patient. We validated our segmentation method using leave-one-out cross-validation. The segmentation results were evaluated by measuring the overlap, using the Dice similarity coefficient (DSC), and the mean distance error. RESULTS: We obtained a DSC of 94.90 ± 2.12% for the sclera and the cornea, 94.72 ± 1.89% for the vitreous humor, and 85.16 ± 4.91% for the lens. The mean distance error was 0.26 ± 0.09 mm. The entire process took 14 seconds on average per eye. CONCLUSION: We provide a reliable and accurate tool that enables clinicians to automatically segment the sclera, the cornea, the vitreous humor, and the lens using MRI. We additionally present a proof of concept for fully automatically segmenting eye pathology. This tool reduces the time needed for eye shape delineation and thus can help clinicians when planning eye treatment and confirming the extent of the tumor.
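The mean distance error reported in such validations is typically the average distance from each point on the automatic boundary to the nearest point on the manual delineation. A minimal sketch with illustrative point sets (not the study's contours):

```python
import numpy as np

def mean_distance_error(contour_a, contour_b):
    """Average distance from each point of contour_a to its nearest
    neighbour on contour_b."""
    a = np.asarray(contour_a, dtype=float)
    b = np.asarray(contour_b, dtype=float)
    # pairwise Euclidean distances, shape (len(a), len(b))
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return d.min(axis=1).mean()  # nearest-neighbour distance, averaged over a

auto   = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]  # automatic boundary points
manual = [(0.0, 0.3), (1.0, 0.3), (2.0, 0.3)]  # manual delineation points
print(round(mean_distance_error(auto, manual), 3))
```

For the parallel toy contours above the error is 0.3; the 0.26 mm reported in the paper is of the same flavour, averaged over the full 3D eye surfaces.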

Relevance: 30.00%

Abstract:

Sharing instead of buying is regaining traction among today's consumers. This study aims at identifying segments of sharing consumers in order to unearth potentially viable clusters of a consumer behavior that constitutes a market of growing economic relevance. By means of a qualitative study and a survey with a roughly representative sample of 1121 Swiss-German and German consumers, a set of trait-related, motivational, and perceived socioeconomic variables is identified that can be used to group individuals into segments that differ with regard to their approach to sharing. A cluster analysis based on these variables suggests four potential clusters of sharing consumers: sharing idealists, sharing opponents, sharing pragmatists, and sharing normatives. Two sets of testable propositions are derived that can guide further research in this domain and pave the way to a more targeted approach to the growing market of "sharing" businesses.

Relevance: 30.00%

Abstract:

Market segmentation first emerged as early as the 1950s and has since been one of the basic concepts of marketing. Most of the research on segmentation has, however, focused on segmenting consumer markets, while the segmentation of business and industrial markets has received less attention. The goal of this study is to create a segmentation model for industrial markets from the perspective of a provider of information technology products and services. The purpose is to find out whether the case company's current customer databases enable effective segmentation, to identify suitable segmentation criteria, and to assess whether and how the databases should be developed to enable more effective segmentation. The aim is to create a single model shared by the different business units; accordingly, the goals of the different units must be taken into account in order to avoid conflicts of interest. The research methodology is a case study. Both secondary sources and primary sources, such as the case company's own databases and interviews, were used. The starting point of the study was the research problem: can database-driven segmentation be used for profitable customer relationship management in the SME sector? The goal is to create a segmentation model that exploits the information in the databases without compromising the conditions of effective and profitable segmentation. The theoretical part examines segmentation in general, with an emphasis on industrial market segmentation; the aim is to give a clear picture of the different approaches to the topic and to deepen the understanding of the most important theories. The analysis of the databases revealed clear deficiencies in the customer data. Basic contact information is available, but information usable for segmentation is very limited. Data flows from resellers and wholesalers should be improved in order to obtain end-customer data. Segmentation based on the current data relies mainly on secondary information such as industry and company size, and even these data are not available for all the companies in the database.

Relevance: 30.00%

Abstract:

Introduction: Gamma Knife surgery (GKS) is a noninvasive neurosurgical stereotactic procedure, increasingly used as an alternative to open functional procedures. This includes the targeting of the ventrointermediate nucleus of the thalamus (Vim) for tremor. Objective: To enhance anatomic imaging for Vim GKS using high-field (7 T) MRI and diffusion-weighted imaging (DWI). Methods: Five young healthy subjects and two patients were scanned on both 3-T and 7-T MRI. The protocol was the same in all cases and included T1-weighted (T1w) imaging and DWI at 3 T, and susceptibility-weighted imaging (SWI) at 7 T for the visualization of thalamic subparts. The SWI was further integrated into the Gamma Plan Software® (LGP, Elekta Instruments, AB, Sweden) and co-registered with the 3-T images. A simulation of targeting of the Vim was done using the quadrilatere of Guyot. Furthermore, the position of the found target was correlated with its position on SWI and on DWI (after clustering of the different thalamic nuclei). Results: For the 5 healthy subjects, there was a good correlation between the position of the Vim on SWI, on DWI, and in the GKS targeting. For the patients, SWI helped position the target on the pretherapeutic acquisitions. On the posttherapeutic sequences, the supposed position of the Vim on SWI matched the corresponding contrast enhancement seen on follow-up MRI. Additionally, on the patients' follow-up T1w images, we could observe a small area of contrast enhancement corresponding to the target used in GKS (the Vim), which belongs to the Ventral-Lateral-Ventral (VLV) nuclei group. Our clustering method resulted in seven thalamic groups. Conclusion: The use of SWI provided a superior resolution and an improved image contrast within the central gray matter, enabling us to directly visualize the Vim. We additionally propose a novel, robust method for segmenting the thalamus into seven anatomical groups based on DWI. The localization of the GKS target on the follow-up T1w images, as well as the position of the Vim at 7 T, was used as a gold standard for validating the placement of the VLV cluster. The contrast enhancement corresponding to the targeted area was always localized inside the expected cluster, providing strong evidence of the accuracy of the VLV segmentation. The anatomical correlation between the direct visualization at 7 T and the current targeting methods at 3 T (quadrilatere of Guyot, histological atlases, DWI) shows a very good anatomical match.

Relevance: 30.00%

Abstract:

In this paper a colour texture segmentation method that unifies region and boundary information is proposed. The algorithm uses a coarse detection of the perceptual (colour and texture) edges of the image to adequately place and initialise a set of active regions. The colour texture of the regions is modelled by combining non-parametric kernel density estimation (which allows the colour behaviour to be estimated) with classical co-occurrence-matrix-based texture features. Region information is thus defined, and accurate boundary information can be extracted to guide the segmentation process. The regions concurrently compete for the image pixels in order to segment the whole image, taking both information sources into account. Experimental results are shown that demonstrate the performance of the proposed method.
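The co-occurrence-matrix texture features used in methods of this kind can be sketched as follows: a grey-level co-occurrence matrix is accumulated for one pixel offset, and Haralick-style statistics such as contrast and energy are derived from it. The 4x4 image below is illustrative:

```python
import numpy as np

def glcm(image, levels, offset=(0, 1)):
    """Normalised grey-level co-occurrence matrix for one pixel offset."""
    dr, dc = offset
    m = np.zeros((levels, levels))
    rows, cols = image.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            m[image[r, c], image[r + dr, c + dc]] += 1  # count the pair
    return m / m.sum()

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
p = glcm(img, levels=4)
# Two classical texture features derived from the matrix:
contrast = sum(p[i, j] * (i - j) ** 2 for i in range(4) for j in range(4))
energy = (p ** 2).sum()
print(round(contrast, 3), round(energy, 3))
```

Per-region feature vectors of this sort, concatenated with the kernel density colour estimates, are what the competing active regions would be compared on.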

Relevance: 30.00%

Abstract:

In image processing, segmentation algorithms constitute one of the main focuses of research. In this paper, new image segmentation algorithms based on a hard version of the information bottleneck method are presented. The objective of this method is to extract a compact representation of a variable, considered the input, with minimal loss of mutual information with respect to another variable, considered the output. First, we introduce a split-and-merge algorithm based on the definition of an information channel between a set of regions (input) of the image and the intensity histogram bins (output). From this channel, the maximization of the mutual information gain is used to optimize the image partitioning. Then, the merging of the regions obtained in the previous phase is carried out by minimizing the loss of mutual information. From the inversion of the above channel, we also present a new histogram clustering algorithm based on the minimization of the mutual information loss, where the input variable now represents the histogram bins and the output is given by the set of regions obtained from the above split-and-merge algorithm. Finally, we introduce two new clustering algorithms that show how the information bottleneck method can be applied to the registration channel obtained when two multimodal images are correctly aligned. Different experiments on 2D and 3D images show the behavior of the proposed algorithms.
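The channel at the core of such methods can be sketched directly: the mutual information I(R; B) between region labels and intensity bins, which a split step tries to increase and a merge step tries to preserve. The tiny image and the two candidate partitions below are illustrative:

```python
import numpy as np

def mutual_information(labels, intensities, n_bins):
    """I(R; B) for the channel between region labels R and intensity bins B."""
    joint = np.zeros((labels.max() + 1, n_bins))
    for l, b in zip(labels.ravel(), intensities.ravel()):
        joint[l, b] += 1
    joint /= joint.sum()                    # joint distribution p(r, b)
    pr = joint.sum(axis=1, keepdims=True)   # marginal p(region)
    pb = joint.sum(axis=0, keepdims=True)   # marginal p(bin)
    nz = joint > 0
    return (joint[nz] * np.log2(joint[nz] / (pr @ pb)[nz])).sum()

intens = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1]])          # two intensity bins
split  = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1]])          # partition aligned with intensities
merged = np.zeros_like(intens)             # single region: no information
print(mutual_information(split, intens, 2),
      mutual_information(merged, intens, 2))
```

The aligned two-region partition attains the full 1 bit of the intensity variable, while the trivial one-region partition retains none; the split-and-merge criterion chooses partitions by exactly this trade-off.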

Relevance: 30.00%

Abstract:

In this work we study the classification of forest types using mathematics-based image analysis on satellite data. We are interested in improving the classification of forest segments when a combination of information from two or more different satellites is used. The experimental part is based on real satellite data originating from Canada. This thesis summarises the mathematical basics of image analysis and supervised learning, the methods used in the classification algorithm. Three data sets and four feature sets were investigated in this thesis. The considered feature sets were 1) histograms (quantiles), 2) variance, 3) skewness, and 4) kurtosis. Good overall performance was achieved when a combination of the ASTERBAND and RADARSAT2 data sets was used.
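The four feature sets named above can be computed per forest segment with a short sketch; population (biased) moment estimators are used for simplicity, and the pixel values are illustrative, not the Canadian satellite data:

```python
import statistics

def segment_features(pixels, quantile_cuts=4):
    """Quantiles, variance, skewness, and kurtosis of a segment's pixels."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((x - mean) ** 2 for x in pixels) / n      # population variance
    std = var ** 0.5
    skew = sum((x - mean) ** 3 for x in pixels) / (n * std ** 3)
    kurt = sum((x - mean) ** 4 for x in pixels) / (n * var ** 2)
    quantiles = statistics.quantiles(pixels, n=quantile_cuts)
    return {"quantiles": quantiles, "variance": var,
            "skewness": skew, "kurtosis": kurt}

feats = segment_features([1.0, 2.0, 2.0, 3.0, 3.0, 3.0, 4.0, 10.0])
print(feats["variance"], round(feats["skewness"], 3))
```

A supervised classifier is then trained on such per-segment feature vectors, one vector per segment and satellite, and the vectors from the different satellites are concatenated when the data sets are combined.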

Relevance: 30.00%

Abstract:

In this study, dispersive liquid-liquid microextraction based on the solidification of floating organic droplets was used for the preconcentration and determination of thorium in water samples. In this method, acetone and 1-undecanol were used as the disperser and extraction solvents, respectively, and the ligand 1-(2-thenoyl)-3,3,3-trifluoroacetone (TTA) and Aliquat 336 were used as a chelating agent and an ion-pairing reagent, respectively, for the extraction of thorium. Inductively coupled plasma optical emission spectrometry was applied for the quantitation of the analyte after preconcentration. The effects of various factors, such as the extraction and disperser solvents, sample pH, concentration of TTA, and concentration of Aliquat 336, were investigated. Under the optimum conditions, the calibration graph was linear within the thorium content range of 1.0-250 µg L-1, with a detection limit of 0.2 µg L-1. The method was also successfully applied to the determination of thorium in different water samples.
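A linear calibration of the kind reported here (emission signal vs. thorium concentration) is fitted by least squares and then inverted to back-calculate an unknown sample. A minimal sketch; the standards and signals below are illustrative, not the paper's data:

```python
def linear_fit(x, y):
    """Least-squares slope and intercept of y = m*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

conc   = [1.0, 10.0, 50.0, 100.0, 250.0]   # standards, ug/L (linear range)
signal = [0.8, 5.3, 25.5, 50.6, 125.8]     # emission intensity (arbitrary units)
m, b = linear_fit(conc, signal)

unknown_signal = 30.4
# Invert the calibration line to recover the concentration of the unknown.
print(round((unknown_signal - b) / m, 1))
```

The detection limit is then derived from the calibration slope and the blank noise (commonly 3 times the blank standard deviation divided by the slope).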

Relevance: 30.00%

Abstract:

Segmentation has traditionally been a tool of consumer marketing in particular, but the shift from products to services has increased the need for segmentation in industrial markets as well. The goal of this study is to find clearly distinguishable customer groups based on case material provided by the Finnish management consulting company Synocus Group. Using k-means clustering, three potential market segments are found, based on which offering elements 105 selected customers in the Finnish machinery and metal products industry mentioned as the most important. The first cluster consists of price-conscious customers who calculate unit prices. The second cluster consists of maintenance-oriented customers who calculate hourly costs and maximise the operating hours of their machinery; technical services and maintenance contracts might be worth marketing to this target group. The third cluster consists of productivity-oriented customers who are interested in performance improvement and calculate costs per tonne; they aim at lower total costs through increased performance, longer service life, and lower maintenance costs.
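The k-means step can be sketched briefly: customers are represented as feature vectors (here, hypothetical importance ratings for two offering elements) and grouped into k segments by alternating assignment and centre updates:

```python
def kmeans(points, k, iters=10):
    """Plain k-means over tuples of equal length."""
    centers = list(points[:k])          # naive init: first k points
    groups = []
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: sum((a - b) ** 2
                                            for a, b in zip(p, centers[i])))
            groups[nearest].append(p)   # assign to the closest centre
        centers = [tuple(sum(d) / len(g) for d in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]  # recompute centres
    return centers, groups

# (price importance, service importance) for six hypothetical customers
pts = [(9, 1), (8, 2), (9, 2), (1, 9), (2, 8), (1, 8)]
centers, groups = kmeans(pts, k=2)
print(sorted(len(g) for g in groups))
```

With well-separated ratings the algorithm converges in a couple of iterations; the study's three segments arise the same way, from vectors of mentioned offering elements rather than two toy ratings.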

Relevance: 30.00%

Abstract:

The objective of this master's thesis was to examine technology-based smart home devices and services. The topic was approached through basic theories, transaction cost theory and the resource-based view, in order to build a basis for the thesis. The conceptual framework was discussed in terms of networks, value networks, and service systems, which provide a useful framework for service development. The needs of the elderly living at home were discussed in order to find out which technology-based services could be used to satisfy those needs. Segmentation and need data collected previously during proactive home visits were exploited, and a survey targeted at experts and professionals of the social and health care sector was additionally carried out to verify the needs. Finally, the results of the survey were analyzed using the quality function deployment method to identify the most important and suitable service offerings for the elderly. The analysis concluded that social media and monitoring services are the most useful technology-based services. However, traditional home services will still retain their necessity too.

Relevance: 30.00%

Abstract:

The use of domain-specific languages (DSLs) has been proposed as an approach to cost-effectively develop families of software systems in a restricted application domain. Domain-specific languages, in combination with the accumulated knowledge and experience of previous implementations, can in turn be used to generate new applications with unique sets of requirements. For this reason, DSLs are considered to be an important approach for software reuse. However, the toolset supporting a particular domain-specific language is also domain-specific and is by definition not reusable. Therefore, creating and maintaining a DSL requires additional resources that could be even larger than the savings associated with using them. As a solution, different tool frameworks have been proposed to simplify and reduce the cost of developing DSLs. Developers of tool support for DSLs need to instantiate, customize, or configure the framework for a particular DSL. There are different approaches to this. One approach is to use an application programming interface (API) and to extend the basic framework using an imperative programming language; an example of a tool based on this approach is Eclipse GEF. Another approach is to configure the framework using declarative languages that are independent of the underlying framework implementation. We believe this second approach can bring important benefits, as it puts the focus on specifying what the tool should be like instead of writing a program specifying how the tool achieves this functionality. In this thesis we explore this second approach. We use graph transformation as the basic approach to customize a domain-specific modeling (DSM) tool framework.
The contributions of this thesis include a comparison of different approaches for defining, representing, and interchanging software modeling languages and models, and a tool architecture for an open domain-specific modeling framework that efficiently integrates several model transformation components and visual editors. We also present several specific algorithms and tool components for the DSM framework. These include an approach for graph queries based on region operators and the star operator, and an approach for reconciling models and diagrams after executing model transformation programs. We exemplify our approach with two case studies, MICAS and EFCO, in which we show how our experimental modeling tool framework has been used to define tool environments for domain-specific languages.
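The graph-transformation idea behind such declarative tool customisation can be sketched minimally: a model is a set of labelled edges, and a rewrite rule replaces every match of a left-hand-side pattern with a right-hand side. The model and the rule below are illustrative, not taken from the MICAS or EFCO case studies:

```python
def apply_rule(edges, lhs_label, rewrite):
    """Replace each edge labelled lhs_label by the edges rewrite yields."""
    out = []
    for (src, label, dst) in edges:
        if label == lhs_label:
            out.extend(rewrite(src, dst))  # rule matched: emit right-hand side
        else:
            out.append((src, label, dst))  # otherwise keep the edge as-is
    return out

model = [("A", "assoc", "B"), ("B", "contains", "C")]

# Hypothetical rule: expand an "assoc" edge into an explicit association node,
# the kind of model refinement a declarative DSM configuration might specify.
rule = lambda s, d: [(s, "ref", s + d + "_assoc"), (s + d + "_assoc", "ref", d)]
print(apply_rule(model, "assoc", rule))
```

In a real DSM framework the left-hand side is a subgraph pattern rather than a single edge label, but the principle is the same: the tool builder declares what should be rewritten, not how the traversal is programmed.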