864 results for Segmentation algorithms
Abstract:
Among the challenges of pig farming in today's competitive market is product traceability, which ensures, among many other points, animal welfare. Vocalization is a valuable tool for identifying situations of stress in pigs, and it can be used in welfare records for traceability. The objective of this work was to identify stress in piglets using vocalization, classifying the stress into three levels: no stress, moderate stress, and acute stress. An experiment was conducted on a commercial farm in the municipality of Holambra, São Paulo State, where the vocalizations of twenty piglets were recorded during the castration procedure; the animals were separated into two groups: castration without anesthesia and castration under local anesthesia with lidocaine base. For the recording of acoustic signals, a unidirectional microphone was connected to a digital recorder, and the signals were digitized at a frequency of 44,100 Hz. The sound signals were evaluated with the Praat® software, and different data mining algorithms were applied using the Weka® software. Attribute selection improved model accuracy, with the best attribute subset obtained by the Wrapper method, while the best classification algorithms were k-NN and Naive Bayes. According to the results, it was possible to classify the level of stress in pigs through their vocalization.
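As a concrete illustration of the classification step, here is a minimal sketch using scikit-learn in place of Weka®; the acoustic feature names (Praat®-style pitch, intensity, and formant measurements) and the data file are assumptions, not the study's actual attributes.

```python
# Sketch: classify piglet vocalizations into three stress levels with k-NN
# and Naive Bayes, mirroring the Weka workflow described in the abstract.
# Feature names and the CSV file are illustrative assumptions.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

df = pd.read_csv("piglet_vocalizations.csv")        # hypothetical dataset
X = df[["pitch_hz", "intensity_db", "formant1_hz"]] # Praat-style acoustic features (assumed)
y = df["stress_level"]                              # "none" / "moderate" / "acute"

for clf in (KNeighborsClassifier(n_neighbors=5), GaussianNB()):
    scores = cross_val_score(clf, X, y, cv=10)      # 10-fold cross-validation
    print(type(clf).__name__, scores.mean())
```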
Abstract:
The aim of this thesis is to study segmentation in industrial markets and to develop a proposed segmentation method and criteria in a case study for a labelstock manufacturing company. An industrial company faces many different customers with varying needs. Market segmentation is the process of dividing a market into smaller groups in which customers have the same or similar needs, and it gives the marketer tools to match the product or service more closely to the needs of the target market. In this thesis, a segmentation tool proposal and segmentation criteria are developed in a case study for the labelstock company's Europe, Middle East and Africa business area customers and market. In the developed matrix tool, customers are evaluated on customer characteristic variables. The criteria in the evaluation matrix are based on the characteristics and buying behaviour of the customer's buying organization. There are altogether 13 variables in the evaluation matrix, including loyalty, customer size, estimated growth of the customer's purchases, and the customer's decision-making and buying behaviour. These characteristic variables help to identify the market segments to target and the customers belonging to those segments.
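To make the matrix mechanics concrete, a toy sketch of scoring customers on characteristic variables follows; the variable subset, weights, and cut-off are invented for illustration and are not the thesis's actual 13-variable criteria.

```python
# Sketch: place customers in a segmentation matrix by scoring them on
# characteristic variables (names, weights, and cut-off are assumed).
customers = {
    "Customer A": {"loyalty": 4, "size": 5, "growth": 3, "buying_behaviour": 2},
    "Customer B": {"loyalty": 2, "size": 1, "growth": 5, "buying_behaviour": 4},
}
weights = {"loyalty": 0.3, "size": 0.3, "growth": 0.2, "buying_behaviour": 0.2}

for name, scores in customers.items():
    total = sum(weights[v] * s for v, s in scores.items())
    segment = "key account" if total >= 3.5 else "standard"  # illustrative cut-off
    print(f"{name}: score {total:.1f} -> {segment}")
```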
Abstract:
Customer Relationship Management (CRM), a customer-oriented business strategy in which companies move from being product-oriented to being more customer-centric, has attracted interest in both industry and academia. Nowadays, customer behavior and activities can easily be recorded and stored with the help of integrated Enterprise Resource Planning (ERP) systems and Data Warehousing (DW). Customers with different preferences and buying behavior create their own "signature", in particular through the use of loyalty cards, which enables versatile modeling of customer buying behavior. To gain an overview of customers' buying behavior and their profitability, customer segmentation is widely used as a method for dividing customers into groups based on their similarities. The most commonly used customer segmentation methods are analytical models constructed for a given time period; these models do not take into account that customer behavior may change over time. This thesis builds a holistic view of customer characteristics and buying behavior that, in addition to conventional segmentation models, also accounts for the dynamics of buying behavior. The dynamics of a customer segmentation model comprise changes in the structure and content of the segments, as well as changes in individual customers' membership of a segment (so-called migration analysis). The first dynamic is approached through temporal customer segmentation, which visualizes changes in segment structures and profiles over time; the second through segment migration analysis, which visually identifies customers who switch between segments in similar ways. Each type of change is modeled, analyzed, and exemplified with visual data mining techniques, primarily Self-Organizing Maps (SOM) and the Self-Organizing Time Map (SOTM), an extension of the SOM. The visualization is intended to support the interpretation of the detected patterns and to ease knowledge transfer between the analyst and the decision makers, and the methods are used to demonstrate their usefulness both in customer relationship management in general and in customer segmentation in particular.
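As an illustration of the SOM-based segmentation step, here is a minimal sketch using the open-source MiniSom package rather than the thesis's own implementation; the customer feature matrix is simulated.

```python
# Sketch: cluster customers on a self-organizing map with MiniSom.
# The feature matrix (stand-in for normalized customer attributes) is simulated.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(0)
X = rng.random((500, 3))             # stand-in for normalized customer data

som = MiniSom(6, 6, input_len=3, sigma=1.0, learning_rate=0.5, random_seed=0)
som.random_weights_init(X)
som.train_random(X, num_iteration=5000)

# Each customer maps to its best-matching unit; units act as micro-segments.
segments = [som.winner(x) for x in X]
print(segments[:5])
```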
Abstract:
The objective of this thesis was to map the possibilities for systematic supplier management in the chemical process industry. The study aimed to develop a supplier management tool that could be integrated with the operations of a business unit. With the developed tool, suppliers can be segmented based on their willingness and capability, and the segmentation can be applied in purchasing decisions. The thesis surveys the methods recognized in the literature for managing and allocating suppliers, and identifies segmentation as a method for grouping and selecting suppliers in procurement. Based on the literature, a proposal for a segmentation framework and evaluation criteria factors was constituted. Building on this theoretical proposal, a final segmentation framework was constituted in an expert workshop; it covers the segments with their descriptions and an evaluation part. The evaluation part includes an evaluation framework that scores suppliers on the selected factors and leads to total grades in willingness and capability. These total grades serve as coordinates that determine the segment to which the supplier under evaluation belongs. The thesis also describes the definitions, objectives, and road maps of the segments.
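A toy sketch of the evaluation part described above: factor scores are aggregated into willingness and capability grades that serve as coordinates into a segment grid. The factor names, the unweighted aggregation, and the 2x2 grid with its labels are illustrative assumptions, not the workshop's actual framework.

```python
# Sketch: map a supplier into a willingness/capability segment grid.
# Factors, aggregation, and segment labels are assumptions for illustration.
willingness_scores = {"commitment": 4, "information_sharing": 3}
capability_scores = {"quality": 5, "delivery_reliability": 4}

def grade(scores):
    return sum(scores.values()) / len(scores)  # unweighted mean as total grade

w, c = grade(willingness_scores), grade(capability_scores)
segments = {(True, True): "strategic partner", (True, False): "develop capability",
            (False, True): "motivate", (False, False): "phase out"}
print(segments[(w >= 3, c >= 3)], f"(willingness={w:.1f}, capability={c:.1f})")
```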
Abstract:
Video transcoding refers to the process of converting a digital video from one format into another. It is a compute-intensive operation; therefore, transcoding a large number of simultaneous video streams requires a large amount of computing resources. Moreover, to handle different load conditions in a cost-efficient manner, the video transcoding service should be dynamically scalable. Infrastructure as a Service (IaaS) clouds currently offer computing resources, such as virtual machines, under the pay-per-use business model, so they can be leveraged to provide a cost-efficient, dynamically scalable video transcoding service. To use computing resources efficiently in a cloud computing environment, cost-efficient virtual machine provisioning is required to avoid over-utilization and under-utilization of virtual machines. This thesis presents proactive virtual machine resource allocation and de-allocation algorithms for video transcoding in cloud computing. Since users' requests for videos may change at different times, a check is required to see whether the current computing resources are adequate for the video requests; therefore, work on admission control is also provided. In addition to admission control, temporal resolution reduction is used to avoid jitter in a video. Furthermore, in a cloud computing environment such as Amazon EC2, computing resources are more expensive than storage resources. Therefore, to avoid repeating transcoding operations, a transcoded video needs to be stored for a certain time. Storing all videos for the same amount of time is not cost-efficient either, because popular transcoded videos have a high access rate while unpopular transcoded videos are rarely accessed. This thesis provides a cost-efficient computation and storage trade-off strategy, which keeps videos in the video repository as long as it is cost-efficient to store them. The thesis also proposes video segmentation strategies for bit rate reduction and spatial resolution reduction video transcoding. The proposed strategies are evaluated using a message passing interface (MPI) based video transcoder, which uses a coarse-grain parallel processing approach where video is segmented at the group-of-pictures level.
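The storage trade-off can be made concrete with a small sketch: a transcoded video is kept as long as the expected cost of re-transcoding it exceeds the cost of storing it. The prices and the popularity estimate below are placeholder assumptions, not figures from the thesis.

```python
# Sketch: decide whether keeping a transcoded video stored is cost-efficient.
# Prices and the popularity estimate are placeholder assumptions.
def keep_in_storage(size_gb, transcode_cost, expected_requests_per_month,
                    storage_price_per_gb_month=0.023):
    storage_cost = size_gb * storage_price_per_gb_month
    retranscode_cost = transcode_cost * expected_requests_per_month
    return retranscode_cost > storage_cost

print(keep_in_storage(size_gb=2.0, transcode_cost=0.05, expected_requests_per_month=10))  # True: popular video
print(keep_in_storage(size_gb=2.0, transcode_cost=0.05, expected_requests_per_month=0))   # False: rarely accessed
```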
Abstract:
Global illumination algorithms are at the center of realistic image synthesis and account for non-trivial light transport and occlusion within scenes, such as indirect illumination, ambient occlusion, and environment lighting. Their computationally most difficult part is determining light source visibility at each visible scene point. Height fields, on the other hand, constitute an important special case of geometry and are mainly used to describe certain types of objects, such as terrains, and to map detailed geometry onto object surfaces. The geometry of an entire scene can also be approximated by treating the distance values of its camera projection as a screen-space height field. In order to shadow height fields from environment lights, a horizon map is usually used to occlude incident light. We reduce the per-receiver time complexity of generating the horizon map on N × N height fields from the O(N) of previous work to O(1) by using an algorithm that incrementally traverses the height field and reuses the information already gathered along the path of traversal. We also propose an accurate method to integrate the incident light within the limits given by the horizon map. Indirect illumination in height fields requires information about which other points are visible to each height field point. We present an algorithm that determines this intervisibility in a time complexity that matches the space complexity of the produced visibility information, in contrast to previous methods, which scale with the height field size. As a result, the amount of computation is reduced by two orders of magnitude in common use cases. Screen-space ambient obscurance methods approximate ambient obscurance from the depth buffer geometry and have been widely adopted by contemporary real-time applications. They work by sampling the screen-space geometry around each receiver point, but have previously been limited to near-field effects because sampling a large radius quickly exceeds the render time budget. We present an algorithm that reduces the quadratic per-pixel complexity of previous methods to a linear complexity by line-sweeping over the depth buffer and maintaining an internal representation of the processed geometry from which occluders can be efficiently queried. Another algorithm is presented to determine ambient obscurance from the entire depth buffer at each screen pixel. The algorithm scans the depth buffer in a quick pre-pass and locates important features in it, which are then used to evaluate the ambient obscurance integral accurately. We also propose an evaluation of the integral such that results within a few percent of the ray-traced screen-space reference are obtained at real-time render times.
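To give a flavor of the incremental traversal, here is a simplified one-dimensional sketch, not the thesis's actual algorithm: sweeping a height-field scanline while maintaining the upper convex hull of the visited samples on a stack yields each sample's horizon angle in amortized O(1) time, because the steepest line from the current sample to any earlier one always touches the last hull vertex.

```python
# Sketch: amortized O(1) per-point horizon along one sweep direction of a
# 1D height-field scanline, maintaining the upper convex hull of visited
# points on a stack. A full horizon map repeats this over many azimuths.
import math

def left_horizons(heights, spacing=1.0):
    """Elevation angle (radians) of the left horizon at each sample."""
    hull = []                     # stack of (x, h) on the upper convex hull
    horizons = []
    for i, h in enumerate(heights):
        p = (i * spacing, h)
        # Pop hull points made non-convex (hence never occluding) by the new point.
        while len(hull) >= 2:
            (x1, h1), (x2, h2) = hull[-2], hull[-1]
            if (h2 - h1) * (p[0] - x2) <= (p[1] - h2) * (x2 - x1):
                hull.pop()        # hull[-1] lies below the line hull[-2] -> p
            else:
                break
        if hull:
            x2, h2 = hull[-1]     # tangent point: the steepest visible occluder
            horizons.append(math.atan2(h2 - p[1], p[0] - x2))
        else:
            horizons.append(-math.pi / 2)  # nothing to the left
        hull.append(p)
    return horizons

print(left_horizons([0.0, 2.0, 1.0, 3.0, 0.5]))
```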
Abstract:
Vehicle routing has been studied since the 1950s, originally in the search for the optimal route for fuel deliveries from a depot to a number of service stations. Since then, vehicle routing problems have been studied academically and dozens of different variants have been formulated. Solution methods are typically divided into exact methods on the one hand and heuristics and metaheuristics on the other. With the growth of computing power and advances in the algorithms used in heuristics, route optimization has begun to be offered commercially. The goal of the CO-SKY project is to commercialize vehicle routing that is web-based or integrated into an ERP system. This Master's thesis examines the key features affecting the commercialization of transport planning and route optimization software. The features are examined 1) on the basis of the needs and requirements of small and medium-sized transport companies in particular, and 2) by assessing the supply of software already on the market. On this basis, the match between demand and supply is also assessed. Interviews with pilot customers made it possible to set requirements for the software and, at the same time, to hear users' opinions on optimization. Numerous logistics software providers were interviewed at logistics trade fairs in both Finland and Germany; these interviews provided an understanding of the software products in question and of both the supply of and demand for optimization. Academic research on the topic is extensive, covering technical implementation as well as (survey) studies on the features and quality of available software. The needs of transport companies vary by company and by sector, but the basic problems are the same ones addressed in academic route optimization research and solvable by commercial software. Although the benefits of route optimization are measurable, planning, especially in SMEs, is still mostly done by hand. Based on the trade fair interviews and end-user opinions, commercial solutions appear to be designed for larger transport companies: the price of a typical IT project, the deployment time and installation, and the payback period of the solution all affect SMEs' purchasing decisions. With respect to commercialization, the challenges relate in particular to segmentation and to marketing through verifying and communicating customer value.
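To make the division into solution methods concrete, here is a minimal sketch of a nearest-neighbour construction heuristic for a capacitated vehicle routing problem, the simplest member of the heuristic family; the depot, customer coordinates, demands, and vehicle capacity are invented.

```python
# Sketch: nearest-neighbour construction heuristic for a capacitated VRP.
# Depot, customer coordinates, demands, and capacity are invented.
import math

depot = (0.0, 0.0)
customers = {1: (2, 3), 2: (5, 1), 3: (1, 7), 4: (6, 6)}
demand = {1: 2, 2: 3, 3: 2, 4: 3}
capacity = 5

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

unserved, routes = set(customers), []
while unserved:
    pos, load, route = depot, 0, []
    while True:
        feasible = [c for c in unserved if load + demand[c] <= capacity]
        if not feasible:
            break                      # vehicle full: start a new route
        nxt = min(feasible, key=lambda c: dist(pos, customers[c]))
        route.append(nxt); load += demand[nxt]
        pos = customers[nxt]; unserved.discard(nxt)
    routes.append(route)
print(routes)  # [[1, 2], [3, 4]] with this data
```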
Abstract:
This thesis presents a framework for the segmentation of clustered, overlapping convex objects. The proposed approach is based on a three-step framework that addresses the tasks of seed point extraction, contour evidence extraction, and contour estimation. State-of-the-art techniques for each step were studied and evaluated using synthetic and real microscopic image data, and a method combining the best performer in each step was presented. In the proposed method, the Fast Radial Symmetry transform, an edge-to-marker association algorithm, and ellipse fitting are employed for seed point extraction, contour evidence extraction, and contour estimation, respectively. Using synthetic and real image data, the proposed method was evaluated and compared with two competing methods; the results showed a promising improvement over the competitors, with high segmentation and size distribution estimation accuracy.
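A rough sketch of the three-step pipeline in OpenCV terms follows. The first two steps use simple stand-ins (distance-transform peaks and external contours) instead of the thesis's Fast Radial Symmetry transform and edge-to-marker association; the final step uses cv2.fitEllipse, as in the proposed method. The input file is hypothetical.

```python
# Sketch of the three-step framework. Steps 1-2 use simple stand-ins
# (distance-transform maxima, external contours) instead of the thesis's
# Fast Radial Symmetry and edge-to-marker association; step 3 fits ellipses.
import cv2
import numpy as np

img = cv2.imread("cells.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input image
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Step 1: seed points (stand-in: peaks of the distance transform).
dt = cv2.distanceTransform(mask, cv2.DIST_L2, 5)
_, seeds = cv2.threshold(dt, 0.6 * dt.max(), 255, cv2.THRESH_BINARY)

# Step 2: contour evidence (stand-in: external contours of the mask).
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

# Step 3: contour estimation by ellipse fitting, as in the proposed method.
ellipses = [cv2.fitEllipse(c) for c in contours if len(c) >= 5]
print(len(ellipses), "objects estimated")
```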
Abstract:
Identification of low-dimensional structures and of the main sources of variation in multivariate data are fundamental tasks in data analysis. Many methods aimed at these tasks involve the solution of an optimization problem; the objective of this thesis is therefore to develop computationally efficient and theoretically justified methods for solving such problems. Most of the thesis is based on a statistical model in which ridges of the density estimated from the data are considered the relevant features. Finding ridges, which are generalized maxima, necessitates the development of advanced optimization methods. For this purpose, an efficient and convergent trust region Newton method for projecting a point onto a ridge of the underlying density is developed. The method is utilized in a differential equation-based approach for tracing ridges and computing projection coordinates along them. The density estimation is done nonparametrically using Gaussian kernels, which allows the application of ridge-based methods with only mild assumptions on the underlying structure of the data. The statistical model and the ridge finding methods are adapted to two different applications. The first is the extraction of curvilinear structures from noisy data mixed with background clutter. The second is a novel nonlinear generalization of principal component analysis (PCA) and its extension to time series data. The methods have a wide range of potential applications where most earlier approaches are inadequate; examples include the identification of faults from seismic data and of filaments from cosmological data. The applicability of the nonlinear PCA to climate analysis and to the reconstruction of periodic patterns from noisy time series data is also demonstrated. Other contributions of the thesis include the development of an efficient semidefinite optimization method for embedding graphs into Euclidean space. The method produces structure-preserving embeddings that maximize interpoint distances. It is primarily developed for dimensionality reduction, but also has potential applications in graph theory and various areas of physics, chemistry, and engineering. The asymptotic behaviour of ridges and maxima of Gaussian kernel densities is also investigated as the kernel bandwidth approaches infinity. The results are applied to the nonlinear PCA and to finding significant maxima of such densities, which is a typical problem in visual object tracking.
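To illustrate what projecting a point onto a density ridge involves, here is a sketch using the related subspace-constrained mean-shift iteration over a Gaussian kernel density; the thesis itself develops a trust region Newton method instead, and the noisy arc data below is simulated.

```python
# Sketch: project a point onto a density ridge by subspace-constrained
# mean shift over a Gaussian kernel density estimate. This illustrates the
# task; the thesis uses a trust region Newton method instead.
import numpy as np

rng = np.random.default_rng(0)
t = rng.uniform(0, np.pi, 300)
data = np.c_[np.cos(t), np.sin(t)] + 0.05 * rng.normal(size=(300, 2))  # noisy arc

def project_to_ridge(x, data, h=0.2, steps=100):
    for _ in range(steps):
        d = data - x                                   # (n, 2) offsets to samples
        w = np.exp(-0.5 * (d ** 2).sum(1) / h ** 2)    # Gaussian kernel weights
        grad = (w[:, None] * d).sum(0) / h ** 2        # density gradient (up to scale)
        hess = (w[:, None, None] * (d[:, :, None] * d[:, None, :] / h ** 2
                - np.eye(2))).sum(0) / h ** 2          # density Hessian (up to scale)
        vals, vecs = np.linalg.eigh(hess)
        v = vecs[:, :1]                                # across-ridge direction (smallest eigenvalue)
        x = x + (v @ v.T @ grad) * h ** 2 / w.sum()    # projected mean-shift step
    return x

print(project_to_ridge(np.array([0.5, 0.9]), data))    # lands near the unit arc
```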
Abstract:
The purpose of this study was to extend supplier segmentation and development approaches to the project-driven construction industry, where these practices are less exploited and not as well documented as in the process-centric manufacturing industry. First, portfolio models for supply base segmentation and various supplier development efforts were investigated in a literature review, and a step-wise framework was structured for the empirical research. The empirical study employed multiple research methods in three case studies at a large Finnish construction company. The first study categorized the construction item classes into a purchasing portfolio and positioned suppliers in a power matrix by investigating buyer-supplier relations. Using statistical tests, the study also identified factors that affect suppliers' performance. The final case study identified areas for improvement in the interface between the main contractor and one of its largest suppliers. The results indicate that only by assessing the supply base and the power circumstances within it in a holistic manner can buyers understand how best to establish appropriate supplier development strategies in the project environment.
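A toy sketch of the portfolio step: item classes are placed into Kraljic-style quadrants by profit impact and supply risk. The item classes, scores, and thresholds are invented, and the quadrant labels are the generic ones rather than the case company's.

```python
# Sketch: place construction item classes into purchasing-portfolio
# quadrants by profit impact and supply risk (all scores are invented).
items = {"concrete elements": (0.9, 0.7), "fasteners": (0.2, 0.1),
         "HVAC units": (0.6, 0.8), "timber": (0.7, 0.3)}

def quadrant(profit_impact, supply_risk):
    if profit_impact >= 0.5:
        return "strategic" if supply_risk >= 0.5 else "leverage"
    return "bottleneck" if supply_risk >= 0.5 else "non-critical"

for item, (impact, risk) in items.items():
    print(f"{item}: {quadrant(impact, risk)}")
```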
Abstract:
This research studies the effectiveness of Naive Bayes and Gaussian Mixture Model classifiers for segmenting exudates in retinal images; the results are evaluated with metrics commonly used in medical imaging. A color variation analysis of retinal images is also carried out to determine how effectively retinal images can be segmented using only the color information of the pixels.
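A condensed sketch of pixel-wise segmentation on color alone with scikit-learn, mirroring the two classifiers compared in the study; the labeled RGB training pixels are synthesized for illustration.

```python
# Sketch: segment exudate pixels from color alone with Naive Bayes and
# class-wise Gaussian mixtures, the two classifiers compared in the study.
# The labeled RGB training pixels are synthesized for illustration.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
exudate = rng.normal([220, 200, 80], 15, (500, 3))    # bright yellowish pixels
background = rng.normal([150, 60, 40], 20, (500, 3))  # reddish fundus pixels
X = np.vstack([exudate, background])
y = np.array([1] * 500 + [0] * 500)

nb = GaussianNB().fit(X, y)

# One Gaussian mixture per class; classify by the higher class likelihood.
gmm = {c: GaussianMixture(n_components=2, random_state=0).fit(X[y == c]) for c in (0, 1)}
pixels = np.array([[210, 190, 90], [140, 70, 35]])
print("Naive Bayes:", nb.predict(pixels))
print("GMM:", (gmm[1].score_samples(pixels) > gmm[0].score_samples(pixels)).astype(int))
```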
Abstract:
Companies require information in order to gain a better understanding of their customers. Data concerning customers, their interests, and their behavior are collected through different loyalty programs. The amount of data stored in company databases has increased exponentially over the years and has become difficult to handle. This research area is the subject of much current interest, not only in academia but also in practice, as shown by the many magazines and blogs covering topics such as getting to know your customers, Big Data, information visualization, and data warehousing. In this Ph.D. thesis, the Self-Organizing Map and two extensions of it, the Weighted Self-Organizing Map (WSOM) and the Self-Organizing Time Map (SOTM), are used as data mining methods for extracting information from large amounts of customer data. The thesis focuses on how data mining methods can be used to model and analyze customer data in order to gain an overview of the customer base, as well as to analyze niche markets. Real-world customer data is used to create models for customer profiling, and the built models are evaluated by CRM experts from the retailing industry. The experts considered the information gained with the help of the models to be valuable and useful for decision making and for strategic planning for the future.
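A minimal sketch of the SOTM idea using MiniSom: a small SOM is trained for each time period, warm-started from the previous period's codebook, so that segment profiles can be compared across periods. The drifting customer data is simulated, and the warm start pokes MiniSom's internal weights since the package has no public setter.

```python
# Sketch of the Self-Organizing Time Map idea: train a small SOM per time
# period, warm-started from the previous period's weights, so segment
# profiles can be compared across periods. Customer data is simulated.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(0)
periods = [rng.random((300, 3)) + 0.1 * p for p in range(4)]  # drifting customers

soms, prev_weights = [], None
for data in periods:
    som = MiniSom(1, 5, input_len=3, sigma=0.8, learning_rate=0.3, random_seed=0)
    if prev_weights is not None:
        som._weights = prev_weights.copy()   # warm start (internal attribute; no public setter)
    else:
        som.random_weights_init(data)
    som.train_random(data, num_iteration=2000)
    prev_weights = som.get_weights()
    soms.append(som)

# Compare how unit profiles drift between consecutive periods.
print(np.abs(soms[1].get_weights() - soms[0].get_weights()).mean())
```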