44 results for the Fuzzy Colour Segmentation Algorithm

in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland


Relevance:

100.00%

Publisher:

Abstract:

The parameter setting of a differential evolution algorithm must meet several requirements: efficiency, effectiveness, and reliability. Problems vary, and the solution of a particular problem can be represented in different ways; an algorithm that is most efficient for one representation may be less efficient for others. The development of differential evolution-based methods contributes substantially to research on evolutionary computing and global optimization in general. The objective of this study is to investigate the differential evolution algorithm, the intelligent adjustment of its control parameters, and its applications. In the thesis, the differential evolution algorithm is first examined using different parameter settings and test functions. Fuzzy control is then employed to make the control parameters adaptive, based on the optimization process and expert knowledge. The developed algorithms are applied to training radial basis function networks for function approximation, with the centers, widths, and weights of the basis functions as possible variables, both with control parameters kept fixed and with parameters adjusted by the fuzzy controller. After the influence of the control variables on the performance of the differential evolution algorithm was explored, an adaptive version of the differential evolution algorithm was developed and differential evolution-based radial basis function network training approaches were proposed. Experimental results showed that the performance of the differential evolution algorithm is sensitive to parameter setting, and the best setting was found to be problem dependent. The fuzzy adaptive differential evolution algorithm relieves the user of the burden of parameter setting and performs better than versions using fixed parameters. Differential evolution-based approaches are effective for training Gaussian radial basis function networks.
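
To make the role of the control parameters concrete, here is a minimal sketch of the classic DE/rand/1/bin scheme with the scale factor F and crossover rate CR kept fixed (the quantities the thesis makes adaptive through fuzzy control). The function and parameter names are illustrative, not taken from the thesis.

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9, generations=200, seed=0):
    """Minimal DE/rand/1/bin sketch with fixed control parameters F and CR."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fitness = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # Mutation: combine three distinct individuals (DE/rand/1).
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # Binomial crossover controlled by CR (at least one gene from the mutant).
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            # Greedy selection.
            trial_fit = f(trial)
            if trial_fit <= fitness[i]:
                pop[i], fitness[i] = trial, trial_fit
    best = np.argmin(fitness)
    return pop[best], fitness[best]

# Example: minimise the sphere function in 5 dimensions.
x_best, f_best = differential_evolution(lambda x: float(np.sum(x**2)), [(-5, 5)] * 5)
```

A fuzzy-adaptive variant, as studied in the thesis, would replace the fixed F and CR with values updated during the run from the observed optimization progress.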

Relevance:

100.00%

Publisher:

Abstract:

Due to the large number of characteristics, there is a need to extract the most relevant characteristics from the input data so that the amount of information lost is minimal and the classification performed with the projected data set remains relevant with respect to the original data. To achieve this feature extraction, different statistical techniques, such as principal component analysis (PCA), may be used. This thesis describes an extension of principal component analysis (PCA) that allows the extraction of a finite number of relevant features from high-dimensional fuzzy data and noisy data. PCA finds linear combinations of the original measurement variables that describe the significant variation in the data. The comparison of the two proposed methods was carried out using postoperative patient data. Experimental results demonstrate the applicability of the two proposed methods to complex data. Fuzzy PCA was used in the classification problem. The classification was performed using the similarity classifier algorithm, where the weights of the total similarity measure are optimized with a differential evolution algorithm. This thesis presents a comparison of the classification results based on the data obtained from the fuzzy PCA.
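
As a reference point for the fuzzy extension described above, the following is a minimal sketch of standard (crisp) PCA via eigendecomposition of the covariance matrix; the fuzzy PCA and the similarity classifier themselves are not reproduced here, and the data are random stand-ins.

```python
import numpy as np

def pca_project(X, n_components=2):
    """Standard PCA: project data onto the directions of largest variance."""
    X_centered = X - X.mean(axis=0)
    cov = np.cov(X_centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:n_components]
    components = eigvecs[:, order]                   # principal directions
    return X_centered @ components, eigvals[order]

# Illustrative example: 100 samples with 8 features, reduced to 3 components.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 8))
scores, explained = pca_project(X, n_components=3)
```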

Relevance:

100.00%

Publisher:

Abstract:

The present study was conducted with two different servo systems. In the first, a servo-hydraulic system was identified and then controlled by a fuzzy gain-scheduling controller. For the second servo system, an electro-magnetic linear motor, the suppression of mechanical vibration and the position tracking of a reference model were studied using a neural network and an adaptive backstepping controller, respectively. The research methods are outlined below. Electro-hydraulic servo systems (EHSS) are commonly used in industry. Such systems are nonlinear in nature and their dynamic equations contain several unknown parameters. System identification is a prerequisite for the analysis of a dynamic system. Differential Evolution (DE) is one of the most promising novel evolutionary algorithms for solving global optimization problems. In this study, the DE algorithm is proposed for handling nonlinear constraint functions with boundary limits on the variables in order to find the best parameters of a servo-hydraulic system with a flexible load. DE provides fast convergence and accurate solutions regardless of the initial parameter values. The control of hydraulic servo systems has been the focus of intense research over the past decades. These systems are nonlinear in nature and generally difficult to control, since changing system parameters while using the same gains can cause overshoot or even loss of stability. The highly nonlinear behaviour of these devices makes them ideal subjects for applying different types of sophisticated controllers. The study addresses second-order model-reference positioning control of a flexible-load servo-hydraulic system using fuzzy gain scheduling. To compensate for the lack of damping in the hydraulic system, acceleration feedback was used. For comparison, a P controller with feed-forward acceleration and different gains in extension and retraction was used. The design procedure for the controller and the experimental results are discussed. The results suggest that the fuzzy gain-scheduling controller decreases the position reference tracking error. The second part of the research concerned a Permanent Magnet Linear Synchronous Motor (PMLSM). A recurrent neural network compensator for suppressing mechanical vibration in a PMLSM with a flexible load was studied. The linear motor is controlled by a conventional PI velocity controller, and the vibration of the flexible mechanism is suppressed using a hybrid recurrent neural network. A differential evolution strategy and a Kalman filter are used to avoid the local minimum problem and to estimate the states of the system, respectively. The proposed control method was first designed using a nonlinear simulation model built in Matlab Simulink and then implemented on a practical test rig. The proposed method works satisfactorily and suppresses the vibration successfully. In the last part of the research, a nonlinear load control method was developed and implemented for a PMLSM with a flexible load. The purpose of the controller is to track the flexible load to the desired position reference as fast as possible and without awkward oscillation. The control method is based on an adaptive backstepping algorithm whose stability is ensured by the Lyapunov stability theorem. The states of the system needed in the controller are estimated using a Kalman filter. The proposed controller was implemented and tested in a linear motor test drive, and the responses are presented.
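
Several of the controllers above rely on Kalman-filter state estimates. Below is a minimal sketch of one predict/update cycle of a discrete-time linear Kalman filter; the system matrices are illustrative placeholders, not the PMLSM model used in the thesis.

```python
import numpy as np

def kalman_step(x, P, z, A, C, Q, R):
    """One predict/update cycle of a discrete-time linear Kalman filter."""
    # Predict state and covariance.
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update with measurement z.
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new

# Illustrative 2-state (position, velocity) model with position measurements.
dt = 0.01
A = np.array([[1.0, dt], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
Q = 1e-4 * np.eye(2)
R = np.array([[1e-2]])
x, P = np.zeros(2), np.eye(2)
x, P = kalman_step(x, P, np.array([0.05]), A, C, Q, R)
```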

Relevance:

100.00%

Publisher:

Abstract:

The purpose of pigment coating is to improve the surface properties of printing papers. The aim of this work was to find a suitable coating colour for coated coldset paper. The literature part discusses coldset printing and its problems. The fundamentals of the coating method, the properties of the coating colour, and their effect on the coating result are also discussed. In addition, some methods for studying the surface of coated paper are presented. In the experimental part, the effects of different coating colour compositions, coat weights, and calendering on the printability of the paper were studied. The papers were coated with a Helicoater, and some coating colours were also tested in pilot-scale coating. An explanation for the behaviour of the paper in printing was sought in the surface structure of the coated paper. The best printability is achieved with a coating in which the only pigment is carbonate. Print quality can be improved by using calcined kaolin together with carbonate, but the surface strength of this coating is not sufficient for CSWO printing. Starch pigment improves the absorption of water and ink and thus makes the printed product drier and more pleasant to the touch, but it causes smearing. This is due to too rapid ink setting. A "soft" SB latex is better suited to offset printing than a "hard" latex, which also contains PVAc. The "soft" latex gives better surface strength and print quality than the "hard" latex. Dusting of the paper during printing can be reduced by increasing the coat weight and lowering the solids content of the coating colour. Calendering cannot improve surface strength or print quality. The explanation for the print quality and printability of the papers examined in this study is found by studying the surface structure of the coating. Print quality is affected most by the coverage of the coating. Poor coverage can be improved by increasing the coat weight. Dusting during printing is caused by pigments that are not bound to the paper surface, which in turn is due to the poor water retention of the coating colour. The most useful information about the surface structure of these papers is obtained by examining the surface with a scanning electron microscope (SEM), an atomic force microscope (AFM), and laser-induced plasma spectrometry (LIPS). The advantage of LIPS is that the coat weight distribution can be determined in both the x-y and z directions simultaneously at the same spot. LIPS also requires very little sample preparation.

Relevance:

100.00%

Publisher:

Abstract:

Segmentation is a strategic tool that makes a company's use of resources more efficient and thus affects all customer-related business processes. The aim of this work was to construct a segmentation model (covering both the segmentation process and the criteria) for the business-to-business Internet market. The results can, however, be interpreted and applied more broadly to high-technology business-to-business service markets. This study adds to our knowledge and offers a new perspective on segmentation in high-technology business-to-business service markets. The work describes the special characteristics of high technology and of business-to-business and service marketing, and how these factors affect the segmentation model. The case company's current segmentation practices were investigated through personal expert interviews. The interviews were used to form a picture of the current approaches and their starting points, strengths, and challenges. After the interviews were analysed, a project was set up to develop the segmentation. As a result of the work, a segmentation model was created that provides a solid foundation for developing segmentation as a continuous process. The work proposes integrating segmentation into the company's customer-related business processes, which is often missing from earlier work, as well as improving the flow of information so that segmentation can be exploited more effectively. Segmentation is a strategic tool and therefore requires the support and commitment of senior management. Applied correctly, segmentation offers the business an opportunity for significant benefits, such as improved customer satisfaction and profitability.

Relevance:

100.00%

Publisher:

Abstract:

Market segmentation first emerged as early as the 1950s, and it has been one of the basic concepts of marketing ever since. Most segmentation research has, however, focused on consumer markets, while the segmentation of business and industrial markets has received less attention. The aim of this study is to create a segmentation model for industrial markets from the perspective of a provider of information technology products and services. The purpose is to determine whether the case company's current customer databases enable effective segmentation, to identify suitable segmentation criteria, and to assess whether and how the databases should be developed to enable more effective segmentation. The intention is to create a single model shared by the different business units, so the objectives of the different units must be taken into account to avoid conflicts of interest. The research methodology is a case study. The sources used were secondary sources as well as primary sources such as the case company's own databases and interviews. The starting point of the study was the research problem: can database-driven segmentation be used for profitable customer relationship management in the SME sector? The goal is to create a segmentation model that exploits the information in the databases without compromising the requirements of effective and profitable segmentation. The theoretical part examines segmentation in general, with an emphasis on industrial market segmentation. The aim is to form a clear picture of the different approaches to the topic and to deepen the understanding of the most important theories. The analysis of the databases revealed clear gaps in the customer data. Basic contact information is available, but data for segmentation purposes is very limited. The flow of information from resellers and wholesalers should be improved in order to obtain end-customer data. Segmentation based on the current data relies mainly on secondary information such as industry and company size, and even these are not available for all the companies in the database.

Relevance:

100.00%

Publisher:

Abstract:

This thesis addresses the problem of computing the minimal and maximal diameter of the Cayley graph of Coxeter groups. We first present the relevant parts of polytope theory and related Coxeter theory. After this, a method for computing the orthogonal projections of a polytope from R^d onto R^2 and R^3, d ≥ 3, is presented. This method is the Equality Set Projection (ESP) algorithm, which requires a constant number of linear programming problems per facet of the projection in the absence of degeneracy. The ESP algorithm also allows us to compute projected geometric diameters of high-dimensional polytopes. A representative set of projected polytopes is presented to illustrate the methods adopted in this thesis.
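
The ESP algorithm itself is considerably more involved, but its basic building block can be sketched: a single linear program over the original polytope that yields a supporting point of its projection in a chosen direction (the kind of LP solved per facet). The sketch below assumes SciPy is available; the constraint data are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def support_point_of_projection(A, b, P, direction):
    """Find the point of the projection {P x : A x <= b} furthest in `direction`
    by solving one linear program over the original polytope."""
    c = -(P.T @ direction)          # maximise direction . (P x)  <=>  minimise c . x
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(None, None)] * A.shape[1])
    if not res.success:
        raise RuntimeError(res.message)
    return P @ res.x                # supporting point in the projected space

# Example: project the unit cube 0 <= x <= 1 in R^3 onto its first two coordinates.
A = np.vstack([np.eye(3), -np.eye(3)])
b = np.r_[np.ones(3), np.zeros(3)]
P = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
print(support_point_of_projection(A, b, P, np.array([1.0, 1.0])))   # -> [1. 1.]
```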

Relevance:

100.00%

Publisher:

Abstract:

This thesis concentrates on developing a practical local-approach methodology based on micromechanical models for the analysis of ductile fracture of welded joints. Two major problems involved in the local approach, namely the dilational constitutive relation reflecting the softening behaviour of the material, and the failure criterion associated with the constitutive equation, have been studied in detail. Firstly, considerable effort was devoted to the numerical integration and computer implementation of the non-trivial dilational Gurson-Tvergaard model. Considering the weaknesses of the widely used Euler forward integration algorithms, a family of generalized mid-point algorithms is proposed for the Gurson-Tvergaard model. Correspondingly, based on the decomposition of stresses into hydrostatic and deviatoric parts, an explicit seven-parameter expression for the consistent tangent moduli of the algorithms is presented. This explicit formula avoids any matrix inversion during numerical iteration and thus greatly facilitates the computer implementation of the algorithms and increases the efficiency of the code. The accuracy of the proposed algorithms and of other conventional algorithms has been assessed in a systematic manner in order to identify the best algorithm for this study. The accurate and efficient performance of the present finite element implementation of the proposed algorithms has been demonstrated by various numerical examples. It has been found that the true mid-point algorithm (a = 0.5) is the most accurate one when the deviatoric strain increment is radial to the yield surface, and that it is very important to use the consistent tangent moduli in the Newton iteration procedure. Secondly, an assessment has been made of the consistency of current local failure criteria for ductile fracture: the critical void growth criterion, the constant critical void volume fraction criterion, and Thomason's plastic limit-load failure criterion. Significant differences in the predictions of ductility by the three criteria were found. By assuming that the void grows spherically and using the void volume fraction from the Gurson-Tvergaard model to calculate the current void-matrix geometry, Thomason's failure criterion has been modified and a new failure criterion for the Gurson-Tvergaard model is presented. Comparison with Koplik and Needleman's finite element results shows that the new failure criterion is indeed fairly accurate. A novel feature of the new failure criterion is that a mechanism for void coalescence is incorporated into the constitutive model. Hence material failure is a natural result of the development of macroscopic plastic flow and the microscopic internal necking mechanism. Under the new failure criterion, the critical void volume fraction is not a material constant; the initial void volume fraction and/or the void nucleation parameters essentially control material failure. This feature is very desirable and makes the numerical calibration of the void nucleation parameter(s) possible and physically sound. Thirdly, a local-approach methodology based on the above two major contributions has been built in ABAQUS via the user material subroutine UMAT and applied to welded T-joints. Using the void nucleation parameters calibrated from simple smooth and notched specimens, it was found that the fracture behaviour of the welded T-joints can be well predicted with the present methodology. This application has shown how the damage parameters of both the base material and the heat-affected zone (HAZ) material can be obtained in a step-by-step manner, and how useful and capable the local-approach methodology is in the analysis of fracture behaviour and crack development as well as in the structural integrity assessment of practical problems where non-homogeneous materials are involved. Finally, a procedure for the possible engineering application of the present methodology is suggested and discussed.
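
The generalized mid-point family referred to above can be illustrated, in a much simplified setting, on a scalar relaxation equation; the sketch below is not the Gurson-Tvergaard return mapping, only a demonstration of the role of the algorithmic parameter (written a in the abstract, alpha below), with a = 0.5 recovering the true mid-point rule and a = 1 a backward-Euler-like scheme.

```python
import math

def generalized_midpoint_step(f, y_n, h, alpha, iters=50, tol=1e-12):
    """One step of the generalized mid-point rule
       y_{n+1} = y_n + h * f((1 - alpha) * y_n + alpha * y_{n+1}),
    solved here by simple fixed-point iteration."""
    y_next = y_n
    for _ in range(iters):
        y_new = y_n + h * f((1.0 - alpha) * y_n + alpha * y_next)
        if abs(y_new - y_next) < tol:
            break
        y_next = y_new
    return y_next

# Example: linear relaxation y' = -5 y integrated to t = 1 with the true mid-point rule.
y, h = 1.0, 0.01
for _ in range(100):
    y = generalized_midpoint_step(lambda v: -5.0 * v, y, h, alpha=0.5)
print(y, math.exp(-5.0))   # numerical vs. exact solution
```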

Relevance:

100.00%

Publisher:

Abstract:

Since its introduction, fuzzy set theory has become a useful tool in the mathematical modelling of problems in Operations Research and many other fields, and the number of applications is growing continuously. In this thesis we investigate a special type of fuzzy set, namely fuzzy numbers. Fuzzy numbers (considered in the thesis as possibility distributions) have been widely used in quantitative analysis in recent decades. In this work, two measures of interactivity are defined for fuzzy numbers: the possibilistic correlation and the possibilistic correlation ratio. We focus on both the theoretical properties and the practical applications of these new indices. The approach is based on the level sets of the fuzzy numbers and on the concept of the joint distribution of marginal possibility distributions. The measures possess properties similar to the corresponding probabilistic correlation and correlation ratio. The connections to real-life decision-making problems are emphasized, with a focus on financial applications. We extend the definitions of possibilistic mean value, variance, covariance, and correlation to quasi-fuzzy numbers and prove necessary and sufficient conditions for the finiteness of the possibilistic mean value and variance. The connection between the concepts of probabilistic and possibilistic correlation is investigated using an exponential distribution. The use of fuzzy numbers in practical applications is demonstrated by the Fuzzy Pay-Off method. This model for real option valuation is based on findings from earlier real option valuation models. We illustrate the use of a number of different types of fuzzy numbers and mean value concepts with the method and provide a real-life application.
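
As an illustration of the level-set approach mentioned above, here is a small numerical sketch of the possibilistic mean value computed from gamma-level sets of a triangular fuzzy number; the correlation and correlation-ratio measures defined in the thesis are not reproduced.

```python
import numpy as np

def possibilistic_mean(level_sets):
    """Possibilistic mean E(A) = integral_0^1 gamma * (a1(gamma) + a2(gamma)) d gamma,
    approximated from a list of (gamma, a1, a2) level-set samples."""
    gammas = np.array([g for g, _, _ in level_sets])
    vals = np.array([g * (a1 + a2) for g, a1, a2 in level_sets])
    return np.trapz(vals, gammas)

def triangular_level_sets(a, b, c, n=101):
    """gamma-cuts [a1(gamma), a2(gamma)] of a triangular fuzzy number (a, b, c)."""
    gammas = np.linspace(0.0, 1.0, n)
    return [(g, a + g * (b - a), c - g * (c - b)) for g in gammas]

# Example: triangular fuzzy number with support [0, 2] and peak at 1.
print(possibilistic_mean(triangular_level_sets(0.0, 1.0, 2.0)))   # ~1.0
```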

Relevance:

100.00%

Publisher:

Abstract:

A growing concern for organisations is how they should deal with increasing amounts of collected data. With fierce competition and smaller margins, organisations that are able to fully realize the potential in the data they collect can gain an advantage over their competitors. It is almost impossible to avoid imprecision when processing large amounts of data. Still, many of the available information systems are not capable of handling imprecise data, even though doing so can offer various advantages. Expert knowledge stored as linguistic expressions is a good example of imprecise but valuable data, i.e. data that is hard to pin down to a definitive value. There is an obvious concern among organisations about how this problem should be handled; finding new methods for processing and storing imprecise data is therefore a key issue. Additionally, it is equally important to show that tacit knowledge and imprecise data can be used with success, which encourages organisations to analyse their imprecise data. The objective of the research was therefore to explore how fuzzy ontologies could facilitate the exploitation and mobilisation of tacit knowledge and imprecise data in organisational and operational decision-making processes. The thesis introduces both practical and theoretical advances on how fuzzy logic, ontologies (fuzzy ontologies), and OWA operators can be utilized for different decision-making problems. It is demonstrated how a fuzzy ontology can model tacit knowledge collected from wine connoisseurs. The approach can be generalised and applied to other practically important problems, such as intrusion detection. Additionally, a fuzzy ontology is applied in a novel consensus model for group decision making. By combining the fuzzy ontology with Semantic Web techniques, novel applications have been designed. These applications show how the mobilisation of knowledge can successfully utilize imprecise data as well. An important part of decision-making processes is undeniably aggregation, which in combination with a fuzzy ontology provides a promising basis for demonstrating the benefits obtainable from handling imprecise data. The new aggregation operators defined in the thesis offer new possibilities to handle imprecision and expert opinions. This is demonstrated through both theoretical examples and practical implementations. The thesis shows the benefits of utilizing all the available data one possesses, including imprecise data. By combining the concept of fuzzy ontology with the Semantic Web movement, it aspires to show the corporate world and industry the benefits of embracing fuzzy ontologies and imprecision.
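
For readers unfamiliar with OWA aggregation, the following is a minimal sketch of Yager's standard OWA operator; the new aggregation operators defined in the thesis are not reproduced here, and the weights and scores are illustrative.

```python
def owa(weights, values):
    """Ordered Weighted Averaging: weights are applied to the values
    after sorting them in descending order (Yager's OWA operator)."""
    if abs(sum(weights) - 1.0) > 1e-9 or len(weights) != len(values):
        raise ValueError("weights must sum to 1 and match the number of values")
    return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))

# Example: three expert scores aggregated with weights favouring the highest score.
print(owa([0.5, 0.3, 0.2], [0.4, 0.9, 0.6]))   # 0.5*0.9 + 0.3*0.6 + 0.2*0.4 = 0.71
```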

Relevance:

100.00%

Publisher:

Abstract:

Real option valuation, in particular the fuzzy pay-off method, has proven to be useful in defining risk and visualizing imprecision of investments in various industry applications. This study examines whether the evaluation of risk and profitability for public real estate investments can be improved by using real option methodology. Firstly, the context of real option valuation in the real estate industry is examined. Further, an empirical case study is performed on 30 real estate investments of a Finnish government enterprise in order to determine whether the presently used investment analysis system can be complemented by the pay-off method. Despite challenges in the application of the pay-off method to the case company’s large investment base, real option valuation is found to create additional value and facilitate more robust risk analysis in public real estate applications.
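
The core idea of the fuzzy pay-off method can be sketched numerically for a triangular fuzzy NPV: the real option value is the share of the pay-off distribution lying on the positive side, multiplied by a mean of that positive side (approximated below by a membership-weighted average rather than the exact possibilistic mean). The figures are illustrative, not from the case study.

```python
import numpy as np

def fuzzy_payoff_value(a, b, c, n=10001):
    """Sketch of the fuzzy pay-off idea for a triangular fuzzy NPV (a, b, c):
    value = (area of the positive part / total area) * mean of the positive part.
    The mean of the positive part is approximated here by a membership-weighted
    average, a simplification of the possibilistic mean used in the literature."""
    x = np.linspace(a, c, n)
    mu = np.where(x <= b, (x - a) / (b - a), (c - x) / (c - b))   # membership function
    total_area = np.trapz(mu, x)
    pos = x > 0.0
    if not pos.any():
        return 0.0
    positive_area = np.trapz(mu[pos], x[pos])
    mean_positive = np.trapz(mu[pos] * x[pos], x[pos]) / positive_area
    return (positive_area / total_area) * mean_positive

# Example: NPV scenario triangle from -20 to +80 (in money units), peaking at 25.
print(fuzzy_payoff_value(-20.0, 25.0, 80.0))
```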

Relevance:

100.00%

Publisher:

Abstract:

This master's thesis introduces fuzzy tolerance and equivalence relations and their application in cluster analysis. The work presents the construction of fuzzy equivalence relations using increasing generators and investigates the role of increasing generators in the creation of intersection, union, and complement operators. The objective is to develop different varieties of fuzzy tolerance and equivalence relations using different varieties of increasing generators. Finally, we perform a comparative study of these developed varieties of fuzzy tolerance and equivalence relations in their application to a clustering method.
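
One standard way to obtain a fuzzy equivalence relation from a fuzzy tolerance relation, the max-min transitive closure, can be sketched as follows; the increasing-generator-based constructions studied in the thesis are not reproduced here, and the relation matrix is illustrative.

```python
import numpy as np

def maxmin_compose(R, S):
    """Max-min composition: (R o S)(i, j) = max_k min(R[i, k], S[k, j])."""
    return np.max(np.minimum(R[:, :, None], S[None, :, :]), axis=1)

def transitive_closure(R, max_iter=100):
    """Turn a reflexive, symmetric fuzzy tolerance relation into a fuzzy
    equivalence relation by iterating R <- max(R, R o R) until it stabilises."""
    T = R.copy()
    for _ in range(max_iter):
        T_next = np.maximum(T, maxmin_compose(T, T))
        if np.allclose(T_next, T):
            break
        T = T_next
    return T

# Example: a small tolerance relation on three objects.
R = np.array([[1.0, 0.8, 0.0],
              [0.8, 1.0, 0.6],
              [0.0, 0.6, 1.0]])
print(transitive_closure(R))    # the (1,3) entry rises to min(0.8, 0.6) = 0.6
```

Thresholding the closure at different levels then yields the nested hard partitions commonly used in relation-based clustering.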

Relevance:

100.00%

Publisher:

Abstract:

The thesis begins with classical cooperation and transfers it to the digital world. This work gives a detailed overview of the young research fields of smart city, shareconomy, and crowdsourcing and links them with entrepreneurship. The core research aim is to find connections between the research fields of smart city, shareconomy, and crowdsourcing and entrepreneurial activities, as well as the specific fields of application, success factors, and conditions for entrepreneurs. The thesis consists of seven peer-reviewed publications. Based on primary and secondary data, the existence of entrepreneurial opportunities in the fields of smart city, shareconomy, and crowdsourcing could be confirmed. The first part of the thesis (publications 1-3) consists of literature reviews that secure the foundation for further research. This part comprises newly created definitions and a sharpening of the research fields for the near future. In the second part of the thesis (publications 4-7), empirical field work (in-depth interviews with entrepreneurs) and quantitative analyses (fuzzy-set qualitative comparative analysis and binary logistic regression analysis) contribute to the field of research with additional new insights. In summary, the insights are multi-layered: theoretical (e.g. new definitions, sharpening of the research field), methodological (e.g. first application of fuzzy-set qualitative comparative analysis in the field of crowdfunding), and qualitative (first application of in-depth interviews with entrepreneurs in the fields of smart city and shareconomy). The global research question could be answered: the link between entrepreneurship and smart city, shareconomy, and crowdfunding could be confirmed, concrete fields of application could be identified, and further developments could be touched upon. This work strongly contributes to these young fields of research through much-needed basic work, new qualitative approaches, innovative methods, and new insights, and offers opportunities for discussion, criticism, and support for further research.

Relevance:

100.00%

Publisher:

Abstract:

With the development of electronic devices, more and more mobile clients are connected to the Internet, and they generate massive amounts of data every day. We live in an age of "Big Data", in which data on the scale of hundreds of millions are generated every day. By analyzing these data and making predictions, better development plans can be made. Traditional computation frameworks cannot meet this demand, which is why Hadoop was put forward. The paper first introduces the background and development status of Hadoop, compares MapReduce in Hadoop 1.0 with YARN in Hadoop 2.0, and analyzes their advantages and disadvantages. Because the resource management module plays the core role in YARN, the paper then examines the resource allocation module, including resource management, the resource allocation algorithm, the resource preemption model, and the whole resource scheduling process from requesting resources to completing the allocation. It also introduces and compares the FIFO Scheduler, the Capacity Scheduler, and the Fair Scheduler. The main work of this paper is researching and analyzing the Dominant Resource Fairness (DRF) algorithm of YARN and putting forward a maximum resource utilization algorithm based on it. The paper also provides a suggestion for improving unreasonable aspects of the resource preemption model. Emphasizing "fairness" during resource allocation is the core concept of the Dominant Resource Fairness algorithm of YARN. Because a cluster has multiple users and multiple resources, a user's resource request also involves multiple resources. The DRF algorithm divides a user's resources into the dominant resource and normal resources: for a user, the dominant resource is the one whose share is highest among all requested resources, and the others are normal resources. The DRF algorithm requires the dominant resource share of each user to be equal. But in cases where different users' dominant resource demands differ greatly, emphasizing "fairness" is not suitable and cannot improve the resource utilization of the cluster. By analyzing these cases, this thesis puts forward a new allocation algorithm based on DRF. The new algorithm takes "fairness" into consideration, but it is not the main principle; maximizing resource utilization is the main principle and goal of the new algorithm. Comparing the results of DRF and the new DRF-based algorithm shows that the new algorithm achieves higher resource utilization than DRF. The last part of the thesis installs the YARN environment and uses the Scheduler Load Simulator (SLS) to simulate the cluster environment.
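
For reference, the original DRF allocation loop (progressive filling of the user with the smallest dominant share) can be sketched as follows; the thesis's maximum-utilization variant and the YARN scheduler internals are not reproduced. The capacities and demands below follow the well-known two-user example.

```python
def drf_allocate(capacity, demands, max_tasks=1000):
    """Progressive-filling sketch of Dominant Resource Fairness:
    repeatedly give one task to the user with the lowest dominant share."""
    n_res = len(capacity)
    used = [0.0] * n_res
    allocated = {u: [0.0] * n_res for u in demands}
    tasks = {u: 0 for u in demands}
    for _ in range(max_tasks):
        # Dominant share of a user = max over resources of (allocated / capacity).
        shares = {u: max(allocated[u][r] / capacity[r] for r in range(n_res))
                  for u in demands}
        # Users whose next task still fits in the cluster, poorest first.
        feasible = [u for u in demands
                    if all(used[r] + demands[u][r] <= capacity[r] for r in range(n_res))]
        if not feasible:
            break
        u = min(feasible, key=lambda v: shares[v])
        for r in range(n_res):
            used[r] += demands[u][r]
            allocated[u][r] += demands[u][r]
        tasks[u] += 1
    return tasks

# Example: 9 CPUs and 18 GB of memory; user A tasks need (1 CPU, 4 GB),
# user B tasks need (3 CPU, 1 GB).
capacity = [9.0, 18.0]
demands = {"A": [1.0, 4.0], "B": [3.0, 1.0]}
print(drf_allocate(capacity, demands))   # {'A': 3, 'B': 2}, equalising dominant shares
```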

Relevance:

100.00%

Publisher:

Abstract:

Localization, the ability of a mobile robot to estimate its position within its environment, is a key capability for the autonomous operation of any mobile robot. This thesis presents a system for indoor coarse and global localization of a mobile robot based on visual information. The system is based on image matching and uses SIFT features as natural landmarks. Features extracted from training images are stored in a database for later use in localization. During localization, an image of the scene is captured using the robot's on-board camera, features are extracted from the image, and the best match is searched for in the database. Feature matching is done using the k-d tree algorithm. Experimental results showed that localization accuracy increases with the number of training features stored in the database, while, on the other hand, an increasing number of features tends to increase the computation time. For some parts of the environment the error rate was relatively high due to a strong correlation of features taken from those places across the environment.
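
Descriptor matching with a k-d tree and a nearest-neighbour ratio test can be sketched as follows; SIFT extraction itself is not shown, SciPy is assumed to be available, and the descriptors below are random stand-ins for real SIFT vectors.

```python
import numpy as np
from scipy.spatial import cKDTree

def match_descriptors(db_descriptors, query_descriptors, ratio=0.8):
    """Match query descriptors against a database using a k-d tree and a
    ratio test (accept a match only if it is clearly better than the
    second-best candidate)."""
    tree = cKDTree(db_descriptors)
    dists, idx = tree.query(query_descriptors, k=2)    # two nearest neighbours
    good = dists[:, 0] < ratio * dists[:, 1]
    return [(q, idx[q, 0]) for q in np.flatnonzero(good)]

# Illustrative 128-dimensional SIFT-like descriptors.
rng = np.random.default_rng(0)
database = rng.random((500, 128))
query = database[:10] + 0.01 * rng.random((10, 128))   # slightly perturbed copies
print(match_descriptors(database, query))
```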