64 results for vector quantization based Gaussian modeling
Abstract:
This work focuses on the modeling of catalytic gas-liquid reactions carried out in continuous packed beds. Catalyzed gas-liquid reactions are among the most typical reactions in the chemical industry; packed bed reactors are therefore treated here as one of the most popular alternatives when continuous operation is desired. Thanks to a large amount of catalyst per unit volume, they have a compact structure, no catalyst separation is needed, and with a professional design the most favorable flow pattern can be maintained in the reactor. Packed bed reactors are attractive because of lower investment and operating costs. Although packed beds are used extensively in industry, they are very challenging to model. This is because three phases coexist and the geometry of the system is complicated. The presence of several reactions makes the mathematical modeling even more demanding. Many simplifications therefore become necessary. The models typically involve several parameters that must be adjusted on the basis of experimental data. In this work, five different reaction systems were studied. The systems had been studied experimentally in our laboratory with the goal of achieving high productivity and selectivity through an optimal choice of catalysts and operating conditions. Hydrogenation of citral, decarboxylation of fatty acids, direct synthesis of hydrogen peroxide, and hydrogenation of the sugar monomers glucose and arabinose were used as example systems. Although these systems had much in common, they also had unique characteristics and therefore required a tailored mathematical treatment. Citral hydrogenation was a system with a dominating main reaction that produces citronellal and citronellol as main products. The products are used as a lemon-scented component in perfumes, soaps and detergents, and as platform chemicals. Decarboxylation of stearic acid was a special case, for which a reaction route for the production of long-chain hydrocarbons from fatty acids was sought. An exceptionally high product selectivity was characteristic of this system. Process scale-up was also modeled for the decarboxylation reaction. The goal of the direct synthesis of hydrogen peroxide was to develop a simplified process for producing hydrogen peroxide by letting dissolved hydrogen and oxygen react directly in a suitable solvent on an active solid catalyst. In this system, three side reactions occur, which give water as an undesired product. All three of these reaction systems were modeled mathematically using dynamic mass balances. The goal of the hydrogenation of glucose and arabinose is to produce high value-added products, namely sugar alcohols, by catalytic hydrogenation. For these two systems, the mass and energy balances were solved simultaneously to evaluate effects inside the porous catalyst particles. The momentum balances that determine the flow conditions inside a chemical reactor were replaced in all modeling studies by semi-empirical correlations for the liquid holdup and pressure drop, and by an axial dispersion model to describe mixing effects. By adjusting the model parameters, the behavior of the reactor could be described well. All experiments were carried out at laboratory scale. A large number of coupled effects coexisted: reaction kinetics including adsorption, catalyst deactivation, mass and heat transfer, and flow-related effects. Some of these effects could be studied separately (e.g., dispersion effects and side reactions). The influence of certain phenomena could sometimes be minimized through careful planning of the experiments. In this way, the simplifications in the models could be better justified. All of the systems studied were industrially relevant. The development of new, simplified production technologies for existing chemical compounds or new compounds is a gigantic undertaking. The studies presented here focused on one of the first stages of this technical-scientific journey.
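The abstract does not reproduce the model equations, but the axial dispersion model it mentions is commonly written as a dynamic convection-dispersion-reaction balance. The following sketch, with purely illustrative parameter values and simple first-order kinetics rather than the kinetics actually fitted in the thesis, shows how such a one-dimensional balance can be solved by the method of lines.

# Minimal sketch (not from the thesis): a 1-D axial dispersion model for a
# packed bed, dc/dt = D d2c/dz2 - u dc/dz - k*c, solved by the method of lines.
# D, u, k, the reactor length and the first-order kinetics are illustrative values.
import numpy as np
from scipy.integrate import solve_ivp

L, n = 1.0, 100             # reactor length [m], number of grid cells
D, u, k = 1e-3, 0.01, 0.05  # dispersion [m2/s], velocity [m/s], rate constant [1/s]
c_in = 1.0                  # feed concentration [mol/m3]
z = np.linspace(0.0, L, n)
dz = z[1] - z[0]

def rhs(t, c):
    dcdt = np.empty_like(c)
    # interior points: central differences for dispersion, upwind for convection
    dcdt[1:-1] = (D * (c[2:] - 2*c[1:-1] + c[:-2]) / dz**2
                  - u * (c[1:-1] - c[:-2]) / dz
                  - k * c[1:-1])
    # inlet cell fed with c_in; outlet cell with a zero-gradient-type closure
    dcdt[0] = D * (c[1] - c[0]) / dz**2 - u * (c[0] - c_in) / dz - k * c[0]
    dcdt[-1] = D * (c[-2] - c[-1]) / dz**2 - u * (c[-1] - c[-2]) / dz - k * c[-1]
    return dcdt

sol = solve_ivp(rhs, (0.0, 200.0), np.zeros(n), method="BDF")
print("outlet concentration near steady state:", sol.y[-1, -1])

Coupling such a transport skeleton to the actual rate expressions and to the holdup and pressure drop correlations of each reaction system is where the parameter adjustment described above would enter.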
Abstract:
In this Master’s thesis, agent-based modeling has been used to analyze maintenance strategy related phenomena. The main research question was: what does the agent-based model built for this study tell us about how different maintenance strategy decisions affect the profitability of equipment owners and maintenance service providers? Thus, the main outcome of this study is an analysis of how profitability can be increased in an industrial maintenance context. To answer that question, a literature review of maintenance strategy, agent-based modeling, and maintenance modeling and optimization was first conducted. This review provided the basis for building the agent-based model. Construction of the model followed a standard simulation modeling procedure. The simulation results from the agent-based model were then used to answer the research question. Specifically, the results of the modeling and this study are: (1) optimizing the point at which a machine is maintained increases profitability for the owner of the machine and, under certain conditions, also for the maintainer; (2) time-based pricing of maintenance services leads to a zero-sum game between the parties; (3) value-based pricing of maintenance services leads to a win-win game between the parties, if the owners of the machines share a substantial amount of their value with the maintainers; and (4) error in machine condition measurement is a critical parameter in optimizing the maintenance strategy, and there is real systemic value in having more accurate machine condition measurement systems.
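As a rough illustration of the kind of mechanism studied (not the actual model built in the thesis), the toy agent below degrades stochastically, is serviced when its noisy condition measurement crosses a threshold, and pays a fixed time-based fee to the maintainer; all numbers are invented. Varying the threshold and the measurement error in such a sketch mirrors results (1) and (4) above.

# Toy sketch (not the thesis model): one machine agent that degrades randomly,
# is serviced when its *measured* condition drops below a threshold, and
# accumulates owner profit and maintainer revenue. All numbers are invented.
import random

def simulate(threshold, meas_error, horizon=10_000, seed=1):
    rng = random.Random(seed)
    condition, owner_profit, maint_revenue = 1.0, 0.0, 0.0
    for _ in range(horizon):
        condition = max(0.0, condition - rng.uniform(0.0, 0.002))  # degradation
        owner_profit += 10.0 * condition                           # production value
        measured = condition + rng.gauss(0.0, meas_error)          # noisy sensor
        if measured < threshold or condition == 0.0:
            owner_profit -= 50.0       # time-based maintenance fee paid by owner
            maint_revenue += 50.0      # ...and received by the maintainer
            condition = 1.0            # machine restored
    return owner_profit, maint_revenue

for err in (0.0, 0.1):
    print(err, simulate(threshold=0.3, meas_error=err))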
Abstract:
This thesis investigated building information modeling (BIM) from a material supplier’s point of view. The objective was to gain an understanding of how a building material supplier could benefit from the growing use of BIM in the AEC (architectural, engineering and construction) industry. An increasing number of inquiries related to BIM from customers and other interest groups had awoken the target company’s interest in BIM. This thesis acts as a pre-study for the target company on the potential of BIM. First, BIM and its meaning from a material supplier’s point of view were defined based on a literature review. To reveal the potential benefits of BIM for a material supplier, a questionnaire survey and a total of 11 interviews were conducted. Based on the literature review and the analyzed results, it became clear that BIM also offers benefits for material suppliers. Product libraries and material databases for BIM tools can act as an important marketing channel for material suppliers. Material suppliers could also utilize the information from BIM models to schedule their deliveries more precisely and potentially even to schedule their own production. All this requires deeper cooperation between material suppliers, contractors and other stakeholders in the AEC industry. Based on the results, the first steps for the target company to utilize the growing use of BIM were also defined.
Abstract:
Fireside deposits can be found in many types of utility and industrial furnaces. The deposits in furnaces are problematic because they can reduce heat transfer, block gas paths and cause corrosion. To tackle these problems, it is vital to estimate the influence of deposits on heat transfer, to minimize deposit formation and to optimize deposit removal. It is beneficial to have a good understanding of the mechanisms of fireside deposit formation. Numerical modeling is a powerful tool for investigating the heat transfer in furnaces, and it can provide valuable information for understanding the mechanisms of deposit formation. In addition, a sub-model of deposit formation is generally an essential part of a comprehensive furnace model. This work investigates two specific processes of fireside deposit formation in two industrial furnaces. The first process is the slagging wall found in furnaces with molten deposits running on the wall. A slagging wall model is developed to take into account the two-layer structure of the deposits. With the slagging wall model, the thickness and the surface temperature of the molten deposit layer can be calculated. The slagging wall model is used to predict the surface temperature and the heat transfer to a specific section of a super-heater tube panel, with the boundary condition obtained from a Kraft recovery furnace model. The slagging wall model is also incorporated into the computational fluid dynamics (CFD)-based Kraft recovery furnace model and applied to the lower furnace walls. The implementation of the slagging wall model includes a grid simplification scheme. The wall surface temperature calculated with the slagging wall model is used as the heat transfer boundary condition. A simulation of a Kraft recovery furnace is performed and compared with two other cases and with measurements. In the two other cases, a uniform wall surface temperature and a wall surface temperature calculated with a char bed burning model are used as the heat transfer boundary conditions. In this particular furnace, the wall surface temperatures from the three cases are similar and fall within the range of the measurements. Nevertheless, the wall surface temperature profiles obtained with the slagging wall model and the char bed burning model are different because the deposits are represented differently in the two models. In addition, the slagging wall model is shown to be computationally efficient. The second process is deposit formation due to thermophoresis of fine particles to the heat transfer surface. This process is considered in the simulation of a heat recovery boiler of the flash smelting process. In order to determine whether the small dust particles stay on the wall, a criterion based on an analysis of the forces acting on the particle is applied. A time-dependent simulation of deposit formation in the heat recovery boiler is carried out, and the influence of deposits on heat transfer is investigated. The locations prone to deposit formation are also identified in the heat recovery boiler. Modeling of the two processes in the two industrial furnaces enhances the overall understanding of the processes. The sub-models developed in this work can be applied to other similar deposit formation processes with carefully defined boundary conditions.
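As a hedged illustration of the two-layer idea behind a slagging wall model (not the model developed in this work, and with invented property values), steady one-dimensional conduction already shows how a solid deposit layer and a molten running layer set the deposit surface temperature for a given heat flux and tube-wall temperature.

# Illustrative sketch only (not the thesis model): steady 1-D conduction through
# a two-layer fireside deposit. The solid layer is assumed to grow until its hot
# face reaches the melting temperature; a molten layer of given thickness then
# runs on top of it. All property values are invented for the example.
def two_layer_deposit(q, T_wall, T_melt, k_solid, k_molten, d_molten):
    """Return solid-layer thickness [m] and deposit surface temperature [K]
    for a given heat flux q [W/m2] and tube-wall temperature T_wall [K]."""
    d_solid = k_solid * (T_melt - T_wall) / q      # hot face of solid layer at T_melt
    T_surface = T_melt + q * d_molten / k_molten   # add conduction across molten layer
    return d_solid, T_surface

d, Ts = two_layer_deposit(q=50e3, T_wall=700.0, T_melt=1050.0,
                          k_solid=1.0, k_molten=2.0, d_molten=0.002)
print(f"solid layer {d*1e3:.1f} mm, surface temperature {Ts:.0f} K")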
Abstract:
Software plays an important role in our society and economy. Software development is an intricate process, and it comprises many different tasks: gathering requirements, designing new solutions that fulfill these requirements, and implementing these designs using a programming language into a working system. As a consequence, the development of high-quality software is a core problem in software engineering. This thesis focuses on the validation of software designs. The analysis of designs is of great importance, since errors originating from designs may appear in the final system. It is considered economical to rectify problems as early in the software development process as possible. Practitioners often create and visualize designs using modeling languages, one of the more popular being the Unified Modeling Language (UML). The analysis of the designs can be done manually, but in the case of large systems, the need arises for mechanisms that automatically analyze these designs. In this thesis, we propose an automatic approach to analyze UML-based designs using logic reasoners. This approach first translates the UML-based designs into a language understandable by reasoners, in the form of logic facts, and second shows how to use logic reasoners to infer the logical consequences of these facts. We have implemented the proposed translations in the form of a tool that can be used with any standard-compliant UML modeling tool. Moreover, we validate the proposed approach by automatically analyzing hundreds of UML-based designs, consisting of thousands of model elements, available in an online model repository. The proposed approach is limited in scope, but it is fully automatic and does not require any expertise in logic languages from the user. We exemplify the proposed approach with two applications: the validation of domain specific languages and the validation of web service interfaces.
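A toy illustration of the translation idea, not the actual mapping or reasoner used in the thesis: a fragment of a class diagram is written down as logic-style facts, and a simple transitive-closure inference exposes an inconsistency (a cycle in the generalization hierarchy).

# Toy illustration (not the thesis translation): a fragment of a UML class
# diagram encoded as logic-style facts, plus a tiny inference step that detects
# an inconsistency - a cycle in the generalization hierarchy.
classes = {"Vehicle", "Car", "SportsCar"}
# fact generalizes(child, parent); the last fact deliberately creates a cycle
generalizes = {("Car", "Vehicle"), ("SportsCar", "Car"), ("Vehicle", "SportsCar")}

def transitive_closure(pairs):
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

closure = transitive_closure(generalizes)
cycles = [c for c in classes if (c, c) in closure]
print("classes involved in a generalization cycle:", cycles)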
Abstract:
In this study, we discuss the electronic, structural, and optical properties of titanium dioxide nanoparticles, as well as the properties of Ni(II) diimine dithiolato complexes as dyes in dye-sensitized TiO2-based solar cells. The abovementioned properties have been modeled using computational codes based on density functional theory. The results show slight evidence of structure-dependent band gap broadening, and clear blue-shifts in the absorption spectra and refractive index functions of ultra-small TiO2 particles. It is also shown that these properties are strongly dependent on the shape of the nanoparticles. Regarding the Ni(II) diimine dithiolato complexes as dyes in dye-sensitized TiO2-based solar cells, it is shown, based on the experimental electrochemical investigation and DFT studies, that all studied diimine derivatives could serve as potential candidates for light harvesting, but the efficiencies of the dyes studied are not very promising.
Abstract:
The power rating of wind turbines is constantly increasing; however, keeping the voltage rating at the low-voltage level results in high kilo-ampere currents. An alternative for increasing the power levels without raising the voltage level is provided by multiphase machines. Multiphase machines are used for instance in ship propulsion systems, aerospace applications, electric vehicles, and in other high-power applications including wind energy conversion systems. A machine model in an appropriate reference frame is required in order to design an efficient control for the electric drive. Modeling of multiphase machines poses a challenge because of the mutual couplings between the phases. Mutual couplings degrade the drive performance unless they are properly considered. In certain multiphase machines there is also a problem of high current harmonics, which are easily generated because of the small current path impedance of the harmonic components. However, multiphase machines provide special characteristics compared with the three-phase counterparts: Multiphase machines have a better fault tolerance, and are thus more robust. In addition, the controlled power can be divided among more inverter legs by increasing the number of phases. Moreover, the torque pulsation can be decreased and the harmonic frequency of the torque ripple increased by an appropriate multiphase configuration. By increasing the number of phases it is also possible to obtain more torque per RMS ampere for the same volume, and thus, increase the power density. In this doctoral thesis, a decoupled d–q model of double-star permanent-magnet (PM) synchronous machines is derived based on the inductance matrix diagonalization. The double-star machine is a special type of multiphase machines. Its armature consists of two three-phase winding sets, which are commonly displaced by 30 electrical degrees. In this study, the displacement angle between the sets is considered a parameter. The diagonalization of the inductance matrix results in a simplified model structure, in which the mutual couplings between the reference frames are eliminated. Moreover, the current harmonics are mapped into a reference frame, in which they can be easily controlled. The work also presents methods to determine the machine inductances by a finite-element analysis and by voltage-source inverters on-site. The derived model is validated by experimental results obtained with an example double-star interior PM (IPM) synchronous machine having the sets displaced by 30 electrical degrees. The derived transformation, and consequently, the decoupled d–q machine model, are shown to model the behavior of an actual machine with an acceptable accuracy. Thus, the proposed model is suitable to be used for the model-based control design of electric drives consisting of double-star IPM synchronous machines.
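The decoupling argument can be illustrated numerically, although the thesis derives the transformation analytically. In the hedged sketch below, a symmetric six-phase inductance matrix with invented self and mutual inductances is diagonalized; the orthonormal eigenvector matrix plays the role of the decoupling transformation, after which the mutual couplings vanish.

# Generic illustration (not the analytical transformation derived in the thesis):
# diagonalizing a symmetric inductance matrix removes the mutual couplings, so the
# transformed quantities are magnetically decoupled. The matrix values are invented.
import numpy as np

L_self, M_intra, M_inter = 5e-3, 1e-3, 0.5e-3   # H, illustrative values
L = np.full((6, 6), M_inter)                    # couplings between the two winding sets
L[:3, :3] = np.full((3, 3), M_intra)            # couplings within set 1
L[3:, 3:] = np.full((3, 3), M_intra)            # couplings within set 2
np.fill_diagonal(L, L_self)                     # self-inductances on the diagonal

eigvals, T = np.linalg.eigh(L)                  # T is an orthonormal transformation
L_decoupled = T.T @ L @ T                       # transformed inductance matrix
print("off-diagonal terms eliminated:", np.allclose(L_decoupled, np.diag(eigvals)))
print("decoupled inductances [mH]:", np.round(eigvals * 1e3, 3))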
Abstract:
The capabilities, and thus the design complexity, of VLSI-based embedded systems have increased tremendously in recent years, riding the wave of Moore’s law. Time-to-market requirements are also shrinking, imposing challenges on designers, who in turn seek to adopt new design methods to increase their productivity. As an answer to these new pressures, modern-day systems have moved towards on-chip multiprocessing technologies. New on-chip multiprocessing architectures have emerged in order to utilize the tremendous advances in fabrication technology. Platform-based design is a possible solution for addressing these challenges. The principle behind the approach is to separate the functionality of an application from the organization and communication architecture of the hardware platform at several levels of abstraction. The existing design methodologies pertaining to the platform-based design approach do not provide full automation at every level of the design process, and sometimes the co-design of platform-based systems leads to sub-optimal systems. In addition, the design productivity gap in multiprocessor systems remains a key challenge with existing design methodologies. This thesis addresses the aforementioned challenges and discusses the creation of a development framework for platform-based system design, in the context of the SegBus platform - a distributed communication architecture. The research aims to provide automated procedures for platform design and application mapping. Structural verification support is also featured, thus ensuring correct-by-design platforms. The solution is based on a model-based process. Both the platform and the application are modeled using the Unified Modeling Language. This thesis develops a Domain Specific Language to support platform modeling based on a corresponding UML profile. Object Constraint Language constraints are used to support structurally correct platform construction. An emulator is introduced to allow performance estimation of the solution that is as accurate as possible at high abstraction levels. VHDL code is automatically generated, in the form of “snippets” to be employed in the arbiter modules of the platform, as required by the application. The resulting framework is applied in building an actual design solution for an MP3 stereo audio decoder application.
Abstract:
Recently, due to increasing total construction and transportation costs and the difficulties associated with handling massive structural components or assemblies, there has been growing financial pressure to reduce structural weight. Furthermore, advances in material technology, coupled with continuing advances in design tools and techniques, have encouraged engineers to vary and combine materials, offering new opportunities to reduce the weight of mechanical structures. These new lower-mass systems, however, are more susceptible to inherent imbalances, a weakness that can result in higher shock and harmonic resonances, which lead to poor structural dynamic performance. The objective of this thesis is the modeling of layered sheet steel elements to accurately predict their dynamic performance. During the development of the layered sheet steel model, a numerical modeling approach, Finite Element Analysis, and Experimental Modal Analysis are applied in building a modal model of the layered sheet steel elements. Furthermore, to gain a better understanding of the dynamic behavior of layered sheet steel, several binding methods have been studied to understand and demonstrate how the binding method affects the dynamic behavior of layered sheet steel elements compared to a single homogeneous steel plate. Based on the developed layered sheet steel model, the dynamic behavior of a lightweight wheel structure, to be used as the structure for the stator of an outer-rotor Direct-Drive Permanent Magnet Synchronous Generator designed for high-power wind turbines, is studied.
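A minimal sketch of the modal-model idea (invented numbers, and a lumped three-degree-of-freedom chain rather than a layered sheet steel finite element model): the natural frequencies follow from the generalized eigenvalue problem K·φ = ω²·M·φ.

# Minimal sketch (invented values, not the thesis model): a modal model comes from
# the generalized eigenvalue problem K*phi = w^2 * M*phi. A 3-DOF lumped
# spring-mass chain stands in here for the layered sheet steel element.
import numpy as np
from scipy.linalg import eigh

m = 2.0          # kg per lumped mass
k = 1.0e6        # N/m per spring
M = np.diag([m, m, m])
K = k * np.array([[ 2, -1,  0],
                  [-1,  2, -1],
                  [ 0, -1,  1]], dtype=float)

omega2, modes = eigh(K, M)                 # eigenvalues are squared angular frequencies
freqs_hz = np.sqrt(omega2) / (2 * np.pi)
print("natural frequencies [Hz]:", np.round(freqs_hz, 1))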
Abstract:
In recent decades, industrial activity growth and increasing water usage worldwide have led to the release of various pollutants, such as toxic heavy metals and nutrients, into the aquatic environment. Modified nanocellulose and microcellulose-based adsorption materials have the potential to remove these contaminants from aqueous solutions. The present research consisted of the preparation of five different nano/microcellulose-based adsorbents, their characterization, the study of adsorption kinetics and isotherms, the determination of adsorption mechanisms, and an evaluation of the adsorbents’ regeneration properties. The same well-known reactions and modification methods that were used for modifying conventional cellulose also worked for microfibrillated cellulose (MFC). The use of succinic anhydride modified mercerized nanocellulose, and of aminosilane and hydroxyapatite modified nanostructured MFC, for the removal of heavy metals from aqueous solutions exhibited promising results. Aminosilane, epoxy and hydroxyapatite modified MFC could be used as a promising alternative for H2S removal from aqueous solutions. In addition, new knowledge was obtained about the adsorption properties of carbonated hydroxyapatite modified MFC as a multifunctional adsorbent for the removal of both cations and anions from water. MFC modified with maghemite nanoparticles (Fe3O4) was found to be a highly promising adsorbent for the removal of As(V) from aqueous solutions due to its magnetic properties, high surface area, and high adsorption capacity. The maximum removal efficiencies of each adsorbent were studied in batch mode. The results of the adsorption kinetics indicated very fast removal rates for all the studied pollutants. Modeling of the adsorption isotherms and adsorption kinetics using various theoretical models provided information about the adsorbents’ surface properties and the adsorption mechanisms. This knowledge is important, for instance, in designing water treatment units and plants. Furthermore, the correspondence between the theory behind each model and the properties of the adsorbent, as well as the adsorption mechanisms, was also discussed. On the whole, both the experimental results and theoretical considerations supported the potential applicability of the studied nano/microcellulose-based adsorbents in water treatment applications.
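As a hedged example of the isotherm modeling mentioned above (synthetic data, not the thesis measurements), the Langmuir isotherm q = q_max·K·C/(1 + K·C) can be fitted to equilibrium data to estimate the maximum adsorption capacity q_max and the affinity constant K.

# Illustration only (synthetic data, not the thesis measurements): fitting the
# Langmuir isotherm q = q_max*K*C/(1 + K*C) to equilibrium adsorption data, the
# kind of model comparison used to characterize adsorbents.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(C, q_max, K):
    return q_max * K * C / (1.0 + K * C)

C_eq = np.array([1.0, 5.0, 10.0, 25.0, 50.0, 100.0])   # mg/L, synthetic
q_eq = np.array([8.2, 28.5, 44.1, 66.0, 78.9, 87.5])    # mg/g, synthetic

popt, pcov = curve_fit(langmuir, C_eq, q_eq, p0=[100.0, 0.01])
q_max_fit, K_fit = popt
print(f"q_max = {q_max_fit:.1f} mg/g, K = {K_fit:.3f} L/mg")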
Abstract:
Fluid particle breakup and coalescence are important phenomena in a number of industrial flow systems. This study deals with a gas-liquid bubbly flow in one wastewater cleaning application. A three-dimensional geometric model of the dispersion water system was created in the ANSYS CFD meshing software. A numerical study of the system was then carried out by means of unsteady simulations performed in the ANSYS FLUENT CFD software. A single-phase water flow case was set up to calculate the entire flow field using the RNG k-epsilon turbulence model based on the Reynolds-averaged Navier-Stokes (RANS) equations. The bubbly flow case was based on a coupled computational fluid dynamics - population balance model (CFD-PBM) approach. Bubble breakup and coalescence were considered to determine the evolution of the bubble size distribution. The obtained results are regarded as steps toward optimization of the cleaning process and will be analyzed in order to make the process more efficient.
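A drastically simplified sketch of the population balance component (constant coalescence kernel, no breakup, no coupling to the flow field, invented values): the discrete balance below tracks how coalescence alone shifts a bubble size distribution.

# Drastically simplified sketch (not the CFD-PBM setup of this study): a discrete
# population balance with a constant coalescence kernel and no breakup.
import numpy as np

beta = 1e-3                        # constant coalescence kernel [1/s per pair], invented
n = np.zeros(20); n[0] = 1000.0    # n[k]: number of bubbles made of k+1 primary bubbles
dt, steps = 0.01, 2000             # explicit Euler time stepping

for _ in range(steps):
    dn = np.zeros_like(n)
    for k in range(len(n)):
        # birth: two smaller bubbles i and j with (i+1) + (j+1) = k+1 coalesce
        birth = 0.5 * sum(beta * n[i] * n[k - 1 - i] for i in range(k))
        # death: a class-k bubble coalesces with any other bubble
        death = beta * n[k] * n.sum()
        dn[k] = birth - death
    n += dt * dn                   # aggregates beyond the largest tracked class are neglected

print("mean bubble class after coalescence:",
      (n * (np.arange(len(n)) + 1)).sum() / n.sum())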
Abstract:
This thesis is concerned with state and parameter estimation in state space models. The estimation of states and parameters is an important task when mathematical modeling is applied to many different application areas, such as global positioning systems, target tracking, navigation, brain imaging, the spread of infectious diseases, biological processes, telecommunications, audio signal processing, stochastic optimal control, machine learning, and physical systems. In Bayesian settings, the estimation of states or parameters amounts to computing the posterior probability density function. Except for a very restricted number of models, it is impossible to compute this density function in closed form. Hence, we need approximation methods. A state estimation problem involves estimating the states (latent variables) that are not directly observed in the output of the system. In this thesis, we use the Kalman filter, extended Kalman filter, Gauss–Hermite filters, and particle filters to estimate the states based on available measurements. Among these filters, particle filters are numerical methods for approximating the filtering distributions of non-linear, non-Gaussian state space models via Monte Carlo. The performance of a particle filter depends heavily on the chosen importance distribution; for instance, an inappropriate choice of importance distribution can lead to the failure of convergence of the particle filter algorithm. In this thesis, we analyze the theoretical Lᵖ convergence of the particle filter with general importance distributions, where p ≥ 2 is an integer. A parameter estimation problem is concerned with inferring the model parameters from measurements. For high-dimensional complex models, the estimation of parameters can be done by Markov chain Monte Carlo (MCMC) methods. In its operation, an MCMC method requires the unnormalized posterior distribution of the parameters and a proposal distribution. In this thesis, we show how the posterior density function of the parameters of a state space model can be computed by filtering-based methods, in which the states are integrated out. This type of computation is then applied to estimate the parameters of stochastic differential equations. Furthermore, we compute the partial derivatives of the log-posterior density function and use the hybrid Monte Carlo and scaled conjugate gradient methods to infer the parameters of stochastic differential equations. The computational efficiency of MCMC methods depends heavily on the chosen proposal distribution. A commonly used proposal distribution is the Gaussian, in which case the covariance matrix must be well tuned. To tune it, adaptive MCMC methods can be used. In this thesis, we propose a new way of updating the covariance matrix using the variational Bayesian adaptive Kalman filter algorithm.
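A generic bootstrap particle filter, shown here as a hedged sketch with a textbook-style scalar model rather than the models analyzed in the thesis, illustrates the role of the importance distribution: the particles are propagated through the transition prior and reweighted by the measurement likelihood.

# Generic bootstrap particle filter sketch (importance distribution = transition
# prior) for an invented scalar non-linear state space model; the model and the
# noise levels are illustrative, not those of the thesis.
import numpy as np

rng = np.random.default_rng(0)
T, N = 50, 500                       # time steps, particles
q, r = 0.5, 1.0                      # process / measurement noise std

# simulate data from x_t = 0.9 x_{t-1} + sin(x_{t-1}) + q*w_t, y_t = x_t + r*v_t
x_true, y = np.zeros(T), np.zeros(T)
for t in range(1, T):
    x_true[t] = 0.9 * x_true[t-1] + np.sin(x_true[t-1]) + q * rng.standard_normal()
    y[t] = x_true[t] + r * rng.standard_normal()

particles = rng.standard_normal(N)
estimates = np.zeros(T)
for t in range(1, T):
    # propagate through the dynamics (the importance distribution here)
    particles = 0.9 * particles + np.sin(particles) + q * rng.standard_normal(N)
    # weight by the measurement likelihood and normalize
    w = np.exp(-0.5 * ((y[t] - particles) / r) ** 2)
    w /= w.sum()
    estimates[t] = np.sum(w * particles)
    # multinomial resampling
    particles = particles[rng.choice(N, size=N, p=w)]

print("RMSE of filtered state:", np.sqrt(np.mean((estimates[1:] - x_true[1:]) ** 2)))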
Abstract:
Acid sulfate (a.s.) soils constitute a major environmental issue. Severe ecological damage results from the considerable amounts of acidity and metals leached by these soils into the recipient watercourses. As even small hot spots may affect large areas of coastal waters, mapping represents a fundamental step in the management and mitigation of a.s. soil environmental risks (i.e. to target strategic areas). Traditional mapping in the field is time-consuming and therefore expensive. Additional, more cost-effective techniques thus have to be developed in order to narrow down and define in detail the areas of interest. The primary aim of this thesis was to assess different spatial modeling techniques for a.s. soil mapping, and the characterization of soil properties relevant to a.s. soil environmental risk management, using all available data: soil and water samples, as well as datalayers (e.g. geological and geophysical). Different spatial modeling techniques were applied at catchment or regional scale. Two artificial neural networks were assessed on the Sirppujoki River catchment (c. 440 km2) located in southwestern Finland, while fuzzy logic was assessed on several areas along the Finnish coast. Quaternary geology, aerogeophysics and slope data (derived from a digital elevation model) were utilized as evidential datalayers. The methods also required the use of point datasets (i.e. soil profiles corresponding to known a.s. or non-a.s. soil occurrences) for training and/or validation within the modeling processes. Applying these methods, various maps were generated: probability maps for a.s. soil occurrence, as well as predictive maps for different soil properties (sulfur content, organic matter content and critical sulfide depth). The two assessed artificial neural networks (ANNs) demonstrated good classification abilities for a.s. soil probability mapping at catchment scale. Slightly better results were achieved using a Radial Basis Function (RBF)-based ANN than with the Radial Basis Functional Link Net (RBFLN) method, narrowing down more accurately the most probable areas for a.s. soil occurrence and defining more properly the least probable areas. The RBF-based ANN also demonstrated promising results for the characterization of different soil properties in the most probable a.s. soil areas at catchment scale. Since a.s. soil areas constitute highly productive land for agricultural purposes, the combination of a probability map with more specific soil property predictive maps offers a valuable toolset to more precisely target strategic areas for subsequent environmental risk management. Notably, the use of laser scanning (i.e. Light Detection And Ranging, LiDAR) data enabled a more precise definition of the a.s. soil probability areas, as well as of the soil property modeling classes for sulfur content and critical sulfide depth. Given suitable training/validation points, ANNs can be trained to yield a more precise modeling of the occurrence of a.s. soils and their properties. By contrast, fuzzy logic represents a simple, fast and objective alternative for carrying out preliminary surveys, at catchment or regional scale, in areas offering a limited amount of data. This method enables delimiting and prioritizing the most probable areas for a.s. soil occurrence, which can be particularly useful in the field. Being easily transferable from area to area, fuzzy logic modeling can be carried out at regional scale. Mapping at this scale would be extremely time-consuming through manual assessment.
The use of spatial modeling techniques enables the creation of valid and comparable maps, which represents an important development within the a.s. soil mapping process. The a.s. soil mapping was also assessed using water chemistry data for 24 different catchments along the Finnish coast (in all, covering c. 21,300 km2) which were mapped with different methods (i.e. conventional mapping, fuzzy logic and an artificial neural network). Two a.s. soil related indicators measured in the river water (sulfate content and sulfate/chloride ratio) were compared to the extent of the most probable areas for a.s. soils in the surveyed catchments. High sulfate contents and sulfate/chloride ratios measured in most of the rivers demonstrated the presence of a.s. soils in the corresponding catchments. The calculated extent of the most probable a.s. soil areas is supported by independent data on water chemistry, suggesting that the a.s. soil probability maps created with different methods are reliable and comparable.
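The fuzzy-logic overlay idea can be sketched as follows; the membership values, the layers and the gamma value are invented for illustration and are not those used in the studies above.

# Hedged sketch of a fuzzy-logic overlay (invented membership values and gamma;
# the thesis used its own evidential layers such as Quaternary geology,
# aerogeophysics and slope, and its own membership functions).
import numpy as np

def fuzzy_gamma(memberships, gamma=0.8):
    """Combine layer memberships with the fuzzy gamma operator."""
    memberships = np.asarray(memberships, dtype=float)
    alg_sum = 1.0 - np.prod(1.0 - memberships, axis=0)   # fuzzy algebraic sum
    alg_prod = np.prod(memberships, axis=0)               # fuzzy algebraic product
    return alg_sum**gamma * alg_prod**(1.0 - gamma)

# three evidential layers rescaled to [0, 1] for four example grid cells
geology    = np.array([0.9, 0.7, 0.2, 0.1])
geophysics = np.array([0.8, 0.5, 0.3, 0.2])
slope      = np.array([0.7, 0.6, 0.4, 0.1])

print("a.s. soil probability score:",
      np.round(fuzzy_gamma([geology, geophysics, slope]), 2))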
Abstract:
The objective of this thesis is to develop and further generalize the differential evolution based data classification method. For many years, evolutionary algorithms have been successfully applied to many classification tasks. Evolutionary algorithms are population-based, stochastic search algorithms that mimic natural selection and genetics. Differential evolution is an evolutionary algorithm that has gained popularity because of its simplicity and good observed performance. In this thesis, a differential evolution classifier with a pool of distances is proposed, demonstrated and initially evaluated. The differential evolution classifier is a nearest prototype vector based classifier that applies a global optimization algorithm, differential evolution, to determine the optimal values for all free parameters of the classifier model during the training phase of the classifier. The differential evolution classifier, which applies an individually optimized distance measure for each new data set to be classified, is here generalized to cover a pool of distances. Instead of optimizing a single distance measure for the given data set, the selection of the optimal distance measure from a predefined pool of alternative measures is attempted systematically and automatically. Furthermore, instead of only selecting the optimal distance measure from a set of alternatives, an attempt is made to optimize the values of the possible control parameters related to the selected distance measure. Specifically, a pool of alternative distance measures is first created, and the differential evolution algorithm is then applied to select the optimal distance measure that yields the highest classification accuracy with the current data. After the optimal distance measures for the given data set, together with their optimal parameters, have been determined, all of the determined distance measures are aggregated to form a single total distance measure. The total distance measure is applied to the final classification decisions. The actual classification process is still based on the nearest prototype vector principle: a sample belongs to the class represented by the nearest prototype vector when measured with the optimized total distance measure. During the training process, the differential evolution algorithm determines the optimal class vectors, selects the optimal distance metrics, and determines the optimal values for the free parameters of each selected distance measure. The results obtained with the above method confirm that the choice of distance measure is one of the most crucial factors for obtaining higher classification accuracy. The results also demonstrate that it is possible to build a classifier that is able to select the optimal distance measure for the given data set automatically and systematically. After the optimal distance measures, together with their optimal parameters, have been found, the results are aggregated to form a total distance, which is used to measure the deviation between the class vectors and the samples and thus to classify the samples. This thesis also discusses two types of aggregation operators, namely ordered weighted averaging (OWA) based multi-distances and generalized ordered weighted averaging (GOWA). These aggregation operators were applied in this work to the aggregation of the normalized distance values. The results demonstrate that a proper combination of aggregation operator and weight generation scheme plays an important role in obtaining good classification accuracy.
The main outcomes of the work are six new generalized versions of the previous method, the differential evolution classifier. All of these DE classifiers demonstrated good results in the classification tasks.
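A simplified sketch of the nearest-prototype training idea (Euclidean distance only and synthetic data; the actual method additionally selects and parameterizes distance measures from a pool and aggregates them): scipy's general-purpose differential evolution optimizer stands in here for the DE trainer.

# Simplified sketch of nearest-prototype training by differential evolution
# (Euclidean distance only; synthetic two-class data; not the thesis implementation).
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
n_classes, n_features = 2, 2

def error_rate(flat_prototypes):
    prototypes = flat_prototypes.reshape(n_classes, n_features)
    d = np.linalg.norm(X[:, None, :] - prototypes[None, :, :], axis=2)
    return np.mean(np.argmin(d, axis=1) != y)     # misclassification rate

result = differential_evolution(error_rate,
                                bounds=[(-5, 5)] * (n_classes * n_features), seed=0)
print("training error:", result.fun)
print("optimized class prototype vectors:\n", result.x.reshape(n_classes, n_features))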
Abstract:
Software is a key component in many of the devices and products that we use every day. Most customers demand not only that their devices function as expected, but also that the software is of high quality, reliable, fault tolerant, efficient, etc. In short, it is not enough that a calculator gives the correct result of a calculation; we want the result instantly, in the right form, with minimal use of battery, etc. One of the key aspects of succeeding in today's industry is delivering high quality. In most software development projects, high-quality software is achieved by rigorous testing and good quality assurance practices. However, today, customers are asking for these high-quality software products at an ever-increasing pace. This leaves companies with less time for development. Software testing is an expensive activity, because it requires much manual work. Testing, debugging, and verification are estimated to consume 50 to 75 per cent of the total development cost of complex software projects. Further, the most expensive software defects are those which have to be fixed after the product is released. One of the main challenges in software development is reducing the associated cost and time of software testing without sacrificing the quality of the developed software. It is often not enough to only demonstrate that a piece of software is functioning correctly. Usually, many other aspects of the software, such as performance, security, scalability, usability, etc., also need to be verified. Testing these aspects of the software is traditionally referred to as non-functional testing. One of the major challenges with non-functional testing is that it is usually carried out at the end of the software development process, when most of the functionality is implemented. This is due to the fact that non-functional aspects, such as performance or security, apply to the software as a whole. In this thesis, we study the use of model-based testing. We present approaches to automatically generate tests from behavioral models for solving some of these challenges. We show that model-based testing is applicable not only to functional testing but also to non-functional testing. In its simplest form, performance testing is performed by executing multiple test sequences at once while observing the software in terms of responsiveness and stability rather than the output. The main contribution of the thesis is a coherent model-based testing approach for testing functional and performance related issues in software systems. We show how we go from system models, expressed in the Unified Modeling Language, to test cases and back to models again. The system requirements are traced throughout the entire testing process. Requirements traceability facilitates finding faults in the design and implementation of the software. In the research field of model-based testing, many newly proposed approaches suffer from poor or lacking tool support. Therefore, the second contribution of this thesis is proper tool support for the proposed approach, integrated with leading industry tools. We offer independent tools, tools that are integrated with other industry-leading tools, and complete tool-chains when necessary. Many model-based testing approaches proposed by the research community suffer from poor empirical validation in an industrial context. In order to demonstrate the applicability of our proposed approach, we apply our research to several systems, including industrial ones.
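As a toy sketch of generating tests from a behavioral model (an invented state machine, not the UML models or tool-chain of the thesis), every transition of the model is covered by a short event sequence, each of which becomes an abstract test case.

# Toy sketch of model-based test generation (invented model): walk a state machine
# so that every transition is covered at least once; each walk is an abstract test.
from collections import deque

transitions = {            # state -> [(event, next_state), ...]
    "Idle":    [("insertCoin", "Ready")],
    "Ready":   [("pressStart", "Running"), ("refund", "Idle")],
    "Running": [("finish", "Idle")],
}

def transition_covering_tests(start="Idle"):
    tests, covered = [], set()
    for state, outgoing in transitions.items():
        for event, target in outgoing:
            if (state, event) in covered:
                continue
            # breadth-first search for a shortest event path from start to `state`
            queue, seen = deque([(start, [])]), {start}
            while queue:
                s, path = queue.popleft()
                if s == state:
                    tests.append(path + [event])
                    covered.add((state, event))
                    break
                for ev, nxt in transitions.get(s, []):
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append((nxt, path + [ev]))
    return tests

for i, t in enumerate(transition_covering_tests(), 1):
    print(f"test {i}: {' -> '.join(t)}")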