36 results for multi-objective control
Abstract:
The Laboratory of Intelligent Machines researches and develops energy-efficient power transmissions and automation for mobile construction machines and industrial processes. The laboratory's particular areas of expertise include mechatronic machine design using virtual technologies and simulators, and demanding industrial robotics. The laboratory has collaborated extensively with industrial actors and has participated in significant international research projects, particularly in the field of robotics. For years, dSPACE tools were the only hardware used in the lab to develop real-time control algorithms. dSPACE's hardware systems are in widespread use in the automotive industry and are also employed in drives, aerospace, and industrial automation. However, competitors are now developing sophisticated systems whose features convinced the laboratory to test new products. One of these competitors is National Instruments (NI). In order to learn the specifications and capabilities of NI tools, an agreement was made to test an NI system for evaluation. This system is used to control a 1-D hydraulic slider. The objective of this research project is to develop a control scheme for the teleoperation of a hydraulically driven manipulator, and to implement a control algorithm for both human-machine interaction and machine-task-environment interaction on the NI and dSPACE systems simultaneously and to compare the results.
Abstract:
One of the main challenges in Software Engineering is to cope with the transition from an industry based on software as a product to software as a service. The field of Software Engineering should provide the necessary methods and tools to develop and deploy new cost-efficient and scalable digital services. In this thesis, we focus on deployment platforms to ensure cost-efficient scalability of multi-tier web applications and an on-demand video transcoding service under different types of load conditions. Infrastructure as a Service (IaaS) clouds provide Virtual Machines (VMs) under the pay-per-use business model. Dynamically provisioning VMs on demand allows service providers to cope with fluctuations in the number of service users. However, VM provisioning must be done carefully, because over-provisioning results in an increased operational cost, while under-provisioning leads to a subpar service. Therefore, our main focus in this thesis is on cost-efficient VM provisioning for multi-tier web applications and on-demand video transcoding. Moreover, to prevent provisioned VMs from becoming overloaded, we augment VM provisioning with an admission control mechanism. Similarly, to ensure efficient use of provisioned VMs, web applications on under-utilized VMs are consolidated periodically. Thus, the main problem that we address is cost-efficient VM provisioning augmented with server consolidation and admission control on the provisioned VMs. We seek solutions for two types of applications: multi-tier web applications that follow the request-response paradigm and on-demand video transcoding that is based on video streams with soft real-time constraints. Our first contribution is a cost-efficient VM provisioning approach for multi-tier web applications.
The proposed approach comprises two sub-approaches: a reactive VM provisioning approach called ARVUE and a hybrid reactive-proactive VM provisioning approach called Cost-efficient Resource Allocation for Multiple web applications with Proactive scaling. Our second contribution is a prediction-based VM provisioning approach for on-demand video transcoding in the cloud. Moreover, to prevent virtualized servers from becoming overloaded, the proposed VM provisioning approaches are augmented with admission control approaches. Therefore, our third contribution is a session-based admission control approach for multi-tier web applications called adaptive Admission Control for Virtualized Application Servers. Similarly, the fourth contribution in this thesis is a stream-based admission control and scheduling approach for on-demand video transcoding called Stream-Based Admission Control and Scheduling. Our fifth contribution is a computation and storage trade-off strategy for cost-efficient video transcoding in cloud computing. Finally, the sixth and last contribution is a web application consolidation approach, which uses Ant Colony System to minimize the under-utilization of the virtualized application servers.
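The reactive side of such a provisioning scheme can be illustrated with a minimal threshold-based sketch. This is not the thesis's actual ARVUE policy; the function name, thresholds, and one-VM step size are illustrative assumptions only:

```python
def provision(current_vms, utilization, scale_up=0.8, scale_down=0.3,
              min_vms=1, max_vms=20):
    """Return the new VM count given average utilization in [0, 1].

    A toy reactive policy: scale out above the upper threshold,
    scale in (consolidate) below the lower one, otherwise hold.
    """
    if utilization > scale_up and current_vms < max_vms:
        return current_vms + 1   # overloaded: add a VM
    if utilization < scale_down and current_vms > min_vms:
        return current_vms - 1   # under-utilized: release a VM
    return current_vms           # within the band: keep as-is
```

A real controller would additionally smooth the utilization signal and impose a cool-down period to avoid oscillating between scale-out and scale-in decisions.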
Abstract:
The objective of this thesis is to develop and further generalize the differential evolution based data classification method. For many years, evolutionary algorithms have been successfully applied to many classification tasks. Evolutionary algorithms are population-based, stochastic search algorithms that mimic natural selection and genetics. Differential evolution is an evolutionary algorithm that has gained popularity because of its simplicity and good observed performance. In this thesis a differential evolution classifier with a pool of distances is proposed, demonstrated and initially evaluated. The differential evolution classifier is a nearest prototype vector based classifier that applies a global optimization algorithm, differential evolution, to determine the optimal values for all free parameters of the classifier model during the training phase. The differential evolution classifier, which applies an individually optimized distance measure to each new data set to be classified, is generalized to cover a pool of distances. Instead of optimizing a single distance measure for the given data set, the selection of the optimal distance measure from a predefined pool of alternative measures is attempted systematically and automatically. Furthermore, instead of only selecting the optimal distance measure from a set of alternatives, an attempt is made to optimize the values of the possible control parameters associated with the selected distance measure. Specifically, a pool of alternative distance measures is first created and then the differential evolution algorithm is applied to select the optimal distance measure that yields the highest classification accuracy with the current data. After determining the optimal distance measures for the given data set together with their optimal parameters, all determined distance measures are aggregated to form a single total distance measure.
The total distance measure is applied to the final classification decisions. The actual classification process is still based on the nearest prototype vector principle: a sample belongs to the class represented by the nearest prototype vector when measured with the optimized total distance measure. During the training process the differential evolution algorithm determines the optimal class vectors, selects optimal distance metrics, and determines the optimal values for the free parameters of each selected distance measure. The results obtained with the above method confirm that the choice of distance measure is one of the most crucial factors for obtaining higher classification accuracy. The results also demonstrate that it is possible to build a classifier that is able to select the optimal distance measure for the given data set automatically and systematically. After the optimal distance measures, together with the optimal parameters of each, have been found, the results are aggregated to form a total distance, which is used to measure the deviation between the class vectors and the samples and thus classify the samples. This thesis also discusses two types of aggregation operators, namely ordered weighted averaging (OWA) based multi-distances and generalized ordered weighted averaging (GOWA). These aggregation operators were applied in this work to the aggregation of the normalized distance values. The results demonstrate that a proper combination of aggregation operator and weight generation scheme plays an important role in obtaining good classification accuracy. The main outcomes of the work are six new generalized versions of the previously proposed method, the differential evolution classifier. All of these DE classifiers demonstrated good results in the classification tasks.
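The optimizer underlying the classifier can be illustrated with a minimal DE/rand/1/bin sketch. In the thesis the fitness would score classification accuracy as a function of prototype vectors and distance-measure parameters; here a toy sphere function stands in, and all names and parameter values below are illustrative assumptions:

```python
import random

def de_optimize(fitness, dim, bounds, pop_size=20, f=0.8, cr=0.9,
                gens=100, seed=0):
    """Minimal differential evolution (DE/rand/1/bin) minimizer."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            # Pick three distinct members other than the current one.
            a, b, c = rng.sample([x for j, x in enumerate(pop) if j != i], 3)
            # Mutate and crossover: trial = a + F * (b - c), per component.
            trial = [a[k] + f * (b[k] - c[k]) if rng.random() < cr else pop[i][k]
                     for k in range(dim)]
            trial = [min(max(t, lo), hi) for t in trial]  # clamp to bounds
            # Greedy selection: keep the trial if it is no worse.
            if fitness(trial) <= fitness(pop[i]):
                pop[i] = trial
    return min(pop, key=fitness)
```

In the classifier setting, the decision vector would encode the class prototype vectors, the index of the chosen distance measure from the pool, and that measure's free parameters, all optimized jointly.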
Abstract:
The aim of this thesis is to propose a novel control method for teleoperated electrohydraulic servo systems that implements a reliable haptic sense in the interaction between the human and the manipulator, and ideal position control in the interaction between the manipulator and the task environment. The proposed method has the characteristics of a universal technique independent of the actual control algorithm and can be applied with other suitable control methods as a real-time control strategy. The motivation for developing this control method is the need for a reliable real-time controller for teleoperated electrohydraulic servo systems that provides highly accurate position control based on joystick inputs with haptic capabilities. The contribution of the research is that the proposed control method combines a directed random search method and a real-time simulation to develop an intelligent controller in which each generation of parameters is tested on-line by the real-time simulator before being applied to the real process. The controller was evaluated on a hydraulic position servo system. The simulator of the hydraulic system was built based on the Markov chain Monte Carlo (MCMC) method. A Particle Swarm Optimization algorithm combined with the foraging behavior of E. coli bacteria was utilized as the directed random search engine. The control strategy allows the operator to be plugged into the work environment dynamically and kinetically. This helps to ensure the system has a haptic sense with high stability, without abstracting away the dynamics of the hydraulic system. The new control algorithm provides asymptotically exact tracking of both the position and the contact force. In addition, this research proposes a novel method for re-calibration of multi-axis force/torque sensors. The method makes several improvements over traditional methods.
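The directed random search named above can be illustrated with a plain particle swarm optimizer on a toy objective. The bacterial-foraging (chemotaxis) hybridization and the real-time simulator-in-the-loop evaluation described in the thesis are omitted, and all names and parameter values are illustrative assumptions:

```python
import random

def pso_minimize(f, dim, bounds, n=15, iters=80, w=0.7, c1=1.5, c2=1.5,
                 seed=1):
    """Plain PSO minimizer; the E. coli foraging hybrid step is omitted."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]            # personal bests
    gbest = min(pbest, key=f)              # global best
    for _ in range(iters):
        for i in range(n):
            for k in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Inertia + cognitive pull + social pull.
                vel[i][k] = (w * vel[i][k]
                             + c1 * r1 * (pbest[i][k] - pos[i][k])
                             + c2 * r2 * (gbest[k] - pos[i][k]))
                pos[i][k] = min(max(pos[i][k] + vel[i][k], lo), hi)
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=f)
    return gbest
```

In the thesis's scheme, each candidate parameter vector would first be evaluated on the real-time simulator, and only vetted candidates would be applied to the physical hydraulic process.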
It can be used without dismantling the sensor from its application and it requires a smaller number of standard loads for calibration. It is also more cost-efficient and faster than traditional calibration methods. The proposed method was developed in response to re-calibration issues with the force sensors utilized in teleoperated systems. The new approach aims to avoid dismantling the sensors from their applications for calibration. A major complication with many manipulators is the difficulty of accessing them when they operate inside an inaccessible environment, especially if that environment is harsh, such as a radioactive area. The proposed technique is based on design-of-experiments methodology. It has been successfully applied to different force/torque sensors, and this research presents experimental validation of the calibration method with one of the force sensors to which it has been applied.
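The core idea of fitting a sensor model from a small number of standard loads can be illustrated for a single axis with an ordinary least-squares sketch. The thesis's design-of-experiments method for multi-axis force/torque sensors estimates a full cross-coupling matrix and is considerably more involved; the names and the linear gain-offset model below are illustrative assumptions:

```python
def calibrate(loads, readings):
    """Fit reading = gain * load + offset by least squares (toy 1-axis case)."""
    n = len(loads)
    mean_x = sum(loads) / n
    mean_y = sum(readings) / n
    # Closed-form simple linear regression.
    gain = (sum((x - mean_x) * (y - mean_y) for x, y in zip(loads, readings))
            / sum((x - mean_x) ** 2 for x in loads))
    offset = mean_y - gain * mean_x
    return gain, offset
```

With few well-chosen load levels, as a design-of-experiments plan would prescribe, the fitted parameters let raw sensor readings be corrected in place, without removing the sensor from its application.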
Abstract:
Crystal properties, product quality and particle size are determined by the operating conditions in the crystallization process. Thus, in order to obtain desired end-products, the crystallization process should be effectively controlled based on reliable kinetic information, which can be provided by powerful analytical tools such as Raman spectrometry and thermal analysis. The present research work studied various crystallization processes such as reactive crystallization, anti-solvent precipitation and evaporation crystallization. The goal of the work was to understand more comprehensively the fundamentals, phenomena and utilizations of crystallization, and to establish proper methods to control particle size distribution, especially for three-phase gas-liquid-solid crystallization systems. As a part of the solid-liquid equilibrium studies in this work, prediction of KCl solubility in a MgCl2-KCl-H2O system was studied theoretically. Additionally, a solubility prediction model based on the Pitzer thermodynamic model was investigated using solubility measurements of potassium dihydrogen phosphate in the presence of non-electrolyte organic substances in aqueous solutions. The prediction model helps to extend literature data and offers an easy and economical way to choose a solvent for anti-solvent precipitation. Using experimental and modern analytical methods, precipitation kinetics and mass transfer in reactive crystallization of magnesium carbonate hydrates with magnesium hydroxide slurry and CO2 gas were systematically investigated. The obtained results gave deeper insight into gas-liquid-solid interactions and the mechanisms of this heterogeneous crystallization process. The research approach developed can provide theoretical guidance and act as a useful reference to promote the development of gas-liquid reactive crystallization.
Gas-liquid mass transfer of absorption in the presence of solid particles in a stirred tank was investigated in order to gain understanding of how different-sized particles interact with gas bubbles. Based on the obtained volumetric mass transfer coefficient values, it was found that the influence of the presence of small particles on gas-liquid mass transfer cannot be ignored, since there are interactions between bubbles and particles. Raman spectrometry was successfully applied to liquid and solids analysis in semi-batch anti-solvent precipitation and evaporation crystallization. Real-time information such as supersaturation, formation of precipitates and identification of crystal polymorphs could be obtained by Raman spectrometry. The solubility prediction models, monitoring methods for precipitation and empirical model for absorption developed in this study, together with the methodologies used, give valuable information for aspects of industrial crystallization. Furthermore, Raman analysis was seen to be a potential control method for various crystallization processes.
Abstract:
The case company utilizes a multi-branding strategy (or house-of-brands strategy) in its product portfolio. In practice the company has multiple brands – one main brand and four acquired brands – which all utilize one single product platform. The objective of this research is to analyze the case company's multi-branding strategy and its benefits and challenges. Moreover, the purpose is to clarify how a company in B2B markets could utilize a multi-branding strategy more efficiently and profitably. The theoretical part of this thesis covers aspects of branding strategies: different brand name architectures, benefits and challenges of different strategies, and different ways of utilizing branding strategies in mergers and acquisitions. The empirical part, on the other hand, includes the description of the case company's branding strategy and the employees' perspective on the benefits and challenges of the multi-branding strategy, and on how to utilize it more efficiently and profitably. This study shows that the major benefits of utilizing multi-branding are lower production costs, the ability to reach wider market coverage, the possibility to utilize common sales tools, synergies in R&D and shared resources. On the other hand, the major challenges are lack of product differentiation, internal competition, branding issues in production and deliveries, pricing issues and conflicts, and compromises in product compatibility and suitability. Based on the results, several ways to utilize the multi-branding strategy more efficiently and profitably were found: by putting more effort into brand image and product differentiation, by having more co-operation among the brands, and by focusing on more precise customer and market segmentation.