942 results for Multi-objective optimization techniques
Abstract:
The purpose of this thesis is twofold. The first and major part is devoted to sensitivity analysis of various discrete optimization problems, while the second part addresses methods for calculating measures of solution stability and for solving multicriteria discrete optimization problems. Among the numerous approaches to stability analysis of discrete optimization problems, two major directions can be singled out: quantitative and qualitative. Qualitative sensitivity analysis is conducted for multicriteria discrete optimization problems with minisum, minimax and minimin partial criteria. The main results obtained here are necessary and sufficient conditions for different stability types of optimal solutions (or of a set of optimal solutions) of the considered problems. Within the quantitative direction, various measures of solution stability are investigated. A formula for a quantitative characteristic called the stability radius is obtained for the generalized equilibrium situation invariant to changes of game parameters in the case of the Hölder metric. The quality of a problem solution can also be described in terms of robustness analysis. In this work the concepts of accuracy and robustness tolerances are presented for a strategic game with a finite number of players in which the initial coefficients (costs) of the linear payoff functions are subject to perturbations. The investigation of the stability radius also aims to devise methods for its calculation. A new metaheuristic approach is derived for calculating the stability radius of an optimal solution to the shortest path problem. The main advantage of the developed method is that it is potentially applicable to calculating stability radii of NP-hard problems. The last chapter of the thesis focuses on deriving novel methods, based on an interactive optimization approach, for solving multicriteria combinatorial optimization problems. The key idea of the proposed approach is to use a parameterized achievement scalarizing function for solution calculation and to steer the interactive procedure by changing the weighting coefficients of this function. To illustrate the introduced ideas, a decision-making process is simulated for a three-objective median location problem. The concepts, models, and ideas collected and analyzed in this thesis provide a sound and relevant foundation for developing more sophisticated and integrated models of postoptimal analysis and for solving the most computationally challenging problems related to it.
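As a point of reference for the last contribution, a commonly used form of a parameterized achievement scalarizing function is the augmented Wierzbicki formulation, shown here only as an illustrative sketch and not necessarily the exact function used in the thesis:

```latex
\[
  s\bigl(f(x);\, z^{\mathrm{ref}},\, w\bigr)
  \;=\; \max_{i=1,\dots,k}\, w_i \bigl(f_i(x) - z_i^{\mathrm{ref}}\bigr)
  \;+\; \rho \sum_{i=1}^{k} w_i \bigl(f_i(x) - z_i^{\mathrm{ref}}\bigr),
\]
```

where $f_1,\dots,f_k$ are the partial criteria, $z^{\mathrm{ref}}$ is a reference point, $\rho > 0$ is a small augmentation coefficient, and the weights $w_i > 0$ are the coefficients that an interactive procedure of this kind adjusts between iterations.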
Abstract:
To obtain the desired accuracy of a robot, two techniques are available. The first option is to make the robot match the nominal mathematical model; in other words, the manufacturing and assembly tolerances of every part are kept extremely tight so that all of the various parameters match the "design" or "nominal" values as closely as possible. This method can satisfy most accuracy requirements, but the cost increases dramatically as the accuracy requirement increases. Alternatively, a more cost-effective solution is to build a manipulator with relaxed manufacturing and assembly tolerances and, by modifying the mathematical model in the controller, compensate for the actual errors of the robot. This is the essence of robot calibration. Simply put, robot calibration is the process of defining an appropriate error model and then identifying the various parameter errors that make the error model match the robot as closely as possible. This work focuses on kinematic calibration of a 10 degree-of-freedom (DOF) redundant serial-parallel hybrid robot. The robot consists of a 4-DOF serial mechanism and a 6-DOF hexapod parallel manipulator. The redundant 4-DOF serial structure is used to enlarge the workspace, and the 6-DOF hexapod manipulator provides high load capacity and stiffness for the whole structure. The main objective of the study is to develop a suitable calibration method to improve the accuracy of the redundant serial-parallel hybrid robot. To this end, a Denavit–Hartenberg (DH) hybrid error model and a Product-of-Exponential (POE) error model are developed for error modeling of the proposed robot. Furthermore, two kinds of global optimization methods, namely the differential evolution (DE) algorithm and the Markov Chain Monte Carlo (MCMC) algorithm, are employed to identify the parameter errors of the derived error models. A measurement method based on a 3-2-1 wire-based pose estimation system is proposed and implemented in a Solidworks environment to simulate real experimental validation. Numerical simulations and Solidworks prototype-model validations are carried out on the hybrid robot to verify the effectiveness, accuracy and robustness of the calibration algorithms.
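A minimal sketch of the identification step, assuming a generic pose-error objective and hypothetical measurement data (the forward-kinematics placeholder below stands in for the DH/POE error model of the thesis), could use SciPy's differential evolution as follows:

```python
# Illustrative sketch (not the thesis implementation): identify kinematic
# parameter errors by minimizing pose residuals with differential evolution.
# `nominal_fk` and the measurement data are hypothetical stand-ins.
import numpy as np
from scipy.optimize import differential_evolution

def nominal_fk(joints, param_errors):
    """Placeholder forward kinematics: returns an end-effector position
    computed from joint values and candidate parameter errors."""
    # A real calibration would evaluate the DH/POE error model here.
    return np.array([np.sum(joints) + np.sum(param_errors)] * 3)

def pose_residual(param_errors, joint_samples, measured_positions):
    """Sum of squared position errors over all measurement configurations."""
    err = 0.0
    for q, p_meas in zip(joint_samples, measured_positions):
        p_model = nominal_fk(q, param_errors)
        err += np.sum((p_model - p_meas) ** 2)
    return err

# Hypothetical data: 20 joint configurations of the 10-DOF robot and
# the corresponding measured end-effector positions.
rng = np.random.default_rng(0)
joint_samples = rng.uniform(-1.0, 1.0, size=(20, 10))
measured_positions = rng.normal(size=(20, 3))

# Search bounds for, say, 12 parameter errors.
bounds = [(-0.01, 0.01)] * 12
result = differential_evolution(pose_residual, bounds,
                                args=(joint_samples, measured_positions),
                                maxiter=50, seed=1, tol=1e-8)
print("identified parameter errors:", result.x)
```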
Abstract:
The dissertation has its origin in my engagement with pupils' encounters with art in the compulsory-school subject Art and Crafts (kunst og håndverk) and in my view of young people as competent contributors to research on phenomena that concern their lives. The pupils are informants for, or actors in, research on the phenomenon of dialogue with art. Dialogue with art is here defined as a holistic process that encompasses everything from the pupils' encounters with visual artworks to their own creative work. Since the dissertation's subject-didactic problem area is pupils' practical creative activity, the study is linked to sloyd-pedagogical (slöjd) research. The overall purpose of the dissertation is to contribute to the development of subject didactics in Art and Crafts, based on pupils' experiences of the content and methods of art teaching at the lower secondary level. The study consists of case studies at two lower secondary schools. Data were collected through interviews, participant observation, documents, process books and photographs of craft products. Young people's dialogue with art in school is analysed and presented from an experienced and an operationalized perspective. The findings are mirrored in various subject-didactic tendencies, that is, the main positions in the debate on modern society, and in an activity-theoretical perspective. The results of the study challenge us to a subject-didactic reorientation regarding young people's encounters with artworks in school, towards more youth-cultural content and relational art encounters that are narrative, interpretation-oriented, experience-oriented, dialogical and multi-voiced. The study shows that pupils enjoy practical creative work, but that teaching should make greater use of digital knowledge, address how art can be used as a starting point for creative work, and facilitate the learning potential that lies in dialogue among the pupils. Pupils appreciate teaching that is not only about aesthetic devices, materials and techniques, but also about communication and freedom of expression. The results show that free creative work consists of three equal aspects: the individual, the cultural and the social. Both the findings and the dissertation's activity-theoretical perspective can contribute to the discourse on the concept of creativity and on identity construction in modern society. In this dissertation the activity system is developed into a theory of creative work in the subject Art and Crafts, an overarching subject-didactic framework for the subjects bild/bildkonst and slöjd set within a Nordic educational perspective.
Abstract:
Currently, a high penetration level of Distributed Generations (DGs) is observed in the Danish distribution systems, and even more DGs are foreseen in the upcoming years. How to utilize them to maintain security of the power supply in emergency situations has been of great interest for study. This master's project develops a control architecture for studying distribution systems with large-scale integration of solar power. As part of the EcoGrid EU Smart Grid project, it focuses on the modelling and simulation of a representative Danish LV network located on the island of Bornholm. Regarding the control architecture, two types of reactive power control techniques are implemented and compared. In addition, a network voltage control based on a tap-changing transformer is tested. The results obtained after applying a genetic algorithm to five typical Danish domestic loads show lower power losses and voltage deviations with Q(U) control, especially at high consumption. Finally, a communication and information exchange system is developed with the objective of regulating the reactive power, and thereby the network voltage, remotely and in real time. Validation tests of the simulated parameters are performed as well.
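A minimal sketch of a Q(U) droop characteristic of the kind referred to above is shown below; the voltage breakpoints and reactive power limit are illustrative assumptions, not the EcoGrid EU parameter values:

```python
# Sketch of a Q(U) droop: the inverter's reactive power setpoint is a
# piecewise-linear function of the locally measured voltage, with a
# deadband around 1.0 p.u. Breakpoints below are illustrative only.
import numpy as np

def q_of_u(u_pu, q_max=0.3):
    """Return the reactive power setpoint (p.u. of rated power) for a
    measured voltage u_pu: inject Q at low voltage, absorb Q at high voltage."""
    u_points = np.array([0.90, 0.95, 1.05, 1.10])   # voltage breakpoints (p.u.)
    q_points = np.array([q_max, 0.0, 0.0, -q_max])  # corresponding Q setpoints
    return float(np.interp(u_pu, u_points, q_points))

for u in (0.92, 0.97, 1.00, 1.07):
    print(f"U = {u:.2f} p.u. -> Q = {q_of_u(u):+.3f} p.u.")
```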
Abstract:
The objective of this thesis was to examine the potential of multi-axis solutions in packaging machines produced in Europe. In this study, a multi-axis solution is defined as a construction that uses a common DC-bus power supply for the different amplifiers running the axes, with the intelligence centralized in one unit. The cost structure of a packaging machine was obtained from an automation study, which divided the machines according to automation categories. The automation categories were then further divided into sub-components by evaluating the ratio of multi-axis solutions to other automation components in packaging machines. A global motion control study was used for further information. With the help of this ratio, the potential of multi-axis solutions in each country and packaging machine sector was estimated. In addition, a questionnaire was sent to five companies to gain information about the present situation and possible trends in packaging machinery. The greatest potential markets are Germany and Italy, which are also the largest producers of packaging machinery in Europe. The greatest growth in the next few years will be seen in Turkey, where the annual growth rate equals the general machinery production rate in Asia. The greatest market potential among the Nordic countries is found in Sweden, in 35th position on the list. According to the interviews, motion control products in packaging machines will retain their current power levels, as well as their number of axes, in the future. Integrated machine safety features together with a universal programming language are the desired attributes for the future. Unlike in industry generally, energy-saving objectives are, and will remain, insignificant in the packaging industry.
Abstract:
More than ever, education organisations are experiencing the need to develop new services and processes to satisfy expanding and changing customer needs and to adapt to environmental changes and a continually tightening economic situation. Many studies have found innovation to play a crucial role in the success of an organisation, in the private and public sectors alike, in formal education as well as in manufacturing and services. However, studies concerning innovation in non-formal adult education organisations, such as adult education centres (AECs) in Finland, are still lacking. This study investigates innovation in the non-formal adult education organisation context from the perspective of organisational culture types and social networks. The objective is to determine the significant characteristics of an innovative non-formal adult education organisation. The analysis is based on data from interviews with the principals and full-time staff of four case AECs. Before the case study, a pre-study phase was carried out to obtain a preliminary understanding of innovation at AECs. The research found strong support for the need for innovation in AECs. Innovation is needed, first of all, to accomplish the AEC system's primary mission stated in the Act on Liberal Adult Education. In addition, innovation is regarded as vital to the institutes and may prevent their decline. It helps the institutes to be more attractive, to enter new markets, to increase customer satisfaction and to stay on the cutting edge. Innovation is also seen as a solution to the shortage of resources: innovative AECs actively search for additional resources for development work through project funding and subsidies, through cooperation networks, and by creating a conversational and joyful atmosphere in the institute. The findings also suggest that the culture type that supports innovation at AECs is multidimensional, with an emphasis on the clan and adhocracy culture types and on values such as dynamism, future orientation, acquiring new resources, tolerance of mistakes, openness, flexibility, customer orientation, a risk-taking attitude, and community spirit. Active and creative internal and external cooperation also promote innovation at AECs. The study further suggests that the behaviour of the principal is crucial: the way he or she shows appreciation, encouragement and support to the staff, together with his or her approachability and concrete participation in innovation activities, has a strong effect on innovation attitudes and activities in AECs.
Abstract:
The objective of this work was to optimize the parameter setup for GTAW of aluminum using an AC rectangular wave output and continuous feeding. A series of welds was carried out on an industrial joint, varying the negative and positive current amplitudes, the negative and positive duration times, the travel speed and the feeding speed. Another series was carried out to investigate the isolated effects of the negative duration time and the travel speed. Bead geometry aspects were assessed, such as reinforcement, penetration, incomplete fusion and joint wall bridging. The results showed that the currents at both polarities are remarkably more significant than the respective duration times. It was also shown that there is a direct relationship between the welding (travel) speed and the feeding speed, and this relationship must be respected to obtain sound beads. A very short positive duration time is enough to achieve arc stability, and the negative duration time affects the geometry only when it is longer than 5 ms. The possibility of optimizing the parameter selection, despite the high inter-correlation among the parameters, was demonstrated through a computer program. An approach to reduce the number of variables in this process is also presented.
Abstract:
Nowadays, the upwind three-bladed horizontal axis wind turbine is the leading player on the market. It has been found to be the best industrial compromise among the range of different turbine constructions. Current wind-industry innovation is concentrated on the development of individual turbine components. The blade constitutes 20-25% of the overall turbine budget, so its optimal operation under particular local economic and wind conditions is worth investigating. The blade geometry, namely the chord, twist and airfoil-type distributions along the span, governs the output measures of blade performance. Therefore, an optimal wind blade geometry can improve the overall turbine performance. The objectives of the dissertation are focused on developing a methodology and a specific tool for investigating possible adjustments to existing wind blade geometries. The novelty of the methodology presented in the thesis is the multiobjective perspective on wind blade geometry optimization, which simultaneously takes into account the local wind conditions and the issue of aerodynamic noise emissions. This optimization objective approach has not previously been investigated for implementation in wind blade design. The possibilities of using different theories for the analysis and search procedures are investigated, and sufficient arguments are derived for the proposed choices. The tool is used for a test optimization of a particular wind turbine blade. The sensitivity analysis shows the dependence of the outputs on the provided inputs, as well as their relative and absolute divergences and instabilities. The pros and cons of the proposed technique are seen in the practical implementation, which is documented in the results, analysis and conclusion sections.
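The multiobjective perspective can be illustrated with a short sketch that filters candidate blade geometries down to a Pareto front for two assumed objectives, energy yield (maximized) and noise emission (minimized); the candidate values below are hypothetical:

```python
# Sketch of the multiobjective comparison behind blade geometry selection:
# keep only the non-dominated designs for two assumed objectives.
def dominates(a, b):
    """Design a dominates b if it has >= energy and <= noise,
    and is strictly better in at least one of the two."""
    energy_a, noise_a = a
    energy_b, noise_b = b
    return (energy_a >= energy_b and noise_a <= noise_b and
            (energy_a > energy_b or noise_a < noise_b))

def pareto_front(designs):
    """Return the candidates not dominated by any other candidate."""
    return [d for d in designs
            if not any(dominates(other, d) for other in designs if other is not d)]

# (annual energy in MWh, noise in dB) for some hypothetical blade geometries.
candidates = [(6200.0, 104.5), (6100.0, 101.0), (6350.0, 107.2), (5900.0, 100.2)]
print(pareto_front(candidates))
```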
Abstract:
Stochastic approximation methods for stochastic optimization are considered. The main stochastic approximation methods are reviewed: the stochastic quasi-gradient (SQG) algorithm, the Kiefer-Wolfowitz algorithm with adaptive rules, and the simultaneous perturbation stochastic approximation (SPSA) algorithm. A model and a solution of the retailer's profit optimization problem are suggested, and an application of the SQG algorithm is considered for optimization problems whose objective functions are given in the form of an ordinary differential equation.
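For reference, a compact sketch of the SPSA iteration mentioned above is given below, applied to a noisy quadratic that stands in for the retailer's profit objective; the gain-sequence constants are illustrative defaults, not values taken from the work:

```python
# Sketch of simultaneous perturbation stochastic approximation (SPSA):
# the gradient is estimated from only two noisy function evaluations per
# iteration, using a random Rademacher perturbation of all coordinates.
import numpy as np

def spsa(loss, theta0, iters=200, a=0.1, c=0.1, A=10.0,
         alpha=0.602, gamma=0.101, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for k in range(iters):
        a_k = a / (k + 1 + A) ** alpha          # step-size sequence
        c_k = c / (k + 1) ** gamma              # perturbation-size sequence
        delta = rng.choice([-1.0, 1.0], size=theta.shape)
        g_hat = (loss(theta + c_k * delta) - loss(theta - c_k * delta)) / (2 * c_k * delta)
        theta = theta - a_k * g_hat
    return theta

# Noisy quadratic as a stand-in objective with minimum at [3, 3].
def noisy_loss(x, rng=np.random.default_rng(1)):
    return float(np.sum((x - 3.0) ** 2) + 0.01 * rng.normal())

print(spsa(noisy_loss, theta0=[0.0, 0.0]))  # drifts toward [3, 3]
```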
Abstract:
Organizational creativity is increasingly important for organizations aiming to survive and thrive in complex and unexpectedly changing environments. It is a precondition of innovation and a driver of an organization's performance success. Whereas innovation research increasingly promotes high-involvement and participatory innovation, models of organizational creativity are still based mainly on an individual-creativity view. Likewise, the definitions of organizational creativity and innovation partly overlap: they are sometimes used as interchangeable constructs and sometimes seen as distinct ones, with creativity understood as the generation of novel and useful ideas and innovation as the implementation of those ideas. The research streams of innovation and organizational creativity seem to be advancing somewhat separately, although together they could provide many synergy advantages. This study therefore addresses three main research gaps. First, as knowledge and knowing become increasingly specialized and distributed in organizations, the conceptualization of organizational creativity needs to reflect that perspective rather than rely on the individual-creativity view. Thus, the conceptualization of organizational creativity needs clarification, especially as an organizational-level phenomenon (i.e., creativity by an organization). Second, approaches to consciously building organizational creativity, so as to increase the capacity of an organization to demonstrate novelty in its knowledgeable actions, are rare. Current creativity techniques are mainly based on individual-creativity views and focus on occasional problem-solving cases among a limited number of individuals, whereas approaches for developing collective creativity and creativity by the organization are lacking. Third, in terms of organizational creativity as a collective phenomenon, the engagement, contributions, and participation of organizational members in activities of common meaning creation are more important than individual creativity skills. Therefore, development approaches that foster creativity as a social, emergent, embodied, and collective phenomenon are needed to complement current creativity techniques. To address these gaps, the study takes a multiparadigm perspective on the following three objectives. The first objective is to clarify and extend the conceptualization of organizational creativity. The second is to study the development of organizational creativity. The third is to explore how an improvisational-theater-based approach fosters organizational creativity. The study consists of two parts: an introductory part (Part I) and six publications (Part II). Each publication addresses the research questions of the thesis through detailed subquestions. The study makes three main contributions to the research on organizational creativity. First, it contributes to the conceptualization of organizational creativity by extending the current view. It regards organizational creativity as a multilevel construct comprising both individual and collective (group and organizational) creativity. In contrast to current views, it builds on organizational (collective) knowledge that is demonstrated through the knowledgeable actions of an organization as a whole, and it defines organizational creativity as the overall ability of an organization to demonstrate novelty in its knowledgeable actions (through what it does and how it does it). Second, the study contributes to the development of organizational creativity as a multilevel phenomenon by introducing developmental approaches that address two or more of these levels simultaneously. More specifically, it presents cross-level approaches to building organizational creativity, using an approach based in improvisational theater and an assessment of organizational renewal capability. Third, the study contributes to the development of organizational creativity through an improvisational-theater-based approach in a twofold manner: the approach fosters individual and collective creativity simultaneously and builds space for creativity to occur, and it models collective and distributed creativity processes, thereby contributing to the conceptualization of organizational creativity.
Abstract:
The objective of this study was to optimize and validate a solid-liquid extraction (ESL) technique for the determination of picloram residues in soil samples. In the optimization stage, the optimal conditions for extraction of the soil samples were determined using univariate analysis; the soil/extraction-solution ratio, the type and time of agitation, and the ionic strength and pH of the extraction solution were evaluated. Based on the optimized parameters, the following method for extraction and analysis of picloram was developed: weigh 2.00 g of soil, dried and sieved through a 2.0 mm mesh; add 20.0 mL of 0.5 mol L-1 KCl; shake the bottle in a vortex mixer for 10 seconds to form a suspension and adjust the pH to 7.00 with 0.1 mol L-1 KOH; homogenize the system in a shaker for 60 minutes and then let it stand for 10 minutes; centrifuge the bottles for 10 minutes at 3,500 rpm. After the soil particles have settled and the supernatant extract has cleared, an aliquot is withdrawn and analyzed by high-performance liquid chromatography. The optimized method was validated by determining its selectivity, linearity, detection and quantification limits, precision and accuracy. The ESL methodology was efficient for the analysis of residues of the pesticide studied, with recoveries above 90%. The limits of detection and quantification were 20.0 and 66.0 mg kg-1 soil for the PVA soil, and 40.0 and 132.0 mg kg-1 soil for the VLA soil. The coefficients of variation (CV) were 2.32 and 2.69 for the PVA and TH soils, respectively. The methodology resulted in low organic solvent consumption and cleaner extracts, and no purification steps were required before chromatographic analysis. The parameters evaluated in the validation process indicate that the ESL methodology is efficient for the extraction of picloram residues from soils, with low limits of detection and quantification.
Abstract:
The amount of biological data has grown exponentially in recent decades. Modern biotechnologies, such as microarrays and next-generation sequencing, are capable of producing massive amounts of biomedical data in a single experiment. As the amount of data grows rapidly, there is an urgent need for reliable computational methods for analyzing and visualizing it. This thesis addresses that need by studying how to efficiently and reliably analyze and visualize high-dimensional data, especially data obtained from gene expression microarray experiments. First, we study ways to improve the quality of microarray data by replacing (imputing) the missing data entries with estimated values. Missing value imputation is commonly used to make incomplete data complete, thus making it easier to analyze with statistical and computational methods. Our novel approach was to use curated external biological information as a guide for the missing value imputation. Second, we studied the effect of missing value imputation on downstream data analysis methods such as clustering. We compared multiple recent imputation algorithms on 8 publicly available microarray data sets. It was observed that missing value imputation is indeed a rational way to improve the quality of biological data. The research revealed differences between the clustering results obtained with different imputation methods. On most data sets, the simple and fast k-NN imputation was good enough, but there was also a need for more advanced imputation methods, such as the Bayesian Principal Component Algorithm (BPCA). Finally, we studied the visualization of biological network data. Biological interaction networks are an example of the outcome of multiple biological experiments, such as gene microarray studies. Such networks are typically very large and highly connected, so fast algorithms are needed for producing visually pleasing layouts. A computationally efficient way to produce layouts of large biological interaction networks was developed; the algorithm uses multilevel optimization within a standard force-directed graph layout algorithm.
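As a small illustration of the imputation step, the sketch below uses scikit-learn's KNNImputer as a stand-in for the k-NN method compared in the thesis; the expression matrix and missingness pattern are synthetic:

```python
# Minimal sketch of k-NN missing value imputation for an expression matrix
# (rows ~ genes, columns ~ samples); synthetic data, not the thesis data sets.
import numpy as np
from sklearn.impute import KNNImputer

rng = np.random.default_rng(0)
expr = rng.normal(size=(100, 8))           # hypothetical log-expression values
mask = rng.random(expr.shape) < 0.05       # ~5% of the entries go missing
expr_missing = expr.copy()
expr_missing[mask] = np.nan

imputer = KNNImputer(n_neighbors=10)       # average over the 10 nearest rows
expr_imputed = imputer.fit_transform(expr_missing)

rmse = np.sqrt(np.mean((expr_imputed[mask] - expr[mask]) ** 2))
print(f"imputation RMSE on the masked entries: {rmse:.3f}")
```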
Abstract:
Switching power supplies are usually implemented with control circuitry that uses a constant clock frequency to turn the power semiconductor switches on and off. A drawback of this customary operating principle is that the switching frequency and its harmonics are present in both the conducted and the radiated EMI spectrum of the power converter. Various variable-frequency techniques have been introduced during the last decade to overcome this EMC problem. The main objective of this study was to compare the EMI and steady-state performance of a switch-mode power supply under different spread-spectrum/variable-frequency methods. Another goal was to find suitable tools for variable-frequency EMI analysis. The thesis can be divided into three main parts. First, some aspects of spectral estimation and measurement are presented. Second, selected spread-spectrum generation techniques are presented with simulations and background information. Finally, simulations and prototype measurements of the EMC and steady-state performance are carried out in the last part of the work. A combination of the autocorrelation function, the Welch spectrum estimate and the spectrogram was used as a substitute for ordinary Fourier methods in the EMC analysis. It was also shown that the switching function can be used in preliminary EMC analysis of an SMPS, and that the spectrum and autocorrelation sequence of the switching function correlate with the final EMI spectrum. The work is based on numerous simulations and measurements, all made with a boost DC/DC converter prototype. Four variable-frequency modulation techniques in six configurations were analyzed, and their EMI performance was compared to constant-frequency operation. Output voltage and input current waveforms were also analyzed in the time domain to see the effect of spread-spectrum operation on these quantities. According to the results presented in this work, spread-spectrum modulation can be utilized in a power converter for EMI mitigation. The results from the steady-state voltage measurements show that variable-frequency operation of the SMPS affects the voltage ripple, but the ripple measured from the prototype is still acceptable for some applications. Both current and voltage ripple can be controlled with proper main circuit and controller design.
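A short sketch of the spectral-estimation tooling is given below, comparing the Welch spectrum of a constant-frequency switching function with a randomly frequency-modulated one; the switching frequency, modulation depth and record length are illustrative assumptions, not the prototype's values:

```python
# Sketch: Welch PSD of a constant-frequency vs. a spread-spectrum switching
# function, showing how frequency modulation lowers the spectral peak.
import numpy as np
from scipy import signal

fs = 10e6                       # sampling rate, 10 MHz (illustrative)
t = np.arange(0, 0.02, 1 / fs)  # 20 ms record
f_sw = 100e3                    # nominal 100 kHz switching frequency

# Constant-frequency switching function (zero-mean square wave).
sw_const = np.sign(np.sin(2 * np.pi * f_sw * t))

# Spread-spectrum version: slow random walk of the instantaneous frequency.
rng = np.random.default_rng(0)
f_dev = 10e3 * np.cumsum(rng.standard_normal(t.size)) / np.sqrt(t.size)
phase = 2 * np.pi * np.cumsum(f_sw + f_dev) / fs
sw_spread = np.sign(np.sin(phase))

for name, x in [("constant", sw_const), ("spread", sw_spread)]:
    f, pxx = signal.welch(x, fs=fs, nperseg=8192)
    peak = pxx.max()
    print(f"{name:>8}: peak PSD {10 * np.log10(peak):.1f} dB "
          f"near {f[np.argmax(pxx)] / 1e3:.0f} kHz")
```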
Abstract:
The objective of this project was to introduce a new software product to the pulp industry, a new market for the case company. An optimization-based scheduling tool has been developed to allow pulp operations to better control their production processes and to improve both production efficiency and stability. Both this work and earlier research indicate a potential for savings of around 1-5%. All the supporting data are already available, coming from distributed control systems, data historians and other existing sources. The pulp mill model, together with the scheduler, allows what-if analyses of the impact and timely feasibility of various external actions, such as planned maintenance of any particular mill operation. The visibility gained from the model also proves to be a real benefit. The aim is to satisfy demand and gain extra profit while achieving the required customer service level. Research effort was put into understanding both the minimum features needed to satisfy the scheduling requirements of the industry and the overall existence of the market. A qualitative study was constructed to identify both the competitive situation and the requirements versus gaps in the market. It became clear that there is no such system on the marketplace today, and also that there is room to improve the target market's overall process efficiency through such a planning tool. The thesis also provides the case company with a better overall understanding of the different processes in this particular industry.
Abstract:
Identification of low-dimensional structures and main sources of variation from multivariate data are fundamental tasks in data analysis. Many methods aimed at these tasks involve the solution of an optimization problem. Thus, the objective of this thesis is to develop computationally efficient and theoretically justified methods for solving such problems. Most of the thesis is based on a statistical model in which ridges of the density estimated from the data are considered the relevant features. Finding ridges, which are generalized maxima, necessitates the development of advanced optimization methods. An efficient and convergent trust region Newton method for projecting a point onto a ridge of the underlying density is developed for this purpose. The method is utilized in a differential equation-based approach for tracing ridges and computing projection coordinates along them. The density estimation is done nonparametrically using Gaussian kernels, which allows the application of ridge-based methods under only mild assumptions on the underlying structure of the data. The statistical model and the ridge-finding methods are adapted to two different applications. The first is the extraction of curvilinear structures from noisy data mixed with background clutter. The second is a novel nonlinear generalization of principal component analysis (PCA) and its extension to time series data. The methods have a wide range of potential applications for which most earlier approaches are inadequate; examples include the identification of faults from seismic data and of filaments from cosmological data. The applicability of the nonlinear PCA to climate analysis and to the reconstruction of periodic patterns from noisy time series data is also demonstrated. Other contributions of the thesis include the development of an efficient semidefinite optimization method for embedding graphs into Euclidean space. The method produces structure-preserving embeddings that maximize interpoint distances. It is primarily developed for dimensionality reduction, but it also has potential applications in graph theory and various areas of physics, chemistry and engineering. The asymptotic behaviour of ridges and maxima of Gaussian kernel densities is also investigated as the kernel bandwidth approaches infinity; the results are applied to the nonlinear PCA and to finding significant maxima of such densities, which is a typical problem in visual object tracking.
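To illustrate the underlying kernel density model (not the trust region Newton ridge projection developed in the thesis), the following sketch drives a point toward a local maximum of a Gaussian kernel density estimate with simple fixed-point (mean-shift) iterations over synthetic curvilinear data:

```python
# Sketch: mode-seeking on a Gaussian kernel density estimate via fixed-point
# (mean-shift) iterations; a simpler stand-in for the ridge projection method.
import numpy as np

def mean_shift_point(x0, data, bandwidth=0.5, iters=100, tol=1e-8):
    """Move x0 toward a local maximum of the Gaussian KDE built from `data`."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        w = np.exp(-np.sum((data - x) ** 2, axis=1) / (2 * bandwidth ** 2))
        x_new = (w[:, None] * data).sum(axis=0) / w.sum()  # weighted mean update
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

# Hypothetical 2-D data forming a noisy curvilinear structure.
rng = np.random.default_rng(0)
s = rng.uniform(-2, 2, size=400)
data = np.column_stack([s, np.sin(s)]) + 0.1 * rng.standard_normal((400, 2))

print(mean_shift_point([0.5, 1.2], data))  # drifts toward the dense curve
```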