919 results for Search Algorithm
Abstract:
Support Vector Machines (SVMs) have achieved very good performance on different learning problems. However, the success of SVMs depends on the adequate choice of the values of a number of parameters (e.g., the kernel and regularization parameters). In the current work, we propose the combination of meta-learning and search algorithms to deal with the problem of SVM parameter selection. In this combination, given a new problem to be solved, meta-learning is employed to recommend SVM parameter values based on parameter configurations that have been successfully adopted in previous similar problems. The parameter values returned by meta-learning are then used as initial search points by a search technique, which further explores the parameter space. In this proposal, we envision that the initial solutions provided by meta-learning are located in good regions of the search space (i.e., they are close to optimal solutions). Hence, the search algorithm needs to evaluate fewer candidate solutions when looking for an adequate solution. In this work, we investigate the combination of meta-learning with two search algorithms: Particle Swarm Optimization and Tabu Search. The implemented hybrid algorithms were used to select the values of two SVM parameters in the regression domain. These combinations were compared with the use of the search algorithms without meta-learning. The experimental results on a set of 40 regression problems showed that, on average, the proposed hybrid methods obtained lower error rates when compared to their components applied in isolation.
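The hybrid scheme described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the meta-learned seed configurations are hypothetical placeholders, the search is a bare-bones Particle Swarm Optimization over (log2 C, log2 gamma), and cross-validated error on a synthetic regression set stands in for the evaluation on the 40 benchmark problems.

    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVR

    X, y = make_regression(n_samples=200, n_features=10, noise=0.1, random_state=0)

    def fitness(params):
        # Negative MSE over 3-fold CV; params = (log2 C, log2 gamma).
        c, g = 2.0 ** params[0], 2.0 ** params[1]
        return cross_val_score(SVR(C=c, gamma=g), X, y,
                               scoring="neg_mean_squared_error", cv=3).mean()

    rng = np.random.default_rng(0)
    # Hypothetical meta-learning suggestions (log2 C, log2 gamma) from similar datasets.
    meta_seeds = np.array([[3.0, -2.0], [5.0, -4.0], [1.0, -1.0]])
    swarm = np.vstack([meta_seeds, rng.uniform(-5, 10, size=(7, 2))])
    vel = np.zeros_like(swarm)
    pbest, pbest_fit = swarm.copy(), np.array([fitness(p) for p in swarm])
    gbest = pbest[pbest_fit.argmax()]

    for _ in range(10):                       # a few PSO iterations
        r1, r2 = rng.random(swarm.shape), rng.random(swarm.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - swarm) + 1.5 * r2 * (gbest - swarm)
        swarm = np.clip(swarm + vel, -5, 10)
        fit = np.array([fitness(p) for p in swarm])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = swarm[improved], fit[improved]
        gbest = pbest[pbest_fit.argmax()]

    print("best (log2 C, log2 gamma):", gbest, "CV score:", pbest_fit.max())

Seeding part of the swarm from recommended configurations, rather than initialising purely at random, is what lets the search reach an adequate solution with fewer fitness evaluations, which is the effect the abstract reports.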
Abstract:
The field of linear optical quantum computation (LOQC) will soon need a repertoire of experimental milestones. We make progress in this direction by describing several experiments based on Grover's algorithm. These experiments range from a relatively simple implementation using only a single nonscalable controlled-NOT (CNOT) gate to the most complex, requiring two concatenated scalable CNOT gates, and thus form a useful set of early milestones for LOQC. We also give a complete description of basic LOQC using polarization-encoded qubits, making use of many simplifications to the original scheme of Knill, Laflamme, and Milburn [E. Knill, R. Laflamme, and G. J. Milburn, Nature (London) 409, 46 (2001)].
Abstract:
This paper considers the problem of concept generalization in decision-making systems in which such features of real-world databases as large size, incompleteness and inconsistency of the stored information are taken into account. Methods from rough set theory (such as lower and upper approximations, positive regions and reducts) are used to solve this problem. A new discretization algorithm for continuous attributes is proposed. It significantly increases the overall performance of generalization algorithms and can be applied to the processing of real-valued attributes in large data tables. A search algorithm for significant attributes, combined with the discretization stage, is also developed; it avoids splitting the continuous domains of insignificant attributes into intervals.
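The rough-set notions the abstract builds on can be made concrete with a small sketch. The toy decision table and attribute names below are invented for illustration; the paper's own discretization and attribute-selection algorithms are not reproduced here.

    from collections import defaultdict

    def approximations(objects, attrs, concept):
        # Group objects into indiscernibility classes by their values on `attrs`.
        classes = defaultdict(set)
        for name, row in objects.items():
            classes[tuple(row[a] for a in attrs)].add(name)
        lower, upper = set(), set()
        for block in classes.values():
            if block <= concept:          # class entirely inside the concept
                lower |= block
            if block & concept:           # class overlaps the concept
                upper |= block
        return lower, upper

    # Toy decision table: two condition attributes, one target concept.
    table = {
        "x1": {"a": 1, "b": 0}, "x2": {"a": 1, "b": 0},
        "x3": {"a": 0, "b": 1}, "x4": {"a": 1, "b": 1},
    }
    concept = {"x1", "x2", "x4"}
    print(approximations(table, ["a", "b"], concept))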
Abstract:
ACM Computing Classification System (1998): I.2.8, G.1.6.
Abstract:
This paper proposes and investigates a metaheuristic tabu search algorithm (TSA) that generates optimal or near-optimal solution sequences for the feedback length minimization problem (FLMP) associated with a design structure matrix (DSM). The FLMP is a non-linear combinatorial optimization problem belonging to the NP-hard class, and therefore finding an exact optimal solution is very hard and time consuming, especially on medium and large problem instances. First, we introduce the subject and provide a review of the related literature and problem definitions. Using the tabu search method (TSM) paradigm, this paper presents a new tabu search algorithm that generates optimal or sub-optimal solutions for the feedback length minimization problem, using two different neighborhoods based on swapping two activities and shifting an activity to a different position. Furthermore, this paper includes numerical results for analyzing the performance of the proposed TSA and for setting appropriate values of its parameters. We then compare our results on benchmark problems with those already published in the literature. We conclude that the proposed tabu search algorithm is very promising because it outperforms the existing methods, and because no other tabu search method for the FLMP is reported in the literature. Applied to the process layer of multidimensional design structure matrices, the proposed tabu search algorithm proves to be a key optimization method for optimal product development.
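A minimal sketch of a tabu search over activity sequences with the two neighborhoods mentioned above (pairwise swaps and single-activity shifts) is given below. The feedback-length objective uses one common formulation, the sum of the distances spanned by dependencies that point backwards in the current ordering; the tenure, iteration count and toy DSM are illustrative values, not the parameters tuned in the paper.

    import itertools, random

    def feedback_length(order, dsm):
        # Sum of distances for dependencies that point "backwards" in the sequence
        # (activity i needs input from j, but j is scheduled after i).
        pos = {a: k for k, a in enumerate(order)}
        total = 0
        for i, j in itertools.product(order, repeat=2):
            if dsm[i][j] and pos[j] > pos[i]:
                total += pos[j] - pos[i]
        return total

    def neighbours(order):
        n = len(order)
        for i in range(n):                       # swap two activities
            for j in range(i + 1, n):
                s = list(order); s[i], s[j] = s[j], s[i]
                yield ("swap", i, j), tuple(s)
        for i in range(n):                       # shift one activity elsewhere
            for j in range(n):
                if i == j:
                    continue
                s = list(order); s.insert(j, s.pop(i))
                yield ("shift", i, j), tuple(s)

    def tabu_search(dsm, iters=200, tenure=7, seed=0):
        random.seed(seed)
        current = tuple(range(len(dsm)))
        best, best_cost = current, feedback_length(current, dsm)
        tabu = {}
        for it in range(iters):
            candidates = []
            for move, cand in neighbours(current):
                cost = feedback_length(cand, dsm)
                if tabu.get(move, -1) >= it and cost >= best_cost:
                    continue                     # tabu and no aspiration
                candidates.append((cost, move, cand))
            if not candidates:
                break
            cost, move, current = min(candidates)
            tabu[move] = it + tenure             # forbid reversing this move for a while
            if cost < best_cost:
                best, best_cost = current, cost
        return best, best_cost

    dsm = [[0, 1, 0, 0],
           [0, 0, 1, 1],
           [1, 0, 0, 0],
           [0, 0, 1, 0]]
    print(tabu_search(dsm))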
Abstract:
Non-orthogonal multiple access (NOMA) is emerging as a promising multiple access technology for fifth-generation cellular networks to address the fast-growing mobile data traffic. It applies superposition coding at the transmitter, allowing simultaneous allocation of the same frequency resource to multiple intra-cell users. Successive interference cancellation is used at the receivers to cancel intra-cell interference. User pairing and power allocation (UPPA) is a key design aspect of NOMA. Existing UPPA algorithms are mainly based on exhaustive search, which carries substantial computational complexity and can severely affect NOMA performance. A fast proportional fairness (PF) scheduling based UPPA algorithm is proposed to address this problem. The novel idea is to form user pairs around the users with the highest PF metrics, with a pre-configured fixed power allocation. System-level simulation results show that the proposed algorithm is significantly faster than the existing exhaustive search algorithm (seven times faster for the scenario with 20 users) with negligible throughput loss.
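A simplified sketch of the pairing rule is shown below. The channel model, the choice of partner (largest channel-gain difference from the anchor user), the fixed 0.8/0.2 power split and the two-user NOMA rate expressions (noise normalised to one) are assumptions for illustration and do not reproduce the authors' system-level simulator.

    import numpy as np

    rng = np.random.default_rng(1)
    n_users, n_subbands = 20, 10
    gains = rng.exponential(1.0, n_users)            # channel gains |h|^2 / noise
    avg_thr = rng.uniform(0.5, 2.0, n_users)         # past average throughput

    pf = np.log2(1 + gains) / avg_thr                # proportional-fairness metric
    order = np.argsort(pf)[::-1]                     # highest PF metric first

    alpha_strong, alpha_weak = 0.2, 0.8              # pre-configured power split
    pairs = []
    unused = list(order)
    while len(unused) >= 2 and len(pairs) < n_subbands:
        anchor = unused.pop(0)                       # user with the highest PF metric
        # Pair with the remaining user whose channel differs most from the anchor.
        partner = max(unused, key=lambda u: abs(gains[anchor] - gains[u]))
        unused.remove(partner)
        strong, weak = (anchor, partner) if gains[anchor] >= gains[partner] else (partner, anchor)
        r_strong = np.log2(1 + alpha_strong * gains[strong])            # SIC removes the weak user
        r_weak = np.log2(1 + alpha_weak * gains[weak] /
                         (alpha_strong * gains[weak] + 1))              # strong user seen as interference
        pairs.append((strong, weak, r_strong, r_weak))

    for s, w, rs, rw in pairs:
        print(f"pair ({s},{w}): rates {rs:.2f}, {rw:.2f} bit/s/Hz")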
Abstract:
Unmanned Aerial Vehicles (UAVs) are emerging as an ideal platform for a wide range of civil applications such as disaster monitoring, atmospheric observation and outback delivery. However, the operation of UAVs is currently restricted to specially segregated regions of airspace outside of the National Airspace System (NAS). Mission Flight Planning (MFP) is an integral part of UAV operation that addresses some of the requirements (such as safety and the rules of the air) of integrating UAVs in the NAS. Automated MFP is a key enabler for a number of UAV operating scenarios as it aids in increasing the level of onboard autonomy. For example, onboard MFP is required to ensure continued conformance with the NAS integration requirements when there is an outage in the communications link. MFP is a motion planning task concerned with finding a path between a designated start waypoint and goal waypoint. This path is described with a sequence of 4-Dimensional (4D) waypoints (three spatial and one time dimension) or equivalently with a sequence of trajectory segments (or tracks). It is necessary to consider the time dimension as the UAV operates in a dynamic environment. Existing methods for generic motion planning, UAV motion planning and general vehicle motion planning cannot adequately address the requirements of MFP. The flight plan needs to optimise for multiple decision objectives including mission safety objectives, the rules of the air and mission efficiency objectives. Online (in-flight) replanning capability is needed as the UAV operates in a large, dynamic and uncertain outdoor environment. This thesis derives a multi-objective 4D search algorithm entitled Multi-Step A* (MSA*) based on the seminal A* search algorithm. MSA* is proven to find the optimal (least-cost) path given a variable successor operator (which enables arbitrary track angle and track velocity resolution). Furthermore, it is shown to be of comparable complexity to multi-objective, vector neighbourhood-based A* (Vector A*, an extension of A*). A variable successor operator enables the imposition of a multi-resolution lattice structure on the search space (which results in fewer search nodes). Unlike cell decomposition-based methods, soundness is guaranteed with multi-resolution MSA*. MSA* is demonstrated through Monte Carlo simulations to be computationally efficient. It is shown that multi-resolution, lattice-based MSA* finds paths of equivalent cost (less than 0.5% difference) to Vector A* (the benchmark) in a third of the computation time (on average). This is the first contribution of the research. The second contribution is the discovery of the additive consistency property for planning with multiple decision objectives. Additive consistency ensures that the planner is not biased (which results in a suboptimal path) by ensuring that the cost of traversing a track using one step equals that of traversing the same track using multiple steps. MSA* mitigates uncertainty through online replanning, Multi-Criteria Decision Making (MCDM) and tolerance. Each trajectory segment is modeled with a cell sequence that completely encloses the trajectory segment. The tolerance, measured as the minimum distance between the track and cell boundaries, is the third major contribution. Even though MSA* is demonstrated for UAV MFP, it is extensible to other 4D vehicle motion planning applications. Finally, the research proposes a self-scheduling replanning architecture for MFP.
This architecture replicates the decision strategies of human experts to meet the time constraints of online replanning. Based on a feedback loop, the proposed architecture switches between fast, near-optimal planning and optimal planning to minimise the need for hold manoeuvres. The derived MFP framework is original and shown, through extensive verification and validation, to satisfy the requirements of UAV MFP. As MFP is an enabling factor for operation of UAVs in the NAS, the presented work is both original and significant.
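The additive consistency property mentioned above can be illustrated with a small numerical check: if every decision objective accrues linearly along a track, the weighted cost of flying the track in one step equals the cost of flying it as several sub-steps. The objective weights and per-distance rates below are arbitrary placeholders, not values from the thesis.

    import numpy as np

    weights = np.array([0.5, 0.3, 0.2])              # e.g. risk, fuel and time weights

    def segment_cost(p, q):
        # Each objective accrues linearly with distance, so the weighted sum
        # is additive along a straight track.
        d = np.linalg.norm(np.asarray(q) - np.asarray(p))
        objectives = np.array([0.02 * d, 0.1 * d, d / 50.0])
        return float(weights @ objectives)

    a, b, c = (0.0, 0.0), (3.0, 4.0), (6.0, 8.0)     # b lies on the track a -> c
    one_step = segment_cost(a, c)
    two_steps = segment_cost(a, b) + segment_cost(b, c)
    print(np.isclose(one_step, two_steps))           # True: additively consistent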
Abstract:
This paper presents Multi-Step A* (MSA*), a search algorithm based on A* for multi-objective 4D vehicle motion planning (three spatial and one time dimension). The research is principally motivated by the need for offline and online motion planning for autonomous Unmanned Aerial Vehicles (UAVs). For UAVs operating in large, dynamic and uncertain 4D environments, the motion plan consists of a sequence of connected linear tracks (or trajectory segments). The track angle and velocity are important parameters that are often restricted by assumptions and grid geometry in conventional motion planners. Many existing planners also fail to incorporate multiple decision criteria and constraints such as wind, fuel, dynamic obstacles and the rules of the air. It is shown that MSA* finds a cost-optimal solution using variable length, angle and velocity trajectory segments. These segments are approximated with a grid-based cell sequence that provides an inherent tolerance to uncertainty. Computational efficiency is achieved by using variable successor operators to create a multi-resolution, memory-efficient lattice sampling structure. Simulation studies on the UAV flight planning problem show that MSA* meets the time constraints of online replanning and finds paths of equivalent cost but in a quarter of the time (on average) of vector neighbourhood-based A*.
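A generic sketch of A* with a variable successor operator is given below; it is not MSA* itself (there is no multi-objective cost, wind or obstacle model), but it shows how emitting several step lengths per heading from a single expansion produces the multi-resolution sampling of the search space that the abstract describes. The headings, step lengths and bounding box are illustrative choices.

    import heapq, math

    # 8 headings; the successor operator emits several step lengths per heading,
    # so a single expansion can jump across many grid cells ("multi-step" moves).
    HEADINGS = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]

    def multi_step_astar(start, goal, step_lengths=(1, 2, 4), bound=50):
        def h(p):                                    # admissible Euclidean heuristic
            return math.dist(p, goal)
        open_heap = [(h(start), 0.0, start, [start])]
        best_g = {start: 0.0}
        while open_heap:
            f, g, node, path = heapq.heappop(open_heap)
            if node == goal:
                return path, g
            if g > best_g.get(node, math.inf):
                continue                             # stale heap entry
            for dx, dy in HEADINGS:
                for L in step_lengths:
                    nxt = (node[0] + L * dx, node[1] + L * dy)
                    if max(abs(nxt[0]), abs(nxt[1])) > bound:
                        continue
                    ng = g + L * math.hypot(dx, dy)  # cost = distance flown
                    if ng < best_g.get(nxt, math.inf):
                        best_g[nxt] = ng
                        heapq.heappush(open_heap, (ng + h(nxt), ng, nxt, path + [nxt]))
        return None, math.inf

    path, cost = multi_step_astar((0, 0), (7, 5))
    print(path, round(cost, 2))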
Abstract:
Genetic research of complex diseases is a challenging, but exciting, area of research. Early development in this area was limited until the completion of the Human Genome and HapMap projects, together with the reduction in the cost of genotyping, paved the way for understanding the genetic composition of complex diseases. In this thesis, we focus on statistical methods for two aspects of genetic research: phenotype definition for diseases with complex etiology, and methods for identifying potentially associated Single Nucleotide Polymorphisms (SNPs) and SNP-SNP interactions. With regard to phenotype definition for diseases with complex etiology, we first investigated the effects of different statistical phenotyping approaches on the subsequent analysis. In light of the findings, and the difficulties in validating the estimated phenotype, we proposed two different methods for reconciling phenotypes of different models using Bayesian model averaging as a coherent mechanism for accounting for model uncertainty. In the second part of the thesis, the focus turns to methods for identifying associated SNPs and SNP interactions. We review the use of Bayesian logistic regression with variable selection for SNP identification and extend the model to detect interaction effects in population-based case-control studies. In this part of the study, we also develop a machine learning algorithm to cope with large-scale data analysis, namely modified Logic Regression with Genetic Program (MLR-GEP), which is then compared with the Bayesian model, Random Forests and other variants of logic regression.
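One ingredient mentioned above, averaging over candidate SNP models rather than committing to a single one, can be sketched with BIC-based approximate posterior model weights. The simulated genotypes, the restriction to models of at most two SNPs, and the use of scikit-learn's LogisticRegression as a stand-in for a full Bayesian fit are all assumptions made purely for illustration.

    import numpy as np
    from itertools import combinations
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n, snps = 300, 5
    X = rng.integers(0, 3, size=(n, snps)).astype(float)     # genotypes coded 0/1/2
    logit = -0.5 + 0.9 * X[:, 0] - 0.7 * X[:, 2]
    y = rng.random(n) < 1 / (1 + np.exp(-logit))              # simulated case/control status

    def bic(cols):
        model = LogisticRegression(C=1e6, max_iter=1000).fit(X[:, cols], y)  # ~unpenalised MLE
        p = model.predict_proba(X[:, cols])[:, 1]
        loglik = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
        return -2 * loglik + (len(cols) + 1) * np.log(n)

    # Candidate models: all subsets of up to two SNPs.
    models = [c for k in (1, 2) for c in combinations(range(snps), k)]
    bics = np.array([bic(list(c)) for c in models])
    weights = np.exp(-0.5 * (bics - bics.min()))
    weights /= weights.sum()                                  # approximate posterior model probabilities

    # Posterior inclusion probability of each SNP under model averaging.
    incl = [sum(w for m, w in zip(models, weights) if j in m) for j in range(snps)]
    print(np.round(incl, 3))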
Abstract:
The main aim of this thesis is to analyse and optimise a public hospital Emergency Department. The Emergency Department (ED) is a complex system with limited resources and high demand for those resources. Adding to the complexity is the stochastic nature of almost every element and characteristic of the ED. The interaction with other functional areas also complicates the system, as these areas have a huge impact on the ED while the ED is powerless to change them. It is therefore imperative that Operations Research (OR) be applied to the ED to improve performance within the constraints of the system. The main characteristics of the system to optimise included tardiness, adherence to waiting time targets, access block and length of stay. A validated and verified simulation model was built to model the real-life system. This enabled detailed analysis of resources and patient flow without disruption to the actual ED, and allowed a wide range of ED policies and resource configurations to be investigated. Of particular interest were the number and type of beds in the ED and the shift times of physicians. Notably, neither of these resources works in isolation; to optimise the system, both need to be investigated in tandem. The ED was likened to a flow shop scheduling problem, with patients and beds analogous to the jobs and machines typically found in manufacturing problems. This enabled an analytic scheduling approach. Constructive heuristics were developed to reactively schedule the system in real time, and these improved the performance of the system. Metaheuristics that optimised the system were also developed and analysed. An innovative hybrid Simulated Annealing and Tabu Search algorithm was developed that outperformed both simulated annealing and tabu search individually by combining some of their features. The new algorithm achieves a better solution and does so in a shorter time.
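A toy version of such a hybrid is sketched below for a single-queue tardiness objective: candidate swaps are accepted with the usual simulated annealing rule, but recently applied swaps are held on a tabu list and only allowed again if they beat the best solution found so far. The treatment times, targets and parameter values are invented and far simpler than the validated ED simulation model.

    import math, random

    random.seed(0)
    proc = [random.randint(5, 30) for _ in range(12)]      # treatment times [min]
    due = [random.randint(20, 150) for _ in range(12)]     # waiting-time targets [min]

    def tardiness(seq):
        t, total = 0, 0
        for j in seq:
            t += proc[j]
            total += max(0, t - due[j])
        return total

    def hybrid_sa_tabu(iters=5000, temp=50.0, cooling=0.999, tenure=25):
        current = list(range(len(proc)))
        best, best_cost = current[:], tardiness(current)
        cur_cost, tabu = best_cost, {}
        for it in range(iters):
            i, j = random.sample(range(len(proc)), 2)
            move = (min(i, j), max(i, j))
            cand = current[:]
            cand[i], cand[j] = cand[j], cand[i]
            cost = tardiness(cand)
            if tabu.get(move, -1) >= it and cost >= best_cost:
                temp *= cooling
                continue                                   # tabu move and no aspiration
            if cost <= cur_cost or random.random() < math.exp((cur_cost - cost) / temp):
                current, cur_cost = cand, cost             # simulated annealing acceptance rule
                tabu[move] = it + tenure                   # forbid re-swapping for a while
                if cost < best_cost:
                    best, best_cost = cand[:], cost
            temp *= cooling
        return best, best_cost

    print(hybrid_sa_tabu()[1])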
Abstract:
A comprehensive one-dimensional meanline design approach for radial inflow turbines is described in the present work. An original code was developed in Python that takes a novel approach to the automatic selection of feasible machines based on pre-defined performance or geometry characteristics for a given application. It comprises a brute-force search algorithm that traverses the entire search space based on key non-dimensional parameters and rotational speed. In this study, an in-depth analysis and subsequent implementation of relevant loss models, as well as selection criteria for radial inflow turbines, is addressed. Comparison with previously published designs, as well as other available codes, showed good agreement. Sample (real and theoretical) test cases were trialed, and the results again agreed well with the other available codes. The presented approach was found to be valid, and the model proved a useful tool for the preliminary design and performance estimation of radial inflow turbines, enabling its integration with other thermodynamic cycle analysis and three-dimensional blade design codes.
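The brute-force traversal described above can be sketched as follows. The design-point values, parameter ranges, feasibility filters and the efficiency correlation are placeholders standing in for the loss models and selection criteria implemented in the original code.

    import math
    from itertools import product

    # Design point (placeholder values): isentropic enthalpy drop [J/kg] and
    # exit volumetric flow rate [m^3/s] for the target cycle.
    dh_s, Q = 60e3, 0.05

    designs = []
    for ns, nu, rpm in product(
            [x / 100 for x in range(30, 81, 5)],        # specific speed (dimensionless)
            [x / 100 for x in range(55, 81, 5)],        # velocity ratio U/C0
            range(20000, 80001, 5000)):                 # shaft speed [rpm]
        omega = rpm * 2 * math.pi / 60
        c0 = math.sqrt(2 * dh_s)                        # spouting velocity
        u_tip = nu * c0
        d_tip = 2 * u_tip / omega                       # rotor inlet diameter [m]
        ns_check = omega * math.sqrt(Q) / dh_s ** 0.75  # dimensionless specific speed
        eta = 0.87 - 2.0 * (nu - 0.7) ** 2              # placeholder efficiency correlation
        if abs(ns_check - ns) > 0.02:                   # keep only consistent combinations
            continue
        if 0.02 <= d_tip <= 0.25 and eta > 0.80:        # geometry / performance filters
            designs.append((eta, d_tip, rpm, ns, nu))

    for eta, d, rpm, ns, nu in sorted(designs, reverse=True)[:5]:
        print(f"eta={eta:.3f}  D={d*1000:.1f} mm  N={rpm} rpm  Ns={ns:.2f}  nu={nu:.2f}")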
Abstract:
Proper functioning of Insulated Rail Joints (IRJs) is essential for the safe operation of railway signalling systems and broken rail identification circuitry. The Conventional IRJ (CIRJ) resembles a structural butt joint consisting of two pieces of rail connected through two joint bars on either side of their web, with the assembly held together by pre-tensioned bolts. As the IRJs must maintain electrical insulation between the two rails, a gap between the rail ends has to be retained at all times and all metal contacting surfaces have to be electrically isolated from each other using non-conductive material. At the gap, the rail ends lose longitudinal continuity and hence the vertical sections of the rail ends, especially at the railhead, are often severely damaged by the passage of wheels compared to continuously welded rail sections. Fundamentally, the severe damage can be attributed to the singularities of the wheel-rail contact pressure and the railhead stress. None of the new-generation designs that have emerged in the market to date have focussed on this fundamental issue; they have only paid attention to higher-strength materials or to the thickness of the sections of various IRJ components. In this thesis, a novel method of railhead shape optimisation is developed to eliminate the pressure and stress singularities by changing the original sharp-cornered railhead into an arc profile in the longitudinal direction. The optimal shape of the longitudinal railhead profile has been determined using three non-gradient methods, chosen for accuracy and efficiency: (1) the Grid Search Method; (2) the Genetic Algorithm Method and (3) the Hybrid Genetic Algorithm Method. All these methods have been coupled with a parametric finite element formulation for the evaluation of the objective function at each iteration or generation, depending on the search algorithm employed. The optimal shape derived from these optimisation methods is termed the Stress Minimised Railhead (SMRH) in this thesis. This optimal SMRH design exhibits a significantly reduced stress concentration that remains well below the yield strength of head-hardened rail steels, and shifts the stress concentration away from the critical zone of the railhead end. The reduction in magnitude and the relocation of the stress concentration in the SMRH design have been validated through a full-scale wheel-railhead interaction test rig; railhead strains under the loaded wheels have been recorded using a non-contact digital image correlation method. The experimental study has confirmed the accuracy of the numerical predictions. Although the SMRH-shaped IRJs eliminate stress singularities, they can still fail due to joint bar or bolt hole cracking; therefore, another conceptual design, termed the Embedded IRJ (EIRJ) in this thesis, with no joint bars and no pre-tensioned bolts, has been developed using a multi-objective optimisation formulation based on the coupled genetic algorithm and parametric finite element method. To achieve the required structural stiffness for the safe passage of loaded wheels, the rails were embedded into the concrete of post-tensioned sleepers; the optimal solutions for the design of the EIRJ are shown to simplify the design through the elimination of the complex interactions and failure modes of the various structural components of the CIRJ.
The practical applicability of the optimal shapes SMRH and EIRJ is demonstrated through two illustrative examples, termed improved designs (IMD1 & IMD2) in this thesis; IMD1 combines the CIRJ and SMRH designs, whilst IMD2 combines the EIRJ and SMRH designs. These two improved designs have been simulated for two key operating parameters (speed and wagon load) and one design parameter (wheel diameter) that affect the wheel-rail contact; the effect of these parameters on the performance of the two improved designs has been found to be negligible, and the improved designs are in turn found to be far superior to the current CIRJ designs in terms of stress singularities and deformation under the passage of loaded wheels. Therefore, these improved designs are expected to provide a longer service life than the CIRJs.
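A compact sketch of the genetic-algorithm layer of such a shape optimisation is given below. The two design variables (arc radius and arc length), their bounds and the analytic peak_stress surrogate are hypothetical stand-ins for the parametric finite element evaluation used in the thesis.

    import random

    random.seed(1)
    BOUNDS = {"arc_radius_mm": (200.0, 3000.0), "arc_length_mm": (5.0, 60.0)}

    def peak_stress(radius, length):
        # Placeholder surrogate standing in for the parametric finite element solve:
        # gentler, longer arcs relieve the railhead-end stress concentration.
        return 600.0 / (1.0 + 0.002 * radius) + 4000.0 / (length + 20.0)

    def random_ind():
        return [random.uniform(*BOUNDS["arc_radius_mm"]),
                random.uniform(*BOUNDS["arc_length_mm"])]

    def genetic_search(pop_size=30, generations=40, mut_rate=0.2):
        pop = [random_ind() for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=lambda ind: peak_stress(*ind))          # lower stress is fitter
            parents = pop[: pop_size // 2]
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                child = [(x + y) / 2 for x, y in zip(a, b)]       # arithmetic crossover
                if random.random() < mut_rate:                    # bounded Gaussian mutation
                    k = random.randrange(2)
                    lo, hi = list(BOUNDS.values())[k]
                    child[k] = min(hi, max(lo, child[k] + random.gauss(0, (hi - lo) * 0.1)))
                children.append(child)
            pop = parents + children
        best = min(pop, key=lambda ind: peak_stress(*ind))
        return best, peak_stress(*best)

    print(genetic_search())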
Abstract:
The railway crew scheduling problem is the process of allocating train services to crew duties based on the published train timetable while satisfying operational and contractual requirements. The problem is restricted by many constraints and belongs to the NP-hard class. In this paper, we develop a mathematical model for railway crew scheduling with the aim of minimising the number of crew duties by reducing idle transition times. Duties are generated by arranging scheduled trips over a set of duties and sequentially ordering the set of trips within each duty. The optimisation model includes the time period of relief opportunities within which a train crew can be relieved at any relief point. Existing models and algorithms usually only consider relieving a crew at the beginning of the interval of relief opportunities, which may be impractical. This model involves a large number of decision variables and constraints, and therefore a constructive heuristic hybridised with a simulated annealing search algorithm is applied to yield an optimal or near-optimal schedule. The performance of the proposed algorithms is evaluated through computational experiments on randomly generated test instances. The results show that the proposed approaches obtain near-optimal solutions in reasonable computational time for large-sized problems.
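A sketch of the constructive part of such a hybrid is shown below: trips are greedily appended to the open duty that can take them with the least idle transition time, subject to simple connection and duty-spread limits. The trip generator, depots and limits are invented; a simulated annealing stage, as in the paper, would then refine this initial schedule by moving trips between duties.

    import random
    from operator import itemgetter

    random.seed(2)
    # Each trip: (departure time, arrival time, from-depot, to-depot), times in minutes.
    depots = ["A", "B", "C"]
    trips, t = [], 0
    for _ in range(40):
        t += random.randint(5, 25)
        origin = random.choice(depots)
        trips.append((t, t + random.randint(30, 90), origin, random.choice(depots)))

    MAX_DUTY = 480          # maximum spread of one duty [min]
    MIN_TRANSITION = 10     # minimum connection time between consecutive trips [min]

    def build_duties(trips):
        # Greedy constructive heuristic: assign each trip to the open duty that can
        # take it with the least idle (transition) time, otherwise open a new duty.
        duties = []
        for dep, arr, origin, dest in sorted(trips, key=itemgetter(0)):
            best, best_idle = None, None
            for duty in duties:
                last = duty[-1]
                idle = dep - last[1]
                feasible = (last[3] == origin and idle >= MIN_TRANSITION
                            and arr - duty[0][0] <= MAX_DUTY)
                if feasible and (best is None or idle < best_idle):
                    best, best_idle = duty, idle
            if best is None:
                duties.append([(dep, arr, origin, dest)])
            else:
                best.append((dep, arr, origin, dest))
        return duties

    duties = build_duties(trips)
    idle = sum(d[i + 1][0] - d[i][1] for d in duties for i in range(len(d) - 1))
    print(f"{len(duties)} duties, total idle transition time {idle} min")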