78 results for Multi-objective Optimization (MOO)


Relevance: 100.00%

Abstract:

A neurogenetic hybrid framework is developed in which the main components are artificial neural networks (ANNs) and genetic algorithms (GAs). The investigation covers a mode of combination, or hybridisation, between the two components called task hybridisation. Combining ANNs and GAs through task hybridisation leads to the development of a hybrid multilayer feedforward network, trained using supervised learning. This paper discusses the GA method used to optimize the process parameters, using the developed ANN as the process model, in a solder paste printing process, which is part of the surface mount technology (SMT) method. The results obtained show that the GA-based optimization method works well under various optimization criteria.
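The task-hybridisation loop can be sketched as a GA searching the input space of a trained surrogate. In this minimal sketch the quadratic `surrogate` merely stands in for the trained ANN process model, and the parameter ranges and GA settings are illustrative assumptions, not the paper's actual configuration:

```python
import random

random.seed(0)

def surrogate(params):
    # Stand-in for the trained ANN process model (illustrative only):
    # print quality is assumed to peak when both normalised
    # process parameters equal 0.5.
    return -((params[0] - 0.5) ** 2 + (params[1] - 0.5) ** 2)

def ga_optimize(fitness, n_params=2, pop_size=30, generations=60,
                mutation_rate=0.2):
    # Real-coded GA: binary tournament selection, uniform crossover,
    # clipped Gaussian mutation on parameters in [0, 1].
    pop = [[random.random() for _ in range(n_params)]
           for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        def tournament():
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        children = []
        while len(children) < pop_size:
            p1, p2 = tournament(), tournament()
            child = [p1[i] if random.random() < 0.5 else p2[i]
                     for i in range(n_params)]
            if random.random() < mutation_rate:
                i = random.randrange(n_params)
                child[i] = min(1.0, max(0.0, child[i] + random.gauss(0, 0.1)))
            children.append(child)
        pop = children
        gen_best = max(pop, key=fitness)
        if fitness(gen_best) > fitness(best):
            best = gen_best
    return best

best_params = ga_optimize(surrogate)
```

Replacing `surrogate` with a forward pass through the trained network gives the task-hybridised optimiser the abstract describes.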

Relevance: 100.00%

Abstract:

This paper proposes an efficient solution algorithm for realistic multi-objective median shortest path problems in the design of urban transportation networks. The problem formulation and solution algorithm are based on three realistic objectives, namely route (investment) cost, overall travel time of the entire network, and total toll revenue. The proposed solution approach combines heuristic labeling in criteria space with an exhaustive search technique in solution space. The former labels each node in terms of route cost and deletes cyclic and infeasible paths by imposing a cycle break and a route cost constraint, respectively. The latter deletes dominated paths in terms of the objective vector in order to identify the set of Pareto optimal paths. The approach thus produces a non-inferior solution set of Pareto optimal paths based on non-dominated objective vectors and leaves the final, purpose-specific decision to decision-makers. A numerical experiment is conducted to test the proposed algorithm on an artificial transportation network. Sensitivity analyses show that the proposed algorithm is more efficient than existing algorithms at finding a set of Pareto optimal paths for median shortest path problems.
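The dominated-path deletion step rests on Pareto dominance over objective vectors. A minimal sketch, assuming for illustration that all three objectives are minimised (the paper's treatment of toll revenue may differ):

```python
def dominates(u, v):
    # u dominates v if u is no worse in every objective and strictly
    # better in at least one (all objectives minimised here).
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def pareto_paths(paths):
    # paths: {path_id: (route_cost, travel_time, toll)}; keep only
    # paths whose objective vector is not dominated by any other.
    return {p: obj for p, obj in paths.items()
            if not any(dominates(o, obj) for q, o in paths.items() if q != p)}

# Hypothetical candidate paths surviving the labeling stage.
candidates = {
    "P1": (10, 30, 5),
    "P2": (12, 25, 5),
    "P3": (11, 32, 6),   # dominated by P1 in all three objectives
    "P4": (15, 25, 4),
}
front = pareto_paths(candidates)
```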

Relevance: 100.00%

Abstract:

Short-term load forecasting (STLF) is of great importance for the control and scheduling of electrical power systems. The uncertainty of power systems increases due to the random nature of climate and the penetration of renewable energies such as wind and solar power. Traditional methods for generating point forecasts of load demand cannot properly handle uncertainties in the data. To quantify the potential uncertainties associated with forecasts, this paper implements a neural network (NN)-based method for the construction of prediction intervals (PIs). A recently proposed method, called lower upper bound estimation (LUBE), is applied to develop PIs using NN models. The primary multi-objective problem is first transformed into a constrained single-objective problem; this formulation is closer to the original problem and has fewer parameters than the cost function. Particle swarm optimization (PSO) integrated with a mutation operator is used to solve the problem. Two case studies on historical load datasets from Singapore and New South Wales (Australia) are used to validate the PSO-based LUBE method. The results demonstrate that the proposed method constructs high-quality PIs for load forecasting applications.
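PI quality in the LUBE literature is typically judged by coverage (PICP) and normalised width (PINAW), and the constrained single-objective form penalises coverage below a nominal level. A sketch of these quantities, with an illustrative penalty constant not taken from the paper:

```python
def pi_quality(lower, upper, actual):
    # PICP: fraction of actual loads falling inside [lower, upper].
    # PINAW: mean interval width normalised by the target range.
    inside = sum(l <= y <= u for l, u, y in zip(lower, upper, actual))
    picp = inside / len(actual)
    rng = max(actual) - min(actual)
    pinaw = sum(u - l for l, u in zip(lower, upper)) / (len(actual) * rng)
    return picp, pinaw

def constrained_cost(picp, pinaw, mu=0.9):
    # Constrained single-objective form: minimise width subject to
    # coverage >= mu; violations are penalised heavily (penalty
    # weight 100 is an illustrative choice).
    return pinaw + (100.0 * (mu - picp) if picp < mu else 0.0)

# Hypothetical NN outputs (two output nodes: lower and upper bounds).
lo = [0.9, 1.8, 2.7, 3.6]
hi = [1.1, 2.2, 3.3, 4.4]
y  = [1.0, 2.0, 3.0, 4.5]   # the last load falls outside its interval
picp, pinaw = pi_quality(lo, hi, y)
```

A PSO particle would encode the NN weights, and `constrained_cost` would be its fitness.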

Relevance: 100.00%

Abstract:

Recently, much attention has been given to mass spectrometry (MS)-based disease classification, diagnosis, and protein biomarker identification. As with microarray-based investigations, proteomic data generated by such high-throughput experiments often have a high feature-to-sample ratio. Moreover, biological information and patterns are confounded by noise, redundancy, and outliers in the data. The development of algorithms and procedures for the analysis and interpretation of such data is therefore of paramount importance. In this paper, we propose a hybrid system for analyzing such high-dimensional data. The proposed method uses a k-means clustering-based feature extraction and selection procedure to bridge filter and wrapper selection methods. The potentially informative mass/charge (m/z) markers selected by filters are subjected to k-means clustering for correlation and redundancy reduction, and a multi-objective Genetic Algorithm selector is then employed to identify discriminative m/z markers among those generated by the clustering step. Experimental results indicate that the proposed method is suitable for m/z biomarker selection and MS-based sample classification.
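The redundancy-reduction step can be illustrated with a tiny one-dimensional k-means over candidate m/z values, keeping one representative marker per cluster. This is a simplified stand-in (it clusters raw m/z values rather than feature profiles, and assumes k >= 2):

```python
def kmeans_reps(values, k, iters=20):
    # Minimal 1-D k-means: cluster candidate m/z features and keep one
    # representative per cluster (redundancy reduction before the GA).
    vs = sorted(values)
    # Spread the initial centres across the value range (k >= 2 assumed).
    centers = [vs[i * (len(vs) - 1) // (k - 1)] for i in range(k)]
    clusters = [[] for _ in centers]
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    # Representative m/z per cluster: the member closest to its centre.
    return [min(c, key=lambda v: abs(v - centers[i]))
            for i, c in enumerate(clusters) if c]

# Hypothetical filter-selected m/z markers in three redundant groups.
mz = [100.1, 100.2, 100.3, 250.0, 250.4, 499.8, 500.1]
reps = kmeans_reps(mz, 3)
```

The surviving representatives would then be fed to the multi-objective GA selector.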

Relevance: 100.00%

Abstract:

Different data classification algorithms have been developed and applied in various areas to analyze and extract valuable information and patterns from large datasets with noise and missing values. However, none of them performs consistently well over all datasets, and ensemble methods have been suggested as a promising remedy. This paper proposes a novel hybrid algorithm that combines a multi-objective Genetic Algorithm (GA) with an ensemble classifier. The ensemble classifier, which consists of a decision tree classifier, an Artificial Neural Network (ANN) classifier, and a Support Vector Machine (SVM) classifier, serves as the classification committee, while the multi-objective GA acts as the feature selector, helping the ensemble improve overall sample classification accuracy while identifying the most important features in the dataset of interest. The proposed GA-Ensemble method is tested on three benchmark datasets and compared with each individual classifier as well as with methods based on mutual information theory, bagging, and boosting. The results suggest that the GA-Ensemble method outperforms the other algorithms and is a useful method for classification and feature selection problems.
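The committee's decision rule is plain majority voting over member predictions, and a GA chromosome can be read as a bit mask over features. A minimal sketch with hypothetical predictions:

```python
def majority_vote(predictions):
    # predictions: one label sequence per committee member
    # (e.g. decision tree, ANN, SVM); ties go to the first-seen label.
    voted = []
    for labels in zip(*predictions):
        counts = {}
        for lab in labels:
            counts[lab] = counts.get(lab, 0) + 1
        voted.append(max(counts, key=counts.get))
    return voted

def mask_features(samples, mask):
    # GA chromosome = bit mask over features; keep selected columns only.
    return [[x for x, m in zip(row, mask) if m] for row in samples]

# Hypothetical per-classifier predictions on four samples.
tree_pred = ["A", "B", "A", "B"]
ann_pred  = ["A", "A", "A", "B"]
svm_pred  = ["B", "B", "A", "A"]
combined = majority_vote([tree_pred, ann_pred, svm_pred])
```

The GA would evolve the mask, scoring each candidate by the committee's accuracy on masked data.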

Relevance: 100.00%

Abstract:

Agencies charged with nature conservation and with protecting built assets from fire face a policy dilemma because management that protects assets can have adverse impacts on biodiversity. Although conservation is often a policy goal, protecting built assets usually takes precedence in fire management implementation. To make decisions that better achieve both objectives, existing trade-offs must first be recognized, and then policies implemented to manage multiple objectives explicitly. We briefly review fire management actions that can conflict with biodiversity conservation. Through this review, we find that common management practices might not appreciably reduce the threat to built assets but could have a large negative impact on biodiversity. We develop a framework based on decision theory that could be applied to minimize these conflicts. Critical to this approach are (1) the identification of the full range of management options and (2) data for evaluating the effectiveness of those options in achieving asset protection and conservation goals. This information can be used to compare explicitly the effectiveness of different management choices for conserving species and for protecting assets, given budget constraints. The challenge now is to gather data to quantify these trade-offs so that fire policy and practice can be better aligned with multiple objectives.

Relevance: 100.00%

Abstract:

In this paper, the single-machine job shop scheduling problem is studied with the objectives of minimizing the tardiness and the material cost of jobs. The simultaneous consideration of these objectives constitutes the multi-criteria optimization problem under study. A metaheuristic procedure based on simulated annealing is proposed to find approximate Pareto optimal (non-dominated) solutions. The two objectives are combined into one composite utility function as a weighted combination reflecting the decision maker's preferences. Since the weights of the objectives are unknown, an a priori approach is applied to search for the non-dominated set of solutions based on Pareto dominance. The obtained solution set is presented to the decision maker, who chooses the best solution according to their preferences. The performance of the algorithm is evaluated in terms of the number of non-dominated schedules generated and the proximity of the obtained non-dominated front to the true Pareto front. Results show that the produced solutions do not differ significantly from the optimal solutions.
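The annealing search over a composite weighted utility can be sketched as follows; the job data, objective weights, and the position-weighted material-cost model are illustrative assumptions, not the paper's instances:

```python
import math
import random

random.seed(1)

# Hypothetical job data: (processing_time, due_date, material_cost).
jobs = [(4, 6, 10), (2, 4, 7), (6, 14, 3), (3, 7, 5)]

def cost(seq, w_tardy=0.7, w_material=0.3):
    # Composite utility: weighted total tardiness plus material cost
    # weighted by schedule position (a toy holding-cost model).
    t = tardiness = material = 0
    for rank, j in enumerate(seq):
        p, due, mc = jobs[j]
        t += p
        tardiness += max(0, t - due)
        material += mc * (rank + 1)
    return w_tardy * tardiness + w_material * material

def anneal(n, temp=10.0, cooling=0.95, steps=500):
    seq = list(range(n))
    best = seq[:]
    for _ in range(steps):
        cand = seq[:]
        i, k = random.sample(range(n), 2)      # swap-neighbourhood move
        cand[i], cand[k] = cand[k], cand[i]
        delta = cost(cand) - cost(seq)
        # Accept improving moves always; worsening moves with
        # probability exp(-delta / temperature).
        if delta < 0 or random.random() < math.exp(-delta / temp):
            seq = cand
        if cost(seq) < cost(best):
            best = seq[:]
        temp *= cooling
    return best

schedule = anneal(len(jobs))
```

Repeating the search over several weight vectors and filtering by Pareto dominance yields the non-dominated set presented to the decision maker.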

Relevance: 100.00%

Abstract:

In named entity recognition (NER) for biomedical literature, approaches based on combined classifiers have demonstrated great performance improvements compared to a single (best) classifier, mainly owing to a sufficient level of diversity among the classifiers, a selective property of the classifier set. Given a large number of classifiers, how to select which ones to put into a classifier ensemble is a crucial issue in multiple classifier-ensemble design. With this observation in mind, we propose a generic genetic classifier-ensemble method for classifier selection in biomedical NER. Various diversity measures and majority voting are considered, and disjoint feature subsets are selected to construct the individual classifiers. A single type of individual classifier, the Support Vector Machine (SVM), is adopted to form the SVM-classifier committee. A multi-objective Genetic Algorithm (GA) is employed as the classifier selector, helping the ensemble improve overall sample classification accuracy. The proposed approach is tested on a benchmark dataset, the GENIA version 3.02 corpus, and compared with both the individual best SVM classifier and an SVM-classifier ensemble algorithm, as well as other machine learning methods such as CRF, HMM, and MEMM. The results show that the proposed approach outperforms the other classification algorithms and can be a useful method for the biomedical NER problem.
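One common diversity measure used when selecting classifiers for an ensemble is mean pairwise disagreement. A sketch over hypothetical BIO-style tag predictions (the paper considers several measures; this is just one):

```python
def disagreement(pred_a, pred_b):
    # Pairwise disagreement: fraction of tokens on which two
    # classifiers assign different labels.
    return sum(a != b for a, b in zip(pred_a, pred_b)) / len(pred_a)

def ensemble_diversity(preds):
    # Mean pairwise disagreement over all members of a candidate
    # ensemble; a GA selector can use this alongside accuracy.
    pairs = [(i, j) for i in range(len(preds))
             for j in range(i + 1, len(preds))]
    return sum(disagreement(preds[i], preds[j]) for i, j in pairs) / len(pairs)

# Hypothetical SVM committee outputs on four tokens.
p1 = ["O", "B-protein", "O", "O"]
p2 = ["O", "B-protein", "B-DNA", "O"]
p3 = ["O", "O", "B-DNA", "O"]
div = ensemble_diversity([p1, p2, p3])
```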

Relevance: 100.00%

Abstract:

Coal handling is a complex process involving different correlated and highly dependent operations, such as selecting appropriate product types, planning stockpiles, scheduling stacking and reclaiming activities, and managing train loads. Planning these operations manually is time consuming and can result in non-optimized schedules, as the future impact of decisions may not be appropriately considered. This paper addresses the operational scheduling of the continuous coal handling problem with multiple conflicting objectives. As the problem is NP-hard in nature, an effective heuristic is presented for planning stockpiles and scheduling resources to minimize delays in production and the age of coal in the stockyard. A model of stockyard operations within a coal mine is described, and the problem is formulated as a Bi-Objective Optimization Problem (BOOP). The algorithm's efficacy is demonstrated on different real-life data scenarios. Computational results show that the solution algorithm is effective and that coal throughput is substantially impacted by the conflicting objectives. Together, the model and the proposed heuristic can act as a decision support system for the stockyard planner to explore the effects of alternative decisions, such as balancing the age and volume of stockpiles and minimizing conflicts due to stacker and reclaimer movements.
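A greedy flavour of such a heuristic can be sketched as scoring stockpiles by a weighted combination of coal age and pending train delay, reclaiming the most urgent first; the field names and weights here are purely illustrative, not the paper's model:

```python
def reclaim_order(stockpiles, w_age=0.5, w_delay=0.5):
    # Greedy sketch: reclaim the stockpile with the highest weighted
    # urgency first; older coal and a longer waiting train both raise
    # urgency (the two conflicting BOOP objectives).
    def urgency(p):
        return w_age * p["age_days"] + w_delay * p["train_wait_h"]
    return sorted(stockpiles, key=urgency, reverse=True)

# Hypothetical stockyard snapshot.
piles = [
    {"id": "SP1", "age_days": 10, "train_wait_h": 1},
    {"id": "SP2", "age_days": 2, "train_wait_h": 8},
    {"id": "SP3", "age_days": 6, "train_wait_h": 6},
]
order = [p["id"] for p in reclaim_order(piles)]
```

Sweeping the weights trades coal age against production delay, which is how a planner could explore the conflict the abstract describes.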

Relevance: 100.00%

Abstract:

In our previous investigations, two Similarity Reasoning (SR)-based frameworks for tackling real-world problems were proposed. In both frameworks, SR is used to deduce unknown fuzzy rules, based on the similarity between given and unknown fuzzy rules, for building a Fuzzy Inference System (FIS). In this paper, we extend our previous findings by developing (1) a multi-objective evolutionary model for fuzzy rule selection and (2) evidential functions to facilitate the use of both frameworks. The Non-Dominated Sorting Genetic Algorithm-II (NSGA-II) is adopted for fuzzy rule selection, in accordance with the Pareto optimality criterion. In addition, two new evidential functions are developed, whereby the given fuzzy rules are treated as evidence. Simulated and benchmark examples are included to demonstrate the applicability of these suggestions, with positive results obtained.
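Selection under the Pareto optimality criterion relies on sorting candidate rule sets into successive non-dominated fronts, the core step of NSGA-II. A minimal sketch of that sorting step (all objectives assumed minimised; the objective pairs are hypothetical):

```python
def non_dominated_sort(objs):
    # Split objective vectors into successive Pareto fronts, as in
    # NSGA-II: front 1 is non-dominated, front 2 is non-dominated
    # once front 1 is removed, and so on.
    remaining = dict(enumerate(objs))
    fronts = []
    while remaining:
        front = [i for i, u in remaining.items()
                 if not any(all(a <= b for a, b in zip(v, u)) and v != u
                            for v in remaining.values())]
        fronts.append(sorted(front))
        for i in front:
            del remaining[i]
    return fronts

# Each vector: (rule-set error, number of rules) for a candidate
# fuzzy rule subset -- fewer rules and lower error are both preferred.
fronts = non_dominated_sort([(0.1, 8), (0.3, 3), (0.2, 9), (0.3, 8)])
```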

Relevance: 100.00%

Abstract:

Live video forwarding for IP cameras has become a popular service in video data centers. In the forwarding service, requests from end users in different regions arrive in real time to obtain live video streams of IP cameras from inter-connected video data centers. A fundamental scheduling problem is how to assign resources to forward live video streams with globally optimal resource cost and forwarding delay. We introduce the resource provisioning cost as the combination of media server cost, connection bandwidth cost, and forwarding delay cost. In this paper, a multi-objective resource provisioning (MORP) approach is proposed to deal with the online inter-datacenter resource provisioning problem. The approach aims at minimizing the resource provisioning cost during live video forwarding. It adaptively allocates media servers in appropriate video data centers and connects the chosen media servers together to provide system scalability and connectivity. Unlike previous works, MORP takes both resource capacity and diversity (e.g. location and price) into consideration during live video forwarding. The experimental results show that the MORP approach not only cuts the resource provisioning cost by 3% to 10% compared with the benchmark approach, but also shortens the resource provisioning delay.
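The combined provisioning cost, together with a greedy capacity-aware datacentre choice, can be sketched as follows; all prices, weights, and field names are illustrative assumptions rather than MORP's actual model:

```python
def provisioning_cost(servers, bandwidth_gb, delay_ms,
                      server_price=1.0, gb_price=0.05, delay_weight=0.01):
    # Combined cost = media server cost + connection bandwidth cost
    # + forwarding delay cost (all unit prices are illustrative).
    return (servers * server_price
            + bandwidth_gb * gb_price
            + delay_ms * delay_weight)

def pick_datacenter(options):
    # Greedy sketch: among capacity-feasible datacentres, choose the
    # one minimising the combined provisioning cost, so that price
    # diversity and capacity are both respected.
    feasible = [o for o in options if o["capacity"] >= o["demand"]]
    return min(feasible, key=lambda o: provisioning_cost(
        o["servers"], o["bandwidth_gb"], o["delay_ms"]))

# Hypothetical candidate datacentres for one forwarding request.
dcs = [
    {"id": "DC-A", "servers": 2, "bandwidth_gb": 100, "delay_ms": 40,
     "capacity": 10, "demand": 5},
    {"id": "DC-B", "servers": 1, "bandwidth_gb": 120, "delay_ms": 90,
     "capacity": 10, "demand": 5},
    {"id": "DC-C", "servers": 1, "bandwidth_gb": 80, "delay_ms": 30,
     "capacity": 4, "demand": 5},   # infeasible: demand exceeds capacity
]
choice = pick_datacenter(dcs)
```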

Relevance: 40.00%

Abstract:

Human-associated delay-tolerant networks (HDTNs) are networks in which mobile devices are associated with humans and demonstrate social communication characteristics. Most recent works use real social trace files to analyse these social characteristics; however, such data are sensitive and raise privacy concerns. In this paper, we propose an anonymization method that encodes the original data to preserve individual privacy. Shannon entropy is applied to the anonymized data to confirm that it retains the rich social characteristics useful for network optimization, e.g. routing optimization. We use the existing MIT Reality dataset and the Infocom 06 dataset, which are trace files of human-associated mobile networks, to evaluate our method. The simulation results show that the method anonymizes the data while still enabling network optimization.
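The key property is that replacing identities with opaque codes leaves the Shannon entropy of the contact distribution unchanged, so social structure useful for optimization survives anonymization. A minimal sketch with hypothetical codes:

```python
import math

def shannon_entropy(codes):
    # H = -sum p_i * log2(p_i) over the empirical distribution of codes.
    counts = {}
    for c in codes:
        counts[c] = counts.get(c, 0) + 1
    n = len(codes)
    return -sum((k / n) * math.log2(k / n) for k in counts.values())

# Identities replaced by opaque codes; contact structure is preserved.
contacts = ["u1", "u2", "u1", "u3"]
codes = {"u1": "n07", "u2": "n42", "u3": "n19"}
anonymised = [codes[c] for c in contacts]
h = shannon_entropy(anonymised)
```

Because the coding is a bijection, `shannon_entropy(contacts)` and `shannon_entropy(anonymised)` are identical.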

Relevance: 40.00%

Abstract:

The Adaptive Multiple-hyperplane Machine (AMM) was recently proposed to deal with large-scale datasets. However, it offers no principled way to tune the complexity and sparsity of the solution. Addressing sparsity is important for improving learning generalization, prediction accuracy, and computational speed. In this paper, we employ the max-margin principle and a sparse approach to propose a new Sparse AMM (SAMM). We solve the new optimization objective function with stochastic gradient descent (SGD). Besides inheriting the good features of SGD-based learning and of the original AMM, the proposed SAMM provides the machinery and flexibility to tune the complexity and sparsity of the solution, making it possible to avoid both overfitting and underfitting. We validate our approach on several large benchmark datasets and show that, with the ability to control sparsity, the proposed SAMM yields superior classification accuracy to the original AMM while simultaneously achieving a computational speedup.
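One standard way to obtain a tunable sparsity knob in SGD training is an L1 soft-thresholding step after each gradient update. This sketch applies it to a single hinge-loss hyperplane, whereas the actual AMM maintains multiple hyperplanes per class; the data and hyperparameters are illustrative:

```python
import random

random.seed(2)

def sgd_sparse_svm(data, lam=0.01, l1=0.005, lr=0.1, epochs=50):
    # SGD on the hinge loss with an L2 margin term, plus an L1
    # soft-thresholding step that drives small weights to exactly
    # zero -- the sparsity-control knob.
    dim = len(data[0][0])
    w = [0.0] * dim
    for _ in range(epochs):
        random.shuffle(data)
        for x, y in data:
            margin = y * sum(wi * xi for wi, xi in zip(w, x))
            for i in range(dim):
                grad = lam * w[i] - (y * x[i] if margin < 1 else 0.0)
                w[i] -= lr * grad
                sign = 1.0 if w[i] >= 0 else -1.0
                w[i] = sign * max(0.0, abs(w[i]) - lr * l1)
    return w

# Toy data: feature 0 carries the label; feature 1 is uninformative.
data = [([1.0, 0.3], 1), ([0.9, -0.2], 1),
        ([-1.0, 0.25], -1), ([-0.8, -0.3], -1)]
w = sgd_sparse_svm(data)
```

Raising `l1` prunes more weights, trading accuracy for sparsity and speed, which is the trade-off the abstract highlights.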

Relevance: 40.00%

Abstract:

In recent years, there have been studies of cardinality-constrained multi-cycle problems on directed graphs, some of which consider chains co-existing on the same digraph whilst others do not. These studies were inspired by the optimal matching of kidneys, known as the Kidney Exchange Problem (KEP). In a KEP, a vertex on the digraph represents a related donor-patient pair in which the donor's kidney is incompatible with the patient. When there are multiple such incompatible pairs in the kidney exchange pool, the kidney of the donor of one incompatible pair may in fact be compatible with the patient of another incompatible pair. If Donor A's kidney is suitable for Patient B, and vice versa, there will be arcs in both directions between Vertex A and Vertex B, and such an exchange forms a 2-cycle. There may also be cycles involving 3 or more vertices. All exchanges in a kidney exchange cycle must take place simultaneously (otherwise a donor could drop out of the program once his or her partner has received a kidney from another donor), and for logistic and human resource reasons only a limited number of kidney exchanges can occur simultaneously; hence the cardinality of these cycles is constrained. In recent years, kidney exchange programs around the world have included altruistic donors in the pool. A sequence of exchanges that starts from an altruistic donor forms a chain instead of a cycle. We therefore have two underlying combinatorial optimization problems: the Cardinality-Constrained Multi-cycle Problem (CCMcP) and the Cardinality-Constrained Cycles and Chains Problem (CCCCP). The objective of the KEP is either to maximize the number of kidney matches or to maximize a weighted function of kidney matches. In a CCMcP, a vertex can be in at most one cycle, whereas in a CCCCP, a vertex can be part of (but no more than) one cycle or one chain. The cardinality of the cycles is constrained in all studies.
The cardinality of the chains, however, is considered unconstrained in some studies, and in others constrained but larger than, or the same as, that of the cycles. Although the CCMcP has some similarities to the ATSP and VRP families of problems, there is a major difference: strong subtour elimination constraints are mostly invalid for the CCMcP, as smaller subtours are allowed so long as they do not exceed the size limit. The CCCCP has the distinctive feature of allowing chains as well as cycles on the same directed graph. Hence both the CCMcP and the CCCCP are interesting and challenging combinatorial optimization problems in their own right. Most existing studies have focused on solution methodologies and, as far as we are aware, there has been no polyhedral study so far. In this paper, we study the polyhedral structure of the natural arc-based integer programming models of the CCMcP and the CCCCP, both of which contain exponentially many constraints. We do so to pave the way for studying the strong valid cuts we have found, which can be applied in a Lagrangean relaxation-based branch-and-bound framework in which, at each node of the branch-and-bound tree, we may be able to obtain a relaxation that can be solved in polynomial time, with the strong valid cuts dualized into the objective function and the dual multipliers optimised by subgradient optimisation.
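The cardinality-constrained cycle structure can be illustrated by enumerating simple directed cycles up to a size limit on a small digraph of incompatible pairs. This brute-force sketch is for intuition only, not a solution method for the integer programming models discussed:

```python
def bounded_cycles(arcs, max_len):
    # Enumerate simple directed cycles of length <= max_len. Each
    # vertex is an incompatible donor-patient pair; each arc means the
    # tail's donor is compatible with the head's patient.
    adj = {}
    for u, v in arcs:
        adj.setdefault(u, []).append(v)
    cycles = set()

    def dfs(start, node, path):
        for nxt in adj.get(node, []):
            if nxt == start and len(path) >= 2:
                cycles.add(tuple(path))        # path already starts at
            elif (nxt not in path and nxt > start  # its smallest vertex
                  and len(path) < max_len):
                dfs(start, nxt, path + [nxt])

    # Only extend to vertices larger than the start, so every cycle is
    # discovered exactly once, from its smallest vertex.
    for v in adj:
        dfs(v, v, [v])
    return cycles

# Hypothetical pool: every ordered pair of A, B, C is compatible.
arcs = [("A", "B"), ("B", "A"), ("B", "C"),
        ("C", "A"), ("A", "C"), ("C", "B")]
two_cycles = bounded_cycles(arcs, 2)
three_cycles = bounded_cycles(arcs, 3)
```

In the arc-based IP models, each arc becomes a binary variable and the size limit becomes the cardinality constraint that invalidates the usual strong subtour elimination cuts.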

Relevance: 30.00%

Abstract:

Objective: This study investigated 5-year trends in body weight, overweight and obesity and their association with sociodemographic variables in a large, multi-ethnic community sample of Australian adults. Design: This prospective population study used baseline and 5-year follow-up data from participants in the Melbourne Collaborative Cohort Study (MCCS). Setting: Population study in Melbourne, Australia. Subjects: In total, 12 125 men and 17 674 women aged 35–69 years at baseline. Results: Mean 5-year weight change in this sample was +1.58 (standard deviation (SD) 4.82) kg for men and +2.42 (SD 5.17) kg for women. Younger (35–44 years) men and, in particular, women gained more weight than older adults and were at highest risk of major weight gain (≥5 kg) and of becoming overweight. The risk of major weight gain and the associations between demographic variables and weight change did not vary greatly by ethnicity. Education level showed complex associations with weight outcomes that differed by sex and ethnicity. Multivariate analyses showed that, among men, higher initial body weight was associated with a decreased likelihood of major weight gain, whereas women who were initially overweight or obese were about 20% more likely to experience major weight gain than underweight or healthy-weight women. Conclusions: Findings of widespread weight gain across this entire population sample, particularly among younger women and women who were already overweight, are a cause for alarm. The prevention of weight gain and obesity across the entire population should be an urgent public health priority. Young-to-mid adulthood appears to be a critical time to intervene to prevent future weight gain.