39 results for Optimization problems


Relevance:

60.00%

Publisher:

Abstract:

Fault-tolerant motion of redundant manipulators can be obtained by joint velocity reconfiguration. For fault-tolerant manipulators, it is beneficial to determine the configurations that can tolerate locked-joint failures with a minimum relative joint-velocity jump, because the manipulator can then rapidly reconfigure itself to tolerate the fault. This paper uses properties of the condition number to identify these optimal configurations for serial manipulators. The relationship between the manipulator's locked-joint failures and the condition number of the Jacobian matrix is established using a matrix perturbation methodology. It is then observed that the condition number provides an upper bound on the relative joint-velocity change required to recover from a fault, which leads to defining the optimal fault-tolerant configuration as the minimizer of the condition number. The optimization problem of obtaining the minimum condition number is converted into three standard eigenvalue optimization problems, and a solution for the selected optimization problem is presented. Finally, the proposed method is applied to a 4-DoF planar manipulator to obtain its optimal fault-tolerant configuration.
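
As a rough illustration of the quantity being minimized, the sketch below computes the condition number of a 4-DoF planar manipulator's Jacobian, and of the reduced Jacobians obtained by deleting one column per possible locked joint. The link lengths and configuration are hypothetical; the paper's perturbation analysis and eigenvalue reformulation are not reproduced.

```python
import numpy as np

def planar_jacobian(q, lengths):
    """Jacobian (2 x n) of an n-DoF planar serial arm: end-effector
    position derivatives with respect to the joint angles."""
    q, lengths = np.asarray(q), np.asarray(lengths)
    cum = np.cumsum(q)                      # absolute link angles
    J = np.zeros((2, len(q)))
    for i in range(len(q)):
        # column i sums the contributions of links i..n-1
        J[0, i] = -np.sum(lengths[i:] * np.sin(cum[i:]))
        J[1, i] = np.sum(lengths[i:] * np.cos(cum[i:]))
    return J

q = np.radians([10.0, 40.0, -30.0, 60.0])   # hypothetical configuration
L = np.array([1.0, 1.0, 0.8, 0.5])          # hypothetical link lengths

J = planar_jacobian(q, L)
print("cond(J) =", np.linalg.cond(J))

# Locking joint k removes column k; the condition numbers of the
# reduced Jacobians indicate how costly each single failure is.
for k in range(4):
    Jk = np.delete(J, k, axis=1)
    print(f"joint {k} locked: cond =", np.linalg.cond(Jk))
```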

Relevance:

60.00%

Publisher:

Abstract:

A grid computing system consists of a group of programs and resources spread across the machines in the grid. A grid system has a dynamic environment and decentralized, distributed resources, so it is important to schedule applications efficiently. Task scheduling is an NP-hard problem for which deterministic algorithms are inadequate, so heuristic algorithms such as particle swarm optimization (PSO) are needed. PSO is a simple parallel algorithm that can be applied in different ways to solve optimization problems. PSO searches the problem space globally and needs to be combined with other methods to search locally as well. In this paper, we propose a hybrid scheduling algorithm for the independent task-scheduling problem in grid computing: we combine PSO with the gravitational emulation local search (GELS) algorithm to form a new method, PSO–GELS. Our experimental results demonstrate the effectiveness of PSO–GELS compared to other algorithms.
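
For readers unfamiliar with the global-search component, here is a minimal sketch of canonical global-best PSO applied to an independent task-to-machine assignment. The instance, continuous-to-discrete decoding and parameters are invented for illustration, and the GELS local-search phase of PSO–GELS is omitted.

```python
import numpy as np
rng = np.random.default_rng(0)

# Hypothetical instance: run times of 20 independent tasks on 4 machines.
n_tasks, n_machines = 20, 4
etc = rng.uniform(5, 50, size=(n_tasks, n_machines))  # expected time to compute

def makespan(position):
    """Decode a continuous particle into a task->machine assignment
    and return the resulting makespan (fitness to minimize)."""
    assign = np.clip(position.round().astype(int), 0, n_machines - 1)
    loads = np.zeros(n_machines)
    for t, m in enumerate(assign):
        loads[m] += etc[t, m]
    return loads.max()

# Canonical global-best PSO; parameter values are common defaults.
n_particles, iters, w, c1, c2 = 30, 200, 0.7, 1.5, 1.5
X = rng.uniform(0, n_machines - 1, (n_particles, n_tasks))
V = np.zeros_like(X)
pbest, pbest_f = X.copy(), np.array([makespan(x) for x in X])
g = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(X.shape), rng.random(X.shape)
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (g - X)
    X = X + V
    f = np.array([makespan(x) for x in X])
    better = f < pbest_f
    pbest[better], pbest_f[better] = X[better], f[better]
    g = pbest[pbest_f.argmin()].copy()

print("best makespan:", pbest_f.min())
```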

Relevance:

60.00%

Publisher:

Abstract:

The shortest path (SP) problem has long been one of the most fundamental network optimization problems, and techniques for solving it are still being studied. In this paper, a new method called the Rapid Physarum Algorithm (RPA) is proposed, which integrates a path-finding mathematical model inspired by Physarum polycephalum with one extracted heuristic rule to solve the SP problem. Simulation experiments have been carried out on three different network topologies with varying numbers of nodes. The proposed RPA finds the optimal path, as the path-finding model does, for most networks. Moreover, experimental results show that RPA surpasses the path-finding model in both iteration count and solution time. © 2014 Elsevier B.V.
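
The baseline that RPA improves on can be sketched as a standard Physarum path-finding model (in the style of Tero et al.), in which edge conductivities are reinforced by flux and decay otherwise. The network, constants and stopping rule below are hypothetical, and RPA's extracted heuristic rule is not included.

```python
import numpy as np

def physarum_shortest_path(n, edges, s, t, iters=200, dt=0.5):
    """Baseline Physarum path-finding model: conductivities reinforce
    along high-flux edges and decay elsewhere, converging toward the
    shortest s-t path."""
    D = {e: 1.0 for e in edges}                # edge conductivities
    for _ in range(iters):
        # Kirchhoff: solve node pressures with unit flow from s to t.
        A, b = np.zeros((n, n)), np.zeros(n)
        for (i, j), length in edges.items():
            g = D[(i, j)] / length
            A[i, i] += g; A[j, j] += g
            A[i, j] -= g; A[j, i] -= g
        b[s] = 1.0
        A[t, :] = 0.0; A[t, t] = 1.0; b[t] = 0.0   # ground node t
        p = np.linalg.solve(A, b)
        # Flux through each edge, then conductivity update.
        for (i, j), length in edges.items():
            Q = D[(i, j)] * (p[i] - p[j]) / length
            D[(i, j)] += dt * (abs(Q) - D[(i, j)])
    return {e: d for e, d in D.items() if d > 0.5}  # surviving edges

# Tiny test network: the direct edge 0-3 is longer than the 0-1-3 route.
edges = {(0, 1): 1.0, (1, 3): 1.0, (0, 2): 1.0, (2, 3): 2.0, (0, 3): 4.0}
print(physarum_shortest_path(4, edges, s=0, t=3))
```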

Relevance:

60.00%

Publisher:

Abstract:

Prediction of patient outcomes is critical for planning resources in a hospital emergency department. We present a method that exploits longitudinal data from Electronic Medical Records (EMR) while modelling multiple patient outcomes. We divide the EMR data into segments, where each segment is a task, and all tasks are associated with multiple patient outcomes over 3-, 6- and 12-month periods. We propose a model that learns a prediction function for each task-label pair, interacting through two subspaces: the first imposes sharing across all tasks for a given label; the second captures the task-specific variations and is shared across all labels for a given task. The proposed model is formulated as an iterative optimization problem and solved using a scalable and efficient block coordinate descent (BCD) method. We apply the proposed model to two hospital cohorts - Cancer and Acute Myocardial Infarction (AMI) patients - collected over a two-year period from a large hospital emergency department. We show that the predictive performance of our proposed model is significantly better than that of several state-of-the-art multi-task and multi-label learning methods.
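
A minimal sketch of the alternating-update idea follows, assuming an additive decomposition w(task, label) = S[label] + U[task] with squared loss and ridge penalties. This is a simplification for illustration, not the paper's exact formulation.

```python
import numpy as np
rng = np.random.default_rng(1)

# Toy setup: T tasks (EMR segments), L labels (e.g. 3/6/12-month outcomes).
T, L, d, n = 4, 3, 10, 50
X = [rng.normal(size=(n, d)) for _ in range(T)]            # features per task
Y = [[rng.normal(size=n) for _ in range(L)] for _ in range(T)]

S = np.zeros((L, d))   # label subspace: shared across tasks for a label
U = np.zeros((T, d))   # task subspace: shared across labels for a task
lam = 1.0

def ridge_step(A, r, lam):
    """Closed-form ridge minimizer for one block with the other fixed."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ r)

for _ in range(20):     # block coordinate descent sweeps
    for l in range(L):  # update each label-shared block, tasks fixed
        A = np.vstack([X[t] for t in range(T)])
        r = np.concatenate([Y[t][l] - X[t] @ U[t] for t in range(T)])
        S[l] = ridge_step(A, r, lam)
    for t in range(T):  # update each task-specific block, labels fixed
        A = np.vstack([X[t]] * L)
        r = np.concatenate([Y[t][l] - X[t] @ S[l] for l in range(L)])
        U[t] = ridge_step(A, r, lam)

loss = sum(np.sum((Y[t][l] - X[t] @ (S[l] + U[t])) ** 2)
           for t in range(T) for l in range(L))
print("final squared loss:", round(loss, 2))
```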

Relevance:

60.00%

Publisher:

Abstract:

For multiple heterogeneous multicore server processors across clouds and data centers, the aggregated performance of the cloud of clouds can be optimized by load distribution and balancing. Energy efficiency is one of the most important issues for large-scale server systems in current and future data centers, and multicore processor technology provides new levels of performance and energy efficiency. The present paper develops power- and performance-constrained load distribution methods for cloud computing in current and future large-scale data centers. In particular, we address the problem of optimal power allocation and load distribution for multiple heterogeneous multicore server processors across clouds and data centers. Our strategy is to formulate optimal power allocation and load distribution for multiple servers in a cloud of clouds as optimization problems, i.e., power-constrained performance optimization and performance-constrained power optimization. These are well-defined multivariable optimization problems that explore the power-performance tradeoff by fixing one factor and minimizing the other, from the perspective of optimal load distribution. Such power and performance optimization is important for a cloud computing provider seeking to utilize all available resources efficiently. We model a multicore server processor as a queueing system with multiple servers. Our optimization problems are solved for two different models of core speed: one assumes that a core runs at zero speed when it is idle, and the other assumes that a core runs at a constant speed. Our results provide new theoretical insights into power management and performance optimization in data centers.
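
As a toy version of the power-constrained performance problem, the sketch below models each multicore server as an M/M/m queue and searches over load splits between two hypothetical servers with fixed core speeds. The power model P ≈ m·s^α, the identification of service rate with core speed, and all parameter values are assumptions.

```python
import math

def mm_m_response_time(lam, m, mu):
    """Mean response time of an M/M/m queue (Erlang C); requires lam < m*mu."""
    rho = lam / (m * mu)
    if rho >= 1.0:
        return math.inf
    a = lam / mu
    p0 = 1.0 / (sum(a**k / math.factorial(k) for k in range(m))
                + a**m / (math.factorial(m) * (1 - rho)))
    erlang_c = (a**m / (math.factorial(m) * (1 - rho))) * p0
    return 1.0 / mu + erlang_c / (m * mu - lam)

# Two hypothetical heterogeneous multicore servers; core speed s gives
# service rate mu = s and dynamic power ~ m * s**alpha (alpha = 3 here).
servers = [{"m": 8, "s": 1.0}, {"m": 4, "s": 1.5}]
alpha, total_load = 3.0, 9.0
power = sum(srv["m"] * srv["s"] ** alpha for srv in servers)

# Speeds held fixed, so the problem reduces to a 1-D search over the
# split of the arrival stream; the key weights each server's mean
# response time by the load it carries.
best = min(
    (total_load * i / 200 for i in range(201)),
    key=lambda lam1: (lam1 * mm_m_response_time(lam1, servers[0]["m"], servers[0]["s"])
                      + (total_load - lam1)
                      * mm_m_response_time(total_load - lam1,
                                           servers[1]["m"], servers[1]["s"])),
)
print(f"power draw: {power:.1f}, best split: {best:.2f} / {total_load - best:.2f}")
```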

Relevance:

60.00%

Publisher:

Abstract:

Physarum polycephalum is a unicellular and multi-headed slime mold that can form highly efficient networks connecting spatially separated food sources while foraging. Such adaptive networks exhibit a unique characteristic in which network length and fault tolerance are appropriately balanced. Biological observations show that the foraging process of Physarum comprises two self-organized behaviors: search and contraction. In this paper, these two behaviors are captured in a multi-agent system. Two types of agents and three transition rules are designed to imitate the search and contraction behaviors of Physarum, based on the necessary and sufficient conditions of a self-organized computational system. Simulations of the foraging process are used to investigate the characteristics of our system. Experimental results show that the system can autonomously search for food sources and then converge to a stable solution, replicating the foraging process of Physarum. In particular, a maze case study is used to assess the path-finding ability of the foraging behaviors. Moreover, a model inspired by these foraging behaviors is proposed to optimize meta-heuristic algorithms for solving optimization problems. Comparing the optimized algorithms with their traditional counterparts, we find that the optimized strategies achieve higher computational performance, which further confirms the computational power of Physarum's foraging behaviors.
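
A highly simplified grid-world rendering of the two behaviors might look like the following. The agent rules, constants and deposition/decay scheme are invented and merely stand in for the paper's two agent types and three transition rules.

```python
import numpy as np
rng = np.random.default_rng(2)

# "Search": agents walk toward the opposite food source with some noise
# and deposit protoplasm. "Contraction": weak protoplasm decays, so the
# retained mass concentrates along a corridor connecting the two foods.
H = W = 21
trail = np.zeros((H, W))
foods = [(2, 2), (18, 18)]
agents = [{"pos": list(foods[i % 2]), "goal": foods[(i + 1) % 2]}
          for i in range(100)]

for step in range(300):
    for a in agents:                              # search behavior
        if rng.random() < 0.7:                    # biased step toward goal
            dy = np.sign(a["goal"][0] - a["pos"][0])
            dx = np.sign(a["goal"][1] - a["pos"][1])
        else:                                     # exploratory random step
            dy, dx = rng.integers(-1, 2, size=2)
        a["pos"][0] = int(np.clip(a["pos"][0] + dy, 0, H - 1))
        a["pos"][1] = int(np.clip(a["pos"][1] + dx, 0, W - 1))
        trail[a["pos"][0], a["pos"][1]] += 1.0
        if tuple(a["pos"]) == a["goal"]:          # turn around at the food
            a["goal"] = foods[0] if a["goal"] == foods[1] else foods[1]
    trail *= 0.95                                 # contraction: decay weak tubes

print(f"fraction of grid retaining protoplasm: {(trail > 1.0).mean():.2f}")
```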

Relevance:

60.00%

Publisher:

Abstract:

In recent years, there have been studies of cardinality-constrained multi-cycle problems on directed graphs, some of which considered chains co-existing on the same digraph whilst others did not. These studies were inspired by the optimal matching of kidneys, known as the Kidney Exchange Problem (KEP). In a KEP, a vertex of the digraph represents a related donor-patient pair whose donor kidney is incompatible with the patient. When there are multiple such incompatible pairs in the kidney exchange pool, the kidney of the donor of one incompatible pair may in fact be compatible with the patient of another incompatible pair. If Donor A's kidney is suitable for Patient B, and vice versa, then there are arcs in both directions between Vertex A and Vertex B, and such an exchange forms a 2-cycle; there may also be cycles involving 3 or more vertices. All exchanges in a kidney exchange cycle must take place simultaneously (otherwise a donor could drop out of the program once his/her partner has received a kidney from another donor), and for logistic and human-resource reasons only a limited number of kidney exchanges can occur simultaneously; hence the cardinality of these cycles is constrained. In recent years, kidney exchange programs around the world have included altruistic donors in their pools; a sequence of exchanges that starts from an altruistic donor forms a chain instead of a cycle. We therefore have two underlying combinatorial optimization problems: the Cardinality-Constrained Multi-cycle Problem (CCMcP) and the Cardinality-Constrained Cycles and Chains Problem (CCCCP). The objective of the KEP is either to maximize the number of kidney matches or to maximize a weighted function of kidney matches. In a CCMcP, a vertex can be in at most one cycle, whereas in a CCCCP, a vertex can be part of at most one cycle or chain. The cardinality of the cycles is constrained in all studies; the cardinality of the chains, however, is unconstrained in some studies, and in others is constrained to be either larger than or equal to that of the cycles. Although the CCMcP has some similarities to the ATSP- and VRP-family of problems, there is a major difference: strong subtour elimination constraints are mostly invalid for the CCMcP, as smaller subtours are allowed provided they do not exceed the size limit. The CCCCP's distinctive feature is that it allows chains as well as cycles on the same directed graph. Hence, both the CCMcP and the CCCCP are interesting and challenging combinatorial optimization problems in their own right. Most existing studies have focused on solution methodologies and, as far as we are aware, there have been no polyhedral studies so far. In this paper, we study the polyhedral structure of the natural arc-based integer programming models of the CCMcP and the CCCCP, both of which contain exponentially many constraints. We do so to pave the way for applying the strong valid cuts we have found in a Lagrangean relaxation-based branch-and-bound framework, in which, at each node of the branch-and-bound tree, we may obtain a relaxation that can be solved in polynomial time, with the strong valid cuts dualized into the objective function and the dual multipliers optimised by subgradient optimisation.
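
To make the cycle-cardinality constraint concrete, here is a brute-force sketch of a tiny CCMcP instance: enumerate the directed cycles of length at most k, then choose a vertex-disjoint subset maximizing the number of matched pairs. The instance is invented, and the exhaustive subset search merely stands in for the integer program; the paper's arc-based models and cuts are not reproduced.

```python
from itertools import combinations

# Hypothetical pool: 5 donor-patient pairs; an arc (i, j) means donor i's
# kidney is compatible with patient j. Cycle cardinality limited to k = 3.
n, k = 5, 3
arcs = {(0, 1), (1, 0), (1, 2), (2, 3), (3, 1), (3, 4), (4, 3)}

def short_cycles(n, arcs, k):
    """Vertex sets of all simple directed cycles with length <= k."""
    found = set()
    def extend(path):
        u = path[-1]
        for v in range(n):
            if (u, v) not in arcs:
                continue
            if v == path[0] and len(path) >= 2:
                found.add(frozenset(path))
            elif v not in path and len(path) < k:
                extend(path + [v])
    for s in range(n):
        extend([s])
    return list(found)

cycles = short_cycles(n, arcs, k)
# A subset of cycles is feasible iff the cycles are vertex-disjoint,
# i.e. the sum of their sizes equals the size of their union.
best = max(
    (subset for r in range(len(cycles) + 1)
            for subset in combinations(cycles, r)
            if sum(len(c) for c in subset)
               == len(set().union(*subset, frozenset()))),
    key=lambda subset: sum(len(c) for c in subset),
)
print("chosen cycles:", [sorted(c) for c in best],
      "| pairs matched:", sum(len(c) for c in best))
```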

Relevance:

60.00%

Publisher:

Abstract:

Health analysis often involves prediction of multiple outcomes of mixed type. Existing work is restricted to either a limited number of outcomes or to specific outcome types. We propose a framework for mixed-type multi-outcome prediction built on a cumulative loss function composed of a specific loss for each outcome type - for example, least squares (continuous outcome), hinge (binary outcome), Poisson (count outcome) and exponential (non-negative outcome). To model these outcomes jointly, we impose commonality across the prediction parameters through a common matrix-normal prior. The framework is formulated as an iterative optimization problem and solved using an efficient block coordinate descent (BCD) method. We empirically demonstrate both scalability and convergence. We apply the proposed model to a synthetic dataset and then to two real-world cohorts: a Cancer cohort and an Acute Myocardial Infarction cohort collected over a two-year period. We predict multiple emergency-related outcomes - for example, future emergency presentations (binary), emergency admissions (count), emergency length of stay in days (non-negative) and emergency time to next admission in days (non-negative). We show that the predictive performance of the proposed model is better than that of several state-of-the-art baselines.
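
The cumulative loss can be sketched directly. The exponential negative log-likelihood form below (rate λ = exp(η)) is an assumption, the hinge encoding expects labels in {-1, +1}, and the data are synthetic.

```python
import numpy as np

# Per-outcome losses as functions of the linear predictor eta.
def least_squares(y, eta):   return 0.5 * (y - eta) ** 2            # continuous
def hinge(y, eta):           return np.maximum(0.0, 1.0 - y * eta)  # binary
def poisson_nll(y, eta):     return np.exp(eta) - y * eta           # count
def exponential_nll(y, eta): return y * np.exp(eta) - eta           # non-negative

LOSSES = {"continuous": least_squares, "binary": hinge,
          "count": poisson_nll, "nonneg": exponential_nll}

def cumulative_loss(W, X, outcomes):
    """Sum the type-specific loss over all outcomes; W holds one weight
    vector per outcome, X is the shared feature matrix."""
    total = 0.0
    for k, (otype, y) in enumerate(outcomes):
        total += LOSSES[otype](y, X @ W[k]).sum()
    return total

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 5))
outcomes = [("continuous", rng.normal(size=100)),
            ("binary", rng.choice([-1.0, 1.0], size=100)),
            ("count", rng.poisson(2.0, size=100).astype(float)),
            ("nonneg", rng.exponential(1.0, size=100))]
W = np.zeros((4, 5))
print("cumulative loss at W = 0:", round(cumulative_loss(W, X, outcomes), 2))
```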

Relevance:

30.00%

Publisher:

Abstract:

Splines with free knots have been extensively studied with regard to computing optimal knot positions. The dependence of the approximation accuracy on the knot distribution is highly nonlinear, and optimisation techniques face a difficult problem with multiple local minima. The domain of the problem is a simplex, which adds to the complexity. We apply a recently developed cutting angle method of deterministic global optimisation, which allows one to solve a wide class of optimisation problems on a simplex. The results of the cutting angle method are subsequently improved by the local discrete gradient method. The resulting algorithm is sufficiently fast and guarantees that the global minimum is reached. Results of numerical experiments are presented.
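
To see why the objective is hard, the sketch below evaluates the least-squares spline error as a function of the interior knot vector using SciPy's LSQUnivariateSpline. The test function and knot placements are hypothetical, and the cutting angle search itself is not shown.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

# Hypothetical data: a sharp bump plus oscillation on [0, 1].
x = np.linspace(0.0, 1.0, 200)
y = np.exp(-40.0 * (x - 0.3) ** 2) + 0.5 * np.sin(8.0 * x)

def approx_error(interior_knots):
    """Least-squares cubic spline fit with fixed interior knots; this is
    the (highly nonlinear) objective one minimizes over knot positions."""
    spline = LSQUnivariateSpline(x, y, interior_knots, k=3)
    return float(np.sum((spline(x) - y) ** 2))

print("uniform knots :", approx_error([0.25, 0.5, 0.75]))
print("shifted knots :", approx_error([0.2, 0.3, 0.4]))  # clustered at the bump
```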


Relevance:

30.00%

Publisher:

Abstract:

The cutting angle method (CAM) is a deterministic global optimization technique applicable to Lipschitz functions f: R^n → R. The method builds a sequence of piecewise linear lower approximations to the objective function f, and the sequence of solutions to these relaxed problems converges to the global minimum of f. This article adapts CAM to the case of linear constraints on the feasible domain. We show how the relaxed problems are modified and how the numerical efficiency of solving them can be preserved. A number of numerical experiments confirm the improved numerical efficiency.
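
A one-dimensional analogue conveys the idea of the lower approximations: with Lipschitz constant L, h_k(x) = max_i [f(x_i) - L|x - x_i|] underestimates f, and each iteration evaluates f where the current bound is lowest (Piyavskii-style; CAM proper works with min-type functions on R^n, and this article's linear-constraint handling is not shown). The objective and constant below are hypothetical.

```python
import numpy as np

f = lambda x: np.sin(3.0 * x) + 0.5 * x ** 2   # hypothetical objective on [0, 2]
L = 5.0                                        # assumed Lipschitz constant
grid = np.linspace(0.0, 2.0, 2001)

xs = [0.0, 2.0]                                # initial sample points
for _ in range(20):
    xs_arr = np.array(xs)
    # Saw-tooth lower approximation: max over cones anchored at samples.
    lower = (f(xs_arr)[:, None]
             - L * np.abs(grid[None, :] - xs_arr[:, None])).max(axis=0)
    xs.append(float(grid[np.argmin(lower)]))   # next evaluation point

print(f"best f found: {f(np.array(xs)).min():.4f}, "
      f"certified lower bound: {lower.min():.4f}")
```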

Relevance:

30.00%

Publisher:

Abstract:

We examine a mathematical model of non-destructive testing of planar waveguides, based on the numerical solution of a nonlinear integral equation. This problem is ill-posed, so the method of Tikhonov regularization is applied. To minimize the Tikhonov functional and find the parameters of the waveguide, we use two new optimization methods: the cutting angle method of global optimization and the discrete gradient method of nonsmooth local optimization. We examine how noise in the experimental data influences the solution and how the regularization parameter should be chosen. We show that even with significant noise in the data, the numerical solution is of high accuracy, and the method can be used to process real experimental data.
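
On a linearized toy problem, the role of the regularization parameter can be sketched as follows; the smoothing kernel, noise level and parameter grid are assumptions, and the actual waveguide problem is nonlinear.

```python
import numpy as np

# Discretized ill-posed linear problem: min_x ||A x - b||^2 + alpha ||x||^2.
rng = np.random.default_rng(4)
n = 60
s = np.linspace(0, 1, n)
A = np.exp(-50.0 * (s[:, None] - s[None, :]) ** 2) / n   # smoothing kernel
x_true = np.sin(2 * np.pi * s)
b = A @ x_true + 1e-3 * rng.normal(size=n)               # noisy data

def tikhonov(A, b, alpha):
    """Closed-form minimizer: (A^T A + alpha I) x = A^T b."""
    return np.linalg.solve(A.T @ A + alpha * np.eye(A.shape[1]), A.T @ b)

# Too much regularization over-smooths; too little amplifies the noise.
for alpha in [1e-1, 1e-4, 1e-8, 0.0]:
    x = (tikhonov(A, b, alpha) if alpha > 0
         else np.linalg.lstsq(A, b, rcond=None)[0])
    print(f"alpha={alpha:8.0e}  reconstruction error={np.linalg.norm(x - x_true):9.3f}")
```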

Relevance:

30.00%

Publisher:

Abstract:

The Operations Research (OR) community has defined many deterministic manufacturing control problems, mainly focused on scheduling. Well-defined benchmark problems provide a mechanism for communicating the effectiveness of different optimization algorithms. Manufacturing problems within industry, however, are stochastic and complex. Common features of these problems include variable demand, machine-part-specific breakdown patterns, part-machine-specific process durations, continuous production, Finished Goods Inventory (FGI) buffers, bottleneck machines and limited production capacity. Discrete Event Simulation (DES) is a commonly used tool for studying manufacturing systems of realistic complexity, yet there are few reports of detail-rich benchmark problems for the simulation-optimization community that are as complex as those faced by production managers. This work details an algorithm for creating single- and multi-stage production control problems. The reported software implementation of the algorithm generates text files in eXtensible Markup Language (XML) format that are easily edited and understood, as well as cross-platform compatible. Distribution and acceptance of benchmark problems generated with the algorithm would enable researchers working on simulation and optimization of manufacturing problems to communicate results effectively, to the benefit of the field in general.
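
A sketch of what such a generator might emit is shown below using Python's standard xml.etree.ElementTree. The element and attribute names are illustrative guesses, not the schema produced by the reported software.

```python
import xml.etree.ElementTree as ET
import random

random.seed(0)

# Hypothetical single-stage instance: 3 parts, 2 machines, stochastic
# demand, part-machine process durations and machine breakdown data.
problem = ET.Element("ProductionControlProblem", stages="1", parts="3", machines="2")
for p in range(3):
    part = ET.SubElement(problem, "Part", id=str(p))
    ET.SubElement(part, "Demand", mean=str(random.randint(5, 20)),
                  distribution="poisson")
    for m in range(2):
        ET.SubElement(part, "Process", machine=str(m),
                      duration=str(round(random.uniform(1.0, 4.0), 2)))
for m in range(2):
    ET.SubElement(problem, "Machine", id=str(m),
                  mtbf=str(random.randint(100, 500)),  # breakdown pattern
                  mttr=str(random.randint(5, 30)))

ET.indent(problem)   # pretty-print (Python 3.9+)
ET.ElementTree(problem).write("benchmark.xml", xml_declaration=True,
                              encoding="utf-8")
print(ET.tostring(problem, encoding="unicode"))
```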

Relevance:

30.00%

Publisher:

Abstract:

Mineral potential mapping is the process of combining a set of input maps, each representing a distinct geo-scientific variable, to produce a single map ranking areas according to their potential to host deposits of a particular type. The maps are combined using a mapping function that must be either provided by an expert (knowledge-driven approach) or induced from sample data (data-driven approach). Current data-driven approaches using multilayer perceptrons (MLPs) to represent the mapping function have several inherent problems: they rely heavily on subjective judgment in selecting training data and are highly sensitive to this selection; they do not utilize the contextual information provided by unlabeled data; and there is no objective interpretation of the values output by the MLP. This paper presents a novel approach that overcomes these three problems.
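
For context, the baseline whose limitations are listed above can be sketched with scikit-learn's MLPClassifier on synthetic layers. The data, labels and network size are invented, and the predicted scores carry no objective interpretation, which is one of the three problems the paper addresses.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Synthetic stand-ins for real input maps: one row per map cell, one
# column per geo-scientific layer; labels mark known deposit sites.
rng = np.random.default_rng(5)
cells, layers = 2000, 6
X = rng.normal(size=(cells, layers))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=cells) > 1.5).astype(int)

# Only a small, hand-picked subset of cells is labelled for training,
# mirroring the subjective training-data selection issue.
labelled = rng.choice(cells, size=200, replace=False)
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
mlp.fit(X[labelled], y[labelled])

potential = mlp.predict_proba(X)[:, 1]   # un-calibrated score per cell
print("top-5 cells by potential:", np.argsort(potential)[-5:])
```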