117 results for Problems solving
in Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
This paper discusses the use of probabilistic or randomized algorithms for solving combinatorial optimization problems. Our approach employs non-uniform probability distributions to add a biased random behavior to classical heuristics, so that a large set of alternative good solutions can be quickly obtained in a natural way and without complex configuration processes. This procedure is especially useful in problems where properties such as non-smoothness or non-convexity lead to a highly irregular solution space, for which the traditional optimization methods, both of exact and approximate nature, may fail to reach their full potential. The results obtained are promising enough to suggest that randomizing classical heuristics is a powerful method that can be successfully applied in a variety of cases.
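A minimal sketch of the idea (the geometric-like distribution and the beta parameter are illustrative assumptions, not taken from the paper): instead of always picking the best candidate, a classical greedy heuristic picks from a list sorted from best to worst with a probability that decays with the rank, so repeated runs yield a set of different but still good solutions.

```python
import math
import random

def biased_pick(sorted_candidates, beta=0.3):
    # Skewed (geometric-like) choice: index 0 (the best candidate) is the
    # most likely, but any candidate can still be selected.
    idx = int(math.log(1.0 - random.random()) / math.log(1.0 - beta))
    return sorted_candidates[idx % len(sorted_candidates)]

def biased_greedy(candidates, score):
    # A classical greedy heuristic would always take the best-scored
    # candidate; here the choice is biased-random instead.
    remaining = sorted(candidates, key=score)
    solution = []
    while remaining:
        choice = biased_pick(remaining)
        solution.append(choice)
        remaining.remove(choice)
    return solution
```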
Abstract:
Globalization involves several facility location problems that need to be handled at large scale. Location Allocation (LA) is a combinatorial problem in which the distances among points in the data space matter. Specifically, taking advantage of this distance property of the domain, we exploit the capability of clustering techniques to partition the data space in order to convert an initial large LA problem into several simpler LA problems. In particular, our motivating problem involves a huge geographical area that can be partitioned under general conditions. We present different types of clustering techniques and then perform a cluster analysis over our dataset in order to partition it. After that, we solve the LA problem by applying a simulated annealing algorithm to the clustered and non-clustered data in order to work out how profitable the clustering is and which of the presented methods is the most suitable.
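A minimal sketch of the partition-then-solve idea, assuming plain k-means over two-dimensional points (the paper evaluates several clustering techniques, so k-means and the Euclidean distance are only one illustrative choice):

```python
import random

def kmeans(points, k, iters=50):
    # Plain k-means on 2D points; each resulting cluster becomes a smaller,
    # independent location-allocation instance that can then be solved
    # separately (e.g. with simulated annealing).
    centers = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x, y in points:
            i = min(range(k),
                    key=lambda c: (x - centers[c][0]) ** 2 + (y - centers[c][1]) ** 2)
            clusters[i].append((x, y))
        centers = [(sum(px for px, _ in c) / len(c), sum(py for _, py in c) / len(c))
                   if c else centers[i] for i, c in enumerate(clusters)]
    return clusters
```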
Abstract:
In this paper, we propose a methodology to determine the most efficient and least costly way of performing crew pairing optimization. We develop a methodology based on algorithm optimization, implemented in the Eclipse open-source IDE using the Java programming language, to solve crew scheduling problems.
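For context, crew pairing is commonly modeled as a set-partitioning problem; the formulation below is the standard textbook model, not necessarily the one the authors implement:

$$ \min \sum_{p \in P} c_p x_p \quad \text{s.t.} \quad \sum_{p \in P} a_{fp} x_p = 1 \;\; \forall f \in F, \qquad x_p \in \{0,1\}, $$

where $P$ is the set of feasible pairings, $F$ the set of flight legs to be covered, $c_p$ the cost of pairing $p$, and $a_{fp} = 1$ if pairing $p$ covers flight $f$.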
Abstract:
In today's competitive markets, the importance of good scheduling strategies in manufacturing companies leads to the need for efficient methods to solve complex scheduling problems. In this paper, we studied two production scheduling problems with sequence-dependent setup times. Setup times are one of the most common complications in scheduling problems, and are usually associated with cleaning operations and changing tools and shapes in machines. The first problem considered is a single-machine scheduling problem with release dates, sequence-dependent setup times and delivery times; the performance measure is the maximum lateness. The second problem is a job-shop scheduling problem with sequence-dependent setup times where the objective is to minimize the makespan. We present several priority dispatching rules for both problems, followed by a study of their performance. Finally, conclusions and directions for future research are presented.
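As an illustration of a priority dispatching rule for the first (single-machine) problem, the sketch below sequences released jobs by largest delivery time first while adding the sequence-dependent setup; the chosen rule and data layout are assumptions, not the specific rules studied in the paper:

```python
def dispatch(jobs, setup):
    # jobs: {job: (release, processing, delivery)}; setup[i][j]: setup time
    # when job j follows job i.  Rule: among released jobs, pick the one
    # with the largest delivery time (an illustrative rule only).
    time, last, sequence, max_lateness = 0, None, [], 0
    unscheduled = set(jobs)
    while unscheduled:
        ready = [j for j in unscheduled if jobs[j][0] <= time] or list(unscheduled)
        nxt = max(ready, key=lambda j: jobs[j][2])
        release, proc, delivery = jobs[nxt]
        s = setup[last][nxt] if last is not None else 0
        time = max(time, release) + s + proc
        max_lateness = max(max_lateness, time + delivery)  # completion + delivery
        sequence.append(nxt)
        unscheduled.remove(nxt)
        last = nxt
    return sequence, max_lateness
```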
Abstract:
The paper develops a method to solve higher-dimensional stochastic control problems in continuous time. A finite difference type approximation scheme is used on a coarse grid of low discrepancy points, while the value function at intermediate points is obtained by regression. The stability properties of the method are discussed, and applications are given to test problems of up to 10 dimensions. Accurate solutions to these problems can be obtained on a personal computer.
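A generic continuous-time stochastic control problem of this type can be written as follows (a standard formulation given only to fix ideas; the precise problem class treated in the paper may differ):

$$ V(t,x) = \sup_{u} \mathbb{E}\!\left[ \int_t^T f(s, X_s, u_s)\,ds + g(X_T) \;\middle|\; X_t = x \right], \qquad dX_s = b(X_s, u_s)\,ds + \sigma(X_s, u_s)\,dW_s, $$

where the value function $V$ would be approximated on the coarse grid of low discrepancy points and extended to intermediate points by regression.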
Abstract:
The decisions of many individuals and social groups, taken according to well-defined objectives, are causing serious social and environmental problems, in spite of following the dictates of economic rationality. There are many examples of serious problems for which there are not yet appropriate solutions, such as the management of scarce natural resources including aquifer water, or the distribution of space among incompatible uses. In order to solve these problems, the paper first characterizes the resources and goods involved from an economic perspective. Then, for each case, the paper notes that there is a serious divergence between individual and collective interests and, where possible, it designs a procedure for solving the conflict of interests. With this procedure, the real opportunities for the application of economic theory are shown, especially the theory on collective goods and externalities. The limitations of conventional economic analysis are shown and the opportunity to correct the shortfalls is examined. Many environmental problems, such as climate change, have an impact on different generations that do not participate in present decisions. The paper shows that for these cases, the solutions suggested by economic theory are not valid. Furthermore, conventional methods of economic valuation (which usually help decision-makers) are unable to account for the existence of different generations and tend to overlook long-term impacts. The paper analyzes how economic valuation methods could account for the costs and benefits enjoyed by present and future generations. The paper studies an appropriate consideration of preferences for future consumption and the incorporation of sustainability as a requirement in social decisions, which implies not only more efficiency but also a fairer distribution between generations than the one implied by conventional economic analysis.
Multigrid preconditioner for nonconforming discretization of elliptic problems with jump coefficients
Abstract:
In this paper, we present a multigrid preconditioner for solving the linear system arising from the piecewise linear nonconforming Crouzeix-Raviart discretization of second order elliptic problems with jump coefficients. The preconditioner uses the standard conforming subspaces as coarse spaces. Numerical tests show both robustness with respect to the jump in the coefficient and near-optimality with respect to the number of degrees of freedom.
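A standard model of the second order elliptic problems with jump coefficients referred to above is the following (the homogeneous Dirichlet boundary condition is an assumption):

$$ -\nabla \cdot \big( \kappa(x)\, \nabla u \big) = f \ \ \text{in } \Omega, \qquad u = 0 \ \ \text{on } \partial\Omega, $$

where the coefficient $\kappa$ is piecewise constant and may jump by several orders of magnitude across subdomain interfaces.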
Abstract:
From a managerial point of view, the more efficient, simple, and parameter-free (ESP) an algorithm is, the more likely it will be used in practice for solving real-life problems. Following this principle, an ESP algorithm for solving the Permutation Flowshop Sequencing Problem (PFSP) is proposed in this article. Using an Iterated Local Search (ILS) framework, the so-called ILS-ESP algorithm is able to compete in performance with other well-known ILS-based approaches, which are considered among the most efficient algorithms for the PFSP. However, while other similar approaches still employ several parameters that can affect their performance if not properly chosen, our algorithm does not require any particular fine-tuning process, since it uses basic "common sense" rules for the local search, perturbation, and acceptance criterion stages of the ILS metaheuristic. Our approach defines a new operator for the ILS perturbation process, a new acceptance criterion based on extremely simple and transparent rules, and a biased randomization process of the initial solution to randomly generate different alternative initial solutions of similar quality, which is attained by applying a biased randomization to a classical PFSP heuristic. This diversification of the initial solution aims at avoiding poorly designed starting points and, thus, allows the methodology to take advantage of current trends in parallel and distributed computing. A set of extensive tests, based on literature benchmarks, has been carried out in order to validate our algorithm and compare it against other approaches. These tests show that our parameter-free algorithm is able to compete with state-of-the-art metaheuristics for the PFSP. Also, the experiments show that, when using parallel computing, it is possible to improve on the top ILS-based metaheuristic by simply incorporating our biased randomization process with a high-quality pseudo-random number generator.
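For readers unfamiliar with the framework, the sketch below shows a generic Iterated Local Search skeleton; the local search, perturbation operator and acceptance criterion are placeholders, not the ILS-ESP operators described in the paper:

```python
def iterated_local_search(initial, cost, local_search, perturb, max_iters=1000):
    # Generic ILS loop: improve, perturb, improve again, and keep the
    # candidate only if it is better (a simple acceptance criterion).
    current = local_search(initial)
    best, best_cost = current, cost(current)
    current_cost = best_cost
    for _ in range(max_iters):
        candidate = local_search(perturb(current))
        candidate_cost = cost(candidate)
        if candidate_cost < current_cost:        # acceptance criterion
            current, current_cost = candidate, candidate_cost
            if candidate_cost < best_cost:
                best, best_cost = candidate, candidate_cost
    return best, best_cost
```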
Abstract:
We introduce a width parameter that bounds the complexity of classical planning problems and domains, along with a simple but effective blind-search procedure that runs in time that is exponential in the problem width. We show that many benchmark domains have a bounded and small width provided that goals are restricted to single atoms, and hence that such problems are provably solvable in low polynomial time. We then focus on the practical value of these ideas over the existing benchmarks, which feature conjunctive goals. We show that the blind-search procedure can be used both for serializing the goal into subgoals and for solving the resulting problems, resulting in a ‘blind’ planner that competes well with a best-first search planner guided by state-of-the-art heuristics. In addition, ideas like helpful actions and landmarks can be integrated as well, producing a planner with state-of-the-art performance.
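A minimal sketch of a width-based blind search in the spirit described above: a breadth-first search that prunes every state containing no new tuple of at most k atoms. The state representation and details are illustrative assumptions:

```python
from collections import deque
from itertools import combinations

def width_based_search(initial, goal_test, successors, k=1):
    seen_tuples = set()

    def novel(state):
        # A state is novel if it contains some tuple of at most k atoms
        # never seen before; novel tuples are recorded as a side effect.
        new = False
        for size in range(1, k + 1):
            for combo in combinations(sorted(state), size):
                if combo not in seen_tuples:
                    seen_tuples.add(combo)
                    new = True
        return new

    start = frozenset(initial)
    novel(start)
    frontier = deque([(start, [])])
    while frontier:
        state, plan = frontier.popleft()
        if goal_test(state):
            return plan
        for action, nxt in successors(state):
            nxt = frozenset(nxt)
            if novel(nxt):                      # prune non-novel states
                frontier.append((nxt, plan + [action]))
    return None
```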
Abstract:
Polynomial constraint solving plays a prominent role in several areas of hardware and software analysis and verification, e.g., termination proving, program invariant generation and hybrid system verification, to name a few. In this paper we propose a new method for solving non-linear constraints based on encoding the problem into an SMT problem considering only linear arithmetic. Unlike other existing methods, our method focuses on proving satisfiability of the constraints rather than on proving unsatisfiability, which is more relevant in many applications, as we illustrate with several examples. Nevertheless, we also present new techniques, based on the analysis of unsatisfiable cores, that allow one to efficiently prove unsatisfiability as well for a broad class of problems. The power of our approach is demonstrated by means of extensive experiments comparing our prototype with state-of-the-art tools on benchmarks taken from both the academic and the industrial world.
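The toy example below (using the z3 Python API; the constraint, the finite domain for x and the case-splitting scheme are illustrative assumptions, far simpler than the paper's encoding) shows how a non-linear constraint can be handled with only linear integer arithmetic once one variable of each non-linear monomial ranges over a finite domain:

```python
from z3 import Int, Solver, And, Or, sat

x, y, z = Int('x'), Int('y'), Int('z')

solver = Solver()
# Non-linear constraint x*y + z > 10, with x restricted to {0,...,3}:
# case-splitting on x turns the monomial x*y into linear terms k*y,
# so the solver only needs linear integer arithmetic.
solver.add(Or([And(x == k, k * y + z > 10) for k in range(4)]))

if solver.check() == sat:
    print(solver.model())   # a satisfying assignment for x, y, z
```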
Abstract:
Sudoku problems are among the best-known and most enjoyed pastimes, with a popularity that never diminishes; but in the last few years these problems have gone from an entertainment to an interesting research area, and a doubly interesting one, in fact. On the one hand, Sudoku problems, being a variant of Gerechte Designs and Latin Squares, are being actively used for experimental design, as in [8, 44, 39, 9]. On the other hand, Sudoku problems, as simple as they seem, are really hard structured combinatorial search problems, and thanks to their characteristics and behavior, they can be used as benchmark problems for refining and testing solving algorithms and approaches. Also, thanks to their high inner structure, their study can contribute more than studies of random problems to our goal of solving real-world problems and applications and understanding the problem characteristics that make them hard to solve. In this work we use two techniques for modeling and solving Sudoku problems, namely, Constraint Satisfaction Problem (CSP) and Satisfiability Problem (SAT) approaches. To this effect we define the Generalized Sudoku Problem (GSP), where regions can be of rectangular shape, problems can be of any order, and solution existence is not guaranteed. With respect to the worst-case complexity, we prove that GSP with block regions of m rows and n columns with m = n is NP-complete. For studying the empirical hardness of GSP, we define a series of instance generators that differ in the balancing level they guarantee between the constraints of the problem, by finely controlling how the holes are distributed in the cells of the GSP. Experimentally, we show that the more balanced the constraints are, the higher the complexity of solving the GSP instances, and that GSP is harder than the Quasigroup Completion Problem (QCP), a problem generalized by GSP. Finally, we provide a study of the correlation between backbone variables (variables with the same value in all the solutions of an instance) and the hardness of GSP.
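As an illustration of the SAT route, the sketch below generates a standard CNF encoding for a generalized Sudoku with rectangular regions of box_rows x box_cols cells; the variable numbering and clause set form one common encoding, not necessarily the one used in the paper:

```python
from itertools import combinations

def gsp_cnf(n, box_rows, box_cols, givens=()):
    # Variable var(r, c, v) is true iff cell (r, c) holds value v
    # (0-based indices, 1-based DIMACS-style variable ids).
    def var(r, c, v):
        return r * n * n + c * n + v + 1

    clauses = []
    # each cell holds at least one and at most one value
    for r in range(n):
        for c in range(n):
            clauses.append([var(r, c, v) for v in range(n)])
            clauses += [[-var(r, c, v), -var(r, c, w)]
                        for v, w in combinations(range(n), 2)]

    def at_most_once(cells):
        # a given value appears at most once in a row, column or region
        for v in range(n):
            clauses.extend([[-var(r1, c1, v), -var(r2, c2, v)]
                            for (r1, c1), (r2, c2) in combinations(cells, 2)])

    for i in range(n):
        at_most_once([(i, c) for c in range(n)])    # row i
        at_most_once([(r, i) for r in range(n)])    # column i
    for br in range(0, n, box_rows):
        for bc in range(0, n, box_cols):
            at_most_once([(br + dr, bc + dc)
                          for dr in range(box_rows) for dc in range(box_cols)])

    clauses += [[var(r, c, v)] for r, c, v in givens]  # clues as unit clauses
    return clauses
```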
Abstract:
The goal of this work is to try to create a statistical model, based only on easily computable parameters of the CSP problem, to predict the runtime behaviour of the solving algorithms and allow us to choose the best algorithm to solve the problem. Although it seems that the obvious choice should be MAC, the experimental results obtained so far show that, with large numbers of variables, other algorithms perform much better, especially for hard problems in the transition phase.
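A toy sketch of the algorithm-selection idea, assuming one linear regression per algorithm over cheap instance features (the feature set, the models and all numbers below are illustrative placeholders, not data from the study):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical features per instance: number of variables, constraint
# density, mean domain size.  Runtimes (seconds) are made-up placeholders.
features = np.array([[120, 0.3, 4.5],
                     [800, 0.1, 7.0],
                     [500, 0.5, 3.2]])
observed = {"MAC": np.array([1.2, 35.0, 9.4]),
            "FC":  np.array([2.5, 20.0, 15.1])}

models = {name: LinearRegression().fit(features, y) for name, y in observed.items()}

new_instance = np.array([[650, 0.2, 6.1]])
best = min(models, key=lambda name: models[name].predict(new_instance)[0])
print("predicted best algorithm:", best)
```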
Abstract:
Background: A holistic perspective on health implies giving careful consideration to the relationship between physical and mental health. In this regard the present study sought to determine the level of Positive Mental Health (PMH) among people with chronic physical health problems, and to examine the relationship between the observed levels of PMH and both physical health status and socio-demographic variables. Methods: The study was based on the Multifactor Model of Positive Mental Health (Lluch, 1999), which comprises six factors: Personal Satisfaction (F1), Prosocial Attitude (F2), Self-control (F3), Autonomy (F4), Problem-solving and Self-actualization (F5), and Interpersonal Relationship Skills (F6). The sample comprised 259 adults with chronic physical health problems who were recruited through a primary care center in the province of Barcelona (Spain). Positive mental health was assessed by means of the Positive Mental Health Questionnaire (Lluch, 1999). Results: Levels of PMH differed, either on the global scale or on specific factors, in relation to the following variables: a) age: global PMH scores decreased with age (r=-0.129; p=0.038); b) gender: men scored higher on F1 (t=2.203; p=0.028) and F4 (t=3.182; p=0.002), while women scored higher on F2 (t=-3.086; p=0.002) and F6 (t=-2.744; p=0.007); c) number of health conditions: the fewer the number of health problems the higher the PMH score on F5 (r=-0.146; p=0.019); d) daily medication: polymedication patients had lower PMH scores, both globally and on various factors; e) use of analgesics: occasional use of painkillers was associated with higher PMH scores on F1 (t=-2.811; p=0.006). There were no significant differences in global PMH scores according to the type of chronic health condition. The only significant difference in the analysis by factors was that patients with hypertension obtained lower PMH scores on the factor Autonomy (t=2.165; p=0.032). Conclusions: Most people with chronic physical health problems have medium or high levels of PMH. The variables that adversely affect PMH are old age, polypharmacy and frequent consumption of analgesics. The type of health problem does not influence the levels of PMH. Much more extensive studies with samples without chronic pathology are now required in order to be able to draw more robust conclusions.
Abstract:
The results obtained in several international performance tests (most notably the well-known PISA 2003 report by the OECD) have prompted a multiplicity of initiatives aimed at improving students' performance in problem solving. In this article we set out a clear guideline on how problems should be used in Mathematics lessons, not to obtain better scores in performance tests but to improve the development of mathematical thinking in students. From this perspective, the author analyses, through eight reflections, how the concept of problem, as transmitted both in school and by society, influences students.
Abstract:
Despite the huge increase in processor and interprocessor network performance, many computational problems remain unsolved due to a lack of critical resources such as floating point sustained performance, memory bandwidth, etc. Examples of these problems are found in areas such as climate research, biology, astrophysics, high energy physics (Monte Carlo simulations) and artificial intelligence, among others. For some of these problems, the computing resources of a single supercomputing facility can be one or two orders of magnitude short of the resources needed to solve them. Supercomputer centers have to face an increasing demand for processing performance, with the direct consequence of an increasing number of processors and systems, resulting in a more difficult administration of HPC resources and the need for more physical space, higher electrical power consumption and improved air conditioning, among other problems. Some of the previous problems cannot be easily solved, so grid computing, understood as a technology enabling the addition and consolidation of computing power, can help in solving large scale supercomputing problems. In this document, we describe how two supercomputing facilities in Spain joined their resources to solve a problem of this kind. The objectives of this experience were, among others, to demonstrate that such a cooperation can enable the solution of bigger-dimension problems and to measure the efficiency that could be achieved. In this document we show some preliminary results of this experience and to what extent these objectives were achieved.