912 results for cutting stock problem with setups
Abstract:
In this paper we carry out an investigation of some of the major features of exam timetabling problems with a view to developing a similarity measure. This similarity measure will be used within a case-based reasoning (CBR) system to match a new problem with one from a case base of previously solved problems. The case base will also store the heuristic or meta-heuristic techniques applied most successfully to each problem stored. The technique(s) stored with the matched case will be retrieved and applied to the new case. The CBR assumption in our system is that similar problems can be solved equally well by the same technique.
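A rough sketch of the retrieval step such a CBR system performs is given below: each solved problem is stored with a feature vector and the technique that worked best on it, and a new problem is matched to its nearest stored neighbour. The features, the similarity measure, and the technique names here are illustrative assumptions, not the measure the paper develops.

```python
import math

# case base: (feature vector: exams, students, conflict density) -> technique
case_base = [
    ((500, 8000, 0.04), "tabu_search"),
    ((120, 1500, 0.12), "simulated_annealing"),
    ((900, 20000, 0.02), "graph_colouring_heuristic"),
]

def similarity(a, b):
    """Inverse Euclidean distance; in practice the features would be scaled
    to comparable ranges before computing the distance."""
    return 1.0 / (1.0 + math.dist(a, b))

def retrieve(new_problem):
    """Return the stored case most similar to the new problem."""
    return max(case_base, key=lambda case: similarity(case[0], new_problem))

features, technique = retrieve((450, 7000, 0.05))
print(f"most similar case {features}: apply {technique}")
```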
Abstract:
One challenge in data assimilation (DA) methods is how the error covariance for the model state is computed. Ensemble methods have been proposed for producing error covariance estimates, as the error is propagated in time using the non-linear model. Variational methods, on the other hand, use concepts from control theory, whereby the state estimate is optimized from both the background and the measurements. Numerical optimization schemes are applied which avoid the memory storage and huge matrix inversions needed by classical Kalman filter methods. The Variational Ensemble Kalman Filter (VEnKF), a method inspired by the Variational Kalman Filter (VKF), enjoys the benefits of both ensemble methods and variational methods. It avoids filter inbreeding problems, which emerge when the ensemble spread underestimates the true error covariance. In VEnKF this is tackled by resampling the ensemble every time measurements are available. One advantage of VEnKF over VKF is that it needs neither tangent linear code nor adjoint code. In this thesis, VEnKF has been applied to a two-dimensional shallow water model simulating a dam-break experiment. The model is a public code, with water height measurements recorded at seven stations along the mid-line of the 21.2 m long, 1.4 m wide flume. Because the data were too sparse to assimilate into the 30 171-dimensional model state vector, we chose to interpolate the data both in time and in space. The results of the assimilation were compared with those of a pure simulation. We found that the results produced by VEnKF were more realistic, without the numerical artifacts present in the pure simulation. Creating wrapper code for a model and a DA scheme can be challenging, especially when the two were designed independently or are poorly documented. In this thesis we present a non-intrusive approach to coupling the model and a DA scheme. An external program is used to send and receive information between the model and the DA procedure using files. The advantage of this method is that the changes needed in the model code are minimal: only a few lines which facilitate input and output. Apart from being simple to couple, the approach can be employed even if the two were written in different programming languages, because the communication is not through code. The non-intrusive approach accommodates parallel computing by simply telling the control program to wait until all the processes have ended before the DA procedure is invoked. It is worth mentioning the overhead introduced by the approach, as at every assimilation cycle both the model and the DA procedure have to be initialized. Nonetheless, the method can be an ideal approach for a benchmark platform for testing DA methods. The non-intrusive VEnKF has been applied to the multi-purpose hydrodynamic model COHERENS to assimilate Total Suspended Matter (TSM) in Lake Säkylän Pyhäjärvi. The lake has an area of 154 km² and an average depth of 5.4 m. Turbidity and chlorophyll-a concentrations from MERIS satellite images for 7 days between May 16 and July 6, 2009 were available. The effect of the organic matter was computationally eliminated to obtain TSM data. Because of the computational demands of both COHERENS and VEnKF, we chose to use a 1 km grid resolution. The results of VEnKF were compared with the measurements recorded at an automatic station located in the north-western part of the lake. However, due to the sparsity of the TSM data in both time and space, they could not be well matched.
The use of multiple automatic stations with real-time data is important to avoid the time-sparsity problem. Combined with DA, this will help in better understanding environmental hazard variables, for instance. We found that using a very large ensemble does not necessarily improve the results, because there is a limit beyond which additional ensemble members add very little to the performance. The successful implementation of the non-intrusive VEnKF, together with this ensemble size limit, points to the emerging area of Reduced Order Modelling (ROM). To save computational resources, ROM avoids running the full-blown model. When ROM is applied together with the non-intrusive DA approach, it might result in a cheaper algorithm that relaxes the computational challenges existing in the field of modelling and DA.
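A schematic sketch of the file-based, non-intrusive coupling described above follows. The executable names, file layout, and command-line flags are hypothetical; the pattern is simply: advance the model to the next observation time, exchange the state through a file, run the DA step, and repeat.

```python
import subprocess
import numpy as np

N_CYCLES = 7                 # one assimilation cycle per observation time
STATE_FILE = "state.npy"     # the file through which state is exchanged

np.save(STATE_FILE, np.zeros(30171))  # initial state (dimension from the text)

for cycle in range(N_CYCLES):
    # 1) The model reads STATE_FILE, advances to the next observation time,
    #    and overwrites the file with its forecast. With an ensemble, one
    #    process per member would be launched here, and the controller would
    #    wait for all of them to finish before invoking the DA step.
    subprocess.run(["./model", "--state", STATE_FILE], check=True)
    # 2) The DA program reads the forecast and this cycle's observations,
    #    then writes the analysis state back to the same file.
    subprocess.run(["./venkf", "--state", STATE_FILE,
                    "--obs", f"obs_{cycle}.dat"], check=True)
```

The overhead the abstract mentions is visible here: every cycle pays the start-up cost of both executables.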
Abstract:
The aim of this research is twofold: Firstly, to model and solve a complex nurse scheduling problem with an integer programming formulation and evolutionary algorithms. Secondly, to detail a novel statistical method of comparing and hence building better scheduling algorithms by identifying successful algorithm modifications. The comparison method captures the results of algorithms in a single figure that can then be compared using traditional statistical techniques. Thus, the proposed method of comparing algorithms is an objective procedure designed to assist in the process of improving an algorithm. This is achieved even when some results are non-numeric or missing due to infeasibility. The final algorithm outperforms all previous evolutionary algorithms, which relied on human expertise for modification.
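As a hedged illustration of the comparison idea, the sketch below condenses each algorithm variant's runs into single numbers, scores infeasible runs with a fixed penalty so that non-numeric results still enter the comparison, and applies a standard paired test. The penalty convention and the choice of test are assumptions for illustration, not the paper's actual method.

```python
from scipy.stats import wilcoxon

INFEASIBLE_PENALTY = 1_000.0   # worse than any feasible schedule's cost

def summarise(run_costs):
    """Map each run to one figure; None marks an infeasible run."""
    return [INFEASIBLE_PENALTY if c is None else c for c in run_costs]

# paired runs on the same instances of a baseline and a modified algorithm
baseline = summarise([42.0, 57.5, None, 61.0, 48.0])
modified = summarise([40.5, 51.0, 66.0, 58.5, None])

stat, p = wilcoxon(baseline, modified)
print(f"Wilcoxon signed-rank p-value: {p:.3f}")
```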
Abstract:
INTRODUCTION: In common with much of the developed world, Scotland has a severe and well-established problem with overweight and obesity in childhood, with recent figures demonstrating that 31% of Scottish children aged 2-15 were overweight (including obese) in 2014. This problem is more pronounced in socioeconomically disadvantaged groups and in older children across all economic groups (Scottish Health Survey, 2014). Children who are overweight or obese are at increased risk of a number of adverse health outcomes in the short term and throughout their life course (Lobstein and Jackson-Leach, 2006). The Scottish Government tasked all Scottish Health Boards with developing and delivering child healthy weight interventions to clinically overweight or obese children in an attempt to address this health problem. It is therefore imperative to deliver high-quality, affordable, appropriately targeted interventions which can make a sustained impact on children's lifestyles, setting them up for life as healthy-weight adults. This research aimed to inform the design, readiness for application and Health Board suitability of an effective primary school-based curricular child healthy weight intervention. METHODS: The process involved in conceptualising a child healthy weight intervention, developing the intervention, planning for implementation and subsequently evaluating it was guided by the PRECEDE-PROCEED Model (Green and Kreuter, 2005) and the Intervention Mapping protocol (Lloyd et al., 2011). RESULTS: The outputs from each stage of the development process were used to formulate a child healthy weight intervention conceptual model and then to develop plans for delivery and evaluation. DISCUSSION: The Fit for School conceptual model developed through this process has the potential to theoretically modify energy-balance-related behaviours associated with unhealthy weight gain in childhood. It also has the potential to be delivered at Health Board scale within current organisational restrictions.
Abstract:
The goal of Vehicle Routing Problems (VRP) and their variations is to transport a set of orders with the minimum number of vehicles at the least cost. Most approaches are designed to solve specific problem variations independently, whereas in real-world applications different constraints have to be handled concurrently. This research extends solutions obtained for the traveling salesman problem with time windows to a much wider class of route planning problems in logistics. The work describes a novel approach that:
- supports a heterogeneous fleet of vehicles
- dynamically reduces the number of vehicles
- respects individual capacity restrictions
- satisfies pickup and delivery constraints
- uses Hamiltonian paths (rather than cycles)
The proposed approach uses Monte-Carlo Tree Search, and in particular Nested Rollout Policy Adaptation, as sketched below. For the evaluation of the work, real data from industry was obtained and tested, and the results are reported.
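As a rough illustration of Nested Rollout Policy Adaptation (NRPA, Rosin 2011), the sketch below applies it to a toy Hamiltonian-path instance: a recursive search in which each level repeatedly calls the level below and shifts the rollout policy toward the best path found so far. The instance, the policy keyed on the previous stop, and all parameters are illustrative assumptions, not the industrial setting of the paper.

```python
import math
import random

CITIES = [(0, 0), (3, 1), (1, 4), (5, 3), (2, 2), (4, 5)]  # toy coordinates

def dist(a, b):
    return math.dist(CITIES[a], CITIES[b])

def rollout(policy):
    """Sample one Hamiltonian path from city 0, picking each next city
    with softmax weights read from the learned policy."""
    tour = [0]
    while len(tour) < len(CITIES):
        moves = [c for c in range(len(CITIES)) if c not in tour]
        weights = [math.exp(policy.get((tour[-1], m), 0.0)) for m in moves]
        tour.append(random.choices(moves, weights=weights)[0])
    length = sum(dist(a, b) for a, b in zip(tour, tour[1:]))  # path, not cycle
    return -length, tour  # higher score = shorter path

def adapt(policy, tour, alpha=1.0):
    """Shift probability mass toward the moves of the best path found."""
    new = dict(policy)
    visited = [0]
    for nxt in tour[1:]:
        moves = [c for c in range(len(CITIES)) if c not in visited]
        z = sum(math.exp(policy.get((visited[-1], m), 0.0)) for m in moves)
        for m in moves:
            p = math.exp(policy.get((visited[-1], m), 0.0)) / z
            new[(visited[-1], m)] = new.get((visited[-1], m), 0.0) - alpha * p
        new[(visited[-1], nxt)] = new.get((visited[-1], nxt), 0.0) + alpha
        visited.append(nxt)
    return new

def nrpa(level, policy, iterations=20):
    """Level 0 is a plain rollout; higher levels call the level below
    repeatedly and adapt the policy toward the best sequence seen."""
    if level == 0:
        return rollout(policy)
    best_score, best_tour = -math.inf, None
    for _ in range(iterations):
        score, tour = nrpa(level - 1, policy, iterations)
        if score > best_score:
            best_score, best_tour = score, tour
        policy = adapt(policy, best_tour)
    return best_score, best_tour

score, tour = nrpa(level=2, policy={})
print(f"best path {tour}, length {-score:.2f}")
```

At level 0 the search is a plain softmax rollout; raising the level or the iteration count trades computation for solution quality.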
Abstract:
One of the application areas of optimization is Biomedical Engineering, since optimization plays a role in the study of prostheses and implants, in tomographic reconstruction, in experimental mechanics, and in other applications. The main objective of this project is the creation of a new program for scheduling medical examinations in order to minimize the waiting time for carrying them out. A brief overview is given of optimization theory, of linear and non-linear optimization, and of the genetic algorithms that were used to carry out this work. A case study is also presented, formulated as a constrained non-linear optimization problem. This study showed that the scheduling of medical examinations can never be 100 percent optimized because of the number of variables involved, some of which cannot be predicted in advance.
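As a toy sketch of the genetic-algorithm idea mentioned in the abstract, the following orders a set of examinations to minimise total waiting time. The permutation encoding, the operators, and all parameters are illustrative assumptions, not the project's actual formulation.

```python
import random

N_EXAMS = 8
durations = [random.randint(10, 30) for _ in range(N_EXAMS)]  # minutes

def waiting_time(perm):
    """Total time patients wait if exams run back-to-back in this order."""
    total, clock = 0, 0
    for e in perm:
        total += clock          # each patient waits for all earlier exams
        clock += durations[e]
    return total

def crossover(a, b):
    """Order crossover: keep a prefix of a, fill the rest in b's order."""
    cut = random.randrange(1, N_EXAMS)
    head = a[:cut]
    return head + [e for e in b if e not in head]

pop = [random.sample(range(N_EXAMS), N_EXAMS) for _ in range(30)]
for _ in range(200):
    pop.sort(key=waiting_time)
    parents = pop[:10]                       # elitist selection
    children = [crossover(random.choice(parents), random.choice(parents))
                for _ in range(20)]
    for child in children:                   # swap mutation
        if random.random() < 0.3:
            i, j = random.sample(range(N_EXAMS), 2)
            child[i], child[j] = child[j], child[i]
    pop = parents + children
print("best total waiting time:", waiting_time(min(pop, key=waiting_time)))
```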
Abstract:
Natural language processing has achieved great success in a wide range of applications, producing both commercial language services and open-source language tools. However, most methods take a static or batch approach, assuming that the model has all information it needs and makes a one-time prediction. In this dissertation, we study dynamic problems where the input comes in a sequence instead of all at once, and the output must be produced while the input is arriving. In these problems, predictions are often made based only on partial information. We see this dynamic setting in many real-time, interactive applications. These problems usually involve a trade-off between the amount of input received (cost) and the quality of the output prediction (accuracy). Therefore, the evaluation considers both objectives (e.g., plotting a Pareto curve). Our goal is to develop a formal understanding of sequential prediction and decision-making problems in natural language processing and to propose efficient solutions. Toward this end, we present meta-algorithms that take an existent batch model and produce a dynamic model to handle sequential inputs and outputs. We build our framework upon the theory of Markov Decision Processes (MDPs), which allows learning to trade off competing objectives in a principled way. The main machine learning techniques we use are from imitation learning and reinforcement learning, and we advance current techniques to tackle problems arising in our settings. We evaluate our algorithm on a variety of applications, including dependency parsing, machine translation, and question answering. We show that our approach achieves a better cost-accuracy trade-off than the batch approach and heuristic-based decision-making approaches. We first propose a general framework for cost-sensitive prediction, where different parts of the input come at different costs. We formulate a decision-making process that selects pieces of the input sequentially, and the selection is adaptive to each instance. Our approach is evaluated on both standard classification tasks and a structured prediction task (dependency parsing). We show that it achieves similar prediction quality to methods that use all input, while inducing a much smaller cost. Next, we extend the framework to problems where the input is revealed incrementally in a fixed order. We study two applications: simultaneous machine translation and quiz bowl (incremental text classification). We discuss challenges in this setting and show that adding domain knowledge eases the decision-making problem. A central theme throughout the chapters is an MDP formulation of a challenging problem with sequential input/output and trade-off decisions, accompanied by a learning algorithm that solves the MDP.
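To make the cost-accuracy trade-off concrete, here is a minimal dynamic wrapper around a batch classifier that consumes input incrementally and stops as soon as it is confident enough. This is a confidence-threshold baseline of the heuristic kind the dissertation compares against, not its MDP-learned policy; the model and numbers are stand-ins.

```python
class BatchModel:
    """Stand-in for any batch model exposing class probabilities."""
    def predict_proba(self, tokens):
        # Hypothetical behaviour: confidence grows with input seen.
        p = min(0.5 + 0.1 * len(tokens), 0.99)
        return {"label_a": p, "label_b": 1.0 - p}

def dynamic_predict(model, stream, threshold=0.9, cost_per_token=1.0):
    """READ while the best class is below the threshold, then STOP and
    commit. Returns (prediction, cost incurred before committing)."""
    tokens, cost = [], 0.0
    for token in stream:
        tokens.append(token)
        cost += cost_per_token
        probs = model.predict_proba(tokens)
        label, conf = max(probs.items(), key=lambda kv: kv[1])
        if conf >= threshold:       # confident enough: stop early
            return label, cost
    return label, cost              # stream exhausted: forced decision

print(dynamic_predict(BatchModel(), ["w1", "w2", "w3", "w4", "w5"]))
```

Sweeping the threshold traces out exactly the kind of cost-versus-accuracy Pareto curve the abstract describes.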
Abstract:
Background: Biofloc technology (BFT), a rearing method with little or no water exchange, is gaining popularity in aquaculture. In the water column, such systems develop conglomerates of microbes, algae and protozoa, together with detritus and dead organic particles. The intensive microbial community present in these systems can be used as a pond water quality treatment system, and the microbial protein can serve as a feed additive. The current problem with BFT is the difficulty of controlling its bacterial community composition for both optimal water quality and optimal shrimp health. The main objective of the present study was to investigate the microbial diversity of samples obtained from different culture environments (biofloc technology and clear seawater), as well as from the intestines of shrimp reared in both environments, using high-throughput sequencing technology. Results: Analyses of the bacterial communities identified in water from the BFT and "clear seawater" (CW, control) systems containing the shrimp Litopenaeus stylirostris revealed large differences in the frequency distribution of operational taxonomic units (OTUs). Four out of the five most dominant bacterial communities differed between the two culture methods. Bacteria found in great abundance in BFT have two principal characteristics: the need for an organic substrate or nitrogen sources to grow, and the capacity to attach to surfaces and co-aggregate. A correlation was found between bacterial groups and the physicochemical and biological parameters measured in the rearing tanks. Moreover, the rearing-water bacterial communities influenced the microbiota of the shrimp. Indeed, the biofloc environment modified the shrimp intestinal microbiota, as shown by the low level (27%) of similarity between the intestinal bacterial communities from the two treatments. Conclusion: This study provides the first information describing the complex biofloc microbial community, which can help in understanding the environment-microbiota-host relationship in this rearing system.
Abstract:
Part 21: Mobility and Logistics
Abstract:
Since policy-makers usually pursue several conflicting objectives, policy-making can be understood as a multicriteria decision problem. Following the methodological proposal of André and Cardenete (2005), multi-objective programming is used in connection with a computable general equilibrium model to represent optimal policy-making and to obtain so-called efficient policies in an application to a regional economy (Andalusia, Spain). This approach is applied to the design of subsidy policies under two different scenarios. In the first scenario, it is assumed that the government is concerned with just two objectives: ensuring the profitability of a key strategic sector and increasing overall output. In the second scenario, the scope of the exercise is enlarged by solving a problem with seven policy objectives, including both general and sectoral objectives. It is concluded that the observed policy could have been Pareto-improved in several directions.
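A toy sketch of how weighted-sum multi-objective programming traces out efficient policies: each weighting of the two objectives yields one efficient point of a small linear program. The two-variable subsidy problem and its coefficients are invented for illustration and are unrelated to the paper's computable general equilibrium model.

```python
from scipy.optimize import linprog

# Toy problem: choose subsidies s1, s2 (budget 10) to trade off two
# objectives: overall output (2*s1 + 1*s2) and the profitability of a
# strategic sector (0.5*s1 + 1.5*s2).
for w1 in (0.0, 0.25, 0.5, 0.75, 1.0):
    w2 = 1.0 - w1
    # linprog minimises, so negate the weighted objective to maximise it
    c = [-(w1 * 2.0 + w2 * 0.5), -(w1 * 1.0 + w2 * 1.5)]
    res = linprog(c, A_ub=[[1, 1]], b_ub=[10], bounds=[(0, None)] * 2)
    print(f"w=({w1:.2f},{w2:.2f})  subsidies={res.x}  value={-res.fun:.2f}")
```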
Abstract:
In the standard Vehicle Routing Problem (VRP), we route a fleet of vehicles to deliver the demands of all customers such that the total distance traveled by the fleet is minimized. In this dissertation, we study variants of the VRP that minimize the completion time, i.e., we minimize the distance of the longest route. We call it the min-max objective function. In applications such as disaster relief efforts and military operations, the objective is often to finish the delivery or the task as soon as possible, not to plan routes with the minimum total distance. Even in commercial package delivery nowadays, companies are investing in new technologies to speed up delivery instead of focusing merely on the min-sum objective. In this dissertation, we compare the min-max and the standard (min-sum) objective functions in a worst-case analysis to show that the optimal solution with respect to one objective function can be very poor with respect to the other. The results motivate the design of algorithms specifically for the min-max objective. We study variants of min-max VRPs including one problem from the literature (the min-max Multi-Depot VRP) and two new problems (the min-max Split Delivery Multi-Depot VRP with Minimum Service Requirement and the min-max Close-Enough VRP). We develop heuristics to solve these three problems. We compare the results produced by our heuristics to the best-known solutions in the literature and find that our algorithms are effective. In the case where benchmark instances are not available, we generate instances whose near-optimal solutions can be estimated based on geometry. We formulate the Vehicle Routing Problem with Drones and carry out a theoretical analysis to show the maximum benefit from using drones in addition to trucks to reduce delivery time. The speed-up ratio depends on the number of drones loaded onto one truck and the speed of the drone relative to the speed of the truck.
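The contrast between the two objective functions is easy to state in code. The helper below evaluates a set of routes under both: the min-sum value (total distance) and the min-max value (length of the longest route, i.e., completion time). The coordinates are an invented toy instance.

```python
import math

def route_length(route, coords, depot=(0.0, 0.0)):
    """Distance of depot -> customers in order -> depot."""
    pts = [depot] + [coords[c] for c in route] + [depot]
    return sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))

def objectives(routes, coords):
    lengths = [route_length(r, coords) for r in routes]
    return sum(lengths), max(lengths)   # (min-sum value, min-max value)

coords = {1: (1, 1), 2: (2, 0), 3: (-1, 2), 4: (0, 3)}
total, longest = objectives([[1, 2], [3, 4]], coords)
print(f"total distance {total:.2f}, completion time {longest:.2f}")
```

A solution minimising the total can still have a very long single route, which is the gap the worst-case analysis quantifies.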
Abstract:
Photothermal imaging allows the structure of composite materials to be inspected by means of nondestructive tests. The surface of a medium is heated at a number of locations, and the resulting temperature field is recorded on the same surface. Thermal waves are strongly damped, so robust schemes are needed to reconstruct the structure of the medium from the decaying, time-dependent temperature field. The inverse problem is formulated as a weighted optimization problem with a time-dependent constraint, in which the inclusions buried in the medium and their material constants are the design variables. We propose an approximation scheme in two steps. First, Laplace transforms are used to generate an approximate optimization problem with a small number of stationary constraints. Then, we implement a descent strategy that alternates topological derivative techniques, to reconstruct the geometry of the inclusions, with gradient methods, to identify their material parameters. Numerical simulations assess the effectiveness of the technique.
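The alternating descent strategy can be sketched on a drastically simplified toy problem: a discrete "geometry" update (which cells contain an inclusion, a crude stand-in for a topological-derivative indicator) alternated with gradient steps on a single material constant. The forward model and data below are invented; the real problem constrains a time-dependent PDE.

```python
import numpy as np

n_cells = 6
true_mask = np.array([0, 1, 1, 0, 0, 0], dtype=float)  # inclusion geometry
true_k = 2.5                                           # material constant
forward = lambda mask, k: k * mask                     # toy "temperature" data
data = forward(true_mask, true_k)

def misfit(mask, k):
    return 0.5 * np.sum((forward(mask, k) - data) ** 2)

mask, k = np.zeros(n_cells), 1.0
for _ in range(20):
    # geometry step: flip the single cell that most decreases the misfit
    flips = []
    for i in range(n_cells):
        trial = mask.copy()
        trial[i] = 1 - trial[i]
        flips.append((misfit(trial, k), i))
    best, i = min(flips)
    if best < misfit(mask, k):
        mask[i] = 1 - mask[i]
    # material step: gradient descent on k with the geometry fixed
    for _ in range(5):
        grad = np.sum((forward(mask, k) - data) * mask)
        k -= 0.1 * grad
print("recovered mask:", mask, "recovered k: %.2f" % k)
```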
Abstract:
Most current domestic installations are single-phase, with a contracted power of 15 kW or less and a potential difference of 230 V. When higher consumption is expected, three alternating currents with a voltage difference of 400 V between them, called phases, are used instead. This enables the subdivision of the installation into different single-phase circuits, each fed independently between one phase and the neutral; each such pair has, in turn, a voltage difference of 230 V. The neutral is common to all three phases so that, if the system is balanced, no current flows through it. The problem with these installations is that they are designed to operate with the three phases offset from one another and consuming equal amounts of energy simultaneously. Connecting independent single-phase loads to each of the phases unbalances this operation and disturbs the original three-phase circuit, with corresponding increases in consumption, heating of motors, etc.
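For reference, the two voltage levels quoted above are linked by the √3 factor between line-to-line and line-to-neutral voltages in a balanced three-phase system:

```latex
\[
  V_{\text{line-line}} = \sqrt{3}\; V_{\text{line-neutral}}
  \approx 1.732 \times 230\ \text{V} \approx 400\ \text{V}.
\]
```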