899 results for the SIMPLE algorithm


Relevância:

90.00%

Publicador:

Resumo:

Schedules can be built in a similar way to a human scheduler by using a set of rules that involve domain knowledge. This paper presents an Estimation of Distribution Algorithm (EDA) for the nurse scheduling problem, which involves choosing a suitable scheduling rule from a set for the assignment of each nurse. Unlike previous work that used Genetic Algorithms (GAs) to implement implicit learning, the learning in the proposed algorithm is explicit, i.e. we identify and mix building blocks directly. The EDA is applied to implement such explicit learning by building a Bayesian network of the joint distribution of solutions. The conditional probability of each variable in the network is computed according to an initial set of promising solutions. Subsequently, each new instance for each variable is generated by using the corresponding conditional probabilities, until all variables have been generated, i.e. in our case, a new rule string has been obtained. Another set of rule strings will be generated in this way, some of which will replace previous strings based on fitness selection. If stopping conditions are not met, the conditional probabilities for all nodes in the Bayesian network are updated again using the current set of promising rule strings. Computational results from 52 real data instances demonstrate the success of this approach. It is also suggested that the learning mechanism in the proposed approach might be suitable for other scheduling problems.
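The sampling-and-update loop described above can be sketched as follows. For brevity this toy version uses a univariate probability model (independent marginals re-estimated from the promising strings) instead of the paper's full Bayesian network, and the rule set and fitness function are hypothetical:

```python
import random

def eda_rule_strings(n_nurses, n_rules, fitness, pop_size=50, n_best=10,
                     iters=30, seed=0):
    """Toy EDA for rule strings: position i holds the index of the scheduling
    rule used for nurse i.  Marginal probabilities are re-estimated from the
    best strings each generation and sampled to build the next one (the paper
    models the full joint distribution with a Bayesian network; this sketch
    drops the conditional dependencies)."""
    rng = random.Random(seed)
    # start from a uniform model: prob[i][r] = P(rule r chosen for nurse i)
    prob = [[1.0 / n_rules] * n_rules for _ in range(n_nurses)]
    best = None
    for _ in range(iters):
        pop = [[rng.choices(range(n_rules), weights=prob[i])[0]
                for i in range(n_nurses)] for _ in range(pop_size)]
        pop.sort(key=fitness, reverse=True)
        if best is None or fitness(pop[0]) > fitness(best):
            best = pop[0]
        # re-estimate the marginals from the promising rule strings
        for i in range(n_nurses):
            counts = [1e-3] * n_rules          # small prior avoids zeros
            for s in pop[:n_best]:
                counts[s[i]] += 1
            total = sum(counts)
            prob[i] = [c / total for c in counts]
    return best

# hypothetical objective: rule 2 happens to be best for every nurse
target = lambda s: sum(1 for r in s if r == 2)
print(eda_rule_strings(8, 4, target))
```

The replacement-by-fitness and stopping-condition logic of the paper are compressed here into a fixed iteration count and elite truncation.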

Relevância:

90.00%

Publicador:

Resumo:

This work presents mixed convection heat transfer inside a lid-driven cavity heated from below and filled with a porous medium modelled by heterogeneous and homogeneous approaches. In the heterogeneous approach, the solid domain is represented by equally spaced heat-conducting blocks; the fluid phase surrounds the blocks and is bounded by the cavity walls. The homogeneous, or pore-continuum, approach is characterized by the cavity porosity and permeability. Generalized mass, momentum and energy conservation equations are obtained in dimensionless form to represent both the continuum and pore-continuum models. The numerical solution is obtained via the finite volume method: the QUICK interpolation scheme is used for the advection terms and the SIMPLE algorithm for pressure-velocity coupling. To remain in the laminar regime, the flow parameters are kept in the ranges 10²≤Re≤10³ and 10³≤Ra≤10⁶ for both the heterogeneous and homogeneous approaches. In the configurations tested for the continuum model, 9, 16, 36, and 64 blocks are considered for each combination of Re and Ra, with the microscopic porosity held constant at φ=0.64. For the pore-continuum model, the Darcy number (Da) is set according to the number of blocks in the heterogeneous cavity and to φ. Numerical results of the comparative study between the microscopic and macroscopic approaches are presented. As a result, average Nusselt number equations for the continuum and pore-continuum models are obtained as functions of Ra and Re.
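As a small illustration of the governing parameters, the dimensionless groups named above can be computed from fluid and geometry properties; the property values in the example are hypothetical, chosen only to land inside the study's parameter ranges:

```python
def dimensionless_groups(U, L, nu, g, beta, dT, alpha, K):
    """Dimensionless groups for the lid-driven porous cavity:
    Re (lid inertia vs. viscosity), Ra (buoyancy vs. diffusion),
    Da (permeability vs. cavity size squared)."""
    Re = U * L / nu                            # Reynolds number
    Ra = g * beta * dT * L**3 / (nu * alpha)   # Rayleigh number
    Da = K / L**2                              # Darcy number
    return Re, Ra, Da

# hypothetical lid speed, cavity size and fluid/medium properties (SI units)
Re, Ra, Da = dimensionless_groups(U=0.01, L=0.1, nu=1e-5, g=9.81,
                                  beta=3e-3, dT=10.0, alpha=2e-5, K=1e-6)
print(Re, Ra, Da)
```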

Relevância:

90.00%

Publicador:

Resumo:

This paper presents a new type of genetic algorithm for the set covering problem. It differs from previous evolutionary approaches first because it is an indirect algorithm, i.e. the actual solutions are found by an external decoder function; the genetic algorithm itself provides this decoder with permutations of the solution variables and other parameters. Second, it is shown that results can be further improved by adding another indirect optimisation layer: the decoder does not directly seek out low-cost solutions but instead aims for good exploitable solutions, which are then post-optimised by a hill-climbing algorithm. Although seemingly more complicated, this three-stage approach has advantages in terms of solution quality, speed and adaptability to new types of problems over more direct approaches. Extensive computational results are presented and compared to the latest evolutionary and other heuristic approaches on the same data instances.
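The indirect idea can be sketched as follows: the GA evolves a permutation, and a decoder turns it into a feasible cover. This is a generic greedy permutation decoder for illustration, not the paper's exact decoder function:

```python
def decode_cover(perm, columns, n_rows):
    """Indirect decoder: scan columns in the order given by the evolved
    permutation and keep each column that still covers an uncovered row.
    Feasibility is guaranteed by the decoder, so the GA only has to search
    over permutations."""
    covered, chosen = set(), []
    for j in perm:
        if columns[j] - covered:          # column j covers something new
            chosen.append(j)
            covered |= columns[j]
        if len(covered) == n_rows:        # all rows covered: stop early
            break
    return chosen

# toy instance: 4 rows, 3 candidate columns (each column is a set of rows)
cols = [{0, 1}, {1, 2}, {2, 3}]
print(decode_cover([0, 2, 1], cols, 4))   # columns 0 and 2 already cover all rows
```

Different permutations decode to different covers, which is what gives the GA its search space.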

Relevância:

90.00%

Publicador:

Resumo:

During our earlier research, it was recognised that for an indirect genetic algorithm approach using a decoder to be successful, the decoder has to strike a balance between being an optimiser in its own right and finding feasible solutions. Previously this balance was achieved manually. Here we extend this work by presenting an automated approach in which the genetic algorithm itself, while solving the problem, sets the weights that balance the components. With this approach we were able to solve a complex and non-linear scheduling problem better than with a standard direct genetic algorithm implementation.
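The self-adaptive weighting can be sketched like this, with a hypothetical genome layout in which the last genes are the balancing weights; the objective and penalty functions below are toy stand-ins:

```python
def evaluate(genome, objective, penalties):
    """The genome carries the candidate solution plus the weights that balance
    the decoder's components, so the GA tunes the balance while it searches
    (a sketch of the idea with a hypothetical genome split: the last two genes
    are the evolved weights)."""
    solution, weights = genome[:-2], genome[-2:]
    return objective(solution) - sum(w * p
                                     for w, p in zip(weights, penalties(solution)))

# toy problem: maximise the gene sum while penalising values over 5
# and negative values, each penalty scaled by its own evolved weight
obj = lambda s: sum(s)
pen = lambda s: [sum(max(0, x - 5) for x in s),
                 sum(1 for x in s if x < 0)]
genome = [3, 7, 4, 0.5, 1.0]   # three solution genes, two evolved weights
print(evaluate(genome, obj, pen))   # 14 - 0.5*2 - 1.0*0 = 13.0
```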

Relevância:

90.00%

Publicador:

Resumo:

Master's dissertation—Universidade de Brasília, Faculdade de Tecnologia, 2016.

Relevância:

90.00%

Publicador:

Resumo:

Plague is a text editor for files written in planning languages such as STRIPS and PDDL. It can run the GraphPlan algorithm on the edited domain and problem files and find a solution to the stated problem. The editor's goal is primarily pedagogical: it is very simple to use and comes with a variety of examples in both planning languages, so that the user can learn them gradually. In addition, the output of a run lets the user follow the GraphPlan algorithm step by step: the operators executed, the no-ops followed, the mutexes applied at each level and the time spent, as well as the final solution to the problem if one is reached. The program uses two utilities to compile the STRIPS or PDDL code: JavaGP and PDDL4J. Once the planning problem has been run, the output is shown on screen, and the complete problem, including the solution, can also be printed. The objective has been to create a program that lets the user quickly edit STRIPS and PDDL files, compile them and obtain the result in a single place, with output that is much clearer, better organised and easier to understand, avoiding the need for external editors and a command-line window to run GraphPlan.

Relevância:

90.00%

Publicador:

Resumo:

The kinematic structure of planar mechanisms concerns the attributes determined exclusively by the joining pattern among the links forming a mechanism. The system group classification is central to the kinematic structure and consists of determining a sequence of kinematically and statically independent simple chains, which represent a modular basis for the kinematic and force analysis of the mechanism. This article presents a novel graph-based algorithm for the structural analysis of planar mechanisms with closed-loop kinematic structure, which determines a sequence of modules (Assur groups) representing the topology of the mechanism. A computational complexity analysis and a proof of correctness of the implemented algorithm are provided. A case study illustrates the results of the devised method.

Relevância:

90.00%

Publicador:

Resumo:

The aim of this paper is to present new results on H-infinity control synthesis for time-delay linear systems. We extend the use of a finite-order LTI system, called a comparison system, to H-infinity analysis and design. Unlike other control design methods available in the literature to date, the one presented here treats time-delay system control design with classical numerical routines based on Riccati equations arising from H-infinity theory. The proposed algorithm is simple, efficient and easy to implement. Examples illustrating state- and output-feedback design are solved and discussed in order to highlight the most relevant characteristics of the theoretical results. Moreover, a practical application involving a 3-DOF networked control system is presented.

Relevância:

90.00%

Publicador:

Resumo:

The only method used to date to measure dissolved nitrate concentration (NITRATE) with sensors mounted on profiling floats is based on the absorption of light at ultraviolet wavelengths by the nitrate ion (Johnson and Coletti, 2002; Johnson et al., 2010; 2013; D’Ortenzio et al., 2012). Nitrate has a modest UV absorption band with a peak near 210 nm, which overlaps with the stronger absorption band of bromide, peaking near 200 nm. In addition, there is a much weaker absorption due to dissolved organic matter and light scattering by particles (Ogura and Hanya, 1966). The UV spectrum thus consists of three components: bromide, nitrate and a background due to organics and particles. The background also includes thermal effects on the instrument and slow drift. All of these latter effects (organics, particles, thermal effects and drift) tend to be smooth spectra that combine to form an absorption spectrum that is linear in wavelength over relatively short wavelength spans. If the light absorption spectrum is measured in the wavelength range of about 217 to 240 nm (the exact range is partly an operator decision), then the nitrate concentration can be determined. Two different instruments based on the same optical principles are in use for this purpose. The In Situ Ultraviolet Spectrophotometer (ISUS), built at MBARI or at Satlantic, has been mounted inside the pressure hull of Teledyne/Webb Research APEX and NKE Provor profiling floats, with the optics penetrating through the upper end cap into the water. The Satlantic Submersible Ultraviolet Nitrate Analyzer (SUNA) is placed on the outside of APEX, Provor, and Navis profiling floats in its own pressure housing and is connected to the float through an underwater cable that provides power and communications. Power, communications between the float controller and the sensor, and data processing requirements are essentially the same for both ISUS and SUNA.
There are several possible algorithms that can be used for the deconvolution of nitrate concentration from the observed UV absorption spectrum (Johnson and Coletti, 2002; Arai et al., 2008; Sakamoto et al., 2009; Zielinski et al., 2011). In addition, the default algorithm available in Satlantic sensors is a proprietary approach, but this is not generally used on profiling floats. There are tradeoffs in every approach. To date, almost all nitrate sensors on profiling floats have used the Temperature Compensated Salinity Subtracted (TCSS) algorithm developed by Sakamoto et al. (2009), and this document focuses on that method. Further algorithm development is likely, so it is necessary that the data systems clearly identify the algorithm that is used. It is also desirable that the data system allow prior data sets to be recalculated with new algorithms. To accomplish this, the float must report not just the computed nitrate but also the observed light intensities. The rule for obtaining a single NITRATE parameter is then: if the spectrum is present, NITRATE should be recalculated from the spectrum, while the on-board computation of nitrate concentration can still provide useful diagnostics of data quality.
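The basic deconvolution idea (nitrate and bromide bands plus a background that is linear in wavelength) can be sketched as a linear least-squares fit. The extinction spectra below are synthetic stand-ins, not calibrated instrument values, and this sketch is not the TCSS algorithm itself:

```python
import numpy as np

# model the measured absorbance over ~217-240 nm as
#   no3 * eps_no3 + br * eps_br + (a + b * wavelength)
# and solve for the four coefficients by least squares
wl = np.linspace(217, 240, 24)
eps_no3 = np.exp(-((wl - 210) / 12.0) ** 2)   # synthetic nitrate band, peak 210 nm
eps_br = np.exp(-((wl - 200) / 10.0) ** 2)    # synthetic bromide band, peak 200 nm

def fit_nitrate(absorbance, wl, eps_no3, eps_br):
    """Return (nitrate, bromide, baseline offset, baseline slope) estimates."""
    A = np.column_stack([eps_no3, eps_br, np.ones_like(wl), wl])
    coeffs, *_ = np.linalg.lstsq(A, absorbance, rcond=None)
    return coeffs

# simulate a spectrum with known concentrations plus a smooth linear background
true_no3, true_br = 2.0, 30.0
measured = true_no3 * eps_no3 + true_br * eps_br + 0.01 + 1e-4 * wl
no3, br, a, b = fit_nitrate(measured, wl, eps_no3, eps_br)
print(round(no3, 3), round(br, 3))
```

The linear-baseline term is what absorbs the smooth organic, particle, thermal and drift effects described above.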

Relevância:

90.00%

Publicador:

Resumo:

The bubble crab Dotilla fenestrata forms very dense populations on the sand flats of the eastern coast of Inhaca Island, Mozambique, making it an interesting biological model to examine spatial distribution patterns and test the relative efficiency of common sampling methods. Due to its apparent ecological importance within the sandy intertidal community, understanding the factors ruling the dynamics of Dotilla populations is also a key issue. In this study, different techniques of estimating crab density are described, and the trends of spatial distribution of the different population categories are shown. The studied populations are arranged in discrete patches located at the well-drained crests of nearly parallel mega sand ripples. For a given sample size, there was an obvious gain in precision by using a stratified random sampling technique, considering discrete patches as strata, compared to the simple random design. Density average and variance differed considerably among patches since juveniles and ovigerous females were found clumped, with higher densities at the lower and upper shore levels, respectively. Burrow counting was found to be an adequate method for large-scale sampling, although consistently underestimating actual crab density by nearly half. Regression analyses suggested that crabs smaller than 2.9 mm carapace width tend to be undetected in visual burrow counts. A visual survey of sampling plots over several patches of a large Dotilla population showed that crab density varied in an interesting oscillating pattern, apparently following the topography of the sand flat. Patches extending to the lower shore contained higher densities than those mostly covering the higher shore. Within-patch density variability also pointed to the same trend, but the density increment towards the lowest shore level varied greatly among the patches compared.
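The precision gain from stratifying by patch can be illustrated with a small simulation; the crab densities below are hypothetical, chosen only to mimic the clumped, between-patch variation described above:

```python
import random
import statistics

def simple_mean(pool, n, rng):
    """Simple random sample of n plots from the pooled sand flat."""
    return statistics.mean(rng.sample(pool, n))

def stratified_mean(patches, n, rng):
    """Stratified random sample: treat each patch as a stratum and allocate
    plots proportionally to patch size."""
    total = sum(len(p) for p in patches)
    return sum(len(p) / total *
               statistics.mean(rng.sample(p, max(1, n * len(p) // total)))
               for p in patches)

# three patches with very different mean densities (crabs per plot)
rng = random.Random(1)
patches = [[rng.gauss(mu, 2) for _ in range(60)] for mu in (5, 20, 50)]
pool = [d for p in patches for d in p]

reps = 1000
sd_simple = statistics.stdev(simple_mean(pool, 15, rng) for _ in range(reps))
sd_strat = statistics.stdev(stratified_mean(patches, 15, rng) for _ in range(reps))
print(sd_simple, sd_strat)   # stratification wins when strata means differ
```

Because most of the density variance lies between patches rather than within them, the stratified estimator's spread is far smaller for the same total sample size.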

Relevância:

90.00%

Publicador:

Resumo:

International audience

Relevância:

90.00%

Publicador:

Resumo:

Master's dissertation—Universidade de Brasília, Faculdade Gama, Graduate Program in Biomedical Engineering, 2015.

Relevância:

90.00%

Publicador:

Resumo:

This paper addresses a potential role that tariffs and tariff policy can play in encouraging countries to take part in a multilateral effort to mitigate climate change. It begins by assessing whether increasing tariffs on products from energy-intensive or polluting industries amounts to a violation of WTO rules and whether protectionism in this case can be differentiated from genuine environmental concerns. It then argues that while lowering tariffs for environmental goods can serve as a carrot to promote dissemination of cleaner technologies, tariff deconsolidation is a legitimate stick to encourage polluting countries to move towards an international climate agreement. The paper further explores this view by undertaking a partial-equilibrium simulation analysis to examine the impact of a unilateral unit increase in tariffs on the imports of the most carbon-intensive products from countries not committed to climate policies. Our results suggest that the committed importing countries would have to raise their tariffs only slightly to effect a significant decline in the imports of these products from the non-committed countries. For instance, a unit increase in the simple average applied tariffs on the imports of these carbon-intensive products in 2005 from our sample of non-committed exporting countries would reduce the imports of these products by an average 32.6% in Australia, 178% in Canada, 195% in the EU, 271% in Japan and 62% in the US, thereby suggesting the effectiveness of such a measure in pushing countries towards a global climate policy.
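A minimal sketch of the partial-equilibrium logic, assuming a constant import demand elasticity and full tariff pass-through; the elasticity and tariff values in the example are illustrative, not the paper's estimates:

```python
def import_change(elasticity, tariff_old, tariff_increase=0.01):
    """Percent change in imports from a tariff rise, under a constant import
    demand elasticity: the tariff increase dt raises the landed price by
    dt / (1 + t), and imports respond by elasticity times that price change."""
    dp = tariff_increase / (1 + tariff_old)   # proportional price change
    return elasticity * dp * 100.0            # percent change in imports

# illustrative: a one-percentage-point rise on a 5% tariff with elasticity -30
print(round(import_change(elasticity=-30.0, tariff_old=0.05), 1))
```

Large percentage declines, as in the paper's results, correspond in this framing to very elastic import demand for the carbon-intensive products.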

Relevância:

90.00%

Publicador:

Resumo:

Using robotic systems for missions that require power distribution can significantly decrease the need for human intervention in such missions. Achieving this capability requires a robotic system capable of autonomous navigation, power-system adaptation, and establishing physical connections. This thesis presents path planning and navigation algorithms developed for an autonomous ground power distribution system. A survey of existing path planning methods is presented along with two algorithms developed by the author. The first is a simple path planner suitable for implementation on lab-size platforms; a navigation hierarchy is developed for experimental validation of this planner and as a proof of concept for an autonomous ground power distribution system in a lab environment. The second is a robust path planner developed for full-size implementation based on lessons learned from the lab-size experiments. Simulation results show that the algorithm is efficient and reliable in unknown environments. Future plans for developing intelligent power electronics and integrating them with robotic systems are presented. The ultimate goal is a power distribution system capable of regulating power flow at a desired voltage and frequency, adaptable to load demands.
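A simple path planner of the kind suited to lab-size platforms could look like the following breadth-first search on an occupancy grid; this is a generic sketch of such a planner, not the thesis's actual algorithm:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search on an occupancy grid (0 = free, 1 = obstacle).
    Returns the shortest 4-connected path as a list of (row, col) cells,
    or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    frontier, parent = deque([start]), {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:                       # reconstruct path by backtracking
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in parent):
                parent[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None

# toy occupancy grid: a wall forces the robot around the right side
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(plan_path(grid, (0, 0), (2, 0)))
```

BFS guarantees the shortest path in steps, which is usually sufficient at lab scale; larger platforms typically move to weighted planners such as A*.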

Relevância:

90.00%

Publicador:

Resumo:

Research has found that children with autism spectrum disorders (ASD) show significant deficits in receptive language skills (Wiesmer, Lord, & Esler, 2010). One of the primary goals of applied behavior analytic intervention is to improve the communication skills of children with autism by teaching receptive discriminations. Both receptive discriminations and receptive language entail matching spoken words with corresponding objects, symbols (e.g., pictures or words), actions, people, and so on (Green, 2001). In order to develop receptive language skills, children with autism often undergo discrimination training within the context of discrete trial training. This training entails teaching the learner to respond differentially to different stimuli (Green, 2001). It is through discrimination training that individuals with autism learn and develop language (Lovaas, 2003). The present study compares three procedures for teaching receptive discriminations: (1) simple/conditional (Procedure A), (2) conditional only (Procedure B), and (3) conditional discrimination of two target cards (Procedure C). Six children with an autism diagnosis, ranging in age from 2 to 5 years old, were taught to receptively discriminate nine sets of stimuli. Results suggest that the extra training steps included in the simple/conditional and conditional-only procedures may not be necessary to teach children with autism to receptively discriminate. For all participants, Procedure C appeared to be the most efficient and effective procedure for teaching young children with autism receptive discriminations. Response maintenance and generalization probes conducted one month after the end of training indicate that even though Procedure C required fewer training sessions overall, no one procedure resulted in better maintenance and generalization than the others. In other words, the additional training sessions in the simple/conditional and conditional-only procedures did not improve participants' ability to respond accurately or to generalize one month after training. The present study contributes to the literature on the most efficient and effective way to teach receptive discrimination during discrete trial training to children with ASD. These findings are important, as research shows that receptive language skills are predictive of better outcomes and adaptive behaviors in the future.