100 results for Sweep algorithms
Abstract:
In this paper, we address the problem of scheduling jobs in a no-wait flowshop with the objective of minimising the total completion time. This problem is well known to be NP-hard, and therefore most contributions to the topic focus on developing algorithms able to obtain good approximate solutions in a short CPU time. More specifically, there are various constructive heuristics available for the problem [such as the ones by Rajendran and Chaudhuri (Nav Res Logist 37:695-705, 1990), Bertolissi (J Mater Process Technol 107:459-465, 2000), Aldowaisan and Allahverdi (Omega 32:345-352, 2004), and the CHINS heuristic by Fink and Voß (Eur J Oper Res 151:400-414, 2003)], as well as a successful local search procedure (PILOT-1-CHINS). We propose a new constructive heuristic based on an analogy with the two-machine problem in order to select the candidate to be appended to the partial schedule. The myopic behaviour of the heuristic is tempered by exploring the neighbourhood of the so-obtained partial schedules. The computational results indicate that the proposed heuristic outperforms existing ones in terms of the quality of the solution obtained and equals the performance of the time-consuming PILOT-1-CHINS.
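The core of any constructive heuristic of this kind is a no-wait completion-time recurrence plus a rule for choosing the next job to append. The sketch below illustrates that skeleton only; the append rule here is a plain greedy one, not the two-machine analogy proposed in the paper, and min_delay implements the standard no-wait start-offset recurrence.

def min_delay(p_i, p_j):
    """Minimum start-time offset between consecutive jobs i and j so that
    job j never waits between machines (standard no-wait recurrence)."""
    m = len(p_i)
    return max(sum(p_i[:k + 1]) - sum(p_j[:k]) for k in range(m))

def total_completion_time(seq, p):
    start, total = 0, 0
    for idx, j in enumerate(seq):
        if idx > 0:
            start += min_delay(p[seq[idx - 1]], p[j])
        total += start + sum(p[j])
    return total

def greedy_construct(p):
    jobs = list(range(len(p)))
    first = min(jobs, key=lambda j: sum(p[j]))   # seed with the shortest job
    seq, _ = [first], jobs.remove(first)
    while jobs:
        best = min(jobs, key=lambda j: total_completion_time(seq + [j], p))
        seq.append(best)
        jobs.remove(best)
    return seq

p = [[3, 2, 4], [2, 5, 1], [4, 1, 3]]            # toy: 3 jobs, 3 machines
seq = greedy_construct(p)
print(seq, total_completion_time(seq, p))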
Abstract:
This paper presents a domain boundary element formulation for inelastic saturated porous media with rate-independent behavior of the solid skeleton. The formulation is then applied to elastic-plastic behavior of the solid. Biot's consolidation theory, extended to include irreversible phenomena, is considered, and the direct boundary element technique is used for the numerical solution after time discretization by the implicit backward Euler algorithm. The associated nonlinear algebraic problem is solved by the Newton-Raphson procedure, whereby the loading/unloading conditions are fully taken into account and the consistent tangent operator is defined. Only domain nodes (nodes defined inside the domain) are used to represent all domain values, and the corresponding integrals are computed using an accurate sub-elementation scheme. The developments are illustrated through the Drucker-Prager elastic-plastic model for the solid skeleton, and various examples are analyzed with the proposed algorithms. (c) 2008 Elsevier B.V. All rights reserved.
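As a generic illustration of the time-stepping scheme named above (implicit backward Euler followed by Newton-Raphson with a consistent tangent), the sketch below solves one step of r(u) = 0; the residual and tangent callables are placeholders, not the paper's poroelastic-plastic operators.

import numpy as np

def newton_step(u_old, dt, residual, tangent, tol=1e-10, max_iter=25):
    u = u_old.copy()                       # initial guess: previous state
    for _ in range(max_iter):
        r = residual(u, u_old, dt)
        if np.linalg.norm(r) < tol:
            break
        K = tangent(u, u_old, dt)          # consistent tangent operator
        u -= np.linalg.solve(K, r)         # Newton update
    return u

# Toy usage: du/dt = -u; backward Euler residual r = u - u_old + dt*u.
res = lambda u, u0, dt: u - u0 + dt * u
tan = lambda u, u0, dt: np.eye(len(u)) * (1 + dt)
u = np.array([1.0])
for _ in range(10):
    u = newton_step(u, 0.1, res, tan)
print(u)                                   # approximates exp(-1) at t = 1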
Abstract:
A new cryptographic hash function Whirlwind is presented. We give the full specification and explain the design rationale. We show how the hash function can be implemented efficiently in software and give first performance numbers. A detailed analysis of the security against state-of-the-art cryptanalysis methods is also provided. In comparison to the algorithms submitted to the SHA-3 competition, Whirlwind takes recent developments in cryptanalysis into account by design. Even though software performance is not outstanding, it compares favourably with the 512-bit versions of SHA-3 candidates such as LANE or the original CubeHash proposal and is about on par with ECHO and MD6.
Abstract:
We present a computational procedure to control an experimental chaotic system by applying the occasional proportional feedback (OPF) method. The implementation uses fuzzy theory to relate the variable correction to the necessary adjustment in the control parameter. As an application, we control the chaotic attractors of the Chua circuit. We present the developed circuits and algorithms to implement this control in real time. To simplify the procedure, we use a low-resolution analog-to-digital converter compensated by a lowpass filter, which facilitates similar applications to control other systems. (C) 2007 Elsevier Ltd. All rights reserved.
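Occasional proportional feedback applies a small, bounded parameter kick only when the sampled state falls inside a window around the target orbit. The sketch below shows that control loop in its simplest proportional form; the fuzzy correction-to-adjustment mapping described in the abstract is replaced by a plain gain, and all names and values are illustrative.

def opf_step(x_sample, param, target, window, gain, max_kick):
    """One OPF decision: act only when the sample is near the target."""
    error = x_sample - target
    if abs(error) < window:                    # only act near the orbit
        kick = max(-max_kick, min(max_kick, gain * error))
        param += kick                          # bounded proportional kick
    return param

# Toy usage with the logistic map standing in for the sampled dynamics.
r, x = 3.9, 0.3
for _ in range(200):
    x = r * x * (1 - x)
    r = opf_step(x, r, target=0.5, window=0.05, gain=0.8, max_kick=0.02)
print(round(r, 3), round(x, 3))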
Abstract:
This paper investigates how to improve action selection for online policy learning in robotic scenarios using reinforcement learning (RL) algorithms. Since finding control policies using any RL algorithm can be very time consuming, we propose to combine RL algorithms with heuristic functions for selecting promising actions during the learning process. With this aim, we investigate the use of heuristics for increasing the rate of convergence of RL algorithms and contribute a new learning algorithm, Heuristically Accelerated Q-learning (HAQL), which incorporates a heuristic function for action selection into the Q-learning algorithm. Experimental results on robot navigation show that the use of even very simple heuristic functions results in significant improvement of the learning rate.
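A common way to realize this idea is to add a weighted heuristic term to the Q-values at selection time only, so the temporal-difference update itself stays plain Q-learning. The sketch below follows that pattern; the weight xi, the heuristic H and the epsilon-greedy wrapper are illustrative, not the paper's exact formulation.

import random
from collections import defaultdict

Q = defaultdict(float)

def H(state, action):
    # Illustrative heuristic: e.g. prefer the action pointing at the goal.
    return 1.0 if action == "toward_goal" else 0.0

def select_action(state, actions, xi=1.0, eps=0.1):
    if random.random() < eps:
        return random.choice(actions)          # exploration
    # Heuristic biases selection only; Q itself is untouched here.
    return max(actions, key=lambda a: Q[(state, a)] + xi * H(state, a))

def q_update(s, a, reward, s_next, actions, alpha=0.1, gamma=0.99):
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])

actions = ["toward_goal", "away_from_goal"]
a = select_action("s0", actions)
q_update("s0", a, reward=1.0, s_next="s1", actions=actions)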
Abstract:
Ecological niche modelling combines species occurrence points with environmental raster layers in order to obtain models for describing the probabilistic distribution of species. The process to generate an ecological niche model is complex. It requires dealing with a large amount of data and using different software packages for data conversion, model generation and different types of processing and analysis, among other tasks. A software platform that integrates all requirements under a single and seamless interface would be very helpful for users. Furthermore, since biodiversity modelling is constantly evolving, new requirements are constantly being added in terms of functions, algorithms and data formats, and any software intended for use in this area must keep pace with this evolution. In this scenario, a Service-Oriented Architecture (SOA) is an appropriate choice for designing such systems. According to SOA best practices and methodologies, the design of a reference business process must be performed prior to the architecture definition. The purpose is to understand the complexities of the process (business process in this context refers to the ecological niche modelling problem) and to design an architecture able to offer a comprehensive solution, called a reference architecture, that can be further detailed when implementing specific systems. This paper presents a reference business process for ecological niche modelling, as part of a larger effort focused on the definition of a reference architecture based on SOA concepts that will be used to evolve the openModeller software package for species modelling. The basic steps that are performed while developing a model are described, highlighting important aspects, based on the knowledge of modelling experts. In order to illustrate the steps defined for the process, an experiment was developed, modelling the distribution of Ouratea spectabilis (Mart.) Engl. (Ochnaceae) using openModeller. As a consequence of the knowledge gained with this work, many desirable improvements to the modelling software packages have been identified and are presented. Also, a discussion on the potential for large-scale experimentation in ecological niche modelling is provided, highlighting opportunities for research. The results obtained are very important for those involved in the development of modelling tools and systems, for requirement analysis and for providing insight into new features and trends for this category of systems. They can also be very helpful for beginners in modelling research, who can use the process and the experiment example as a guide to this complex activity. (c) 2008 Elsevier B.V. All rights reserved.
Abstract:
This paper presents an Adaptive Maximum Entropy (AME) approach for modeling biological species. The Maximum Entropy algorithm (MaxEnt) is one of the most widely used methods for modeling the geographical distribution of biological species. The approach presented here is an alternative to the classical algorithm. Instead of using the same set of features throughout the training, the AME approach tries to insert or remove a single feature at each iteration. The aim is to reach convergence faster without affecting the performance of the generated models. The preliminary experiments performed well, showing gains both in accuracy and in execution time. Comparisons with other algorithms are beyond the scope of this paper. Several lines of research are proposed as future work.
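One generic way to realize an insert-or-remove step is a greedy wrapper around the trainer: at each iteration, score one-feature perturbations of the current feature set on held-out data and keep the best. The sketch below is illustrative only; the fit and score callables are placeholders, not MaxEnt internals.

def adaptive_features(all_features, fit, score, max_iters=50):
    """Greedily toggle one feature per iteration while the score improves."""
    active = set(all_features[:1])               # start from a single feature
    best = score(fit(active))
    for _ in range(max_iters):
        candidates = []
        for f in all_features:
            trial = active ^ {f}                 # insert f if absent, else remove
            if trial:
                candidates.append((score(fit(trial)), trial))
        top_score, top_set = max(candidates, key=lambda t: t[0])
        if top_score <= best:                    # no single change helps: stop
            break
        best, active = top_score, top_set
    return active

# Toy usage: reward overlap with a hidden "useful" set, penalize size.
target = {"temp", "precip"}
best = adaptive_features(["temp", "precip", "slope"],
                         fit=lambda feats: feats,
                         score=lambda m: len(m & target) - 0.1 * len(m))
print(best)                                      # {'temp', 'precip'}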
Abstract:
This paper presents a new methodology to estimate unbalanced harmonic distortions in a power system, based on measurements at a limited number of given sites. The algorithm utilizes evolutionary strategies (ES), a development branch of evolutionary algorithms. The problem-solving algorithm proposed herein makes use of data from various power quality meters, which can either be synchronized by high-technology GPS devices or by using information from a fundamental frequency load flow, which makes the overall power quality monitoring system much less costly. The ES-based harmonic estimation model is applied to a 14-bus network to compare its performance to a conventional Monte Carlo approach. It is also applied to a 50-bus subtransmission network in order to compare the three-phase and single-phase approaches and to assess the robustness of the proposed method. (C) 2010 Elsevier B.V. All rights reserved.
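In outline, an ES-based estimator of this kind treats the unmeasured harmonic injections as the decision vector and minimizes the mismatch between computed and measured quantities at the metered buses. Below is a minimal (mu + lambda) evolution strategy under those assumptions; the mismatch objective is a placeholder, where a real estimator would run a harmonic power flow.

import numpy as np

rng = np.random.default_rng(0)

def es_estimate(mismatch, dim, mu=5, lam=20, sigma=0.5, gens=100):
    pop = rng.normal(size=(mu, dim))             # initial parent population
    for _ in range(gens):
        parents = pop[rng.integers(0, mu, size=lam)]
        children = parents + sigma * rng.normal(size=(lam, dim))
        union = np.vstack([pop, children])
        fitness = np.array([mismatch(x) for x in union])
        pop = union[np.argsort(fitness)[:mu]]    # (mu + lambda) selection
        sigma *= 0.99                            # simple step-size decay
    return pop[0]

# Toy usage: recover a hidden injection vector from a quadratic mismatch.
hidden = np.array([1.0, -2.0, 0.5])
best = es_estimate(lambda x: np.sum((x - hidden) ** 2), dim=3)
print(np.round(best, 2))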
Abstract:
This research presents the development of fault-location algorithms for multiterminal transmission lines and their implementation in a computational routine. These algorithms are part of a fault-location system capable of correctly identifying the fault point based on voltage and current phasor quantities, calculated using measurements of voltage and current signals from intelligent electronic devices located on the transmission-line terminals. The algorithms have access to the electrical parameters of the transmission lines and to information about the transformers' loading and connection type. This paper also presents the development of phase-component models for the power system elements used by the fault-location algorithms.
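Although the paper targets multiterminal lines, the underlying phasor principle can be shown on the classical two-terminal case: with synchronized voltage and current phasors at both ends of a line of series impedance Z, the fault-point voltage computed from either end must agree, which gives the fault distance in closed form. A hedged sketch, assuming a simple lumped-impedance line and illustrative values:

# Classical synchronized two-terminal fault location:
#   V_S - m*Z*I_S = V_R - (1 - m)*Z*I_R,  solved for the per-unit distance m.

def fault_distance(V_S, I_S, V_R, I_R, Z):
    return (V_S - V_R + Z * I_R) / (Z * (I_S + I_R))

# Illustrative phasors (complex numbers), not field data.
Z = 0.1 + 0.8j                       # total line series impedance (pu)
V_S, I_S = 1.00 + 0.00j, 1.2 - 0.5j
V_R, I_R = 0.93 - 0.05j, 0.8 - 0.3j
m = fault_distance(V_S, I_S, V_R, I_R, Z)
print(f"fault at {m.real:.2f} pu from terminal S")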
Abstract:
This paper presents a new methodology to estimate harmonic distortions in a power system, based on measurements at a limited number of given sites. The algorithm utilizes evolutionary strategies (ES), a development branch of evolutionary algorithms. The main advantage of using such a technique lies in its modeling flexibility as well as its potential to solve fairly complex problems. The problem-solving algorithm proposed herein makes use of data from various power-quality (PQ) meters, which can either be synchronized by high-technology global positioning system devices or by using information from a fundamental frequency load flow. This second approach makes the overall PQ monitoring system much less costly. The algorithm is applied to an IEEE test network, for which a sensitivity analysis is performed to determine how the parameters of the ES can be selected so that the algorithm performs effectively. Case studies show fairly promising results and demonstrate the robustness of the proposed method.
Abstract:
An improvement to a quality two-dimensional Delaunay mesh generation algorithm, which combines the mesh refinement strategies of Ruppert and Shewchuk, is proposed in this research. The developed technique uses the diametral lens criterion, introduced by L. P. Chew, with the purpose of eliminating the extremely obtuse triangles in the boundary mesh. This method splits the boundary segments and obtains an initial pre-refinement, thus reducing the number of iterations needed to generate a high-quality sequential triangulation. Moreover, it decreases the intensity of communication and synchronization between subdomains in parallel mesh refinement.
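The diametral lens can be described as the set of points from which a segment subtends more than a fixed angle: a 90-degree threshold recovers the diametral circle, and larger thresholds give the narrower lens of Chew's criterion. A small sketch of that encroachment test, with the 120-degree threshold taken as illustrative:

import math

def subtended_angle(a, b, p):
    """Angle (degrees) subtended at p by the segment from a to b."""
    ux, uy = a[0] - p[0], a[1] - p[1]
    vx, vy = b[0] - p[0], b[1] - p[1]
    dot = ux * vx + uy * vy
    norm = math.hypot(ux, uy) * math.hypot(vx, vy)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def encroaches(a, b, p, threshold_deg=120.0):
    # 90 deg: diametral circle; larger values: diametral lens.
    return subtended_angle(a, b, p) > threshold_deg

print(encroaches((0, 0), (1, 0), (0.5, 0.05)))   # deep inside the lens: True
print(encroaches((0, 0), (1, 0), (0.5, 0.6)))    # outside the lens: False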
Abstract:
Most post-processors for boundary element (BE) analysis use an auxiliary domain mesh to display domain results, which works against the modelling advantage of a pure boundary discretization. This paper introduces a novel visualization technique which preserves the basic properties of the boundary element methods. The proposed algorithm does not require any domain discretization and is based on the direct and automatic identification of isolines. Another critical aspect of the visualization of domain results in BE analysis is the effort required to evaluate results at interior points. In order to tackle this issue, the present article also provides a comparison between the performance of two different BE formulations (conventional and hybrid). In addition, this paper presents an overview of the most common post-processing and visualization techniques in BE analysis, such as the classical scan-line algorithm and interpolation over a domain discretization. The results presented herein show that the proposed algorithm offers very high performance compared with other visualization procedures.
Abstract:
This article presents a tool for allocation analysis of complex water resource systems, called AcquaNetXL, developed as a spreadsheet into which a linear optimization model and a nonlinear one were incorporated. AcquaNetXL keeps the concepts and attributes of a decision support system: it streamlines communication between the user and the computer, facilitates the understanding and formulation of the problem and the interpretation of the results, and supports decision making, turning it into a clear and organized process. The performance of the algorithms used for solving the water allocation problems was satisfactory, especially for the linear model.
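As a generic illustration of the linear-allocation core such a tool embeds, the toy model below allocates a limited supply between two demand points while maximizing total delivered water. SciPy's linprog minimizes, hence the negated objective; all numbers are illustrative, not from AcquaNetXL.

from scipy.optimize import linprog

c = [-1.0, -1.0]                  # maximize x1 + x2 (negated for linprog)
A_ub = [[1.0, 1.0]]               # x1 + x2 <= 100 (available supply)
b_ub = [100.0]
bounds = [(0, 60), (0, 70)]       # each delivery capped by its demand

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x)                      # an allocation that uses the full supply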
Abstract:
Although the Hertz theory is not applicable in the analysis of the indentation of elastic-plastic materials, it is common practice to incorporate the concept of indenter/specimen combined modulus to consider indenter deformation. Here, the appropriateness of using the reduced modulus to incorporate the effect of indenter deformation in the analysis of indentation with spherical indenters was assessed. The analysis, based on finite element simulations, considered four values of the ratio of the elastic modulus of the indented material to that of the diamond indenter, E/E(i) (0, 0.04, 0.19, 0.39), four values of the ratio of the reduced elastic modulus to the initial yield strength, E(r)/Y (0, 10, 20, 100), and two values of the ratio of the indenter radius to the maximum total displacement, R/delta(max) (3, 10). Indenter deformation effects are better accounted for by the reduced modulus if the indented material behaves entirely elastically. In this case, identical load-displacement (P - delta) curves are obtained with rigid and elastic spherical indenters for the same reduced elastic modulus. Changes in the ratio E/E(i), from 0 to 0.39, resulted in variations below 5% in the dimensionless load functions, below 3% in the contact area, A(c), and below 5% in the ratio H/E(r). However, deformation of the elastic indenter caused the actual contact radius to change, even in the indentation of elastic materials. Even though the dimensionless load functions showed only a small increase with the ratio E/E(i), the hardening coefficient and the yield strength could be slightly overestimated when algorithms based on rigid indenters are used. For the unloading curves, the ratio delta(e)/delta(max), where delta(e) is the zero-load intercept of a straight line with slope S drawn from the point (P(max), delta(max)), varied by less than 5% with the ratio E/E(i). Similarly, the relationship between the reduced modulus and the unloading indentation curve, expressed by Sneddon's equation, did not reveal any need for correction with the ratio E/E(i). The parameter of the indentation curve most affected by indenter deformation was the ratio between the residual indentation depth after complete unloading and the maximum indenter displacement, delta(r)/delta(max) (up to 26%), but this variation did not significantly decrease the capability to estimate hardness and elastic modulus based on the ratio of the residual indentation depth to the maximum indentation depth, h(r)/h(max). In general, the results confirm the convenience of using the reduced modulus in spherical instrumented indentation tests.
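For reference, the two standard contact-mechanics relations the abstract leans on are the definition of the reduced (combined) modulus and Sneddon's stiffness equation linking it to the unloading slope S and the contact area A(c); in LaTeX notation, with \nu and \nu_i the Poisson ratios of the specimen and indenter:

\frac{1}{E_r} = \frac{1 - \nu^2}{E} + \frac{1 - \nu_i^2}{E_i},
\qquad
S = \left.\frac{dP}{d\delta}\right|_{P_{\max}} = \frac{2}{\sqrt{\pi}}\, E_r \sqrt{A_c}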
Abstract:
This work deals with the problem of minimizing the waste of space that occurs in the rotational placement of a set of irregular two-dimensional polygons inside a two-dimensional container. This problem is approached with a heuristic based on simulated annealing. Traditional "external penalization" techniques are avoided through the application of the no-fit polygon, which determines the collision-free area for each polygon before its placement. The simulated annealing controls the rotation applied, the placement and the sequence of placement of the polygons. For each non-placed polygon, a limited-depth binary search is performed to find a scale factor that, when applied to the polygon, would allow it to be fitted in the container. A crystallization heuristic is proposed in order to increase the number of accepted solutions. The bottom-left and larger-first deterministic heuristics were also studied. The proposed process is suited for non-convex polygons and containers; the containers can have holes inside. (C) 2009 Elsevier Ltd. All rights reserved.
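The simulated-annealing outer loop behind such a placement heuristic is standard: propose a perturbed layout (a new rotation, position or placement order), accept it if better, and accept worse layouts with probability exp(-delta/T) under a cooling schedule. A minimal sketch under those assumptions, where waste and perturb are placeholders for the no-fit-polygon machinery described above:

import math, random

def simulated_annealing(layout, waste, perturb,
                        t0=1.0, t_min=1e-3, cooling=0.95, moves_per_t=100):
    """Generic annealing loop: waste scores unused space, perturb moves."""
    best, best_cost = layout, waste(layout)
    cost, t = best_cost, t0
    while t > t_min:
        for _ in range(moves_per_t):
            candidate = perturb(layout)
            delta = waste(candidate) - cost
            if delta < 0 or random.random() < math.exp(-delta / t):
                layout, cost = candidate, cost + delta   # accept the move
                if cost < best_cost:
                    best, best_cost = layout, cost
        t *= cooling                     # geometric cooling schedule
    return best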