917 results for gravitational search algorithm
Abstract:
During the past few decades, practitioners and researchers have focused significant attention on developing efficient methods for solving dynamic facility layout problems. In particular, meta-heuristic algorithms, especially the genetic algorithm, have proven increasingly helpful in generating sub-optimal solutions for large-scale dynamic facility layout problems. Nevertheless, the uncertainty of the manufacturing factors, in addition to the scale of the layout problem, calls for a combined genetic algorithm–robust approach that can provide a single layout design not tied to any one period. The present research devises a customized permutation-based robust genetic algorithm for dynamic manufacturing environments that is expected to generate a single robust layout for all manufacturing periods. The numerical outcomes of the proposed robust genetic algorithm indicate significant cost improvements compared to conventional genetic algorithm methods and a selection of other heuristic and meta-heuristic techniques.
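A minimal sketch of the permutation-based robust GA idea described above, in Python: a single facility-to-location permutation is scored against the flow matrices of every period at once, so the search returns one layout for the whole planning horizon. The crossover, mutation and parameter values are illustrative assumptions, not the authors' operators.

```python
import random

def robust_cost(perm, flows_by_period, dist):
    """Cost of one fixed layout across all periods; perm[i] is the location of facility i."""
    n = len(perm)
    total = 0.0
    for flow in flows_by_period:              # the same layout is scored against every period
        for i in range(n):
            for j in range(n):
                total += flow[i][j] * dist[perm[i]][perm[j]]
    return total

def order_crossover(p1, p2):
    """Order-preserving crossover that keeps each child a valid permutation."""
    n = len(p1)
    a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b] = p1[a:b]
    fill = [g for g in p2 if g not in child]
    for i in range(n):
        if child[i] is None:
            child[i] = fill.pop(0)
    return child

def robust_ga(flows_by_period, dist, pop_size=60, generations=300, pm=0.2):
    """Return one permutation layout intended to serve every manufacturing period."""
    n = len(dist)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: robust_cost(p, flows_by_period, dist))
        elite = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            c = order_crossover(*random.sample(elite, 2))
            if random.random() < pm:          # swap mutation
                i, j = random.sample(range(n), 2)
                c[i], c[j] = c[j], c[i]
            children.append(c)
        pop = elite + children
    return min(pop, key=lambda p: robust_cost(p, flows_by_period, dist))
```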
Abstract:
The purpose of this study is to analyze and develop various forms of abduction as a means of conceptualizing processes of discovery. Abduction was originally presented by Charles S. Peirce (1839-1914) as a "weak", third main mode of inference -- besides deduction and induction -- one which, he proposed, is closely related to many kinds of cognitive processes, such as instincts, perception, practices and mediated activity in general. Both abduction and discovery are controversial issues in philosophy of science. It is often claimed that discovery cannot be a proper subject area for conceptual analysis and, accordingly, abduction cannot serve as a "logic of discovery". I argue, however, that abduction gives essential means for understanding processes of discovery although it cannot give rise to a manual or algorithm for making discoveries. In the first part of the study, I briefly present how the main trend in philosophy of science has, for a long time, been critical towards a systematic account of discovery. Various models have, however, been suggested. I outline a short history of abduction; first Peirce's evolving forms of his theory, and then later developments. Although abduction has not been a major area of research until quite recently, I review some critiques of it and look at the ways it has been analyzed, developed and used in various fields of research. Peirce's own writings and later developments, I argue, leave room for various subsequent interpretations of abduction. The second part of the study consists of six research articles. First I treat "classical" arguments against abduction as a logic of discovery. I show that by developing strategic aspects of abductive inference these arguments can be countered. Nowadays the term 'abduction' is often used as a synonym for the Inference to the Best Explanation (IBE) model. I argue, however, that it is useful to distinguish between IBE ("Harmanian abduction") and "Hansonian abduction"; the latter concentrating on analyzing processes of discovery. The distinctions between loveliness and likeliness, and between potential and actual explanations are more fruitful within Hansonian abduction. I clarify the nature of abduction by using Peirce's distinction between three areas of "semeiotic": grammar, critic, and methodeutic. Grammar (emphasizing "Firstnesses" and iconicity) and methodeutic (i.e., a processual approach) especially, give new means for understanding abduction. Peirce himself held a controversial view that new abductive ideas are products of an instinct and an inference at the same time. I maintain that it is beneficial to make a clear distinction between abductive inference and abductive instinct, on the basis of which both can be developed further. Besides these, I analyze abduction as a part of distributed cognition which emphasizes a long-term interaction with the material, social and cultural environment as a source for abductive ideas. This approach suggests a "trialogical" model in which inquirers are fundamentally connected both to other inquirers and to the objects of inquiry. As for the classical Meno paradox about discovery, I show that abduction provides more than one answer. As my main example of abductive methodology, I analyze the process of Ignaz Semmelweis' research on childbed fever. A central basis for abduction is the claim that discovery is not a sequence of events governed only by processes of chance. 
Abduction concerns those processes which both constrain and instigate the search for new ideas, starting from the use of clues in discovery and continuing through considerations like elegance and 'loveliness'. The study then continues a Peircean-Hansonian research programme by developing abduction as a way of analyzing processes of discovery.
Abstract:
An on-line algorithm is developed for the location of single cross point faults in a PLA (FPLA). The main feature of the algorithm is the determination of a fault set corresponding to the response obtained for a failed test. For the apparently small number of faults in this set, all other tests are generated and a fault table is formed. Subsequently, an adaptive procedure is used to diagnose the fault. A functional equivalence test is carried out to determine the actual fault class if the adaptive testing results in a set of faults with identical tests. The large amounts of computation time and storage required to determine, a priori, all the fault equivalence classes or to construct a fault dictionary are not needed here. A brief study of functional equivalence among the cross point faults is also made.
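A minimal sketch of the adaptive-diagnosis step, assuming the fault table has already been built as a mapping from each candidate fault to the tests it fails; the splitting heuristic, the `apply_test` callable and the data layout are illustrative assumptions rather than the paper's procedure.

```python
def adaptive_diagnosis(candidates, fault_table, apply_test):
    """Narrow a candidate fault set one test at a time.

    candidates:  iterable of candidate fault identifiers
    fault_table: dict mapping fault -> set of tests that the fault causes to fail
    apply_test:  callable(test) -> True if the circuit under diagnosis fails the test
    """
    remaining = set(candidates)
    tests = set().union(*(fault_table[f] for f in remaining))
    while len(remaining) > 1 and tests:
        # pick the test whose pass/fail outcome splits the remaining set most evenly
        best = min(tests, key=lambda t: abs(
            sum(t in fault_table[f] for f in remaining) - len(remaining) / 2))
        tests.discard(best)
        failed = apply_test(best)
        remaining = {f for f in remaining if (best in fault_table[f]) == failed}
    return remaining   # any faults left are test-equivalent; a functional check decides
```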
Abstract:
The Reeb graph tracks topology changes in level sets of a scalar function and finds applications in scientific visualization and geometric modeling. We describe an algorithm that constructs the Reeb graph of a Morse function defined on a 3-manifold. Our algorithm maintains connected components of the two-dimensional level sets as a dynamic graph and constructs the Reeb graph in O(n log n + n log g (log log g)^3) time, where n is the number of triangles in the tetrahedral mesh representing the 3-manifold and g is the maximum genus over all level sets of the function. We extend this algorithm to construct Reeb graphs of d-manifolds in O(n log n (log log n)^3) time, where n is the number of triangles in the simplicial complex that represents the d-manifold. Our result is a significant improvement over the previously known O(n^2) algorithm. Finally, we present experimental results of our implementation and demonstrate that our algorithm for 3-manifolds performs efficiently in practice.
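As a rough illustration of maintaining level-set components with a dynamic structure, the sketch below records only the merge events of sublevel sets using a union-find sweep (a join tree), which is a simplified relative of the Reeb graph and not the paper's O(n log n + n log g (log log g)^3) algorithm.

```python
def join_tree_events(values, edges):
    """Union-find sweep recording how sublevel-set components merge.

    values: dict mapping vertex -> scalar function value
    edges:  iterable of (u, v) mesh edges
    Returns a list of (merge_vertex, component_root_a, component_root_b) events.
    """
    parent = {}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]      # path halving
            x = parent[x]
        return x

    adj = {v: [] for v in values}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    events = []
    for v in sorted(values, key=values.get):   # sweep vertices by increasing value
        parent[v] = v
        for u in adj[v]:
            if u in parent:                    # neighbour already swept
                ru, rv = find(u), find(v)
                if ru != rv:                   # two components meet at v
                    events.append((v, ru, rv))
                    parent[ru] = rv
    return events
```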
Abstract:
Models that implement the bio-physical components of agro-ecosystems are ideally suited for exploring sustainability issues in cropping systems. Sustainability may be represented as a number of objectives to be maximised or minimised. However, the full decision space of these objectives is usually very large and simplifications are necessary to safeguard computational feasibility. Different optimisation approaches have been proposed in the literature, usually based on mathematical programming techniques. Here, we present a search approach based on a multiobjective evaluation technique within an evolutionary algorithm (EA), linked to the APSIM cropping systems model. A simple case study addressing crop choice and sowing rules in North-East Australian cropping systems is used to illustrate the methodology. Sustainability of these systems is evaluated in terms of economic performance and resource use. Due to the limited size of this sample problem, the quality of the EA optimisation can be assessed by comparison to the full problem domain. Results demonstrate that the EA procedure, parameterised with generic parameters from the literature, converges to a useable solution set within a reasonable amount of time. Frontier "peels" or Pareto-optimal solutions as described by the multiobjective evaluation procedure provide useful information for discussion on trade-offs between conflicting objectives.
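A minimal sketch of how the frontier "peels" can be extracted once each candidate cropping system has been simulated and scored, assuming all objectives are expressed as quantities to be maximised (minimised ones negated); the objective values below are invented for illustration, and the APSIM coupling and EA operators are omitted.

```python
def dominates(a, b):
    """a dominates b if it is no worse on every objective and strictly better on one.
    All objectives are assumed to be maximised; negate any minimised ones first."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_peels(candidates):
    """Split candidate systems into successive non-dominated fronts ('peels')."""
    remaining = list(candidates)
    peels = []
    while remaining:
        front = [c for c in remaining
                 if not any(dominates(o, c) for o in remaining if o is not c)]
        peels.append(front)
        remaining = [c for c in remaining if c not in front]
    return peels

# invented scores: (gross margin in $/ha, negated water use in mm) per simulated rotation
systems = [(420, -310), (380, -250), (450, -340), (400, -260)]
print(pareto_peels(systems))
```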
Abstract:
In this paper we study two problems in feedback stabilization. The first is the simultaneous stabilization problem, which can be stated as follows. Given plants G_0, G_1, ..., G_l, does there exist a single compensator C that stabilizes all of them? The second is that of stabilization by a stable compensator, or more generally, a "least unstable" compensator. Given a plant G, we would like to know whether or not there exists a stable compensator C that stabilizes G; if not, what is the smallest number of right half-plane poles (counted according to their McMillan degree) that any stabilizing compensator must have? We show that the two problems are equivalent in the following sense. The problem of simultaneously stabilizing l + 1 plants can be reduced to the problem of simultaneously stabilizing l plants using a stable compensator, which in turn can be stated as the following purely algebraic problem. Given 2l matrices A_1, ..., A_l, B_1, ..., B_l, where A_i, B_i are right-coprime for all i, does there exist a matrix M such that A_i + M B_i is unimodular for all i? Conversely, the problem of simultaneously stabilizing l plants using a stable compensator can be formulated as one of simultaneously stabilizing l + 1 plants. The problem of determining whether or not there exists an M such that A + BM is unimodular, given a right-coprime pair (A, B), turns out to be a special case of a question concerning a matrix division algorithm in a proper Euclidean domain. We give an answer to this question, and we believe this result might be of some independent interest. We show that, given two n x m plants G_0 and G_1, we can generically stabilize them simultaneously provided either n or m is greater than one. In contrast, simultaneous stabilizability of two single-input-single-output plants, g_0 and g_1, is not generic.
Abstract:
Systems of learning automata have been studied by various researchers to evolve useful strategies for decision making under uncertainty. Considered in this paper is a class of hierarchical systems of learning automata where the system gets responses from its environment at each level of the hierarchy. A classification of such sequential learning tasks based on the complexity of the learning problem is presented. It is shown that none of the existing algorithms can handle the most general type of hierarchical problem. An algorithm for learning the globally optimal path in this general setting is presented, and its convergence is established. This algorithm needs information transfer from the lower levels to the higher levels. Using the methodology of estimator algorithms, this model can be generalized to accommodate other kinds of hierarchical learning tasks.
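For orientation, a minimal sketch of a single linear reward-inaction (L_R-I) automaton; a hierarchical system stacks units like this, with the action chosen at one level selecting which automaton acts at the next. The class, learning rate and reward coding are illustrative assumptions, not the paper's estimator-based algorithm.

```python
import random

class LearningAutomaton:
    """Linear reward-inaction (L_R-I) automaton over a fixed action set."""

    def __init__(self, n_actions, learning_rate=0.05):
        self.p = [1.0 / n_actions] * n_actions   # action probability vector
        self.a = learning_rate

    def choose(self):
        """Sample an action index according to the current probabilities."""
        r, acc = random.random(), 0.0
        for i, pi in enumerate(self.p):
            acc += pi
            if r <= acc:
                return i
        return len(self.p) - 1

    def update(self, action, reward):
        """reward in {0, 1}: shift probability mass toward a rewarded action."""
        if reward:
            for i in range(len(self.p)):
                if i == action:
                    self.p[i] += self.a * (1.0 - self.p[i])
                else:
                    self.p[i] *= (1.0 - self.a)
        # on penalty (reward == 0), L_R-I leaves the probabilities unchanged
```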
Abstract:
This paper considers the applicability of the least mean fourth (LMF) power gradient adaptation criterion, with 'advantage' for signals associated with Gaussian noise, the associated noise power estimate not being known. The proposed method, as an adaptive spectral estimator, is found to provide superior performance to the least mean square (LMS) adaptation for the same (or even lower) speed of convergence for signals having a sufficiently high signal-to-Gaussian-noise ratio. The results include a comparison of the performance of the LMS-tapped delay line, LMF-tapped delay line, LMS-lattice and LMF-lattice algorithms, with Burg's block data method as reference. Signals such as sinusoids with noise, and stochastic signals such as EEG, are considered in this study.
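A minimal sketch contrasting the two tapped-delay-line updates the paper compares: the LMS update moves the weights along e*x, while the LMF update, which minimises the mean fourth power of the error, uses e^3*x. The step sizes, filter length and toy identification signal are illustrative assumptions.

```python
import numpy as np

def adapt(x, d, taps=8, mu=1e-3, order=2):
    """Tapped-delay-line adaptive filter.

    order=2: LMS update, w += mu * e    * x_vec  (minimises mean square error)
    order=4: LMF update, w += mu * e**3 * x_vec  (minimises mean fourth error)
    """
    w = np.zeros(taps)
    errors = np.empty(len(x))
    for n in range(len(x)):
        x_vec = np.array([x[n - k] if n - k >= 0 else 0.0 for k in range(taps)])
        e = d[n] - w @ x_vec
        w += mu * (e if order == 2 else e ** 3) * x_vec
        errors[n] = e
    return w, errors

# toy usage: identify a short channel buried in Gaussian noise
rng = np.random.default_rng(0)
x = rng.standard_normal(4000)
d = 0.7 * x + 0.3 * np.roll(x, 1) + 0.05 * rng.standard_normal(4000)
w_lms, _ = adapt(x, d, mu=1e-3, order=2)
w_lmf, _ = adapt(x, d, mu=1e-4, order=4)
```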
Abstract:
A generalized pulse pair has been suggested in which the longitudinal spin order is retained and the transverse components cancelled by random variation of the interval between pulses, in successive applications of the two-dimensional NMR algorithm. This method leads to pure phases and has been exploited to provide a simpler scheme for two-spin filtering and for pure phase spectroscopy in multiple-quantum-filtered two-dimensional NMR experiments.
Abstract:
A fuzzy logic based centralized control algorithm for irrigation canals is presented. The purpose of the algorithm is to control the downstream discharge and the water level of pools in the canal by adjusting the discharge release from the upstream end and the gate settings. The algorithm is based on inversion in space of the dynamic wave model (Saint-Venant equations), wherein the momentum equation is replaced by a fuzzy rule based model while the continuity equation is retained in its complete form. The fuzzy rule based model is developed by fuzzification of a new mathematical model for wave velocity, the derivational details of which are given. The advantages of the fuzzy control algorithm over other conventional control algorithms are described: it is transparent and intuitive, and no linearizations of the governing equations are involved. The timing of the algorithm and the method of computation are explained. It is shown that the tuning is easy and the computations are straightforward. The algorithm provides stable, realistic and robust outputs. The disadvantage of the algorithm is reduced precision in its outputs due to the approximation inherent in fuzzy logic. Feedback control logic is adopted to eliminate error caused by system disturbances as well as error caused by the reduced precision in the outputs. The algorithm is tested by applying it to a water-level control problem in a fictitious canal with a single pool and also in a real canal with a series of pools. It is found that the results obtained from the algorithm are comparable to those obtained from conventional control algorithms.
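A minimal sketch of the kind of fuzzy rule base involved: triangular membership functions over a water-level error and a weighted-average defuzzification to a gate adjustment. The labels, breakpoints and consequents are invented for illustration and are not the paper's calibrated wave-velocity model.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def gate_adjustment(level_error):
    """Map a water-level error (target minus measured, in m) to a gate-opening change (m)."""
    rules = [                                           # (firing degree, consequent)
        (tri(level_error, -0.60, -0.30, 0.00), -0.10),  # level too high -> close gate
        (tri(level_error, -0.30,  0.00, 0.30),  0.00),  # near target    -> hold
        (tri(level_error,  0.00,  0.30, 0.60),  0.10),  # level too low  -> open gate
    ]
    num = sum(w * u for w, u in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 0.0                # weighted-average defuzzification

print(gate_adjustment(0.15))   # small positive error -> small opening increase
```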
Abstract:
Artifacts in the form of cross peaks have been observed along two- and three-quantum diagonals in single-quantum two-dimensional correlated (COSY) spectra of several peptides and oligonucleotides. These have been identified as due to the presence of a non-equilibrium state of kind I (a state describable by populations which differ from equilibrium) of strongly coupled spins carried over from one experiment to the next in the COSY algorithm.
Abstract:
In visual search one tries to find the currently relevant item among other, irrelevant items. In the present study, visual search performance for complex objects (characters, faces, computer icons and words) was investigated, together with the contribution of different stimulus properties such as luminance contrast between characters and background, set size, stimulus size, colour contrast, spatial frequency, and stimulus layout. Subjects were required to search for a target object among distracter objects in two-dimensional stimulus arrays. The outcome measure was threshold search time, that is, the presentation duration of the stimulus array required by the subject to find the target with a certain probability. It reflects the time used for visual processing separated from the time used for decision making and manual reactions. The duration of stimulus presentation was controlled by an adaptive staircase method. The number and duration of eye fixations, saccade amplitude, and perceptual span, i.e., the number of items that can be processed during a single fixation, were measured. It was found that search performance was correlated with the number of fixations needed to find the target. Search time and the number of fixations increased with increasing stimulus set size. On the other hand, several complex objects could be processed during a single fixation, i.e., within the perceptual span. Search time and the number of fixations depended on object type as well as luminance contrast. The size of the perceptual span was smaller for more complex objects, and decreased with decreasing luminance contrast within object type, especially for very low contrasts. In addition, the size and shape of perceptual span explained the changes in search performance for different stimulus layouts in word search. Perceptual span was scale invariant for a 16-fold range of stimulus sizes, i.e., the number of items processed during a single fixation was independent of retinal stimulus size or viewing distance. It is suggested that saccadic visual search consists of both serial (eye movements) and parallel (processing within perceptual span) components, and that the size of the perceptual span may explain the effectiveness of saccadic search in different stimulus conditions. Further, low-level visual factors, such as the anatomical structure of the retina, peripheral stimulus visibility and resolution requirements for the identification of different object types are proposed to constrain the size of the perceptual span, and thus, limit visual search performance. Similar methods were used in a clinical study to characterise the visual search performance and eye movements of neurological patients with chronic solvent-induced encephalopathy (CSE). In addition, the data about the effects of different stimulus properties on visual search in normal subjects were presented as simple practical guidelines, so that the limits of human visual perception could be taken into account in the design of user interfaces.
Abstract:
This report describes the development and simulation of a variable rate controller for a 6-degree-of-freedom nonlinear model. The variable rate simulation model represents an off-the-shelf autopilot. Flight experiments involve risk and can be expensive; therefore, a dynamic model is important for understanding the performance characteristics of the UAS in mission simulation before actual flight tests, and for obtaining parameters needed for flight. The control and guidance are implemented in Simulink. The report tests the use of the model for air search and air sampling path planning. A GUI is presented for a set of mission scenarios in which two experts (a mission expert, i.e. air sampling or air search, and a UAV expert) interact, showing the benefits of the method.
Abstract:
A computational algorithm (based on Smullyan's analytic tableau method) that verifies whether a given well-formed formula in propositional calculus is a tautology or not has been implemented on a DEC system 10. The stepwise refinement approach to program development used for this implementation forms the subject matter of this paper. The top-down design has resulted in a modular and reliable program package. This computational algorithm compares favourably with the algorithm based on the well-known resolution principle used in theorem provers.
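A minimal sketch of the tableau idea in Python rather than the paper's DEC-10 implementation: the formula's negation is expanded branch by branch, and the formula is a tautology exactly when every branch closes on a contradictory pair of literals. The tuple-based formula encoding is an illustrative assumption.

```python
def neg(f):
    return ('not', f)

def closes(todo, lits):
    """True if every extension of this tableau branch ends in a contradiction."""
    if not todo:
        return False                                   # an open branch survives
    f, rest = todo[0], todo[1:]
    if f[0] == 'var':
        return True if neg(f) in lits else closes(rest, lits | {f})
    if f[0] == 'not':
        g = f[1]
        if g[0] == 'var':
            return True if g in lits else closes(rest, lits | {f})
        if g[0] == 'not':                              # double negation
            return closes([g[1]] + rest, lits)
        if g[0] == 'and':                              # not(A and B): branch
            return closes([neg(g[1])] + rest, lits) and closes([neg(g[2])] + rest, lits)
        if g[0] == 'or':                               # not(A or B): both on one branch
            return closes([neg(g[1]), neg(g[2])] + rest, lits)
        if g[0] == 'implies':                          # not(A -> B)
            return closes([g[1], neg(g[2])] + rest, lits)
    if f[0] == 'and':
        return closes([f[1], f[2]] + rest, lits)
    if f[0] == 'or':                                   # branch
        return closes([f[1]] + rest, lits) and closes([f[2]] + rest, lits)
    if f[0] == 'implies':                              # branch on not-A | B
        return closes([neg(f[1])] + rest, lits) and closes([f[2]] + rest, lits)

def is_tautology(formula):
    """A formula is a tautology iff the tableau for its negation closes."""
    return closes([neg(formula)], frozenset())

p, q = ('var', 'p'), ('var', 'q')
print(is_tautology(('implies', ('and', p, q), p)))   # True
print(is_tautology(('or', p, q)))                    # False
```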
Abstract:
This paper describes an algorithm to compute the union, intersection and difference of two polygons using a scan-grid approach. Basically, in this method, the screen is divided into cells and the algorithm is applied to each cell in turn. The output from all the cells is integrated to yield a representation of the output polygon. In most cells no computation is required, and thus the algorithm is fast. The algorithm has been implemented for polygons but can be extended to polyhedra as well. The algorithm is shown to take O(N) time in the average case, where N is the total number of edges of the two input polygons.