917 results for GRAVITATIONAL SEARCH ALGORITHM
Abstract:
Background: Despite initial concerns about the sensitivity of the proposed diagnostic criteria for DSM-5 Autism Spectrum Disorder (ASD; e.g. Gibbs et al., 2012; McPartland et al., 2012), evidence is growing that the DSM-5 criteria provide an inclusive description with both good sensitivity and specificity (e.g. Frazier et al., 2012; Kent, Carrington et al., 2013). The capacity of the criteria to provide levels of sensitivity and specificity comparable with those of DSM-IV-TR, however, relies on careful measurement to ensure that appropriate items from diagnostic instruments map onto the new DSM-5 descriptions. Objectives: To use an existing DSM-5 diagnostic algorithm (Kent, Carrington et al., 2013) to identify a set of ‘essential’ behaviors sufficient to make a reliable and accurate diagnosis of DSM-5 ASD across age and ability level. Methods: Specific behaviors were identified and tested from the recently published DSM-5 algorithm for the Diagnostic Interview for Social and Communication Disorders (DISCO). Analyses were run on existing DISCO datasets, with a total participant sample size of 335. Three studies provided step-by-step development towards identification of a minimum set of items. Study 1 identified the most highly discriminating items (p<.0001). Study 2 used a lower selection threshold than Study 1 (p<.05) to facilitate better representation of the full DSM-5 ASD profile. Study 3 included additional items previously reported as significantly more frequent in individuals with higher ability. The discriminant validity of all three item sets was tested using Receiver Operating Characteristic curves. Finally, sensitivity across age and ability was investigated in a subset of individuals with ASD (n=190). Results: Study 1 identified an item set (14 items) with good discriminant validity, but which predominantly measured social-communication behaviors (11/14). The Study 2 item set (48 items) better represented the full DSM-5 ASD profile and had good discriminant validity, but lacked sensitivity for individuals with higher ability. The final Study 3 adjusted item set (54 items) improved sensitivity for individuals with higher ability, and its performance was comparable to the published DISCO DSM-5 algorithm. Conclusions: This work represents a first attempt to derive a reduced set of behaviors for DSM-5 directly from an existing standardized ASD developmental history interview. Further work involving existing ASD diagnostic tools with community-based and well-characterized research samples will be required to replicate these findings and exploit their potential to contribute to a more efficient and focused ASD diagnostic process.
Abstract:
The problem of finding the optimal join ordering when executing a query against a relational database management system is a combinatorial optimization problem, which makes deterministic exhaustive search impractical for queries joining a large number of relations. In this work an adaptive genetic algorithm with dynamic population size is proposed for optimizing large join queries. The performance of the algorithm is compared with that of several classical non-deterministic optimization algorithms. Experiments were performed optimizing several random queries against a randomly generated data dictionary. In a number of test runs, the proposed adaptive genetic algorithm with a probabilistic selection operator outperforms the canonical genetic algorithm with elitist selection as well as two common random search strategies, and proves to be a viable alternative to existing non-deterministic optimization approaches.
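The abstract does not specify the genetic operators; purely as an illustration of the approach, the sketch below encodes a join order as a permutation of relation indices, uses fitness-proportionate (probabilistic) selection, and shrinks the population as the search converges. The cost model `estimate_cost` is a toy stand-in for a real optimizer's cardinality-based estimates.

```python
import random

def estimate_cost(order):
    # Toy cost model (assumption): a real optimizer would estimate the
    # sizes of intermediate join results from the data dictionary.
    return sum((i + 1) * rel for i, rel in enumerate(order))

def crossover(p1, p2):
    # Order crossover (OX): keep a slice of p1, fill the rest from p2.
    a, b = sorted(random.sample(range(len(p1)), 2))
    hole = set(p1[a:b])
    rest = [g for g in p2 if g not in hole]
    return rest[:a] + p1[a:b] + rest[a:]

def mutate(order, rate=0.1):
    order = order[:]
    if random.random() < rate:
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
    return order

def adaptive_ga(n_relations, generations=200, pop_min=20, pop_max=100):
    pop = [random.sample(range(n_relations), n_relations)
           for _ in range(pop_max)]
    for g in range(generations):
        pop.sort(key=estimate_cost)
        size = max(pop_min, pop_max - g)      # dynamic population size
        pop = pop[:size]
        # Probabilistic selection: fitter join orders are likelier parents.
        weights = [1.0 / (1.0 + estimate_cost(ind)) for ind in pop]
        pop += [mutate(crossover(*random.choices(pop, weights, k=2)))
                for _ in range(size)]
    return min(pop, key=estimate_cost)

print(adaptive_ga(8))
```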
Abstract:
In this paper a variable neighborhood search (VNS) approach to the task assignment problem (TAP) is considered. An appropriate neighborhood scheme, a shaking operator, and a local search procedure are constructed specifically for this problem. Computational results are presented for instances from the literature, and compared to optimal solutions obtained by the CPLEX solver and to heuristic solutions generated by a genetic algorithm. The proposed VNS approach reaches all optimal solutions in short computational time.
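The abstract leaves the neighborhood scheme to the paper; the skeleton below shows only the standard VNS control loop such an approach instantiates. `cost`, `shake`, and `local_search` are problem-specific callables supplied by the user (for TAP, shaking might reassign k random tasks and local search move single tasks between processors; both are assumptions here, not the paper's operators).

```python
def vns(initial, cost, shake, local_search, k_max=5, iters=100):
    # Standard VNS loop: perturb the incumbent in neighborhood k, improve
    # locally, and return to the first neighborhood whenever that pays off.
    best = local_search(initial)
    for _ in range(iters):
        k = 1
        while k <= k_max:
            candidate = local_search(shake(best, k))
            if cost(candidate) < cost(best):
                best, k = candidate, 1   # improvement: restart at N_1
            else:
                k += 1                   # no improvement: widen neighborhood
    return best
```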
Abstract:
The objective of this study was to identify a set of 'essential' behaviours sufficient for diagnosis of DSM-5 Autism Spectrum Disorder (ASD). Highly discriminating, 'essential' behaviours were identified from the published DSM-5 algorithm developed for the Diagnostic Interview for Social and Communication Disorders (DISCO). Study 1 identified a reduced item set (48 items) with good predictive validity (as measured using receiver operating characteristic curves) that represented all symptom sub-domains described in the DSM-5 ASD criteria but lacked sensitivity for individuals with higher ability. An adjusted essential item set (54 items; Study 2) had good sensitivity when applied to individuals with higher ability and performance was comparable to the published full DISCO DSM-5 algorithm. Investigation at the item level revealed that the most highly discriminating items predominantly measured social-communication behaviours. This work represents a first attempt to derive a reduced set of behaviours for DSM-5 directly from an existing standardised ASD developmental history interview and has implications for the use of DSM-5 criteria for clinical and research practice.
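Purely as an illustration of the style of analysis described (selecting highly discriminating items, then validating the reduced set with ROC curves), the sketch below runs on synthetic data; the item matrix, the p < .05 chi-square screen, and the sum-score used for the ROC are all assumptions, not the study's procedure.

```python
import numpy as np
from scipy.stats import chi2_contingency
from sklearn.metrics import roc_auc_score

# Synthetic stand-in data: items[i, j] = 1 if participant i shows
# behaviour j; label[i] = 1 for ASD cases, 0 for comparison cases.
rng = np.random.default_rng(0)
items = rng.integers(0, 2, size=(335, 120))
label = rng.integers(0, 2, size=335)

# Step 1: keep items whose frequency differs between groups at p < .05.
selected = []
for j in range(items.shape[1]):
    table = [[np.sum((items[:, j] == v) & (label == g)) for v in (0, 1)]
             for g in (0, 1)]
    if chi2_contingency(table)[1] < 0.05:
        selected.append(j)

# Step 2: score participants by endorsed items; validate with ROC AUC.
score = items[:, selected].sum(axis=1)
print(len(selected), roc_auc_score(label, score))
```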
Abstract:
This paper presents a variable neighbourhood search (VNS) approach for solving the Maximum Set Splitting Problem (MSSP). The algorithm forms a system of neighbourhoods based on changing the component assignment of an increasing number of elements. An efficient local search procedure that swaps the components of pairs of elements yields relatively short running times. Numerical experiments are performed on instances known from the literature: minimum hitting set and Steiner triple systems. Computational results show that the proposed VNS achieves all optimal or best-known solutions in short time. The experiments indicate that the VNS compares favorably with other methods previously used for solving the MSSP. ACM Computing Classification System (1998): I.2.8.
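These neighbourhoods and the pair-swap local search can be plugged straight into a VNS loop like the skeleton sketched earlier. A minimal rendering, with the bipartition held as a dict from element to component (a representation assumed here, not taken from the paper):

```python
import random

def split_count(assign, sets):
    # Objective: a set is split when it contains both components.
    return sum(1 for s in sets if len({assign[e] for e in s}) == 2)

def shake(assign, k):
    # Neighbourhood N_k: change the component of k random elements.
    new = dict(assign)
    for e in random.sample(list(new), k):
        new[e] = 1 - new[e]
    return new

def pair_swap_search(assign, sets):
    # Swap the components of differently assigned pairs of elements
    # for as long as doing so increases the number of split sets.
    best, improved = split_count(assign, sets), True
    while improved:
        improved = False
        elems = list(assign)
        for i, a in enumerate(elems):
            for b in elems[i + 1:]:
                if assign[a] == assign[b]:
                    continue
                assign[a], assign[b] = assign[b], assign[a]
                val = split_count(assign, sets)
                if val > best:
                    best, improved = val, True
                else:
                    assign[a], assign[b] = assign[b], assign[a]
    return assign
```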
Abstract:
This work contributes to the development of search engines that self-adapt their size in response to fluctuations in workload. Deploying a search engine in an Infrastructure as a Service (IaaS) cloud facilitates allocating or deallocating computational resources to or from the engine. In this paper, we focus on the problem of regrouping the metric-space search index when the number of virtual machines used to run the search engine is modified to reflect changes in workload. We propose an algorithm for incrementally adjusting the index to fit the varying number of virtual machines. We tested its performance using a custom-built prototype search engine deployed in the Amazon EC2 cloud, while calibrating the results to compensate for the performance fluctuations of the platform. Our experiments show that, when compared with computing the index from scratch, the incremental algorithm speeds up the index computation 2–10 times while maintaining a similar search performance.
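The abstract does not detail the incremental algorithm; assuming the index is split into partitions assigned to machines, the sketch below illustrates the general idea: only partitions sitting on removed or overloaded machines are moved, instead of recomputing every assignment from scratch.

```python
def incremental_regroup(assignment, new_p):
    """Rebalance index partitions over new_p machines, moving as few
    partitions as possible. Illustrative only, not the paper's algorithm.
    assignment: dict partition_id -> machine_id."""
    target = -(-len(assignment) // new_p)        # ceil(partitions / machines)
    loads = {m: 0 for m in range(new_p)}
    for m in assignment.values():
        if m < new_p:
            loads[m] += 1
    moved = []
    for part, m in list(assignment.items()):
        if m >= new_p or loads[m] > target:      # removed or overloaded
            if m < new_p:
                loads[m] -= 1
            dest = min(loads, key=loads.get)     # least-loaded machine
            assignment[part] = dest
            loads[dest] += 1
            moved.append(part)
    return assignment, moved

# Shrinking from 3 machines to 2 moves only the partitions on machine 2.
print(incremental_regroup({0: 0, 1: 1, 2: 2, 3: 2}, new_p=2))
```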
Abstract:
In this paper a Variable Neighborhood Search (VNS) algorithm for solving the Capacitated Single Allocation Hub Location Problem (CSAHLP) is presented. CSAHLP consists of two subproblems: the first is choosing a set of hubs from all nodes in a network, while the second is finding the optimal allocation of non-hubs to hubs once the set of hubs is known. The VNS algorithm was used for the first subproblem, while the CPLEX solver was used for the second. Computational results demonstrate that the proposed algorithm reached optimal solutions on all 20 test instances for which optimal solutions are known, and did so in short computational time.
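The two-level decomposition pairs naturally with the VNS skeleton shown earlier: the outer search moves over candidate hub sets, and each candidate is priced by solving the allocation subproblem exactly (CPLEX in the paper). The greedy allocation below is only an illustrative stand-in for that exact solve.

```python
import random

def allocation_cost(hubs, nodes, cost, capacity, demand):
    # Second subproblem: allocate every non-hub to a single hub. The
    # paper solves this exactly with CPLEX; this greedy is a stand-in.
    load = {h: 0.0 for h in hubs}
    total = 0.0
    for v in nodes:
        if v in hubs:
            continue
        feasible = [h for h in hubs if load[h] + demand[v] <= capacity[h]]
        if not feasible:
            return float('inf')      # this hub set cannot serve all nodes
        h = min(feasible, key=lambda h: cost[v][h])
        load[h] += demand[v]
        total += cost[v][h]
    return total

def shake_hubs(hubs, nodes, k):
    # Outer neighbourhood N_k: swap k hubs for k current non-hubs.
    hubs = set(hubs)
    k = min(k, len(hubs), len(set(nodes)) - len(hubs))
    out = random.sample(sorted(hubs), k)
    inn = random.sample(sorted(set(nodes) - hubs), k)
    return (hubs - set(out)) | set(inn)
```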
Abstract:
Sequential pattern mining is an important subject in data mining, with broad applications in many different areas. However, previous sequential mining algorithms have mostly aimed to count occurrences (the support) without regard to the varying importance of different data items. In this paper, we propose to explore the search space of subsequences with normalized weights. We are interested not only in the number of occurrences of the sequences (their supports) but also in the importance of the sequences (their weights). When generating subsequence candidates, we use both the support and the weight of the candidates while maintaining the downward closure property of these patterns, which accelerates the process of candidate generation.
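The abstract does not give the exact pruning condition, so the sketch below is an assumption-laden illustration rather than the paper's algorithm: it prunes candidates on support × maximum item weight, which is anti-monotone and therefore preserves the downward closure property, whereas pruning directly on a candidate's own normalized weight would not (a longer sequence can have a higher normalized weight than its prefix).

```python
def weighted_frequent_sequences(db, weights, min_wsup):
    # db: list of item sequences (single items per element, for simplicity);
    # weights: normalized item weights in (0, 1]; min_wsup: threshold.
    max_w = max(weights.values())

    def support(seq):
        def occurs(s):
            it = iter(s)
            return all(x in it for x in seq)   # subsequence test
        return sum(occurs(s) for s in db) / len(db)

    def weight(seq):
        return sum(weights[x] for x in seq) / len(seq)   # normalized weight

    items = sorted(weights)
    level = [(x,) for x in items if support((x,)) * max_w >= min_wsup]
    result = []
    while level:
        result += [s for s in level if support(s) * weight(s) >= min_wsup]
        # Extend surviving prefixes; support(s) * max_w is an anti-monotone
        # upper bound on the weighted support, so this pruning is safe.
        level = [s + (x,) for s in level for x in items
                 if support(s + (x,)) * max_w >= min_wsup]
    return result

print(weighted_frequent_sequences([list("abab"), list("abc")],
                                  {"a": 1.0, "b": 0.6, "c": 0.2}, 0.5))
```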
Abstract:
This research focuses on automatically adapting a search engine's size in response to fluctuations in query workload. Deploying a search engine in an Infrastructure as a Service (IaaS) cloud facilitates allocating or deallocating computing resources to or from the engine. Our solution is an adaptive search engine that repeatedly re-evaluates its load and, when appropriate, switches over to a different number of active processors. We focus on three aspects, broken out into three sub-problems: Continually determining the Number of Processors (CNP), the New Grouping Problem (NGP) and the Regrouping Order Problem (ROP). CNP is the problem of determining, in light of changes in the query workload, the ideal number of processors p to keep active at any given time. NGP arises once a change in the number of processors has been decided: it must then be determined how the groups of search data will be distributed across the processors. ROP concerns how to redistribute this data onto the processors while keeping the engine responsive and minimising the switchover time and the incurred network load. We propose solutions for these sub-problems. For NGP we propose an algorithm for incrementally adjusting the index to fit the varying number of virtual machines. For ROP we present an efficient method for redistributing data among processors while keeping the search engine responsive. For CNP we propose an algorithm that determines the new size of the search engine by re-evaluating its load. We tested the solution's performance using a custom-built prototype search engine deployed in the Amazon EC2 cloud. Our experiments show that, compared with computing the index from scratch, the incremental NGP algorithm speeds up the index computation 2–10 times while maintaining similar search performance. The chosen redistribution method is 25% to 50% faster than other methods and reduces the network load by around 30%. For CNP we present a deterministic algorithm that shows a good ability to determine the new size of the search engine. When combined, these algorithms yield an adaptive algorithm that is able to adjust the search engine size under a variable workload.
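The abstract does not state the deterministic CNP rule; one minimal plausible form (the thresholds, per-processor capacity model and hysteresis are assumptions for illustration) scales the processor count on measured utilization:

```python
def choose_processors(load, capacity_per_proc, current,
                      p_min=1, p_max=64, up=0.8, down=0.5):
    # Hysteresis rule: grow above `up` utilization, shrink below `down`;
    # the gap between the two thresholds prevents oscillation.
    util = load / (current * capacity_per_proc)
    if util > up and current < p_max:
        return current + 1
    if util < down and current > p_min:
        return current - 1
    return current
```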
Abstract:
The major barrier to practical optimization of pavement preservation programming has always been that, for formulations where the identity of individual projects is preserved, the solution space grows exponentially with problem size, to the point where it becomes unmanageable by traditional analytical optimization techniques within reasonable limits. This has been attributed to combinatorial explosion, that is, the exponential growth of the number of combinations. The relatively large number of constraints often present in real-life pavement preservation programming problems, and the trade-off considerations required between preventive maintenance, rehabilitation and reconstruction, are further factors that contribute to the solution complexity. In this research study, a new integrated multi-year optimization procedure was developed to solve network-level pavement preservation programming problems through cost-effectiveness-based evolutionary programming analysis, using the Shuffled Complex Evolution (SCE) algorithm. A case study problem was analyzed to illustrate the robustness and consistency of the SCE technique in solving network-level pavement preservation problems. The output from this program is a list of maintenance and rehabilitation (M&R) treatment strategies for each identified segment of the network in each programming year, and the impact on the overall performance of the network in terms of the performance levels of the recommended optimal M&R strategy. The results show that the SCE is very efficient and consistent in the simultaneous consideration of the trade-offs between various pavement preservation strategies, while preserving the identity of the individual network segments. The flexibility of the technique is also demonstrated in the sense that, by suitably coding the problem parameters, it can be used to solve several forms of pavement management programming problems. It is recommended that for large networks some form of decomposition technique be applied to aggregate sections that exhibit similar performance characteristics into links, such that whatever M&R alternative is recommended for a link can be applied to all the sections connected to it. In this way the problem size, and hence the solution time, can be greatly reduced to a more manageable solution space. The study concludes that the robust search characteristics of SCE are well suited to solving the combinatorial problems in long-term network-level pavement M&R programming and provide a rich area for future research.
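SCE was originally formulated for continuous parameter spaces; the study applies it by suitably coding the discrete M&R decision parameters. The sketch below shows the classic SCE structure only (partition the population into complexes, evolve each with simplex-style reflection, shuffle and repeat); all parameter values are illustrative assumptions.

```python
import random

def sce(f, dim, bounds, n_complexes=4, complex_size=10, iters=100):
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_complexes * complex_size)]
    for _ in range(iters):
        pop.sort(key=f)
        # Deal sorted points into complexes like cards, preserving spread.
        complexes = [pop[i::n_complexes] for i in range(n_complexes)]
        for c in complexes:
            worst = max(c, key=f)
            centroid = [sum(x[d] for x in c if x is not worst) / (len(c) - 1)
                        for d in range(dim)]
            # Simplex-style step: reflect the worst point through the centroid.
            trial = [min(hi, max(lo, 2 * g - w))
                     for g, w in zip(centroid, worst)]
            if f(trial) < f(worst):
                c[c.index(worst)] = trial
        pop = [x for c in complexes for x in c]   # shuffle the complexes
    return min(pop, key=f)

print(sce(lambda x: sum(v * v for v in x), dim=3, bounds=(-5.0, 5.0)))
```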
Abstract:
Antenna design is an iterative process in which structures are analyzed and changed to comply with required performance parameters. The classic approach starts with analyzing a "known" structure, obtaining the value of its performance parameter and changing this structure until the "target" value is achieved. This process relies on having an initial structure that follows some known or "intuitive" patterns already familiar to the designer. The purpose of this research was to develop a method of designing UWB antennas. What is new in this proposal is that the design process is reversed: the designer starts with the target performance parameter and obtains a structure as the result of the design process. This method provides a new way to replicate and optimize existing performance parameters. The base of the method is a Genetic Algorithm (GA) adapted to the format of the chromosome that is evaluated by the Electromagnetic (EM) solver. For the electromagnetic study we used the XFDTD™ program, based on the Finite-Difference Time-Domain technique. The programming portion of the method was created in the MatLab environment, which serves as the interface for converting chromosomes, handling file formats and transferring data between XFDTD™ and the GA. A high level of customization had to be written into the code to work with the specific files generated by the XFDTD™ program. Two types of cost function were evaluated: the first sought broadband performance within the UWB band, and the second sought to replicate the curve of a reference geometry. The performance of the method was evaluated in light of the speed of the computing resources used. Balance between accuracy, data file size and speed of execution was achieved by defining parameters in the GA code as well as changing the internal parameters of the XFDTD™ projects. The results showed that the GA produced geometries that were analyzed by the XFDTD™ program and changed following the search criteria until the target value of the cost function was reached. The results also showed how the parameters can change the search criteria and influence the running of the code to produce a variety of geometries.
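The chromosome format is not described in the abstract; one common choice in GA-driven antenna design, shown here purely as an assumed illustration, is a binary "pixel" encoding of the patch geometry, which an interface script (MatLab in the study) would convert into solver input files.

```python
import random

# Assumed encoding: each bit switches one metal "pixel" of the patch on
# or off; the decoded grid would be written out for the EM solver.
ROWS, COLS = 8, 16

def random_chromosome():
    return [random.randint(0, 1) for _ in range(ROWS * COLS)]

def decode(chromosome):
    # Reshape the flat bit string into the 2D pixel grid of the antenna.
    return [chromosome[r * COLS:(r + 1) * COLS] for r in range(ROWS)]

def fitness(chromosome):
    # Placeholder objective: the real cost function runs the FDTD
    # simulation and scores, e.g., return loss across the UWB band.
    grid = decode(chromosome)
    return -sum(sum(row) for row in grid)   # toy objective only
```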
Abstract:
Many classical as well as modern optimization techniques exist. One such modern method, belonging to the field of swarm intelligence, is ant colony optimization. This relatively new optimization concept uses artificial ants and is inspired by the way real ants search for food. In this thesis, a novel ant colony optimization technique for continuous domains was developed. The goal was to improve computing time and robustness compared to other optimization algorithms. Optimization function spaces can have extreme topologies and are therefore difficult to optimize. The proposed method effectively searched the domain and solved difficult single-objective optimization problems. The developed algorithm was run on numerous classic test cases for both single- and multi-objective problems. The results demonstrate that the method is robust and stable and that the number of objective function evaluations is comparable to other optimization algorithms.
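The thesis's specific technique is not described in the abstract; for orientation only, the sketch below shows one standard way to carry the ant-colony metaphor into continuous domains: an archive-based scheme in the style of ACOR (Socha and Dorigo). The parameter values and the simplified guide selection are assumptions.

```python
import random

def aco_continuous(f, dim, bounds, ants=20, archive_size=10,
                   q=0.5, xi=0.85, iters=200):
    lo, hi = bounds
    archive = sorted(([random.uniform(lo, hi) for _ in range(dim)]
                      for _ in range(archive_size)), key=f)
    for _ in range(iters):
        new = []
        for _ in range(ants):
            # Pick a guide among the better archive members (simplified;
            # ACOR weights guides by a Gaussian function of their rank).
            guide = random.choice(archive[:max(1, int(q * archive_size))])
            x = []
            for d in range(dim):
                # "Pheromone": spread proportional to the archive's
                # average distance from the guide in this dimension.
                sigma = xi * sum(abs(s[d] - guide[d])
                                 for s in archive) / (archive_size - 1)
                x.append(min(hi, max(lo, random.gauss(guide[d], sigma))))
            new.append(x)
        archive = sorted(archive + new, key=f)[:archive_size]
    return archive[0]

print(aco_continuous(lambda x: sum(v * v for v in x), dim=2, bounds=(-5.0, 5.0)))
```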
Abstract:
The effectiveness of an optimization algorithm can be reduced to its ability to navigate an objective function’s topology. Hybrid optimization algorithms combine various optimization algorithms under a single meta-heuristic so that the hybrid is more robust, computationally efficient, and/or accurate than its constituent algorithms. This thesis proposes a novel meta-heuristic that uses search vectors to select the constituent algorithm appropriate for a given objective function. The hybrid is shown to perform competitively against several existing hybrid and non-hybrid optimization algorithms over a set of three hundred test cases. This thesis also proposes a general framework for evaluating the effectiveness of hybrid optimization algorithms. Finally, this thesis presents an improved Method of Characteristics code with novel boundary conditions, which characterizes pipelines better than previous codes. This code is coupled with the hybrid optimization algorithm in order to optimize the operation of real-world piston pumps.
Abstract:
This paper combines the idea of a hierarchical distributed genetic algorithm with different inter-agent partnering strategies. Cascading clusters of sub-populations are built from the bottom up, with higher-level sub-populations optimising larger parts of the problem. Hence, higher-level sub-populations search a larger search space with a lower resolution, whilst lower-level sub-populations search a smaller search space with a higher resolution. The effects of different partner selection schemes amongst the agents on solution quality are examined for two multiple-choice optimisation problems. It is shown that partnering strategies that exploit problem-specific knowledge are superior and can counter inappropriate (sub-)fitness measurements.
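As a schematic of the partnering idea (the representation of individuals as dicts of partial assignments is an assumption, not the paper's encoding): an agent's candidate covers only its own part of the problem, so its (sub-)fitness is obtained by completing it with partners drawn from the other sub-populations before the assembled solution is evaluated.

```python
import random

def partnered_fitness(candidate, other_pops, evaluate, choose_partner):
    # Complete the partial candidate with one partner from each other
    # sub-population, then evaluate the assembled full solution.
    full = dict(candidate)
    for pop in other_pops:
        full.update(choose_partner(pop))
    return evaluate(full)

# Two partner-selection schemes of the kind compared in these papers:
def random_partnering(pop):
    return random.choice(pop)               # favors sampling and diversity

def make_best_partnering(score):
    return lambda pop: max(pop, key=score)  # exploits current best knowledge
```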
Abstract:
This paper combines the idea of a hierarchical distributed genetic algorithm with different inter-agent partnering strategies. Cascading clusters of sub-populations are built from the bottom up, with higher-level sub-populations optimising larger parts of the problem. Hence, higher-level sub-populations search a larger search space with a lower resolution, whilst lower-level sub-populations search a smaller search space with a higher resolution. The effects of different partner selection schemes for (sub-)fitness evaluation purposes are examined for two multiple-choice optimisation problems. It is shown that random partnering strategies perform best by providing better sampling and more diversity.