876 results for algorithm optimization
Abstract:
In this paper, we present WebPut, a prototype system that adopts a novel web-based approach to the data imputation problem. To this end, WebPut utilizes the available information in an incomplete database in conjunction with the data consistency principle. Moreover, WebPut extends effective Information Extraction (IE) methods to formulate web search queries that are capable of retrieving missing values with high accuracy. WebPut employs a confidence-based scheme that efficiently leverages our suite of data imputation queries to automatically select the most effective imputation query for each missing value. A greedy iterative algorithm is proposed to schedule the imputation order of the different missing values in a database, and in turn the issuing of their corresponding imputation queries, to improve the accuracy and efficiency of WebPut. In addition, several optimization techniques are proposed to reduce the cost of estimating the confidence of imputation queries at both the tuple level and the database level. Experiments based on several real-world data collections demonstrate not only the effectiveness of WebPut compared to existing approaches, but also the efficiency of our proposed algorithms and optimization techniques.
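A minimal sketch of the kind of greedy, confidence-ordered imputation loop the abstract describes (the function and parameter names, and the re-scoring after each fill, are illustrative assumptions, not WebPut's actual implementation):

```python
# Illustrative sketch only: repeatedly impute the missing value whose best
# candidate query currently has the highest estimated confidence, then
# re-score the remaining cells, since newly filled values may help later queries.

def greedy_impute(missing_cells, candidate_queries, estimate_confidence, run_query):
    """missing_cells: set of (tuple_id, attribute) pairs still to fill.
    candidate_queries: maps a cell to its list of imputation queries.
    estimate_confidence / run_query: assumed callables supplied by the caller."""
    imputed = {}
    remaining = set(missing_cells)
    while remaining:
        # pick the (cell, query) pair with the highest current confidence
        cell, query, conf = max(
            ((c, q, estimate_confidence(c, q, imputed))
             for c in remaining for q in candidate_queries[c]),
            key=lambda t: t[2],
        )
        value = run_query(query)
        if value is not None:
            # a newly filled value can raise the confidence of later queries
            imputed[cell] = value
        remaining.discard(cell)
    return imputed
```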
Abstract:
To enhance the performance of the k-nearest neighbors approach in forecasting short-term traffic volume, this paper proposes and tests a two-step approach capable of forecasting multiple steps ahead. In selecting the k nearest neighbors, a time constraint window is introduced, and local minima of the distances between the state vectors are then ranked to avoid overlaps among candidates. Moreover, to control the undesirable impact of extreme values, a novel algorithm with attractive analytical features is developed based on the principal component. The enhanced KNN method has been evaluated using field data, and our comparative analysis shows that it outperforms the competing algorithms in most cases.
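A hedged sketch of the neighbor-selection idea described above. The array names, the time-of-day window test, and the local-minimum rule are illustrative assumptions, not the paper's code; the point is that candidates are restricted to a time window and only local minima of the distance sequence are ranked, so overlapping history segments do not all enter the k-NN set:

```python
import numpy as np

def select_knn(history, state, times, t_now, k, window):
    """history: (n, d) array of past state vectors; times: their time-of-day stamps."""
    mask = np.abs(times - t_now) <= window            # time-constraint window
    idx = np.where(mask)[0]
    dist = np.linalg.norm(history[idx] - state, axis=1)
    # keep only indices that are local minima of the distance sequence
    local_min = [i for i in range(1, len(dist) - 1)
                 if dist[i] <= dist[i - 1] and dist[i] <= dist[i + 1]]
    ranked = sorted(local_min, key=lambda i: dist[i])[:k]
    return idx[ranked]                                # indices of the k neighbors
```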
Abstract:
Server consolidation using virtualization technology has become an important means of improving the energy efficiency of data centers, and virtual machine placement is at the core of server consolidation. In the past few years, many virtual machine placement approaches have been proposed. However, existing approaches consider only the energy consumed by physical machines and ignore the energy consumed by the communication network in a data center. This network energy consumption is not trivial and should therefore be considered in virtual machine placement. In our preliminary research, we proposed a genetic algorithm for a new virtual machine placement problem that considers the energy consumption of both the physical machines and the communication network in a data center. Aiming to improve the performance and efficiency of that genetic algorithm, this paper presents a hybrid genetic algorithm for the energy-efficient virtual machine placement problem. Experimental results show that the hybrid genetic algorithm significantly outperforms the original genetic algorithm and that it is scalable.
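To make the objective concrete, here is a simplified energy model of the kind such a genetic algorithm could minimise as its fitness. It is an illustrative assumption, not the paper's formulation: server energy is modelled as idle power plus a load-proportional part, and the network term simply charges inter-host traffic:

```python
def placement_energy(placement, vm_cpu, host_idle, host_peak, host_cap, traffic):
    """placement: dict vm -> host; traffic: dict (vm_a, vm_b) -> traffic volume."""
    # energy of physical machines: idle power plus a load-proportional part
    load = {}
    for vm, host in placement.items():
        load[host] = load.get(host, 0.0) + vm_cpu[vm]
    pm_energy = sum(host_idle[h] + (host_peak[h] - host_idle[h]) * min(1.0, l / host_cap[h])
                    for h, l in load.items())
    # network energy: only traffic between VMs on different hosts costs energy here
    net_energy = sum(vol for (a, b), vol in traffic.items()
                     if placement[a] != placement[b])
    return pm_energy + net_energy   # a GA would minimise this value as its fitness
```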
Abstract:
This paper presents an optimisation algorithm to maximize the loadability of single wire earth return (SWER) networks by minimizing the cost of batteries and regulators while respecting voltage constraints and thermal limits. The algorithm, which finds the optimal locations of batteries and regulators, uses hybrid discrete particle swarm optimization with mutation (DPSO + Mutation). Simulation results on a realistic, highly loaded SWER network show the effectiveness of using batteries to improve the loadability of the network in a cost-effective way. In this case, while the existing network can supply only 61% of peak load without violating the constraints, the loadability is increased to the full peak load by utilizing two optimally located battery sites. That is, in a SWER system like the one studied, each optimally located installed kVA of batteries supports a loadability increase of about 2 kVA.
Abstract:
This paper presents a new algorithm based on a Modified Particle Swarm Optimization (MPSO) to estimate the harmonic state variables in distribution networks. The proposed algorithm estimates both the amplitude and the phase of each injected harmonic current by minimizing the error between the values measured by Phasor Measurement Units (PMUs) and the values computed from the estimated parameters during the estimation process. The proposed algorithm can take into account the uncertainty of the harmonic pseudo-measurements and the tolerance in the line impedances of the network, as well as the uncertainty of Distributed Generators (DGs) such as Wind Turbines (WTs). The main features of the proposed MPSO algorithm are the use of primary and secondary PSO loops and the application of a mutation function. Simulation results on the IEEE 34-bus radial and a realistic 70-bus radial test network are presented. The results demonstrate that the proposed Distribution Harmonic State Estimation (DHSE) algorithm is markedly faster and more accurate than algorithms such as Weighted Least Squares (WLS), the Genetic Algorithm (GA), original PSO, and Honey Bees Mating Optimization (HBMO).
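A minimal PSO-with-mutation sketch in the spirit of the estimator described above. It is illustrative only: the primary/secondary loop structure and the actual harmonic measurement model of the paper are not reproduced, and the `residual` callable, bounds, and coefficients are assumptions:

```python
import numpy as np

def pso_mutation(residual, dim, n_particles=30, iters=200, mut_rate=0.1, rng=None):
    """residual(x) -> scalar error between measured and computed quantities."""
    rng = rng or np.random.default_rng(0)
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))       # candidate state vectors
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([residual(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        # mutation step: re-draw a few coordinates to keep swarm diversity
        mut = rng.random(x.shape) < mut_rate
        x[mut] = rng.uniform(-1.0, 1.0, mut.sum())
        val = np.array([residual(p) for p in x])
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()
```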
Abstract:
This paper presents a new method to determine a feeder reconfiguration scheme considering a variable load profile. The objective function consists of system losses, reliability costs, and switching costs. In order to achieve an optimal solution, the proposed method compares these costs dynamically and determines when and how it is reasonable to perform a switching operation. The proposed method divides a year into several equal time periods; then, using particle swarm optimization (PSO), optimal candidate configurations for each period are obtained. The system losses and customer interruption cost of each configuration during each period are also calculated. Then, considering the cost of switching from one configuration to another, a dynamic programming algorithm (DPA) is used to determine the annual reconfiguration scheme. Several test systems were used to validate the proposed method. The obtained results indicate that reaching an optimal solution requires comparing operation costs dynamically.
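A sketch of the dynamic-programming step outlined above, under assumed data structures (per-period candidate configurations with an operating cost, and a switching-cost function); it is not the paper's implementation, only the standard shortest-path-over-periods recurrence:

```python
def annual_schedule(period_costs, switch_cost):
    """period_costs: list over periods of {config: operating cost (losses + interruption)};
    switch_cost(a, b): cost of switching from configuration a to configuration b."""
    # best[c] = minimal cumulative cost of ending the current period in config c
    best = dict(period_costs[0])
    back = [{c: None for c in period_costs[0]}]
    for costs in period_costs[1:]:
        new_best, new_back = {}, {}
        for c, op in costs.items():
            prev = min(best, key=lambda p: best[p] + switch_cost(p, c))
            new_best[c] = best[prev] + switch_cost(prev, c) + op
            new_back[c] = prev
        best, back = new_best, back + [new_back]
    # reconstruct the minimum-cost sequence of configurations
    last = min(best, key=best.get)
    plan = [last]
    for bp in reversed(back[1:]):
        plan.append(bp[plan[-1]])
    return list(reversed(plan)), best[last]
```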
Abstract:
A long query provides more useful hints for finding relevant documents, but it is likely to introduce noise that degrades retrieval performance. To mitigate this adverse effect, it is important to remove noisy terms and to introduce and boost additional relevant terms. This paper presents a comprehensive framework, called the Aspect Hidden Markov Model (AHMM), which integrates query reduction and expansion for retrieval with long queries. It optimizes the probability distribution of query terms by utilizing intra-query term dependencies as well as the relationships between query terms and words observed in relevance feedback documents. Empirical evaluation on three large-scale TREC collections demonstrates that our approach, which is automatic, achieves salient improvements over various strong baselines and reaches performance comparable to a state-of-the-art method based on users' interactive query term reduction and expansion.
Abstract:
The reliability of carrier-phase ambiguity resolution (AR) for an integer least-squares (ILS) problem depends on the ambiguity success rate (ASR), which in practice can be well approximated by the success probability of integer bootstrapping solutions. With the current GPS constellation, a sufficiently high ASR of the geometry-based model is achievable only for a certain percentage of the time. As a result, high AR reliability cannot be assured by a single constellation. With a dual-constellation system (DCS), for example GPS and Beidou, which provides more satellites in view, users can expect significant performance benefits such as improved AR reliability and high-precision positioning solutions. Simply using all the satellites in view for AR and positioning is a straightforward solution, but it does not necessarily lead to the high reliability that is hoped for. This paper presents an alternative approach that, instead of using all satellites, selects a subset of the visible satellites to achieve higher reliability of the AR solutions in a multi-GNSS environment. Traditionally, satellite selection algorithms are mostly based on the position dilution of precision (PDOP) in order to meet accuracy requirements. In this contribution, reliability criteria are introduced for GNSS satellite selection, and a novel satellite selection algorithm for reliable ambiguity resolution (SARA) is developed. The SARA algorithm allows receivers to select a subset of satellites that achieves a high ASR, e.g., above 0.99. Numerical results from simulated dual-constellation cases show that, with the SARA procedure, the percentages of ASR values in excess of 0.99 and of ratio-test values passing the threshold of 3 are both higher than those obtained by directly using all satellites in view. In particular, in the dual-constellation case, the percentages of ASRs (>0.99) and ratio-test values (>3) reach 98.0% and 98.5% respectively, compared to 18.1% and 25.0% without the satellite selection process. It is also worth noting that SARA is simple to implement and computationally light, so it can be applied in most real-time data processing applications.
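For context, the integer-bootstrapping success rate that such a selection criterion relies on is commonly computed from the conditional standard deviations of the (decorrelated) ambiguities as P_IB = prod_i ( 2 * Phi(1 / (2 * sigma_{i|I})) - 1 ), where Phi is the standard normal CDF. The sketch below evaluates this bound; how the conditional sigmas are obtained (e.g., from an LDL^T factorisation of the ambiguity covariance matrix) and how the subset search is organised are assumptions, not the SARA algorithm itself:

```python
from math import erf, sqrt

def bootstrap_success_rate(cond_sigmas):
    """cond_sigmas: conditional std. deviations of the decorrelated ambiguities."""
    phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))   # standard normal CDF
    p = 1.0
    for s in cond_sigmas:
        p *= 2.0 * phi(1.0 / (2.0 * s)) - 1.0
    return p

# A SARA-style selection loop would keep adding or swapping satellites until
# bootstrap_success_rate(...) for the resulting model exceeds, e.g., 0.99.
```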
Abstract:
We propose a new protocol providing cryptographically secure authentication to unaided humans against passive adversaries. We also propose a new generic passive attack on human identification protocols. The attack is an application of Coppersmith's baby-step giant-step algorithm to human identification protocols. Under this attack, the achievable security of some of the best candidates for human identification protocols in the literature is further reduced. We show that our protocol preserves similar usability while achieving better security than these protocols. A comprehensive security analysis is provided, which suggests parameters that guarantee the desired levels of security.
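Baby-step giant-step is a generic meet-in-the-middle time/memory trade-off. The classic discrete-logarithm instance below illustrates the idea; it is not the protocol-specific attack developed in the paper:

```python
from math import isqrt

def bsgs_discrete_log(g, h, p):
    """Find x with g**x = h (mod p), p prime, or return None."""
    m = isqrt(p) + 1
    baby = {pow(g, j, p): j for j in range(m)}          # baby steps: g^j
    factor = pow(g, -m, p)                              # g^(-m) mod p
    gamma = h
    for i in range(m):                                  # giant steps: h * g^(-i*m)
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = (gamma * factor) % p
    return None

# Example: bsgs_discrete_log(2, pow(2, 20, 101), 101) returns 20.
```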
Abstract:
The placement of the mappers and reducers on the machines directly affects the performance and cost of a MapReduce computation in cloud computing. From the computational point of view, the mappers/reducers placement problem is a generalization of the classical bin packing problem, which is NP-complete. Thus, in this paper we propose a new heuristic algorithm for the mappers/reducers placement problem in cloud computing and evaluate it against several other heuristics in terms of solution quality and computation time on a set of test problems with various characteristics. The computational results show that our heuristic algorithm is much more efficient than the other heuristics. We also verify the effectiveness of our heuristic algorithm by comparing the mapper/reducer placement it generates for a benchmark problem with a conventional mapper/reducer placement. The comparison shows that the computation using our mapper/reducer placement is much cheaper while still satisfying the computation deadline.
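Since the placement problem generalises bin packing, a simple first-fit-decreasing style heuristic illustrates the flavour of such algorithms. This is a generic sketch under assumed inputs (task sizes and a uniform machine capacity), not the specific heuristic proposed in the paper:

```python
def first_fit_decreasing(task_sizes, machine_capacity):
    """Assign each mapper/reducer task to the first machine it fits on."""
    machines = []                                  # remaining capacity per machine
    assignment = {}
    order = sorted(task_sizes, key=task_sizes.get, reverse=True)
    for task in order:
        size = task_sizes[task]
        for m, free in enumerate(machines):
            if free >= size:
                machines[m] -= size
                assignment[task] = m
                break
        else:                                      # no machine fits: open a new one
            machines.append(machine_capacity - size)
            assignment[task] = len(machines) - 1
    return assignment, len(machines)
```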
Abstract:
MapReduce is a computation model for processing large data sets in parallel on large clusters of machines in a reliable, fault-tolerant manner. A MapReduce computation is broken down into a number of map tasks and reduce tasks, which are performed by so-called mappers and reducers, respectively. The placement of the mappers and reducers on the machines directly affects the performance and cost of the MapReduce computation. From the computational point of view, the mappers/reducers placement problem is a generalization of the classical bin packing problem, which is NP-complete. Thus, in this paper we propose a new grouping genetic algorithm for the mappers/reducers placement problem in cloud computing. Compared with the original grouping genetic algorithm, ours uses an innovative coding scheme and eliminates the inversion operator, which is an essential operator in the original algorithm. The new grouping genetic algorithm is evaluated experimentally, and the results show that it is much more efficient than four popular algorithms for the problem, including the original grouping genetic algorithm.
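In a grouping genetic algorithm, a chromosome is encoded group by group rather than item by item: each gene is a whole group, here the set of tasks placed on one machine. The sketch below shows the classic group-level crossover to illustrate the general idea; the paper's innovative coding scheme (and its removal of the inversion operator) is not reproduced, and the `repair` callable is an assumption:

```python
import random

def group_crossover(parent_a, parent_b, repair, rng=random):
    """Parents are lists of sets (one set of tasks per machine).
    repair(groups, leftover_items): assumed callable reinserting displaced items."""
    i, j = sorted(rng.sample(range(len(parent_b) + 1), 2))
    injected = [set(g) for g in parent_b[i:j]]             # inject a slice of B's groups
    covered = set().union(*injected)
    kept, displaced = [], set()
    for g in parent_a:
        if g & covered:                  # disrupted group: dissolve it and keep
            displaced |= g - covered     # its leftover items for reinsertion
        else:
            kept.append(set(g))
    return repair(kept + injected, displaced)
```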
Abstract:
A Software-as-a-Service, or SaaS, can be delivered in a composite form consisting of a set of application and data components that work together to deliver higher-level functional software. Components in a composite SaaS may need to be scaled, that is, replicated or deleted, to accommodate the user load. It may not be necessary to replicate all components of the SaaS, as some components can be shared by other instances. On the other hand, when the load is low, some of the instances may need to be deleted to avoid resource underutilisation. Thus, it is important to determine which components are to be scaled such that the performance of the SaaS is maintained. Extensive research on SaaS resource management in the Cloud has not yet addressed the challenges of the scaling process for composite SaaS. Therefore, a hybrid genetic algorithm is proposed that utilises problem-specific knowledge and explores the best combination of scaling plans for the components. Experimental results demonstrate that the proposed algorithm outperforms existing heuristic-based solutions.
Abstract:
In Service-oriented Architectures, business processes can be realized by composing loosely coupled services. The problem of QoS-aware service composition is widely recognized in the literature. Existing approaches for computing an optimal solution to this problem tackle structured business processes, i.e., business processes composed of XOR-block, AND-block, and repeat-loop orchestration components. So far, OR-blocks and unstructured orchestration components have not been sufficiently considered in the context of QoS-aware service composition. The work at hand addresses this shortcoming. An approach for computing an optimal solution to the service composition problem is proposed that considers structured orchestration components, such as AND-, XOR-, and OR-blocks and repeat loops, as well as unstructured orchestration components.
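For orientation, these are commonly used QoS aggregation rules for structured orchestrations, of the kind such optimisation approaches work over (a hedged illustration, not the paper's exact model): response time adds up in a sequence, an AND-block waits for its slowest branch, and an XOR-block averages over branch probabilities:

```python
def seq_response(times):                 # sequence of services
    return sum(times)

def and_response(branch_times):          # parallel AND-block: wait for all branches
    return max(branch_times)

def xor_response(branch_times, probs):   # exclusive choice, weighted by branch probability
    return sum(t * p for t, p in zip(branch_times, probs))

def loop_response(body_time, expected_iterations):   # repeat loop
    return body_time * expected_iterations

# Example: a sequence of two services (0.2 s, 0.3 s) followed by an AND-block
# with branches (0.4 s, 0.1 s) has expected response 0.2 + 0.3 + max(0.4, 0.1) = 0.9 s.
```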
Abstract:
Scaffolds play a pivotal role in tissue engineering, promoting the synthesis of neo extra-cellular matrix (ECM) and providing temporary mechanical support for the cells during tissue regeneration. Advances introduced by additive manufacturing techniques have significantly improved the ability to regulate scaffold architecture, enhancing the control over scaffold shape and porosity. Thus, considerable research efforts have been devoted to the fabrication of 3D porous scaffolds with optimized micro-architectural features. This chapter gives an overview of the methods for the design of additively manufactured scaffolds and their applicability in tissue engineering (TE). Along with a survey of the state of the art, the authors also present a recently developed method, called Load-Adaptive Scaffold Architecturing (LASA), which returns scaffold architectures optimized for given applied mechanical load systems, once the specific stress distribution has been evaluated through Finite Element Analysis (FEA).
Abstract:
The Fluid–Structure Interaction (FSI) problem is significant in science and engineering and poses challenges for computational mechanics. The coupled Finite Element and Smoothed Particle Hydrodynamics (FE-SPH) model is a robust technique for the simulation of FSI problems. However, two important steps in the coupled FE-SPH model, neighbor searching and contact searching, are extremely time-consuming. The Point-In-Box (PIB) searching algorithm was developed by Swegle to improve searching efficiency. However, it has the shortcoming that its efficiency can be significantly affected by the distribution of points (nodes in FEM and particles in SPH). In this paper, to improve searching efficiency, a novel Striped-PIB (S-PIB) searching algorithm is proposed to overcome the point-distribution shortcoming of the PIB algorithm, and the two time-consuming steps of neighbor searching and contact searching are integrated into a single searching step. The accuracy and efficiency of the newly developed searching algorithm are studied through efficiency tests and FSI problems. It is found that the newly developed model significantly improves computational efficiency, and it is believed to be a powerful tool for FSI analysis.
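A minimal cell-list sketch of the generic neighbour-search task that PIB-style algorithms accelerate. This is the standard uniform-grid approach, given only as a reference point; it is not the striped S-PIB data structure proposed in the paper, and the function and parameter names are illustrative:

```python
from collections import defaultdict
from itertools import product
import numpy as np

def neighbours_within(points, radius):
    """Return, for each point, the indices of the other points closer than `radius`."""
    cell = defaultdict(list)
    keys = np.floor(points / radius).astype(int)        # bin points into grid cells
    for idx, k in enumerate(map(tuple, keys)):
        cell[k].append(idx)
    offsets = list(product((-1, 0, 1), repeat=points.shape[1]))
    result = []
    for i, k in enumerate(map(tuple, keys)):
        # candidates live in the point's own cell and the adjacent cells
        cand = [j for off in offsets
                for j in cell.get(tuple(a + b for a, b in zip(k, off)), [])]
        d = np.linalg.norm(points[cand] - points[i], axis=1)
        result.append([j for j, dj in zip(cand, d) if dj < radius and j != i])
    return result
```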