90 results for Cryptography algorithms


Relevance: 20.00%

Abstract:

Type reduction (TR) is one of the key components of interval type-2 fuzzy logic systems (IT2FLSs). Minimizing the computational requirements has been one of the key design criteria for developing TR algorithms, and researchers often favour computationally less expensive TR algorithms. This paper evaluates and compares five frequently used TR algorithms based on their contribution to the forecasting performance of IT2FLS models. Algorithms are judged on the generalization power of the IT2FLS models developed using them. Synthetic and real-world case studies with different levels of uncertainty are considered to examine the effects of TR algorithms on forecast accuracy. According to the obtained results, the Coupland-John TR algorithm leads to models with higher and more stable forecasting performance. However, there is no obvious and consistent relationship between the width of the type-reduced set and the TR algorithm. © 2013 Elsevier B.V.
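
As a concrete point of reference, the sketch below shows one of the cheapest closed-form alternatives discussed in the TR literature, the Nie-Tan direct defuzzifier, which collapses the lower and upper membership grades into a single crisp output in one pass. It is offered only as an illustration of what a low-cost type reduction/defuzzification step looks like; the five algorithms actually compared in the paper are not reproduced here, and the membership functions in the example are illustrative.

```python
import numpy as np

def nie_tan_defuzzify(x, mu_lower, mu_upper):
    """Closed-form Nie-Tan defuzzification for an interval type-2 fuzzy set.

    x        : discretized output domain (1-D array)
    mu_lower : lower membership grades at each x
    mu_upper : upper membership grades at each x
    """
    weights = mu_lower + mu_upper
    return float(np.dot(x, weights) / np.sum(weights))

# Toy example on a coarse output domain
x = np.linspace(0.0, 10.0, 11)
mu_upper = np.exp(-0.5 * ((x - 5.0) / 2.0) ** 2)   # upper membership function
mu_lower = 0.6 * mu_upper                          # lower membership function
print(nie_tan_defuzzify(x, mu_lower, mu_upper))
```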

Relevance: 20.00%

Abstract:

We explore the multicast lifetime capacity of energy-limited wireless ad hoc networks using directional multibeam antennas by formulating and solving the corresponding optimization problem. In such networks, each node is equipped with a practical smart antenna array that can be configured to support multiple beams with adjustable orientation and beamwidth. The special case of this optimization problem in networks with single beams has been extensively studied and shown to be NP-hard. In this paper, we provide a globally optimal solution to this problem by developing a general MILP formulation that can apply to various configurable antenna models, many of which are not supported by the existing formulations. To study the multicast lifetime capacity of large-scale networks, we also propose an efficient heuristic algorithm with guaranteed theoretical performance. In particular, we provide a sufficient condition to determine whether its performance reaches the optimum, based on an analysis of its approximation ratio. These results are validated by experiments as well. The multicast lifetime capacity is then quantitatively studied by evaluating the proposed exact and heuristic algorithms using simulations. The experimental results also show that using two-beam antennas can exploit most of the lifetime capacity of the networks for multicast communications. © 2013 IEEE.
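
The sketch below only illustrates the quantity being maximized: the multicast lifetime, i.e. the time until the first forwarding node exhausts its battery, given a beam assignment per node. The beam power model (a sector-area-scaled path-loss cost plus a fixed electronics term) and all names are illustrative assumptions; the paper's antenna models and MILP formulation are not reproduced.

```python
def beam_power(beamwidth_deg, radius, alpha=2.0, p_elec=0.1):
    """Assumed beam power model: sector-area-scaled path-loss cost plus electronics.
    Not the paper's model -- a generic stand-in for illustration."""
    return (beamwidth_deg / 360.0) * radius ** alpha + p_elec

def node_power(beams):
    """Total transmit power of a node running several beams.
    beams: list of (beamwidth_deg, radius) tuples."""
    return sum(beam_power(w, r) for w, r in beams)

def multicast_lifetime(energy, assignment):
    """Lifetime = time until the first forwarding node drains its battery.
    energy:     {node: initial energy}
    assignment: {node: list of (beamwidth_deg, radius) beams it uses}"""
    return min(energy[v] / node_power(b) for v, b in assignment.items() if b)

# Source s reaches {a, b, c}; s uses two beams, a relays with one narrow beam.
energy = {"s": 100.0, "a": 80.0}
assignment = {"s": [(60.0, 5.0), (30.0, 8.0)], "a": [(20.0, 4.0)]}
print(multicast_lifetime(energy, assignment))
```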

Relevance: 20.00%

Abstract:

The least-mean-square-type (LMS-type) algorithms are known as simple and effective adaptation algorithms. However, LMS-type algorithms involve a trade-off between convergence rate and steady-state performance. In this paper, we investigate a new variable step-size approach to achieve a fast convergence rate and low steady-state misadjustment. By approximating the optimal step size that minimizes the mean-square deviation, we derive variable step sizes for both the time-domain normalized LMS (NLMS) algorithm and the transform-domain LMS (TDLMS) algorithm. The proposed variable step sizes are simple quotient forms of filtered versions of the quadratic error and are very effective for the NLMS and TDLMS algorithms. Computer simulations are carried out in the framework of adaptive system modeling. Superior performance is obtained compared to existing popular variable step-size approaches for the NLMS and TDLMS algorithms. © 2014 Springer Science+Business Media New York.
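
A minimal sketch of a variable step-size NLMS filter in the adaptive system-modeling setting is given below. The step-size rule used here (a quotient of a low-pass-filtered squared error) is a generic stand-in rather than the paper's derived formula, and the filter length, smoothing factor, and noise level are illustrative assumptions.

```python
import numpy as np

def vss_nlms(x, d, num_taps=8, beta=0.9, delta=1e-2, eps=1e-8):
    """NLMS with a variable step size driven by a smoothed quadratic error.
    The quotient rule below is an illustrative stand-in, not the paper's exact formula."""
    w = np.zeros(num_taps)
    p = 0.0                                    # filtered (smoothed) squared error
    for n in range(num_taps - 1, len(d)):
        u = x[n - num_taps + 1:n + 1][::-1]    # regressor [x[n], x[n-1], ..., x[n-M+1]]
        e = d[n] - w @ u
        p = beta * p + (1.0 - beta) * e * e    # low-pass filter of e^2
        mu = p / (p + delta)                   # quotient-form variable step size in (0, 1)
        w += mu * e * u / (u @ u + eps)        # normalized LMS update
    return w

# System-identification toy example: estimate an unknown FIR filter from noisy data
rng = np.random.default_rng(0)
h = rng.standard_normal(8)                     # unknown system
x = rng.standard_normal(5000)
d = np.convolve(x, h, mode="full")[:len(x)] + 0.01 * rng.standard_normal(len(x))
w = vss_nlms(x, d)
print(np.round(w - h, 3))                      # should be close to zero
```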

Relevance: 20.00%

Abstract:

Ant colony optimization (ACO) algorithms often fall into local optima and have low search efficiency when solving the travelling salesman problem (TSP). To address these shortcomings, this paper proposes a universal optimization strategy for updating the pheromone matrix in ACO algorithms. The new strategy takes advantage of a unique feature of the Physarum-inspired mathematical model (PMM): critical paths are reserved in the process of evolving adaptive networks. The optimized algorithms, denoted PMACO algorithms, enhance the amount of pheromone on the critical paths and promote exploitation of the optimal solution. Experimental results on synthetic and real networks show that the PMACO algorithms are more efficient and robust than traditional ACO algorithms, and are adaptable to solving the TSP with single or multiple objectives. We further analyse the influence of parameters on the performance of the PMACO algorithms and, based on these analyses, work out the best values of these parameters for the TSP.
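
The core of the proposed strategy, depositing extra pheromone on Physarum-identified critical paths, can be sketched as below. The Physarum conductivity dynamics themselves are not implemented; the set of critical edges is taken as given, and the evaporation rate, deposit constant, and bonus factor are illustrative assumptions rather than the paper's tuned values.

```python
import numpy as np

def pmaco_pheromone_update(tau, ant_tours, tour_lengths, critical_edges,
                           rho=0.5, q=1.0, epsilon=0.2):
    """One pheromone update with an extra deposit on Physarum-identified critical edges.

    tau            : n x n pheromone matrix
    ant_tours      : list of tours, each a list of city indices
    tour_lengths   : length of each tour
    critical_edges : set of (i, j) pairs flagged as critical by the Physarum model
                     (the Physarum dynamics are assumed to run elsewhere)
    """
    tau *= (1.0 - rho)                                  # evaporation
    for tour, length in zip(ant_tours, tour_lengths):
        deposit = q / length
        for i, j in zip(tour, tour[1:] + tour[:1]):     # edges of the closed tour
            is_critical = (i, j) in critical_edges or (j, i) in critical_edges
            bonus = epsilon * deposit if is_critical else 0.0
            tau[i, j] += deposit + bonus
            tau[j, i] += deposit + bonus                # symmetric TSP
    return tau

# Tiny example: 4 cities, one ant, edges (0,1) and (2,3) flagged as critical
tau = np.ones((4, 4))
tau = pmaco_pheromone_update(tau, [[0, 1, 2, 3]], [10.0], {(0, 1), (2, 3)})
print(np.round(tau, 3))
```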

Relevance: 20.00%

Abstract:

This research proposes a new methodology to extend algorithms to accept interval-based uncertain parameters. The methodology is applied to scheduling algorithms, including heuristic and meta-heuristic algorithms, and produces optimal results with higher accuracy. The research outcomes are effective for decision-making processes that use uncertain or predicted data.
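
A minimal sketch of what extending an algorithm to interval-based parameters can look like is given below: task durations are intervals, and the finish times of a precedence-constrained schedule are propagated with interval arithmetic. This is only an illustration of the general idea under simple assumptions; the paper's methodology and the specific scheduling algorithms are not reproduced.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

def interval_max(a, b):
    return Interval(max(a.lo, b.lo), max(a.hi, b.hi))

def makespan(durations, predecessors):
    """Finish-time intervals for tasks processed in topological order.
    durations:    {task: Interval}
    predecessors: {task: list of tasks that must finish first}"""
    finish = {}
    for task, dur in durations.items():          # assumes dict is topologically ordered
        ready = Interval(0.0, 0.0)
        for p in predecessors.get(task, []):
            ready = interval_max(ready, finish[p])
        finish[task] = ready + dur
    return finish

durations = {"A": Interval(2, 3), "B": Interval(4, 6), "C": Interval(1, 2)}
predecessors = {"B": ["A"], "C": ["A", "B"]}
print(makespan(durations, predecessors))        # C finishes in [7, 11]
```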

Relevance: 20.00%

Abstract:

This paper comprehensively investigates the performance of evolutionary algorithms for design optimization of shell and tube heat exchangers (STHX). The genetic algorithm (GA), firefly algorithm (FA), and cuckoo search (CS) method are implemented to find the optimal values of seven key design variables of the STHX model. The ε-NTU method and the Bell-Delaware procedure are used for thermal modeling of the STHX and for calculating the shell-side heat transfer coefficient and pressure drop. The purpose of the STHX optimization is to maximize its thermal efficiency. The results of several simulation optimizations indicate that GA is unable to find permissible and optimal solutions in the majority of cases. In contrast, the design variables found by FA and CS always lead to maximum STHX efficiency. The computational requirements of the CS method are also significantly lower than those of the FA method. According to the optimization results, the maximum efficiency (83.8%) can be achieved using several design configurations; however, these designs carry different dollar costs. It is also found that the behavior of the majority of decision variables remains consistent across different runs of the FA and CS optimization processes.
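
The optimization harness has the shape sketched below: bounded design variables, an efficiency model, and a population-based optimizer maximizing efficiency. The ε-NTU/Bell-Delaware thermal model is replaced by a smooth placeholder function, and SciPy's differential evolution stands in for the GA/FA/CS solvers compared in the paper, so this shows only the setup, not the reported results.

```python
import numpy as np
from scipy.optimize import differential_evolution

def sthx_efficiency(x):
    """Placeholder for the e-NTU / Bell-Delaware thermal model of the STHX.
    The real model maps the seven design variables to thermal efficiency;
    this smooth dummy function is only here to make the harness runnable."""
    target = np.array([0.5, 0.3, 0.7, 0.2, 0.9, 0.4, 0.6])
    return 0.838 * np.exp(-np.sum((np.asarray(x) - target) ** 2))

def objective(x):
    # The solver minimizes, so maximize efficiency by minimizing its negative.
    return -sthx_efficiency(x)

# Seven normalized design variables, each bounded to [0, 1] for illustration.
bounds = [(0.0, 1.0)] * 7
result = differential_evolution(objective, bounds, seed=1, tol=1e-8)
print("best efficiency:", -result.fun)
print("design variables:", np.round(result.x, 3))
```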

Relevance: 20.00%

Abstract:

The aim of this research is to examine the efficiency of different aggregation algorithms applied to the forecasts obtained from individual neural network (NN) models in an ensemble. In this study an ensemble of 100 NN models is constructed with a heterogeneous architecture. The outputs of the NN models are combined by three different aggregation algorithms: a simple average, a trimmed mean, and Bayesian model averaging. These methods are utilized with certain modifications and are applied to the forecasts obtained from all individual NN models. The output of the aggregation algorithms is analyzed and compared with the individual NN models used in the ensemble and with a naive approach. Thirty-minute interval electricity demand data from the Australian Energy Market Operator (AEMO) and the New York Independent System Operator (NYISO) websites are used in the empirical analysis. It is observed that the aggregation algorithms perform better than many of the individual NN models. In comparison with the naive approach, the aggregation algorithms exhibit somewhat better forecasting performance.
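
The three aggregation schemes can be sketched as below for a matrix of individual NN forecasts. The simple average and trimmed mean follow their textbook definitions; the error-weighted average is only a crude stand-in for Bayesian model averaging, whose exact weighting in the paper is not reproduced, and the toy demand data are illustrative.

```python
import numpy as np
from scipy.stats import trim_mean

def simple_average(forecasts):
    """forecasts: (n_models, n_steps) array of individual NN forecasts."""
    return forecasts.mean(axis=0)

def trimmed_mean(forecasts, proportion=0.1):
    """Drop the most extreme forecasts at each step before averaging."""
    return trim_mean(forecasts, proportion, axis=0)

def weighted_average(forecasts, val_errors):
    """Simple stand-in for Bayesian model averaging: weights inversely
    proportional to each model's validation error (not the paper's exact BMA)."""
    w = 1.0 / np.asarray(val_errors)
    w = w / w.sum()
    return w @ forecasts

# 100 models, 48 half-hourly demand forecasts (toy data)
rng = np.random.default_rng(0)
true_demand = 1000 + 200 * np.sin(np.linspace(0, 2 * np.pi, 48))
forecasts = true_demand + rng.normal(0, 50, size=(100, 48))
val_errors = rng.uniform(20, 80, size=100)
print(np.abs(simple_average(forecasts) - true_demand).mean())
print(np.abs(trimmed_mean(forecasts) - true_demand).mean())
print(np.abs(weighted_average(forecasts, val_errors) - true_demand).mean())
```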

Relevance: 20.00%

Abstract:

The CONcrete Visual assistEd Transformation (CONVErT) framework provides facilities to generate reusable notations and compose them to form a wide variety of visualisations. As the number of notations in large-scale visualisations increases, it is crucial to use advanced layout algorithms to improve the understandability of such complex visualisations. This showpiece paper demonstrates how advanced layout algorithms can be integrated into the notation specifications of CONVErT to generate layouts for complex visualisations.

Relevance: 20.00%

Abstract:

Despite several years of research, the type reduction (TR) operation in interval type-2 fuzzy logic systems (IT2FLSs) cannot perform as fast as a type-1 defuzzifier. In particular, the widely used Karnik-Mendel (KM) TR algorithm is computationally much more demanding than alternative TR approaches. In this work, a data-driven framework is proposed to quickly, yet accurately, estimate the output of the KM TR algorithm using simple regression models. Comprehensive simulations performed in this study show that the centroid end-points of the KM algorithm can be approximated with a mean absolute percentage error as low as 0.4%, and that the switch-point prediction accuracy can be as high as 100%. In conjunction with the fact that the simple regression model can be trained with data generated using an exhaustive defuzzification method, this work shows the potential of the proposed method to provide a highly accurate, yet extremely fast, TR approximation. The proposed method should theoretically outperform all available TR methods in speed while keeping the uncertainty information intact.
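
A minimal sketch of the data-driven idea is given below: exhaustive defuzzification (a brute-force search over switch points) generates the true centroid end-points for randomly sampled firing intervals, and a plain regression model is then fitted to approximate them. The feature set, rule-base size, and choice of a linear regressor are illustrative assumptions; the paper's exact features, model, and reported error figures are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def exhaustive_centroid_endpoints(x, f_lower, f_upper):
    """Left/right centroid end-points by brute force over all switch points
    (the 'exhaustive defuzzification' used here to generate training data)."""
    n = len(x)
    lefts, rights = [], []
    for k in range(n + 1):
        up_then_low = np.concatenate([f_upper[:k], f_lower[k:]])
        low_then_up = np.concatenate([f_lower[:k], f_upper[k:]])
        lefts.append(x @ up_then_low / up_then_low.sum())
        rights.append(x @ low_then_up / low_then_up.sum())
    return min(lefts), max(rights)

# Generate synthetic training data: random firing intervals on a fixed rule base
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 8)                       # rule consequent centroids (sorted)
X, y = [], []
for _ in range(2000):
    f_upper = rng.uniform(0.2, 1.0, size=8)
    f_lower = f_upper * rng.uniform(0.1, 1.0, size=8)
    X.append(np.concatenate([f_lower, f_upper]))   # features: the firing intervals
    y.append(exhaustive_centroid_endpoints(x, f_lower, f_upper))
X, y = np.array(X), np.array(y)

# A plain linear regressor as the end-point approximator (illustrative choice)
model = LinearRegression().fit(X[:1500], y[:1500])
pred = model.predict(X[1500:])
print("MAPE (%):", 100 * np.mean(np.abs((pred - y[1500:]) / y[1500:])))
```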

Relevance: 20.00%

Abstract:

The need to estimate a particular quantile of a distribution is an important problem that frequently arises in many computer vision and signal processing applications. For example, our work was motivated by the requirements of many semiautomatic surveillance analytics systems that detect abnormalities in closed-circuit television footage using statistical models of low-level motion features. In this paper, we specifically address the problem of estimating the running quantile of a data stream when the memory for storing observations is limited. We make the following major contributions: 1) we highlight the limitations of approaches previously described in the literature that make them unsuitable for nonstationary streams; 2) we describe a novel principle for the utilization of the available storage space; 3) we introduce two novel algorithms that exploit the proposed principle in different ways; and 4) we present a comprehensive evaluation and analysis of the proposed algorithms and the existing methods in the literature on both synthetic data sets and three large real-world streams acquired in the course of operation of an existing commercial surveillance system. Our findings convincingly demonstrate that both of the proposed methods are highly successful and vastly outperform the existing alternatives. We show that the better of the two algorithms (the data-aligned histogram) exhibits far superior performance in comparison with the previously described methods, achieving more than 10 times lower estimation errors on real-world data, even when its available working memory is an order of magnitude smaller.
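
For context, the sketch below shows a simple bounded-memory baseline: a fixed-bin histogram over a known value range from which running quantiles are read off. This is not the paper's data-aligned histogram (which adapts its bins to nonstationary streams); the value range, bin count, and toy stream are illustrative assumptions.

```python
import numpy as np

class FixedHistogramQuantile:
    """Running quantile estimate using a fixed-size histogram over a known range.
    A minimal bounded-memory baseline -- not the paper's data-aligned histogram."""

    def __init__(self, lo, hi, num_bins=256):
        self.edges = np.linspace(lo, hi, num_bins + 1)
        self.counts = np.zeros(num_bins, dtype=np.int64)

    def update(self, value):
        i = np.searchsorted(self.edges, value, side="right") - 1
        self.counts[min(max(i, 0), len(self.counts) - 1)] += 1   # clamp out-of-range values

    def quantile(self, q):
        cum = np.cumsum(self.counts)
        i = int(np.searchsorted(cum, q * cum[-1]))
        return 0.5 * (self.edges[i] + self.edges[i + 1])          # bin midpoint

# Stream of motion-feature-like scores; track the running 0.95 quantile
rng = np.random.default_rng(0)
est = FixedHistogramQuantile(lo=0.0, hi=10.0)
stream = rng.gamma(shape=2.0, scale=1.0, size=100_000)
for v in stream:
    est.update(v)
print("estimated:", est.quantile(0.95), "exact:", np.quantile(stream, 0.95))
```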

Relevance: 20.00%

Abstract:

The Karnik-Mendel (KM) algorithm is the most widely used type reduction (TR) method in the literature for the design of interval type-2 fuzzy logic systems (IT2FLSs). Its iterative nature of finding the left and right switch points is its Achilles heel. Despite a decade of research, none of the alternative TR methods offer uncertainty measures equivalent to the KM algorithm. This paper takes a data-driven approach to tackle the computational burden of this algorithm while keeping its key features. We propose a regression method to approximate the left and right switch points found by the KM algorithm. The approximator uses only the firing intervals, rule centroids, and FLS structural features as inputs. Once training is done, it can precisely approximate the left and right switch points through basic vector multiplications. Comprehensive simulation results demonstrate that the approximation accuracy for a wide variety of FLSs is 100%. Flexibility, ease of implementation, and speed are other features of the proposed method.
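
For reference, the iterative procedure being approximated can be sketched as follows: the standard KM iteration alternates between computing a weighted centroid and updating the switch point until the two are consistent. The rule centroids and firing intervals in the example are illustrative.

```python
import numpy as np

def km_endpoint(x, f_lower, f_upper, left=True, max_iter=100):
    """Karnik-Mendel iteration for one centroid end-point of an IT2 FLS.

    x        : rule consequent centroids, sorted ascending
    f_lower  : lower firing strengths
    f_upper  : upper firing strengths
    left     : True for the left end-point y_l, False for the right end-point y_r
    """
    f = 0.5 * (f_lower + f_upper)
    y = x @ f / f.sum()
    for _ in range(max_iter):
        k = int(np.searchsorted(x, y))            # switch point: x[k-1] <= y <= x[k]
        if left:
            f = np.concatenate([f_upper[:k], f_lower[k:]])
        else:
            f = np.concatenate([f_lower[:k], f_upper[k:]])
        y_new = x @ f / f.sum()
        if np.isclose(y_new, y):                  # converged: switch point is stable
            return y_new, k
        y = y_new
    return y, k

x = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
f_lower = np.array([0.2, 0.4, 0.3, 0.1, 0.2])
f_upper = np.array([0.6, 0.8, 0.7, 0.5, 0.6])
y_l, k_l = km_endpoint(x, f_lower, f_upper, left=True)
y_r, k_r = km_endpoint(x, f_lower, f_upper, left=False)
print("type-reduced interval:", (round(y_l, 4), round(y_r, 4)), "switch points:", (k_l, k_r))
```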