41 results for generalized entropy


Relevance: 100.00%

Abstract:

In information theory, entropies form the basis for distance and divergence measures among various probability densities. In this paper we propose a novel metric to detect DDoS attacks in networks, using the order-α generalized (Rényi) entropy to distinguish DDoS attack traffic from legitimate network traffic effectively. Our proposed approach not only detects DDoS attacks early (one hop earlier than the Shannon metric at order α=2, and two hops earlier at order α=10) but also clearly reduces both the false positive rate and the false negative rate compared with the traditional Shannon entropy metric approach.
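As a rough sketch of the kind of metric described above (not the authors' implementation; the IP addresses and traffic mix below are invented for illustration), the order-α Rényi entropy of a window's source-address distribution collapses when a few attack sources dominate, and larger α amplifies the collapse:

```python
import math
from collections import Counter

def renyi_entropy(probs, alpha):
    """Rényi entropy of order alpha (bits); the limit alpha -> 1
    recovers the Shannon entropy."""
    if abs(alpha - 1.0) < 1e-9:
        return -sum(p * math.log2(p) for p in probs if p > 0)
    return math.log2(sum(p ** alpha for p in probs if p > 0)) / (1.0 - alpha)

def source_distribution(src_ips):
    """Empirical probability of each source address in a traffic window."""
    counts = Counter(src_ips)
    total = sum(counts.values())
    return [c / total for c in counts.values()]

# Invented traffic: legitimate windows have many balanced sources;
# attack windows are dominated by a single source.
legit = ["10.0.0.%d" % (i % 50) for i in range(1000)]
attack = ["10.0.0.1"] * 900 + ["10.0.1.%d" % i for i in range(100)]

h_legit = renyi_entropy(source_distribution(legit), alpha=2)
h_attack = renyi_entropy(source_distribution(attack), alpha=2)
```

Here the attack window's entropy drops well below the legitimate window's, which is the gap a threshold-based detector would exploit.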

Relevance: 60.00%

Abstract:

Distributed Denial-of-Service (DDoS) attacks are a serious threat to the safety and security of cyberspace. In this paper we propose a novel metric to detect DDoS attacks in the Internet. More precisely, we use the order-α generalized (Rényi) entropy to distinguish DDoS attack traffic from legitimate network traffic effectively. In information theory, entropies form the basis for distance and divergence measures among various probability densities. We design our anomaly-based detection metric using the generalized entropy. The experimental results show that our proposed approach not only detects DDoS attacks early (one hop earlier than the Shannon metric at order α=2, and two hops earlier at order α=10) but also reduces both the false positive rate and the false negative rate compared with the traditional Shannon entropy metric approach.

Relevance: 30.00%

Abstract:

Complexity analysis of a given time series is executed using various measures of irregularity, the most commonly used being approximate entropy (ApEn), sample entropy (SampEn) and fuzzy entropy (FuzzyEn). However, the dependence of these measures on the critical tolerance parameter 'r' leads to precarious results, owing to arbitrary selections of r. Attempts to eliminate the use of r in entropy calculations introduced a new measure of entropy, namely distribution entropy (DistEn), based on the empirical probability distribution function (ePDF). DistEn completely avoids the use of a variance-dependent parameter like r and replaces it with a parameter M, which corresponds to the number of bins used in the histogram from which it is calculated. When tested on synthetic data, M has been observed to have a minimal effect on DistEn compared with the effect of r on other entropy measures. DistEn is also said to be relatively stable under data length (N) variations, as far as synthetic data is concerned. However, these claims have not been analyzed for physiological data. Our study evaluates the effect of data length N and bin number M on the performance of DistEn using both synthetic and physiologic time series data. Synthetic logistic data of 'periodic' and 'chaotic' levels of complexity and 40 RR interval time series belonging to two groups of a healthy aging population (young and elderly) have been used for the analysis. The stability and consistency of DistEn as a complexity measure as well as a classifier have been studied. Experiments show that the parameters N and M are more influential in deciding the efficacy of DistEn performance for physiologic data than for synthetic data. Therefore, a generalized random selection of M for a given data length N may not always be an appropriate combination to yield good performance of DistEn for physiologic data.
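A minimal sketch of DistEn as described above, assuming Chebyshev distances between embedding vectors and M equal-width histogram bins (the embedding dimension and defaults are illustrative, not the authors' exact settings), applied to invented logistic-map data from the periodic and chaotic regimes:

```python
import math
from itertools import combinations

def logistic(r, n, burn=100, x0=0.3):
    """Logistic-map series x_{k+1} = r*x_k*(1-x_k), burn-in discarded."""
    x, out = x0, []
    for k in range(burn + n):
        x = r * x * (1 - x)
        if k >= burn:
            out.append(x)
    return out

def dist_en(x, m=2, M=64):
    """Distribution entropy: Shannon entropy of the empirical
    distribution (histogram with M bins) of all pairwise Chebyshev
    distances between m-dimensional embedding vectors, normalised
    by log2(M) so the result lies in [0, 1]."""
    vecs = [x[i:i + m] for i in range(len(x) - m + 1)]
    dists = [max(abs(a - b) for a, b in zip(u, v))
             for u, v in combinations(vecs, 2)]
    lo, hi = min(dists), max(dists)
    if hi == lo:                 # degenerate series: zero entropy
        return 0.0
    counts = [0] * M
    for dd in dists:
        counts[min(int((dd - lo) / (hi - lo) * M), M - 1)] += 1
    n = len(dists)
    return -sum((c / n) * math.log2(c / n)
                for c in counts if c > 0) / math.log2(M)

periodic = logistic(3.5, 300)   # period-4 regime: few distinct distances
chaotic = logistic(3.9, 300)    # chaotic regime: distances spread widely
```

On these invented series the chaotic regime yields the larger DistEn, illustrating its use as a complexity measure.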

Relevance: 20.00%

Abstract:

In this paper we generalize Besag's pseudo-likelihood function for spatial statistical models on a region of a lattice. The correspondingly defined maximum generalized pseudo-likelihood estimates (MGPLEs) are natural extensions of Besag's maximum pseudo-likelihood estimate (MPLE), and they connect the MPLE with the maximum likelihood estimate. We carry out experimental calculations of the MGPLEs for spatial processes on the lattice. The simulation results clearly show that the MGPLEs perform better than the MPLE, and the performances of differently defined MGPLEs are compared. The methods are also illustrated on two real data sets.
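For concreteness, a toy sketch of the baseline the paper generalizes, Besag's maximum pseudo-likelihood estimate, for an Ising model on a square lattice (the grid size, noise level and grid search below are invented for illustration, not the paper's experiments):

```python
import math
import random

def pseudo_log_likelihood(grid, beta):
    """Besag's log pseudo-likelihood for an Ising model: the sum over
    sites of log P(x_s | neighbours), where
    P(x_s = +1 | S) = 1 / (1 + exp(-2*beta*S)), S = neighbour spin sum."""
    n = len(grid)
    ll = 0.0
    for i in range(n):
        for j in range(n):
            s = 0
            if i > 0: s += grid[i - 1][j]
            if i < n - 1: s += grid[i + 1][j]
            if j > 0: s += grid[i][j - 1]
            if j < n - 1: s += grid[i][j + 1]
            ll -= math.log(1.0 + math.exp(-2.0 * beta * grid[i][j] * s))
    return ll

def mple(grid, betas):
    """Maximum pseudo-likelihood estimate of beta by grid search."""
    return max(betas, key=lambda b: pseudo_log_likelihood(grid, b))

# Invented data: a strongly aligned 20x20 spin grid with ~5% flips,
# so the MPLE should recover a clearly positive interaction strength.
random.seed(0)
grid = [[1] * 20 for _ in range(20)]
for i in range(20):
    for j in range(20):
        if random.random() < 0.05:
            grid[i][j] = -1

b_hat = mple(grid, [b / 10 for b in range(0, 21)])
```

The MGPLEs of the paper replace the single-site conditional densities with conditional densities over larger blocks, interpolating between this estimator and full maximum likelihood.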

Relevance: 20.00%

Abstract:

This paper discusses identification of the parameters of generalized ordered weighted averaging (GOWA) operators from empirical data. As with ordinary OWA operators, GOWA operators are characterized by a vector of weights, as well as the power to which the arguments are raised. We develop optimization techniques which allow one to fit such operators to the observed data. We also generalize these methods to functionally defined GOWA operators and generalized Choquet integral based aggregation operators.
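A minimal sketch of a GOWA operator as characterized above (a weight vector plus a power p; p = 1 recovers the ordinary OWA operator). The fitting of the parameters to observed data, which is the paper's subject, is not shown:

```python
def gowa(x, weights, p):
    """Generalized OWA: reorder the arguments in descending order,
    form the weighted sum of their p-th powers and take the p-th root.
    Assumes p != 0 and nonnegative inputs; p = 1 gives ordinary OWA."""
    y = sorted(x, reverse=True)
    s = sum(w * (v ** p) for w, v in zip(weights, y))
    return s ** (1.0 / p)
```

With weights (1, 0, 0) the operator returns the maximum; with uniform weights and p = 1 it is the arithmetic mean; larger p pulls the output toward the larger arguments.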

Relevance: 20.00%

Abstract:

One major difficulty frustrating the application of linear causal models is that they are not easily adapted to cope with discrete data. This is unfortunate, since most real problems involve both continuous and discrete variables. In this paper, we consider a class of graphical models which allow both continuous and discrete variables, and propose a parameter estimation method and a structure discovery algorithm based on Minimum Message Length and parameter estimation. Experimental results are given to demonstrate the potential for application of this method.

Relevance: 20.00%

Abstract:

Objective: To assess from a health sector perspective the incremental cost-effectiveness of interventions for generalized anxiety disorder (cognitive behavioural therapy [CBT] and serotonin and noradrenaline reuptake inhibitors [SNRIs]) and panic disorder (CBT, selective serotonin reuptake inhibitors [SSRIs] and tricyclic antidepressants [TCAs]).

Method: The health benefit is measured as a reduction in disability-adjusted life years (DALYs), based on effect size calculations from meta-analyses of randomised controlled trials. An assessment on second stage filters ('equity', 'strength of evidence', 'feasibility' and 'acceptability to stakeholders') is also undertaken to incorporate additional factors that impact on resource allocation decisions. Costs and benefits are calculated for a period of one year for the eligible population (prevalent cases of generalized anxiety disorder/panic disorder identified in the National Survey of Mental Health and Wellbeing, extrapolated to the Australian population in the year 2000 for those aged 18 years and older). Simulation modelling techniques are used to present 95% uncertainty intervals (UI) around the incremental cost-effectiveness ratios (ICERs).

Results: Compared to current practice, CBT by a psychologist on a public salary is the most cost-effective intervention for both generalized anxiety disorder (A$6900/DALY saved; 95% UI A$4000 to A$12 000) and panic disorder (A$6800/DALY saved; 95% UI A$2900 to A$15 000). Cognitive behavioural therapy results in a greater total health benefit than the drug interventions for both anxiety disorders, although equity and feasibility concerns for CBT interventions are also greater.

Conclusions: Cognitive behavioural therapy is the most effective and cost-effective intervention for generalized anxiety disorder and panic disorder. However, its implementation would require policy change to enable more widespread access to a sufficient number of trained therapists for the treatment of anxiety disorders.

Relevance: 20.00%

Abstract:

In epidemiologic studies, researchers often need to establish a nonlinear exposure-response relation between a continuous risk factor and a health outcome. Furthermore, periodic interviews are often conducted to take repeated measurements from an individual. The authors proposed to use fractional polynomial models to jointly analyze the effects of 2 continuous risk factors on a health outcome. This method was applied to an analysis of the effects of age and cumulative fluoride exposure on forced vital capacity in a longitudinal study of lung function carried out among aluminum workers in Australia (1995-2003). Generalized estimating equations and the quasi-likelihood under the independence model criterion were used. The authors found that the second-degree fractional polynomial models for age and fluoride fitted the data best. The best model for age was robust across different models for fluoride, and the best model for fluoride was also robust. No evidence was found to suggest that the effects of smoking and cumulative fluoride exposure on change in forced vital capacity over time were significant. The trend 1 model, which included the unexposed persons in the analysis of trend in forced vital capacity over tertiles of fluoride exposure, did not fit the data well, and caution should be exercised when this method is used.
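As a hedged illustration of the degree-2 fractional polynomial terms mentioned above (the power set and the log conventions for power 0 and for repeated powers follow the usual fractional polynomial formulation; the GEE fitting and QIC model selection from the study are not reproduced):

```python
import math

# Conventional fractional-polynomial power set.
POWERS = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]

def fp_term(x, p):
    """One fractional polynomial term; power 0 is defined as log(x).
    Requires x > 0 (exposures are usually shifted/scaled to ensure this)."""
    return math.log(x) if p == 0 else x ** p

def fp2_design(xs, p1, p2):
    """Design columns for a degree-2 fractional polynomial in x.
    Repeated powers (p1 == p2) use the conventional x^p * log(x) term."""
    col1 = [fp_term(x, p1) for x in xs]
    if p1 == p2:
        col2 = [fp_term(x, p1) * math.log(x) for x in xs]
    else:
        col2 = [fp_term(x, p2) for x in xs]
    return list(zip(col1, col2))
```

In an analysis like the one above, each of the candidate (p1, p2) pairs would be fitted (e.g. via GEE) for both age and cumulative fluoride, and the best pair chosen by a criterion such as QIC.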

Relevance: 20.00%

Abstract:

Citation matching is the problem of extracting bibliographic records from citation lists in technical papers and merging records that represent the same publication. Generally, there are three types of datasets in citation matching: sparse, dense and hybrid. Typical approaches to citation matching are Joint Segmentation (Jnt-Seg) and Joint Segmentation Entity Resolution (Jnt-Seg-ER). The Jnt-Seg method is effective at processing sparse datasets but often produces many errors when applied to dense datasets. Conversely, the Jnt-Seg-ER method is good at dealing with dense datasets but insufficient when sparse datasets are presented. In this paper we propose an alternative joint inference approach, Generalized Joint Segmentation (Generalized-Jnt-Seg), which can effectively deal with the situation where the dataset type is unknown. In particular, in hybrid dataset analysis there is often no a priori information for choosing between the Jnt-Seg and Jnt-Seg-ER methods for segmentation and entity resolution, and both methods may produce many errors. Our method can effectively avoid segmentation errors and produce accurate field boundaries. Experimental results on both types of citation datasets show that our method outperforms many alternative approaches to citation matching.

Relevance: 20.00%

Abstract:

A community network often operates within a single Internet service provider domain or as a virtual network of different entities cooperating with each other. In such a federated network environment, routers can work closely together to raise early warnings of DDoS attacks and avoid catastrophic damage. However, attackers simulate normal network behavior, e.g. pumping attack packets according to a Poisson distribution, to defeat detection algorithms. It remains an open question how to discriminate DDoS attacks from surges of legitimate access. We noticed that attackers use the same mathematical functions to control the speed at which attack packets are pumped toward the victim. Based on this observation, the different attack flows of a DDoS attack share the same regularities, which differs from a genuine access surge over a short time period. We apply an information theory parameter, the entropy rate, to discriminate DDoS attacks from surges of legitimate access. We prove the effectiveness of our method in theory; simulations are left for the near future. We also point out future directions worth exploring.
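A toy sketch of the entropy-rate idea above, estimating the rate as a difference of block entropies, h ≈ H(m) − H(m−1), over a symbolized flow (the symbol sequences and block length are invented for illustration; the authors' theoretical analysis is not reproduced):

```python
import math
import random
from collections import Counter

def block_entropy(seq, m):
    """Shannon entropy (bits) of the empirical m-gram distribution."""
    blocks = [tuple(seq[i:i + m]) for i in range(len(seq) - m + 1)]
    n = len(blocks)
    return -sum((c / n) * math.log2(c / n)
                for c in Counter(blocks).values())

def entropy_rate(seq, m=2):
    """Conditional block-entropy estimate of the entropy rate,
    h ~= H(m) - H(m-1): near zero for flows generated by a fixed
    deterministic rule, higher for genuinely irregular flows."""
    return block_entropy(seq, m) - block_entropy(seq, m - 1)

# Invented symbolized flows (e.g. quantized packet inter-arrival times):
attack_like = [1, 2, 3] * 200                            # rigid pattern
random.seed(1)
legit_like = [random.randint(0, 7) for _ in range(600)]  # irregular surge
```

The regularly generated flow has an entropy rate near zero, while the irregular flow retains a clearly positive rate, which is the separation the proposed metric relies on.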

Relevance: 20.00%

Abstract:

The performance of the modified adaptive conjugate gradient (CG) algorithms based on the iterative CG method for adaptive filtering is closely tied to how the correlation matrix and the cross-correlation vector are estimated. The existing approaches to implementing the CG algorithms, which use data windows of exponential or sliding form, result in either a loss of convergence or an increase in misadjustment. This paper presents and analyzes a new approach to implementing the CG algorithms for adaptive filtering using a generalized data windowing scheme. For the new modified CG algorithms, we show that convergence is accelerated while misadjustment and tracking capability comparable to those of the recursive least squares (RLS) algorithm are achieved. Computer simulations in the framework of a linear system modeling problem demonstrate the improvements of the new modifications.
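A hedged sketch of the general idea, not the paper's algorithm: exponentially windowed estimates of the correlation matrix R and cross-correlation vector p, with a few conjugate-gradient iterations per sample toward solving R w = p, applied to an invented system identification problem:

```python
import random

def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def adaptive_cg(u, d, m=2, lam=0.99, cg_iters=2):
    """Adaptive filter via conjugate gradient: exponentially windowed
    estimates of R (input correlation) and p (cross-correlation), then
    cg_iters CG iterations per sample toward solving R w = p,
    restarting the CG direction from the current weights."""
    R = [[0.0] * m for _ in range(m)]
    p = [0.0] * m
    w = [0.0] * m
    for n in range(m - 1, len(u)):
        x = [u[n - k] for k in range(m)]        # current input vector
        for i in range(m):
            for j in range(m):
                R[i][j] = lam * R[i][j] + x[i] * x[j]
            p[i] = lam * p[i] + d[n] * x[i]
        r = [pi - qi for pi, qi in zip(p, mat_vec(R, w))]
        direc = r[:]
        for _ in range(cg_iters):
            Ad = mat_vec(R, direc)
            denom = dot(direc, Ad)
            if denom <= 1e-12:                  # converged / degenerate
                break
            alpha = dot(r, r) / denom
            w = [wi + alpha * di for wi, di in zip(w, direc)]
            r_new = [ri - alpha * ai for ri, ai in zip(r, Ad)]
            beta = dot(r_new, r_new) / max(dot(r, r), 1e-12)
            direc = [rn + beta * di for rn, di in zip(r_new, direc)]
            r = r_new
    return w

# Toy linear system modeling: unknown FIR filter h = [0.7, -0.2].
random.seed(0)
u = [random.gauss(0, 1) for _ in range(500)]
d = [0.0] + [0.7 * u[n] - 0.2 * u[n - 1] for n in range(1, 500)]
w = adaptive_cg(u, d)
```

On this noise-free example the weights converge to the true filter; the paper's contribution is the generalized windowing used to form R and p, which this sketch replaces with a plain exponential window.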

Relevance: 20.00%

Abstract:

In this paper we investigate the modeling capabilities of Bonferroni means and their generalizations. We show that weighted Bonferroni means can model the concepts of hard and soft partial conjunction, and lead to several interesting special cases with quite intuitive interpretations.
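For reference, a sketch of the classical Bonferroni mean B^{p,q} that the paper builds on (the numeric examples are invented):

```python
def bonferroni_mean(x, p, q):
    """Bonferroni mean B^{p,q}(x): the (p+q)-th root of the mean of
    x_i^p * x_j^q over all ordered pairs with i != j."""
    n = len(x)
    s = sum(x[i] ** p * x[j] ** q
            for i in range(n) for j in range(n) if i != j)
    return (s / (n * (n - 1))) ** (1.0 / (p + q))
```

B^{1,0} reduces to the arithmetic mean, and for p, q > 0 a single zero input lowers the output without forcing it to zero, the 'soft' partial-conjunction behaviour mentioned above.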

Relevance: 20.00%

Abstract:

In this paper, we propose a new construction method for fuzzy and weak fuzzy subsethood measures based on the aggregation of implication operators. We study the desired properties of the implication operators in order to construct these measures. We also show the relationship between fuzzy entropy and weak fuzzy subsethood measures constructed by our method.
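A minimal sketch of one such construction, assuming the Łukasiewicz implication and simple averaging as the aggregation (the paper studies which implication operators are admissible; this specific choice is only an example):

```python
def lukasiewicz(a, b):
    """Łukasiewicz implication: I(a, b) = min(1, 1 - a + b)."""
    return min(1.0, 1.0 - a + b)

def subsethood(A, B, impl=lukasiewicz):
    """Degree to which fuzzy set A is contained in B over a shared
    finite universe: the average of the implications A(x) -> B(x).
    A and B are membership-value lists of equal length."""
    return sum(impl(a, b) for a, b in zip(A, B)) / len(A)
```

When A(x) ≤ B(x) everywhere the measure equals 1, and it decreases as A's membership exceeds B's, matching the intuition of a subsethood degree.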

Relevance: 20.00%

Abstract:

In this paper we provide a systematic investigation of a family of composed aggregation functions which generalize the Bonferroni mean. Such extensions of the Bonferroni mean are capable of modeling the concepts of hard and soft partial conjunction and disjunction, as well as those of k-tolerance and k-intolerance. There are several interesting special cases with quite intuitive interpretations for applications.