966 results for novel algorithm


Relevance: 70.00%

Abstract:

To enhance the performance of the k-nearest neighbors approach in forecasting short-term traffic volume, this paper proposes and tests a two-step approach capable of forecasting multiple steps ahead. In selecting the k nearest neighbors, a time-constraint window is introduced, and local minima of the distances between state vectors are then ranked to avoid overlaps among candidates. Moreover, to control the undesirable impact of extreme values, a novel algorithm with attractive analytical features is developed based on the principal component. The enhanced KNN method has been evaluated on field data, and our comparative analysis shows that it outperformed the competing algorithms in most cases.
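
A minimal sketch of the neighbor-selection idea described above (a time-constraint window plus ranking of local distance minima). The daily period, window size, and state-vector construction are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def select_knn_candidates(history, query_state, lag, k, window):
    """Select k nearest-neighbor positions from a traffic-volume series.

    history     : 1-D array of past volumes
    query_state : last `lag` observations (the current state vector)
    window      : time-of-day constraint in time steps (only candidates whose
                  time-of-day index lies within +/- window of the query count)
    """
    period = 288                                # assumed 5-min data, 288 steps per day
    q_pos = len(history) % period               # time-of-day index of the point to forecast
    dists = []
    for t in range(lag, len(history) - 1):
        # time-constraint window: keep only candidates from a similar time of day
        circ = min((t % period - q_pos) % period, (q_pos - t % period) % period)
        if circ > window:
            dists.append(np.inf)
            continue
        state = history[t - lag:t]
        dists.append(np.linalg.norm(state - query_state))
    dists = np.array(dists)

    # rank only local minima of the distance sequence so selected neighbors
    # do not overlap (consecutive, nearly identical candidate windows)
    local_min = [i for i in range(1, len(dists) - 1)
                 if np.isfinite(dists[i])
                 and dists[i] <= dists[i - 1] and dists[i] <= dists[i + 1]]
    local_min.sort(key=lambda i: dists[i])
    # returned indices point at the value following each matched state vector
    return [i + lag for i in local_min[:k]]
```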

Relevance: 70.00%

Abstract:

Bluetooth is a short-range radio technology operating in the unlicensed industrial-scientific-medical (ISM) band at 2.45 GHz. A piconet is basically a collection of slaves controlled by a master. A scatternet, on the other hand, is established by linking several piconets together in an ad hoc fashion to yield a global wireless ad hoc network. This paper proposes a scheduling policy that aims to achieve increased system throughput and reduced packet delays while providing reasonably good fairness among all traffic flows in Bluetooth piconets and scatternets. We propose a novel algorithm for scheduling slots to slaves in both piconets and scatternets using multi-layered parameterized policies. Our scheduling scheme works with real data and obtains an optimal feedback policy within the prescribed parameterized policy classes by using an efficient two-timescale simultaneous perturbation stochastic approximation (SPSA) algorithm. We show the convergence of our algorithm to an optimal multi-layered policy. We also propose novel polling schemes for intra- and inter-piconet scheduling that are seen to perform well. We present an extensive set of simulation results and performance comparisons with existing scheduling algorithms. Our results indicate that our proposed scheduling algorithm performs better overall than the existing algorithms on a wide range of experiments for both piconets (Das et al. in INFOCOM, pp. 591–600, 2001; Lapeyrie and Turletti in INFOCOM conference proceedings, San Francisco, US, 2003; Shreedhar and Varghese in SIGCOMM, pp. 231–242, 1995) and scatternets (Har-Shai et al. in OPNETWORK, 2002; Saha and Matsumot in AICT/ICIW, 2006; Tan and Guttag in The 27th annual IEEE conference on local computer networks (LCN), Tampa, 2002). Our studies also confirm that our proposed scheme achieves high throughput and low packet delays with reasonable fairness among all the connections.
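
A minimal sketch of simultaneous perturbation stochastic approximation (SPSA), the gradient-estimation technique this scheduler relies on: the gradient of a noisy objective is estimated from only two evaluations per step, regardless of dimension. The objective, perturbation size, and step sizes below are placeholders, and the two-timescale structure of the actual algorithm is not reproduced:

```python
import numpy as np

def spsa_step(theta, objective, a=0.1, c=0.05, rng=np.random.default_rng()):
    """One SPSA update: estimate the gradient of `objective` at `theta`
    from two (possibly noisy) evaluations, then take a descent step."""
    delta = rng.choice([-1.0, 1.0], size=theta.shape)          # Rademacher perturbation
    g_hat = (objective(theta + c * delta)
             - objective(theta - c * delta)) / (2 * c) * (1.0 / delta)
    return theta - a * g_hat

# toy usage: minimise a noisy quadratic standing in for (negative) throughput
f = lambda x: np.sum((x - 3.0) ** 2) + np.random.normal(scale=0.01)
theta = np.zeros(4)
for _ in range(200):
    theta = spsa_step(theta, f)
```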

Relevance: 70.00%

Abstract:

In this paper we present a novel algorithm for learning oblique decision trees. Most of the current decision tree algorithms rely on impurity measures to assess goodness of hyperplanes at each node. These impurity measures do not properly capture the geometric structures in the data. Motivated by this, our algorithm uses a strategy, based on some recent variants of SVM, to assess the hyperplanes in such a way that the geometric structure in the data is taken into account. We show through empirical studies that our method is effective.
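
A minimal sketch of the general idea of an oblique split: instead of thresholding a single attribute, a node splits on which side of a learned hyperplane each sample falls. The use of scikit-learn's LinearSVC is an illustrative stand-in for the SVM-based criterion, not the authors' specific variant:

```python
import numpy as np
from sklearn.svm import LinearSVC

def oblique_split(X, y):
    """Fit a linear separator at a tree node and split the data on which
    side of the learned hyperplane w.x + b = 0 each sample falls."""
    clf = LinearSVC(C=1.0).fit(X, y)          # stand-in for the SVM-based assessment
    side = clf.decision_function(X) >= 0.0    # signed distance to the hyperplane
    left, right = (X[side], y[side]), (X[~side], y[~side])
    return left, right, (clf.coef_.ravel(), clf.intercept_[0])
```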

Relevance: 70.00%

Abstract:

To simulate fracture behaviors in concrete more realistically, a theoretical analysis of a potential issue in the quasi-static method is presented, and a novel algorithm is then proposed that takes into account the inertia effect due to unstable crack propagation while requiring much lower computational effort than a purely dynamic method. The inertia effect due to load increase becomes less important and can be ignored as the loading rate decreases, but the inertia effect due to unstable crack propagation remains considerable no matter how low the loading rate is. Therefore, results may become questionable if a fracture process that includes unstable cracking is simulated by a quasi-static procedure that completely excludes inertia effects. However, simulating experiments at moderate loading rates with the dynamic method requires much higher computational effort. In this investigation, which can be taken as a natural continuation of earlier work, the potential issue with the quasi-static method is analyzed based on the dynamic equations of motion. The new algorithm mentioned above offers one solution to this issue. Numerical examples are provided by the generalized beam (GB) lattice model to show both fracture processes under different loading rates and the capability of the new algorithm.

Relevance: 70.00%

Abstract:

This article presents a novel algorithm for learning parameters in statistical dialogue systems which are modeled as Partially Observable Markov Decision Processes (POMDPs). The three main components of a POMDP dialogue manager are a dialogue model representing dialogue state information; a policy that selects the system's responses based on the inferred state; and a reward function that specifies the desired behavior of the system. Ideally both the model parameters and the policy would be designed to maximize the cumulative reward. However, while there are many techniques available for learning the optimal policy, no good ways of learning the optimal model parameters that scale to real-world dialogue systems have been found yet. The presented algorithm, called the Natural Actor and Belief Critic (NABC), is a policy gradient method that offers a solution to this problem. Based on observed rewards, the algorithm estimates the natural gradient of the expected cumulative reward. The resulting gradient is then used to adapt both the prior distribution of the dialogue model parameters and the policy parameters. In addition, the article presents a variant of the NABC algorithm, called the Natural Belief Critic (NBC), which assumes that the policy is fixed and only the model parameters need to be estimated. The algorithms are evaluated on a spoken dialogue system in the tourist information domain. The experiments show that model parameters estimated to maximize the expected cumulative reward result in significantly improved performance compared to the baseline hand-crafted model parameters. The algorithms are also compared to optimization techniques using plain gradients and state-of-the-art random search algorithms. In all cases, the algorithms based on the natural gradient work significantly better. © 2011 ACM.

Relevance: 70.00%

Abstract:

In this paper, a novel algorithm for removing facial makeup disturbances, as a preprocessing step for face detection, is proposed based on high-dimensional image geometry. After simulation and practical application experiments, the algorithm is analyzed theoretically. Its apparent effectiveness in removing facial makeup, and the advantages of face detection with this preprocessing over face detection without it, are discussed. Furthermore, in our experiments with color images, the proposed algorithm even produces some surprising results.

Relevance: 70.00%

Abstract:

This paper presents a novel algorithm for learning in a class of stochastic Markov decision processes (MDPs) with continuous state and action spaces that trades speed for accuracy. A transform of the stochastic MDP into a deterministic one is presented which captures the essence of the original dynamics, in a sense made precise. In this transformed MDP, the calculation of values is greatly simplified. The online algorithm estimates the model of the transformed MDP and simultaneously does policy search against it. Bounds on the error of this approximation are proven, and experimental results in a bicycle riding domain are presented. The algorithm learns near optimal policies in orders of magnitude fewer interactions with the stochastic MDP, using less domain knowledge. All code used in the experiments is available on the project's web site.

Relevance: 70.00%

Abstract:

Ferré, S. and King, R. D. (2004) A dichotomic search algorithm for mining and learning in domain-specific logics. Fundamenta Informaticae. IOS Press. To appear.

Relevance: 70.00%

Abstract:

Aims/hypothesis: Diabetic nephropathy is a major diabetic complication, and diabetes is the leading cause of end-stage renal disease (ESRD). Family studies suggest a hereditary component for diabetic nephropathy. However, only a few genes have been associated with diabetic nephropathy or ESRD in diabetic patients. Our aim was to detect novel genetic variants associated with diabetic nephropathy and ESRD. Methods: We exploited a novel algorithm, ‘Bag of Naive Bayes’, whose marker selection strategy is complementary to that of conventional genome-wide association models based on univariate association tests. The analysis was performed on a genome-wide association study of 3,464 patients with type 1 diabetes from the Finnish Diabetic Nephropathy (FinnDiane) Study and subsequently replicated with 4,263 type 1 diabetes patients from the Steno Diabetes Centre, the All Ireland-Warren 3-Genetics of Kidneys in Diabetes UK collection (UK–Republic of Ireland) and the Genetics of Kidneys in Diabetes US Study (GoKinD US). Results: Five genetic loci (WNT4/ZBTB40-rs12137135, RGMA/MCTP2-rs17709344, MAPRE1P2-rs1670754, SEMA6D/SLC24A5-rs12917114 and SIK1-rs2838302) were associated with ESRD in the FinnDiane study. An association between ESRD and rs17709344, tagging the previously identified rs12437854 and located between the RGMA and MCTP2 genes, was replicated in independent case–control cohorts. rs12917114 near SEMA6D was associated with ESRD in the replication cohorts under the genotypic model (p < 0.05), and rs12137135 upstream of WNT4 was associated with ESRD in Steno. Conclusions/interpretation: This study supports the previously identified findings on the RGMA/MCTP2 region and suggests novel susceptibility loci for ESRD. This highlights the importance of applying complementary statistical methods to detect novel genetic variants in diabetic nephropathy and, in general, in complex diseases.
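
A minimal sketch of a bagged naive Bayes marker-selection heuristic in the spirit of the 'Bag of Naive Bayes' approach described above: naive Bayes classifiers are trained on bootstrap samples over random SNP subsets, and markers are credited when they appear in bags that classify well. The scikit-learn estimators, bootstrap scheme, and scoring threshold are illustrative assumptions rather than the authors' implementation:

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB
from sklearn.utils import resample

def bag_of_naive_bayes(X, y, n_bags=100, markers_per_bag=50, seed=None):
    """Vote-count markers that recur in well-performing naive Bayes bags.
    X: samples x SNP genotype matrix, y: case/control labels."""
    rng = np.random.default_rng(seed)
    n_samples, n_markers = X.shape
    votes = np.zeros(n_markers)
    for _ in range(n_bags):
        cols = rng.choice(n_markers, size=markers_per_bag, replace=False)
        Xb, yb = resample(X[:, cols], y, random_state=int(rng.integers(1 << 30)))
        clf = BernoulliNB().fit(Xb, yb)
        # credit the markers of bags that classify the full data reasonably well
        if clf.score(X[:, cols], y) > 0.5:
            votes[cols] += 1
    return votes   # higher vote counts suggest candidate susceptibility markers
```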

Relevance: 70.00%

Abstract:

Clustering is a difficult problem, especially when we consider the task in the context of a data stream of categorical attributes. In this paper, we propose SCLOPE, a novel algorithm based on CLOPE's intuitive observation about cluster histograms. Unlike CLOPE, however, our algorithm is very fast and operates within the constraints of a data stream environment. In particular, we designed SCLOPE according to the recent CluStream framework. Our evaluation of SCLOPE shows very promising results. It consistently outperforms CLOPE in speed and scalability tests on our data sets while maintaining high cluster purity; it also supports cluster analysis that other algorithms in its class do not.
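
A minimal sketch of the cluster-histogram intuition that CLOPE (and, by extension, SCLOPE) builds on: a cluster of categorical transactions is summarized by an item-occurrence histogram, and a transaction is placed in the cluster whose "profit" (a height-to-width ratio controlled by a repulsion parameter r) it increases the most. The parameter names follow the original CLOPE formulation; the code is an illustration, not SCLOPE itself:

```python
from collections import Counter

def clope_profit(size, width, n_trans, r=2.0):
    """Per-cluster profit term S(C) * |C| / W(C)^r, where S is total item
    occurrences, W the number of distinct items, |C| the transaction count."""
    return 0.0 if width == 0 else size * n_trans / (width ** r)

def best_cluster(transaction, clusters, r=2.0):
    """Assign a transaction (an iterable of categorical items) to the cluster
    whose profit increases the most; None means a new cluster is better."""
    base_gain = clope_profit(len(transaction), len(set(transaction)), 1, r)
    best, best_gain = None, base_gain
    for idx, (hist, size, n_trans) in enumerate(clusters):
        new_hist = hist + Counter(transaction)          # updated cluster histogram
        gain = (clope_profit(size + len(transaction), len(new_hist), n_trans + 1, r)
                - clope_profit(size, len(hist), n_trans, r))
        if gain > best_gain:
            best, best_gain = idx, gain
    return best
```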

Relevance: 70.00%

Abstract:

A novel algorithm, the immune genetic algorithm (IGA), is proposed for reactive power optimization of power systems. While retaining the excellent characteristics of the genetic algorithm (GA), the algorithm, by imitating the biological immune system, evaluates and selects optimal solutions according to the affinities between antigens and antibodies. By regulating the activation and suppression of antibodies, IGA achieves a dynamic balance between individual diversity and population convergence, and avoids becoming trapped in local optima. The proposed IGA is applied to the IEEE 30-bus system, and the results show that it is superior to GA, with good population convergence and fast computation.
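
A minimal sketch of the immune-inspired selection step layered on a standard GA loop: fitness plays the role of antigen-antibody affinity, and an antibody's selection probability is suppressed when many similar antibodies already exist (high concentration), which is what preserves diversity. The concentration measure and weighting below are illustrative assumptions:

```python
import numpy as np

def immune_selection(pop, fitness, alpha=0.7, rng=np.random.default_rng()):
    """Select parents with probability driven by affinity (fitness, assumed
    positive) but suppressed by concentration (how many near-identical
    antibodies exist in the population)."""
    pop = np.asarray(pop)                       # population of binary gene vectors
    fitness = np.asarray(fitness, dtype=float)
    # concentration: fraction of the population within Hamming distance 1
    conc = np.array([(np.abs(pop - ind).sum(axis=1) <= 1).mean() for ind in pop])
    affinity = fitness / fitness.sum()
    score = alpha * affinity + (1 - alpha) * (1 - conc)   # promote fit *and* rare antibodies
    prob = score / score.sum()
    idx = rng.choice(len(pop), size=len(pop), p=prob)
    return pop[idx]
```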

Relevance: 70.00%

Abstract:

The need to estimate a particular quantile of a distribution is an important problem which frequently arises in many computer vision and signal processing applications. For example, our work was motivated by the requirements of many semi-automatic surveillance analytics systems which detect abnormalities in closed-circuit television (CCTV) footage using statistical models of low-level motion features. In this paper we specifically address the problem of estimating the running quantile of a data stream with non-stationary stochasticity when the memory for storing observations is limited. We make several major contributions: (i) we derive an important theoretical result which shows that the change in the quantile of a stream is constrained regardless of the stochastic properties of data, (ii) we describe a set of high-level design goals for an effective estimation algorithm that emerge as a consequence of our theoretical findings, (iii) we introduce a novel algorithm which implements the aforementioned design goals by retaining a sample of data values in a manner adaptive to changes in the distribution of data and progressively narrowing down its focus in the periods of quasi-stationary stochasticity, and (iv) we present a comprehensive evaluation of the proposed algorithm and compare it with the existing methods in the literature on both synthetic data sets and three large 'real-world' streams acquired in the course of operation of an existing commercial surveillance system. Our findings convincingly demonstrate that the proposed method is highly successful and vastly outperforms the existing alternatives, especially when the target quantile is high-valued and the available buffer capacity is severely limited.
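
A minimal sketch of the general idea of tracking a running quantile with a fixed-size buffer whose retained samples concentrate around the current quantile estimate. The retention rule below is a deliberately simple illustration, not the adaptive scheme proposed in the paper:

```python
import bisect

class RunningQuantile:
    """Fixed-memory running quantile estimate over a data stream."""

    def __init__(self, q=0.95, capacity=500):
        self.q, self.capacity, self.buf = q, capacity, []

    def update(self, x):
        bisect.insort(self.buf, x)                     # keep the buffer sorted
        if len(self.buf) > self.capacity:
            # drop the end of the buffer farthest (in rank) from the target
            # quantile, so retained values concentrate around it over time
            target = self.q * (len(self.buf) - 1)
            drop = 0 if target > (len(self.buf) - 1) / 2 else len(self.buf) - 1
            self.buf.pop(drop)
        return self.estimate()

    def estimate(self):
        return self.buf[int(round(self.q * (len(self.buf) - 1)))]
```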

Relevance: 70.00%

Abstract:

Abstract interpretation has been widely used for the analysis of object-oriented languages and, in particular, Java source and bytecode. However, while most existing work deals with the problem of finding expressive abstract domains that track accurately the characteristics of a particular concrete property, the underlying fixpoint algorithms have received comparatively less attention. In fact, many existing (abstract interpretation based) fixpoint algorithms rely on relatively inefficient techniques for solving inter-procedural call graphs or are specific and tied to particular analyses. We also argue that the design of an efficient fixpoint algorithm is pivotal to supporting the analysis of large programs. In this paper we introduce a novel algorithm for analysis of Java bytecode which includes a number of optimizations in order to reduce the number of iterations. The algorithm is parametric, in the sense that it is independent of the abstract domain used and can be applied to different domains as "plug-ins", as well as multivariant and flow-sensitive. It is also based on a program transformation, prior to the analysis, that results in a highly uniform representation of all the features in the language and therefore simplifies analysis. Detailed descriptions of decompilation solutions are given and discussed with an example. We also provide some performance data from a preliminary implementation of the analysis.

Relevance: 70.00%

Abstract:

Abstract interpretation has been widely used for the analysis of object-oriented languages and, more precisely, Java source and bytecode. However, while most of the existing work deals with the problem of finding expressive abstract domains that track accurately the characteristics of a particular concrete property, the underlying fixpoint algorithms have received comparatively less attention. In fact, many existing (abstract interpretation based) fixpoint algorithms rely on relatively inefficient techniques to solve inter-procedural call graphs or are specific and tied to particular analyses. We argue that the design of an efficient fixpoint algorithm is pivotal to supporting the analysis of large programs. In this paper we introduce a novel algorithm for analysis of Java bytecode which includes a number of optimizations in order to reduce the number of iterations. Also, the algorithm is parametric in the sense that it is independent of the abstract domain used and it can be applied to different domains as "plug-ins". It is also incremental in the sense that, if desired, analysis data can be saved so that only a reduced amount of reanalysis is needed after a small program change, which can be instrumental for large programs. The algorithm is also multivariant and flow-sensitive. Finally, another interesting characteristic of the algorithm is that it is based on a program transformation, prior to the analysis, that results in a highly uniform representation of all the features in the language and therefore simplifies analysis. Detailed descriptions of decompilation solutions are provided and discussed with an example.
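
A minimal sketch of the kind of parametric, domain-independent fixpoint computation both entries describe: a worklist algorithm over a dependency/call graph where the abstract domain is supplied as a "plug-in" (here just a transfer function and a bottom element). This is a generic illustration, not the optimized algorithm of these papers:

```python
from collections import deque

def fixpoint(nodes, succs, transfer, bottom):
    """Generic worklist fixpoint.

    nodes    : iterable of analysis nodes (e.g. methods or program points)
    succs    : dict mapping a node to the nodes that depend on its result
    transfer : function (node, current value map) -> new abstract value,
               supplied by the plugged-in abstract domain
    bottom   : least element of the abstract domain
    """
    value = {n: bottom for n in nodes}
    work = deque(nodes)
    while work:
        n = work.popleft()
        new = transfer(n, value)
        if new != value[n]:
            value[n] = new
            # re-schedule only nodes whose input may have changed; this is the
            # place where iteration-reducing optimizations would be applied
            work.extend(s for s in succs.get(n, []) if s not in work)
    return value
```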

Relevance: 70.00%

Abstract:

In this paper, we propose a novel algorithm for the rigorous design of distillation columns that integrates a process simulator into a generalized disjunctive programming formulation. The optimal distillation column, or column sequence, is obtained by selecting, for each column section, among a set of column sections with different numbers of theoretical trays. The selection of thermodynamic models, property estimation, etc. is handled entirely in the simulation environment. All the numerical issues related to the convergence of distillation columns (or column sections) are likewise kept in the simulation environment. The model is formulated as a Generalized Disjunctive Programming (GDP) problem and solved using the logic-based outer approximation algorithm without MINLP reformulation. Examples ranging from a single column to a thermally coupled sequence and extractive distillation show the performance of the new algorithm.