964 results for efficient algorithm


Relevance:

60.00%

Publisher:

Abstract:

Dragon stream cipher is one of the focus ciphers that reached Phase 2 of the eSTREAM project. In this paper, we present a new method of building a linear distinguisher for Dragon. The distinguisher is constructed by exploiting the biases of two S-boxes and the modular addition that are basic components of the nonlinear function F. The bias of the distinguisher is estimated to be around 2^-75.32, which is better than the bias of the distinguisher presented by Englund and Maximov. We show that Dragon is distinguishable from a random cipher using around 2^150.6 keystream words and 2^59 memory. In addition, we present a very efficient algorithm for computing the bias of a linear approximation of modular addition.
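
As a concrete illustration of that last step, here is a minimal brute-force sketch of the quantity being computed: the bias of a linear approximation of modular addition. The masks and word size are toy choices; the paper's algorithm computes this efficiently for full-width words, while this version just spells out the definition for small n.

```python
def parity(x):
    return bin(x).count("1") & 1

def bias(n, a, b, c):
    """Bias of <a,x> XOR <b,y> = <c,(x+y) mod 2^n> over all pairs (x, y)."""
    mask = (1 << n) - 1
    agree = 0
    for x in range(1 << n):
        for y in range(1 << n):
            lhs = parity(a & x) ^ parity(b & y)
            rhs = parity(c & ((x + y) & mask))
            agree += (lhs == rhs)
    return agree / (1 << (2 * n)) - 0.5

print(bias(4, 1, 1, 1))  # 0.5: the LSB of x+y is exactly x0 XOR y0
print(bias(4, 2, 2, 2))  # 0.25: bit 1 is disturbed by the carry x0 AND y0
```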

Relevance:

60.00%

Publisher:

Abstract:

This paper presents an efficient algorithm for optimizing the operation of battery storage in a low-voltage distribution network with a high penetration of PV generation. A predictive control solution is presented that uses wavelet neural networks to predict the load and PV generation at hourly intervals for twelve hours into the future. The load and generation forecast, together with the previous twelve hours of load and generation history, is used to assemble a load profile. A diurnal charging profile can be compactly represented by a vector of Fourier coefficients, allowing a direct-search optimization algorithm to be applied. The optimal profile is updated hourly, allowing the state-of-charge profile to respond to changing load forecasts.
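
A minimal sketch of the parameterisation idea: a 24-hour battery profile described by five Fourier coefficients and tuned with Nelder-Mead, a direct-search method. The cost function and the net-load curve are toy stand-ins, not the paper's network model.

```python
import numpy as np
from scipy.optimize import minimize

t = np.arange(24)
net_load = 2.0 + np.sin(2 * np.pi * (t - 18) / 24)   # hypothetical net demand (kW)

def profile(coeffs):
    """Battery power over the day from Fourier coefficients [a0, a1, b1, a2, b2]."""
    a0, a1, b1, a2, b2 = coeffs
    w = 2 * np.pi * t / 24
    return (a0 + a1 * np.cos(w) + b1 * np.sin(w)
            + a2 * np.cos(2 * w) + b2 * np.sin(2 * w))

def cost(coeffs):
    p = profile(coeffs)                    # positive = discharge to the network
    soc = np.cumsum(-p)                    # relative state of charge
    peak = np.max(np.abs(net_load - p))    # flatten the served demand
    return peak + 0.1 * abs(soc[-1])       # end the day roughly energy-neutral

res = minimize(cost, np.zeros(5), method="Nelder-Mead")
print(np.round(res.x, 3))                  # five numbers describe the whole day
```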

Relevance:

60.00%

Publisher:

Abstract:

We propose a new way to build a combined list from K base lists, each containing N items. A combined list consists of top segments of various sizes from each base list so that the total size of all top segments equals N. A sequence of item requests is processed and the goal is to minimize the total number of misses. That is, we seek to build a combined list that contains all the frequently requested items. We first consider the special case of disjoint base lists. There, we design an efficient algorithm that computes the best combined list for a given sequence of requests. In addition, we develop a randomized online algorithm whose expected number of misses is close to that of the best combined list chosen in hindsight. We prove lower bounds that show that the expected number of misses of our randomized algorithm is close to the optimum. In the presence of duplicate items, we show that computing the best combined list is NP-hard. We show that our algorithms still apply to a linearized notion of loss in this case. We expect that this new way of aggregating lists will find many ranking applications.
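
For the disjoint case, the hindsight-optimal combined list can be found by a simple dynamic programme over prefix sizes. The sketch below (the names and the O(K N^2) formulation are ours, not necessarily the paper's algorithm) counts the hits captured by each choice of top segments summing to N.

```python
from collections import Counter

def best_combined_list(base_lists, requests):
    """Max hits of any combined list built from top segments of disjoint lists."""
    N = len(base_lists[0])
    freq = Counter(requests)
    gains = []                      # gains[i][k]: hits from the top-k of list i
    for lst in base_lists:
        g, run = [0], 0
        for item in lst:
            run += freq[item]
            g.append(run)
        gains.append(g)
    NEG = float("-inf")
    dp = [0] + [NEG] * N            # dp[b]: best hits using total budget b
    for g in gains:
        new = [NEG] * (N + 1)
        for b in range(N + 1):
            if dp[b] == NEG:
                continue
            for k in range(N - b + 1):
                if dp[b] + g[k] > new[b + k]:
                    new[b + k] = dp[b] + g[k]
        dp = new
    return dp[N]

lists = [["a", "b", "c"], ["x", "y", "z"]]
reqs = ["a", "x", "x", "c", "y"]
print(best_combined_list(lists, reqs))  # 4: top-1 of list one plus top-2 of list two
```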

Relevance:

60.00%

Publisher:

Abstract:

We study linear control problems with quadratic losses and adversarially chosen tracking targets. We present an efficient algorithm for this problem and show that, under standard conditions on the linear system, its regret with respect to an optimal linear policy grows as O(log^2 T), where T is the number of rounds of the game. We also study a problem with adversarially chosen transition dynamics; we present an exponentially-weighted average algorithm for this problem, and we give regret bounds that grow as O(sqrt(T)).
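
For flavour, here is a generic exponentially-weighted average forecaster over a finite pool of experts. The paper's algorithm operates over policies in a control setting, so treat this purely as a sketch of the weighting scheme itself.

```python
import numpy as np

def ewa(expert_losses, eta):
    """Exponentially-weighted average over K experts; losses in [0, 1]."""
    T, K = expert_losses.shape
    log_w = np.zeros(K)
    total = 0.0
    for t in range(T):
        w = np.exp(log_w - log_w.max())       # numerically stable weights
        p = w / w.sum()                       # current distribution over experts
        total += p @ expert_losses[t]         # expected loss suffered this round
        log_w -= eta * expert_losses[t]       # downweight experts that lost
    return total

# With eta ~ sqrt(8 ln K / T), regret to the best expert is O(sqrt(T ln K)).
rng = np.random.default_rng(0)
losses = rng.uniform(size=(1000, 10))
print(ewa(losses, eta=np.sqrt(8 * np.log(10) / 1000)))
```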

Relevance:

60.00%

Publisher:

Abstract:

Between-subject and within-subject variability is ubiquitous in biology and physiology, and understanding and dealing with it is one of the biggest challenges in medicine. At the same time, it is difficult to investigate this variability by experiments alone. A recent modelling and simulation approach, known as population of models (POM), allows this exploration to take place by building a mathematical model consisting of multiple parameter sets calibrated against experimental data. However, finding such sets within the high-dimensional parameter space of complex electrophysiological models is computationally challenging. By placing the POM approach within a statistical framework, we develop a novel and efficient algorithm based on sequential Monte Carlo (SMC). We compare the SMC approach with Latin hypercube sampling (LHS), a method commonly adopted in the literature for obtaining the POM, in terms of efficiency and output variability in the presence of a drug block, through an in-depth investigation of the Beeler-Reuter cardiac electrophysiological model. We show improved efficiency via SMC, and that it produces responses similar to LHS when making out-of-sample predictions in the presence of a simulated drug block.
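
A minimal likelihood-tempering SMC sketch, assuming a toy one-parameter model in place of the Beeler-Reuter equations: particles are reweighted along a temperature ladder, resampled, and jittered with a Metropolis move, yielding a population of calibrated parameter sets.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(2.0, 1.0, size=50)          # hypothetical calibration data

def loglik(theta):
    return -0.5 * np.sum((data - theta) ** 2)

n = 500
temps = np.linspace(0.0, 1.0, 21)             # ladder from prior to posterior
particles = rng.uniform(-5, 5, size=n)        # draws from a Uniform(-5, 5) prior
for g0, g1 in zip(temps[:-1], temps[1:]):
    ll = np.array([loglik(p) for p in particles])
    w = np.exp((g1 - g0) * (ll - ll.max()))   # incremental importance weights
    idx = rng.choice(n, size=n, p=w / w.sum())
    particles = particles[idx]                # multinomial resampling
    prop = particles + rng.normal(0, 0.2, size=n)           # random-walk proposal
    logr = g1 * (np.array([loglik(p) for p in prop]) - ll[idx])
    accept = (np.log(rng.uniform(size=n)) < logr) & (np.abs(prop) < 5)
    particles[accept] = prop[accept]          # Metropolis move at temperature g1

print(particles.mean(), particles.std())     # the calibrated population of models
```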

Relevance:

60.00%

Publisher:

Abstract:

In the modern business environment, meeting due dates and avoiding delay penalties are important goals that can be accomplished by minimizing total weighted tardiness. We consider a scheduling problem in a system of parallel processors with the objective of minimizing total weighted tardiness. Our aim in the present work is to develop an algorithm for the parallel-processor problem that is efficient compared with the heuristics available in the literature, and we propose an ant colony optimization (ACO) approach for this problem. Extensive experimentation is conducted to evaluate the performance of the ACO approach on different problem sizes with varied tardiness factors. Our experiments show that the proposed ant colony optimization algorithm gives promising results compared to the best of the available heuristics.
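
A bare-bones sketch of an ACO loop for this objective, with toy data and a simple position-by-job pheromone matrix; the paper's construction rules and parameter choices will differ.

```python
import numpy as np

rng = np.random.default_rng(2)
n_jobs, n_machines = 10, 3
p = rng.integers(1, 10, n_jobs)           # processing times
d = rng.integers(5, 30, n_jobs)           # due dates
w = rng.integers(1, 5, n_jobs)            # tardiness weights

def twt(seq):
    """List-schedule seq on the earliest-free machine; return weighted tardiness."""
    free = np.zeros(n_machines)
    total = 0.0
    for j in seq:
        m = np.argmin(free)
        free[m] += p[j]
        total += w[j] * max(0.0, free[m] - d[j])
    return total

tau = np.ones((n_jobs, n_jobs))           # pheromone tau[position, job]
best_seq, best_val = None, np.inf
for it in range(200):
    for ant in range(10):
        left, seq = list(range(n_jobs)), []
        for pos in range(n_jobs):
            pr = tau[pos, left] * (w[left] / p[left])   # pheromone x WSPT heuristic
            j = rng.choice(left, p=pr / pr.sum())
            seq.append(j)
            left.remove(j)
        v = twt(seq)
        if v < best_val:
            best_seq, best_val = seq, v
    tau *= 0.9                            # evaporation
    for pos, j in enumerate(best_seq):
        tau[pos, j] += 1.0                # reinforce the best schedule found
print(best_val, best_seq)
```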

Relevance:

60.00%

Publisher:

Abstract:

Database schemes can be viewed as hypergraphs, with individual relation schemes corresponding to the edges of a hypergraph. Under this setting, a new class of "acyclic" database schemes was recently introduced and shown to enjoy a number of desirable properties. However, unlike the case of ordinary undirected graphs, there are several inequivalent notions of acyclicity for hypergraphs. Of special interest among these are the alpha-, beta-, and gamma-degrees of acyclicity, each characterizing an equivalence class of desirable properties for database schemes represented as hypergraphs. In this paper, two complementary approaches to designing beta-acyclic database schemes are presented. For the first part, a new notion called "independent cycle" is introduced. Based on this, a criterion for beta-acyclicity is developed and shown to be equivalent to the existing definitions of beta-acyclicity. From this and the concept of the dual of a hypergraph, an efficient algorithm for testing beta-acyclicity is developed. For the second part, a procedure is developed for top-down generation of beta-acyclic schemes and its correctness is established. Finally, extensions and applications of these ideas are described.
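
For orientation, the sketch below implements the classic GYO ear-removal test for alpha-acyclicity, the weakest of the three notions; beta-acyclicity is strictly stronger (every subset of edges must itself be alpha-acyclic), which is why the paper develops a separate criterion and algorithm.

```python
from collections import Counter

def gyo_alpha_acyclic(edges):
    """edges: iterable of vertex sets. True iff the hypergraph is alpha-acyclic."""
    edges = [set(e) for e in edges]
    changed = True
    while changed and edges:
        changed = False
        # Rule 1: drop vertices that occur in exactly one edge ("ears")
        count = Counter(v for e in edges for v in e)
        for e in edges:
            lonely = {v for v in e if count[v] == 1}
            if lonely:
                e -= lonely
                changed = True
        # Rule 2: drop an edge contained in another edge (or emptied by rule 1)
        for i, e in enumerate(edges):
            if not e or any(i != j and e <= f for j, f in enumerate(edges)):
                edges.pop(i)
                changed = True
                break
    return not edges           # acyclic iff the reduction empties the hypergraph

print(gyo_alpha_acyclic([{1, 2}, {2, 3}, {3, 1}]))     # False: a 3-cycle
print(gyo_alpha_acyclic([{1, 2, 3}, {2, 3}, {3, 4}]))  # True: fully reducible
```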

Relevance:

60.00%

Publisher:

Abstract:

Understanding the shape and size of different features of the human body from scanned data is necessary for automated design and evaluation of product ergonomics. In this paper, a computational framework is presented for automatic detection and recognition of important facial feature regions from scanned head-and-shoulder polyhedral models. A noise-tolerant methodology is proposed using discrete curvature computations, band-pass filtering, and morphological operations for isolating the primary feature regions of the face, namely the eyes, nose, and mouth. The spatial disposition of the critical points of these isolated feature regions is analyzed to recognize these critical points as the standard landmarks associated with the primary facial features. A number of clinically identified landmarks lie on the facial midline. An efficient algorithm for detection and processing of the midline, using a point-sampling technique, is also presented. The results obtained using data from more than 20 subjects are verified through visualization and physical measurements. Color-based and triangle-skewness-based schemes for isolating geometrically nonprominent features and the ear region are also presented. [DOI: 10.1115/1.3330420]
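
A toy sketch of the pipeline's flavour on a height-field stand-in for the scanned mesh: a curvature proxy, band-pass filtering, and a morphological opening to isolate compact high-curvature regions. Real polyhedral models require discrete curvature on triangle meshes; everything here is an illustrative assumption.

```python
import numpy as np
from scipy import ndimage

y, x = np.mgrid[-64:64, -64:64] / 16.0
z = np.exp(-(x**2 + y**2) / 8)                        # smooth "face" surface
z += 0.3 * np.exp(-((x - 1.5)**2 + (y + 1)**2) * 4)   # one sharp "feature"

lap = ndimage.laplace(ndimage.gaussian_filter(z, 2))  # discrete curvature proxy
band = ndimage.gaussian_filter(lap, 1) - ndimage.gaussian_filter(lap, 4)
mask = np.abs(band) > 3 * band.std()                  # keep strong curvature only
mask = ndimage.binary_opening(mask, iterations=2)     # drop speckle noise
labels, n = ndimage.label(mask)                       # connected feature regions
print(n, "candidate feature regions")
```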

Relevance:

60.00%

Publisher:

Abstract:

We describe a real-time system that supports the design of optimal flight paths over terrains. These paths either maximize view coverage or minimize vehicle exposure to the ground. A volume-rendered display of multi-viewpoint visibility and a haptic interface assist the user in selecting, assessing, and refining the computed flight path. We design a three-dimensional scalar field representing the visibility of a point above the terrain, describe an efficient algorithm to compute the visibility field, and develop visual and haptic schemes to interact with it. Given the origin and destination, the desired flight path is computed using an efficient simulation of an articulated rope under the influence of the visibility gradient. The simulation framework also accepts user input via the haptic interface, thereby allowing manual refinement of the flight path.
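
The naive definition that the efficient algorithm accelerates can be written down directly. A sketch over a 1-D terrain profile (the paper works on 2-D terrains): the visibility of an airborne point is the fraction of ground samples it has an unobstructed line of sight to.

```python
import numpy as np

terrain = np.abs(np.sin(np.linspace(0, 6, 200))) * 30.0   # hypothetical heights

def visible(px, pz, gx):
    """Line of sight from airborne point (px, pz) to ground sample gx."""
    xs = np.linspace(px, gx, 64)
    zs = np.linspace(pz, terrain[gx], 64)
    return np.all(zs[1:-1] > terrain[np.round(xs[1:-1]).astype(int)])

def visibility(px, pz):
    """Fraction of ground samples visible: higher means more exposed."""
    return np.mean([visible(px, pz, g) for g in range(len(terrain))])

field = np.array([[visibility(px, pz) for px in range(0, 200, 10)]
                  for pz in range(35, 80, 5)])
print(field.shape)        # a sampled slice of the visibility scalar field
```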

Relevance:

60.00%

Publisher:

Abstract:

An efficient algorithm within the finite-deformation framework is developed for finite element implementation of a recently proposed isotropic, Mohr-Coulomb type material model, which captures the elastic-viscoplastic, pressure-sensitive, and plastically dilatant response of bulk metallic glasses. The constitutive equations are first reformulated and implemented using an implicit numerical integration procedure based on the backward Euler method. The resulting system of nonlinear algebraic equations is solved by the Newton-Raphson procedure. This is achieved by developing a principal-space return-mapping technique for the present model, which involves simultaneous shearing and dilatation on multiple potential slip systems. The complete stress-update algorithm is presented and expressions for the viscoplastic consistent tangent moduli are derived. The stress-update scheme and the viscoplastic consistent tangent are implemented in the commercial finite element code ABAQUS/Standard. The accuracy and performance of the numerical implementation are verified through several benchmark examples, including a simulation of multiple shear bands in a 3D prismatic bar under uniaxial compression.
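
A one-dimensional caricature of the ingredients, assuming a Perzyna-type overstress law: an elastic trial stress, a yield check, and a backward-Euler plastic-multiplier update solved by Newton-Raphson. The paper performs this in principal-stress space for a Mohr-Coulomb type model; all constants here are toy values.

```python
E, sigma_y, eta, n_exp = 200e3, 300.0, 1e3, 1.0   # toy material constants

def return_map(eps_new, eps_p_old, dt):
    """Backward-Euler return mapping: returns (stress, plastic strain)."""
    sigma_tr = E * (eps_new - eps_p_old)          # elastic trial stress
    f_tr = abs(sigma_tr) - sigma_y
    if f_tr <= 0.0:
        return sigma_tr, eps_p_old                # elastic step
    dgamma = 0.0                                  # plastic multiplier increment
    for _ in range(50):                           # Newton-Raphson iterations
        # Perzyna residual: f_tr - E*dgamma - eta*(dgamma/dt)^(1/n) = 0
        r = f_tr - E * dgamma - eta * (dgamma / dt) ** (1.0 / n_exp)
        dr = -E - (eta / (n_exp * dt)) * max(dgamma / dt, 1e-12) ** (1.0 / n_exp - 1.0)
        dgamma = max(dgamma - r / dr, 0.0)
        if abs(r) < 1e-10 * sigma_y:
            break
    sign = 1.0 if sigma_tr >= 0 else -1.0
    return sigma_tr - E * dgamma * sign, eps_p_old + dgamma * sign

print(return_map(0.005, 0.0, 1e-3))               # stress pulled back to the surface
```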

Relevance:

60.00%

Publisher:

Abstract:

Thanks to advances in sensor technology, we now have many applications (space-borne imaging, medical imaging, etc.) in which images of large sizes are generated. Straightforward application of wavelet techniques to such images involves certain difficulties. Embedded coders such as EZW and SPIHT require that the wavelet transform of the full image be buffered for coding. Since the transform coefficients must also be stored at high precision, the buffering requirements for large images become prohibitively high. In this paper, we first devise a technique for embedded coding of large images using zerotrees with reduced memory requirements. A 'strip buffer' capable of holding a few lines of wavelet coefficients from all the subbands belonging to the same spatial location is employed. A pipeline architecture for a line-based implementation of the above technique is then proposed. Further, an efficient algorithm to extract an encoded bitstream corresponding to a region of interest in the image has also been developed. Finally, the paper describes strip-based non-embedded coding using a single-pass algorithm, to handle high input data rates.
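
A minimal sketch of the strip-buffer idea: one level of a line-based Haar transform that never holds more than two rows of coefficients at a time, instead of buffering the full transform. Haar is used for brevity; the paper's filter bank and buffering depth will differ.

```python
import numpy as np

def strip_haar(rows):
    """rows: iterable of image lines (even length). Yields subband lines."""
    strip = []                                  # the strip buffer: at most 2 lines
    for line in rows:
        lo = (line[0::2] + line[1::2]) / 2.0    # horizontal low-pass
        hi = (line[0::2] - line[1::2]) / 2.0    # horizontal high-pass
        strip.append(np.concatenate([lo, hi]))
        if len(strip) == 2:                     # a vertical pair is complete
            a, b = strip
            yield (a + b) / 2.0                 # LL | HL coefficient line
            yield (a - b) / 2.0                 # LH | HH coefficient line
            strip.clear()

img = np.random.default_rng(3).uniform(size=(8, 8))
coeffs = np.vstack(list(strip_haar(img)))       # produced strip by strip
print(coeffs.shape)
```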

Relevance:

60.00%

Publisher:

Abstract:

In computational molecular biology, the aim of restriction mapping is to locate the restriction sites of a given enzyme on a DNA molecule. Double digest and partial digest are two well-studied techniques for restriction mapping. While double digest is NP-complete, there is no known polynomial-time algorithm for partial digest. Another disadvantage of these techniques is that there can be multiple solutions for reconstruction. In this paper, we study a simple technique called labeled partial digest for restriction mapping. We give a fast polynomial-time (O(n^2 log n) worst-case) algorithm for finding all n sites of a DNA molecule using this technique. An important advantage of the algorithm is the unique reconstruction of the DNA molecule from the digest. The technique is also robust in handling the errors in fragment lengths that arise in the laboratory. We give a robust O(n^4) worst-case algorithm that can provably tolerate an absolute error of O(Delta/n) (where Delta is the minimum inter-site distance) while giving a unique reconstruction. We test our theoretical results by simulating the performance of the algorithm on a real DNA molecule. Motivated by the similarity to the labeled partial digest problem, we address a related problem of interest, the de novo peptide sequencing problem (ACM-SIAM Symposium on Discrete Algorithms (SODA), 2000, pp. 389-398), which arises in the reconstruction of the peptide sequence of a protein molecule. We give a simple and efficient algorithm for the problem without using dynamic programming. The algorithm runs in time O(k log k), where k is the number of ions, and is an improvement over the algorithm of Chen et al.
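
The intuition behind the technique is easiest to see in an idealised setting where only one end of the molecule is labeled: every labeled fragment then runs from that end to a site, so sorting the labeled lengths recovers all sites uniquely. The sketch below adds only a tolerance for repeated noisy measurements; the paper's algorithms handle the full labeled setting with provable error bounds.

```python
def sites_from_labeled_digest(lengths, tol=0.5):
    """Site positions from labeled fragment lengths (left end labeled)."""
    sites = []
    for x in sorted(lengths):
        if not sites or x - sites[-1] > tol:   # new site, not a re-measurement
            sites.append(x)
    return sites

print(sites_from_labeled_digest([3.0, 3.2, 7.1, 12.0, 11.8]))  # [3.0, 7.1, 11.8]
```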

Relevance:

60.00%

Publisher:

Abstract:

With the emergence of large-volume and high-speed streaming data, recent techniques for stream mining of CFIs (closed frequent itemsets) become inefficient. When concept drift occurs at a slow rate in high-speed data streams, the rate of change of information across different sliding windows will be negligible. So the user won't be deprived of information change if we slide the window by multiple transactions at a time. Therefore, we propose a novel approach for mining CFIs cumulatively, using a slide width (>= 1) over high-speed data streams. However, it is nontrivial to mine CFIs cumulatively over a stream, because such growth may lead to the generation of an exponential number of candidates for closure checking. In this study, we develop an efficient algorithm, stream-close, for mining CFIs over a stream by exploring some interesting properties. Our performance study reveals that stream-close achieves good scalability and has promising results.
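
A naive, non-streaming version of the closure notion being maintained, for tiny windows only: an itemset is closed if no superset has the same support. stream-close's contribution is maintaining this incrementally as the window slides by several transactions; this sketch recomputes everything from scratch.

```python
from itertools import combinations

def closed_frequent_itemsets(window, minsup):
    """window: list of transaction sets. Returns closed frequent itemsets."""
    items = sorted({i for t in window for i in t})
    support = {}
    for r in range(1, len(items) + 1):
        for cand in combinations(items, r):
            s = sum(1 for t in window if set(cand) <= t)
            if s >= minsup:
                support[frozenset(cand)] = s
    # keep only itemsets with no equal-support superset (the closure check)
    return {X: s for X, s in support.items()
            if not any(X < Y and s == support[Y] for Y in support)}

window = [{"a", "b"}, {"a", "b", "c"}, {"a", "c"}, {"a"}]
print(closed_frequent_itemsets(window, minsup=2))
# {a}:4, {a,b}:2, {a,c}:2 are closed; {b} and {c} are absorbed by supersets
```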

Relevance:

60.00%

Publisher:

Abstract:

Study of symmetric or repeating patterns in scalar fields is important in scientific data analysis because it gives deep insights into the properties of the underlying phenomenon. Though geometric symmetry has been well studied within areas like shape processing, identifying symmetry in scalar fields has remained largely unexplored due to the high computational cost of the associated algorithms. We propose a computationally efficient algorithm for detecting symmetric patterns in a scalar field distribution by analysing the topology of level sets of the scalar field. Our algorithm computes the contour tree of a given scalar field and identifies subtrees that are similar. We define a robust similarity measure for comparing subtrees of the contour tree and use it to group similar subtrees together. Regions of the domain corresponding to subtrees that belong to a common group are extracted and reported to be symmetric. Identifying symmetry in scalar fields finds applications in visualization, data exploration, and feature detection. We describe two applications in detail: symmetry-aware transfer function design and symmetry-aware isosurface extraction.
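
A sketch of the topological machinery on the simplest possible domain: the join tree (one half of the contour tree) of a 1-D scalar field, built with union-find by sweeping samples in increasing order of value. The paper computes full contour trees on higher-dimensional data and then compares their subtrees with a similarity measure.

```python
def join_tree_1d(values):
    """Arcs of the (augmented) join tree of a 1-D scalar field."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    parent = list(range(len(values)))     # union-find over processed samples
    seen = [False] * len(values)
    head = {}                             # component root -> latest tree node
    arcs = []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in order:                       # sweep by increasing function value
        seen[i] = True
        comps = {find(j) for j in (i - 1, i + 1)
                 if 0 <= j < len(values) and seen[j]}
        for c in comps:                   # i extends one component or merges two
            arcs.append((head.pop(c), i))
            parent[c] = i
        head[i] = i
    return arcs

# Minima at indices 0, 2, 4 merge at index 1 and then at index 3:
print(join_tree_1d([0, 3, 1, 4, 2]))
```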

Relevance:

60.00%

Publisher:

Abstract:

While it is well known that extremely long low-density parity-check (LDPC) codes perform exceptionally well in error-correction applications, short-length codes are preferable in practice. However, short-length LDPC codes suffer from performance degradation owing to graph-based impairments, such as short cycles, trapping sets, and stopping sets, in the bipartite graph of the LDPC matrix. In particular, performance degradation at moderate to high Eb/N0 is caused by oscillations in the bit-node a posteriori probabilities induced by short cycles and trapping sets in the bipartite graph. In this study, a computationally efficient algorithm is proposed to improve the performance of short-length LDPC codes at moderate to high Eb/N0. This algorithm makes use of the information generated by the belief propagation (BP) algorithm in previous iterations before a decoding failure occurs. Using this information, a reliability-based estimation is performed on each bit node to supplement the BP algorithm. The proposed algorithm gives an appreciable coding gain compared with BP decoding for LDPC codes with a code rate of 1/2 or less. The coding gains are modest to significant for regular LDPC codes optimised for bipartite-graph conditioning, whereas they are huge for unoptimised codes. Hence, this algorithm is useful for relaxing some stringent constraints on the graphical structure of the LDPC code and for developing hardware-friendly designs.
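
A minimal sketch of the damping idea, assuming a min-sum BP decoder: posterior LLRs are accumulated across iterations, and if decoding fails the hard decision falls back to the time-averaged reliabilities. The paper's estimator is more refined; this only shows where the iteration history enters.

```python
import numpy as np

def minsum_iteration(llr_ch, H, msgs):
    """One round of min-sum BP. msgs holds check-to-bit messages (m x n)."""
    m, n = H.shape
    if msgs is None:
        msgs = np.zeros((m, n))
    post = llr_ch + msgs.sum(axis=0)               # posterior LLR per bit
    q = post - msgs                                # bit-to-check messages
    new = np.zeros_like(msgs)
    for r in range(m):
        idx = np.flatnonzero(H[r])
        for j in idx:
            others = q[r, idx[idx != j]]
            new[r, j] = np.sign(others).prod() * np.abs(others).min()
    return llr_ch + new.sum(axis=0), new

def decode_with_fallback(llr_ch, H, max_iter=20):
    msgs, llr_sum = None, np.zeros_like(llr_ch)
    for _ in range(max_iter):
        post, msgs = minsum_iteration(llr_ch, H, msgs)
        llr_sum += post                            # accumulate posterior history
        bits = (post < 0).astype(int)
        if not np.any(H @ bits % 2):               # zero syndrome: valid codeword
            return bits
    # BP failed: decide from time-averaged reliabilities, damping the
    # oscillations induced by short cycles and trapping sets.
    return (llr_sum < 0).astype(int)

H = np.array([[1, 1, 1, 0, 1, 0, 0],               # (7,4) Hamming code for demo
              [1, 1, 0, 1, 0, 1, 0],
              [1, 0, 1, 1, 0, 0, 1]])
rng = np.random.default_rng(4)
llr = 2.0 * (1.0 + rng.normal(0, 0.8, 7))          # all-zero codeword over BPSK
print(decode_with_fallback(llr, H))
```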