53 results for Tutorial on Computing


Relevance: 30.00%

Abstract:

In this paper a genetic algorithm (GA) is applied to the Maximum Betweenness Problem (MBP). The maximum of the objective function is obtained by finding a permutation which satisfies a maximal number of betweenness constraints. Every permutation considered is genetically coded with an integer representation. Standard operators are used in the GA. Instances in the experimental results are randomly generated. For smaller dimensions, optimal solutions of MBP are obtained by total enumeration. For those instances, the GA reached all optimal solutions except one. The GA also obtained results for larger instances of up to 50 elements and 1000 triples. The running times for reaching optimal results are quite short.
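
A minimal sketch of the objective being maximized: count how many betweenness triples (a, b, c) a permutation satisfies. The triples, the random-permutation baseline, and all sizes below are illustrative assumptions; the GA's actual crossover and mutation operators are not reproduced here.

```python
import random

def fitness(perm, triples):
    """Number of betweenness constraints (a, b, c) satisfied by the permutation:
    b must lie between a and c, in either order."""
    pos = {elem: i for i, elem in enumerate(perm)}
    return sum(1 for a, b, c in triples
               if pos[a] < pos[b] < pos[c] or pos[c] < pos[b] < pos[a])

# Toy instance: 5 elements, 10 random triples, scored over random permutations
# (a crude baseline standing in for the GA's search).
n = 5
triples = [tuple(random.sample(range(n), 3)) for _ in range(10)]
best = max((random.sample(range(n), n) for _ in range(200)),
           key=lambda p: fitness(p, triples))
print(fitness(best, triples), "of", len(triples), "constraints satisfied")
```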

Relevance: 30.00%

Abstract:

A modification of the Nekrassov method for finding a solution of a linear system of algebraic equations is given and a numerical example is shown.
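
A minimal sketch for orientation, assuming the classical reading of the Nekrassov method as a successive-substitution (Gauss-Seidel type) iteration; the paper's modification and its numerical example are not reproduced here, and the test system below is an illustrative assumption.

```python
import numpy as np

def nekrassov(A, b, x0=None, tol=1e-10, max_iter=500):
    """Iterate x_i <- (b_i - sum_{j<i} a_ij x_j_new - sum_{j>i} a_ij x_j_old) / a_ii,
    reusing components already updated within the current sweep."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

# Small diagonally dominant system, where this iteration is known to converge.
A = np.array([[4.0, 1.0, 0.0], [1.0, 5.0, 2.0], [0.0, 2.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
print(nekrassov(A, b), np.linalg.solve(A, b))
```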

Relevance: 30.00%

Abstract:

An approximate number is an ordered pair consisting of a (real) number and an error bound, briefly an error, which is a (real) non-negative number. To compute with approximate numbers, the arithmetic operations on errors should be well understood. To model computations with errors one should suitably define and study arithmetic operations and order relations over the set of non-negative numbers. In this work we discuss the algebraic properties of non-negative numbers starting from familiar properties of real numbers. We focus on certain operations on errors which seem not to have been sufficiently studied algebraically. In this work we restrict ourselves to arithmetic operations for errors related to addition and multiplication by scalars. We pay special attention to subtractability-like properties of errors and the induced “distance-like” operation. This operation is implicitly used under different names in several contemporary fields of applied mathematics (inner subtraction and inner addition in interval analysis, generalized Hukuhara difference in fuzzy set theory, etc.). Here we present some new results related to algebraic properties of this operation.
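
A minimal sketch of the setting: approximate numbers as (value, error) pairs, with errors accumulating under addition and scaling by |λ| under scalar multiplication. Representing the "distance-like" operation on errors as |a - b| is an assumption made for illustration only; the paper's algebraic results are not reproduced here.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Approx:
    value: float
    error: float  # non-negative error bound

    def __add__(self, other):
        # Error bounds accumulate under addition.
        return Approx(self.value + other.value, self.error + other.error)

    def scale(self, lam):
        # Scalar multiplication scales the error bound by |lam|.
        return Approx(lam * self.value, abs(lam) * self.error)

def error_distance(a, b):
    """Illustrative 'distance-like' operation on two error bounds."""
    return abs(a - b)

x = Approx(3.14, 0.01)
y = Approx(2.72, 0.002)
print(x + y, x.scale(-2.0), error_distance(x.error, y.error))
```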

Relevance: 30.00%

Abstract:

We propose a new approach to the mathematical modelling of microbial growth. Our approach differs from familiar Monod type models by considering two phases in the physiological states of the microorganisms and makes use of basic relations from enzyme kinetics. Such an approach may be useful in the modelling and control of biotechnological processes, where microorganisms are used for various biodegradation purposes and are often put under extreme inhibitory conditions. Some computational experiments are performed in support of our modelling approach.
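
For orientation, a minimal sketch of the familiar Monod-type model that the proposed two-phase approach departs from; the two-phase model itself is not reproduced here, and the rate constants, yield and step size are illustrative assumptions.

```python
def monod_step(X, S, dt, mu_max=0.5, Ks=0.2, Y=0.4):
    """One explicit Euler step of biomass X and substrate S under Monod kinetics."""
    mu = mu_max * S / (Ks + S)   # specific growth rate
    dX = mu * X                  # biomass growth
    dS = -dX / Y                 # substrate consumed per unit of biomass formed
    return X + dt * dX, max(S + dt * dS, 0.0)

# Illustrative batch run: biomass rises until the substrate is exhausted.
X, S = 0.05, 5.0
for _ in range(2000):
    X, S = monod_step(X, S, dt=0.01)
print(f"final biomass {X:.3f}, residual substrate {S:.3f}")
```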

Relevance: 30.00%

Abstract:

Let n_q(k, d) denote the smallest value of n for which an [n, k, d]_q code exists for given integers k and d with k ≥ 3, 1 ≤ d ≤ q^(k−1), and a prime or prime power q. The purpose of this note is to show that there exists a series of functions h_{3,q}, h_{4,q}, ..., h_{k,q} such that n_q(k, d) can be expressed in terms of them.
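
For context, a piece of standard coding-theory background rather than the note's result: the Griesmer bound, the classical lower bound on n_q(k, d).

```latex
% Griesmer bound: a well-known lower bound on the minimum length n_q(k, d).
\[
  n_q(k, d) \;\ge\; g_q(k, d) \;=\; \sum_{i=0}^{k-1} \left\lceil \frac{d}{q^{i}} \right\rceil .
\]
```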

Relevance: 30.00%

Abstract:

ACM Computing Classification System (1998): G.2.1.

Relevance: 30.00%

Abstract:

ACM Computing Classification System (1998): K.3.1, K.3.2.

Relevance: 30.00%

Abstract:

ACM Computing Classification System (1998): G.2.2.

Relevance: 30.00%

Abstract:

In this paper we compute some bounds on the Balaban index and then, by means of group actions, we compute the Balaban index of vertex-transitive graphs. ACM Computing Classification System (1998): G.2.2, F.2.2.
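
A minimal sketch, assuming the standard definition of the Balaban index J(G) = m/(μ+1) · Σ over edges uv of 1/sqrt(w(u)·w(v)), where w(u) is the sum of distances from u, m the number of edges and μ = m − n + 1; the paper's bounds and the group-action argument are not reproduced here.

```python
import math
import networkx as nx

def balaban_index(G):
    """Balaban index J(G) computed from the distance sum w(u) of each vertex."""
    n, m = G.number_of_nodes(), G.number_of_edges()
    mu = m - n + 1                                # cyclomatic number
    dist = dict(nx.all_pairs_shortest_path_length(G))
    w = {u: sum(dist[u].values()) for u in G}     # distance sum of each vertex
    return m / (mu + 1) * sum(1.0 / math.sqrt(w[u] * w[v]) for u, v in G.edges())

# The cycle C_6 is vertex transitive, so every vertex has the same distance sum.
print(balaban_index(nx.cycle_graph(6)))
```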

Relevance: 30.00%

Abstract:

We describe an approach for recovering the plaintext in block ciphers having a design structure similar to the Data Encryption Standard but with improperly constructed S-boxes. Experiments with a backtracking search algorithm performing this kind of attack against modified DES/Triple-DES in ECB mode show that the unknown plaintext can be recovered with a small amount of uncertainty, and that the algorithm is highly efficient in both time and memory for plaintext sources with relatively low entropy. Our investigations demonstrate once again that modifications resulting in S-boxes which still satisfy some design criteria may lead to very weak ciphers. ACM Computing Classification System (1998): E.3, I.2.7, I.2.8.
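
A purely illustrative backtracking-with-pruning skeleton of the general kind of search described above; nothing here implements DES, and the cipher-specific constraints derived from the weak S-boxes are abstracted into a hypothetical `consistent` test supplied by the caller.

```python
def backtrack(partial, candidates_for, consistent, target_len):
    """Extend a partial plaintext guess position by position, pruning every
    branch the constraint test rejects; return the first full candidate found."""
    if len(partial) == target_len:
        return partial
    for c in candidates_for(len(partial)):
        guess = partial + [c]
        if consistent(guess):
            found = backtrack(guess, candidates_for, consistent, target_len)
            if found is not None:
                return found
    return None

# Toy usage: recover a 4-symbol "plaintext" over a 10-symbol alphabet, with the
# constraint test reduced to a known-prefix check (a stand-in only).
secret = [2, 7, 1, 5]
prefix_ok = lambda g: g == secret[:len(g)]
print(backtrack([], lambda _pos: range(10), prefix_ok, 4))
```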

Relevance: 30.00%

Abstract:

In 1900 E. B. Van Vleck proposed a very efficient method to compute the Sturm sequence of a polynomial p (x) ∈ Z[x] by triangularizing one of Sylvester’s matrices of p (x) and its derivative p′(x). That method works fine only for the case of complete sequences provided no pivots take place. In 1917, A. J. Pell and R. L. Gordon pointed out this “weakness” in Van Vleck’s theorem, rectified it but did not extend his method, so that it also works in the cases of: (a) complete Sturm sequences with pivot, and (b) incomplete Sturm sequences. Despite its importance, the Pell-Gordon Theorem for polynomials in Q[x] has been totally forgotten and, to our knowledge, it is referenced by us for the first time in the literature. In this paper we go over Van Vleck’s theorem and method, modify slightly the formula of the Pell-Gordon Theorem and present a general triangularization method, called the VanVleck-Pell-Gordon method, that correctly computes in Z[x] polynomial Sturm sequences, both complete and incomplete.
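
For orientation, a minimal sketch of a complete Sturm sequence built the classical way, via negated Euclidean remainders (p_0 = p, p_1 = p', p_{i+1} = -rem(p_{i-1}, p_i)); this only illustrates the object being computed, not the matrix-triangularization method discussed in the paper, and the sample polynomial is an arbitrary choice.

```python
import sympy as sp

def sturm_sequence(p, x):
    """Classical Sturm sequence by negated polynomial remainders."""
    seq = [sp.Poly(p, x), sp.Poly(sp.diff(p, x), x)]
    while not seq[-1].is_zero and seq[-1].degree() > 0:
        seq.append(-sp.rem(seq[-2], seq[-1]))
    return [q.as_expr() for q in seq]

x = sp.symbols('x')
p = x**4 + x**3 - x - 1
print(sturm_sequence(p, x))
print(sp.sturm(p, x))  # sympy's built-in construction, for comparison
```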

Relevance: 30.00%

Abstract:

ACM Computing Classification System (1998): G.1.1, G.1.2.

Relevance: 30.00%

Abstract:

This research evaluates pattern recognition techniques on a subclass of big data where the dimensionality of the input space (p) is much larger than the number of observations (n). Specifically, we evaluate massive gene expression microarray cancer data where the ratio κ is less than one. We explore the statistical and computational challenges inherent in these high dimensional low sample size (HDLSS) problems and present statistical machine learning methods used to tackle and circumvent these difficulties. Regularization and kernel algorithms were explored in this research using seven datasets where κ < 1. These techniques require special attention to tuning, necessitating the investigation of several extensions of cross-validation to support better predictive performance. While no single algorithm was universally the best predictor, the regularization technique produced lower test errors in five of the seven datasets studied.
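
A minimal sketch of the regularization-plus-cross-validation setup on synthetic p >> n data; the actual microarray datasets, kernel methods and cross-validation extensions from the study are not reproduced, and all sizes and settings below are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegressionCV
from sklearn.model_selection import train_test_split

# n = 100 observations, p = 5000 features, so n/p < 1 (the HDLSS regime).
X, y = make_classification(n_samples=100, n_features=5000, n_informative=20,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# L2-regularized logistic regression; the penalty strength is tuned by 5-fold CV.
clf = LogisticRegressionCV(Cs=10, cv=5, penalty="l2", max_iter=5000)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```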

Relevance: 30.00%

Abstract:

Report published in the Proceedings of the National Conference on "Education and Research in the Information Society", Plovdiv, May, 2015

Relevance: 30.00%

Abstract:

Report published in the Proceedings of the National Conference on "Education and Research in the Information Society", Plovdiv, May, 2015