978 results for Computer Science (all)


Relevance: 90.00%

Abstract:

The basic requirements for secure communication in a vehicular ad hoc network (VANET) are anonymous authentication with source non-repudiation and integrity. Existing security protocols in VANETs do not differentiate between the anonymity requirements of different vehicles; the level of anonymity they provide is the same for every vehicle in the network. Providing a high level of anonymity, however, also entails high resource requirements. Hence, in a resource-constrained VANET, it is necessary to differentiate between the anonymity requirements of different vehicles and to provide each vehicle with the level of anonymity it actually needs. In this paper, we propose a novel authentication protocol that can provide multiple levels of anonymity in VANETs. The protocol uses an identity-based signature mechanism and pseudonyms to implement anonymous authentication with source non-repudiation and integrity. By controlling the number of pseudonyms issued to a vehicle and the lifetime of each pseudonym, the protocol controls the level of anonymity provided to that vehicle. In addition, the protocol includes a novel pseudonym issuance policy that ensures the uniqueness of a newly generated pseudonym by checking only a very small subset of the pseudonyms previously issued to all vehicles. The protocol cryptographically binds an expiry date to each pseudonym and thereby enforces implicit revocation of pseudonyms. Analytical and simulation results confirm the effectiveness of the proposed protocol.
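
The implicit-revocation idea is easy to sketch. Below is a minimal illustration, assuming the issuing authority binds the expiry date to the pseudonym with a keyed tag; the paper uses an identity-based signature scheme, for which a plain HMAC stands in here, and all names are hypothetical.

import hmac, hashlib, os, time

ISSUER_KEY = os.urandom(32)          # held by the pseudonym-issuing authority

def issue_pseudonym(lifetime_s: int) -> dict:
    # Issue a fresh pseudonym whose expiry date is cryptographically bound to it.
    pseudonym = os.urandom(16).hex()
    expiry = int(time.time()) + lifetime_s
    msg = f"{pseudonym}|{expiry}".encode()
    tag = hmac.new(ISSUER_KEY, msg, hashlib.sha256).hexdigest()
    return {"pseudonym": pseudonym, "expiry": expiry, "tag": tag}

def verify_pseudonym(cred: dict) -> bool:
    # Accept only unexpired pseudonyms whose binding tag checks out (implicit revocation).
    msg = f"{cred['pseudonym']}|{cred['expiry']}".encode()
    expected = hmac.new(ISSUER_KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cred["tag"]) and time.time() < cred["expiry"]

cred = issue_pseudonym(lifetime_s=600)   # shorter lifetimes mean a smaller anonymity budget
print(verify_pseudonym(cred))            # True until the pseudonym expires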

Relevance: 90.00%

Abstract:

The maximum entropy approach to classification is well studied in applied statistics and machine learning, and almost all the methods in the literature are discriminative in nature. In this paper, we introduce a generative maximum entropy classification method with feature selection for large-dimensional data such as text datasets. To tackle the curse of dimensionality of large datasets, we employ the conditional independence assumption (naive Bayes) and perform feature selection simultaneously, by enforcing 'maximum discrimination' between the estimated class-conditional densities. For two-class problems, the proposed method uses the Jeffreys (J) divergence to discriminate between the class-conditional densities. To extend the method to the multi-class case, we propose a completely new approach based on a multi-distribution divergence: we replace the Jeffreys divergence by the Jensen-Shannon (JS) divergence to discriminate the conditional densities of multiple classes. To reduce computational complexity, we employ a modified Jensen-Shannon divergence (JS-GM) based on the AM-GM inequality, and we show that the resulting divergence is a natural generalization of the Jeffreys divergence to the case of multiple distributions. On the theoretical side, we show that when one intends to select the best features in a generative maximum entropy approach, maximum discrimination using the J-divergence emerges naturally in binary classification. A performance and comparative study of the proposed algorithms on large-dimensional text and gene expression datasets shows that our methods scale very well to large-dimensional data.
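
As a rough illustration of the two-class case (not the paper's full maximum entropy formulation), the sketch below estimates per-feature class-conditional Bernoulli densities under the naive Bayes assumption and ranks features by their Jeffreys divergence; the smoothing constant and toy data are assumptions made for the example.

import numpy as np

def kl_bernoulli(p, q):
    # Elementwise KL divergence between Bernoulli(p) and Bernoulli(q) densities.
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def jeffreys(p, q):
    # Jeffreys (J) divergence: the symmetrized KL divergence.
    return kl_bernoulli(p, q) + kl_bernoulli(q, p)

def select_features(X, y, k, alpha=1.0):
    # Rank binary features by the J-divergence between the two estimated
    # class-conditional Bernoulli densities (naive Bayes assumption), with
    # Laplace smoothing alpha, and return the indices of the top k features.
    p1 = (X[y == 1].sum(axis=0) + alpha) / ((y == 1).sum() + 2 * alpha)
    p0 = (X[y == 0].sum(axis=0) + alpha) / ((y == 0).sum() + 2 * alpha)
    return np.argsort(jeffreys(p1, p0))[::-1][:k]

# Toy data with 5 binary features; feature 0 copies the label, so it should rank first.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)
X = rng.integers(0, 2, size=(200, 5))
X[:, 0] = y
print(select_features(X, y, k=2))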

Relevance: 90.00%

Abstract:

The Cubic Sieve Method for solving the discrete logarithm problem in prime fields requires a nontrivial solution to the Cubic Sieve Congruence (CSC) x^3 ≡ y^2 z (mod p), where p is a given prime number. A nontrivial solution must also satisfy x^3 ≠ y^2 z and 1 ≤ x, y, z < p^α, where α is a given real number such that 1/3 < α ≤ 1/2. The CSC problem is to find an efficient algorithm to obtain a nontrivial solution to CSC. CSC can be parametrized as x ≡ v^2 z (mod p) and y ≡ v^3 z (mod p). In this paper, we give a deterministic polynomial-time (O(ln^3 p) bit operations) algorithm to determine, for a given v, a nontrivial solution to CSC, if one exists. Previously it took Õ(p^α) time in the worst case to determine this. We relate the CSC problem to the gap problem of fractional part sequences, where we need to determine the non-negative integers N satisfying the fractional part inequality {θN} < φ (θ and φ are given real numbers). The correspondence between the CSC problem and the gap problem is that determining the parameter z in the former problem corresponds to determining N in the latter problem. We also show, in the α = 1/2 case of CSC, that for a certain class of primes the CSC problem can be solved deterministically in Õ(p^(1/3)) time, compared to the previous best of Õ(p^(1/2)). It is empirically observed that about one out of three primes is covered by the above class.
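
The parametrization makes the search space easy to picture: fixing v, every residue z gives a candidate (x, y, z) with x ≡ v^2 z and y ≡ v^3 z (mod p) that automatically satisfies the congruence, so only the size bounds and nontriviality need checking. The brute-force scan below is a minimal sketch of that observation; it is not the paper's polynomial-time algorithm, which determines z without scanning, and the prime and range of v are arbitrary choices.

def csc_solution_for_v(p: int, v: int, alpha: float = 0.5):
    # Scan z for a nontrivial CSC solution x^3 = y^2 z (mod p) with
    # x = v^2 z (mod p), y = v^3 z (mod p) and 1 <= x, y, z < p**alpha.
    bound = int(p ** alpha)
    for z in range(1, bound):
        x = (pow(v, 2, p) * z) % p
        y = (pow(v, 3, p) * z) % p
        if 1 <= x < bound and 1 <= y < bound and x ** 3 != y * y * z:
            return x, y, z      # congruence holds by construction; nontrivial by the check
    return None

p = 99991                        # an arbitrary small prime for illustration
for v in range(2, 500):
    sol = csc_solution_for_v(p, v)
    if sol:
        print("v =", v, "gives (x, y, z) =", sol)
        break
else:
    print("no nontrivial solution for v < 500; such solutions are scarce")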

Relevance: 90.00%

Abstract:

In this paper, we extend the characterization of Z[x]/(f), where f ∈ Z[x], being a free Z-module to multivariate polynomial rings over any commutative Noetherian ring A. The characterization allows us to extend the Gröbner basis method of computing a k-vector space basis of residue class polynomial rings over a field k (the Macaulay-Buchberger basis theorem) to rings, i.e. A[x_1, ..., x_n]/a, where a ⊆ A[x_1, ..., x_n] is an ideal. We give some insights into the characterization for two special cases, when A = Z and A = k[θ_1, ..., θ_m]. As an application of this characterization, we show that the concept of border bases can be extended to rings when the corresponding residue class ring is a finitely generated, free A-module.
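
For the classical field case that the paper generalizes, the Macaulay-Buchberger basis theorem says that the monomials not divisible by any leading monomial of a Gröbner basis form a k-vector space basis of the residue class ring. A minimal sketch of that statement, with the Gröbner basis of a toy ideal worked out by hand rather than computed:

from itertools import product

def standard_monomials(leading_monomials, nvars, max_deg):
    # Monomials (as exponent tuples) not divisible by any leading monomial; by the
    # Macaulay-Buchberger theorem (field case) these form a k-vector space basis of
    # the residue class ring when the ideal is zero-dimensional.
    def divisible(m, lm):
        return all(mi >= li for mi, li in zip(m, lm))
    return [m for m in product(range(max_deg + 1), repeat=nvars)
            if not any(divisible(m, lm) for lm in leading_monomials)]

# Toy ideal I = <x^2 - 2, y^3 - x> in Q[x, y]; a lex Groebner basis (x > y) is
# {x - y^3, y^6 - 2}, with leading monomials x and y^6.
lms = [(1, 0), (0, 6)]
print(standard_monomials(lms, nvars=2, max_deg=6))
# [(0, 0), (0, 1), ..., (0, 5)]  i.e. 1, y, y^2, ..., y^5: a Q-basis of Q[x, y]/I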

Relevance: 90.00%

Abstract:

In self-organized public key management approaches, public key verification is achieved through verification routes formed by the transitive trust relationships among the network principals. Most of the existing approaches do not distinguish among the different available verification routes. Moreover, to ensure stronger security, it is important to choose an appropriate metric to evaluate the strength of a route. Furthermore, all existing self-organized approaches achieve authentication using certificate chains, which are highly resource-consuming. In this paper, we present a self-organized certificate-less on-demand public key management (CLPKM) protocol, which aims at providing the strongest verification routes for authentication purposes. It restricts the compromise probability of a verification route by restricting its length, and it evaluates the strength of a verification route using its end-to-end trust value. The other important aspect of the protocol is that it uses a MAC function instead of RSA certificates to perform public key verifications. By doing this, the protocol saves considerable computation power, bandwidth and storage space. We use an extended strand space model to analyze the correctness of the protocol. Analytical, simulation and testbed implementation results confirm the effectiveness of the proposed protocol.
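
A toy sketch of the 'MAC instead of RSA certificates' idea, under the assumption that the verifier already shares a pairwise key with an endorsing principal (in CLPKM such keys would be tied to the trusted verification route; the names below are hypothetical): one HMAC tag vouches for a node's public key binding, with no certificate chain.

import hmac, hashlib, os

shared_key = os.urandom(32)                  # pairwise key shared by verifier and endorser

def endorse(node_id: str, public_key: bytes) -> str:
    # The endorsing principal vouches for the (node id, public key) binding.
    return hmac.new(shared_key, node_id.encode() + public_key, hashlib.sha256).hexdigest()

def verify(node_id: str, public_key: bytes, tag: str) -> bool:
    # The verifier checks the binding with one MAC computation, no certificate chain.
    expected = hmac.new(shared_key, node_id.encode() + public_key, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

pk = os.urandom(32)                          # stand-in for a node's public key
tag = endorse("node-17", pk)
print(verify("node-17", pk, tag))            # True
print(verify("node-17", os.urandom(32), tag))  # False: key substitution is detected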

Relevance: 90.00%

Abstract:

Natural multispecies acoustic choruses, such as the dusk chorus of a tropical rain forest, consist of simultaneously signalling individuals of different species whose calls travel through a common shared medium before reaching their 'intended' receivers. This causes masking interference between signals and impedes signal detection, recognition and localization. The levels of acoustic overlap depend on a number of factors, including call structure, intensity, habitat-dependent signal attenuation and receiver tuning. In addition, acoustic overlap should also depend on caller density and the species composition of choruses, including the relative and absolute abundance of the different calling species. In this study, we used simulations to examine the effects of chorus species relative abundance and caller density on the levels of effective heterospecific acoustic overlap in multispecies choruses composed of the calls of five species of crickets and katydids that share the understorey of a rain forest in southern India. We found that, on average, species-even choruses resulted in higher levels of effective heterospecific acoustic overlap than choruses with strong dominance structures. This effect was found consistently across dominance levels ranging from 0.4 to 0.8 for larger choruses of forty individuals. For smaller choruses of twenty individuals, the effect was seen consistently for dominance levels of 0.6 and 0.8 but not 0.4. Effective acoustic overlap (EAO) increased with caller density, but the manner and extent of increase depended both on the species' call structure and on the acoustic context provided by the composition scenario. Phaloria sp. experienced very low levels of EAO and was highly buffered against changes in acoustic context, whereas other species experienced high EAO across contexts or were poorly buffered. These differences were not simply predictable from call structures. These simulation-based findings may have important implications for acoustic biodiversity monitoring and for the study of acoustic masking interference in natural environments.
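
A heavily simplified sketch of this kind of simulation: each individual calls as a periodic on/off pulse train, and heterospecific overlap for a focal caller is the fraction of its own calling time during which at least one heterospecific is also calling. The three synthetic 'species', their call parameters, and the even versus skewed chorus compositions below are illustrative assumptions, not the study's calibrated call structures.

import numpy as np

rng = np.random.default_rng(1)

def call_train(duty, period, t, phase):
    # 1 where the individual is calling: a periodic pulse train with the given duty cycle.
    return ((t + phase) % period) < duty * period

def mean_heterospecific_overlap(counts, t):
    # Average over individuals of the fraction of their own calling time that is
    # overlapped by at least one heterospecific caller.
    species = [(0.3, 1.0), (0.5, 0.7), (0.2, 1.3)]   # assumed (duty cycle, call period in s)
    trains, labels = [], []
    for sp, n in enumerate(counts):
        duty, period = species[sp]
        for _ in range(n):
            trains.append(call_train(duty, period, t, rng.uniform(0, period)))
            labels.append(sp)
    trains, labels = np.array(trains), np.array(labels)
    overlaps = []
    for i, sp in enumerate(labels):
        hetero = trains[labels != sp].any(axis=0)    # any heterospecific calling
        own = trains[i]
        overlaps.append((own & hetero).sum() / max(own.sum(), 1))
    return float(np.mean(overlaps))

t = np.arange(0, 60, 0.01)                            # 60 s of chorus at 10 ms resolution
print("even chorus  :", round(mean_heterospecific_overlap([7, 7, 7], t), 3))
print("skewed chorus:", round(mean_heterospecific_overlap([15, 3, 3], t), 3))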

Relevance: 90.00%

Abstract:

The present article describes a beautiful contribution of Alan Turing to our understanding of how animal coat patterns form. The question that Turing posed was the following. A collection of identical cells (or processors for that matter), all running the exact same program, and all communicating with each other in the exact same way, should always be in the same state. Yet they produce nonhomogeneous periodic patterns, like those seen on animal coats. How does this happen? Turing gave an elegant explanation for this phenomenon, namely that differences between the cells due to small amounts of random noise can actually be amplified into structured periodic patterns. We attempt to describe his core conceptual contribution below.
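
The mechanism is easy to reproduce numerically: start a two-species reaction-diffusion system from its uniform steady state plus tiny random noise, and the noise is amplified into a stationary periodic pattern. A minimal 1D sketch using the Schnakenberg model with standard Turing-unstable parameter values (an illustrative choice, not Turing's original system):

import numpy as np

# Schnakenberg activator-substrate model; standard Turing-unstable parameter values.
a, b, Du, Dv = 0.1, 0.9, 1.0, 40.0
N, dx, dt, steps = 200, 1.0, 0.01, 20000

rng = np.random.default_rng(0)
u = (a + b) * np.ones(N) + 0.01 * rng.standard_normal(N)           # uniform steady state...
v = b / (a + b) ** 2 * np.ones(N) + 0.01 * rng.standard_normal(N)  # ...plus tiny random noise

def laplacian(f):
    # Periodic one-dimensional Laplacian (second difference).
    return (np.roll(f, 1) - 2 * f + np.roll(f, -1)) / dx ** 2

for _ in range(steps):
    uvv = u * u * v
    u += dt * (Du * laplacian(u) + a - u + uvv)
    v += dt * (Dv * laplacian(v) + b - uvv)

# Identical cells obeying identical rules, yet the noise has grown into a periodic pattern.
spec = np.abs(np.fft.rfft(u - u.mean()))
k = spec[1:].argmax() + 1
print("u ranges from", round(float(u.min()), 2), "to", round(float(u.max()), 2))
print("dominant spatial period is about", round(N / k, 1), "grid units")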

Relevance: 90.00%

Abstract:

Let P be a set of n points in R^d. A point x is said to be a centerpoint of P if x is contained in every convex object that contains more than dn/(d+1) points of P. We call a point x a strong centerpoint for a family of objects C if x ∈ P is contained in every object C ∈ C that contains more than a constant fraction of the points of P. A strong centerpoint does not exist even for halfspaces in R^2. We prove that a strong centerpoint exists for axis-parallel boxes in R^d and give exact bounds. We then extend this to small strong ε-nets in the plane. Let ε_S(i) denote the smallest real number in [0, 1] such that there exists an ε_S(i)-net of size i with respect to S. We prove upper and lower bounds for ε_S(i) where S is the family of axis-parallel rectangles, halfspaces and disks.
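
A small empirical exploration of the definition (not the paper's construction or bounds): for a candidate point c in P, compute the largest fraction of P that an axis-parallel box avoiding c can contain; the point minimizing this value is the best strong-centerpoint candidate for boxes. It suffices to check 'canonical' boxes whose sides pass through coordinates of points of P.

import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
P = rng.random((15, 2))                     # a small random point set in the unit square
xs, ys = np.sort(P[:, 0]), np.sort(P[:, 1])

def worst_fraction(c):
    # Largest fraction of P that an axis-parallel box avoiding c can contain; it
    # suffices to check boxes whose sides pass through coordinates of points of P.
    worst = 0.0
    for x1, x2 in combinations(xs, 2):
        for y1, y2 in combinations(ys, 2):
            if x1 <= c[0] <= x2 and y1 <= c[1] <= y2:
                continue                    # the box contains the candidate
            inside = (P[:, 0] >= x1) & (P[:, 0] <= x2) & (P[:, 1] >= y1) & (P[:, 1] <= y2)
            worst = max(worst, float(inside.mean()))
    return worst

# Every axis-parallel box containing more than the printed fraction of P must contain
# the printed candidate point.
best = min(P, key=worst_fraction)
print("candidate point:", np.round(best, 2), " threshold fraction:", worst_fraction(best))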

Relevance: 90.00%

Abstract:

Smoothed functional (SF) schemes for gradient estimation are known to be efficient in stochastic optimization algorithms, especially when the objective is to improve the performance of a stochastic system. However, the performance of these methods depends on several parameters, such as the choice of a suitable smoothing kernel. Different kernels have been studied in the literature, including Gaussian, Cauchy and uniform distributions, among others. This article studies a new class of kernels based on the q-Gaussian distribution, which has gained popularity in statistical physics over the last decade. Though the importance of this family of distributions is attributed to its ability to generalize the Gaussian distribution, we observe that the class encompasses almost all existing smoothing kernels. This motivates us to study SF schemes for gradient estimation using the q-Gaussian distribution. Using the derived gradient estimates, we propose two-timescale algorithms for optimization of a stochastic objective function in a constrained setting with a projected gradient search approach. We prove the convergence of our algorithms to the set of stationary points of an associated ODE. We also demonstrate their performance numerically through simulations on a queuing model.
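
For reference, the smoothed functional gradient estimate with the plain Gaussian kernel (the q → 1 member of the q-Gaussian family) and a projected gradient search built on it look roughly as follows; the objective, step sizes and smoothing parameter are illustrative assumptions, and the paper's actual algorithms are two-timescale and work with noisy function evaluations.

import numpy as np

rng = np.random.default_rng(3)

def sf_gradient(f, x, beta=0.1, n_samples=200):
    # Smoothed functional gradient estimate with a Gaussian smoothing kernel:
    # average of eta/beta * (f(x + beta*eta) - f(x)) over eta ~ N(0, I).
    g = np.zeros_like(x)
    fx = f(x)
    for _ in range(n_samples):
        eta = rng.standard_normal(x.shape)
        g += eta / beta * (f(x + beta * eta) - fx)
    return g / n_samples

def projected_sf_descent(f, x0, lo, hi, step=0.05, iters=200):
    # Projected gradient search driven by the SF estimate (a single timescale here;
    # the paper's algorithms use two timescales and noisy function evaluations).
    x = np.clip(np.array(x0, dtype=float), lo, hi)
    for _ in range(iters):
        x = np.clip(x - step * sf_gradient(f, x), lo, hi)
    return x

f = lambda x: np.sum((x - 0.3) ** 2)          # a toy smooth objective on [0, 1]^2
print(projected_sf_descent(f, x0=[0.9, 0.9], lo=0.0, hi=1.0))   # approaches [0.3, 0.3]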

Relevance: 90.00%

Abstract:

This work is a follow-up to [2] (FUN 2010), which initiated a detailed analysis of the popular game of UNO®. We consider the solitaire version of the game, which was shown to be NP-complete. In [2], the authors also demonstrate an O(n^(c^2)) algorithm, where c is the number of colors across all the cards, which implies in particular that the problem is polynomial time when the number of colors is a constant. In this work, we propose a kernelization algorithm, a consequence of which is that the problem is fixed-parameter tractable when the number of colors is treated as a parameter. This removes the exponential dependence on c and answers the question stated in [2] in the affirmative. We also introduce a natural and possibly more challenging version of UNO that we call "All Or None UNO". For this variant, we prove that even the single-player version is NP-complete, and we show a single-exponential FPT algorithm, along with a cubic kernel.

Relevance: 90.00%

Abstract:

The boxicity (resp. cubicity) of a graph G(V, E) is the minimum integer k such that G can be represented as the intersection graph of axis-parallel boxes (resp. cubes) in R^k. Equivalently, it is the minimum number of interval graphs (resp. unit interval graphs) on the vertex set V such that the intersection of their edge sets is E. The problem of computing boxicity (resp. cubicity) is known to be inapproximable, even for restricted graph classes like bipartite, co-bipartite and split graphs, within an O(n^(1-ε))-factor for any ε > 0 in polynomial time, unless NP = ZPP. For any well-known graph class of unbounded boxicity, there is no known polynomial-time algorithm that approximates boxicity within an n^(1-ε) factor, for any ε > 0. In this paper, we consider the problem of approximating the boxicity (cubicity) of circular arc graphs, i.e. intersection graphs of arcs of a circle. Circular arc graphs are known to have unbounded boxicity, which could be as large as Ω(n). We give a (2 + 1/k)-factor (resp. (2 + ⌈log n⌉/k)-factor) polynomial-time approximation algorithm for computing the boxicity (resp. cubicity) of any circular arc graph, where k ≥ 1 is the value of the optimum solution. For normal circular arc (NCA) graphs, with an NCA model given, this can be improved to an additive two-approximation algorithm. The time complexity of the algorithms to approximately compute the boxicity (resp. cubicity) is O(mn + n^2) in both these cases, and in O(mn + kn^2) = O(n^3) time we also get the corresponding box (resp. cube) representations, where n is the number of vertices of the graph and m is its number of edges. Our additive two-approximation algorithm works directly for any proper circular arc graph, since their NCA models can be computed in polynomial time.
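
The definition is easy to make concrete: a box representation in R^k assigns each vertex k intervals, one per dimension, and two vertices are adjacent exactly when their intervals intersect in every dimension, so the edge set is the intersection of k interval graphs. A small sketch, independent of the paper's approximation algorithms, using a 2-dimensional representation of the 4-cycle (which has boxicity 2, since C4 is not an interval graph):

from itertools import combinations

def graph_of_boxes(boxes):
    # Intersection graph of axis-parallel boxes: boxes[v] is a list of k (lo, hi)
    # intervals, one per dimension; u and v are adjacent iff they overlap in every
    # dimension, i.e. the edge set is the intersection of k interval graphs.
    edges = set()
    for u, v in combinations(boxes, 2):
        if all(boxes[u][d][0] <= boxes[v][d][1] and boxes[v][d][0] <= boxes[u][d][1]
               for d in range(len(boxes[u]))):
            edges.add(frozenset((u, v)))
    return edges

# A 2-dimensional box representation of the 4-cycle a-b-c-d-a.
boxes = {
    "a": [(0, 10), (0, 1)],
    "b": [(0, 1), (0, 10)],
    "c": [(0, 10), (9, 10)],
    "d": [(9, 10), (0, 10)],
}
print(sorted(tuple(sorted(e)) for e in graph_of_boxes(boxes)))
# [('a', 'b'), ('a', 'd'), ('b', 'c'), ('c', 'd')]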

Relevance: 90.00%

Abstract:

Given a point set P and a class C of geometric objects, G_C(P) is a geometric graph with vertex set P such that any two vertices p and q are adjacent if and only if there is some C ∈ C containing both p and q but no other points from P. We study G_∇(P) graphs, where ∇ is the class of downward equilateral triangles (i.e., equilateral triangles with one of their sides parallel to the x-axis and the corner opposite to this side below that side). For point sets in general position, these graphs have been shown to be equivalent to half-Θ_6 graphs and TD-Delaunay graphs. The main result in our paper is that for point sets P in general position, G_∇(P) always contains a matching of size at least ⌈(|P| - 1)/3⌉ and this bound is tight. We also give some structural properties of G_⋆(P) graphs, where ⋆ is the class that contains both upward and downward equilateral triangles. We show that for point sets in general position, the block cut point graph of G_⋆(P) is simply a path. Through the equivalence of G_⋆(P) graphs with Θ_6 graphs, we also derive that any Θ_6 graph can have at most 5n - 11 edges, for point sets in general position.

Relevance: 90.00%

Abstract:

Given a connected outerplanar graph G of pathwidth p, we give an algorithm to add edges to G to get a supergraph of G which is 2-vertex-connected, outerplanar and of pathwidth O(p). This settles an open problem raised by Biedl [1] in the context of computing minimum-height planar straight-line drawings of outerplanar graphs, with their vertices placed on a two-dimensional grid. In conjunction with the result of this paper, the constant-factor approximation algorithm for this problem obtained by Biedl [1] for 2-vertex-connected outerplanar graphs will work for all outerplanar graphs.

Relevance: 90.00%

Abstract:

We address the parameterized complexity of Max Colorable Induced Subgraph on perfect graphs. The problem asks for a maximum-sized q-colorable induced subgraph of an input graph G. Yannakakis and Gavril [IPL 1987] showed that this problem is NP-complete even on split graphs if q is part of the input, but gave an n^O(q) algorithm on chordal graphs. We first observe that the problem is W[2]-hard parameterized by q, even on split graphs. However, when parameterized by l, the number of vertices in the solution, we give two fixed-parameter tractable algorithms. The first algorithm runs in time 5.44^l (n + #α(G))^O(1), where #α(G) is the number of maximal independent sets of the input graph. The second algorithm runs in time q^(l+o(l)) n^O(1) T_α, where T_α is the time required to find a maximum independent set in any induced subgraph of G. The first algorithm is efficient when the input graph contains only polynomially many maximal independent sets, for example split graphs and co-chordal graphs. The running time of the second algorithm is FPT in l alone (whenever T_α is a polynomial in n), since q ≤ l for all non-trivial situations. Finally, we show that (under standard complexity-theoretic assumptions) the problem does not admit a polynomial kernel on split and perfect graphs in the following sense: (a) on split graphs, we do not expect a polynomial kernel if q is a part of the input; (b) on perfect graphs, we do not expect a polynomial kernel even for fixed values of q ≥ 2.

Relevance: 90.00%

Abstract:

The problem of finding an optimal vertex cover in a graph is a classic NP-complete problem, and is a special case of the hitting set question. On the other hand, the hitting set problem, when asked in the context of induced geometric objects, often turns out to be exactly the vertex cover problem on restricted classes of graphs. In this work we explore a particular instance of such a phenomenon. We consider the problem of hitting all axis-parallel slabs induced by a point set P, and show that it is equivalent to the problem of finding a vertex cover on a graph whose edge set is the union of two Hamiltonian paths. We show the latter problem to be NP-complete, and also give an algorithm to find a vertex cover of size at most k, on graphs of maximum degree four, whose running time is 1.2637^k n^O(1).
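
The parameterized flavour of that last result can be illustrated with the textbook 2^k branching rule for vertex cover: pick any uncovered edge and branch on which endpoint joins the cover. The sketch below decides "vertex cover of size at most k" this way on a toy union of two Hamiltonian paths; the paper's 1.2637^k n^O(1) algorithm refines the branching with degree-based rules for maximum degree four, which are omitted here.

def vertex_cover_at_most_k(edges, k):
    # Decide whether the graph has a vertex cover of size <= k by the textbook
    # 2^k branching rule (not the paper's 1.2637^k algorithm).
    if not edges:
        return True
    if k == 0:
        return False
    u, v = edges[0]            # some endpoint of this edge must be in any cover
    return (vertex_cover_at_most_k([e for e in edges if u not in e], k - 1) or
            vertex_cover_at_most_k([e for e in edges if v not in e], k - 1))

# A toy union of two Hamiltonian paths on 6 vertices (the graph class arising from
# hitting axis-parallel slabs); the paths themselves are arbitrary choices.
path1 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
path2 = [(0, 2), (2, 4), (4, 1), (1, 3), (3, 5)]
edges = [frozenset(e) for e in path1 + path2]
print(vertex_cover_at_most_k(edges, 4), vertex_cover_at_most_k(edges, 3))   # True False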