931 results for Interval discrete log problem
Abstract:
Randomness in the source condition, in addition to heterogeneity in the system parameters, can also be a major source of uncertainty in the concentration field. Hence, a more general form of the problem formulation is necessary to consider randomness in both the source condition and the system parameters. When the source varies with time, the unsteady problem can be solved using the unit response function. When the system parameters are random, the response function becomes a random function that depends on the randomness in the system parameters. In the present study, the source is modelled as a random discrete process with either a fixed interval or a random interval (a Poisson process), and an attempt is made to assess the relative effects of various types of source uncertainty on the probabilistic behaviour of the concentration in a porous medium while the system parameters are also modelled as random fields. Analytical expressions for the mean and covariance of the concentration due to a random discrete source are derived in terms of the mean and covariance of the unit response function. The probabilistic behaviour of the random response function is obtained using a perturbation-based stochastic finite element method (SFEM), which performs well for mild heterogeneity. The proposed method is applied to both 1-D and 3-D solute transport problems, and the SFEM results are compared with Monte Carlo simulation for the 1-D problems.
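To make the superposition concrete, here is a minimal Monte Carlo sketch of c(t) = Σᵢ qᵢ h(t − tᵢ) for a Poisson-timed source. It assumes an illustrative closed-form unit response and hypothetical parameter values; in the study the response function itself is random and is obtained via SFEM.

```python
import numpy as np

rng = np.random.default_rng(0)

def unit_response(t, v=1.0, D=0.1, x=1.0):
    """Illustrative closed-form 1-D advection-dispersion unit response
    at location x (a hypothetical stand-in for the SFEM response)."""
    out = np.zeros_like(t, dtype=float)
    pos = t > 0
    tp = t[pos]
    out[pos] = x / np.sqrt(4 * np.pi * D * tp**3) \
        * np.exp(-(x - v * tp)**2 / (4 * D * tp))
    return out

T, t_grid = 10.0, np.linspace(0.0, 10.0, 200)
samples = []
for _ in range(2000):
    n = rng.poisson(2.0 * T)                 # Poisson-timed releases
    times = np.sort(rng.uniform(0.0, T, n))
    q = rng.lognormal(0.0, 0.3, n)           # random release strengths
    c = np.zeros_like(t_grid)
    for qi, ti in zip(q, times):             # c(t) = sum_i q_i h(t - t_i)
        c += qi * unit_response(t_grid - ti)
    samples.append(c)

samples = np.asarray(samples)
mean_c = samples.mean(axis=0)                # E[c(t)]
cov_c = np.cov(samples, rowvar=False)        # Cov[c(t), c(t')]
```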
Abstract:
Keyword-based search suffers from the problems of synonymic and polysemic queries. Current approaches address only the problem of synonymic queries, in which different queries may express the same information requirement. The problem of polysemic queries, i.e., the same query having different intentions, remains unaddressed. In this paper, we propose the notion of intent clusters, whose members share the same intention. We develop a clustering algorithm that uses the user session information in query logs, in addition to query-URL entries, to identify clusters of queries having the same intention. The proposed approach is studied through case examples from actual AOL log data, and the clustering algorithm is shown to be successful in discerning user intentions.
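As a toy illustration of the idea (not the paper's algorithm), the sketch below clusters hypothetical log records using shared sessions and shared clicked URLs as evidence, so a polysemic query such as "jaguar" lands in two distinct intent clusters.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical log records: (session_id, query, clicked_url)
log = [
    ("s1", "jaguar", "jaguar.com"),             # car intent
    ("s1", "jaguar price", "jaguar.com"),
    ("s2", "jaguar", "wikipedia.org/Jaguar"),   # animal intent
    ("s2", "jaguar habitat", "wikipedia.org/Jaguar"),
]

def intent_clusters(log):
    """Union-find over log entries: two entries belong to the same intent
    cluster if they share a session or a clicked URL (simplified rule)."""
    parent = list(range(len(log)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    def union(i, j):
        parent[find(i)] = find(j)
    for i, j in combinations(range(len(log)), 2):
        si, _, ui = log[i]
        sj, _, uj = log[j]
        if si == sj or ui == uj:
            union(i, j)
    clusters = defaultdict(set)
    for i in range(len(log)):
        clusters[find(i)].add(log[i][1])
    return list(clusters.values())

print(intent_clusters(log))
# e.g. [{'jaguar', 'jaguar price'}, {'jaguar', 'jaguar habitat'}]
```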
Abstract:
The problem of admission control of packets in communication networks is studied in a continuous-time queueing framework under different classes of service and delayed information feedback. We develop and use a variant of a simulation-based, two-timescale simultaneous perturbation stochastic approximation (SPSA) algorithm for finding an optimal feedback policy within the class of threshold-type policies. Even though SPSA was originally designed for continuous parameter optimization, its variant for the discrete parameter case is seen to work well. We give a proof of the hypotheses needed to show convergence of the algorithm in our setting, along with a sketch of the convergence analysis. Extensive numerical experiments with the algorithm are presented for different parameter specifications. In particular, we study the effect of feedback delays on system performance.
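A minimal single-timescale sketch of the discrete-parameter SPSA idea follows; the paper's algorithm is a two-timescale variant driven by a queueing simulation with delayed feedback, whereas the quadratic cost and all constants below are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

def cost(theta_int):
    """Placeholder for the long-run average cost returned by a simulation
    at integer thresholds theta_int (target and noise level are made up)."""
    target = np.array([4.0, 7.0])
    return float(np.sum((theta_int - target) ** 2)) + rng.normal(0.0, 0.5)

lo, hi = 0.0, 20.0                  # admissible threshold range
theta = np.array([1.0, 1.0])        # continuous iterate kept internally
for n in range(1, 2001):
    a_n = 0.1 / n ** 0.6            # decaying step size
    d = rng.choice([-1.0, 1.0], size=theta.shape)   # Bernoulli +/-1 perturbation
    y_plus = cost(np.clip(np.round(theta + d), lo, hi))
    y_minus = cost(np.clip(np.round(theta - d), lo, hi))
    # SPSA gradient estimate: (y+ - y-) / (2 d_i), and 1/d_i = d_i here
    theta = np.clip(theta - a_n * (y_plus - y_minus) / 2.0 * d, lo, hi)

print(np.round(theta))              # estimated optimal thresholds
```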
Abstract:
Tanner graph representations of linear block codes are widely used by iterative decoding algorithms for recovering data transmitted across a noisy communication channel from errors and erasures introduced by the channel. The stopping distance of a Tanner graph T for a binary linear block code C determines the number of erasures correctable using iterative decoding on T when data is transmitted across a binary erasure channel using C. We show that the problem of finding the stopping distance of a Tanner graph is hard to approximate within any positive constant approximation ratio in polynomial time unless P = NP. It is also shown, as a consequence, that there can be no approximation algorithm for the problem achieving an approximation ratio of 2^((log n)^(1−ε)) for any ε > 0 unless NP ⊆ DTIME(n^poly(log n)).
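For reference, a stopping set is a nonempty set S of variable nodes such that no check node has exactly one neighbour in S, and the stopping distance is the size of the smallest such S. A brute-force sketch (exponential time, consistent with the hardness result) for a small parity-check matrix:

```python
import numpy as np
from itertools import combinations

def stopping_distance(H):
    """Smallest nonempty stopping set of the Tanner graph of parity-check
    matrix H, by exhaustive search over variable-node subsets."""
    m, n = H.shape
    for size in range(1, n + 1):
        for S in combinations(range(n), size):
            cols = H[:, list(S)]
            # S is a stopping set iff no check touches S exactly once
            if not np.any(cols.sum(axis=1) == 1):
                return size
    return None

# Parity-check matrix of the [7,4] Hamming code
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
print(stopping_distance(H))   # 3 for this Tanner graph
```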
Abstract:
The notion of optimization is inherent in protein design. A long linear chain of twenty types of amino acid residues is known to fold to a 3-D conformation that minimizes the combined inter-residue energy interactions. There are two distinct protein design problems, viz. predicting the folded structure from a given sequence of amino acid monomers (the folding problem) and determining a sequence for a given folded structure (the inverse folding problem). These two problems are closely analogous to engineering structural analysis and structural optimization, respectively. In the folding problem, a protein chain with a given sequence folds to a conformation, called the native state, which has a unique global minimum energy value when compared to all other unfolded conformations. This involves a search in the conformation space, and is somewhat akin to the principle of minimum potential energy that determines the deformed static equilibrium configuration of an elastic structure of given topology, shape, and size subjected to certain boundary conditions. In the inverse folding problem, one has to design a sequence with some objectives (having a specific feature in the folded structure, docking with another protein, etc.) and constraints (the sequence being fixed in some portion, a particular composition of amino acid types, etc.) such that the sequence folds to the desired conformation. This requires a search in the sequence space, and is similar to structural optimization in the design-variable space, wherein a certain feature of the structural response is optimized subject to some constraints while satisfying the governing static or dynamic equilibrium equations. Based on this similarity, in this work we apply topology optimization methods to protein design, discuss modeling issues, and present some initial results.
Abstract:
A distributed system is a collection of networked autonomous processing units which must work in a cooperative manner. Currently, large-scale distributed systems, such as various telecommunication and computer networks, are abundant and used in a multitude of tasks. The field of distributed computing studies what can be computed efficiently in such systems. Distributed systems are usually modelled as graphs where nodes represent the processors and edges denote communication links between processors. This thesis concentrates on the computational complexity of the distributed graph colouring problem. The objective of the graph colouring problem is to assign a colour to each node in such a way that no two nodes connected by an edge share the same colour. In particular, it is often desirable to use only a small number of colours. This task is a fundamental symmetry-breaking primitive in various distributed algorithms. A graph that has been coloured in this manner using at most k different colours is said to be k-coloured. This work examines the synchronous message-passing model of distributed computation: every node runs the same algorithm, and the system operates in discrete synchronous communication rounds. During each round, a node can communicate with its neighbours and perform local computation. In this model, the time complexity of a problem is the number of synchronous communication rounds required to solve the problem. It is known that 3-colouring any k-coloured directed cycle requires at least ½(log* k − 3) communication rounds and is possible in ½(log* k + 7) communication rounds for all k ≥ 3. This work shows that for any k ≥ 3, colouring a k-coloured directed cycle with at most three colours is possible in ½(log* k + 3) rounds. In contrast, it is also shown that for some values of k, colouring a directed cycle with at most three colours requires at least ½(log* k + 1) communication rounds. Furthermore, in the case of directed rooted trees, reducing a k-colouring to a 3-colouring requires at least log* k + 1 rounds for some k, and is possible in log* k + 3 rounds for all k ≥ 3. The new positive and negative results are derived using computational methods, as the existence of distributed colouring algorithms corresponds to the colourability of so-called neighbourhood graphs. The colourability of these graphs is analysed using Boolean satisfiability (SAT) solvers. Finally, this thesis shows that similar methods are applicable in capturing the existence of distributed algorithms for other graph problems, such as the maximal matching problem.
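The SAT-based approach reduces the existence of a fast distributed colouring algorithm to the colourability of a neighbourhood graph. As a generic illustration of the encoding step (not the thesis's specific tooling), the sketch below emits a DIMACS CNF formula that is satisfiable if and only if a given graph is 3-colourable, ready to feed to any off-the-shelf SAT solver.

```python
def three_colouring_cnf(n, edges):
    """DIMACS CNF that is satisfiable iff the graph is 3-colourable.
    Variable var(v, c) = 3*v + c + 1 means 'vertex v gets colour c'."""
    var = lambda v, c: 3 * v + c + 1
    clauses = []
    for v in range(n):
        clauses.append([var(v, 0), var(v, 1), var(v, 2)])   # some colour
        for c in range(3):                                  # at most one colour
            for d in range(c + 1, 3):
                clauses.append([-var(v, c), -var(v, d)])
    for u, v in edges:                                      # proper colouring
        for c in range(3):
            clauses.append([-var(u, c), -var(v, c)])
    lines = [f"p cnf {3 * n} {len(clauses)}"]
    lines += [" ".join(map(str, cl)) + " 0" for cl in clauses]
    return "\n".join(lines)

# A 5-cycle is 3-colourable, so this formula is satisfiable.
print(three_colouring_cnf(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]))
```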
Abstract:
Let G(V, E) be a simple, undirected graph where V is the set of vertices and E is the set of edges. A b-dimensional cube is a Cartesian product l(1) x l(2) x ... x l(b), where each l(i) is a closed interval of unit length on the real line. The cubicity of G, denoted by cub(G), is the minimum positive integer b such that the vertices in G can be mapped to axis-parallel b-dimensional cubes in such a way that two vertices are adjacent in G if and only if their assigned cubes intersect. An interval graph is a graph that can be represented as the intersection of intervals on the real line, i.e. the vertices of an interval graph can be mapped to intervals on the real line such that two vertices are adjacent if and only if their corresponding intervals overlap. Suppose S(m) denotes a star graph on m+1 nodes. We define the claw number ψ(G) of the graph to be the largest positive integer m such that S(m) is an induced subgraph of G. It can be easily shown that the cubicity of any graph is at least ⌈log₂ ψ(G)⌉. In this article, we show that for an interval graph G, ⌈log₂ ψ(G)⌉ ≤ cub(G) ≤ ⌈log₂ ψ(G)⌉ + 2. It is not clear whether the upper bound of ⌈log₂ ψ(G)⌉ + 2 is tight: till now we are unable to find any interval graph with cub(G) > ⌈log₂ ψ(G)⌉. We also show that for an interval graph G, cub(G) ≤ ⌈log₂ α⌉, where α is the independence number of G. Therefore, in the special case of ψ(G) = α, cub(G) is exactly ⌈log₂ α⌉. The concept of cubicity can be generalized by considering boxes instead of cubes. A b-dimensional box is a Cartesian product l(1) x l(2) x ... x l(b), where each l(i) is a closed interval on the real line. The boxicity of a graph, denoted box(G), is the minimum k such that G is the intersection graph of k-dimensional boxes. It is clear that box(G) ≤ cub(G). From the above result, it follows that for any graph G, cub(G) ≤ box(G)⌈log₂ α⌉. (C) 2010 Wiley Periodicals, Inc. J Graph Theory 65: 323-333, 2010
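A brute-force sketch of the lower bound: ψ(G) equals the maximum, over vertices v, of the independence number of the subgraph induced on N(v), which is feasible to compute exhaustively only for tiny graphs. The example interval graph below is hypothetical.

```python
import math
from itertools import combinations

def claw_number(adj):
    """psi(G): the largest m with an induced star S(m); computed as the
    maximum independence number over vertex neighbourhoods (brute force)."""
    n = len(adj)
    best = 0
    for v in range(n):
        nbrs = [u for u in range(n) if adj[v][u]]
        for size in range(len(nbrs), best, -1):
            if any(all(not adj[a][b] for a, b in combinations(S, 2))
                   for S in combinations(nbrs, size)):
                best = size
                break
    return best

# Interval graph of [0,10], [1,2], [4,5], [7,8]: vertex 0 meets the other
# three, which are pairwise disjoint, so psi = 3.
adj = [[0, 1, 1, 1],
       [1, 0, 0, 0],
       [1, 0, 0, 0],
       [1, 0, 0, 0]]
psi = claw_number(adj)
print(psi, math.ceil(math.log2(psi)))   # lower bound ceil(log2 psi) on cub(G)
```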
Abstract:
This work is a survey of the average cost control problem for discrete-time Markov processes. The authors have attempted to put together a comprehensive account of the considerable research on this problem over the past three decades. The exposition ranges from finite to Borel state and action spaces and includes a variety of methodologies to find and characterize optimal policies. The authors have included a brief historical perspective of the research efforts in this area and have compiled a substantial yet not exhaustive bibliography. The authors have also identified several important questions that are still open to investigation.
Abstract:
We show by numerical simulations that discretized versions of commonly studied continuum nonlinear growth equations (such as the Kardar-Parisi-Zhang equation and the Lai-Das Sarma-Villain equation) and related atomistic models of epitaxial growth have a generic instability in which isolated pillars (or grooves) on an otherwise flat interface grow in time when their height (or depth) exceeds a critical value. Depending on the details of the model, the instability found in the discretized version may or may not be present in the truly continuum growth equation, indicating that the behavior of discretized nonlinear growth equations may be very different from that of their continuum counterparts. This instability can be controlled either by the introduction of higher-order nonlinear terms with appropriate coefficients or by restricting the growth of pillars (or grooves) by other means. A number of such "controlled instability" models are studied by simulation. For appropriate choices of the parameters used for controlling the instability, these models exhibit intermittent behavior, characterized by multiexponent scaling of height fluctuations, over the time interval during which the instability is active. The behavior found in this regime is very similar to the "turbulent" behavior observed in recent simulations of several one- and two-dimensional atomistic models of epitaxial growth.
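A minimal sketch of the kind of discretized equation studied: an Euler discretization of the 1-D KPZ equation with an isolated pillar in the initial condition. The parameters are illustrative rather than those of the paper; for pillar heights below the critical value the pillar instead relaxes away.

```python
import numpy as np

rng = np.random.default_rng(2)

# Euler discretization of the 1-D KPZ equation
#   dh/dt = nu * d2h/dx2 + (lam/2) * (dh/dx)^2 + eta
L, dt, nu, lam = 256, 0.01, 1.0, 3.0       # illustrative parameters
h = np.zeros(L)
h[L // 2] = 10.0                           # isolated pillar on a flat interface
for step in range(50000):
    lap = np.roll(h, 1) - 2.0 * h + np.roll(h, -1)     # discrete d2h/dx2
    grad = (np.roll(h, -1) - np.roll(h, 1)) / 2.0      # discrete dh/dx
    h = h + dt * (nu * lap + 0.5 * lam * grad ** 2) \
          + np.sqrt(dt) * rng.normal(0.0, 1.0, L)
    if not np.isfinite(h).all() or h.max() > 1e6:
        print(f"pillar instability: height diverged at step {step}")
        break
else:
    print("pillar relaxed: height stayed below the critical value")
```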
Abstract:
We consider Gaussian multiple-input multiple-output (MIMO) channels with discrete input alphabets. We propose a non-diagonal precoder based on the X-Codes in [1] to increase the mutual information. The MIMO channel is transformed into a set of parallel subchannels using singular value decomposition (SVD), and X-Codes are then used to pair the subchannels. X-Codes are fully characterized by the pairings and a 2 x 2 real rotation matrix for each pair (parameterized with a single angle). This precoding structure enables us to express the total mutual information as a sum of the mutual information of all the pairs. The problem of finding the optimal precoder with the above structure, which maximizes the total mutual information, is solved by: i) optimizing the rotation angle and the power allocation within each pair and ii) finding the optimal pairing and power allocation among the pairs. It is shown that the mutual information achieved with the proposed pairing scheme is very close to that achieved with the optimal precoder by Cruz et al., and is significantly better than the mercury/waterfilling strategy by Lozano et al. Our approach greatly simplifies both the precoder optimization and the detection complexity, making it suitable for practical applications.
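A sketch of the precoding structure, assuming the natural strongest-with-weakest pairing and illustrative (unoptimized) rotation angles; the paper optimizes the angles, the power allocation, and the pairing itself.

```python
import numpy as np

rng = np.random.default_rng(3)

n_t = n_r = 4
H = rng.normal(size=(n_r, n_t)) + 1j * rng.normal(size=(n_r, n_t))
U, s, Vh = np.linalg.svd(H)           # parallel subchannels with gains s

def pairing_precoder(n, angles):
    """2x2 real rotation on each subchannel pair (i, n-1-i), i.e. the
    strongest subchannel is paired with the weakest, and so on."""
    G = np.zeros((n, n))
    pairs = [(i, n - 1 - i) for i in range(n // 2)]
    for (i, j), th in zip(pairs, angles):
        c, s_ = np.cos(th), np.sin(th)
        G[i, i], G[i, j], G[j, i], G[j, j] = c, -s_, s_, c
    return G

G = pairing_precoder(n_t, angles=[np.pi / 6, np.pi / 5])  # illustrative angles
precoder = Vh.conj().T @ G            # overall precoder: V times pair rotations
x = rng.choice([-1.0, 1.0], size=n_t) # e.g. BPSK symbol on each subchannel
# H (V G x) = U diag(s) G x: each pair sees a rotated 2x2 real channel
y = H @ (precoder @ x)
```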
Abstract:
The problem of electromagnetic wave propagation in a rectangular waveguide containing a thick iris is considered, and its complete solution is obtained by reducing it to two suitable integral equations, one of the first kind and the other of the second kind. These integral equations are solved approximately by using truncated Fourier series for the unknown functions. The reflection coefficient is computed numerically from the two integral equation approaches, and almost the same numerical results are obtained. It is also depicted graphically against the wave number and compared with thin-iris results, which are computed using complementary formulations coupled with Galerkin approximations. While the reflection coefficient for a thin iris steadily increases with the wave number, for a thick iris it fluctuates, and zero reflection occurs at certain wave numbers. The number of zeros of the reflection coefficient for a thick iris increases with the thickness. Thus a thick iris becomes completely transparent at some discrete wave numbers. This phenomenon may be significant in the modelling of rectangular waveguides.
Abstract:
In this study, we investigated nonlinear measures of chaos of QT interval time series in 28 normal control subjects, 36 patients with panic disorder, and 18 patients with major depression in supine and standing postures. We obtained the minimum embedding dimension (MED) and the largest Lyapunov exponent (LLE) of instantaneous heart rate (HR) and QT interval series. MED quantifies the system's complexity, and LLE its predictability. There was a significantly lower MED and a significantly increased LLE of QT interval time series in patients. Most importantly, the nonlinear indices of the QT/HR time series, MEDqthr (MED of QT/HR) and LLEqthr (LLE of QT/HR), were highly significantly different between controls and both patient groups in either posture. Results remained the same even after adjusting for age. The increased LLE of QT interval time series in patients with anxiety and depression is in line with our previous findings, using linear techniques, of higher QTvi in these patients (QT variability index: the log of the ratio of QT variability, normalized by mean QT squared, to heart rate variability, normalized by mean heart rate squared). Increased LLEqthr may be a more sensitive tool to study cardiac repolarization and a valuable addition to time-domain measures such as QTvi. This is especially important in light of the finding that LLEqthr correlated poorly and nonsignificantly with QTvi. These findings suggest an increase in relative cardiac sympathetic activity and a decrease in certain aspects of cardiac vagal function in patients with anxiety as well as depression. The lack of correlation between QTvi and LLEqthr suggests that this nonlinear index is a valuable addition to the linear measures. These findings may also help to explain the higher incidence of cardiovascular mortality in patients with anxiety and depressive disorders. (C) 2002 Elsevier Science Ireland Ltd. All rights reserved.
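For concreteness, a simplified Rosenstein-style estimator of the LLE of a scalar series is sketched below; the clinical pipeline and parameter choices in the study may differ, and in practice the fit should be restricted to the initial linear region of the divergence curve.

```python
import numpy as np

def largest_lyapunov(x, m=5, tau=1, theiler=10, t_max=8):
    """Rosenstein-style LLE estimate: delay-embed the series, follow each
    point's nearest neighbour (outside a Theiler window), and fit the slope
    of the mean log-divergence curve."""
    n = len(x) - (m - 1) * tau
    emb = np.array([x[i:i + (m - 1) * tau + 1:tau] for i in range(n)])
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=2)
    for i in range(n):                          # exclude temporal neighbours
        d[i, max(0, i - theiler):i + theiler + 1] = np.inf
    nn = d.argmin(axis=1)
    ks = np.arange(1, t_max)
    div = []
    for k in ks:
        dk = [np.linalg.norm(emb[i + k] - emb[nn[i] + k])
              for i in range(n) if i + k < n and nn[i] + k < n]
        div.append(np.mean(np.log([v for v in dk if v > 0])))
    return np.polyfit(ks, div, 1)[0]            # slope = LLE per sample

# Sanity check on a known chaotic series: the logistic map at r = 4,
# whose Lyapunov exponent is ln 2 ~ 0.69 per iteration.
x = [0.4]
for _ in range(1000):
    x.append(4.0 * x[-1] * (1.0 - x[-1]))
print(largest_lyapunov(np.array(x)))
```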
Abstract:
Bid optimization is becoming quite popular in sponsored search auctions on the Web. Given a keyword and the maximum willingness to pay of each advertiser interested in the keyword, the bid optimizer generates a profile of bids for the advertisers with the objective of maximizing customer retention without compromising the revenue of the search engine. In this paper, we present a bid optimization algorithm that is based on a Nash bargaining model, where the first player is the search engine and the second player is a virtual agent representing all the bidders. We make the realistic assumption that each bidder specifies a maximum willingness-to-pay value and a discrete, finite set of bid values. We show that the Nash bargaining solution for this problem always lies on a certain edge of the convex hull, one endpoint of which is the vector of the maximum willingness-to-pay values of all the bidders. We show that the other endpoint of this edge can be computed as the solution of a linear programming problem. We also show how the solution can be transformed into a bid profile for the advertisers.
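As a generic illustration of the bargaining step (not the paper's specific LP formulation): when the feasible set is the convex hull of finitely many 2-D outcome points, the Nash product is quadratic along any segment, so each candidate edge can be maximized in closed form.

```python
import numpy as np

def nash_bargaining(points, d):
    """Nash bargaining solution over the convex hull of 2-D utility points
    with disagreement point d: maximize (u1 - d1)(u2 - d2). The optimum lies
    on the hull boundary, and the product is quadratic along each segment."""
    pts = np.asarray(points, float) - np.asarray(d, float)
    best_u, best_val = None, -np.inf
    n = len(pts)
    for i in range(n):
        for j in range(n):
            p, q = pts[i], pts[j]
            dlt = q - p
            # f(t) = (p1 + t*dlt1)(p2 + t*dlt2) = a t^2 + b t + c
            a, b = dlt[0] * dlt[1], dlt[0] * p[1] + dlt[1] * p[0]
            cands = [0.0, 1.0]
            if a != 0 and 0 < -b / (2 * a) < 1:
                cands.append(-b / (2 * a))      # interior stationary point
            for t in cands:
                u = p + t * dlt
                if min(u) >= 0 and u[0] * u[1] > best_val:
                    best_u, best_val = u + np.asarray(d, float), u[0] * u[1]
    return best_u, best_val

# Toy outcome points: (search engine revenue, aggregate advertiser utility)
outcomes = [(10, 1), (8, 4), (5, 6), (2, 7)]
print(nash_bargaining(outcomes, d=(0, 0)))      # optimum on the (8,4)-(5,6) edge
```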
Abstract:
We present two efficient discrete parameter simulation optimization (DPSO) algorithms for the long-run average cost objective. One of these algorithms uses the smoothed functional approximation (SFA) procedure, while the other is based on simultaneous perturbation stochastic approximation (SPSA). The use of SFA for DPSO has not been proposed previously in the literature. Further, both algorithms adopt an interesting technique of random projections that we present here for the first time. We give a proof of convergence of our algorithms. Next, we present detailed numerical experiments on a problem of admission control with dependent service times. We consider two different settings involving parameter sets of moderate and large size, respectively. In the first setting, we also show performance comparisons with the well-studied optimal computing budget allocation (OCBA) algorithm and the equal allocation algorithm. Note to Practitioners: Even though SPSA and SFA were devised in the literature for continuous optimization problems, our results indicate that they can be powerful techniques even when adapted to discrete optimization settings. OCBA is widely recognized as one of the most powerful methods for discrete optimization when the parameter sets are of small or moderate size. In a setting involving a parameter set of size 100, we observe that when the computing budget is small, both SPSA and OCBA show similar performance and are better than SFA; however, as the computing budget is increased, SPSA and SFA show better performance than OCBA. Both our algorithms also show good performance when the parameter set has a size of 10^8. SFA is seen to show the best overall performance. Unlike most other DPSO algorithms in the literature, an advantage of our algorithms is that they are easily implementable regardless of the size of the parameter sets, and they show good performance in both scenarios.
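One plausible reading of the random-projection step (the paper's exact scheme may differ): round each continuous coordinate of the SPSA/SFA iterate to one of its two neighbouring grid points, with probabilities chosen so that the projection is unbiased.

```python
import numpy as np

rng = np.random.default_rng(4)

def random_projection(theta, grid):
    """Randomly round each continuous coordinate of theta to a neighbouring
    point of the sorted array `grid`, with P(round up) proportional to the
    fractional position, so E[projection] = theta (within the grid range).
    A hypothetical sketch of the random-projection idea."""
    out = np.empty_like(theta)
    for k, th in enumerate(theta):
        lo = grid[np.searchsorted(grid, th) - 1] if th > grid[0] else grid[0]
        hi = grid[np.searchsorted(grid, th)] if th < grid[-1] else grid[-1]
        p = 0.0 if hi == lo else (th - lo) / (hi - lo)
        out[k] = hi if rng.random() < p else lo
    return out

grid = np.arange(0, 11)                      # admissible discrete values
theta_cont = np.array([3.4, 7.9])            # continuous SPSA/SFA iterate
print(random_projection(theta_cont, grid))   # e.g. [3. 8.]
```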