32 results for ANSWER
at Indian Institute of Science - Bangalore - India
Abstract:
In this paper we study two problems in feedback stabilization. The first is the simultaneous stabilization problem, which can be stated as follows: given plants G_0, G_1, ..., G_l, does there exist a single compensator C that stabilizes all of them? The second is that of stabilization by a stable compensator or, more generally, a "least unstable" compensator: given a plant G, we would like to know whether or not there exists a stable compensator C that stabilizes G; if not, what is the smallest number of right half-plane poles (counted according to their McMillan degree) that any stabilizing compensator must have? We show that the two problems are equivalent in the following sense. The problem of simultaneously stabilizing l + 1 plants can be reduced to the problem of simultaneously stabilizing l plants using a stable compensator, which in turn can be stated as the following purely algebraic problem: given 2l matrices A_1, ..., A_l, B_1, ..., B_l, where A_i, B_i are right-coprime for all i, does there exist a matrix M such that A_i + M B_i is unimodular for all i? Conversely, the problem of simultaneously stabilizing l plants using a stable compensator can be formulated as one of simultaneously stabilizing l + 1 plants. The problem of determining whether or not there exists an M such that A + BM is unimodular, given a right-coprime pair (A, B), turns out to be a special case of a question concerning a matrix division algorithm in a proper Euclidean domain. We give an answer to this question, and we believe this result might be of some independent interest. We show that, given two n x m plants G_0 and G_1, we can generically stabilize them simultaneously provided either n or m is greater than one. In contrast, simultaneous stabilizability of two single-input-single-output plants, g_0 and g_1, is not generic.
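A hedged aside, not drawn from the paper itself: the coprime-factorization criterion that underlies statements of this kind is standard in the textbook theory, stated over the ring RH_infinity of stable proper rational functions:

\[
G = N D^{-1}, \qquad C = \tilde{Y}^{-1} \tilde{X}
\qquad \text{(right- and left-coprime factorizations over } RH_{\infty}\text{)},
\]
\[
C \ \text{stabilizes} \ G \iff \tilde{Y} D + \tilde{X} N \ \text{is unimodular over } RH_{\infty}.
\]

In particular, asking for a stable stabilizing compensator amounts to asking whether a free parameter can be chosen so that an expression of the form A + M B becomes unimodular, which is exactly the algebraic question posed above.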
Abstract:
Various field tests (namely vibration tests on blocks or plates, steady-state vibration or Rayleigh wave tests, wave propagation tests, and cyclic load tests) were conducted at a number of sites in India to determine the dynamic shear modulus, G. Data obtained at the different sites are described. The values of G obtained from the different tests at a given site vary widely. A rational approach for selecting the value of G from field tests, for use in the analysis and design of soil-structure interaction problems under dynamic loads, must account for the factors affecting G. The suggested approach, which provides a possible answer, is suitable in cohesionless soils below the water table, where it is rather difficult, if not impossible, to obtain undisturbed samples.
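For context, wave-propagation tests estimate G from the measured shear-wave velocity through the standard soil-dynamics relation below; this is a textbook identity, not a result specific to this abstract:

\[
G = \rho \, V_s^{2},
\]

where \rho is the soil mass density and V_s the shear-wave velocity.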
Abstract:
Query incentive networks capture the role of incentives in extracting information from decentralized information networks such as a social network. Several game-theoretic models of query incentive networks have been proposed in the literature to study and characterize how the monetary reward required to extract the answer to a query depends on various factors, such as the structure of the network, the level of difficulty of the query, and the required success probability. None of the existing models, however, captures the practical and important factor of the quality of answers. In this paper, we develop a complete mechanism-design-based framework to incorporate the quality of answers into the monetization of query incentive networks. First, we extend the model of Kleinberg and Raghavan [2] to allow the nodes to modulate the incentive on the basis of the quality of the answer they receive. For this quality-conscious model, we show the existence of a unique Nash equilibrium and study the impact of the quality of answers on the growth rate of the initial reward with respect to the branching factor of the network. Next, we present two mechanisms, the direct comparison mechanism and the peer prediction mechanism, for truthful elicitation of quality from the agents. These mechanisms are based on scoring rules and cover different scenarios which may arise in query incentive networks. We show that the proposed quality elicitation mechanisms are incentive compatible and ex-ante budget balanced. We also derive conditions under which ex-post budget balance can be achieved by these mechanisms.
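As a hedged illustration of the scoring-rule machinery that such quality-elicitation mechanisms build on (the paper's exact payment rules are not reproduced here), a strictly proper quadratic (Brier) scoring rule rewards a reported probability distribution over quality levels once the realized quality is observed:

    import numpy as np  # assumed available

    def quadratic_score(report, outcome):
        """Strictly proper quadratic (Brier) scoring rule.
        report  : probability vector over quality levels (sums to 1)
        outcome : index of the realized quality level
        Reporting one's true belief maximizes the expected score."""
        report = np.asarray(report, dtype=float)
        return 2.0 * report[outcome] - np.sum(report ** 2)

    # toy usage: an agent believes the answer is high quality with probability 0.7
    print(quadratic_score([0.7, 0.2, 0.1], outcome=0))   # 0.86

Under any strictly proper rule of this kind, truthful reporting is a best response, which is the kind of property that scoring-rule-based mechanisms such as those described above rely on.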
Abstract:
Using bender and extender element tests, the travel times of the shear (S) and primary (P) waves were measured for dry sand samples at different relative densities and effective confining pressures. Three methods of interpretation, namely (i) the first time of arrival, (ii) the first peak-to-peak, and (iii) the cross-correlation method, were employed. All the methods provide an almost unique answer for the P-wave measurements. On the contrary, a difference was noted in the arrival times obtained from the different methods for the S-wave, due to the near-field effect. Resonant column tests in the torsional mode were also performed to check indirectly the travel time of the shear wave. The study reveals that, compared with the S-wave, it is more reliable to depend on the arrival-time measurement for the P-wave.
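As a hedged sketch of the cross-correlation interpretation method (illustrative only; the paper's exact signal processing is not reproduced), the travel time can be estimated as the lag that maximizes the cross-correlation between the transmitted and received signals:

    import numpy as np  # assumed available

    def travel_time_xcorr(tx, rx, dt):
        """Estimate wave travel time by cross-correlation.
        tx, rx : transmitted and received signals sampled at interval dt [s]
        Returns the lag (in seconds) maximizing the cross-correlation."""
        tx = np.asarray(tx, dtype=float)
        rx = np.asarray(rx, dtype=float)
        corr = np.correlate(rx, tx, mode="full")      # rx delayed w.r.t. tx
        lag = np.argmax(corr) - (len(tx) - 1)         # delay in samples
        return lag * dt

    # toy usage: a pulse delayed by 25 samples at 1 MHz sampling
    dt = 1e-6
    t = np.arange(400)
    tx = np.exp(-0.5 * ((t - 50) / 5.0) ** 2)
    rx = np.roll(tx, 25)
    print(travel_time_xcorr(tx, rx, dt))              # 2.5e-05 s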
Abstract:
Design creativity involves developing novel and useful solutions to design problems. The research in this article is an attempt to understand how the novelty of a design resulting from a design process is related to the kind of outcomes, described here as constructs, involved in the design process. A model of causality, the SAPPhIRE model, is used as the basis of the analysis. The analysis is based on previous research which shows that designing involves the development and exploration of the seven basic constructs of the SAPPhIRE model that constitute the causal connection between the various levels of abstraction at which a design can be described. The constructs are state change, action, parts, phenomenon, input, organs, and effect. The following two questions are asked: Is there a relationship between novelty and the constructs? If there is a relationship, what is the degree of this relationship? A hypothesis is developed to answer the questions: an increase in the number and variety of ideas explored while designing should enhance the variety of the concept space, leading to an increase in the novelty of the concept space. Eight existing observational studies of designing sessions are used to empirically validate the hypothesis. Each designing session involves an individual designer, experienced or novice, solving a design problem by producing concepts and following a think-aloud protocol. The results indicate dependence of the novelty of the concept space on the variety of the concept space, and dependence of the variety of the concept space on the variety of the idea space, thereby validating the hypothesis. The results also reveal a strong correlation between novelty and the constructs; the correlation value decreases as the abstraction level of the constructs reduces, signifying the importance of using constructs at higher abstraction levels for enhancing novelty.
Abstract:
We derive the heat kernel for arbitrary tensor fields on S^3 and (Euclidean) AdS_3 using a group theoretic approach. We use these results to also obtain the heat kernel on certain quotients of these spaces. In particular, we give a simple, explicit expression for the one-loop determinant for a field of arbitrary spin s in thermal AdS_3. We apply this to the calculation of the one-loop partition function of N = 1 supergravity on AdS_3. We find that the answer factorizes into left- and right-moving super-Virasoro characters built on the SL(2, C)-invariant vacuum, as argued by Maloney and Witten on general grounds.
Abstract:
We consider the problem of matching people to jobs, where each person ranks a subset of jobs in an order of preference, possibly involving ties. There are several notions of optimality for how to best match each person to a job; in particular, popularity is a natural and appealing notion of optimality. However, popular matchings do not always provide an answer to the problem of determining an optimal matching, since there are simple instances that do not admit popular matchings. This motivates the following extension of the popular matchings problem: given a graph G = (A ∪ J, E), where A is the set of people and J is the set of jobs, and a list ⟨c_1, ..., c_{|J|}⟩ denoting upper bounds on the capacities of the jobs, does there exist ⟨x_1, ..., x_{|J|}⟩ such that setting the capacity of the i-th job to x_i, where 1 ≤ x_i ≤ c_i for each i, enables the resulting graph to admit a popular matching? In this paper we show that the above problem is NP-hard. We show that the problem is NP-hard even when each c_i is 1 or 2.
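For concreteness (a hedged, illustrative sketch, not the paper's algorithm), the pairwise comparison underlying popularity can be computed by counting, for two matchings, how many people prefer one to the other:

    def prefers(pref, person, job_a, job_b):
        """Return True if `person` strictly prefers job_a to job_b.
        pref[person] is a dict mapping job -> rank (lower is better);
        being unmatched (None) is treated as the worst possible rank."""
        worst = float("inf")
        ra = pref[person].get(job_a, worst) if job_a is not None else worst
        rb = pref[person].get(job_b, worst) if job_b is not None else worst
        return ra < rb

    def more_popular(pref, m1, m2):
        """Compare matchings m1, m2 (dicts person -> job or None).
        Returns +1 if m1 is more popular, -1 if m2 is, 0 on a tie."""
        votes = 0
        for person in pref:
            a, b = m1.get(person), m2.get(person)
            if prefers(pref, person, a, b):
                votes += 1
            elif prefers(pref, person, b, a):
                votes -= 1
        return (votes > 0) - (votes < 0)

    # toy usage: two people, two jobs; person 2 is indifferent between them
    pref = {1: {"x": 1, "y": 2}, 2: {"x": 1, "y": 1}}
    print(more_popular(pref, {1: "x", 2: "y"}, {1: "y", 2: "x"}))   # 1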
Abstract:
Let G = (V, E) be a weighted undirected graph having nonnegative edge weights. An estimate δ̂(u, v) of the actual distance d(u, v) between u, v ∈ V is said to be of stretch t if and only if d(u, v) ≤ δ̂(u, v) ≤ t · d(u, v). Computing all-pairs small-stretch distances efficiently (both in terms of time and space) is a well-studied problem in graph algorithms. We present a simple, novel, and generic scheme for all-pairs approximate shortest paths. Using this scheme and some new ideas and tools, we design faster algorithms for all-pairs t-stretch distances for a whole range of stretch t, and we also answer an open question posed by Thorup and Zwick in their seminal paper [J. ACM, 52 (2005), pp. 1-24].
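As a hedged illustration of the stretch guarantee (not the paper's construction), the following check verifies that a table of approximate distances satisfies stretch t against exact distances:

    def has_stretch(d_exact, d_approx, t):
        """Verify d(u,v) <= estimate(u,v) <= t * d(u,v) for every stored pair.
        d_exact, d_approx : dicts mapping (u, v) -> nonnegative distance."""
        for pair, d in d_exact.items():
            est = d_approx[pair]
            if not (d <= est <= t * d):
                return False
        return True

    # toy usage: estimates within stretch 3
    exact  = {("a", "b"): 2.0, ("a", "c"): 5.0}
    approx = {("a", "b"): 4.0, ("a", "c"): 5.0}
    print(has_stretch(exact, approx, t=3))   # True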
Abstract:
The clusters of binary patterns can be considered as Boolean functions of the (binary) features. Such a relationship between linearly separable (LS) Boolean functions and LS clusters of binary patterns is examined. An algorithm is presented to answer questions of the type: "Is the cluster formed by the subsets of the (binary) data set having certain features AND/NOT having certain other features LS from the remaining set?" The algorithm uses sequences in the Numbered Binary Form (NBF) notation and some elementary (NPN) transformations of the binary data.
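As a hedged, illustrative sketch (this is not the NBF-based algorithm of the abstract, but a linear-programming feasibility formulation of the same separability question, assuming numpy and scipy are available), linear separability of two clusters of binary patterns can be tested as follows:

    import numpy as np
    from scipy.optimize import linprog

    def linearly_separable(pos, neg):
        """Test whether the pattern sets `pos` and `neg` can be separated by a
        hyperplane w.x + b, via the LP feasibility problem
            w.x + b >= 1 for x in pos,   w.x + b <= -1 for x in neg."""
        pos, neg = np.asarray(pos, float), np.asarray(neg, float)
        d = pos.shape[1]
        # variables z = [w (d entries), b]; constraints written as A_ub z <= b_ub
        A_ub = np.vstack([-np.hstack([pos, np.ones((len(pos), 1))]),
                          np.hstack([neg, np.ones((len(neg), 1))])])
        b_ub = -np.ones(len(pos) + len(neg))
        res = linprog(c=np.zeros(d + 1), A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] * (d + 1))
        return res.success

    # toy usage: AND is linearly separable from its complement, XOR is not
    print(linearly_separable([[1, 1]], [[0, 0], [0, 1], [1, 0]]))   # True
    print(linearly_separable([[0, 1], [1, 0]], [[0, 0], [1, 1]]))   # False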
Abstract:
This work is conducted in order to answer the practically important question of whether the down conductors of lightning protection systems on tall towers and buildings can be electrically isolated from the structure itself. As a first step, it is presumed that a down conductor placed on a metallic tower is a pessimistic representation of the actual problem; this opinion is based on the fact that the proximity of a heavy metallic structure has a large damping effect. The post-stroke current distributions along the down conductors and towers, which can be quite different from that in the lightning channel, govern the post-stroke near field and the resulting gradient in the soil. Also, for a reliable estimation of the actual stroke current from the measured down-conductor currents, it is essential to know the current distribution characteristics along the down conductors. In view of these considerations, the present work attempts to deduce the post-stroke current and voltage distribution along typical down conductors and towers. A solution of the governing field equations on an electromagnetic model of the system is sought for the investigation. Simulations of the spatio-temporal distribution of the post-stroke current and voltage provide very interesting results. It is concluded that it is almost impossible to achieve electrical isolation between the structure and the down conductor. Furthermore, there will be significant induction into the steel matrix of the supporting structure.
Abstract:
We consider the problem of matching people to items, where each person ranks a subset of items in an order of preference, possibly involving ties. There are several notions of optimality for how to best match a person to an item; in particular, popularity is a natural and appealing notion of optimality. A matching M* is popular if there is no matching M such that the number of people who prefer M to M* exceeds the number who prefer M* to M. However, popular matchings do not always provide an answer to the problem of determining an optimal matching, since there are simple instances that do not admit popular matchings. This motivates the following extension of the popular matchings problem: given a graph G = (A ∪ B, E), where A is the set of people and B is the set of items, and a list ⟨c_1, ..., c_{|B|}⟩ denoting upper bounds on the number of copies of each item, does there exist ⟨x_1, ..., x_{|B|}⟩ such that, for each i, having x_i copies of the i-th item, where 1 ≤ x_i ≤ c_i, enables the resulting graph to admit a popular matching? In this paper we show that the above problem is NP-hard. We show that the problem is NP-hard even when each c_i is 1 or 2. We also show a polynomial time algorithm for a variant of the above problem where the total increase in copies is bounded by an integer k.
Abstract:
We consider the following question: let S_1 and S_2 be two smooth, totally-real surfaces in C^2 that contain the origin. If the union of their tangent planes is locally polynomially convex at the origin, then is S_1 ∪ S_2 locally polynomially convex at the origin? If T_0 S_1 ∩ T_0 S_2 = {0}, then it is a folk result that the answer is yes. We discuss an obstruction to the presumed proof, and provide a different approach. When dim_R(T_0 S_1 ∩ T_0 S_2) = 1, we present a geometric condition under which no consistent answer to the above question exists. We then discuss conditions under which we can expect local polynomial convexity.
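For reference (standard definitions rather than anything specific to this abstract), a compact set K ⊂ C^n is polynomially convex if it equals its polynomial hull, and a set is locally polynomially convex at a point if sufficiently small compact neighbourhoods of that point in the set are polynomially convex:

\[
\widehat{K} \;=\; \bigl\{\, z \in \mathbb{C}^{n} : |p(z)| \le \sup_{w \in K} |p(w)| \ \text{for every polynomial } p \,\bigr\},
\qquad K \ \text{polynomially convex} \iff \widehat{K} = K,
\]
\[
S \ \text{locally polynomially convex at } 0 \iff S \cap \overline{B(0, r)} \ \text{is polynomially convex for all sufficiently small } r > 0.
\]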
Abstract:
The questions that one should answer in engineering computations - deterministic, probabilistic/randomized, as well as heuristic - are (i) how good the computed results/outputs are and (ii) how large the cost is, in terms of the amount of computation and the amount of storage used in obtaining the outputs. The absolutely error-free quantities, as well as the completely errorless computations done in a natural process, can never be captured by any means that we have at our disposal. While the computations, including the input real quantities, in nature/natural processes are exact, all the computations that we do using a digital computer, or that are carried out in an embedded form, are never exact. The input data for such computations are also never exact, because any measuring instrument has an inherent error of a fixed order associated with it, and this error, as a matter of hypothesis and not as a matter of assumption, is not less than 0.005 per cent. Here by error we imply relative error bounds. The fact that the exact error is never known under any circumstances and in any context implies that the term error is nothing but error-bounds. Further, in engineering computations, it is the relative error or, equivalently, the relative error-bounds (and not the absolute error) which is supremely important in providing us the information regarding the quality of the results/outputs. Another important fact is that inconsistency and/or near-inconsistency in nature, i.e., in problems created from nature, is completely nonexistent, while in our modelling of the natural problems we may introduce inconsistency or near-inconsistency due to human error, due to the inherent non-removable error associated with any measuring device, or due to assumptions introduced to make the problem solvable or more easily solvable in practice. Thus, if we discover any inconsistency or possibly any near-inconsistency in a mathematical model, it is certainly due to any or all of the three foregoing factors. We do, however, go ahead to solve such inconsistent/near-inconsistent problems and do get results that could be useful in real-world situations. The talk considers several deterministic, probabilistic, and heuristic algorithms in numerical optimisation, in other numerical and statistical computations, and in PAC (probably approximately correct) learning models. It highlights the quality of the results/outputs by specifying relative error-bounds along with the associated confidence level, and the cost, viz., the amount of computation and of storage, through complexity. It points out the limitations of error-free computation (wherever possible, i.e., where the number of arithmetic operations is finite and is known a priori) as well as of the usage of interval arithmetic. Further, the interdependence among the error, the confidence, and the cost is discussed.
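A hedged, minimal sketch of the distinction emphasized above between relative and absolute error, using the 0.005 per cent instrument error bound quoted in the abstract (the function and its parameter names are illustrative only):

    def relative_error_bound(measured, rel_bound=5e-5):
        """Return the interval implied by a relative error bound on a reading.
        rel_bound = 5e-5 corresponds to the 0.005 per cent inherent instrument
        error quoted in the abstract; the function itself is only an illustration."""
        half_width = abs(measured) * rel_bound
        return measured - half_width, measured + half_width

    # toy usage: a 230.0 V reading is only known to lie in a small interval
    lo, hi = relative_error_bound(230.0)
    print(lo, hi)            # 229.9885 230.0115
    print((hi - lo) / 2)     # the absolute half-width grows with the magnitude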