199 results for Bounds


Relevance:

10.00%

Publisher:

Abstract:

General relativity makes very specific predictions for the gravitational waveforms from inspiralling compact binaries obtained using the post-Newtonian (PN) approximation. We investigate the extent to which the measurement of the PN coefficients, possible with second-generation gravitational-wave detectors such as the Advanced Laser Interferometer Gravitational-Wave Observatory (LIGO) and third-generation detectors such as the Einstein Telescope (ET), could be used to test post-Newtonian theory and to put bounds on a subclass of parametrized post-Einstein theories which differ from general relativity in a parametrized sense. We demonstrate this possibility by employing the best inspiralling waveform model for nonspinning compact binaries, which is 3.5PN accurate in phase and 3PN in amplitude. Within the class of theories considered, Advanced LIGO can test the theory at 1.5PN and thus the leading tail term. Future observations of stellar-mass black hole binaries by ET can test the consistency between the various PN coefficients in the gravitational-wave phasing over the mass range of 11-44 solar masses. The choice of the lower frequency cutoff is important for testing post-Newtonian theory using the ET. The bias in the test arising from the assumption of nonspinning binaries is indicated.
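
For context, tests of this kind are usually phrased in terms of the stationary-phase-approximation phasing of the Fourier-domain waveform; a schematic form (standard in the PN literature, with the coefficient details not taken from this abstract) is:

```latex
\Psi(f) = 2\pi f t_c - \phi_c - \frac{\pi}{4}
        + \frac{3}{128\,\eta\,v^{5}} \sum_{k=0}^{7} \left(\alpha_k + \beta_k \ln v\right) v^{k},
\qquad v = (\pi M f)^{1/3},
```

where M is the total mass, eta the symmetric mass ratio, and each PN coefficient alpha_k (normally a known function of the masses) is promoted to a free parameter whose measured value is checked for consistency with the general-relativity prediction.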

Relevance:

10.00%

Publisher:

Abstract:

We consider ergodic control of a controlled nondegenerate diffusion when m other ergodic costs (m finite) are required to satisfy prescribed bounds. Under a condition on the cost functions that penalizes instability, the existence of an optimal stable Markov control is established by convex analytic arguments.
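
In symbols, one common way to write such a constrained ergodic control problem (the notation here is generic, not taken from the paper) is:

```latex
\min_{U}\ \limsup_{T \to \infty} \frac{1}{T}\,
    \mathbb{E}\left[\int_0^T c_0(X_t, U_t)\,dt\right]
\quad \text{subject to} \quad
\limsup_{T \to \infty} \frac{1}{T}\,
    \mathbb{E}\left[\int_0^T c_i(X_t, U_t)\,dt\right] \le \beta_i,
\qquad i = 1, \dots, m,
```

where X is the controlled nondegenerate diffusion and the beta_i are the prescribed bounds.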

Relevance:

10.00%

Publisher:

Abstract:

Observational studies indicate that the convective activity of monsoon systems undergoes intraseasonal variations with multi-week time scales. The zone of maximum monsoon convection exhibits substantial transient behavior, with successive propagations from the North Indian Ocean to the heated continent. Over South Asia the zone achieves its maximum intensity. These propagations may extend over 3000 km in latitude, and perhaps twice that distance in longitude, and remain coherent entities for periods greater than 2-3 weeks. Attempts to explain this phenomenon using simple ocean-atmosphere models of the monsoon system had concluded that the interactive ground hydrology so modifies the total heating of the atmosphere that a steady-state solution is not possible, thus promoting lateral propagation. That is, the ground hydrology forces the total heating of the atmosphere and the vertical velocity to be slightly out of phase, causing a migration of the convection towards the region of maximum heating. Whereas the lateral scale of the variations produced by the Webster (1983) model was essentially correct, they occurred at twice the frequency of the observed events and formed near the coastal margin rather than over the ocean. Webster's (1983) model, used to pose the theories, was deficient in a number of aspects. In particular, both the ground moisture content and the thermal inertia of the model were severely underestimated. At the same time, the sea surface temperatures produced by the model between the equator and the model's land-sea boundary were far too cool. Both the atmosphere and the ocean model were therefore modified to include a better hydrological cycle and ocean structure. The convective events produced by the modified model possessed the observed frequency and were generated well south of the coastline. The improved simulation of monsoon variability allowed the hydrological-cycle feedback to be generalized. It was found that monsoon variability is constrained to lie within the bounds of a positive gradient of a convective intensity potential (I). This function depends primarily on the surface temperature, the availability of moisture, and the stability of the lower atmosphere, and it varies very slowly on the time scale of months. The oscillations of the monsoon perturb the mean convective intensity potential, causing local enhancements of the gradient. These perturbations are caused by the hydrological feedbacks discussed above, or by the modification of the air-sea fluxes by variations of the low-level wind during convective events. The final result is the slow northward propagation of convection within an even slower convective regime. Although it is considered premature to use the model to conduct simulations of the African monsoon system, the ECMWF analyses indicate very similar behavior of the convective intensity potential, suggesting, at least, that the same processes control the low-frequency structure of the African monsoon. The implications of these hypotheses for numerical weather prediction of monsoon phenomena are discussed.

Relevance:

10.00%

Publisher:

Abstract:

A von Mises truss with stochastically varying material properties is investigated for snap-through instability. The variability of the snap-through load is calculated analytically as a function of the material property variability, represented as a stochastic process. Bounds are established that are independent of knowledge of the complete correlation structure, which is seldom obtainable from experimental data. Two processes are considered to represent the material property variability, and the results are presented graphically.
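
For reference, the deterministic load-deflection relation of a shallow von Mises truss (a standard textbook result, not taken from the paper; h is the apex rise, a the half-span, u the downward apex displacement, and EA the axial rigidity of each bar) is:

```latex
P(u) = \frac{EA}{a^{3}}\,\bigl(h^{2} - (h-u)^{2}\bigr)(h-u),
\qquad
P_{\mathrm{cr}} = \max_{u} P(u) = \frac{2}{3\sqrt{3}}\,EA\left(\frac{h}{a}\right)^{3},
```

so the snap-through load P_cr inherits the randomness of EA, which is what bounds of the kind described above quantify.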

Relevance:

10.00%

Publisher:

Abstract:

The effect of uncertainties on performance predictions of a helicopter is studied in this article. Aeroelastic parameters such as the air density, blade profile drag coefficient, main rotor angular velocity, main rotor radius, and blade chord are treated as uncertain variables. The propagation of these uncertainties into performance parameters such as the thrust coefficient, figure of merit, induced velocity, and power required is studied using Monte Carlo simulation and the first-order reliability method. The Rankine-Froude momentum theory is used for performance prediction in hover, axial climb, and forward flight. The propagation of uncertainty causes large deviations from the baseline deterministic predictions, which affect both the achievable performance and the safety of the helicopter. The numerical results in this article provide useful bounds on helicopter power requirements.
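
As a minimal sketch of this kind of uncertainty propagation in hover (momentum theory only; all numbers and the 5% Gaussian scatter below are illustrative assumptions, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Baseline values -- illustrative, not the paper's data.
rho0, R0 = 1.225, 5.0        # air density (kg/m^3), rotor radius (m)
W = 20000.0                  # weight (N) = thrust required in hover
n = 100_000

# Treat density and radius as uncertain inputs with 5% Gaussian scatter.
rho = rng.normal(rho0, 0.05 * rho0, n)
R = rng.normal(R0, 0.05 * R0, n)

A = np.pi * R**2                      # rotor disc area
v_i = np.sqrt(W / (2.0 * rho * A))    # momentum-theory induced velocity
P_i = W * v_i                         # ideal induced power in hover

lo, hi = np.percentile(P_i, [2.5, 97.5])
print(f"induced power: mean {P_i.mean()/1e3:.1f} kW, "
      f"95% interval [{lo/1e3:.1f}, {hi/1e3:.1f}] kW")
```

A first-order reliability method would instead linearize the output about the mean inputs; Monte Carlo, as here, makes no linearity assumption at the cost of many samples.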

Relevance:

10.00%

Publisher:

Abstract:

There is a huge knowledge gap in our understanding of many terrestrial carbon cycle processes. In this paper, we investigate the bounds on terrestrial carbon uptake over India that arise solely due to CO2 fertilization. For this purpose, we use a terrestrial carbon cycle model and consider two extreme scenarios: in one case, unlimited CO2 fertilization is allowed for the terrestrial vegetation at a CO2 concentration of 735 ppm; in the other, CO2 fertilization is capped at year-1975 levels. Our simulations show that, under equilibrium conditions, modeled carbon stocks in natural potential vegetation increase by 17 Gt-C with unlimited fertilization for CO2 levels and climate change corresponding to the end of the 21st century, but they decline by 5.5 Gt-C if fertilization is capped at 1975 levels of CO2 concentration. The carbon stock changes are dominated by forests. The area covered by natural potential forests increases by about 36% in the unlimited-fertilization case but decreases by 15% in the fertilization-capped case. Thus, the assumption regarding CO2 fertilization has the potential to alter the sign of terrestrial carbon uptake over India. Our model simulations also imply that the maximum potential terrestrial sequestration over India, under equilibrium conditions and the best-case scenario of unlimited CO2 fertilization, is only 18% of the 21st-century SRES A2 scenario emissions from India. The limited uptake potential of the natural potential vegetation suggests that reduction of CO2 emissions and afforestation programs should be top priorities.

Relevance:

10.00%

Publisher:

Abstract:

Hydrogen storage in three-dimensional carbon foams is analyzed using classical grand canonical Monte Carlo simulations. The calculated storage capacities of the foams meet the material-based DOE targets and are comparable to the capacities of a bundle of well-separated open nanotubes of similar diameter. The pore sizes in the foams are optimized for the best hydrogen uptake. The capacity depends sensitively on the C-H2 interaction potential; therefore, the results are presented for its "weak" and "strong" choices, to offer lower and upper bounds for the expected capacities. Furthermore, quantum effects on the effective C-H2 as well as H2-H2 interaction potentials are considered. We find that the quantum effects noticeably change the adsorption properties of the foams and must be accounted for even at room temperature.
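
To make the method concrete, here is a minimal grand canonical Monte Carlo sketch for a generic Lennard-Jones fluid in reduced units (de Broglie wavelength set to 1); the foam geometry and the C-H2 potentials of the paper are not modeled, and all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

L, T, mu = 8.0, 1.2, -3.0        # box edge, temperature, chemical potential (reduced units)
beta, V = 1.0 / T, L**3
rc2 = 2.5**2                     # interaction cutoff, squared

def interaction_energy(others, x):
    """Lennard-Jones energy of a particle at x with all particles in `others`."""
    if len(others) == 0:
        return 0.0
    d = others - x
    d -= L * np.round(d / L)     # minimum-image convention
    r2 = np.sum(d * d, axis=1)
    r2 = r2[r2 < rc2]
    inv6 = (1.0 / r2) ** 3
    return float(np.sum(4.0 * (inv6 ** 2 - inv6)))

pos = np.empty((0, 3))
samples = []
for step in range(100_000):
    if rng.random() < 0.5:                       # trial insertion
        x = rng.random(3) * L
        dU = interaction_energy(pos, x)
        if rng.random() < min(1.0, V / (len(pos) + 1) * np.exp(beta * (mu - dU))):
            pos = np.vstack([pos, x])
    elif len(pos) > 0:                           # trial deletion
        i = rng.integers(len(pos))
        rest = np.delete(pos, i, axis=0)
        dU = interaction_energy(rest, pos[i])    # energy lost on removal
        if rng.random() < min(1.0, len(pos) / V * np.exp(beta * (dU - mu))):
            pos = rest
    if step > 50_000:
        samples.append(len(pos))

print("mean particle number:", np.mean(samples))
```

The "weak"/"strong" bounding strategy in the abstract corresponds to rerunning such a simulation with two different solid-fluid potentials and reporting both capacities.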

Relevance:

10.00%

Publisher:

Abstract:

Let G be a simple, undirected, finite graph with vertex set V(G) and edge set E(G). A k-dimensional box is a Cartesian product of closed intervals [a_1, b_1] x [a_2, b_2] x ... x [a_k, b_k]. The boxicity of G, box(G), is the minimum integer k such that G can be represented as the intersection graph of k-dimensional boxes, i.e. each vertex is mapped to a k-dimensional box and two vertices are adjacent in G if and only if their corresponding boxes intersect. Let P = (S, P) be a poset, where S is the ground set and P is a reflexive, anti-symmetric and transitive binary relation on S. The dimension of P, dim(P), is the minimum integer t such that P can be expressed as the intersection of t total orders. Let G(P) be the underlying comparability graph of P. It is a well-known fact that posets with the same underlying comparability graph have the same dimension. The first result of this paper links the dimension of a poset to the boxicity of its underlying comparability graph. In particular, we show that for any poset P, box(G(P))/(chi(G(P)) - 1) <= dim(P) <= 2 box(G(P)), where chi(G(P)) is the chromatic number of G(P) and chi(G(P)) > 1. The second result of the paper relates the boxicity of a graph G with a natural partial order associated with its extended double cover, denoted G_c: the bipartite graph with vertex sets A and B, each a copy of V(G), in which each vertex of A is adjacent to its own copy in B and to the copies of its neighbors in G. Let P_c be the natural height-2 poset associated with G_c, obtained by making A the set of minimal elements and B the set of maximal elements. We show that box(G)/2 <= dim(P_c) <= 2 box(G) + 4. These results have some immediate and significant consequences. The upper bound dim(P) <= 2 box(G(P)) allows us to derive hitherto unknown upper bounds for poset dimension. In the other direction, using the already known bounds for partial order dimension, we get the following: (1) the boxicity of any graph with maximum degree Delta is O(Delta log^2 Delta), which is an improvement over the best known upper bound of Delta^2 + 2; (2) there exist graphs with boxicity Omega(Delta log Delta), which disproves a conjecture that the boxicity of a graph is O(Delta); (3) there is no polynomial-time algorithm to approximate the boxicity of a bipartite graph on n vertices within a factor of O(n^(0.5 - epsilon)) for any epsilon > 0, unless NP = ZPP.

Relevance:

10.00%

Publisher:

Abstract:

Presented here, in a vector formulation, is an O(mn^2) direct concise algorithm that prunes/identifies the linearly dependent (ld) rows of an arbitrary m x n matrix A and computes its reflexive type minimum norm inverse A_mr^-, which will be the true inverse A^-1 if A is nonsingular and the Moore-Penrose inverse A^+ if A is of full row rank. The algorithm, without any additional computation, produces the projection operator P = (I - A_mr^- A), which provides a means to compute any of the solutions of the consistent linear equation Ax = b, since the general solution may be expressed as x = A_mr^- b + Pz, where z is an arbitrary vector. The rank r of A is also produced in the process. Some of the salient features of this algorithm are that (i) the algorithm is concise; (ii) the minimum norm least squares solution for consistent/inconsistent equations is readily computable when A is of full row rank (otherwise, a minimum norm solution for consistent equations is obtainable); (iii) the algorithm identifies ld rows, if any, which reduces the computation concerned and improves the accuracy of the result; (iv) error bounds for the inverse as well as for the solution x of Ax = b are readily computable; (v) error-free computation of the inverse, solution vector, rank, and projection operator, and its inherent parallel implementation, are straightforward; (vi) it is suitable for vector (pipeline) machines; and (vii) the inverse produced by the algorithm can be used to solve under-/overdetermined linear systems.
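
This is not the paper's O(mn^2) algorithm, but the projector identity it exploits can be illustrated with numpy's Moore-Penrose inverse standing in for A_mr^-:

```python
import numpy as np

rng = np.random.default_rng(2)

# A rank-deficient 4 x 6 matrix: row 2 is linearly dependent on rows 0 and 1.
A = rng.standard_normal((4, 6))
A[2] = 2.0 * A[0] - A[1]

A_inv = np.linalg.pinv(A)             # Moore-Penrose inverse (stand-in for A_mr^-)
P = np.eye(A.shape[1]) - A_inv @ A    # projection operator onto the null space of A

b = A @ rng.standard_normal(6)        # a consistent right-hand side
x0 = A_inv @ b                        # minimum-norm solution of Ax = b
x = x0 + P @ rng.standard_normal(6)   # general solution: x0 + Pz for arbitrary z

print(np.linalg.matrix_rank(A))                   # 3 -> one ld row detected
print(np.allclose(A @ x, b))                      # True: x also solves Ax = b
print(np.linalg.norm(x0) <= np.linalg.norm(x))    # True: x0 has minimum norm
```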

Relevance:

10.00%

Publisher:

Abstract:

We consider the problem of matching people to items, where each person ranks a subset of items in order of preference, possibly involving ties. There are several notions of optimality for how best to match a person to an item; in particular, popularity is a natural and appealing notion of optimality. A matching M* is popular if there is no matching M such that the number of people who prefer M to M* exceeds the number who prefer M* to M. However, popular matchings do not always provide an answer to the problem of determining an optimal matching, since there are simple instances that do not admit popular matchings. This motivates the following extension of the popular matchings problem: given a graph G = (A U B, E), where A is the set of people and B is the set of items, and a list <c_1, ..., c_|B|> denoting upper bounds on the number of copies of each item, does there exist <x_1, ..., x_|B|> such that, for each i, having x_i copies of the i-th item, where 1 <= x_i <= c_i, enables the resulting graph to admit a popular matching? In this paper we show that the above problem is NP-hard, even when each c_i is 1 or 2. We give a polynomial time algorithm for a variant of the above problem in which the total increase in copies is bounded by an integer k.
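
A small brute-force check makes the non-existence phenomenon mentioned above concrete; the instance below (three people with identical strict preference lists, a standard example in this literature) admits no popular matching:

```python
from itertools import product

people = ["p1", "p2", "p3"]
items = ["b1", "b2", "b3"]
pref = {p: ["b1", "b2", "b3"] for p in people}    # identical strict lists

def rank(p, item):
    # Being unmatched (None) is worse than receiving any ranked item.
    return pref[p].index(item) if item is not None else len(items)

def all_matchings():
    for assign in product(items + [None], repeat=len(people)):
        used = [a for a in assign if a is not None]
        if len(used) == len(set(used)):           # each item used at most once
            yield dict(zip(people, assign))

def more_popular(m1, m2):
    """True if strictly more people prefer m1 to m2 than prefer m2 to m1."""
    up = sum(rank(p, m1[p]) < rank(p, m2[p]) for p in people)
    return up > sum(rank(p, m1[p]) > rank(p, m2[p]) for p in people)

ms = list(all_matchings())
print(any(all(not more_popular(m2, m) for m2 in ms) for m in ms))   # False
```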

Relevance:

10.00%

Publisher:

Abstract:

Convolutional network-error correcting codes (CNECCs) are known to provide error correcting capability in acyclic instantaneous networks within the network coding paradigm under small field size conditions. In this work, we investigate the performance of CNECCs under an error model in which the edges of the network are assumed to be statistically independent binary symmetric channels, each with the same probability of error p_e (0 <= p_e < 0.5). We obtain bounds on the performance of such CNECCs based on a modified generating function (the transfer function) of the CNECCs. For a given network, we derive a mathematical condition on how small p_e should be so that only single-edge network-errors need to be accounted for, thus reducing the complexity of evaluating the probability of error of any CNECC. Simulations indicate that convolutional codes are required to possess different properties to achieve good performance in the low-p_e and high-p_e regimes. For the low-p_e regime, convolutional codes with good distance properties show good performance. For the high-p_e regime, convolutional codes that have a good slope (the minimum normalized cycle weight) are seen to be good. We derive a lower bound on the slope of any rate b/c convolutional code with a certain degree.
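
The single-edge-error regime can be motivated by a simple counting fact; the paper's actual condition comes from the code's transfer function, while this sketch only shows the generic quadratic decay:

```python
def multi_edge_error_prob(num_edges, pe):
    """P(two or more of the independent BSC edges are in error in one network use)."""
    single_or_none = (1 - pe) ** num_edges + num_edges * pe * (1 - pe) ** (num_edges - 1)
    return 1.0 - single_or_none

# For a 10-edge network, the multi-edge error probability falls off as pe^2,
# so for small pe only single-edge errors matter to leading order.
for pe in (1e-1, 1e-2, 1e-3):
    print(f"pe={pe:g}: P(>1 edge in error) = {multi_edge_error_prob(10, pe):.3e}")
```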

Relevance:

10.00%

Publisher:

Abstract:

This paper studies the problem of constructing robust classifiers when the training data are plagued with uncertainty. The problem is posed as a Chance-Constrained Program (CCP) which ensures that the uncertain data points are classified correctly with high probability. Unfortunately, such a CCP turns out to be intractable. The key novelty is in employing Bernstein bounding schemes to relax the CCP as a convex second-order cone program whose solution is guaranteed to satisfy the probabilistic constraint. Prior to this work, only Chebyshev-based relaxations were exploited in learning algorithms. Bernstein bounds employ richer partial information and hence can be far less conservative than Chebyshev bounds. Owing to this efficient modeling of uncertainty, the resulting classifiers achieve higher classification margins and hence better generalization. Methodologies for classifying uncertain test data points and error measures for evaluating classifiers robust to uncertain data are discussed. Experimental results on synthetic and real-world datasets show that the proposed classifiers are better equipped to handle data uncertainty and outperform the state-of-the-art in many cases.
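
For orientation, a Chebyshev-style second-order cone relaxation (the earlier approach the paper improves upon; the Bernstein-based constraints have paper-specific coefficients and are not reproduced here) can be sketched with cvxpy; all data and the kappa value below are illustrative assumptions:

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(3)

# Toy two-class data; each training point is assumed uncertain with a
# common covariance whose square root is S_half (an illustrative choice).
n, d = 40, 2
X = np.vstack([rng.normal(1.0, 0.3, (n // 2, d)), rng.normal(-1.0, 0.3, (n // 2, d))])
y = np.hstack([np.ones(n // 2), -np.ones(n // 2)])
S_half = 0.3 * np.eye(d)
kappa = 1.5          # grows with the required probability of correct classification

w, b = cp.Variable(d), cp.Variable()
xi = cp.Variable(n, nonneg=True)     # slack variables
constraints = [
    y[i] * (X[i] @ w + b) >= 1 - xi[i] + kappa * cp.norm(S_half @ w, 2)
    for i in range(n)
]
problem = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(w) + 10.0 * cp.sum(xi)), constraints)
problem.solve()
print(w.value, b.value)
```

Each cone constraint enlarges the required margin in proportion to the uncertainty seen along the direction of w, which is what makes the classifier robust.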

Relevance:

10.00%

Publisher:

Abstract:

The Shannon cipher system is studied in the context of general sources using a notion of computational secrecy introduced by Merhav and Arikan. Bounds are derived on limiting exponents of guessing moments for general sources. The bounds are shown to be tight for i.i.d., Markov, and unifilar sources, thus recovering some known results. A close relationship is established between error exponents and correct decoding exponents for fixed-rate source compression on the one hand and exponents for guessing moments on the other.
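
For the unconditional i.i.d. benchmark that such exponents recover, Arikan's classical result gives the growth rate of guessing moments in terms of Renyi entropy (a standard result restated here for context, not taken from the paper):

```latex
\lim_{n \to \infty} \frac{1}{n} \log \mathbb{E}\bigl[G^{*}(X^{n})^{\rho}\bigr]
    = \rho\, H_{\frac{1}{1+\rho}}(X),
\qquad
H_{\alpha}(X) = \frac{1}{1-\alpha} \log \sum_{x} P(x)^{\alpha},
```

where G* is an optimal guessing order and rho > 0 is the moment order.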

Relevance:

10.00%

Publisher:

Abstract:

This paper studies the problem of designing a logical topology over a wavelength-routed all-optical network (AON) physical topology. The physical topology consists of the nodes and fiber links in the network. On an AON physical topology, we can set up lightpaths between pairs of nodes, where a lightpath represents a direct optical connection without any intermediate electronics. The set of lightpaths along with the nodes constitutes the logical topology. For a given network physical topology and traffic pattern (relative traffic distribution among the source-destination pairs), our objective is to design the logical topology and the routing algorithm on that topology so as to minimize the network congestion while constraining the average delay seen by a source-destination pair and the amount of processing required at the nodes (degree of the logical topology). We will see that ignoring the delay constraints can result in fairly convoluted logical topologies with very long delays; on the other hand, in all our examples, imposing them results in a minimal increase in congestion. While the number of wavelengths required to embed the resulting logical topology on the physical all-optical topology is also a constraint in general, we find that in many cases of interest this number can be quite small. We formulate the combined logical topology design and routing problem described above (ignoring the constraint on the number of available wavelengths) as a mixed integer linear programming problem, which we then solve for a number of cases of a six-node network. Since this programming problem is computationally intractable for larger networks, we split it into two subproblems: logical topology design, which is computationally hard and will probably require heuristic algorithms, and routing, which can be solved by a linear program. We then compare the performance of several heuristic topology design algorithms (that do take wavelength assignment constraints into account) against that of randomly generated topologies, as well as lower bounds derived in the paper.
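
A common skeleton for this kind of MILP formulation (the notation is assumed here for illustration: b_ij indicates a lightpath from node i to node j, lambda^sd_ij is the (s, d)-pair traffic routed on it, and Delta_l is the logical degree) is:

```latex
\begin{aligned}
\min\ & \lambda_{\max} \\
\text{s.t. } & \textstyle\sum_{j} b_{ij} \le \Delta_l, \quad \sum_{i} b_{ij} \le \Delta_l
  && \text{(logical in/out degree)} \\
& \textstyle\sum_{s,d} \lambda^{sd}_{ij} \le \lambda_{\max}\, b_{ij}
  && \text{(congestion on each lightpath)} \\
& \textstyle\sum_{j} \lambda^{sd}_{ij} - \sum_{j} \lambda^{sd}_{ji}
  = \lambda^{sd}\,(\mathbf{1}_{\{i=s\}} - \mathbf{1}_{\{i=d\}})
  && \text{(flow conservation)} \\
& b_{ij} \in \{0, 1\}, \quad \lambda^{sd}_{ij} \ge 0,
\end{aligned}
```

with the delay constraint typically imposed by bounding the hop count or path length of the routes used.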

Relevance:

10.00%

Publisher:

Abstract:

This paper reports new results concerning the capabilities of a family of service disciplines aimed at providing per-connection end-to-end delay (and throughput) guarantees in high-speed networks. This family consists of the class of rate-controlled service disciplines, in which traffic from a connection is reshaped to conform to specific traffic characteristics at every hop on its path. When used together with a scheduling policy at each node, this reshaping enables the network to provide end-to-end delay guarantees to individual connections. The main advantages of this family of service disciplines are their implementation simplicity and flexibility. On the other hand, because the delay guarantees provided are based on summing worst-case delays at each node, it has also been argued that the resulting bounds are very conservative, which may more than offset the benefits. In particular, other service disciplines, such as those based on Fair Queueing or Generalized Processor Sharing (GPS), have been shown to provide much tighter delay bounds. As a result, these disciplines, although more complex from an implementation point of view, have been considered for the purpose of providing end-to-end guarantees in high-speed networks. In this paper, we show that through "proper" selection of the reshaping to which we subject the traffic of a connection, the penalty incurred by computing end-to-end delay bounds based on worst cases at each node can be alleviated. Specifically, we show how rate-controlled service disciplines can be designed to outperform the Rate Proportional Processor Sharing (RPPS) service discipline. Based on these findings, we believe that rate-controlled service disciplines provide a very powerful and practical solution to the problem of providing end-to-end guarantees in high-speed networks.
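
The worst-case-per-hop summation referred to above has a simple generic form: for a connection reshaped at every hop to a (sigma, rho) leaky-bucket envelope and guaranteed a service rate R_k >= rho at hop k, a sketch of the end-to-end bound (not the paper's exact expression) is

```latex
D_{\mathrm{e2e}} \;\le\; \sum_{k=1}^{K} \left( \frac{\sigma}{R_k} + \pi_k \right),
```

where pi_k collects propagation and fixed processing latencies; the paper's contribution is showing that, with properly chosen reshaping, such a sum need not be loose compared to RPPS-style bounds.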