122 results for Lower Bounds
Abstract:
In this paper we study planetary-scale wave features using concurrent observations of mesospheric wind and temperature, ionospheric h'F, and tropospheric wind from Tirunelveli, Gadanki, and Kolhapur, all located in the Indian low latitudes, made during February 2009. Our investigations reveal that a 3 to 5 day periodicity, characteristic of ultrafast Kelvin (UFK) waves, was persistent throughout the atmosphere during this period. These waves show clear signatures of upward propagation from the troposphere to the upper mesosphere and link to the ionosphere through a clear correlation between mesospheric winds and h'F variations. We also note that the amplitude of this wave decreased with distance from the equator. These results are the first of their kind from the Indian sector, portraying the vertical as well as latitudinal characteristics of 3 to 5 day UFK waves simultaneously from the troposphere to the ionosphere.
Abstract:
The short-lived radionuclide Ca-41 plays an important role in constraining the immediate astrophysical environment and the formation timescale of the nascent solar system due to its extremely short half-life (0.1 Myr). Nearly 20 years ago, the initial ratio of Ca-41/Ca-40 in the solar system was determined to be (1.41 ± 0.14) × 10^-8, based primarily on two Ca-Al-rich inclusions (CAIs) from the CV chondrite Efremovka. Using an advanced analytical technique for isotopic measurements, we reanalyzed the potassium isotopic compositions of the two Efremovka CAIs and inferred initial Ca-41/Ca-40 ratios of (2.6 ± 0.9) × 10^-9 and (1.4 ± 0.6) × 10^-9 (2σ), a factor of 7-10 lower than the previously inferred value. Considering possible thermal processing that led to lower Al-26/Al-27 ratios in the two CAIs, we propose that the true solar system initial value of Ca-41/Ca-40 was approximately 4.2 × 10^-9. Synchronicity could have existed between Al-26 and Ca-41, indicating a uniform distribution of the two radionuclides at the time of CAI formation. The new initial Ca-41 abundance is 4-16 times lower than the value calculated for steady-state galactic nucleosynthesis. Therefore, Ca-41 could have originated as part of molecular cloud material with a free decay time of 0.2-0.4 Myr. Alternative possibilities, such as a last-minute input from a stellar source or early solar system irradiation, cannot be definitively ruled out. This underscores the need for more data from diverse CAIs to determine the true astrophysical origin of Ca-41.
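The quoted 0.2-0.4 Myr free decay time follows from standard radioactive decay arithmetic; a minimal sketch, assuming the stated abundance deficit factor f = 4-16 relative to the steady-state value and the Ca-41 mean life τ = t_1/2 / ln 2:

```latex
% Free decay: N(t) = N_0 e^{-t/\tau}, so a deficit factor f corresponds
% to a decay interval \Delta t = \tau \ln f, with
% \tau = 0.1\,\mathrm{Myr}/\ln 2 \approx 0.144\,\mathrm{Myr}.
\[
  \Delta t = \tau \ln f, \qquad
  \tau \ln 4 \approx 0.20~\mathrm{Myr}, \qquad
  \tau \ln 16 \approx 0.40~\mathrm{Myr}.
\]
```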
Abstract:
Our work is motivated by geographical forwarding of sporadic alarm packets to a base station in a wireless sensor network (WSN), where the nodes sleep-wake cycle periodically and asynchronously. We seek to develop local forwarding algorithms that can be tuned so as to trade off the end-to-end delay against a total cost, such as the hop count or total energy. Our approach is to solve, at each forwarding node en route to the sink, the local forwarding problem of minimizing the one-hop waiting delay subject to a lower bound constraint on a suitable reward offered by the next-hop relay; the constraint serves to tune the tradeoff. The reward metric used for the local problem is based on the end-to-end total cost objective (for instance, when the total cost is hop count, we use the progress toward the sink made by a relay as the reward). To begin with, the forwarding node is uncertain about the number of relays, their wake-up times, and the reward values, but knows the probability distributions of these quantities. At each relay wake-up instant, when a relay reveals its reward value, the forwarding node must decide whether to forward the packet or to wait for further relays to wake up. In terms of the operations research literature, our work can be considered a variant of the asset selling problem. We formulate our local forwarding problem as a partially observable Markov decision process (POMDP) and obtain inner and outer bounds for the optimal policy. Motivated by the computational complexity of the policies derived from these bounds, we formulate an alternate simplified model, the optimal policy for which is a simple threshold rule. We provide simulation results comparing the performance of the inner and outer bound policies against the simple policy, and also against the optimal policy when the source knows the exact number of relays. Observing the good performance and ease of implementation of the simple policy, we apply it to our motivating problem, i.e., local geographical routing of sporadic alarm packets in a large WSN. We compare the end-to-end performance (i.e., average total delay and average total cost) obtained by the simple policy, when used for local geographical forwarding, against that obtained by the globally optimal forwarding algorithm proposed by Kim et al. [1].
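A minimal sketch of the kind of threshold rule the simplified model yields, in Python (the names and the linearly relaxing threshold are illustrative assumptions, not the paper's derived policy):

```python
import random

def threshold_policy(wake_times, rewards, threshold_fn):
    """Forward at the first wake-up whose reward clears the
    (possibly time-varying) threshold; otherwise keep waiting.
    Returns (delay, reward) of the relay the packet is handed to."""
    last = None
    for t, r in sorted(zip(wake_times, rewards)):
        if r >= threshold_fn(t):  # reward clears the bar: forward now
            return t, r
        last = (t, r)
    # no relay cleared the bar: fall back to the last relay to wake up
    return last

# Toy usage: 5 relays with uniform wake-up times and rewards in [0, 1),
# and a threshold that relaxes linearly as the waiting delay grows.
random.seed(1)
wake = [random.random() for _ in range(5)]
rew = [random.random() for _ in range(5)]
delay, reward = threshold_policy(wake, rew, lambda t: 0.8 * (1.0 - t))
print(f"forwarded at t = {delay:.2f} with reward {reward:.2f}")
```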
Abstract:
The component and system reliability-based design of bridge abutments under earthquake loading is presented in this paper. A planar failure surface is used in conjunction with a pseudo-dynamic approach to compute the seismic active earth pressure on an abutment. The pseudo-dynamic method considers the effect of phase difference in shear waves and soil amplification, along with the horizontal seismic accelerations, strain localization in the backfill soil, and the associated post-peak reduction in shear resistance from peak to residual values along a previously formed failure plane. Four modes of stability, viz. sliding, overturning, eccentricity, and bearing capacity of the foundation soil, are considered in the analysis. The series system reliability is computed under the assumption of independent failure modes. Lower and upper bounds on the system reliability are also computed by taking into account the correlations between the four failure modes, which are evaluated using the direction cosines of the tangent planes at the most probable points of failure.
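For context, the independence assumption gives the exact series-system result, while the classical first-order bounds bracket the correlated case (standard reliability formulas with assumed notation; the paper's bounds are the tighter correlation-aware ones):

```latex
% P_{f,i}: failure probability of mode i (sliding, overturning,
% eccentricity, bearing capacity). Independent modes (exact):
\[
  P_{f,\mathrm{sys}} = 1 - \prod_{i=1}^{4} \bigl(1 - P_{f,i}\bigr),
\]
% classical first-order bounds when the modes are correlated:
\[
  \max_{i} P_{f,i} \;\le\; P_{f,\mathrm{sys}} \;\le\;
  \min\Bigl(1,\; \sum_{i=1}^{4} P_{f,i}\Bigr).
\]
```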
Abstract:
A rigorous lower bound solution, using finite-element limit analysis, has been obtained for the ultimate bearing capacity of two interfering strip footings placed on a sandy medium. Smooth as well as rough footing-soil interfaces are considered in the analysis. The failure load for an interfering footing is always greater than that for a single isolated footing. The effect of the interference on the failure load (i) is greater for rough footings than for smooth footings, (ii) increases with an increase in φ, and (iii) becomes almost negligible beyond S/B > 3. Compared with various theoretical and experimental results reported in the literature, the present analysis generally provides the lowest magnitude of the collapse load. Copyright (c) 2011 John Wiley & Sons, Ltd.
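Interference results of this kind are conventionally reported through an efficiency factor (a standard definition in this literature, not quoted from the abstract), where S is the clear spacing between the footings and B the footing width:

```latex
% Ratio of the ultimate bearing capacity of one of two interfering
% footings to that of a single isolated footing on the same soil:
\[
  \xi = \frac{q_{u}^{\,\mathrm{interfering}}}{q_{u}^{\,\mathrm{isolated}}}
  \;\ge\; 1,
  \qquad \xi \to 1 \quad \text{for } S/B > 3.
\]
```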
Abstract:
The linearization of the Drucker-Prager yield criterion for an axisymmetric problem has been achieved by simulating a sphere with a truncated icosahedron having 32 faces and 60 vertices. On this basis, a numerical formulation has been proposed for solving axisymmetric stability problems using lower-bound limit analysis, finite elements, and linear optimization. For comparison, the linearization of the Mohr-Coulomb yield criterion obtained by replacing the three cones with an interior polyhedron, as proposed earlier by Pastor and Turgeman for axisymmetric problems, has also been implemented. The two formulations have been applied to determine the collapse loads for a circular footing resting on a cohesive-frictional material with nonzero unit weight. The computational results are found to be quite convincing. (C) 2013 American Society of Civil Engineers.
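Schematically, such a linearization replaces the smooth yield cone by a finite set of planes so that the optimization becomes a linear program (notation assumed here; sign conventions for the criterion vary across texts):

```latex
% Drucker-Prager cone (I_1: first stress invariant, J_2: second
% deviatoric stress invariant; alpha, k: material constants):
\[
  f(\boldsymbol\sigma) = \sqrt{J_2} + \alpha I_1 - k \;\le\; 0,
\]
% approximated from the inside by p linear constraints, one per face
% of the inscribed polyhedron, which preserves the strict lower-bound
% character of the computed collapse load:
\[
  \mathbf{a}_i^{\mathsf{T}} \boldsymbol\sigma \;\le\; b_i,
  \qquad i = 1, \dots, p.
\]
```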
Abstract:
Amplify-and-forward (AF) relay-based cooperation has been investigated in the literature given its simplicity and practicality. Two models for AF, namely fixed gain and fixed power relaying, have been extensively studied. In fixed gain relaying, the relay gain is fixed but its transmit power varies as a function of the source-relay (SR) channel gain. In fixed power relaying, the relay's instantaneous transmit power is fixed, but its gain varies. We propose a general AF cooperation model in which an average transmit power constrained relay jointly adapts its gain and transmit power as a function of the channel gains. We derive the optimal AF gain policy that minimizes the fading-averaged symbol error probability (SEP) of MPSK and present insightful and tractable lower and upper bounds for it. We then analyze the SEP of the optimal policy. Our results show that the optimal scheme is up to 39.7% and 47.5% more energy-efficient than fixed power relaying and fixed gain relaying, respectively. Further, the weaker the direct source-destination link, the greater the energy-efficiency gains.
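In a common formulation of the two baseline models (notation assumed here, not quoted from the abstract), the relay scales its received signal y_r = h_sr x + n_r by a gain g:

```latex
% Fixed power relaying: instantaneous relay power held at P_r, so the
% gain tracks the instantaneous SR channel gain |h_sr|^2:
\[
  g^2 = \frac{P_r}{P_s\,|h_{sr}|^2 + \sigma^2},
\]
% Fixed gain relaying: g is constant, chosen to meet P_r on average,
% so the instantaneous transmit power varies with |h_sr|^2:
\[
  g^2 = \frac{P_r}{P_s\,\mathbb{E}\!\left[|h_{sr}|^2\right] + \sigma^2}.
\]
% The proposed general policy lets both g and the instantaneous power
% adapt jointly, subject only to an average power constraint.
```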
Abstract:
In this paper, a simple single-phase grid-connected photovoltaic (PV) inverter topology is considered, consisting of a boost section, a low-voltage single-phase inverter with an inductive filter, and a step-up transformer interfacing the grid. Ideally, this topology does not inject any lower order harmonics into the grid, owing to its high-frequency pulse width modulation operation. However, nonideal factors in the system, such as the distorted magnetizing current of the transformer induced by core saturation and the dead time of the inverter, contribute a significant amount of lower order harmonics to the grid current. A novel design of inverter current control that mitigates lower order harmonics is presented in this paper. An adaptive harmonic compensation technique and its design are proposed for lower order harmonic compensation. In addition, a proportional-resonant-integral (PRI) controller and its design are also proposed. This controller eliminates the dc component in the control system, which would otherwise introduce even harmonics in the grid current in the topology considered. The dynamics of the system due to the interaction between the PRI controller and the adaptive compensation scheme are also analyzed. The complete design has been validated with experimental results, and good agreement with the theoretical analysis of the overall system is observed.
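A proportional-resonant-integral controller of the kind described is commonly written as follows (one standard form; the gains and notation are assumptions, not taken from the abstract):

```latex
% K_p: proportional gain; K_i: integral gain (removes the dc component
% that would otherwise introduce even harmonics); K_r: resonant gain
% providing high loop gain at the fundamental grid frequency w_0.
\[
  G_{\mathrm{PRI}}(s) = K_p + \frac{K_i}{s} + \frac{K_r\, s}{s^2 + \omega_0^2}.
\]
```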
Abstract:
A pairwise independent network (PIN) model consists of pairwise secret keys (SKs) distributed among m terminals. The goal is to generate, through public communication among the terminals, a group SK that is information-theoretically secure from an eavesdropper. In this paper, we study the Harary graph PIN model, which has useful fault-tolerant properties. We derive the exact SK capacity for a regular Harary graph PIN model. Lower and upper bounds on the fault-tolerant SK capacity of the Harary graph PIN model are also derived.
Abstract:
We consider bounds for the capacity region of the Gaussian X channel (XC), a system consisting of two transmit-receive pairs, where each transmitter communicates with both receivers. We first classify the XC into two classes, the strong XC and the mixed XC. In the strong XC, either both direct channels are stronger than the cross channels or both cross channels are stronger than the direct channels, whereas in the mixed XC, one direct channel is stronger than the corresponding cross channel and the other is weaker. After this classification, we give outer bounds on the capacity region for each of the two classes. These are based on the idea that when one of the messages is eliminated from the XC, the rate region of the remaining three messages is enlarged. We make use of the Z channel, a system obtained by eliminating one message and its corresponding channel from the X channel, to bound the rate region of the remaining messages. The outer bound on the rate region of the remaining messages defines a subspace in R_+^4 and forms an outer bound on the capacity region of the XC. Thus, the outer bound on the capacity region of the XC is obtained as the intersection of the outer bounds on the four combinations of rate triplets of the XC. Using these outer bounds on the capacity region of the XC, we derive new sum-rate outer bounds for both strong and mixed Gaussian XCs and compare them with those existing in the literature. We show that the sum-rate outer bound for the strong XC gives the sum-rate capacity in three out of the four sub-regions of the strong Gaussian XC capacity region. In the case of the mixed Gaussian XC, we recover the recent results in [11], which showed that the sum-rate capacity is achieved in two out of the three sub-regions of the mixed XC capacity region, and give a simple alternate proof of the same.
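Compactly (notation assumed, not from the abstract): if O_i ⊂ R_+^4 denotes the Z-channel outer bound obtained by eliminating message i, extended trivially along the eliminated rate coordinate, then

```latex
\[
  \mathcal{C}_{\mathrm{XC}} \;\subseteq\; \bigcap_{i=1}^{4} \mathcal{O}_i,
  \qquad
  R_{\mathrm{sum}} \;\le\;
  \max_{(R_1,\dots,R_4)\,\in\,\cap_i \mathcal{O}_i} \; \sum_{j=1}^{4} R_j,
\]
% i.e., the sum-rate outer bound follows by maximizing the total rate
% over the intersection of the four Z-channel outer bounds.
```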
Abstract:
We consider the MIMO X channel (XC), a system consisting of two transmit-receive pairs, where each transmitter communicates with both receivers. Both the transmitters and the receivers are equipped with multiple antennas. First, we derive an upper bound on the sum-rate capacity of the MIMO XC under an individual power constraint at each transmitter. The sum-rate capacity of the two-user multiple access channel (MAC) that results when receiver cooperation is assumed forms an upper bound on the sum-rate capacity of the MIMO XC. We tighten this bound by considering noise correlation between the receivers and deriving the worst noise covariance matrix. It is shown that the worst noise covariance matrix is a saddle point of a zero-sum, two-player convex-concave game, which is solved through a primal-dual interior point method that handles the maximization and the minimization parts of the problem simultaneously. Next, we propose an achievable scheme that employs dirty paper coding at the transmitters and successive decoding at the receivers. We show that the derived upper bound is close to the achievable region of the proposed scheme at low to medium SNRs.
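Schematically, the tightened bound is the saddle value of a covariance game (a sketch with assumed notation; the exact constraint sets follow the paper's setup):

```latex
% S_1, S_2: transmit covariances under per-transmitter power limits;
% S_z: joint noise covariance at the two cooperating receivers, with
% fixed per-receiver marginals, so only the cross-correlation varies.
\[
  R_{\mathrm{sum}} \;\le\; \min_{S_z} \; \max_{S_1,\,S_2}
  \;\log \frac{\bigl|\, H_1 S_1 H_1^{\mathsf{H}}
                + H_2 S_2 H_2^{\mathsf{H}} + S_z \,\bigr|}
               {\bigl|\, S_z \,\bigr|}.
\]
```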
Abstract:
Estimating program worst case execution time (WCET) accurately and efficiently is a challenging task. Several programs exhibit phase behavior, wherein cycles per instruction (CPI) varies in phases during execution. Recent work has suggested the use of phases in such programs to estimate WCET with minimal instrumentation. However, the suggested model uses a function of mean CPI that carries no probabilistic guarantees. We propose to use Chebyshev's inequality, which can be applied to any arbitrary distribution of CPI samples, to probabilistically bound the CPI of a phase. Applying Chebyshev's inequality to phases that exhibit high CPI variation leads to pessimistic upper bounds. We propose a mechanism that refines such phases into sub-phases based on program counter (PC) signatures collected using profiling, and that also allows the user to control the variance of CPI within a sub-phase. We describe a WCET analyzer built along these lines and evaluate it with standard WCET and embedded benchmark suites on two different architectures for three chosen probabilities, p = {0.9, 0.95, 0.99}. For p = 0.99, refinement based on PC signatures alone reduces the average pessimism of the WCET estimate by 36% (77%) on Arch1 (Arch2). Compared to Chronos, an open-source static WCET analyzer, the average improvement in the estimates obtained by refinement is 5% (125%) on Arch1 (Arch2). On limiting the variance of CPI within a sub-phase to {50%, 10%, 5%, 1%} of its original value, the average accuracy of the WCET estimate improves further by {9%, 11%, 12%, 13%}, respectively, on Arch1. On Arch2, the average accuracy improves by 159% when the CPI variance is limited to 50% of its original value, and the improvement is marginal beyond that point.
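The probabilistic bound in question is the standard Chebyshev tail bound; a sketch of the arithmetic, assuming a phase's CPI samples have mean μ and standard deviation σ and a target probability p:

```latex
% Two-sided Chebyshev inequality, specialized to the upper tail:
\[
  \Pr\bigl(X \ge \mu + k\sigma\bigr)
  \;\le\; \Pr\bigl(|X-\mu| \ge k\sigma\bigr)
  \;\le\; \frac{1}{k^2},
\]
% so setting 1/k^2 = 1 - p gives a CPI bound holding with probability p:
\[
  \mathrm{CPI}_{p} = \mu + \frac{\sigma}{\sqrt{1-p}},
  \qquad p = 0.99 \;\Rightarrow\; \mathrm{CPI}_{0.99} = \mu + 10\,\sigma,
\]
% which makes explicit why high-variance phases yield pessimistic
% bounds and why limiting sigma within a sub-phase tightens them.
\]
```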
Abstract:
An n-length block code C is said to be r-query locally correctable if, for any codeword x ∈ C, one can probabilistically recover any one of the n coordinates of x by querying at most r coordinates of a possibly corrupted version of x. It is known that linear codes whose duals contain 2-designs are locally correctable. In this article, we consider linear codes whose duals contain t-designs for larger t. It is shown here that for such codes, for a given number of queries r, one can in general handle a larger number of corrupted bits under linear decoding. We exhibit, to our knowledge for the first time, a finite-length code whose dual contains 4-designs and which can tolerate a fraction of up to 0.567/r corrupted symbols, as against a maximum of 0.5/r in prior constructions. We also present an upper bound showing that 0.567 is the best possible for this code length and query complexity over this symbol alphabet, thereby establishing the optimality of this code in this respect. A second result in the article is a finite-length bound relating the number of queries r to the fraction of errors that can be tolerated by a locally correctable code that employs a randomized algorithm in which each instance of the algorithm involves t-error correction.
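The underlying correction mechanism for such codes can be sketched as follows (standard reasoning for linear codes; notation assumed): any dual codeword c of weight r+1 whose support contains the target position i yields a parity check from which x_i is recovered with r queries.

```latex
% Every dual codeword c imposes a parity check on each x in C:
\[
  \sum_{j \in \mathrm{supp}(c)} c_j x_j = 0
  \;\Longrightarrow\;
  x_i = -\,c_i^{-1} \!\!\sum_{\substack{j \in \mathrm{supp}(c) \\ j \ne i}}\!\! c_j x_j,
\]
% so x_i is read off from the r queried positions supp(c) \ {i};
% a t-design structure in the dual guarantees many such checks through
% every position, supporting the randomized choice of which check to use.
```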
Abstract:
We use the recently measured accurate BaBar data on the modulus of the pion electromagnetic form factor, F_π(t), up to an energy of 3 GeV, the I = 1 P-wave phase of the ππ scattering amplitude up to the ω-π threshold, the pion charge radius known from Chiral Perturbation Theory, and the recently measured JLab value of F_π in the spacelike region at t = -2.45 GeV^2 as inputs in a formalism that leads to bounds on F_π in the intermediate spacelike region. We compare our constraints with experimental data and with perturbative QCD, along with the results of several theoretical models for the nonperturbative contributions proposed in the literature.
Abstract:
In this paper, we revisit the combinatorial error model of Mazumdar et al. that models errors in high-density magnetic recording caused by lack of knowledge of grain boundaries in the recording medium. We present new upper bounds on the cardinality/rate of binary block codes that correct errors within this model. All our bounds, except for one, are obtained using combinatorial arguments based on hypergraph fractional coverings. The exception is a bound derived via an information-theoretic argument. Our bounds significantly improve upon existing bounds from the prior literature.