948 results for fixed-point arithmetic
Abstract:
The questions one should answer in engineering computations, whether deterministic, probabilistic/randomized, or heuristic, are (i) how good the computed results/outputs are and (ii) what the cost is, in terms of the amount of computation and the amount of storage used to obtain them. The absolutely error-free quantities, as well as the completely errorless computations occurring in a natural process, can never be captured by any means at our disposal. While the computations in nature/natural processes, including the input real quantities, are exact, the computations we perform on a digital computer or in an embedded form are never exact. The input data for such computations are also never exact, because any measuring instrument has an inherent error of a fixed order associated with it; this error, as a matter of hypothesis and not as a matter of assumption, is not less than 0.005 per cent. By error we mean relative error bounds. The fact that the exact error is never known, under any circumstances and in any context, implies that the term error is nothing but error bounds. Further, in engineering computations it is the relative error, or equivalently the relative error bounds (and not the absolute error), that is supremely important in conveying the quality of the results/outputs. Another important fact is that inconsistency and/or near-inconsistency in nature, i.e., in problems posed by nature, is completely nonexistent, while in our modelling of natural problems we may introduce inconsistency or near-inconsistency due to human error, due to the inherent non-removable error associated with any measuring device, or due to assumptions introduced to make the problem solvable, or more easily solvable, in practice. Thus, if we discover any inconsistency, or possibly any near-inconsistency, in a mathematical model, it is certainly due to one or more of the three foregoing factors. We do, however, go ahead and solve such inconsistent/near-inconsistent problems, and we obtain results that can be useful in real-world situations. The talk considers several deterministic, probabilistic, and heuristic algorithms in numerical optimisation, in other numerical and statistical computations, and in PAC (probably approximately correct) learning models. It highlights the quality of the results/outputs by specifying relative error bounds along with the associated confidence level, and the cost, viz. the amounts of computation and storage, through complexity. It points out the limitations of error-free computation (wherever possible, i.e., where the number of arithmetic operations is finite and known a priori) as well as of the usage of interval arithmetic. Further, the interdependence among the error, the confidence, and the cost is discussed.
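As a minimal illustration of how relative error bounds (rather than exact errors) propagate through a computation, the following Python sketch tracks a value together with a relative bound; the `Bounded` class and the 0.005 per cent instrument floor are illustrative stand-ins, and the propagation rules are the standard first-order ones, not taken from the talk itself.

```python
# Minimal sketch: first-order propagation of relative error bounds.
# The 0.005% floor mirrors the instrument-error hypothesis in the
# abstract; the class and propagation rules are illustrative only.

INSTRUMENT_FLOOR = 5e-5  # 0.005 per cent, as a relative bound

class Bounded:
    """A value carrying a relative error bound (never an exact error)."""
    def __init__(self, value, rel_bound=INSTRUMENT_FLOOR):
        self.value = value
        self.rel = max(rel_bound, INSTRUMENT_FLOOR)

    def __mul__(self, other):
        # Relative bounds add (to first order) under multiplication.
        return Bounded(self.value * other.value, self.rel + other.rel)

    def __add__(self, other):
        # Absolute bounds add; convert back to a relative bound.
        abs_bound = abs(self.value) * self.rel + abs(other.value) * other.rel
        total = self.value + other.value
        return Bounded(total, abs_bound / abs(total))

# The quality of a computed output is reported as a relative bound:
r = Bounded(100.0) * Bounded(2.0) + Bounded(50.0)
print(r.value, r.rel)  # 250.0 with a 9e-05 (0.009 per cent) bound
```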
Abstract:
Regenerating codes are a class of recently developed codes for distributed storage that, like Reed-Solomon codes, permit data recovery from any subset of k nodes within the n-node network. However, regenerating codes possess, in addition, the ability to repair a failed node by connecting to an arbitrary subset of d nodes. It has been shown that for the case of functional repair, there is a tradeoff between the amount of data stored per node and the bandwidth required to repair a failed node. A special case of functional repair is exact repair, where the replacement node is required to store data identical to that in the failed node. Exact repair is of interest as it greatly simplifies system implementation. The first result of this paper is an explicit, exact-repair code for the point on the storage-bandwidth tradeoff corresponding to the minimum possible repair bandwidth, for the case when d = n-1. This code has a particularly simple graphical description and, most interestingly, the ability to carry out exact repair without any need to perform arithmetic operations. We term this ability to perform repair through mere transfer of data "repair by transfer." The second result of this paper shows that the interior points on the storage-bandwidth tradeoff cannot be achieved under exact repair, thus pointing to the existence of a separate tradeoff under exact repair. Specifically, we identify a set of scenarios, which we term "helper node pooling," and show that it is the necessity to satisfy such scenarios that overconstrains the system.
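The repair-by-transfer idea can be sketched concretely. In the toy Python placement below (following the graphical description only at a high level; the symbol tokens and the choice n = 5 are illustrative, and the MDS precoding that guarantees recovery from any k nodes is elided), each coded symbol is associated with an edge of the complete graph on n nodes and stored on both endpoints, so a failed node is rebuilt by each surviving node forwarding the one symbol it shares with it, with no arithmetic at all.

```python
from itertools import combinations

# Toy placement for repair by transfer (d = n-1): one symbol per edge of
# the complete graph on n nodes, stored on both endpoints. The symbols
# here are opaque tokens; in the actual construction they are outputs of
# an MDS precode, so that any k nodes suffice for data recovery.
n = 5
symbols = {frozenset(e): f"s{i}"
           for i, e in enumerate(combinations(range(n), 2))}
store = {v: {e: s for e, s in symbols.items() if v in e} for v in range(n)}

def repair(failed):
    # Each surviving node transfers the single symbol it shares with the
    # failed node -- no arithmetic operations are performed.
    return {e: store[next(iter(e - {failed}))][e]
            for e in symbols if failed in e}

assert repair(2) == store[2]  # the replacement node is an exact copy
```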
Abstract:
We revisit the extraction of $\alpha_s(M_\tau^2)$ from the QCD perturbative corrections to the hadronic tau branching ratio, using an improved fixed-order perturbation theory based on the explicit summation of all renormalization-group accessible logarithms, proposed some time ago in the literature. In this approach, the powers of the coupling in the expansion of the QCD Adler function are multiplied by a set of functions $D_n$, which depend themselves on the coupling and can be written in closed form by iteratively solving a sequence of differential equations. We find that the new expansion has an improved behavior in the complex energy plane compared to that of the standard fixed-order perturbation theory (FOPT), and is similar, but not identical, to the contour-improved perturbation theory (CIPT). With five terms in the perturbative expansion we obtain, in the $\overline{\rm MS}$ scheme, $\alpha_s(M_\tau^2) = 0.338 \pm 0.010$, using as input a precise value for the perturbative contribution to the hadronic width of the tau lepton reported recently in the literature.
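Schematically, the reorganization works as in the sketch below; the notation is a generic RG-summed form assumed here for illustration, not the paper's exact conventions.

```latex
% Illustrative sketch (generic RG-summed notation, assumed here; not the
% paper's exact conventions). The FOPT series of the reduced Adler function,
\[
  \widehat{D}(s) \;=\; \sum_{n\ge 1} a^n(\mu^2)
      \sum_{k\ge 1} c_{n,k}\,\ln^{\,k-1}\!\frac{\mu^2}{-s},
\]
% is reorganized so that each power of the coupling multiplies a closed-form
% function of the single variable u = \beta_0\, a(\mu^2)\ln(\mu^2/(-s)):
\[
  \widehat{D}(s) \;=\; \sum_{n\ge 1} a^n(\mu^2)\, D_n(u),
  \qquad
  D_1(u) \;=\; \frac{c_{1,1}}{1-u},
\]
% where D_1 sums all leading logarithms (a geometric series) and each
% subsequent D_n is obtained by solving a first-order differential
% equation fixed by the renormalization group.
```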
Abstract:
Organo-clay was prepared by incorporating different amounts (in terms of CEC, ranging from 134 to 840 mg) of a quaternary ammonium cation (QAC), hexadecyltrimethylammonium bromide ([C19H42N]Br), into montmorillonite clay. The prepared organo-clays were characterized by CHN analysis and XRD to measure the elemental content and the interlayer spacing of the surfactant-modified clay. Batch sorption of permanganate from aqueous media by the organo-clays was studied at different acidic strengths (pH 1-7). The experimental results show that the rate and amount of adsorption of permanganate were higher at lower pH compared to raw montmorillonite. Laboratory fixed-bed experiments were conducted to evaluate the breakthrough time and the nature of the breakthrough curves. The shape of the breakthrough curves shows that an initial cationic surfactant loading of 1.0 CEC of the clay is enough for the permanganate ions to enter the interlamellar region of the surfactant-modified smectite clays. These fixed-bed studies were also used to quantify the effect of bed depth on breakthrough time during the uptake of permanganate. Calculation of the thermodynamic parameters shows that the sorption of permanganate is spontaneous and follows first-order kinetics.
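For reference, the standard pseudo-first-order (Lagergren) rate law and the spontaneity criterion usually invoked in such sorption studies are sketched below; the abstract does not report its fitted constants, so the symbols are generic.

```latex
% Standard sorption relations (generic symbols; the abstract does not
% report its fitted constants). Pseudo-first-order (Lagergren) kinetics:
\[
  \ln\left(q_e - q_t\right) \;=\; \ln q_e \;-\; k_1 t,
\]
% where q_t and q_e are the amounts sorbed at time t and at equilibrium,
% and k_1 is the rate constant. Spontaneity corresponds to a negative
% Gibbs free energy change computed from the equilibrium constant K_c:
\[
  \Delta G^{\circ} \;=\; -\,R\,T\,\ln K_c \;<\; 0.
\]
```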
Abstract:
We consider a dense, ad hoc wireless network confined to a small region. The wireless network is operated as a single cell, i.e., only one successful transmission is supported at a time. Data packets are sent between source-destination pairs by multihop relaying. We assume that nodes self-organize into a multihop network such that all hops are of length d meters, where d is a design parameter. There is a contention-based multiaccess scheme, and it is assumed that every node always has data to send, either originated from it or a transit packet (saturation assumption). In this scenario, we seek to maximize a measure of the transport capacity of the network (measured in bit-meters per second) over power controls (in a fading environment) and over the hop distance d, subject to an average power constraint. We first motivate that for a dense collection of nodes confined to a small region, single-cell operation is efficient for single-user decoding transceivers. Then, operating the dense ad hoc wireless network (described above) as a single cell, we study the hop length and power control that maximize the transport capacity for a given network power constraint. More specifically, for a fading channel and for a fixed transmission time strategy (akin to the IEEE 802.11 TXOP), we find that there exists an intrinsic aggregate bit rate ($\Theta_{\mathrm{opt}}$ bits per second, depending on the contention mechanism and the channel fading characteristics) carried by the network when operating at the optimal hop length and power control. The optimal transport capacity is of the form $d_{\mathrm{opt}}(\bar{P}_t) \times \Theta_{\mathrm{opt}}$, with $d_{\mathrm{opt}}$ scaling as $\bar{P}_t^{1/\eta}$, where $\bar{P}_t$ is the available time-average transmit power and $\eta$ is the path-loss exponent. Under certain conditions on the fading distribution, we then provide a simple characterization of the optimal operating point. Simulation results are provided comparing the performance of the optimal strategy derived here with some simple strategies for operating the network.
Abstract:
The introduction of processor-based instruments in power systems is resulting in rapid growth of the measured data volume. The present practice in most utilities is to store only some of the important data in a retrievable fashion, for a limited period. Subsequently, even this data is either deleted or stored in backup devices. The investigations presented here explore the application of lossless data compression techniques for archiving all the operational data, so that they can be put to more effective use. Four arithmetic coding methods, suitably modified for handling power system steady-state operational data, are proposed here. The performance of the proposed methods is evaluated using actual data pertaining to the Southern Regional Grid of India.
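As background for the four proposed methods (whose specific modifications the abstract does not spell out), a minimal arithmetic coder over a static symbol model can be sketched as follows; the alphabet and probabilities are illustrative, not the paper's, and the floating-point version is for clarity only.

```python
# Minimal illustrative arithmetic coder with a static model. Floating
# point is used for clarity; production coders use integer
# renormalization to avoid precision loss on long inputs.

PROBS = {"a": 0.6, "b": 0.3, "c": 0.1}  # illustrative model

def cum_intervals(probs):
    lo, out = 0.0, {}
    for s, p in probs.items():
        out[s] = (lo, lo + p)  # each symbol owns a sub-interval of [0,1)
        lo += p
    return out

INTERVALS = cum_intervals(PROBS)

def encode(msg):
    lo, hi = 0.0, 1.0
    for s in msg:  # narrow the interval by each symbol's sub-interval
        a, b = INTERVALS[s]
        lo, hi = lo + (hi - lo) * a, lo + (hi - lo) * b
    return (lo + hi) / 2  # any number inside the final interval

def decode(x, length):
    out, lo, hi = [], 0.0, 1.0
    for _ in range(length):  # mirror the narrowing to read symbols back
        for s, (a, b) in INTERVALS.items():
            s_lo, s_hi = lo + (hi - lo) * a, lo + (hi - lo) * b
            if s_lo <= x < s_hi:
                out.append(s)
                lo, hi = s_lo, s_hi
                break
    return "".join(out)

msg = "abacab"
assert decode(encode(msg), len(msg)) == msg
```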
Abstract:
Sum rules constraining the R-current spectral densities are derived holographically for the case of D3-branes, M2-branes and M5-branes, all at finite chemical potentials. In each of these cases the sum rule relates a certain integral of the spectral density over the frequency to terms which depend both on the long-distance physics (hydrodynamics) and on the short-distance physics of the theory. The terms which depend on the short-distance physics result from the presence of certain chiral primaries in the OPE of two R-currents, which are turned on at finite chemical potential. Since these sum rules contain information about the OPE, they provide an alternative method to obtain the structure constants of the two R-currents and the chiral primary. As a consistency check, we show that the three-point function derived from the sum rule precisely matches that obtained using Witten diagrams.
Abstract:
Using the spectral multiplicities of the standard torus, we endow the Laplace eigenspaces with Gaussian probability measures. This induces a notion of random Gaussian Laplace eigenfunctions on the torus ("arithmetic random waves"). We study the distribution of the nodal length of random eigenfunctions for large eigenvalues, and our primary result is that the asymptotics for the variance is nonuniversal. Our result is intimately related to the arithmetic of lattice points lying on a circle with radius corresponding to the energy.
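Concretely, the model can be written out as below; this is the standard notation for arithmetic random waves, assumed here for illustration rather than quoted from the paper.

```latex
% Standard construction (notation assumed for illustration): for an
% eigenvalue 4\pi^2 n of the torus Laplacian with multiplicity N_n, let
% \Lambda_n = \{\lambda \in \mathbb{Z}^2 : |\lambda|^2 = n\} and define
\[
  f_n(x) \;=\; \frac{1}{\sqrt{N_n}} \sum_{\lambda \in \Lambda_n}
      a_\lambda\, e^{2\pi i \langle \lambda,\, x \rangle},
  \qquad x \in \mathbb{T}^2 = \mathbb{R}^2/\mathbb{Z}^2,
\]
% with the a_\lambda standard complex Gaussians, independent except for
% a_{-\lambda} = \overline{a_\lambda}, which keeps f_n real-valued. The
% nodal length is the length of f_n^{-1}(0); its variance depends on the
% angular distribution of \Lambda_n on the circle of radius \sqrt{n},
% which is the source of the nonuniversality.
```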
Abstract:
Savitzky-Golay (S-G) filters are finite impulse response lowpass filters obtained while smoothing data using a local least-squares (LS) polynomial approximation. Savitzky and Golay proved in their hallmark paper that local LS fitting of polynomials and their evaluation at the mid-point of the approximation interval is equivalent to filtering with a fixed impulse response. The problem that we address here is: how to choose a pointwise minimum mean squared error (MMSE) S-G filter length or order for smoothing, while preserving the temporal structure of a time-varying signal. We solve the bias-variance tradeoff involved in the MMSE optimization using Stein's unbiased risk estimator (SURE). We observe that the 3-dB cutoff frequency of the SURE-optimal S-G filter is higher where the signal varies fast locally, and vice versa, essentially enabling us to suitably trade off the bias and variance, thereby resulting in near-MMSE performance. At low signal-to-noise ratios (SNRs), it is seen that the adaptive filter length algorithm performance improves by incorporating a regularization term in the SURE objective function. We consider the algorithm performance on real-world electrocardiogram (ECG) signals. The results exhibit considerable SNR improvement. Noise performance analysis shows that the proposed algorithms are comparable, and in some cases, better than some standard denoising techniques available in the literature.
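The LS-fit-equals-fixed-filter equivalence at the heart of S-G filtering is easy to verify numerically; the sketch below builds the mid-point impulse response from the polynomial design matrix and compares it against scipy's canonical coefficients (the window half-width M and order p are illustrative choices, not the paper's).

```python
import numpy as np
from scipy.signal import savgol_coeffs

# Local LS polynomial smoothing as a fixed FIR filter: fitting a
# degree-p polynomial over a (2M+1)-sample window and evaluating it at
# the window centre is a linear map of the samples, i.e., a fixed
# impulse response.
M, p = 7, 3                                # half-width, polynomial order
t = np.arange(-M, M + 1)                   # window time axis
A = np.vander(t, p + 1, increasing=True)   # design matrix [1, t, t^2, ...]

# LS fit: coeffs = pinv(A) @ x; the smoothed centre value is the
# constant term, so the impulse response is the first row of the pinv.
h = np.linalg.pinv(A)[0]

# Matches the canonical S-G smoothing coefficients (symmetric for
# deriv=0, so convolution and dot-product orderings coincide).
assert np.allclose(h, savgol_coeffs(2 * M + 1, p))
```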
Abstract:
The n-interior-point variant of the Erdos-Szekeres problem is to show the following: for every n >= 1, every point set in the plane with a sufficient number of interior points contains a convex polygon containing exactly n interior points. This has been proved only for n <= 3. In this paper, we prove it for point sets having at most a logarithmic number of convex layers. We also show that for any point set containing at least n interior points, there exists a 2-convex polygon that contains exactly n interior points.
Abstract:
A path in an edge-colored graph is said to be a rainbow path if no two edges on the path have the same color. An edge-colored graph is (strongly) rainbow connected if there exists a (geodesic) rainbow path between every pair of vertices. The rainbow connectivity (respectively, strong rainbow connectivity) of a graph G, denoted rc(G) (respectively, src(G)), is the smallest number of colors required to edge-color the graph such that G is (strongly) rainbow connected. In this paper we study the rainbow connectivity problem and the strong rainbow connectivity problem from a computational point of view. Our main results can be summarised as follows: 1) For every fixed k >= 3, it is NP-complete to decide whether src(G) <= k, even when the graph G is bipartite. 2) For every fixed odd k >= 3, it is NP-complete to decide whether rc(G) <= k. This resolves one of the open problems posed by Chakraborty et al. (J. Comb. Opt., 2011), where they prove the hardness for the even case. 3) The following problem is fixed-parameter tractable: given a graph G, determine the maximum number of pairs of vertices that can be rainbow connected using two colors. 4) For a directed graph G, it is NP-complete to decide whether rc(G) <= 2.
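For small instances, whether a given edge coloring makes a graph rainbow connected can be checked by brute force; the `rainbow_connected` helper below is an illustrative sketch (exponential in the worst case, consistent with the NP-completeness results above) that searches for a rainbow path between every pair of vertices.

```python
from itertools import combinations

# Brute-force check that an edge coloring makes a graph rainbow
# connected: every pair of vertices must be joined by a path whose edge
# colors are all distinct. Exponential in the worst case, but fine for
# small examples.

def rainbow_connected(vertices, color):  # color: {frozenset({u,v}): c}
    adj = {v: set() for v in vertices}
    for e in color:
        u, v = tuple(e)
        adj[u].add(v)
        adj[v].add(u)

    def rainbow_path(u, t, used, seen):
        if u == t:
            return True
        for w in adj[u] - seen:
            c = color[frozenset({u, w})]
            if c not in used and rainbow_path(w, t, used | {c}, seen | {w}):
                return True
        return False

    return all(rainbow_path(s, t, frozenset(), {s})
               for s, t in combinations(vertices, 2))

# A 4-cycle is rainbow connected with two colors (rc(C4) = 2):
C4 = {frozenset(e): c for e, c in [((0, 1), "r"), ((1, 2), "b"),
                                   ((2, 3), "r"), ((3, 0), "b")]}
assert rainbow_connected(range(4), C4)
```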
Abstract:
This paper investigates a new approach for point matching in multi-sensor satellite images. The feature points are matched using multi-objective optimization (an angle criterion and a distance condition) based on a Genetic Algorithm (GA). The optimization is more efficient because it considers both the angle criterion and the distance condition, incorporating multi-objective switching in the fitness function. It matches three corresponding corner points detected in the reference and sensed images; the sensed image is then aligned with the reference image using the affine transformation determined by these matches. The performance of the image registration is evaluated on the results obtained, and it is concluded that the proposed approach is efficient.
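Once three corner correspondences are fixed, the affine transform follows from a 6x6 linear solve; the NumPy sketch below (with an illustrative helper `affine_from_3_pairs`, not the paper's code) recovers the affine parameters from three matched point pairs.

```python
import numpy as np

# An affine transform [x'; y'] = A @ [x; y] + t has six unknowns, so
# three non-collinear point correspondences determine it exactly.

def affine_from_3_pairs(src, dst):
    """src, dst: arrays of shape (3, 2) with matched points."""
    M = np.zeros((6, 6))
    b = np.zeros(6)
    for i, ((x, y), (xp, yp)) in enumerate(zip(src, dst)):
        M[2 * i]     = [x, y, 1, 0, 0, 0]   # row for x' = a x + b y + tx
        M[2 * i + 1] = [0, 0, 0, x, y, 1]   # row for y' = c x + d y + ty
        b[2 * i], b[2 * i + 1] = xp, yp
    a, b_, tx, c, d, ty = np.linalg.solve(M, b)
    return np.array([[a, b_], [c, d]]), np.array([tx, ty])

# Example: recover a known rotation + shift from three matched corners.
A_true = np.array([[0.0, -1.0], [1.0, 0.0]])   # 90-degree rotation
t_true = np.array([5.0, -2.0])
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
dst = src @ A_true.T + t_true
A, t = affine_from_3_pairs(src, dst)
assert np.allclose(A, A_true) and np.allclose(t, t_true)
```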
Abstract:
In a cooperative system with an amplify-and-forward (AF) relay, the cascaded channel training protocol enables the destination to estimate the source-destination channel gain and the product of the source-relay (SR) and relay-destination (RD) channel gains using only two pilot transmissions from the source. Notably, the destination does not require a separate estimate of the SR channel. We develop a new expression for the symbol error probability (SEP) of AF relaying when imperfect channel state information (CSI) is acquired using the above training protocol. A tight SEP upper bound is also derived; it shows that full diversity is achieved, albeit at a high signal-to-noise ratio (SNR). Our analysis uses fewer simplifying assumptions and leads to expressions that are accurate even at low SNRs and differ from those in the literature. For instance, it does not approximate the estimate of the product of the SR and RD channel gains by the product of the estimates of the SR and RD channel gains. We show that cascaded channel estimation often outperforms a channel estimation protocol that incurs a greater training overhead by forwarding a quantized estimate of the SR channel gain to the destination. The extent of pilot power boosting, if allowed, that is required to improve performance is also quantified.
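The two-pilot cascaded training protocol can be sketched as a toy baseband simulation; the signal model, the fixed relay gain G, and the least-squares estimators below are generic stand-ins assumed for illustration, not the paper's exact expressions.

```python
import numpy as np

# Toy sketch of cascaded channel training in AF relaying: from two pilot
# transmissions the destination estimates (i) the source-destination
# gain h_sd and (ii) the *product* g = h_sr * h_rd -- it never needs an
# estimate of h_sr alone.
rng = np.random.default_rng(0)
trials, snr = 100_000, 100.0   # illustrative values
p = np.sqrt(snr)               # pilot amplitude (unit-variance noise)

def cn():  # i.i.d. unit-variance circularly symmetric complex Gaussians
    return (rng.standard_normal(trials)
            + 1j * rng.standard_normal(trials)) / np.sqrt(2)

h_sd, h_sr, h_rd = cn(), cn(), cn()   # Rayleigh-fading channel gains

# Pilot 1: heard directly at the destination.
y1 = h_sd * p + cn()
# Pilot 2: amplified by the relay (fixed gain G, an assumption here)
# and forwarded, so the destination observes the cascaded product.
G = 1.0
y2 = h_rd * G * (h_sr * p + cn()) + cn()

h_sd_hat = y1 / p        # simple LS estimates of h_sd and of the product
g_hat = y2 / (G * p)
print("MSE h_sd:", np.mean(np.abs(h_sd_hat - h_sd) ** 2))
print("MSE g:   ", np.mean(np.abs(g_hat - h_sr * h_rd) ** 2))
```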
Abstract:
Super-resolution imaging techniques are of paramount interest for applications in bioimaging and fluorescence microscopy. Recent advances in bioimaging demand application-tailored point spread functions. Here, we present some approaches for generating application-tailored point spread functions along with fast imaging capabilities. Aperture engineering techniques provide interesting solutions for obtaining desired system point spread functions. Specially designed spatial filters, realized by optical masks, are outlined in both single-lens and 4Pi configurations. Applications include depth imaging, multifocal imaging, and super-resolution imaging. Such an approach is suitable for fruitful integration with most existing state-of-the-art imaging microscopy modalities.