157 results for Error estimate.
Abstract:
Matroidal networks were introduced by Dougherty et al. and have been well studied in the recent past. It was shown that a network has a scalar linear network coding solution if and only if it is a matroidal network associated with a representable matroid. A particularly interesting feature of this development is the ability to construct (scalar and vector) linearly solvable networks using certain classes of matroids. Furthermore, it was shown through the connection between network coding and matroid theory that linear network coding is not always sufficient for general network coding scenarios. The current work attempts to establish a connection between matroid theory and network-error correcting and detecting codes. In a similar vein to the theory connecting matroids and network coding, we abstract the essential aspects of linear network-error detecting codes to arrive at the definition of a matroidal error detecting network (and similarly, a matroidal error correcting network abstracting from network-error correcting codes). An acyclic network (with arbitrary sink demands) is then shown to possess a scalar linear error detecting (correcting) network code if and only if it is a matroidal error detecting (correcting) network associated with a representable matroid. Therefore, constructing such network-error correcting and detecting codes implies the construction of certain representable matroids that satisfy some special conditions, and vice versa. We then present algorithms that enable the construction of matroidal error detecting and correcting networks with a specified capability of network-error correction. Using these construction algorithms, a large class of hitherto unknown scalar linearly solvable networks with multisource, multicast, and multiple-unicast network-error correcting codes is made available for theoretical use and practical implementation, with parameters such as the number of information symbols, number of sinks, number of coding nodes, and error-correcting capability being arbitrary, limited only by the computing power available to execute the algorithms. The complexity of constructing these networks is shown to be comparable with that of existing algorithms that design multicast scalar linear network-error correcting codes. Finally, we also show that linear network coding is not sufficient for the general network-error correction (detection) problem with arbitrary demands. In particular, for the same number of network errors, we show a network for which there is a nonlinear network-error detecting code satisfying the demands at the sinks, whereas there are no linear network-error detecting codes that do the same.
Abstract:
This work considers the identification of the available whitespace, i.e., the regions that do not contain any existing transmitter within a given geographical area. To this end, n sensors are deployed at random locations within the area. These sensors detect the presence of a transmitter within their radio range r_s using a binary sensing model, and their individual decisions are combined to estimate the available whitespace. The limiting behavior of the recovered whitespace as a function of n and r_s is analyzed. It is shown that both the fraction of the available whitespace that the nodes fail to recover and their radio range optimally scale as log(n)/n as n gets large. The problem of minimizing the sum absolute error in transmitter localization is also analyzed, and the corresponding optimal scaling of the radio range and the necessary minimum transmitter separation is determined.
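As a minimal illustration of the kind of coverage experiment described above, the sketch below drops n sensors uniformly in a unit square and measures the fraction of the area that lies outside every sensor's radio range. The unit-square geometry, the grid resolution, and the specific range rule r_s(n) are illustrative assumptions, not the paper's exact setup or scaling constants.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)

def uncovered_fraction(n, r_s, n_grid=200):
    """Monte Carlo estimate of the whitespace fraction the sensors fail to recover.

    Sensors are dropped uniformly in the unit square; a grid point is 'recovered'
    if it lies within radio range r_s of at least one sensor."""
    sensors = rng.random((n, 2))
    xs = np.linspace(0.0, 1.0, n_grid)
    gx, gy = np.meshgrid(xs, xs)
    grid = np.column_stack([gx.ravel(), gy.ravel()])
    dist, _ = cKDTree(sensors).query(grid, k=1)  # distance to nearest sensor
    return np.mean(dist > r_s)

for n in [100, 400, 1600, 6400]:
    r_s = np.sqrt(np.log(n) / (np.pi * n))  # illustrative range rule only
    print(n, uncovered_fraction(n, r_s))
```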
Abstract:
A new class of exact-repair regenerating codes is constructed by stitching together shorter erasure correction codes, where the stitching patterns can be viewed as block designs. The proposed codes have the help-by-transfer property, where the helper nodes simply transfer part of the stored data directly, without performing any computation. This embedded error correction structure makes the decoding process straightforward, and in some cases the complexity is very low. We show that this construction is able to achieve performance better than space-sharing between the minimum storage regenerating codes and the minimum repair-bandwidth regenerating codes, and it is the first class of codes to achieve this performance. In fact, it is shown that the proposed construction can achieve a nontrivial point on the optimal functional-repair tradeoff, and it is asymptotically optimal at high rate, i.e., it asymptotically approaches the minimum storage and the minimum repair-bandwidth simultaneously.
Abstract:
Regionalization of extreme rainfall is useful for various applications in hydro-meteorology. There is a dearth of regionalization studies on extreme rainfall in India. In this context, a set of 25 regions that are homogeneous in 1-, 2-, 3-, 4- and 5-day extreme rainfall is delineated based on a seasonality measure of extreme rainfall and location indicators (latitude, longitude and altitude) using global fuzzy c-means (GFCM) cluster analysis. The regions are validated for homogeneity in the L-moment framework. One application of the regions is in arriving at quantile estimates of extreme rainfall at sparsely gauged/ungauged locations using options such as regional frequency analysis (RFA). The RFA involves use of rainfall-related information from gauged sites in a region as the basis to estimate quantiles of extreme rainfall for target locations that resemble the region in terms of rainfall characteristics. A procedure for RFA based on GFCM-delineated regions is presented and its effectiveness is evaluated by leave-one-out cross-validation. Error in quantile estimates for ungauged sites is compared with that resulting from the use of the region-of-influence (ROI) approach, which forms site-specific regions exclusively for quantile estimation. Results indicate that errors in quantile estimates based on GFCM regions and ROI are fairly close, and neither of them is consistent in yielding the least error over all the sites. The cluster analysis approach was effective in reducing the number of regions to be delineated for RFA.
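For readers unfamiliar with fuzzy c-means, the sketch below implements the plain algorithm on standardized site attributes (seasonality measures plus latitude, longitude and altitude). It is a generic FCM sketch; the paper's global FCM (GFCM) variant and its attribute weighting are not reproduced here, and the synthetic data and number of clusters are placeholders.

```python
import numpy as np

rng = np.random.default_rng(5)

def fuzzy_c_means(X, c, m=2.0, iters=100):
    """Plain fuzzy c-means: X is (n_sites, n_features); returns cluster centres and
    the (n_sites, c) membership matrix whose rows sum to one."""
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]            # weighted means
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1.0))                           # standard FCM update
        U /= U.sum(axis=1, keepdims=True)
    return centres, U

# Toy data: 200 sites described by two seasonality measures and lat/lon/altitude,
# all standardized; the actual study uses rain-gauge records over India.
X = rng.standard_normal((200, 5))
centres, U = fuzzy_c_means(X, c=4)
print(U.sum(axis=1)[:3])  # each site's memberships sum to 1
```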
Abstract:
Regional frequency analysis is widely used for estimating quantiles of hydrological extreme events at sparsely gauged/ungauged target sites in river basins. It involves identification of a region (group of watersheds) resembling the watershed of the target site, and use of information pooled from the region to estimate the quantile for the target site. In the analysis, the watershed of the target site is assumed to completely resemble watersheds in the identified region in terms of the mechanism underlying generation of extreme events. In reality, it is rare to find watersheds that completely resemble each other. A fuzzy clustering approach can account for partial resemblance of watersheds and yield region(s) for the target site. Formation of regions and quantile estimation require discerning information from the fuzzy-membership matrix obtained from the approach. Practitioners often defuzzify the matrix to form disjoint clusters (regions) and use them as the basis for quantile estimation. The defuzzification approach (DFA) results in loss of the information discerned on partial resemblance of watersheds. The lost information cannot be utilized in quantile estimation, owing to which the estimates could have significant error. To avert this loss of information, a threshold strategy (TS) was considered in some prior studies. In this study, it is analytically shown that the strategy results in under-prediction of quantiles. To address this, a mathematical approach is proposed and its effectiveness in estimating flood quantiles relative to DFA and TS is demonstrated through Monte Carlo simulation experiments and a case study on the Mid-Atlantic water resources region, USA.
Abstract:
The high species richness of tropical forests has long been recognized, yet there remains substantial uncertainty regarding the actual number of tropical tree species. Using a pantropical tree inventory database from closed canopy forests, consisting of 657,630 trees belonging to 11,371 species, we use a fitted value of Fisher's alpha and an approximate pantropical stem total to estimate the minimum number of tropical forest tree species to fall between approximately 40,000 and approximately 53,000, i.e., at the high end of previous estimates. Contrary to common assumption, the Indo-Pacific region was found to be as species-rich as the Neotropics, with both regions having a minimum of approximately 19,000-25,000 tree species. Continental Africa is relatively depauperate, with a minimum of approximately 4,500-6,000 tree species. Very few species are shared among the African, American, and the Indo-Pacific regions. We provide a methodological framework for estimating species richness in trees that may help refine species richness estimates of tree-dependent taxa.
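The core calculation rests on Fisher's log-series relation S = α ln(1 + N/α). The sketch below fits α from the inventory counts quoted above and then extrapolates to a pantropical stem total; the stem total used here is a placeholder, not the figure from the paper.

```python
import numpy as np
from scipy.optimize import brentq

def fisher_alpha(n_individuals, n_species):
    """Solve S = alpha * ln(1 + N / alpha) for alpha (Fisher's log-series)."""
    f = lambda a: a * np.log1p(n_individuals / a) - n_species
    return brentq(f, 1e-3, 1e7)

# Fit alpha on the inventory quoted in the abstract.
alpha = fisher_alpha(657_630, 11_371)

# Extrapolate to a pantropical stem total; the value below is a placeholder,
# not the stem total used in the paper.
N_pantropical = 1e11
S_min = alpha * np.log1p(N_pantropical / alpha)
print(f"alpha = {alpha:.0f}, estimated minimum species richness = {S_min:.0f}")
```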
Abstract:
We revisit the a posteriori error analysis of discontinuous Galerkin methods for the obstacle problem derived in [25]. Under a mild assumption on the trace of the obstacle, we derive a reliable a posteriori error estimator which does not involve min/max functions. A key ingredient in this approach is an auxiliary problem with a discrete obstacle. Applications to various discontinuous Galerkin finite element methods are presented. Numerical experiments show that the new estimator obtained in this article performs better.
Abstract:
The use of pit-toilets has severely contaminated the groundwater with nitrate ions in Mulbagal town, Karnataka, India. This paper examines the potential of nitrate ions in the pit-toilet effluents to transform to N2O and to escape to the atmosphere from 16 wards of Mulbagal town. Anaerobic conditions prevailing in the pit-toilet convert 25 % of the available N to ammonium ions. Only 3-33 % of the ammonium ions transform to nitrate ions in the pit-toilet and escape with the effluent. During migration to the aquifer, only 4.5 % of the available nitrate concentration in the effluent transforms to N2 and N2O gases in the 1.5-m-thick saturated zone underlying the pit-toilet; 36-55 % of the gases comprise N2O and the remainder N2. Further, only 18 % of the N2O formed escapes to the atmosphere, while the remainder is retained in soil solution. Calculations show that 9.88 x 10^13 molecules of N2O per cm^2 would be cumulatively released from the 16 wards of Mulbagal town, over an area of 4.9 km^2.
Abstract:
We consider carrier frequency offset (CFO) estimation in the context of multiple-input multiple-output (MIMO) orthogonal frequency-division multiplexing (OFDM) systems over noisy frequency-selective wireless channels in both single- and multiuser scenarios. We conceive a new approach to parameter estimation by discretizing the continuous-valued CFO parameter into a discrete set of bins and then invoking detection theory, analogous to the minimum-bit-error-ratio optimization framework for detecting the finite-alphabet received signal. Using this approach, we propose a novel CFO estimation method and study its performance using both analytical results and Monte Carlo simulations. We obtain expressions for the variance of the CFO estimation error and the resultant BER degradation in the single-user scenario. Our simulations demonstrate that the overall BER performance of a MIMO-OFDM system using the proposed method is substantially improved for all the modulation schemes considered, albeit this is achieved at increased complexity.
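The sketch below illustrates only the binning idea: candidate CFO values are discretized into a grid, each candidate de-rotates the received block, and the bin that best matches a known pilot is selected. It is a generic grid-search estimator over a single antenna and a flat channel, not the detection-theoretic formulation or the MIMO-OFDM setup of the paper; the pilot, grid, and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def grid_cfo_estimate(rx, pilot, candidates):
    """Pick the CFO bin whose de-rotated received block best matches the known pilot."""
    n = np.arange(len(rx))
    best_eps, best_metric = None, -np.inf
    for eps in candidates:
        derot = rx * np.exp(-2j * np.pi * eps * n)   # undo the candidate offset
        metric = np.abs(np.vdot(pilot, derot))       # coherent correlation
        if metric > best_metric:
            best_eps, best_metric = eps, metric
    return best_eps

# Toy example: BPSK pilot over a flat channel with a true normalized CFO of 0.013.
pilot = rng.choice([-1.0, 1.0], size=256).astype(complex)
n = np.arange(256)
rx = pilot * np.exp(2j * np.pi * 0.013 * n)
rx = rx + 0.1 * (rng.standard_normal(256) + 1j * rng.standard_normal(256))

candidates = np.linspace(-0.05, 0.05, 201)  # CFO bins
print(grid_cfo_estimate(rx, pilot, candidates))
```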
Abstract:
One-dimensional transient heat flow is interpreted as a procession of macro-scale translatory motion of indexed isothermal surfaces. A new analytical model is proposed by introducing the velocity of an isothermal surface into the Fourier heat diffusion equation. The velocity-dependent function is extracted by revisiting the concept of the thermal layer in heat conduction in solids and the exact solution, in order to estimate the thermal diffusivity. The experimental approach involves establishing 1-D unsteady heat flow inside the sample through step-temperature excitation. A novel self-reference interferometer is utilized to isolate a unique isothermal surface in the time-varying temperature field. The translatory motion of this isothermal surface is recorded using a digital camera to estimate its velocity. From knowledge of the thermo-optic coefficient, the temperature of this isothermal surface is predicted. The performance of the proposed method is evaluated for a quartz sample and compared with values from the literature.
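As a hedged illustration of how an isotherm's velocity relates to thermal diffusivity, consider the textbook exact solution for a semi-infinite solid under step-temperature excitation (one standard route consistent with the description above, not necessarily the paper's exact model):

```latex
% Exact solution for a semi-infinite solid whose surface is stepped to T_s at t = 0:
T(x,t) = T_s + (T_0 - T_s)\,\operatorname{erf}\!\left(\frac{x}{2\sqrt{\alpha t}}\right),
\qquad
\eta \;\equiv\; \operatorname{erf}^{-1}\!\left(\frac{T_{\mathrm{iso}} - T_s}{T_0 - T_s}\right).
% The isotherm T = T_iso sits at a constant similarity variable, so it moves as
x(t) = 2\eta\sqrt{\alpha t},
\qquad
v(t) = \frac{\mathrm{d}x}{\mathrm{d}t} = \eta\sqrt{\frac{\alpha}{t}} = \frac{x(t)}{2t}
\;\;\Longrightarrow\;\;
\alpha = \frac{x(t)\,v(t)}{2\eta^{2}}.
```

Under these assumptions, measuring the position and velocity of a single isotherm suffices to recover the diffusivity once η is fixed by the isotherm temperature.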
Abstract:
This paper studies a pilot-assisted physical layer data fusion technique known as Distributed Co-Phasing (DCP). In this two-phase scheme, the sensors first estimate the channel to the fusion center (FC) using pilots sent by the latter; they then simultaneously transmit their common data by pre-rotating it by the estimated channel phase, thereby achieving physical layer data fusion. First, by analyzing the symmetric mutual information of the system, it is shown that the use of higher order constellations (HOC) can improve the throughput of DCP compared to the binary signaling considered heretofore. Using an HOC in the DCP setting requires the estimation of the composite DCP channel at the FC for data decoding. To this end, two blind algorithms are proposed: 1) the power method, and 2) a modified K-means algorithm. The latter algorithm is shown to be computationally efficient and converges significantly faster than the conventional K-means algorithm. Analytical expressions for the probability of error are derived, and it is found that, even at moderate to low SNRs, the modified K-means algorithm achieves a probability of error comparable to that achievable with a perfect channel estimate at the FC, while requiring no pilot symbols to be transmitted from the sensor nodes. Also, the problem of signal corruption due to imperfect DCP is investigated, and constellation shaping to minimize the probability of signal corruption is proposed and analyzed. The analysis is validated, and the promising performance of DCP for energy-efficient physical layer data fusion is illustrated, using Monte Carlo simulations.
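The sketch below conveys the flavor of blind composite-channel estimation by clustering: received samples are clustered with plain 1-D K-means and the effective gain is recovered by fitting the sorted centroids to the nominal constellation levels. The paper's modified K-means (with its faster convergence) and the complex-valued DCP channel model are not reproduced; the 4-PAM alphabet, gain, and noise level below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def kmeans_1d(x, k, iters=50):
    """Plain 1-D K-means with spread-out (quantile) initialization."""
    centers = np.quantile(x, np.linspace(0.1, 0.9, k))
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return np.sort(centers)

# Toy DCP-like model: after co-phasing, the composite channel is (approximately)
# a positive real gain g; the sensors transmit a common 4-PAM symbol.
levels = np.array([-3.0, -1.0, 1.0, 3.0])
g_true = 0.7
symbols = rng.choice(levels, size=2000)
rx = g_true * symbols + 0.15 * rng.standard_normal(2000)

centers = kmeans_1d(rx, k=4)
# Blind gain estimate: least-squares fit of the sorted centroids to the nominal levels.
g_hat = np.dot(centers, levels) / np.dot(levels, levels)
print(g_hat)  # close to 0.7
```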
Abstract:
Compared with additive noise, multiplicative noise has a much larger effect on a signal. In this paper, we address the problem of suppressing multiplicative noise in one-dimensional signals. To deal with signals that are corrupted by multiplicative noise, we propose a denoising algorithm based on minimization of an unbiased estimator of the mean-square error (MSE), referred to as MURE. We derive an expression for this unbiased estimate of the MSE. The proposed denoising is carried out in the wavelet domain (via soft thresholding), while the MURE is evaluated in the time domain. The parameters of the thresholding function are obtained by minimizing MURE. We show that the MURE-optimal parameters are very close to the optimal parameters obtained with the oracle MSE. Experiments show that the SNR improvement of the proposed denoising algorithm is competitive with a state-of-the-art method.
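The sketch below shows the overall structure of such a scheme: a signal corrupted by multiplicative noise is soft-thresholded in the wavelet domain and the threshold is chosen by a parameter sweep. The paper's MURE criterion (which needs no ground truth) is not reproduced here; purely to illustrate the sweep structure, the candidates below are scored with the oracle MSE, and the test signal, noise level, and wavelet are assumptions.

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(3)

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def denoise(y, threshold, wavelet="db4", level=4):
    """Wavelet-domain soft thresholding of the detail coefficients."""
    coeffs = pywt.wavedec(y, wavelet, level=level)
    coeffs = [coeffs[0]] + [soft(c, threshold) for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(y)]

# Toy multiplicative-noise model: y = x * n with n ~ N(1, sigma^2).
x = np.sin(2 * np.pi * np.linspace(0, 4, 1024)) + 2.0
y = x * (1.0 + 0.2 * rng.standard_normal(x.size))

# Parameter sweep; the paper replaces the oracle MSE below with MURE so that
# no ground truth is needed.
thresholds = np.linspace(0.0, 1.0, 51)
mses = [np.mean((denoise(y, t) - x) ** 2) for t in thresholds]
print("best threshold:", thresholds[int(np.argmin(mses))])
```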
Abstract:
We propose optimal bilateral filtering techniques for Gaussian noise suppression in images. To achieve maximum denoising performance via optimal filter parameter selection, we adopt Stein's unbiased risk estimate (SURE), an unbiased estimate of the mean-squared error (MSE). Unlike the MSE, SURE is independent of the ground truth and can be used in practical scenarios where the ground truth is unavailable. In our recent work, we derived SURE expressions in the context of the bilateral filter and proposed the SURE-optimal bilateral filter (SOBF), whose optimal parameters are selected using the SURE criterion. To further improve the denoising performance of SOBF, we propose variants of SOBF, namely, the SURE-optimal multiresolution bilateral filter (SMBF), which involves optimal bilateral filtering in a wavelet framework, and the SURE-optimal patch-based bilateral filter (SPBF), where the bilateral filter parameters are optimized on small image patches. Using SURE guarantees automated parameter selection. The multiresolution and localized denoising in SMBF and SPBF, respectively, yield superior denoising performance when compared with the globally optimal SOBF. Experimental validations and comparisons show that the proposed denoisers perform on par with some state-of-the-art denoising techniques.
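For intuition, the sketch below pairs a small 1-D bilateral filter with Monte-Carlo SURE (a divergence estimate obtained by probing the denoiser with a random perturbation) to pick the range parameter without access to the clean signal. The paper works on images and uses closed-form SURE expressions for the bilateral filter; the 1-D filter, the Monte-Carlo divergence estimate, and the parameter grid here are assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(4)

def bilateral_1d(y, sigma_s, sigma_r, half=10):
    """Small 1-D bilateral filter: spatial Gaussian times range (intensity) Gaussian."""
    n = len(y)
    out = np.empty(n)
    offsets = np.arange(-half, half + 1)
    w_s = np.exp(-0.5 * (offsets / sigma_s) ** 2)
    for i in range(n):
        idx = np.clip(i + offsets, 0, n - 1)
        w_r = np.exp(-0.5 * ((y[idx] - y[i]) / sigma_r) ** 2)
        w = w_s * w_r
        out[i] = np.sum(w * y[idx]) / np.sum(w)
    return out

def mc_sure(y, denoiser, sigma, eps=1e-3):
    """Monte-Carlo SURE: unbiased MSE estimate that needs no ground truth."""
    n = len(y)
    b = rng.standard_normal(n)
    f = denoiser(y)
    div = np.dot(b, denoiser(y + eps * b) - f) / eps      # divergence estimate
    return np.mean((f - y) ** 2) - sigma ** 2 + 2.0 * sigma ** 2 * div / n

# Toy piecewise-constant signal with additive Gaussian noise of known sigma.
x = np.repeat([0.0, 1.0, 0.3, 0.8], 256)
sigma = 0.1
y = x + sigma * rng.standard_normal(x.size)

# Sweep the range parameter and keep the SURE-optimal value.
scores = [(mc_sure(y, lambda z, s=s: bilateral_1d(z, 3.0, s), sigma), s)
          for s in [0.05, 0.1, 0.2, 0.4, 0.8]]
print("SURE-optimal range sigma:", min(scores)[1])
```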
Abstract:
A reliable and efficient a posteriori error estimator is derived for a class of discontinuous Galerkin (DG) methods for the Signorini problem. A common property shared by many DG methods leads to a unified error analysis with the help of a constraint preserving enriching map. The error estimator of DG methods is comparable with the error estimator of the conforming methods. Numerical experiments illustrate the performance of the error estimator.
Abstract:
We propose a multiple initialization based spectral peak tracking (MISPT) technique for heart rate monitoring from the photoplethysmography (PPG) signal. MISPT is applied to the PPG signal after removing the motion artifact using an adaptive noise cancellation filter. MISPT yields several estimates of the heart rate trajectory from the spectrogram of the denoised PPG signal, which are finally combined using a novel measure called trajectory strength. Multiple initializations help in correcting erroneous heart rate trajectories, unlike typical SPT, which uses only a single initialization. Experiments on PPG data from 12 subjects recorded during intensive physical exercise show that MISPT-based heart rate monitoring indeed yields a better heart rate estimate than SPT with a single initialization. On the 12 datasets, MISPT results in an average absolute error of 1.11 BPM, which is lower than the 1.28 BPM obtained by a state-of-the-art online heart rate monitoring algorithm.
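A minimal sketch of the multiple-initialization idea: a greedy peak tracker is launched from several starting frequencies on the spectrogram, and the trajectory with the largest cumulative peak strength is kept. The paper's trajectory-strength measure, adaptive-noise-cancellation front end, and parameter values are not reproduced; the window lengths, search width, and toy signal below are assumptions.

```python
import numpy as np
from scipy.signal import spectrogram

rng = np.random.default_rng(6)

def track_from_init(S, freqs, f0, max_jump=0.5):
    """Greedy tracker: in each frame pick the strongest bin within +/- max_jump Hz
    of the previous estimate; accumulate the peak magnitudes as a crude score."""
    traj, strength, f_prev = [], 0.0, f0
    for frame in S.T:                        # S is (freq bins x time frames)
        mask = np.abs(freqs - f_prev) <= max_jump
        k = np.argmax(np.where(mask, frame, -np.inf))
        f_prev = freqs[k]
        traj.append(f_prev)
        strength += frame[k]
    return np.array(traj), strength

def mispt_sketch(x, fs, inits_hz):
    """Run the tracker from several initial frequencies and keep the best trajectory."""
    freqs, _, S = spectrogram(x, fs=fs, nperseg=int(8 * fs), noverlap=int(6 * fs))
    trajectories = [track_from_init(S, freqs, f0) for f0 in inits_hz]
    best_traj, _ = max(trajectories, key=lambda r: r[1])
    return best_traj * 60.0                  # Hz -> beats per minute

# Toy usage: a 90 BPM (1.5 Hz) sinusoid-like "PPG" sampled at 125 Hz for 60 s.
fs = 125.0
t = np.arange(0, 60, 1 / fs)
ppg = np.sin(2 * np.pi * 1.5 * t) + 0.3 * rng.standard_normal(t.size)
print(mispt_sketch(ppg, fs, inits_hz=[1.0, 1.5, 2.0]).round(1))
```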