985 results for Error estimate.
Abstract:
In this paper, we propose a multiple-input multiple-output (MIMO) receiver algorithm that exploits the channel hardening that occurs in large MIMO channels. Channel hardening refers to the phenomenon where the off-diagonal terms of the H^T H matrix become increasingly weaker compared to the diagonal terms as the size of the channel gain matrix H increases. Specifically, we propose a message passing detection (MPD) algorithm which works with the real-valued matched filtered received vector (whose signal term becomes H^T H x, where x is the transmitted vector), and uses a Gaussian approximation on the off-diagonal terms of the H^T H matrix. We also propose a simple estimation scheme which directly obtains an estimate of H^T H (instead of an estimate of H), which is used as an effective channel estimate in the MPD algorithm. We refer to this receiver as the channel hardening-exploiting message passing (CHEMP) receiver. The proposed CHEMP receiver achieves very good performance in large-scale MIMO systems (e.g., in systems with 16 to 128 uplink users and 128 base station antennas). For the considered large MIMO settings, the complexity of the proposed MPD algorithm is almost the same as or less than that of minimum mean square error (MMSE) detection, because the MPD algorithm does not need a matrix inversion. It also achieves significantly better performance than MMSE detection and other message passing detection algorithms that use an MMSE estimate of H. Further, we design optimized irregular low density parity check (LDPC) codes specific to the considered large MIMO channel and the CHEMP receiver through EXIT chart matching. The LDPC codes thus obtained achieve improved coded bit error rate performance compared to off-the-shelf irregular LDPC codes.
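To make the channel-hardening effect and the matched-filter front end concrete, here is a minimal NumPy sketch (an illustration under simplified i.i.d. real Gaussian channel assumptions, not the authors' CHEMP implementation): as the number of base station antennas N grows, J = H^T H concentrates around its diagonal, which is what lets the MPD treat the off-diagonal contributions as Gaussian.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 16                                # uplink users
for N in (32, 128, 512):              # base station antennas
    H = rng.standard_normal((N, K)) / np.sqrt(N)
    J = H.T @ H                       # effective channel H^T H
    off = J - np.diag(np.diag(J))
    # diagonal stays near 1 while off-diagonal terms shrink as N grows
    print(N, np.abs(np.diag(J)).mean().round(3), np.abs(off).mean().round(3))

# Matched-filter front end: for y = H x + n, z = H^T y has signal term
# (H^T H) x, which is the quantity the MPD algorithm works with.
x = rng.choice([-1.0, 1.0], size=K)   # BPSK symbols, for illustration
y = H @ x + 0.05 * rng.standard_normal(N)
z = H.T @ y                           # input to the detector
```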
Abstract:
Information available in frequency response data is equivalently available in the time domain as the response to an impulse excitation. The idea of pursuing this equivalence to estimate series capacitance is linked to the well-known fact that, under impulse excitation, the line/neutral current in a transformer has three distinct components, of which the initial capacitive component is the first to manifest, followed by the oscillatory and inductive components. Of these, the capacitive component is temporally well separated from the rest, a crucial feature permitting its direct access and analysis. Further, the winding initially behaves as a purely capacitive network, so the initial component must originate from only the (series and shunt) capacitances. With this logic, it should therefore be possible to estimate series capacitance just by measuring the initial capacitive component of line current and the total shunt capacitance. The principle of the method and details of its implementation on two actual isolated transformer windings (uniformly wound) are presented. For implementation, a low-voltage recurrent surge generator, a current probe, and a digital oscilloscope are all that is needed. The method is simple, requires no programming, and needs minimal user intervention, thus paving the way for its widespread use.
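A hedged sketch of the estimation logic (a simplification; the paper's exact relations may differ): classical uniform-winding theory gives an effective terminal capacitance C_t of roughly sqrt(Cs*Cg) when the distribution constant sqrt(Cg/Cs) is large, and C_t itself follows from the charge delivered by the initial capacitive current. The function name and the integration window below are illustrative assumptions.

```python
import numpy as np

def series_capacitance(t, i_line, v_step, cg):
    """Illustrative estimate of series capacitance Cs (all SI units).

    t, i_line : time axis and line current restricted to the initial,
                purely capacitive interval (isolated by the user)
    v_step    : amplitude of the applied recurrent-surge step
    cg        : total shunt (ground) capacitance, measured separately
    """
    q = np.trapz(i_line, t)   # charge delivered while the winding is capacitive
    c_t = q / v_step          # effective terminal capacitance
    return c_t**2 / cg        # invert C_t ~ sqrt(Cs * Cg) for Cs
```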
Abstract:
Buckling of nanotubes has been studied using many methods, such as molecular dynamics (MD), molecular mechanics, and continuum-based shell theories. In MD, the motion of the individual atoms is tracked under applied temperature and pressure, ensuring a reliable estimate of the material response. The response thus simulated varies for individual nanotubes and is only as accurate as the force field used to model the atomic interactions. On the other hand, there exists a rich literature on the understanding of continuum mechanics-based shell theories. Based on observations of the behavior of nanotubes, there have been a number of shell theory-based approaches to study the buckling of nanotubes. Although some of these methods yield a reasonable estimate of the buckling stress, investigations and comparisons of buckled mode shapes obtained from continuum analysis and MD are sparse. Previous studies show that the direct application of shell theories to study nanotube buckling often leads to erroneous results. The present study reveals that a major source of this error can be attributed to the departure of the shape of the nanotube from a perfect cylindrical shell. Analogous to shell buckling at the macro-scale, in this work the nanotube is modeled as a thin shell with an initial imperfection. Then, a nonlinear buckling analysis is carried out using the Riks method. It is observed that the proposed approach yields significantly improved estimates of the buckling stress and mode shapes. It is also shown that the present method can account for the variation of buckling stress as a function of temperature. Hence, this can prove to be a robust method for continuum analysis of nanosystems that also accounts for the effect of temperature variation.
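As a reference point for the continuum analysis, a short sketch of the classical axial buckling stress of a perfect thin cylindrical shell, sigma_cr = E*h / (R*sqrt(3*(1 - nu^2))); the knockdown factor below is a hypothetical placeholder for the imperfection effect, which the paper instead captures through a nonlinear Riks analysis.

```python
import math

def classical_buckling_stress(E, h, R, nu=0.3):
    """Axial buckling stress of a perfect thin cylinder (classical result)."""
    return E * h / (R * math.sqrt(3.0 * (1.0 - nu**2)))

# Commonly assumed (and debated) continuum parameters for a nanotube wall:
E, h, R = 1.0e12, 0.34e-9, 0.68e-9        # Pa, m, m -- assumed values
sigma_perfect = classical_buckling_stress(E, h, R)
sigma_knocked_down = 0.5 * sigma_perfect  # hypothetical imperfection knockdown
print(sigma_perfect, sigma_knocked_down)
```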
Abstract:
In this paper, we consider an intrusion detection application for Wireless Sensor Networks. We study the problem of scheduling the sleep times of the individual sensors, where the objective is to maximize the network lifetime while keeping the tracking error to a minimum. We formulate this problem as a partially-observable Markov decision process (POMDP) with continuous state-action spaces, in a manner similar to Fuemmeler and Veeravalli (IEEE Trans Signal Process 56(5), 2091-2101, 2008). However, unlike their formulation, we consider infinite horizon discounted and average cost objectives as performance criteria. For each criterion, we propose a convergent on-policy Q-learning algorithm that operates on two timescales, while employing function approximation. Feature-based representations and function approximation are necessary to handle the curse of dimensionality associated with the underlying POMDP. Our proposed algorithm incorporates a policy gradient update using a one-simulation simultaneous perturbation stochastic approximation (SPSA) estimate on the faster timescale, while the Q-value parameter (arising from a linear function approximation architecture for the Q-values) is updated in an on-policy temporal difference algorithm-like fashion on the slower timescale. The feature selection scheme employed in each of our algorithms manages the energy and tracking components in a manner that assists the search for the optimal sleep-scheduling policy. For the sake of comparison, in both discounted and average settings, we also develop a function approximation analogue of the Q-learning algorithm. This algorithm, unlike the two-timescale variant, does not possess theoretical convergence guarantees. Finally, we also adapt our algorithms to include a stochastic iterative estimation scheme for the intruder's mobility model, which is useful in settings where the latter is not known. Our simulation results on a synthetic 2-dimensional network setting suggest that our algorithms result in better tracking accuracy at the cost of only a few additional sensors, in comparison to a recent prior work.
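A minimal sketch of the one-simulation SPSA gradient estimate used on the faster timescale (the full two-timescale scheme, including the slower TD-style Q-value update, is not reproduced here; cost_fn standing in for the cost of one simulated trajectory is an assumption for illustration):

```python
import numpy as np

def spsa_gradient(cost_fn, theta, delta, rng):
    """One-simulation SPSA: g_i ~ J(theta + delta*D) / (delta * D_i)."""
    D = rng.choice([-1.0, 1.0], size=theta.shape)  # Rademacher perturbation
    return cost_fn(theta + delta * D) / (delta * D)

# Faster-timescale policy-parameter update with a decaying step size:
rng = np.random.default_rng(0)
theta = np.zeros(4)
for k in range(1, 1001):
    g = spsa_gradient(lambda th: float(np.sum(th**2)), theta, delta=0.1, rng=rng)
    theta -= (1.0 / k) * g              # descend the estimated gradient
```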
Abstract:
Time-varying linear prediction has been studied in the context of speech signals, in which the auto-regressive (AR) coefficients of the system function are modeled as a linear combination of a set of known bases. Traditionally, least squares minimization is used for the estimation of the model parameters of the system. Motivated by the sparse nature of the excitation signal for voiced sounds, we explore time-varying linear prediction modeling of speech signals using sparsity constraints. Parameter estimation is posed as an ℓ0-norm minimization problem, and the re-weighted ℓ1-norm minimization technique is used to estimate the model parameters. We show that for sparsely excited time-varying systems, this formulation models the underlying system function better than the least squares error minimization approach. Evaluation with synthetic and real speech examples shows that the estimated model parameters track the formant trajectories more closely than the least squares approach.
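A sketch of the re-weighted ℓ1 scheme applied to the prediction residual (following the generic Candes-Wakin-Boyd re-weighting; the paper's exact basis construction and stopping rule are not specified here, and cvxpy is used only as a convenient ℓ1 solver). Phi collects delayed signal samples multiplied by the known bases, s is the speech frame, and the residual s - Phi @ b plays the role of the sparse excitation.

```python
import numpy as np
import cvxpy as cp

def reweighted_l1_lp(Phi, s, n_iters=5, eps=1e-3):
    """Estimate time-varying AR expansion coefficients b with a sparse residual."""
    w = np.ones(len(s))
    b_val = None
    for _ in range(n_iters):
        b = cp.Variable(Phi.shape[1])
        resid = s - Phi @ b
        cp.Problem(cp.Minimize(cp.norm1(cp.multiply(w, resid)))).solve()
        b_val = b.value
        e = s - Phi @ b_val
        w = 1.0 / (np.abs(e) + eps)   # re-weight: push small residuals to zero
    return b_val
```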
Abstract:
We address the problem of parameter estimation of an ellipse from a limited number of samples. We develop a new approach to the ellipse fitting problem by showing that the x and y coordinate functions of an ellipse are finite-rate-of-innovation (FRI) signals. Uniform samples of the x and y coordinate functions of the ellipse are modeled as a sum of weighted complex exponentials, for which we propose an efficient annihilating filter technique to estimate the ellipse parameters from the samples. The FRI framework allows for estimating the ellipse parameters reliably from partial or incomplete measurements, even in the presence of noise. The efficiency and robustness of the proposed method are compared with the state-of-the-art direct method. The experimental results show that the estimated parameters have lower bias than the direct method, and the estimation error is reduced by 5-10 dB relative to the direct method.
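The annihilating-filter step, in its generic FRI form (a sketch, not the paper's full ellipse pipeline): uniform samples y[n] = sum_k c_k u_k^n of K exponentials are annihilated by a length-(K+1) filter whose roots are the u_k; for an ellipse, each coordinate function is a DC term plus a conjugate pair, so K = 3.

```python
import numpy as np

def annihilating_filter_roots(y, K):
    """Recover the exponentials u_k from uniform samples of their weighted sum."""
    N = len(y)
    # Rows [y[n], y[n-1], ..., y[n-K]] for n = K..N-1; T @ h = 0 for the filter h.
    T = np.array([y[n - np.arange(K + 1)] for n in range(K, N)])
    _, _, Vh = np.linalg.svd(T)       # null vector via SVD, robust to noise
    h = Vh[-1].conj()
    return np.roots(h)                # roots of the filter are the u_k

# Ellipse-style check: DC plus a conjugate pair of exponentials (K = 3).
w, n = 2 * np.pi / 32, np.arange(32)
y = 2.0 + 0.7 * np.exp(1j * w * n) + 0.7 * np.exp(-1j * w * n)
print(annihilating_filter_roots(y, 3))   # ~ {1, e^{+jw}, e^{-jw}}
```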
Abstract:
Models of river flow time series are essential for efficient management of a river basin. They help policy makers develop efficient water utilization strategies to maximize the utility of a scarce water resource. Time series analysis has been used extensively for modeling river flow data. The use of machine learning techniques, such as support-vector regression and neural network models, is gaining increasing popularity. In this paper we compare the performance of these techniques by applying them to long-term time-series data of the inflows into the Krishnaraja Sagar reservoir (KRS) from three tributaries of the river Cauvery. In this study, flow data over a period of 30 years from three different observation points established in the upper Cauvery river sub-basin are analyzed to estimate their contribution to KRS. Specifically, the ANN model uses a multi-layer feed-forward network trained with a back-propagation algorithm, and support vector regression with an epsilon-insensitive loss function is used. Auto-regressive moving average models are also applied to the same data. The performance of the different techniques is compared using metrics such as root mean squared error (RMSE), correlation, normalized root mean squared error (NRMSE) and Nash-Sutcliffe Efficiency (NSE).
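For reference, the comparison metrics named above, in the forms commonly used for streamflow modeling (the paper does not state its NRMSE normalization, so range-normalization is an assumption):

```python
import numpy as np

def rmse(obs, sim):
    return np.sqrt(np.mean((obs - sim) ** 2))

def nrmse(obs, sim):
    return rmse(obs, sim) / (obs.max() - obs.min())   # range-normalized

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 is perfect; 0 means no better than the mean."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
```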
Abstract:
Surface energy processes play an essential role in urban weather, climate and hydrosphere cycles, as well as in urban heat redistribution. This research analyzes the potential of Landsat and MODIS data for retrieving biophysical parameters to estimate land surface temperature and diurnal heat fluxes in the summer and winter seasons of 2000 and 2010, and for understanding their effect on anthropogenic heat disturbance over Delhi and the surrounding region. Results show that during 2000-2010, settlement and industrial areas increased from 5.66 to 11.74% and from 4.92 to 11.87%, respectively, which in turn has a direct effect on land surface temperature (LST) and heat fluxes, including anthropogenic heat flux. Based on the energy balance model for the land surface, a method to estimate the increase in anthropogenic heat flux (Has) is proposed. The settlement and industrial areas consume more energy and show high values of Has in all seasons. Comparison of satellite-derived LST with field-measured values shows that Landsat estimates agree more closely, with an error within 2 degrees C, than MODIS estimates, with an error of 3 degrees C. Between 2000 and 2010, the average change in surface temperature over settlement and industrial areas across both seasons is 1.4 degrees C using Landsat and 3.7 degrees C using MODIS. The seasonal average change in Has estimated using Landsat and MODIS increased by around 38 W/m^2 and 62 W/m^2, respectively, with the larger change observed over settlements and concrete structures. The study reveals that the dynamic range of Has values has increased over the 10-year period due to the strong anthropogenic influence over the area. The study also shows that anthropogenic heat flux is an indicator of the strength of the urban heat island effect and can be used to quantify its magnitude. (C) 2013 Elsevier Ltd. All rights reserved.
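A hedged sketch of the energy-balance residual idea behind the Has estimate (the abstract does not give the paper's exact formulation; the common residual form is shown): with net radiation Rn, ground heat flux G, and sensible and latent heat fluxes H and LE retrievable from the satellite data, an anthropogenic source term closes the balance Rn + Has = G + H + LE.

```python
def anthropogenic_heat_flux(Rn, G, H, LE):
    """Has as the surface energy-balance residual; all fluxes in W/m^2."""
    return G + H + LE - Rn

# Illustrative midday values (assumed numbers, not from the paper):
print(anthropogenic_heat_flux(Rn=450.0, G=80.0, H=300.0, LE=120.0))  # 50 W/m^2
```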
Abstract:
Matroidal networks were introduced by Dougherty et al. and have been well studied in the recent past. It was shown that a network has a scalar linear network coding solution if and only if it is a matroidal network associated with a representable matroid. A particularly interesting feature of this development is the ability to construct (scalar and vector) linearly solvable networks using certain classes of matroids. Furthermore, it was shown through the connection between network coding and matroid theory that linear network coding is not always sufficient for general network coding scenarios. The current work attempts to establish a connection between matroid theory and network-error correcting and detecting codes. In a similar vein to the theory connecting matroids and network coding, we abstract the essential aspects of linear network-error detecting codes to arrive at the definition of a matroidal error detecting network (and similarly, a matroidal error correcting network abstracted from network-error correcting codes). An acyclic network (with arbitrary sink demands) is then shown to possess a scalar linear error detecting (correcting) network code if and only if it is a matroidal error detecting (correcting) network associated with a representable matroid. Therefore, constructing such network-error correcting and detecting codes implies the construction of certain representable matroids that satisfy some special conditions, and vice versa. We then present algorithms that enable the construction of matroidal error detecting and correcting networks with a specified capability of network-error correction. Using these construction algorithms, a large class of hitherto unknown scalar linearly solvable networks with multisource, multicast, and multiple-unicast network-error correcting codes is made available for theoretical use and practical implementation, with parameters such as the number of information symbols, number of sinks, number of coding nodes, and error correcting capability being arbitrary, limited only by the available computing power (for the execution of the algorithms). The complexity of the construction of these networks is shown to be comparable with the complexity of existing algorithms that design multicast scalar linear network-error correcting codes. Finally, we also show that linear network coding is not sufficient for the general network-error correction (detection) problem with arbitrary demands. In particular, for the same number of network errors, we show a network for which there is a nonlinear network-error detecting code satisfying the demands at the sinks, whereas there are no linear network-error detecting codes that do the same.
Abstract:
This work considers the identification of the available whitespace, i.e., the regions that do not contain any existing transmitter, within a given geographical area. To this end, n sensors are deployed at random locations within the area. These sensors detect the presence of a transmitter within their radio range r(s) using a binary sensing model, and their individual decisions are combined to estimate the available whitespace. The limiting behavior of the recovered whitespace as a function of n and r(s) is analyzed. It is shown that both the fraction of the available whitespace that the nodes fail to recover and their radio range optimally scale as log(n)/n as n gets large. The problem of minimizing the sum absolute error in transmitter localization is also analyzed, and the corresponding optimal scaling of the radio range and the necessary minimum transmitter separation is determined.
Abstract:
A new class of exact-repair regenerating codes is constructed by stitching together shorter erasure correction codes, where the stitching pattern can be viewed as a block design. The proposed codes have the help-by-transfer property, where the helper nodes simply transfer part of the stored data directly, without performing any computation. This embedded error correction structure makes the decoding process straightforward, and in some cases the complexity is very low. We show that this construction is able to achieve performance better than space-sharing between the minimum storage regenerating codes and the minimum repair-bandwidth regenerating codes, and it is the first class of codes to achieve this performance. In fact, it is shown that the proposed construction can achieve a nontrivial point on the optimal functional-repair tradeoff, and it is asymptotically optimal at high rate, i.e., it asymptotically approaches the minimum storage and the minimum repair-bandwidth simultaneously.
Abstract:
Regionalization of extreme rainfall is useful for various applications in hydro-meteorology, yet there is a dearth of regionalization studies on extreme rainfall in India. In this context, a set of 25 regions that are homogeneous in 1-, 2-, 3-, 4- and 5-day extreme rainfall is delineated based on a seasonality measure of extreme rainfall and location indicators (latitude, longitude and altitude) using global fuzzy c-means (GFCM) cluster analysis. The regions are validated for homogeneity in the L-moment framework. One application of the regions is in arriving at quantile estimates of extreme rainfall at sparsely gauged/ungauged locations using options such as regional frequency analysis (RFA). The RFA involves the use of rainfall-related information from gauged sites in a region as the basis to estimate quantiles of extreme rainfall for target locations that resemble the region in terms of rainfall characteristics. A procedure for RFA based on GFCM-delineated regions is presented and its effectiveness is evaluated by leave-one-out cross validation. Error in quantile estimates for ungauged sites is compared with that resulting from the use of the region-of-influence (ROI) approach, which forms site-specific regions exclusively for quantile estimation. Results indicate that errors in quantile estimates based on GFCM regions and ROI are fairly close, and neither is consistent in yielding the least error over all the sites. The cluster analysis approach was effective in reducing the number of regions to be delineated for RFA.
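A minimal fuzzy c-means sketch (the standard Bezdek iteration; the paper's global variant and its exact feature vector of seasonality measures and location indicators are not reproduced). Rows of X describe sites; U[i, j] is site j's membership in region i.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iters=100, tol=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                      # memberships sum to 1 per site
    for _ in range(n_iters):
        W = U ** m
        centers = (W @ X) / W.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        U_new = d ** (-2.0 / (m - 1.0))
        U_new /= U_new.sum(axis=0)          # standard FCM membership update
        if np.abs(U_new - U).max() < tol:
            return centers, U_new
        U = U_new
    return centers, U
```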
Abstract:
Regional frequency analysis is widely used for estimating quantiles of hydrological extreme events at sparsely gauged/ungauged target sites in river basins. It involves identification of a region (group of watersheds) resembling the watershed of the target site, and use of information pooled from the region to estimate the quantile for the target site. In the analysis, the watershed of the target site is assumed to completely resemble the watersheds in the identified region in terms of the mechanism underlying generation of extreme events. In reality, it is rare to find watersheds that completely resemble each other. A fuzzy clustering approach can account for partial resemblance of watersheds and yield region(s) for the target site. Formation of regions and quantile estimation require discerning information from the fuzzy-membership matrix obtained with the approach. Practitioners often defuzzify the matrix to form disjoint clusters (regions) and use them as the basis for quantile estimation. The defuzzification approach (DFA) results in loss of the information discerned on partial resemblance of watersheds. The lost information cannot be utilized in quantile estimation, owing to which the estimates could have significant error. To avert this loss of information, a threshold strategy (TS) was considered in some prior studies. In this study, it is analytically shown that the strategy results in under-prediction of quantiles. To address this, a mathematical approach is proposed, and its effectiveness in estimating flood quantiles relative to DFA and TS is demonstrated through Monte-Carlo simulation experiments and a case study on the Mid-Atlantic water resources region, USA. (C) 2015 Elsevier B.V. All rights reserved.
Abstract:
The high species richness of tropical forests has long been recognized, yet there remains substantial uncertainty regarding the actual number of tropical tree species. Using a pantropical tree inventory database from closed canopy forests, consisting of 657,630 trees belonging to 11,371 species, we use a fitted value of Fisher's alpha and an approximate pantropical stem total to estimate that the minimum number of tropical forest tree species falls between ~40,000 and ~53,000, i.e., at the high end of previous estimates. Contrary to common assumption, the Indo-Pacific region was found to be as species-rich as the Neotropics, with both regions having a minimum of ~19,000-25,000 tree species. Continental Africa is relatively depauperate, with a minimum of ~4,500-6,000 tree species. Very few species are shared among the African, American, and Indo-Pacific regions. We provide a methodological framework for estimating species richness in trees that may help refine species richness estimates of tree-dependent taxa.
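A sketch of the Fisher log-series extrapolation underlying such estimates: the log-series predicts S = alpha * ln(1 + N/alpha) species among N stems, so alpha can be fitted to the inventory and then evaluated at a pantropical stem total. The inventory figures below are from the abstract; the stem total is a placeholder assumption, and the paper fits alpha per region rather than globally as done here.

```python
import numpy as np
from scipy.optimize import brentq

def fisher_alpha(S_obs, N_obs):
    """Solve S = alpha * ln(1 + N/alpha) for alpha."""
    return brentq(lambda a: a * np.log1p(N_obs / a) - S_obs, 1e-3, 1e6)

alpha = fisher_alpha(S_obs=11_371, N_obs=657_630)   # inventory from the abstract
N_total = 5e10                    # assumed pantropical stem total (placeholder)
S_est = alpha * np.log1p(N_total / alpha)
print(alpha, S_est)
```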
Abstract:
We revisit the a posteriori error analysis of discontinuous Galerkin methods for the obstacle problem derived in [25]. Under a mild assumption on the trace of the obstacle, we derive a reliable a posteriori error estimator which does not involve min/max functions. A key ingredient in this approach is an auxiliary problem with a discrete obstacle. Applications to various discontinuous Galerkin finite element methods are presented. Numerical experiments show that the new estimator obtained in this article performs better.