94 results for Robust estimates

at Indian Institute of Science - Bangalore - India


Relevance: 70.00%

Abstract:

We consider the zero-crossing rate (ZCR) of a Gaussian process and establish a property relating the lagged ZCR (LZCR) to the corresponding normalized autocorrelation function. This is a generalization of Kedem's result for the lag-one case. For the specific case of a sinusoid in white Gaussian noise, we use the higher-order property between lagged ZCR and higher-lag autocorrelation to develop an iterative higher-order autoregressive filtering scheme, which stabilizes the ZCR and consequently provides robust estimates of the lagged autocorrelation. Simulation results show that the autocorrelation estimates converge in about 20 to 40 iterations even at low signal-to-noise ratios.
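For illustration, a minimal sketch of the lag-k relation (the paper's iterative autoregressive filtering scheme is not reproduced; the test signal and noise level below are assumptions): for a stationary Gaussian sequence, the probability of a sign change between x[n] and x[n+k] equals arccos(rho(k))/pi, so the lagged ZCR yields the autocorrelation estimate rho_hat(k) = cos(pi * LZCR_k), with Kedem's result as the k = 1 case.

```python
import numpy as np

# Estimate the lag-k autocorrelation from the lagged zero-crossing rate.
def lagged_zcr(x, k):
    s = np.sign(x - np.mean(x))
    return np.mean(s[:-k] != s[k:])        # fraction of sign changes at lag k

def autocorr_from_zcr(x, k):
    return np.cos(np.pi * lagged_zcr(x, k))

rng = np.random.default_rng(0)
n = np.arange(100_000)
x = np.cos(0.3 * n) + rng.standard_normal(n.size)   # sinusoid in WGN (assumed)
for k in (1, 2, 3):
    print(k, autocorr_from_zcr(x, k), np.corrcoef(x[:-k], x[k:])[0, 1])
```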

Relevance: 60.00%

Abstract:

Let a and s denote the interarrival times and service times in a GI/GI/1 queue. Let a(n), s(n) be r.v.s whose distributions are the distributions of a and s estimated from i.i.d. samples of a and s of size n. Let w be a r.v. with the stationary distribution π of the waiting times of the queue with input (a, s). We consider the problem of estimating E[w^α], α > 0, and π via simulations when (a(n), s(n)) are used as input. Conditions for the accuracy of the asymptotic estimate, continuity of the asymptotic variance and uniformity in the rate of convergence to the estimate are obtained. We also obtain rates of convergence for sample moments, the empirical process and the quantile process for regenerative processes. Robust estimates are also obtained when an outlier-contaminated sample of a and s is provided. In the process we obtain consistency, continuity and asymptotic normality of M-estimators for stationary sequences. Some robustness results for Markov processes are included.
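For concreteness, a minimal simulation sketch of the quantity being estimated (the arrival and service distributions below are illustrative assumptions, not part of the paper): waiting times follow the Lindley recursion, and E[w^α] is estimated from the empirical stationary distribution.

```python
import numpy as np

# GI/GI/1 waiting times via the Lindley recursion
# w_{i+1} = max(0, w_i + s_i - a_{i+1}); estimate E[w^alpha] after burn-in.
def waiting_time_moment(a, s, alpha=1.0, burn_in=10_000):
    w = np.zeros(len(s))
    for i in range(len(s) - 1):
        w[i + 1] = max(0.0, w[i] + s[i] - a[i + 1])
    return np.mean(w[burn_in:] ** alpha)

rng = np.random.default_rng(1)
n = 200_000
a = rng.exponential(1.0, n)   # interarrival times (assumed exponential)
s = rng.exponential(0.8, n)   # service times (assumed exponential; load 0.8)
# M/M/1 sanity check: E[w] = rho / (mu - lambda) = 0.8 / 0.25 = 3.2
print(waiting_time_moment(a, s, alpha=1.0))
```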

Relevance: 30.00%

Abstract:

Estimates of predicate selectivities by database query optimizers often differ significantly from those actually encountered during query execution, leading to poor plan choices and inflated response times. In this paper, we investigate mitigating this problem by replacing selectivity-error-sensitive plan choices with alternative plans that provide robust performance. Our approach is based on the recent observation that even the complex and dense "plan diagrams" associated with industrial-strength optimizers can be efficiently reduced to "anorexic" equivalents featuring only a few plans, without materially impacting query processing quality. Extensive experimentation with a rich set of TPC-H and TPC-DS-based query templates in a variety of database environments indicates that plan diagram reduction typically retains plans that are substantially resistant to selectivity errors on the base relations. However, it can sometimes also be severely counter-productive, with the replacements performing much worse. We address this problem through a generalized mathematical characterization of plan cost behavior over the parameter space, which lends itself to efficient criteria for when it is safe to reduce. Our strategies are fully non-invasive and have been implemented in the Picasso optimizer visualization tool.
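A minimal sketch of the reduction criterion described above (a simplification for illustration; the greedy order and names are assumptions, not the Picasso implementation): a plan is dropped if, at every point where it is optimal, some retained plan's estimated cost is within a cost-increase threshold lambda.

```python
# Greedy "anorexic" reduction of a plan diagram. points_of maps each plan
# to the grid points where it is optimal; cost(plan, point) returns the
# optimizer-estimated cost; lam is the allowed cost increase (e.g. 20%).
def reduce_diagram(points_of, cost, lam=0.2):
    retained = set(points_of)
    for p in sorted(points_of, key=lambda q: len(points_of[q])):  # small first
        others = retained - {p}
        if others and all(
            any(cost(q, pt) <= (1 + lam) * cost(p, pt) for q in others)
            for pt in points_of[p]
        ):
            retained.remove(p)                # p is swallowed by the others
    return retained
```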

Relevance: 30.00%

Abstract:

Interaction between the hepatitis C virus (HCV) envelope protein E2 and the host receptor CD81 is essential for HCV entry into target cells. The number of E2-CD81 complexes necessary for HCV entry has remained difficult to estimate experimentally. Using the recently developed cell culture systems that allow persistent HCV infection in vitro, the dependence of HCV entry and kinetics on CD81 expression has been measured. We reasoned that analysis of the latter experiments using a mathematical model of viral kinetics may yield estimates of the number of E2-CD81 complexes necessary for HCV entry. Here, we constructed a mathematical model of HCV viral kinetics in vitro, in which we accounted explicitly for the dependence of HCV entry on CD81 expression. Model predictions of viral kinetics are in quantitative agreement with experimental observations. Specifically, our model predicts triphasic viral kinetics in vitro, where the first phase is characterized by cell proliferation, the second by the infection of susceptible cells and the third by the growth of cells refractory to infection. By fitting model predictions to the above data, we were able to estimate the threshold number of E2-CD81 complexes necessary for HCV entry into human hepatoma-derived cells. We found that depending on the E2-CD81 binding affinity, between 1 and 13 E2-CD81 complexes are necessary for HCV entry. With this estimate, our model captured data from independent experiments that employed different HCV clones and cells with distinct CD81 expression levels, indicating that the estimate is robust. Our study thus quantifies the molecular requirements of HCV entry and suggests guidelines for intervention strategies that target the E2-CD81 interaction. Further, our model presents a framework for quantitative analyses of cell culture studies now extensively employed to investigate HCV infection.
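A minimal sketch of an in vitro kinetics model of the kind described (the functional forms and all parameter values are illustrative assumptions, not the paper's model or fits): proliferating target cells, infection by free virus at a rate beta that would carry the E2-CD81 entry dependence, and virion production and clearance.

```python
import numpy as np
from scipy.integrate import solve_ivp

# T: target cells, I: infected cells, V: free virus.
def hcv_rhs(t, y, r, K, beta, delta, p, c):
    T, I, V = y
    return [r * T * (1 - (T + I) / K) - beta * T * V,   # growth and infection
            beta * T * V - delta * I,                   # infected-cell turnover
            p * I - c * V]                              # virion production/clearance

params = (0.5, 1e6, 1e-7, 0.3, 10.0, 1.0)   # r, K, beta, delta, p, c (assumed)
sol = solve_ivp(hcv_rhs, (0, 30), [1e5, 0.0, 100.0], args=params,
                t_eval=np.linspace(0, 30, 200))
print(sol.y[2, -1])                          # free virus at day 30
```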

Relevance: 30.00%

Abstract:

Chebyshev-inequality-based convex relaxations of Chance-Constrained Programs (CCPs) are shown to be useful for learning classifiers on massive datasets. In particular, an algorithm that integrates efficient clustering procedures and CCP approaches for computing classifiers on large datasets is proposed. The key idea is to identify high-density regions, or clusters, from individual class conditional densities and then use a CCP formulation to learn a classifier on the clusters. The CCP formulation ensures that most of the data points in a cluster are correctly classified by employing a Chebyshev-inequality-based convex relaxation. This relaxation is heavily dependent on second-order statistics. However, this formulation, and in general such relaxations that depend on second-order moments, are susceptible to moment estimation errors. One of the contributions of the paper is to propose several formulations that are robust to such errors. In particular, a generic way of making such formulations robust to moment estimation errors is illustrated using two novel confidence sets. An important contribution is to show that when either of the confidence sets is employed, for the special case of a spherical normal distribution of clusters, the robust variant of the formulation can be posed as a second-order cone program. Empirical results show that the robust formulations achieve accuracies comparable to those obtained with true moments, even when moment estimates are erroneous. Results also illustrate the benefits of employing the proposed methodology for robust classification of large-scale datasets.
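A minimal sketch of the underlying Chebyshev relaxation (variable names and the margin objective are assumptions; the paper's robust variants with confidence sets are not reproduced): requiring a cluster with mean mu and covariance S to be correctly classified with probability at least eta yields the second-order cone constraint y(w'mu + b) >= 1 + kappa * ||S^{1/2} w|| with kappa = sqrt(eta / (1 - eta)).

```python
import numpy as np
import cvxpy as cp

def ccp_classifier(clusters, eta=0.9):
    """clusters: list of (mean, covariance, label) with label in {-1, +1}."""
    d = clusters[0][0].shape[0]
    w, b = cp.Variable(d), cp.Variable()
    kappa = np.sqrt(eta / (1 - eta))        # Chebyshev bound multiplier
    cons = []
    for mu, S, y in clusters:
        S_half = np.linalg.cholesky(S + 1e-9 * np.eye(d))
        cons.append(y * (w @ mu + b) >= 1 + kappa * cp.norm(S_half.T @ w, 2))
    cp.Problem(cp.Minimize(cp.norm(w, 2)), cons).solve()
    return w.value, b.value

clusters = [(np.array([3.0, 3.0]), np.eye(2), +1),
            (np.array([-3.0, -3.0]), np.eye(2), -1)]
print(ccp_classifier(clusters, eta=0.9))
```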

Relevance: 30.00%

Abstract:

This paper presents the design and development of a novel optical vehicle classifier system based on the interruption of laser beams, suitable for use in places with poor transportation infrastructure. The system can estimate the speed, axle count, wheelbase, tire diameter, and lane of motion of a vehicle. The design of the system eliminates the need for careful optical alignment, while the proposed estimation strategies render the estimates insensitive to angular mounting errors and to unevenness of the road. Strategies to estimate vehicular parameters are described, along with the optimization of the geometry of the system to minimize estimation errors due to quantization. The system is subsequently fabricated, and the proposed features of the system are experimentally demonstrated. The relative errors in the estimation of velocity and tire diameter are shown to be within 0.5% and to change by less than 17% for angular mounting errors of up to 30 degrees. In the field, the classifier demonstrates accuracy better than 97.5% and 94%, respectively, in the estimation of the wheelbase and lane of motion, and can classify vehicles with an average accuracy of over 89.5%.
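A minimal sketch of the beam-interruption arithmetic (the beam geometry values are assumptions; the alignment-insensitive strategies are not reproduced): two beams a known distance apart give speed from the inter-beam delay, and the time a wheel occludes a beam mounted at height h gives the chord length L = 2*sqrt(h*(d - h)), from which the tire diameter d follows.

```python
BEAM_SPACING_M = 0.5   # distance between the two beams (assumed)
BEAM_HEIGHT_M = 0.05   # beam height above the road surface (assumed)

def speed(t1, t2):
    """Vehicle speed from interruption times of beams 1 and 2 (m/s)."""
    return BEAM_SPACING_M / (t2 - t1)

def tire_diameter(v, occlusion_time, h=BEAM_HEIGHT_M):
    """Invert L = 2*sqrt(h*(d - h)), where L = v * occlusion_time."""
    L = v * occlusion_time
    return L * L / (4.0 * h) + h

v = speed(0.000, 0.025)              # 0.5 m in 25 ms -> 20 m/s
print(v, tire_diameter(v, 0.0155))   # ~0.53 m diameter
```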

Relevance: 30.00%

Abstract:

A finite difference method for a time-dependent singularly perturbed convection-diffusion-reaction problem involving two small parameters in one space dimension is considered. We use the classical implicit Euler method for time discretization and an upwind scheme on the Shishkin-Bakhvalov mesh for spatial discretization. The method is analysed for convergence and is shown to be uniform with respect to both perturbation parameters. The use of the Shishkin-Bakhvalov mesh gives first-order convergence, unlike the Shishkin mesh, where convergence deteriorates due to the presence of a logarithmic factor. Numerical results are presented to validate the theoretical estimates obtained.
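A minimal sketch of the mesh construction (simplified to one perturbation parameter and a single boundary layer at x = 0; the paper's mesh handles two parameters and is an assumption here): half of the N intervals are graded logarithmically inside the layer, Bakhvalov-style, and the rest are uniform, with the transition at the Shishkin point sigma*eps*ln(N).

```python
import numpy as np

def bakhvalov_shishkin_mesh(N, eps, sigma=2.0):
    i = np.arange(N + 1)
    x = np.empty(N + 1)
    left = i <= N // 2
    t = i[left] / N
    # Bakhvalov-type grading: x(t) = -sigma*eps*ln(1 - 2*(1 - 1/N)*t),
    # which reaches the Shishkin transition point sigma*eps*ln(N) at t = 1/2.
    x[left] = -sigma * eps * np.log(1.0 - 2.0 * (1.0 - 1.0 / N) * t)
    # uniform mesh from the transition point to 1
    x[~left] = x[N // 2] + (i[~left] - N // 2) * (1.0 - x[N // 2]) / (N - N // 2)
    return x

print(bakhvalov_shishkin_mesh(8, 1e-3))
```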

Relevance: 20.00%

Abstract:

We propose a robust method for mosaicing of document images using features derived from connected components. Each connected component is described using the Angular Radial Transform (ART). To ensure geometric consistency during feature matching, the ART coefficients of a connected component are augmented with those of its two nearest neighbors. The proposed method addresses two critical issues often encountered in correspondence matching: (i) the stability of features, and (ii) robustness against false matches due to multiple instances of characters in a document image. The use of connected components guarantees a stable localization across images. The augmented features ensure a successful correspondence matching even in the presence of multiple similar regions within the page. We illustrate the effectiveness of the proposed method on camera-captured document images exhibiting large variations in viewpoint, illumination, and scale.
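A minimal sketch of the neighbor-augmentation step (the ART descriptor itself is abstracted as a precomputed feature vector per connected component; function and variable names are assumptions): each component's feature vector is concatenated with those of its two nearest neighbors before matching.

```python
import numpy as np
from scipy.spatial import cKDTree

def augment_features(centroids, features, k=2):
    """Concatenate each component's features with those of its k nearest
    neighbors (by centroid distance) to enforce geometric consistency."""
    tree = cKDTree(centroids)
    _, idx = tree.query(centroids, k=k + 1)   # idx[:, 0] is the point itself
    return np.hstack([features] + [features[idx[:, j]] for j in range(1, k + 1)])
```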

Relevance: 20.00%

Abstract:

We propose a novel technique for robust voiced/unvoiced segment detection in noisy speech, based on local polynomial regression. The local polynomial model is well suited for voiced segments in speech. The unvoiced segments are noise-like and do not exhibit any smooth structure. This property of smoothness is used for devising a new metric called the variance ratio metric, which, after thresholding, indicates the voiced/unvoiced boundaries with 75% accuracy at 0 dB global signal-to-noise ratio (SNR). A novelty of our algorithm is that it processes the signal continuously, sample by sample rather than frame by frame. Simulation results on the TIMIT speech database (downsampled to 8 kHz) for various SNRs are presented to illustrate the performance of the new algorithm. Results indicate that the algorithm is robust even at high noise levels.
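A minimal sketch of a variance-ratio style metric (the window length, polynomial order, and exact metric definition are assumptions, not the paper's): fit a low-order polynomial to each local window; voiced speech is smooth, so the residual-to-signal variance ratio is small, whereas for noise-like unvoiced segments it approaches 1.

```python
import numpy as np

def variance_ratio(x, win=160, order=4):
    """Sliding-window (sample-by-sample) variance ratio; O(len(x) * win)."""
    t = np.arange(win)
    out = np.empty(len(x) - win)
    for i in range(len(out)):
        seg = x[i:i + win]
        fit = np.polyval(np.polyfit(t, seg, order), t)   # local polynomial fit
        out[i] = np.var(seg - fit) / (np.var(seg) + 1e-12)
    return out   # threshold (e.g. < 0.5) to mark voiced samples
```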

Relevance: 20.00%

Abstract:

The estimation of the frequency of a sinusoidal signal is a well-researched problem. In this work, we propose an initialization scheme for the popular dichotomous search of the periodogram peak algorithm (DSPA), which is used to estimate the frequency of a sinusoid in white Gaussian noise. Our initialization is computationally low-cost and gives the same performance as the DSPA, while reducing the number of iterations needed for the fine search stage. We show that our algorithm remains stable as we reduce the number of iterations in the fine search stage. We also compare the performance of our modification to a previous modification of the DSPA and show that we enhance the performance of the algorithm with our initialization technique.
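For reference, a minimal sketch of the baseline DSPA being initialized (a common textbook form, not the paper's modified initialization): take the FFT peak as the coarse estimate, then repeatedly halve a frequency step, moving toward whichever side has the larger periodogram value.

```python
import numpy as np

def periodogram(x, f):
    n = np.arange(len(x))
    return np.abs(np.sum(x * np.exp(-2j * np.pi * f * n))) ** 2

def dspa(x, iters=20):
    N = len(x)
    k = int(np.argmax(np.abs(np.fft.rfft(x))))   # coarse stage: FFT peak bin
    f, delta = k / N, 1.0 / N
    for _ in range(iters):                       # fine stage: dichotomous search
        delta /= 2.0
        if periodogram(x, f + delta) > periodogram(x, f - delta):
            f += delta
        else:
            f -= delta
    return f

rng = np.random.default_rng(2)
n = np.arange(1024)
x = np.cos(2 * np.pi * 0.1234 * n) + 0.5 * rng.standard_normal(n.size)
print(dspa(x))   # close to the true frequency 0.1234
```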

Relevance: 20.00%

Abstract:

We present robust joint nonlinear transceiver designs for multiuser multiple-input multiple-output (MIMO) downlink in the presence of imperfections in the channel state information at the transmitter (CSIT). The base station (BS) is equipped with multiple transmit antennas, and each user terminal is equipped with one or more receive antennas. The BS employs Tomlinson-Harashima precoding (THP) for interuser interference precancellation at the transmitter. We consider robust transceiver designs that jointly optimize the transmit THP filters and receive filter for two models of CSIT errors. The first model is a stochastic error (SE) model, where the CSIT error is Gaussian-distributed. This model is applicable when the CSIT error is dominated by channel estimation error. In this case, the proposed robust transceiver design seeks to minimize a stochastic function of the sum mean square error (SMSE) under a constraint on the total BS transmit power. We propose an iterative algorithm to solve this problem. The other model we consider is a norm-bounded error (NBE) model, where the CSIT error can be specified by an uncertainty set. This model is applicable when the CSIT error is dominated by quantization errors. In this case, we consider a worst-case design. For this model, we consider robust (i) minimum SMSE, (ii) MSE-constrained, and (iii) MSE-balancing transceiver designs. We propose iterative algorithms to solve these problems, wherein each iteration involves a pair of semidefinite programs (SDPs). Further, we consider an extension of the proposed algorithm to the case with per-antenna power constraints. We evaluate the robustness of the proposed algorithms to imperfections in CSIT through simulation, and show that the proposed robust designs outperform nonrobust designs as well as robust linear transceiver designs reported in the recent literature.
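For concreteness, a minimal sketch of the stochastic-error CSIT model only (the THP filter optimization itself is not reproduced; the Monte Carlo evaluation is an illustration, not the paper's method): the true channel is H = H_hat + E with Gaussian E, and a design metric such as the sum-MSE is averaged over the error distribution.

```python
import numpy as np

def expected_metric(H_hat, sigma_e, metric, trials=1000, seed=0):
    """Average metric(H_hat + E) over complex Gaussian CSIT errors E."""
    rng = np.random.default_rng(seed)
    vals = []
    for _ in range(trials):
        E = sigma_e * (rng.standard_normal(H_hat.shape)
                       + 1j * rng.standard_normal(H_hat.shape)) / np.sqrt(2)
        vals.append(metric(H_hat + E))
    return np.mean(vals)
```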

Relevance: 20.00%

Abstract:

We address a new problem of improving automatic speech recognition performance, given multiple utterances of patterns from the same class. We formulate the problem of jointly decoding K multiple patterns given a single Hidden Markov Model. It is shown that such a solution is possible by aligning the K patterns using the proposed Multi Pattern Dynamic Time Warping algorithm, followed by the Constrained Multi Pattern Viterbi Algorithm. The new formulation is tested in the context of speaker-independent isolated word recognition for both clean and noisy patterns. When 10 percent of the speech is affected by burst noise at -5 dB signal-to-noise ratio (local), it is shown that joint decoding using only two noisy patterns reduces the noisy-speech recognition error rate by about 51 percent, compared to single-pattern decoding using the Viterbi Algorithm. In contrast, a simple maximization of individual pattern likelihoods provides only about a 7 percent reduction in error rate.
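A minimal sketch of plain two-pattern DTW (the paper's Multi Pattern DTW jointly aligns K patterns and feeds a constrained Viterbi decoder, which is not reproduced here):

```python
import numpy as np

def dtw(x, y):
    """DTW distance between feature sequences x (n x d) and y (m x d)."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(x[i - 1] - y[j - 1])      # local frame distance
            D[i, j] = d + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```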

Relevance: 20.00%

Abstract:

In this paper, we consider robust joint linear precoder/receive filter designs for the multiuser multiple-input multiple-output (MIMO) downlink that minimize the sum mean square error (SMSE) in the presence of imperfect channel state information at the transmitter (CSIT). The base station (BS) is equipped with multiple transmit antennas, and each user terminal is equipped with one or more receive antennas. We consider a stochastic error (SE) model and a norm-bounded error (NBE) model for the CSIT error. In the case of CSIT error following the SE model, we compute the desired downlink precoder/receive filter matrices by solving the simpler uplink problem, exploiting the uplink-downlink duality for the MSE region. In the case of CSIT error following the NBE model, we consider the worst-case SMSE as the objective function and propose an iterative algorithm for the robust transceiver design. The robustness of the proposed algorithms to imperfections in CSIT is illustrated through simulations.
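For reference, a minimal sketch of the sum-MSE objective for a linear downlink transceiver (the duality-based optimization itself is not reproduced; matrix shapes are assumptions): with stacked channel H, precoder B, receive filter A, unit-power symbols, and noise variance sigma2, SMSE = ||A H B - I||_F^2 + sigma2 * ||A||_F^2.

```python
import numpy as np

def smse(H, B, A, sigma2):
    """Sum-MSE for y = A (H B x + n), E[x x^H] = I, E[n n^H] = sigma2 I."""
    K = B.shape[1]
    T = A @ H @ B - np.eye(K)
    return np.linalg.norm(T, 'fro') ** 2 + sigma2 * np.linalg.norm(A, 'fro') ** 2
```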

Relevance: 20.00%

Abstract:

This chapter presents the real-time validation of a fixed-order robust H-2 controller designed for the lateral stabilisation of a micro air vehicle named Sarika2. A digital signal processor (DSP)-based onboard computer, named the flight instrumentation controller (FIC), is designed to operate under automatic or manual mode. The FIC gathers data from a multitude of sensors and is capable of closed-loop control to enable autonomous flight. The fixed-order lateral H-2 controller, designed with features such as the incorporation of level I flying qualities, gust alleviation, and noise rejection, is coded onto the FIC. Challenging real-time hardware-in-the-loop simulation (HILS) is performed with dSPACE1104 RTI/RTW. Responses obtained from the HILS are compared with those obtained from offline simulation. Finally, flight trials are conducted to demonstrate the satisfactory performance of the closed-loop system. The generic design methodology developed is applicable to all classes of mini and micro air vehicles.

Relevance: 20.00%

Abstract:

Modern database systems incorporate a query optimizer to identify the most efficient "query execution plan" for executing the declarative SQL queries submitted by users. A dynamic-programming-based approach is used to exhaustively enumerate the combinatorially large search space of plan alternatives and, using a cost model, to identify the optimal choice. While dynamic programming (DP) works very well for moderately complex queries with up to around a dozen base relations, it usually fails to scale beyond this stage due to its inherent exponential space and time complexity. Therefore, DP becomes practically infeasible for complex queries with a large number of base relations, such as those found in current decision-support and enterprise management applications. To address the above problem, a variety of approaches have been proposed in the literature. Some completely jettison the DP approach and resort to alternative techniques such as randomized algorithms, whereas others have retained DP by using heuristics to prune the search space to computationally manageable levels. In the latter class, a well-known strategy is "iterative dynamic programming" (IDP), wherein DP is employed bottom-up until it hits its feasibility limit, and then iteratively restarted with a significantly reduced subset of the execution plans currently under consideration. The experimental evaluation of IDP indicated that, by appropriate choice of algorithmic parameters, it was possible to almost always obtain "good" (within a factor of two of the optimal) plans, in most of the remaining cases "acceptable" (within an order of magnitude of the optimal) plans, and only rarely a "bad" plan. While IDP is certainly an innovative and powerful approach, we have found that there are a variety of common query frameworks wherein it can fail to consistently produce good plans, let alone the optimal choice. This is especially so when star or clique components are present, increasing the complexity of the join graphs. Worse, this shortcoming is exacerbated when the number of relations participating in the query is scaled upwards.
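A minimal sketch of IDP's restart idea (the cost model below, a flat join selectivity with cost equal to total intermediate cardinality, and the parameter k are toy assumptions): run DP over subsets until plans cover k relations, keep only the cheapest k-relation plan, collapse it into a single composite base relation, and restart.

```python
import itertools

def idp_join_order(rel_sizes, selectivity, k=3):
    """rel_sizes: dict relation name -> cardinality; returns (plan, size, cost)."""
    bases = {(name,): (name, size, 0.0) for name, size in rel_sizes.items()}
    while len(bases) > 1:
        dp = {frozenset([b]): bases[b] for b in bases}     # DP table over subsets
        limit = min(k, len(bases))
        for size in range(2, limit + 1):
            for combo in itertools.combinations(bases, size):
                S, best = frozenset(combo), None
                for r in range(1, size):                   # all binary splits
                    for left in itertools.combinations(sorted(combo), r):
                        L = frozenset(left)
                        pl, sl, cl = dp[L]
                        pr, sr, cr = dp[S - L]
                        out = sl * sr * selectivity        # toy cardinality model
                        cand = ((pl, pr), out, cl + cr + out)
                        if best is None or cand[2] < best[2]:
                            best = cand
                dp[S] = best
        # IDP step: keep only the cheapest largest plan, collapse, restart
        S_best = min((S for S in dp if len(S) == limit), key=lambda S: dp[S][2])
        bases = {b: v for b, v in bases.items() if b not in S_best}
        bases[tuple(sorted(itertools.chain.from_iterable(S_best)))] = dp[S_best]
    return next(iter(bases.values()))

sizes = {'A': 1000, 'B': 500, 'C': 2000, 'D': 100, 'E': 800}
print(idp_join_order(sizes, selectivity=1e-3, k=3))
```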