275 results for Traffic estimation.


Relevance: 20.00%

Abstract:

This paper proposes an algorithm for joint data detection and tracking of the dominant singular mode of a time-varying channel at the transmitter and receiver of a time-division duplex multiple-input multiple-output beamforming system. The proposed method is a modified expectation-maximization algorithm that uses an initial estimate to blindly track the dominant modes of the channel at the transmitter and the receiver while simultaneously detecting the unknown data. Furthermore, each estimate is constrained to lie within a confidence interval of the previous estimate in order to improve tracking performance and mitigate the effect of error propagation. Monte Carlo simulation results for the symbol error rate and the mean square inner product between the estimated and true singular vectors illustrate the performance benefits of the proposed method over existing techniques.
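
The abstract does not spell out the update equations, so the sketch below is only a rough stand-in for the tracking idea (not the paper's modified EM algorithm): it refines the dominant singular pair of a channel matrix by power iteration and pulls each new estimate back toward the previous one when it drifts too far, a crude analogue of the confidence-interval constraint. All function names, thresholds, and dimensions are illustrative.

```python
import numpy as np

def track_dominant_mode(H, u_prev, v_prev, min_corr=0.9, iters=5):
    """Refine estimates of the dominant left/right singular vectors of H,
    keeping each new estimate close to the previous one (illustrative)."""
    u, v = u_prev.copy(), v_prev.copy()
    for _ in range(iters):                  # power iteration on H
        u = H @ v
        u /= np.linalg.norm(u)
        v = H.conj().T @ u
        v /= np.linalg.norm(v)

    def constrain(new, old):
        # crude stand-in for a confidence-interval constraint: blend the
        # new estimate back toward the old one if it deviates too much
        if np.abs(np.vdot(old, new)) < min_corr:
            new = min_corr * old + (1 - min_corr) * new
            new /= np.linalg.norm(new)
        return new

    return constrain(u, u_prev), constrain(v, v_prev)

# toy usage: a slowly perturbed 4x4 complex channel
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
u0, _, v0h = np.linalg.svd(H)
u, v = track_dominant_mode(H + 0.05 * rng.standard_normal((4, 4)),
                           u0[:, 0], v0h[0].conj())
print(np.abs(np.vdot(u0[:, 0], u)))         # close to 1 for a small perturbation
```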

Relevance: 20.00%

Abstract:

This paper presents methodologies for incorporating phasor measurements into a conventional state estimator. The angle measurements obtained from Phasor Measurement Units (PMUs) are handled as angle-difference measurements rather than being incorporated directly, which overcomes the problems arising from the choice of reference bus. Current measurements obtained from PMUs are treated as equivalent pseudo-voltage measurements at the neighboring buses. Two solution approaches, a normal-equations approach and a linear-programming approach, are presented to show how the PMU measurements can be handled, and a comparative evaluation of the two is provided. Test results on the IEEE 14-bus system validate both approaches.
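
As a hedged illustration of the pseudo-voltage idea: with a PMU at bus i measuring the voltage phasor V_i and the current phasor I_ij into line i-j, the series line equation V_j = V_i - Z_ij * I_ij yields an equivalent voltage measurement at the neighboring bus j (line charging neglected; all per-unit values below are illustrative, not from the paper).

```python
import cmath

# PMU measurements at bus i (illustrative per-unit phasors)
V_i  = cmath.rect(1.02, cmath.pi * 3 / 180)     # 1.02 pu at +3 degrees
I_ij = cmath.rect(0.45, -cmath.pi * 12 / 180)   # 0.45 pu at -12 degrees
Z_ij = complex(0.02, 0.06)                      # series impedance of line i-j (pu)

# equivalent pseudo-voltage measurement at the neighboring bus j
V_j = V_i - Z_ij * I_ij
print(abs(V_j), cmath.phase(V_j) * 180 / cmath.pi)
```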

Relevance: 20.00%

Abstract:

Traction insulators are solid-core insulators widely used for railway electrification. Constant exposure to vandalism and mechanical vibration gives rise to faults such as shorted or cracked sheds. A fault in one or two sheds increases the stress on the remaining healthy sheds and, combined with atmospheric pollution, may lead to flashover of the insulator. Since electric stress data for these insulators are presently unavailable, a simulation study is carried out to determine the potential and electric field for the most widely used traction insulators in the country. The potential and electric field stress results obtained for normal and fault-imposed insulators are presented.

Relevance: 20.00%

Abstract:

The goal of speech enhancement algorithms is to provide an estimate of clean speech starting from noisy observations. The often-employed cost function is the mean square error (MSE). However, the MSE cannot be computed in practice because it depends on the unavailable clean signal, so it becomes necessary to find practical alternatives. In image denoising problems, the cost function (also referred to as risk) is often replaced by an unbiased estimator. Motivated by this approach, we reformulate the problem of speech enhancement from the perspective of risk minimization. Some recent contributions in risk estimation have employed Stein's unbiased risk estimator (SURE) together with a parametric denoising function, which is a linear expansion of thresholds (LET). We show that the first-order case of SURE-LET results in a Wiener-filter-type solution if the denoising function is made frequency-dependent. We also provide enhancement results obtained with both techniques and characterize the improvement by means of local as well as global SNR calculations.
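
A hedged sketch of why a frequency-dependent first-order linear denoiser chosen by SURE looks like a Wiener gain: for f_k(y) = a_k * y_k under additive white Gaussian noise of standard deviation sigma, SURE(a) = sum_k[(a_k - 1)^2 y_k^2 + 2*sigma^2*a_k - sigma^2], and minimizing per coefficient gives a_k = 1 - sigma^2 / y_k^2 (clipped to [0, 1]). The transform, frame length, and noise level below are illustrative, not the paper's exact SURE-LET construction.

```python
import numpy as np
from scipy.fft import dct, idct

def sure_linear_gain(y, sigma):
    """Per-coefficient linear shrinkage chosen by minimizing SURE for
    y = clean + white Gaussian noise of std sigma (orthonormal transform).
    The minimizer is the empirical Wiener-like gain a_k = 1 - sigma^2/y_k^2."""
    gains = np.clip(1.0 - sigma**2 / np.maximum(y**2, 1e-12), 0.0, 1.0)
    return gains * y, gains

# toy usage on the DCT coefficients of one noisy frame
rng = np.random.default_rng(1)
clean = np.sin(2 * np.pi * 0.05 * np.arange(256))
noisy = clean + 0.3 * rng.standard_normal(256)
Y = dct(noisy, norm='ortho')                 # orthonormal, so sigma is unchanged
den, _ = sure_linear_gain(Y, sigma=0.3)
enhanced = idct(den, norm='ortho')
print(np.mean((noisy - clean)**2), np.mean((enhanced - clean)**2))
```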

Relevance: 20.00%

Abstract:

We address the problem of speech enhancement using a risk-estimation approach. In particular, we propose the use of Stein's unbiased risk estimator (SURE) for solving the problem. The need for a suitable finite-sample risk estimator arises because the actual risks invariably depend on the unknown ground truth. We first consider the popular mean-squared error (MSE) criterion and then compare it against the perceptually motivated Itakura-Saito (IS) distortion by deriving unbiased estimators of the corresponding risks. We use the generalized SURE (GSURE) framework recently proposed by Eldar for the MSE. We consider dependent observation models from the exponential family with an additive noise model and derive an unbiased estimator for the risk corresponding to the IS distortion, which is non-quadratic. This serves to address the speech enhancement problem in a more general setting. Experimental results illustrate that the IS metric is effective in suppressing the musical noise that affects MSE-enhanced speech; however, in terms of global signal-to-noise ratio (SNR), the minimum-MSE solution gives better results.
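
The unbiased IS risk estimator itself is not reproduced here, but the sketch below shows the Itakura-Saito distortion the paper targets and one way it differs from the MSE: the same absolute spectral error costs far more in low-energy bins, since IS weights errors relative to the local spectral level. All spectra and values are illustrative.

```python
import numpy as np

def itakura_saito(p_ref, p_est, eps=1e-12):
    """Itakura-Saito distortion between two power spectra:
    d_IS = sum( p_ref/p_est - log(p_ref/p_est) - 1 )."""
    r = (p_ref + eps) / (p_est + eps)
    return float(np.sum(r - np.log(r) - 1.0))

def mse(p_ref, p_est):
    return float(np.mean((p_ref - p_est) ** 2))

# toy comparison: the same absolute error placed on high- vs low-energy bins
p_ref = np.concatenate([np.full(64, 10.0), np.full(64, 0.1)])
p_hi  = p_ref.copy(); p_hi[:64] += 0.05     # error on high-energy bins
p_lo  = p_ref.copy(); p_lo[64:] += 0.05     # same error, low-energy bins
print(mse(p_ref, p_hi), mse(p_ref, p_lo))                      # identical MSE
print(itakura_saito(p_ref, p_hi), itakura_saito(p_ref, p_lo))  # IS much larger for p_lo
```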

Relevance: 20.00%

Abstract:

In recent years, there has been an upsurge of research interest in cooperative wireless communications in both academia and industry. This article presents a simple overview of the pivotal topics in both mobile station (MS)- and base station (BS)-assisted cooperation in the context of cellular radio systems. Owing to the ever-increasing amount of literature in this field, the article is by no means exhaustive; rather, it is intended to serve as a roadmap by assembling a representative sample of recent results and to stimulate further research. The emphasis is initially on relay-based cooperation relying on network coding, followed by the design of cross-layer cooperative protocols conceived for MS cooperation and the concept of coalition network element (CNE)-assisted BS cooperation. A range of complexity- and backhaul-traffic-reduction techniques proposed for BS cooperation are then reviewed. A more detailed discussion is provided in the context of MS cooperation concerning the pros and cons of dispensing with high-complexity, power-hungry channel estimation. Finally, generalized design guidelines for cooperative wireless communications are presented.

Relevance: 20.00%

Abstract:

The objective of the paper is to estimate the Safe Shutdown Earthquake (SSE) and Operating/Design Basis Earthquake (OBE/DBE) for the Nuclear Power Plant (NPP) site located at Kalpakkam, Tamil Nadu, India. The NPP is located at 12.558 degrees N, 80.175 degrees E, and a 500 km circular area around the site is considered as the 'seismic study area' based on the distribution of past regional earthquake damage. The geology, seismicity and seismotectonics of the study area are studied, and a seismotectonic map showing the seismic sources and past earthquakes is prepared. Earthquake data gathered from the literature are homogenized and declustered to form a complete earthquake catalogue for the seismic study area. The conventional maximum magnitude of each source is estimated from the maximum observed magnitude (M-max(obs)) and/or by adding 0.3 to 0.5 to M-max(obs). In this study, the maximum earthquake magnitude is also estimated by establishing the region's rupture character based on source length and the associated M-max(obs). A final source-specific M-max is selected from the three M-max values by following logical criteria. To estimate the hazard at the NPP site, ten Ground-Motion Prediction Equations (GMPEs) valid for the study area are considered. These GMPEs are ranked based on Log-Likelihood (LLH) values, and the top five are used to estimate the peak ground acceleration (PGA) at the site. The maximum PGA is obtained from three faults, identified as the vulnerable sources, which are used to decide the magnitudes of the OBE and SSE. The average, normalized site-specific response spectrum is prepared considering the three vulnerable sources and is further used to establish the site-specific design spectrum at the NPP site.
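
The LLH ranking mentioned in the abstract is, in its usual formulation, the average negative log2-likelihood of observed ground motions under each GMPE's lognormal prediction, with lower scores indicating better fit. The sketch below ranks hypothetical GMPEs against synthetic observations; the candidate names, medians, and sigmas are illustrative and are not the ten GMPEs used in the paper.

```python
import numpy as np
from scipy.stats import norm

def llh_score(ln_obs, ln_pred, sigma):
    """Average negative log2-likelihood of observed ln(PGA) values under a
    GMPE that predicts a normal distribution in log space (lower is better)."""
    pdf = norm.pdf(ln_obs, loc=ln_pred, scale=sigma)
    return float(-np.mean(np.log2(pdf)))

# synthetic observed ln(PGA) values and three hypothetical GMPEs
rng = np.random.default_rng(2)
ln_obs = np.log(0.08) + 0.5 * rng.standard_normal(40)
candidates = {                      # (median ln PGA, sigma) -- illustrative
    "GMPE-A": (np.log(0.08), 0.5),
    "GMPE-B": (np.log(0.05), 0.6),
    "GMPE-C": (np.log(0.12), 0.4),
}
scores = {name: llh_score(ln_obs, mu, s) for name, (mu, s) in candidates.items()}
for name, s in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{name}: LLH = {s:.2f}")
```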

Relevance: 20.00%

Abstract:

Resonance Raman spectroscopy is a powerful analytical tool for detecting and identifying analytes, but the associated strong fluorescence background severely limits the use of the technique. Here, we show that by attaching beta-cyclodextrin (beta-CD) cavities to reduced graphene oxide (rGO) sheets we obtain a water-dispersible material (beta-CD:rGO) that combines the hydrophobicity associated with rGO with that of the cyclodextrin cavities and provides a versatile platform for resonance Raman detection. Planar aromatic and dye molecules that adsorb on the rGO domains and nonplanar molecules included within the tethered beta-CD cavities have their fluorescence effectively quenched. We show that it is possible, using the water-dispersible beta-CD:rGO sheets, to record the resonance Raman spectra of adsorbed and included organic chromophores directly in aqueous media without having to extract them or deposit them on a substrate. This is significant, as it allows us to identify and estimate organic analytes present in water by resonance Raman spectroscopy.

Relevance: 20.00%

Abstract:

State estimation is one of the most important functions in an energy control centre. A computationally efficient state estimator that is free from numerical instability and ill-conditioning is essential for the security assessment of an electric power grid. Whereas approaches that successfully overcome the numerical ill-conditioning issues have been proposed, an efficient algorithm for addressing the convergence issues in the presence of topological errors is yet to be evolved. Trust region (TR) methods have been successfully employed to overcome the divergence problem to a certain extent. In this study, case studies are presented where conventional algorithms, including the existing TR methods, fail to converge. A linearised model-based TR method for successfully overcoming the convergence issues is proposed. On the computational front, unlike the existing TR methods for state estimation, which employ quadratic models, the proposed linear model-based estimator is computationally efficient because the model minimiser can be computed in a single step. The model minimiser at each step is computed by minimising the linearised model subject to the TR and measurement mismatch constraints. The infinity norm is used to define the geometry of the TR, and measurement mismatch constraints are employed to improve accuracy. The proposed algorithm is compared with the quadratic model-based TR algorithm through case studies on the IEEE 30-bus system and on 205-bus and 514-bus equivalent systems of part of the Indian grid.
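
A hedged sketch of why a linear model with an infinity-norm trust region admits a single-step minimiser: minimising a linear function over a box is solved coordinate-wise by moving each state component to the box boundary in the descent direction. The measurement-mismatch constraints of the actual method are omitted here (they would turn the step into a small linear program), and the Jacobian, weights, and residuals below are illustrative.

```python
import numpy as np

def linear_tr_step(H, W, r, delta):
    """One step of a linear-model trust-region iteration for the WLS
    objective J(x) = r(x)^T W r(x), r = z - h(x).

    The linearised model is m(dx) = J(x) + grad^T dx with
    grad = -2 H^T W r, and its minimiser over the box
    ||dx||_inf <= delta is dx_i = -delta * sign(grad_i),
    computed in a single step."""
    grad = -2.0 * H.T @ W @ r
    return -delta * np.sign(grad)

# toy usage: 3 measurements, 2 state variables (illustrative values)
H = np.array([[1.0, 0.0], [0.5, 1.0], [0.0, 1.0]])   # measurement Jacobian
W = np.diag([100.0, 100.0, 50.0])                     # measurement weights
r = np.array([0.02, -0.01, 0.03])                     # residuals z - h(x)
print(linear_tr_step(H, W, r, delta=0.05))
```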

Relevance: 20.00%

Abstract:

Typical image-guided diffuse optical tomographic image reconstruction procedures reduce the number of optical parameters to be reconstructed to the number of distinct regions identified in the structural information provided by the traditional imaging modality. This makes the image reconstruction problem less ill-posed compared to the traditional underdetermined case. Still, the methods deployed in this case are the same as those used for traditional diffuse optical image reconstruction, which involve a regularization term as well as computation of the Jacobian. A gradient-free Nelder-Mead simplex method is proposed here to perform the image reconstruction and is shown to provide solutions that closely match those obtained using established methods, even with highly noisy data. The proposed method also has the distinct advantage of being more efficient, as it is regularization-free and involves only repeated forward calculations. (C) 2013 Society of Photo-Optical Instrumentation Engineers (SPIE)
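
A minimal sketch of the idea, assuming only a handful of region-wise absorption values and using a toy stand-in for the forward model (the real solver would be a FEM diffusion model): the fit relies purely on repeated forward evaluations driven by Nelder-Mead, with no regularization term and no Jacobian.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-in for the diffusion forward model: it maps one
# absorption coefficient per segmented region to boundary measurements.
rng = np.random.default_rng(3)
n_regions, n_meas = 3, 40
A = rng.uniform(0.5, 1.5, size=(n_meas, n_regions))   # sensitivity-like map

def forward(mu_a):
    return np.exp(-A @ mu_a)                            # toy boundary data

mu_true = np.array([0.01, 0.02, 0.015])
data = forward(mu_true) * (1 + 0.01 * rng.standard_normal(n_meas))  # 1% noise

# Regularization-free, gradient-free fit: only repeated forward calls.
objective = lambda mu: np.sum((forward(mu) - data) ** 2)
res = minimize(objective, x0=np.full(n_regions, 0.012),
               method='Nelder-Mead', options={'xatol': 1e-6, 'fatol': 1e-12})
print(res.x)          # should land close to mu_true despite the noisy data
```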

Relevance: 20.00%

Abstract:

Bentonite clays have proven attractive as buffer and backfill materials in high-level nuclear waste repositories around the world. A quick estimation of the swelling pressures of compacted bentonites for different clay-water-electrolyte interactions is essential in the design of buffer and backfill materials. Theoretical studies on the swelling behavior of bentonites are based on diffuse double layer (DDL) theory, and establishing the theoretical relationship between void ratio and swelling pressure (e versus P) conventionally requires the evaluation of elliptic integrals and an inverse analysis. In this paper, a novel procedure is presented to establish the theoretical e versus P relationship based on the Gouy-Chapman method. The proposed procedure establishes a unique relationship between the electric potentials of interacting and non-interacting diffuse clay-water-electrolyte systems, from which the relation between swelling pressure and void ratio is deduced. This approach is simple and eliminates the need for elliptic integral evaluation and inverse analysis. Application of the proposed approach to estimate the swelling pressures of four compacted bentonites (MX 80, Febex, Montigel and Kunigel V1) at different dry densities shows that the method is simple and predicts solutions with very good accuracy. Moreover, the procedure provides a continuous e versus P distribution and is thus computationally efficient compared with existing techniques.
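
For context, the Gouy-Chapman swelling pressure that such e-versus-P procedures ultimately evaluate is P = 2 n0 k T (cosh(u) - 1), where u is the nondimensional midplane potential and n0 the bulk ion concentration. The sketch below computes this for illustrative inputs; obtaining u for a given void ratio without elliptic integrals, which is the paper's contribution, is not reproduced here.

```python
import math

def swelling_pressure(u_mid, conc_mol_per_L, T=298.15):
    """Diffuse-double-layer swelling pressure (Pa) from the nondimensional
    midplane potential u_mid: P = 2 n0 k T (cosh(u_mid) - 1)."""
    k   = 1.380649e-23                      # Boltzmann constant, J/K
    N_A = 6.02214076e23                     # Avogadro's number, 1/mol
    n0  = conc_mol_per_L * 1000.0 * N_A     # bulk ion concentration, ions/m^3
    return 2.0 * n0 * k * T * (math.cosh(u_mid) - 1.0)

# toy usage: 0.01 M electrolyte, midplane potential u = 3 (illustrative)
print(swelling_pressure(3.0, 0.01) / 1000.0, "kPa")
```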

Relevance: 20.00%

Abstract:

In this work, the grid mismatch problem in single-snapshot direction-of-arrival estimation is studied. We derive a Bayesian Cramer-Rao bound for the grid mismatch problem under an errors-in-variables model and propose a block-sparse estimator for joint grid matching and sparse recovery.
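
The abstract does not spell out the model; one common first-order off-grid formulation, sketched below for a uniform linear array, pairs each on-grid steering vector with its derivative so that the unknowns form length-2 blocks, which is the kind of structure a block-sparse estimator can exploit. The array size, grid, source parameters, and noise level are illustrative, and the estimator itself is not implemented.

```python
import numpy as np

def ula_steering(theta, m):
    """Steering vector of an m-element half-wavelength ULA at angle theta (rad)."""
    return np.exp(1j * np.pi * np.sin(theta) * np.arange(m))

def ula_steering_deriv(theta, m):
    """Derivative of the steering vector with respect to theta."""
    n = np.arange(m)
    return 1j * np.pi * np.cos(theta) * n * np.exp(1j * np.pi * np.sin(theta) * n)

m, grid = 16, np.deg2rad(np.arange(-60, 61, 2.0))
A = np.stack([ula_steering(t, m) for t in grid], axis=1)        # on-grid atoms
B = np.stack([ula_steering_deriv(t, m) for t in grid], axis=1)  # mismatch atoms

# single snapshot from one off-grid source at 10.7 degrees (grid step is 2 degrees)
rng = np.random.default_rng(4)
y = ula_steering(np.deg2rad(10.7), m) + 0.05 * (
    rng.standard_normal(m) + 1j * rng.standard_normal(m))

# First-order model: y ~ A x + B (delta * x) + n, with x sparse and delta the
# per-atom grid offsets; the unknowns [x_k, delta_k * x_k] form length-2 blocks.
```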

Relevance: 20.00%

Abstract:

Streaming applications demand hard bandwidth and throughput guarantees in a multiprocessor environment amidst resource-competing processes. We present a Label-Switching-based Network-on-Chip (LS-NoC) motivated by the throughput guarantees offered by bandwidth reservation. Label switching is a packet-relaying technique in which individual packets carry route information in the form of labels. A centralized LS-NoC management framework engineers traffic into Quality-of-Service (QoS)-guaranteed routes. LS-NoC caters to the requirements of streaming applications where communication channels are fixed over the lifetime of the application. The proposed NoC framework inherently supports heterogeneous and ad hoc systems-on-chip. The LS-NoC can be used in conjunction with a conventional best-effort NoC as a QoS-guaranteed communication network or as a replacement for the conventional NoC. A multicast- and broadcast-capable label-switched router for the LS-NoC has been designed. A 5-port router with a 256-bit data bus and 4-bit labels occupies 0.431 mm^2 in 130 nm technology and delivers a peak bandwidth of 80 Gbit/s per link at 312.5 MHz. Bandwidth and latency guarantees of the LS-NoC have been demonstrated on traffic from example streaming applications and on constant and variable bit rate traffic patterns. LS-NoC was found to have a competitive Area x Power/Throughput figure of merit compared with state-of-the-art NoCs providing QoS. Circuit switching with link-sharing abilities and support for asynchronous operation make LS-NoC a desirable choice for QoS servicing in chip multiprocessors. (C) 2013 Elsevier B.V. All rights reserved.
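
A toy sketch of the label-switching idea underlying LS-NoC: each router swaps the incoming label for an outgoing one and forwards the packet on a fixed port, so the entire route is pinned down once labels are installed by the management framework. The tables and topology below are purely illustrative and do not reflect the LS-NoC router design.

```python
# Per-router label tables: incoming label -> (output port, outgoing label).
tables = {
    "R0": {5: ("east",  9)},
    "R1": {9: ("north", 2)},
    "R2": {2: ("local", None)},   # None: deliver to the attached core
}
topology = {("R0", "east"): "R1", ("R1", "north"): "R2"}

def forward(packet, router="R0"):
    """Forward a labelled packet hop by hop until it reaches its destination."""
    while True:
        port, new_label = tables[router][packet["label"]]
        if new_label is None:
            return router, packet          # reached the destination core
        packet["label"] = new_label        # label swap
        router = topology[(router, port)]  # traverse the link

print(forward({"label": 5, "payload": "stream sample"}))
```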

Relevance: 20.00%

Abstract:

Estimating program worst-case execution time (WCET) accurately and efficiently is a challenging task. Several programs exhibit phase behavior, wherein cycles per instruction (CPI) varies in phases during execution. Recent work has suggested the use of phases in such programs to estimate WCET with minimal instrumentation. However, the suggested model uses a function of mean CPI that carries no probabilistic guarantees. We propose to use Chebyshev's inequality, which can be applied to any arbitrary distribution of CPI samples, to probabilistically bound the CPI of a phase. Applying Chebyshev's inequality to phases that exhibit high CPI variation leads to pessimistic upper bounds. We propose a mechanism that refines such phases into sub-phases based on program counter (PC) signatures collected via profiling and also allows the user to control the variance of CPI within a sub-phase. We describe a WCET analyzer built on these lines and evaluate it with standard WCET and embedded benchmark suites on two different architectures for three chosen probabilities, p = {0.9, 0.95, 0.99}. For p = 0.99, refinement based on PC signatures alone reduces the average pessimism of the WCET estimate by 36% (77%) on Arch1 (Arch2). Compared to Chronos, an open-source static WCET analyzer, the average improvement in the estimates obtained by refinement is 5% (125%) on Arch1 (Arch2). On limiting the variance of CPI within a sub-phase to {50%, 10%, 5%, 1%} of its original value, the average accuracy of the WCET estimate improves further to {9%, 11%, 12%, 13%}, respectively, on Arch1. On Arch2, the average accuracy of WCET improves to 159% when the CPI variance is limited to 50% of its original value, and the improvement is marginal beyond that point.
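
The Chebyshev bound the abstract relies on can be written down directly: for any CPI distribution with mean mu and standard deviation sigma, P(CPI >= mu + k*sigma) <= 1/k^2, so choosing k = 1/sqrt(1 - p) gives an upper bound exceeded with probability at most 1 - p. The sketch below applies this to illustrative CPI samples; the exact bound form, sample values, and per-phase bookkeeping of the paper's analyzer are not reproduced.

```python
import math
import statistics

def cpi_upper_bound(cpi_samples, p):
    """Chebyshev upper bound on the CPI of a phase:
    CPI <= mu + sigma / sqrt(1 - p), exceeded with probability at most 1 - p."""
    mu = statistics.mean(cpi_samples)
    sigma = statistics.pstdev(cpi_samples)
    return mu + sigma / math.sqrt(1.0 - p)

# toy usage: CPI samples profiled for one phase (illustrative values)
samples = [1.10, 1.15, 1.08, 1.30, 1.12, 1.22, 1.18, 1.09]
for p in (0.9, 0.95, 0.99):
    print(f"p={p}: CPI <= {cpi_upper_bound(samples, p):.2f}")
# A phase executing N instructions is then bounded by N * bound cycles; high
# CPI variance inflates the bound, which motivates splitting such phases into
# sub-phases with smaller variance, as the paper proposes.
```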