123 results for Denoising Techniques


An important question in kernel regression is one of estimating the order and bandwidth parameters from available noisy data. We propose to solve the problem within a risk estimation framework. Considering an independent and identically distributed (i.i.d.) Gaussian observations model, we use Stein's unbiased risk estimator (SURE) to estimate a weighted mean-square error (MSE) risk, and optimize it with respect to the order and bandwidth parameters. The two parameters are thus spatially adapted in such a manner that noise smoothing and fine structure preservation are simultaneously achieved. On the application side, we consider the problem of image restoration from uniform/non-uniform data, and show that the SURE approach to spatially adaptive kernel regression results in better quality estimation compared with its spatially non-adaptive counterparts. The denoising results obtained are comparable to those obtained using other state-of-the-art techniques, and in some scenarios, superior.
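
For orientation, the following is a minimal sketch of the underlying idea: SURE evaluated for a linear kernel smoother with a single global bandwidth under i.i.d. Gaussian noise. It is not the paper's spatially adaptive, order-adaptive estimator, and the test signal and noise level are hypothetical.

```python
import numpy as np

def gaussian_kernel_smoother_matrix(x, h):
    """Row-normalized Gaussian kernel weights (order-0 kernel regression)."""
    d = x[:, None] - x[None, :]
    W = np.exp(-0.5 * (d / h) ** 2)
    return W / W.sum(axis=1, keepdims=True)

def sure(y, W, sigma):
    """Stein's unbiased risk estimate of the MSE for a linear smoother f_hat = W @ y."""
    f_hat = W @ y
    n = y.size
    return np.sum((y - f_hat) ** 2) - n * sigma**2 + 2 * sigma**2 * np.trace(W)

# Pick the bandwidth that minimizes SURE on noisy samples of a smooth signal.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
sigma = 0.1
y = np.sin(2 * np.pi * x) + sigma * rng.standard_normal(x.size)

bandwidths = np.linspace(0.005, 0.1, 40)
risks = [sure(y, gaussian_kernel_smoother_matrix(x, h), sigma) for h in bandwidths]
h_opt = bandwidths[int(np.argmin(risks))]
print(f"SURE-optimal bandwidth: {h_opt:.4f}")
```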

In this paper, the problem of assigning cooperating Uninhabited Aerial Vehicles (UAVs) to perform multiple tasks on multiple targets is posed as a combinatorial optimization problem. The tasks of target classification, attack, and verification are assigned using nature-inspired techniques, namely the Artificial Immune System (AIS), Particle Swarm Optimization (PSO), and the Virtual Bee Algorithm (VBA). These techniques avoid the prohibitive computational complexity that classical combinatorial optimization methods face on this NP-hard problem. Using the algorithms, we find the best sequence in which to attack and destroy the targets while minimizing either the total distance traveled or the maximum distance traveled by any UAV. The performance of the UAVs in classifying, attacking, and verifying the targets is evaluated for AIS, PSO, and VBA.
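
As an illustration of one ingredient, the sketch below uses a plain random-key particle swarm (PSO) to order targets for a single UAV so as to minimize the total distance traveled. This is a hypothetical, simplified setting; the paper's multi-UAV, multi-task assignment with AIS and VBA is considerably richer.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical scenario: one UAV at the origin must visit all targets;
# the fitness is the total distance of the visit sequence.
targets = rng.uniform(0, 100, size=(8, 2))
start = np.zeros(2)

def tour_length(order):
    pts = np.vstack([start, targets[order]])
    return np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))

def decode(position):
    # Random-key decoding: sort continuous particle coordinates to get a permutation.
    return np.argsort(position)

# Plain global-best PSO over the continuous keys.
n_particles, n_dims, iters = 30, len(targets), 200
w, c1, c2 = 0.7, 1.5, 1.5
pos = rng.uniform(0, 1, (n_particles, n_dims))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([tour_length(decode(p)) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(iters):
    r1, r2 = rng.uniform(size=pos.shape), rng.uniform(size=pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    f = np.array([tour_length(decode(p)) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("best visit order:", decode(gbest), "length:", round(tour_length(decode(gbest)), 1))
```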

Novel switching sequences have recently been proposed for a neutral-point-clamped three-level inverter controlled effectively as an equivalent two-level inverter. It is shown that the four novel sequences can be grouped into two pairs. Using each pair of sequences, a hybrid pulsewidth modulation (PWM) technique is proposed that deploys the two sequences in appropriate spatial regions to reduce the current ripple. Further, a third hybrid PWM technique is proposed that uses all five sequences (including the conventional sequence) in appropriate spatial regions. Each proposed hybrid PWM technique is shown, both analytically and experimentally, to outperform its constituent PWM methods in terms of harmonic distortion. In particular, the third proposed hybrid PWM technique reduces the total harmonic distortion considerably in the low- and high-speed ranges of a constant volts-per-hertz induction motor drive, compared to centered space-vector PWM.

In this article, we aim at reducing the error rate of an online Tamil symbol recognition system by employing multiple experts to reevaluate certain decisions of the primary support vector machine classifier. First, motivated by the relatively high frequency of base consonants in the script, a reevaluation technique is proposed to correct ambiguities arising in the base consonants. Second, a dynamic time-warping method is proposed to automatically extract the discriminative regions for each set of confused characters; class-specific features derived from these regions help reduce the degree of confusion. Third, statistics of specific features are proposed for resolving confusions in vowel modifiers. The reevaluation approaches are tested on two databases: (a) the isolated Tamil symbols in the IWFHR test set, and (b) the symbols segmented from a set of 10,000 Tamil words. The recognition rate on the isolated test symbols of the IWFHR database improves by 1.9%. For the word database, the incorporation of the reevaluation step improves the symbol recognition rate by 3.5% (from 88.4% to 91.9%), which in turn boosts the word recognition rate by 11.9% (from 65.0% to 76.9%). The reduction in the word error rate is achieved with a generic approach, without the incorporation of language models.
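
A minimal sketch of the dynamic time-warping step on two hypothetical pen traces is given below: regions of the warping path with large pointwise distance are taken as candidates for discriminative regions. The feature pipeline and trace data are illustrative assumptions, not the paper's.

```python
import numpy as np

def dtw(a, b):
    """Dynamic time warping between two feature sequences a (n, d) and b (m, d).
    Returns the accumulated cost and the warping path as index pairs."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack to recover the alignment path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return D[n, m], path[::-1]

# Two hypothetical (x, y) pen traces of confusable symbols: points of the warping path
# with large pointwise distance suggest where the two classes differ most.
t = np.linspace(0, 1, 60)
trace_a = np.column_stack([t, np.sin(2 * np.pi * t)])
trace_b = np.column_stack([t, np.sin(2 * np.pi * t) + 0.4 * (t > 0.6)])
cost, path = dtw(trace_a, trace_b)
dists = [np.linalg.norm(trace_a[i] - trace_b[j]) for i, j in path]
print("most discriminative samples of trace_a:", [path[k][0] for k in np.argsort(dists)[-5:]])
```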

In GaAs-based pseudomorphic high-electron-mobility transistor device structures, the strain and composition of the InxGa1-xAs channel layer are very important as they influence the electronic properties of these devices. In this context, transmission electron microscopy techniques such as (002) dark-field imaging, high-resolution transmission electron microscopy (HRTEM) imaging, scanning transmission electron microscopy-high-angle annular dark-field (STEM-HAADF) imaging, and selected-area diffraction are useful. A quantitative comparative study using these techniques is relevant for assessing the merits and limitations of the respective techniques. In this article, we have investigated the strain and composition of the InxGa1-xAs layer with the mentioned techniques and compared the results. The HRTEM images were investigated with strain-state analysis. The indium content in this layer was quantified by HAADF imaging and correlated with STEM simulations. The studies showed that the InxGa1-xAs channel layer was grown pseudomorphically, leading to tetragonal strain along the [001] growth direction, and that the average indium content (x) in the epilayer is approximately 0.12. We found consistency in the results obtained using the various methods of analysis.

Carbon Fiber Reinforced Plastic composites were fabricated through vacuum resin infusion by adopting two different processing conditions, viz., vacuum only in the first case and vacuum plus external pressure in the second, in order to generate two levels of void content in the samples. The samples were relatively graded as higher and lower void-bearing, respectively. Microscopy and C-scan techniques were used to characterize the voids arising from the two processing conditions. Further, to determine the influence of voids on impact behavior, the fabricated [+45°/90°/-45°] composite samples were subjected to low-velocity impacts. The tests show impact properties such as peak load and energy to peak load registering higher values for the lower void-bearing case, whereas the total energy, propagation energy, and ductility index were higher for the higher void-bearing samples. Fractographic analysis showed that the higher void-bearing samples displayed fewer layer separations in the laminate. These and other results are described and discussed in this report.

The problem of modelling the transient response of an elastic-perfectly-plastic cantilever beam carrying an impulsively loaded tip mass is often referred to as the Parkes cantilever problem [25] (The permanent deformation of a cantilever struck transversely at its tip, Proc. R. Soc. A, 288, p. 462). This paradigm for classical modelling of projectile impact on structures is revisited and updated using the mesh-free method smoothed particle hydrodynamics (SPH). The purpose of this study is to investigate further the behaviour of cantilever beams subjected to projectile impact at the tip, by considering physically real effects such as plastic shearing close to the projectile, shear deformation, and the variation of the shear strain along the length and across the thickness of the beam. Finally, going beyond macroscopic structural plasticity, a strategy to incorporate physical discontinuity (due to crack formation) in the SPH discretization is discussed and explored in the context of tip severance of the cantilever beam. The proposed scheme thus illustrates the potential for a more refined treatment of penetration mechanics, which is paramount in the exploration of structural response under ballistic loading. The objective is to contribute to a computational modelling framework within which transient dynamic plasticity and even penetration/failure phenomena for a range of materials, structures, and impact conditions can be explored ab initio, this being essential for arriving at suitable tools for the design of armour systems.

We address the problem of designing an optimal pointwise shrinkage estimator in the transform domain, based on the minimum probability of error (MPE) criterion. We assume an additive model for the noise corrupting the clean signal. The proposed formulation is general in the sense that it can handle various noise distributions. We consider several noise distributions (Gaussian, Student's-t, and Laplacian) and compare the denoising performance of the resulting estimator with that of mean-squared error (MSE)-based estimators. The MSE optimization is carried out using an unbiased estimator of the MSE, namely Stein's Unbiased Risk Estimate (SURE). Experimental results show that the MPE estimator outperforms the SURE estimator in terms of the SNR of the denoised output for low (0-10 dB) and medium (10-20 dB) values of the input SNR.
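
For context, the following is a minimal sketch of the SURE-tuned pointwise shrinkage baseline under additive white Gaussian noise (the MSE-based competitor mentioned above); it does not implement the MPE criterion itself, and the DCT transform, test signal, and noise level are assumptions for illustration.

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(3)

# Hypothetical test signal: piecewise-smooth signal with additive white Gaussian noise.
n, sigma = 512, 0.5
t = np.linspace(0, 1, n)
clean = np.sin(4 * np.pi * t) + (t > 0.5)
noisy = clean + sigma * rng.standard_normal(n)

# Pointwise shrinkage in an orthonormal transform domain (DCT here): f_hat_k = a_k * y_k.
# Per coefficient, SURE = (a_k - 1)^2 y_k^2 - sigma^2 + 2 sigma^2 a_k,
# minimized by a_k = max(0, 1 - sigma^2 / y_k^2): the SURE-based baseline gain.
y = dct(noisy, norm="ortho")
a = np.clip(1.0 - sigma**2 / np.maximum(y**2, 1e-12), 0.0, 1.0)
denoised = idct(a * y, norm="ortho")

def snr(ref, est):
    return 10 * np.log10(np.sum(ref**2) / np.sum((ref - est) ** 2))

print(f"input SNR: {snr(clean, noisy):.2f} dB, output SNR: {snr(clean, denoised):.2f} dB")
```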

Models of river flow time series are essential for efficient management of a river basin. They help policy makers develop efficient water utilization strategies that maximize the utility of a scarce water resource. Time series analysis has been used extensively for modeling river flow data, and the use of machine learning techniques such as support vector regression and neural network models is gaining increasing popularity. In this paper we compare the performance of these techniques by applying them to a long-term time series of the inflows into the Krishnaraja Sagar reservoir (KRS) from three tributaries of the river Cauvery. In this study, flow data over a period of 30 years from three observation points established in the upper Cauvery sub-basin are analyzed to estimate their contribution to KRS. Specifically, the ANN model is a multi-layer feed-forward network trained with a back-propagation algorithm, and support vector regression is used with an epsilon-insensitive loss function. Auto-regressive moving average models are also applied to the same data. The performance of the different techniques is compared using metrics such as root mean squared error (RMSE), correlation, normalized root mean squared error (NRMSE), and Nash-Sutcliffe Efficiency (NSE).
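
The evaluation metrics named above can be computed as in the sketch below; the NRMSE normalization by the observed range is one common convention and an assumption here, and the observed/simulated series are hypothetical.

```python
import numpy as np

def rmse(obs, sim):
    return np.sqrt(np.mean((obs - sim) ** 2))

def nrmse(obs, sim):
    # Normalized by the range of the observed flows (one common convention).
    return rmse(obs, sim) / (obs.max() - obs.min())

def nse(obs, sim):
    # Nash-Sutcliffe Efficiency: 1 is a perfect fit, 0 is no better than the observed mean.
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def correlation(obs, sim):
    return np.corrcoef(obs, sim)[0, 1]

# Hypothetical observed vs. simulated inflow series (e.g., one tributary's contribution).
obs = np.array([120.0, 95.0, 210.0, 340.0, 180.0, 75.0, 60.0, 130.0])
sim = np.array([110.0, 100.0, 190.0, 360.0, 170.0, 80.0, 65.0, 120.0])
for name, fn in [("RMSE", rmse), ("NRMSE", nrmse), ("NSE", nse), ("r", correlation)]:
    print(f"{name}: {fn(obs, sim):.3f}")
```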

The problem addressed in this paper is sound, scalable, demand-driven null-dereference verification for Java programs. Our approach consists conceptually of a base analysis plus two major extensions for enhanced precision. The base analysis is a dataflow analysis in which we propagate formulas in the backward direction from a given dereference and compute a necessary condition at the entry of the program for the dereference to be potentially unsafe. The extensions are motivated by the presence of certain "difficult" constructs in real programs, e.g., virtual calls with too many candidate targets and library method calls, which would require excessive time to analyze fully. The base analysis is hence configured to skip such a difficult construct when it is encountered, by dropping all information tracked so far that could potentially be affected by the construct. Our extensions are essentially more precise ways to account for the effect of these constructs on the information being tracked, without requiring full analysis of the constructs. The first extension is a novel scheme to transmit formulas along certain kinds of def-use edges, while the second is based on manually constructed backward-direction summary functions of library methods. We have implemented our approach and applied it on a set of real-life benchmarks. The base analysis is on average able to declare about 84% of dereferences in each benchmark as safe, while the two extensions push this number up to 91%.
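
The sketch below gives a highly simplified, hypothetical rendering of the base-analysis idea: a fact about the dereferenced variable is propagated backward through straight-line statements and dropped when a difficult construct is met. It is not the paper's formula-based interprocedural analysis.

```python
# Minimal sketch (hypothetical, not the paper's analysis): backward propagation of
# nullness information for a dereferenced variable through three-address statements,
# giving up conservatively at "difficult" constructs.

DIFFICULT = {"virtual_call", "library_call"}

def backward_null_check(stmts, deref_var):
    """stmts: list of (kind, dst, src) tuples in program order. Returns the set of
    variables that must be non-null at entry for the dereference to be safe,
    or None if a difficult construct forced the analysis to give up."""
    tracked = {deref_var}            # variables whose nullness reaches the dereference
    for kind, dst, src in reversed(stmts):
        if kind in DIFFICULT and dst in tracked:
            return None              # base analysis: skip the construct, drop the fact
        if kind == "copy" and dst in tracked:
            tracked.discard(dst)
            tracked.add(src)         # x = y: nullness of x comes from y
        elif kind == "new" and dst in tracked:
            tracked.discard(dst)     # x = new T(): x is definitely non-null here
    return tracked

# x = new A(); y = x; y.f : the dereference of y is provably safe.
safe_prog = [("new", "x", None), ("copy", "y", "x")]
print(backward_null_check(safe_prog, "y"))   # set() -> no condition left, safe

# y = someVirtualCall(); y.f : the tracked fact hits a difficult construct.
hard_prog = [("virtual_call", "y", None)]
print(backward_null_check(hard_prog, "y"))   # None -> cannot prove safety
```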

Streamflow forecasts at the daily time scale are necessary for effective management of water resources systems. Typical applications include flood control, water quality management, water supply to multiple stakeholders, hydropower, and irrigation systems. Conventionally, physically based conceptual models and data-driven models are used for forecasting streamflows. Conceptual models require a detailed understanding of the physical processes governing the system being modeled; major constraints in developing effective conceptual models are the sparse hydrometric gauge network and short historical records that limit our understanding of these processes. Data-driven models, on the other hand, rely solely on previous hydrological and meteorological data without directly taking into account the underlying physical processes. Among the various data-driven models, Auto-Regressive Integrated Moving Average (ARIMA) models and Artificial Neural Networks (ANNs) are the most widely used techniques. The present study assesses the performance of ARIMA and ANN methods in producing one- to seven-day-ahead forecasts of daily streamflows at the Basantpur stream gauge site, situated upstream of Hirakud Dam in the Mahanadi river basin, India. The ANNs considered include a Feed-Forward back-propagation Neural Network (FFNN) and a Radial Basis Neural Network (RBNN). Daily streamflow forecasts at the Basantpur site find use in the management of water from Hirakud reservoir.
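
A minimal sketch of the ARIMA part, using statsmodels on a synthetic daily flow series, is shown below; the series and the (2, 1, 1) order are illustrative assumptions, not the study's data or model selection.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(7)

# Hypothetical daily streamflow record (m^3/s); in practice this would be the gauged
# series at the Basantpur site. The ARIMA order below is illustrative only.
flow = 200 + 50 * np.sin(2 * np.pi * np.arange(730) / 365) + rng.normal(0, 10, 730)

train, test = flow[:-7], flow[-7:]
model = ARIMA(train, order=(2, 1, 1)).fit()
forecast = model.forecast(steps=7)          # one- to seven-day-ahead forecasts

rmse = np.sqrt(np.mean((test - forecast) ** 2))
print("7-day-ahead forecasts:", np.round(forecast, 1))
print("RMSE over the 7-day horizon:", round(rmse, 2))
```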

The effect of multiplicative noise on a signal is much larger than that of additive noise. In this paper, we address the problem of suppressing multiplicative noise in one-dimensional signals. To deal with signals corrupted by multiplicative noise, we propose a denoising algorithm based on minimization of an unbiased estimate of the mean-square error (MSE), referred to as MURE. We derive an expression for this unbiased estimate of the MSE. The proposed denoising is carried out in the wavelet domain (soft thresholding), with MURE computed in the time domain. The parameters of the thresholding function are obtained by minimizing MURE. We show that the MURE-optimal parameters are very close to the optimal parameters obtained by considering the oracle MSE. Experiments show that the SNR improvement of the proposed denoising algorithm is competitive with that of a state-of-the-art method.
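
For reference, the sketch below shows the wavelet-domain soft-thresholding machinery on a synthetic signal under multiplicative noise, with a fixed illustrative threshold; the paper instead selects the thresholding parameters by minimizing the MURE criterion, which is not reproduced here.

```python
import numpy as np
import pywt

rng = np.random.default_rng(5)

# Hypothetical signal under multiplicative noise: y = x * (1 + n), n zero-mean Gaussian.
t = np.linspace(0, 1, 1024)
clean = 2.0 + np.sin(2 * np.pi * 5 * t)
noisy = clean * (1.0 + 0.2 * rng.standard_normal(t.size))

def soft_threshold_denoise(y, wavelet="db4", level=4, thr=0.3):
    """Wavelet-domain soft thresholding; thr is a fixed illustrative threshold,
    whereas the paper tunes the thresholding parameters by minimizing MURE."""
    coeffs = pywt.wavedec(y, wavelet, level=level)
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(y)]

denoised = soft_threshold_denoise(noisy)
snr = lambda x, xh: 10 * np.log10(np.sum(x**2) / np.sum((x - xh) ** 2))
print(f"SNR before: {snr(clean, noisy):.2f} dB, after: {snr(clean, denoised):.2f} dB")
```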

Image and video analysis requires rich features that can characterize various aspects of visual information. These rich features are typically extracted from the pixel values of the images and videos, which requires a huge amount of computation and is seldom useful for real-time analysis. In contrast, compressed-domain analysis offers relevant information about the visual content in the form of transform coefficients, motion vectors, quantization steps, and coded block patterns, with minimal computational burden. The amount of work done in the compressed domain is much smaller than in the pixel domain. This paper surveys video analysis efforts published during the last decade across the spectrum of video compression standards. In this survey, we include only the analysis part, excluding the processing aspect of the compressed domain. The survey spans various computer vision applications such as moving object segmentation, human action recognition, indexing, retrieval, face detection, video classification, and object tracking in compressed videos.

Computer Assisted Assessment (CAA) has existed for several years. While some forms of CAA do not require sophisticated text understanding (e.g., multiple-choice questions), other student answers consist of free text and require analysis of the text in the answer. Research on the latter has to date concentrated on two main sub-tasks: (i) grading of essays, done mainly by checking the style, grammatical correctness, and coherence of the essay, and (ii) assessment of short free-text answers. In this paper, we present a structured view of relevant research in automated assessment techniques for short free-text answers. We review papers spanning the last 15 years of research, with emphasis on recent work. Our main objectives are twofold. First, we present the survey in a structured way by segregating information on datasets, problem formulation, techniques, and evaluation measures. Second, we discuss some potential future directions in this domain, which we hope will be helpful for researchers.