965 results for Prescribed mean-curvature problem
Abstract:
Savitzky-Golay (S-G) filters are finite impulse response lowpass filters obtained by smoothing data with a local least-squares (LS) polynomial approximation. Savitzky and Golay proved in their hallmark paper that local LS fitting of polynomials and their evaluation at the mid-point of the approximation interval is equivalent to filtering with a fixed impulse response. The problem we address here is how to choose a pointwise minimum mean squared error (MMSE) S-G filter length or order for smoothing, while preserving the temporal structure of a time-varying signal. We solve the bias-variance tradeoff involved in the MMSE optimization using Stein's unbiased risk estimator (SURE). We observe that the 3-dB cutoff frequency of the SURE-optimal S-G filter is higher where the signal varies rapidly and lower where it varies slowly, which allows the bias and variance to be traded off suitably and results in near-MMSE performance. At low signal-to-noise ratios (SNRs), the performance of the adaptive filter-length algorithm improves when a regularization term is incorporated in the SURE objective function. We evaluate the algorithms on real-world electrocardiogram (ECG) signals, and the results exhibit considerable SNR improvement. Noise performance analysis shows that the proposed algorithms are comparable to, and in some cases better than, some standard denoising techniques available in the literature.
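To make the SURE-based selection concrete, here is a minimal sketch that picks a single Savitzky-Golay window length by minimizing a SURE estimate of the MSE risk. It is a simplification of what the abstract describes (the paper's algorithm is pointwise and optionally regularized); the candidate lengths, polynomial order, and noise level below are illustrative assumptions.

```python
import numpy as np
from scipy.signal import savgol_filter, savgol_coeffs

def sure_sg(y, window, order, sigma):
    """SURE estimate of the MSE risk of a Savitzky-Golay smoother.

    For a linear smoother y_hat = H y under i.i.d. Gaussian noise of variance
    sigma**2, SURE = ||y_hat - y||^2/N + 2*sigma^2*tr(H)/N - sigma^2; for an
    S-G convolution filter, tr(H)/N is (up to boundary effects) the centre tap.
    """
    yhat = savgol_filter(y, window, order)
    h0 = savgol_coeffs(window, order)[window // 2]  # centre tap of the filter
    return np.mean((yhat - y) ** 2) + 2.0 * sigma**2 * h0 - sigma**2

def pick_window(y, order, sigma, candidates=(5, 7, 9, 11, 15, 21, 31)):
    """Pick the candidate window length with the smallest SURE value."""
    return min(candidates, key=lambda w: sure_sg(y, w, order, sigma))

# Toy usage on a time-varying (chirp-like) signal with known noise level.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
clean = np.sin(2 * np.pi * 5 * t**2)
sigma = 0.1
noisy = clean + sigma * rng.standard_normal(t.size)
w = pick_window(noisy, order=2, sigma=sigma)
denoised = savgol_filter(noisy, w, 2)
```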
Abstract:
The n-interior-point variant of the Erdos-Szekeres problem is to show the following: for every n, n >= 1, every point set in the plane with a sufficient number of interior points contains a convex polygon containing exactly n interior points. This has been proved only for n <= 3. In this paper, we prove it for point sets having at most a logarithmic number of convex layers. We also show that for any point set containing at least n interior points, there exists a 2-convex polygon that contains exactly n interior points.
Abstract:
We consider the asymptotics of the invariant measure for the process of spatial distribution of N coupled Markov chains in the limit of a large number of chains. Each chain reflects the stochastic evolution of one particle. The chains are coupled through the dependence of transition rates on the spatial distribution of particles in the various states. Our model is a caricature for medium access interactions in wireless local area networks. Our model is also applicable in the study of spread of epidemics in a network. The limiting process satisfies a deterministic ordinary differential equation called the McKean-Vlasov equation. When this differential equation has a unique globally asymptotically stable equilibrium, the spatial distribution converges weakly to this equilibrium. Using a control-theoretic approach, we examine the question of a large deviation from this equilibrium.
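For concreteness, one standard form of the limiting mean-field dynamics for interacting Markov chains is sketched below; the finite state space Z and the rate functions q_{ij}(.) are notational assumptions made here, not taken from the paper.

```latex
% Empirical distribution of the N chains over a finite state space Z:
%   \mu_N(t) = (1/N) \sum_{i=1}^{N} \delta_{X_i(t)} .
% If a tagged chain jumps from state i to state j with rate q_{ij}(\mu) that
% depends on the current empirical distribution \mu, then, as N grows, \mu_N
% converges to the solution of the McKean-Vlasov (mean-field) ODE
\[
  \dot{\mu}_j(t) \;=\; \sum_{i \in Z} \mu_i(t)\, q_{ij}\bigl(\mu(t)\bigr),
  \qquad j \in Z, \qquad \mu(0) = \nu .
\]
% A unique globally asymptotically stable equilibrium of this ODE is the point
% around which the stationary spatial distribution concentrates.
```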
Abstract:
In this paper, we study the asymptotic behavior of an optimal control problem for the time-dependent Kirchhoff-Love plate whose middle surface has a very rough boundary. We identify the limit problem which is an optimal control problem for the limit equation with a different cost functional.
Abstract:
This paper presents an improved hierarchical clustering algorithm for the land cover mapping problem using a quasi-random distribution. Initially, Niche Particle Swarm Optimization (NPSO) with a pseudo/quasi-random distribution is used to split the data into a number of cluster centers while satisfying the Bayesian Information Criterion (BIC). The main objective is to search for and locate the best possible number of clusters and their centers. NPSO, which depends heavily on the initial distribution of particles in the search space, has not been exploited to its full potential. In this study, we compare the more uniformly distributed quasi-random initialization with pseudo-random initialization of NPSO for splitting the data set. Here, the Faure method is used to generate the quasi-random distribution. The performance of previously proposed methods, namely K-means, Mean Shift Clustering (MSC), and NPSO with pseudo-random initialization, is compared with the proposed approach, NPSO with a quasi-random (Faure) distribution. These algorithms are applied to a synthetic data set and a multi-spectral satellite image (Landsat 7 Thematic Mapper). From the results obtained, we conclude that using a quasi-random sequence with NPSO in the hierarchical clustering algorithm results in more accurate data classification.
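As a rough illustration of the initialization step being compared (quasi-random versus pseudo-random particle positions), here is a minimal sketch. The Faure sequence is not available in scipy.stats.qmc, so Halton is used purely as a stand-in low-discrepancy generator, and all sizes are illustrative assumptions rather than the paper's settings; the NPSO iterations and BIC-based splitting are not shown.

```python
import numpy as np
from scipy.stats import qmc

def init_particles(data, n_particles, n_clusters, quasi=True, seed=0):
    """Initial swarm positions; each particle encodes n_clusters candidate centres.

    Faure sequences are not provided by scipy.stats.qmc, so Halton is used here
    as a stand-in low-discrepancy generator.
    """
    dim = data.shape[1] * n_clusters
    if quasi:
        sample = qmc.Halton(d=dim, seed=seed).random(n_particles)        # quasi-random
    else:
        sample = np.random.default_rng(seed).random((n_particles, dim))  # pseudo-random
    lo = np.tile(data.min(axis=0), n_clusters)
    hi = np.tile(data.max(axis=0), n_clusters)
    positions = qmc.scale(sample, lo, hi)  # map the unit cube onto the data bounds
    return positions.reshape(n_particles, n_clusters, data.shape[1])

# Toy usage on 2-D data: 30 particles, each proposing 4 cluster centres.
X = np.random.default_rng(1).normal(size=(200, 2))
swarm = init_particles(X, n_particles=30, n_clusters=4, quasi=True)
```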
Abstract:
We consider the design of a linear equalizer with a finite number of coefficients for a classical linear intersymbol-interference channel with additive Gaussian noise, and address the associated channel estimation problem. Previous literature has shown that Minimum Bit Error Rate (MBER) based detection outperforms Minimum Mean Squared Error (MMSE) based detection. We pose the channel estimation problem as a detection problem and propose a novel algorithm to estimate the channel based on the MBER framework for BPSK signals. It is shown that the proposed algorithm reduces the BER compared to MMSE-based channel estimation when used in MMSE or MBER detection.
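For reference, the cost that MBER designs minimize for BPSK with a linear receiver is commonly written as below; this is the generic form used in the MBER literature, and the notation (receiver w, received vectors r_k, symbols s_k, Gaussian noise) is supplied here for illustration rather than taken from the paper.

```latex
% BER of a linear receiver w for BPSK symbols s_k in {+1, -1}, received vectors
% r_k with noise-free part \bar{r}_k and Gaussian noise of standard deviation \sigma:
\[
  P_e(\mathbf{w}) \;\approx\; \frac{1}{K}\sum_{k=1}^{K}
    Q\!\left( \frac{s_k\, \mathbf{w}^{\top} \bar{\mathbf{r}}_k}
                   {\sigma\,\lVert \mathbf{w} \rVert} \right),
  \qquad
  Q(x) \;=\; \int_{x}^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-u^{2}/2}\, du .
\]
% MBER methods minimize P_e directly rather than the mean squared error, which
% is what motivates posing channel estimation in the same framework.
```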
Abstract:
We address the problem of speech enhancement in real-world noisy scenarios. We propose to solve the problem in two stages, the first comprising a generalized spectral subtraction technique, followed by a sequence of perceptually-motivated post-processing algorithms. The role of the post-processing algorithms is to compensate for the effects of noise as well as to suppress any artifacts created by the first-stage processing. The key post-processing mechanisms are aimed at suppressing musical noise, enhancing the formant structure of voiced speech, and denoising the linear-prediction residual. The parameter values in the techniques are fixed optimally by experimentally evaluating the enhancement performance as a function of the parameters. We used the Carnegie Mellon University Arctic database for our experiments. We considered three real-world noise types: fan noise, car noise, and motorbike noise. The enhancement performance was evaluated by conducting listening experiments on 12 subjects. The listeners reported a clear improvement in perceived quality over the noisy signal, with an increase in the mean-opinion score (MOS) of 0.5 on average, for positive signal-to-noise ratios (SNRs). For negative SNRs, however, the improvement was found to be marginal.
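To make the first stage concrete, here is a minimal sketch of generalized spectral subtraction. The over-subtraction factor, spectral floor, power exponent, and the assumption that the first few frames are speech-free are all illustrative, not the paper's tuned values, and the perceptually-motivated post-processing stages are not shown.

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_subtract(noisy, fs, noise_frames=10, alpha=2.0, beta=0.01, gamma=2.0):
    """Generalized spectral subtraction (first stage only; no post-processing).

    alpha: over-subtraction factor, beta: spectral floor, gamma: power exponent.
    The noise spectrum is estimated from the first `noise_frames` frames,
    which are assumed to be speech-free in this sketch.
    """
    f, t, Y = stft(noisy, fs=fs, nperseg=512)
    noise_psd = np.mean(np.abs(Y[:, :noise_frames]) ** gamma, axis=1, keepdims=True)
    mag = np.abs(Y) ** gamma - alpha * noise_psd          # subtract scaled noise estimate
    mag = np.maximum(mag, beta * noise_psd) ** (1.0 / gamma)  # apply spectral floor
    Xhat = mag * np.exp(1j * np.angle(Y))                 # keep the noisy phase
    _, enhanced = istft(Xhat, fs=fs, nperseg=512)
    return enhanced
```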
Abstract:
MATLAB is an array language, initially popular for rapid prototyping, but now increasingly used to develop production code for numerical and scientific applications. Typical MATLAB programs have abundant data parallelism. These programs also have control-flow-dominated scalar regions that have an impact on the program's execution time. Today's computer systems have tremendous computing power in the form of traditional CPU cores and throughput-oriented accelerators such as graphics processing units (GPUs). Thus, an approach that maps the control-flow-dominated regions to the CPU and the data-parallel regions to the GPU can significantly improve program performance. In this paper, we present the design and implementation of MEGHA, a compiler that automatically compiles MATLAB programs to enable synergistic execution on heterogeneous processors. Our solution is fully automated and does not require programmer input for identifying data-parallel regions. We propose a set of compiler optimizations tailored for MATLAB. Our compiler identifies data-parallel regions of the program and composes them into kernels. The problem of combining statements into kernels is formulated as a constrained graph clustering problem. Heuristics are presented to map identified kernels to either the CPU or the GPU so that kernel execution on the CPU and the GPU happens synergistically and the amount of data transfer needed is minimized. In order to ensure the required data movement for dependencies across basic blocks, we propose a data-flow analysis and edge-splitting strategy. Thus our compiler automatically handles the composition of kernels, the mapping of kernels to the CPU and GPU, scheduling, and the insertion of required data transfers. The proposed compiler was implemented, and experimental evaluation using a set of MATLAB benchmarks shows that our approach achieves a geometric mean speedup of 19.8X for data-parallel benchmarks over native execution of MATLAB.
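As a toy illustration of the mapping idea only (not MEGHA's actual heuristics or cost models), the sketch below greedily places each kernel on the device with the lower estimated execution-plus-transfer cost; the per-kernel CPU/GPU times, edge transfer sizes, and the greedy visiting order are all assumptions made for this example.

```python
def map_kernels(kernels, edges, transfer_cost):
    """Greedy CPU/GPU placement.

    kernels: {name: (cpu_time, gpu_time)} estimated execution times.
    edges:   [(a, b, size)] data (in some unit) exchanged between kernels a and b.
    transfer_cost: time per unit of data moved across the CPU-GPU boundary.
    """
    placement = {}
    for name, (cpu_t, gpu_t) in kernels.items():
        def penalty(device):
            # Transfer cost against already-placed neighbours on the other device;
            # unplaced neighbours are optimistically assumed to land on `device`.
            return sum(transfer_cost * size
                       for a, b, size in edges
                       if name in (a, b)
                       and placement.get(b if a == name else a, device) != device)
        cpu_cost = cpu_t + penalty("cpu")
        gpu_cost = gpu_t + penalty("gpu")
        placement[name] = "cpu" if cpu_cost <= gpu_cost else "gpu"
    return placement

# Toy usage: three kernels, two dependence edges.
kernels = {"k1": (5.0, 1.0), "k2": (2.0, 2.5), "k3": (4.0, 0.8)}
edges = [("k1", "k2", 10), ("k2", "k3", 50)]
print(map_kernels(kernels, edges, transfer_cost=0.05))
```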
Abstract:
Analysis of high resolution satellite images has been an important research topic for urban analysis, and automatic road network extraction is one of its important tasks. Two approaches for road extraction based on Level Set and Mean Shift methods are proposed. Extracting roads directly from the original image is difficult and computationally expensive due to the presence of other road-like features with straight edges. The image is therefore preprocessed to reduce this noise (buildings, parking lots, vegetation regions, and other open spaces): roads are first extracted as elongated regions, and nonlinear noise segments are removed using a median filter (based on the fact that road networks consist of a large number of small linear structures). Road extraction is then performed using the Level Set and Mean Shift methods. Finally, the accuracy of the extracted road images is evaluated using quality measures. The 1 m resolution IKONOS data have been used for the experiments.
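A minimal sketch of the median-filtering preprocessing step described above is given below; the threshold value and filter size are illustrative assumptions, and the Level Set and Mean Shift extraction stages are not shown.

```python
import numpy as np
from scipy import ndimage

def preprocess_road_mask(image, threshold, filter_size=5):
    """Threshold to a candidate road mask, then median-filter to suppress small
    non-linear noise segments; elongated road-like structures largely survive."""
    mask = (image > threshold).astype(np.uint8)
    return ndimage.median_filter(mask, size=filter_size)

# Toy usage on a synthetic single-band image.
img = np.random.default_rng(0).random((256, 256))
road_candidates = preprocess_road_mask(img, threshold=0.9, filter_size=5)
```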
Abstract:
For most fluids, there exist a maximum and a minimum in the curvature of the reduced vapor pressure curve, p_r = p_r(T_r) (with p_r = p/p_c and T_r = T/T_c, where p_c and T_c are the pressure and temperature at the critical point). By analyzing National Institute of Standards and Technology (NIST) data on the liquid-vapor coexistence curve for 105 fluids, we find that the maximum occurs in the reduced temperature range 0.5 <= T_r <= 0.8, while the minimum occurs in the range 0.980 <= T_r <= 0.995. Vapor pressure equations for which d^2 p_r / dT_r^2 diverges at the critical point present a minimum in their curvature. Therefore, the point of minimum curvature can be used as a marker for the critical region. By using the well-known Ambrose-Walton (AW) vapor pressure equation, we obtain the reduced temperatures of maximum and minimum curvature in terms of the Pitzer acentric factor. The AW predictions are checked against those obtained from NIST data.
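For reference, the Ambrose-Walton correlation has the three-term corresponding-states form sketched below (numerical coefficients omitted); identifying the curvature with the second derivative of p_r with respect to T_r, so that its extrema satisfy a third-derivative condition, is an interpretation made here.

```latex
% Ambrose-Walton vapor pressure correlation, with \tau = 1 - T_r and
% \omega the Pitzer acentric factor:
\[
  \ln p_r \;=\; f^{(0)}(T_r) \;+\; \omega\, f^{(1)}(T_r) \;+\; \omega^{2} f^{(2)}(T_r),
  \qquad
  f^{(i)}(T_r) \;=\; \frac{a_i\,\tau + b_i\,\tau^{1.5} + c_i\,\tau^{2.5} + d_i\,\tau^{5}}{T_r}.
\]
% If the curvature is taken as d^2 p_r / dT_r^2, its maximum and minimum are
% located where d^3 p_r / dT_r^3 = 0, which, through the correlation above,
% yields reduced temperatures that depend only on \omega.
```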
Abstract:
The goal of speech enhancement algorithms is to provide an estimate of clean speech starting from noisy observations. The often-employed cost function is the mean square error (MSE). However, the MSE can never be computed in practice. Therefore, it becomes necessary to find practical alternatives to the MSE. In image denoising problems, the cost function (also referred to as risk) is often replaced by an unbiased estimator. Motivated by this approach, we reformulate the problem of speech enhancement from the perspective of risk minimization. Some recent contributions in risk estimation have employed Stein's unbiased risk estimator (SURE) together with a parametric denoising function, which is a linear expansion of threshold/bases (LET). We show that the first-order case of SURE-LET results in a Wiener-filter type solution if the denoising function is made frequency-dependent. We also provide enhancement results obtained with both techniques and characterize the improvement by means of local as well as global SNR calculations.
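For completeness, the risk estimator and the parametric denoising function referred to above take the following standard forms; the additive white Gaussian noise model and the notation are the usual ones in the SURE-LET literature and are stated here as assumptions.

```latex
% Observation model: y = x + n, with n ~ N(0, \sigma^2 I_N). For a denoising
% function f, Stein's unbiased risk estimator of the MSE is
\[
  \mathrm{SURE}(f,\mathbf{y}) \;=\;
  \frac{1}{N}\lVert f(\mathbf{y}) - \mathbf{y} \rVert^{2}
  \;-\; \sigma^{2}
  \;+\; \frac{2\sigma^{2}}{N}\,\nabla\!\cdot f(\mathbf{y}),
  \qquad
  \mathbb{E}\bigl[\mathrm{SURE}\bigr]
  \;=\; \frac{1}{N}\,\mathbb{E}\,\lVert f(\mathbf{y}) - \mathbf{x} \rVert^{2}.
\]
% LET parameterization: f is a linear expansion of elementary denoisers f_k,
\[
  f(\mathbf{y}) \;=\; \sum_{k=1}^{K} a_k\, f_k(\mathbf{y}),
\]
% so minimizing SURE over the weights a_k reduces to solving a small linear system.
```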
Abstract:
We address the problem of speech enhancement using a risk-estimation approach. In particular, we propose the use of Stein's unbiased risk estimator (SURE) for solving the problem. The need for a suitable finite-sample risk estimator arises because the actual risks invariably depend on the unknown ground truth. We consider the popular mean-squared error (MSE) criterion first, and then compare it against the perceptually-motivated Itakura-Saito (IS) distortion, by deriving unbiased estimators of the corresponding risks. We use a generalized SURE (GSURE) development, recently proposed by Eldar for the MSE. We consider dependent observation models from the exponential family with an additive noise model, and derive an unbiased estimator for the risk corresponding to the IS distortion, which is non-quadratic. This serves to address the speech enhancement problem in a more general setting. Experimental results illustrate that the IS metric is efficient in suppressing musical noise, which affects the MSE-enhanced speech. However, in terms of global signal-to-noise ratio (SNR), the minimum-MSE solution gives better results.
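The Itakura-Saito distortion mentioned above, between a reference spectrum x and its estimate, has the standard non-quadratic form below; only the formula itself is standard, and the bin-wise notation is assumed here.

```latex
\[
  d_{\mathrm{IS}}(\mathbf{x}, \hat{\mathbf{x}})
  \;=\; \sum_{i}\left( \frac{x_i}{\hat{x}_i}
        \;-\; \ln \frac{x_i}{\hat{x}_i} \;-\; 1 \right),
\]
% which is zero iff \hat{x} = x and is scale-invariant
% (d_IS(\lambda x, \lambda \hat{x}) = d_IS(x, \hat{x})), consistent with its
% perceptual motivation for speech spectra.
```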
Abstract:
This paper deals with the evaluation of the component-laminate load-carrying capacity, i.e., calculating the loads that cause the failure of the individual layers and of the component-laminate as a whole in a four-bar mechanism. The component-laminate load-carrying capacity is evaluated using the Tsai-Wu-Hahn failure criterion for various lay-ups. The reserve factor of each ply in the component-laminate is calculated by using the maximum resultant force and the maximum resultant moment occurring at different time steps at the joints of the mechanism. Here, all component bars of the mechanism are made of fiber-reinforced laminates and have thin rectangular cross-sections. They could, in general, be pre-twisted and/or possess initial curvature, either by design or by defect. They are linked to each other by means of revolute joints. We restrict ourselves to linear materials with small strains within each elastic body (strip-like beam). Each component of the mechanism is modeled as a beam based on geometrically non-linear 3-D elasticity theory. The component problems are thus split into 2-D analyses of reference beam cross-sections and non-linear 1-D analyses along the three beam reference curves. For the thin rectangular cross-sections considered here, the 2-D cross-sectional non-linearity is also overwhelming. This can be perceived from the fact that such sections constitute a limiting case between thin-walled open and closed sections, thus inviting the non-linear phenomena observed in both. The strong elastic couplings of anisotropic composite laminates complicate the model further. However, a powerful mathematical tool called the Variational Asymptotic Method (VAM) not only enables such a dimensional reduction, but also provides asymptotically correct analytical solutions to the non-linear cross-sectional analysis. Such closed-form solutions are used here in conjunction with numerical techniques for the rest of the problem to obtain predictions more quickly and accurately than would otherwise be possible. Local 3-D stress, strain, and displacement fields for representative sections in the component bars are recovered, based on the stress resultants from the 1-D global beam analysis. A numerical example is presented which illustrates the failure of each component-laminate and of the mechanism as a whole.
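For context, the reserve-factor computation referred to above typically takes the following form under the Tsai-Wu criterion; the plane-stress form is shown, the notation is assumed, and the specific interaction-coefficient choice of the Tsai-Wu-Hahn variant is not reproduced here.

```latex
% Tsai-Wu failure criterion (plane stress) for one ply, in principal material axes:
\[
  F_{1}\sigma_{1} + F_{2}\sigma_{2}
  + F_{11}\sigma_{1}^{2} + F_{22}\sigma_{2}^{2} + F_{66}\sigma_{6}^{2}
  + 2F_{12}\sigma_{1}\sigma_{2} \;=\; 1 \quad \text{at failure}.
\]
% Scaling the ply stresses by a factor R until failure is reached gives the quadratic
\[
  \bigl(F_{11}\sigma_{1}^{2} + F_{22}\sigma_{2}^{2} + F_{66}\sigma_{6}^{2}
        + 2F_{12}\sigma_{1}\sigma_{2}\bigr) R^{2}
  + \bigl(F_{1}\sigma_{1} + F_{2}\sigma_{2}\bigr) R \;=\; 1,
\]
% whose positive root is the ply reserve factor; R > 1 means the ply survives the
% applied resultant force and moment.
```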
Abstract:
We address the problem of sampling and reconstruction of two-dimensional (2-D) finite-rate-of-innovation (FRI) signals. We propose a three-channel sampling method for efficiently solving the problem. We consider the sampling of a stream of 2-D Dirac impulses and a sum of 2-D unit-step functions. We propose a 2-D causal exponential function as the sampling kernel. By causality in 2-D, we mean that the function has its support restricted to the first quadrant. The advantage of using a multichannel sampling method with causal exponential sampling kernel is that standard annihilating filter or root-finding algorithms are not required. Further, the proposed method has inexpensive hardware implementation and is numerically stable as the number of Dirac impulses increases.
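One possible form of a 2-D causal exponential sampling kernel consistent with the description above is sketched below; the separable structure and the parameters alpha and beta are assumptions made for illustration, not necessarily the kernel used in the paper.

```latex
% Separable 2-D exponential kernel supported on the first quadrant
% (u(.) denotes the unit step), which is what "causal in 2-D" refers to:
\[
  \varphi(x, y) \;=\; e^{-(\alpha x + \beta y)}\, u(x)\, u(y),
  \qquad \alpha, \beta > 0,
\]
% so that \varphi(x, y) = 0 whenever x < 0 or y < 0.
```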