902 results for Unconstrained minimization
Abstract:
Cooperative relaying combined with selection has been extensively studied in the literature to improve the performance of interference-constrained secondary users in underlay cognitive radio (CR). We present a novel symbol error probability (SEP)-optimal amplify-and-forward relay selection rule for an average interference-constrained underlay CR system. The proposed rule brings out a fundamental principle unique to average interference-constrained underlay CR: the choice of the optimal relay depends not only on the source-to-relay, relay-to-destination, and relay-to-primary-receiver links, which are local to the relay, but also on the direct source-to-destination (SD) link, even though it is not local to any relay. We also propose a simpler, practically amenable variant of the optimal rule, called the 1-bit rule, which requires just one bit of feedback about the SD link gain to the relays and incurs only a marginal performance loss relative to the optimal rule. We analyze its SEP and develop an insightful asymptotic SEP analysis. The proposed rules markedly outperform several ad hoc SD-link-unaware rules proposed in the literature, and they also generalize the interference-unconstrained and SD-link-unaware optimal rules considered in the literature.
Abstract:
Images obtained through fluorescence microscopy at low numerical aperture (NA) are noisy and have poor resolution. Images of specimens such as F-actin filaments obtained using confocal or widefield fluorescence microscopes contain directional information, and it is important that an image smoothing or filtering technique preserve this directionality. F-actin filaments are widely studied in pathology because abnormalities in actin dynamics play a key role in the diagnosis of cancer, cardiac diseases, vascular diseases, myofibrillar myopathies, neurological disorders, etc. We develop the directional bilateral filter as a means of filtering out the noise in the image without significantly altering the directionality of the F-actin filaments. The bilateral filter is anisotropic to start with, but we add a further degree of anisotropy by employing an oriented domain kernel for smoothing. The orientation is locally adapted using a structure tensor, and the parameters of the bilateral filter are optimized within the framework of statistical risk minimization. We show that the directional bilateral filter has better denoising performance than the traditional Gaussian bilateral filter and other denoising techniques such as SURE-LET, non-local means, and guided image filtering at various noise levels in terms of peak signal-to-noise ratio (PSNR). We also show quantitative improvements in low-NA images of F-actin filaments. (C) 2015 Author(s). All article content, except where otherwise noted, is licensed under a Creative Commons Attribution 3.0 Unported License.
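To make the construction concrete, here is a minimal sketch (not the authors' implementation) of a bilateral filter whose domain kernel is an elongated Gaussian at a single fixed orientation; the paper additionally adapts the orientation per pixel via a structure tensor and tunes the parameters by statistical risk minimization. Function and parameter names here are illustrative assumptions.

```python
import numpy as np

def oriented_bilateral(img, theta, sigma_u=2.0, sigma_v=0.5,
                       sigma_r=0.1, half=4):
    """Bilateral filter with an elongated Gaussian domain kernel oriented
    at angle `theta` (radians). sigma_u/sigma_v set the spread along and
    across the orientation; sigma_r is the range-kernel scale."""
    # Precompute the oriented domain kernel on a (2*half+1)^2 grid.
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    u = xs * np.cos(theta) + ys * np.sin(theta)    # along orientation
    v = -xs * np.sin(theta) + ys * np.cos(theta)   # across orientation
    domain = np.exp(-0.5 * (u**2 / sigma_u**2 + v**2 / sigma_v**2))

    out = np.empty_like(img, dtype=float)
    pad = np.pad(img.astype(float), half, mode='reflect')
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 2 * half + 1, j:j + 2 * half + 1]
            # Range kernel: penalize intensity differences from the center.
            rng = np.exp(-0.5 * ((patch - pad[i + half, j + half]) / sigma_r) ** 2)
            w = domain * rng
            out[i, j] = (w * patch).sum() / w.sum()
    return out
```

With theta = 0 on a horizontal-stripe image, the kernel averages along the stripe while the range kernel keeps the stripe edges intact.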
Abstract:
We address the problem of separating a speech signal into its excitation and vocal-tract filter components, which falls within the framework of blind deconvolution. Typically, the excitation in the case of voiced speech is assumed to be sparse and the vocal-tract filter stable. We develop an alternating ℓp-ℓ2 projections algorithm (ALPA) to perform deconvolution taking these constraints into account. The algorithm is iterative and alternates between two solution spaces. The initialization is based on the standard linear-prediction decomposition of a speech signal into an autoregressive filter and a prediction residue. In every iteration, a sparse excitation is estimated by optimizing an ℓp-norm-based cost, and the vocal-tract filter is derived as the solution to a standard least-squares minimization problem. We validate the algorithm on voiced segments of natural speech signals and show applications to epoch estimation. We also present comparisons with state-of-the-art techniques and show that ALPA gives a sparser impulse-like excitation, whose impulses directly denote the epochs, or instants of significant excitation.
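The alternating structure can be illustrated on a toy blind-deconvolution problem. This is a simplified sketch, not the paper's ALPA: the model orders, the IRLS weights for the ℓp step, and the regularization constant are all assumptions made for illustration, and the filter is initialized as an impulse rather than by linear prediction.

```python
import numpy as np

def toeplitz_conv(h, n):
    """Full-convolution matrix C such that C @ x == np.convolve(h, x)."""
    C = np.zeros((len(h) + n - 1, n))
    for j in range(n):
        C[j:j + len(h), j] = h
    return C

def alternating_lp_l2(y, n_x, n_h, p=0.8, iters=20, eps=1e-6):
    """Alternate an IRLS ℓp update for a sparse excitation x and a
    least-squares update for the filter h, given y ≈ conv(h, x)."""
    h = np.zeros(n_h); h[0] = 1.0   # impulse initialization (assumed)
    for _ in range(iters):
        # ℓp step: min ||y - C_h x||^2 + lam * sum |x_i|^p via IRLS.
        C = toeplitz_conv(h, n_x)
        w = np.ones(n_x)
        for _ in range(10):
            x = np.linalg.solve(C.T @ C + 0.01 * np.diag(w), C.T @ y)
            w = p * (np.abs(x) + eps) ** (p - 2)  # IRLS reweighting
        # ℓ2 step: least-squares refit of h given the current excitation x.
        Cx = toeplitz_conv(x, n_h)
        h, *_ = np.linalg.lstsq(Cx, y, rcond=None)
    return x, h
```

Each outer iteration performs the ℓp projection (sparsifying the excitation) followed by the ℓ2 projection (refitting the filter), mirroring the two solution spaces described above.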
Abstract:
Multiplicative noise degrades a signal far more severely than additive noise. In this paper, we address the problem of suppressing multiplicative noise in one-dimensional signals. To deal with signals corrupted by multiplicative noise, we propose a denoising algorithm based on minimization of an unbiased estimator (MURE) of the mean-square error (MSE). We derive an expression for an unbiased estimate of the MSE. The proposed denoising is carried out in the wavelet domain (soft thresholding) by considering the time-domain MURE. The parameters of the thresholding function are obtained by minimizing the unbiased estimator MURE. We show that the parameters obtained by minimizing MURE are very close to the optimal parameters obtained from the oracle MSE. Experiments show that the SNR improvement of the proposed denoising algorithm is competitive with a state-of-the-art method.
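The overall pattern — transform, soft-threshold, and pick the threshold minimizing an MSE surrogate — can be sketched as below. For brevity, a one-level Haar transform stands in for the wavelet decomposition, and the oracle MSE stands in for the paper's unbiased estimator (MURE), which replaces the oracle when the clean signal is unavailable.

```python
import numpy as np

def haar_fwd(x):
    """One-level orthonormal Haar transform; len(x) must be even."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_inv(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def soft(d, t):
    """Soft-thresholding operator."""
    return np.sign(d) * np.maximum(np.abs(d) - t, 0.0)

def denoise_oracle_threshold(y, clean, thresholds):
    """Soft-threshold the Haar detail coefficients, choosing the
    threshold that minimizes the oracle MSE against `clean`. The
    paper's MURE plays this role without needing `clean`."""
    a, d = haar_fwd(y)
    best_est, best_mse = None, np.inf
    for t in thresholds:
        est = haar_inv(a, soft(d, t))
        mse = np.mean((est - clean) ** 2)
        if mse < best_mse:
            best_est, best_mse = est, mse
    return best_est, best_mse
```

The paper's key observation is that minimizing MURE yields thresholding parameters very close to those from the oracle sweep shown here.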
Abstract:
Local polynomial approximation of data is an approach to signal denoising. Savitzky-Golay (SG) filters are finite-impulse-response kernels that convolve with the data to produce a polynomial approximation for a chosen set of filter parameters. When the noise follows Gaussian statistics, minimization of the mean-squared error (MSE) between the noisy signal and its polynomial approximation is optimal in the maximum-likelihood (ML) sense, but the MSE criterion is not optimal under non-Gaussian noise conditions. In this paper, we robustify the SG filter for applications involving noise following a heavy-tailed distribution. The optimal filtering criterion is achieved by ℓ1-norm minimization of the error through the iteratively reweighted least-squares (IRLS) technique. It is interesting to note that at any stage of the iteration we solve a weighted SG filter by minimizing an ℓ2 norm, but the process converges to the ℓ1-minimized output. The results show consistent improvement over the standard SG filter performance.
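A compact sketch of the idea, assuming a window-by-window implementation (the paper works with the SG filter coefficients directly): each sample is replaced by the center value of a local polynomial fit, and IRLS reweighting turns the per-window ℓ2 fit into an approximate ℓ1 fit.

```python
import numpy as np

def robust_sg(y, half=5, order=2, iters=10, eps=1e-6):
    """Savitzky-Golay smoothing with an ℓ1 local fit via IRLS. At each
    IRLS pass we solve a weighted ℓ2 problem; with weights
    w_i^2 = 1/(|r_i| + eps) the objective approximates sum |r_i| (ℓ1)."""
    n = len(y)
    t = np.arange(-half, half + 1)
    V = np.vander(t, order + 1, increasing=True)  # local design matrix
    ypad = np.pad(y, half, mode='edge')
    out = np.empty(n)
    for i in range(n):
        win = ypad[i:i + 2 * half + 1]
        w = np.ones_like(win, dtype=float)
        for _ in range(iters):
            coef = np.linalg.lstsq(V * w[:, None], w * win, rcond=None)[0]
            r = win - V @ coef
            w = 1.0 / np.sqrt(np.abs(r) + eps)  # IRLS reweighting for ℓ1
        out[i] = coef[0]  # polynomial value at the window center (t = 0)
    return out
```

On data with impulsive (heavy-tailed) outliers, the ℓ1 fit essentially ignores the outliers, whereas a standard ℓ2 SG fit is dragged toward them.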
Abstract:
The performance of two curved beam finite element models based on coupled polynomial displacement fields is investigated for out-of-plane vibration of arches. These two-noded beam models employ curvilinear strain definitions and have three degrees of freedom per node, namely out-of-plane translation (v), out-of-plane bending rotation (θ_z), and torsion rotation (θ_s). The coupled polynomial interpolation fields are derived independently for Timoshenko and Euler-Bernoulli beam elements using the force-moment equilibrium equations. The numerical performance of these elements for constrained and unconstrained arches is compared with that of conventional curved beam models based on independent polynomial fields. The formulation is shown to be free from any spurious constraints in the limits of `flexureless torsion' and `torsionless flexure' and is hence devoid of flexure and torsion locking. The resulting stiffness and consistent mass matrices generated from the coupled displacement models show excellent convergence of natural frequencies in locking regimes. The accuracy of the shear flexibility added to the elements is also demonstrated. The coupled polynomial models perform consistently over a wide range of flexure-to-shear (EI/GA) and flexure-to-torsion (EI/GJ) stiffness ratios and are inherently devoid of flexure, torsion, and shear locking phenomena. (C) 2015 Elsevier B.V. All rights reserved.
Abstract:
In this paper, we search for the regions of the phenomenological minimal supersymmetric standard model (pMSSM) parameter space where one can expect a moderate Higgs mixing angle (α) with relatively light (up to 600 GeV) additional Higgses after satisfying the current LHC data. We perform a global fit analysis using the most up-to-date data (till December 2014) from the LHC and Tevatron experiments. The constraints coming from the precision measurements of the rare b-decays B_s → μ⁺μ⁻ and b → sγ are also considered. We find that the low M_A (≲ 350) and high tan β (≳ 25) regions are disfavored by the combined effect of the global analysis and flavor data. However, regions with Higgs mixing angle α ≈ 0.1-0.8 are still allowed by the current data. We then study the existing direct search bounds on the heavy scalar/pseudoscalar (H/A) and charged Higgs boson (H±) masses and branchings at the LHC. We find that regions with low to moderate values of tan β with light additional Higgses (mass ≤ 600 GeV) are unconstrained by the data, while regions with tan β > 20 are excluded by the direct search bounds from the LHC-8 data. The possibility of probing the region with tan β ≤ 20 at the high-luminosity run of the LHC is also discussed, with special attention to the H → hh, H/A → tt̄, and H/A → τ⁺τ⁻ decay modes.
Abstract:
Standard approaches to ellipse fitting are based on minimization of the algebraic or geometric distance between the given data and a template ellipse. When the data are noisy and come from a partial ellipse, the state-of-the-art methods tend to produce biased ellipses. We rely on the sampling structure of the underlying signal and show that the x- and y-coordinate functions of an ellipse are finite-rate-of-innovation (FRI) signals, and that their parameters are estimable from partial data. We consider both uniform and nonuniform sampling scenarios in the presence of noise and show that the data can be modeled as a sum of random amplitude-modulated complex exponentials. A low-pass filter is used to suppress noise and approximate the data as a sum of weighted complex exponentials. The annihilating filter used in FRI approaches is applied to estimate the sampling interval in closed form. We perform experiments on simulated and real data, and assess both objective and subjective performance in comparison with the state-of-the-art ellipse fitting methods. The proposed method produces ellipses with less bias, and the mean-squared error is lower by about 2 to 10 dB. We show applications of ellipse fitting to iris images, starting from partial edge contours, and to free-hand ellipses drawn on a touch-screen tablet.
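The annihilating-filter step at the heart of FRI estimation can be sketched generically for a sum of complex exponentials (the ellipse-specific modeling and the low-pass prefiltering are in the paper): the filter is obtained from the nullspace of a Toeplitz system built from the samples, and the parameters are read off its roots.

```python
import numpy as np

def annihilating_filter_freqs(s, K):
    """Recover the K frequencies of s[n] = sum_k a_k * exp(1j*w_k*n).
    The annihilating filter h satisfies sum_l h[l]*s[m-l] = 0 for all m,
    so h spans the nullspace of a Toeplitz matrix built from s, and the
    frequencies are the angles of the roots of h."""
    N = len(s)
    # Toeplitz system: T[m, l] = s[K + m - l], (N-K) x (K+1).
    T = np.array([[s[K + m - l] for l in range(K + 1)]
                  for m in range(N - K)])
    # Right singular vector of the smallest singular value = nullspace.
    _, _, Vh = np.linalg.svd(T)
    h = Vh[-1].conj()
    # Roots of h[0]*z^K + ... + h[K] sit at exp(1j*w_k).
    return np.sort(np.angle(np.roots(h)))
```

In the noiseless case the Toeplitz matrix has an exact one-dimensional nullspace; with noise, the smallest-singular-vector choice gives a total-least-squares estimate of the filter.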
Abstract:
Inaccuracies in the prediction of circulating viral strain genotypes and the possibility of novel reassortants causing a pandemic outbreak necessitate the development of an anti-influenza vaccine with increased breadth of protection and potential for rapid production and deployment. The hemagglutinin (HA) stem is a promising target for a universal influenza vaccine, as stem-specific antibodies have the potential to be broadly cross-reactive towards different HA subtypes. Here, we report the design of a bacterially expressed polypeptide that mimics an H5 HA stem by protein minimization to focus the antibody response towards the HA stem. The HA mini-stem folds as a trimer mimicking the HA prefusion conformation. It is resistant to thermal/chemical stress, and it binds conformation-specific, HA stem-directed broadly neutralizing antibodies with high affinity. Mice vaccinated with the group 1 HA mini-stems are protected from morbidity and mortality against lethal challenge by both group 1 (H5 and H1) and group 2 (H3) influenza viruses, the first report of cross-group protection. Passive transfer of immune serum demonstrates that the protection is mediated by stem-specific antibodies. Furthermore, antibodies induced by these HA stems have broad HA reactivity, yet they do not have antibody-dependent enhancement activity.
Abstract:
Measurement of out-of-plane linear motion with high precision and bandwidth is indispensable for the development of precision motion stages and for the dynamic characterization of mechanical structures. This paper presents an optical beam deflection (OBD)-based system for measuring the out-of-plane linear motion of fully reflective samples. The system also achieves nearly zero cross-sensitivity to angular motion and a large working distance. The sensitivities to linear and angular motion are analytically obtained and employed to optimize the system design. The optimal shot-noise-limited resolution is shown to be less than one angstrom over a bandwidth in excess of 1 kHz. Subsequently, the system is experimentally realized and the sensitivities to out-of-plane motions are calibrated using a novel strategy. The linear sensitivity is found to be in agreement with theory, and the angular sensitivity is shown to be over 7.5 times smaller than that of conventional OBD. Finally, the measurement system is employed to measure the transient response of a piezo-positioner and, with the aid of an open-loop controller, to reduce the settling time by about 90%. It is also employed to operate the positioner in closed loop and demonstrate significant minimization of hysteresis and positioning error.
Abstract:
This paper proposes a new algorithm for wavelet-based multidimensional image deconvolution that employs subband-dependent minimization and the dual-tree complex wavelet transform in an iterative Bayesian framework. In addition, the algorithm employs a new prior instead of the popular ℓ1 norm and is thus able to embed a learning scheme during the iterations, which helps it achieve better deconvolution results and faster convergence. © 2008 IEEE.
Abstract:
This paper proposes to use an extended Gaussian Scale Mixture (GSM) model instead of the conventional ℓ1 norm to approximate the sparseness constraint in the wavelet domain. We combine this new constraint with subband-dependent minimization to formulate an iterative algorithm over two shift-invariant wavelet transforms: the Shannon wavelet transform and the dual-tree complex wavelet transform (DTCWT). The extended GSM model introduces spatially varying information into the deconvolution process and thus enables the algorithm to achieve better results with fewer iterations in our experiments. © 2009 IEEE.
Abstract:
Sensor networks can be naturally represented as graphical models, where the edge set encodes the sparsity structure of the correlations between sensors. Such graphical representations can be valuable for information mining as well as for optimizing bandwidth and battery usage with minimal loss of estimation accuracy. We use a computationally efficient technique for estimating sparse graphical models that fits a sparse linear regression locally at each node of the graph via the Lasso estimator. Using a recently suggested online, temporally adaptive implementation of the Lasso, we propose an algorithm for streaming graphical model selection over sensor networks. With battery-consumption minimization applications in mind, we use this algorithm as the basis of an adaptive querying scheme. We discuss implementation issues in the context of environmental monitoring using sensor networks, where the objective is short-term forecasting of local wind direction. The algorithm is tested against real UK weather data, and conclusions are drawn about certain tradeoffs inherent in decentralized sensor-network data analysis. © 2010 The Author. Published by Oxford University Press on behalf of The British Computer Society. All rights reserved.
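The per-node Lasso regression underlying the graph estimate can be sketched as follows. This is a Meinshausen-Bühlmann-style batch version with a hand-rolled coordinate-descent Lasso; the paper's online, temporally adaptive Lasso update is not reproduced here, and the regularization level is an assumed illustrative value.

```python
import numpy as np

def lasso_cd(X, y, lam, iters=200):
    """Lasso via cyclic coordinate descent:
    min_b 0.5/n * ||y - X b||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    r = y - X @ b
    for _ in range(iters):
        for j in range(p):
            # Correlation of column j with the partial residual.
            rho = X[:, j] @ r / n + col_sq[j] * b[j]
            b_new = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            r += X[:, j] * (b[j] - b_new)  # keep residual in sync
            b[j] = b_new
    return b

def neighborhood_selection(Z, lam=0.15):
    """Graph estimate: Lasso-regress each variable on all others;
    nonzero coefficients become edges (OR-rule symmetrization)."""
    p = Z.shape[1]
    adj = np.zeros((p, p), dtype=bool)
    for j in range(p):
        idx = [k for k in range(p) if k != j]
        b = lasso_cd(Z[:, idx], Z[:, j], lam)
        adj[j, idx] = np.abs(b) > 1e-8
    return adj | adj.T
```

Each node's nonzero regression coefficients define its neighbors, so the estimated edge set directly encodes which sensors are worth querying.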
Abstract:
An aromatic polyimide and its mixture with randomly distributed carbon nanotubes (NTs) are simulated using molecular dynamics with repeated energy-minimization and cooling processes. The glass transition temperatures are identified through volume-temperature curves. Stress-strain curves, Young's moduli, densities, and Poisson ratios are computed at different temperatures. It is demonstrated that the carbon NTs reduce the softening effect of temperature on the mechanical properties and increase the material's ability to resist deformation.
Abstract:
We investigate the size effect on the melting of metal nanoclusters by molecular dynamics (MD) simulation and thermodynamic theory based on Kofman's melting model. By minimizing the free energy of metal nanoclusters with respect to the thickness of the surface liquid layer, it is found that nanoclusters of the same metal have the same premelting temperature T_pre = T_0 − T_0(γ_sv − γ_lv − γ_sl)/(ρLξ) (where T_0 is the melting point of the bulk metal, γ_sv the solid-vapour interfacial free energy, γ_lv the liquid-vapour interfacial free energy, γ_sl the solid-liquid interfacial free energy, ρ the density of the metal, L the latent heat of the bulk metal, and ξ the characteristic length of the surface-interface interaction), independent of the size of the nanoclusters, so that the characteristic length of a metal can be obtained easily from T_pre, which in turn can be obtained from experiments or molecular dynamics simulations. The premelting temperature T_pre of Cu is obtained by MD simulations, and ξ is then obtained. The melting point T_cm is further predicted by free-energy analysis and is in good agreement with the result of our MD simulations. We also predict the maximum premelting-liquid width of Cu nanoclusters of various sizes and the critical size below which there is no premelting.
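The premelting-temperature expression reduces to a one-line function; the numbers in the test below are illustrative placeholders, not the paper's values for Cu.

```python
def premelting_temperature(T0, gamma_sv, gamma_lv, gamma_sl, rho, L, xi):
    """T_pre = T0 - T0*(gamma_sv - gamma_lv - gamma_sl)/(rho*L*xi):
    the size-independent premelting temperature obtained by minimizing
    the cluster free energy over the surface liquid-layer thickness."""
    return T0 - T0 * (gamma_sv - gamma_lv - gamma_sl) / (rho * L * xi)
```

Given a measured or simulated T_pre, the same relation can be inverted for the characteristic length ξ, which is how the paper extracts ξ for Cu.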