167 results for Unconstrained minimization


Relevance:

10.00%

Publisher:

Abstract:

The friction coefficient between a circular-disk periphery and a V-block surface was previously determined by introducing the concept of the isotropic point (IP) in the isochromatic field of the disk under three-point symmetric loading. The IP position on the symmetry axis depends on the coefficient of friction active during the experiment. We extend this work to asymmetric loading of a circular disk, in which case two of the three frictional contact pairs independently control the unconstrained IP location. A photoelastic experiment is conducted on a particular case of asymmetric three-point loading of a circular disk. Basic digital image processing is used to extract a few essential parameters from the experimental image, particularly the IP location. Flamant's analytical solution for a half-plane under a concentrated load is used to derive the stress components for the required loading configurations of the disk. In analytical simulations of three-point asymmetric normal loading, the IP is observed to move from the vertical axis to the boundary along an ellipse-like curve. When friction is included in the analysis, the IP approaches the center as the loading friction increases and moves away from it as the support friction increases. Using these insights together with the experimental IP information, the friction angles at the three contact pairs of the circular disk under asymmetric loading are determined.
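
As a rough illustration of the analytical ingredient described above, the sketch below superposes Flamant half-plane stress fields from concentrated boundary loads on a disk and searches a grid for the point where the principal stresses coincide (the isotropic point, where the isochromatic fringe order vanishes). The load positions and magnitudes, the restriction to purely normal (frictionless) loads, and the omission of the finite-disk boundary-correction terms are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch: approximate the isotropic point (IP) of a disk under
# concentrated boundary loads by superposing Flamant half-plane fields.
# Load geometry and the neglect of finite-disk boundary corrections are
# illustrative assumptions, not the paper's exact analysis.
import numpy as np

R = 1.0  # disk radius (arbitrary units)

# Each load: (boundary angle where it acts, magnitude). Directions point
# towards the disk centre (purely normal loads, no friction in this sketch).
loads = [(np.pi / 2, 1.0),           # top load
         (-np.pi / 3, 0.6),          # right support (hypothetical values)
         (np.pi + np.pi / 4, 0.6)]   # left support  (hypothetical values)

def flamant_stress(x, y, theta_c, P):
    """Cartesian stresses of a Flamant field for a point load P applied at the
    boundary point (R*cos(theta_c), R*sin(theta_c)), directed towards the centre."""
    x0, y0 = R * np.cos(theta_c), R * np.sin(theta_c)
    dx, dy = x - x0, y - y0
    r = np.sqrt(dx**2 + dy**2) + 1e-12
    ex, ey = -np.cos(theta_c), -np.sin(theta_c)       # load direction (unit vector)
    cos_phi = (dx * ex + dy * ey) / r                 # angle from the load axis
    s_rr = -2.0 * P * cos_phi / (np.pi * r)           # purely radial Flamant field
    cx, cy = dx / r, dy / r                           # rotate back to Cartesian axes
    return s_rr * cx * cx, s_rr * cy * cy, s_rr * cx * cy

# Evaluate the superposed field on a grid inside the disk.
xs = np.linspace(-0.95 * R, 0.95 * R, 401)
X, Y = np.meshgrid(xs, xs)
inside = X**2 + Y**2 < (0.95 * R) ** 2
Sxx = np.zeros_like(X)
Syy = np.zeros_like(X)
Sxy = np.zeros_like(X)
for theta_c, P in loads:
    a, b, c = flamant_stress(X, Y, theta_c, P)
    Sxx += a; Syy += b; Sxy += c

# Principal stress difference; the IP is where it (and the fringe order) vanish.
tau_max = np.sqrt(0.25 * (Sxx - Syy) ** 2 + Sxy**2)
tau_max[~inside] = np.inf
iy, ix = np.unravel_index(np.argmin(tau_max), tau_max.shape)
print(f"approximate IP location: x = {X[iy, ix]:.3f}, y = {Y[iy, ix]:.3f}")
```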

Relevance:

10.00%

Publisher:

Abstract:

An asymptotically exact methodology is presented for obtaining the cross-sectional stiffness matrix of a pre-twisted, moderately thick beam with rectangular cross sections made of transversely isotropic materials. The anisotropic beam is modeled from 3-D elasticity, without any further assumptions. The beam is allowed to undergo large displacements and rotations, but small strain is assumed. The strain energy of the beam is computed using the constitutive law and the kinematical relations derived with the inclusion of geometric nonlinearities and initial twist. The Variational Asymptotic Method is used to minimize the energy functional, thereby reducing the cross section to a point on the reference line with appropriate properties and yielding a 1-D constitutive law. As applied herein, the 2-D cross-sectional analysis is performed asymptotically by taking advantage of a material small parameter and two geometric small parameters. The 3-D strain components are derived from the kinematics and arranged in orders of the small parameters. The warping functions are obtained by minimizing the strain energy subject to a set of constraints that renders the 1-D strain measures well-defined. Closed-form expressions are derived for the 3-D nonlinear warping and stress fields. The model predicts interlaminar and transverse shear stresses accurately up to first order.

Relevance:

10.00%

Publisher:

Abstract:

PWM waveforms with a positive voltage transition at the positive zero crossing of the fundamental voltage (type A) are generally considered for an even number of switching angles per quarter cycle, whereas waveforms with a negative voltage transition at the positive zero crossing (type B) are considered for an odd number of switching angles per quarter cycle. Optimal PWM for minimizing the total harmonic distortion of the line-to-line voltage ($V_{WTHD}$) is generally solved under these criteria. This paper establishes that a combination of both waveform types gives lower $V_{WTHD}$ than either type alone over the complete range of modulation index ($M$). Optimal PWM for minimum $V_{WTHD}$ is solved for waveforms with pulse numbers ($P$) of 5 and 7. Type-A and type-B waveforms are each found to be better over different ranges of $M$. The theoretical findings are confirmed through simulation and experiments on a 3.7 kW squirrel-cage induction motor in an open-loop V/f drive. The optimal PWM is also analysed from a space-vector point of view.
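
For concreteness, here is a small sketch of how a weighted line-to-line distortion figure can be evaluated for a quarter-wave-symmetric, two-level waveform with a given set of switching angles. The Fourier-series expression used (waveform starting positive, i.e., type A) and the exclusion of triplen harmonics for the line-to-line voltage are standard results; the specific angles and the 1/n-weighted distortion measure are illustrative assumptions, not the paper's optimization.

```python
# Sketch: weighted THD of the line-to-line voltage for a two-level,
# quarter-wave-symmetric PWM waveform (type A: starts positive at the
# zero crossing). Switching angles below are illustrative, not optimal.
import numpy as np

def harmonics_type_A(alphas, n_max=199):
    """Per-unit amplitudes of odd harmonics for a waveform that is +1 on
    (0, alpha_1), -1 on (alpha_1, alpha_2), ... within the first quarter."""
    alphas = np.asarray(alphas, dtype=float)
    n = np.arange(1, n_max + 1, 2)                      # odd harmonics only
    signs = (-1.0) ** np.arange(1, len(alphas) + 1)     # -1, +1, -1, ...
    series = 1.0 + 2.0 * (signs[None, :] * np.cos(np.outer(n, alphas))).sum(axis=1)
    return n, 4.0 / (np.pi * n) * series

def v_wthd_line_line(alphas):
    """1/n-weighted THD of the line-to-line voltage: triplen harmonics are
    dropped because they cancel between phases."""
    n, V = harmonics_type_A(alphas)
    keep = (n != 1) & (n % 3 != 0)
    return np.sqrt(np.sum((V[keep] / n[keep]) ** 2)) / abs(V[0])

# Example: two switching angles per quarter (hypothetical values in radians,
# not the paper's optimized angles).
angles = [0.35, 0.60]
print(f"V_WTHD = {100 * v_wthd_line_line(angles):.3f} %")
```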

Relevance:

10.00%

Publisher:

Abstract:

This work deals with the homogenization of an initial- and boundary-value problem for the doubly-nonlinear system

$D_t w - \nabla \cdot \vec{z} = g(x, t, x/\varepsilon)$  (0.1)
$w \in \alpha(u, x/\varepsilon)$  (0.2)
$\vec{z} \in \vec{\gamma}(\nabla u, x/\varepsilon)$  (0.3)

Here $\varepsilon$ is a positive parameter; $\alpha$ and $\vec{\gamma}$ are maximal monotone with respect to the first variable and periodic with respect to the second one. The inclusions (0.2) and (0.3) are formulated as null-minimization principles via the theory of Fitzpatrick [MR 1009594]. As $\varepsilon \to 0$, a two-scale formulation is derived via Nguetseng's notion of two-scale convergence, and a (single-scale) homogenized problem is then retrieved. (C) 2015 Elsevier Ltd. All rights reserved.
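
The "null-minimization" reformulation mentioned above rests on the standard Fitzpatrick-function representation of a maximal monotone operator; the sketch below states that representation in the notation of the abstract (this is the textbook definition, not a reproduction of the paper's derivation).

```latex
% Standard Fitzpatrick representation of a maximal monotone operator \alpha:
\[
  f_\alpha(u, w) \;=\; \sup_{(v, z)\,\in\,\operatorname{graph}\alpha}
  \bigl( \langle w, v \rangle + \langle z, u \rangle - \langle z, v \rangle \bigr),
\]
% which satisfies
\[
  f_\alpha(u, w) \;\ge\; \langle u, w \rangle \quad \text{for all } (u, w),
  \qquad
  f_\alpha(u, w) = \langle u, w \rangle \;\Longleftrightarrow\; w \in \alpha(u).
\]
% Hence the inclusion (0.2) is equivalent to requiring that $(u, w)$ be a
% null minimizer of the nonnegative functional
% $(u, w) \mapsto f_\alpha(u, w) - \langle u, w \rangle$,
% and similarly for (0.3) with $\vec{\gamma}$ in place of $\alpha$.
```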

Relevance:

10.00%

Publisher:

Abstract:

The cross-sectional stiffness matrix is derived for a pre-twisted, moderately thick beam made of transversely isotropic materials and having rectangular cross sections. An asymptotically exact methodology is used to model the anisotropic beam from 3-D elasticity, without any further assumptions. The beam is allowed to undergo large displacements and rotations, but small strain is assumed. The strain energy is computed using the beam constitutive law and kinematical relations derived with the inclusion of geometric nonlinearities and an initial twist. The energy functional is minimized using the Variational Asymptotic Method (VAM), thereby reducing the cross section to a point on the beam reference line with appropriate properties, forming a 1-D constitutive law. VAM is a mathematical technique employed here to rigorously split the 3-D analysis of the beam into two parts: a 2-D analysis over the beam cross-sectional domain, which provides a compact semi-analytical form of the cross-sectional properties, and a nonlinear 1-D analysis along the beam reference curve. As applied herein, the cross-sectional analysis is performed asymptotically by taking advantage of a material small parameter and two geometric small parameters. The 3-D strain components are derived from the kinematics and arranged in orders of the small parameters. Closed-form expressions are derived for the 3-D nonlinear warping and stress fields. The warping functions are obtained by minimizing the strain energy subject to a set of constraints that renders the 1-D strain measures well-defined. The zeroth-order 3-D warping field thus obtained is used to integrate the 3-D strain energy density over the cross section, yielding the 1-D strain energy density, which in turn identifies the corresponding cross-sectional stiffness matrix. The model predicts interlaminar and transverse shear stresses accurately up to first order.
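
As a reference for what "reducing the cross section to a point with appropriate properties" produces, the sketch below writes out the generic form of the resulting 1-D constitutive law for a classical beam model. The 4x4 quadratic-form structure shown is the standard outcome of VAM-based cross-sectional analyses; the particular entries for the pre-twisted, transversely isotropic case are what the paper derives and are not reproduced here.

```latex
% Generic form of the 1-D constitutive law delivered by the cross-sectional
% analysis: strain energy per unit length as a quadratic form in the 1-D
% generalized strains (extension, twist, and two bending curvatures).
\[
  U_{1D} \;=\; \tfrac{1}{2}\,
  \begin{pmatrix} \gamma_{11} \\ \kappa_{1} \\ \kappa_{2} \\ \kappa_{3} \end{pmatrix}^{\!T}
  \begin{pmatrix}
    S_{11} & S_{12} & S_{13} & S_{14} \\
    S_{12} & S_{22} & S_{23} & S_{24} \\
    S_{13} & S_{23} & S_{33} & S_{34} \\
    S_{14} & S_{24} & S_{34} & S_{44}
  \end{pmatrix}
  \begin{pmatrix} \gamma_{11} \\ \kappa_{1} \\ \kappa_{2} \\ \kappa_{3} \end{pmatrix},
\]
% where $\gamma_{11}$ is the axial stretching strain, $\kappa_1$ the elastic
% twist rate, and $\kappa_2$, $\kappa_3$ the bending curvatures of the
% deformed reference line.  The symmetric matrix $S$ is the cross-sectional
% stiffness matrix; off-diagonal terms such as $S_{12}$ capture the
% extension-twist coupling induced by the pre-twist.
```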

Relevance:

10.00%

Publisher:

Abstract:

We revisit a problem studied by Padakandla and Sundaresan [SIAM J. Optim., August 2009] on the minimization of a separable convex function subject to linear ascending constraints. The problem arises as the core optimization in several resource allocation problems in wireless communication settings. It is also a special case of the optimization of a separable convex function over the bases of a specially structured polymatroid. We give an alternative proof of the correctness of the algorithm of Padakandla and Sundaresan. In the process, we relax some of the restrictions they placed on the objective function.
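
To make the problem concrete, the sketch below sets up a small instance of separable convex minimization under ascending partial-sum constraints and solves it with a general-purpose solver. The specific objective, the data, and the use of SciPy's SLSQP are illustrative choices only; the specialized algorithm of Padakandla and Sundaresan is not reproduced here.

```python
# Sketch: separable convex minimization under linear ascending constraints,
#   minimize  sum_i f_i(x_i)
#   s.t.      x_1 + ... + x_l  >=  a_1 + ... + a_l   for l = 1, ..., n-1,
#             x_1 + ... + x_n  ==  a_1 + ... + a_n,   x >= 0.
# The objective, data, and use of a general-purpose solver are illustrative;
# this is not the specialized algorithm analysed in the paper.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 6
a = rng.uniform(0.5, 2.0, size=n)          # hypothetical ascending-constraint data
w = rng.uniform(0.5, 2.0, size=n)          # hypothetical per-coordinate weights

def objective(x):
    # A simple separable, strictly convex objective: sum_i w_i * x_i**2.
    return np.sum(w * x**2)

constraints = [{'type': 'ineq',
                'fun': (lambda x, l=l: np.sum(x[:l]) - np.sum(a[:l]))}
               for l in range(1, n)]
constraints.append({'type': 'eq', 'fun': lambda x: np.sum(x) - np.sum(a)})

res = minimize(objective, x0=a.copy(), bounds=[(0, None)] * n,
               constraints=constraints, method='SLSQP')
print("optimal x:", np.round(res.x, 4))
print("objective:", round(res.fun, 4))
```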

Relevance:

10.00%

Publisher:

Abstract:

Cooperative relaying combined with relay selection has been extensively studied as a means of improving the performance of interference-constrained secondary users in underlay cognitive radio (CR). We present a novel symbol error probability (SEP)-optimal amplify-and-forward relay selection rule for an average-interference-constrained underlay CR system. A fundamental principle brought out by the proposed rule, and unique to average-interference-constrained underlay CR, is that the choice of the optimal relay is affected not only by the source-to-relay, relay-to-destination, and relay-to-primary-receiver links, which are local to the relay, but also by the direct source-to-destination (SD) link, even though it is not local to any relay. We also propose a simpler, practically amenable variant of the optimal rule, called the 1-bit rule, which requires only one bit of feedback about the SD link gain to the relays and incurs a marginal performance loss relative to the optimal rule. We analyze its SEP and develop an insightful asymptotic SEP analysis. The proposed rules markedly outperform several ad hoc SD-link-unaware rules proposed in the literature. They also generalize the interference-unconstrained and SD-link-unaware optimal rules considered in the literature.
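
A small Monte Carlo harness like the one below is the kind of tool used to compare relay-selection rules by SEP. The two selection metrics implemented here are generic placeholders (best end-to-end AF SNR versus best first hop), the primary-receiver interference constraint is not modeled, and neither rule is the paper's SEP-optimal or 1-bit rule.

```python
# Sketch: Monte Carlo harness for comparing amplify-and-forward relay
# selection rules by symbol error probability (BPSK).  The two selection
# metrics below are generic placeholders -- the paper's SEP-optimal and
# 1-bit rules are not reproduced here.
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(1)
n_trials, n_relays, snr = 200_000, 4, 10.0   # illustrative parameters

# Rayleigh-faded link SNRs: source-relay, relay-destination, source-destination.
g_sr = snr * rng.exponential(size=(n_trials, n_relays))
g_rd = snr * rng.exponential(size=(n_trials, n_relays))
g_sd = snr * rng.exponential(size=n_trials)

# End-to-end SNR of an AF two-hop link (standard harmonic-mean-like form).
g_af = g_sr * g_rd / (g_sr + g_rd + 1.0)

def sep_bpsk(gamma):
    """Average SEP of BPSK at instantaneous SNR gamma: Q(sqrt(2*gamma))."""
    return np.mean(0.5 * erfc(np.sqrt(gamma)))

# Rule 1 (placeholder): pick the relay with the best two-hop AF SNR,
# then combine the relayed and direct paths (MRC).
best = np.argmax(g_af, axis=1)
sep_rule1 = sep_bpsk(g_sd + g_af[np.arange(n_trials), best])

# Rule 2 (placeholder baseline): pick the relay with the best source-relay
# gain only, then combine with the direct link.
best2 = np.argmax(g_sr, axis=1)
sep_rule2 = sep_bpsk(g_sd + g_af[np.arange(n_trials), best2])

print(f"SEP, best end-to-end AF SNR : {sep_rule1:.2e}")
print(f"SEP, best first-hop only    : {sep_rule2:.2e}")
```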

Relevance:

10.00%

Publisher:

Abstract:

Images obtained through fluorescence microscopy at low numerical aperture (NA) are noisy and have poor resolution. Images of specimens such as F-actin filaments obtained using confocal or widefield fluorescence microscopes contain directional information, and it is important that an image smoothing or filtering technique preserve this directionality. F-actin filaments are widely studied in pathology because abnormalities in actin dynamics play a key role in the diagnosis of cancer, cardiac diseases, vascular diseases, myofibrillar myopathies, neurological disorders, etc. We develop the directional bilateral filter as a means of filtering out noise without significantly altering the directionality of the F-actin filaments. The bilateral filter is anisotropic to start with, but we add an additional degree of anisotropy by employing an oriented domain kernel for smoothing. The orientation is locally adapted using a structure tensor, and the parameters of the bilateral filter are optimized within the framework of statistical risk minimization. We show that the directional bilateral filter has better denoising performance than the traditional Gaussian bilateral filter and other denoising techniques such as SURE-LET, non-local means, and guided image filtering at various noise levels in terms of peak signal-to-noise ratio (PSNR). We also show quantitative improvements in low-NA images of F-actin filaments. (C) 2015 Author(s). All article content, except where otherwise noted, is licensed under a Creative Commons Attribution 3.0 Unported License.
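
The sketch below illustrates the core idea of an oriented-domain bilateral filter: the local orientation is read off a smoothed structure tensor, and the spatial (domain) Gaussian is elongated along that orientation while the range kernel stays intensity-based. The brute-force loops, kernel sizes, and fixed parameters are illustrative; the paper additionally optimizes the parameters by statistical risk minimization, which is not reproduced here.

```python
# Sketch: a bilateral filter with an oriented (anisotropic) domain kernel,
# where the local orientation comes from a smoothed structure tensor.
# Kernel sizes and the fixed (non-risk-optimized) parameters are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def local_orientation(img, sigma=2.0):
    """Dominant local orientation (radians) from the smoothed structure tensor."""
    gx, gy = sobel(img, axis=1), sobel(img, axis=0)
    jxx = gaussian_filter(gx * gx, sigma)
    jyy = gaussian_filter(gy * gy, sigma)
    jxy = gaussian_filter(gx * gy, sigma)
    # gradient orientation, rotated by 90 degrees to point along the structure
    return 0.5 * np.arctan2(2.0 * jxy, jxx - jyy) + np.pi / 2.0

def directional_bilateral(img, half=5, sig_along=4.0, sig_across=1.0, sig_r=0.1):
    """Oriented-domain bilateral filter (brute force, for small images)."""
    theta = local_orientation(img)
    out = np.zeros_like(img)
    pad = np.pad(img, half, mode='reflect')
    u, v = np.meshgrid(np.arange(-half, half + 1), np.arange(-half, half + 1))
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            c, s = np.cos(theta[i, j]), np.sin(theta[i, j])
            ua, va = c * u + s * v, -s * u + c * v        # rotate to the local frame
            dom = np.exp(-(ua**2 / (2 * sig_along**2) + va**2 / (2 * sig_across**2)))
            patch = pad[i:i + 2 * half + 1, j:j + 2 * half + 1]
            rng_w = np.exp(-((patch - img[i, j]) ** 2) / (2 * sig_r**2))
            w = dom * rng_w
            out[i, j] = np.sum(w * patch) / np.sum(w)
    return out

# Tiny demo on a synthetic oriented pattern corrupted by Gaussian noise.
x = np.linspace(0, 1, 96)
clean = 0.5 + 0.5 * np.sin(2 * np.pi * 6 * (x[None, :] + 0.3 * x[:, None]))
noisy = clean + 0.1 * np.random.default_rng(0).normal(size=clean.shape)
den = directional_bilateral(noisy)
mse = lambda a, b: np.mean((a - b) ** 2)
print(f"MSE noisy = {mse(noisy, clean):.4f}, MSE filtered = {mse(den, clean):.4f}")
```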

Relevance:

10.00%

Publisher:

Abstract:

We address the problem of separating a speech signal into its excitation and vocal-tract filter components, which falls within the framework of blind deconvolution. Typically, the excitation in the case of voiced speech is assumed to be sparse and the vocal-tract filter stable. We develop an alternating $\ell_p$-$\ell_2$ projections algorithm (ALPA) to perform the deconvolution while taking these constraints into account. The algorithm is iterative and alternates between two solution spaces. The initialization is based on the standard linear-prediction decomposition of a speech signal into an autoregressive filter and a prediction residue. In every iteration, a sparse excitation is estimated by optimizing an $\ell_p$-norm-based cost, and the vocal-tract filter is obtained as the solution to a standard least-squares minimization problem. We validate the algorithm on voiced segments of natural speech signals and show applications to epoch estimation. We also present comparisons with state-of-the-art techniques and show that ALPA gives a sparser, impulse-like excitation, where the impulses directly indicate the epochs, or instants of significant excitation.
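
The sketch below implements a simplified alternating scheme in the spirit of the description above: with the filter fixed, the excitation is sparsified by IRLS so as to approximately minimize an $\ell_p$-regularized cost, and with the excitation fixed, the autoregressive filter is re-estimated by least squares. It runs on a synthetic impulse-train-through-all-pole signal; it is not the authors' exact projection algorithm, and the model order, $p$, and regularization weight are illustrative assumptions.

```python
# Sketch: alternating estimation of a sparse excitation and an all-pole
# (vocal-tract-like) filter from a synthetic signal.  Simplified scheme in
# the spirit of ALPA (IRLS for an lp-sparse excitation given the filter,
# least squares for the filter given the excitation), not the authors'
# exact projection algorithm.
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
order, N, p = 8, 400, 0.8

# Synthetic "speech-like" signal: sparse impulse train through an AR filter.
a_true = np.r_[1.0, -0.6, 0.3, 0.1, -0.2, 0.05, 0.1, -0.05, 0.02]
e_true = np.zeros(N)
e_true[::40] = 1.0                                   # impulses every 40 samples
s = lfilter([1.0], a_true, e_true) + 0.01 * rng.normal(size=N)

def lp_sparse_residual(r, p, n_iter=20, lam=0.05, eps=1e-6):
    """IRLS for argmin_e ||r - e||_2^2 + lam * sum_i |e_i|^p (diagonal system)."""
    e = r.copy()
    for _ in range(n_iter):
        w = p * np.maximum(np.abs(e), eps) ** (p - 2.0)   # lp reweighting
        e = r / (1.0 + 0.5 * lam * w)
    return e

def fit_ar(s, e_hat, order):
    """Least squares for A(z) = 1 + a_1 z^-1 + ...:
       minimize sum_n ( e_hat[n] - s[n] - sum_k a_k s[n-k] )^2 over a."""
    X = np.array([s[n - 1::-1][:order] for n in range(order, len(s))])
    y = e_hat[order:] - s[order:]
    a_tail, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.r_[1.0, a_tail]

# Initialize with ordinary linear prediction (zero target excitation),
# then alternate the two steps a few times.
a = fit_ar(s, np.zeros_like(s), order)
for _ in range(10):
    r = lfilter(a, [1.0], s)                 # prediction residual given the filter
    e = lp_sparse_residual(r, p)             # sparsify the residual (lp via IRLS)
    a = fit_ar(s, e, order)                  # refit the filter given the excitation

peaks = np.flatnonzero(np.abs(e) > 0.5 * np.abs(e).max())
print("detected excitation instants:", peaks[:10], "...")
print("true impulse locations      :", np.flatnonzero(e_true)[:10], "...")
```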

Relevance:

10.00%

Publisher:

Abstract:

The effect of multiplicative noise on a signal is much more severe than that of additive noise. In this paper, we address the problem of suppressing multiplicative noise in one-dimensional signals. To deal with signals corrupted by multiplicative noise, we propose a denoising algorithm based on the minimization of an unbiased estimator (MURE) of the mean-square error (MSE). We derive an expression for an unbiased estimate of the MSE. The proposed denoising is carried out in the wavelet domain (soft thresholding) while considering the time-domain MURE. The parameters of the thresholding function are obtained by minimizing the unbiased estimator MURE. We show that the parameters for optimal MURE are very close to the optimal parameters obtained by considering the oracle MSE. Experiments show that the SNR improvement of the proposed denoising algorithm is competitive with that of a state-of-the-art method.
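
The sketch below shows the wavelet-domain soft-thresholding machinery that such a scheme operates on. Here the threshold is chosen by sweeping against the known clean signal (the oracle MSE) purely for illustration; the paper's contribution is to choose it by minimizing an unbiased estimate of the MSE (MURE) without access to the clean signal, which is not reproduced here. The test signal, wavelet, and noise level are illustrative assumptions.

```python
# Sketch: wavelet-domain soft thresholding of a signal corrupted by
# multiplicative noise, with an oracle threshold sweep used only to
# illustrate the trade-off that a MURE-type criterion estimates blindly.
import numpy as np
import pywt

rng = np.random.default_rng(0)
n = 1024
t = np.linspace(0, 1, n)
clean = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sign(np.sin(2 * np.pi * 2 * t))
noisy = clean * (1.0 + 0.3 * rng.normal(size=n))      # multiplicative noise

def soft_threshold_denoise(x, thr, wavelet='sym8', level=5):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    den = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
    return pywt.waverec(den, wavelet)[:len(x)]

mse = lambda a, b: np.mean((a - b) ** 2)
thresholds = np.linspace(0.01, 1.0, 50)
errors = [mse(soft_threshold_denoise(noisy, thr), clean) for thr in thresholds]
best = thresholds[int(np.argmin(errors))]
print(f"input MSE = {mse(noisy, clean):.4f}")
print(f"oracle-best threshold = {best:.3f}, output MSE = {min(errors):.4f}")
```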

Relevance:

10.00%

Publisher:

Abstract:

Local polynomial approximation of data is one approach to signal denoising. Savitzky-Golay (SG) filters are finite-impulse-response kernels that are convolved with the data to produce a polynomial approximation for a chosen set of filter parameters. When the noise follows Gaussian statistics, minimizing the mean-squared error (MSE) between the noisy signal and its polynomial approximation is optimal in the maximum-likelihood (ML) sense, but the MSE criterion is not optimal under non-Gaussian noise conditions. In this paper, we robustify the SG filter for applications involving noise that follows a heavy-tailed distribution. The optimal filtering criterion is achieved by $\ell_1$-norm minimization of the error through the iteratively reweighted least-squares (IRLS) technique. Interestingly, at any stage of the iteration we solve a weighted SG filter by minimizing an $\ell_2$ norm, yet the process converges to the $\ell_1$-minimizing output. The results show consistent improvement over the performance of the standard SG filter.
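
A minimal sketch of the reweighting idea follows: a local polynomial fit per window, where iteratively reweighted least squares drives each window fit from the $\ell_2$ criterion towards the $\ell_1$ criterion. The window length, polynomial order, and heavy-tailed (Laplacian) noise used in the demo are illustrative choices, not the paper's settings.

```python
# Sketch: a Savitzky-Golay-style local polynomial fit made robust by
# iteratively reweighted least squares (IRLS), so the per-window criterion
# moves from l2 towards l1.
import numpy as np

def robust_sg_point(window, order=2, n_iter=10, eps=1e-6):
    """Return the robust polynomial estimate at the centre of one window."""
    m = len(window)
    x = np.arange(m) - m // 2
    V = np.vander(x, order + 1, increasing=True)      # columns: 1, x, x^2, ...
    w = np.ones(m)
    for _ in range(n_iter):
        # Weighted least squares: each step is an ordinary (l2) SG-type fit.
        W = np.sqrt(w)[:, None]
        coef, *_ = np.linalg.lstsq(W * V, np.sqrt(w) * window, rcond=None)
        r = window - V @ coef
        w = 1.0 / np.maximum(np.abs(r), eps)           # IRLS weights -> l1
    return coef[0]                                     # fitted value at x = 0

def robust_sg_filter(y, half=7, order=2):
    ypad = np.pad(y, half, mode='edge')
    return np.array([robust_sg_point(ypad[i:i + 2 * half + 1], order)
                     for i in range(len(y))])

# Demo: smooth signal plus heavy-tailed (Laplacian) noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)
clean = np.sin(2 * np.pi * 3 * t)
noisy = clean + rng.laplace(scale=0.2, size=t.size)
den = robust_sg_filter(noisy)
mse = lambda a, b: np.mean((a - b) ** 2)
print(f"MSE noisy = {mse(noisy, clean):.4f}, MSE robust SG = {mse(den, clean):.4f}")
```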

Relevance:

10.00%

Publisher:

Abstract:

The performance of two curved-beam finite element models based on coupled polynomial displacement fields is investigated for the out-of-plane vibration of arches. These two-noded beam models employ curvilinear strain definitions and have three degrees of freedom per node, namely out-of-plane translation ($v$), out-of-plane bending rotation ($\theta_z$), and torsion rotation ($\theta_s$). The coupled polynomial interpolation fields are derived independently for Timoshenko and Euler-Bernoulli beam elements using the force-moment equilibrium equations. The numerical performance of these elements for constrained and unconstrained arches is compared with that of conventional curved beam models based on independent polynomial fields. The formulation is shown to be free from any spurious constraints in the limits of 'flexureless torsion' and 'torsionless flexure' and hence devoid of flexure and torsion locking. The stiffness and consistent mass matrices generated from the coupled displacement models show excellent convergence of natural frequencies in locking regimes. The accuracy of the shear flexibility added to the elements is also demonstrated. The coupled polynomial models are shown to perform consistently over a wide range of flexure-to-shear ($EI/GA$) and flexure-to-torsion ($EI/GJ$) stiffness ratios and are inherently devoid of flexure, torsion, and shear locking phenomena. (C) 2015 Elsevier B.V. All rights reserved.

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we search for the regions of the phenomenological minimal supersymmetric standard model (pMSSM) parameter space where one can expect a moderate Higgs mixing angle ($\alpha$) with relatively light (up to 600 GeV) additional Higgses after satisfying the current LHC data. We perform a global fit analysis using the most up-to-date data (as of December 2014) from the LHC and Tevatron experiments. The constraints coming from the precision measurements of the rare b-decays $B_s \to \mu^+\mu^-$ and $b \to s\gamma$ are also considered. We find that the low-$M_A$ ($\lesssim 350$ GeV) and high-$\tan\beta$ ($\gtrsim 25$) regions are disfavored by the combined effect of the global analysis and the flavor data. However, regions with Higgs mixing angle $\alpha \sim 0.1$-$0.8$ are still allowed by the current data. We then study the existing direct-search bounds on the heavy scalar/pseudoscalar ($H/A$) and charged Higgs boson ($H^\pm$) masses and branchings at the LHC. Regions with low to moderate values of $\tan\beta$ and light additional Higgses (mass $\le 600$ GeV) are found to be unconstrained by the data, while regions with $\tan\beta > 20$ are excluded by the direct-search bounds from the LHC-8 data. The possibility of probing the region with $\tan\beta \le 20$ at the high-luminosity run of the LHC is also discussed, with special attention to the $H \to hh$, $H/A \to t\bar{t}$, and $H/A \to \tau^+\tau^-$ decay modes.

Relevance:

10.00%

Publisher:

Abstract:

Standard approaches to ellipse fitting are based on the minimization of the algebraic or geometric distance between the given data and a template ellipse. When the data are noisy and come from a partial ellipse, the state-of-the-art methods tend to produce biased ellipses. We rely on the sampling structure of the underlying signal and show that the x- and y-coordinate functions of an ellipse are finite-rate-of-innovation (FRI) signals and that their parameters are estimable from partial data. We consider both uniform and nonuniform sampling scenarios in the presence of noise and show that the data can be modeled as a sum of random amplitude-modulated complex exponentials. A low-pass filter is used to suppress noise and approximate the data as a sum of weighted complex exponentials. The annihilating filter used in FRI approaches is applied to estimate the sampling interval in closed form. We perform experiments on simulated and real data, and assess both objective and subjective performance in comparison with state-of-the-art ellipse fitting methods. The proposed method produces ellipses with less bias, and the mean-squared error is lower by about 2 to 10 dB. We show applications of ellipse fitting to iris images, starting from partial edge contours, and to free-hand ellipses drawn on a touch-screen tablet.
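
The central signal-processing step can be illustrated with a small annihilating-filter (Prony-type) estimate: the uniformly sampled x-coordinate of an ellipse is a DC term plus one complex-exponential pair, so a short annihilating filter recovers the angular sampling interval from a few samples. The noiseless, uniform-sampling setting below is a simplification of the paper's method, which additionally handles noise (via low-pass filtering), nonuniform sampling, and the joint x-y model.

```python
# Sketch: annihilating-filter (Prony-type) estimation of the angular sampling
# interval from uniformly sampled x-coordinates of an ellipse.  Noiseless,
# uniform sampling only -- a simplification of the paper's full method.
import numpy as np

# x-coordinate samples of an ellipse: x[n] = xc + a*cos(w0*n + phi), i.e. a sum
# of 3 exponentials {1, exp(+j w0), exp(-j w0)}; an annihilating filter of
# length 4 kills this sequence.
xc, a_axis, phi = 2.0, 3.0, 0.7        # hypothetical ellipse parameters
w0_true = 2 * np.pi / 17.0             # angular sampling interval to recover
n = np.arange(24)
x = xc + a_axis * np.cos(w0_true * n + phi)

K = 3                                   # number of exponentials in the model
# Build the convolution system  X h = 0 for the annihilating filter h of
# length K+1; h is the right singular vector of the smallest singular value.
rows = [x[m:m + K + 1][::-1] for m in range(len(x) - K)]
X = np.array(rows)
_, _, Vt = np.linalg.svd(X)
h = Vt[-1]

# The roots of H(z) are the exponentials {1, exp(+/- j w0)}.
roots = np.roots(h)
w0_est = np.max(np.abs(np.angle(roots)))
print(f"true w0 = {w0_true:.6f}, estimated w0 = {w0_est:.6f}")
```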