908 results for partial least-squares regression


Relevance: 100.00%

Abstract:

Accelerated electrothermal aging tests were conducted at a constant temperature of 60 °C and at stress levels of 6 kV/mm, 7 kV/mm, and 8 kV/mm on unfilled epoxy and on epoxy filled with 5 wt% nano-alumina. The leakage current through the samples was continuously monitored, and the variation of tan delta with aging duration was tracked to predict impending failure and the time to failure of the samples. The time to failure of the epoxy-alumina nanocomposite samples is significantly higher than that of the unfilled epoxy. Data from the experiments were analyzed graphically by Weibull probability plotting and analytically by linear least-squares regression. The characteristic life obtained from the least-squares regression was used to plot the inverse power law curve, from which the life of the epoxy insulation with and without nanofiller loading at a stress level of 3 kV/mm, i.e., within the midrange of the design stress level of rotating-machine insulation, was obtained by extrapolation. The life of the epoxy-alumina nanocomposite at 5 wt% filler loading is nine times that of the unfilled epoxy.
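The two fitting steps described above can be sketched with linear least squares; this is a minimal illustration of the general technique (Weibull probability plotting and inverse-power-law extrapolation), not the authors' analysis code, and the median-rank plotting positions are one common convention:

```python
import numpy as np

def weibull_lsq(times):
    """Fit a two-parameter Weibull distribution to failure times by linear
    least squares on the probability plot:
    ln(-ln(1 - F)) = beta * ln(t) - beta * ln(eta)."""
    t = np.sort(np.asarray(times, dtype=float))
    n = len(t)
    # Median-rank estimate of the cumulative failure probability.
    F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)
    x = np.log(t)
    y = np.log(-np.log(1.0 - F))
    beta, c = np.polyfit(x, y, 1)   # slope = shape parameter
    eta = np.exp(-c / beta)         # characteristic life (63.2% quantile)
    return beta, eta

def ipl_life(stresses, lives, target_stress):
    """Inverse power law L = k * E**(-n): fit ln(L) = ln(k) - n*ln(E) to the
    characteristic lives at the tested stresses and extrapolate."""
    slope, lnk = np.polyfit(np.log(stresses), np.log(lives), 1)
    return np.exp(lnk) * target_stress ** slope
```

With characteristic lives fitted at 6, 7, and 8 kV/mm, `ipl_life` extrapolates the expected life at 3 kV/mm.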


Time-varying linear prediction has been studied in the context of speech signals, where the autoregressive (AR) coefficients of the system function are modeled as a linear combination of a set of known bases. Traditionally, least-squares minimization is used to estimate the model parameters. Motivated by the sparse nature of the excitation signal for voiced sounds, we explore time-varying linear prediction modeling of speech signals under sparsity constraints. Parameter estimation is posed as an l0-norm minimization problem, and the reweighted l1-norm minimization technique is used to estimate the model parameters. We show that, for sparsely excited time-varying systems, this formulation models the underlying system function better than least-squares error minimization. Evaluation on synthetic and real speech examples shows that the estimated model parameters track the formant trajectories more closely than the least-squares approach.
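The traditional least-squares baseline can be sketched as follows; the polynomial basis u_k(n) = (n/N)^k is an illustrative choice (the abstract only says "known bases"), and the paper's reweighted l1 approach replaces this plain solve:

```python
import numpy as np

def tv_lp_lsq(s, p, K):
    """Least-squares estimate of time-varying AR coefficients
    a_i(n) = sum_k c[i, k] * u_k(n), with illustrative polynomial bases
    u_k(n) = (n/N)**k. Returns c of shape (p, K)."""
    N = len(s)
    rows, targets = [], []
    for n in range(p, N):
        u = np.array([(n / N) ** k for k in range(K)])
        # regressor multiplying coefficient c[i, k] is s[n - i - 1] * u_k(n)
        row = np.concatenate([s[n - i - 1] * u for i in range(p)])
        rows.append(row)
        targets.append(s[n])
    c, *_ = np.linalg.lstsq(np.vstack(rows), np.array(targets), rcond=None)
    return c.reshape(p, K)
```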


Consider N points in R-d and M local coordinate systems that are related through unknown rigid transforms. For each point, we are given (possibly noisy) measurements of its local coordinates in some of the coordinate systems. Alternatively, for each coordinate system, we observe the coordinates of a subset of the points. The problem of estimating the global coordinates of the N points (up to a rigid transform) from such measurements comes up in distributed approaches to molecular conformation and sensor network localization, and also in computer vision and graphics. The least-squares formulation of this problem, although nonconvex, has a well-known closed-form solution when M = 2 (based on the singular value decomposition (SVD)). However, no closed-form solution is known for M >= 3. In this paper, we demonstrate how the least-squares formulation can be relaxed into a convex program, namely, a semidefinite program (SDP). By setting up connections between the uniqueness of this SDP and results from rigidity theory, we prove conditions for exact and stable recovery for the SDP relaxation. In particular, we prove that the SDP relaxation can guarantee recovery under more adversarial conditions compared to earlier proposed spectral relaxations, and we derive error bounds for the registration error incurred by the SDP relaxation. We also present results of numerical experiments on simulated data to confirm the theoretical findings. We empirically demonstrate that (a) unlike the spectral relaxation, the relaxation gap is mostly zero for the SDP (i.e., we are able to solve the original nonconvex least-squares problem) up to a certain noise threshold, and (b) the SDP performs significantly better than spectral and manifold-optimization methods, particularly at large noise levels.
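The well-known closed-form SVD solution for the M = 2 case mentioned above is the orthogonal Procrustes (Kabsch) construction; a minimal sketch:

```python
import numpy as np

def rigid_register(P, Q):
    """Closed-form least-squares rigid registration of two local coordinate
    systems (the M = 2 case): find rotation R and translation t minimizing
    sum_j ||R @ P[:, j] + t - Q[:, j]||^2 via the SVD."""
    pc = P.mean(axis=1, keepdims=True)
    qc = Q.mean(axis=1, keepdims=True)
    H = (Q - qc) @ (P - pc).T                 # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    # Force det(R) = +1 so R is a proper rotation, not a reflection.
    D = np.diag([1.0] * (len(U) - 1) + [np.linalg.det(U @ Vt)])
    R = U @ D @ Vt
    t = qc - R @ pc
    return R, t
```

For M >= 3 no such closed form is known, which is what motivates the SDP relaxation in the paper.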


Compressive sensing (CS) theory combines signal sampling and compression for sparse signals, resulting in a reduced sampling rate. In recent years, many recovery algorithms have been proposed to reconstruct the signal efficiently; Subspace Pursuit and Compressive Sampling Matching Pursuit are popular greedy methods. Fusion of Algorithms for Compressed Sensing is a recently proposed method in which several CS reconstruction algorithms participate and the final estimate of the underlying sparse signal is obtained by fusing the estimates of the participating algorithms. All these methods involve solving a least-squares problem, which may be ill-conditioned, especially in the low-dimensional measurement regime. In this paper, we propose a step prior to the least squares that ensures well-conditioning of the least-squares problem. Using Monte Carlo simulations, we show that in the low-dimensional measurement scenario this modification improves the reconstruction capability of the algorithm for both clean and noisy measurements.
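The abstract does not specify the conditioning step, so the following is only a hypothetical stand-in illustrating the general idea: check the condition number before the least-squares solve and fall back to a small Tikhonov ridge when it is too large.

```python
import numpy as np

def safe_lstsq(A, b, cond_max=1e8, ridge=1e-6):
    """Solve min ||Ax - b||_2, guarding against ill-conditioning:
    if cond(A) exceeds cond_max, fall back to a small Tikhonov ridge.
    This is an assumed illustration, not the paper's actual step."""
    if np.linalg.cond(A) <= cond_max:
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        return x
    # Ridge-regularized normal equations: (A^T A + ridge * I) x = A^T b
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + ridge * np.eye(n), A.T @ b)
```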


For a multilayered specimen, the backscattered signal in frequency-domain optical coherence tomography (FDOCT) is expressible as a sum of cosines, each corresponding to a change of refractive index in the specimen; each cosine represents a peak in the reconstructed tomogram. We consider a truncated cosine-series representation of the signal, with the constraint that the coefficients in the basis expansion be sparse. An l2 (sum of squared errors) data error is considered with an l1 (sum of absolute values) constraint on the coefficients. The optimization problem is solved using Weiszfeld's iteratively reweighted least-squares (IRLS) algorithm. On real FDOCT data, improved results are obtained over the standard reconstruction technique, with lower background measurement noise and fewer artifacts owing to the strong l1 penalty. Previous sparse tomogram reconstruction techniques in the literature proposed collecting sparse samples, necessitating a change in the data-capture process conventionally used in FDOCT; the IRLS-based method proposed in this paper does not suffer from this drawback.
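A generic IRLS scheme for an l2 data term with an l1 penalty can be sketched as below; this shows the standard reweighting idea (|x_i| approximated by x_i^2 / |x_i| from the previous iterate), not the paper's specific FDOCT pipeline:

```python
import numpy as np

def irls_l1(A, b, lam=0.1, n_iter=50, eps=1e-8):
    """Iteratively reweighted least squares for the l1-penalized problem
    min_x ||Ax - b||_2^2 + lam * ||x||_1. Each iteration solves a weighted
    ridge system with weights lam / |x_i| taken from the previous iterate."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(n_iter):
        w = lam / (np.abs(x) + eps)          # eps avoids division by zero
        x = np.linalg.solve(A.T @ A + np.diag(w), A.T @ b)
    return x
```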


We address the problem of separating a speech signal into its excitation and vocal-tract filter components, which falls within the framework of blind deconvolution. Typically, the excitation in the case of voiced speech is assumed to be sparse, and the vocal-tract filter stable. We develop an alternating lp-l2 projections algorithm (ALPA) to perform deconvolution under these constraints. The algorithm is iterative and alternates between two solution spaces. The initialization is based on the standard linear-prediction decomposition of a speech signal into an autoregressive filter and a prediction residue. In every iteration, a sparse excitation is estimated by optimizing an lp-norm-based cost, and the vocal-tract filter is derived as the solution to a standard least-squares minimization problem. We validate the algorithm on voiced segments of natural speech signals and show applications to epoch estimation. We also present comparisons with state-of-the-art techniques and show that ALPA gives a sparser, impulse-like excitation, in which the impulses directly denote the epochs, or instants of significant excitation.
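The standard linear-prediction decomposition used for initialization can be sketched as a least-squares solve (covariance-method style; a minimal illustration, not the ALPA iteration itself):

```python
import numpy as np

def lp_decompose(s, p=10):
    """Standard linear-prediction decomposition: estimate AR coefficients a
    by least squares and return the prediction residue
    e(n) = s(n) - sum_i a_i * s(n - i)."""
    N = len(s)
    # Column i holds s[n - i] for prediction of s[n], n = p..N-1.
    A = np.column_stack([s[p - i:N - i] for i in range(1, p + 1)])
    b = s[p:]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    residue = b - A @ a
    return a, residue
```

For a purely autoregressive signal the residue collapses to the (sparse) excitation, which is what motivates the sparsity constraint above.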


Local polynomial approximation of data is an approach to signal denoising. Savitzky-Golay (SG) filters are finite-impulse-response kernels that are convolved with the data to produce a polynomial approximation for a chosen set of filter parameters. When the noise follows Gaussian statistics, minimizing the mean-squared error (MSE) between the noisy signal and its polynomial approximation is optimal in the maximum-likelihood (ML) sense, but the MSE criterion is not optimal under non-Gaussian noise. In this paper, we robustify the SG filter for applications in which the noise follows a heavy-tailed distribution. The optimal filtering criterion is achieved by l1-norm minimization of the error through the iteratively reweighted least-squares (IRLS) technique. Interestingly, at each iteration we solve a weighted SG filter by minimizing an l2 norm, yet the process converges to the l1-minimized output. The results show consistent improvement over the standard SG filter.
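The standard (l2) SG kernel that the paper robustifies can be derived directly from the least-squares polynomial fit; this sketch builds the smoothing kernel from the pseudo-inverse of the window's Vandermonde matrix:

```python
import numpy as np

def sg_kernel(window=7, order=2):
    """Savitzky-Golay smoothing kernel: fit a degree-`order` polynomial over
    a centered window by least squares; the kernel is the row of the
    pseudo-inverse that evaluates the fit at the window center."""
    half = window // 2
    n = np.arange(-half, half + 1)
    V = np.vander(n, order + 1, increasing=True)   # columns [1, n, n^2, ...]
    H = np.linalg.pinv(V)                          # (order+1, window)
    return H[0]                                    # constant term = value at n=0
```

Convolving a clean polynomial of degree up to `order` with this kernel reproduces it exactly; the IRLS variant above reweights this fit at each iteration.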


Biomolecular recognition underlying drug-target interactions is determined by both binding affinity and specificity. While quantifying binding efficacy is possible, determining specificity remains a challenge, as it requires affinity data for multiple targets with the same ligand dataset. It is therefore desirable to understand the interaction space by mapping the target space and modeling its complementary chemical space through computational techniques. In this study, the active-site architecture of the FabD drug target in two apicomplexan parasites, Plasmodium falciparum (PfFabD) and Toxoplasma gondii (TgFabD), is explored, followed by consensus docking calculations and identification of the fifteen best hit compounds, most of which are derivatives of natural products. Subsequently, machine-learning techniques were applied to molecular descriptors of six FabD homologs and sixty ligands to induce distinct multivariate partial least-squares models. The biological space of FabD mapped by the various chemical entities explains their interaction space in general and highlights the selective variations of FabD in the apicomplexan parasites relative to the host. Furthermore, the chemometric models revealed the principal chemical scaffolds in PfFabD and TgFabD to be pyrrolidines and imidazoles, respectively, which confer target specificity and improve binding affinity in combination with other functional descriptors, conducive to the design and optimization of leads.
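A minimal single-response partial least-squares (PLS1) regression, of the kind used for such chemometric models, can be sketched with the NIPALS deflation scheme (a generic textbook construction, not the study's actual modeling pipeline or descriptors):

```python
import numpy as np

def pls1_nipals(X, y, n_comp=2):
    """Minimal PLS1 regression via NIPALS deflation.
    Returns the coefficient vector for centered X and y."""
    X = X - X.mean(axis=0)
    y = y - y.mean()
    W, P, q = [], [], []
    Xk, yk = X.copy(), y.copy()
    for _ in range(n_comp):
        w = Xk.T @ yk
        w = w / np.linalg.norm(w)        # weight: covariance direction
        t = Xk @ w                       # score
        tt = t @ t
        p = Xk.T @ t / tt                # X loading
        qk = (yk @ t) / tt               # y loading
        Xk = Xk - np.outer(t, p)         # deflate X
        yk = yk - qk * t                 # deflate y
        W.append(w); P.append(p); q.append(qk)
    W = np.array(W).T
    P = np.array(P).T
    q = np.array(q)
    return W @ np.linalg.solve(P.T @ W, q)   # coefficients in original X space
```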


We develop a new dictionary learning algorithm, l1-K-SVD, by minimizing the l1 distortion on the data term. The proposed formulation corresponds to maximum a posteriori estimation assuming a Laplacian prior on the coefficient matrix and additive noise, and is, in general, robust to non-Gaussian noise. The l1 distortion is minimized by the iteratively reweighted least-squares algorithm. The dictionary atoms and the corresponding sparse coefficients are estimated simultaneously in the dictionary-update step. Experimental results show that l1-K-SVD achieves noise robustness, faster convergence, and a higher atom-recovery rate than the method of optimal directions, K-SVD, and the robust dictionary learning (RDL) algorithm, in Gaussian as well as non-Gaussian noise. For a fixed sparsity, number of dictionary atoms, and data dimension, l1-K-SVD outperforms K-SVD and RDL on small training sets. We also consider the generalized lp (0 < p < 1) data metric to tackle heavy-tailed/impulsive noise. In an image-denoising application, l1-K-SVD yields a higher peak signal-to-noise ratio (PSNR) than K-SVD for Laplacian noise; the structural similarity index increases by 0.1 at low input PSNR, which is significant and demonstrates the efficacy of the proposed method.
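IRLS minimization of an l1 data term (the Laplacian-noise ML criterion used above) can be sketched in isolation; here on a toy regression rather than the dictionary-update step itself:

```python
import numpy as np

def irls_l1_fit(A, b, n_iter=30, eps=1e-8):
    """IRLS for the l1 data term min_x ||Ax - b||_1: each step solves a
    weighted least-squares problem with weights 1/|r_i| on the residuals,
    down-weighting outliers (maximum likelihood under Laplacian noise)."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(n_iter):
        r = A @ x - b
        w = 1.0 / (np.abs(r) + eps)       # eps keeps weights finite
        Aw = A * w[:, None]               # A^T W A x = A^T W b
        x = np.linalg.solve(A.T @ Aw, Aw.T @ b)
    return x
```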


It was demonstrated in earlier work that, by approximating its range kernel using shiftable functions, the nonlinear bilateral filter can be computed using a series of fast convolutions. Previous approaches based on shiftable approximation have, however, been restricted to Gaussian range kernels. In this work, we propose a novel approximation that can be applied to any range kernel, provided it has a pointwise-convergent Fourier series. More specifically, we propose to approximate the Gaussian range kernel of the bilateral filter using a Fourier basis, where the coefficients of the basis are obtained by solving a series of least-squares problems. The coefficients can be computed efficiently using a recursive form of the QR decomposition. By controlling the cardinality of the Fourier basis, we can trade off run time against filtering accuracy; in particular, we are able to guarantee subpixel accuracy for the overall filtering, which most existing methods for fast bilateral filtering do not provide. We present simulation results demonstrating the speed and accuracy of the proposed algorithm.
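The core least-squares step can be illustrated as fitting a truncated cosine basis to the Gaussian range kernel over the intensity range; the range [-255, 255] and the batch `lstsq` solve (rather than the paper's recursive QR) are illustrative choices:

```python
import numpy as np

def ls_cosine_approx(sigma=30.0, T=255.0, K=5, n_grid=1024):
    """Least-squares fit of a truncated cosine (Fourier) basis to the
    Gaussian range kernel g(s) = exp(-s^2 / (2 sigma^2)) on [-T, T].
    Returns c such that g(s) ~= sum_k c[k] * cos(k * pi * s / T)."""
    s = np.linspace(-T, T, n_grid)
    g = np.exp(-s**2 / (2 * sigma**2))
    B = np.cos(np.outer(s, np.arange(K)) * np.pi / T)   # (n_grid, K) basis
    c, *_ = np.linalg.lstsq(B, g, rcond=None)
    return c
```

Increasing the basis cardinality K drives the approximation error down, which is the accuracy/run-time tradeoff described above.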


This paper deals with the valuation of energy assets related to natural gas. In particular, we value a baseload natural gas combined cycle (NGCC) power plant and an ancillary installation, namely a liquefied natural gas (LNG) facility, in a realistic setting: these investments enjoy a long useful life but require non-negligible time to build. We then focus on the valuation of several investment options, again in a realistic setting: the option to invest in the power plant when there is uncertainty about the initial outlay, the option's time to maturity, or the cost of CO2 emission permits, or when there is a chance to double the plant size in the future. Our model comprises three sources of risk: the gas price, uncertain in both its current level and its long-run equilibrium level, and the current electricity price, all assumed to show mean reversion. The two-factor model for the natural gas price is calibrated using data from NYMEX NG futures contracts, and the one-factor model for the electricity price is calibrated using data from the Spanish wholesale electricity market. We then use the estimated parameter values alongside actual physical parameters from a case study to value natural gas plants. Finally, the calibrated parameters are used in a Monte Carlo simulation framework to value several American-type options to invest in these energy assets, following the least-squares Monte Carlo approach.
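The least-squares Monte Carlo (Longstaff-Schwartz) machinery can be illustrated on a standard American put under geometric Brownian motion; this is purely illustrative, since the paper uses mean-reverting two-factor gas and one-factor electricity dynamics and real-option payoffs instead:

```python
import numpy as np

def lsm_american_put(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                     n_steps=50, n_paths=20000, seed=0):
    """Least-squares Monte Carlo price of an American put under GBM,
    with a quadratic polynomial regression for the continuation value."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                              + sigma * np.sqrt(dt) * z, axis=1))
    cash = np.maximum(K - S[:, -1], 0.0)      # payoff at maturity
    for t in range(n_steps - 2, -1, -1):
        cash *= np.exp(-r * dt)               # discount one step back
        itm = K - S[:, t] > 0                 # regress on in-the-money paths
        if itm.sum() > 3:
            x = S[itm, t]
            A = np.column_stack([np.ones_like(x), x, x**2])
            cont = A @ np.linalg.lstsq(A, cash[itm], rcond=None)[0]
            exercise = (K - x) > cont         # exercise beats continuation
            idx = np.where(itm)[0][exercise]
            cash[idx] = K - S[idx, t]
    return np.exp(-r * dt) * cash.mean()
```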


Feasible tomography schemes for large particle numbers must possess, besides an appropriate data-acquisition protocol, an efficient way to reconstruct the density operator from the observed finite data set. Since state reconstruction typically requires solving a nonlinear large-scale optimization problem, this is a major challenge in the design of scalable tomography schemes. Here we present an efficient state-reconstruction scheme for permutationally invariant quantum state tomography. It works for all common state-of-the-art reconstruction principles, including, in particular, the maximum-likelihood and least-squares methods, which are the preferred choices in today's experiments. This high efficiency is achieved by greatly reducing the dimensionality of the problem, employing a representation of permutationally invariant states known from spin coupling combined with convex optimization, which has clear advantages in speed, control, and accuracy over commonly employed numerical routines. First prototype implementations allow reconstruction of a 20-qubit state in a few minutes on a standard computer.
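As a toy illustration of least-squares state reconstruction (a single qubit only; the permutationally invariant machinery above is what makes the many-qubit case tractable), one can invert measured Pauli expectation values and project back onto the physical set with a simple clip-and-renormalize heuristic:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def ls_qubit_tomography(ex, ey, ez):
    """Toy least-squares reconstruction of a single-qubit density matrix
    from measured Pauli expectations: linear inversion
    rho = (I + ex*sx + ey*sy + ez*sz) / 2, then a clip-and-renormalize
    heuristic to restore positivity and unit trace if noise violates them."""
    rho = 0.5 * (I2 + ex * sx + ey * sy + ez * sz)
    w, V = np.linalg.eigh(rho)
    w = np.clip(w, 0.0, None)
    w = w / w.sum()
    return (V * w) @ V.conj().T
```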


Catches of skipjack tuna supporting major fisheries in parts of the western, central, and eastern Pacific Ocean have increased in recent years; it is therefore important to examine the dynamics of the fishery to determine man's effect on the abundance of the stocks. A general linear hypothesis model was developed to standardize fishing effort to a single vessel size and gear type. Standardized effort was then used to compute an index of abundance that accounts for seasonal variability in the fishing area. The indices of abundance were highly variable from year to year in both the northern and southern areas of the fishery but indicated a generally higher abundance in the south. Data from 438 fish tagged and recovered in the eastern Pacific Ocean were used to compute growth curves. A least-squares technique was used to estimate the parameters of the von Bertalanffy growth function. Two estimates of the parameters were made by analyzing the same data in different ways: for the first set of estimates, K = 0.819 on an annual instantaneous basis and L∞ = 729 mm; for the second, K = 0.431 and L∞ = 881 mm. These compared well with estimates derived using the Chapman-Richards growth function, which includes the von Bertalanffy function as a special case. It was concluded that the latter function provided an adequate empirical fit to the skipjack data, since the more complicated function did not significantly improve the fit. Tagging data from three cruises, involving 8852 releases and 1777 returns, were used to compute mortality rates during the time the fish were in the fishery. Two models were used in the analyses. The best estimates of the catchability coefficient (q) in the north and south were 8.4 × 10^-4 and 5.0 × 10^-5, respectively. The other loss rate (X), which included losses due to emigration, natural mortality, and mortality due to carrying a tag, was 0.14 on an annual instantaneous basis for both areas.
To detect the possible effect of fishing on abundance and total yield, the relations between abundance and effort and between total catch and effort were examined. At the levels of intensity observed in the fishery, fishing does not appear to have had any measurable effect on the stocks. It was therefore concluded that the total catch could probably be increased by substantially increasing total effort beyond the present level, and that the fluctuations in abundance are fishery-independent. The estimates of growth, mortality, and fishing effort were used to compute yield-per-recruitment isopleths for skipjack in both the northern and southern areas. For a size at first entry of about 425 mm, the yield per recruitment was calculated at 3 pounds in the north and 1.5 pounds in the south; in both areas it would be possible to increase the yield per recruitment by increasing fishing effort. It was not possible to assess the potential production of the skipjack stocks fished in the eastern Pacific, except to note that the fishery had not affected their abundance and that they were certainly under-exploited. It was concluded that the northern and southern stocks, especially the latter, could support increased harvests. (PDF contains 274 pages.)
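One classical way to fit the von Bertalanffy growth function by linear least squares is the Ford-Walford plot; a minimal sketch (illustrating the general technique, not necessarily the report's exact procedure):

```python
import numpy as np

def ford_walford(lengths):
    """Estimate von Bertalanffy parameters from lengths at equally spaced
    ages by linear least squares on a Ford-Walford plot:
    L(t+1) = L_inf*(1 - exp(-K)) + exp(-K)*L(t),
    so the slope gives K and the intercept gives L_inf."""
    Lt, Lt1 = lengths[:-1], lengths[1:]
    slope, intercept = np.polyfit(Lt, Lt1, 1)
    K = -np.log(slope)
    L_inf = intercept / (1.0 - slope)
    return L_inf, K
```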


The tension and compression of single-crystalline silicon nanowires (SiNWs) with different cross-sectional shapes are studied systematically using molecular dynamics simulation, and the shape effects on the yield stresses are characterized. For the same surface-to-volume ratio, circular cross-section SiNWs are stronger than square cross-section ones under tensile loading, but the reverse holds under compressive loading. With the atoms colored by the least-squares atomic local shear strain, the deformation processes reveal that the failure modes at incipient yielding depend on the loading direction: under tensile loading the SiNWs slip on {111} planes, whereas compressive loading leads them to slip on {110} planes. The present results are expected to contribute to the design of silicon devices in nanosystems.
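The least-squares atomic strain used for the coloring above is typically built from a per-atom deformation gradient fitted to neighbor vectors (Falk-Langer style); a minimal sketch of that fit, under the assumption of this standard formulation:

```python
import numpy as np

def local_deformation_gradient(d0, d):
    """Least-squares local deformation gradient: given reference neighbor
    vectors d0 (n x 3) and current neighbor vectors d (n x 3), find F
    minimizing sum_i ||d_i - F @ d0_i||^2. Solving d0 @ X = d gives X = F^T."""
    F = np.linalg.lstsq(d0, d, rcond=None)[0].T
    return F
```

The local shear strain then follows from the Green strain E = (F^T F - I)/2.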


Monthly estimates of the abundance of yellowfin tuna by age groups and regions within the eastern Pacific Ocean during 1970-1988 are made using purse-seine catch rates, length-frequency samples, and results from cohort analysis. The number of individuals of each age group caught in each logged purse-seine set is estimated using the tonnage from that set and the length-frequency distribution from the "nearest" length-frequency sample(s), where nearest refers to the closest sample(s) to the purse-seine set in time, distance, and set type (dolphin-associated, floating-object-associated, skipjack-associated, none of these, and some combinations). Catch rates are initially calculated as the estimated number of individuals of the age group caught per hour of searching. Then, to remove the effects of set type and vessel speed, they are standardized using separate weighted generalized linear models for each age group. The standardized catch rates at the center of each 2.5° quadrangle-month are estimated using locally weighted least-squares regressions on latitude, longitude, and date, and then combined into larger regions. Catch rates within these regions are converted to numbers of yellowfin using the mean age composition from cohort analysis. The variances of the abundance estimates within regions are large for 0-, 1-, and 5-year-olds, but small for 1.5- to 4-year-olds, except during periods of low fishing activity. Mean annual catch-rate estimates for the entire eastern Pacific Ocean are significantly positively correlated with mean abundance estimates from cohort analysis for age groups ranging from 1.5 to 4 years old. Catch-rate indices of abundance by age are expected to be useful in conjunction with data on reproductive biology to estimate total egg production within regions. The estimates may also be useful in understanding geographic and temporal variations in age-specific availability to purse seiners, as well as age-specific movements. (PDF contains 35 pages.)
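A locally weighted least-squares regression of the kind used for the quadrangle-month smoothing can be sketched in one dimension (the tricube kernel and bandwidth are illustrative choices; the report regresses on latitude, longitude, and date):

```python
import numpy as np

def loess_point(x, y, x0, bandwidth=1.0):
    """Locally weighted linear least squares: estimate the regression value
    at x0 by fitting a line to nearby points with tricube weights."""
    u = np.abs(x - x0) / bandwidth
    w = np.where(u < 1.0, (1.0 - u**3)**3, 0.0)   # tricube kernel
    sw = np.sqrt(w)                                # weighted LS via sqrt weights
    A = np.column_stack([np.ones_like(x), x - x0])
    beta, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return beta[0]                                 # local intercept = fit at x0
```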