981 results for Generalized Least-squares


Relevance: 80.00%

Abstract:

Model-based image reconstruction approaches in photoacoustic tomography have a distinct advantage over traditional analytical methods in cases where only limited data are available. These methods typically deploy a Tikhonov-based regularization scheme to reconstruct the initial pressure from the boundary acoustic data. The model resolution in these cases represents the blur induced by the regularization scheme. A method that utilizes this blurring model and performs basis-pursuit deconvolution to improve the quantitative accuracy of the reconstructed photoacoustic image is proposed, and is shown to be superior to traditional methods in three numerical experiments. Moreover, this deconvolution, including the building of an approximate blur matrix, is achieved via Lanczos bidiagonalization (least-squares QR), making the approach attractive for real-time use. (C) 2014 Optical Society of America
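The LSQR machinery the abstract refers to is available in SciPy. Below is a minimal sketch of Tikhonov-regularized reconstruction via LSQR (Lanczos bidiagonalization); the dense random system matrix and noise level are illustrative stand-ins, not the paper's photoacoustic forward model:

```python
# Minimal sketch: Tikhonov-regularized least squares solved with LSQR
# (Lanczos bidiagonalization). A is a hypothetical forward model, not
# an actual photoacoustic propagation matrix.
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 400))               # illustrative forward model
x_true = np.zeros(400)
x_true[rng.choice(400, 10, replace=False)] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(200)  # noisy boundary data

# lsqr's `damp` parameter solves min ||Ax - b||^2 + damp^2 * ||x||^2,
# i.e. a Tikhonov-regularized least-squares problem.
x_rec = lsqr(A, b, damp=0.1)[0]
```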

Relevance: 80.00%

Abstract:

Time-varying linear prediction has been studied in the context of speech signals, in which the autoregressive (AR) coefficients of the system function are modeled as a linear combination of a set of known bases. Traditionally, least-squares minimization is used to estimate the model parameters of the system. Motivated by the sparse nature of the excitation signal for voiced sounds, we explore time-varying linear prediction modeling of speech signals using sparsity constraints. Parameter estimation is posed as an ℓ0-norm minimization problem, and the reweighted ℓ1-norm minimization technique is used to estimate the model parameters. We show that for sparsely excited time-varying systems, this formulation models the underlying system function better than the least-squares error minimization approach. Evaluation with synthetic and real speech examples shows that the estimated model parameters track the formant trajectories more closely than the least-squares approach.
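A minimal sketch of the reweighted ℓ1 idea, using the standard inverse-magnitude weight update; the use of scikit-learn's Lasso and the parameter values are illustrative assumptions, not the paper's formulation for time-varying AR coefficients:

```python
# Minimal sketch of reweighted l1 minimization: a weighted l1 problem is
# solved repeatedly, with weights updated to approximate l0 behavior.
import numpy as np
from sklearn.linear_model import Lasso

def reweighted_l1(A, b, n_iter=5, eps=1e-3, alpha=1e-3):
    w = np.ones(A.shape[1])
    for _ in range(n_iter):
        # weighted l1 via column rescaling: substituting x_i = z_i / w_i
        # turns sum(w_i * |x_i|) into a plain l1 penalty on z
        z = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000).fit(A / w, b).coef_
        x = z / w
        w = 1.0 / (np.abs(x) + eps)  # small coefficients get penalized harder
    return x
```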

Relevance: 80.00%

Abstract:

Consider N points in R^d and M local coordinate systems that are related through unknown rigid transforms. For each point, we are given (possibly noisy) measurements of its local coordinates in some of the coordinate systems. Alternatively, for each coordinate system, we observe the coordinates of a subset of the points. The problem of estimating the global coordinates of the N points (up to a rigid transform) from such measurements comes up in distributed approaches to molecular conformation and sensor network localization, and also in computer vision and graphics. The least-squares formulation of this problem, although nonconvex, has a well-known closed-form solution when M = 2 (based on the singular value decomposition (SVD)). However, no closed-form solution is known for M ≥ 3. In this paper, we demonstrate how the least-squares formulation can be relaxed into a convex program, namely, a semidefinite program (SDP). By setting up connections between the uniqueness of this SDP and results from rigidity theory, we prove conditions for exact and stable recovery for the SDP relaxation. In particular, we prove that the SDP relaxation can guarantee recovery under more adversarial conditions than earlier proposed spectral relaxations, and we derive error bounds for the registration error incurred by the SDP relaxation. We also present results of numerical experiments on simulated data to confirm the theoretical findings. We empirically demonstrate that (a) unlike the spectral relaxation, the relaxation gap is mostly zero for the SDP (i.e., we are able to solve the original nonconvex least-squares problem) up to a certain noise threshold, and (b) the SDP performs significantly better than spectral and manifold-optimization methods, particularly at large noise levels.
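For reference, the closed-form M = 2 solution mentioned above is the classical SVD-based rigid registration (orthogonal Procrustes/Kabsch); a minimal sketch:

```python
# Minimal sketch of SVD-based rigid alignment of two corresponding
# point sets (the closed-form M = 2 case).
import numpy as np

def rigid_align(P, Q):
    """Return (R, t) minimizing ||R @ P + t - Q||_F; P, Q are d x N."""
    p_mean = P.mean(axis=1, keepdims=True)
    q_mean = Q.mean(axis=1, keepdims=True)
    U, _, Vt = np.linalg.svd((Q - q_mean) @ (P - p_mean).T)
    # force a proper rotation (det = +1) rather than a reflection
    D = np.diag([1.0] * (P.shape[0] - 1) + [np.linalg.det(U @ Vt)])
    R = U @ D @ Vt
    t = q_mean - R @ p_mean
    return R, t
```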

Relevance: 80.00%

Abstract:

Compressive sensing (CS) theory combines signal sampling and compression for sparse signals, resulting in a reduced sampling rate. In recent years, many recovery algorithms have been proposed to reconstruct the signal efficiently; Subspace Pursuit and Compressive Sampling Matching Pursuit are two popular greedy methods. Fusion of Algorithms for Compressed Sensing is a recently proposed method in which several CS reconstruction algorithms participate and the final estimate of the underlying sparse signal is determined by fusing the estimates obtained from the participating algorithms. All these methods involve solving a least-squares problem, which may be ill-conditioned, especially in the low-dimensional measurement regime. In this paper, we propose a step prior to least squares that ensures the well-conditioning of the least-squares problem. Using Monte Carlo simulations, we show that in the low-dimensional measurement scenario, this modification improves the reconstruction capability of the algorithm for both clean and noisy measurements.
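The least-squares step common to these greedy methods restricts the measurement matrix to an estimated support; a minimal sketch of that step, with a conditioning check (the threshold is an illustrative assumption, not the paper's specific pre-conditioning step):

```python
# Minimal sketch: the restricted least-squares solve used inside greedy
# CS recovery, with a diagnostic conditioning check.
import numpy as np

def restricted_ls(A, y, support):
    As = A[:, support]                 # columns on the estimated support
    cond = np.linalg.cond(As)
    if cond > 1e8:                     # hypothetical threshold
        print(f"warning: ill-conditioned subproblem (cond = {cond:.2e})")
    x_s = np.linalg.lstsq(As, y, rcond=None)[0]
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x
```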

Relevance: 80.00%

Abstract:

Event-triggered sampling (ETS) is a new approach to efficient signal analysis. The goal of ETS need not be only signal reconstruction; it can also be direct estimation of desired information in the signal through skillful design of the event. We show the promise of the ETS approach for better analysis of oscillatory non-stationary signals modeled by a time-varying sinusoid, compared to existing uniform Nyquist-rate-sampling-based signal processing. We examine samples drawn using ETS, with zero-crossings (ZC), level-crossings (LC), and extrema as events, under additive in-band noise and jitter in the detection instant. We find that extrema samples are robust and also facilitate instantaneous-amplitude (IA) and instantaneous-frequency (IF) estimation for a time-varying sinusoid. The estimation is proposed solely from extrema samples, using a local-polynomial-regression-based least-squares fitting approach. The proposed approach shows improvement, for noisy signals, over the widely used analytic-signal, energy-separation, and ZC-based approaches (which rely on uniform Nyquist-rate data acquisition and processing). Further, extrema-based ETS in general gives a sub-sampled representation (relative to the Nyquist rate) of a time-varying sinusoid. For the same data-set size captured with extrema-based ETS and uniform sampling, the former gives much better IA and IF estimation. (C) 2015 Elsevier B.V. All rights reserved.
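A minimal sketch of the local-polynomial least-squares fit to nonuniform (extrema) samples described above; the window half-width and polynomial degree are illustrative assumptions:

```python
# Minimal sketch: local polynomial regression on nonuniformly spaced
# samples, evaluated at a query time t_eval.
import numpy as np

def local_poly_fit(t_samples, x_samples, t_eval, half_width=0.05, degree=2):
    mask = np.abs(t_samples - t_eval) <= half_width  # samples in the window
    if mask.sum() < degree + 1:
        raise ValueError("not enough samples in the window")
    coeffs = np.polyfit(t_samples[mask], x_samples[mask], degree)
    return np.polyval(coeffs, t_eval)
```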

Relevance: 80.00%

Abstract:

For a multilayered specimen, the back-scattered signal in frequency-domain optical coherence tomography (FDOCT) is expressible as a sum of cosines, each corresponding to a change of refractive index in the specimen, and each cosine represents a peak in the reconstructed tomogram. We consider a truncated cosine-series representation of the signal, with the constraint that the coefficients in the basis expansion be sparse. An ℓ2 (sum-of-squared-errors) data term is considered with an ℓ1 (sum-of-absolute-values) constraint on the coefficients. The optimization problem is solved using Weiszfeld's iteratively reweighted least-squares (IRLS) algorithm. On real FDOCT data, improved results are obtained over the standard reconstruction technique, with lower background measurement noise and fewer artifacts owing to the strong ℓ1 penalty. Previous sparse tomogram reconstruction techniques in the literature proposed collecting sparse samples, necessitating a change in the data-capture process conventionally used in FDOCT; the IRLS-based method proposed in this paper does not suffer from this drawback.
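A minimal sketch of IRLS for an ℓ1-penalized least-squares problem, in the spirit of the method described (a generic quadratic reweighting rather than necessarily the exact Weiszfeld update; the penalty weight and iteration count are illustrative):

```python
# Minimal sketch of IRLS for min ||Ax - b||_2^2 + lam * ||x||_1,
# using the smooth surrogate |x_i| ~ x_i^2 / (|x_i_old| + eps).
import numpy as np

def irls_l1(A, b, lam=0.1, n_iter=30, eps=1e-6):
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(n_iter):
        W = np.diag(lam / (np.abs(x) + eps))       # reweighting of the penalty
        x = np.linalg.solve(A.T @ A + W, A.T @ b)  # weighted (ridge-like) solve
    return x
```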

Relevance: 80.00%

Abstract:

We address the problem of separating a speech signal into its excitation and vocal-tract filter components, which falls within the framework of blind deconvolution. Typically, the excitation in the case of voiced speech is assumed to be sparse and the vocal-tract filter stable. We develop an alternating ℓp-ℓ2 projections algorithm (ALPA) to perform deconvolution taking these constraints into account. The algorithm is iterative and alternates between two solution spaces. The initialization is based on the standard linear-prediction decomposition of a speech signal into an autoregressive filter and a prediction residue. In every iteration, a sparse excitation is estimated by optimizing an ℓp-norm-based cost, and the vocal-tract filter is derived as the solution to a standard least-squares minimization problem. We validate the algorithm on voiced segments of natural speech signals and show applications to epoch estimation. We also present comparisons with state-of-the-art techniques and show that ALPA gives a sparser impulse-like excitation, where the impulses directly denote the epochs, or instants of significant excitation.
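The linear-prediction initialization mentioned above amounts to a least-squares fit of an all-pole filter; a minimal sketch (the model order is an illustrative choice):

```python
# Minimal sketch: linear-prediction decomposition into AR coefficients
# and a prediction residue, via ordinary least squares.
import numpy as np

def lp_decompose(x, order=10):
    N = len(x)
    # regression matrix whose k-th column holds the lag-(k+1) samples
    X = np.column_stack([x[order - k - 1: N - k - 1] for k in range(order)])
    a = np.linalg.lstsq(X, x[order:], rcond=None)[0]  # AR coefficients
    e = x[order:] - X @ a                             # residue (excitation estimate)
    return a, e
```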

Relevance: 80.00%

Abstract:

Local polynomial approximation of data is one approach to signal denoising. Savitzky-Golay (SG) filters are finite-impulse-response kernels that are convolved with the data to produce a polynomial approximation for a chosen set of filter parameters. For noise following Gaussian statistics, minimizing the mean-squared error (MSE) between the noisy signal and its polynomial approximation is optimal in the maximum-likelihood (ML) sense, but the MSE criterion is not optimal under non-Gaussian noise. In this paper, we robustify the SG filter for applications involving noise with a heavy-tailed distribution. The optimal filtering criterion is achieved by ℓ1-norm minimization of the error through the iteratively reweighted least-squares (IRLS) technique. It is interesting to note that at any stage of the iteration we solve a weighted SG filter by minimizing an ℓ2 norm, yet the process converges to the ℓ1-minimized output. The results show consistent improvement over the standard SG filter.
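A minimal sketch of the per-window computation behind such a robustified fit: an ℓ1 polynomial regression obtained by IRLS, where each iteration is a weighted ℓ2 solve (window contents and degree are illustrative):

```python
# Minimal sketch: l1 (robust) local polynomial fit via IRLS; every
# iteration solves a weighted least-squares problem.
import numpy as np

def l1_poly_fit(t, y, degree=2, n_iter=20, eps=1e-8):
    V = np.vander(t, degree + 1)              # polynomial design matrix
    c = np.linalg.lstsq(V, y, rcond=None)[0]  # l2 initialization
    for _ in range(n_iter):
        sw = np.sqrt(1.0 / (np.abs(y - V @ c) + eps))  # inverse-residual weights
        c = np.linalg.lstsq(V * sw[:, None], sw * y, rcond=None)[0]
    return c
```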

Relevance: 80.00%

Abstract:

It was demonstrated in earlier work that, by approximating its range kernel using shiftable functions, the nonlinear bilateral filter can be computed using a series of fast convolutions. Previous approaches based on shiftable approximation have, however, been restricted to Gaussian range kernels. In this work, we propose a novel approximation that can be applied to any range kernel, provided it has a pointwise-convergent Fourier series. More specifically, we propose to approximate the Gaussian range kernel of the bilateral filter using a Fourier basis, where the coefficients of the basis are obtained by solving a series of least-squares problems. The coefficients can be computed efficiently using a recursive form of the QR decomposition. By controlling the cardinality of the Fourier basis, we can obtain a good tradeoff between run-time and filtering accuracy. In particular, we are able to guarantee subpixel accuracy for the overall filtering, which is not provided by most existing methods for fast bilateral filtering. We present simulation results to demonstrate the speed and accuracy of the proposed algorithm.
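A minimal sketch of the coefficient-fitting idea: least-squares projection of a Gaussian range kernel onto a truncated cosine basis (the kernel width, period, and basis size are illustrative; the paper's recursive QR update is not reproduced here):

```python
# Minimal sketch: fit cosine-basis coefficients to a Gaussian range
# kernel on [-T, T] by ordinary least squares.
import numpy as np

def fit_fourier_coeffs(sigma=30.0, T=255.0, K=8, n_grid=2048):
    s = np.linspace(-T, T, n_grid)
    g = np.exp(-s**2 / (2 * sigma**2))                 # Gaussian range kernel
    B = np.cos(np.outer(s, np.arange(K)) * np.pi / T)  # cosine basis
    c = np.linalg.lstsq(B, g, rcond=None)[0]
    return c   # kernel(s) ~ sum_k c[k] * cos(k * pi * s / T)
```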

Relevance: 80.00%

Abstract:

This paper deals with the valuation of energy assets related to natural gas. In particular, we evaluate a baseload natural gas combined cycle (NGCC) power plant and an ancillary installation, namely a liquefied natural gas (LNG) facility, in a realistic setting; specifically, these investments enjoy a long useful life but require non-negligible time to build. We then focus on the valuation of several investment options, again in a realistic setting. These include the option to invest in the power plant when there is uncertainty concerning the initial outlay, the option's time to maturity, or the cost of CO2 emission permits, or when there is a chance to double the plant size in the future. Our model comprises three sources of risk: gas prices are uncertain with regard to both the current level and the long-run equilibrium level, and the current electricity price is also uncertain; all three are assumed to show mean reversion. The two-factor model for the natural gas price is calibrated using data from NYMEX NG futures contracts, and the one-factor model for the electricity price is calibrated using data from the Spanish wholesale electricity market. We then use the estimated parameter values alongside actual physical parameters from a case study to value natural gas plants. Finally, the calibrated parameters are used in a Monte Carlo simulation framework to evaluate several American-type options to invest in these energy assets, following the least-squares Monte Carlo approach.
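The least-squares Monte Carlo approach referred to is the Longstaff-Schwartz regression scheme; a minimal sketch for a plain American put on a single geometric-Brownian-motion factor (all parameters illustrative, not the paper's calibrated mean-reverting gas and electricity models):

```python
# Minimal sketch of least-squares Monte Carlo (Longstaff-Schwartz):
# regress continuation values on in-the-money paths, stepping backward.
import numpy as np

def lsm_american_put(S0=100., K=100., r=0.05, sigma=0.2, T=1.0,
                     steps=50, paths=20000):
    rng = np.random.default_rng(0)
    dt = T / steps
    Z = rng.standard_normal((paths, steps))
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                              + sigma * np.sqrt(dt) * Z, axis=1))
    cash = np.maximum(K - S[:, -1], 0.0)      # exercise value at maturity
    for t in range(steps - 2, -1, -1):
        cash *= np.exp(-r * dt)               # discount one step back
        itm = K - S[:, t] > 0                 # in-the-money paths only
        if itm.any():
            X = np.vander(S[itm, t], 3)       # quadratic regression basis
            beta = np.linalg.lstsq(X, cash[itm], rcond=None)[0]
            exercise = K - S[itm, t]
            ex_now = exercise > X @ beta      # exercise beats continuation
            idx = np.where(itm)[0][ex_now]
            cash[idx] = exercise[ex_now]
    return float(np.exp(-r * dt) * cash.mean())
```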

Relevance: 80.00%

Abstract:

In this paper we pursue a twofold objective. First, we study the influence of employees' affective commitment, as perceived by the manager, both on the manager's level of trust and on organizational learning capability (OLC). We likewise analyze how this propensity of the manager to trust employees influences OLC. Second, we examine whether employees' affective commitment as perceived by the manager, the manager's trust, and OLC foster product innovation. Applying structural equation models (Partial Least Squares, PLS) to a sample of 92 firms from innovative Spanish sectors, we conclude that the affective commitment the manager perceives in employees determines the manager's level of trust in them. Both variables (employees' affective commitment as perceived by the manager and the manager's trust) in turn explain OLC. Finally, we verify that the two factors considered (perceived affective commitment and trust) influence product innovation through OLC.
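As a loose illustration of the projection idea underlying PLS, a minimal sketch with scikit-learn; note that the paper uses PLS structural equation modeling (PLS-SEM), which differs from plain PLS regression, and the data here are synthetic:

```python
# Minimal sketch: partial least squares regression on synthetic
# indicator data (PLS-SEM as used in the paper is a related but
# distinct procedure).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.standard_normal((92, 6))                          # hypothetical survey indicators
y = X[:, :2].sum(axis=1) + 0.1 * rng.standard_normal(92)  # latent-driven outcome

pls = PLSRegression(n_components=2).fit(X, y)
print(pls.score(X, y))   # fraction of variance in y explained
```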

Relevance: 80.00%

Abstract:

Otoliths commonly are used to determine the taxon, age, and size of fishes. This information is useful for population management, predator-prey studies, and archaeological research. The relationship between the length of a fish and the length of its otoliths remains unknown for many species of marine fishes in the Pacific Ocean. Therefore, the relationships between fish length and fish weight, and between otolith length and fish length, were developed for 63 species of fishes caught in the eastern North Pacific Ocean. We also summarized similar relationships for 46 eastern North Pacific fish species reported in the literature. The relationship between fish length and otolith length was linear, and most of the variability was explained by a simple least-squares regression (r² > 0.700 for 45 of 63 species). The relationship between otolith length and fish length was not significantly different between left and right otoliths for all but one fish species. Images of otoliths from 77 taxa are included to assist in the identification of species. (PDF file contains 38 pages.)
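A minimal sketch of the simple least-squares regression relating otolith length to fish length, with the coefficient of determination computed as reported above; the data values are purely illustrative:

```python
# Minimal sketch: simple linear regression of fish length on otolith
# length, with r^2. Measurements below are made up for illustration.
import numpy as np

otolith_mm = np.array([3.1, 4.0, 4.8, 5.5, 6.2])
fish_mm = np.array([120., 165., 210., 240., 280.])

slope, intercept = np.polyfit(otolith_mm, fish_mm, 1)
pred = slope * otolith_mm + intercept
r2 = 1 - np.sum((fish_mm - pred)**2) / np.sum((fish_mm - fish_mm.mean())**2)
print(f"fish_length = {slope:.1f} * otolith_length + {intercept:.1f}, r^2 = {r2:.3f}")
```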

Relevance: 80.00%

Abstract:

Feasible tomography schemes for large particle numbers must possess, besides an appropriate data acquisition protocol, an efficient way to reconstruct the density operator from the observed finite data set. Since state reconstruction typically requires the solution of a nonlinear large-scale optimization problem, this is a major challenge in the design of scalable tomography schemes. Here we present an efficient state reconstruction scheme for permutationally invariant quantum state tomography. It works for all common state-of-the-art reconstruction principles, including, in particular, maximum-likelihood and least-squares methods, which are the preferred choices in today's experiments. This high efficiency is achieved by greatly reducing the dimensionality of the problem, employing a particular representation of permutationally invariant states known from spin coupling, combined with convex optimization, which has clear advantages regarding speed, control, and accuracy in comparison to commonly employed numerical routines. First prototype implementations easily allow reconstruction of a state of 20 qubits in a few minutes on a standard computer.
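A minimal sketch of constrained least-squares state reconstruction on a toy two-qubit system using cvxpy; the random observables are illustrative, and the permutationally invariant dimensionality reduction that makes the paper's scheme scale to 20 qubits is not shown:

```python
# Minimal sketch: least-squares density-matrix reconstruction with
# positivity and unit-trace constraints (toy problem, d = 4).
import numpy as np
import cvxpy as cp

d = 4
rng = np.random.default_rng(0)
Ms = []
for _ in range(16):
    A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    Ms.append((A + A.conj().T) / 2)               # random Hermitian observables

rho_true = np.eye(d) / d                          # maximally mixed test state
f = np.array([np.real(np.trace(M @ rho_true)) for M in Ms])  # ideal data

rho = cp.Variable((d, d), hermitian=True)
pred = cp.hstack([cp.real(cp.trace(M @ rho)) for M in Ms])
prob = cp.Problem(cp.Minimize(cp.sum_squares(pred - f)),
                  [rho >> 0, cp.real(cp.trace(rho)) == 1])
prob.solve()
```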

Relevance: 80.00%

Abstract:

Catches of skipjack tuna supporting major fisheries in parts of the western, central, and eastern Pacific Ocean have increased in recent years; it is therefore important to examine the dynamics of the fishery to determine man's effect on the abundance of the stocks. A general linear hypothesis model was developed to standardize fishing effort to a single vessel size and gear type. Standardized effort was then used to compute an index of abundance that accounts for seasonal variability in the fishing area. The indices of abundance were highly variable from year to year in both the northern and southern areas of the fishery but indicated a generally higher abundance in the south.

Data from 438 fish tagged and recovered in the eastern Pacific Ocean were used to compute growth curves. A least-squares technique was used to estimate the parameters of the von Bertalanffy growth function. Two estimates of the parameters were made by analyzing the same data in different ways. For the first set of estimates, K = 0.819 on an annual instantaneous basis and L∞ = 729 mm; for the second, K = 0.431 and L∞ = 881 mm. These compared well with estimates derived using the Chapman-Richards growth function, which includes the von Bertalanffy function as a special case. It was concluded that the latter function provided an adequate empirical fit to the skipjack data, since the more complicated function did not significantly improve the fit.

Tagging data from three cruises, involving 8852 releases and 1777 returns, were used to compute mortality rates during the time the fish were in the fishery. Two models were used in the analyses. The best estimates of the catchability coefficient (q) in the north and south were 8.4 × 10⁻⁴ and 5.0 × 10⁻⁵, respectively. The other loss rate (X), which included losses due to emigration, natural mortality, and mortality due to carrying a tag, was 0.14 on an annual instantaneous basis for both areas.

To detect the possible effect of fishing on abundance and total yield, the relations between abundance and effort and between total catch and effort were examined. At the levels of intensity observed in the fishery, fishing does not appear to have had any measurable effect on the stocks. It was therefore concluded that the total catch could probably be increased by substantially increasing total effort beyond the present level, and that the fluctuations in abundance are fishery-independent.

The estimates of growth, mortality, and fishing effort were used to compute yield-per-recruitment isopleths for skipjack in both the northern and southern areas. For a size at first entry of about 425 mm, the yield per recruitment was calculated at 3 pounds in the north and 1.5 pounds in the south. In both areas it would be possible to increase the yield per recruitment by increasing fishing effort. It was not possible to assess the potential production of the skipjack stocks fished in the eastern Pacific, except to note that the fishery had not affected their abundance and that they were certainly under-exploited. It was concluded that the northern and southern stocks could support increased harvests, especially the latter. (PDF contains 274 pages.)
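A minimal sketch of the least-squares fit of the von Bertalanffy growth function L(t) = L∞(1 - exp(-K(t - t₀))); the age-length data below are illustrative, not the 438 tag returns analyzed in the paper:

```python
# Minimal sketch: nonlinear least-squares fit of the von Bertalanffy
# growth function to (age, length) data.
import numpy as np
from scipy.optimize import curve_fit

def von_bertalanffy(t, L_inf, K, t0):
    return L_inf * (1.0 - np.exp(-K * (t - t0)))

t = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0])        # age in years (made up)
L = np.array([280., 420., 520., 590., 680., 720.])  # length in mm (made up)
(L_inf, K, t0), _ = curve_fit(von_bertalanffy, t, L, p0=(800., 0.5, 0.))
print(f"L_inf = {L_inf:.0f} mm, K = {K:.3f}, t0 = {t0:.2f}")
```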

Relevance: 80.00%

Abstract:

The tension and compression of single-crystalline silicon nanowires (SiNWs) with different cross-sectional shapes are studied systematically using molecular dynamics simulation, and the shape effects on the yield stresses are characterized. For the same surface-to-volume ratio, circular cross-sectional SiNWs are stronger than square cross-sectional ones under tensile loading, but the reverse holds under compressive loading. With the atoms colored by the least-squares atomic local shear strain, the deformation processes reveal that the failure modes of incipient yielding depend on the loading direction: under tensile loading the SiNWs slip on {111} surfaces, whereas compressive loading leads them to slip on {110} surfaces. The present results are expected to contribute to the design of silicon devices in nanosystems.
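A minimal sketch of the least-squares atomic strain idea (in the Falk-Langer spirit): fit a local deformation gradient to neighbor bond vectors by least squares and extract a shear invariant; the conventions and the invariant chosen here are illustrative, not necessarily the paper's exact definition:

```python
# Minimal sketch: per-atom least-squares deformation gradient from
# neighbor bond vectors, and a von Mises-like local shear measure.
import numpy as np

def local_shear_strain(d0, d1):
    """d0, d1: (n_neighbors, 3) bond vectors before/after deformation."""
    F = np.linalg.lstsq(d0, d1, rcond=None)[0]  # minimizes sum ||d1_i - d0_i @ F||^2
    E = 0.5 * (F @ F.T - np.eye(3))             # Green-Lagrangian strain
    dev = E - (np.trace(E) / 3.0) * np.eye(3)   # deviatoric (shear) part
    return float(np.sqrt(np.sum(dev * dev)))
```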