900 results for Incorrect Generalized Least Squares


Relevance:

100.00%

Publisher:

Abstract:

A generalized technique is proposed for modeling the effects of process variations on dynamic power by directly relating variations in process parameters to variations in the dynamic power of a digital circuit. The dynamic power of a 2-input NAND gate is characterized by mixed-mode simulations, to be used as a library element for a 65 nm gate-length technology. The proposed methodology is demonstrated on a multiplier circuit built from the NAND-gate library, by characterizing its dynamic power through Monte Carlo analysis. The statistical techniques of Response Surface Methodology (RSM), using Design of Experiments (DOE) and the Least Squares Method (LSM), are employed to generate a "hybrid model" for gate power that accounts for simultaneous variations in multiple process parameters. We demonstrate that our hybrid-model-based statistical design approach yields considerable savings in the power budget of low-power CMOS designs, with an error of less than 1% and a reduction in uncertainty of at least 6X on a normalized basis relative to worst-case design.
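
As a rough illustration of the least-squares step in such a response-surface fit, the following Python sketch fits a second-order model of power to two synthetic, normalized process parameters; the variable names, coefficients, and noise levels are illustrative assumptions, not values from the paper.

import numpy as np

rng = np.random.default_rng(0)
n = 200
dL = rng.normal(0.0, 1.0, n)    # normalized gate-length variation (assumed)
dVt = rng.normal(0.0, 1.0, n)   # normalized threshold-voltage variation (assumed)
# hypothetical "true" power response with an interaction term and noise
power = 1.0 + 0.08 * dL - 0.05 * dVt + 0.02 * dL * dVt + rng.normal(0, 0.005, n)

# second-order response-surface design matrix (DOE-style terms)
X = np.column_stack([np.ones(n), dL, dVt, dL * dVt, dL**2, dVt**2])
coef, *_ = np.linalg.lstsq(X, power, rcond=None)
print("fitted RSM coefficients:", np.round(coef, 4))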

Relevance:

100.00%

Publisher:

Abstract:

This thesis studies the interest-rate policy of the ECB by estimating monetary policy rules using real-time data and central bank forecasts. The aim of the estimations is to characterize a decade of common monetary policy and to examine how different models perform at this task. The estimated rules include contemporaneous Taylor rules, forward-looking Taylor rules, nonlinear rules, and forecast-based rules. The nonlinear models allow for the possibility of zone-like preferences and an asymmetric response to key variables. The models therefore encompass the most popular sub-group of simple models used for policy analysis as well as the more unusual nonlinear approach. In addition to the empirical work, this thesis also contains a more general discussion of monetary policy rules, mostly from a New Keynesian perspective, including an overview of notable related studies, optimal policy, policy gradualism, and several other related subjects. The regressions are estimated with either least squares or the generalized method of moments, depending on the requirements of the estimation. The estimations use data from both the Euro Area Real-Time Database and the central bank forecasts published in ECB Monthly Bulletins; these sources represent some of the best data available for this kind of analysis. The main results are that forward-looking behavior appears highly prevalent, but that standard forward-looking Taylor rules offer only ambivalent results with regard to inflation. Nonlinear models are shown to work, but do not have a strong rationale over a simpler linear formulation. The forecasts, however, appear highly useful in characterizing policy and may offer the most accurate depiction of a predominantly forward-looking central bank: in particular, the inflation response appears much stronger, while the output response becomes highly forward-looking as well.
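
For readers unfamiliar with the baseline model, the sketch below estimates a contemporaneous Taylor rule i_t = a + b*pi_t + c*x_t by ordinary least squares on synthetic series; the coefficients and data are stand-ins, not the thesis's estimates.

import numpy as np

rng = np.random.default_rng(1)
T = 120
inflation = 2.0 + rng.normal(0, 0.5, T)     # pi_t (synthetic)
output_gap = rng.normal(0, 1.0, T)          # x_t (synthetic)
# hypothetical policy rate generated by an assumed rule plus noise
rate = 1.0 + 1.5 * inflation + 0.5 * output_gap + rng.normal(0, 0.2, T)

X = np.column_stack([np.ones(T), inflation, output_gap])
beta, *_ = np.linalg.lstsq(X, rate, rcond=None)
print("intercept, inflation response, output response:", np.round(beta, 3))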

Relevance:

100.00%

Publisher:

Abstract:

The problem of estimating multiple Carrier Frequency Offsets (CFOs) in the uplink of MIMO-OFDM systems with Co-Channel (CC) and OFDMA-based carrier allocation is considered. A trilinear data model for a generalized multiuser OFDM system is formulated. A novel blind subspace-based estimator of multiple CFOs, based on the Khatri-Rao product, is proposed for arbitrary carrier allocation schemes in OFDMA systems and for CC users in OFDM systems. The method works where the conventional subspace method fails. The performance of the proposed methods is compared with a pilot-based least-squares method.
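
The Khatri-Rao (column-wise Kronecker) product at the heart of the proposed subspace method can be computed as in the following sketch; the matrix dimensions are arbitrary illustrations.

import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product of A (m x r) and B (n x r) -> (m*n x r)."""
    assert A.shape[1] == B.shape[1]
    return np.einsum('ir,jr->ijr', A, B).reshape(A.shape[0] * B.shape[0], A.shape[1])

A = np.arange(6).reshape(3, 2).astype(float)
B = np.arange(8).reshape(4, 2).astype(float)
print(khatri_rao(A, B).shape)  # (12, 2): each column is kron(A[:, r], B[:, r])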

Relevance:

100.00%

Publisher:

Abstract:

A chlorine residual must be maintained at all points of a distribution system that uses chlorine as a disinfectant. The propagation and level of chlorine in a distribution system are affected by both bulk and pipe-wall reactions, and field determination of the wall reaction parameter is known to be difficult. The source strength of chlorine required to maintain a specified chlorine residual at a target node is also an important parameter. The inverse model presented in the paper determines these water quality parameters, associated with different reaction kinetics, either in single pipes or in groups of pipes. A weighted-least-squares method based on the Gauss-Newton minimization technique is used to estimate the parameters. The validation and application of the inverse model are illustrated with an example pipe distribution system under steady state. A generalized procedure to handle noisy and bad (abnormal) data is suggested, which allows these parameters to be estimated more accurately. The developed inverse model is useful for water supply agencies to calibrate their water distribution systems and to improve their operational strategies for maintaining water quality.
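
As a simplified illustration of the weighted-least-squares/Gauss-Newton machinery, the sketch below estimates a single first-order decay coefficient from noisy chlorine measurements; the one-pipe model, weights, and data are assumptions standing in for the full network inverse model.

import numpy as np

t = np.linspace(0.0, 24.0, 13)            # hours
C0, k_true = 1.2, 0.15                    # assumed source concentration and decay
rng = np.random.default_rng(2)
C_obs = C0 * np.exp(-k_true * t) + rng.normal(0, 0.01, t.size)
w = np.ones_like(t)                       # measurement weights (uniform here)

k = 0.05                                  # initial guess for the decay coefficient
for _ in range(20):
    r = C_obs - C0 * np.exp(-k * t)       # residuals of the first-order model
    J = C0 * t * np.exp(-k * t)           # derivative of the residual w.r.t. k
    # Gauss-Newton step minimizing the weighted sum of squared residuals
    step = (w * J * r).sum() / (w * J * J).sum()
    k -= step
    if abs(step) < 1e-10:
        break
print("estimated decay coefficient k:", round(k, 4))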

Relevance:

100.00%

Publisher:

Abstract:

Purpose: To develop a computationally efficient automated method for the optimal choice of regularization parameter in diffuse optical tomography. Methods: The least-squares QR (LSQR)-type method that uses Lanczos bidiagonalization is known to be computationally efficient in performing the reconstruction procedure in diffuse optical tomography. Here it is deployed within an optimization procedure that uses the simplex method to find the optimal regularization parameter. The proposed LSQR-type method is compared with traditional methods such as the L-curve, generalized cross-validation (GCV), and the recently proposed minimal residual method (MRM)-based choice of regularization parameter, using numerical and experimental phantom data. Results: The results indicate that the proposed LSQR-type and MRM-based methods perform similarly in terms of reconstructed image quality, and both are superior to the L-curve and GCV-based methods. The computational complexity of the proposed method is at least five times lower than that of the MRM-based method, making it the preferable technique. Conclusions: The LSQR-type method overcomes the computationally expensive nature of the MRM-based automated choice of the optimal regularization parameter in diffuse optical tomographic imaging, making it more suitable for real-time deployment. (C) 2013 American Association of Physicists in Medicine. [http://dx.doi.org/10.1118/1.4792459]
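
A minimal sketch of the general idea, assuming SciPy is available: LSQR with a damping term plays the role of the regularized solver, and a Nelder-Mead simplex search picks the damping value. The selection criterion here is a simple discrepancy-style objective chosen for illustration, not the paper's criterion, and the ill-conditioned system is a synthetic stand-in for the tomographic Jacobian.

import numpy as np
from scipy.sparse.linalg import lsqr
from scipy.optimize import minimize

rng = np.random.default_rng(3)
A = rng.normal(size=(100, 80)) @ np.diag(np.logspace(0, -6, 80))  # ill-conditioned
x_true = rng.normal(size=80)
b = A @ x_true + rng.normal(0, 1e-3, 100)

def objective(log_lam):
    lam = 10.0 ** log_lam[0]
    x = lsqr(A, b, damp=lam)[0]           # damped LSQR solve
    # illustrative criterion: match the residual norm to the noise level
    return abs(np.linalg.norm(A @ x - b) - 1e-3 * np.sqrt(100))

res = minimize(objective, x0=[-4.0], method='Nelder-Mead')  # simplex search
print("selected damping parameter:", 10.0 ** res.x[0])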

Relevance:

100.00%

Publisher:

Abstract:

We develop a new dictionary learning algorithm, called l1-K-SVD, by minimizing the l1 distortion on the data term. The proposed formulation corresponds to maximum a posteriori estimation assuming a Laplacian prior on the coefficient matrix and additive noise, and is, in general, robust to non-Gaussian noise. The l1 distortion is minimized by employing the iteratively reweighted least-squares algorithm. The dictionary atoms and the corresponding sparse coefficients are estimated simultaneously in the dictionary update step. Experimental results show that l1-K-SVD achieves better noise robustness, faster convergence, and a higher atom recovery rate than the method of optimal directions, K-SVD, and the robust dictionary learning algorithm (RDL), in Gaussian as well as non-Gaussian noise. For a fixed sparsity level, number of dictionary atoms, and data dimension, l1-K-SVD outperforms K-SVD and RDL on small training sets. We also consider the generalized lp, 0 < p < 1, data metric to tackle heavy-tailed/impulsive noise. In an image denoising application, l1-K-SVD was found to yield a higher peak signal-to-noise ratio (PSNR) than K-SVD for Laplacian noise. The structural similarity index increases by 0.1 at low input PSNR, which is significant and demonstrates the efficacy of the proposed method. (C) 2015 Elsevier B.V. All rights reserved.
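
The inner IRLS solver can be illustrated on a plain l1 regression problem, as in the following sketch; the design matrix and Laplacian noise are synthetic, and the dictionary-update context is omitted.

import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(200, 5))
x_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
b = A @ x_true + rng.laplace(0, 0.3, 200)   # Laplacian (heavy-tailed) noise

x = np.linalg.lstsq(A, b, rcond=None)[0]    # l2 initialization
eps = 1e-6                                  # guard against division by zero
for _ in range(50):
    w = 1.0 / np.maximum(np.abs(b - A @ x), eps)  # IRLS weights ~ 1/|residual|
    Aw = A * w[:, None]                           # W A
    x_new = np.linalg.solve(A.T @ Aw, Aw.T @ b)   # weighted normal equations
    if np.linalg.norm(x_new - x) < 1e-8:
        x = x_new
        break
    x = x_new
print("IRLS l1 estimate:", np.round(x, 3))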

Relevance:

100.00%

Publisher:

Abstract:

Monthly estimates of the abundance of yellowfin tuna by age group and region within the eastern Pacific Ocean during 1970-1988 are made using purse-seine catch rates, length-frequency samples, and results from cohort analysis. The number of individuals of each age group caught in each logged purse-seine set is estimated using the tonnage from that set and the length-frequency distribution from the "nearest" length-frequency sample(s), where nearest refers to the closest sample(s) to the purse-seine set in time, distance, and set type (dolphin-associated, floating-object-associated, skipjack-associated, none of these, and some combinations). Catch rates are initially calculated as the estimated number of individuals of the age group caught per hour of searching. Then, to remove the effects of set type and vessel speed, they are standardized using separate weighted generalized linear models for each age group. The standardized catch rates at the center of each 2.5° quadrangle-month are estimated using locally weighted least-squares regressions on latitude, longitude, and date, and then combined into larger regions. Catch rates within these regions are converted to numbers of yellowfin using the mean age composition from cohort analysis. The variances of the abundance estimates within regions are large for 0-, 1-, and 5-year-olds, but small for 1.5- to 4-year-olds, except during periods of low fishing activity. Mean annual catch-rate estimates for the entire eastern Pacific Ocean are significantly positively correlated with mean abundance estimates from cohort analysis for age groups ranging from 1.5 to 4 years old. Catch-rate indices of abundance by age are expected to be useful in conjunction with data on reproductive biology to estimate total egg production within regions. The estimates may also be useful in understanding geographic and temporal variation in age-specific availability to purse seiners, as well as age-specific movements. (PDF contains 35 pages.)
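
The locally weighted least-squares smoothing used for the standardized catch rates can be illustrated in one dimension, as below; the Gaussian kernel, bandwidth, and data are illustrative assumptions rather than the report's actual space-time regression.

import numpy as np

rng = np.random.default_rng(5)
x = np.sort(rng.uniform(0, 10, 80))
y = np.sin(x) + rng.normal(0, 0.2, 80)

def loess_point(x0, x, y, bandwidth=1.0):
    """Fit a weighted straight line around x0 and return its value at x0."""
    w = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)   # Gaussian kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])
    beta = np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (w * y))
    return beta[0]                                    # local intercept = fit at x0

smoothed = np.array([loess_point(x0, x, y) for x0 in x])
print("max abs deviation from the true curve:",
      round(float(np.max(np.abs(smoothed - np.sin(x)))), 3))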

Relevance:

100.00%

Publisher:

Abstract:

The Padé approximation with Baker's algorithm is compared with the least-squares Prony method and the generalized pencil-of-functions (GPOF) method for calculating mode frequencies and mode Q factors of coupled optical microdisks simulated by the FDTD technique. Comparisons of intensity spectra and the corresponding mode frequencies and Q factors show that the Padé approximation yields more stable results than the Prony and GPOF methods, especially for the intensity spectrum. The results of the Prony and GPOF methods are strongly influenced by the selected number of resonant modes, which must be optimized during data processing, in addition to the length of the time-response signal. Furthermore, the Padé approximation is applied to calculate light delay for embedded microring resonators from complex transmission spectra, themselves obtained by the Padé approximation from an FDTD output. The Prony and GPOF methods cannot be applied to calculate the transmission spectra, because the transmission signal obtained from the FDTD simulation cannot be expressed as a sum of damped complex exponentials. (C) 2009 Optical Society of America
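
As a reference point for the least-squares Prony method mentioned above, the sketch below recovers the frequency and Q factor of a single noise-free damped sinusoid; the model order, sampling step, and mode parameters are illustrative, not taken from the paper.

import numpy as np

dt = 1e-3
t = np.arange(2000) * dt
f0, Q = 50.0, 200.0                       # illustrative mode
gamma = np.pi * f0 / Q                    # field amplitude decay rate
s = np.exp(-gamma * t) * np.cos(2 * np.pi * f0 * t)

# linear prediction for one conjugate pole pair: s[n] = -a1*s[n-1] - a2*s[n-2]
X = np.column_stack([s[1:-1], s[:-2]])
a, *_ = np.linalg.lstsq(X, -s[2:], rcond=None)
roots = np.roots(np.concatenate(([1.0], a)))
z = roots[np.argmax(roots.imag)]          # pick the upper-half-plane root
lam = np.log(z) / dt                      # continuous-time pole
f_est = lam.imag / (2 * np.pi)
Q_est = -lam.imag / (2 * lam.real)        # Q = omega / (2 * decay rate)
print(f"f = {f_est:.2f} Hz, Q = {Q_est:.1f}")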

Relevance:

100.00%

Publisher:

Abstract:

Singular value decomposition-least squares (SVDLS), a new method for processing multiple spectra with multiple wavelengths and multiple components in thin-layer spectroelectrochemistry, has been developed. From 30 experimental CD spectra, the CD spectra of three components (norepinephrine, the reduced form of norepinephrinechrome, and norepinephrinequinone) and their fraction distributions as a function of applied potential were obtained for three redox processes of norepinephrine, which explains well both the electrochemical mechanism of norepinephrine and the changes in the CD spectrum during the electrochemical processes.
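
The flavor of such an SVD-plus-least-squares analysis can be sketched as follows, with synthetic pure-component spectra and mixing fractions standing in for the CD data; the rank check and the fraction fit are the two steps of interest.

import numpy as np

rng = np.random.default_rng(6)
wavelengths, n_spectra = 120, 30
S = rng.normal(size=(wavelengths, 3))            # 3 pure-component spectra (columns)
F_true = rng.dirichlet(np.ones(3), n_spectra).T  # component fractions per spectrum
D = S @ F_true + rng.normal(0, 0.01, (wavelengths, n_spectra))

# rank estimate from the singular value spectrum: 3 dominant values expected
U, sv, Vt = np.linalg.svd(D, full_matrices=False)
print("leading singular values:", np.round(sv[:5], 2))

# given the pure spectra S, the fractions follow by least squares
F_est, *_ = np.linalg.lstsq(S, D, rcond=None)
print("max fraction error:", round(float(np.abs(F_est - F_true).max()), 3))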

Relevance:

100.00%

Publisher:

Abstract:

Thermocouples are one of the most popular devices for temperature measurement due to their robustness, ease of manufacture and installation, and low cost. However, when used in certain harsh environments, for example in combustion systems and engine exhausts, large wire diameters are required, and consequently the measurement bandwidth is reduced. This article discusses a software compensation technique that addresses the loss of high-frequency fluctuations based on measurements from two thermocouples. In particular, a difference equation (DE) approach is proposed and compared with existing methods, both in simulation and on experimental test-rig data with constant flow velocity. It is found that the DE algorithm, combined with the use of generalized total least squares for parameter identification, provides better time-constant estimation without any a priori assumption on the time-constant ratio of the thermocouples.
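
Generalized total least squares accounts for noise on the regressors as well as on the output. The sketch below shows the basic (unweighted) total-least-squares solution via the SVD on a generic errors-in-variables problem, not the article's specific difference-equation formulation.

import numpy as np

rng = np.random.default_rng(7)
n = 500
x_clean = rng.normal(size=(n, 2))
theta_true = np.array([0.8, -0.4])
y_clean = x_clean @ theta_true
X = x_clean + rng.normal(0, 0.05, (n, 2))   # noise on the regressors too
y = y_clean + rng.normal(0, 0.05, n)

# TLS: the smallest right singular vector of [X | y] defines the fit
_, _, Vt = np.linalg.svd(np.column_stack([X, y]), full_matrices=False)
v = Vt[-1]
theta_tls = -v[:2] / v[2]
print("TLS estimate:", np.round(theta_tls, 3), "vs true", theta_true)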

Relevance:

100.00%

Publisher:

Abstract:

The characterization of thermocouple sensors for temperature measurement in varying-flow environments is a challenging problem. Recently, the authors introduced novel difference-equation-based algorithms that allow in situ characterization of temperature measurement probes consisting of two thermocouple sensors with differing time constants. In particular, a linear least squares (LS) λ-formulation of the characterization problem, which yields unbiased estimates when identified using generalized total LS, was introduced. These algorithms assume that the time constants do not change during operation and are therefore appropriate for temperature measurement in homogeneous constant-velocity liquid or gas flows. This paper develops an alternative β-formulation of the characterization problem that has the major advantage of allowing exploitation of a priori knowledge of the ratio of the sensor time constants, thereby facilitating computationally efficient algorithms that are less sensitive to measurement noise. A number of variants of the β-formulation are developed, and appropriate unbiased estimators are identified. Monte Carlo simulation results are used to support the analysis.
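
The benefit of a known time-constant ratio can be seen in a simplified setting: equating the reconstructed gas temperatures T_i + tau_i*dT_i/dt of the two sensors leaves a single unknown tau1, estimable by ordinary least squares. The sketch below uses synthetic signals, an assumed ratio beta, and plain OLS; noise on the differentiated signals biases OLS, which is exactly what the paper's unbiased estimators are designed to handle.

import numpy as np

rng = np.random.default_rng(8)
dt, n = 1e-3, 4000
t = np.arange(n) * dt
Tg = 300 + 20 * np.sin(2 * np.pi * 5 * t)       # "true" gas temperature (assumed)
tau1, beta = 0.05, 2.0                          # beta = tau2/tau1, known a priori

def sensor(Tg, tau):
    T = np.empty_like(Tg)
    T[0] = Tg[0]
    for i in range(1, Tg.size):                 # first-order lag, Euler step
        T[i] = T[i-1] + dt * (Tg[i-1] - T[i-1]) / tau
    return T

T1 = sensor(Tg, tau1) + rng.normal(0, 0.02, n)
T2 = sensor(Tg, beta * tau1) + rng.normal(0, 0.02, n)

d1, d2 = np.gradient(T1, dt), np.gradient(T2, dt)
# T1 + tau1*d1 = T2 + beta*tau1*d2  =>  tau1*(d1 - beta*d2) = T2 - T1
phi = d1 - beta * d2
tau1_hat = (phi @ (T2 - T1)) / (phi @ phi)      # one-parameter least squares
print("estimated tau1:", round(float(tau1_hat), 4))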

Relevance:

100.00%

Publisher:

Abstract:

The characterization of thermocouple sensors for temperature measurement in variable-flow environments is a challenging problem. In this paper, novel difference-equation-based algorithms are presented that allow in situ characterization of temperature measurement probes consisting of two thermocouple sensors with differing time constants. Linear and nonlinear least squares formulations of the characterization problem are introduced and compared in terms of their computational complexity, robustness to noise, and statistical properties. With the aid of this analysis, least squares optimization procedures that yield unbiased estimates are identified. The main contribution of the paper is the development of a linear two-parameter generalized total least squares formulation of the sensor characterization problem. Monte Carlo simulation results are used to support the analysis.

Relevance:

100.00%

Publisher:

Abstract:

Simultaneous Equations Models (SEM) are statistical models with a long tradition in econometrics, since they can represent and study a wide range of economic processes. The estimators most commonly used in SEM result from applying the method of least squares or the maximum likelihood method, neither of which is robust. In Maronna and Yohai (1997), the authors propose ways of "robustifying" these estimators. Another estimation method of interest for these models is the Generalized Method of Moments (GMM), which also leads to non-robust estimators. Estimators that lack robustness are very inconvenient, since they can produce misleading results when the assumptions underlying the assumed model are violated. Robust estimators are of great value, particularly when the models under study are complex, as is the case with SEM. The main goal of this research was to seek such estimators, and a robust estimator named GMMOGK was constructed: a robust version of the GMM estimator. To assess the performance of the new estimator, a suitable simulation study was carried out, and the estimator was also applied to a real data set. The robust estimator performs well in the heteroskedastic models considered and, under those conditions, behaves better than the non-robust estimators used in the study. However, when the analysis is carried out equation by equation, the specificity of each individual equation and the dependence structure of the system are two aspects that influence the performance of the estimator, as happens with the usual estimators. To frame the research, the text includes a review of essential aspects of SEM, their role in econometrics, and the main estimation methods, with particular emphasis on GMM, as well as a short introduction to robust estimation.
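
For context, the (non-robust) GMM estimator for a single equation with instruments Z reduces, with weight matrix (Z'Z)^{-1}, to two-stage least squares; the sketch below demonstrates this on synthetic data with one endogenous regressor. It illustrates the baseline that GMMOGK robustifies, not the GMMOGK estimator itself.

import numpy as np

rng = np.random.default_rng(9)
n = 1000
Z = rng.normal(size=(n, 2))                    # instruments
u = rng.normal(size=n)
x_endog = Z @ np.array([1.0, -1.0]) + 0.8 * u + rng.normal(size=n)
y = 2.0 * x_endog + u                          # x is correlated with the error

X = x_endog[:, None]
# GMM with weight (Z'Z)^{-1}: beta = (X'Z (Z'Z)^-1 Z'X)^-1 X'Z (Z'Z)^-1 Z'y
ZZinv = np.linalg.inv(Z.T @ Z)
XZ = X.T @ Z
beta = np.linalg.solve(XZ @ ZZinv @ XZ.T, XZ @ ZZinv @ Z.T @ y)
print("OLS (biased):", float(np.linalg.lstsq(X, y, rcond=None)[0][0]))
print("GMM/2SLS   :", float(beta[0]))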

Relevance:

100.00%

Publisher:

Abstract:

Statistical techniques are fundamental in science, and linear regression analysis is perhaps one of the most widely used methodologies. It is well known from the literature that, under certain conditions, linear regression is an extremely powerful statistical tool. Unfortunately, in practice, some of those conditions are rarely satisfied, and the regression models become ill-posed, making the application of traditional estimation methods unfeasible. This work presents some contributions to maximum entropy theory in the estimation of ill-posed models, in particular the estimation of linear regression models with small samples affected by collinearity and outliers. The research is developed along three lines: the estimation of technical efficiency with state-contingent production frontiers, the estimation of the ridge parameter in ridge regression, and, finally, new developments in maximum entropy estimation. In the estimation of technical efficiency with state-contingent production frontiers, the work shows that maximum entropy estimators outperform the maximum likelihood estimator. This good performance is notable in models with few observations per state and in models with a large number of states, which are commonly affected by collinearity. It is hoped that the use of maximum entropy estimators will contribute to the much-desired increase in empirical work with these production frontiers. In ridge regression, the greatest challenge is the estimation of the ridge parameter. Although numerous procedures are available in the literature, none outperforms all the others. This work proposes a new estimator of the ridge parameter that combines ridge trace analysis with maximum entropy estimation. The results obtained in simulation studies suggest that this new estimator is among the best procedures available for estimating the ridge parameter. The Leuven maximum entropy estimator is based on the least squares method, Shannon entropy, and concepts from quantum electrodynamics. This estimator overcomes the main criticism leveled at the generalized maximum entropy estimator, since it dispenses with supports for the parameters and errors of the regression model. Building on the Leuven estimator, information theory, and robust regression, this work presents new contributions to maximum entropy theory for the estimation of ill-posed models. The estimators developed show good performance in linear regression models with small samples affected by collinearity and outliers. Finally, some computational code for maximum entropy estimation is presented, thereby adding to the scarce computational resources currently available.
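
As background for the ridge-parameter problem, the sketch below computes a ridge trace, the coefficient path over the ridge parameter k, on a deliberately collinear synthetic design; the proposed maximum-entropy selector itself is not reproduced here.

import numpy as np

rng = np.random.default_rng(10)
n = 40
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(0, 0.05, n)              # nearly collinear with x1
X = np.column_stack([x1, x2])
y = x1 - x2 + rng.normal(0, 0.5, n)

def ridge(X, y, k):
    """Ridge estimator: (X'X + k I)^{-1} X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

for k in [0.0, 0.01, 0.1, 1.0]:               # trace: coefficients versus k
    print(f"k={k:5.2f}  beta={np.round(ridge(X, y, k), 3)}")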