993 results for Estimation theory


Relevance: 60.00%
Publisher:
Abstract:

National Highway Traffic Safety Administration, Washington, D.C.

Relevance: 60.00%
Publisher:
Abstract:

Transportation Department, Office of Systems Engineering, Washington, D.C.

Relevance: 60.00%
Publisher:
Abstract:

Transportation Systems Center, Cambridge, Mass.

Relevance: 60.00%
Publisher:
Abstract:

This paper investigates the performance of the EASI algorithm and the proposed EKENS algorithm for linear and nonlinear mixtures. The proposed EKENS algorithm is based on the modified equivariant algorithm and kernel density estimation. The theory and characteristics of both algorithms are discussed for the blind source separation model. The separation structure for nonlinear mixtures is based on a nonlinear stage followed by a linear stage. Simulations with artificial and natural data demonstrate the feasibility and good performance of the proposed EKENS algorithm.
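The EKENS algorithm itself is not specified in the abstract; as a sketch of the equivariant family it modifies, below is the classical EASI serial update. The step size, nonlinearity, and toy mixing matrix are illustrative choices, not values from the paper.

```python
import numpy as np

def easi_step(W, x, mu=0.002, g=lambda y: y**3):
    """One serial EASI update (relative/equivariant gradient).

    W  : current (n, n) separating matrix
    x  : one mixture sample, shape (n,)
    mu : step size (illustrative choice)
    g  : component-wise nonlinearity; y**3 suits sub-Gaussian sources
    """
    y = W @ x
    gy = g(y)
    # Second-order (whitening) term plus skew-symmetric higher-order term
    H = (np.outer(y, y) - np.eye(len(y))
         + np.outer(gy, y) - np.outer(y, gy))
    return W - mu * H @ W

# Toy demo: two uniform sources through an invented linear mixture
rng = np.random.default_rng(0)
S = rng.uniform(-1, 1, size=(2, 20000))   # independent sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])    # mixing matrix (made up)
X = A @ S
W = np.eye(2)
for x in X.T:
    W = easi_step(W, x)
P = W @ A   # ideally close to a scaled permutation matrix
```

The update is equivariant because the correction is applied multiplicatively on the left of `W`, so convergence behavior does not depend on the conditioning of the mixing matrix.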

Relevance: 60.00%
Publisher:
Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance: 60.00%
Publisher:
Abstract:

The Theoretical and Experimental Tomography in the Sea Experiment (THETIS 1) took place in the Gulf of Lion to observe the evolution of the temperature field and the process of deep convection during the 1991-1992 winter. The temperature measurements consist of moored sensors, conductivity-temperature-depth and expendable bathythermograph surveys, and acoustic tomography. Because of this diverse data set, and since the field evolves rather fast, the analysis uses a unified framework based on estimation theory and implementing a Kalman filter. The resolution and the errors associated with the model are systematically estimated. Temperature is a good tracer of water masses. The time-evolving three-dimensional view of the field resulting from the analysis shows the details of the three classical convection phases: preconditioning, vigorous convection, and relaxation. In all phases, there is strong spatial nonuniformity, with mesoscale activity, short timescales, and sporadic evidence of advective events (surface capping, intrusions of Levantine Intermediate Water (LIW)). Deep convection, reaching 1500 m, was observed in late February; by late April the field had not yet returned to its initial conditions (strong deficit of LIW). Comparison with available atmospheric flux data shows that advection acts to delay the occurrence of convection and confirms the essential role of buoyancy fluxes. For this winter, the deep mixing results in an injection of anomalously warm water (ΔT ≃ 0.03 °C) to a depth of 1500 m, compatible with the deep warming previously reported.
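The THETIS assimilation scheme itself is not reproduced here; as a minimal reminder of the Kalman filter machinery the analysis rests on, the generic linear predict/update cycle looks as follows. The scalar temperature-tracking demo and its noise levels are purely illustrative.

```python
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    """One predict/update cycle of a linear Kalman filter.

    x, P : prior state estimate and covariance
    z    : new observation
    F, Q : state transition matrix and process-noise covariance
    H, R : observation operator and observation-noise covariance
    """
    # Predict: propagate state and covariance through the dynamics
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: blend prediction and observation via the Kalman gain
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy demo: estimate a constant temperature from noisy observations
F = np.eye(1)
Q = np.zeros((1, 1))           # static truth: no process noise
H = np.eye(1)
R = np.array([[0.25]])         # observation noise variance (made up)
x, P = np.array([0.0]), np.eye(1)
rng = np.random.default_rng(0)
truth = 13.2
for _ in range(50):
    z = np.array([truth + 0.5 * rng.standard_normal()])
    x, P = kalman_step(x, P, z, F, Q, H, R)
```

After 50 observations the posterior variance `P` shrinks well below the single-observation noise, which is the property the paper exploits to attach systematic error bars to its temperature maps.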

Relevance: 60.00%
Publisher:
Abstract:

An approximation to the term structure of interest rates in the Colombian market is presented through two models: the Diebold & Li model with latent factors and the Diebold, Rudebusch & Aruoba model with macro factors. Both were estimated using a Kalman filter implemented in MATLAB and subsequently used to forecast the curve as a function of the expected behavior of macroeconomic and financial variables of the local and U.S. economies. The macro factors are included in the expectation of better curve projections, so that forecasts of these variables will be useful for anticipating the future behavior of the local yield curve. The models are fitted with monthly data over the period 2003-2015 and tested on a portion of this information. The latent-factor model uses only the historical zero-coupon curve, while the macro-factor model also considers variables such as 12-month local inflation, the 5Y CDS, the VIX index, WTI prices, the TRM, the Euro/Dollar exchange rate, the REPO rate and the FED rate. Two models are finally obtained, of which the one containing macro factors shows the better forecasting performance indicators.
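As background on the latent-factor model mentioned above: the Diebold-Li formulation is a dynamic Nelson-Siegel curve whose level, slope, and curvature loadings depend only on maturity. The decay value λ = 0.0609 per month is the one popularized by Diebold and Li; the factor values in the demo are made up.

```python
import numpy as np

def nelson_siegel_loadings(tau, lam=0.0609):
    """Diebold-Li (dynamic Nelson-Siegel) factor loadings at
    maturities tau in months: level, slope and curvature columns."""
    tau = np.asarray(tau, dtype=float)
    slope = (1 - np.exp(-lam * tau)) / (lam * tau)
    curv = slope - np.exp(-lam * tau)
    return np.column_stack([np.ones_like(tau), slope, curv])

def fitted_yields(beta, tau, lam=0.0609):
    """Yields implied by the factors beta = (level, slope, curvature)."""
    return nelson_siegel_loadings(tau, lam) @ beta

maturities = np.array([3, 12, 24, 60, 120])   # months
beta = np.array([6.0, -2.0, 1.0])             # hypothetical factor values
curve = fitted_yields(beta, maturities)
```

In the state-space version estimated with a Kalman filter, these loadings form the (fixed) observation matrix, while the factors follow a vector autoregression; the macro-factor variant augments the state with the macroeconomic series listed in the abstract.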

Relevance: 60.00%
Publisher:
Abstract:

This thesis is concerned with change point analysis for time series, i.e. with the detection of structural breaks in time-ordered, random data. This long-standing research field regained popularity over the last few years and, like statistical analysis in general, is still undergoing a transformation to high-dimensional problems. We focus on the fundamental »change in the mean« problem and provide extensions of the classical non-parametric Darling-Erdős-type cumulative sum (CUSUM) testing and estimation theory within high-dimensional Hilbert space settings. In the first part we contribute to (long run) principal component based testing methods for Hilbert space valued time series under a rather broad (abrupt, epidemic, gradual, multiple) change setting and under dependence. For the dependence structure we consider either traditional m-dependence assumptions or more recently developed m-approximability conditions, which cover, e.g., MA, AR and ARCH models. We derive Gumbel and Brownian bridge type approximations of the distribution of the test statistic under the null hypothesis of no change and consistency conditions under the alternative. A new formulation of the test statistic using projections on subspaces allows us to simplify the standard proof techniques and to weaken common assumptions on the covariance structure. Furthermore, we propose to adjust the principal components by an implicit estimation of a (possible) change direction. This approach adds flexibility to projection based methods, weakens typical technical conditions and provides better consistency properties under the alternative. In the second part we contribute to estimation methods for common changes in the means of panels of Hilbert space valued time series. We analyze weighted CUSUM estimates within a recently proposed »high-dimensional low sample size (HDLSS)« framework, where the sample size is fixed but the number of panels increases.
We derive sharp conditions on »pointwise asymptotic accuracy« or »uniform asymptotic accuracy« of those estimates in terms of the weighting function. Particularly, we prove that a covariance-based correction of Darling-Erdős-type CUSUM estimates is required to guarantee uniform asymptotic accuracy under moderate dependence conditions within panels and that these conditions are fulfilled, e.g., by any MA(1) time series. As a counterexample we show that for AR(1) time series, close to the non-stationary case, the dependence is too strong and uniform asymptotic accuracy cannot be ensured. Finally, we conduct simulations to demonstrate that our results are practically applicable and that our methodological suggestions are advantageous.
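The Darling-Erdős-type refinements are beyond a short sketch, but the basic CUSUM »change in the mean« statistic that the thesis extends can be illustrated directly. The simulated shift size, sample split, and independence assumption below are arbitrary illustration choices.

```python
import numpy as np

def cusum_statistic(x):
    """Standard CUSUM statistic for a single change in the mean.

    Returns the normalized max statistic and the argmax, which serves
    as the change point estimate. Assumes independent observations;
    the scale is estimated crudely from the full sample.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    S = np.cumsum(x)
    k = np.arange(1, n + 1)
    cusum = np.abs(S - k / n * S[-1])      # |S_k - (k/n) S_n|
    sigma = np.std(x, ddof=1)              # crude scale estimate
    stat = cusum.max() / (sigma * np.sqrt(n))
    return stat, int(cusum.argmax()) + 1

# Toy demo: mean shifts from 0 to 1.5 at observation 200 of 400
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 200),
                    rng.normal(1.5, 1.0, 200)])
stat, khat = cusum_statistic(x)
```

Under the null hypothesis of no change the statistic converges to the supremum of a Brownian bridge; under a mean shift it diverges, and the argmax locates the break. The weighted and covariance-corrected estimates of the thesis refine exactly this quantity.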

Relevance: 40.00%
Publisher:
Abstract:

In the present paper, Eringen's nonlocal elasticity theory is employed to evaluate the length-dependent in-plane stiffness of single-walled carbon nanotubes (SWCNTs). The SWCNT is modeled as an Euler-Bernoulli beam and is analyzed for various boundary conditions to evaluate the length-dependent in-plane stiffness. It has been found that the nonlocal scaling parameter has a significant effect on the length-dependent in-plane stiffness of SWCNTs. It has been observed that as the nonlocal scale parameter increases, the stiffness ratio of the SWCNT decreases. With nonlocality, the cantilever SWCNT has a higher in-plane stiffness than the simply supported and clamped cases.
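For context, the model referred to above rests on Eringen's differential constitutive relation and its Euler-Bernoulli beam specialization; sign conventions vary between authors, so the following should be read as a sketch rather than the paper's exact equations.

```latex
% Eringen's differential nonlocal constitutive relation,
% with e_0 a the nonlocal scale parameter:
\sigma_{ij} - (e_0 a)^2 \nabla^2 \sigma_{ij} = C_{ijkl}\,\varepsilon_{kl}
% Corresponding nonlocal bending-moment relation for an
% Euler-Bernoulli beam of flexural rigidity EI and deflection w:
M - (e_0 a)^2 \frac{\mathrm{d}^2 M}{\mathrm{d}x^2}
  = -EI\,\frac{\mathrm{d}^2 w}{\mathrm{d}x^2}
```

Setting e_0 a = 0 recovers the classical local beam, which is why the stiffness ratio in the abstract is reported relative to the local solution.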

Relevance: 40.00%
Publisher:
Abstract:

Use of engineered landfills for the disposal of industrial wastes is currently a common practice. Bentonite is attracting greater attention not only as a capping and lining material in landfills but also as a buffer and backfill material for repositories of high-level nuclear waste around the world. In the design of buffer and backfill materials, it is important to know the swelling pressures of compacted bentonite with different electrolyte solutions. Theoretical studies of swell pressure behaviour are all based on Diffuse Double Layer (DDL) theory. To establish a relation between the swell pressure and the void ratio of the soil, it is necessary to calculate the mid-plane potential in the diffuse part of the interacting ionic double layers. The difficulty in these calculations is the elliptic integral involved in the relation between half-space distance and mid-plane potential. Several investigators circumvented this problem using indirect methods or cumbersome numerical techniques. In this work, a novel approach is proposed for theoretical estimation of the swell pressures of fine-grained soils from DDL theory. The proposed approach avoids the complex computations in establishing the relationship between mid-plane potential and the distance between diffuse plates, in other words, between swell pressure and void ratio.

Relevance: 40.00%
Publisher:
Abstract:

Bentonite clays have proven attractive as buffer and backfill material in high-level nuclear waste repositories around the world. A quick estimation of the swelling pressures of compacted bentonites under different clay-water-electrolyte interactions is essential in the design of buffer and backfill materials. Theoretical studies of the swelling behavior of bentonites are based on diffuse double layer (DDL) theory. To establish a theoretical relationship between void ratio and swelling pressure (e versus P), evaluation of an elliptic integral and an inverse analysis are unavoidable. In this paper, a novel procedure is presented to establish the theoretical e versus P relationship based on the Gouy-Chapman method. The proposed procedure establishes a unique relationship between the electric potentials of interacting and non-interacting diffuse clay-water-electrolyte systems. A procedure is thus proposed to deduce the relation between swelling pressure and void ratio from the established relation between electric potentials. The approach is simple and alleviates the need for both the elliptic integral evaluation and the inverse analysis. Further, application of the proposed approach to estimate the swelling pressures of four compacted bentonites (MX 80, Febex, Montigel and Kunigel V1) at different dry densities shows that the method is very simple and predicts solutions with very good accuracy. Moreover, the proposed procedure yields continuous e versus P distributions and is thus computationally efficient compared with existing techniques.
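The paper's Gouy-Chapman procedure is not reproduced here, but the final step such DDL analyses feed into, converting a nondimensional mid-plane potential u into an osmotic swelling pressure via the standard van't Hoff-type relation, is a one-liner. The 0.1 M electrolyte concentration below is just an example value.

```python
import numpy as np

K_B = 1.380649e-23        # Boltzmann constant, J/K
N_A = 6.02214076e23       # Avogadro constant, 1/mol

def swelling_pressure(u_midplane, n0, T=298.15):
    """Osmotic swelling pressure from DDL theory,
    p = 2 n0 kT (cosh(u) - 1), where u is the nondimensional
    mid-plane potential and n0 the bulk ion concentration (ions/m^3)."""
    return 2.0 * n0 * K_B * T * (np.cosh(u_midplane) - 1.0)

# Example: 0.1 mol/L monovalent electrolyte
n0 = 0.1 * N_A * 1e3      # ions per m^3
p = swelling_pressure(2.0, n0)   # roughly 1.4e6 Pa for u = 2
```

The hard part, which the paper addresses, is obtaining u for a given half-spacing (equivalently, void ratio) without evaluating the elliptic integral; once u is known, the pressure follows directly from this relation.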

Relevance: 40.00%
Publisher:
Abstract:

Statistical techniques are fundamental in science, and linear regression analysis is perhaps one of the most widely used methodologies. It is well known from the literature that, under certain conditions, linear regression is an extremely powerful statistical tool. Unfortunately, in practice, some of these conditions are rarely satisfied and the regression models become ill-posed, making the application of traditional estimation methods unfeasible. This work presents some contributions to maximum entropy theory in the estimation of ill-posed models, in particular in the estimation of linear regression models with small samples affected by collinearity and outliers. The research is developed along three lines, namely the estimation of technical efficiency with state-contingent production frontiers, the estimation of the ridge parameter in ridge regression and, finally, new developments in maximum entropy estimation. In the estimation of technical efficiency with state-contingent production frontiers, the work shows a better performance of the maximum entropy estimators relative to the maximum likelihood estimator. This good performance is notable in models with few observations per state and in models with a large number of states, which are commonly affected by collinearity. It is hoped that the use of maximum entropy estimators will contribute to the much-desired increase in empirical work with these production frontiers. In ridge regression, the greatest challenge is the estimation of the ridge parameter. Although countless procedures are available in the literature, the truth is that none outperforms all the others. In this work a new estimator of the ridge parameter is proposed, combining ridge trace analysis with maximum entropy estimation.
The results obtained in the simulation studies suggest that this new estimator is one of the best procedures available in the literature for estimating the ridge parameter. The Leuven maximum entropy estimator is based on the least squares method, Shannon entropy and concepts from quantum electrodynamics. This estimator overcomes the main criticism of the generalized maximum entropy estimator, since it dispenses with the supports for the parameters and errors of the regression model. This work presents new contributions to maximum entropy theory in the estimation of ill-posed models, based on the Leuven maximum entropy estimator, information theory and robust regression. The estimators developed show good performance in linear regression models with small samples affected by collinearity and outliers. Finally, some computational codes for maximum entropy estimation are presented, thereby contributing to an increase in the scarce computational resources currently available.
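The proposed maximum entropy ridge estimator is not spelled out in the abstract; as background, here is a minimal sketch of the ridge estimator and the ridge trace that the new procedure combines with maximum entropy estimation. The collinear toy design and the grid of ridge parameters are invented for illustration.

```python
import numpy as np

def ridge(X, y, k):
    """Ridge estimator beta(k) = (X'X + kI)^(-1) X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

def ridge_trace(X, y, ks):
    """Coefficient paths over a grid of ridge parameters k."""
    return np.array([ridge(X, y, k) for k in ks])

# Toy collinear design: x2 is nearly a copy of x1
rng = np.random.default_rng(0)
x1 = rng.standard_normal(100)
x2 = x1 + 0.01 * rng.standard_normal(100)
X = np.column_stack([x1, x2])
y = x1 + 0.1 * rng.standard_normal(100)
trace = ridge_trace(X, y, ks=[0.0, 0.1, 1.0, 10.0])
```

As k grows, the coefficient vector is shrunk and the unstable least squares solution (k = 0) is stabilized; the classical ridge trace inspects these paths visually to pick k, which is exactly the step the thesis replaces with a maximum entropy criterion.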