992 results for Efficient estimation


Relevance: 60.00%

Abstract:

This work analyses the shares of Ecopetrol, the representative company of the oil and natural gas extraction market in Colombia (SP&G), over the period from 22 May 2012 to 30 August 2013. During this period the share price underwent a series of variations related to the company's new share issue. This change in the asset's behaviour raised several questions about (i) the market's reaction to different events occurring within firms and in their environment, and (ii) the ability of financial models to predict and understand the observed reactions of assets (understood as debt). The work also discusses its own relevance in line with the objectives and research of the School of Management of Universidad del Rosario, specifically within the Management research line on organisational sustainability (perdurabilidad), where understanding debt, both as part of an organisation's current operation and as a determinant of its future behaviour, is of particular importance. Once the relationship between this work and the University is clarified, various financial concepts and theories are developed that allow the market to be studied in greater detail, with the aim of reducing the risk of the investments made. This analysis is developed in two parts: (i) discrete-time models and (ii) continuous-time models.
Once the models studied up to that point are clear, the data are analysed using chaos models and recurrence analysis, which show that the shares behave chaotically but establish certain relationships between current and historical prices, developing defined patterns linking prices, quantities, the macroeconomic environment and the organisation. The oil market in Colombia is then described and Ecopetrol is studied as a company and as the core of that market in the country. Ecopetrol is representative because it is one of the country's largest fiscal contributors: its revenues derive from resources found in the subsoil, so the oil rent includes taxes on production, transformation and consumption (Ecopetrol, 2003). Finally, the results of the work are presented, together with the analysis that supports a set of recommendations based on what was observed.
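
As a minimal illustration of the recurrence-analysis idea mentioned above (not the thesis's actual methodology, and using a simulated series rather than Ecopetrol data), a recurrence matrix marks the pairs of days whose normalised prices lie within a tolerance of each other, and the recurrence rate summarises how often the series revisits past states:

```python
# Sketch of recurrence analysis on a price series (illustrative only).
import numpy as np

def recurrence_matrix(x, eps):
    """Boolean matrix R[i, j] = True when |x[i] - x[j]| < eps."""
    x = np.asarray(x, dtype=float)
    return np.abs(x[:, None] - x[None, :]) < eps

def recurrence_rate(x, eps):
    """Fraction of off-diagonal pairs of observations that recur."""
    R = recurrence_matrix(x, eps)
    n = len(x)
    return (R.sum() - n) / (n * (n - 1))  # exclude the trivial diagonal

# Toy random-walk series standing in for (hypothetical) daily closing prices.
rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0.0, 1.0, 200)) + 100.0
prices = (prices - prices.mean()) / prices.std()  # normalise before thresholding
rate = recurrence_rate(prices, eps=0.1)
```

A high recurrence rate at small tolerances is one symptom of the structured, history-dependent behaviour the abstract describes.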

Relevance: 60.00%

Abstract:

Dependence between financial series is a fundamental parameter for the estimation of risk models. Value at Risk (VaR) is one of the most important measures used in financial risk management. Several estimation methods are currently available, such as historical simulation, which assumes no distribution for the returns of the risk factors or assets, and parametric methods, which assume normally distributed returns. This document introduces copula theory as a measure of dependence between series, and estimates an ARMA-GARCH-copula model to compute the Value at Risk of a portfolio composed of two financial series, the Dollar-Peso and Euro-Peso exchange rates. The results show that VaR estimation via copulas is more precise than the traditional methods.
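
For context, the two "traditional" VaR estimators the abstract compares against can be sketched in a few lines. The portfolio, weights and simulated returns below are assumptions for illustration, and the document's ARMA-GARCH-copula model itself is not reproduced:

```python
# Historical-simulation VaR vs parametric (Gaussian) VaR for a two-asset portfolio.
import numpy as np
from statistics import NormalDist

def var_historical(port_returns, alpha=0.99):
    """Empirical loss quantile; no distributional assumption on returns."""
    return np.quantile(-np.asarray(port_returns, dtype=float), alpha)

def var_gaussian(port_returns, alpha=0.99):
    """Variance-covariance VaR assuming normally distributed returns."""
    r = np.asarray(port_returns, dtype=float)
    z = NormalDist().inv_cdf(alpha)
    return -r.mean() + z * r.std(ddof=1)

# Simulated correlated daily returns standing in for the USD-Peso and
# EUR-Peso exchange-rate series; 50/50 portfolio weights assumed.
rng = np.random.default_rng(1)
cov = np.array([[1.0, 0.6], [0.6, 1.0]]) * 0.01**2
rets = rng.multivariate_normal([0.0, 0.0], cov, size=1000)
port = rets.mean(axis=1)
var_hs = var_historical(port, 0.99)
var_n = var_gaussian(port, 0.99)
```

On truly Gaussian data the two estimates nearly coincide; it is precisely when dependence and tails deviate from normality that a copula-based model can improve on both.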

Relevance: 60.00%

Abstract:

Classical regression methods take vectors as covariates and estimate the corresponding vectors of regression parameters. When addressing regression problems whose covariates have more complex form, such as multi-dimensional arrays (i.e. tensors), traditional computational models can be severely compromised by ultrahigh dimensionality as well as by complex structure. By exploiting the special structure of tensor covariates, the tensor regression model provides a promising way to reduce the model's dimensionality to a manageable level, thus leading to efficient estimation. Most existing tensor-based methods estimate each individual regression problem independently, based on a tensor decomposition that allows the simultaneous projection of an input tensor onto more than one direction along each mode. In practice, multi-dimensional data are collected under the same or very similar conditions, so the data share some common latent components but can also have their own independent parameters for each regression task. It is therefore beneficial to analyse the regression parameters of all the regressions in a linked way. In this paper, we propose a tensor regression model based on the Tucker decomposition, which simultaneously identifies both the common components of the parameters across all the regression tasks and the independent factors contributing to each particular task. Under this paradigm, the number of independent parameters along each mode is constrained by a sparsity-preserving regulariser. Linked multiway parameter analysis and sparsity modelling further reduce the total number of parameters, with lower memory cost than tensor-based counterparts. The effectiveness of the new method is demonstrated on real data sets.
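
To see why the Tucker format yields efficient estimation, it helps to compare parameter counts for a full coefficient tensor versus a Tucker-structured one. The dimensions and ranks below are illustrative assumptions, and the paper's split into shared and task-specific factors is not modelled:

```python
# Tucker-structured coefficient tensor for a scalar-response tensor regression
# y = <X, B>: the full B needs I^3 parameters, the Tucker format far fewer.
import numpy as np

def tucker_to_full(G, factors):
    """Reconstruct the full tensor from a Tucker core G and factor matrices
    via successive mode-n products B = G x1 U1 x2 U2 x3 U3."""
    B = G
    for mode, U in enumerate(factors):
        B = np.moveaxis(np.tensordot(U, B, axes=(1, mode)), 0, mode)
    return B

I, R = 16, 3                                   # illustrative dimension and ranks
rng = np.random.default_rng(2)
G = rng.normal(size=(R, R, R))                 # core tensor
Us = [rng.normal(size=(I, R)) for _ in range(3)]  # factor matrices
B = tucker_to_full(G, Us)

X = rng.normal(size=(I, I, I))                 # one tensor covariate
y = np.tensordot(X, B, axes=3)                 # inner product <X, B>

n_full = I ** 3                                # parameters in an unstructured B
n_tucker = R ** 3 + 3 * I * R                  # core + three factor matrices
```

Here the Tucker parametrisation shrinks 4096 free parameters to 171, which is the dimensionality reduction the abstract credits for efficient estimation.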

Relevance: 60.00%

Abstract:

Although there has been substantial research on long-run co-movement (common trends) in the empirical macroeconomics literature, little or no work has been done on short-run co-movement (common cycles). Investigating common cycles is important on two grounds: first, their existence is an implication of most dynamic macroeconomic models; second, they impose important restrictions on dynamic systems, which can be used for efficient estimation and forecasting. In this paper, using a methodology that takes into account short- and long-run co-movement restrictions, we investigate their existence in a multivariate data set containing U.S. per-capita output, consumption, and investment. As predicted by theory, the data have common trends and common cycles. Based on the results of a post-sample forecasting comparison between restricted and unrestricted systems, we show that a non-trivial loss of efficiency results when common cycles are ignored. If permanent shocks are associated with changes in productivity, the latter fails to be an important source of variation for output and investment, contradicting simple aggregate dynamic models. Nevertheless, these shocks play a very important role in explaining the variation of consumption, showing evidence of smoothing. Furthermore, it seems that permanent shocks to output play a much more important role in explaining unemployment fluctuations than previously thought.

Relevance: 60.00%

Abstract:

Reduced-form estimation of multivariate data sets currently takes long-run co-movement restrictions into account by using vector error correction models (VECMs). However, short-run co-movement restrictions are completely ignored. This paper proposes a way of taking both short- and long-run co-movement restrictions into account in multivariate data sets, leading to efficient estimation of VECMs. It enables a more precise trend-cycle decomposition of the data that imposes no untested restrictions to recover these two components. The proposed methodology is applied to a multivariate data set containing U.S. per-capita output, consumption and investment. Based on the results of a post-sample forecasting comparison between restricted and unrestricted VECMs, we show that a non-trivial loss of efficiency results whenever short-run co-movement restrictions are ignored. While permanent shocks to consumption still play a very important role in explaining consumption's variation, the improved estimates of trends and cycles of output, consumption, and investment show evidence of a more important role for transitory shocks than previously suspected. Furthermore, contrary to previous evidence, it seems that permanent shocks to output play a much more important role in explaining unemployment fluctuations.

Relevance: 60.00%

Abstract:

Despite the commonly held belief that aggregate data display short-run comovement, there has been little discussion about the econometric consequences of this feature of the data. We use exhaustive Monte Carlo simulations to investigate the importance of restrictions implied by common-cyclical features for estimates and forecasts based on vector autoregressive models. First, we show that the “best” empirical model developed without common cycle restrictions need not nest the “best” model developed with those restrictions. This is due to possible differences in the lag lengths chosen by model selection criteria for the two alternative models. Second, we show that the costs of ignoring common cyclical features in vector autoregressive modelling can be high, both in terms of forecast accuracy and efficient estimation of variance decomposition coefficients. Third, we find that the Hannan-Quinn criterion performs best among model selection criteria in simultaneously selecting the lag length and rank of vector autoregressions.
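
A minimal sketch of Hannan-Quinn lag selection for a VAR, the task on which the abstract finds the criterion performs best. The rank-selection step is omitted, and the simulated data, penalty form and lag range are illustrative assumptions:

```python
# OLS fit of a VAR(p) and lag selection by the Hannan-Quinn criterion.
import numpy as np

def hq_criterion(Y, p):
    """HQ = ln det(Sigma_u) + 2 ln(ln n) * p*K^2 / n for a VAR(p) fitted by OLS."""
    T, K = Y.shape
    Ydep = Y[p:]
    Z = np.hstack([np.ones((T - p, 1))] +
                  [Y[p - j:T - j] for j in range(1, p + 1)])  # intercept + lags
    B, *_ = np.linalg.lstsq(Z, Ydep, rcond=None)
    U = Ydep - Z @ B
    n = T - p
    Sigma = U.T @ U / n
    return np.log(np.linalg.det(Sigma)) + 2.0 * np.log(np.log(n)) * p * K * K / n

def select_lag(Y, pmax=6):
    crits = {p: hq_criterion(Y, p) for p in range(1, pmax + 1)}
    return min(crits, key=crits.get)

# Toy bivariate VAR(1) data; with a reasonable sample, HQ should pick a short lag.
rng = np.random.default_rng(3)
A = np.array([[0.5, 0.1], [0.0, 0.4]])
Y = np.zeros((400, 2))
for t in range(1, 400):
    Y[t] = A @ Y[t - 1] + rng.normal(0.0, 1.0, 2)
p_hat = select_lag(Y, pmax=4)
```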

Relevance: 60.00%

Abstract:

In this paper we show how to obtain efficient experimental designs for fitting the Michaelis-Menten and Hill equations, which are useful in chemical studies. The search for exact D-optimal designs using local and pseudo-Bayesian approaches is considered. The optimal designs were compared with those commonly used in practice using an efficiency measure and the theoretical standard errors of the kinetic parameter estimates. In conclusion, the D-optimal designs based on the Hill equation proved efficient for estimating the parameters of both models. Furthermore, they are promising with respect to practical issues, allowing efficient estimation as well as goodness-of-fit tests and comparisons between kinetic models.
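
The search for a local D-optimal design can be illustrated by brute force for the Michaelis-Menten model. The nominal parameter values, the design region and the restriction to two support points below are assumptions for illustration, not the paper's settings:

```python
# Brute-force local D-optimal two-point design for v = Vmax*s / (Km + s).
import numpy as np
from itertools import combinations

def mm_gradient(s, Vmax, Km):
    """Gradient of the Michaelis-Menten response w.r.t. (Vmax, Km) at concentration s."""
    return np.array([s / (Km + s), -Vmax * s / (Km + s) ** 2])

def d_criterion(points, Vmax, Km):
    """Determinant of the Fisher information (up to sigma^2), one observation per point."""
    M = sum(np.outer(mm_gradient(s, Vmax, Km), mm_gradient(s, Vmax, Km))
            for s in points)
    return np.linalg.det(M)

Vmax, Km = 1.0, 2.0                      # assumed nominal parameter values
grid = np.linspace(0.1, 10.0, 100)       # assumed design region for s
best = max(combinations(grid, 2), key=lambda pts: d_criterion(pts, Vmax, Km))
```

The search recovers the known qualitative features of local D-optimal designs for this model: one support point at the upper boundary of the design region, and an interior point determined by the nominal Km.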

Relevance: 60.00%

Abstract:

The Master's thesis is devoted to developing a model for estimating the technical state of a gas turbine engine (GTE). Approaches to data preparation, especially to handling unbalanced data, are presented in the thesis. In order to estimate model performance efficiently, a special metric was chosen. The goal of the thesis is to analyse monitoring parameter data and to develop a model for estimating the technical state of a GTE based on those data.
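
As an illustration of why a special metric is needed on unbalanced data (the thesis does not name its metric here, so the F1 score serves as a stand-in, on a made-up "faulty vs healthy" labelling):

```python
# Plain accuracy vs F1 score on a heavily unbalanced classification problem.
def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1_score(y_true, y_pred, positive=1):
    """Harmonic mean of precision and recall for the positive (faulty) class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# 95 healthy (0) vs 5 faulty (1) engines; a trivial "always healthy" model.
y_true = [0] * 95 + [1] * 5
y_naive = [0] * 100
acc = accuracy(y_true, y_naive)   # looks excellent despite detecting nothing
f1 = f1_score(y_true, y_naive)    # exposes the failure on the faulty class
```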

Relevance: 60.00%

Abstract:

Thesis (Master's)--University of Washington, 2016-08

Relevance: 40.00%

Abstract:

Large-scale image mosaicing methods are in great demand among scientists who study different aspects of the seabed, and have been fostered by impressive advances in the capabilities of underwater robots to gather optical data from the seafloor. Cost and weight constraints mean that low-cost remotely operated vehicles (ROVs) usually carry a very limited number of sensors. When a low-cost robot carries out a seafloor survey using a down-looking camera, it usually follows a predetermined trajectory that provides several non-time-consecutive overlapping image pairs. Finding these pairs (a process known as topology estimation) is indispensable for obtaining globally consistent mosaics and accurate trajectory estimates, which are necessary for a global view of the surveyed area, especially when optical sensors are the only data source. This thesis presents a set of consistent methods aimed at creating large-area image mosaics from optical data obtained during surveys with low-cost underwater vehicles. First, a global alignment method developed within a feature-based image mosaicing (FIM) framework, in which nonlinear minimisation is replaced by two linear steps, is discussed. Then, a simple four-point mosaic rectifying method is proposed to reduce distortions that can arise from lens distortion, error accumulation and the difficulties of optical imaging in an underwater medium. The topology estimation problem is addressed by means of a combined augmented-state and extended Kalman filter framework, aimed at minimising the total number of matching attempts while simultaneously obtaining the best possible trajectory. Potential image pairs are predicted by taking into account the uncertainty in the trajectory, and the contribution of matching an image pair is assessed using information-theoretic principles. Lastly, a different solution to the topology estimation problem is proposed in a bundle adjustment framework.
Innovative aspects include the use of a fast image similarity criterion combined with a minimum spanning tree (MST) solution to obtain a tentative topology. This topology is then improved by attempting image matching with the pairs for which there is the most overlap evidence. Unlike previous approaches to large-area mosaicing, our framework is able to deal naturally with cases where time-consecutive images cannot be matched successfully, such as completely unordered sets. Finally, the efficiency of the proposed methods is discussed and compared with other state-of-the-art approaches on a series of challenging datasets in underwater scenarios.
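
The MST step of the tentative-topology idea can be sketched as follows. The similarity values are invented, and a plain Prim's algorithm over the dissimilarities stands in for whatever implementation the thesis uses:

```python
# Tentative mosaic topology: minimum spanning tree over pairwise image dissimilarity.
import numpy as np

def mst_edges(dissim):
    """Prim's algorithm; returns the (n-1) edges of the MST as (i, j) pairs."""
    n = dissim.shape[0]
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        best = None
        for i in in_tree:
            for j in range(n):
                if j not in in_tree and (
                        best is None or dissim[i, j] < dissim[best[0], best[1]]):
                    best = (i, j)
        edges.append(best)
        in_tree.add(best[1])
    return edges

# Toy 4-image survey: consecutive images overlap strongly (high similarity),
# so the MST over dissimilarity (1 - similarity) links them in a chain.
sim = np.array([[1.0, 0.9, 0.2, 0.1],
                [0.9, 1.0, 0.8, 0.2],
                [0.2, 0.8, 1.0, 0.7],
                [0.1, 0.2, 0.7, 1.0]])
edges = mst_edges(1.0 - sim)
```

The MST edges are then the most promising pairs on which to attempt actual image matching first.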

Relevance: 40.00%

Abstract:

This paper presents the theoretical development of a nonlinear adaptive filter based on the concept of filtering by approximated densities (FAD). The most common procedures for nonlinear estimation apply the extended Kalman filter. In contrast to conventional techniques, the proposed recursive algorithm does not require any linearisation. The prediction step uses a maximum entropy principle subject to constraints, so the densities created are of exponential type and depend on a finite number of parameters; the filtering step yields recursive equations involving these parameters, and the update applies Bayes' theorem. Through simulation on a generic exponential model, the proposed nonlinear filter is implemented and its results prove superior to those of the extended Kalman filter and of a class of nonlinear filters based on partitioning algorithms.

Relevance: 40.00%

Abstract:

This thesis focuses on energy efficiency in wireless networks from the points of view of both transmission and information diffusion. On one hand, communication efficiency is investigated, attempting to reduce consumption during transmissions; on the other, the energy efficiency of the procedures required to distribute information among wireless nodes in complex networks is taken into account. As regards energy-efficient communications, an innovative transmission scheme that reuses signals of opportunity is introduced; this kind of signal has not previously been studied in the literature for communication purposes. The aim is to provide a way of transmitting information with energy consumption close to zero. On the theoretical side, starting from a general communication channel model subject to a limited input amplitude, the theme of low-power transmission signals is tackled from the perspective of stating sufficient conditions for the capacity-achieving input distribution to be discrete. Finally, the focus shifts to the design of energy-efficient algorithms for the diffusion of information, in particular to solving an estimation problem distributed over a wireless sensor network. The proposed solutions are analysed in depth, both to ensure their energy efficiency and to guarantee their robustness against losses during the diffusion of information (and, more generally, against truncation of the information diffusion).
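
One standard way to solve an estimation problem distributed over a wireless sensor network, sketched here purely for illustration (the thesis's own algorithms are not reproduced), is average consensus: each node repeatedly mixes its value with those of its neighbours, and every node converges to the network-wide average of the initial measurements without any fusion centre:

```python
# Distributed average consensus over a sensor network graph.
import numpy as np

def consensus(values, A, steps=200, eps=0.1):
    """values: initial node measurements; A: symmetric 0/1 adjacency matrix;
    eps: step size (must be small enough relative to the node degrees)."""
    x = np.asarray(values, dtype=float).copy()
    L = np.diag(A.sum(axis=1)) - A          # graph Laplacian
    for _ in range(steps):
        x = x - eps * (L @ x)               # x_i += eps * sum_j A_ij (x_j - x_i)
    return x

# 4-node ring network, each node holding a noisy measurement of the same quantity.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
meas = [10.0, 12.0, 9.0, 11.0]
x_final = consensus(meas, A)                # every entry approaches 10.5
```

Each iteration costs only one message exchange per link, which is why schemes of this family are attractive when transmission energy is the binding constraint.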

Relevance: 40.00%

Abstract:

A large number of proposals for estimating the bivariate survival function under random censoring have been made. In this paper we discuss nonparametric maximum likelihood estimation and Dabrowska's bivariate Kaplan-Meier estimator. We show how these estimators are computed, present their intuitive background and compare their practical performance under different levels of dependence and censoring, based on extensive simulation results, which leads to practical advice.
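
For reference, the univariate Kaplan-Meier estimator that these bivariate proposals generalise can be written in a few lines (a generic sketch, not the paper's bivariate estimators): the survival estimate is the product over event times t_i <= t of (1 - d_i / n_i), with d_i events and n_i subjects at risk.

```python
# Univariate Kaplan-Meier product-limit estimator under right censoring.
import numpy as np

def kaplan_meier(times, events):
    """times: observed times; events: 1 = event observed, 0 = right-censored.
    Returns (event_times, S) with S the survival estimate just after each event time."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events)
    order = np.argsort(times)
    times, events = times[order], events[order]
    uniq = np.unique(times[events == 1])
    S, surv = 1.0, []
    for t in uniq:
        n_at_risk = np.sum(times >= t)                  # still under observation at t-
        d = np.sum((times == t) & (events == 1))        # events at t
        S *= 1.0 - d / n_at_risk
        surv.append(S)
    return uniq, np.array(surv)

# Small example with censored observations (marked 0).
t = [1.0, 2.0, 2.0, 3.0, 4.0]
e = [1, 1, 0, 1, 0]
event_times, S = kaplan_meier(t, e)
```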

Relevance: 40.00%

Abstract:

The purpose of this thesis is the implementation of efficient grid adaptation methods based on the adjoint equations within the framework of finite volume methods (FVM) for unstructured grid solvers. The adjoint-based methodology aims at adapting grids to improve the accuracy of a functional output of interest, such as the aerodynamic drag or lift, which is typically a scalar engineering quantity obtained by post-processing the solution. The methodology rests on a posteriori functional error estimation using the adjoint/dual-weighted residual (DWR) method, in which the error in a functional output is directly related to local residual errors of the primal solution through the adjoint variables. These variables are obtained by solving the corresponding adjoint problem for the chosen functional. The common approach to introducing the DWR method into codes based on finite volume discretisations involves an auxiliary embedded grid obtained by uniform refinement of the initial grid. Storing this mesh demands considerable computational resources; for 3D cases, the memory required can exceed that of the initial flow problem by an order of magnitude. In this thesis, an alternative method is proposed: the functional error estimate is re-formulated on a coarser auxiliary mesh, and a truncation error estimation technique, the τ-estimation method, is used to approximate the residuals that enter the DWR estimate. Using this error estimate, a grid adaptation algorithm is designed that retains the basic ingredients of standard adjoint-based adaptation but at a significantly lower associated computational cost.
The standard and the newly proposed adjoint-based adaptive methodologies have been incorporated into a finite volume flow solver commonly used in the European aeronautical industry. The influence of the different numerical parameters involved in the algorithm has been investigated. Finally, the proposed method is compared against other grid adaptation approaches, and its computational efficiency is demonstrated on a series of representative test cases of aeronautical interest.