7 results for Deterministic imputation
Abstract:
When it comes to real-life data sets, pieces of the whole set are often unavailable. This problem can have many different origins and therefore show different patterns; in the literature it is known as Missing Data. It can be handled in several ways, from discarding incomplete observations, to estimating what the missing values originally were, to simply ignoring the fact that some values are missing. The methods used to estimate missing data are called Imputation Methods. The work presented in this thesis has two main goals. The first is to determine whether any interactions exist between Missing Data, Imputation Methods and Supervised Classification algorithms when they are applied together. For this first problem we consider a scenario in which the databases used are discrete, understanding discrete to mean that no relation between observations is assumed. These datasets underwent processes involving different combinations of the three components mentioned. The outcome showed that the missing data pattern strongly influences the results produced by a classifier, and that in some cases the complex imputation techniques investigated in the thesis obtained better results than simple ones. The second goal of this work is to propose a new imputation strategy, this time constraining the setting of the previous problem to a special kind of dataset, the multivariate Time Series. We designed new imputation techniques for this particular domain and combined them with some of the contrasted strategies tested in the previous chapter of this thesis. The time series were likewise subjected to missing data and imputation processes in order to finally propose an overall better imputation method. In the final chapter of this work, a real-world example is presented, describing a water quality prediction problem. The databases that characterize this problem already contained missing values of their own, which provides a real-world benchmark for the algorithms developed in this thesis.
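The kind of experiment described above, crossing imputation methods with a downstream supervised classifier, can be sketched as follows; this is a minimal illustration assuming scikit-learn, a generic benchmark dataset and values removed completely at random, none of which are the thesis's actual data or protocol:

```python
# Sketch: compare a simple and two more complex imputation methods by the
# downstream accuracy of a supervised classifier (assumed setup, not the
# thesis's exact protocol).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import SimpleImputer, IterativeImputer, KNNImputer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)

# Inject values missing completely at random (one possible missingness pattern).
mask = rng.random(X.shape) < 0.20
X_missing = X.copy()
X_missing[mask] = np.nan

imputers = {
    "mean (simple)": SimpleImputer(strategy="mean"),
    "kNN (complex)": KNNImputer(n_neighbors=5),
    "iterative (complex)": IterativeImputer(max_iter=10, random_state=0),
}

for name, imputer in imputers.items():
    pipe = make_pipeline(imputer, RandomForestClassifier(random_state=0))
    scores = cross_val_score(pipe, X_missing, y, cv=5)
    print(f"{name:20s} accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")
```

With real data one would also vary the missingness mechanism itself (not only the rate), since the abstract's conclusion is precisely that the missing data pattern strongly influences the classifier's results.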
Abstract:
The aim of this paper is to explain under which circumstances using TACs as an instrument to manage a fishery, along with fishing periods, may be interesting from a regulatory point of view. To do so, the deterministic analysis of Homans and Wilen (1997) and Anderson (2000) is extended to a stochastic scenario in which the resource cannot be measured accurately. The resulting endogenous stochastic model is solved numerically to find the optimal control rules for the Iberian sardine stock. Three relevant conclusions can be highlighted from the simulations. First, the higher the uncertainty about the state of the stock, the lower the probability of closing the fishery. Second, the use of TACs as a management instrument in fisheries already regulated with fishing periods leads to: i) an increase in the optimal season length and harvests, especially for medium and high numbers of licences; ii) an improvement in the biological and economic variables when the size of the fleet is large; and iii) the elimination of the extinction risk for the resource. And third, the regulator would rather select the number of licences and leave the season length unrestricted.
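The kind of exercise the paper performs, evaluating harvest rules when the stock can only be measured with error, can be sketched with a toy Monte Carlo simulation; the logistic dynamics, lognormal observation error and threshold TAC rule below are illustrative assumptions, not the paper's endogenous model of the Iberian sardine stock:

```python
# Stylized sketch: a TAC rule applied to a noisy stock estimate. The regulator
# closes the fishery when the estimate falls below a threshold, otherwise sets
# the TAC as a fraction of the estimate. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
r, K = 0.6, 1.0            # intrinsic growth rate and carrying capacity (normalized)
harvest_rate = 0.3         # TAC as a fraction of the *estimated* stock
closure_threshold = 0.2    # fishery closed if the estimate falls below this level
years, n_sims = 50, 2000

def simulate(obs_sigma):
    closures, final_stock = 0, np.empty(n_sims)
    for s in range(n_sims):
        x = 0.5 * K
        for _ in range(years):
            # Stock is observed with multiplicative lognormal error.
            estimate = x * rng.lognormal(mean=0.0, sigma=obs_sigma)
            if estimate < closure_threshold * K:
                tac = 0.0
                closures += 1
            else:
                tac = harvest_rate * estimate
            catch = min(tac, x)
            escapement = max(x - catch, 1e-6)
            x = escapement + r * escapement * (1.0 - escapement / K)
        final_stock[s] = x
    return closures / (n_sims * years), final_stock.mean()

for sigma in (0.1, 0.3, 0.6):
    p_close, mean_stock = simulate(sigma)
    print(f"obs sigma={sigma:.1f}: closure frequency={p_close:.3f}, "
          f"mean final stock={mean_stock:.2f}")
```

Such a simulation only explores how closure frequency and stock outcomes respond to measurement noise under a fixed rule; the paper instead derives the optimal control rules within its stochastic model.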
Abstract:
This paper analyzes the trend processes characterized by two standard growth models using simple econometrics. The first model is the basic neoclassical growth model, which postulates a deterministic trend for output. The second is the Uzawa-Lucas model, which postulates a stochastic trend for output. The aim is to understand how the different trend processes for output assumed by these two standard growth models determine the ability of each model to explain the observed trend processes of other macroeconomic variables such as consumption and investment. The results show that both models reproduce the output trend process. Moreover, the basic growth model properly captures the consumption trend process but fails to characterize the investment trend process, while the reverse is true for the Uzawa-Lucas model.
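The deterministic-versus-stochastic-trend distinction that separates the two models is the kind of hypothesis usually examined with unit-root tests; below is a minimal sketch with statsmodels on simulated series (the paper's actual econometric procedure is not detailed in this abstract):

```python
# Sketch: distinguishing a deterministic trend (trend-stationary series) from a
# stochastic trend (unit root) with an augmented Dickey-Fuller test.
# The two series are simulated for illustration only.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(42)
T = 400
t = np.arange(T)

# Trend-stationary: linear deterministic trend plus stationary noise.
deterministic_trend = 0.02 * t + rng.normal(scale=0.5, size=T)
# Difference-stationary: random walk with drift (stochastic trend).
stochastic_trend = np.cumsum(0.02 + rng.normal(scale=0.5, size=T))

for name, series in [("deterministic trend", deterministic_trend),
                     ("stochastic trend", stochastic_trend)]:
    # Include a constant and a linear trend in the test regression.
    stat, pvalue, *_ = adfuller(series, regression="ct")
    verdict = "reject unit root" if pvalue < 0.05 else "cannot reject unit root"
    print(f"{name:20s}: ADF stat={stat:6.2f}, p={pvalue:.3f} -> {verdict}")
```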
Abstract:
[ES] Implied models are one of the alternatives to the Black-Scholes option pricing model that has developed most strongly in recent years. Within this approach there are several variants: implied trees, models with a deterministic volatility function, and models with an implied volatility function. All of them are built from an estimate of the risk-neutral probability distribution of the future price of the underlying asset that is consistent with the market prices of the traded options. As a consequence, implied models deliver good in-sample option valuation results. However, their performance as a forecasting tool for out-of-sample options is not satisfactory. This article analyzes the extent to which this approach contributes to improving option valuation, from both a theoretical and a practical point of view.
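The common ingredient of these implied approaches, a risk-neutral density recovered from quoted option prices, can be approximated with the Breeden-Litzenberger relation, i.e. the second derivative of the call price with respect to the strike; the sketch below uses synthetic Black-Scholes prices purely for illustration:

```python
# Sketch: recover a risk-neutral density from call prices across strikes via
# the Breeden-Litzenberger relation  q(K) = exp(rT) * d^2C/dK^2.
# Prices here are synthetic Black-Scholes values; with market quotes one would
# first smooth them (e.g. in implied-volatility space) before differencing.
import numpy as np
from scipy.stats import norm

S0, r, T, sigma = 100.0, 0.02, 0.5, 0.25   # illustrative parameters

def bs_call(K):
    d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

strikes = np.linspace(60.0, 160.0, 201)
dK = strikes[1] - strikes[0]
calls = bs_call(strikes)

# Central second difference approximates d^2C/dK^2 at the interior strikes.
density = np.exp(r * T) * (calls[2:] - 2.0 * calls[1:-1] + calls[:-2]) / dK**2
mid_strikes = strikes[1:-1]

print("integral of recovered density:", np.trapz(density, mid_strikes))
print("mode of recovered density:", mid_strikes[np.argmax(density)])
```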
Abstract:
Background: Malignancies arising in the large bowel cause the second largest number of deaths from cancer in the Western world. Despite the progress made during the last decades, colorectal cancer remains one of the most frequent and deadly neoplasias in Western countries. Methods: A genomic study of human colorectal cancer was carried out on a total of 31 tumoral samples, corresponding to different stages of the disease, and 33 non-tumoral samples. The study was performed by hybridising the tumour samples against a reference pool of non-tumoral samples using Agilent Human 1A 60-mer oligo microarrays. The results obtained were validated by qRT-PCR. In the subsequent bioinformatics analysis, gene networks were built by means of Bayesian classifiers, variable selection and bootstrap resampling. The consensus among all the induced models produced a hierarchy of dependences and, thus, of variables. Results: After an exhaustive pre-processing stage to ensure data quality (missing value imputation, probe quality control, data smoothing and intraclass variability filtering), the final dataset comprised a total of 8,104 probes. Next, a supervised classification approach and data analysis were carried out to obtain the most relevant genes, two of which are directly involved in cancer progression and in particular in colorectal cancer. Finally, a supervised classifier was induced to classify new unseen samples. Conclusions: We have developed a tentative model for the diagnosis of colorectal cancer based on a biomarker panel. Our results indicate that the gene profile described herein can discriminate between non-cancerous and cancerous samples with 94.45% accuracy using different supervised classifiers (AUC values in the range of 0.955 to 0.997).
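The consensus step described in the Methods (repeated resampling, model induction and a ranking of how often each probe is retained) can be sketched as follows; synthetic data and scikit-learn estimators stand in for the study's microarray data and Bayesian classifiers:

```python
# Sketch: bootstrap resampling + variable selection to build a consensus
# ranking of probes, followed by a supervised classifier evaluated by AUC.
# Synthetic data and generic estimators replace the study's actual pipeline.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
n_probes, n_samples, k, n_boot = 500, 64, 20, 200

X, y = make_classification(n_samples=n_samples, n_features=n_probes,
                           n_informative=10, random_state=0)

# Count how often each probe survives univariate selection on a bootstrap sample.
selection_counts = np.zeros(n_probes)
for _ in range(n_boot):
    idx = rng.integers(0, n_samples, size=n_samples)   # bootstrap resample
    selector = SelectKBest(f_classif, k=k).fit(X[idx], y[idx])
    selection_counts[selector.get_support()] += 1

consensus = np.argsort(selection_counts)[::-1][:k]     # most stable probes
print("top consensus probes:", consensus[:5])

# Classifier on the consensus panel, evaluated by AUC on held-out samples.
# (In a real study the consensus selection would be nested inside the
# train/test split to avoid selection bias.)
X_tr, X_te, y_tr, y_te = train_test_split(X[:, consensus], y, test_size=0.3,
                                          random_state=0, stratify=y)
clf = GaussianNB().fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"AUC on held-out samples: {auc:.3f}")
```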
Abstract:
We analyze the effects of capital income taxation on long-run growth in a stochastic, two-period overlapping generations economy. Endogenous growth is driven by a positive externality of physical capital in the production sector that makes firms exhibit an aggregate technology in equilibrium. We distinguish between capital income and labor income, and between attitudes towards risk and intertemporal substitution of consumption. We derive necessary and sufficient conditions such that i) increases in capital income taxation lead to higher equilibrium growth rates, and ii) the effect of changes in the capital income tax rate on equilibrium growth may be of opposite sign in stochastic and in deterministic economies. Such a sign reversal is shown to be more likely depending on i) how the intertemporal elasticity of substitution compares to one, and ii) the size of second-period labor supply. Numerical simulations show that, for reasonable values of the intertemporal elasticity of substitution, a sign reversal appears only for implausibly high values of second-period labor supply. The conclusion is that deterministic OLG economies, as in Smith (1996), are a good approximation of the effect of taxes on the equilibrium growth rate.
Abstract:
31 p.