20 results for Data Models

in Archivo Digital para la Docencia y la Investigación - Repositorio Institucional de la Universidad del País Vasco


Relevance:

60.00%

Abstract:

Previous research has shown a strong positive correlation between short-term persistence and long-term output growth, as well as between depreciation rates and long-term output growth. This evidence, therefore, contradicts the standard predictions of traditional neoclassical or AK-type growth models with exogenous depreciation. In this paper, we first confirm these findings for a larger sample of 101 countries. We then study the dynamics of growth and persistence in a model where both the depreciation rate and growth are endogenous and procyclical. We find that the model's predictions become consistent with the empirical evidence on persistence, long-term growth and depreciation rates.
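As a point of reference only, the sketch below shows one common way of computing the two cross-country statistics discussed above (short-term persistence as the first-order autocorrelation of growth, long-term growth as the sample mean) on a synthetic panel. The data, the 40-year horizon, and the built-in positive link between the two quantities are illustrative assumptions, not the paper's sample.

```python
import numpy as np

rng = np.random.default_rng(0)
n_countries, T = 101, 40     # 101 countries as in the paper; 40 years of synthetic data

# Synthetic growth panel: each country's output growth follows an AR(1) whose
# persistence rho and mean are drawn at random (illustrative data only).
rho = rng.uniform(0.0, 0.6, n_countries)
mu = 0.01 + 0.03 * rho + rng.normal(0.0, 0.005, n_countries)   # positive link built in, for illustration
growth = np.empty((n_countries, T))
growth[:, 0] = mu
for t in range(1, T):
    growth[:, t] = mu * (1 - rho) + rho * growth[:, t - 1] + rng.normal(0.0, 0.01, n_countries)

# One common measurement of the two statistics discussed in the abstract:
persistence = np.array([np.corrcoef(g[:-1], g[1:])[0, 1] for g in growth])  # short-term persistence
long_run_growth = growth.mean(axis=1)                                       # long-term output growth

print("cross-country corr(persistence, long-run growth):",
      round(np.corrcoef(persistence, long_run_growth)[0, 1], 3))
```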

Relevance:

60.00%

Abstract:

[ES] Over the last decade, interest has grown in the study of ownership structure as a determinant of diversification. However, there is a lack of research analysing the influence of the nature of the ultimate owner on the level and type of diversification. The aim of this paper is therefore to analyse the diversification strategies employed by large Spanish business groups whose parent company is listed on the stock markets, studying the differences between family and non-family groups and, for the latter, considering the nature of the ultimate owner. The starting point is a sample of ninety-nine business groups in which the companies making up each group are identified; binomial logistic models and panel data models are used as the econometric methodologies. The results show that the family nature of the group has a positive influence on specialization and on the use of related diversification strategies, and a negative influence on the use of unrelated diversification strategies. Family groups differ most from those non-family groups in which there is no reference shareholder able to exercise effective control of the group and ownership dispersion is greater, the so-called groups without effective control. The research deepens the analysis of the differences between family and non-family groups, specifically in the field of growth strategies, by considering the nature of the ultimate owner of non-family groups.
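As a rough illustration of the econometric setup described above, the sketch below fits a binomial logistic model on synthetic data. The variable names (family, no_effective_control, size, related_div), their codings, and the generated sample are hypothetical and only mimic the described design of ninety-nine groups; the paper's dataset and its panel data models are not reproduced.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 99   # the study covers ninety-nine business groups; the data here are synthetic

df = pd.DataFrame({
    "family": rng.integers(0, 2, n),                # 1 = family-controlled group (hypothetical coding)
    "no_effective_control": rng.integers(0, 2, n),  # 1 = dispersed ownership, no effective controller
    "size": rng.normal(0.0, 1.0, n),                # illustrative control variable
})

# Synthetic outcome: related diversification made more likely for family groups (by construction).
logit_p = -0.2 + 1.0 * df["family"] - 0.8 * df["no_effective_control"] + 0.3 * df["size"]
df["related_div"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

X = sm.add_constant(df[["family", "no_effective_control", "size"]])
fit = sm.Logit(df["related_div"], X).fit(disp=False)
print(fit.params)   # positive coefficient on "family" mirrors the direction reported above
```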

Relevance:

40.00%

Abstract:

Revisions of US macroeconomic data are not white noise. They are persistent, correlated with real-time data, and highly variable (around 80% of the volatility observed in US real-time data). Their business cycle effects are examined in an estimated DSGE model extended with both real-time and final data. After implementing a Bayesian estimation approach, the roles of both habit formation and price indexation fall significantly in the extended model. The results show how revision shocks to both output and inflation are expansionary because they occur when real-time published data are too low and the Fed reacts by cutting interest rates. Consumption revisions, by contrast, are countercyclical, as consumption habits mirror the observed reduction in real-time consumption. In turn, revisions of the three variables explain 9.3% of output changes in the long-run variance decomposition.
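The minimal sketch below (synthetic data, not the paper's estimated DSGE model) illustrates the stylized facts in the first two sentences: revisions, defined as final minus real-time data, that are persistent, correlated with real-time data, and sizeable relative to real-time volatility. The AR(1) coefficients and noise scales are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 400

# Illustrative data-generating process: final data follow an AR(1); revisions are
# themselves persistent (AR(1)), so real-time releases = final - revision are systematically off.
final = np.zeros(T)
revision = np.zeros(T)
for t in range(1, T):
    final[t] = 0.6 * final[t - 1] + rng.normal(0.0, 1.0)
    revision[t] = 0.7 * revision[t - 1] + rng.normal(0.0, 0.8)
real_time = final - revision

def ar1(x):
    """First-order autocorrelation."""
    return np.corrcoef(x[:-1], x[1:])[0, 1]

print("revision persistence (AR1):     ", round(ar1(revision), 3))                       # not white noise
print("corr(revision, real-time data): ", round(np.corrcoef(revision, real_time)[0, 1], 3))
print("sd(revision)/sd(real-time data):", round(revision.std() / real_time.std(), 3))
```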

Relevance:

30.00%

Abstract:

Binmore and Samuelson (1999) have shown that perturbations (drift) are crucial for studying the stability properties of Nash equilibria. We contribute to this literature by providing a behavioural foundation for models of evolutionary drift. In particular, this article introduces a microeconomic model of drift based on the similarity theory developed by Tversky (1977), Kahneman and Tversky (1979) and Rubinstein (1988, 1998). An innovation with respect to those works is that we deal with similarity relations that are derived from the perception that each agent has about how well he is playing the game. In addition, the similarity relations are adapted to a dynamic setting. We obtain different models of drift depending on how we model the agent's assessment of his behaviour in the game. The examples of the ultimatum game and the chain-store game are used to show the conditions for each model to stabilize elements in the component of Nash equilibria that are not subgame-perfect. It is also shown how some models approximate the laboratory data about those games while others match the data.
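For intuition, the toy two-population replicator dynamic below adds a simple drift term to a mini ultimatum game and shows how the direction of drift in the responder population decides whether the non-subgame-perfect "fair offer" component survives. The payoffs, drift targets, and functional form are illustrative assumptions and do not implement the article's similarity-based drift models.

```python
# Toy two-population replicator dynamic with drift for a mini ultimatum game
# (illustrative payoffs and drift targets, NOT the article's model).
# Proposers offer Low (keep 8, give 2) or High (5/5 split); responders Accept-all or Reject-low.

def simulate(resp_drift_target, lam=0.05, dt=0.01, steps=40000, x=0.2, y=0.2):
    """x: share of proposers offering Low; y: share of responders accepting Low offers."""
    for _ in range(steps):
        f_low, f_high = 8.0 * y, 5.0                                # proposer fitnesses
        f_acc, f_rej = 2.0 * x + 5.0 * (1.0 - x), 5.0 * (1.0 - x)   # responder fitnesses
        dx = x * (1.0 - x) * (f_low - f_high) + lam * (0.5 - x)              # replicator + drift
        dy = y * (1.0 - y) * (f_acc - f_rej) + lam * (resp_drift_target - y)
        x = min(max(x + dt * dx, 0.0), 1.0)
        y = min(max(y + dt * dy, 0.0), 1.0)
    return x, y

# Drift biased toward accepting vs. toward rejecting Low offers selects different Nash
# components: the second case keeps the non-subgame-perfect "fair offer" outcome alive.
for target in (0.9, 0.3):
    x, y = simulate(target)
    print(f"responder drift target {target}: share offering Low {x:.2f}, acceptance of Low {y:.2f}")
```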

Relevance:

30.00%

Abstract:

Published as an article in: Investigaciones Economicas, 2005, vol. 29, issue 3, pages 483-523.

Relevance:

30.00%

Abstract:

Contributed to: III Bienal de Restauración Monumental: "Sobre la des-restauración" (Sevilla, Spain, Nov 23-25, 2006)

Relevance:

30.00%

Abstract:

Contributed to: Virtual Retrospect 2007 (Pessac, France, Nov 14-16, 2007)

Relevance:

30.00%

Abstract:

Contributed to: Fusion of Cultures: XXXVIII Annual Conference on Computer Applications and Quantitative Methods in Archaeology – CAA2010 (Granada, Spain, Apr 6-9, 2010)

Relevance:

30.00%

Abstract:

[EN] This paper is an outcome of the ERASMUS IP program called TOPCART; more information about this project can be accessed from the following item:

Relevance:

30.00%

Abstract:

Hyper-spectral data allows the construction of more robust statistical models of material properties than the standard tri-chromatic color representation. However, because of the large dimensionality and complexity of the hyper-spectral data, the extraction of robust features (image descriptors) is not a trivial issue. Thus, to facilitate efficient feature extraction, decorrelation techniques are commonly applied to reduce the dimensionality of the hyper-spectral data with the aim of generating compact and highly discriminative image descriptors. Current methodologies for data decorrelation, such as principal component analysis (PCA), linear discriminant analysis (LDA), wavelet decomposition (WD), or band selection methods, require complex and subjective training procedures and, in addition, the compressed spectral information is not directly related to the physical (spectral) characteristics associated with the analyzed materials. The major objective of this article is to introduce and evaluate a new data decorrelation methodology using an approach that closely emulates the human vision. The proposed data decorrelation scheme has been employed to optimally minimize the amount of redundant information contained in the highly correlated hyper-spectral bands and has been comprehensively evaluated in the context of non-ferrous material classification.
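As a baseline reference only, the sketch below applies the standard PCA decorrelation mentioned above to a synthetic hyper-spectral cube; it does not implement the article's human-vision-inspired method, and the cube dimensions, number of latent materials, and noise level are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)

# Synthetic hyper-spectral cube: 64x64 pixels, 100 highly correlated bands built
# from 3 latent "material" spectra plus noise (illustrative data only).
h, w, bands, materials = 64, 64, 100, 3
spectra = rng.random((materials, bands))                   # latent endmember spectra
abundances = rng.dirichlet(np.ones(materials), size=h * w) # per-pixel material fractions
cube = abundances @ spectra + 0.01 * rng.normal(size=(h * w, bands))

# PCA decorrelation: keep enough components to explain 99% of the variance.
pca = PCA(n_components=0.99)
descriptors = pca.fit_transform(cube)                      # compact per-pixel descriptors

print("original bands:      ", bands)
print("retained components: ", descriptors.shape[1])
print("explained variance:  ", pca.explained_variance_ratio_.sum().round(4))
```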

Relevance:

30.00%

Abstract:

The learning of probability distributions from data is a ubiquitous problem in the fields of Statistics and Artificial Intelligence. During the last decades, several learning algorithms have been proposed to learn probability distributions based on decomposable models due to their advantageous theoretical properties. Some of these algorithms can be used to search for a maximum likelihood decomposable model with a given maximum clique size, k, which controls the complexity of the model. Unfortunately, the problem of learning a maximum likelihood decomposable model given a maximum clique size is NP-hard for k > 2. In this work, we propose a family of algorithms which approximates this problem with a computational complexity of O(k · n^2 log n) in the worst case, where n is the number of random variables involved. The structures of the decomposable models that solve the maximum likelihood problem are called maximal k-order decomposable graphs. Our proposals, called fractal trees, construct a sequence of maximal i-order decomposable graphs, for i = 2, ..., k, in k − 1 steps. At each step, the algorithms follow a divide-and-conquer strategy based on the particular features of this type of structure. Additionally, we propose a prune-and-graft procedure which transforms a maximal k-order decomposable graph into another one, increasing its likelihood. We have implemented two particular fractal tree algorithms, called parallel fractal tree and sequential fractal tree. These algorithms can be considered a natural extension of Chow and Liu's algorithm from k = 2 to arbitrary values of k. Both algorithms have been compared against other efficient approaches in artificial and real domains, and they have shown competitive behavior on the maximum likelihood problem. Due to their low computational complexity, they are especially recommended for high-dimensional domains.
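For context, the sketch below implements the classical Chow and Liu algorithm, i.e. the k = 2 base case that the fractal tree algorithms extend: pairwise mutual information as edge weights followed by a maximum-weight spanning tree. It is not an implementation of the fractal tree or prune-and-graft procedures, and the synthetic data are illustrative.

```python
import numpy as np
import networkx as nx
from sklearn.metrics import mutual_info_score

def chow_liu_tree(data):
    """Maximum-likelihood tree-structured (k = 2 decomposable) model via Chow & Liu:
    edge weights are pairwise mutual informations, structure is the max-weight spanning tree."""
    n_vars = data.shape[1]
    g = nx.Graph()
    for i in range(n_vars):
        for j in range(i + 1, n_vars):
            g.add_edge(i, j, weight=mutual_info_score(data[:, i], data[:, j]))
    return nx.maximum_spanning_tree(g)

# Synthetic discrete data with a chain dependency 0 -> 1 -> 2 plus an independent variable 3.
rng = np.random.default_rng(4)
x0 = rng.integers(0, 2, 2000)
x1 = (x0 ^ (rng.random(2000) < 0.1)).astype(int)   # noisy copy of x0
x2 = (x1 ^ (rng.random(2000) < 0.1)).astype(int)   # noisy copy of x1
x3 = rng.integers(0, 2, 2000)
data = np.column_stack([x0, x1, x2, x3])

tree = chow_liu_tree(data)
print(sorted(tree.edges()))   # expected to recover edges (0, 1) and (1, 2), plus one weak edge to 3
```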

Relevance:

30.00%

Abstract:

[EN] This paper is based on the following project:

Relevance:

30.00%

Abstract:

Background: Recently, with access to low-toxicity biological and targeted therapies, evidence of the existence of a long-term survival subpopulation of cancer patients is appearing. We have studied an unselected population with advanced lung cancer to look for evidence of multimodality in the survival distribution and to estimate the proportion of long-term survivors. Methods: We used survival data of 4944 patients with non-small-cell lung cancer (NSCLC), stages IIIb-IV at diagnosis, registered in the National Cancer Registry of Cuba (NCRC) between January 1998 and December 2006. We fitted a one-component survival model and two-component mixture models to identify short- and long-term survivors. The Bayesian information criterion was used for model selection. Results: For all of the selected parametric distributions, the two-component model presented the best fit. The short-term survival population (median survival of almost 4 months) represented 64% of patients. The long-term survival population included 35% of patients and showed a median survival of around 12 months. None of the short-term survival patients was still alive at month 24, while 10% of the long-term survival patients died afterwards. Conclusions: There is a subgroup showing long-term evolution among patients with advanced lung cancer. As survival rates continue to improve with the new generation of therapies, prognostic models considering short- and long-term survival subpopulations should be considered in clinical research.
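A minimal sketch of the modelling idea, under strong simplifying assumptions: it fits one-component and two-component exponential models to synthetic, uncensored survival times by maximum likelihood and compares them with BIC. The exponential form, the sample size, and the mixing proportions (chosen to mimic the 64%/36% split reported above) are illustrative; the paper's registry data, candidate distributions, and censoring handling are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(5)

# Synthetic survival times (months): 64% short-term (median ~4) and 36% long-term (median ~12).
n = 3000
is_long = rng.random(n) < 0.36
t = np.where(is_long, rng.exponential(12 / np.log(2), n), rng.exponential(4 / np.log(2), n))

def nll_one(params):                      # one-component exponential log-likelihood (negated)
    rate = np.exp(params[0])
    return -np.sum(np.log(rate) - rate * t)

def nll_two(params):                      # two-component exponential mixture (negated)
    w = expit(params[0])
    r1, r2 = np.exp(params[1]), np.exp(params[2])
    lik = w * r1 * np.exp(-r1 * t) + (1 - w) * r2 * np.exp(-r2 * t)
    return -np.sum(np.log(lik))

fit1 = minimize(nll_one, x0=[np.log(0.1)])
fit2 = minimize(nll_two, x0=[0.0, np.log(0.2), np.log(0.05)])

def bic(nll, k):                          # BIC = 2*NLL + k*log(n), used here for model selection
    return 2 * nll + k * np.log(n)

print("BIC one-component:", round(bic(fit1.fun, 1), 1))
print("BIC two-component:", round(bic(fit2.fun, 3), 1))

w_hat, r1_hat, r2_hat = expit(fit2.x[0]), np.exp(fit2.x[1]), np.exp(fit2.x[2])
long_frac = w_hat if r1_hat < r2_hat else 1 - w_hat   # weight of the slower (long-term) component
print("estimated long-term fraction:", round(long_frac, 3))
```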