934 results for DYNAMIC PORTFOLIO SELECTION
Abstract:
In this study, we propose a new semi-nonparametric (SNP) density model for describing the density of portfolio returns. This distribution, which we refer to as the multivariate moments expansion (MME), admits any non-Gaussian (multivariate) distribution as its basis because it is specified directly in terms of the basis density’s moments. For expansions of the Gaussian density, the MME is a reformulation of the multivariate Gram-Charlier (MGC), but the MME is much simpler and more tractable than the MGC when positive transformations are used to produce well-defined densities. As an empirical application, we extend the dynamic conditional equicorrelation (DECO) model to an SNP framework using the MME. The resulting model is parameterized in a feasible manner to admit two-stage consistent estimation, and it captures the DECO as well as the salient non-Gaussian features of portfolio return distributions. The in- and out-of-sample performance of an MME-DECO model of a portfolio of 10 assets demonstrates that it can be a useful tool for risk management purposes.
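For orientation, here is the classical univariate Gram-Charlier expansion that the MGC extends to the multivariate case (a textbook sketch, not the authors’ MME formulas, which are not given in the abstract). Truncated expansions of this kind can go negative, which is why positive transformations are needed to guarantee well-defined densities:

```latex
% Univariate Gram-Charlier expansion around the standard Gaussian
% density \phi(x); the correction coefficients are simple functions
% of the moments (skewness s, kurtosis k), which is the kind of
% moment-based parameterization the MME generalizes to any basis.
\[
f(x) = \phi(x)\left[1 + \frac{s}{3!}\,\mathrm{He}_3(x)
                      + \frac{k-3}{4!}\,\mathrm{He}_4(x)\right],
\qquad
\mathrm{He}_3(x) = x^3 - 3x,\quad
\mathrm{He}_4(x) = x^4 - 6x^2 + 3.
\]
```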
Abstract:
A prominent hypothesis states that specialized neural modules within the human lateral frontopolar cortices (LFPCs) support “relational integration” (RI), the solving of complex problems using inter-related rules. However, it has been proposed that LFPC activity during RI could reflect the recruitment of additional “domain-general” resources when processing more difficult problems in general, as opposed to RI specifically. Moreover, theoretical research with computational models has demonstrated that RI may be supported by dynamic processes that occur throughout distributed networks of brain regions, as opposed to within a discrete computational module. Here, we present fMRI findings from a novel deductive reasoning paradigm that controls for general difficulty while manipulating RI demands. In accordance with the domain-general perspective, we observe an increase in frontoparietal activation during challenging problems in general, as opposed to RI specifically. Nonetheless, when examining frontoparietal activity using analyses of phase synchrony and psychophysiological interactions, we observe increased network connectivity during RI alone. Moreover, dynamic causal modeling with Bayesian model selection identifies the LFPC as the source of effective connectivity. Based on these results, we propose that during RI an increase in network connectivity and a decrease in network metastability allow rules that are coded throughout working memory systems to be dynamically bound. This change in connectivity state is propagated top-down via a hierarchical system of domain-general networks with the LFPC at the apex. In this manner, the functional network perspective reconciles key propositions of the globalist, modular, and computational accounts of RI within a single unified framework.
Abstract:
Publisher's version: http://www.isegi.unl.pt/docentes/acorreia/documentos/European_Challenge_KM_Innovation_2004.pdf
Abstract:
This work tests different delta hedging strategies for two products issued by Banco de Investimento Global in 2012. It studies the behaviour of the delta and gamma of autocallables and their impact on the results when delta hedging with different rebalancing periods. Given their discontinuous payoff and path dependency, it is suggested that the hedging portfolio be rebalanced on a daily basis to better follow market changes. Moreover, a mixed strategy is analysed in which time to maturity is used as a criterion for changing the rebalancing frequency.
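A rough sketch of the rebalancing logic being compared (all names, parameters, and the use of a Black-Scholes delta are illustrative assumptions; the deltas of the autocallables studied would in practice come from a numerical model and can jump near the autocall barriers):

```python
import math

def bs_delta(spot, strike, r, sigma, tau):
    """Black-Scholes call delta; a stand-in for the product's model
    delta, which for an autocallable would be computed numerically."""
    if tau <= 0:
        return 1.0 if spot > strike else 0.0
    d1 = (math.log(spot / strike) + (r + 0.5 * sigma ** 2) * tau) \
         / (sigma * math.sqrt(tau))
    return 0.5 * (1.0 + math.erf(d1 / math.sqrt(2.0)))

def hedge_portfolio_value(path, strike, r, sigma, maturity, rebalance_every):
    """Run a delta hedge along a daily price path, trading the underlying
    every `rebalance_every` business days; returns the final hedge value."""
    dt = 1.0 / 252.0
    cash, shares = 0.0, 0.0
    for day, spot in enumerate(path):
        cash *= math.exp(r * dt)                # interest accrual on cash
        if day % rebalance_every == 0:          # a rebalancing date
            tau = maturity - day * dt
            target = bs_delta(spot, strike, r, sigma, tau)
            cash -= (target - shares) * spot    # trade to the new delta
            shares = target
    return shares * path[-1] + cash

# The mixed strategy in the text could be sketched by switching
# rebalance_every from, say, 5 to 1 once time to maturity falls
# below a chosen threshold.
```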
Abstract:
This thesis considers a set of methods enabling statistical learning algorithms to better handle the sequential nature of financial portfolio management problems. We begin by considering the general problem of composing learning algorithms that must handle sequential tasks, in particular that of efficiently updating training sets within a sequential validation framework. We enumerate the desiderata that composition primitives should satisfy, and highlight the difficulty of meeting them rigorously and efficiently. We go on to present a set of algorithms that achieve these objectives, and present a case study of a complex financial decision-making system that uses these techniques. We then describe a general method for transforming a non-Markovian sequential decision problem into a supervised learning problem using a K-best-paths search algorithm. We address an application in portfolio management in which we train a learning algorithm to directly optimize a Sharpe ratio (or another non-additive criterion incorporating risk aversion). We illustrate the approach with an extensive experimental study, proposing a neural network architecture specialized for portfolio management and comparing it with several alternatives. Finally, we introduce a functional representation of time series allowing forecasts to be made over a variable horizon, while using an information set that is revealed progressively. The approach is based on Gaussian processes, which provide a full covariance matrix between all points for which a forecast is requested. This information is put to good use by an algorithm that actively trades price spreads between commodity futures contracts. The proposed approach produces significant out-of-sample risk-adjusted returns, after transaction costs, on a portfolio of 30 assets.
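For reference, the Sharpe ratio that the learning algorithm is trained to optimize directly, in standard notation (the thesis’s exact empirical criterion may differ):

```latex
% Sharpe ratio of portfolio returns R_t relative to a risk-free rate R_f:
\[
SR = \frac{\mathbb{E}[R_t - R_f]}{\sqrt{\operatorname{Var}(R_t - R_f)}}
\]
% Being a ratio of moments of the whole return sequence, it is not
% additive across time steps, which is why it does not fit the usual
% additive-reward Markovian formulation of sequential decision making.
```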
Abstract:
In transportation studies, route choice models describe a user's selection of a path from an origin to a destination. More precisely, the problem is to find, in a network composed of arcs and nodes, the sequence of arcs connecting two nodes according to given criteria. In the present work we apply dynamic programming to represent the choice process, treating the choice of a path as a sequence of arc choices. Furthermore, we use approximation techniques from dynamic programming in order to represent imperfect knowledge of the network state, in particular for arcs far from the current position. More precisely, each time a user reaches an intersection, he considers the utility of a certain number of future arcs, and an estimate is then made for the remainder of the path to the destination. The route choice model is implemented within a discrete-event traffic simulation model. The resulting model is tested on a model of a real road network in order to study its performance.
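A minimal sketch of the rolling-horizon arc-choice rule just described (the graph representation, arc utilities, and the estimate used beyond the lookahead horizon are all illustrative assumptions):

```python
def choose_next_arc(graph, node, dest, utility, estimate, lookahead=2):
    """Pick the outgoing arc maximizing the summed utility of up to
    `lookahead` future arcs plus an approximate value for the rest of
    the path (e.g. a scaled straight-line distance to `dest`)."""
    def best_continuation(n, depth):
        if depth == 0 or n == dest:
            return estimate(n, dest)   # approximation beyond the horizon
        options = [utility(n, m) + best_continuation(m, depth - 1)
                   for m in graph[n]]
        return max(options) if options else estimate(n, dest)
    return max(graph[node],
               key=lambda m: utility(node, m)
                             + best_continuation(m, lookahead - 1))

# Re-applying this rule at every intersection makes the route emerge
# as a sequence of arc choices, mirroring the dynamic programming view.
```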
Abstract:
Valuing a company as a dynamic system is quite complex; the various valuation models or methods are theoretical approximations and therefore simplifications of reality. These models rely on statistical assumptions or premises that allow such simplification, examples being investor behaviour or market efficiency. In the context of an emerging market, this process poses challenges for any valuation method, since the market does not obey the traditional paradigms. Valuation is thus even more complex, because investors face greater risks and obstacles. Likewise, as economies globalize and capital becomes more mobile, valuation will become even more important in the context described. This degree thesis aims to compile and analyse the different valuation methods, and to identify and apply those recognized as “good practices”. This process was carried out for one of the most important companies in Colombia, fundamentally considering the emerging-market context, and specifically the oil sector, as criteria for applying the traditional DCF and the practical R&V.
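For reference, the textbook form of the discounted cash flow (DCF) valuation applied in the thesis (emerging-market adjustments, such as a country risk premium added to the discount rate, sit on top of this basic formula):

```latex
% Enterprise value as the sum of discounted free cash flows FCF_t over
% an explicit horizon n, plus a discounted terminal value TV_n:
\[
V_0 = \sum_{t=1}^{n}\frac{FCF_t}{(1+\mathrm{WACC})^{t}}
      + \frac{TV_n}{(1+\mathrm{WACC})^{n}},
\qquad
TV_n = \frac{FCF_{n+1}}{\mathrm{WACC} - g},
\]
% where g is the perpetual growth rate of cash flows beyond year n.
```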
Abstract:
This paper considers an overlapping generations model in which capital investment is financed in a credit market with adverse selection. Lenders’ inability to commit ex ante not to bail out ex post, together with a wealthy position of entrepreneurs, gives rise to the soft budget constraint syndrome, i.e. the absence of regular liquidation of poorly performing firms. This problem arises endogenously as a result of the interaction between the economic behavior of agents, without relying on political economy explanations. We find the problem to be more binding along the business cycle, providing an explanation for creditors’ leniency during booms in some Latin American countries in the late seventies and early nineties.
Abstract:
Nonlinear adjustment toward long-run price equilibrium relationships in the sugar-ethanol-oil nexus in Brazil is examined. We develop generalized bivariate error correction models that allow for cointegration between sugar, ethanol, and oil prices, where dynamic adjustments are potentially nonlinear functions of the disequilibrium errors. A range of models is estimated using Bayesian Markov chain Monte Carlo algorithms and compared using Bayesian model selection methods. The results suggest that the long-run drivers of Brazilian sugar prices are oil prices and that there are nonlinearities in the adjustment processes of sugar and ethanol prices to oil prices, but linear adjustment between ethanol and sugar prices.
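A sketch of the bivariate error correction form described here, using log sugar (s_t) and oil (o_t) prices as an example pair (notation is illustrative; the abstract does not give the paper's exact specification of the nonlinear adjustment function g):

```latex
% Disequilibrium error from the long-run cointegrating relation:
\[
z_{t-1} = s_{t-1} - \beta_0 - \beta_1\, o_{t-1}
\]
% Generalized ECM: adjustment toward equilibrium is a potentially
% nonlinear function g(\cdot) of the disequilibrium error, with
% linear adjustment g(z) = \alpha z as the special case:
\[
\Delta s_t = g(z_{t-1}) + \sum_{i=1}^{p}\gamma_i\,\Delta s_{t-i}
           + \sum_{i=0}^{q}\delta_i\,\Delta o_{t-i} + \varepsilon_t
\]
```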
Abstract:
This paper describes the recent developments and improvements made to the variable-radius niching technique called Dynamic Niche Clustering (DNC). DNC is a fitness-sharing-based technique that employs a separate population of overlapping fuzzy niches with independent radii, which operate in the decoded parameter space and are maintained alongside the normal GA population. We describe a speedup process that can be applied to the initial generation, greatly reducing the complexity of the initial stages. A split operator is also introduced that is designed to counteract the excessive growth of niches, and it is shown that this improves the overall robustness of the technique. Finally, the effect of local elitism is documented and compared to the performance of the basic DNC technique on a selection of 2D test functions. The paper concludes with a view to future work on the technique.
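For context, the standard fitness-sharing rule on which DNC builds (a textbook sketch; DNC's contribution is to replace the single fixed radius with overlapping fuzzy niches having independent, dynamically maintained radii):

```latex
% Shared fitness divides raw fitness f_i by an individual's niche
% count, computed from distances d_{ij} to other population members:
\[
f'_i = \frac{f_i}{\sum_{j}\mathrm{sh}(d_{ij})},
\qquad
\mathrm{sh}(d) =
\begin{cases}
1 - (d/\sigma_{\mathrm{share}})^{\alpha}, & d < \sigma_{\mathrm{share}},\\
0, & \text{otherwise.}
\end{cases}
\]
```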
Abstract:
We consider the finite sample properties of model selection by information criteria in conditionally heteroscedastic models. Recent theoretical results show that certain popular criteria are consistent in that they will select the true model asymptotically with probability 1. To examine the empirical relevance of this property, Monte Carlo simulations are conducted for a set of non-nested data generating processes (DGPs) with the set of candidate models consisting of all types of model used as DGPs. In addition, not only is the best model considered but also those with similar values of the information criterion, called close competitors, thus forming a portfolio of eligible models. To supplement the simulations, the criteria are applied to a set of economic and financial series. In the simulations, the criteria are largely ineffective at identifying the correct model, either as best or a close competitor, the parsimonious GARCH(1,1) model being preferred for most DGPs. In contrast, asymmetric models are generally selected to represent actual data. This leads to the conjecture that the properties of parameterizations of processes commonly used to model heteroscedastic data are more similar than may be imagined and that more attention needs to be paid to the behaviour of the standardized disturbances of such models, both in simulation exercises and in empirical modelling.
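For reference, two of the popular criteria in question, in their standard forms (which of the criteria studied are consistent depends on how the penalty term behaves as the sample size n grows):

```latex
% Information criteria penalize the maximized log-likelihood by the
% number of estimated parameters k; n is the sample size:
\[
\mathrm{AIC} = -2\ln\hat{L} + 2k,
\qquad
\mathrm{BIC} = -2\ln\hat{L} + k\ln n.
\]
% BIC's penalty grows with n, which yields consistency (asymptotically
% selecting the true model with probability 1); AIC's fixed penalty
% does not.
```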
Abstract:
Demands for thermal comfort and better indoor air quality, together with lower environmental impacts, have shown ascending trends in the last decade. In many circumstances, these demands could not be fully covered through the soft approach of bioclimatic design, such as optimisation of the building orientation and internal layout. This is mostly because of the dense urban environment and building internal energy loads. In such cases, heating, ventilation, air-conditioning and refrigeration (HVAC&R) systems play a key role in fulfilling the requirements of the indoor environment. Therefore, it is necessary to select the most appropriate HVAC&R system. In this study, a robust decision making approach for HVAC&R system selection is proposed. Technical performance, economic aspects and environmental impacts of 36 permutations of primary and secondary systems are taken into account to choose the most appropriate HVAC&R system for a case study office building. The building is representative of the dominant form of office buildings in the UK. Dynamic performance evaluation of HVAC&R alternatives using the TRNSYS package, together with life cycle energy cost analysis, provides a reliable basis for decision making. Six scenarios broadly cover the decision makers' attitudes on HVAC&R system selection, which are analysed through the Analytic Hierarchy Process (AHP). One of the significant outcomes reveals that, despite both the higher energy demand and the greater investment requirements associated with the combined cooling, heating and power (CCHP) system, this system is one of the top ranked alternatives due to its lower energy cost and CO2 emissions. The sensitivity analysis reveals that in all six scenarios, the first five top ranked alternatives do not change. Finally, the proposed approach and the results could be used by researchers and designers, especially in the early stages of a design process, in which all involved parties face a lack of time, information and tools for the evaluation of a variety of systems.
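A minimal sketch of the AHP step used to rank alternatives (the pairwise comparison matrix below is a made-up example on Saaty's 1-9 scale, not the paper's data):

```python
import numpy as np

def ahp_priorities(pairwise):
    """Priority weights from a reciprocal pairwise comparison matrix,
    via the principal eigenvector, plus Saaty's consistency index."""
    A = np.asarray(pairwise, dtype=float)
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)              # principal eigenpair
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                             # normalized priority weights
    n = A.shape[0]
    ci = (eigvals.real[k] - n) / (n - 1)     # consistency index
    return w, ci

# Illustrative comparison of three criteria, e.g. energy cost,
# investment, and CO2 emissions (entries are hypothetical):
A = [[1,   3, 2],
     [1/3, 1, 1/2],
     [1/2, 2, 1]]
weights, ci = ahp_priorities(A)
```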
Abstract:
What are the microfoundations of dynamic capabilities that sustain competitive advantage in a highly volatile environment, such as a transition economy? We explore the detailed nature of these dynamic capabilities, along with their antecedents, by tracing the sequence of their development in a longitudinal case study of an organization subject to an external context of radical transition: the Russian oil company Yukos. First, our rich qualitative data indicate two distinct types of dynamic capabilities that are pivotal for organizational transformation. Adaptation dynamic capabilities relate to routines of resource exploitation and deployment, which are supported by acquisition, internalization and dissemination of extant knowledge, as well as resource reconfiguration, divestment and integration. Innovation dynamic capabilities relate to the creation of completely new capabilities via exploration and path-creation processes, which are supported by search, experimentation and risk taking, as well as project selection, funding and implementation. Second, we find that sequencing the two types of dynamic capabilities helped the organization both to secure short-term competitive advantage and to create the basis for long-term competitive advantage. These dynamic capability constructs advance theoretical understanding of what dynamic capabilities are, whilst their sequencing explains how firms create, leverage and enhance them over time.
Abstract:
Break crops and multi-crop rotations are common in arable farm management, and the soil quality inherited from a previous crop is one of the parameters that determine the gross margin achieved with a given crop from a given parcel of land. In previous work we developed a dynamic economic model to calculate the potential yield and gross margin of a set of crops grown in a selection of typical rotation scenarios, and we reported use of the model to calculate coexistence costs for GM maize grown in a crop rotation. The model predicts the economic effects of pest and weed pressures in monthly time steps. Validation of the model in respect of specific traits is proceeding as data from trials with novel crop varieties are published. Alongside this aspect of the validation process, we are able to incorporate data representing the economic impact of abiotic stresses on conventional crops, and then use the model to predict the cumulative gross margin achievable from a sequence of conventional crops grown at varying levels of abiotic stress. We report new progress with this aspect of model validation. In this paper, we report the further development of the model to take account of abiotic stress arising from drought, flood, heat or frost, with such stresses introduced in addition to variable pest and weed pressure. The main purpose is to assess the economic incentive for arable farmers to adopt novel crop varieties having multiple ‘stacked’ traits introduced by means of the various biotechnological tools available to crop breeders.
Abstract:
Bloom filters are a data structure for storing data in a compressed form. They offer excellent space and time efficiency at the cost of some loss of accuracy (so-called lossy compression). This work presents a yes-no Bloom filter, which is a data structure consisting of two parts: the yes-filter, a standard Bloom filter, and the no-filter, another Bloom filter whose purpose is to represent those objects that were recognised incorrectly by the yes-filter (that is, to recognise the false positives of the yes-filter). By querying the no-filter after an object has been recognised by the yes-filter, we get a chance of rejecting it, which improves the accuracy of data recognition in comparison with a standard Bloom filter of the same total length. A further increase in accuracy is possible if one chooses the objects to include in the no-filter so that it recognises as many false positives as possible but no true positives, thus producing the most accurate yes-no Bloom filter among all yes-no Bloom filters. This paper studies how optimization techniques can be used to maximize the number of false positives recognised by the no-filter, under the constraint that it should recognise no true positives. To achieve this aim, an Integer Linear Program (ILP) is proposed for the optimal selection of false positives. In practice the problem size is normally large, making the optimal solution intractable. Considering the similarity of the ILP to the Multidimensional Knapsack Problem, an Approximate Dynamic Programming (ADP) model is developed, making use of a reduced ILP for the value function approximation. Numerical results show that the ADP model performs best in comparison with a number of heuristics as well as the CPLEX built-in solver (branch and bound), and it is what can be recommended for use in yes-no Bloom filters. In the wider context of the study of lossy compression algorithms, our research is an example of how the arsenal of optimization methods can be applied to improving the accuracy of compressed data.
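A minimal sketch of the yes-no Bloom filter query logic (the hashing scheme and sizes are illustrative; deciding which false positives to store in the no-filter is exactly the optimization problem the paper addresses):

```python
import hashlib

class BloomFilter:
    def __init__(self, m, k):
        # m bit positions, k hash functions (k <= 8 with one SHA-256 digest)
        self.m, self.k = m, k
        self.bits = bytearray(m)

    def _positions(self, item):
        # derive k index positions from slices of a single hash digest
        h = hashlib.sha256(item.encode()).digest()
        return [int.from_bytes(h[4 * i:4 * i + 4], "big") % self.m
                for i in range(self.k)]

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def __contains__(self, item):
        return all(self.bits[p] for p in self._positions(item))

class YesNoBloomFilter:
    """yes-filter stores the set; no-filter stores selected false
    positives of the yes-filter (and must contain no true positives)."""
    def __init__(self, m_yes, m_no, k):
        self.yes = BloomFilter(m_yes, k)
        self.no = BloomFilter(m_no, k)

    def add(self, item):
        self.yes.add(item)

    def add_false_positive(self, item):
        # items chosen here by the paper's ILP/ADP selection step
        self.no.add(item)

    def __contains__(self, item):
        # accept only if the yes-filter says yes AND the no-filter
        # does not reject it as a known false positive
        return item in self.yes and item not in self.no
```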