106 results for Generalized mean
Spanning tests in return and stochastic discount factor mean-variance frontiers: A unifying approach
Abstract:
We propose new spanning tests that assess if the initial and additional assets share the economically meaningful cost and mean representing portfolios. We prove their asymptotic equivalence to existing tests under local alternatives. We also show that, unlike two-step or iterated procedures, single-step methods such as continuously updated GMM yield numerically identical overidentifying restrictions tests, so there is arguably a single spanning test. To prove these results, we extend optimal GMM inference to deal with singularities in the long-run second moment matrix of the influence functions. Finally, we test for spanning using size and book-to-market sorted US stock portfolios.
Abstract:
The Generalized Assignment Problem consists in assigning a set of tasks to a set of agents with minimum cost. Each agent has a limited amount of a single resource, and each task must be assigned to one and only one agent, requiring a certain amount of the agent's resource. We present new metaheuristics for the generalized assignment problem based on hybrid approaches. One metaheuristic is a MAX-MIN Ant System (MMAS), an improved version of the Ant System recently proposed by Stutzle and Hoos for combinatorial optimization problems; it can be seen as an adaptive sampling algorithm that takes into consideration the experience gathered in earlier iterations of the algorithm. Moreover, this heuristic is combined with local search and tabu search heuristics to improve the search. A greedy randomized adaptive search procedure (GRASP) is also proposed. Several neighborhoods are studied, including one based on ejection chains that produces good moves without increasing the computational effort. We present computational results on the comparative performance of the methods, followed by concluding remarks and ideas for future research on generalized assignment related problems.
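The abstract describes MMAS as an adaptive sampling scheme with experience carried across iterations. As a minimal sketch of that idea for the GAP (not the authors' implementation: the desirability rule 1/(1+cost), the deposit amount, and all parameter values are our assumptions), one can bound pheromones in [tau_min, tau_max] and let only the best-so-far solution deposit:

```python
import random

def mmas_gap(costs, demand, capacity, iters=200, rho=0.1,
             tau_min=0.01, tau_max=1.0, seed=0):
    """Minimal MAX-MIN Ant System sketch for the Generalized Assignment Problem.

    costs[i][j]: cost of assigning task j to agent i
    demand[i][j]: resource task j consumes on agent i
    capacity[i]: resource budget of agent i
    Returns (best_cost, best_assignment), assignment[j] = agent of task j.
    """
    rng = random.Random(seed)
    m, n = len(costs), len(costs[0])
    tau = [[tau_max] * n for _ in range(m)]  # pheromone, kept in [tau_min, tau_max]
    best_cost, best_asg = float("inf"), None
    for _ in range(iters):
        load = [0.0] * m
        asg, cost, feasible = [], 0.0, True
        for j in range(n):
            # candidate agents with enough remaining capacity
            cand = [i for i in range(m) if load[i] + demand[i][j] <= capacity[i]]
            if not cand:
                feasible = False
                break
            # sample an agent: pheromone times a greedy desirability 1/(1+cost)
            w = [tau[i][j] / (1.0 + costs[i][j]) for i in cand]
            i = rng.choices(cand, weights=w)[0]
            asg.append(i); load[i] += demand[i][j]; cost += costs[i][j]
        if feasible and cost < best_cost:
            best_cost, best_asg = cost, asg
        # evaporate, then only the best-so-far solution deposits (MMAS rule)
        for i in range(m):
            for j in range(n):
                tau[i][j] = max(tau_min, (1 - rho) * tau[i][j])
        if best_asg is not None:
            for j, i in enumerate(best_asg):
                tau[i][j] = min(tau_max, tau[i][j] + rho / (1.0 + best_cost))
    return best_cost, best_asg
```

The local search, tabu search, and ejection-chain neighborhoods of the paper would be applied to each constructed assignment before the pheromone update.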
Abstract:
A method is offered that makes it possible to apply generalized canonical correlation analysis (CANCOR) to two or more matrices of different row and column order. The new method optimizes the generalized canonical correlation analysis objective by considering only the observed values. This is achieved by employing selection matrices. We present and discuss fit measures to assess the quality of the solutions. In a simulation study we assess the performance of our new method and compare it to an existing procedure called GENCOM, proposed by Green and Carroll. We find that our new method outperforms the GENCOM algorithm with respect to both model fit and recovery of the true structure. Moreover, as our new method does not require any type of iteration, it is easier to implement and requires less computation. We illustrate the method by means of an example concerning the relative positions of the political parties in the Netherlands, based on provincial data.
Abstract:
Principal curves were defined by Hastie and Stuetzle (JASA, 1989) as smooth curves passing through the middle of a multidimensional data set. They are nonlinear generalizations of the first principal component, a characterization of which is the basis for the principal curves definition. In this paper we propose an alternative approach based on a different property of principal components. Consider a point in the space where a multivariate normal is defined and, for each hyperplane containing that point, compute the total variance of the normal distribution conditioned to belong to that hyperplane. Choose the hyperplane minimizing this conditional total variance and look for the corresponding conditional mean. The first principal component of the original distribution passes through this conditional mean and is orthogonal to that hyperplane. This property is easily generalized to data sets with nonlinear structure. Repeating the search from different starting points, many points analogous to conditional means are found. We call them principal oriented points. When a one-dimensional curve runs through the set of these special points, it is called a principal curve of oriented points. Successive principal curves are recursively defined from a generalization of the total variance.
Abstract:
In this paper I explore the issue of nonlinearity (both in the data generation process and in the functional form that establishes the relationship between the parameters and the data) as it relates to the poor performance of the Generalized Method of Moments (GMM) in small samples. For this purpose I build a sequence of models, starting with a simple linear model and enlarging it progressively until I approximate a standard (nonlinear) neoclassical growth model. I then use simulation techniques to find the small-sample distribution of the GMM estimators in each of the models.
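The simulation strategy above — draw many small samples, re-estimate, and inspect the estimator's distribution — can be sketched on a toy one-parameter problem (our hypothetical example, not the paper's growth model): exponential data with mean theta, overidentified by the moments E[x] = theta and E[x^2] = 2*theta^2, with an identity-weighted quadratic form minimized by grid search for simplicity:

```python
import random
import statistics

def gmm_exponential_theta(xs):
    """Toy one-parameter GMM estimator (identity weighting, grid search).

    Moment conditions for exponential data with mean theta:
    E[x] - theta = 0 and E[x^2] - 2*theta^2 = 0.
    """
    m1 = sum(xs) / len(xs)
    m2 = sum(x * x for x in xs) / len(xs)
    def q(theta):
        g1, g2 = m1 - theta, m2 - 2.0 * theta * theta
        return g1 * g1 + g2 * g2
    # coarse grid around the first-moment estimate
    grid = [m1 * (0.5 + 0.001 * k) for k in range(1001)]
    return min(grid, key=q)

def small_sample_distribution(theta=1.0, n=25, reps=500, seed=0):
    """Monte Carlo small-sample distribution of the toy GMM estimator."""
    rng = random.Random(seed)
    est = [gmm_exponential_theta([rng.expovariate(1.0 / theta) for _ in range(n)])
           for _ in range(reps)]
    return statistics.mean(est), statistics.stdev(est)
```

Even in this linear-ish toy case the nonlinear second moment visibly skews the small-sample distribution, which is the kind of effect the paper studies on progressively richer models.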
Abstract:
This paper presents a two-factor model of the term structure of interest rates. We assume that default-free discount bond prices are determined by the time to maturity and two factors: the long-term interest rate and the spread (the difference between the long-term rate and the short-term (instantaneous) riskless rate). Assuming that both factors follow a joint Ornstein-Uhlenbeck process, a general bond pricing equation is derived. We obtain a closed-form expression for bond prices and examine its implications for the term structure of interest rates. We also derive a closed-form solution for interest rate derivative prices. This expression is applied to price European options on discount bonds and more complex types of options. Finally, empirical evidence of the model's performance is presented.
Abstract:
This paper presents a general equilibrium model of money demand where the velocity of money changes in response to endogenous fluctuations in the interest rate. The parameter space can be divided into two subsets: one where velocity is constant and equal to one as in cash-in-advance models, and another one where velocity fluctuates as in Baumol (1952). Despite its simplicity, in terms of parameters to calibrate, the model performs surprisingly well. In particular, it approximates the variability of money velocity observed in the U.S. for the post-war period. The model is then used to analyze the welfare costs of inflation under uncertainty. This application calculates the errors derived from computing the costs of inflation with deterministic models. It turns out that the size of this difference is small, at least for the levels of uncertainty estimated for the U.S. economy.
Abstract:
Portfolio and stochastic discount factor (SDF) frontiers are usually regarded as dual objects, and researchers sometimes use one to answer questions about the other. However, the introduction of conditioning information and active portfolio strategies alters this relationship. For instance, the unconditional portfolio frontier in Hansen and Richard (1987) is not dual to the unconditional SDF frontier in Gallant, Hansen and Tauchen (1990). We characterise the dual objects to those frontiers, and relate them to the frontiers generated with managed portfolios, which are commonly used in empirical work. We also study the implications of a safe asset and other special cases.
Abstract:
In a previous paper, a novel Generalized Multiobjective Multitree model (GMM-model) was proposed. This model considers for the first time multitree-multicast load balancing with splitting in a multiobjective context, whose mathematical solution is a whole Pareto optimal set that can include more solutions than it had previously been possible to find in the publications surveyed. To solve the GMM-model, this paper proposes a multiobjective evolutionary algorithm (MOEA) inspired by the Strength Pareto Evolutionary Algorithm (SPEA). Experimental results considering up to 11 different objectives are presented for the well-known NSF network, with two simultaneous data flows.
Abstract:
Upper bounds for the Betti numbers of generalized Cohen-Macaulay ideals are given. In particular, for the case of non-degenerate, reduced and irreducible projective curves we get an upper bound which only depends on their degree.
Abstract:
We introduce the induced generalized OWA (IGOWA) operator. It is a new aggregation operator that generalizes the OWA operator by drawing on the main characteristics of two well-known operators: the generalized OWA operator and the induced OWA operator. Thus, this operator uses generalized means and order-inducing variables in the reordering process. With this formulation, we obtain a wide range of aggregation operators that includes all the particular cases of the IOWA and GOWA operators, as well as other particular cases. We then generalize the IGOWA operator further by using quasi-arithmetic means. Finally, we develop a numerical example of the new model in a financial decision-making problem.
Abstract:
The conceptualization of talent has become increasingly important for both academics and practitioners as a way to advance the study of talent management. Indeed, the confusion about the meaning of talent in business practice prevents a consensus on the concept and practice of talent management. In this theoretical study we review the concept of talent in the business world in order to summarize what has been learned and to discuss the advantages and limitations of its different meanings. We conclude by formulating a definition of the concept, since a correct interpretation of talent management (let alone successful talent management) depends on a clear understanding of what is meant by talent in an organizational context. Moreover, with the proposed definition of talent we delimit the concept, avoiding some problems detected in earlier definitions (for example, generalities and tautologies) and highlighting the important variables that affect it and make it more manageable.
Abstract:
We introduce the induced 2-tuple linguistic generalized ordered weighted averaging (2-TILGOWA) operator. It is a new aggregation operator that extends previous models by using generalized means, order-inducing variables, and linguistic information represented with the 2-tuple linguistic model. Its main advantage is that it includes a large number of linguistic aggregation operators as particular cases, so the analysis can be seen from different perspectives, providing a more complete view of the problem considered and allowing the selection of the alternative that appears to be most in accordance with our interests or beliefs. We then develop a further generalization by using quasi-arithmetic means, obtaining the Quasi-2-TILOWA operator. The paper ends by analyzing the applicability of the new model in a decision-making problem on production management.
Abstract:
The index of maximum and minimum level is a very useful technique, especially for decision making, that uses the Hamming distance and the adequacy coefficient in the same problem. In this work, we propose a generalization that uses generalized and quasi-arithmetic means. The resulting aggregation operators are called the generalized ordered weighted averaging index of maximum and minimum level (GOWAIMAM) and the quasi-arithmetic index of maximum and minimum level (Quasi-OWAIMAM). These new operators generalize a wide range of particular cases, such as the generalized index of maximum and minimum level (GIMAM), the OWAIMAM, and others. An application to decision making in product selection is also developed.
Abstract:
In this paper we analyze the time of ruin in a risk process with interclaim times that are Erlang(n) distributed and a constant dividend barrier. We obtain an integro-differential equation for the Laplace transform of the time of ruin. Explicit solutions for the moments of the time of ruin are presented when the individual claim amounts have a distribution with rational Laplace transform. Finally, some numerical results and a comparison with the classical risk model, with interclaim times following an exponential distribution, are given.