814 results for Benchmark
Abstract:
Emerging markets have received wide attention from investors around the globe because of their return potential and risk diversification. This research examines the selection and timing performance of Canadian mutual funds that invest in fixed-income and equity securities in emerging markets. We use unconditional and conditional two- and five-factor benchmark models that accommodate the dynamics of returns in emerging markets. We also adopt the cross-sectional bootstrap methodology to distinguish between 'skill' and 'luck' for individual funds. All tests are conducted on a comprehensive data set of emerging-market bond and equity funds over the period 1989-2011. The risk-adjusted performance measures are estimated by least squares with the Newey-West adjustment, which yields standard errors robust to conditional heteroskedasticity and autocorrelation. The performance statistics of the emerging funds before (after) management-related costs are insignificantly positive (significantly negative). They are sensitive to the chosen benchmark model, and conditional information improves selection performance. The timing statistics are largely insignificant throughout the sample period and are not sensitive to the benchmark model. Evidence of timing and selection ability is found in a small number of funds, a result that is not sensitive to the fee structure. We also find that a majority of individual funds provide zero (very few provide positive) abnormal returns before fees and significantly negative returns after fees. At the negative tail of the performance distribution, our resampling tests fail to reject the role of bad luck in the poor performance of funds, and we conclude that most of them are merely 'unlucky'.
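To make the estimation step concrete, here is a minimal sketch (not the authors' code) of regressing a fund's excess return on benchmark factors by least squares with Newey-West (HAC) standard errors, using statsmodels; the factor data, lag length, and coefficients below are illustrative stand-ins.

```python
# Minimal sketch: fund alpha estimated by OLS with Newey-West (HAC)
# standard errors. All data here is simulated for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 264  # monthly observations, e.g. 1989-2011
factors = rng.normal(size=(T, 5))  # stand-in for five benchmark factors
excess_ret = (factors @ np.array([0.4, 0.2, 0.1, 0.05, 0.3])
              + rng.normal(scale=0.02, size=T))

X = sm.add_constant(factors)       # the constant term plays the role of alpha
fit = sm.OLS(excess_ret, X).fit(cov_type="HAC", cov_kwds={"maxlags": 12})
print(fit.params[0], fit.bse[0])   # alpha and its HAC-robust standard error
```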
Abstract:
Ordered gene problems are a very common class of optimization problems. Because of their popularity, countless algorithms have been developed in an attempt to find high-quality solutions to them, and many other problems are commonly reduced to ordered gene form so that these well-studied heuristics and metaheuristics can be applied. Multiple ordered gene problems are studied here, namely the travelling salesman problem, the bin packing problem, and the graph colouring problem. In addition, two bioinformatics problems not traditionally seen as ordered gene problems are studied: DNA error correction and DNA fragment assembly. These problems are attacked with multiple variations and combinations of heuristics and metaheuristics using two distinct types of representations. The majority of the algorithms are built around the Recentering-Restarting Genetic Algorithm. The algorithm variations were successful on all problems studied, and particularly on the two bioinformatics problems. For DNA error correction, multiple cases were found in which 100% of the codes were corrected. The algorithm variations were also able to beat all other state-of-the-art DNA fragment assemblers on 13 out of 16 benchmark problem instances.
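As an illustration of how ordered-gene (permutation) representations are manipulated, the following is a minimal sketch of order crossover (OX), a standard operator for such representations; it is an assumed example, not the thesis's Recentering-Restarting Genetic Algorithm itself.

```python
# Minimal sketch of order crossover (OX) for permutation chromosomes,
# e.g. TSP tours: inherit a slice from one parent, fill the remainder
# in the other parent's relative order.
import random

def order_crossover(p1, p2):
    n = len(p1)
    a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b] = p1[a:b]                      # inherit a contiguous slice of p1
    fill = [g for g in p2 if g not in child]  # remaining genes, in p2's order
    for i in range(n):
        if child[i] is None:
            child[i] = fill.pop(0)
    return child

print(order_crossover([0, 1, 2, 3, 4, 5], [5, 4, 3, 2, 1, 0]))
```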
Abstract:
The purpose of this research was to examine the ways in which individuals with mental illness create a life of purpose, satisfaction and meaning. The data supported the identification of four common themes: (1) the power of leisure in activation, (2) the power of leisure in resiliency, (3) the power of leisure in identity and (4) the power of leisure in reducing struggle. Through an exploration of the experience of having a mental illness, this project supports the view that leisure provides therapeutic benefits that persist through negative life events. In addition, this project highlights the individual nature of recovery as a process of self-discovery. Through the creation of a visual model, this project provides a benchmark for how a small group of individuals have experienced living well with mental illness. As such, this work brings new thought to the growing body of mental health and leisure studies literature.
Abstract:
Experimental Extended X-ray Absorption Fine Structure (EXAFS) spectra carry information about the chemical structure of metal protein complexes. However, predicting the structure of such complexes from EXAFS spectra is not a simple task. Currently, methods such as Monte Carlo optimization or simulated annealing are used in structure refinement of EXAFS. These methods have proven somewhat successful in structure refinement but have not been successful in finding the global minimum. Multiple population-based algorithms, including a genetic algorithm, a restarting genetic algorithm, differential evolution, and particle swarm optimization, are studied for their effectiveness in structure refinement of EXAFS. The oxygen-evolving complex in the S1 state is used as a benchmark for comparing the algorithms. These algorithms were successful in finding new atomic structures whose calculated EXAFS spectra improved on those of atomic structures found previously.
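A hedged sketch of how such a refinement can be posed as black-box minimization, here with SciPy's differential evolution; the objective below is a toy stand-in for the misfit between calculated and experimental EXAFS spectra (a real refinement would call an EXAFS simulation code), and the bond lengths and bounds are hypothetical.

```python
# Minimal sketch: structure refinement as black-box minimization with
# differential evolution. The misfit function is a toy placeholder.
import numpy as np
from scipy.optimize import differential_evolution

target = np.array([2.7, 2.7, 3.3])   # hypothetical "true" bond lengths (angstroms)

def spectrum_misfit(coords):
    # Toy residual: squared distance between trial and reference bond lengths.
    # A real objective would compare simulated and experimental spectra.
    return float(np.sum((coords - target) ** 2))

bounds = [(1.5, 4.0)] * 3            # plausible bond-length range
result = differential_evolution(spectrum_misfit, bounds, seed=1)
print(result.x, result.fun)
```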
Characterizing Dynamic Optimization Benchmarks for the Comparison of Multi-Modal Tracking Algorithms
Abstract:
Population-based metaheuristics, such as particle swarm optimization (PSO), have been employed to solve many real-world optimization problems. Although it is often sufficient to find a single solution to these problems, there exist cases where identifying multiple, diverse solutions can be beneficial or even required. Some of these problems are further complicated by a change in their objective function over time; this type of optimization is referred to as dynamic, multi-modal optimization. Algorithms that exploit multiple optima in a search space are known as niching algorithms. Although numerous dynamic niching algorithms have been developed, their performance is often measured solely on their ability to find a single, global optimum. Furthermore, the comparisons often use synthetic benchmarks whose landscape characteristics are generally limited and unknown. This thesis provides a landscape analysis of the dynamic benchmark functions commonly developed for multi-modal optimization. The benchmark analysis reveals that the mechanisms responsible for dynamism in current dynamic benchmarks do not significantly affect landscape features, suggesting a lack of representation for problems whose landscape features vary over time. This analysis is used in a comparison of current niching algorithms to identify the effects that specific landscape features have on niching performance. Two performance metrics are proposed to measure both the scalability and the accuracy of the niching algorithms. The algorithm comparison demonstrates which algorithms are best suited to a variety of dynamic environments. The comparison also examines each algorithm's niching behaviour and analyzes the range of, and trade-off between, scalability and accuracy when tuning each algorithm's parameters. These results contribute to the understanding of current niching techniques as well as of the problem features that ultimately dictate their success.
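For readers unfamiliar with dynamic, multi-modal benchmarks, here is a minimal moving-peaks-style sketch, assuming a landscape defined as the maximum over several drifting peaks; the peak count and severity parameter are illustrative, not values from the thesis.

```python
# Minimal moving-peaks-style dynamic, multi-modal benchmark: fitness is
# the maximum over several peaks whose centers drift at each change.
import numpy as np

rng = np.random.default_rng(42)
centers = rng.uniform(0, 100, size=(5, 2))   # five peaks in a 2-D search space
heights = rng.uniform(30, 70, size=5)

def evaluate(x):
    """Fitness = height of the best peak minus distance to its center."""
    d = np.linalg.norm(centers - x, axis=1)
    return float(np.max(heights - d))

def change_environment(severity=1.0):
    """Shift every peak center by a random vector of the given severity."""
    global centers
    centers += severity * rng.normal(size=centers.shape)

print(evaluate(np.array([50.0, 50.0])))
change_environment()
print(evaluate(np.array([50.0, 50.0])))      # same point, changed landscape
```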
Abstract:
Mutations in genes, which are simple changes in our DNA, can produce undesirable phenotypes known as genetic diseases or disorders. These small changes, which happen frequently, can have extreme consequences. Understanding and identifying these changes, and associating the mutated genes with genetic diseases, can play an important role in our health by enabling better diagnosis and therapeutic strategies for these diseases. Years of experiments have produced a vast amount of data on the human genome and on different genetic diseases, and this data still needs to be processed properly to extract useful information. This work is an effort to analyze some useful datasets and to apply different techniques to associate genes with genetic diseases. Two genetic diseases were studied here: Parkinson's disease and breast cancer. Using genetic programming, we analyzed the complex network around the known disease genes of these diseases and, based on that analysis, generated a ranking of genes by their relevance to each disease. To generate these rankings, centrality measures were calculated for all nodes in the complex network surrounding the known disease genes of the given genetic disease. Using genetic programming, all nodes were then assigned scores based on the similarity of their centrality measures to those of the known disease genes. The results showed that this method is successful at finding patterns in centrality measures, and that the highly ranked genes are worthy candidate disease genes for further study. Using standard benchmark tests, we compared our approach against ENDEAVOUR and CIPHER, two well-known disease gene ranking frameworks, and obtained comparable results.
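A minimal sketch of the kind of node features described above, computed with networkx over a toy graph; the gene names are hypothetical, and the GP-evolved scoring function is replaced by a placeholder sort.

```python
# Minimal sketch: centrality measures as per-node features for ranking
# candidate disease genes. Graph and gene names are toy examples.
import networkx as nx

G = nx.Graph([("GENE_A", "GENE_B"), ("GENE_B", "GENE_C"),
              ("GENE_C", "GENE_D"), ("GENE_B", "GENE_D")])

deg = nx.degree_centrality(G)
btw = nx.betweenness_centrality(G)
clo = nx.closeness_centrality(G)
features = {n: (deg[n], btw[n], clo[n]) for n in G}

# A GP-evolved scoring function would combine such features into a
# disease-relevance score; sorting by the raw tuple is a placeholder.
for gene, f in sorted(features.items(), key=lambda kv: kv[1], reverse=True):
    print(gene, f)
```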
Abstract:
The curse of dimensionality is a major problem in the fields of machine learning, data mining and knowledge discovery. Exhaustive search for the optimal subset of relevant features in a high-dimensional dataset is NP-hard. Sub-optimal population-based stochastic algorithms such as GP and GA are good choices for searching through large search spaces, and are usually more feasible than exhaustive and deterministic search algorithms. On the other hand, population-based stochastic algorithms often suffer from premature convergence on mediocre sub-optimal solutions. The Age Layered Population Structure (ALPS) is a novel metaheuristic for overcoming premature convergence in evolutionary algorithms and for improving search in the fitness landscape. The ALPS paradigm uses an age measure to control breeding and competition between individuals in the population. This thesis uses a modification of the ALPS GP strategy called Feature Selection ALPS (FSALPS) for feature subset selection and classification in varied supervised learning tasks. FSALPS uses a novel frequency count system to rank features in the GP population based on evolved feature frequencies. The ranked features are translated into probabilities, which are used to control evolutionary processes such as terminal-symbol selection for the construction of GP trees/sub-trees. The FSALPS metaheuristic continuously refines the feature subset selection process while simultaneously evolving efficient classifiers through a non-converging evolutionary process that favors selection of features with high discrimination of class labels. We investigated and compared the performance of canonical GP, ALPS and FSALPS on high-dimensional benchmark classification datasets, including a hyperspectral image. Using Tukey's HSD ANOVA test at a 95% confidence interval, ALPS and FSALPS dominated canonical GP in evolving smaller but efficient trees with fewer bloated expressions. FSALPS significantly outperformed canonical GP, ALPS and some feature selection strategies reported in the related dimensionality-reduction literature.
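A minimal sketch, under assumed data structures, of the frequency-count idea: count how often each feature (terminal symbol) appears across the GP population and normalize the counts into selection probabilities.

```python
# Minimal sketch of FSALPS-style feature ranking: evolved feature
# frequencies turned into terminal-selection probabilities.
# The "population" here is a toy list of flattened GP trees.
from collections import Counter

population = [["f3", "f1", "f3"], ["f2", "f3"], ["f1", "f3", "f2", "f3"]]

counts = Counter(feat for tree in population for feat in tree)
total = sum(counts.values())
probabilities = {feat: c / total for feat, c in counts.items()}

# Biased terminal-symbol selection would sample from `probabilities`
# when constructing new GP trees/sub-trees.
print(probabilities)   # e.g. f3 -> 5/9, f1 -> 2/9, f2 -> 2/9
```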
Abstract:
In an economy where cash can be stored costlessly (in nominal terms), the nominal interest rate is bounded below by zero. This paper derives the implications of this nonnegativity constraint for the term structure and shows that it induces a nonlinear and convex relation between short- and long-term interest rates. As a result, the long-term rate responds asymmetrically to changes in the short-term rate, and by less than predicted by a benchmark linear model. In particular, a decrease in the short-term rate leads to a decrease in the long-term rate that is smaller in magnitude than the increase in the long-term rate associated with an increase in the short-term rate of the same size. To the extent that monetary policy acts by affecting long-term rates through the term structure, its power is considerably reduced at low interest rates. The empirical predictions of the model are examined using data from Japan.
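A schematic rendering of the mechanism, assuming a shadow-rate formulation and the expectations hypothesis; this is an illustration of the convexity argument, not the paper's exact model.

```latex
% Schematic: shadow rate s_t, observed short rate bounded at zero,
% expectations-hypothesis long rate of maturity n.
\[
  r_t = \max(0, s_t), \qquad
  R_t^{(n)} = \frac{1}{n} \sum_{i=0}^{n-1}
    \mathrm{E}_t\!\left[ \max(0, s_{t+i}) \right].
\]
% Convexity of \max(0, \cdot) and Jensen's inequality give
\[
  R_t^{(n)} \;\ge\; \frac{1}{n} \sum_{i=0}^{n-1}
    \max\!\left( 0, \mathrm{E}_t[s_{t+i}] \right),
\]
% so the long rate falls by less after a cut in expected short rates
% than it rises after an equal increase: the asymmetry described above.
```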
Abstract:
This paper develops a general stochastic framework and an equilibrium asset pricing model that make clear how attitudes towards intertemporal substitution and risk matter for option pricing. In particular, we show under which statistical conditions option pricing formulas are not preference-free, in other words, when preferences are not hidden in the stock and bond prices as they are in the standard Black and Scholes (BS) or Hull and White (HW) pricing formulas. The dependence of option prices on preference parameters comes from several instantaneous causality effects such as the so-called leverage effect. We also emphasize that the most standard asset pricing models (CAPM for the stock and BS or HW preference-free option pricing) are valid under the same stochastic setting (typically the absence of leverage effect), regardless of preference parameter values. Even though we propose a general non-preference-free option pricing formula, we always keep in mind that the BS formula is dominant both as a theoretical reference model and as a tool for practitioners. Another contribution of the paper is to characterize why the BS formula is such a benchmark. We show that, as soon as we are ready to accept a basic property of option prices, namely their homogeneity of degree one with respect to the pair formed by the underlying stock price and the strike price, the necessary statistical hypotheses for homogeneity provide BS-shaped option prices in equilibrium. This BS-shaped option-pricing formula allows us to derive interesting characterizations of the volatility smile, that is, the pattern of BS implicit volatilities as a function of the option moneyness. First, the asymmetry of the smile is shown to be equivalent to a particular form of asymmetry of the equivalent martingale measure. Second, this asymmetry appears precisely when there is either a premium on an instantaneous interest rate risk or on a generalized leverage effect or both, in other words, whenever the option pricing formula is not preference-free. Therefore, the main conclusion of our analysis for practitioners should be that an asymmetric smile is indicative of the relevance of preference parameters to price options.
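The homogeneity property invoked above can be stated formally; this is the standard textbook formulation rather than a quotation from the paper.

```latex
% Homogeneity of degree one in the stock price S_t and strike K:
\[
  C_t(\lambda S_t, \lambda K) = \lambda \, C_t(S_t, K)
  \quad \text{for all } \lambda > 0,
\]
% which implies the price, and hence the BS implied volatility, depends
% on (S_t, K) only through the option's moneyness K / S_t:
\[
  C_t(S_t, K) = S_t \, c\!\left( \frac{K}{S_t} \right)
  \;\text{for some function } c.
\]
```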
Abstract:
How does openness affect economic development? This question is answered in the context of a dynamic general equilibrium model of the world economy, where countries have technological differences that are both sector-neutral and specific to the investment goods sector. Relative to a benchmark case of trade in credit markets only, we consider (i) a complete restriction of trade and (ii) a full liberalization of trade. The first change decreases the cross-sectional dispersion of incomes only slightly and produces a relatively small welfare loss. The second change, instead, decreases dispersion by a significant amount and produces a very large welfare gain.
Abstract:
This thesis is a multidisciplinary study of the concept of interpersonal forgiveness. It seeks to delineate the scope and dynamics of forgiveness, in part by answering the question: why forgive? Until recently, little had been written on forgiveness, but the last two decades have seen a flowering of research on the subject by psychologists awakened to its therapeutic benefits. In parallel, philosophers and theologians have also taken an interest in the question and begun to publish their reflections. Two hypotheses shape the course of our research. The first concerns the meaning of the second part of the biblical saying in Luke 23:34, "Father, forgive them, for they know not what they do." It proposes that the "motive of ignorance" this saying asserts has universal scope, and holds that the offender is in a state of unconscious ignorance when doing wrong. Forgiving offenses would therefore mean forgiving this unconscious ignorance. The second hypothesis conjectures that interpersonal forgiveness belongs to a spiritual dynamic even when it has left its religious moorings. We argue that the forgiveness-spirituality relation is significant, and that understanding it can help us better grasp the essence of a forgiveness that has become secular and allow it to flourish. To establish the value of this hypothesis, we must study the dynamics of a forgiveness process as well as determine the current status of spirituality. The thesis is divided into three parts. The first part presents the thought of significant authors in each of the main disciplines concerned with forgiveness: philosophy, theology, psychology and spirituality. It addresses forgivable and unforgivable offenses, conditional and unconditional forgiveness, correlates of forgiveness such as forgetting, anger, guilt and repentance, and the results of empirical psychotherapeutic studies on forgiveness. This first part ends with a reflection on spirituality, in order to see to what extent forgiveness becomes a spiritual dynamic. The second part is devoted to examining the hypothesis concerning the meaning and scope of "for they know not what they do." We first call on exegetical expertise to establish the authenticity and scope of this passage. We then explore philosophical thought through history to understand the true meaning of free will and its impact on the conception of fault. The philosophical questioning of free will brings us back to the Socratic thesis that "no one does evil voluntarily." René Girard's mimetic theory demonstrates that persecutors are fundamentally unaware of what they are doing, and the theologian Lytta Basset identifies the fantasy of the knowledge of good and evil as deepening this ignorance that is unaware of itself. The third part of the thesis integrates the reflections and findings of the first two parts and situates them in a path leading from the unforgivable to healing, conceptualizing them with a matrix of verticality and horizontality that schematizes their interactions. We discover that while "for they know not what they do" provides the logical answer to the question of why forgive, there is also a second path that leads to forgiveness: love.
Love is the key to forgiveness based on the gospel message, whereas empathy is the key to the psychotherapeutic approach. Finally, the comparison between "psychotherapeutic forgiveness" and "gospel forgiveness" leads us to conclude that there are two major paths to forgiveness: reason and love.
Abstract:
The aim of this paper is to discuss the crisis of the international financial system and the need to reform it through a new anchor or benchmark for the international currency, a money-commodity. Understanding the definition of a numéraire is a first necessity. Although most economists reject any connection between money and a particular commodity (gold), because of the existence of legal tender money in every country, it will be shown that this amounts to reducing the real space to an abstract number (usually assumed to be 1) in order to postulate that money is neutral. This is sheer nonsense. It will also be shown that the concept of fiat money or state money does not preclude the existence of commodity money. The paper is divided into four sections. The first section analyses the definition and meaning of a numéraire for the international currency and the justification for a variable standard of value. In the second section, the market value of the US dollar is analysed by looking at new forms of value (derivative products), the dollar as a safe haven, and the role of SDRs in reforming the international monetary system. In the third and fourth sections, empirical evidence on the most recent period of the financial crisis is presented and an econometric model is specified to fit those data. After estimating many different specifications of the model (linear stepwise regression, simultaneous regression with a GMM estimator, an error correction model), the main econometric result is that there is a one-to-one correspondence between the price of gold and the value of the US dollar. Indeed, the variance of the price of gold is mainly explained by the euro exchange rate defined with respect to the US dollar and the inflation rate, and is negatively influenced by the Dow Jones index and the interest rate.
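A hedged sketch (toy data, illustrative variable names) of the simplest of the specifications mentioned: an OLS regression of the gold price on the euro/dollar rate, inflation, the Dow Jones index and an interest rate.

```python
# Minimal sketch of the linear specification described in the abstract.
# All series are simulated stand-ins, not the paper's data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 120
df = pd.DataFrame({
    "eur_usd": rng.normal(1.3, 0.1, n),
    "inflation": rng.normal(2.0, 0.5, n),
    "dow_jones": rng.normal(12000, 800, n),
    "interest_rate": rng.normal(1.5, 0.4, n),
})
df["gold"] = (900 + 400 * df["eur_usd"] + 30 * df["inflation"]
              - 0.02 * df["dow_jones"] - 20 * df["interest_rate"]
              + rng.normal(0, 25, n))

fit = smf.ols("gold ~ eur_usd + inflation + dow_jones + interest_rate", df).fit()
print(fit.params)   # signs should mirror the relations described above
```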
Abstract:
In computational neuroscience, it has been hypothesized that the visual system, from the retina up to at least primary visual cortex, continuously fits a probabilistic model with latent variables to its stream of perceptions. Neither the exact model nor the exact fitting method is known, but existing algorithms for fitting such models require a conditional estimate of the latent variables. This can help us understand why the visual system might fit such a model: if the model is appropriate, these conditional estimates can also form an excellent representation for analyzing the semantic content of perceived images. The work presented here uses image classification performance (discrimination between common object types) as a basis for comparing models of the visual system and algorithms for fitting these models (viewed as probability densities) to images. This thesis (a) shows that models based on the complex cells of visual area V1 generalize better from labeled training examples than conventional neural networks, whose hidden units are more similar to V1 simple cells; (b) presents a new interpretation of complex-cell-based models of the visual system as probability distributions, along with new algorithms for fitting them to data; and (c) shows that these models form representations that are better for image classification after being trained as probability models. Two additional technical innovations that made this work possible are also described: a random search algorithm for selecting hyperparameters, and a compiler for matrix mathematical expressions that can optimize these expressions for central (CPU) and graphics (GPU) processors.
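A minimal sketch of the random hyperparameter search strategy mentioned above: sample configurations at random and keep the best by validation score. The search space and scoring function are placeholders, not the thesis's actual setup.

```python
# Minimal sketch of random search over hyperparameters: draw random
# configurations, score each, keep the best. The space and the scoring
# function below are illustrative placeholders.
import random

space = {
    "learning_rate": lambda: 10 ** random.uniform(-5, -1),  # log-uniform
    "hidden_units": lambda: random.choice([64, 128, 256, 512]),
}

def validation_score(config):
    # Placeholder for training a model and measuring validation accuracy.
    return -abs(config["learning_rate"] - 0.01) + config["hidden_units"] * 1e-4

best_config, best_score = None, float("-inf")
for _ in range(50):                       # trial budget
    config = {name: sample() for name, sample in space.items()}
    score = validation_score(config)
    if score > best_score:
        best_config, best_score = config, score
print(best_config, best_score)
```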
Abstract:
The central hypothesis to be tested is the relevance of gold in the determination of the value of the US dollar as an international reserve currency after 1971. In the first section, the market value of the US dollar is analysed by looking at new forms of value (financial derivative products), the dollar as a safe haven, the choice of a standard of value and the role of SDRs in reforming the international monetary system. Based on dimensional analysis, the second section analyses the definition and meaning of a numéraire for international currency and the justification for a variable standard of value based on a commodity (gold). The second section is the theoretical foundation for the empirical and econometric analysis in the third and fourth sections. The third section is devoted to the specification of an econometric model and a graphical analysis of the data, which makes clear that an inverse relation exists between the value of the US dollar and the price of gold. The fourth section presents the estimates of the different specifications of the model, including linear regression and cointegration analysis. The most important econometric result is that the null hypothesis is rejected in favour of a significant link between the price of gold and the value of the US dollar. There is also a positive relationship between the gold price and inflation. A statistically significant inverse relation between the gold price and monetary policy is shown by applying a dynamic cointegration model with lags.
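A minimal sketch of an Engle-Granger cointegration test of the kind mentioned, using statsmodels; the gold and dollar series below are simulated stand-ins for the paper's data.

```python
# Minimal sketch: Engle-Granger cointegration test between a gold price
# series and a dollar-value series sharing a common stochastic trend.
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(7)
common_trend = np.cumsum(rng.normal(size=500))   # shared random-walk component
gold = 1200 + 50 * common_trend + rng.normal(scale=5, size=500)
usd_value = 100 - 4 * common_trend + rng.normal(scale=1, size=500)

t_stat, p_value, crit = coint(gold, usd_value)
print(t_stat, p_value)   # a small p-value rejects "no cointegration"
```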