580 results for benchmarks
Abstract:
Mutual fund managers increasingly lend their holdings and/or use short sales to generate higher returns for their funds. This project presents a first look at the impact of these practices on performance, using three performance measures: i) Characteristic Selectivity (CS), the ability of the fund's managers to choose stocks that outperform their benchmarks; ii) Characteristic Timing (CT), the ability of the manager to time the market; and iii) Average Style (AS), the returns from funds systematically holding stocks with certain characteristics. These returns are computed with respect to the DGTW benchmarks. The effect of other variables that have also been shown to impact a fund's returns (total net assets under management, investment style, turnover and expense ratio) is also analyzed. I find that managers who use short sales do not exhibit better stock-picking ability than those who do not, while mutual funds that lend do present higher CS returns. In addition, while lending is not significant for the total performance of a fund, the use of short sales, and of both short sales and lending together, has a negative impact on the fund's performance.
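For concreteness, the standard DGTW decomposition behind these three measures (following Daniel, Grinblatt, Titman and Wermers, 1997; the notation below is the common textbook formulation, not taken from this project) can be written as:

```latex
% w_{j,t-1}: weight of stock j in the fund at the end of month t-1
% R_{j,t}:   month-t return of stock j
% R_t^{b_{j,t-k}}: month-t return of the characteristic-matched (size,
%                  book-to-market, momentum) benchmark portfolio assigned
%                  to stock j at month t-k
\begin{aligned}
CS_t &= \sum_{j} w_{j,t-1}\,\bigl(R_{j,t} - R_t^{\,b_{j,t-1}}\bigr), \\
CT_t &= \sum_{j} \bigl(w_{j,t-1}\,R_t^{\,b_{j,t-1}} - w_{j,t-13}\,R_t^{\,b_{j,t-13}}\bigr), \\
AS_t &= \sum_{j} w_{j,t-13}\,R_t^{\,b_{j,t-13}},
\end{aligned}
\qquad\text{so that}\qquad R_t = CS_t + CT_t + AS_t .
```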
Characterizing Dynamic Optimization Benchmarks for the Comparison of Multi-Modal Tracking Algorithms
Abstract:
Population-based metaheuristics, such as particle swarm optimization (PSO), have been employed to solve many real-world optimization problems. Although it is often sufficient to find a single solution to these problems, there are cases where identifying multiple, diverse solutions can be beneficial or even required. Some of these problems are further complicated by a change in their objective function over time; this type of optimization is referred to as dynamic, multi-modal optimization. Algorithms which exploit multiple optima in a search space are identified as niching algorithms. Although numerous dynamic niching algorithms have been developed, their performance is often measured solely on their ability to find a single, global optimum. Furthermore, the comparisons often use synthetic benchmarks whose landscape characteristics are generally limited and unknown. This thesis provides a landscape analysis of the dynamic benchmark functions commonly developed for multi-modal optimization. The benchmark analysis results reveal that the mechanisms responsible for dynamism in the current dynamic benchmarks do not significantly affect landscape features, suggesting a lack of representation of problems whose landscape features vary over time. This analysis is used in a comparison of current niching algorithms to identify the effects that specific landscape features have on niching performance. Two performance metrics are proposed to measure both the scalability and the accuracy of the niching algorithms. The algorithm comparison results identify the algorithms best suited to a variety of dynamic environments. The comparison also examines each algorithm's niching behaviour and analyzes the range of, and trade-off between, scalability and accuracy when tuning each algorithm's parameters. These results contribute to the understanding of current niching techniques as well as of the problem features that ultimately dictate their success.
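The thesis's own two metrics are not spelled out in the abstract; as a hedged illustration of the kind of scalability/accuracy measurement involved, here is a minimal sketch of the standard peak-ratio and peak-accuracy measures often used to score niching algorithms against the known optima of a benchmark:

```python
# Hypothetical sketch of two common niching metrics (peak ratio and peak
# accuracy); the thesis's proposed metrics may differ in detail.
import numpy as np

def peak_ratio(solutions, optima, epsilon=1e-3):
    """Fraction of known optima approximated by at least one solution."""
    solutions = np.asarray(solutions)   # shape (n_solutions, dim)
    optima = np.asarray(optima)         # shape (n_optima, dim)
    found = 0
    for opt in optima:
        # Distance from every candidate solution to this optimum.
        dists = np.linalg.norm(solutions - opt, axis=1)
        if dists.min() <= epsilon:
            found += 1
    return found / len(optima)

def peak_accuracy(solutions, optima):
    """Mean distance from each known optimum to its closest solution."""
    solutions = np.asarray(solutions)
    optima = np.asarray(optima)
    return float(np.mean([np.linalg.norm(solutions - o, axis=1).min()
                          for o in optima]))
```

Peak ratio captures how niching scales to many optima, while peak accuracy captures how precisely each niche is resolved; tuning an algorithm's parameters typically trades one against the other.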
Abstract:
A full assessment of para-virtualization is important because, without knowledge of the various overheads, users cannot judge whether using virtualization is a good idea. In this paper we are interested in assessing the overheads of running various benchmarks on bare metal as well as under para-virtualization. The idea is to see what the overheads of para-virtualization are, and also to look at the overheads of turning on monitoring and logging. The knowledge gained from assessing various benchmarks on these different systems will help a range of users understand the use of virtualization systems.

In this paper we assess the overheads of using Xen, VMware, KVM and Citrix (see Table 1). These virtualization systems are used extensively by cloud users. We use various Netlib benchmarks, which have been developed by the University of Tennessee at Knoxville (UTK) and Oak Ridge National Laboratory (ORNL). To assess these virtualization systems, we run the benchmarks on bare metal, then under para-virtualization, and finally with monitoring and logging turned on. The latter is important because users are interested in the Service Level Agreements (SLAs) used by cloud providers, and logging is a means of assessing the services bought and used from commercial providers. We assess the virtualization systems on three different platforms: the Thamesblue supercomputer, the Hactar cluster and an IBM JS20 blade server (see Table 2), all of which are servers available at the University of Reading.

A functional virtualization system is multi-layered and is driven by its privileged components. Virtualization systems can host multiple guest operating systems, each running in its own domain, and the system schedules virtual CPUs and memory within each Virtual Machine (VM) to make the best use of the available resources. The guest operating system schedules each application accordingly. Virtualization can be deployed as full virtualization or para-virtualization, and both can be deployed across various virtualized systems. Full virtualization provides a total abstraction of the underlying physical system and creates a new virtual system in which the guest operating systems can run. No modifications are needed in the guest OS or application; that is, the guest OS or application is not aware of the virtualized environment and runs normally. Para-virtualization, by contrast, is OS-assisted virtualization: it requires modification of the guest operating systems that run on the virtual machines, so that they are aware they are running on virtualized rather than bare hardware, and it can provide near-native performance. In para-virtualization, the device drivers in the guest operating system coordinate with the device drivers of the host operating system to reduce the performance overheads. The use of para-virtualization [0] is intended to avoid the bottleneck associated with slow hardware interrupts that exists when full virtualization is employed.
It has been shown [0] that para-virtualization does not impose significant performance overhead in high-performance computing, and this in turn has implications for the use of cloud computing for hosting HPC applications. This “apparent” improvement in virtualization has led us to formulate the hypothesis that certain classes of HPC applications should be able to execute in a cloud environment with minimal performance degradation. To support this hypothesis, it is first necessary to define exactly what is meant by a “class” of application, and second to observe application performance both within a virtual machine and when executing on bare hardware. A further potential complication is the need for cloud service providers to support Service Level Agreements (SLAs), so that system utilisation can be audited.
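As a minimal sketch of the overhead comparison described above (the timings, system names in the dictionary, and run labels below are illustrative placeholders, not measurements from the paper):

```python
# Compare a benchmark's bare-metal runtime with its runtime under
# para-virtualization, and again with monitoring/logging enabled.
def overhead_pct(bare_metal_s, virtualized_s):
    """Relative slowdown of a virtualized run, as a percentage."""
    return 100.0 * (virtualized_s - bare_metal_s) / bare_metal_s

# Placeholder wall-clock times in seconds for one Netlib benchmark.
runs = {
    "Xen": {"bare": 120.4, "virt": 124.9, "virt+logging": 131.2},
    "KVM": {"bare": 120.4, "virt": 127.1, "virt+logging": 134.8},
}

for system, t in runs.items():
    print(f"{system}: virt {overhead_pct(t['bare'], t['virt']):.1f}%, "
          f"with logging {overhead_pct(t['bare'], t['virt+logging']):.1f}%")
```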
Abstract:
This paper reviews nine software packages with particular reference to their GARCH model estimation accuracy when judged against a respected benchmark. We consider the numerical consistency of GARCH and EGARCH estimation and forecasting. Our results have a number of implications for published research and future software development. Finally, we argue that the establishment of benchmarks for other standard non-linear models is long overdue.
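For reference, the two model classes whose estimation accuracy is being benchmarked are, in their standard (1,1) forms (textbook notation, not the paper's own):

```latex
% GARCH(1,1), Bollerslev (1986):
r_t = \mu + \varepsilon_t, \qquad \varepsilon_t = \sigma_t z_t, \qquad
\sigma_t^2 = \omega + \alpha\,\varepsilon_{t-1}^2 + \beta\,\sigma_{t-1}^2 .

% EGARCH(1,1), Nelson (1991), which lets shocks act asymmetrically on volatility:
\ln\sigma_t^2 = \omega + \beta\,\ln\sigma_{t-1}^2
              + \alpha\bigl(\lvert z_{t-1}\rvert - \mathbb{E}\lvert z_{t-1}\rvert\bigr)
              + \gamma\, z_{t-1} .
```

Small numerical differences in the optimizer or in how $\sigma_0^2$ is initialised can shift the estimates of $(\omega, \alpha, \beta, \gamma)$ across packages, which is why a common benchmark matters.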
Abstract:
The modern built environment has become more complex in terms of building types, environmental systems and use profiles. This complexity makes it difficult to optimise the energy design of buildings. In these circumstances, introducing a set of prototype reference buildings, or so-called benchmark buildings, able to represent all or most of the UK building stock may be useful for examining the impact of national energy policies on building energy consumption. This study proposes a set of reference office buildings for England and Wales based on information collected from the Non-Domestic Building Stock (NDBS) project and an intensive review of existing building benchmarks. The proposed benchmark comprises 10 prototypical reference buildings which, in relation to built form and size, represent 95% of office buildings in England and Wales. This benchmark provides a platform for those involved in building energy simulation to evaluate energy-efficiency measures, and for policy-makers to assess the influence of different building energy policies.
Abstract:
The skill of a forecast can be assessed by comparing the relative proximity of both the forecast and a benchmark to the observations. Example benchmarks include climatology or a naïve forecast. Hydrological ensemble prediction systems (HEPS) are currently transforming the hydrological forecasting environment, but in this new field there is little information to guide researchers and operational forecasters on how benchmarks can best be used to evaluate probabilistic forecasts. In this study, it is shown that the calculated forecast skill can vary depending on the benchmark selected, and that the selection of a benchmark for determining forecasting system skill is sensitive to a number of hydrological and system factors. A benchmark intercomparison experiment is then undertaken using the continuous ranked probability score (CRPS), a reference forecasting system and a suite of 23 different methods to derive benchmarks. The benchmarks are assessed within the operational set-up of the European Flood Awareness System (EFAS) to determine those that are ‘toughest to beat’ and so give the most robust discrimination of forecast skill, particularly for the spatial average fields that EFAS relies upon. Evaluating against an observed discharge proxy, the benchmark that has the most utility for EFAS, and that best avoids naïve skill across different hydrological situations, is found to be meteorological persistency. This benchmark uses the latest meteorological observations of precipitation and temperature to drive the hydrological model. Hydrological long-term average benchmarks, which are currently used in EFAS, are very easily beaten by the forecasting system, and their use produces considerable naïve skill. When decomposed into seasons, the advanced meteorological benchmarks, which make use of meteorological observations from the past 20 years at the same calendar date, have the most skill discrimination. They are also good at discriminating skill in low flows and for all catchment sizes. Simpler meteorological benchmarks are particularly useful for high flows. Recommendations for EFAS are to move to routine use of meteorological persistency, an advanced meteorological benchmark and a simple meteorological benchmark in order to provide a robust evaluation of forecast skill. This work provides the first comprehensive evidence on how benchmarks can be used to evaluate skill in probabilistic hydrological forecasts and on which benchmarks are most useful for skill discrimination and the avoidance of naïve skill in a large-scale HEPS. It is recommended that all HEPS use the evidence and methodology provided here to evaluate which benchmarks to employ, so that forecasters can trust their skill evaluation and be confident that their forecasts are indeed better.
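For reference, the CRPS for a forecast CDF $F$ and observation $y$, and the skill score obtained when a forecasting system is measured against a benchmark, are (standard definitions consistent with the study's usage):

```latex
\mathrm{CRPS}(F, y) = \int_{-\infty}^{\infty} \bigl(F(x) - \mathbf{1}\{x \ge y\}\bigr)^{2}\, dx,
\qquad
\mathrm{CRPSS} = 1 - \frac{\overline{\mathrm{CRPS}}_{\text{system}}}
                          {\overline{\mathrm{CRPS}}_{\text{benchmark}}} .
```

A tougher benchmark has a lower CRPS of its own, which shrinks the skill score and so guards against the naïve skill the study warns about; an easily beaten benchmark inflates CRPSS toward 1.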
Abstract:
We aimed to develop site-specific sediment quality guidelines (SQGs) for two estuarine and port zones in Southeastern Brazil (the Santos Estuarine System and the Paranaguá Estuarine System) and three in Southern Spain (the Ria of Huelva, the Bay of Cádiz, and the Bay of Algeciras), and to compare these values against national and traditionally used international benchmark values. Site-specific SQGs were derived from sediment physical-chemical, toxicological, and benthic community data integrated through multivariate analysis. This technique allowed the identification of chemicals of concern and the establishment of effects ranges correlated with the individual concentrations of contaminants at each study site. The results revealed that sediments from the Santos channel, as well as from the inner portions of the SES, are highly polluted (exceeding SQGs-high) by metals, PAHs and PCBs. High pollution by PAHs and some metals was found in the São Vicente channel. In the PES, sediments from the inner portions (near the Ponta do Mix port terminal and the Port of Paranaguá) are highly polluted by metals and PAHs, including one zone inside the limits of an environmental protection area. In the Gulf of Cádiz, SQG exceedances were found in the Ria of Huelva (all analysed metals and PAHs), in the surroundings of the Port of Cádiz (Bay of Cádiz) (metals), and in the Bay of Algeciras (Ni and PAHs). The site-specific SQGs derived in this study are more restrictive than the national SQGs applied in Brazil and Spain, as well as than international guidelines. This finding confirms the importance of developing site-specific SQGs to support the characterisation of sediments and dredged material. The use of the same methodology to derive SQGs in Brazilian and Spanish port zones confirmed the applicability of this technique at an international scope and provided a harmonised methodology for site-specific SQG derivation.
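As a hedged illustration of one common way to turn matched chemistry/effects data into effects-range benchmarks (in the spirit of percentile-based ERL/ERM guidelines; the paper's own multivariate derivation is more involved and is not reproduced here):

```python
# Sketch: SQG-low / SQG-high thresholds as percentiles of a contaminant's
# concentration in effect-associated samples. All data below are made up.
import numpy as np

def effects_range(concentrations, effect_observed, low_pct=10, high_pct=50):
    """Percentiles of one chemical's concentration in samples with effects.

    concentrations  : sediment concentrations for one chemical
    effect_observed : booleans, True where toxicity/benthic effects occurred
    Returns an (SQG-low, SQG-high) style pair of thresholds.
    """
    conc = np.asarray(concentrations, dtype=float)
    hits = conc[np.asarray(effect_observed, dtype=bool)]
    return np.percentile(hits, low_pct), np.percentile(hits, high_pct)

# Illustrative use with fabricated Ni concentrations (mg/kg):
sqg_low, sqg_high = effects_range(
    concentrations=[12.0, 48.0, 95.0, 150.0, 210.0, 330.0],
    effect_observed=[False, False, True, True, True, True],
)
```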
Abstract:
In this work we studied the efficiency of the benchmarks used in the asset management industry. In chapter 2 we analyzed the efficiency of the benchmarks used for government bond markets. We found that for emerging-market bonds an equally weighted index for the country weights is probably more suitable, because it guarantees maximum diversification of country risk, whereas for the Eurozone government bond market a GDP-weighted index is better, because the most important concern is to avoid giving a higher weight to highly indebted countries. In chapter 3 we analyzed the efficiency of a derivatives index, rather than a cash index, for investing in the European corporate bond market. We can state that the two indexes are similar in terms of returns, but that the derivatives index is less risky: it has lower volatility, has skewness and kurtosis values closer to those of a normal distribution, and is a more liquid instrument, as its autocorrelation is not significant. Chapter 4 analyzes the impact of fallen angels on corporate bond portfolios. Our analysis investigated the impact of the month-end rebalancing of the ML Emu Non Financial Corporate Index on the exit of downgraded bonds (the event). We conclude that a flexible approach to the month-end rebalancing is better, in order to avoid a loss of value due to the benchmark construction rules. In chapter 5 we compared the equally weighted and capitalization-weighted methods for the European equity market. The benefit that results from reweighting the portfolio into equal weights can be attributed to the fact that EW portfolios implicitly follow a contrarian investment strategy, because they mechanically rebalance away from stocks that increase in price.
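A minimal sketch of that last comparison (the three price paths below are illustrative, not data from the thesis): rebalancing to equal weights each period mechanically sells relative winners and buys relative losers, whereas a buy-and-hold basket lets winners' weights drift upward as in a capitalization-weighted index.

```python
import numpy as np

prices = np.array([          # rows = periods, columns = stocks
    [100.0, 100.0, 100.0],
    [110.0,  95.0, 100.0],
    [121.0,  90.0, 102.0],
])
rets = prices[1:] / prices[:-1] - 1.0   # per-period simple returns

# Equal-weight: reset to 1/N at the start of every period (contrarian rule).
ew = (rets.mean(axis=1) + 1.0).prod() - 1.0

# Cap-weight proxy: buy-and-hold of an initially equal basket, so weights
# drift with prices as in a capitalization-weighted index.
cw = (prices[-1] / prices[0]).mean() - 1.0

print(f"EW rebalanced return: {ew:.4f}, buy-and-hold (drifting) return: {cw:.4f}")
```

Whether the rebalanced or the drifting portfolio wins depends on whether returns mean-revert or trend over the sample, which is exactly the contrarian effect the chapter attributes the EW benefit to.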