580 results for Benchmarks


Relevance: 10.00%

Publisher:

Abstract:

Palaeodata in synthesis form are needed as benchmarks for the Palaeoclimate Modelling Intercomparison Project (PMIP). Advances since the last synthesis of terrestrial palaeodata from the last glacial maximum (LGM) call for a new evaluation, especially of data from the tropics. Here pollen, plant-macrofossil, lake-level, noble-gas (from groundwater) and δ¹⁸O (from speleothems) data are compiled for 18±2 ¹⁴C ka, 32°N–33°S. The reliability of the data was evaluated using explicit criteria, and some types of data were re-analysed using consistent methods in order to derive a mutually consistent set of palaeoclimate estimates: mean temperature of the coldest month (MTCO), mean annual temperature (MAT), plant-available moisture (PAM) and runoff (P–E). MTCO anomalies from plant data range from −1 to −2 K near sea level in Indonesia and the S Pacific, through −6 to −8 K at many high-elevation sites, to −8 to −15 K in S China and the SE USA. MAT anomalies from groundwater or speleothems seem more uniform (−4 to −6 K), but the data are as yet sparse; a clear divergence between MAT and cold-month estimates from the same region is seen only in the SE USA, where cold-air advection is expected to have enhanced cooling in winter. Regression of all cold-month anomalies against site elevation yielded an estimated average cooling of −2.5 to −3 K at modern sea level, increasing to ≈−6 K by 3000 m. However, Neotropical sites showed larger-than-average sea-level cooling (−5 to −6 K) and a non-significant elevation effect, whereas W and S Pacific sites showed much less sea-level cooling (−1 K) and a stronger elevation effect. These findings support the inference that tropical sea-surface temperatures (SSTs) were lower than the CLIMAP estimates, but they limit the plausible average tropical sea-surface cooling, and they support the existence of CLIMAP-like geographic patterns in SST anomalies. Trends of PAM and lake levels indicate wet LGM conditions in the W USA and at the highest elevations, with generally dry conditions elsewhere. These results suggest a colder-than-present ocean surface producing a weaker hydrological cycle, more arid continents, and arguably steeper-than-present terrestrial lapse rates. Such linkages are supported by recent observations on freezing-level height and tropical SSTs; moreover, simulations of "greenhouse" and LGM climates point to several possible feedback processes by which low-level temperature anomalies might be amplified aloft.
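As a rough worked illustration of the elevation regression (a linear fit is assumed here; the endpoint values are those quoted above):

\[
\Delta T_{\mathrm{cold}}(z) \approx a + b\,z, \qquad a \approx -2.75\ \mathrm{K}, \qquad b \approx \frac{-6\ \mathrm{K} - (-2.75\ \mathrm{K})}{3000\ \mathrm{m}} \approx -1.1\ \mathrm{K\,km^{-1}}
\]

That is, the fitted cooling steepens by roughly 1.1 K per kilometre of elevation, consistent with the steeper-than-present terrestrial lapse rates invoked in the closing sentences.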

Relevance: 10.00%

Publisher:

Abstract:

We present and examine a multi-sensor global compilation of mid-Holocene (MH) sea surface temperatures (SST), based on Mg/Ca and alkenone palaeothermometry and on reconstructions obtained from planktonic foraminifera and organic-walled dinoflagellate cyst census counts. We assess the uncertainties originating from the different methodologies and evaluate the potential of MH SST reconstructions as a benchmark for climate-model simulations. Comparison between analytical approaches (time frame, baseline climate) shows that the choice of time window for the MH has a negligible effect on the reconstructed SST pattern, whereas the choice of baseline climate affects both the magnitude and the spatial pattern of the reconstructed SSTs. Comparison of the SST reconstructions made using different sensors shows significant discrepancies at a regional scale, with uncertainties often exceeding the reconstructed SST anomaly; apparent patterns in SST may largely reflect the use of different sensors in different regions. Overall, the uncertainties associated with the SST reconstructions are generally larger than the MH anomalies themselves, so the SST data currently available cannot serve as a target for benchmarking model simulations. Further evaluation of potential subsurface and/or seasonal artifacts that may obscure the MH SST reconstructions is urgently needed to provide reliable benchmarks for model evaluation.
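A minimal sketch of the kind of signal-to-noise screening implied here, flagging records whose reconstructed anomaly is smaller than its stated uncertainty (the arrays are hypothetical illustration values, not the compilation's data):

```python
import numpy as np

# Hypothetical MH SST anomalies (K) and 1-sigma uncertainties per site
anomaly = np.array([0.4, -0.8, 1.2, 0.1, -0.3])
sigma = np.array([0.6, 0.5, 0.9, 0.7, 0.4])

# A record is usable as a model benchmark only if the signal exceeds the noise
usable = np.abs(anomaly) > sigma
print(f"{usable.sum()} of {len(anomaly)} records have |anomaly| > uncertainty")
```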

Relevance: 10.00%

Publisher:

Abstract:

The complexity of current and emerging architectures provides users with options about how best to use the available resources, but makes predicting performance challenging. In this work a benchmark-driven model is developed for a simple shallow-water code on a Cray XE6 system, to explore how deployment choices such as domain decomposition and core affinity affect performance. The resource sharing present in modern multi-core architectures adds various levels of heterogeneity to the system. Shared resources often include cache, memory, network controllers and, in some cases, floating-point units (as in the AMD Bulldozer), meaning that access time depends on the mapping of application tasks and on each core's location within the system. Heterogeneity increases further with the use of hardware accelerators such as GPUs and the Intel Xeon Phi, where many specialist cores are attached to general-purpose cores. This trend towards shared resources and non-uniform cores is expected to continue into the exascale era. The complexity of these systems means that various runtime scenarios are possible, and it has been found that under-populating nodes, altering the domain decomposition and using non-standard task-to-core mappings can dramatically alter performance. Discovering this, however, is often a process of trial and error. To better inform this process, a performance model was developed for a simple regular grid-based kernel code, shallow. The code comprises two distinct types of work: loop-based array updates and nearest-neighbour halo exchanges. Separate performance models were developed for each part, both based on a similar methodology. Application-specific benchmarks were run to measure performance for different problem sizes under different execution scenarios. These results were then fed into a performance model that derives resource usage for a given deployment scenario, interpolating between results as necessary.
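A minimal sketch of the benchmark-plus-interpolation idea described above, combining the two model components; the timing values, problem sizes and the linear interpolation are assumptions for illustration, not the paper's measured data:

```python
import numpy as np

# Benchmarked per-timestep costs (seconds) at a few problem sizes, under
# one deployment scenario (hypothetical values).
sizes = np.array([128, 256, 512, 1024])          # local subdomain edge length
t_compute = np.array([0.02, 0.09, 0.38, 1.55])   # loop-based array updates
t_halo = np.array([0.004, 0.008, 0.016, 0.031])  # nearest-neighbour exchange

def predict_runtime(n, steps=1000):
    """Interpolate per-step cost between benchmarked sizes and sum the
    two model components (compute + halo exchange)."""
    per_step = np.interp(n, sizes, t_compute) + np.interp(n, sizes, t_halo)
    return steps * per_step

print(f"predicted runtime for n=700: {predict_runtime(700):.1f} s")
```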

Relevance: 10.00%

Publisher:

Abstract:

The Land surface Processes and eXchanges (LPX) model is a fire-enabled dynamic global vegetation model that performs well globally but has problems representing fire regimes and vegetation mix in savannas. Here we focus on improving the fire module. To improve the representation of ignitions, we introduced a treatment of lightning that allows the fraction of ground strikes to vary spatially and seasonally, realistically partitions the strike distribution between wet and dry days, and varies the number of dry days with strikes. Fuel availability and moisture content were improved by implementing decomposition rates specific to individual plant functional types (PFTs) and litter classes, and litter drying rates driven by atmospheric water content. To improve water extraction by grasses, we implemented realistic plant-specific treatments of deep roots. To improve fire responses, we introduced adaptive bark thickness and post-fire resprouting for tropical and temperate broadleaf trees. All improvements are based on extensive analyses of relevant observational data sets. We test model performance for Australia, first evaluating the parameterisations separately and then measuring overall behaviour against standard benchmarks. Changes to the lightning parameterisation produce a more realistic simulation of fires in southeastern and central Australia. Implementation of PFT-specific decomposition rates enhances performance in central Australia. Changes in fuel drying improve simulated fire in northern Australia, while changes in rooting depth produce a more realistic simulation of fuel availability and structure in central and northern Australia. The introduction of adaptive bark thickness and resprouting produces more realistic fire regimes in Australian savannas. We also show that the model simulates biomass recovery rates consistent with observations from several regions of the world characterised by resprouting vegetation. The new model (LPX-Mv1) improves the simulation of observed vegetation composition and mean annual burnt area by 33% and 18% respectively, compared to LPX.
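A minimal sketch of the wet/dry-day partitioning idea in the lightning treatment; the function, parameter names and all numeric values are hypothetical illustrations, not the LPX-Mv1 parameterisation:

```python
def dry_day_ignitions(total_strikes, cg_fraction, wet_days, days_in_month=30,
                      dry_day_weight=3.0):
    """Split cloud-to-ground strikes between wet and dry days, weighting
    dry days more heavily; only dry-day strikes are counted as potential
    ignitions. Illustrative values only."""
    dry_days = days_in_month - wet_days
    ground = total_strikes * cg_fraction   # fraction of strikes reaching ground
    w_dry = dry_day_weight * dry_days
    w_wet = 1.0 * wet_days
    return ground * w_dry / (w_dry + w_wet)

print(dry_day_ignitions(total_strikes=200.0, cg_fraction=0.25, wet_days=12))
```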

Relevance: 10.00%

Publisher:

Abstract:

This paper uses a novel numerical optimization technique, robust optimization, that is well suited to solving the asset-liability management (ALM) problem for pension schemes. It requires the estimation of fewer stochastic parameters, reduces estimation risk and adopts a prudent approach to asset allocation. This study is the first to apply it to a real-world pension scheme, and the first ALM model of a pension scheme to maximise the Sharpe ratio. We disaggregate pension liabilities into three components (active members, deferred members and pensioners) and transform the optimal asset allocation into the scheme's projected contribution rate. The robust optimization model is extended to include liabilities and used to derive optimal investment policies for the Universities Superannuation Scheme (USS), benchmarked against the Sharpe and Tint, Bayes-Stein and Black-Litterman models as well as the actual USS investment decisions. Over a 144-month out-of-sample period, robust optimization is superior to the four benchmarks across 20 performance criteria and has a remarkably stable, essentially fixed-mix, asset allocation. These conclusions are supported by six robustness checks.
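A minimal sketch of the worst-case-mean idea behind robust Sharpe-ratio maximisation; it omits the paper's liability side, and all inputs, the box-shaped uncertainty set and the long-only constraint are assumptions for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical inputs: estimated mean excess returns, their estimation
# errors, and a covariance matrix for three asset classes.
mu_hat = np.array([0.05, 0.07, 0.03])
err = np.array([0.02, 0.03, 0.01])    # uncertainty in each mean estimate
cov = np.diag([0.04, 0.09, 0.01])
kappa = 1.0                            # size of the uncertainty set

def neg_worst_case_sharpe(w):
    # Evaluate the Sharpe ratio under the worst-case mean within the box
    # mu_hat +/- kappa*err (weights are long-only here).
    worst_mu = mu_hat - kappa * err * np.sign(w)
    return -(w @ worst_mu) / np.sqrt(w @ cov @ w)

w0 = np.ones(3) / 3
res = minimize(neg_worst_case_sharpe, w0, method="SLSQP",
               bounds=[(0, 1)] * 3,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1}])
print("robust weights:", res.x.round(3))
```

Because the objective penalises assets with large estimation error, the resulting allocation tends to be more conservative and more stable over time, which is consistent with the fixed-mix behaviour reported above.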

Relevance: 10.00%

Publisher:

Abstract:

The evaluation of forecast performance plays a central role both in the interpretation and use of forecast systems and in their development. Different evaluation measures (scores) are available, often quantifying different characteristics of forecast performance. The properties of several proper scores for probabilistic forecast evaluation are contrasted and then used to interpret decadal probability hindcasts of global mean temperature. The Continuous Ranked Probability Score (CRPS), Proper Linear (PL) score, and IJ Good’s logarithmic score (also referred to as Ignorance) are compared; although information from all three may be useful, the logarithmic score has an immediate interpretation and is not insensitive to forecast busts. Neither CRPS nor PL is local; this is shown to produce counter intuitive evaluations by CRPS. Benchmark forecasts from empirical models like Dynamic Climatology place the scores in context. Comparing scores for forecast systems based on physical models (in this case HadCM3, from the CMIP5 decadal archive) against such benchmarks is more informative than internal comparison systems based on similar physical simulation models with each other. It is shown that a forecast system based on HadCM3 out performs Dynamic Climatology in decadal global mean temperature hindcasts; Dynamic Climatology previously outperformed a forecast system based upon HadGEM2 and reasons for these results are suggested. Forecasts of aggregate data (5-year means of global mean temperature) are, of course, narrower than forecasts of annual averages due to the suppression of variance; while the average “distance” between the forecasts and a target may be expected to decrease, little if any discernible improvement in probabilistic skill is achieved.
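The three scores have simple closed forms for a Gaussian forecast, which makes their differing sensitivity to busts easy to see. A sketch using the standard formulas (the Gaussian forecast and outcome values are illustrative, not the hindcast data):

```python
import numpy as np
from scipy.stats import norm

def ignorance(y, mu, sigma):
    """IJ Good's logarithmic score in bits; lower is better."""
    return -np.log2(norm.pdf(y, mu, sigma))

def crps_gaussian(y, mu, sigma):
    """Closed-form CRPS for a Gaussian forecast; lower is better."""
    z = (y - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z)
                    - 1 / np.sqrt(np.pi))

def proper_linear(y, mu, sigma):
    """Proper Linear (quadratic) score for a Gaussian; higher is better."""
    return 2 * norm.pdf(y, mu, sigma) - 1 / (2 * sigma * np.sqrt(np.pi))

# A forecast bust: the outcome lands far in the tail. Ignorance grows
# rapidly while CRPS grows only linearly in the miss distance.
for y in (0.1, 5.0):
    print(y, ignorance(y, 0, 1), crps_gaussian(y, 0, 1), proper_linear(y, 0, 1))
```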

Relevance: 10.00%

Publisher:

Abstract:

We present a selection of methodologies for using the palaeo-climate model component of the Coupled Model Intercomparison Project Phase 5 (CMIP5) to attempt to constrain future climate projections made with the same models. The constraints arise from measures of skill in hindcasting palaeo-climate changes from the present over three periods: the Last Glacial Maximum (LGM; 21,000 years before present, 21 ka), the mid-Holocene (MH; 6 ka) and the Last Millennium (LM; 850–1850 CE). The skill measures may be used to validate robust patterns of climate change across scenarios or to distinguish between models that have differing outcomes in future scenarios. We find that the multi-model ensemble of palaeo-simulations is adequate for addressing at least some of these issues. For example, selected benchmarks for the LGM and MH are correlated with the rank of future projections of precipitation/temperature or sea-ice extent, indicating that models producing the best agreement with palaeo-climate information give demonstrably different future results from the rest of the models. We also explore cases where comparisons are strongly dependent on uncertain forcing time series or show important non-stationarity, making direct inferences for the future problematic. Overall, we demonstrate that there is strong potential for the palaeo-climate simulations to help inform future projections, and we urge all modelling groups to complete this subset of the CMIP5 runs.
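A minimal sketch of the skill-versus-projection correlation described above; the per-model skill scores and projection values are hypothetical, and rank correlation is one reasonable choice among several:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-model values: hindcast skill against an LGM benchmark
# and the magnitude of each model's future precipitation change.
lgm_skill = np.array([0.82, 0.61, 0.74, 0.55, 0.90, 0.68])
future_dP = np.array([0.12, 0.31, 0.18, 0.35, 0.09, 0.22])

# A significant rank correlation between palaeo skill and future response
# suggests the benchmark can discriminate between projections.
rho, p = spearmanr(lgm_skill, future_dP)
print(f"Spearman rho = {rho:.2f} (p = {p:.2f})")
```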

Relevance: 10.00%

Publisher:

Abstract:

This thesis examines three different but related problems in the broad area of portfolio management for long-term institutional investors, focusing mainly on pension funds. The first idea (Chapter 3) is the application of a novel numerical technique, robust optimization, to a real-world pension scheme (the Universities Superannuation Scheme, USS) for the first time. The corresponding empirical results are supported by many robustness checks and several benchmarks, such as the Bayes-Stein and Black-Litterman models (also applied for the first time in a pension ALM framework), the Sharpe and Tint model, and the actual USS asset allocations. The second idea, presented in Chapter 4, is an investigation of whether the selection of the portfolio construction strategy matters in the SRI industry, an issue of great importance for long-term investors. This study applies a variety of optimal and naïve portfolio diversification techniques to the same SRI-screened universe and gives some answers to the question of which portfolio strategies tend to create superior SRI portfolios. Finally, the third idea (Chapter 5) compares the performance of a real-world pension scheme (USS) before and after the recent major changes in the pension rules under different dynamic asset allocation strategies and the fixed-mix portfolio approach, and quantifies the redistributive effects between various stakeholders. Although this study deals with a specific pension scheme, the methodology can be applied by other major pension schemes in countries such as the UK and USA that have changed their rules.

Relevance: 10.00%

Publisher:

Abstract:

Identifying the correct sense of a word in context is crucial for many tasks in natural language processing (machine translation is an example). State-of-the-art methods for Word Sense Disambiguation (WSD) build models using hand-crafted features that usually capture only shallow linguistic information. Complex background knowledge, such as semantic relationships, is typically either not used or used in a specialised manner, owing to the limitations of the feature-based modelling techniques employed. On the other hand, empirical results from the use of Inductive Logic Programming (ILP) systems have repeatedly shown that they can exploit diverse sources of background knowledge when constructing models. In this paper, we investigate whether this ability of ILP systems can be used to improve the predictive accuracy of models for WSD. Specifically, we examine the use of a general-purpose ILP system to construct a set of features using semantic, syntactic and lexical information. This feature set is then used by a common modelling technique in the field (a support vector machine) to construct a classifier for predicting the sense of a word. We examine one-shot and incremental approaches to feature-set construction, applied to monolingual and bilingual WSD tasks. The monolingual tasks use 32 verbs and 85 verbs and nouns (in English) from the SENSEVAL-3 and SemEval-2007 benchmarks, while the bilingual WSD task consists of 7 highly ambiguous verbs in translating from English to Portuguese. The results are encouraging: the ILP-assisted models show substantial improvements over those that use only shallow features. In addition, incremental feature-set construction appears to identify smaller and better sets of features. Taken together, the results suggest that the use of ILP with diverse sources of background knowledge provides a way to make substantial progress in the field of WSD.
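A minimal sketch of the ILP-assisted pipeline: each ILP-induced rule becomes a boolean feature (does the rule cover this occurrence?), and the resulting feature matrix feeds an SVM. The rules and examples below are hypothetical toys; a real pipeline would obtain the rules from an ILP system:

```python
import numpy as np
from sklearn.svm import SVC

def featurise(example, rules):
    """One boolean feature per induced rule: 1 if the rule covers the example."""
    return [int(rule(example)) for rule in rules]

# Hypothetical stand-ins for ILP-induced rules over semantic/syntactic/lexical
# background knowledge.
rules = [
    lambda ex: "river" in ex["context"],
    lambda ex: ex["pos_next"] == "NN",
    lambda ex: "money" in ex["context"],
]

examples = [
    {"context": ["river", "bank", "fishing"], "pos_next": "NN", "sense": 0},
    {"context": ["bank", "money", "loan"], "pos_next": "VB", "sense": 1},
    {"context": ["bank", "deposit", "money"], "pos_next": "NN", "sense": 1},
    {"context": ["river", "bank", "mud"], "pos_next": "NN", "sense": 0},
]

X = np.array([featurise(ex, rules) for ex in examples])
y = np.array([ex["sense"] for ex in examples])
clf = SVC(kernel="linear").fit(X, y)   # the "common modelling technique"
print(clf.predict(X))
```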

Relevance: 10.00%

Publisher:

Abstract:

Purple acid phosphatases (PAPs) are a group of metallohydrolases that contain a dinuclear Fe(III)M(II) centre (M(II) = Fe, Mn, Zn) in the active site and are able to catalyse the hydrolysis of a variety of phosphoric acid esters. The dinuclear complex [(H2O)Fe(III)(μ-OH)Zn(II)(L-H)](ClO4)2 (2), with the ligand 2-[N-bis(2-pyridylmethyl)aminomethyl]-4-methyl-6-[N-(2-pyridylmethyl)(2-hydroxybenzyl)aminomethyl]phenol (H2L-H), has recently been prepared and is found to closely mimic the coordination environment of the Fe(III)Zn(II) active site found in red kidney bean PAP (Neves et al., J. Am. Chem. Soc. 2007, 129, 7486). The biomimetic shows significant catalytic activity in hydrolytic reactions. Using a variety of structural, spectroscopic and computational techniques, the electronic structure of the Fe(III) centre of this biomimetic complex was determined. In the solid state the electronic ground state reflects the rhombically distorted Fe(III)N2O4 octahedron, with a dominant tetragonal compression aligned along the μ-OH-Fe-O(phenolate) direction. To probe the role of the Fe-O(phenolate) bond, the phenolate moiety was modified to contain electron-donating or -withdrawing groups (-CH3, -H, -Br, -NO2) in the 5-position. The effects of the substituents on the electronic properties of the biomimetic complexes were studied with a range of experimental and computational techniques. This study establishes benchmarks against accurate crystallographic structural information using spectroscopic techniques that are not restricted to single crystals. Kinetic studies of the hydrolysis reaction revealed that phosphodiesterase activity increases in the order -NO2 < -Br < -H < -CH3 when bis(2,4-dinitrophenyl)phosphate (2,4-bdnpp) was used as substrate, and a linear free energy relationship is found when log(kcat/k0) is plotted against the Hammett parameter σ. However, nuclease activity measurements on the cleavage of double-stranded DNA showed that the complexes containing the electron-withdrawing -NO2 and electron-donating -CH3 groups are the most active, while the cytotoxic activity of the biomimetics against leukemia and lung tumour cells is highest for complexes with electron-donating groups.
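The linear free energy relationship referred to is the Hammett equation, here in the form

\[
\log_{10}\!\left(\frac{k_{\mathrm{cat}}}{k_0}\right) = \rho\,\sigma
\]

where σ is the substituent's Hammett parameter and the fitted slope ρ is the reaction constant. The observed rate order -NO2 < -Br < -H < -CH3 implies a negative ρ, i.e. electron-donating substituents on the phenolate accelerate the hydrolysis.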

Relevance: 10.00%

Publisher:

Abstract:

The assessment of routing protocols for mobile wireless networks is a difficult task, because of the networks' dynamic behavior and the absence of benchmarks. However, some of these networks, such as intermittent wireless sensor networks, periodic or cyclic networks, and some delay-tolerant networks (DTNs), have more predictable dynamics, as the temporal variations in the network topology can be considered deterministic, which may make them easier to study. Recently, a graph-theoretic model, evolving graphs, was proposed to help capture the dynamic behavior of such networks, with a view to constructing least-cost routing and other algorithms. The algorithms and insights obtained through this model are theoretically very efficient and intriguing. However, there has been no study of how such theoretical results carry over to practical situations. The objective of our work is therefore to analyze the applicability of evolving graph theory to the construction of efficient routing protocols in realistic scenarios. In this paper, we use the NS2 network simulator to first implement an evolving-graph-based routing protocol, and then to use it as a benchmark when comparing the four major ad hoc routing protocols (AODV, DSR, OLSR and DSDV). Interestingly, our experiments show that evolving graphs have the potential to be an effective and powerful tool in the development and analysis of algorithms for dynamic networks, at least those with predictable dynamics. In order to make this model widely applicable, however, some practical issues, such as adaptive algorithms, still have to be addressed and incorporated into the model. We also discuss such issues in this paper, drawing on our experience.
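A minimal sketch of the core evolving-graph routing computation, a foremost (earliest-arrival) journey search over edges that are only available at scheduled time steps; the schedule representation and unit traversal time are assumptions for illustration, not the paper's NS2 implementation:

```python
import heapq

def earliest_arrival(schedule, source, target, t0=0):
    """Foremost-journey search: each edge carries the sorted list of time
    steps at which it is available; traversing an edge takes one step."""
    best = {source: t0}
    heap = [(t0, source)]
    while heap:
        t, u = heapq.heappop(heap)
        if u == target:
            return t
        if t > best.get(u, float("inf")):
            continue  # stale queue entry
        for v, times in schedule.get(u, {}).items():
            # earliest availability of edge (u, v) at or after time t
            nxt = next((s for s in times if s >= t), None)
            if nxt is not None and nxt + 1 < best.get(v, float("inf")):
                best[v] = nxt + 1
                heapq.heappush(heap, (nxt + 1, v))
    return None  # target unreachable within the schedule

schedule = {"a": {"b": [0, 4], "c": [2]}, "b": {"d": [5]}, "c": {"d": [3]}}
print(earliest_arrival(schedule, "a", "d"))  # 4: a->c at t=2, c->d at t=3
```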

Relevance: 10.00%

Publisher:

Abstract:

This study contributes a rigorous diagnostic assessment of state-of-the-art multiobjective evolutionary algorithms (MOEAs) and highlights key advances that the water resources field can exploit to better discover the critical tradeoffs constraining our systems. It provides the most comprehensive diagnostic assessment of MOEAs for water resources to date, exploiting more than 100,000 MOEA runs and trillions of design evaluations. The diagnostic assessment measures the effectiveness, efficiency, reliability and controllability of ten benchmark MOEAs on a representative suite of water resources applications addressing rainfall-runoff calibration, long-term groundwater monitoring (LTM) and risk-based water supply portfolio planning. The suite of problems encompasses a range of challenging properties, including (1) many-objective formulations with four or more objectives, (2) multi-modality (or false optima), (3) nonlinearity, (4) discreteness, (5) severe constraints, (6) stochastic objectives, and (7) non-separability (also called epistasis). The applications are representative of the dominant problem classes that have shaped the history of MOEAs in water resources and that will be dominant foci in the future. Recommendations are provided for which modern MOEAs should serve as tools and benchmarks in the future water resources literature.
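A minimal sketch of how two of the diagnostic metrics might be computed from run results; the hypervolume values, attainment threshold and the specific definitions used are assumptions for illustration, not the study's protocol:

```python
import numpy as np

# Hypothetical final hypervolume values from repeated runs of one MOEA
# (rows: random-seed trials; columns: different parameterisations).
hv = np.array([[0.91, 0.84, 0.62],
               [0.93, 0.79, 0.58],
               [0.90, 0.88, 0.70],
               [0.89, 0.81, 0.55]])
target = 0.85  # attainment threshold (fraction of best-known hypervolume)

reliability = (hv >= target).mean(axis=0)  # success rate per parameterisation
controllability = (hv >= target).mean()    # success rate over the whole sweep
print("reliability by setting:", reliability)
print("controllability:", controllability)
```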

Relevance: 10.00%

Publisher:

Abstract:

This paper describes the formulation of a Multi-objective Pipe Smoothing Genetic Algorithm (MOPSGA) and its application to the least-cost water distribution network design problem. Evolutionary algorithms have been widely utilised for the optimisation of both theoretical and real-world non-linear optimisation problems, including water system design and maintenance problems. In this work we present a pipe-smoothing-based approach to the creation and mutation of chromosomes which utilises engineering expertise with a view to increasing the performance of the algorithm whilst promoting engineering feasibility within the population of solutions. MOPSGA is based upon the standard Non-dominated Sorting Genetic Algorithm-II (NSGA-II) and incorporates a modified population initialiser and mutation operator which directly target elements of a network with the aim of increasing network smoothness (in terms of progression from one diameter to the next), using network element awareness and an elementary heuristic. The pipe smoothing heuristic used in this algorithm is based upon a fundamental principle employed by water system engineers when designing water distribution pipe networks: the diameter of any pipe is never greater than the sum of the diameters of the pipes directly upstream, resulting in a transition from large to small diameters from the source to the extremities of the network. MOPSGA is assessed on a number of water distribution network benchmarks from the literature, including some large-scale, real-world-based systems. The performance of MOPSGA is directly compared to that of NSGA-II with regard to solution quality, engineering feasibility (network smoothness) and computational efficiency. MOPSGA is shown to promote both engineering and hydraulic feasibility whilst attaining good infrastructure costs compared to NSGA-II.
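A minimal sketch of the smoothness rule stated above as a feasibility check; the network representation (a map from each pipe to its directly upstream pipes) and all diameters are hypothetical, and MOPSGA's initialiser and mutation operator would use such a check to guide repairs rather than merely test it:

```python
def is_smooth(network, diameters):
    """Engineers' rule: no pipe's diameter may exceed the sum of the
    diameters of the pipes directly upstream of it. Pipes with no
    upstream entry are source pipes and are exempt."""
    for pipe, upstream in network.items():
        if upstream and diameters[pipe] > sum(diameters[u] for u in upstream):
            return False
    return True

network = {"main": [], "branch1": ["main"], "branch2": ["main"],
           "spur": ["branch1", "branch2"]}
diameters = {"main": 500, "branch1": 300, "branch2": 250, "spur": 400}
print(is_smooth(network, diameters))  # True: 400 <= 300 + 250
```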

Relevance: 10.00%

Publisher:

Abstract:

This paper investigates the impact of price limits on the Brazilian futures markets using high-frequency data. The aim is to identify whether there is a cool-off or a magnet effect. For that purpose, we examine a tick-by-tick data set that includes all contracts on the São Paulo stock index futures traded on the Brazilian Mercantile and Futures Exchange from January 1997 to December 1999. Our main finding is that price limits drive prices back as they approach the lower limit. There is a strong cool-off effect of the lower limit on the conditional mean, whereas the upper limit seems to entail a weak magnet effect on the conditional variance. We then build a trading strategy that accounts for the cool-off effect so as to demonstrate that the latter has not only statistical but also economic significance. The resulting Sharpe ratio is indeed far superior to the buy-and-hold benchmarks we consider.
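A minimal sketch of the Sharpe-ratio comparison used to establish economic significance; the return series here are simulated placeholders, not the paper's strategy returns:

```python
import numpy as np

def sharpe(returns, periods_per_year=252):
    """Annualised Sharpe ratio of a return series (risk-free rate omitted)."""
    return np.sqrt(periods_per_year) * returns.mean() / returns.std(ddof=1)

rng = np.random.default_rng(0)
buy_and_hold = rng.normal(0.0003, 0.02, 750)  # hypothetical daily returns
# A cool-off strategy would trade only when price nears the lower limit;
# its returns are simulated here for illustration.
strategy = rng.normal(0.0008, 0.015, 750)

print(f"strategy Sharpe: {sharpe(strategy):.2f}, "
      f"buy-and-hold Sharpe: {sharpe(buy_and_hold):.2f}")
```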

Relevance: 10.00%

Publisher:

Abstract:

The Brazilian Traction, Light and Power Company ("Light"), formed by Canadian entrepreneurs in 1899, operated practically all the infrastructure (trams, electricity, telephones, gas) of the Rio-São Paulo axis for 80 years. The company passed through several political cycles, from its foundation to its nationalisation in 1979. During this 80-year period, the national infrastructure, initially private, gradually passed into the hands of the State. The sector would become private again from the 1990s onwards, completing a private-public-private cycle similar to that observed in more developed countries. Light, the foremost symbol of foreign capital until the 1950s, was initially well received in the country, since its development was symbiotic with industrial development, both cause and consequence of it. From the 1920s onwards, economic and ideological debates grew over the role of private foreign capital in national development, as against the option of the public sector as the main actor. It always remained unclear what Light's profits in Brazil had been, and whether they were excessive, beyond what was reasonable. Another recurring question is the extent to which tariff freezes contributed to the crisis in infrastructure supply. Through research in primary sources, this dissertation seeks to reconstruct the history of Light through the lens of the rate of return on invested capital. The financial history of Light in Brazil was reconstructed, and from it the returns obtained by the company's shareholders were calculated, for various periods and for its 80 years of existence. From the results obtained, and using comparative benchmarks, it was possible to show that: i) contrary to the belief prevailing at the time, the return obtained by the largest foreign investor in Brazil's infrastructure sector in the 20th century was well below the minimum acceptable; and ii) the holding down of tariffs over several decades was in fact decisive for the underdevelopment of the infrastructure sector in Brazil.
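A minimal sketch of the rate-of-return calculation underlying such a study, an internal rate of return on reconstructed shareholder cash flows compared against a benchmark hurdle rate; the cash-flow figures are entirely hypothetical, not Light's actual financial history:

```python
from scipy.optimize import brentq

def irr(cash_flows):
    """Internal rate of return: the discount rate at which the net present
    value of the (annual) cash-flow series is zero."""
    npv = lambda r: sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows))
    return brentq(npv, -0.99, 10.0)

# Hypothetical shareholder flows: capital invested (negative), a stream of
# dividends, and a terminal value at nationalisation.
flows = [-100.0] + [4.0] * 20 + [60.0]
print(f"IRR = {irr(flows):.2%}  (to be compared against a benchmark hurdle rate)")
```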