814 results for swd: Benchmark
Abstract:
Preparing for episodes with risks of anomalous weather a month to a year ahead is an important challenge for governments, non-governmental organisations, and private companies and is dependent on the availability of reliable forecasts. The majority of operational seasonal forecasts are made using process-based dynamical models, which are complex, computationally challenging and prone to biases. Empirical forecast approaches built on statistical models to represent physical processes offer an alternative to dynamical systems and can provide either a benchmark for comparison or independent supplementary forecasts. Here, we present a simple empirical system based on multiple linear regression for producing probabilistic forecasts of seasonal surface air temperature and precipitation across the globe. The global CO2-equivalent concentration is taken as the primary predictor; subsequent predictors, including large-scale modes of variability in the climate system and local-scale information, are selected on the basis of their physical relationship with the predictand. The focus given to the climate change signal as a source of skill and the probabilistic nature of the forecasts produced constitute a novel approach to global empirical prediction. Hindcasts for the period 1961–2013 are validated against observations using deterministic (correlation of seasonal means) and probabilistic (continuous ranked probability skill scores) metrics. Good skill is found in many regions, particularly for surface air temperature and most notably in much of Europe during the spring and summer seasons. For precipitation, skill is generally limited to regions with known El Niño–Southern Oscillation (ENSO) teleconnections. The system is used in a quasi-operational framework to generate empirical seasonal forecasts on a monthly basis.
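The forecast construction lends itself to a compact illustration. Below is a minimal sketch, assuming synthetic predictor and predictand series (a CO2-equivalent trend plus an ENSO index, both placeholders): a leave-one-out multiple-linear-regression hindcast whose Gaussian predictive distribution is scored with the continuous ranked probability skill score against a climatological reference. It is not the authors' operational system.

```python
# Illustrative sketch (not the authors' code): leave-one-out MLR hindcast with
# CO2-equivalent as the leading predictor plus an ENSO index, scored by CRPSS.
# All data below are synthetic placeholders.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
years = np.arange(1961, 2014)
co2eq = np.linspace(320.0, 480.0, years.size)        # ppm, synthetic trend
nino34 = rng.standard_normal(years.size)             # synthetic ENSO index
temp = 0.01 * (co2eq - co2eq.mean()) + 0.3 * nino34 + 0.2 * rng.standard_normal(years.size)

X = np.column_stack([np.ones(years.size), co2eq, nino34])

def crps_gaussian(y, mu, sigma):
    """Closed-form CRPS for a Gaussian predictive distribution."""
    z = (y - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))

crps_fc, crps_clim = [], []
for i in range(years.size):                          # leave-one-out hindcast
    train = np.delete(np.arange(years.size), i)
    beta, *_ = np.linalg.lstsq(X[train], temp[train], rcond=None)
    resid = temp[train] - X[train] @ beta
    mu, sigma = X[i] @ beta, resid.std(ddof=X.shape[1])
    crps_fc.append(crps_gaussian(temp[i], mu, sigma))
    crps_clim.append(crps_gaussian(temp[i], temp[train].mean(), temp[train].std(ddof=1)))

crpss = 1.0 - np.sum(crps_fc) / np.sum(crps_clim)    # > 0 means skill over climatology
print(f"CRPSS = {crpss:.2f}")
```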
Abstract:
This thesis is an empirical study of the European Union's Emissions Trading Scheme (EU ETS) and its implications for corporate environmental and financial performance. The novelty of this study includes the extended scope of the data coverage, as most previous studies have examined only the power sector. The use of verified emissions data of ETS-regulated firms as the environmental compliance measure, and as a potential differentiating criterion in the stock-market valuation of EU ETS-exposed firms, is also an original aspect of this study. The study begins in Chapter 2 by introducing background information on the emissions trading system (ETS), focusing on (i) the adoption of ETS as an environmental management instrument and (ii) the adoption of ETS by the European Union as one of its central climate policies. Chapter 3 surveys four databases that provide carbon emissions data in order to determine the most suitable source of data for the later empirical chapters. The first empirical chapter, Chapter 4 of this thesis, investigates the determinants of the emissions compliance performance of EU ETS-exposed firms by constructing the best possible performance ratio from verified emissions data and self-configuring models for a panel regression analysis. Chapter 5 examines the impacts on EU ETS-exposed firms in terms of their equity valuation, using customised portfolios and multi-factor market models. The research design takes the emissions allowance (EUA) price into account as an additional factor, as it has the most direct association with the EU ETS, to control for the exposure. The final empirical chapter, Chapter 6, takes the investigation one step further by specifically testing the degree of ETS exposure facing different sectors with sector-based portfolios and an extended multi-factor market model. The findings from the emissions performance ratio analysis show that the business model of firms significantly influences emissions compliance, as capital intensity has a positive association with an increasing emissions-to-emissions-cap ratio. Furthermore, different sectors show different degrees of sensitivity towards the determining factors. The production factor influences the performance ratio of the Utilities sector, but not the Energy or Materials sectors. The results show that capital intensity has a more profound influence on the Utilities sector than on the Materials sector. With regard to the financial performance impact, ETS-exposed firms as aggregate portfolios experienced a substantial underperformance during the 2001–2004 period, but not in the operating period of 2005–2011. The results of the sector-based portfolios again show the differentiating effect of the EU ETS on sectors: one sector is priced indifferently against its benchmark, three sectors see a constant underperformance, and three sectors have altered outcomes.
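The multi-factor market models of Chapters 5 and 6 extend a standard factor regression with an EUA-price factor. A minimal sketch follows, assuming synthetic monthly return series and placeholder factor definitions (the thesis's exact factor construction is not reproduced); a significantly negative alpha would indicate the kind of underperformance reported for 2001–2004.

```python
# Sketch of an extended multi-factor market model with an EUA-price factor.
# Return series here are synthetic placeholders, not the thesis's data.
import numpy as np

rng = np.random.default_rng(1)
T = 84                                     # months, e.g. 2005-2011
mkt = rng.normal(0.005, 0.04, T)           # market excess return
smb = rng.normal(0.0, 0.02, T)             # size factor
hml = rng.normal(0.0, 0.02, T)             # value factor
eua = rng.normal(0.0, 0.08, T)             # EUA allowance-price return factor
port = 0.9 * mkt + 0.2 * eua + rng.normal(0.0, 0.01, T)  # sector portfolio

X = np.column_stack([np.ones(T), mkt, smb, hml, eua])
beta, *_ = np.linalg.lstsq(X, port, rcond=None)
resid = port - X @ beta
se = np.sqrt(np.diag(np.linalg.inv(X.T @ X)) * resid.var(ddof=X.shape[1]))
for name, b, s in zip(["alpha", "MKT", "SMB", "HML", "EUA"], beta, se):
    print(f"{name:>5}: {b: .4f} (t = {b / s: .2f})")  # alpha != 0 -> mispricing
```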
Abstract:
The pipe sizing of water networks via evolutionary algorithms is of great interest because it allows the selection of alternative economical solutions that meet a set of design requirements. However, available evolutionary methods are numerous, and methodologies to compare the performance of these methods beyond obtaining a minimal solution for a given problem are currently lacking. A methodology to compare algorithms based on an efficiency rate (E) is presented here and applied to the pipe-sizing problem of four medium-sized benchmark networks (Hanoi, New York Tunnel, GoYang and R-9 João Pessoa). E numerically determines the performance of a given algorithm while also considering the quality of the obtained solution and the required computational effort. From the wide range of available evolutionary algorithms, four were selected to implement the methodology: a PseudoGenetic Algorithm (PGA), Particle Swarm Optimization (PSO), a Harmony Search (HS) and a modified Shuffled Frog Leaping Algorithm (SFLA). After more than 500,000 simulations, a statistical analysis was performed based on the specific parameters each algorithm requires to operate, and finally, E was analyzed for each network and algorithm. The efficiency measure indicated that PGA is the most efficient algorithm for problems of greater complexity and that HS is the most efficient algorithm for less complex problems. However, the main contribution of this work is that the proposed efficiency rate provides a neutral strategy to compare optimization algorithms and may be useful in the future to select the most appropriate algorithm for different types of optimization problems.
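The abstract does not give the formula for E, so the sketch below uses a placeholder definition (fraction of the known optimum reached, discounted by a log measure of computational effort) purely to illustrate how such a quality-versus-effort rate can rank runs; the Hanoi cost quoted is the commonly cited best-known value.

```python
# Illustrative efficiency-rate computation for ranking optimizer runs on a
# benchmark network. This placeholder definition (share of the known optimum
# per unit of log-scale effort) is NOT the paper's exact formula for E.
import math

def efficiency_rate(best_cost, known_optimum, evaluations):
    """Quality (share of known optimum) per unit of log computational effort."""
    quality = known_optimum / best_cost        # <= 1; equals 1 at the optimum
    effort = math.log10(max(evaluations, 10))  # proxy for computational effort
    return quality / effort

# Two hypothetical runs on the Hanoi network (best-known cost ~6.081 M$):
print(efficiency_rate(6.081e6, 6.081e6, 50_000))  # optimum, 50k evaluations
print(efficiency_rate(6.195e6, 6.081e6, 5_000))   # near-optimum, 10x cheaper
```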
Abstract:
Biomass burning impacts vegetation dynamics, biogeochemical cycling, atmospheric chemistry, and climate, with sometimes deleterious socio-economic impacts. Under future climate projections it is often expected that the risk of wildfires will increase. Our ability to predict the magnitude and geographic pattern of future fire impacts rests on our ability to model fire regimes, using either well-founded empirical relationships or process-based models with good predictive skill. A large variety of models exist today, and it is still unclear which type of model or degree of complexity is required to model fire adequately at regional to global scales. This is the central question underpinning the creation of the Fire Model Intercomparison Project (FireMIP), an international project to compare and evaluate existing global fire models against benchmark data sets for present-day and historical conditions. In this paper we summarise the current state of the art in fire regime modelling and model evaluation, and outline what lessons may be learned from FireMIP.
Abstract:
New models for estimating bioaccumulation of persistent organic pollutants in the agricultural food chain were developed using recent improvements to plant uptake and cattle transfer models. One model, AgriSim, was based on K_OW regressions of bioaccumulation in plants and cattle, while the other, AgriCom, was a steady-state mechanistic model. The two developed models and the European Union System for the Evaluation of Substances (EUSES), as a benchmark, were applied to four reported food chain (soil/air-grass-cow-milk) scenarios to evaluate the performance of each model simulation against the observed data. The four scenarios considered were as follows: (1) polluted soil and air, (2) polluted soil, (3) highly polluted soil surface and polluted subsurface and (4) polluted soil and air at different mountain elevations. AgriCom reproduced observed milk bioaccumulation well for all four scenarios, as did AgriSim for scenarios 1 and 2, but EUSES only did so for scenario 1. The main causes of the deviation for EUSES and AgriSim were the lack of the soil-air-plant pathway and of the ambient air-plant pathway, respectively. Based on the results, it is recommended that the soil-air-plant and ambient air-plant pathways be calculated separately and that the K_OW regression of the transfer factor to milk used in EUSES be avoided. AgriCom satisfied these recommendations, which led to low residual errors between the simulated and the observed bioaccumulation in the agricultural food chain for the four scenarios considered. It is therefore recommended that this model be incorporated into regulatory exposure assessment tools. The model uncertainty of the three models should be noted, since the simulated concentration in milk from the 5th to the 95th percentile of the uncertainty analysis often varied over two orders of magnitude. Using a measured value of soil organic carbon content was effective in reducing this uncertainty by one order of magnitude.
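AgriSim's regression-based approach can be illustrated with published K_OW correlations. The sketch below uses the Travis and Arms (1988) regressions for the vegetation bioconcentration factor and the milk biotransfer factor as stand-ins; AgriSim's own regressions and the intake scenario are assumptions for illustration only.

```python
# Sketch of the K_OW-regression style of model that AgriSim exemplifies, using
# the Travis & Arms (1988) correlations as stand-ins. The exposure scenario
# below (soil level, grass intake) is an assumed example, not observed data.

def log10_bcf_plant(log_kow):
    """Travis-Arms vegetation bioconcentration factor (dry-weight basis)."""
    return 1.588 - 0.578 * log_kow

def log10_btf_milk(log_kow):
    """Travis-Arms biotransfer factor to milk, log10(day/kg)."""
    return log_kow - 8.1

log_kow = 6.5                                       # a PCB-like compound (assumed)
c_soil = 0.1                                        # mg/kg soil (assumed scenario)
c_grass = c_soil * 10 ** log10_bcf_plant(log_kow)   # mg/kg grass via soil pathway
intake = 16.0 * c_grass                             # cow eats ~16 kg grass per day
c_milk = intake * 10 ** log10_btf_milk(log_kow)     # steady-state mg/kg in milk
print(f"grass: {c_grass:.2e} mg/kg, milk: {c_milk:.2e} mg/kg")
```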
Abstract:
This paper studies the relationship between institutional investor holdings and stock misvaluation in the U.S. between 1980 and 2010. I find that institutional investors overweight overvalued and underweight undervalued stocks in their portfolios, taking the market portfolio as a benchmark. Cross-sectionally, institutional investors hold more overvalued stocks than undervalued stocks. The time-series studies also show that institutional ownership of overvalued portfolios increases with the portfolios' degree of overvaluation. As an investment strategy, institutional investors' riding of stock misvaluation is neither driven by fund flows from individual investors into institutions nor industry-specific. Consistent with the agency-problem explanation, investment companies and independent investment advisors have a higher tendency to ride stock misvaluation than other institutions. There is weak evidence that institutional investors make positive profits by riding stock misvaluation. My findings challenge the models that view individual investors as noise traders and disregard the role of institutional investors in stock market misvaluation.
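The overweighting measure is straightforward to compute: institutional portfolio weight in a stock minus its weight in the market benchmark. A minimal sketch with synthetic holdings and a placeholder misvaluation score follows; a positive mean overweight among overvalued stocks alongside a negative one among undervalued stocks is the "riding" pattern described.

```python
# Sketch of the overweighting measure: institutional weight minus market-
# portfolio weight. Holdings and the misvaluation score are synthetic.
import numpy as np

rng = np.random.default_rng(2)
n = 1000                                         # stocks
mktcap = rng.lognormal(10, 1, n)
inst_hold = mktcap * rng.uniform(0.2, 0.9, n)    # $ held by institutions
misval = rng.standard_normal(n)                  # + = overvalued (proxy score)

w_mkt = mktcap / mktcap.sum()                    # market benchmark weights
w_inst = inst_hold / inst_hold.sum()             # aggregate institutional weights
overweight = w_inst - w_mkt

over, under = misval > 0.5, misval < -0.5
print(overweight[over].mean(), overweight[under].mean())  # riding: first > second
```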
Abstract:
We present the discovery of a wide (67 AU) substellar companion to the nearby (21 pc) young solar-metallicity M1 dwarf CD-35 2722, a member of the ~100 Myr AB Doradus association. Two epochs of astrometry from the NICI Planet-Finding Campaign confirm that CD-35 2722 B is physically associated with the primary star. Near-IR spectra indicate a spectral type of L4 +/- 1 with a moderately low surface gravity, making it one of the coolest young companions found to date. The absorption lines and near-IR continuum shape of CD-35 2722 B agree especially well with those of the dusty field L4.5 dwarf 2MASS J22244381-0158521, while the near-IR colors and absolute magnitudes match those of the 5 Myr old L4 planetary-mass companion 1RXS J160929.1-210524 b. Overall, CD-35 2722 B appears to be an intermediate-age benchmark for L dwarfs, with a less peaked H-band continuum than the youngest objects and near-IR absorption lines comparable to those of field objects. We fit Ames-Dusty model atmospheres to the near-IR spectra and find T_eff = 1700-1900 K and log g = 4.5 +/- 0.5. The spectra also show that the radial velocities of components A and B agree to within +/- 10 km s^-1, further confirming their physical association. Using the age and bolometric luminosity of CD-35 2722 B, we derive a mass of 31 +/- 8 M_Jup from the Lyon/Dusty evolutionary models. Altogether, young late-M to mid-L type companions appear to be overluminous for their near-IR spectral type compared with field objects, in contrast to the underluminosity of young late-L and early-T dwarfs.
Abstract:
The diffusion of astrophysical magnetic fields in conducting fluids in the presence of turbulence depends on whether magnetic fields can change their topology via reconnection in highly conducting media. Recent progress in understanding fast magnetic reconnection in the presence of turbulence gives confidence that magnetic field behavior is similar in computer simulations and in turbulent astrophysical environments, as far as magnetic reconnection is concerned. This makes it meaningful to perform MHD simulations of turbulent flows in order to understand the diffusion of magnetic fields in astrophysical environments. Our studies of magnetic field diffusion in turbulent media reveal interesting new phenomena. First of all, our three-dimensional MHD simulations initiated with anti-correlated magnetic field and gaseous density exhibit at later times a de-correlation of the magnetic field and density, which corresponds well to observations of the interstellar medium. While earlier studies stressed the role of either ambipolar diffusion or time-dependent turbulent fluctuations in de-correlating magnetic field and density, we obtain the effect of permanent de-correlation with a one-fluid code, i.e., without invoking ambipolar diffusion. In addition, in the presence of gravity and turbulence, our three-dimensional simulations show a decrease of the magnetic flux-to-mass ratio as the gaseous density at the center of the gravitational potential increases. We observe this effect both when we start with equilibrium distributions of gas and magnetic field and when we follow the evolution of collapsing, dynamically unstable configurations. Thus, the process of turbulent magnetic field removal should be applicable both to quasi-static subcritical molecular clouds and cores and to violently collapsing supercritical entities. The increase of the gravitational potential as well as the magnetization of the gas increases the segregation of the mass and magnetic flux in the saturated final state of the simulations, supporting the notion that the reconnection-enabled diffusivity relaxes the magnetic field + gas system in the gravitational field to its minimal energy state. This effect is expected to play an important role in star formation, from its initial stages of concentrating interstellar gas to the final stages of accretion onto the forming protostar. In addition, we benchmark our codes by studying heat transfer in magnetized compressible fluids and confirm the high rates of turbulent advection of heat obtained in an earlier study.
Abstract:
We present a new technique for obtaining model fittings to very long baseline interferometric images of astrophysical jets. The method minimizes a performance function proportional to the sum of the squared differences between the model and observed images. The model image is constructed by summing N_s elliptical Gaussian sources characterized by six parameters: two-dimensional peak position, peak intensity, eccentricity, amplitude, and orientation angle of the major axis. We present results for the fitting of two main benchmark jets: the first constructed from three individual Gaussian sources, the second formed by five Gaussian sources. Both jets were analyzed by our cross-entropy technique in finite and infinite signal-to-noise regimes, with the background noise chosen to mimic that found in interferometric radio maps. Those images were constructed to simulate most of the conditions encountered in interferometric images of active galactic nuclei. We show that the cross-entropy technique is capable of recovering the parameters of the sources with an accuracy similar to that obtained from the very traditional Astronomical Image Processing System (AIPS) task IMFIT when the image is relatively simple (e.g., few components). For more complex interferometric maps, our method displays superior performance in recovering the parameters of the jet components. Our methodology is also able to show quantitatively the number of individual components present in an image. An additional application of the cross-entropy technique to a real image of a BL Lac object is shown and discussed. Our results indicate that our cross-entropy model-fitting technique must be used in situations involving the analysis of complex emission regions having more than three sources, even though it is substantially slower than current model-fitting tasks (at least 10,000 times slower for a single processor, depending on the number of sources to be optimized). As in the case of any model fitting performed in the image plane, caution is required in analyzing images constructed from a poorly sampled (u, v) plane.
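The cross-entropy machinery described can be sketched compactly: sample parameter vectors from a distribution, rank them by the squared-difference performance function, and refit the distribution to the elite samples. The toy below fits a single circular Gaussian to a synthetic noisy map; the paper's elliptical six-parameter sources and interferometric noise model are simplified away.

```python
# Toy cross-entropy (CE) model fitting in the spirit described: sample
# parameters, keep the elite fraction by squared-difference score, refit.
import numpy as np

rng = np.random.default_rng(3)
yy, xx = np.mgrid[0:64, 0:64]

def model(p):
    """One circular Gaussian source: position (x0, y0), amplitude, width."""
    x0, y0, amp, sig = p
    return amp * np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sig ** 2))

truth = np.array([40.0, 25.0, 5.0, 3.0])
image = model(truth) + 0.05 * rng.standard_normal(xx.shape)  # noisy "map"

mu = np.array([32.0, 32.0, 1.0, 5.0])          # initial parameter means
sd = np.array([15.0, 15.0, 3.0, 3.0])          # initial parameter spreads
n_samp, n_elite = 200, 20
for _ in range(40):                            # CE iterations
    samples = rng.normal(mu, sd, (n_samp, 4))
    scores = [np.sum((image - model(p)) ** 2) for p in samples]
    elite = samples[np.argsort(scores)[:n_elite]]
    mu, sd = elite.mean(axis=0), elite.std(axis=0) + 1e-3  # smoothed update
print(mu)  # should approach `truth`
```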
Abstract:
Evolutionary change in New World Monkey (NWM) skulls occurred primarily along the line of least resistance defined by size (including allometric) variation (g_max). Although the direction of evolution was aligned with this axis, it was not clear whether this macroevolutionary pattern results from the conservation of within-population genetic covariance patterns (long-term constraint), from long-term selection along a size dimension, or whether both constraints and selection were inextricably involved. Furthermore, G-matrix stability can also be a consequence of selection, which implies that both the constraints embodied in g_max and the evolutionary changes observed in trait averages would be influenced by selection. Here, we describe a combination of approaches that allows one to test whether any particular instance of size evolution is a correlated by-product of constraints (g_max) or is due to direct selection on size, and apply it to NWM lineages as a case study. The approach is based on comparing the direction and amount of evolutionary change produced by two different simulated sets of net selection gradients (β): a size set (isometric and allometric size) and a nonsize set. Using this approach it is possible to distinguish between the two hypotheses (indirect size evolution due to constraints or direct selection on size), because although both may produce an evolutionary response aligned with g_max, the amount of change produced by random selection operating through the variance/covariance patterns (constraints hypothesis) will be much smaller than that produced by selection on size (selection hypothesis). Furthermore, the alignment of simulated evolutionary changes with g_max when selection is not on size is not as tight as when selection is actually on size, allowing a statistical test of whether a particular observed case of evolution along the line of least resistance is the result of selection along it or not. Also, with matrix diagonalization (principal components [PC]) it is possible to calculate directly the net selection gradient on size alone (the first PC [PC1]) by dividing the amount of phenotypic difference between any two populations by the amount of variation in PC1, which allows one to benchmark whether selection was on size or not.
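The logic rests on the multivariate breeder's equation; since the abstract gives no formulas, the standard notation below is an assumption. The constraint hypothesis corresponds to random β whose response Δz̄ is dragged toward g_max (the leading eigenvector of G), whereas the selection hypothesis puts β along size itself; the final step estimates the net selection gradient on size directly:

```latex
% Multivariate response to selection (Lande): G is the additive genetic
% variance/covariance matrix, \beta the net selection gradient.
\Delta \bar{z} = G \beta
% Net selection gradient on size (PC1), per the diagonalization argument:
% the between-population difference along PC1 over the variance in PC1.
\beta_{\mathrm{PC1}} = \frac{\Delta \bar{z}_{\mathrm{PC1}}}{\lambda_{\mathrm{PC1}}}
```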
Abstract:
The importance of the HSO2 system in atmospheric and combustion chemistry has motivated several works dedicated to the study of associated structures and chemical reactions. Nevertheless, controversy still exists in connection with the reaction SH + O2 -> H + SO2 and also with the role of the HSOO isomers in the potential energy surface (PES). Here we report high-level ab initio calculations for the electronic ground state of the HSO2 system. Energetic, geometric, and frequency properties for the major stationary points of the PES are reported at the same level of calculation: CASPT2/aug-cc-pV(T+d)Z. This study introduces three new stationary points (two saddle points and one minimum). These structures allow the connection of the skewed HSOO isomers and the HSO2 minima, defining new reaction paths for SH + O2 -> H + SO2 and SH + O2 -> OH + SO. In addition, the location of the HSOO isomers in the reaction pathways has been clarified.
Abstract:
The thermodynamic properties of a selected set of benchmark hydrogen-bonded systems (the acetic acid dimer and the complexes of acetic acid with acetamide and methanol) were studied with the goal of obtaining detailed information on solvent effects on the hydrogen-bonded interactions, using water, chloroform, and n-heptane as representatives of a wide range in dielectric constant. Solvent effects were investigated using both explicit and implicit solvation models. For the explicit description of the solvent, molecular dynamics and Monte Carlo simulations in the isothermal-isobaric (NpT) ensemble combined with the free energy perturbation technique were performed to determine solvation free energies. Within the implicit solvation approach, the polarizable continuum model and the conductor-like screening model were applied. Combination of gas-phase results with the results obtained from the different solvation models through an appropriate thermodynamic cycle allows estimation of complexation free energies, enthalpies, and the respective entropic contributions in solution. Owing to the strong solvation effects of water, the cyclic acetic acid dimer is not stable in aqueous solution. In less polar solvents the double hydrogen bond structure of the acetic acid dimer remains stable. This finding is in agreement with previous theoretical and experimental results. A similar trend as for the acetic acid dimer is also observed for the acetamide complex. The methanol complex was found to be thermodynamically unstable in the gas phase as well as in any of the three solvents.
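The thermodynamic cycle referred to is, in its usual form (the abstract does not spell it out, so this is the standard construction), the combination of the gas-phase complexation free energy with the solvation free energies of the complex AB and its monomers A and B:

```latex
% Complexation free energy in solution from a thermodynamic cycle:
% gas-phase binding plus the difference in solvation free energies.
\Delta G^{\mathrm{sol}}_{\mathrm{complex}}
  = \Delta G^{\mathrm{gas}}_{\mathrm{complex}}
  + \Delta G_{\mathrm{solv}}(AB)
  - \Delta G_{\mathrm{solv}}(A)
  - \Delta G_{\mathrm{solv}}(B)
```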
Abstract:
We use a new technique to investigate the systematic behavior of near-barrier complete fusion, total fusion and total reaction cross sections of weakly bound systems. A dimensionless fusion excitation function is used as a benchmark against which renormalized fusion data are compared, so that dynamic breakup effects can be disentangled from static effects. The same reduction procedure is used to study the effect of direct reaction mechanisms on the total reaction cross section.
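The dimensionless benchmark is presumably the universal fusion function reduction (Canto et al.); the formulas below are given under that assumption, since the abstract does not state them. Data reduced this way collapse onto a single system-independent curve when Wong's formula holds, so departures expose dynamic breakup effects:

```latex
% Assumed universal-fusion-function reduction; V_B, R_B and \hbar\omega are
% the height, radius and curvature parameter of the Coulomb barrier.
F(x) = \ln\!\left[1 + e^{2\pi x}\right], \qquad
x = \frac{E_{\mathrm{c.m.}} - V_B}{\hbar\omega}, \qquad
F_{\mathrm{exp}}(x) = \frac{2 E_{\mathrm{c.m.}}}{\hbar\omega R_B^{2}}\,\sigma_F
```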
Abstract:
High-energy nuclear collisions create an energy density similar to that of the Universe microseconds after the Big Bang (ref. 1); in both cases, matter and antimatter are formed with comparable abundance. However, the relatively short-lived expansion in nuclear collisions allows antimatter to decouple quickly from matter and avoid annihilation. Thus, a high-energy accelerator of heavy nuclei provides an efficient means of producing and studying antimatter. The antimatter helium-4 nucleus (anti-4He), also known as the anti-alpha, consists of two antiprotons and two antineutrons (baryon number B = -4). It has not been observed previously, although the alpha-particle was identified a century ago by Rutherford and is present in cosmic radiation at the ten per cent level (ref. 2). Antimatter nuclei with B < -1 have been observed only as rare products of interactions at particle accelerators, where the rate of antinucleus production in high-energy collisions decreases by a factor of about 1,000 with each additional antinucleon (refs 3-5). Here we report the observation of anti-4He, the heaviest antinucleus observed to date. In total, 18 anti-4He counts were detected at the STAR experiment at the Relativistic Heavy Ion Collider (RHIC; ref. 6) in 10^9 recorded gold-on-gold (Au+Au) collisions at centre-of-mass energies of 200 GeV and 62 GeV per nucleon-nucleon pair. The yield is consistent with expectations from thermodynamic (ref. 7) and coalescent nucleosynthesis (ref. 8) models, providing an indication of the production rate of even heavier antimatter nuclei and a benchmark for possible future observations of anti-4He in cosmic radiation.
Abstract:
Deviations from the average can provide valuable insights about the organization of natural systems. The present article extends this important principle to the systematic identification and analysis of singular motifs in complex networks. Six measurements quantifying different and complementary features of the connectivity around each node of a network were calculated, and multivariate statistical methods were applied to identify singular nodes. The potential of the presented concepts and methodology was illustrated with respect to different types of complex real-world networks, namely the US air transportation network, the protein-protein interaction network of the yeast Saccharomyces cerevisiae and the Roget thesaurus network. The obtained singular motifs possessed unique functional roles in the networks. Three classic theoretical network models were also investigated, with the Barabási-Albert model resulting in singular motifs corresponding to hubs, confirming the potential of the approach. Interestingly, the number of different types of singular node motifs as well as the number of their instances were found to be considerably higher in the real-world networks than in any of the benchmark networks.
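The detection pipeline described can be sketched in a few lines: compute several connectivity measurements per node, then flag multivariate outliers. The six measurements used by the authors are not listed in the abstract, so the features below are stand-ins, and the outlier rule (largest Mahalanobis distances) is likewise an assumption.

```python
# Sketch of singular-node detection: per-node connectivity features, then
# multivariate outliers by Mahalanobis distance. Features are stand-ins for
# the paper's six measurements, which the abstract does not enumerate.
import networkx as nx
import numpy as np

G = nx.barabasi_albert_graph(500, 3, seed=4)     # benchmark model network
feats = np.column_stack([
    [d for _, d in G.degree()],
    list(nx.clustering(G).values()),
    list(nx.betweenness_centrality(G, k=100, seed=4).values()),
    list(nx.average_neighbor_degree(G).values()),
])

mu = feats.mean(axis=0)
cov_inv = np.linalg.pinv(np.cov(feats, rowvar=False))
d2 = np.einsum("ij,jk,ik->i", feats - mu, cov_inv, feats - mu)  # Mahalanobis^2

singular = np.argsort(d2)[-10:]                  # most atypical nodes
print(singular, np.round(np.sqrt(d2[singular]), 1))  # hubs, in the BA model
```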