17 results for utilities
in CentAUR: Central Archive at the University of Reading - UK
Abstract:
Compute grids are used widely in many areas of environmental science, but there has been limited uptake of grid computing by the climate modelling community, partly because the characteristics of many climate models make them difficult to use with popular grid middleware systems. In particular, climate models usually produce large volumes of output data, and running them usually involves complicated workflows implemented as shell scripts. For example, NEMO (Smith et al., 2008) is a state-of-the-art ocean model that is currently used for operational ocean forecasting in France, and will soon be used in the UK for both ocean forecasting and climate modelling. On a typical modern cluster, a one-year global ocean simulation at 1-degree resolution takes about three hours on 40 processors and produces roughly 20 GB of output as 50,000 separate files. 50-year simulations are common, during which the model is resubmitted as a new job after each year. Running NEMO relies on a set of complicated shell scripts and command-line utilities for data pre-processing and post-processing prior to job resubmission. Grid Remote Execution (G-Rex) is a pure Java grid middleware system that allows scientific applications to be deployed as Web services on remote computer systems, and then launched and controlled as if they were running on the user's own computer. Although G-Rex is general-purpose middleware, it has two key features that make it particularly suitable for remote execution of climate models: (1) output from the model is transferred back to the user while the run is in progress, to prevent it from accumulating on the remote system and to allow the user to monitor the model; (2) the client component is a command-line program that can easily be incorporated into existing model workflow scripts. G-Rex has a REST (Fielding, 2000) architectural style, which allows client programs to be very simple and lightweight, and allows users to interact with model runs using only a basic HTTP client (such as a Web browser or the curl utility) if they wish. This design also allows new client interfaces to be developed in other programming languages with relatively little effort. The G-Rex server is a standard Web application that runs inside a servlet container such as Apache Tomcat, and is therefore easy for system administrators to install and maintain. G-Rex is employed as the middleware for the NERC Cluster Grid, a small grid of HPC clusters belonging to collaborating NERC research institutes. Currently the NEMO (Smith et al., 2008) and POLCOMS (Holt et al., 2008) ocean models are installed, and there are plans to install the Hadley Centre's HadCM3 model for use in the decadal climate prediction project GCEP (Haines et al., 2008). The science projects involving NEMO on the Grid have a particular focus on data assimilation (Smith et al., 2008), a technique that involves constraining model simulations with observations. The POLCOMS model will play an important part in the GCOMS project (Holt et al., 2008), which aims to simulate the world's coastal oceans. A typical use of G-Rex by a scientist to run a climate model on the NERC Cluster Grid proceeds as follows: (1) the scientist prepares input files on his or her local machine; (2) using information provided by the Grid's Ganglia monitoring system, the scientist selects an appropriate compute resource; (3) the scientist runs the relevant workflow script on his or her local machine, which is unmodified except that calls to run the model (e.g. with "mpirun") are simply replaced with calls to "GRexRun"; (4) the G-Rex middleware automatically handles the uploading of input files to the remote resource, and the downloading of output files back to the user, including their deletion from the remote system, during the run; (5) the scientist monitors the output files, using familiar analysis and visualization tools on his or her own local machine. G-Rex is well suited to climate modelling because it addresses many of the middleware usability issues that have led to limited uptake of grid computing by climate scientists. It is a lightweight, low-impact and easy-to-install solution that is currently designed for use in relatively small grids such as the NERC Cluster Grid. A current topic of research is the use of G-Rex as an easy-to-use front-end to larger-scale Grid resources such as the UK National Grid Service.
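Because the abstract describes a REST-style service that can be driven from any basic HTTP client, a minimal Python sketch of that style of interaction is given below. The endpoint paths, parameter names, and JSON fields are hypothetical illustrations only, not the actual G-Rex API.

```python
# Minimal sketch of driving a REST-style remote-execution service from Python.
# The endpoint paths, parameter names, and JSON fields below are hypothetical
# illustrations, not the actual G-Rex API.
import time
import requests

BASE = "http://cluster.example.org/grex"          # hypothetical G-Rex server URL

def run_remote_job(input_path: str) -> None:
    # Upload an input file and start a run (hypothetical endpoints).
    with open(input_path, "rb") as f:
        resp = requests.post(f"{BASE}/jobs", files={"input": f})
    resp.raise_for_status()
    job_url = resp.json()["jobUrl"]               # assumed response field

    # Poll the job and pull finished output files back to the client as the run
    # progresses, mirroring the "transfer output during the run" behaviour above.
    while True:
        status = requests.get(job_url).json()
        for name in status.get("finishedOutputs", []):
            data = requests.get(f"{job_url}/outputs/{name}").content
            with open(name, "wb") as out:
                out.write(data)
        if status.get("state") in ("FINISHED", "FAILED"):
            break
        time.sleep(30)

if __name__ == "__main__":
    run_remote_job("namelist_cfg")                # example NEMO-style input file name
```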
Abstract:
Tycho was conceived in 2003 in response to a need by the GridRM [1] resource-monitoring project for a "light-weight", scalable and easy-to-use wide-area distributed registry and messaging system. Since Tycho's first release in 2006, a number of modifications have been made to the system to make it easier to use and more flexible. Since its inception, Tycho has been utilised across a number of application domains, including wide-area resource monitoring, distributed queries across archival databases, providing services for the nodes of a Cray supercomputer, and transferring multi-terabyte scientific datasets across the Internet. This paper provides an overview of the initial Tycho system, describes a number of applications that utilise Tycho, discusses a number of new utilities, and explains how the Tycho infrastructure has evolved in response to the experience of building applications with it.
Abstract:
Increasingly, distributed systems are being used to host all manner of applications. While these platforms provide a relatively cheap and effective means of executing applications, so far there has been little work on developing tools and utilities that can help application developers understand problems with the supporting software or the executing applications. To fully understand why an application executing on a distributed system is not behaving as expected, it is important to analyse not only the application but also the underlying middleware and the operating system; otherwise issues could be missed, and overall performance profiling and fault diagnosis would certainly be harder. We believe that one approach to profiling and analysing distributed systems and their associated applications is via the plethora of log files generated at runtime. In this paper we report on a system (Slogger) that utilises various emerging Semantic Web technologies to gather the heterogeneous log files generated by the various layers in a distributed system and unify them in a common data store. Once unified, the log data can be queried and visualised in order to highlight potential problems or issues that may be occurring in the supporting software or the application itself.
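To make the general approach concrete, the sketch below converts heterogeneous log entries into RDF triples with rdflib and runs a SPARQL query over the unified store. The namespace, predicates, and log fields are illustrative assumptions, not Slogger's actual vocabulary or parsers.

```python
# Sketch of unifying heterogeneous log lines as RDF and querying them with SPARQL.
# The namespace, predicates, and log formats are illustrative assumptions.
from rdflib import Graph, Literal, Namespace, RDF, URIRef

LOG = Namespace("http://example.org/log#")        # hypothetical vocabulary

def add_entry(g: Graph, source: str, level: str, message: str, ts: str) -> None:
    entry = URIRef(f"http://example.org/entry/{source}/{ts}")
    g.add((entry, RDF.type, LOG.Entry))
    g.add((entry, LOG.source, Literal(source)))   # e.g. "app", "middleware", "os"
    g.add((entry, LOG.level, Literal(level)))
    g.add((entry, LOG.message, Literal(message)))
    g.add((entry, LOG.timestamp, Literal(ts)))

g = Graph()
add_entry(g, "app", "ERROR", "task 17 timed out", "2009-05-01T10:03:22")
add_entry(g, "middleware", "WARN", "queue full", "2009-05-01T10:03:20")

# Query the unified store for all ERROR-level entries, whichever layer produced them.
q = """
PREFIX log: <http://example.org/log#>
SELECT ?source ?message WHERE {
    ?e a log:Entry ; log:level "ERROR" ; log:source ?source ; log:message ?message .
}
"""
for row in g.query(q):
    print(row.source, row.message)
```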
Abstract:
This paper develops and tests formulas for representing playing strength at chess by the quality of moves played, rather than by the results of games. Intrinsic quality is estimated via evaluations given by computer chess programs run to high depth, ideally so that their playing strength is sufficiently far ahead of the best human players as to be a "relatively omniscient" guide. Several formulas, each having intrinsic skill parameters s for "sensitivity" and c for "consistency", are argued theoretically and tested by regression on large sets of tournament games played by humans of varying strength as measured by the internationally standard Elo rating system. This establishes a correspondence between Elo rating and the parameters. A smooth correspondence is shown between statistical results and the century points on the Elo scale, and ratings are shown to have stayed quite constant over time. That is, there has been little or no "rating inflation". The theory and empirical results are transferable to other rational-choice settings in which the alternatives have well-defined utilities, but in which complexity and bounded information constrain the perception of the utility values.
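For illustration, the sketch below fits a toy move-choice model with sensitivity and consistency parameters by maximum likelihood. The exp(-(delta/s)**c) functional form, the toy data, and the fitting setup are assumptions made here to show the general idea; the paper's actual formulas may differ.

```python
# Illustrative sketch: the probability of playing a move decays with its evaluation
# loss delta (in pawns) relative to the engine's best move. The exp(-(delta/s)**c)
# form is an assumption for illustration, not necessarily the paper's formula.
import numpy as np
from scipy.optimize import minimize

def choice_probs(deltas: np.ndarray, s: float, c: float) -> np.ndarray:
    weights = np.exp(-(deltas / s) ** c)
    return weights / weights.sum()

def neg_log_likelihood(params, positions, chosen):
    s, c = params
    ll = 0.0
    for deltas, k in zip(positions, chosen):
        ll += np.log(choice_probs(np.asarray(deltas, float), s, c)[k])
    return -ll

# positions: evaluation losses of each legal move (0.0 = engine's best move);
# chosen: index of the move the human actually played. Toy data for illustration only.
positions = [[0.0, 0.3, 0.9], [0.0, 0.1, 0.5, 1.2]]
chosen = [0, 1]
fit = minimize(neg_log_likelihood, x0=[0.1, 0.5], args=(positions, chosen),
               bounds=[(1e-3, None), (1e-3, None)])
s_hat, c_hat = fit.x
print(f"fitted sensitivity s={s_hat:.3f}, consistency c={c_hat:.3f}")
```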
Abstract:
The orthodox approach to incentivising Demand Side Participation (DSP) programs is that utility losses from capital, installation and planning costs should be recovered under financial incentive mechanisms which aim to ensure that utilities have the right incentives to implement DSP activities. The recent national smart metering roll-out in the UK implies that this approach needs to be reassessed, since utilities will recover the capital costs associated with DSP technology through bills. This paper introduces a reward and penalty mechanism focusing on residential users. DSP planning costs are recovered through payments from those consumers who do not react to peak signals. Those consumers who do react are rewarded by paying lower bills. Because real-time incentives to residential consumers tend to fail due to the negligible amounts associated with the net gains (and losses) of individual users, in the proposed mechanism the regulator determines benchmarks which are matched against responses to signals and caps the level of rewards/penalties to avoid market distortions. The paper presents an overview of existing financial incentive mechanisms for DSP; introduces the reward/penalty mechanism aimed at fostering DSP under the hypothesis of smart metering roll-out; considers the costs faced by utilities for DSP programs; assesses linear rate effects and value changes; introduces compensatory weights for those consumers who have physical or financial impediments; and shows findings based on simulation runs on three discrete levels of elasticity.
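A minimal sketch of this type of capped reward/penalty settlement is given below: responders whose verified peak-time reduction meets a regulator-set benchmark receive a rebate, non-responders pay a surcharge that recovers planning costs, and both transfers are capped. All benchmark, price and cap values are hypothetical placeholders, not figures from the paper.

```python
# Minimal sketch of a capped reward/penalty settlement of the kind described above.
# All numbers are hypothetical placeholders.

BENCHMARK_KW = 0.5      # assumed required peak-time reduction per household (kW)
REWARD_PER_KW = 2.0     # assumed rebate per kW of verified reduction (currency units)
PENALTY = 1.5           # assumed flat surcharge for non-responders
CAP = 3.0               # regulator's cap on any individual reward or penalty

def settlement(peak_reduction_kw: float) -> float:
    """Return the bill adjustment: negative = rebate, positive = surcharge."""
    if peak_reduction_kw >= BENCHMARK_KW:
        return -min(REWARD_PER_KW * peak_reduction_kw, CAP)
    return min(PENALTY, CAP)

for reduction in (0.0, 0.4, 0.8, 2.5):
    print(f"reduction {reduction:.1f} kW -> bill adjustment {settlement(reduction):+.2f}")
```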
Abstract:
This paper has two principal aims: first, to unravel some of the arguments mobilized in the controversial privatization debate, and second, to review the scale and nature of private sector provision of water and sanitation in Africa, Asia and Latin America. Despite being vigorously promoted in the policy arena and having been implemented in several countries in the South in the 1990s, privatization has achieved neither the scale nor benefits anticipated. In particular, the paper is pessimistic about the role that privatization can play in achieving the Millennium Development Goals of halving the number of people without access to water and sanitation by 2015. This is not because of some inherent contradiction between private profits and the public good, but because neither publicly nor privately operated utilities are well suited to serving the majority of low-income households with inadequate water and sanitation, and because many of the barriers to service provision in poor settlements can persist whether water and sanitation utilities are publicly or privately operated. This is not to say that well-governed localities should not choose to involve private companies in water and sanitation provision, but it does imply that there is no justification for international agencies and agreements to actively promote greater private sector participation on the grounds that it can significantly reduce deficiencies in water and sanitation services in the South.
Abstract:
This study proposes a utility-based framework for the determination of optimal hedge ratios (OHRs) that can allow for the impact of higher moments on hedging decisions. We examine the entire hyperbolic absolute risk aversion family of utilities, which includes quadratic, logarithmic, power, and exponential utility functions. We find that for both moderate and large spot (commodity) exposures, the performance of out-of-sample hedges constructed allowing for nonzero higher moments is better than the performance of the simpler OLS hedge ratio. The picture is, however, not uniform across our seven spot commodities, as there is one instance (cotton) for which the modeling of higher moments decreases welfare out-of-sample relative to the simpler OLS. We support our empirical findings with a theoretical analysis of optimal hedging decisions, and we uncover a novel link between OHRs and the minimax hedge ratio, that is, the ratio which minimizes the largest loss of the hedged position.
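For reference, the OLS benchmark mentioned above is the minimum-variance hedge ratio, i.e. the slope from regressing spot price changes on futures price changes, h* = Cov(dS, dF) / Var(dF). The sketch below computes it on simulated data purely for illustration; it does not reproduce the paper's utility-based OHRs.

```python
# Sketch of the simpler OLS (minimum-variance) hedge ratio used as a benchmark above.
import numpy as np

rng = np.random.default_rng(0)
futures_returns = rng.normal(0.0, 0.02, size=500)                 # simulated, illustrative
spot_returns = 0.9 * futures_returns + rng.normal(0.0, 0.01, 500)

# h* = Cov(dS, dF) / Var(dF), i.e. the OLS slope of spot changes on futures changes.
h_ols = np.cov(spot_returns, futures_returns)[0, 1] / np.var(futures_returns, ddof=1)
hedged = spot_returns - h_ols * futures_returns                   # P&L of the hedged position

print(f"OLS hedge ratio: {h_ols:.3f}")
print(f"variance unhedged: {np.var(spot_returns):.6f}, hedged: {np.var(hedged):.6f}")
```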
Abstract:
Attribute non-attendance in choice experiments affects WTP estimates and therefore the validity of the method. A recent strand of literature uses attenuated estimates of marginal utilities of ignored attributes. Following this approach, we propose a generalisation of the mixed logit model whereby the distribution of marginal utility coefficients of a stated non-attender has a potentially lower mean and lower variance than those of a stated attender. Model comparison shows that our shrinkage approach fits the data better and produces more reliable WTP estimates. We further find that while reliability of stated attribute non-attendance increases in successive choice experiments, it does not increase when respondents report having ignored the same attribute twice.
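The simulation sketch below illustrates the shrinkage idea described above: the marginal-utility coefficient of a stated non-attender is drawn from a distribution with a scaled-down mean and variance relative to a stated attender. The parameter values and single-attribute setup are illustrative assumptions, not the paper's estimated model.

```python
# Simulation sketch of shrunken coefficient distributions for stated non-attenders.
# All parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = -1.0, 0.5               # assumed coefficient distribution for attenders
shrink_mean, shrink_sd = 0.3, 0.5   # assumed shrinkage factors for stated non-attenders

def simulated_choice_prob(attribute_level: float, non_attender: bool, draws: int = 5000) -> float:
    """Average logit probability of choosing the alternative over an opt-out with utility 0."""
    m = mu * shrink_mean if non_attender else mu
    s = sigma * shrink_sd if non_attender else sigma
    beta = rng.normal(m, s, size=draws)            # mixed-logit style coefficient draws
    utility = beta * attribute_level
    return float(np.mean(1.0 / (1.0 + np.exp(-utility))))

print("attender:     ", simulated_choice_prob(1.0, non_attender=False))
print("non-attender: ", simulated_choice_prob(1.0, non_attender=True))
```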
Abstract:
The recent roll-out of smart metering technologies in several developed countries has intensified research on the impacts of Time-of-Use (TOU) pricing on consumption. This paper analyses a TOU dataset from the Province of Trento in Northern Italy using a stochastic adjustment model. Findings highlight the non-steadiness of the relationship between consumption and TOU price. Weather and active occupancy can partly explain future consumption in relation to price.
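One common way to relate consumption to its previous value, the TOU price, weather and active occupancy is a partial-adjustment regression, sketched below on synthetic data. This is a generic illustration only; the paper's actual stochastic adjustment specification may differ.

```python
# Generic partial-adjustment regression sketch (illustrative, not the paper's model).
import numpy as np

def fit_adjustment_model(consumption, price, weather, occupancy):
    """Regress C_t on C_{t-1}, price_t, weather_t and occupancy_t by least squares."""
    y = consumption[1:]
    X = np.column_stack([
        np.ones(len(y)),        # intercept
        consumption[:-1],       # lagged consumption (partial adjustment term)
        price[1:],
        weather[1:],
        occupancy[1:],
    ])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs               # [intercept, lag, price, weather, occupancy]

# Illustrative synthetic data only.
rng = np.random.default_rng(2)
n = 200
price = rng.choice([0.1, 0.2], size=n)              # two TOU price bands
weather = rng.normal(15, 5, size=n)                 # e.g. outdoor temperature
occupancy = rng.integers(0, 2, size=n).astype(float)
consumption = np.zeros(n)
for t in range(1, n):
    consumption[t] = (0.5 + 0.6 * consumption[t - 1] - 2.0 * price[t]
                      + 0.02 * weather[t] + 0.8 * occupancy[t] + rng.normal(0, 0.1))

print(fit_adjustment_model(consumption, price, weather, occupancy))
```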
Abstract:
Social media utilities have made it easier than ever to know about the range of online or offline social activities one could be engaging in. On the upside, these social resources provide a multitude of opportunities for interaction; on the downside, they often broadcast more options than can be pursued, given practical restrictions and limited time. This dual nature of social media has driven popular interest in the concept of Fear of Missing Out, popularly referred to as FoMO. Defined as a pervasive apprehension that others might be having rewarding experiences from which one is absent, FoMO is characterized by the desire to stay continually connected with what others are doing. The present research presents three studies conducted to advance an empirically based understanding of the fear of missing out phenomenon. The first study collected a diverse international sample of participants in order to create a robust individual-differences measure of FoMO, the Fear of Missing Out scale (FoMOs); this study is the first to operationalize the construct. Study 2 recruited a nationally representative cohort to investigate how demographic, motivational and well-being factors relate to FoMO. Study 3 examined the behavioral and emotional correlates of fear of missing out in a sample of young adults. Implications of the FoMOs measure and directions for the future study of FoMO are discussed.
Abstract:
Despite an extensive market segmentation literature, applied academic studies which bridge segmentation theory and practice remain a priority for researchers. The need for studies which examine the segmentation implementation barriers faced by organisations is particularly acute. We explore segmentation implementation through the eyes of a European utilities business, by following its progress through a major segmentation project. The study reveals the character and impact of implementation barriers occurring at different stages in the segmentation process. By classifying the barriers, we develop implementation "rules" for practitioners which are designed to minimise their occurrence and impact. We further contribute to the literature by developing a deeper understanding of the mechanisms through which these implementation rules can be applied.
Abstract:
The performance of rank-dependent preference functionals under risk is comprehensively evaluated using Bayesian model averaging. Model comparisons are made at three levels of heterogeneity plus three ways of linking deterministic and stochastic models: the differences in utilities, the differences in certainty equivalents, and contextual utility. Overall, the "best model", which is conditional on the form of heterogeneity, is a form of Rank Dependent Utility or Prospect Theory that captures the majority of behaviour at both the representative-agent and individual level. However, the curvature of the probability weighting function for many individuals is S-shaped, or ostensibly concave or convex, rather than the inverse S-shape commonly employed. Also, contextual utility is broadly supported across all levels of heterogeneity. Finally, the Priority Heuristic model, previously examined within a deterministic setting, is estimated within a stochastic framework; allowing for endogenous thresholds does improve model performance, although it does not compete well with the other specifications considered.
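To illustrate the kind of preference functional being compared, the sketch below evaluates a simple gamble under rank-dependent utility with an inverse-S probability weighting function. The one-parameter Tversky-Kahneman (1992) weighting form and power utility are used here purely for illustration; the paper compares several specifications and weighting shapes.

```python
# Sketch of a rank-dependent utility (RDU) evaluation with inverse-S probability weighting.
import numpy as np

def weight(p: np.ndarray, gamma: float) -> np.ndarray:
    """Inverse-S probability weighting for gamma < 1 (linear when gamma = 1)."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def rdu(outcomes, probs, alpha=0.8, gamma=0.65):
    """RDU of a gamble with non-negative outcomes and power utility x**alpha."""
    order = np.argsort(outcomes)[::-1]                      # rank outcomes from best to worst
    x, p = np.asarray(outcomes, float)[order], np.asarray(probs, float)[order]
    w = weight(np.cumsum(p), gamma)
    decision_weights = np.diff(np.concatenate(([0.0], w)))  # pi_i = w(P(>= x_i)) - w(P(> x_i))
    return float(np.sum(decision_weights * x**alpha))

# A 10% chance of 100 versus 40 for sure: with these parameters the small 10% probability
# is over-weighted (w(0.1) is about 0.18), so the gamble is valued above its
# unweighted expected-utility counterpart of 0.1 * 100**0.8.
print("RDU of gamble :", rdu([100.0, 0.0], [0.1, 0.9]))
print("RDU of sure 40:", rdu([40.0], [1.0]))
```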
Abstract:
Replacement and upgrading of assets in the electricity network requires financial investment by the distribution and transmission utilities. The replacement and upgrading of network assets also has an emissions impact, due to the carbon embodied in the materials used to manufacture network assets. This paper uses investment and asset data for the GB system for 2015-2023 to assess the suitability of using proxies based on peak demand data and network investment data to calculate the carbon impacts of network investments. The proxies are calculated on a regional basis and applied to calculate the embodied carbon associated with current network assets by DNO region. The proxies are also applied to peak demand data across the 2015-2023 period to estimate the expected levels of embodied carbon that will be associated with network investment during this period. The suitability of these proxies in different contexts is then discussed, along with an initial scenario analysis to calculate the impact of avoiding or deferring network investments through distributed generation projects. The proxies were found to be effective in estimating the total embodied carbon of electricity system investment in order to compare investment strategies in different regions of the GB network.
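The sketch below shows the general shape of such a proxy calculation: a regional factor relating embodied carbon to network investment (or to peak demand growth) is applied to investment or demand figures, including the carbon avoided when distributed generation defers an investment. All regional factors and figures are hypothetical placeholders, not values from the paper.

```python
# Sketch of a regional embodied-carbon proxy calculation. All factors are hypothetical.

# Hypothetical regional proxies: tonnes CO2e embodied per million pounds of network investment.
CARBON_PER_INVESTMENT = {"DNO_A": 120.0, "DNO_B": 95.0}

# Hypothetical regional proxies: tonnes CO2e embodied per MW of peak demand growth.
CARBON_PER_PEAK_MW = {"DNO_A": 40.0, "DNO_B": 30.0}

def embodied_carbon_from_investment(region: str, investment_m_gbp: float) -> float:
    return CARBON_PER_INVESTMENT[region] * investment_m_gbp

def embodied_carbon_from_peak(region: str, peak_growth_mw: float) -> float:
    return CARBON_PER_PEAK_MW[region] * peak_growth_mw

# Example: carbon avoided if a distributed-generation project defers 5 MW of peak demand
# growth that would otherwise have triggered reinforcement in region "DNO_A".
deferred = embodied_carbon_from_peak("DNO_A", 5.0)
print(f"Embodied carbon avoided by deferral: {deferred:.1f} tCO2e")
```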