155 results for Electricity Price Volatility
Abstract:
This paper introduces a parallel implementation of an agent-based model applied to electricity distribution grids. A fine-grained shared memory parallel implementation is presented, detailing how the agents are grouped and executed on a multi-threaded machine, as well as how the model is built (in a composable manner), which aids the parallelisation. Current results show a moderate speedup of 2.6, but improvements are expected from incorporating newer distributed or parallel ABM schedulers into this implementation. While domain-specific, this parallel algorithm can be applied to similarly structured ABMs (directed acyclic graphs).
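The grouping strategy described above lends itself to a level-wise schedule: since the model forms a directed acyclic graph, agents can be batched into topological levels and each level executed concurrently. Below is a minimal Python sketch of that idea, under the assumption of a simple step() interface per agent; all names are illustrative, not the paper's actual scheduler.

```python
# Minimal sketch: level-wise parallel execution of a DAG-structured ABM.
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

def topological_levels(deps):
    """Group nodes into levels; each node depends only on earlier levels."""
    indegree = {n: len(parents) for n, parents in deps.items()}
    children = defaultdict(list)
    for n, parents in deps.items():
        for p in parents:
            children[p].append(n)
    level = [n for n, k in indegree.items() if k == 0]
    while level:
        yield level
        nxt = []
        for n in level:
            for c in children[n]:
                indegree[c] -= 1
                if indegree[c] == 0:
                    nxt.append(c)
        level = nxt

def step_all(agents, deps, workers=4):
    """One simulation tick: levels in order, agents within a level in parallel."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for level in topological_levels(deps):
            list(pool.map(lambda name: agents[name].step(), level))

class Node:                                   # illustrative agent
    def __init__(self, name): self.name = name
    def step(self): print("stepping", self.name)

agents = {n: Node(n) for n in "abcd"}
deps = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}
step_all(agents, deps)                        # a, then b and c together, then d
```

Agents in the same level share no dependency path, so running them on separate threads preserves the sequential semantics of the DAG.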
Abstract:
Electricity is the cornerstone of modern life. It is essential to economic stability and growth, jobs and improved living standards. Electricity is also the fundamental ingredient for a dignified life; it is the source of such basic human requirements as cooked food, a comfortable living temperature and essential health care. For these reasons, it is unimaginable that today's economies could function without electricity and the modern energy services that it delivers. Somewhat ironically, however, the current approach to electricity generation also contributes to two of the gravest and most persistent problems threatening the livelihood of humans. These problems are anthropogenic climate change and sustained human poverty. To address these challenges, the global electricity sector must reduce its reliance on fossil fuel sources. In this context, the object of this research is twofold. First, it is to consider the design of the Renewable Energy (Electricity) Act 2000 (Cth) (Renewable Electricity Act), which represents Australia's primary regulatory approach to increasing the production of renewably sourced electricity. This analysis is conducted by reference to the regulatory models that exist in Germany and Great Britain. Within this context, this thesis then evaluates whether the Renewable Electricity Act is designed effectively to contribute to a more sustainable and dignified electricity generation sector in Australia. On the basis of the appraisal of the Renewable Electricity Act, this thesis contends that while certain aspects of the regulatory regime have merit, ultimately its design does not represent an effective and coherent regulatory approach to increasing the production of renewably sourced electricity. In this regard, this thesis proposes a number of recommendations to reform the existing regime. These recommendations are not intended to provide instantaneous or simple solutions to the problems of the current regulatory regime. Instead, the purpose of these recommendations is to establish the legal foundations for an effective regulatory regime designed to increase the production of renewably sourced electricity in Australia, in order to contribute to a more sustainable and dignified approach to electricity production.
Abstract:
The aim of this project is to develop a demand-side response model that assists electricity consumers who are exposed to the market price through an aggregator to manage peak air-conditioning electricity demand. The main contribution of this research is to show how consumers can optimise the energy cost of the air-conditioning load, considering the electricity market price and network overload. The model is tested with selected characteristics of the room, Queensland electricity market data from the Australian Energy Market Operator, and Bureau of Statistics temperature data for Brisbane, on hot weekdays during 2011–2012.
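To make the optimisation concrete, here is a hedged sketch of one way such a model can be posed: choose hourly air-conditioning energy against spot prices, subject to a first-order thermal model and a comfort band. All parameters and the price and temperature series are invented placeholders, not the project's data.

```python
# Sketch: cost-minimal air-conditioning schedule via linear programming,
# assuming T[t+1] = (1-a) T[t] + a Tout[t] - b u[t] with illustrative values.
import numpy as np
from scipy.optimize import linprog

H = 24
hours = np.arange(H)
price = 0.10 + 0.25 * np.exp(-0.5 * ((hours - 17) / 2.0) ** 2)  # $/kWh, evening peak
Tout = 30 + 5 * np.sin(np.pi * (hours - 9) / 12.0)              # outdoor temp, degC
a, b = 0.3, 2.0            # heat leakage per hour; degC of cooling per kWh
T0, Tmin, Tmax, umax = 24.0, 22.0, 25.0, 3.0

# Free (uncooled) temperature response F, and cooling influence matrix G,
# so the room temperature is T[t] = F[t] - (G u)[t], affine in the schedule u.
F = np.empty(H + 1)
F[0] = T0
for t in range(H):
    F[t + 1] = (1 - a) * F[t] + a * Tout[t]
G = np.zeros((H, H))
for t in range(1, H + 1):
    for k in range(t):
        G[t - 1, k] = b * (1 - a) ** (t - 1 - k)

# Keep T inside [Tmin, Tmax] while minimising total energy cost price @ u.
A_ub = np.vstack([-G, G])
b_ub = np.concatenate([Tmax - F[1:], F[1:] - Tmin])
res = linprog(price, A_ub=A_ub, b_ub=b_ub, bounds=[(0, umax)] * H)
print("feasible:", res.success, " daily cost: $%.2f" % res.fun)
```

Because the objective weights consumption by price, the solver naturally pre-cools during cheap hours before the evening peak, which is the behaviour a demand-side response aggregator would exploit.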
Abstract:
Consumer awareness and usage of Unit Price (UP) information continues to hold academic interest. Originally designed as a device to enable shoppers to make comparisons between grocery products, it is argued that consumers still lack a sufficient understanding of the device. Previous research has tended to focus on product choice, the effect of time, and structural changes to price presentation. No studies have tested the effect of UP consumer education on grocery shopping expenditure. Supported by distributed learning theories, this is the first study to condition participants over a twenty-week period to comprehend and employ UP information while shopping. A 3x5 mixed factorial design was employed to collect data from 357 shoppers. A 3 (Control, Massed, Spaced) x 5 (Time Point: Week 0, 5, 10, 15 and 20) mixed factorial analysis of variance (ANOVA) was performed to analyse the data. Preliminary results revealed that the three groups differed in their average expenditure over the twenty weeks. The Control group remained stable across the five time points. Results indicated that both intensive (Massed) and less intensive (Spaced) exposure to UP information achieved similar results, with both groups reducing average expenditure similarly by Week 5. These patterns held for twenty weeks, with conditioned groups reducing their grocery expenditure by over 10%. This research has academic value as a test of applied learning theories. We argue that retailers can attain considerable market advantages, as efforts to enhance customers' knowledge through consumer education campaigns can have a positive and strong impact on customer trust and goodwill toward the organisation. Hence, major practical implications for both regulators and retailers exist.
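For readers who want to reproduce the analysis design, the following is a minimal sketch of a 3 (group) x 5 (time point) mixed ANOVA on simulated data, using pingouin's mixed_anova; the data-generating numbers are invented to mimic the reported pattern, not the study's data.

```python
# Sketch: 3 (between: group) x 5 (within: week) mixed ANOVA on simulated spend.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
groups = ["Control", "Massed", "Spaced"]
weeks = [0, 5, 10, 15, 20]
rows = []
for s in range(357):                       # 357 shoppers, as in the study
    g = groups[s % 3]
    for w in weeks:
        base = 120.0                       # illustrative weekly spend ($)
        drop = 0.0 if g == "Control" or w == 0 else 12.0   # ~10% reduction
        rows.append({"subject": s, "group": g, "week": w,
                     "spend": base - drop + rng.normal(0, 10)})
df = pd.DataFrame(rows)

aov = pg.mixed_anova(data=df, dv="spend", within="week",
                     subject="subject", between="group")
print(aov)
```

A significant group x week interaction in this setup corresponds to the reported divergence of the conditioned groups from Control by Week 5.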
Abstract:
This paper investigates how best to forecast optimal portfolio weights in the context of a volatility timing strategy. It measures the economic value of a number of methods for forming optimal portfolios on the basis of realized volatility. These include the traditional econometric approach of forming portfolios from forecasts of the covariance matrix, and a novel method, where a time series of optimal portfolio weights is constructed from observed realized volatility and directly forecast. The approach proposed here of directly forecasting portfolio weights shows a great deal of merit. Resulting portfolios are of equivalent economic benefit to a number of competing approaches and are more stable across time. These findings have obvious implications for the manner in which volatility timing is undertaken in a portfolio allocation context.
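As an illustration of the direct-forecasting idea, the sketch below builds a history of global minimum-variance weights from realized covariance matrices and forecasts each weight series with a simple AR(1). The weight rule, the AR(1) choice and the simulated covariances are assumptions for exposition; the paper's models may differ.

```python
# Sketch: forecast portfolio weights directly, rather than the covariance matrix.
import numpy as np

def min_var_weights(cov):
    """Global minimum-variance portfolio: w = S^-1 1 / (1' S^-1 1)."""
    inv = np.linalg.inv(cov)
    w = inv @ np.ones(cov.shape[0])
    return w / w.sum()

def ar1_forecast(series):
    """One-step AR(1) forecast fitted by least squares."""
    x, y = series[:-1], series[1:]
    beta = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
    alpha = y.mean() - beta * x.mean()
    return alpha + beta * series[-1]

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))
# Simulated daily realized covariance matrices (placeholder data).
realized_covs = [A @ A.T + np.diag(rng.uniform(0.5, 1.5, 3)) for _ in range(250)]

weights = np.array([min_var_weights(c) for c in realized_covs])
w_next = np.array([ar1_forecast(weights[:, i]) for i in range(3)])
w_next /= w_next.sum()                     # renormalise to a valid allocation
print(w_next)
```

The competing "traditional" approach would instead forecast each covariance element and pass the forecast matrix through min_var_weights at the final step.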
Abstract:
Food prices and food affordability are important determinants of food choices, obesity and non-communicable diseases. As governments around the world consider policies to promote the consumption of healthier foods, data on the relative price and affordability of foods, with a particular focus on the difference between ‘less healthy’ and ‘healthy’ foods and diets, are urgently needed. This paper briefly reviews past and current approaches to monitoring food prices, and identifies key issues affecting the development of practical tools and methods for food price data collection, analysis and reporting. A step-wise monitoring framework, including measurement indicators, is proposed. ‘Minimal’ data collection will assess the differential price of ‘healthy’ and ‘less healthy’ foods; ‘expanded’ monitoring will assess the differential price of ‘healthy’ and ‘less healthy’ diets; and the ‘optimal’ approach will also monitor food affordability, by taking into account household income. The monitoring of the price and affordability of ‘healthy’ and ‘less healthy’ foods and diets globally will provide robust data and benchmarks to inform economic and fiscal policy responses. Given the range of methodological, cultural and logistical challenges in this area, it is imperative that all aspects of the proposed monitoring framework are tested rigorously before implementation.
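To make the three monitoring tiers concrete, here is a toy illustration of the indicator each tier might report; every price, diet cost and income figure is invented, and the exact indicator definitions would come from the tested protocol.

```python
# Sketch of the 'minimal' / 'expanded' / 'optimal' monitoring indicators.
healthy_basket = {"vegetables": 4.10, "fruit": 3.80, "wholegrain_bread": 2.50}
less_healthy_basket = {"soft_drink": 2.20, "chips": 3.00, "white_bread": 1.80}

# 'Minimal': differential price of 'healthy' vs 'less healthy' foods.
price_gap = sum(healthy_basket.values()) - sum(less_healthy_basket.values())

# 'Expanded': differential cost of whole diets (weekly, per household).
healthy_diet_cost, less_healthy_diet_cost = 210.0, 175.0
diet_gap = healthy_diet_cost - less_healthy_diet_cost

# 'Optimal': affordability -- diet cost as a share of household income.
median_weekly_income = 1100.0
affordability = healthy_diet_cost / median_weekly_income

print(f"food price gap ${price_gap:.2f}, diet gap ${diet_gap:.2f}, "
      f"healthy diet = {affordability:.1%} of income")
```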
Abstract:
This paper examines the dynamic behaviour of relative prices across seven Australian cities by applying panel unit root test procedures with structural breaks to quarterly consumer price index data for 1972 Q1–2011 Q4. We find overwhelming evidence of convergence in city relative prices. Three common structural breaks are endogenously determined at 1985, 1995, and 2007. Further, correcting for two potential biases, namely Nickell bias and time aggregation bias, we obtain half-life estimates of 2.3–3.8 quarters that are much shorter than those reported by previous research. Thus, we conclude that both structural breaks and bias corrections are important to obtain shorter half-life estimates.
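For context, the half-life quoted here is the standard persistence measure for an AR(1) process with coefficient rho, h = ln(0.5)/ln(rho). The small sketch below back-calculates the quarterly persistence implied by the 2.3–3.8 quarter estimates; this is an illustration of the formula, not a figure from the paper.

```python
# Half-life of deviations from relative-price parity under an AR(1) process.
import numpy as np

def half_life(rho):
    """Quarters until a shock decays to half its size: ln(0.5) / ln(rho)."""
    return np.log(0.5) / np.log(rho)

def persistence_for(h):
    """AR(1) coefficient implied by a given half-life."""
    return np.exp(np.log(0.5) / h)

# 2.3-3.8 quarter half-lives imply quarterly persistence of roughly 0.74-0.83.
print(persistence_for(2.3), persistence_for(3.8))
print(half_life(0.74), half_life(0.83))
```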
Abstract:
The purpose of this paper is to document and explain the allocation of takeover purchase price to identifiable intangible assets (IIAs), purchased goodwill, and/or target net tangible assets in an accounting environment unconstrained with respect to IIA accounting policy choice. Using a sample of Australian acquisitions during the unconstrained accounting environment from 1988 to 2004, we find the percentage allocation of purchase price to IIAs averaged 19.09%. The percentage allocation to IIAs is significantly positively related to return on assets and insignificantly related to leverage, contrary to opportunism. Efficiency suggests an explanation: profitable firms acquire and capitalise a higher percentage of IIAs in acquisitions. The target's investment opportunity set is significantly positively related to the percentage allocation to IIAs, consistent with information-signalling. The paper contributes to the accounting policy choice literature by showing how Australian firms made the one-off accounting policy choice regarding the allocation of takeover purchase price (which is often a substantial dollar amount) in an environment where accounting for IIAs was unconstrained.
Abstract:
This paper presents a series of operating schedules for Battery Energy Storage Companies (BESC) to provide peak shaving and spinning reserve services in the electricity markets under increasing wind penetration. As individual market participants, BESC can bid in ancillary services markets run by an Independent System Operator (ISO) and contribute towards frequency and voltage support in the grid. Recent developments in battery technologies and the availability of day-ahead spot market prices make BESC economically feasible. Profit maximization of BESC is achieved by determining the optimum capacity of Energy Storage Systems (ESS) required for meeting spinning reserve requirements as well as peak shaving. Historic spot market prices and frequency deviations from the Australian Energy Market Operator (AEMO) are used for numerical simulations, and the economic benefits of BESC are considered, reflecting various aspects of Australia's National Electricity Market (NEM).
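As a flavour of the scheduling problem, the sketch below solves a toy spot-price arbitrage for a battery operator as a linear program; capacities, efficiency and the price curve are invented, and the paper's schedules also cover spinning reserve and peak shaving, which this omits.

```python
# Sketch: battery arbitrage against day-ahead spot prices as a linear program.
import numpy as np
from scipy.optimize import linprog

H = 24
price = 40 + 60 * np.exp(-0.5 * ((np.arange(H) - 18) / 2.0) ** 2)  # $/MWh peak
cap, p_max, eta, soc0 = 10.0, 2.5, 0.9, 5.0   # MWh, MW, charge eff., start SOC

# Decision vector x = [charge_0..charge_{H-1}, discharge_0..discharge_{H-1}];
# minimising price@charge - price@discharge maximises arbitrage profit.
c = np.concatenate([price, -price])

# SOC after hour t, soc0 + cumsum(eta*charge - discharge), must stay in [0, cap].
L = np.tril(np.ones((H, H)))
A_soc = np.hstack([eta * L, -L])
A_ub = np.vstack([A_soc, -A_soc])
b_ub = np.concatenate([np.full(H, cap - soc0), np.full(H, soc0)])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, p_max)] * (2 * H))
charge, discharge = res.x[:H], res.x[H:]
print("profit: $%.0f" % -res.fun)
```

The optimum charges in the cheap overnight hours and discharges into the evening peak, which is the same price signal that makes peak shaving valuable to the grid.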
Abstract:
Integration of small-scale electricity generators, known as Distributed Generation (DG), into distribution networks has become increasingly popular. This trend, together with the falling price of synchronous-type generators, gives DG a better chance of participating in the voltage regulation process alongside other devices already available in the system. The voltage control issue is a very challenging problem for distribution engineers, since existing control coordination schemes need to be reconsidered to take the DG operation into account. In this paper, we propose a control coordination technique that utilises the DG as a voltage regulator while minimising interaction with other active devices, such as the On-load Tap Changing Transformer (OLTC) and voltage regulators. The technique has been developed based on the concepts of control zones and Line Drop Compensation (LDC), as well as the choice of controller parameters. Simulations carried out on an Australian system show that the technique is suitable and flexible for any system with multiple regulating devices, including DG.
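Since the technique builds on Line Drop Compensation, the following hedged sketch shows the basic LDC decision: estimate the remote load-centre voltage from local measurements and an R/X setting, then move taps only when the estimate leaves a deadband. All settings and the lagging power factor are illustrative assumptions, not the paper's parameters.

```python
# Sketch: Line Drop Compensation (LDC) tap decision, per-unit quantities.
import cmath, math

def ldc_tap_decision(v_local, i_line, r_set, x_set,
                     v_target=1.0, deadband=0.01, pf=0.95):
    """Return -1 (lower tap), 0 (hold) or +1 (raise tap)."""
    phi = math.acos(pf)                           # assume a lagging power factor
    i = i_line * cmath.exp(-1j * phi)             # current phasor
    v_est = v_local - complex(r_set, x_set) * i   # estimated load-centre voltage
    err = abs(v_est) - v_target
    if err > deadband:
        return -1                                 # remote voltage too high
    if err < -deadband:
        return +1                                 # remote voltage too low
    return 0                                      # inside deadband: do nothing

print(ldc_tap_decision(v_local=1.02, i_line=0.8, r_set=0.03, x_set=0.06))  # +1
```

The deadband is what limits interaction between devices: a well-chosen zone and deadband keep the OLTC, line regulators and DG from chasing each other's corrections.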
Abstract:
Agent-based modelling (ABM), like other modelling techniques, is used to answer specific questions about real-world systems that could otherwise be expensive or impractical to study. Its recent gain in popularity can be attributed to some degree to its capacity to use information at a fine level of detail of the system, both geographically and temporally, and to generate information at a higher level, where emerging patterns can be observed. This technique is data-intensive, as explicit data at a fine level of detail is used, and it is computer-intensive, as many interactions between agents, which can learn and have a goal, are required. With the growing availability of data and the increase in computer power, these concerns are however fading. Nonetheless, being able to update or extend the model as more information becomes available can become problematic, because of the tight coupling of the agents and their dependence on the data, especially when modelling very large systems. One large system to which ABM is currently applied is electricity distribution, where thousands of agents representing the network and the consumers' behaviours interact with one another. A framework that aims at answering a range of questions regarding the potential evolution of the grid has been developed and is presented here. It uses agent-based modelling to represent the engineering infrastructure of the distribution network and has been built with flexibility and extensibility in mind. What distinguishes the method presented here from usual ABMs is that this ABM has been developed in a compositional manner. This encompasses not only the software tool, whose core is named MODAM (MODular Agent-based Model), but the model itself. Using such an approach enables the model to be extended as more information becomes available, or modified as the electricity system evolves, leading to an adaptable model. Two well-known modularity principles in the software engineering domain are information hiding and separation of concerns. These principles were used to develop the agent-based model on top of OSGi and Eclipse plugins, which have good support for modularity. Information regarding the model entities was separated into a) assets, which describe the entities' physical characteristics, and b) agents, which describe their behaviour according to their goal and previous learning experiences. This approach diverges from the traditional approach, where both aspects are often conflated. It has many advantages in terms of reusability of one or the other aspect for different purposes, as well as composability when building simulations. For example, the way an asset is used on a network can vary greatly while its physical characteristics stay the same – this is the case for two identical battery systems whose usage will vary depending on the purpose of their installation. While any battery can be described by its physical properties (e.g. capacity, lifetime, and depth of discharge), its behaviour will vary depending on who is using it and what their aim is. The model is populated using data describing both aspects (physical characteristics and behaviour) and can be updated as required depending on what simulation is to be run. For example, data can be used to describe the environment to which the agents respond – e.g. weather for solar panels – or to describe the assets and their relation to one another – e.g. the network assets.
Finally, when running a simulation, MODAM calls on its module manager, which coordinates the different plugins, automates the creation of the assets and agents using factories, and schedules their execution, which can be done sequentially or in parallel for faster execution. Building agent-based models in this way has proven fast when adding new complex behaviours, as well as new types of assets. Simulations have been run to understand the potential impact of changes on the network in terms of assets (e.g. installation of decentralised generators) or behaviours (e.g. response to different management aims). While this platform has been developed within the context of a project focussing on the electricity domain, the core of the software, MODAM, can be extended to other domains, such as transport, which is part of future work with the addition of electric vehicles.
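The asset/agent split described above can be illustrated with a short sketch: the same physical description composes with different behaviours. This is a Python illustration of the concept only; MODAM itself is built on OSGi and Eclipse plugins, and all class and method names here are invented.

```python
# Sketch: physical characteristics (asset) separated from behaviour (agent).
from dataclasses import dataclass

@dataclass
class BatteryAsset:
    capacity_kwh: float          # physical characteristics only
    depth_of_discharge: float
    soc_kwh: float = 0.0

class PeakShavingAgent:
    """One behaviour over a battery: discharge when local demand is high."""
    def __init__(self, asset, demand_threshold_kw):
        self.asset, self.threshold = asset, demand_threshold_kw

    def step(self, demand_kw):
        floor = self.asset.capacity_kwh * (1 - self.asset.depth_of_discharge)
        if demand_kw > self.threshold and self.asset.soc_kwh > floor:
            self.asset.soc_kwh -= min(1.0, self.asset.soc_kwh - floor)

class SolarShiftingAgent:
    """A different behaviour over the same asset type: store midday surplus."""
    def __init__(self, asset):
        self.asset = asset

    def step(self, surplus_kw):
        self.asset.soc_kwh = min(self.asset.capacity_kwh,
                                 self.asset.soc_kwh + surplus_kw)

# The same physical asset description composes with either behaviour.
battery = BatteryAsset(capacity_kwh=13.5, depth_of_discharge=0.9, soc_kwh=6.0)
agent = PeakShavingAgent(battery, demand_threshold_kw=5.0)
agent.step(demand_kw=7.2)
print(battery.soc_kwh)
```

Because behaviour is injected rather than baked into the asset, swapping PeakShavingAgent for SolarShiftingAgent reconfigures a simulation without touching the asset definition, which is the reusability the abstract describes.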
Abstract:
Global awareness of cleaner and renewable energy is transforming the electricity sector at many levels. New technologies are being increasingly integrated into the electricity grid at high, medium and low voltage levels, new taxes on carbon emissions are being introduced, and individuals can now produce electricity, mainly through rooftop photovoltaic (PV) systems. While leading to improvements, these changes also introduce challenges, and a question that often arises is 'how can we manage this constantly evolving grid?' The Queensland Government and Ergon Energy, one of the two Queensland distribution companies, have partnered with Australian and German universities on a project to answer this question in a holistic manner. The project investigates the impact the integration of renewables and other new technologies has on the physical structure of the grid, and how this evolving system can be managed in a sustainable and economical manner. To aid understanding of what the future might bring, a software platform has been developed that integrates two modelling techniques: agent-based modelling (ABM) to capture the characteristics of the different system units accurately and dynamically, and particle swarm optimization (PSO) to find the most economical mix of network extension and integration of distributed generation over long periods of time. Using data from Ergon Energy, two types of networks have been modelled: three-phase networks, usually used in dense networks such as urban areas, and Single Wire Earth Return (SWER) networks, widely used in rural Queensland. Simulations can be performed on these networks to identify the required upgrades, following a three-step process: a) assess what is already in place and how it performs under current and future loads, b) determine what can be done to manage it and plan the future grid, and c) evaluate how these upgrades/new installations will perform over time. The number of small-scale distributed generators, e.g. PV and battery, is now sufficient (and expected to increase) to impact the operation of the grid, which in turn needs to be considered by the distribution network manager when planning upgrades and/or installations to stay within regulatory limits. Different scenarios can be simulated, with different levels of distributed generation, in place as well as expected, so that a large number of options can be assessed (Step a). Once the location, sizing and timing of asset upgrades and/or installations are found using optimisation techniques (Step b), it is possible to assess the adequacy of their daily performance using agent-based modelling (Step c). One distinguishing feature of this software is that it is possible to analyse a whole area at once, while still having a tailored solution for each of the sub-areas. To illustrate this, considering the impact battery and PV installations can have on the two types of networks mentioned above, three design conditions can be identified (amongst others):
- Urban conditions:
  - Feeders that have a low take-up of solar generators may benefit from adding solar panels.
  - Feeders that need voltage support at specific times may be assisted by installing batteries.
- Rural conditions (SWER network):
  - Feeders that need voltage support as well as peak lopping may benefit from both battery and solar panel installations.
This small example demonstrates that no single solution can be applied across all three areas, and there is a need to be selective in which one is applied to each branch of the network.
This is currently the function of the engineer, who can define various scenarios against a configuration, test them, and iterate towards an appropriate solution. Future work will focus on increasing the level of automation in identifying areas where particular solutions are applicable.
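To sketch Step b, below is a generic PSO loop for sizing upgrades across feeders. The encoding (one continuous upgrade variable per feeder), the toy cost function and all parameters are invented stand-ins for the project's actual formulation.

```python
# Sketch: particle swarm optimisation over per-feeder upgrade sizes.
import numpy as np

def pso(cost, dim, n_particles=30, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(0, 1, (n_particles, dim))     # candidate upgrade sizes
    v = np.zeros_like(x)
    pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[pbest_cost.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5                     # inertia, personal/global pull
    for _ in range(iters):
        r1, r2 = rng.uniform(size=x.shape), rng.uniform(size=x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, 0, 1)
        cst = np.array([cost(p) for p in x])
        better = cst < pbest_cost
        pbest[better], pbest_cost[better] = x[better], cst[better]
        gbest = pbest[pbest_cost.argmin()].copy()
    return gbest, pbest_cost.min()

# Toy cost: capital cost of upgrades plus a penalty for unmet peak demand.
peak_deficit = np.array([0.2, 0.6, 0.1, 0.4])      # per feeder, illustrative
cost = lambda u: 10 * u.sum() + 100 * np.maximum(peak_deficit - u, 0).sum()
print(pso(cost, dim=4))
```

In the platform described above, the cost function would instead call the agent-based network simulation, so each particle evaluation corresponds to one "what would this upgrade plan do over time" run.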
Abstract:
Electricity network investment and asset management require accurate estimation of future demand in energy consumption within specified service areas. For this purpose, simple models are typically developed to predict future trends in electricity consumption using various methods and assumptions. This paper presents a statistical model to predict electricity consumption in the residential sector at the Census Collection District (CCD) level over the state of New South Wales, Australia, based on spatial building and household characteristics. Residential household demographic and building data from the Australian Bureau of Statistics (ABS) and actual electricity consumption data from electricity companies are merged for 74% of the 12,000 CCDs in the state. Eighty percent of the merged dataset is randomly set aside to establish the model using regression analysis, and the remaining 20% is used to independently test the accuracy of model prediction against actual consumption. In 90% of the cases, the predicted consumption is shown to be within 5 kWh per dwelling per day of actual values, with an overall state accuracy of -1.15%. Given a future scenario with a shift in climate zone and a growth in population, the model is used to identify the geographical or service areas that are most likely to have increased electricity consumption. Such geographical representation can be of great benefit when assessing alternatives to the centralised generation of energy; having such a model gives a quantifiable method for selecting the most appropriate system when a review or upgrade of the network infrastructure is required.
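The described workflow (merge, 80/20 split, regression, out-of-sample check) can be sketched in a few lines; the features, coefficients and accuracy thresholds below are invented placeholders standing in for the ABS variables and the paper's estimates.

```python
# Sketch: fit dwelling-level consumption on district characteristics, 80/20.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 8880                        # ~74% of 12,000 CCDs, as merged in the paper
X = np.column_stack([
    rng.normal(2.6, 0.5, n),    # persons per household (illustrative)
    rng.normal(3.1, 0.8, n),    # bedrooms per dwelling (illustrative)
    rng.integers(1, 5, n),      # climate-zone code (illustrative)
])
y = 4 + 3.2 * X[:, 0] + 1.1 * X[:, 1] + 0.8 * X[:, 2] + rng.normal(0, 2, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = LinearRegression().fit(X_tr, y_tr)
err = model.predict(X_te) - y_te            # kWh per dwelling per day
print("within 5 kWh/day:", np.mean(np.abs(err) < 5))
print("overall bias: %.2f%%" % (100 * err.sum() / y_te.sum()))
```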
Abstract:
The international tax system, designed a century ago, has not kept pace with the modern multinational entity, rendering it ineffective in taxing many modern businesses according to economic activity. One of those modern multinational entities is the multinational financial institution (MNFI). The recent global financial crisis provides a particularly relevant and significant example of the failure of the current system on a global scale. The modern MNFI is increasingly undertaking more globalised and complex trading operations. A primary reason for the globalisation of financial institutions is that they typically 'follow the customer' into jurisdictions where international capital and international investors are required. The International Monetary Fund (IMF) recently reported that from 1995-2009, foreign bank presence in developing countries grew by 122 per cent. The same study indicates that foreign banks have a 20 per cent market share in OECD countries and 50 per cent in emerging markets and developing countries. Hence, most significant is the fact that MNFIs are increasingly undertaking an intermediary role in developing economies, where they are financing core business activities such as mining and tourism. IMF analysis also suggests that in the future, foreign bank expansion will be greatest in emerging economies. The difficulties for developing countries in applying current international tax rules, especially the current traditional transfer pricing regime, are particularly acute in relation to MNFIs, which are the biggest users of tax havens and offshore finance. This paper investigates whether a unitary taxation approach which reflects economic reality would more easily and effectively ensure that the profits of MNFIs are taxed in the jurisdictions which give rise to those profits. It has previously been argued that the uniqueness of MNFIs results in a failure of the current system to accurately allocate profits, and that unitary tax as an alternative could provide a sounder allocation model for international tax purposes. This paper goes a step further, and examines the practicalities of the implementation of unitary taxation for MNFIs in terms of the key components of such a regime, along with their implications. This paper adopts a two-step approach in considering the implications of unitary taxation as a means of improved corporate tax coordination, which requires international acceptance and agreement. First, the definitional issues of the unitary MNFI are examined, and second, an appropriate allocation formula for this sector is investigated. To achieve this, the paper asks first, how the financial sector should be defined for the purposes of unitary taxation and what should constitute a unitary business for that sector, and second, what is the 'best practice' model of an allocation formula for the purposes of the apportionment of the profits of the unitary business of a financial institution.
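As a concrete illustration of what an allocation formula does, the sketch below apportions group profit by equally weighted payroll, property and sales shares, a classic three-factor design sometimes called the "Massachusetts formula". This is a generic textbook example, not the 'best practice' formula the paper derives for financial institutions, and all figures are invented.

```python
# Sketch: formulary apportionment of a unitary group's profit.
def apportion(profit, factors, weights=(1/3, 1/3, 1/3)):
    """factors: {jurisdiction: (payroll_share, property_share, sales_share)},
    each factor's shares summing to 1 across jurisdictions."""
    return {j: round(profit * sum(w * s for w, s in zip(weights, shares)), 1)
            for j, shares in factors.items()}

group_profit = 900.0    # illustrative, in millions
factors = {
    "home":   (0.50, 0.60, 0.30),
    "host_A": (0.30, 0.25, 0.45),
    "host_B": (0.20, 0.15, 0.25),
}
print(apportion(group_profit, factors))
```

The debate the paper engages with is precisely which factors (and weights) best proxy where a financial institution's economic activity occurs, since payroll, property and sales capture a bank's intermediary role poorly.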
Abstract:
Objective: This article explores patterns of terrorist activity over the period from 2000 through 2010 across three target countries: Indonesia, the Philippines and Thailand. Methods: We use self-exciting point process models to create interpretable and replicable metrics for three key terrorism concepts: risk, resilience and volatility, as defined in the context of terrorist activity. Results: Analysis of the data shows significant and important differences in the risk, volatility and resilience metrics over time across the three countries. For the three countries analysed, we show that risk varied on a scale from 0.005 to 1.61 "expected terrorist attacks per day", volatility ranged from 0.820 to 0.994 "additional attacks caused by each attack", and resilience, as measured by the number of days until risk subsides to a pre-attack level, ranged from 19 to 39 days. We find that of the three countries, Indonesia had the lowest average risk and volatility, and the highest level of resilience, indicative of the relatively sporadic nature of terrorist activity in Indonesia. The high terrorism risk and low resilience in the Philippines were a function of a more intense, less clustered pattern of terrorism than was evident in Indonesia. Conclusions: Mathematical models hold great promise for creating replicable, reliable and interpretable metrics for key terrorism concepts such as risk, resilience and volatility.
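For readers unfamiliar with self-exciting point processes, the sketch below evaluates a Hawkes intensity with an exponential kernel, the standard form behind metrics like these: the branching ratio alpha plays the role of "volatility" (additional attacks triggered per attack), and the decay rate governs how quickly risk relaxes back to baseline, i.e. "resilience". Parameter values are illustrative, not the article's fitted estimates.

```python
# Sketch: Hawkes (self-exciting) intensity with an exponential kernel.
import numpy as np

def hawkes_intensity(t, event_times, mu, alpha, beta):
    """lambda(t) = mu + alpha * beta * sum_i exp(-beta (t - t_i)) for t_i < t;
    the kernel integrates to alpha, the expected offspring per event."""
    past = event_times[event_times < t]
    return mu + alpha * beta * np.exp(-beta * (t - past)).sum()

mu, alpha, beta = 0.05, 0.8, 0.1   # baseline attacks/day, branching, decay/day
events = np.array([10.0, 12.0, 13.5, 40.0])

# Intensity just after a cluster of attacks is well above the baseline mu;
# the time it takes to fall back near mu (a few 1/beta days here) is the
# resilience notion the article quantifies.
print(hawkes_intensity(14.0, events, mu, alpha, beta))
```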