58 results for Election forecasting
Abstract:
The operation of the doctrine of election, as it applies in a conveyancing context, was recently considered by the Queensland Court of Appeal (McMurdo P and White and Fryberg JJ) in Barooga Projects (Investments) Pty Ltd v Duncan [2004] QCA 149.
Abstract:
We consider a continuous-time model for election timing in a majoritarian parliamentary system where the government maintains a constitutional right to call an early election. Our model is based on the two-party-preferred data that measure the popularity of the government and the opposition over time. We describe the poll process by a Stochastic Differential Equation (SDE) and use a martingale approach to derive a Partial Differential Equation (PDE) for the government's expected remaining life in office. A comparison is made between a three-year and a four-year maximum term, and we also provide the exercise boundary for calling an election. The impacts of changes in the SDE parameters, the probability of winning the election, and the maximum term on the exercise boundary are discussed and analysed. An application of our model to the Australian federal election for the House of Representatives is also given.
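The abstract does not specify the exact SDE, decision rule, or win probability, so the following is only a minimal sketch: it assumes the two-party-preferred poll lead follows arithmetic Brownian motion, imposes a fixed hypothetical exercise threshold, uses a naive win rule, and estimates the mean time to an election call by Monte Carlo rather than by solving the PDE. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dynamics: the poll lead X_t is modelled as dX = mu dt + sigma dW.
mu, sigma = 0.0, 0.05       # drift and volatility of the two-party-preferred lead
dt, max_term = 1 / 52, 3.0  # weekly steps; three-year maximum term
threshold = 0.04            # hypothetical exercise boundary: call at a 4% lead

def simulate_term(x0: float) -> tuple[float, bool]:
    """Simulate one term; return (time the election is called, government wins?)."""
    x, t = x0, 0.0
    while t < max_term:
        if x >= threshold:  # early election called at the boundary
            break
        x += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t, x > 0.0       # crude win rule: positive lead on election day

results = [simulate_term(x0=0.01) for _ in range(10_000)]
times, wins = zip(*results)
print(f"mean time to election: {np.mean(times):.2f} years; "
      f"win probability: {np.mean(wins):.2%}")
```

Sweeping `max_term` over 3.0 and 4.0, or varying `threshold`, reproduces in miniature the comparisons the abstract describes.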
Abstract:
This article analyses the 2010 federal election and the impact the internet and social media had on electoral law, and what this may mean for electoral law in the future. Four electoral law issues arising out of the 2010 election as a result of the internet are considered: online enrolment; the regulation of online advertising and comment; fundraising; and the role of lobby groups, especially in crowdsourcing court challenges. Finally, the article offers some suggestions as to how the parliament and the courts should respond to these challenges.
Abstract:
Purpose – The purpose of this paper is to jointly assess the impact of regulatory reform for corporate fundraising in Australia (CLERP Act 1999) and the relaxation of ASX admission rules in 1999 on the accuracy of management earnings forecasts in initial public offering (IPO) prospectuses. The relaxation of ASX listing rules permitted a new category of new economy firms (commitments test entities (CTEs)) to list without a prior history of profitability, while the CLERP Act (introduced in 2000) was accompanied by tighter disclosure obligations and stronger enforcement action by the corporate regulator (ASIC). Design/methodology/approach – All IPO earnings forecasts in prospectuses lodged between 1998 and 2003 are examined to assess the pre- and post-CLERP Act impact. Based on active ASIC enforcement action in the post-reform period, IPO firms are hypothesised to provide more accurate forecasts, particularly CTE firms, which are less likely to have a reasonable basis for forecasting. Research models are developed to empirically test the impact of the reforms on CTE and non-CTE IPO firms. Findings – The new regulatory environment has had a positive impact on management forecasting behaviour. In the post-CLERP Act period, the accuracy of prospectus forecasts and their revisions significantly improved and, as expected, the results are primarily driven by CTE firms. However, the majority of prospectus forecasts continue to be materially inaccurate. Originality/value – The results highlight the need to control for both the changing nature of listed firms and the level of enforcement action when examining responses to regulatory changes to corporate fundraising activities.
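The paper's research models are not spelled out in the abstract. As a hedged illustration of the general approach, the sketch below computes a scaled absolute forecast error and regresses it on post-reform and CTE indicators; every column name and all data are invented for the example, not taken from the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Hypothetical IPO-level data; all names and values are illustrative only.
n = 200
df = pd.DataFrame({
    "actual":     rng.normal(10, 3, n),
    "forecast":   rng.normal(10, 3, n),
    "post_clerp": rng.integers(0, 2, n),  # prospectus lodged after the CLERP Act?
    "cte":        rng.integers(0, 2, n),  # commitments test entity?
})

# Absolute forecast error scaled by the forecast: one common accuracy measure.
df["afe"] = (df["actual"] - df["forecast"]).abs() / df["forecast"].abs()

# Did accuracy improve post-reform, and is the effect concentrated in CTE firms?
model = smf.ols("afe ~ post_clerp * cte", data=df).fit()
print(model.summary())
```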
Abstract:
While the 2007 Australian federal election was notable for the use of social media by the Australian Labor Party in campaigning, the 2010 election took place in a media landscape in which social media, especially Twitter, had become much more embedded in both political journalism and independent political commentary. This article draws on the computer-aided analysis of election-related Twitter messages, collected under the #ausvotes hashtag, to describe the key patterns of activity and thematic foci of the election's coverage in this particular social media site. It introduces novel metrics for analysing public communication via Twitter, and describes the related methods. What emerges from this analysis is the role of the #ausvotes hashtag as a means of gathering an ad hoc 'issue public', a finding which is likely to be replicated for other hashtag communities.
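The article's novel metrics are not detailed in the abstract. As a minimal illustration of the kind of activity metrics computable from a hashtag collection, the sketch below counts tweets per user and the shares of retweets and @-replies; the records and field names are hypothetical.

```python
from collections import Counter

# Hypothetical records from a #ausvotes collection; field names are illustrative.
tweets = [
    {"user": "alice", "text": "Leaders debate tonight #ausvotes"},
    {"user": "bob",   "text": "RT @alice: Leaders debate tonight #ausvotes"},
    {"user": "carol", "text": "@bob polling in the marginal seats #ausvotes"},
]

per_user = Counter(t["user"] for t in tweets)
retweets = sum(t["text"].startswith("RT @") for t in tweets)
replies  = sum(t["text"].startswith("@") for t in tweets)

print("tweets per user:", dict(per_user))
print(f"retweet share: {retweets / len(tweets):.0%}; "
      f"@-reply share: {replies / len(tweets):.0%}")
```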
Abstract:
This paper draws on a larger study of the uses of Australian user-created content and online social networks to examine the relationships between professional journalists and highly engaged Australian users of political media within the wider media ecology, with a particular focus on Twitter. It analyses topic-based conversation networks using the #ausvotes hashtag on Twitter around the 2010 federal election to explore the key themes and issues addressed by this Twitter community during the campaign, and finds that Twitter users were largely commenting on the performance of mainstream media and politicians rather than engaging in direct political discussion. The often critical attitude of Twitter users towards the political establishment mirrors the approach taken by news and political bloggers to political actors nearly a decade earlier, but the increasing adoption of Twitter as a communication tool by politicians, journalists, and everyday users alike makes a repetition of the polarisation experienced at that time appear unlikely.
Abstract:
Forecasts generated by time series models traditionally place greater weight on more recent observations. This paper develops an alternative semi-parametric method for forecasting that does not rely on this convention and applies it to the problem of forecasting asset return volatility. In this approach, a forecast is a weighted average of historical volatility, with the greatest weight given to periods that exhibit similar market conditions to the time at which the forecast is being formed. Weighting is determined by comparing short-term trends in volatility across time (as a measure of market conditions) by means of a multivariate kernel scheme. It is found that the semi-parametric method produces forecasts that are significantly more accurate than a number of competing approaches at both short and long forecast horizons.
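A minimal sketch of the weighting idea described, assuming a Gaussian product kernel over the `window` most recent observations and an arbitrary bandwidth; the paper's exact multivariate kernel scheme is not given in the abstract, so these choices are illustrative.

```python
import numpy as np

def semiparametric_vol_forecast(vol: np.ndarray, window: int = 5,
                                bandwidth: float = 0.05) -> float:
    """Forecast next-period volatility as a kernel-weighted average of history.

    Each past observation is weighted by how closely the short-term trend that
    preceded it (its `window` prior values) resembles the trend prevailing now.
    The Gaussian kernel and bandwidth are assumptions, not the paper's scheme.
    """
    current_trend = vol[-window:]
    weights, values = [], []
    for t in range(window, len(vol)):
        past_trend = vol[t - window:t]
        dist2 = np.sum((past_trend - current_trend) ** 2)
        weights.append(np.exp(-dist2 / (2.0 * bandwidth ** 2)))
        values.append(vol[t])  # the volatility realised after that trend
    weights = np.asarray(weights)
    return float(np.dot(weights, values) / weights.sum())

# Toy example on a simulated volatility series.
rng = np.random.default_rng(0)
vol_series = np.abs(rng.normal(0.2, 0.05, 500))
print(f"next-period forecast: {semiparametric_vol_forecast(vol_series):.4f}")
```

The contrast with a conventional time-series model is that weights depend on similarity of market conditions, not on recency.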
Abstract:
Forecasts of volatility and correlation are important inputs into many practical financial problems. Broadly speaking, there are two ways of generating forecasts of these variables. Firstly, time-series models apply a statistical weighting scheme to historical measurements of the variable of interest. The alternative methodology extracts forecasts from the market-traded value of option contracts. An efficient options market should be able to produce superior forecasts, as it draws on a larger information set comprising not only historical information but also the equilibrium expectations of options market participants. While much research has been conducted into the relative merits of these approaches, this thesis extends the literature along several lines through three empirical studies. Firstly, it is demonstrated that there are statistically significant benefits to adjusting implied volatility for the volatility risk premium for the purposes of univariate volatility forecasting. Secondly, high-frequency option-implied measures are shown to produce superior forecasts of the stochastic component of intraday volatility, and these in turn lead to superior forecasts of total intraday volatility. Finally, realised and option-implied measures of equicorrelation are shown to dominate measures based on daily returns.
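As a hedged illustration of the first finding only, the snippet below corrects implied volatility for an estimate of the volatility risk premium, taken here as the simple historical mean of the gap between implied and subsequently realised volatility; the thesis's actual adjustment may differ, and all data are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical aligned daily series: option-implied volatility and the
# volatility subsequently realised over the option's horizon.
realised = np.abs(rng.normal(0.18, 0.04, 250))
implied = realised + 0.02 + rng.normal(0.0, 0.01, 250)  # IV usually sits above RV

# Estimate the volatility risk premium as the average implied/realised gap.
premium = np.mean(implied - realised)

# Premium-adjusted forecast from today's implied volatility.
iv_today = implied[-1]
print(f"raw IV: {iv_today:.4f}; premium: {premium:.4f}; "
      f"adjusted forecast: {iv_today - premium:.4f}")
```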
Abstract:
Our aim is to develop a set of leading performance indicators to enable managers of large projects to forecast, during project execution, how various stakeholders will perceive success months or even years into the operation of the output. Large projects have many stakeholders who hold different objectives for the project, its output, and the business objectives it will deliver. The output of a large project may have a lifetime that lasts for years, or even decades, and ultimate impacts that go beyond its immediate operation. How different stakeholders perceive success can change with time, so the project manager needs leading performance indicators that go beyond the traditional triple constraint to forecast how key stakeholders will perceive success months or even years later. In this article, we develop a model for project success that identifies how project stakeholders might perceive success in the months and years following a project. We identify success or failure factors that will facilitate or militate against achievement of those success criteria, and a set of potential leading performance indicators that forecast how stakeholders will perceive success during the life of the project's output. We conducted a scale development study with 152 managers of large projects and identified two project success factor scales and seven stakeholder satisfaction scales that project managers can use to predict stakeholder satisfaction on projects, and which may therefore serve as a basis for project control.
Abstract:
Client owners usually need an estimate or forecast of their likely building costs in advance of detailed design in order to confirm the financial feasibility of their projects. Because of their timing in the project life cycle, these early-stage forecasts are characterized by the minimal amount of information available concerning the new (target) project, to the point that often only its size and type are known. One approach is to use the mean contract sum of a sample, or base group, of previous projects of a similar type and size to the project for which the estimate is needed. Bernoulli's law of large numbers implies that this base group should be as large as possible. However, increasing the size of the base group inevitably involves including projects that are less and less similar to the target project. Deciding on the optimal number of base group projects is known as the homogeneity, or pooling, problem. A method of solving the homogeneity problem is described, involving the use of closed-form equations to compare three different sampling arrangements of previous projects for their simulated forecasting ability via cross-validation, in which a series of targets is extracted, with replacement, from the groups and compared with the mean value of the projects in the base groups. The procedure is then demonstrated with 450 Hong Kong projects (of various types: residential, commercial centre, car parking, social community centre, school, office, hotel, industrial, university and hospital) clustered into base groups according to their type and size.
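A Monte Carlo stand-in for the cross-validation described, assuming similarity is measured by floor area alone and all data are simulated; the paper instead uses closed-form equations over real Hong Kong projects clustered by type and size.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy portfolio of past projects: gross floor area and contract sum, with
# cost loosely proportional to size (both columns are illustrative).
n = 450
area = rng.uniform(1_000, 50_000, n)
cost = 2_000 * area * rng.lognormal(0, 0.25, n)

def base_group_error(group_size: int, n_trials: int = 2_000) -> float:
    """MAE of forecasting a held-out project's cost by the mean cost of the
    `group_size` past projects closest to it in size."""
    errors = []
    for _ in range(n_trials):
        idx = rng.integers(n)  # draw a target, with replacement
        # Nearest projects by size, excluding the target itself (distance 0).
        similar = np.argsort(np.abs(area - area[idx]))[1:group_size + 1]
        errors.append(abs(cost[idx] - cost[similar].mean()))
    return float(np.mean(errors))

for size in (5, 25, 100, 449):
    print(f"base group of {size:>3}: MAE = {base_group_error(size):,.0f}")
```

The sweep over group sizes exhibits the trade-off at the heart of the homogeneity problem: small groups give noisy means, while large groups pull in increasingly dissimilar projects.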
Abstract:
The internet has become important in political communication in Australia. Using Habermas' ideal types, it is argued that political blogs can be viewed as public spheres that might provide scope for the expansion of deliberative democratic discussion. This hypothesis is explored through analysis of the group political blog Pineapple Party Time. It is evident that the bloggers and those who commented on their posts were highly knowledgeable about, and interested in, politics. From an examination of these posts and the comments on them, Pineapple Party Time did act as a public sphere to some degree, and did provide for the deliberative discussion essential for a democracy, but it was largely restricted to Crikey readers. For a deliberative public sphere and democratic discussion to function to any real extent, the public sphere must be open to all citizens, who need the access and knowledge required to engage in deliberative discussion.
Abstract:
All elections are unique, but the Australian federal election of 2010 was unusual for many reasons. It came in the wake of the unprecedented ousting of the Prime Minister who had led the Australian Labor Party to a landslide victory, after eleven years in opposition, at the previous election in 2007. In a move that to many would have been unthinkable, Kevin Rudd's increasing unpopularity within his own parliamentary party finally took its toll, and in late June he was replaced by his deputy, Julia Gillard. Thus the second unusual feature of the election was that it was contested by Australia's first female prime minister. The third unusual feature was that the election almost saw a first-term government, with a comfortable majority, defeated. Instead it resulted in a hung parliament, for the first time since 1940, and Labor scraped back into power as a minority government, supported by three independents and the first member of the Australian Greens ever to be elected to the House of Representatives. The Coalition Liberal and National opposition parties themselves had a leader of only eight months' standing, Tony Abbott, whose ascension to the position had surprised more than a few. This was the context for an investigation of voting behaviour in the 2010 election....
Abstract:
Leadership change formed the backdrop to the 2010 Australian federal election, with the replacement of Kevin Rudd as prime minister by Julia Gillard, the country’s first female prime minister. This article uses the 2010 Australian Election Study, a post-election survey of voters, to examine patterns of voter defection between the 2007 and 2010 elections. The results show that the predominant influence on defection was how voters rated the leaders. Julia Gillard was particularly popular among female voters and her overall impact on the vote was slightly greater than that of Tony Abbott. Policy issues were second in importance after leadership, particularly for those moving from the Coalition to Labor, who were concerned about health and unemployment. Labor defectors to the Greens particularly disliked Labor’s education policies. Overall, the results point to the enduring importance of leaders as the predominant influence on how voters cast their ballot.