Abstract:
This thesis consists of the novel Brolga and an exegesis examining the ways in which the ideas of katabasis and deterritorialisation inform an understanding of descent narratives in contemporary Australian outback fiction. While writing the creative piece, I observed that Joseph Campbell’s Hero’s Journey was an imprecise model for my manuscript, and indeed for many of the contemporary novels I had read that are set in similar outback settings. On analysis, a better fit lies in the idea of a heroic journey from which there is no clear return from the underworld. This narrative form is defined in this thesis as a katabatic narrative. To unpack this narrative trope, the inverse of territoriality, deterritorialisation, is used as a lens to examine the complex thematic and symbolic resonances of the outback in both Brolga and analogous works of contemporary outback fiction.
Abstract:
The price formation of financial assets is a complex process. It extends beyond the standard economic paradigm of supply and demand to an understanding of the dynamic behavior of price variability, the price impact of information, and the implications of market participants' trading behavior for prices. In this thesis, I study aggregate market and individual asset volatility, liquidity dimensions, and causes of mispricing for US equities over a recent sample period. How are volatility forecasts modeled, what determines intradaily jumps and changes in intradaily volatility, and what drives the premium of traded equity indexes? Are they induced, for example, by the information content of lagged volatility and return parameters, or by macroeconomic news and changes in liquidity and volatility? Besides satisfying our intellectual curiosity, answers to these questions are of direct importance to investors developing trading strategies, policy makers evaluating macroeconomic policies, and arbitrageurs exploiting mispricing in exchange-traded funds. Results show that the leverage effect and lagged absolute returns improve forecasts of the continuous components of daily realized volatility as well as jumps. Implied volatility does not subsume the information content of lagged returns in forecasting realized volatility and its components. The reported results are linked to the heterogeneous market hypothesis and demonstrate the validity of extending the hypothesis to returns. Depth shocks, signed order flow, the number of trades, and resiliency are the most important determinants of intradaily volatility. In contrast, spread shocks and resiliency are predictive of signed intradaily jumps. There are fewer macroeconomic news announcement surprises that cause extreme price movements or jumps than there are surprises that elevate intradaily volatility. Finally, the premium of exchange-traded funds is significantly associated with momentum in net asset value and a number of liquidity parameters, including the spread, traded volume, and illiquidity. The mispricing of industry exchange-traded funds suggests that limits to arbitrage are driven by potential illiquidity.
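The abstract does not spell out the forecasting specification, but the reference to the heterogeneous market hypothesis, the leverage effect and lagged absolute returns is suggestive of a HAR-style realized-volatility regression. The sketch below is a minimal illustration of that family of models only, in Python, with synthetic data and hypothetical names (har_features, rv, ret); it is not the thesis's actual model.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def har_features(rv: pd.Series, ret: pd.Series) -> pd.DataFrame:
    """Build HAR-style regressors: daily, weekly and monthly realized-variance
    averages, plus a leverage term (negative returns only) and lagged |return|."""
    X = pd.DataFrame({
        "rv_d": rv.shift(1),                      # yesterday's realized variance
        "rv_w": rv.shift(1).rolling(5).mean(),    # past-week average
        "rv_m": rv.shift(1).rolling(22).mean(),   # past-month average
        "neg_ret": ret.shift(1).clip(upper=0.0),  # leverage-effect proxy
        "abs_ret": ret.shift(1).abs(),            # lagged absolute return
    })
    return sm.add_constant(X)

# Illustrative use with synthetic data; real inputs would be intraday-based
# realized variance and daily returns.
rng = np.random.default_rng(0)
ret = pd.Series(rng.normal(0.0, 0.01, 1000))
rv = (ret ** 2).rolling(5).mean().bfill()

X = har_features(rv, ret)
fit = sm.OLS(rv, X, missing="drop").fit()
print(fit.params)  # next-day forecast: fit.predict(X.iloc[[-1]])
```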
Abstract:
In the Social Sciences (Organization Studies, Economics, Management Science, Strategy, International Relations, Political Science…) the quest to address the question “what is a good practitioner?” has been around for centuries, with the underlying assumption that good practitioners should lead organizations to higher levels of performance. Hence, we should add, to ask “what is a good ‘captain’?” is not a new question (e.g. Tsoukas & Cummings, 1997, p. 670; Söderlund, 2004, p. 190). This interrogation leads us to consider problems such as the relations between the dichotomies of Theory and Practice, the rigor and relevance of research, and ways of knowing and forms of knowledge. On the one hand we face the “Enlightenment” assumptions underlying modern positivist social science, grounded in the “unity-of-science dream of transforming and reducing all kinds of knowledge to one basic form and level” and in cause-effect relationships (Eikeland, 2012, p. 20), and on the other, the postmodern interpretivist proposal and its “tendency to make all kinds of knowing equivalent” (Eikeland, 2012, p. 20). In the project management space, this questioning aims to address one of the fundamental problems in the field: projects still do not deliver their expected benefits and promises, and therefore the socio-economic good (Hodgson & Cicmil, 2007; Bredillet, 2010; Lalonde et al., 2012). The Cartesian tradition supporting project research and practice for the last 60 years (Bredillet, 2010, p. 4) has led to the lack of relevance to practice of the current conceptual base of project management, despite the sum of research, the development of standards, best and good practices, and the related development of project management bodies of knowledge (Packendorff, 1995, p. 319–323; Cicmil & Hodgson, 2006, p. 2–6; Hodgson & Cicmil, 2007, p. 436–7; Winter et al., 2006, p. 638). Referring to both Hodgson (2002) and Giddens (1993), we could say that those who expect a “social-scientific Newton” to revolutionize this young field “are not only waiting for a train that will not arrive, but are in the wrong station altogether” (Hodgson, 2002, p. 809; Giddens, 1993, p. 18). Meanwhile, in the postmodern stream, mainly rooted in the “practice turn” (e.g. Hällgren & Lindahl, 2012), the shift from methodological individualism to social viscosity and the advocated pluralism reinforce the very “functional stupidity” (Alvesson & Spicer, 2012, p. 1194) that this postmodern stream aims to overcome. We suggest here that addressing the question “what is a good PM?” requires a philosophy of practice perspective to complement the “usual” philosophy of science perspective. The questioning of the modern Cartesian tradition mirrors a similar one made within social science (Say, 1964; Koontz, 1961, 1980; Menger, 1985; Warry, 1992; Rothbard, 1997a; Tsoukas & Cummings, 1997; Flyvbjerg, 2001; Boisot & McKelvey, 2010), calling for new thinking. In order to get outside the rationalist ‘box’, Toulmin (1990, p. 11), along with Tsoukas & Cummings (1997, p. 655), suggests a possible path, summarizing the thoughts of many authors: “It can cling to the discredited research program of the purely theoretical (i.e. “modern”) philosophy, which will end up by driving it out of business; it can look for new and less exclusively theoretical ways of working, and develop the methods needed for a more practical (“post-modern”) agenda; or it can return to its pre-17th century traditions, and try to recover the lost (“pre-modern”) topics that were side-tracked by Descartes, but can be usefully taken up for the future” (Toulmin, 1990, p. 11). Thus, paradoxically and interestingly, in their quest for the so-called post-modernism, many authors build on “pre-modern” philosophies such as the Aristotelian one (e.g. MacIntyre, 1985, 2007; Tsoukas & Cummings, 1997; Flyvbjerg, 2001; Blomquist et al., 2010; Lalonde et al., 2012). This is perhaps because the post-modern stream emphasizes a dialogic process restricted to reliance on voice and textual representation; it thereby limits the meaning of communicative praxis and weakens the practice, because it turns attention away from more fundamental issues associated with problem-definition and knowledge-for-use in action (Tedlock, 1983, p. 332–4; Schrag, 1986, p. 30, 46–7; Warry, 1992, p. 157). Eikeland suggests that the Aristotelian “gnoseology allows for reconsidering and reintegrating ways of knowing: traditional, practical, tacit, emotional, experiential, intuitive, etc., marginalised and considered insufficient by modernist [and post-modernist] thinking” (Eikeland, 2012, p. 20–21). In contrast with modernist one-dimensional thinking and with relativist, pluralistic post-modernism, we suggest, in a turn to an Aristotelian pre-modern lens, re-conceptualising (“re” involving here a “re”-turn to pre-modern thinking) the “do”, and shifting the perspective from what a good PM is (philosophy of science lens) to what a good PM does (philosophy of practice lens) (Aristotle, 1926a). As Tsoukas & Cummings put it: “In the Aristotelian tradition to call something good is to make a factual statement. To ask, for example, ‘what is a good captain?’ is not to come up with a list of attributes that good captains share (as modern contingency theorists would have it), but to point out the things that those who are recognized as good captains do.” (Tsoukas & Cummings, 1997, p. 670) Thus, this conversation offers a dialogue and deliberation about a central question: What does a good project manager do? The conversation is organized around a critique of the underlying assumptions supporting the modern, post-modern and pre-modern relations to ways of knowing, forms of knowledge and “practice”.
Abstract:
Over the last decade, the majority of existing search techniques have been either keyword-based or category-based, resulting in unsatisfactory effectiveness. Meanwhile, studies have illustrated that more than 80% of users preferred personalized search results. As a result, many studies have devoted a great deal of effort (in work referred to as collaborative filtering) to investigating personalized notions for enhancing retrieval performance. One of the fundamental yet most challenging steps is to capture precise user information needs. Most Web users are inexperienced or lack the capability to express their needs properly, whereas existing retrieval systems are highly sensitive to vocabulary. Researchers have increasingly proposed the utilization of ontology-based techniques to improve current mining approaches. The related techniques are not only able to refine search intentions within specific generic domains, but also to access new knowledge by tracking semantic relations. In recent years, some researchers have attempted to build ontological user profiles according to discovered user background knowledge. This knowledge is drawn from both global and local analyses, which aim to produce tailored ontologies from a group of concepts. However, a key problem that has not been addressed is how to accurately match diverse local information to universal global knowledge. This research conducts a theoretical study on the use of personalized ontologies to enhance text mining performance. The objective is to understand user information needs through a "bag-of-concepts" rather than "words". The concepts are gathered from a general world knowledge base, the Library of Congress Subject Headings. To return desirable search results, a novel ontology-based mining approach is introduced to discover accurate search intentions and learn personalized ontologies as user profiles. The approach can not only pinpoint users' individual intentions within a rough hierarchical structure, but can also interpret their needs through a set of acknowledged concepts. Alongside the global and local analyses, a further concept matching approach is developed to address the mismatch between local information and world knowledge. Relevance features produced by the Relevance Feature Discovery model are used as representatives of local information. These features have been shown to be the best alternative to user queries for avoiding ambiguity, and they consistently outperform the features extracted by other filtering models. The two proposed approaches are both evaluated in a scientific evaluation using the standard Reuters Corpus Volume 1 test collection. A comprehensive comparison is made with a number of state-of-the-art baseline models, including TF-IDF, Rocchio, Okapi BM25, the pattern-deploying Pattern Taxonomy Model, and an ontology-based model. The results indicate that top precision can be improved remarkably with the proposed ontology mining approach, and that the matching approach is successful, achieving significant improvements on most information filtering measures. This research contributes to the fields of ontological filtering, user profiling, and knowledge representation. The related outputs are critical when systems are expected to return proper mining results and provide personalized services. The scientific findings have the potential to facilitate the design of advanced preference mining models that impact people's daily lives.
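As a toy illustration of the "bag-of-concepts" idea, the sketch below maps weighted local relevance features to global subject-heading-style concepts and compares profiles by cosine similarity. The function names, the term-to-concept mapping and the weights are hypothetical; the thesis's ontology mining and concept matching approaches are far richer than this.

```python
from collections import defaultdict
from math import sqrt

def bag_of_concepts(term_weights, term_to_concepts):
    """Aggregate weighted terms (local relevance features) into a concept-level
    profile, given a mapping from terms to subject-heading concepts."""
    profile = defaultdict(float)
    for term, weight in term_weights.items():
        for concept in term_to_concepts.get(term, []):
            profile[concept] += weight
    return dict(profile)

def cosine(a, b):
    """Cosine similarity between two sparse concept-weight dictionaries."""
    shared = set(a) & set(b)
    num = sum(a[c] * b[c] for c in shared)
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

# Hypothetical data: weighted terms from user feedback and a toy LCSH-style mapping.
user_terms = {"raspberry": 0.8, "virus": 0.6, "genome": 0.4}
mapping = {"raspberry": ["Rubus"], "virus": ["Plant viruses"], "genome": ["Genomes"]}
doc_terms = {"virus": 1.0, "genome": 0.5}

user_profile = bag_of_concepts(user_terms, mapping)
doc_profile = bag_of_concepts(doc_terms, mapping)
print(cosine(user_profile, doc_profile))
```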
Abstract:
This study explores the accuracy and valuation implications of the application of a comprehensive list of equity multiples in the takeover context. Motivating the study are the prevalent use of equity multiples in practice, the observed long-run underperformance of acquirers following takeovers, and the scarcity of multiples-based research in the merger and acquisition setting. In exploring the application of equity multiples in this context, three research questions are addressed: how accurate are equity multiples (RQ1); which equity multiples are more accurate in valuing the firm (RQ2); and which equity multiples are associated with greater misvaluation of the firm (RQ3). Following a comprehensive review of the extant multiples-based literature, it is hypothesised that the accuracy of multiples in estimating stock market prices in the takeover context will rank as follows (from best to worst): (1) forecasted earnings multiples, (2) multiples closer to bottom-line earnings, and (3) multiples based on Net Cash Flow from Operations (NCFO) and trading revenue. The relative inaccuracies in multiples are expected to flow through to equity misvaluation (as measured by the ratio of estimated market capitalisation to residual income value, or P/V). Accordingly, it is hypothesised that greater overvaluation will be exhibited for multiples based on Trading Revenue, NCFO, Book Value (BV) and earnings before interest, tax, depreciation and amortisation (EBITDA) than for multiples based on bottom-line earnings, and that multiples based on Intrinsic Value will display the least overvaluation. The hypotheses are tested using a sample of 147 acquirers and 129 targets involved in Australian takeover transactions announced between 1990 and 2005. The results show that, first, the majority of the computed multiples examined exhibit valuation errors within 30 percent of stock market values. Second, and consistent with expectations, the results support the superiority of multiples based on forecasted earnings in valuing targets and acquirers engaged in takeover transactions. Although a gradual improvement in estimating stock market values is not entirely evident when moving down the Income Statement, historical earnings multiples perform better than multiples based on Trading Revenue or NCFO. Third, while multiples based on forecasted earnings have the highest valuation accuracy, they, along with Trading Revenue multiples for targets, produce the most overvalued valuations for acquirers and targets. Consistent with predictions, greater overvaluation is exhibited for multiples based on Trading Revenue for targets, and on NCFO and EBITDA for both acquirers and targets. Finally, as expected, multiples based on Intrinsic Value (along with BV) are associated with the least overvaluation. Given the widespread use of valuation multiples in takeover contexts, these findings offer a unique insight into their relative effectiveness. Importantly, the findings add to the growing body of valuation accuracy literature, especially within Australia, and should assist market participants to better understand the relative accuracy and misvaluation consequences of the various equity multiples used in takeover documentation, and assist them in subsequent investment decision making.
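For readers unfamiliar with the mechanics behind the accuracy tests, the following sketch shows the basic arithmetic of multiples-based valuation: the equity value estimate is a peer median multiple times the target's value driver, and the valuation error is the deviation of that estimate from the observed market value. The helper names and numbers are invented for illustration; the thesis's actual research design is more involved.

```python
import statistics

def multiple_based_value(peer_prices, peer_drivers, target_driver):
    """Estimate equity value as (median peer multiple) x (target value driver),
    where the driver could be forecast earnings, EBITDA, book value, NCFO or
    trading revenue. Peers with non-positive drivers are excluded."""
    peer_multiples = [p / d for p, d in zip(peer_prices, peer_drivers) if d > 0]
    return statistics.median(peer_multiples) * target_driver

def valuation_error(estimate, market_cap):
    """Signed percentage error of the estimate relative to the observed market value."""
    return (estimate - market_cap) / market_cap

# Hypothetical numbers: three comparable firms, one target valued on forecast earnings.
estimate = multiple_based_value(peer_prices=[500.0, 800.0, 650.0],
                                peer_drivers=[50.0, 72.0, 61.0],
                                target_driver=40.0)
print(estimate, valuation_error(estimate, market_cap=430.0))
```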
Abstract:
The purpose of this paper is to document and explain the allocation of takeover purchase price to identifiable intangible assets (IIAs), purchased goodwill, and/or target net tangible assets in an accounting environment unconstrained with respect to IIA accounting policy choice. Using a sample of Australian acquisitions during the unconstrained accounting environment from 1988 to 2004, we find the percentage allocation of purchase price to IIAs averaged 19.09%. The percentage allocation to IIAs is significantly positively related to return on assets and insignificantly related to leverage, contrary to opportunism. Efficiency suggests an explanation: profitable firms acquire and capitalise a higher percentage of IIAs in acquisitions. The target's investment opportunity set is significantly positively related to the percentage allocation to IIAs, consistent with information-signalling. The paper contributes to the accounting policy choice literature by showing how Australian firms made the one-off accounting policy choice regarding the allocation of takeover purchase price (often a substantial dollar amount) in an environment where accounting for IIAs was unconstrained.
Abstract:
Rubus yellow net virus (RYNV) was cloned and sequenced from a red raspberry (Rubus idaeus L.) plant exhibiting symptoms of mosaic and mottling in the leaves. Its genomic sequence indicates that it is a distinct member of the genus Badnavirus, with 7932 bp and seven ORFs, the first three corresponding in size and location to the ORFs found in the type member Commelina yellow mottle virus. Bioinformatic analysis of the genomic sequence detected several features including nucleic acid binding motifs, multiple zinc finger-like sequences and domains associated with cellular signaling. Subsequent sequencing of the small RNAs (sRNAs) from RYNV-infected R. idaeus leaf tissue was used to determine which RYNV sequences are targeted by RNA silencing, and identified abundant virus-derived small RNAs (vsRNAs). The majority of the vsRNAs were 22 nt in length. We observed a highly uneven genome-wide distribution of vsRNAs, with strong clustering to small defined regions distributed over both strands of the RYNV genome. Together, our data show that sequences of the aphid-transmitted pararetrovirus RYNV are targeted in red raspberry by the RNA interference pathway, a predominant antiviral defense mechanism in plants.
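A hedged sketch of the kind of downstream vsRNA summary described here: tallying the read-length distribution and the per-bin coverage of the 7932 bp genome to expose the 22-nt predominance and the clustering into small regions. The input format, a list of (start, length, strand) tuples, is a hypothetical stand-in for the output of an sRNA read aligner; it is not the authors' pipeline.

```python
from collections import Counter

def vsrna_summary(alignments, genome_length, bin_size=100):
    """Summarise virus-derived small RNAs: read-length distribution and read
    counts per genome bin (both strands pooled). `alignments` is a list of
    (start, length, strand) tuples from a hypothetical upstream aligner."""
    length_counts = Counter()
    bins = [0] * (genome_length // bin_size + 1)
    for start, length, strand in alignments:
        length_counts[length] += 1
        bins[start // bin_size] += 1
    return length_counts, bins

# Toy example: three reads mapped to a badnavirus-sized genome of 7932 bp.
reads = [(120, 22, "+"), (121, 22, "-"), (5040, 21, "+")]
lengths, coverage = vsrna_summary(reads, genome_length=7932)
print(lengths.most_common())                          # expect 22-nt reads to dominate
print([i for i, c in enumerate(coverage) if c > 1])   # candidate hotspot bins
```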
Abstract:
Global awareness of cleaner and renewable energy is transforming the electricity sector at many levels. New technologies are being increasingly integrated into the electricity grid at high, medium and low voltage levels, new taxes on carbon emissions are being introduced, and individuals can now produce electricity, mainly through rooftop photovoltaic (PV) systems. While leading to improvements, these changes also introduce challenges, and a question that often arises is ‘how can we manage this constantly evolving grid?’ The Queensland Government and Ergon Energy, one of the two Queensland distribution companies, have partnered with Australian and German universities on a project to answer this question in a holistic manner. The project investigates the impact the integration of renewables and other new technologies has on the physical structure of the grid, and how this evolving system can be managed in a sustainable and economical manner. To aid understanding of what the future might bring, a software platform has been developed that integrates two modelling techniques: agent-based modelling (ABM), to capture the characteristics of the different system units accurately and dynamically, and particle swarm optimisation (PSO), to find the most economical mix of network extension and integration of distributed generation over long periods of time. Using data from Ergon Energy, two types of networks have been modelled: three-phase networks, usually used in dense networks such as urban areas, and Single Wire Earth Return (SWER) networks, widely used in rural Queensland. Simulations can be performed on these networks to identify the required upgrades, following a three-step process: a) assess what is already in place and how it performs under current and future loads, b) determine what can be done to manage it and plan the future grid, and c) evaluate how these upgrades/new installations will perform over time. The number of small-scale distributed generators, e.g. PV and battery, is now sufficient (and expected to increase) to impact the operation of the grid, which in turn needs to be considered by the distribution network manager when planning upgrades and/or installations to stay within regulatory limits. Different scenarios can be simulated, with different levels of distributed generation, in place as well as expected, so that a large number of options can be assessed (Step a). Once the location, sizing and timing of asset upgrades and/or installations are found using optimisation techniques (Step b), it is possible to assess the adequacy of their daily performance using agent-based modelling (Step c). One distinguishing feature of this software is that it is possible to analyse a whole area at once, while still having a tailored solution for each of the sub-areas. To illustrate this, using the impact that battery and PV can have on the two types of networks mentioned above, three design conditions can be identified (amongst others):
- Urban conditions:
  - Feeders that have a low take-up of solar generators may benefit from adding solar panels.
  - Feeders that need voltage support at specific times may be assisted by installing batteries.
- Rural conditions (SWER network):
  - Feeders that need voltage support as well as peak lopping may benefit from both battery and solar panel installations.
This small example demonstrates that no single solution can be applied across all three areas, and there is a need to be selective in which one is applied to each branch of the network.
This is currently the function of the engineer who can define various scenarios against a configuration, test them and iterate towards an appropriate solution. Future work will focus on increasing the level of automation in identifying areas where particular solutions are applicable.
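As a rough illustration of Step b, the sketch below is a bare-bones particle swarm optimisation loop over a continuous decision vector (e.g. battery/PV sizing or line upgrades per feeder, all illustrative). It is a generic PSO, not the project's platform: the inertia and acceleration coefficients, bounds and toy cost function are placeholders, and in the real system the cost of a candidate plan would be evaluated against the modelled network rather than this toy quadratic.

```python
import random

def pso_minimise(cost, dim, n_particles=20, iters=100, bounds=(0.0, 1.0)):
    """Minimise `cost` over a dim-dimensional box using a simple particle swarm."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # personal best positions
    pbest_cost = [cost(p) for p in pos]
    gbest = min(zip(pbest_cost, pbest))[1][:]    # global best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest_cost[i], pbest[i] = c, pos[i][:]
        gbest = min(zip(pbest_cost, pbest))[1][:]  # refresh global best
    return gbest

# Toy stand-in for "upgrade cost plus constraint penalties" over four feeders.
best_plan = pso_minimise(lambda x: sum((xi - 0.3) ** 2 for xi in x), dim=4)
print(best_plan)
```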
Abstract:
This study aimed to determine whether systematic variation of the diagnostic terminology embedded within written discharge information (i.e., concussion or mild traumatic brain injury, mTBI) would produce different expected symptoms and illness perceptions. We hypothesized that compared to concussion advice, mTBI advice would be associated with worse outcomes. Sixty-two volunteers with no history of brain injury or neurological disease were randomly allocated to one of two conditions in which they read an mTBI vignette followed by information that varied only by use of the embedded term concussion (n = 28) or mTBI (n = 34). Both groups reported illness perceptions (timeline and consequences subscales of the Illness Perception Questionnaire-Revised) and expected Postconcussion Syndrome (PCS) symptoms 6 months post-injury (Neurobehavioral Symptom Inventory, NSI). Statistically significant group differences due to terminology were found on selected NSI scores (i.e., the total, cognitive and sensory symptom cluster scores; concussion > mTBI), but there was no effect of terminology on illness perceptions. When embedded in discharge advice, diagnostic terminology affects some but not all expected outcomes. Given that such expectations are a known contributor to poor mTBI outcome, clinicians should consider the potential impact of varied terminology on their patients.
Abstract:
Before returning from Australia for the BCLA's Pioneers Day, Professor Nathan Efron spoke to OT. Professor Efron, you’re back in the UK for a short while – What tempted you away from Australia’s summer and back to Britain in November...
Abstract:
Co-producer of this contemporary music album.
Abstract:
This work presents a demand side response (DSR) model which assists small electricity consumers who are exposed to the market price, through an aggregator, to proactively mitigate price and peak impacts on the electrical system. The proposed model allows consumers to manage air-conditioning as a function of possible price spikes. The main contribution of this research is to demonstrate how consumers can minimise their total expected cost by optimising air-conditioning to account for occurrences of a price spike in the electricity market. The model investigates how a pre-cooling method can be used to minimise energy costs when there is a substantial risk of an electricity price spike. The model was tested with Queensland electricity market data from the Australian Energy Market Operator and Brisbane temperature data from the Bureau of Statistics for hot weekdays in the period 2011 to 2012.
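A minimal sketch of the expected-cost trade-off described above: energy used for pre-cooling is bought at the normal price and reduces the peak-period load that would otherwise be exposed to a possible price spike. All parameter values (spike probability, prices, load reduction per pre-cooled kWh) are illustrative assumptions, not figures from the thesis.

```python
def expected_cost(precool_kwh, p_spike, normal_price, spike_price,
                  peak_kwh_base, reduction_per_kwh=0.8):
    """Expected air-conditioning cost for one hot afternoon: energy bought ahead
    of the peak at the normal price, plus the expected cost of the (reduced)
    peak-period load. All parameters are illustrative assumptions."""
    peak_kwh = max(0.0, peak_kwh_base - reduction_per_kwh * precool_kwh)
    expected_peak_price = p_spike * spike_price + (1 - p_spike) * normal_price
    return normal_price * precool_kwh + expected_peak_price * peak_kwh

# Grid-search the pre-cooling energy that minimises expected cost when a spike
# (here assumed to reach roughly the market price cap) has a 5% probability.
candidates = [i * 0.5 for i in range(0, 21)]   # 0 to 10 kWh of pre-cooling
best = min(candidates,
           key=lambda q: expected_cost(q, p_spike=0.05, normal_price=0.06,
                                       spike_price=13.0, peak_kwh_base=6.0))
print(best)
```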
Abstract:
Throughout a long and occasionally distinguished career first as a television sports correspondent, then chat show host (dramatically ended by the accidental homicide of a guest live on air), then rebirth as a radio presenter at North Norfolk Digital, Alan Partridge has navigated the stormy waters of the British media landscape, now achieving mainstream success on the big screen with a starring role in Steve Coogan’s Alpha Papa (Declan Lowney, 2013). A man who in his desperation for a television series of his own once sank so low as to pitch a show called Monkey Tennis to the BBC finally finds his inner hero in a film which, while presenting mainly as comedy, also contains a biting critique of trends in the British media with which all journalists and media practitioners in general will be familiar. Alpha Papa is a nostalgic, elegiac riff on the pleasures and values of local radio the way it used to be, exemplified by North Norfolk Digital’s stable of flawed but endearing jocks – Wally Banter, Bruno Brooks, Dave Clifton (who in one scene recounts the depths to which he sank as an alcoholic, drug-addicted wreck – “I woke up in a skip with someone else’s underpants in my mouth. I can laugh about it now …”), and Pat Farrell. 50-something Pat is sacked by the new owners of North Norfolk Digital, who in their efforts to transform the station into a “multiplatform content provider” going by the more Gen Y-friendly name of Shape (“the way you want it to be”), wish to replace him with a younger, brattish model lacking in taste and manners. Out go records by the likes of Glen Campbell and Neil Diamond (“You can keep Jesus Christ”, observes Partridge after playing Diamond’s Sweet Caroline in a demonstration of the crackling radio repartee for which he is by now renowned, “that was the king of the Jews”), in comes Roachford. Pat, grieving his dead wife Molly, finally snaps and turns the glitzy media launch of Shape into a hostage siege. Only Alan Partridge, it seems, can step in and talk Pat out of a looming catastrophe.
Abstract:
In this paper we define two models of users who require diversity in search results; these models are theoretically grounded in the notions of intrinsic and extrinsic diversity. We then examine Intent-Aware Expected Reciprocal Rank (ERR-IA), one of the official measures used to assess diversity in TREC 2011-12, with respect to the proposed user models. By analyzing ranking preferences as expressed by the user models and those estimated by ERR-IA, we investigate whether ERR-IA assesses document rankings according to the requirements of the diversity retrieval task expressed by the two models. Empirical results demonstrate that ERR-IA neglects query-intent coverage by attributing excessive importance to redundant relevant documents. ERR-IA's behavior is contrary to the user models, which require measures to first assess diversity through the coverage of intents, and then assess the redundancy of relevant intents. Furthermore, diversity should be considered separately from document relevance and the documents' positions in the ranking.
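For reference, ERR-IA is the intent-probability-weighted sum of per-intent Expected Reciprocal Rank (Chapelle et al., 2009). The sketch below computes it for a toy two-intent query; with a skewed intent distribution it scores a ranking that repeats relevant documents for the dominant intent above one that covers both intents, which is the kind of behaviour the paper critiques. The toy gains and intent probabilities are invented for illustration.

```python
def err(gains, g_max=1):
    """Expected Reciprocal Rank for one intent, with graded relevance `gains`
    ordered by rank; stopping probability R(g) = (2**g - 1) / 2**g_max."""
    p_not_stopped = 1.0
    total = 0.0
    for rank, g in enumerate(gains, start=1):
        r = (2 ** g - 1) / 2 ** g_max
        total += p_not_stopped * r / rank
        p_not_stopped *= (1 - r)
    return total

def err_ia(intent_probs, gains_per_intent, g_max=1):
    """Intent-aware ERR: probability-weighted ERR over query intents."""
    return sum(p * err(gains_per_intent[i], g_max) for i, p in intent_probs.items())

# Toy query with a dominant intent A and a minority intent B, binary relevance.
intents = {"A": 0.8, "B": 0.2}
redundant = {"A": [1, 1], "B": [0, 0]}   # second document repeats intent A
diverse = {"A": [1, 0], "B": [0, 1]}     # second document covers intent B
print(err_ia(intents, redundant), err_ia(intents, diverse))  # 0.5 vs 0.45
```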