906 results for Returns to scale
Abstract:
Ordinary desktop computers continue to gain ever more resources – increased processing power, memory, network speed and bandwidth – yet these resources spend much of their time underutilised. Cycle stealing frameworks harness these resources so they can be used for high-performance computing. Traditionally, cycle stealing systems have used client-server architectures, which place significant limits on their ability to scale and on the range of applications they can support. By applying a fully decentralised network model to cycle stealing, the limits of centralised models can be overcome. Using decentralised networks in this manner presents some difficulties which have not been encountered in their previous uses. Decentralised applications generally do not require significant fault tolerance guarantees; high-performance computing, on the other hand, requires very stringent guarantees to ensure correct results are obtained. Unfortunately, mechanisms developed for traditional high-performance computing cannot simply be translated, because they rely on a reliable storage mechanism, and in the highly dynamic world of P2P computing this reliable storage is not available. As part of this research a fault tolerance system has been created which provides considerable reliability without the need for persistent storage. As well as increased scalability, fully decentralised networks allow volunteers to communicate directly. This opens the possibility of supporting applications whose tasks require direct, message-passing style communication. Previous cycle stealing systems have supported only embarrassingly parallel applications and applications with limited forms of communication, so a new programming model has been developed to support this style of communication within a cycle stealing context. In this thesis I present a fully decentralised cycle stealing framework. The framework addresses the problems of providing a reliable fault tolerance system and supporting direct communication between parallel tasks. The thesis includes a programming model for developing cycle stealing applications with direct inter-process communication, and methods for optimising object locality on decentralised networks.
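The abstract does not show the programming model's API. As a purely hypothetical sketch (all names invented for illustration), a task interface combining cycle stealing with direct message passing might take roughly the following shape:

```python
from abc import ABC, abstractmethod

class Channel(ABC):
    """Direct, message-passing communication between peer tasks."""

    @abstractmethod
    def send(self, task_id: str, payload: bytes) -> None:
        """Deliver a message to another running task."""

    @abstractmethod
    def recv(self, timeout: float | None = None) -> bytes:
        """Block until a message arrives (or the timeout expires)."""

class Task(ABC):
    """A unit of work scheduled onto an idle volunteer node."""

    @abstractmethod
    def run(self, channel: Channel) -> object:
        """Execute the task, exchanging messages with peers via `channel`."""
```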
Abstract:
Each year, The Australian Centre for Philanthropy and Nonprofit Studies (CPNS) at Queensland University of Technology (QUT) collects and analyses statistics on the amount and extent of tax-deductible donations made and claimed by Australians in their individual income tax returns to deductible gift recipients (DGRs). The information presented below is based on the amount and type of tax-deductible donations made and claimed by Australian individual taxpayers to DGRs for the period 1 July 2006 to 30 June 2007. This information has been extracted mainly from the Australian Taxation Office's (ATO) publication Taxation Statistics 2006-07. The 2006-07 report is the latest report that has been made publicly available. It represents information in tax returns for the 2006-07 year processed by the ATO as at 31 October 2008. This study uses information based on published ATO material and represents only the extent of tax-deductible donations made and claimed by Australian taxpayers to DGRs at Item D9 Gifts or Donations in their individual income tax returns for the 2006-07 income year. The data does not include corporate taxpayers. Expenses such as raffles, sponsorships, fundraising purchases (e.g., sweets, tea towels, special events) or volunteering are generally not deductible as 'gifts'. The Giving Australia Report used a more liberal definition of gift to arrive at an estimated total of giving of $11 billion for 2005 (excluding Tsunami giving of $300 million). The $11 billion total comprised $5.7 billion from adult Australians, $2 billion from charity gambling or special events and $3.3 billion from business sources.
Abstract:
We explore the empirical usefulness of conditional coskewness to explain the cross-section of equity returns. We find that coskewness is an important determinant of the returns to equity, and that the pricing relationship varies through time. In particular we find that when the conditional market skewness is positive investors are willing to sacrifice 7.87% annually per unit of gamma (a standardized measure of coskewness risk), while they only demand a premium of 1.80% when the market is negatively skewed. A similar picture emerges from the coskewness factor of Harvey and Siddique (Harvey, C., Siddique, A., 2000a. Conditional skewness in asset pricing tests. Journal of Finance 55, 1263–1295), a portfolio that is long stocks with low coskewness with the market and short stocks with high coskewness, which earns 5.00% annually when the market is positively skewed but only 2.81% when the market is negatively skewed. The conditional two-moment CAPM and a conditional Fama and French (Fama, E., French, K., 1992. The cross-section of expected stock returns. Journal of Finance 47, 427–465) three-factor model are rejected, but a model which includes coskewness is not rejected by the data. The model also passes a structural break test which many existing asset pricing models fail.
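As a minimal illustration of the gamma/coskewness measure the abstract refers to, the following sketch computes direct coskewness in the standardized form popularised by Harvey and Siddique; the paper's conditional estimation procedure is more involved and is not reproduced here.

```python
import numpy as np

def standardized_coskewness(r_i: np.ndarray, r_m: np.ndarray) -> float:
    """Direct coskewness of asset i with the market:
        E[e_i * e_m^2] / (sqrt(E[e_i^2]) * E[e_m^2]),
    where e_i is the asset's market-model residual and e_m the demeaned
    market return. Negative values indicate the asset adds negative
    skewness to the market portfolio."""
    slope, intercept = np.polyfit(r_m, r_i, 1)   # simple market-model fit
    e_i = r_i - (intercept + slope * r_m)        # asset residuals
    e_m = r_m - r_m.mean()                       # demeaned market returns
    return np.mean(e_i * e_m**2) / (np.sqrt(np.mean(e_i**2)) * np.mean(e_m**2))
```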
Abstract:
Nitrous oxide (N2O) is primarily produced by the microbially-mediated nitrification and denitrification processes in soils. It is influenced by a suite of climate (i.e. temperature and rainfall) and soil (physical and chemical) variables, by interacting soil and plant nitrogen (N) transformations (either competing for or supplying substrates), and by land management practices. It is not surprising that N2O emissions are highly variable both spatially and temporally. Computer simulation models, which can integrate all of these variables, are required for the complex task of providing quantitative determinations of N2O emissions. Numerous simulation models have been developed to predict N2O production. Each model has its own philosophy in constructing simulation components, as well as its own performance strengths. The models range from those that attempt to comprehensively simulate all soil processes to more empirical approaches requiring minimal input data. These N2O simulation models can be classified into three categories by scale: laboratory, field and regional/global. Process-based field-scale N2O simulation models, which simulate whole agroecosystems and can be used to develop N2O mitigation measures, are the most widely used. The current challenge is how to scale up the relatively more robust field-scale models to catchment, regional and national scales. This paper reviews the development history, main construction components, strengths, limitations and applications of the N2O emissions models published in the literature. All three scale levels are considered, and the current knowledge gaps and challenges in modelling N2O emissions from soils are discussed.
Abstract:
In a report in the New York Times about a public symposium on the future of theory held at the University of Chicago in 2002, staff writer Emily Eakin suggests that theory appears to have taken a back seat to more pressing current affairs – the Bush Administration, Al Qaeda, Iraq. Further, she reports that the symposium's panel of high-profile theorists and scholars, including Homi Bhabha, Stanley Fish and Fredric Jameson, seemed reluctant to offer their views on what is often touted as the demise or irrelevance of theory. The symposium and other commentaries on the topic have prompted the view that the 'Golden Age of Theory' has passed and we are now in a 'Post-Theory Age'. Given these pronouncements, we need to ask – does theory matter any longer? Is it time for the obituary? Or are reports of the death of theory greatly exaggerated? The question remains whether to mourn or celebrate the demise of theory, and whether the body has in fact breathed its last. The title of this Introduction – 'Bringing back theory' – suggests a resurrection, or perhaps a haunting, as if the funeral has passed and, like Banquo's ghost, theory returns to unsettle or disturb the celebration. It also suggests an entreaty, or perhaps a return performance. Rather than settle on one meaning, one interpretation, we are happy for all possibilities to coexist. The coexistence of different theories, different approaches and different interpretations also reflects the state of literary and cultural studies generally, and of children's literature criticism in particular. No single theory or viewpoint predominates or vies for hegemony. Yet one further question lingers – what is theory?
Abstract:
Each year, The Australian Centre for Philanthropy and Nonprofit Studies (CPNS) at Queensland University of Technology (QUT) collects and analyses statistics on the amount and extent of tax-deductible donations made and claimed by Australians in their individual income tax returns to deductible gift recipients (DGRs). The information presented below is based on the amount and type of tax-deductible donations made and claimed by Australian individual taxpayers to DGRs for the period 1 July 2008 to 30 June 2009. This information has been extracted mainly from the Australian Taxation Office's (ATO) publication Taxation Statistics 2008-09. The 2008-09 report is the latest report that has been made publicly available. It represents information in tax returns for the 2008-09 year processed by the ATO as at 31 October 2010.
Abstract:
In this paper we propose a new method for face recognition using fractal codes. Fractal codes represent local contractive affine transformations which, when iteratively applied to range-domain pairs in an arbitrary initial image, result in a fixed point close to a given image. The transformation parameters, such as brightness offset, contrast factor, orientation and the address of the corresponding domain for each range, are used directly as features in our method. Features of an unknown face image are compared with those pre-computed for images in a database. There is no need to iterate, or to use fractal neighbor distances or fractal dimensions, for comparison in the proposed method. The method is robust to scale change, frame size change and rotations, as well as to some noise, facial expressions and blur distortion in the image.
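The abstract describes using the fractal-code parameters themselves as a feature vector. A minimal, hypothetical sketch of the comparison step follows; the feature layout and the plain Euclidean distance are assumptions for illustration, not the paper's exact design.

```python
import numpy as np

def match_face(query: np.ndarray, database: dict[str, np.ndarray]) -> str:
    """Return the database identity whose precomputed fractal-code feature
    matrix is closest to the query's. Each matrix is assumed to have one
    row per range block, with columns such as brightness offset, contrast
    factor, orientation code and domain address (all matrices same shape).
    The paper's actual comparison rule may differ."""
    return min(database, key=lambda name: float(np.linalg.norm(database[name] - query)))
```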
Abstract:
This paper describes a scene invariant crowd counting algorithm that uses local features to monitor crowd size. Unlike previous algorithms that require each camera to be trained separately, the proposed method uses camera calibration to scale between viewpoints, allowing a system to be trained and tested on different scenes. A pre-trained system could therefore be used as a turn-key solution for crowd counting across a wide range of environments. The use of local features allows the proposed algorithm to calculate local occupancy statistics, and Gaussian process regression is used to scale to conditions which are unseen in the training data, also providing confidence intervals for the crowd size estimate. A new crowd counting database is introduced to the computer vision community to enable a wider evaluation over multiple scenes, and the proposed algorithm is tested on seven datasets to demonstrate scene invariance and high accuracy. To the authors' knowledge this is the first system of its kind due to its ability to scale between different scenes and viewpoints.
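A minimal sketch of the regression step as described (Gaussian process regression mapping local occupancy features to crowd size, with predictive uncertainty); the RBF-plus-noise kernel and the placeholder features are assumptions, not the paper's exact choices.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Placeholder training data: one row of local occupancy statistics per frame.
rng = np.random.default_rng(0)
X_train = rng.random((200, 4))
y_train = 50.0 * X_train.sum(axis=1)

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X_train, y_train)

# Predict crowd size for a new frame, with a 95% confidence interval.
mean, std = gpr.predict(rng.random((1, 4)), return_std=True)
low, high = mean - 1.96 * std, mean + 1.96 * std
```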
Abstract:
We describe a scaling method for templating digital radiographs using conventional acetate templates, independent of template magnification and without the need for a calibration marker. The mean magnification factor for the radiology department was determined (119.8%, range 117%–123.4%). This fixed magnification factor was used to scale the radiographs by the method described. Thirty-two femoral heads on postoperative THR radiographs were then measured and compared to actual size. The mean absolute accuracy was within 0.5% of actual head size (range 0–3%), with a mean absolute difference of 0.16 mm (range 0–1 mm, SD 0.26 mm). The Intraclass Correlation Coefficient (ICC) showed excellent reliability for both inter- and intraobserver measurements, with an interobserver ICC of 0.993 (95% CI 0.988–0.996) and intraobserver ICCs ranging from 0.990 to 0.993 (95% CI 0.980–0.997).
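The arithmetic of the scaling step is simple: with a fixed departmental magnification of 119.8%, a distance measured on the digital radiograph is divided by 1.198 to recover true size. A one-function sketch (the example value is illustrative only):

```python
MAGNIFICATION = 1.198  # departmental mean magnification factor (119.8%)

def true_size_mm(measured_mm: float) -> float:
    """Recover true anatomical size from a distance measured on the
    digital radiograph, assuming the fixed departmental magnification."""
    return measured_mm / MAGNIFICATION

# Example: a femoral head measuring 53.9 mm on the radiograph corresponds
# to true_size_mm(53.9), i.e. approximately 45.0 mm of true size.
```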
Abstract:
Solo Show is a to-scale model of Metro Arts' gallery, in which it was exhibited. Set upon a timber frame, the model depicts a miniature 'installation' within the 'space': a foam block that obstructs one of the gallery's walkways. Developed and produced for a group exhibition exploring the relationship between humour and art, this work explores and pokes fun at ideas of the institution, scale and the artist ego, as well as communicating feelings of emergence, insecurity and hesitancy. The work was included in the group show 'Lean Towards Indifference!' at Metro Arts, Brisbane, curated by the art collective No Frills.
Abstract:
The next generation of service-oriented architecture (SOA) needs to scale for flexible service consumption, beyond organizational and application boundaries, into communities, ecosystems and business networks. In wider and, ultimately, global settings, new capabilities are needed so that business partners can efficiently and reliably enable, adapt and expose services. Those services can then be discovered, ordered, consumed, metered and paid for, through new applications and opportunities, driven by third parties in the global "village". This trend is already underway, in different ways, through different early adopter market segments. This paper proposes an architectural strategy for the provisioning and delivery of services in communities, ecosystems and business networks – a Service Delivery Framework (SDF). The SDF is intended to support multiple industries and deployments where an SOA platform is needed for collaborating partners and diverse consumers. Specifically, it is envisaged that the SDF allows providers to publish their services into network directories so that they can be repurposed, traded and consumed, leveraging network utilities like B2B gateways and cloud hosting. To support these different facets of service delivery, the SDF extends the conventional service provider, service broker and service consumer roles of the Web Services Architecture with four new roles: service gateway, service hoster, service aggregator and service channel maker.
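The paper defines these roles conceptually rather than as code; as a purely illustrative sketch, the extended role set could be enumerated as follows.

```python
from enum import Enum

class SdfRole(Enum):
    """Roles in the Service Delivery Framework: the three conventional
    Web Services Architecture roles plus the four SDF extensions."""
    SERVICE_PROVIDER = "provider"            # conventional WSA role
    SERVICE_BROKER = "broker"                # conventional WSA role
    SERVICE_CONSUMER = "consumer"            # conventional WSA role
    SERVICE_GATEWAY = "gateway"              # SDF extension
    SERVICE_HOSTER = "hoster"                # SDF extension
    SERVICE_AGGREGATOR = "aggregator"        # SDF extension
    SERVICE_CHANNEL_MAKER = "channel_maker"  # SDF extension
```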
Abstract:
The next generation of SOA needs to scale for flexible service consumption, beyond organizational boundaries and current B2B applications, into communities, ecosystems, and business networks. In wider and, ultimately, global settings, new capabilities are needed so that business partners can efficiently and reliably enable, adapt, and expose services where they can be discovered, ordered, consumed, metered, and paid for, through new applications and opportunities, driven by third parties in the global "village". This trend is already underway, in different ways, through various early adopter market segments. For the small and medium enterprise segment, Google, Intuit-Microsoft, and others have launched appstores, through which an open-ended array of hosted applications is sourced from the development community and procured as marketplace commodities. In the corporate sector, the marketplace model and business network hubs are being put in place on top of connectivity and network orchestration investments to capitalize services as tradable assets, as seen in banking/finance (e.g., American Express Intelligent Marketplace), logistics (e.g., the E2open hub), and the public sector (e.g., UK DirectGov whole-of-government citizen services delivery).
Abstract:
Each year QUT's Centre of Philanthropy and Nonprofit Studies collects and analyses statistics on the extent of tax-deductible donations claimed by Australians in their individual income tax returns to deductible gift recipients (DGRs). The information presented below is based on the amount and type of tax-deductible donations claimed by Australian taxpayers to DGRs for the period 1 July 1999 to 30 June 2000. This information has been extracted from the Australian Taxation Office's publication Taxation Statistics 1999-2000, which provides an overview and profile of the income and taxation status of Australian taxpayers using information extracted from their income tax returns for that period. The 1999-2000 report is the latest report that has been made publicly available...
Abstract:
China's market-oriented labor market reform has been in place for about one and a half decades. This study uses individual data for 1981 and 1987 to examine the success of the first half of the reform program. Success is evaluated by examining changes in the wage setting structure in the state-owned sector over the reform period. Have the market reforms stimulated worker incentives by increasing the returns to human capital acquisition? Has the wage structure altered to more closely mimic that of a market economy? In 1987, there is evidence of a structural change in the system of wage determination, with slightly increased rates of return to human capital. However, changes in industrial wage differentials appear to play the dominant role. It is argued that this may be due to labor market reforms, in particular the introduction of the profit-related bonus scheme. J. Comp. Econom., December 1997, 25(3), pp. 403–421. Australian National University, Canberra, ACT 0200, Australia; University of Tasmania, Hobart, Tasmania, Australia; and University of Aberdeen, Old Aberdeen, Scotland AB24 3QY.
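The abstract does not state the estimating equation; rates of return to human capital in studies of this kind are typically read off a Mincer-style wage regression estimated separately for each year. The following is a sketch of that general approach, not the paper's exact specification:

```latex
\ln w_{it} = \alpha_t + \beta_t S_{it} + \gamma_t X_{it} + \delta_t X_{it}^2 + \varepsilon_{it},
\qquad t \in \{1981, 1987\}
```

where w is the wage, S years of schooling and X labor market experience; a rise in \beta_t between 1981 and 1987 would indicate increased returns to human capital acquisition.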
Abstract:
The mechanisms of force generation and transference via microfilament networks are crucial to understanding the mechanobiology of cellular processes in living cells. However, all-atom physics simulation of real-size microfilament networks poses an enormous challenge because of the scale limitations of molecular simulation techniques. Following biophysical investigations of the constitutive relations between adjacent globular actin monomers on filamentous actin, a hierarchical multiscale model was developed to investigate the biomechanical properties of microfilament networks. This model was validated against previous experimental studies of axial tension and transverse vibration of single F-actin. The biomechanics of microfilament networks can then be investigated at the scale of a real eukaryotic cell (10 μm). This multiscale approach provides a powerful modeling tool which can contribute to the understanding of actin-related cellular processes in living cells.
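The abstract cites transverse vibration of single F-actin as one validation case. A standard continuum relation often used in such comparisons, assuming the filament is idealised as a homogeneous elastic beam (an assumption for illustration, not a statement of the paper's method), gives the natural frequencies

```latex
f_n = \frac{\lambda_n^2}{2\pi L^2} \sqrt{\frac{EI}{\rho A}}
```

where L is the filament length, EI its flexural rigidity, \rho A its mass per unit length, and \lambda_n a modal constant fixed by the boundary conditions.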