285 results for accounting-based valuation models

in Queensland University of Technology - ePrints Archive


Relevance:

100.00%

Publisher:

Abstract:

Childcare workers play a significant role in the learning and development of children in their care, which has major implications for the training of workers. Under new reforms of the childcare industry, the Australian government now requires all workers to obtain qualifications from a vocational education and training provider (e.g. Technical and Further Education) or a university. Effective models of employment-based training are critical for developing highly competent workers. This paper presents findings from a study that examined current and emerging models of employment-based training in the childcare sector, particularly at the Diploma level. Semi-structured interviews were conducted with a sample of 16 participants representing childcare directors, employers, and workers located in childcare services in urban, regional and remote locations in the State of Queensland. The study proposes a ‘best-fit’ employment-based training approach characterised by a compendium of five models rather than a ‘one size fits all’ solution. Issues with successful implementation of the EBT models are also discussed.


This paper evaluates the efficiency of a number of popular corpus-based distributional models in performing discovery on very large document sets, including online collections. Literature-based discovery is the process of identifying previously unknown connections from text, often published literature, that could lead to the development of new techniques or technologies. Literature-based discovery has attracted growing research interest ever since Swanson's serendipitous discovery of the therapeutic effects of fish oil on Raynaud's disease in 1986. The successful application of distributional models in automating the identification of the indirect associations underpinning literature-based discovery has been extensively demonstrated in the medical domain. However, we wish to investigate the computational complexity of distributional models for literature-based discovery on much larger document collections, as they may provide computationally tractable solutions to tasks including predicting future disruptive innovations. In this paper we perform a computational complexity analysis of four successful corpus-based distributional models to evaluate their fitness for such tasks. Our results indicate that corpus-based distributional models that store their representations in fixed dimensions provide superior efficiency on literature-based discovery tasks.
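
The A-B-C pattern behind Swanson-style literature-based discovery can be illustrated with a minimal co-occurrence sketch (toy documents invented for illustration; real systems use the distributional models analysed in the paper): two terms that never co-occur directly are linked through shared intermediate B-terms.

```python
from collections import defaultdict
from itertools import combinations

# Toy corpus echoing Swanson's fish-oil/Raynaud example (illustrative only)
docs = [
    "fish oil lowers blood viscosity",
    "high blood viscosity occurs in raynaud disease",
]

cooc = defaultdict(set)  # term -> set of directly co-occurring terms
for doc in docs:
    for a, b in combinations(set(doc.split()), 2):
        cooc[a].add(b)
        cooc[b].add(a)

def indirect_links(a, c):
    """B-terms connecting a and c when they never co-occur directly."""
    if c in cooc[a]:
        return set()
    return cooc[a] & cooc[c]

print(indirect_links("oil", "raynaud"))  # shared B-terms such as 'viscosity'
```

A sparse co-occurrence structure like `cooc` grows with the vocabulary; models that compress each term into a fixed-dimension vector avoid that growth, which is the efficiency property the paper's complexity analysis highlights.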


Quality of experience (QoE) measures the overall perceived quality of mobile video delivery from subjective user experience and objective system performance. Current QoE computing models have two main limitations: 1) insufficient consideration of the factors influencing QoE; and 2) limited studies on QoE models for acceptability prediction. In this paper, a set of novel acceptability-based QoE models, denoted as A-QoE, is proposed based on the results of comprehensive user studies on subjective quality acceptance assessments. The models are able to predict users’ acceptability and pleasantness in various mobile video usage scenarios. Statistical regression analysis has been used to build the models with a group of influencing factors as independent predictors, including encoding parameters, bitrate, video content characteristics, and mobile device display resolution. The performance of the proposed A-QoE models has been compared with three well-known objective Video Quality Assessment metrics: PSNR, SSIM and VQM. The proposed A-QoE models have high prediction accuracy and usage flexibility. Future user-centred mobile video delivery systems can benefit from applying the proposed QoE-based management to optimize video coding and quality delivery decisions.
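
The regression approach can be sketched in miniature: fit acceptability against one predictor (log bitrate) by least squares and clip predictions to the probability range. The data points and the single-predictor form are invented for illustration; the paper's A-QoE models use multiple predictors fitted to real user-study data.

```python
import math

# Synthetic (bitrate kbps, observed acceptance rate) pairs -- illustrative only
data = [(200, 0.35), (400, 0.55), (800, 0.72), (1600, 0.86), (3200, 0.93)]
xs = [math.log(b) for b, _ in data]
ys = [a for _, a in data]

# Ordinary least squares for a single predictor
n = len(data)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

def predict_acceptance(bitrate_kbps):
    """Predicted acceptability, clipped to the [0, 1] range."""
    return max(0.0, min(1.0, intercept + slope * math.log(bitrate_kbps)))
```

The log transform captures the diminishing returns of extra bitrate on perceived quality, a common modelling choice in this setting.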


Topic modelling, such as Latent Dirichlet Allocation (LDA), was proposed to generate statistical models that represent multiple topics in a collection of documents, and has been widely utilized in fields such as machine learning and information retrieval. However, its effectiveness in information filtering is rarely known. Patterns are generally considered more representative than single terms for representing documents. In this paper, a novel information filtering model, the Pattern-based Topic Model (PBTM), is proposed to represent text documents using both topic distributions at a general level and semantic pattern representations at a detailed, specific level, both of which contribute to accurate document representation and document relevance ranking. Extensive experiments are conducted to evaluate the effectiveness of PBTM using the TREC data collection Reuters Corpus Volume 1. The results show that the proposed model achieves outstanding performance.
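
The pattern side of such a model can be sketched in a greatly simplified form: mine frequent co-occurring term pairs from relevant training documents, then score incoming documents by how many patterns they contain. The documents and support threshold are hypothetical; PBTM itself additionally weights patterns by LDA topic distributions.

```python
from collections import Counter
from itertools import combinations

# Hypothetical training documents judged relevant to the user's topic
training_docs = [
    "stock market price model",
    "market price prediction model",
    "price model for market value",
]

# Count every unordered term pair per document
pair_counts = Counter()
for doc in training_docs:
    for pair in combinations(sorted(set(doc.split())), 2):
        pair_counts[pair] += 1

MIN_SUPPORT = 2  # keep pairs appearing in at least two documents
patterns = {p for p, c in pair_counts.items() if c >= MIN_SUPPORT}

def score(doc):
    """Rank a document by how many learned patterns it contains."""
    terms = set(doc.split())
    return sum(1 for a, b in patterns if a in terms and b in terms)
```

Pairs of terms carry more discriminative context than either term alone, which is the intuition behind preferring patterns over single terms.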


This paper presents an event-based failure model to predict the number of failures that occur in water distribution assets. Often, such models have been based on analysis of historical failure data combined with pipe characteristics and environmental conditions. In this paper, weather data have been added to the model to take into account the commonly observed seasonal variation in the failure rate. The theoretical basis of existing logistic regression models is briefly described, along with the refinements made to the model to include seasonal variation in weather. The performance of these refinements is tested using data from two Australian water authorities.
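
A seasonal refinement to a logistic failure model can be sketched by adding a cyclic weather covariate to the linear predictor. All coefficients below are illustrative placeholders, not fitted values from the paper.

```python
import math

def failure_probability(pipe_age_years, month, b0=-6.0, b_age=0.05, b_season=0.8):
    """Logistic failure probability with age and a seasonal (cyclic) covariate.

    The cosine term peaks in mid-winter (month=7 in Australia), reflecting
    the commonly observed winter rise in pipe failure rates.
    """
    seasonal = math.cos(2 * math.pi * (month - 7) / 12)
    z = b0 + b_age * pipe_age_years + b_season * seasonal
    return 1 / (1 + math.exp(-z))
```

With a positive seasonal coefficient, the same pipe carries a higher predicted failure probability in July than in January, which is how the refinement captures seasonality without a separate model per season.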


The ability to estimate the expected Remaining Useful Life (RUL) is critical to reducing maintenance costs, operational downtime and safety hazards. In most industries, reliability analysis is based on Reliability Centred Maintenance (RCM) and lifetime distribution models. In these models, the lifetime of an asset is estimated using failure time data; however, statistically sufficient failure time data are often difficult to obtain in practice due to fixed time-based replacement and the small populations of identical assets. When condition indicator data are available in addition to failure time data, one alternative to the traditional reliability models is Condition-Based Maintenance (CBM), of which covariate-based hazard modelling is one approach. A number of covariate-based hazard models exist; however, few studies have evaluated the performance of these models in asset life prediction across various condition indicators and levels of data availability. This paper reviews two covariate-based hazard models: the Proportional Hazard Model (PHM) and the Proportional Covariate Model (PCM). To assess the models’ performance, the expected RUL is compared to the actual RUL. Outcomes demonstrate that both models achieve convincingly good results in RUL prediction; however, PCM has a smaller absolute prediction error. In addition, PHM shows an over-smoothing tendency compared to PCM when condition data change suddenly. Moreover, the case studies show that PCM is not biased in the case of small sample sizes.
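
A PHM multiplies a baseline hazard by an exponential function of the condition covariate, h(t|z) = h0(t)·exp(βz); the expected RUL then follows from numerically integrating the survival function beyond the current time. The Weibull baseline and all parameter values below are assumptions for the sketch, not the paper's fitted models.

```python
import math

def phm_hazard(t, z, beta=0.4, shape=2.0, scale=1000.0):
    """Weibull baseline hazard scaled by the condition covariate z (PHM form)."""
    h0 = (shape / scale) * (t / scale) ** (shape - 1)
    return h0 * math.exp(beta * z)

def expected_rul(t_now, z, dt=1.0, horizon=20000.0):
    """Expected remaining useful life at t_now: integrate conditional survival."""
    surv, rul, t = 1.0, 0.0, t_now
    while t < horizon and surv > 1e-6:
        surv *= math.exp(-phm_hazard(t, z) * dt)  # S(t+dt)/S(t_now)
        rul += surv * dt
        t += dt
    return rul
```

A worse condition reading (larger z) inflates the hazard and shrinks the expected RUL, which is the mechanism by which condition data supplement sparse failure time data.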


This paper is a deductive theoretical enquiry into the flow of effects from the geometry of price bubbles/busts, to price indices, to pricing behaviours of sellers and buyers, and back to price bubbles/busts. The intent of the analysis is to suggest analytical approaches to identify the presence, maturity, and/or sustainability of a price bubble. We present a pricing model to emulate market behaviour, including numeric examples and charts of the interaction of supply and demand. The model extends myopic (single- and multi-period) backward-looking rational expectations into dynamic market solutions to demonstrate how buyers and sellers interact to affect supply and demand, and to show how capital gain expectations can be a destabilising influence – i.e. the lagged effects of past price gains can drive the market price away from long-run market-worth. Investing based on the outputs of past price-based valuation models appears to be more a game-of-chance than a sound investment strategy.
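
The destabilising role of lagged capital-gain expectations can be sketched with a toy price-update rule (all parameters are illustrative, not the paper's calibrated model): the price reverts toward long-run worth, but buyers also extrapolate the most recent gain. When the weight on past gains is large enough, prices cycle away from worth instead of converging to it.

```python
def simulate_prices(periods=60, worth=100.0, alpha=0.9, momentum=0.0):
    """Toy dynamics: mean reversion to long-run worth plus a lagged-gain term."""
    prices = [worth, worth * 1.05]  # small initial shock above worth
    for _ in range(periods):
        lagged_gain = prices[-1] - prices[-2]  # backward-looking expectation
        prices.append(worth + alpha * (prices[-1] - worth) + momentum * lagged_gain)
    return prices
```

With `momentum=0` the shock decays geometrically back to worth; with a sufficiently large momentum weight the lagged-gain feedback produces growing boom/bust oscillations, the bubble geometry the paper analyses.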


Since the 1960s, the value relevance of accounting information has been an important topic in accounting research. The value relevance research provides evidence as to whether accounting numbers relate to corporate value in a predicted manner (Beaver, 2002). Such research is not only important for investors but also provides useful insights into accounting reporting effectiveness for standard setters and other users. Both the quality of accounting standards used and the effectiveness associated with implementing these standards are fundamental prerequisites for high value relevance (Hellstrom, 2006). However, while the literature comprehensively documents the value relevance of accounting information in developed markets, little attention has been given to emerging markets where the quality of accounting standards and their enforcement are questionable. Moreover, there is currently no known research that explores the association between level of compliance with International Financial Reporting Standards (IFRS) and the value relevance of accounting information. Motivated by the lack of research on the value relevance of accounting information in emerging markets and the unique institutional setting in Kuwait, this study has three objectives. First, it investigates the extent of compliance with IFRS with respect to firms listed on the Kuwait Stock Exchange (KSE). Second, it examines the value relevance of accounting information produced by KSE-listed firms over the 1995 to 2006 period. The third objective links the first two and explores the association between the level of compliance with IFRS and the value relevance of accounting information to market participants. Since it is among the first countries to adopt IFRS, Kuwait provides an ideal setting in which to explore these objectives. 
In addition, the Kuwaiti accounting environment provides an interesting regulatory context in which each KSE-listed firm is required to appoint at least two external auditors from separate auditing firms. Based on the research objectives, five research questions (RQs) are addressed. RQ1 and RQ2 aim to determine the extent to which KSE-listed firms comply with IFRS and the factors contributing to variations in compliance levels. These factors include firm attributes (firm age, leverage, size, profitability, liquidity), the number of brand name (Big-4) auditing firms auditing a firm’s financial statements, and industry categorization. RQ3 and RQ4 address the value relevance of IFRS-based financial statements to investors. RQ5 addresses whether the level of compliance with IFRS contributes to the value relevance of accounting information provided to investors. Based on the potential improvement in value relevance from adopting and complying with IFRS, it is predicted that the higher the level of compliance with IFRS, the greater the value relevance of book values and earnings. The research design of the study consists of two parts. First, in accordance with prior disclosure research, the level of compliance with mandatory IFRS is examined using a disclosure index. Second, the value relevance of financial statement information, specifically earnings and book value, is examined empirically using two valuation models: the price and returns models. The combined empirical evidence that results from the application of both models provides comprehensive insights into the value relevance of accounting information in an emerging market setting. Consistent with expectations, the results show the average level of compliance with IFRS mandatory disclosures for all KSE-listed firms in 2006 was 72.6 percent, thus indicating that KSE-listed firms generally did not fully comply with all requirements.
Significant variations in the extent of compliance are observed among firms and across accounting standards. As predicted, older, highly leveraged, larger, and profitable KSE-listed firms are more likely to comply with IFRS required disclosures. Interestingly, significant differences in the level of compliance are observed across the three possible auditor combinations: two Big-4 firms, two non-Big-4 firms, and mixed audit firm types. The results for the price and returns models provide evidence that earnings and book values are significant factors in the valuation of KSE-listed firms during the 1995 to 2006 period. However, the results show that the value relevance of earnings and book values decreased significantly during that period, suggesting that investors rely less on financial statements, possibly due to the increased availability of information sources other than financial statements. Notwithstanding this decline, a significant association is observed between the level of compliance with IFRS and the value relevance of earnings and book value to KSE investors. The findings make several important contributions. First, they raise concerns about the effectiveness of the regulatory body that oversees compliance with IFRS in Kuwait. Second, they challenge the effectiveness of the two-auditor requirement in promoting compliance with regulations, as well as the associated cost-benefit of this requirement for firms. Third, they provide the first known empirical evidence linking the level of IFRS compliance with the value relevance of financial statement information. Finally, the findings are relevant for standard setters and for their current review of KSE regulations. In particular, they highlight the importance of establishing and maintaining adequate monitoring and enforcement mechanisms to ensure compliance with accounting standards.
In addition, the finding that stricter compliance with IFRS improves the value relevance of accounting information highlights the importance of full compliance with IFRS and not just mere adoption.
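
A price model of the kind used in value relevance studies regresses share price on earnings per share and book value per share, P = α + β₁·EPS + β₂·BV. A minimal ordinary-least-squares sketch on synthetic firm data (numbers invented so the fit is exact; real studies estimate these coefficients from market data):

```python
def ols(X, y):
    """Least squares via the normal equations (X'X)b = X'y, Gauss-Jordan solve."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)]
         + [sum(r[i] * yi for r, yi in zip(X, y))] for i in range(k)]
    for i in range(k):
        A[i] = [v / A[i][i] for v in A[i]]  # normalise pivot row
        for j in range(k):
            if j != i:
                f = A[j][i]
                A[j] = [vj - f * vi for vj, vi in zip(A[j], A[i])]
    return [row[k] for row in A]

# Synthetic firm rows: [1, EPS, book value]; prices built as 2 + 5*EPS + 0.8*BV
X = [[1, 1.0, 10.0], [1, 2.0, 12.0], [1, 3.0, 8.0], [1, 1.5, 20.0]]
y = [15.0, 21.6, 23.4, 25.5]
alpha, b_eps, b_bv = ols(X, y)
```

In the value relevance literature, the significance and magnitude of β₁ and β₂ (and the regression's explanatory power) are read as evidence of how strongly accounting numbers map into market prices.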


This conceptual paper explores the extent to which reported accounting information captures unique family firm decision-making and intangible asset factors that impact financial value. We review the family firm valuation-relevant literature and identify that this body of research is predicated on the assumption that accounting information reflects the underlying reality of family firms. This research, however, fails to recognise that current accounting technology does not fully capture the family firm factors in the book value of the firm or the implications for the long-run persistence of earnings. Thus, valuation models underpinning the extant empirical research, which are predicated on reported accounting information, may not fully reflect the intrinsic value of family firms. We present propositions on the interaction between accounting information, family factors and valuation as a road map for future empirical research, with a discussion of appropriate methodologies.


Australia needs highly skilled workers to sustain a healthy economy. Current employment-based training models have limitations in meeting the demands for highly skilled labour supply. The research explored current and emerging models of employment-based training to propose more effective models at higher VET qualifications that can maintain a balance between institution and work-based learning.


Water management is vital for mine sites, both for production and for sustainability-related issues. Effective water management is a complex task since the role of water on mine sites is multifaceted. Computer models are tools that represent mine site water interactions and can be used by mine sites to inform or evaluate their water management strategies. Several types of models can be used to represent mine site water interactions. This paper presents three such models: an operational model, an aggregated systems model and a generic systems model. For each model the paper provides a description and example, followed by an analysis of its advantages and disadvantages. The paper hypothesises that, since no model is optimal for all situations, each model should be applied in the situations where it is most appropriate, based upon the scale of water interactions being investigated: unit (operational), inter-site (aggregated systems) or intra-site (generic systems).
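
A systems-style water model ultimately rests on a mass balance over linked stores. A minimal single-store sketch (store name and volumes are hypothetical; real site models link many such stores and tasks):

```python
def simulate_store(initial_volume, inflows, outflows):
    """Step a single water store through paired inflow/outflow volumes (e.g. ML)."""
    volume, history = initial_volume, []
    for inflow, outflow in zip(inflows, outflows):
        volume = max(0.0, volume + inflow - outflow)  # a store cannot go negative
        history.append(volume)
    return history

# e.g. a dam receiving rainfall/runoff while supplying a processing plant
levels = simulate_store(100.0, inflows=[10.0, 10.0], outflows=[5.0, 30.0])
```

The three model types in the paper differ mainly in how many such stores and flows they resolve and at what scale, not in the underlying balance equation.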


The validation of Computed Tomography (CT) based 3D models is an integral part of studies involving 3D models of bones. This is of particular importance when such models are used for Finite Element studies. The validation of 3D models typically involves the generation of a reference model representing the bone’s outer surface. Several different devices have been utilised for digitising a bone’s outer surface, such as mechanical 3D digitising arms, mechanical 3D contact scanners, electro-magnetic tracking devices and 3D laser scanners. However, none of these devices is capable of digitising a bone’s internal surfaces, such as the medullary canal of a long bone. Therefore, this study investigated the use of a 3D contact scanner, in conjunction with a microCT scanner, for generating a reference standard for validating the internal and external surfaces of a CT based 3D model of an ovine femur. One fresh ovine limb was scanned using a clinical CT scanner (Philips Brilliance 64) with a pixel size of 0.4 mm² and slice spacing of 0.5 mm. The limb was then dissected to obtain the soft-tissue-free bone, while care was taken to protect the bone’s surface. A desktop mechanical 3D contact scanner (Roland DG Corporation, MDX 20, Japan) was used to digitise the surface of the denuded bone at a resolution of 0.3 × 0.3 × 0.025 mm. The digitised surfaces were reconstructed into a 3D model using reverse engineering techniques in Rapidform (Inus Technology, Korea). After digitisation, the distal and proximal parts of the bone were removed so that the shaft could be scanned with a microCT (µCT40, Scanco Medical, Switzerland) scanner. The shaft, with the bone marrow removed, was immersed in water and scanned with a voxel size of 0.03 mm³. The bone contours were extracted from the image data using the Canny edge filter in Matlab (The MathWorks). The extracted bone contours were reconstructed into 3D models using Amira 5.1 (Visage Imaging, Germany).
The 3D models of the bone’s outer surface reconstructed from CT and microCT data were compared against the 3D model generated using the contact scanner, and the 3D model of the inner canal reconstructed from the microCT data was compared against the 3D models reconstructed from the clinical CT scanner data. The disparity between the surface geometries of two models was calculated in Rapidform and recorded as an average distance with standard deviation. The comparison of the 3D model of the whole bone generated from the clinical CT data with the reference model produced a mean error of 0.19±0.16 mm, with the shaft more accurate (0.08±0.06 mm) than the proximal (0.26±0.18 mm) and distal (0.22±0.16 mm) parts. The comparison between the outer 3D model generated from the microCT data and the contact scanner model produced a mean error of 0.10±0.03 mm, indicating that the microCT-generated models are sufficiently accurate for validating 3D models generated by other methods. The comparison of the inner models generated from microCT data with those from clinical CT data produced an error of 0.09±0.07 mm. Utilising a mechanical contact scanner in conjunction with a microCT scanner thus enabled validation of both the outer surface of a CT based 3D model of an ovine femur and the surface of the model’s medullary canal.
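
The mean ± standard deviation disparity reported above can be sketched as a nearest-neighbour vertex comparison between two meshes. This is a simplification: Rapidform computes a proper point-to-surface distance, whereas the sketch below measures point-to-point distances on hypothetical coordinates.

```python
import math
import statistics

def surface_deviation(model_points, reference_points):
    """Mean and SD of each model vertex's distance to its nearest reference vertex."""
    distances = [min(math.dist(p, q) for q in reference_points)
                 for p in model_points]
    return statistics.mean(distances), statistics.pstdev(distances)

# e.g. a model surface offset 0.1 mm from the reference along z
mean_err, sd_err = surface_deviation(
    [(0.0, 0.0, 0.1), (1.0, 0.0, 0.1)],
    [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)],
)
```

A brute-force nearest-neighbour search is quadratic in vertex count; production tools use spatial indexing (e.g. k-d trees) for the same metric.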


An information filtering (IF) system monitors an incoming document stream to find the documents that match the information needs specified by the user profiles. Learning to use the user profiles effectively is one of the most challenging tasks when developing an IF system. With the document selection criteria better defined based on the users’ needs, filtering large streams of information can be more efficient and effective. To learn the user profiles, term-based approaches have been widely used in the IF community because of their simplicity and directness, and they are relatively well established. However, these approaches have problems when dealing with polysemy and synonymy, which often lead to an information overload problem. Recently, pattern-based approaches (or Pattern Taxonomy Models (PTM) [160]) have been proposed for IF by the data mining community. These approaches are better at capturing semantic information and have shown encouraging results for improving the effectiveness of IF systems. On the other hand, pattern discovery from large data streams is not computationally efficient, and these approaches have to deal with low-frequency pattern issues. The measures used by the data mining techniques (for example, “support” and “confidence”) to learn the profile have turned out to be unsuitable for filtering, and can lead to a mismatch problem. This thesis uses rough set-based (term-based) reasoning and pattern mining as a unified framework for information filtering to overcome the aforementioned problems. The system consists of two stages: a topic filtering stage and a pattern mining stage. The topic filtering stage is intended to minimise information overloading by filtering out the most likely irrelevant information based on the user profiles. A novel user-profile learning method and a theoretical model of threshold setting have been developed using rough set decision theory.
The second stage (pattern mining) aims to solve the problem of information mismatch. This stage is precision-oriented. A new document-ranking function has been derived by exploiting the patterns in the pattern taxonomy; the most likely relevant documents are assigned higher scores by the ranking function. Because relatively few documents remain after the first stage, the computational cost is markedly reduced; at the same time, pattern discovery yields more accurate results, and the overall performance of the system is improved significantly. The new two-stage information filtering model has been evaluated by extensive experiments. Tests were based on the well-known IR benchmarking processes, using the latest version of the Reuters dataset, namely the Reuters Corpus Volume 1 (RCV1). The performance of the new two-stage model was compared with both term-based and data mining-based IF models. The results demonstrate that the proposed information filtering system significantly outperforms the other IF systems, such as the traditional Rocchio IF model, state-of-the-art term-based models including BM25 and Support Vector Machines (SVM), and the Pattern Taxonomy Model (PTM).
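
The two-stage pipeline can be sketched in miniature: a cheap term-overlap threshold discards most irrelevant documents, and a pattern-based scorer ranks the survivors. The profile terms, patterns, and threshold below are hypothetical stand-ins for the thesis's rough-set-learned profile and mined pattern taxonomy.

```python
PROFILE_TERMS = {"pattern", "mining", "filtering"}                # learned user profile
PATTERNS = [{"pattern", "mining"}, {"information", "filtering"}]  # pattern taxonomy

def passes_topic_filter(doc, threshold=1):
    """Stage 1 (recall-oriented): drop docs sharing too few terms with the profile."""
    return len(PROFILE_TERMS & set(doc.split())) >= threshold

def rank_by_patterns(docs):
    """Stage 2 (precision-oriented): rank survivors by matched patterns."""
    def pattern_score(doc):
        terms = set(doc.split())
        return sum(1 for pattern in PATTERNS if pattern <= terms)
    return sorted(docs, key=pattern_score, reverse=True)

def filter_stream(stream):
    return rank_by_patterns([d for d in stream if passes_topic_filter(d)])
```

Running pattern scoring only on stage-1 survivors is what keeps the expensive pattern matching tractable on a large stream.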


Reliable infrastructure assets impact significantly on quality of life and provide a stable foundation for economic growth and competitiveness. Decisions about the way assets are managed are of utmost importance in achieving this. Timely renewal of infrastructure assets supports reliability and maximum utilisation of infrastructure and enables business and community to grow and prosper. This research initially examined a framework for asset management decisions and then focused in depth on asset renewal optimisation and renewal engineering optimisation. This study had four primary objectives. The first was to develop a new Asset Management Decision Framework (AMDF) for identifying and classifying asset management decisions. The AMDF was developed by applying multi-criteria decision theory, classical management theory and life cycle management. The AMDF is an original and innovative contribution to asset management in that:
· it is the first framework to provide guidance for developing asset management decision criteria based on fundamental business objectives;
· it is the first framework to provide a decision context identification and analysis process for asset management decisions; and
· it is the only comprehensive listing of asset management decision types developed from first principles.
The second objective of this research was to develop a novel multi-attribute Asset Renewal Decision Model (ARDM) that takes account of financial, customer service, health and safety, environmental and socio-economic objectives. The unique feature of this ARDM is that it is the only model to optimise the timing of asset renewal with respect to fundamental business objectives. The third objective of this research was to develop a novel Renewal Engineering Decision Model (REDM) that uses multiple criteria to determine the optimal timing for renewal engineering. The unique features of this model are that:
· it is a novel extension to existing real options valuation models in that it uses overall utility rather than the present value of cash flows to model engineering value; and
· it is the only REDM that optimises the timing of renewal engineering with respect to fundamental business objectives.
The final objective was to develop and validate an Asset Renewal Engineering Philosophy (AREP) consisting of three principles of asset renewal engineering. The principles were validated using a novel application of real options theory. The AREP is the only renewal engineering philosophy in existence. The original contributions of this research are expected to enrich the body of knowledge in asset management by effectively addressing the need for an asset management decision framework, asset renewal and renewal engineering optimisation based on fundamental business objectives, and a novel renewal engineering philosophy.
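
A multi-attribute renewal timing decision can be sketched as choosing the candidate year with the highest weighted utility across business objectives. The candidate years, scores, and weights below are invented for illustration; the thesis's ARDM derives them from the organisation's fundamental objectives.

```python
# Candidate renewal years with illustrative normalised scores per objective
CANDIDATES = {
    2026: {"financial": 0.4, "service": 0.9, "safety": 0.9},  # renew early
    2030: {"financial": 0.8, "service": 0.7, "safety": 0.7},  # balanced timing
    2034: {"financial": 0.9, "service": 0.4, "safety": 0.3},  # defer renewal
}
WEIGHTS = {"financial": 0.4, "service": 0.3, "safety": 0.3}   # assumed weights

def utility(scores):
    """Weighted additive utility across the business objectives."""
    return sum(WEIGHTS[k] * v for k, v in scores.items())

def best_renewal_year():
    return max(CANDIDATES, key=lambda year: utility(CANDIDATES[year]))
```

The additive form makes the trade-off explicit: deferring renewal improves the financial score but erodes service and safety, and the weights decide where the balance tips.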