919 results for accounting-based valuation models


Relevância: 100.00%

Resumo:

During the first decade of the new millennium, fueled by economic development in Spain, urban bus services were extended. Since 2008 and 2009, the onset of the economic crisis, the improvement of these services has been at risk due to economic problems. In this paper, the technical efficiency of the main urban bus companies in Spain during the 2004–2009 period is studied using SBM (slack-based measures) models, establishing the slacks in the services' production inputs. The influence of a series of exogenous variables on the operation of the different services is also analyzed. It is concluded that only 24% of the case studies are efficient and that some urban form variables can explain part of the inefficiency. The methodology used allows inefficiency to be studied in a disaggregated way that other DEA (data envelopment analysis) models do not.
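For readers unfamiliar with slack-based measures, the sketch below shows how an SBM efficiency score aggregates input and output slacks once they have been obtained from the underlying linear program (a minimal illustration of the standard SBM formula on hypothetical operator data, not the paper's own code):

```python
import numpy as np

def sbm_score(x, y, s_minus, s_plus):
    """Slack-based measure of efficiency (Tone-style):
    rho = (1 - mean(s_minus / x)) / (1 + mean(s_plus / y)).
    A unit is SBM-efficient exactly when all slacks are zero (rho == 1)."""
    return (1.0 - np.mean(s_minus / x)) / (1.0 + np.mean(s_plus / y))

# Hypothetical bus operator: inputs (fleet size, staff), output (vehicle-km)
x = np.array([120.0, 300.0])       # observed inputs
y = np.array([5.0e6])              # observed output
s_minus = np.array([10.0, 25.0])   # input slacks returned by the SBM LP
s_plus = np.array([2.0e5])         # output slack returned by the SBM LP

print(f"SBM efficiency: {sbm_score(x, y, s_minus, s_plus):.3f}")  # < 1, inefficient
```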

Relevância: 100.00%

Resumo:

The economic design of a distillation column or distillation sequence is a challenging problem that has been addressed by superstructure approaches. However, these methods have not been widely used because they lead to mixed-integer nonlinear programs that are hard to solve and require complex initialization procedures. In this article, we propose to address this challenging problem by substituting the distillation columns with Kriging-based surrogate models generated from state-of-the-art distillation models. We study different columns of increasing difficulty and show that it is possible to obtain accurate Kriging-based surrogate models. The optimization strategy guarantees convergence to a local optimum for numerically noise-free models. For distillation columns (slightly noisy systems), the Karush–Kuhn–Tucker optimality conditions cannot be tested directly on the actual model, but we can still guarantee a local minimum in a trust region of the surrogate model that contains the actual local minimum.
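A minimal sketch of the surrogate idea, with a cheap analytic function standing in for the rigorous column model and Gaussian process regression (Kriging) from scikit-learn as the surrogate; the cost function, variable bounds and sample sizes are invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def column_cost(x):
    """Stand-in for an expensive rigorous column simulation (hypothetical)."""
    reflux, stages = x
    return (reflux - 2.5) ** 2 + 0.1 * (stages - 30.0) ** 2

# Sample the 'rigorous model' at a small design of experiments
rng = np.random.default_rng(0)
X = rng.uniform([1.0, 10.0], [5.0, 60.0], size=(30, 2))
y = np.array([column_cost(x) for x in X])

# Kriging surrogate = Gaussian process regression with a Matern kernel
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)

# Optimise the cheap surrogate in place of the expensive model
res = minimize(lambda x: gp.predict(x.reshape(1, -1))[0],
               x0=np.array([3.0, 40.0]), bounds=[(1.0, 5.0), (10.0, 60.0)])
print("surrogate optimum:", res.x, "predicted cost:", float(res.fun))
```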

Relevância: 100.00%

Resumo:

Keyword identification in one of two simultaneous sentences is improved when the sentences differ in F0, particularly when they are almost continuously voiced. Sentences of this kind were recorded, monotonised using PSOLA, and re-synthesised to give a range of harmonic ΔF0s (0, 1, 3, and 10 semitones). They were additionally re-synthesised by LPC with the LPC residual frequency shifted by 25% of F0, to give excitation with inharmonic but regularly spaced components. Perceptual identification of frequency-shifted sentences showed a similar large improvement with nominal ΔF0 as seen for harmonic sentences, although overall performance was about 10% poorer. We compared performance with that of two autocorrelation-based computational models comprising four stages: (i) peripheral frequency selectivity and half-wave rectification; (ii) within-channel periodicity extraction; (iii) identification of the two major peaks in the summary autocorrelation function (SACF); (iv) a template-based approach to speech recognition using dynamic time warping. One model sampled the correlogram at the target-F0 period and performed spectral matching; the other deselected channels dominated by the interferer and performed matching on the short-lag portion of the residual SACF. Both models reproduced the monotonic increase observed in human performance with increasing ΔF0 for the harmonic stimuli, but not for the frequency-shifted stimuli. A revised version of the spectral-matching model, which groups patterns of periodicity that lie on a curve in the frequency-delay plane, showed a closer match to the perceptual data for frequency-shifted sentences. The results extend the range of phenomena originally attributed to harmonic processing to grouping by common spectral pattern.
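A simplified sketch of stages (i)-(iii), a summary autocorrelation computed over a bank of bandpass channels; Butterworth bands stand in for genuine gammatone auditory filters, and all parameters are illustrative rather than those of the models above:

```python
import numpy as np
from scipy.signal import butter, lfilter

def sacf(signal, fs, n_channels=16, max_lag_ms=20):
    """Summary autocorrelation: bandpass-filter the signal into channels,
    half-wave rectify each channel, autocorrelate it, and sum the
    normalised channel functions (a simplification of stages i-iii)."""
    edges = np.geomspace(100.0, 0.45 * fs, n_channels + 1)
    max_lag = int(fs * max_lag_ms / 1000)
    summary = np.zeros(max_lag)
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(2, [lo, hi], btype="band", fs=fs)
        chan = np.maximum(lfilter(b, a, signal), 0.0)  # half-wave rectification
        ac = np.correlate(chan, chan, mode="full")[len(chan) - 1:]
        summary += ac[:max_lag] / (ac[0] + 1e-12)      # normalised channel ACF
    return summary

fs = 16000
t = np.arange(0, 0.5, 1 / fs)
mix = np.sin(2 * np.pi * 120 * t) + np.sin(2 * np.pi * 180 * t)  # two 'voices'
s = sacf(mix, fs)
lag = 20 + int(np.argmax(s[20:]))  # skip the trivial zero-lag peak
print(f"strongest periodicity at a lag of {1000 * lag / fs:.2f} ms")
```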

Relevância: 100.00%

Resumo:

The behaviour of self-adaptive systems can be emergent, which means that the system's behaviour may be seen as unexpected by its customers and its developers. Therefore, a self-adaptive system needs to earn the confidence of its customers, and it also needs to resolve any surprise on the part of the developer during testing and maintenance. We believe that these two functions can only be achieved if a self-adaptive system is also capable of self-explanation. We argue that a self-adaptive system's behaviour needs to be explained in terms of the satisfaction of its requirements. Since self-adaptive system requirements may themselves be emergent, we propose the use of goal-based requirements models at runtime to offer self-explanation of how a system is meeting its requirements. We demonstrate the analysis of runtime requirements models to yield a self-explanation codified in a domain-specific language, and discuss possible future work.
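A minimal sketch of the idea of runtime goal models yielding textual self-explanations (the class and goal names are hypothetical; the paper's domain-specific language is not reproduced here):

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Goal:
    """A node in a runtime goal model (hypothetical structure)."""
    name: str
    is_satisfied: Callable[[dict], bool]
    subgoals: List["Goal"] = field(default_factory=list)

    def explain(self, state: dict, indent: str = "") -> str:
        """Trace satisfaction of this goal and its subgoals against
        the observed runtime state, producing a self-explanation."""
        status = "satisfied" if self.is_satisfied(state) else "NOT satisfied"
        lines = [f"{indent}{self.name}: {status}"]
        for g in self.subgoals:
            lines.append(g.explain(state, indent + "  "))
        return "\n".join(lines)

# Runtime state observed by the self-adaptive system (invented values)
state = {"latency_ms": 180, "replicas": 3}
root = Goal("MaintainResponsiveness", lambda s: s["latency_ms"] < 200,
            [Goal("KeepReplicasWarm", lambda s: s["replicas"] >= 2)])
print(root.explain(state))  # explanation phrased in terms of requirements
```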

Relevância: 100.00%

Resumo:

We developed diatom-based prediction models of hydrology and periphyton abundance to inform assessment tools for a hydrologically managed wetland. Because hydrology is an important driver of ecosystem change, hydrologic alterations by restoration efforts could modify biological responses, such as periphyton characteristics. In karstic wetlands, diatoms are particularly important components of mat-forming calcareous periphyton assemblages that both respond to and contribute to the structural organization and function of the periphyton matrix. We examined the distribution of diatoms across the Florida Everglades landscape and found that hydroperiod and periphyton biovolume were strongly correlated with assemblage composition. We present species optima and tolerances for hydroperiod and periphyton biovolume for use in interpreting the directionality of change in these important variables. Predictions of these variables were mapped to visualize landscape-scale spatial patterns in a dominant driver of change in this ecosystem (hydroperiod) and an ecosystem-level response metric of hydrologic change (periphyton biovolume). Specific diatom assemblages inhabiting periphyton mats of differing abundance can be used to infer past conditions and inform management decisions based on how assemblages are changing. This study captures diatom responses to wide gradients of hydrology and periphyton characteristics to inform ecosystem-scale bioassessment efforts in a large wetland.
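Species optima and tolerances of this kind are conventionally computed by weighted averaging; a minimal sketch under that assumption, with invented abundance and hydroperiod values:

```python
import numpy as np

def wa_optimum_tolerance(abundance, env):
    """Weighted-averaging optimum and tolerance of one taxon along an
    environmental gradient (e.g. hydroperiod in days): the optimum is the
    abundance-weighted mean, the tolerance the weighted standard deviation."""
    opt = np.sum(abundance * env) / np.sum(abundance)
    tol = np.sqrt(np.sum(abundance * (env - opt) ** 2) / np.sum(abundance))
    return opt, tol

# Hypothetical data: relative abundance of one diatom taxon at six sites
abundance = np.array([0.00, 0.05, 0.20, 0.40, 0.25, 0.10])
hydroperiod = np.array([60, 120, 180, 240, 300, 360])  # days inundated per year
opt, tol = wa_optimum_tolerance(abundance, hydroperiod)
print(f"optimum = {opt:.0f} days, tolerance = {tol:.0f} days")
```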

Relevância: 100.00%

Resumo:

In our research we investigate the output accuracy of discrete event simulation models and agent-based simulation models when studying human-centric complex systems. In this paper we focus on human reactive behaviour, since it is possible in both modelling approaches to implement human reactive behaviour using standard methods. As a case study we have chosen the retail sector, in particular the operation of the fitting room in the womenswear department of a large UK department store. In our case study we looked at ways of determining the efficiency of implementing new management policies for the fitting room operation by modelling the reactive behaviour of staff and customers of the department. First, we carried out a validation experiment in which we compared the results from our models to the performance of the real system. This experiment also allowed us to establish differences in output accuracy between the two modelling methods. In a second step, a multi-scenario experiment was carried out to study the behaviour of the models when they are used for the purpose of operational improvement. Overall we have found that, for our case study example, both discrete event simulation and agent-based simulation have the same potential to support the investigation into the efficiency of implementing new management policies.
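As an illustration of the discrete event simulation side of such a comparison, a minimal fitting-room queue built with the third-party simpy library; the arrival, service and capacity parameters are hypothetical, not the case study's:

```python
import random
import simpy  # third-party discrete event simulation library

def customer(env, rooms, waits):
    arrive = env.now
    with rooms.request() as req:
        yield req                                       # queue if all rooms busy
        waits.append(env.now - arrive)
        yield env.timeout(random.expovariate(1 / 8.0))  # ~8 min try-on time

def arrivals(env, rooms, waits):
    while True:
        yield env.timeout(random.expovariate(1 / 3.0))  # ~1 arrival per 3 min
        env.process(customer(env, rooms, waits))

random.seed(1)
env = simpy.Environment()
rooms = simpy.Resource(env, capacity=4)  # policy lever: rooms/staff on duty
waits = []
env.process(arrivals(env, rooms, waits))
env.run(until=480)                       # one 8-hour trading day, in minutes
print(f"mean wait: {sum(waits) / len(waits):.1f} min over {len(waits)} customers")
```

Changing `capacity` (or the service-time distribution, to mimic staff interventions) and re-running gives the kind of multi-scenario policy experiment the abstract describes.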

Relevância: 100.00%

Resumo:

That companies should present as true and fair a view of their operations as possible in their annual reports has become a generally accepted fact. This study examines whether and how consulting firms report their human capital, and whether it is possible to supplement the annual report with a non-mandatory human capital valuation model, similar to how GRI works today. The purpose of the thesis is thus to explain how consulting firms handle the reporting of human capital and, based on that, to propose supplements to the annual report that make it more true and fair. The theoretical frame of reference presents various definitions, previous research on accounting and human capital reporting, and the most common proposed human capital valuation models. The study uses a qualitative method with the tools of personal interviews, e-mail interviews and document collection. The analysis focuses on what the work on human capital reporting currently carried out in consulting firms involves, and what could potentially be added to that existing work. The analysis also addresses the consulting firms' opinions of the available human capital valuation models. An important conclusion that can be drawn from the analysis chapter is that it can be said with certainty that there is both an opportunity and a willingness among consulting firms to supplement annual reports with human capital valuation models. The reason the valuation model is offered as a supplement is that it is difficult to incorporate human capital valuation models into the official annual report, given the prevailing valuation principles for assets.

Relevância: 100.00%

Resumo:

The validation of computed tomography (CT) based 3D models is an integral part of studies involving 3D models of bones. This is of particular importance when such models are used for finite element studies. The validation of 3D models typically involves the generation of a reference model representing the bone's outer surface. Several different devices have been utilised for digitising a bone's outer surface, such as mechanical 3D digitising arms, mechanical 3D contact scanners, electro-magnetic tracking devices and 3D laser scanners. However, none of these devices is capable of digitising a bone's internal surfaces, such as the medullary canal of a long bone. Therefore, this study investigated the use of a 3D contact scanner, in conjunction with a microCT scanner, for generating a reference standard for validating the internal and external surfaces of a CT-based 3D model of an ovine femur. One fresh ovine limb was scanned using a clinical CT scanner (Philips Brilliance 64) with a pixel size of 0.4 mm² and slice spacing of 0.5 mm. The limb was then dissected to obtain the soft-tissue-free bone, while care was taken to protect the bone's surface. A desktop mechanical 3D contact scanner (Roland DG Corporation, MDX-20, Japan) was used to digitise the surface of the denuded bone at a resolution of 0.3 × 0.3 × 0.025 mm. The digitised surfaces were reconstructed into a 3D model using reverse engineering techniques in Rapidform (INUS Technology, Korea). After digitisation, the distal and proximal parts of the bone were removed so that the shaft could be scanned with a microCT scanner (µCT40, Scanco Medical, Switzerland). The shaft, with the bone marrow removed, was immersed in water and scanned with a voxel size of 0.03 mm³. The bone contours were extracted from the image data using the Canny edge filter in Matlab (The MathWorks). The extracted bone contours were reconstructed into 3D models using Amira 5.1 (Visage Imaging, Germany). The 3D models of the bone's outer surface reconstructed from CT and microCT data were compared against the 3D model generated using the contact scanner, and the 3D model of the inner canal reconstructed from the microCT data was compared against the 3D models reconstructed from the clinical CT data. The disparity between the surface geometries of two models was calculated in Rapidform and recorded as the average distance with standard deviation. The comparison of the 3D model of the whole bone generated from the clinical CT data with the reference model gave a mean error of 0.19±0.16 mm, with the shaft more accurate (0.08±0.06 mm) than the proximal (0.26±0.18 mm) and distal (0.22±0.16 mm) parts. The comparison between the outer 3D model generated from the microCT data and the contact scanner model gave a mean error of 0.10±0.03 mm, indicating that microCT-generated models are sufficiently accurate for validating 3D models generated by other methods. The comparison of the inner models generated from microCT data with those from clinical CT data gave an error of 0.09±0.07 mm. Utilising a mechanical contact scanner in conjunction with a microCT scanner made it possible to validate the outer surface of a CT-based 3D model of an ovine femur as well as the surface of the model's medullary canal.
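The core comparison step, the average point-to-surface distance with its standard deviation, can be approximated by a nearest-neighbour query between vertex clouds; a minimal sketch on synthetic points (not the Rapidform procedure itself):

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_deviation(test_pts, ref_pts):
    """Mean +/- SD of nearest-neighbour distances from the vertices of a
    test surface to a reference point cloud -- a simple proxy for a
    point-to-surface model comparison."""
    d, _ = cKDTree(ref_pts).query(test_pts)
    return d.mean(), d.std()

# Hypothetical vertex clouds sampled from a CT model and a contact scan
rng = np.random.default_rng(42)
ref = rng.normal(size=(5000, 3))
test = ref + rng.normal(scale=0.1, size=ref.shape)  # ~0.1 units of noise
mean_d, sd_d = surface_deviation(test, ref)
print(f"deviation: {mean_d:.3f} ± {sd_d:.3f} (same units as the input clouds)")
```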

Relevância: 100.00%

Resumo:

An information filtering (IF) system monitors an incoming document stream to find the documents that match the information needs specified by user profiles. Learning to use user profiles effectively is one of the most challenging tasks in developing an IF system. With document selection criteria better defined on the basis of users' needs, filtering large streams of information can be more efficient and effective. To learn user profiles, term-based approaches have been widely used in the IF community because of their simplicity and directness, and they are relatively well established. However, these approaches have problems when dealing with polysemy and synonymy, which often lead to an information overload problem. Recently, pattern-based approaches (or Pattern Taxonomy Models (PTM) [160]) have been proposed for IF by the data mining community. These approaches are better at capturing semantic information and have shown encouraging results for improving the effectiveness of IF systems. On the other hand, pattern discovery from large data streams is not computationally efficient, and these approaches must also deal with low-frequency patterns. The measures used by data mining techniques (for example, "support" and "confidence") to learn the profile have turned out to be unsuitable for filtering, as they can lead to a mismatch problem. This thesis uses rough set-based reasoning (term-based) and a pattern mining approach as a unified framework for information filtering to overcome the aforementioned problems. The system consists of two stages: a topic filtering stage and a pattern mining stage. The topic filtering stage is intended to minimize information overload by filtering out the most likely irrelevant information based on the user profiles. A novel user-profile learning method and a theoretical model for threshold setting have been developed using rough set decision theory. The second stage (pattern mining) aims at solving the information mismatch problem and is precision-oriented. A new document-ranking function has been derived by exploiting the patterns in the pattern taxonomy, so that the most likely relevant documents are assigned higher scores. Because relatively few documents remain after the first stage, the computational cost is markedly reduced; at the same time, pattern discovery yields more accurate results. The overall performance of the system was improved significantly. The new two-stage information filtering model has been evaluated by extensive experiments based on well-known IR benchmarking processes, using the latest version of the Reuters dataset, the Reuters Corpus Volume 1 (RCV1). The performance of the new two-stage model was compared with both term-based and data mining-based IF models. The results demonstrate that the proposed information filtering system significantly outperforms other IF systems, such as the traditional Rocchio IF model and state-of-the-art term-based models including BM25, Support Vector Machines (SVM), and the Pattern Taxonomy Model (PTM).
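A toy sketch of the two-stage structure described above, with a crude term-overlap filter standing in for the rough-set topic stage and weighted phrase matching standing in for pattern-taxonomy ranking (all documents, terms, weights and thresholds are invented):

```python
def stage1_topic_filter(docs, profile_terms, threshold=2):
    """Stage 1 (recall-oriented): drop documents sharing fewer than
    `threshold` profile terms -- a crude stand-in for the rough-set model."""
    return [d for d in docs
            if sum(1 for t in profile_terms if t in d.lower()) >= threshold]

def stage2_pattern_rank(docs, patterns):
    """Stage 2 (precision-oriented): rank the survivors by weighted pattern
    hits, loosely mimicking a pattern-taxonomy scoring function."""
    def score(d):
        return sum(w for p, w in patterns.items() if p in d.lower())
    return sorted(docs, key=score, reverse=True)

docs = ["Stock valuation using accounting models...",
        "Football results from the weekend...",
        "Earnings-based valuation of listed firms..."]
profile = {"valuation", "accounting", "earnings", "firms"}
patterns = {"accounting models": 2.0, "earnings-based valuation": 3.0}

survivors = stage1_topic_filter(docs, profile)  # the football story is dropped
print(stage2_pattern_rank(survivors, patterns))
```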

Relevância: 100.00%

Resumo:

Reliable infrastructure assets impact significantly on quality of life and provide a stable foundation for economic growth and competitiveness. Decisions about the way assets are managed are of utmost importance in achieving this. Timely renewal of infrastructure assets supports reliability and maximum utilisation of infrastructure and enables business and community to grow and prosper. This research initially examined a framework for asset management decisions and then focused in depth on asset renewal optimisation and renewal engineering optimisation. This study had four primary objectives. The first was to develop a new Asset Management Decision Framework (AMDF) for identifying and classifying asset management decisions. The AMDF was developed by applying multi-criteria decision theory, classical management theory and life cycle management. The AMDF is an original and innovative contribution to asset management in that:
· it is the first framework to provide guidance for developing asset management decision criteria based on fundamental business objectives;
· it is the first framework to provide a decision context identification and analysis process for asset management decisions; and
· it is the only comprehensive listing of asset management decision types developed from first principles.
The second objective of this research was to develop a novel multi-attribute Asset Renewal Decision Model (ARDM) that takes account of financial, customer service, health and safety, environmental and socio-economic objectives. The unique feature of this ARDM is that it is the only model to optimise the timing of asset renewal with respect to fundamental business objectives. The third objective of this research was to develop a novel Renewal Engineering Decision Model (REDM) that uses multiple criteria to determine the optimal timing for renewal engineering. The unique features of this model are that:
· it is a novel extension to existing real options valuation models in that it uses overall utility rather than present value of cash flows to model engineering value; and
· it is the only REDM that optimises the timing of renewal engineering with respect to fundamental business objectives.
The final objective was to develop and validate an Asset Renewal Engineering Philosophy (AREP) consisting of three principles of asset renewal engineering. The principles were validated using a novel application of real options theory. The AREP is the only renewal engineering philosophy in existence. The original contributions of this research are expected to enrich the body of knowledge in asset management by effectively addressing the need for an asset management decision framework, asset renewal and renewal engineering optimisation based on fundamental business objectives, and a novel renewal engineering philosophy.
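As a rough illustration of multi-attribute renewal timing in the spirit of the ARDM, the sketch below picks the renewal year that maximises a weighted sum of attribute utilities; the utilities, weights and years are invented, and this is not the thesis's actual model:

```python
import numpy as np

# Candidate renewal years and invented attribute utilities on a 0-1 scale
years = np.arange(2025, 2035)
u_financial = 1 - ((years - 2030) / 5.0) ** 2              # NPV best near end of economic life
u_safety = np.clip(1.1 - 0.08 * (years - 2025), 0.0, 1.0)  # failure risk grows with asset age
u_service = np.clip(1.0 - 0.05 * (years - 2025), 0.0, 1.0)

# Weights over fundamental business objectives (hypothetical)
overall = 0.40 * u_financial + 0.35 * u_safety + 0.25 * u_service
print("optimal renewal year:", int(years[np.argmax(overall)]))  # interior optimum
```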

Relevância: 100.00%

Resumo:

This paper presents a novel two-stage information filtering model which combines the merits of term-based and pattern-based approaches to effectively filter large volumes of information. In particular, the first filtering stage is supported by a novel rough analysis model which efficiently removes a large number of irrelevant documents, thereby addressing the overload problem. The second filtering stage is empowered by a semantically rich pattern taxonomy mining model which effectively fetches incoming documents according to the specific information needs of a user, thereby addressing the mismatch problem. Experiments have been conducted to compare the proposed two-stage filtering (T-SM) model with other possible "term-based + pattern-based" or "term-based + term-based" IF models. The results based on the RCV1 corpus show that the T-SM model significantly outperforms the other types of two-stage IF models.
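Comparisons between IF models on a corpus such as RCV1 typically rest on set-based retrieval metrics; a minimal helper for precision, recall and F1 (illustrative only, not the paper's exact evaluation protocol):

```python
def precision_recall_f1(retrieved, relevant):
    """Set-based IR metrics for one profile: precision, recall and F1
    of the documents a filtering model lets through."""
    tp = len(set(retrieved) & set(relevant))
    p = tp / len(retrieved) if retrieved else 0.0
    r = tp / len(relevant) if relevant else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

retrieved = ["d1", "d3", "d4", "d7"]       # documents a model lets through
relevant = ["d1", "d2", "d3", "d7", "d9"]  # human relevance judgements
print("P=%.2f R=%.2f F1=%.2f" % precision_recall_f1(retrieved, relevant))
```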

Relevância: 100.00%

Resumo:

The objective of this thesis is to investigate the corporate governance attributes of smaller listed Australian firms. This study is motivated by evidence that these firms are associated with more regulatory concerns, the introduction of the ASX Corporate Governance Recommendations in 2004, and a paucity of research to guide regulators and stakeholders of smaller firms. While there is an extensive body of literature examining the effectiveness of corporate governance, the literature principally focuses on larger companies, resulting in a deficiency in the understanding of the nature and effectiveness of corporate governance in smaller firms. Based on a review of the agency theory literature, a theoretical model is developed that posits that agency costs are mitigated by internal governance mechanisms and transparency. The model includes external governance factors, but in many smaller firms these factors are potentially absent, increasing the reliance on the internal governance mechanisms of the firm. Based on the model, the observed greater regulatory intervention in smaller companies may be due to sub-optimal internal governance practices. Accordingly, this study addresses four broad research questions (RQs). First, what is the extent and nature of the ASX Recommendations that have been adopted by smaller firms (RQ1)? Second, what firm characteristics explain differences in the recommendations adopted by smaller listed firms (RQ2), and third, what firm characteristics explain changes in the governance of smaller firms over time (RQ3)? Fourth, how effective are the corporate governance attributes of smaller firms (RQ4)? Six hypotheses are developed to address the RQs. The first two hypotheses explore the extent and nature of corporate governance, while the remaining hypotheses evaluate its effectiveness. A time-series, cross-sectional approach is used to evaluate the effectiveness of governance. Three models, based on individual governance attributes, an index of six items derived from the literature, and an index based on the full list of ASX Recommendations, are developed and tested using a sample of 298 smaller firms with annual observations over a five-year period (2002-2006) before and after the introduction of the ASX Recommendations in 2004. With respect to RQ1, the results reveal that the overall adoption of the recommendations increased from 66 per cent in 2004 to 74 per cent in 2006. Interestingly, the adoption rate for recommendations regarding the structure of the board and formation of committees is significantly lower than the rates for other categories of recommendations. With respect to RQ2, the results reveal that variations in rates of adoption are explained by key firm differences including firm size, profitability, board size, audit quality, and ownership dispersion, while the results for RQ3 were inconclusive. With respect to RQ4, the results provide support for the association between better governance and superior accounting-based performance. In particular, the results highlight the importance of the independence of both the board and audit committee chairs, and of greater accounting-based expertise on the audit committee. In contrast, while there is little evidence that a majority independent board is associated with superior outcomes, there is evidence linking board independence with adverse audit opinion outcomes.
These results suggest that board and chair independence are substitutes; in the presence of an independent chair a majority independent board may be an unnecessary and costly investment for smaller firms. The findings make several important contributions. First, the findings contribute to the literature by providing evidence on the extent, nature and effectiveness of governance in smaller firms. The findings also contribute to the policy debate regarding future development of Australia’s corporate governance code. The findings regarding board and chair independence, and audit committee characteristics, suggest that policy-makers could consider providing additional guidance for smaller companies. In general, the findings offer support for the “if not, why not?” approach of the ASX, rather than a prescriptive rules-based approach.
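As a sketch of the kind of accounting-based performance model such a study estimates, the snippet below runs a pooled OLS of return on assets on a governance index with firm-level controls and clustered standard errors, using statsmodels on simulated data; all variables and coefficients are invented, and this is not the thesis's specification:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical firm-year panel: 298 firms observed over five years
rng = np.random.default_rng(7)
n = 298 * 5
df = pd.DataFrame({
    "gov_index": rng.integers(0, 7, n),   # 0-6 governance items adopted
    "size": rng.normal(17, 1.5, n),       # log total assets
    "leverage": rng.uniform(0, 0.6, n),
})
df["roa"] = (0.01 * df["gov_index"] + 0.004 * df["size"]
             - 0.05 * df["leverage"] + rng.normal(0, 0.05, n))

# Pooled OLS with standard errors clustered by firm
model = smf.ols("roa ~ gov_index + size + leverage", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": np.repeat(np.arange(298), 5)})
print(model.summary().tables[1])
```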

Relevância: 100.00%

Resumo:

Background: In an attempt to establish some consensus on the proper use and design of experimental animal models in musculoskeletal research, AOVET (the veterinary specialty group of the AO Foundation), in concert with the AO Research Institute (ARI) and the European Academy for the Study of Scientific and Technological Advance, convened a group of musculoskeletal researchers, veterinarians, legal experts, and ethicists to discuss, in a frank and open forum, the use of animals in musculoskeletal research.
Methods: The group narrowed the field to fracture research. The consensus opinion resulting from this workshop can be summarized as follows.
Results & Conclusion: Anaesthesia and pain management protocols for research animals should follow the standard protocols applied in clinical work for the species involved. This will improve morbidity and mortality outcomes. A database should be established to facilitate the selection of anaesthesia and pain management protocols for specific experimental surgical procedures and adopted as an International Standard (IS) according to the animal species selected. A list of 10 golden rules and requirements for the conduct of animal experiments in musculoskeletal research was drawn up, comprising: 1) intelligent study designs to receive appropriate answers; 2) minimal complication rates (5 to max. 10%); 3) defined end-points for both welfare and scientific outputs, analogous to quality assessment (QA) audit of protocols in GLP studies; 4) sufficient detail in the materials and methods applied; 5) potentially confounding variables (genetic background, seasonal, hormonal, size, histological, and biomechanical differences); 6) post-operative management with emphasis on analgesia and follow-up examinations; 7) study protocols that satisfy the criteria established for a "justified animal study"; 8) surgical expertise to conduct surgery on animals; 9) pilot studies as a critical part of model validation and powering of the definitive study design; and 10) criteria for funding agencies to include requirements related to animal experiments as part of the overall scientific proposal review protocols. Such agencies are also encouraged to seriously consider and adopt the recommendations described here when awarding funds for specific projects. Specific new requirements and mandates, related both to improving the welfare and the scientific rigour of animal-based research models, are urgently needed as part of the international harmonization of standards.

Relevância: 100.00%

Resumo:

Orthopaedic fracture fixation implants are increasingly being designed using accurate 3D models of long bones based on computed tomography (CT). Unlike CT, magnetic resonance imaging (MRI) does not involve ionising radiation and is therefore a desirable alternative. This study aims to quantify the accuracy of MRI-based 3D models compared with CT-based 3D models of long bones. The femora of five intact cadaver ovine limbs were scanned using a 1.5 T MRI scanner and a CT scanner. Image segmentation of the CT and MRI data was performed using a multi-threshold segmentation method. Reference models were generated by digitising the bone surfaces, free of soft tissue, with a mechanical contact scanner. The MRI- and CT-derived models were validated against the reference models. The results demonstrated that the CT-based models contained an average error of 0.15 mm while the MRI-based models contained an average error of 0.23 mm. Statistical validation showed no significant differences between 3D models based on CT and MRI data. These results indicate that the geometric accuracy of MRI-based 3D models is comparable to that of CT-based models, and that MRI is therefore a potential alternative to CT for the generation of 3D models with high geometric accuracy.
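The statistical validation step can be illustrated with a paired test across specimens; a minimal sketch on invented per-bone error values chosen to match the reported means (the study's own statistical procedure may differ):

```python
import numpy as np
from scipy import stats

# Invented per-specimen mean surface errors (mm) for five femora
ct_err = np.array([0.15, 0.22, 0.11, 0.18, 0.09])
mri_err = np.array([0.23, 0.20, 0.31, 0.17, 0.24])

t_stat, p_val = stats.ttest_rel(ct_err, mri_err)  # paired across the same bones
print(f"CT {ct_err.mean():.2f} mm vs MRI {mri_err.mean():.2f} mm, p = {p_val:.3f}")
```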

Relevância: 100.00%

Resumo:

Legacies of the Global Financial Crisis and major domestic corporate collapses – such as HIH Insurance Pty Ltd and One.Tel Ltd (telecommunications) – have significantly changed Australia's financial regulatory landscape. Legal requirements for auditors have attracted particular attention, as have practice standards more broadly around disclosure and conflict of interest. Conversely, although the successful detection and prosecution of breaches may rest in significant part on forensic accounting activities, Australia's practitioners in this field have no minimum training or qualification standards other than the baseline requirements mandated by the country's three professional accounting bodies. For those unaffiliated with these organizations, no professional oversight exists. In Australia, growth in the forensic accounting industry has been in direct response to public demand for expertise in a broad range of fraud, forensic and business analytics areas in order to improve the corporate governance practices of Australian organizations. During the 1990s, Australian forensic accounting firms expanded and diversified into a number of different areas, going well beyond the examination of financial documents and involvement in financial litigation disputes. "Big 4" accounting firms such as PricewaterhouseCoopers, KPMG, Deloitte and Ernst & Young formed independent forensic accounting or forensic services units; a number of mid-tier and 'boutique' forensic accounting firms similarly expanded into forensic investigative, analytical and advisory services. By 2008, 800 forensic accountants were registered with the country's largest specialist forensic accounting group, the Forensic Accounting Special Interest Group (FASIG) of the ICAA. Currently, obtaining precise figures on the number of forensic accounting practitioners is problematic: professional accounting bodies either do not keep a register or have ceased registering their forensic accounting members; the lack of formal recognition, admission or certification processes complicates the identification of candidates; and the diversity of the skill sets the industry requires has meant an influx of non-accounting-based specialists.