5 results for Task Performance and Analysis
in CORA - Cork Open Research Archive - University College Cork - Ireland
Abstract:
Malaysian Financial Reporting Standard (FRS) No. 136, Impairment of Assets, was issued in 2005. The standard requires public listed companies to report their non-current assets at no more than their recoverable amount. When the value of impaired assets is recovered, or partly recovered, FRS 136 requires the impairment charges to be reversed so that the assets are carried at their new recoverable amount. This study tests whether the reversal of impairment losses by Malaysian firms is more closely associated with economic reasons or with reporting incentives. The sample consists of 182 public companies listed on Bursa Malaysia (formerly known as the Kuala Lumpur Stock Exchange) that reported reversals of their impairment charges during the period 2006-2009. These firms are matched, on the basis of industrial classification and size, with firms that do not reverse impairment. In the year of reversal, this study finds that the reversal firms are more profitable (before reversals) than their matched firms. On average, the Malaysian stock market values the reversals of impairment losses positively. These results suggest that the reversals generally reflect increases in the value of the previously impaired assets. After partitioning firms into those that are likely to manage earnings and those that are not, this study finds that some Malaysian firms reverse impairment charges to manage earnings. Their reversals are not value-relevant and are negatively associated with future firm performance. On the other hand, the reversals of firms which are deemed not to be earnings managers are positively associated with both future firm performance and current stock price performance, and this is the dominant motivation for the reversal of impairment charges in Malaysia. In further analysis, this study provides evidence that the opportunistic reversals are also associated with other manifestations of earnings management, namely abnormal working capital accruals and the motivation to avoid earnings declines. In general, the findings suggest that fair value measurement in the impairment standard provides useful information to the users of financial statements.
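The matched-sample design described above can be illustrated with a minimal sketch of a paired comparison of pre-reversal profitability. The variable names, the use of ROA as the profitability measure and the tiny illustrative dataset are assumptions for illustration only, not the study's actual data or tests.

```python
# Minimal sketch of a matched-pair profitability comparison.
# ROA and the figures below are illustrative assumptions, not the study's data.
import pandas as pd
from scipy import stats

# Each row pairs a reversal firm with its industry- and size-matched control.
pairs = pd.DataFrame({
    "reversal_firm_roa": [0.08, 0.11, 0.05, 0.09, 0.07],   # before reversals
    "matched_firm_roa":  [0.05, 0.07, 0.04, 0.06, 0.05],
})

# Paired t-test: are reversal firms more profitable than their matched firms?
t_stat, p_value = stats.ttest_rel(pairs["reversal_firm_roa"],
                                  pairs["matched_firm_roa"])
diff = (pairs["reversal_firm_roa"] - pairs["matched_firm_roa"]).mean()
print(f"mean difference = {diff:.4f}, t = {t_stat:.2f}, p = {p_value:.4f}")
```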
Abstract:
There has been increased use of the Doubly-Fed Induction Machine (DFIM) in ac drive applications in recent times, particularly in the field of renewable energy systems and other high-power variable-speed drives. The DFIM is widely regarded as the optimal generation system for both onshore and offshore wind turbines and has also been considered in wave power applications. Wind power generation is the most mature renewable technology. However, wave energy has attracted considerable interest recently, as the potential for power extraction is very significant. Various wave energy converter (WEC) technologies currently exist, with the oscillating water column (OWC) type converter being one of the most advanced. There are fundamental differences between the profile of the pneumatic power supplied by the OWC WEC and that of a wind turbine, and this causes significant challenges in the selection and rating of electrical generators for OWC devices. The thesis initially aims to provide an accurate per-phase equivalent circuit model of the DFIM by investigating various characterisation testing procedures. A novel testing methodology based on the series-coupling tests is employed and is found to provide a more accurate representation of the DFIM than the standard IEEE testing methods, because the series-coupling tests provide a direct method of determining the equivalent-circuit resistances and inductances of the machine. A second novel method, known as the extended short-circuit test, is also presented and investigated as an alternative characterisation method. Experimental results on a 1.1 kW DFIM and a 30 kW DFIM utilising the various characterisation procedures are presented in the thesis. The various test methods are analysed and validated through comparison of model predictions and torque-versus-speed curves for each induction machine. Sensitivity analysis is also used as a means of quantifying the effect of experimental error on the results taken from each of the testing procedures and is used to determine the suitability of the test procedures for characterising each of the devices. The series-coupling differential test is demonstrated to be the optimum test. The research then focuses on the OWC WEC and the modelling of this device. A software model is implemented based on data obtained from a scaled prototype device situated at the Irish test site. Test data from the electrical system of the device is analysed, and this data is used to develop a performance curve for the air turbine utilised in the WEC. This performance curve was applied in a software model to represent the turbine in the electro-mechanical system, and the software results are validated against the measured electrical output data from the prototype test device. Finally, once both the DFIM and the OWC WEC power take-off system have been modelled successfully, an investigation of the application of the DFIM to the OWC WEC model is carried out to determine the electrical machine rating required for the pulsating power derived from the OWC WEC device. Thermal analysis of a 30 kW induction machine is carried out using a first-order thermal model. The simulations quantify the limits of operation of the machine and enable the development of rating requirements for the electrical generation system of the OWC WEC. The thesis can be considered to have three sections. The first section of the thesis contains Chapters 2 and 3 and focuses on the accurate characterisation of the doubly-fed induction machine using various testing procedures.
The second section, containing Chapter 4, concentrates on the modelling of the OWC WEC power take-off, with particular focus on the Wells turbine. Validation of this model is carried out through comparison of simulations and experimental measurements. The third section of the thesis utilises the OWC WEC model from Chapter 4 together with a 30 kW induction machine model to determine the optimum device rating for the specified machine. Simulations are carried out to perform thermal analysis of the machine and to give a general insight into electrical machine rating for an OWC WEC device.
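To make the rating approach concrete, the sketch below shows a generic first-order thermal model driven by a pulsating loss profile of the kind an OWC power cycle produces. All parameter values and the loss waveform are illustrative assumptions, not measurements from the thesis's 30 kW machine or prototype device.

```python
import numpy as np

# First-order thermal model: dθ/dt = (P_loss - θ/R_th) / C_th
# Parameter values below are illustrative assumptions only.
R_th = 0.05          # thermal resistance, winding to ambient [K/W]
C_th = 20_000.0      # thermal capacitance [J/K]
dt = 1.0             # time step [s]
t = np.arange(0.0, 3600.0, dt)

# Pulsating loss profile approximating an OWC pneumatic power cycle (~10 s period).
P_loss = 500.0 + 750.0 * (1.0 + np.sin(2.0 * np.pi * t / 10.0))   # [W]

theta = np.zeros_like(t)     # temperature rise above ambient [K]
for k in range(1, len(t)):
    # Forward Euler integration of the first-order thermal equation.
    theta[k] = theta[k - 1] + dt * (P_loss[k - 1] - theta[k - 1] / R_th) / C_th

print(f"temperature rise after 1 h ≈ {theta[-1]:.1f} K "
      f"(steady state for the mean loss ≈ {P_loss.mean() * R_th:.1f} K)")
```

Comparing the simulated temperature rise against a winding-insulation limit is how such a model can be used to bound the continuous rating required for a given pulsating power profile.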
Abstract:
In 1966, Roy Geary, Director of the ESRI, noted that "the absence of any kind of import and export statistics for regions is a grave lacuna" and further noted that if regional analyses were to be developed then regional Input-Output Tables must be put on the "regular statistical assembly line". Forty-five years later, the lacuna lamented by Geary still exists and remains the most significant challenge to the construction of regional Input-Output Tables in Ireland. The continued paucity of sufficient regional data to compile effective regional Supply and Use and Input-Output Tables has retarded the capacity to construct sound regional economic models and to provide a robust evidence base with which to formulate and assess regional policy. This study makes a first step towards addressing this gap by presenting the first set of fully integrated, symmetric Supply and Use and domestic Input-Output Tables compiled for the NUTS 2 regions in Ireland: the Border, Midland and Western region and the Southern and Eastern region. These tables are general purpose in nature and are fully consistent with the official national Supply and Use and Input-Output Tables and with the regional accounts. The tables are constructed using a survey-based, or bottom-up, approach rather than employing modelling techniques, yielding more robust and credible tables. These tables are used to present a descriptive statistical analysis of the two administrative NUTS 2 regions in Ireland, drawing particular attention to the underlying structural differences in regional trade balances and the composition of Gross Value Added in those regions. By deriving regional employment multipliers, Domestic Demand Employment matrices are constructed to quantify and illustrate the supply-chain impact on employment. In the final part of the study, the predictive capability of the Input-Output framework is tested over two time periods. For both periods, the static Leontief production function assumptions are relaxed to allow for labour productivity. Comparative results from this experiment are presented.
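The multiplier machinery behind such tables is standard Leontief algebra; the sketch below shows how output and Type I employment multipliers follow from an input coefficient matrix. The three-sector matrix, final demand vector and employment coefficients are illustrative assumptions, not figures from the regional tables themselves.

```python
import numpy as np

# Illustrative 3-sector input (technical) coefficient matrix A,
# final demand vector and employment coefficients (jobs per unit of output).
A = np.array([[0.10, 0.05, 0.02],
              [0.20, 0.15, 0.10],
              [0.05, 0.10, 0.08]])
final_demand = np.array([100.0, 250.0, 180.0])
emp_coeff = np.array([4.0, 2.5, 6.0]) / 1000.0

# Leontief inverse: total (direct + indirect) output per unit of final demand.
L = np.linalg.inv(np.eye(3) - A)
output = L @ final_demand

# Type I employment multipliers: total jobs supported along the supply chain
# per job directly employed in each sector.
emp_multipliers = (emp_coeff @ L) / emp_coeff

print("Gross output by sector:", np.round(output, 1))
print("Type I employment multipliers:", np.round(emp_multipliers, 2))
```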
Abstract:
M66, an X-ray-induced mutant of winter wheat (Triticum aestivum) cv. Guardian, exhibits broad-spectrum resistance to powdery mildew (Blumeria graminis f. sp. tritici), yellow rust (Puccinia striiformis f. sp. tritici) and leaf rust (Puccinia recondita f. sp. tritici), along with partial resistance to Stagonospora nodorum blotch (caused by the necrotroph Stagonospora nodorum) and septoria tritici blotch (caused by the hemibiotroph Mycosphaerella graminicola), compared to the parent plant ‘Guardian’. Analysis revealed that M66 exhibited no symptoms of infection following artificial inoculation with Bgt in the glasshouse after the adult growth stage (GS 45). Resistance in M66 was associated with widespread leaf flecking which developed during tillering. Flecking also occurred in M66 leaves without Bgt challenge; as a result, grain yields were reduced by approximately 17% compared to ‘Guardian’ in the absence of disease. At the seedling stage, M66 exhibited partial resistance. M66, along with the Tht mutants (Tht12, Tht13), also exhibits increased tolerance to abiotic environmental stresses, such as drought and heat stress, at seedling and adult growth stages. However, adult M66 exhibited increased susceptibility to the aphid Schizaphis graminum compared to ‘Guardian’. Resistance to Bgt in M66 was characterized by increased and earlier H2O2 accumulation at the site of infection, which resulted in increased papilla formation in epidermal cells compared to ‘Guardian’. Papilla formation was associated with reduced pathogen ingress and haustorium formation, indicating that the primary cause of resistance in M66 was prevention of pathogen penetration. Heat treatment at 46 °C prior to challenge with Bgt also induced partial disease resistance to Blumeria graminis f. sp. tritici in ‘Guardian’ and M66 seedlings. This was characterized by a delay in primary infection, due to increased production of reactive oxygen species (ROS), such as hydrogen peroxide, together with ROS-scavenging enzymes and Hsp70, resulting in cross-linking of cell wall components prior to inoculation. This actively prevented the fungus from penetrating the epidermal cell wall. Proteomics analysis using 2-D gel electrophoresis identified primary and secondary disease resistance effects in M66, including detection of ROS-scavenging enzymes (4, 24 hai), such as ascorbate peroxidase and a superoxide dismutase isoform (CuZnSOD), which were absent from ‘Guardian’. Chitinase (a PR protein) was also upregulated (24 hai) in M66 compared to ‘Guardian’. Monosomic and ditelosomic analysis of M66 revealed that the mutation in M66 is located on the long arm of chromosome 2B (2BL). Chromosome 2BL is known to carry key genes involved in resistance to pathogens such as those causing stripe rust and powdery mildew. The TaMloB1 gene, an orthologue of the barley Mlo gene, is also located on chromosome 2BL. Sanger sequencing of part of the coding sequence revealed no deletions in the TaMloB1 gene between ‘Guardian’ and M66.
Abstract:
It is estimated that the quantity of digital data being transferred, processed or stored at any one time currently stands at 4.4 zettabytes (4.4 × 2⁷⁰ bytes), and this figure is expected to grow by a factor of 10 to 44 zettabytes by 2020. Exploiting this data is, and will remain, a significant challenge. At present there is the capacity to store 33% of the digital data in existence at any one time; by 2020 this capacity is expected to fall to 15%. These statistics suggest that, in the era of Big Data, the identification of important, exploitable data will need to be done in a timely manner. Systems for the monitoring and analysis of data, e.g. stock markets, smart grids and sensor networks, can be made up of massive numbers of individual components. These components can be geographically distributed yet may interact with one another via continuous data streams, which in turn may affect the state of the sender or receiver. This introduces a dynamic causality, which further complicates the overall system by introducing a temporal constraint that is difficult to accommodate. Practical approaches to realising the system described above have led to a multiplicity of analysis techniques, each of which concentrates on specific characteristics of the system being analysed and treats these characteristics as the dominant component affecting the results being sought. This multiplicity of analysis techniques introduces another layer of heterogeneity, that is, heterogeneity of approach, partitioning the field to the extent that results from one domain are difficult to exploit in another. The question asked is whether a generic solution for the monitoring and analysis of data can be identified that accommodates temporal constraints, bridges the gap between expert knowledge and raw data, and enables data to be effectively interpreted and exploited in a transparent manner. The approach proposed in this dissertation acquires, analyses and processes data in a manner that is free of the constraints of any particular analysis technique, while at the same time facilitating these techniques where appropriate. Constraints are applied by defining a workflow based on the production, interpretation and consumption of data. This supports the application of different analysis techniques to the same raw data without the danger of incorporating hidden bias. To illustrate and realise this approach, a software platform has been created that allows for the transparent analysis of data, combining analysis techniques with a maintainable record of provenance so that independent third-party analysis can be applied to verify any derived conclusions. To demonstrate these concepts, a complex real-world example involving the near real-time capture and analysis of neurophysiological data from a neonatal intensive care unit (NICU) was chosen. A system was engineered to gather raw data, analyse that data using different analysis techniques, uncover information, incorporate that information into the system and curate the evolution of the discovered knowledge. The application domain was chosen for three reasons: firstly, because it is complex and no comprehensive solution exists; secondly, it requires tight interaction with domain experts, thus requiring the handling of subjective knowledge and inference; and thirdly, given the dearth of neurophysiologists, there is a real-world need to provide a solution for this domain.
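The idea of pairing every derived result with a maintainable record of provenance can be sketched with a small data structure. The class names, fields and the stand-in analysis step below are illustrative assumptions, not the platform's actual API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any

@dataclass
class ProvenanceRecord:
    source_ids: list[str]        # raw data streams the result was derived from
    technique: str               # analysis technique applied
    parameters: dict[str, Any]   # configuration used, so the result can be reproduced
    produced_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

@dataclass
class DerivedResult:
    value: Any
    provenance: ProvenanceRecord

def analyse(raw_segment: list[float], stream_id: str, technique: str) -> DerivedResult:
    """Apply one analysis technique to a raw data segment, keeping its provenance."""
    value = sum(raw_segment) / len(raw_segment)   # stand-in for a real analysis step
    return DerivedResult(
        value=value,
        provenance=ProvenanceRecord(
            source_ids=[stream_id],
            technique=technique,
            parameters={"window_samples": len(raw_segment)},
        ),
    )

result = analyse([0.2, 0.4, 0.3], stream_id="eeg-channel-1", technique="mean-amplitude")
print(result.provenance)
```

Because every derived value carries the identities of its sources, the technique applied and the parameters used, a third party can re-run the same raw data through a different technique and compare conclusions, which is the transparency property the dissertation argues for.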