961 results for POWER QUALITY
Abstract:
Aston University has been working closely with key companies from within the electricity industry for several years, initially in the development and delivery of an employer-led foundation degree programme in electrical power engineering, and more recently, in the development of a progression pathway for foundation degree graduates to achieve a Bachelors-level qualification. The Electrical Power Engineering foundation degree was developed in close consultation with the industry such that the programme is essentially owned by the sector. Programme delivery has required significant shifts away from traditional HE teaching patterns whilst maintaining the quality requirement and without compromise of the academic degree standard. Block teaching (2-week slots), partnership delivery, off-site student support and work-based learning have all presented challenges as we have sought to maximise the student learning experience and to ensure that the graduates are fit-for-purpose and "hit the ground running" within a defined career structure for sponsoring companies. This paper will outline the skills challenges facing the sector; describe programme developments and delivery challenges; and articulate some observations and conclusions around programme effectiveness, the impact of foundation degree graduates in the workplace and the significance of the close working relationship with key sponsoring companies. Copyright © 2012.
Abstract:
Spark-ignited (SI) gas engines are designed to run on fuel gas only and are limited to the flammable range of the gas, that is, the range of concentrations of a gas or vapor that will burn after ignition. Fuel gases such as syngas from gasification or biogas must meet high quality and chemical purity standards for combustion in SI gas engines. Considerable effort has been devoted to fast pyrolysis over the years, and some of the product oils have been tested in diesel or dual-fuel engines since 1993. For biogas conversion, dual-fuel engines are usually used, while for synthesis gas the use of gas engines is more common. Trials using wood-derived pyrolysis oil from fast pyrolysis have not yet been a success story; these approaches have usually failed due to the high corrosivity of the pyrolysis oils.
Abstract:
Background/Aims: To develop and assess the psychometric validity of a Chinese-language vision health-related quality-of-life (VRQoL) measurement instrument for the Chinese visually impaired. Methods: The Low Vision Quality of Life Questionnaire (LVQOL) was translated and adapted into the Chinese-version Low Vision Quality of Life Questionnaire (CLVQOL). The CLVQOL was completed by 100 randomly selected people with low vision (primary group) and 100 people with normal vision (control group). Ninety-four participants from the primary group completed the CLVQOL a second time 2 weeks later (test-retest group). The internal consistency reliability, test-retest reliability, item-internal consistency, item-discrimination validity, construct validity and discriminatory power of the CLVQOL were calculated. Results: The review committee agreed that the CLVQOL replicated the meaning of the LVQOL and was sensitive to cultural differences. The Cronbach's α coefficient and the split-half coefficient for the four scales and the total CLVQOL were 0.75-0.97. The test-retest reliability as estimated by the intraclass correlation coefficient was 0.69-0.95. Item-internal consistency was >0.4 and item-discrimination validity was generally <0.40. Varimax rotation factor analysis of the CLVQOL identified four principal factors. The quality-of-life ratings on the four subscales and the total score of the CLVQOL in the primary group were lower than those in the control group, both for hospital-based subjects and community-based subjects. Conclusion: The Chinese-version CLVQOL is a culturally specific vision-related quality-of-life measurement instrument. It satisfies conventional psychometric criteria, discriminates visually healthy populations from low vision patients and may be valuable in screening the local community as well as for use in clinical practice or research. © Springer 2005.
Abstract:
We present experimental results for the effect of an increased supervisory signal power in a high-loss loopback supervisory system in an optically amplified wavelength division multiplexing (WDM) transmission line. The study focuses on the investigation of increasing the input power for the supervisory signal and the effect on the co-propagating WDM data signals using different channel spacing. This investigation is useful for determining the power limitation of the supervisory signal if extra power is needed to improve the monitoring. The study also shows the effect of spacing on the quality of the supervisory signal itself because of interaction with adjacent data signals.
Abstract:
This paper presents an assessment of the technical and economic performance of thermal processes to generate electricity from a wood chip feedstock by combustion, gasification and fast pyrolysis. The scope of the work begins with the delivery of a wood chip feedstock at a conversion plant and ends with the supply of electricity to the grid, incorporating wood chip preparation, thermal conversion, and electricity generation in dual fuel diesel engines. Net generating capacities of 1–20 MWe are evaluated. The techno-economic assessment is achieved through the development of a suite of models that are combined to give cost and performance data for the integrated system. The models include feed pretreatment, combustion, atmospheric and pressure gasification, fast pyrolysis with pyrolysis liquid storage and transport (an optional step in de-coupled systems) and diesel engine or turbine power generation. The models calculate system efficiencies, capital costs and production costs. An identical methodology is applied in the development of all the models so that all of the results are directly comparable. The electricity production costs have been calculated for 10th plant systems, indicating the costs that are achievable in the medium term after the high initial costs associated with novel technologies have reduced. The costs converge at the larger scale with the mean electricity price paid in the EU by a large consumer, and there is therefore potential for fast pyrolysis and diesel engine systems to sell electricity directly to large consumers or for on-site generation. However, competition will be fierce at all capacities since electricity production costs vary only slightly between the four biomass to electricity systems that are evaluated. Systems de-coupling is one way that the fast pyrolysis and diesel engine system can distinguish itself from the other conversion technologies. 
Evaluations in this work show that situations requiring several remote generators are much better served by a large fast pyrolysis plant that supplies fuel to de-coupled diesel engines than by constructing an entire close-coupled system at each generating site. Another advantage of de-coupling is that the fast pyrolysis conversion step and the diesel engine generation step can operate independently, with intermediate storage of the fast pyrolysis liquid fuel, increasing overall reliability. Peak load or seasonal power requirements would also benefit from de-coupling since a small fast pyrolysis plant could operate continuously to produce fuel that is stored for use in the engine on demand. Current electricity production costs for a fast pyrolysis and diesel engine system are 0.091/kWh at 1 MWe when learning effects are included. These systems are handicapped by the typical characteristics of a novel technology: high capital cost, high labour, and low reliability. As such the more established combustion and steam cycle produces lower cost electricity under current conditions. The fast pyrolysis and diesel engine system is a low capital cost option but it also suffers from relatively low system efficiency particularly at high capacities. This low efficiency is the result of a low conversion efficiency of feed energy into the pyrolysis liquid, because of the energy in the char by-product. A sensitivity analysis has highlighted the high impact on electricity production costs of the fast pyrolysis liquids yield. The liquids yield should be set realistically during design, and it should be maintained in practice by careful attention to plant operation and feed quality. Another problem is the high power consumption during feedstock grinding. Efficiencies may be enhanced in ablative fast pyrolysis which can tolerate a chipped feedstock. This has yet to be demonstrated at commercial scale. 
In summary, the fast pyrolysis and diesel engine system has great potential to generate electricity at a profit in the long term, and at a lower cost than any other biomass to electricity system at small scale. This future viability can only be achieved through the construction of early plant that could, in the short term, be more expensive than the combustion alternative. Profitability in the short term can best be achieved by exploiting niches in the market place and specific features of fast pyrolysis. These include:
• countries or regions with fiscal incentives for renewable energy such as premium electricity prices or capital grants;
• locations with high electricity prices so that electricity can be sold direct to large consumers or generated on-site by companies who wish to reduce their consumption from the grid;
• waste disposal opportunities where feedstocks can attract a gate fee rather than incur a cost;
• the ability to store fast pyrolysis liquids as a buffer against shutdowns or as a fuel for peak-load generating plant;
• de-coupling opportunities where a large, single pyrolysis plant supplies fuel to several small and remote generators;
• small-scale combined heat and power opportunities;
• sales of the excess char, although a market has yet to be established for this by-product; and
• potential co-production of speciality chemicals and fuel for power generation in fast pyrolysis systems.
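The electricity production costs discussed above follow the usual levelized-cost logic: annualize the capital charge, add operating costs, and divide by annual generation. A minimal sketch of that calculation, with purely invented figures (this is not the paper's model, whose cost data are far more detailed):

```python
def electricity_cost(capital, annual_opex, annual_kwh,
                     discount_rate=0.10, life_years=20):
    """Levelized electricity production cost per kWh:
    annualized capital charge plus operating cost, over annual generation."""
    # Capital recovery factor spreads the capital cost over the plant life
    # at the chosen discount rate.
    crf = (discount_rate * (1 + discount_rate) ** life_years
           / ((1 + discount_rate) ** life_years - 1))
    return (capital * crf + annual_opex) / annual_kwh

# Illustrative numbers for a small de-coupled engine site (invented).
cost = electricity_cost(capital=2.5e6, annual_opex=1.5e5, annual_kwh=7.0e6)
```

Comparisons of this kind, repeated across capacities and conversion routes, are what drive the convergence of costs at larger scale noted in the abstract.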
Abstract:
Purpose: To evaluate lenses produced by excimer laser ablation of poly(methyl methacrylate) (PMMA) plates. Setting: University research laboratory. Methods: Two Nidek EC-5000 scanning-slit excimer laser systems were used to ablate plane-parallel plates of PMMA. The ablated lenses were examined by focimetry, interferometry, and mechanical surface profiling. Results: The spherical optical powers of the lenses matched the expected values, but the cylindrical powers were generally lower than intended. Interferometry revealed marked irregularity in the surface of negative corrections, which often had a positive “island” at their center. Positive corrections were generally smoother. These findings were supported by the results of mechanical profiling. Contrast sensitivity measurements carried out when observing through ablated lenses whose power had been neutralized with a suitable spectacle lens of opposite sign confirmed that the surface irregularities of the ablated lenses markedly reduced contrast sensitivity over a range of spatial frequencies. Conclusion: Improvements in beam delivery systems seem desirable.
Abstract:
The modulation instability (MI) is one of the main factors responsible for the degradation of beam quality in high-power laser systems. The so-called B-integral restriction is commonly used as the criterion for MI control in passive optics devices. For amplifiers, the adiabatic model, which assumes locally the Bespalov-Talanov expression for MI growth, is commonly used to estimate the destructive impact of the instability. We present here the exact solution of MI development in amplifiers. We determine the parameters which control the effect of MI in amplifiers and calculate the MI growth rate as a function of those parameters. The safety range of operational parameters is presented. The results of the exact calculations are compared with the adiabatic model, and the range of validity of the latter is determined. We demonstrate that for practical situations the adiabatic approximation noticeably overestimates MI. The additional margin of laser system design is quantified. © 2010 Optical Society of America.
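For context, the B-integral restriction mentioned above rests on standard definitions (not taken from this abstract): the accumulated nonlinear phase along the beam path is

```latex
B = \frac{2\pi}{\lambda} \int n_2 \, I(z) \, dz ,
```

and in the classical Bespalov-Talanov picture the fastest-growing transverse perturbation is amplified in intensity by roughly $e^{2B}$, which is why passive-optics designs commonly keep $B$ below about 2 to 3. The paper's point is that applying this expression locally through an amplifier (the adiabatic model) overestimates the actual MI growth.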
Abstract:
In this work, we report a high growth rate of nanocrystalline diamond (NCD) films on silicon wafers of 2 inches in diameter using a new growth regime, which employs high power and a CH4/H2/N2/O2 plasma in a 5 kW MPCVD system. This is distinct from the commonly used hydrogen-poor Ar/CH4 chemistries for NCD growth. Upon raising the microwave power from 2000 W to 3200 W, the growth rate of the NCD films increases from 0.3 to 3.4 μm/h, i.e., an order-of-magnitude enhancement of the growth rate was achieved at high microwave power. The morphology, grain size, microstructure, orientation or texture, and crystalline quality of the NCD samples were characterized by scanning electron microscopy (SEM), atomic force microscopy (AFM), X-ray diffraction, and micro-Raman spectroscopy. The combined effect of nitrogen addition, microwave power, and temperature on NCD growth is discussed from the point of view of gas phase chemistry and surface reactions. © 2011 Elsevier B.V. All rights reserved.
Abstract:
The availability of a regular power supply has been identified as one of the major stimulants for the growth and development of any nation and is thus important for its economic well-being. The problems of the Nigerian power sector stem from many factors, culminating in slow developmental growth and an inability to meet the power demands of its citizens regardless of the abundance of human and natural resources in the nation. The research therefore had the main aim of investigating the importance and contributions of risk management to the success of projects specific to the power sector. To achieve this aim it was pertinent to examine the efficacy of the risk management process in practice, elucidate the various risks typically associated with projects in the power sector (construction, contractual, political, financial, design, human resource and environmental risk factors) and determine the current situation of risk management practice in Nigeria. To address these factors, which have only been subject to limited in-depth academic research, a rigorous mixed research method was adopted (quantitative and qualitative data analysis). A review of the Nigerian power sector was also carried out as a precursor to the data collection stage. Using a purposive sampling technique, respondents were identified and a questionnaire survey was administered. The research hypotheses were tested using inferential statistics (Pearson correlation, Chi-square test, t-test and ANOVA) and the findings revealed the need for the development of a new risk management implementation framework.
The proposed framework was tested within a company project, for interpreting the dynamism and essential benefits of risk management, with the aim of improving project performance (time), reducing the level of fragmentation (quality) and improving profitability (cost) within the Nigerian power sector, in order to bridge the gap between theory and practice. It was concluded that Nigeria's poor risk management practices have prevented it from experiencing strong growth and development. The study, however, concludes that the successful implementation of the developed risk management framework may help it to attain this status by enabling it to become more prepared and flexible to face challenges that previously led to project failures, thus contributing to its prosperity. The research study provides an original contribution theoretically, methodologically and practically, which adds to the project risk management body of knowledge and to the Nigerian power sector.
Abstract:
Four-leg dc-ac power converters are widely used in power grids to manage grid voltage unbalance caused by the interconnection of single-phase or three-phase unbalanced loads. These converters can further be connected in parallel to increase the overall power rating. The control of these converters poses a particular challenge if they are placed far apart with no links between them (e.g., in islanded microgrids). This challenge is studied in this paper, with each four-leg converter designed to provide improved common current sharing and selective voltage-quality enhancement. The common current sharing, including the zero-sequence component, is necessary since loads are spread over the microgrid and are hence the common responsibility of all converters. The voltage-quality enhancement should, however, be more selective, since different loads have different sensitivity levels towards voltage disturbances. Converters connected to the more sensitive load buses should therefore be selectively triggered for compensation when voltage unbalances at their protected buses exceed predefined thresholds. The proposed scheme is therefore different from conventional centralized schemes protecting only a common bus. Simulation and experimental results obtained have verified the effectiveness of the proposed scheme when applied to a four-wire islanded microgrid.
Abstract:
This particular study was a sub-study of an on-going investigation by Porter and Kazcaraba (1994) at the Veterans Administration Medical Center in Miami. While the Porter and Kazcaraba study utilizes multiple measures to determine the impact of nurse patient collaborative care on quality of life of cardiovascular patients receiving anticoagulant therapy, this study sought to find whether health education could empower similar clients to improve their quality of life. A health education program based on Freire's belief that shared collective knowledge empowers individuals to improve their lives and their community and Porter's nurse patient collaborative care model was used. Findings on a sample of thirty-eight subjects revealed strong correlations between self-esteem and life satisfaction as well as a trend towards increased power post-treatment. No group comparisons were made at posttest because the sample size was too small for meaningful statistical analysis.
Abstract:
Integration of the measurement activity into the production process is an essential rule in digital enterprise technology, especially for large volume product manufacturing, such as the aerospace, shipbuilding, power generation and automotive industries. Measurement resource planning is a structured method of selecting and deploying the measurement resources necessary to implement the quality aims of product development. In this research, a new mapping approach for measurement resource planning is proposed. Firstly, quality aims are identified in the form of a number of specifications and engineering requirements of the quality characteristics (QCs) at a specific stage of the product life cycle, and measurement systems are classified according to the attributes of the QCs. Secondly, a matrix mapping approach for measurement resource planning is outlined, together with an optimization algorithm for matching quality aims with measurement systems. Finally, the proposed methodology has been studied in shipbuilding to solve the problem of measurement resource planning, by which the measurement resources are deployed to satisfy all the quality aims. © Springer-Verlag Berlin Heidelberg 2010.
Abstract:
X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high amount of radiation dose to the patient compared to other x-ray imaging modalities, and as a result of this fact, coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality: all else being held equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.
A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate to assess modern CT scanners that have implemented those aforementioned dose reduction technologies.
Thus the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.
The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness by which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).
First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection (FBP) vs. Advanced Modeled Iterative Reconstruction (ADMIRE)). A mathematical observer model (i.e., computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (increase in detectability index by up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.
Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.
Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included naïve simple metrics of image quality such as contrast-to-noise ratio (CNR) and more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found to not correlate strongly with human performance, especially when comparing different reconstruction algorithms.
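The contrast between the naïve CNR metric and the model-based observers above can be made concrete. A minimal sketch, under standard assumptions (the frequency-domain form of the non-prewhitening matched filter detectability index; the function names and array contents are mine, not the dissertation's):

```python
import numpy as np

def cnr(signal_roi, background_roi):
    """Naive contrast-to-noise ratio from two regions of interest."""
    return abs(signal_roi.mean() - background_roi.mean()) / background_roi.std()

def dprime_npw(task_function, nps):
    """Detectability index d' of a non-prewhitening matched filter observer.

    task_function -- |W(f)|, magnitude spectrum of the expected signal
    nps           -- noise power spectrum sampled on the same frequency grid
    """
    w2 = np.abs(task_function) ** 2
    # d'^2 = (sum |W|^2)^2 / sum(|W|^2 * NPS): unlike CNR, this weighs the
    # signal against the noise at each spatial frequency, so it can separate
    # reconstruction algorithms that change noise *texture*, not just magnitude.
    return np.sqrt(np.sum(w2) ** 2 / np.sum(w2 * nps))
```

Because CNR collapses the noise to a single standard deviation, it cannot distinguish two algorithms with equal pixel noise but different noise power spectra, which is consistent with the finding that CNR correlated poorly with human performance across reconstruction algorithms.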
The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and due to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms, because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that in FBP, the noise was independent of the background (textured vs. uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it is clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.
To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to get ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and texture should be considered when assessing the image quality of iterative algorithms.
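The ensemble NPS estimation described above is, for the simpler case of square ROIs, conventionally computed from mean-subtracted noise realizations. A minimal sketch under that standard convention (the dissertation's novel contribution handles irregularly shaped ROIs, which this sketch does not):

```python
import numpy as np

def nps_2d(noise_rois, pixel_size):
    """Ensemble 2D noise power spectrum from square, mean-subtracted noise ROIs,
    e.g. obtained by subtracting repeated scans of the same phantom."""
    n = noise_rois[0].shape[0]
    acc = np.zeros((n, n))
    for roi in noise_rois:
        roi = roi - roi.mean()                       # remove the DC component
        acc += np.abs(np.fft.fftshift(np.fft.fft2(roi))) ** 2
    # pixel_size**2 / n**2 converts |DFT|^2 into a power spectral density;
    # dividing by the ensemble size averages over noise realizations.
    return acc * pixel_size ** 2 / (n * n * len(noise_rois))
```

A useful sanity check is Parseval's theorem: integrating the NPS over spatial frequency recovers the mean pixel-noise variance of the ROIs.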
To move beyond just assessing noise properties in textured phantoms towards assessing detectability, a series of new phantoms were designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in the liver regions of actual patient CT images using a genetic algorithm. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom compared to textured phantoms.
The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.
The second study demonstrating the utility of the lesion modeling framework focused on assessing the detectability of very low-contrast liver lesions in abdominal imaging, specifically as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans, and the projections were then reconstructed with FBP and SAFIRE (strength 5); lesion-free images were also reconstructed. Noise, contrast, contrast-to-noise ratio (CNR), and the detectability index of a non-prewhitening matched-filter observer model were assessed. Compared to FBP, SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased the detectability index by 65%. Further, a two-alternative forced-choice (2AFC) human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% relative to the standard-of-care dose.
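The reported metrics can be sketched as follows, assuming stationary white noise so that the non-prewhitening matched-filter detectability index reduces to the signal template's energy over the noise level; the contrast and noise figures below are hypothetical stand-ins, not the study's data:

```python
import numpy as np

def npw_dprime(signal, noise_std):
    """Non-prewhitening matched-filter detectability index.

    Under the white-noise assumption, d' reduces to the L2 norm of
    the signal template divided by the noise standard deviation.
    """
    s = np.asarray(signal, float)
    return np.sqrt((s * s).sum()) / noise_std

# Hypothetical HU values illustrating the CNR comparison:
contrast_fbp, noise_fbp = 15.0, 10.0
contrast_ir = contrast_fbp * (1 - 0.12)   # 12% contrast loss
noise_ir = noise_fbp * (1 - 0.52)         # 52% noise reduction
cnr_gain = (contrast_ir / noise_ir) / (contrast_fbp / noise_fbp)
```

Note how a large noise reduction can outweigh a modest contrast loss; that trade-off is the mechanism behind the CNR and detectability gains reported for SAFIRE.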
In conclusion, this dissertation provides the scientific community with a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. In particular, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose-reduction technologies.
Resumo:
The extractive industry is characterized by high levels of risk and uncertainty. These attributes create challenges when applying traditional accounting concepts (such as the revenue recognition and matching concepts) to the preparation of financial statements in the industry. The International Accounting Standards Board (2010) states that the objective of general purpose financial statements is to provide useful financial information to assist the capital allocation decisions of existing and potential providers of capital. Useful information is defined as being relevant and faithfully represented so as to best aid the investment decisions of capital providers. Value relevance research uses adaptations of the Ohlson (1995) model to assess value relevance, one of the attributes that make information useful. This study first examines the value relevance of the financial information disclosed in the financial reports of extractive firms. The findings reveal that the value relevance of the disclosed information depends on the circumstances of the firm, including sector, size and profitability. Traditional accounting concepts such as the matching concept can be ineffective when applied to small firms that are primarily engaged in non-production activities involving significant levels of uncertainty, such as exploration activities or the development of sites. Standard-setting bodies such as the International Accounting Standards Board and the Financial Accounting Standards Board have addressed the financial reporting challenges in the extractive industry by allowing a significant amount of accounting flexibility in industry-specific accounting standards, particularly in relation to the accounting treatment of exploration and evaluation expenditure.
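A common form of the price-level, Ohlson-style regression used in value relevance research regresses share price on book value per share and earnings per share, with the regression's R-squared read as the degree of value relevance. A minimal sketch with synthetic data follows; variable names and values are hypothetical, not drawn from the study's sample:

```python
import numpy as np

def value_relevance_r2(price, bvps, eps):
    """R-squared of the price-level regression
    P = a0 + a1*BVPS + a2*EPS + e, estimated by OLS."""
    X = np.column_stack([np.ones_like(bvps), bvps, eps])
    coef, *_ = np.linalg.lstsq(X, price, rcond=None)
    resid = price - X @ coef
    ss_res = (resid ** 2).sum()
    ss_tot = ((price - price.mean()) ** 2).sum()
    return 1.0 - ss_res / ss_tot

# Synthetic firm-level data (per share): price generated exactly
# from book value and earnings, so R-squared should be ~1
bvps = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
eps = np.array([0.5, 0.3, 0.8, 0.2, 0.9])
price = 2.0 + 1.1 * bvps + 5.0 * eps
r2 = value_relevance_r2(price, bvps, eps)
```

A higher R-squared indicates that book values and earnings explain more of the variation in price, i.e. that the disclosures are more value relevant for that subsample of firms.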
Secondly, this study examines whether the choice of exploration accounting policy affects the value relevance of information disclosed in the financial reports. The findings show that, in general, the Successful Efforts method produces value relevant information in the financial reports of profitable extractive firms. In the oil & gas sector specifically, however, the Full Cost method produces value relevant asset disclosures if the firm is loss-making. This indicates that investors in production and non-production orientated firms have different information needs, which cannot be simultaneously fulfilled by a single accounting policy. In the mining sector, a preference by large profitable mining companies for a policy more conservative than either the Full Cost or Successful Efforts methods does not result in more value relevant information being disclosed in the financial reports. This finding supports the view that the qualitative characteristic of prudence is a form of bias that has a downward effect on asset values. The third aspect of this study is an examination of the effect of corporate governance on the value relevance of disclosures made in the financial reports of extractive firms. The findings show that the key factor influencing the value relevance of financial information is the ability of the directors to select accounting policies that effectively reflect the economic substance of the particular circumstances facing the firm. Corporate governance is found to have an effect on value relevance, particularly in the oil & gas sector. However, there is no significant difference between the exploration accounting policy choices made by directors of firms with good systems of corporate governance and those with weak systems.