890 results for "Power quality mitigations"
Abstract:
Purpose: To evaluate lenses produced by excimer laser ablation of poly(methyl methacrylate) (PMMA) plates. Setting: University research laboratory. Methods: Two Nidek EC-5000 scanning-slit excimer laser systems were used to ablate plane-parallel plates of PMMA. The ablated lenses were examined by focimetry, interferometry, and mechanical surface profiling. Results: The spherical optical powers of the lenses matched the expected values, but the cylindrical powers were generally lower than intended. Interferometry revealed marked irregularity in the surface of negative corrections, which often had a positive “island” at their center. Positive corrections were generally smoother. These findings were supported by the results of mechanical profiling. Contrast sensitivity measurements carried out when observing through ablated lenses whose power had been neutralized with a suitable spectacle lens of opposite sign confirmed that the surface irregularities of the ablated lenses markedly reduced contrast sensitivity over a range of spatial frequencies. Conclusion: Improvements in beam delivery systems seem desirable.
Abstract:
The modulation instability (MI) is one of the main factors responsible for the degradation of beam quality in high-power laser systems. The so-called B-integral restriction is commonly used as the criterion for MI control in passive optics devices. For amplifiers, the adiabatic model, which assumes that the Bespalov-Talanov expression for MI growth holds locally, is commonly used to estimate the destructive impact of the instability. We present here the exact solution of MI development in amplifiers. We determine the parameters which control the effect of MI in amplifiers and calculate the MI growth rate as a function of those parameters. The safe range of operational parameters is presented. The results of the exact calculations are compared with the adiabatic model, and the range of validity of the latter is determined. We demonstrate that for practical situations the adiabatic approximation noticeably overestimates MI. The additional margin of laser system design is quantified. © 2010 Optical Society of America.
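For reference, the B-integral criterion mentioned above has a standard textbook definition, sketched here; this is background, not reproduced from the paper itself:

```latex
% B-integral: accumulated nonlinear phase along the beam path,
% with n_2 the nonlinear refractive index and I(z) the intensity.
\[
  B = \frac{2\pi}{\lambda} \int_0^L n_2\, I(z)\, \mathrm{d}z
\]
% In the Bespalov-Talanov model the most unstable spatial
% perturbation is amplified by roughly e^{B}, which motivates the
% usual design rule of keeping B small in passive optics.
```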
Abstract:
In this work, we report a high growth rate of nanocrystalline diamond (NCD) films on 2-inch-diameter silicon wafers using a new growth regime, which employs high power and CH4/H2/N2/O2 plasma in a 5 kW MPCVD system. This is distinct from the commonly used hydrogen-poor Ar/CH4 chemistries for NCD growth. Upon raising the microwave power from 2000 W to 3200 W, the growth rate of the NCD films increases from 0.3 to 3.4 μm/h; that is, an order-of-magnitude enhancement in growth rate was achieved at high microwave power. The morphology, grain size, microstructure, orientation or texture, and crystalline quality of the NCD samples were characterized by scanning electron microscopy (SEM), atomic force microscopy (AFM), X-ray diffraction, and micro-Raman spectroscopy. The combined effect of nitrogen addition, microwave power, and temperature on NCD growth is discussed from the point of view of gas-phase chemistry and surface reactions. © 2011 Elsevier B.V. All rights reserved.
Abstract:
The availability of a regular power supply has been identified as one of the major stimulants for the growth and development of any nation and is thus important for its economic well-being. The problems of the Nigerian power sector stem from many factors, culminating in slow developmental growth and an inability to meet the power demands of the country's citizens, regardless of the abundant human and natural resources in the nation. The main aim of this research was therefore to investigate the importance and contribution of risk management to the success of projects in the power sector. To achieve this aim, it was pertinent to examine the efficacy of the risk management process in practice, elucidate the various risks typically associated with power-sector projects (construction, contractual, political, financial, design, human resource, and environmental risk factors), and determine the current state of risk management practice in Nigeria. To address these factors, which have been the subject of only limited in-depth academic research, a rigorous mixed-methods approach (quantitative and qualitative data analysis) was adopted. A review of the Nigerian power sector was also carried out as a precursor to the data collection stage. Using a purposive sampling technique, respondents were identified and a questionnaire survey was administered. The research hypotheses were tested using inferential statistics (Pearson correlation, chi-square test, t-test, and ANOVA), and the findings revealed the need for a new risk management implementation framework. The proposed framework was tested within a company project to interpret the dynamism and essential benefits of risk management, with the aim of improving project performance (time), reducing the level of fragmentation (quality), and improving profitability (cost) within the Nigerian power sector, in order to bridge the gap between theory and practice. It was concluded that Nigeria's poor risk management practices have prevented it from experiencing strong growth and development, and that successful implementation of the developed risk management framework may help the sector attain this status by enabling it to become more prepared and flexible in facing the challenges that previously led to project failures, thus contributing to its prosperity. The research provides an original contribution theoretically, methodologically, and practically, adding to the project risk management body of knowledge and to the Nigerian power sector.
Abstract:
Four-leg dc-ac power converters are widely used in power grids to manage grid voltage unbalance caused by the interconnection of single-phase or unbalanced three-phase loads. These converters can further be connected in parallel to increase the overall power rating. The control of these converters poses a particular challenge if they are placed far apart with no communication links between them (e.g., in islanded microgrids). This challenge is studied in this paper, with each four-leg converter designed to provide improved common current sharing and selective voltage-quality enhancement. Common current sharing, including the zero-sequence component, is necessary since loads are spread over the microgrid and are hence the common responsibility of all converters. Voltage-quality enhancement should, however, be more selective, since different loads have different sensitivity levels to voltage disturbances. Converters connected to the more sensitive load buses should therefore be selectively triggered for compensation when voltage unbalance at their protected buses exceeds predefined thresholds. The proposed scheme is thus different from conventional centralized schemes protecting only a common bus. Simulation and experimental results have verified the effectiveness of the proposed scheme when applied to a four-wire islanded microgrid.
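For context, decentralized power sharing among paralleled converters with no communication links is conventionally built on droop characteristics; a textbook sketch follows (the paper's selective compensation scheme is not reproduced here):

```latex
% Conventional droop control for decentralized real/reactive power
% sharing among paralleled converters (textbook form, given only as
% background to the communication-free sharing problem above):
\[
  f = f^{*} - m_{p}\,(P - P^{*}), \qquad
  V = V^{*} - n_{q}\,(Q - Q^{*})
\]
% f*, V*: nominal frequency and voltage; m_p, n_q: droop gains.
```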
Abstract:
This study was a sub-study of an ongoing investigation by Porter and Kazcaraba (1994) at the Veterans Administration Medical Center in Miami. While the Porter and Kazcaraba study uses multiple measures to determine the impact of nurse-patient collaborative care on the quality of life of cardiovascular patients receiving anticoagulant therapy, this study sought to determine whether health education could empower similar clients to improve their quality of life. A health education program was used, based on Freire's belief that shared collective knowledge empowers individuals to improve their lives and their community, and on Porter's nurse-patient collaborative care model. Findings on a sample of thirty-eight subjects revealed strong correlations between self-esteem and life satisfaction, as well as a trend towards increased power post-treatment. No group comparisons were made at posttest because the sample size was too small for meaningful statistical analysis.
Abstract:
Integration of the measurement activity into the production process is an essential rule in digital enterprise technology, especially for large-volume product manufacturing in industries such as aerospace, shipbuilding, power generation, and automotive. Measurement resource planning is a structured method of selecting and deploying the measurement resources necessary to implement the quality aims of product development. In this research, a new mapping approach for measurement resource planning is proposed. Firstly, quality aims are identified in the form of a number of specifications and engineering requirements of the quality characteristics (QCs) at a specific stage of the product life cycle, and measurement systems are classified according to the attributes of the QCs. Secondly, a matrix mapping approach for measurement resource planning is outlined, together with an optimization algorithm for matching quality aims with measurement systems (see the sketch following this abstract). Finally, the proposed methodology is applied in shipbuilding to solve the problem of measurement resource planning, deploying the measurement resources to satisfy all the quality aims. © Springer-Verlag Berlin Heidelberg 2010.
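The matching of quality aims to measurement systems can be read as an assignment problem; a minimal Python sketch under that assumption, with an illustrative suitability matrix that is not from the paper:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical suitability scores: rows = quality aims (QCs),
# columns = candidate measurement systems. Higher = better fit.
suitability = np.array([
    [0.9, 0.4, 0.1],  # aim 1: hull-section dimensional tolerance
    [0.2, 0.8, 0.5],  # aim 2: surface-flatness requirement
    [0.3, 0.6, 0.7],  # aim 3: large-volume alignment check
])

# linear_sum_assignment minimizes cost, so negate the scores to
# obtain the suitability-maximizing one-to-one deployment.
rows, cols = linear_sum_assignment(-suitability)
for aim, system in zip(rows, cols):
    print(f"quality aim {aim} -> measurement system {system}")
```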
Abstract:
X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high radiation dose to the patient compared to other x-ray imaging modalities; as a result of this fact, coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality: all else being equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.
A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate for assessing modern CT scanners that have implemented the aforementioned dose reduction technologies.
Thus, the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.
The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness with which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).
First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection (FBP) vs. Advanced Modeled Iterative Reconstruction (ADMIRE)). A mathematical observer model (i.e., a computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (an increase in detectability index of up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.
Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.
Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included naïve simple metrics of image quality such as contrast-to-noise ratio (CNR) and more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found to not correlate strongly with human performance, especially when comparing different reconstruction algorithms.
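To illustrate the distinction drawn above between a simple CNR metric and a non-prewhitening matched-filter observer, a minimal numpy sketch on synthetic white-noise images (sizes, contrast, and noise level are assumptions, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical low-contrast disc in white noise (synthetic stand-ins
# for the study's phantom images; all values are illustrative).
n, n_imgs = 64, 200
yy, xx = np.mgrid[:n, :n]
disc = ((xx - n / 2) ** 2 + (yy - n / 2) ** 2 < 8 ** 2).astype(float)
signal = 0.2 * disc

absent = rng.normal(0.0, 1.0, (n_imgs, n, n))
present = signal + rng.normal(0.0, 1.0, (n_imgs, n, n))

# Naive CNR: (lesion mean - background mean) / background std.
lesion, bg = disc > 0, disc == 0
cnr = (present[:, lesion].mean() - present[:, bg].mean()) / present[:, bg].std()

# Non-prewhitening (NPW) matched filter: template = expected signal.
t_present = (present * signal).sum(axis=(1, 2))
t_absent = (absent * signal).sum(axis=(1, 2))
d_prime = (t_present.mean() - t_absent.mean()) / np.sqrt(
    0.5 * (t_present.var() + t_absent.var()))

print(f"CNR ~ {cnr:.2f}, NPW detectability index d' ~ {d_prime:.2f}")
```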
The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and due to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that in FBP, the noise was independent of the background (textured vs uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it was clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.
To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to get ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and texture should be considered when assessing image quality of iterative algorithms.
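For reference, the conventional rectangular-ROI ensemble estimate of the NPS looks as follows; this is a generic sketch on synthetic data, not the irregular-ROI method developed in this work:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stack of repeated scans of a uniform region (a
# synthetic stand-in for the 50 repeated phantom acquisitions).
n_scans, n, px = 50, 128, 0.5  # px: pixel size in mm (illustrative)
scans = rng.normal(0.0, 10.0, (n_scans, n, n))

# Rectangular-ROI ensemble NPS: subtract the ensemble mean to
# isolate the noise, then average |DFT|^2 over realizations.
noise = scans - scans.mean(axis=0)
dft2 = np.abs(np.fft.fftshift(np.fft.fft2(noise), axes=(1, 2))) ** 2
nps = (px * px / (n * n)) * dft2.mean(axis=0)  # units: HU^2 * mm^2

freq = np.fft.fftshift(np.fft.fftfreq(n, d=px))  # cycles/mm axes
print(nps.shape, freq.min(), freq.max())
```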
To move beyond just assessing noise properties in textured phantoms towards assessing detectability, a series of new phantoms was designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in the liver regions of actual patient CT images using a genetic algorithm. The so-called “Clustered Lumpy Background” texture-synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom than in the textured phantoms.
The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
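A minimal sketch of what an analytical, voxelizable lesion model can look like, using a generic radially symmetric form with a sigmoid edge; this is an illustrative stand-in, not the dissertation's actual morphology models:

```python
import numpy as np

def lesion_model(shape, center, radius, contrast, edge_sharpness):
    """Voxelize a radially symmetric lesion with a sigmoid edge
    profile; an illustrative analytical form, not the dissertation's
    actual morphology models."""
    grid = np.indices(shape).astype(float)
    r = np.sqrt(sum((g - c) ** 2 for g, c in zip(grid, center)))
    # Smooth transition from full contrast inside to zero outside.
    return contrast / (1.0 + np.exp(edge_sharpness * (r - radius)))

# Hypothetical -15 HU liver lesion inserted into a uniform volume.
volume = np.zeros((64, 64, 64))
volume += lesion_model(volume.shape, (32, 32, 32), 8.0, -15.0, 1.5)
print(round(volume.min(), 1))  # ~ -15 HU at the lesion core
```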
Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.
The second study demonstrating the utility of the lesion modeling framework focused on assessing detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5). Also, lesion-less images were reconstructed. Noise, contrast, CNR, and detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard-of-care dose.
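The 2AFC accuracy measured in such experiments maps onto a detectability index through a standard relation (a textbook conversion, not specific to this study):

```latex
% Conversion between 2AFC percent correct P_c and detectability
% index d'; \Phi is the standard normal CDF.
\[
  P_c = \Phi\!\left(\frac{d'}{\sqrt{2}}\right)
  \quad\Longleftrightarrow\quad
  d' = \sqrt{2}\,\Phi^{-1}(P_c)
\]
```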
In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.
Abstract:
The extractive industry is characterized by high levels of risk and uncertainty. These attributes create challenges when applying traditional accounting concepts (such as the revenue recognition and matching concepts) to the preparation of financial statements in the industry. The International Accounting Standards Board (2010) states that the objective of general purpose financial statements is to provide useful financial information to assist the capital allocation decisions of existing and potential providers of capital. The usefulness of information is defined as being relevant and faithfully represented so as to best aid the investment decisions of capital providers. Value relevance research utilizes adaptations of the Ohlson (1995) model to assess the attribute of value relevance, which is one of the attributes resulting in useful information. This study firstly examines the value relevance of the financial information disclosed in the financial reports of extractive firms. The findings reveal that the value relevance of information disclosed in the financial reports depends on the circumstances of the firm, including sector, size, and profitability. Traditional accounting concepts such as the matching concept can be ineffective when applied to small firms that are primarily engaged in non-production activities involving significant levels of uncertainty, such as exploration activities or the development of sites. Standard setting bodies such as the International Accounting Standards Board and the Financial Accounting Standards Board have addressed the financial reporting challenges in the extractive industry by allowing a significant amount of accounting flexibility in industry-specific accounting standards, particularly in relation to the accounting treatment of exploration and evaluation expenditure. Therefore, secondly, this study examines whether the choice of exploration accounting policy has an effect on the value relevance of information disclosed in the financial reports. The findings show that, in general, the Successful Efforts method produces value relevant information in the financial reports of profitable extractive firms. However, specifically in the oil & gas sector, the Full Cost method produces value relevant asset disclosures if the firm is loss-making. This indicates that investors in production and non-production orientated firms have different information needs, and these needs cannot be simultaneously fulfilled by a single accounting policy. In the mining sector, a preference by large profitable mining companies towards a more conservative policy than either the Full Cost or Successful Efforts methods does not result in more value relevant information being disclosed in the financial reports. This finding supports the fact that the qualitative characteristic of prudence is a form of bias which has a downward effect on asset values. The third aspect of this study is an examination of the effect of corporate governance on the value relevance of disclosures made in the financial reports of extractive firms. The findings show that the key factor influencing the value relevance of financial information is the ability of the directors to select accounting policies which reflect the economic substance of the particular circumstances facing the firm in an effective way. Corporate governance is found to have an effect on value relevance, particularly in the oil & gas sector.
However, there is no significant difference between the exploration accounting policy choices made by directors of firms with good systems of corporate governance and those with weak systems of corporate governance.
Abstract:
PURPOSE: To study, for the first time, the effect of wearing ready-made glasses and glasses with power determined by self-refraction on children's quality of life. METHODS: This is a randomized, double-masked non-inferiority trial. Children in grades 7 and 8 (age 12-15 years) in nine Chinese secondary schools, with presenting visual acuity (VA) ≤6/12 improved with refraction to ≥6/7.5 bilaterally, refractive error ≤-1.0 D, and <2.0 D of anisometropia and astigmatism bilaterally, were randomized to receive ready-made spectacles (RM) or identical-appearing spectacles with power determined by: subjective cycloplegic retinoscopy by a university optometrist (U), a rural refractionist (R), or non-cycloplegic self-refraction (SR). The main study outcome was the global score on the National Eye Institute Refractive Error Quality of Life-42 (NEI-RQL-42) after 2 months of wearing the study glasses, comparing the other groups with the U group, adjusting for baseline score. RESULTS: Only one child (0.18%) was excluded for anisometropia or astigmatism. A total of 426 eligible subjects (mean age 14.2 years, 84.5% without glasses at baseline) were allocated to the U [103 (24.2%)], RM [113 (26.5%)], R [108 (25.4%)] and SR [102 (23.9%)] groups, respectively. Baseline and endline score data were available for 398 (93.4%) of the subjects. In multiple regression models adjusting for baseline score, older age (p = 0.003) and baseline spectacle wear (p = 0.016), but not study group assignment, were significantly associated with a lower final score. CONCLUSION: Quality of life wearing ready-made glasses or glasses based on self-refraction did not differ from that with cycloplegic refraction by an experienced optometrist in this non-inferiority trial.
Abstract:
Digital Image Processing is a rapidly evolving field with growing applications in Science and Engineering. It involves changing the nature of an image in order to either improve its pictorial information for human interpretation or render it more suitable for autonomous machine perception. One of the major areas of image processing for human vision applications is image enhancement. The principal goal of image enhancement is to improve the visual quality of an image, typically by taking advantage of the response of the human visual system. Image enhancement methods are usually carried out in the pixel domain. Transform domain methods can often provide another way to interpret and understand image contents. A suitable transform, thus selected, should have low computational complexity. Sequency-ordered arrangement of unique MRT (Mapped Real Transform) coefficients can give rise to an integer-to-integer transform, named Sequency-based unique MRT (SMRT), suitable for image processing applications. The development of the SMRT from the UMRT (Unique MRT), the forward & inverse SMRT algorithms, and the basis functions are introduced. A few properties of the SMRT are explored and its scope in lossless text compression is presented.
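The SMRT itself is not sketched here; as an analogy for the sequency ordering it relies on, the following Python snippet sequency-orders the rows of a Walsh-Hadamard matrix, another integer-to-integer orthogonal transform:

```python
import numpy as np

def hadamard(n):
    """Naturally ordered Hadamard matrix of size 2**n (Sylvester)."""
    h = np.array([[1]])
    for _ in range(n):
        h = np.block([[h, h], [h, -h]])
    return h

def sequency_order(h):
    """Reorder rows by sequency (number of sign changes per row),
    the ordering notion named above; shown on the Walsh-Hadamard
    transform as an analogy, not on the SMRT itself."""
    changes = (np.diff(np.sign(h), axis=1) != 0).sum(axis=1)
    return h[np.argsort(changes)]

h = sequency_order(hadamard(3))  # 8x8 Walsh matrix, sequency order
x = np.arange(8)
coeffs = h @ x                    # integer-to-integer forward transform
print(np.allclose(h.T @ coeffs / 8, x))  # orthogonal: exact inverse
```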
Abstract:
Cognitive radio (CR) is fast emerging as a promising technology that can meet the machine-to-machine (M2M) communication requirements for spectrum utilization and power control for the large number of machines/devices expected to be connected to the Internet of Things (IoT). Power control in a CR acting as a secondary user can be modelled with a non-cooperative game cost function to quantify and reduce its interference while occupying the same spectrum as the primary user, without adversely affecting the required quality of service (QoS) in the network. In this paper, a power loss exponent that factors in diverse operating environments for IoT is employed in the non-cooperative game cost function to quantify the required transmission power in the network. The approach would enable various CRs to transmit with less power, thereby saving battery consumption, or increase the number of secondary users, thereby optimizing the network resources efficiently.
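The paper's specific cost function is not reproduced here; as a generic baseline, a distributed power-control iteration (Foschini-Miljanic) with an explicit path-loss exponent can be sketched as follows (all network parameters are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical CR network: n secondary links with distance-based
# channel gains g = d**(-alpha); alpha plays the role of the
# environment-dependent path-loss exponent. Values are illustrative.
n, alpha, noise_w, gamma_t = 4, 3.5, 1e-9, 5.0
dist = rng.uniform(10.0, 100.0, (n, n))           # interferer paths
np.fill_diagonal(dist, rng.uniform(1.0, 5.0, n))  # own links
gain = dist ** (-alpha)

p = np.full(n, 1e-3)  # initial transmit powers (W)
for _ in range(100):
    desired = np.diag(gain) * p
    interference = gain @ p - desired + noise_w
    sinr = desired / interference
    # Foschini-Miljanic distributed update: each user scales its
    # power toward the SINR target using only local measurements.
    p = np.clip(gamma_t / sinr * p, 0.0, 1.0)

print(np.round(sinr, 2))  # approaches gamma_t when the target is feasible
```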
Abstract:
We compare the optical properties and device performance of unpackaged InGaN/GaN multiple-quantum-well light-emitting diodes (LEDs) emitting at ∼430 nm grown simultaneously on a high-cost, small-size bulk semipolar (11-22) GaN substrate (Bulk-GaN) and a low-cost, large-size (11-22) GaN template created on patterned (10-12) r-plane sapphire substrate (PSS-GaN). The Bulk-GaN substrate has a threading dislocation density (TDD) of ∼ and a basal-plane stacking fault (BSF) density of 0 cm⁻¹, while the PSS-GaN substrate has a TDD of ∼2 × 10⁸ cm⁻² and a BSF density of ∼1 × 10³ cm⁻¹. Despite an enhanced light extraction efficiency, the LED grown on PSS-GaN has a two-times lower internal quantum efficiency than the LED grown on Bulk-GaN, as determined by photoluminescence measurements. The LED grown on the PSS-GaN substrate also has about two-times lower output power compared to the LED grown on the Bulk-GaN substrate. This lower output power was attributed to the higher TDD and BSF density.
Abstract:
In economics of information theory, credence products are those whose quality is difficult or impossible for consumers to assess, even after they have consumed the product (Darby & Karni, 1973). This dissertation is focused on the content, consumer perception, and power of online reviews for credence services. Economics of information theory has long assumed, without empirical confirmation, that consumers will discount the credibility of claims about credence quality attributes. The same theories predict that because credence services are by definition obscure to the consumer, reviews of credence services are incapable of signaling quality. Our research aims to question these assumptions. In the first essay we examine how the content and structure of online reviews of credence services systematically differ from the content and structure of reviews of experience services and how consumers judge these differences. We have found that online reviews of credence services have either less important or less credible content than reviews of experience services and that consumers do discount the credibility of credence claims. However, while consumers rationally discount the credibility of simple credence claims in a review, more complex argument structure and the inclusion of evidence attenuate this effect. In the second essay we ask, “Can online reviews predict the worst doctors?” We examine the power of online reviews to detect low quality, as measured by state medical board sanctions. We find that online reviews are somewhat predictive of a doctor’s suitability to practice medicine; however, not all the data are useful. Numerical or star ratings provide the strongest quality signal; user-submitted text provides some signal but is subsumed almost completely by ratings. Of the ratings variables in our dataset, we find that punctuality, rather than knowledge, is the strongest predictor of medical board sanctions. These results challenge the definition of credence products, which is a long-standing construct in economics of information theory. Our results also have implications for online review users, review platforms, and for the use of predictive modeling in the context of information systems research.
Abstract:
This dissertation research points out major challenges with current Knowledge Organization (KO) systems, such as subject gateways or web directories: (1) the current systems use traditional knowledge organization schemes based on controlled vocabulary, which are not well suited to web resources, and (2) information is organized by professionals rather than by users, so it does not reflect intuitively and instantaneously expressed users' current needs. In order to explore users' needs, I examined social tags, which are user-generated uncontrolled vocabulary. As investment in professionally developed subject gateways and web directories diminishes (support for both BUBL and Intute, examined in this study, is being discontinued), understanding the characteristics of social tagging becomes even more critical. Several researchers have discussed social tagging behavior and its usefulness for classification or retrieval; however, further research is needed to qualitatively and quantitatively investigate social tagging in order to verify its quality and benefit. This research particularly examined the indexing consistency of social tagging in comparison to professional indexing, to examine the quality and efficacy of tagging. The data analysis was divided into three phases: analysis of indexing consistency, analysis of tagging effectiveness, and analysis of tag attributes. Most indexing consistency studies have been conducted with a small number of professional indexers, and they have tended to exclude users; furthermore, they have mainly focused on physical library collections. This dissertation research bridged these gaps by (1) extending the scope of resources to various web documents indexed by users and (2) employing the Information Retrieval (IR) Vector Space Model (VSM)-based indexing consistency method, since it is suitable for dealing with a large number of indexers (a sketch of this similarity measure follows this abstract). In the second phase, an analysis of tagging effectiveness in terms of tagging exhaustivity and tag specificity was conducted to ameliorate the drawbacks of consistency analysis based only on quantitative measures of vocabulary matching; examination of the exhaustivity and specificity of social tags provided insights into particular characteristics of tagging behavior and its variation across subjects. Finally, to investigate tagging patterns and behaviors, a content analysis of tag attributes was conducted based on the FRBR model. The findings revealed that there was greater consistency across all subjects among taggers than between the two groups of professionals. To further investigate the quality of tags, a Latent Semantic Analysis (LSA) was conducted to determine to what extent tags are conceptually related to professionals' keywords, and it was found that tags of higher specificity tended to have higher semantic relatedness to professionals' keywords. This leads to the conclusion that a term's power as a differentiator is related to its semantic relatedness to documents. The findings on tag attributes identified important bibliographic attributes of tags beyond describing the subjects or topics of a document. The findings also showed that tags have essential attributes matching those defined in FRBR.
Furthermore, in terms of specific subject areas, the findings showed that taggers exhibited different tagging behaviors, with distinctive features and tendencies, on web documents characterizing heterogeneous digital media resources. These results have led to the conclusion that there should be an increased awareness of diverse user needs by subject in order to improve metadata in practical applications. This dissertation research is a first necessary step towards utilizing social tagging in digital information organization by verifying the quality and efficacy of social tagging. It combined quantitative (statistical) and qualitative (content analysis using FRBR) approaches to the vocabulary analysis of tags, providing a more complete examination of tag quality. Through the detailed analysis of tag properties undertaken in this dissertation, we have a clearer understanding of the extent to which social tagging can be used to replace (and in some cases improve upon) professional indexing.
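A minimal sketch of a VSM-style indexing consistency measure of the kind referenced above (the term weighting and sample terms are illustrative, not the dissertation's exact scheme):

```python
import numpy as np
from collections import Counter

def vsm_consistency(terms_a, terms_b):
    """Cosine similarity between two indexers' term-frequency
    vectors, a VSM-style consistency measure (the weighting here
    is illustrative, not the dissertation's exact scheme)."""
    vocab = sorted(set(terms_a) | set(terms_b))
    ca, cb = Counter(terms_a), Counter(terms_b)
    a = np.array([ca[t] for t in vocab], dtype=float)
    b = np.array([cb[t] for t in vocab], dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical tags vs. professional index terms for one document.
tags = ["python", "tutorial", "programming", "code"]
pro = ["programming", "computer science", "tutorial"]
print(f"indexing consistency ~ {vsm_consistency(tags, pro):.2f}")
```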