943 results for Question-answering systems
Abstract:
Within the framework of the "Management and decision-making systems" subproject, we investigated the link between the quality of decision making and competitiveness. Our basic question was which companies are more successful: those that strictly follow a rational, analytical approach to decision making, or those that emphasize and encourage creativity and follow a creative decision-making and management style. We found that company managements increasingly have to cope with crisis situations and their consequences. We designated a separate research direction for the relationship between business decisions, business performance, and business success. In the field of responsible decision making, our research focused on concrete decisions, which represents a new approach: we did not deal only with specific CSR practices, but examined the elements of CSR and sustainability in concrete management decisions.
Abstract:
In the discussion "Ethics, Value Systems and the Professionalization of Hoteliers," K. Michael Haywood, Associate Professor, School of Hotel and Food Administration, University of Guelph, initially presents: “Hoteliers and executives in other service industries should realize that the foundation of success in their businesses is based upon personal and corporate value systems and steady commitment to excellence. The author illustrates how ethical issues and manager morality are linked to, and shaped by the values of executives and the organization, and how improved professionalism can only be achieved through the adoption of a value system that rewards contributions rather than the mere attainment of results.” The bottom line of this discussion is: how does the hotel industry reconcile its behavior with public perception? “The time has come for hoteliers to examine their own standards of ethics, value systems, and professionalism,” Haywood says. And it is ethics that are at the center of this issue; Haywood holds that component in an estimable position. “Hoteliers must become value-driven,” advises Haywood. “They must be committed to excellence both in actualizing their best potentialities and in excelling in all they do. In other words, the professionalization of the hotelier can be achieved through a high degree of self-control, internalized values, codes of ethics, and related socialization processes,” he expands. “Serious ethical issues exist for hoteliers as well as for many business people and professionals in positions of responsibility,” Haywood observes in defining some inter-industry problems. “The acceptance of kickbacks and gifts from suppliers, the hiding of income from taxation authorities, the lack of interest in installing and maintaining proper safety and security systems, and the raiding of competitors' staffs are common practices,” he offers, with the reasoning that if these problems can occur within the ranks, then there is going to be a negative backlash in the public/client arena as well. Haywood divides the key principles of his thesis, ethics, value systems, and professionalism, into specific elements, and then continues to broaden the scope of each element. Promotion, product/service, and pricing are additional key components in Haywood’s discussion, and he addresses each with verve and vitality. Haywood references the four character types (craftsmen, jungle fighters, company men, and gamesmen) via a citation to Michael Maccoby, in the portion of the discussion dedicated to morality and success. Haywood closes with a series of questions derived from Lawrence Miller's American Spirit: Visions of a New Corporate Culture, each question designed to focus, shape, and organize management's attention to the values that Miller sets forth in his piece.
Abstract:
Understanding habitat selection and movement remains a key question in behavioral ecology. Yet, obtaining a sufficiently high spatiotemporal resolution of the movement paths of organisms remains a major challenge, despite recent technological advances. Observing fine-scale movement and habitat choice decisions in the field can prove to be difficult and expensive, particularly in expansive habitats such as wetlands. We describe the application of passive integrated transponder (PIT) systems to field enclosures for tracking detailed fish behaviors in an experimental setting. PIT systems have been applied to habitats with clear passageways, at fixed locations or in controlled laboratory and mesocosm settings, but their use in unconfined habitats and field-based experimental setups remains limited. In an Everglades enclosure, we continuously tracked the movement and habitat use of PIT-tagged centrarchids across three habitats of varying depth and complexity using multiple flatbed antennas for 14 days. Fish used all three habitats, with marked species-specific diel movement patterns across habitats, and short-lived movements that would likely be missed by other tracking techniques. Findings suggest that the application of PIT systems to field enclosures can be an insightful approach for gaining continuous, undisturbed and detailed movement data in unconfined habitats, and for experimentally manipulating both internal and external drivers of these behaviors.
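The continuous detection records produced by flatbed PIT antennas lend themselves to straightforward diel summaries. The sketch below, in Python, illustrates one way such data might be aggregated into species-specific hourly habitat-use counts; the column names (`tag_id`, `species`, `antenna_habitat`, `timestamp`) are hypothetical placeholders, not the study's actual schema.

```python
# Sketch: aggregate PIT antenna detections into species-specific diel
# habitat-use profiles. Column names are hypothetical, not the study's schema.
import pandas as pd

def diel_habitat_use(detections: pd.DataFrame) -> pd.DataFrame:
    """Count detections per species, habitat, and hour of day."""
    df = detections.copy()
    df["timestamp"] = pd.to_datetime(df["timestamp"])
    df["hour"] = df["timestamp"].dt.hour
    return (
        df.groupby(["species", "antenna_habitat", "hour"])
          .size()
          .rename("detections")
          .reset_index()
    )

# Example usage with fabricated rows:
# detections = pd.DataFrame({
#     "tag_id": ["A1", "A1", "B2"],
#     "species": ["bluegill", "bluegill", "warmouth"],
#     "antenna_habitat": ["deep", "shallow", "vegetated"],
#     "timestamp": ["2015-06-01 02:15", "2015-06-01 14:40", "2015-06-01 03:05"],
# })
# print(diel_habitat_use(detections))
```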
Abstract:
Why and under what conditions have the Kurds become agents of change in the Middle East in terms of democratization? Why did the Kurds' role as democratic agents become particularly visible in the 1990s? How does the Kurdish movement's turn to democratic discourse affect the political systems of Turkey, Iran, Iraq and Syria? What are the implications of the Kurds' adoption of "democratic discourse" for the transnational aspect of the Kurdish movement? Since the early 1990s, Kurdish national movements in Turkey, Iran, Iraq and Syria have undergone important political and ideological transformations. As a result of the Kurds' growing role in shaping the debates on human rights and democratization in these four countries, the Kurdish national movement has acquired a dual character: an ethno-cultural struggle for the recognition of Kurdish identity, and a democratization movement that seeks to redefine the concepts of governance and citizenship in Turkey, Iran, Iraq and Syria. This transformation process has affected relations between the Kurdish movements and their respective central governments in significant ways. On the basis of face-to-face interviews and archival research conducted in Turkey, Iraq and parts of Europe, the present work challenges the current narrative of Kurdish nationalism, which is predominantly drawn from a statist interpretation of Kurdish nationalist goals, and argues instead that the Kurdish question is no longer a problem of statelessness but a problem of democracy in Turkey, Iran, Iraq and Syria. The main contributions of this work are threefold. First, the research unfolds the reasons behind the growing emphasis of the Kurdish movement on the concepts of democracy, human rights, and political participation, which started in the early 1990s. Second, the findings challenge the existing scholarship that explains Kurdish nationalism as a problem of statelessness and shift the focus to the transformative potentials of the Kurdish national movement in Turkey, Iran, Iraq and Syria through a comparative lens. Third, this work explores the complex transnational coordination and negotiations between the Kurdish movements across borders and explains the regional repercussions of this process.
Abstract:
Since the 1990s, Brazil has been undergoing an extensive transformation of its education system. This situation is justified, among other reasons, by the search for answers to the new demands that modern society places on the school, with its new technologies and information and communication systems. To better situate the Brazilian school in that context, the federal government, through Ministerial Decree No. 17/2007, created the More Education Program as a measure to combat the low levels of development of basic education in the capital cities and metropolitan areas, aiming at the implementation of comprehensive education in schools. After the first year of the program's implementation, there have been advances in keeping students within the school. As for the teachers, however, there is a gap in ownership and, consequently, in acceptance. Considering the reality of the state system of basic education schools in Natal, RN, this study investigates the training needs of teachers in institutions working with the More Education Program, with a view to the pedagogical relationship with the Macrocampos contained in this national project. Answering this question allows us to grasp how such needs are understood by teachers, as well as to identify priorities for the continuing education of teachers in this new pedagogical reality.
Abstract:
X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high amount of radiation dose to the patient compared to other x-ray imaging modalities, and as a result of this fact, coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality. All things being held equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.
A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate to assess modern CT scanners that have implemented those aforementioned dose reduction technologies.
Thus the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.
The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness by which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).
First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection (FBP) vs. Advanced Modeled Iterative Reconstruction (ADMIRE)). A mathematical observer model (i.e., computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (increase in detectability index by up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.
Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.
Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included naïve simple metrics of image quality such as contrast-to-noise ratio (CNR) and more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found to not correlate strongly with human performance, especially when comparing different reconstruction algorithms.
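As a rough illustration of the difference between a simple metric such as CNR and a matched-filter figure of merit, the sketch below computes both from ensembles of signal-present and signal-absent ROIs. It is a generic, simplified rendering of a non-prewhitening matched filter detectability index, not the dissertation's exact implementation.

```python
# Sketch: CNR and a simplified non-prewhitening (NPW) matched filter
# detectability index from ensembles of signal-present and signal-absent ROIs
# (arrays of shape (n_rois, H, W)). Generic illustration only.
import numpy as np

def cnr(signal_rois: np.ndarray, background_rois: np.ndarray) -> float:
    """CNR = (mean signal - mean background) / background noise."""
    contrast = signal_rois.mean() - background_rois.mean()
    noise = background_rois.std()
    return contrast / noise

def npw_dprime(signal_rois: np.ndarray, absent_rois: np.ndarray) -> float:
    """NPW observer: template = mean difference image; d' from template outputs."""
    template = signal_rois.mean(axis=0) - absent_rois.mean(axis=0)
    t_present = np.tensordot(signal_rois, template, axes=([1, 2], [0, 1]))
    t_absent = np.tensordot(absent_rois, template, axes=([1, 2], [0, 1]))
    pooled_var = 0.5 * (t_present.var() + t_absent.var())
    return (t_present.mean() - t_absent.mean()) / np.sqrt(pooled_var)
```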
The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and due to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that in FBP, the noise was independent of the background (textured vs uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it was clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.
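The image subtraction technique referred to above exploits the fact that the fixed background (phantom texture or anatomy) cancels when two repeated scans are subtracted, while the independent quantum noise adds in quadrature. A minimal sketch of the idea, under the assumption that two registered repeat scans are available:

```python
# Sketch: quantum noise estimation by subtracting two repeated scans of the
# same phantom. The fixed background cancels; the independent noise of the two
# scans adds in quadrature, so the difference-image standard deviation is
# divided by sqrt(2).
import numpy as np

def quantum_noise(scan_a, scan_b, roi=None):
    diff = scan_a.astype(float) - scan_b.astype(float)
    if roi is not None:
        diff = diff[roi]          # restrict to a region of interest if given
    return diff.std() / np.sqrt(2.0)
```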
To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to get ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in the uniform phantoms are not representative of those in patients for iterative reconstruction algorithms and texture should be considered when assessing image quality of iterative algorithms.
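For context, conventional NPS estimation from an ensemble of ROIs proceeds by detrending each ROI, taking its 2D Fourier transform, and averaging the squared magnitudes. The sketch below shows the standard rectangular-ROI case; the dissertation's novel contribution of handling irregularly shaped ROIs is not reproduced here.

```python
# Sketch: conventional NPS estimation from an ensemble of square noise-only
# ROIs. (The dissertation extends this to irregularly shaped ROIs; that
# extension is not reproduced here.)
import numpy as np

def nps_2d(rois: np.ndarray, pixel_size_mm: float) -> np.ndarray:
    """rois: array of shape (n_rois, N, N) containing noise-only patches."""
    n_rois, N, _ = rois.shape
    nps = np.zeros((N, N))
    for roi in rois:
        detrended = roi - roi.mean()                 # remove the mean offset
        dft = np.fft.fftshift(np.fft.fft2(detrended))
        nps += np.abs(dft) ** 2
    # Normalize by pixel area over number of pixels, averaged over the ensemble.
    return nps * (pixel_size_mm ** 2) / (N * N) / n_rois
```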
To move beyond just assessing noise properties in textured phantoms towards assessing detectability, a series of new phantoms were designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in the liver regions of actual patient CT images using a genetic algorithm. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom compared to textured phantoms.
The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
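As an illustration of the kind of analytical lesion morphology described above, the sketch below voxelizes a simple radially symmetric lesion with a specified diameter, contrast, and sigmoid edge profile. The functional form and parameter names are a plausible stand-in, not the dissertation's specific parameterization.

```python
# Sketch: voxelize a simple radially symmetric lesion model with a given
# diameter, contrast, and sigmoid edge profile. Functional form and parameters
# are illustrative, not the dissertation's specific models.
import numpy as np

def lesion_volume(shape=(64, 64, 64), voxel_mm=0.5, diameter_mm=6.0,
                  contrast_hu=-15.0, edge_width_mm=0.5) -> np.ndarray:
    zz, yy, xx = np.indices(shape)
    center = (np.array(shape) - 1) / 2.0
    r = voxel_mm * np.sqrt((zz - center[0]) ** 2 +
                           (yy - center[1]) ** 2 +
                           (xx - center[2]) ** 2)
    radius = diameter_mm / 2.0
    # Sigmoid edge: full contrast inside, rolling off smoothly at the boundary.
    profile = 1.0 / (1.0 + np.exp((r - radius) / edge_width_mm))
    return contrast_hu * profile  # HU offset to add into a patient image
```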
Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.
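A handful of the kinds of quantitative features described (size, mean attenuation, pixel-value spread, and a simple texture measure) could be extracted along the following lines; the features shown are generic examples, not the dissertation's 23-feature set.

```python
# Sketch: extract a few generic lesion features (area, mean HU, pixel-value
# spread, a simple texture statistic) from a 2D ROI and a binary lesion mask.
# These are illustrative examples, not the dissertation's 23-feature set.
import numpy as np

def basic_lesion_features(roi_hu: np.ndarray, mask: np.ndarray,
                          pixel_mm: float) -> dict:
    lesion_pixels = roi_hu[mask > 0]
    return {
        "area_mm2": float(mask.sum()) * pixel_mm ** 2,     # size
        "mean_hu": float(lesion_pixels.mean()),            # attenuation
        "hu_std": float(lesion_pixels.std()),              # pixel-value spread
        "texture_entropy": _entropy(lesion_pixels),        # simple texture proxy
    }

def _entropy(values: np.ndarray, bins: int = 32) -> float:
    hist, _ = np.histogram(values, bins=bins)
    p = hist[hist > 0].astype(float)
    p = p / p.sum()
    return float(-(p * np.log2(p)).sum())
```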
The second study demonstrating the utility of the lesion modeling framework focused on assessing detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5). Also, lesion-less images were reconstructed. Noise, contrast, CNR, and detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard of care dose.
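The reported figures are internally consistent, since CNR is contrast divided by noise; a quick check using the rounded percentages:

```latex
% CNR ratio implied by the reported contrast and noise changes (rounded values)
\frac{\mathrm{CNR}_{\mathrm{SAFIRE}}}{\mathrm{CNR}_{\mathrm{FBP}}}
  = \frac{0.88\,C / (0.48\,\sigma)}{C/\sigma}
  = \frac{0.88}{0.48} \approx 1.83
```

i.e., roughly an 83% CNR increase, in line with the reported 87% once rounding of the underlying percentages is taken into account.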
In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.
Abstract:
Despite the long-standing concern for food security in China at the national level, policy attempts to cope with this issue have often proved ineffective. More importantly, they have rarely addressed the question from a local perspective. International experiences with urban food strategies have proved quite efficacious in enhancing the local provision of food and improving overall city sustainability by shortening the supply chain, preserving peri-urban areas and improving the nutrition of citizens. By reviewing existing practices of city farming in China, mainly ascribable to urban agriculture experiences, the intention of this paper is to reflect upon the challenges of implementing more comprehensive local food systems. In conclusion, the paper argues that, given the current institutional, socio-economic and environmental constraints of Chinese cities, there is a need to introduce holistic planning tools to assess local food systems in order to ensure the building of truly healthy cities.
Abstract:
Intelligent Tutoring Systems (ITSs) are computerized systems for learning-by-doing. These systems provide students with immediate and customized feedback on learning tasks. An ITS typically consists of several modules that are connected to each other. This research focuses on the distribution of the ITS module that provides expert knowledge services. For the distribution of such an expert knowledge module we need to use an architectural style because this gives a standard interface, which increases the reusability and operability of the expert knowledge module. To provide expert knowledge modules in a distributed way we need to answer the research question: ‘How can we compare and evaluate REST, Web services and Plug-in architectural styles for the distribution of the expert knowledge module in an intelligent tutoring system?’. We present an assessment method for selecting an architectural style. Using the assessment method on three architectural styles, we selected the REST architectural style as the style that best supports the distribution of expert knowledge modules. With this assessment method we also analyzed the trade-offs that come with selecting REST. We present a prototype and architectural views based on REST to demonstrate that the assessment method correctly scores REST as an appropriate architectural style for the distribution of expert knowledge modules.
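To make concrete what a REST-style interface to an expert knowledge module might look like, the sketch below exposes a hypothetical diagnosis endpoint with Flask. The route, payload fields, and `diagnose` helper are illustrative assumptions, not the paper's actual API.

```python
# Sketch: a hypothetical REST interface to an ITS expert knowledge module.
# The route, payload fields, and diagnose() stub are illustrative assumptions,
# not the paper's actual API.
from flask import Flask, jsonify, request

app = Flask(__name__)

def diagnose(exercise_id: str, student_step: str) -> dict:
    """Stub for the expert knowledge module: check a student's step."""
    # A real module would consult domain rules or a solution model here.
    return {"exercise": exercise_id, "step": student_step, "correct": None,
            "hint": "placeholder feedback"}

@app.route("/exercises/<exercise_id>/diagnosis", methods=["POST"])
def post_diagnosis(exercise_id):
    body = request.get_json(force=True)
    result = diagnose(exercise_id, body.get("step", ""))
    return jsonify(result), 200

if __name__ == "__main__":
    app.run(port=5000)
```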
Abstract:
Our key contribution is a flexible, automated marking system that adds desirable functionality to existing E-Assessment systems. In our approach, any given E-Assessment system is relegated to a data-collection mechanism, whereas marking and the generation and distribution of personalised per-student feedback is handled separately by our own system. This allows content-rich Microsoft Word feedback documents to be generated and distributed to every student simultaneously according to a per-assessment schedule.
The feedback is adaptive in that it corresponds to the answers given by the student and provides guidance on where they may have gone wrong. It is not limited to simple multiple-choice questions, which are the most prescriptive question type offered by most E-Assessment systems and as such the most straightforward to mark consistently and to provide with individual per-alternative feedback strings. The system is also better equipped to handle the use of mathematical symbols and images within the feedback documents, which makes it more flexible than existing E-Assessment systems that can only handle simple text strings.
As well as MCQs, the system reliably and robustly handles Multiple Response, Text Matching and Numeric style questions in a more flexible manner than Questionmark: Perception and other E-Assessment systems. It can also reliably handle multi-part questions where the response to an earlier question influences the answer to a later one, and can adjust both scoring and feedback appropriately.
New question formats can be added at any time provided a corresponding marking method conforming to certain templates can also be programmed. Indeed, any question type for which a programmatic method of marking can be devised may be supported by our system. Furthermore, since the student’s response to each question is marked programmatically, our system can be set to allow for minor deviations from the correct answer and, if appropriate, award partial marks.
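The kind of template-conforming marking method described above might, for a numeric question with a tolerance and partial credit, look like the following sketch; the function signature, tolerance scheme, and feedback strings are illustrative assumptions rather than the system's actual interface.

```python
# Sketch: a template-conforming marking method for a numeric question,
# allowing minor deviations from the correct answer and awarding partial
# marks. Signature, tolerances, and feedback text are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class MarkResult:
    score: float        # marks awarded
    feedback: str       # per-student feedback string

def mark_numeric(response: str, correct: float, max_marks: float,
                 full_tol: float = 0.01, partial_tol: float = 0.05) -> MarkResult:
    try:
        value = float(response.strip())
    except ValueError:
        return MarkResult(0.0, "Your answer could not be read as a number.")
    rel_error = abs(value - correct) / max(abs(correct), 1e-12)
    if rel_error <= full_tol:
        return MarkResult(max_marks, "Correct.")
    if rel_error <= partial_tol:
        return MarkResult(0.5 * max_marks,
                          "Close: check your rounding or final arithmetic step.")
    return MarkResult(0.0, "Incorrect: revisit the method for this question.")
```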
Abstract:
The lack of flexibility in logistics systems currently on the market leads to the development of new, innovative transportation systems. In order to find the optimal configuration of such a system depending on the current goal functions, for example the minimization of transport times and the maximization of throughput, various mathematical methods of multi-criteria optimization are applicable. In this work, the concept of a complex transportation system is presented. Furthermore, the question of finding the optimal configuration of such a system through mathematical methods of optimization is considered.
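One common mathematical method for such a bi-objective configuration problem is to scalarize the goal functions, for example with a weighted sum, and sweep the weights to trace candidate trade-offs. The sketch below is a generic illustration with made-up configuration variables and cost models, not the system described in the paper.

```python
# Sketch: weighted-sum scalarization of two goal functions (minimize transport
# time, maximize throughput) over a small set of candidate configurations.
# The configuration variables and cost models are made up for illustration.
from dataclasses import dataclass

@dataclass
class Config:
    vehicles: int
    speed: float  # m/s

def transport_time(c: Config) -> float:
    return 100.0 / c.speed                  # time to cover a fixed route

def throughput(c: Config) -> float:
    return c.vehicles * c.speed * 0.1       # items moved per unit time

def best_config(candidates, w_time=0.5, w_throughput=0.5) -> Config:
    # Maximizing throughput equals minimizing its negative; the weights
    # express the trade-off between the two goal functions.
    def cost(c: Config) -> float:
        return w_time * transport_time(c) - w_throughput * throughput(c)
    return min(candidates, key=cost)

candidates = [Config(v, s) for v in (2, 4, 8) for s in (0.5, 1.0, 2.0)]
print(best_config(candidates, w_time=0.7, w_throughput=0.3))
```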
Abstract:
Background: Patient safety is concerned with preventable harm in healthcare, a subject that became a focus for study in the UK in the late 1990s. How to improve patient safety presented both a practical and a research challenge in the early 2000s, leading to the eleven publications presented in this thesis.
Research question: The overarching research question was: What are the key organisational and systems factors that impact on patient safety, and how can these best be researched?
Methods: Research was conducted in over 40 acute care organisations in the UK and Europe between 2006 and 2013. The approaches included surveys, interviews, documentary analysis and non-participant observation. Two studies were longitudinal.
Results: The findings reveal the nature and extent of poor systems reliability and its effect on patient safety; the factors underpinning cases of patient harm; the cultural issues impacting on safety and quality; and the importance of a common language for quality and safety across an organisation. Across the publications, nine key organisational and systems factors emerged as important for patient safety improvement. These include leadership stability; data infrastructure; measurement capability; standardisation of clinical systems; and creating an open and fair collective culture where poor safety is challenged.
Conclusions and contribution to knowledge: The research presented in the publications has provided a more complete understanding of the organisation and systems factors underpinning safer healthcare. Lessons are drawn to inform methods for future research, including: how to define success in patient safety improvement studies; how to take into account external influences during longitudinal studies; and how to confirm meaning in multi-language research. Finally, recommendations for future research include assessing the support required to maintain a patient safety focus during periods of major change or austerity; the skills needed by healthcare leaders; and the implications of poor data infrastructure.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
This paper focuses on teaching boys, male teachers and the question of gendered pedagogies in neoliberal and postfeminist times of the proliferation of new forms of capitalism, multi-mediated technologies and the influence of globalization. It illustrates how a politics of re-masculinization and its reconstitution needs to be understood as set against changing economic and social conditions in which gender equity comes to be re-focused on boys as the 'new disadvantaged'. This re-framing of gender equity, it is argued, has been fuelled by both a media-inspired backlash discourse about 'failing boys' and a neo-positivist emphasis on numbers derived primarily from standardized testing regimes at both global and national levels. A media-focused analysis of the proliferation of discourses about 'failing boys' vis-a-vis the problem of encroaching feminization in the school system is provided to illuminate how certain truths about the influence of male teachers come to define how the terms of ensuring gender equity are delimited and reduced to a question of gendered pedagogies as grounded in sexed bodies. Historical accounts of the feminization of teaching in the North American context are also provided as a basis for building a more informed understanding of the present, particularly as it relates to the contextualization of policy articulation and enactment regarding the problem of teaching boys. In light of such historically informed and critical media analysis, it is argued that what is needed is a more informed, evidence-based policy articulation of the problem of teaching boys and a more gender-sensitive reflection on the politics of masculinities in postfeminist times. (DIPF/Orig.)
Collection-Level Subject Access in Aggregations of Digital Collections: Metadata Application and Use
Abstract:
Problems in subject access to information organization systems have been under investigation for a long time. Focusing on item-level information discovery and access, researchers have identified a range of subject access problems, including the quality and application of metadata, as well as the complexity of user knowledge required for successful subject exploration. While aggregations of digital collections built in the United States and abroad generate collection-level metadata of varying granularity and richness, no research has yet focused on the role of collection-level metadata in user interaction with these aggregations. This dissertation research sought to bridge this gap by answering the question “How does collection-level metadata mediate scholarly subject access to aggregated digital collections?” This goal was achieved using three research methods:
• in-depth comparative content analysis of collection-level metadata in three large-scale aggregations of cultural heritage digital collections: Opening History, American Memory, and The European Library;
• transaction log analysis of user interactions with Opening History; and
• interview and observation data on academic historians interacting with two aggregations: Opening History and American Memory.
It was found that subject-based resource discovery is significantly influenced by collection-level metadata richness. This richness includes such components as: 1) describing a collection’s subject matter with mutually complementary values in different metadata fields, and 2) a variety of collection properties/characteristics encoded in the free-text Description field; types and genres of objects in a digital collection, as well as topical, geographic and temporal coverage, are the most consistently represented collection characteristics in free-text Description fields. Analysis of user interactions with aggregations of digital collections yields a number of interesting findings. Item-level user interactions were found to occur more often than collection-level interactions. Collection browse is initiated more often than search, while subject browse (topical and geographic) is used most often. The majority of collection search queries fall within FRBR Group 3 categories: object, concept, and place. Significantly more object, concept, and corporate-body searches, and fewer individual-person, event and class-of-persons searches, were observed in collection searches than in item searches. While collection search is most often satisfied by the Description and/or Subjects collection metadata fields, it would not retrieve a significant proportion of collection records without controlled-vocabulary subject metadata (Temporal Coverage, Geographic Coverage, Subjects, and Objects) and free-text metadata (the Description field). Observation data show that collection metadata records in the Opening History and American Memory aggregations are often viewed. Transaction log data show a high level of engagement with collection metadata records in Opening History, with total page views for collections more than 4 times greater than item page views. Scholars observed viewing collection records valued descriptive information on provenance, collection size, types of objects, subjects, geographic coverage, and temporal coverage.
They also considered the structured display of collection metadata in Opening History more useful than the alternative approach taken by other aggregations, such as American Memory, which displays only the free-text Description field to the end-user. The results extend the understanding of the value of collection-level subject metadata, particularly free-text metadata, for the scholarly users of aggregations of digital collections. The analysis of the collection metadata created by three large-scale aggregations provides a better understanding of collection-level metadata application patterns and suggests best practices. This dissertation is also the first empirical research contribution to test the FRBR model as a conceptual and analytic framework for studying collection-level subject access.