55 results for Conceptualizing and Measuring
Abstract:
An experiment to quantify intra- and interobserver error in anatomical measurements found that interobserver measurements can vary by over 14% of mean specimen length; that disparity in measurement increases logarithmically with the number of contributors; that instructions did not reduce variation or measurement disparity; that the scale of the specimen influenced the precision of measurement (relative error increasing with specimen size); that different methods of taking a measurement yielded different results, although they did not differ in precision; and that the topographical complexity of the elements being considered may influence error (error increasing with complexity). These results highlight concerns about the introduction of noise and potential bias that should be taken into account when compiling composite datasets and meta-analyses.
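As a rough illustration of the headline statistic, the Python sketch below (hypothetical measurements, not data from the study) expresses the spread of several observers' measurements of a single specimen as a percentage of mean specimen length.

```python
# Minimal sketch: interobserver disparity for one specimen, expressed
# relative to mean specimen length. All values are hypothetical.
from statistics import mean

# Hypothetical measurements (mm) of the same specimen by five observers.
observers = {"A": 102.4, "B": 98.1, "C": 105.9, "D": 99.6, "E": 110.2}

values = list(observers.values())
m = mean(values)
spread = max(values) - min(values)        # interobserver range
relative_disparity = 100 * spread / m     # % of mean specimen length

print(f"mean length: {m:.1f} mm")
print(f"interobserver range: {spread:.1f} mm "
      f"({relative_disparity:.1f}% of mean length)")
```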
Abstract:
Although a substantial corpus of digital materials is now available to scholarship across the disciplines, objective evidence of their use, impact, and value, based on robust assessment, is sparse. Traditional methods of assessing impact in the humanities, notably citation in scholarly publications, are not an effective way of assessing the impact of digital content. These issues are problematic in the field of Digital Humanities, where there is a need to assess impact effectively to justify its continued funding and existence. A number of qualitative and quantitative methods exist that can be used to monitor the use of digital resources in various contexts, although they have yet to be applied widely. These have been made available to the creators, managers, and funders of digital content in an accessible form through the TIDSR (Toolkit for the Impact of Digital Scholarly Resources) developed by the Oxford Internet Institute. In 2011, the authors of this article developed the SPHERE project (Stormont Parliamentary Hansards: Embedded in Research and Education) specifically to use TIDSR to evaluate the use and impact of The Stormont Papers, a digital collection of the Hansards of the Stormont Northern Irish Parliament from 1921 to 1972. This article presents the methodology, findings, and analysis of the project. The authors argue that TIDSR is a useful and, critically, transferable method to understand and increase the impact of digital resources. The findings of the project are developed into a series of wider recommendations on protecting the investment in digital resources by increasing their use, value, and impact. It is reasonable to suggest that effectively demonstrating the impact of Digital Humanities is critical to its survival.
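As one concrete example of the quantitative methods mentioned above, the sketch below counts monthly requests to a digital collection in a web server access log. The file name `access.log`, the URL path, and the Common Log Format timestamp are all assumptions made for the example, not details from TIDSR or SPHERE.

```python
# Minimal sketch: monthly request counts for a digital resource from a
# web server access log (Common Log Format timestamps are assumed).
import re
from collections import Counter

TIMESTAMP = re.compile(r"\[(\d{2})/(\w{3})/(\d{4})")  # e.g. [12/Mar/2011:...]

monthly_hits = Counter()
with open("access.log") as log:                  # hypothetical log file
    for line in log:
        if "/stormontpapers/" not in line:       # hypothetical URL path
            continue
        match = TIMESTAMP.search(line)
        if match:
            day, month, year = match.groups()
            monthly_hits[f"{year}-{month}"] += 1

for period, hits in sorted(monthly_hits.items()):
    print(period, hits)
```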
Abstract:
There is extensive theoretical work on measures of inconsistency for arbitrary formulae in knowledge bases. Many of these are defined in terms of the set of minimal inconsistent subsets (MISes) of the base. However, few have been implemented or experimentally evaluated to support their viability, since computing all MISes is intractable in the worst case. Fortunately, recent work on a related problem of minimal unsatisfiable sets of clauses (MUSes) offers a viable solution in many cases. In this paper, we begin by drawing connections between MISes and MUSes through algorithms based on a MUS generalization approach and a new optimized MUS transformation approach to finding MISes. We implement these algorithms, along with a selection of existing measures for flat and stratified knowledge bases, in a tool called mimus. We then carry out an extensive experimental evaluation of mimus using randomly generated arbitrary knowledge bases. We conclude that these measures are viable for many large and complex random instances. Moreover, they represent a practical and intuitive tool for inconsistency handling.
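To make the central definition concrete, here is a brute-force Python sketch that enumerates the MISes of a tiny propositional knowledge base. mimus itself relies on the MUS-based algorithms described above; this sketch only illustrates the object being computed.

```python
# Brute-force enumeration of minimal inconsistent subsets (MISes) of a
# tiny propositional knowledge base. Intractable in general, which is
# why MUS-based algorithms matter in practice.
from itertools import combinations, product

ATOMS = ["p", "q"]
# Each formula is a (label, truth function over an assignment) pair.
KB = [
    ("p",      lambda a: a["p"]),
    ("p -> q", lambda a: (not a["p"]) or a["q"]),
    ("~q",     lambda a: not a["q"]),
    ("q",      lambda a: a["q"]),
]

def consistent(subset):
    """True if some assignment satisfies every formula in the subset."""
    for bits in product([False, True], repeat=len(ATOMS)):
        assignment = dict(zip(ATOMS, bits))
        if all(f(assignment) for _, f in subset):
            return True
    return False

mises = []
for r in range(1, len(KB) + 1):
    for subset in combinations(KB, r):
        # A MIS is inconsistent while every proper subset is consistent.
        if not consistent(subset) and \
           all(consistent(c) for c in combinations(subset, r - 1)):
            mises.append([label for label, _ in subset])

print(mises)  # [['~q', 'q'], ['p', 'p -> q', '~q']]
```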
Abstract:
Coastal systems, such as rocky shores, are among the most heavily anthropogenically impacted marine ecosystems and are also among the most productive in terms of ecosystem functioning. One of the greatest impacts on coastal ecosystems is nutrient enrichment from human activities such as agricultural run-off and discharge of sewage. The aim of this study was to identify and characterise potential effects of sewage discharges on the biotic diversity of rocky shores and to test current tools for assessing the ecological status of rocky shores in line with the EU Water Framework Directive (WFD). A sampling strategy was designed to test for effects of sewage outfalls on rocky shore assemblages on the east coast of Ireland and to identify the scale of the putative impact. In addition, a separate sampling programme based on the Reduced algal Species List (RSL), the current WFD monitoring tool for rocky shores in Ireland and the UK, was completed by identifying algae and measuring percent cover in replicate samples on rocky shores during summer. There was no detectable effect of sewage outfalls on benthic taxon diversity or assemblage structure. However, spatial variability of assemblages was greater at sites proximal or adjacent to sewage outfalls compared to shores without sewage outfalls. Results based on the RSL show that algal assemblages were not affected by the presence of sewage outfalls, except when algae were classed into functional groups, in which case variability was greater at the sites with sewage outfalls. A key finding of both surveys was the prevalence of spatial and temporal variation of assemblages. It is recommended that future metrics of ecological status be based on quantified sampling designs, incorporate changes in variability of assemblages (indicative of community stability), consider shifts in assemblage structure, and include both benthic fauna and flora to assess the status of rocky shores.
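As a sketch of the kind of variability comparison highlighted above, the example below uses mean pairwise Bray-Curtis dissimilarity among replicate samples as a simple index of assemblage variability. The abundance data are synthetic, and this index is an illustrative stand-in rather than the analysis used in the study.

```python
# Minimal sketch: assemblage variability as mean pairwise Bray-Curtis
# dissimilarity among replicates, compared between shore types.
from itertools import combinations

def bray_curtis(u, v):
    """Bray-Curtis dissimilarity between two taxon-abundance vectors."""
    num = sum(abs(a - b) for a, b in zip(u, v))
    den = sum(a + b for a, b in zip(u, v))
    return num / den if den else 0.0

def mean_dispersion(samples):
    """Mean pairwise dissimilarity among replicate samples."""
    pairs = list(combinations(samples, 2))
    return sum(bray_curtis(u, v) for u, v in pairs) / len(pairs)

# Synthetic replicates (rows: samples, columns: taxa).
outfall = [[12, 0, 5, 30], [2, 18, 0, 9], [25, 3, 14, 1]]
control = [[10, 8, 6, 12], [11, 7, 5, 14], [9, 9, 7, 11]]

print(f"outfall dispersion: {mean_dispersion(outfall):.3f}")
print(f"control dispersion: {mean_dispersion(control):.3f}")
```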
Abstract:
Objectives This paper describes the methods used in the International Cancer Benchmarking Partnership Module 4 Survey (ICBPM4), which examines time intervals and routes to cancer diagnosis in 10 jurisdictions. We present the study design, including the definition and measurement of time intervals, identification of patients with cancer, questionnaire development, data management and analyses.
Design and setting Recruitment of participants to the ICBPM4 survey is based on cancer registries in each jurisdiction. Questionnaires draw on previous instruments and have undergone cognitive testing and piloting in three jurisdictions, followed by standardised translation and adaptation. Data analysis focuses on comparing differences in time intervals and routes to diagnosis across the jurisdictions.
Participants Our target is 200 patients with symptomatic breast, lung, colorectal and ovarian cancer in each jurisdiction. Patients are approached directly or via their primary care physician (PCP). Patients’ PCPs and cancer treatment specialists (CTSs) are surveyed, and ‘data rules’ are applied to combine and reconcile conflicting information. Where CTS information is unavailable, audit information is sought from treatment records and databases.
Main outcomes Reliability testing of the patient questionnaire showed that agreement was complete (κ=1) in four items and substantial (κ=0.8, 95% CI 0.333 to 1) in one item. The identification of eligible patients is sufficient to meet the targets for breast, lung and colorectal cancer. Initial patient and PCP survey response rates from the UK and Sweden are comparable with similar published surveys. Data collection was completed in early 2016 for all cancer types.
Conclusion An international questionnaire-based survey of patients with cancer, PCPs and CTSs has been developed and launched in 10 jurisdictions. ICBPM4 will further understanding of international differences in cancer survival by comparing time intervals and routes to cancer diagnosis.
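For concreteness, the sketch below computes the reliability statistic quoted in the main outcomes, Cohen's kappa, for test-retest agreement on a single questionnaire item. The responses are hypothetical.

```python
# Minimal sketch: Cohen's kappa for test-retest agreement on one item.
from collections import Counter

def cohens_kappa(first, second):
    """Cohen's kappa between two ratings of the same respondents."""
    n = len(first)
    observed = sum(a == b for a, b in zip(first, second)) / n
    f1, f2 = Counter(first), Counter(second)
    expected = sum(f1[c] * f2[c] for c in set(first) | set(second)) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical responses to one item at test and retest.
test   = ["yes", "no", "yes", "yes", "no", "yes"]
retest = ["yes", "no", "yes", "no",  "no", "yes"]
print(f"kappa = {cohens_kappa(test, retest):.2f}")  # kappa = 0.67
```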
Abstract:
Using a speed-matching task, we measured the speed tuning of the dynamic motion aftereffect (MAE). The results of our first experiment, in which we co-varied dot speed in the adaptation and test stimuli, revealed a speed tuning function. We sought to tease apart what contribution, if any, the test stimulus makes towards the observed speed tuning. This was examined by independently manipulating dot speed in the adaptation and test stimuli, and measuring the effect this had on the perceived speed of the dynamic MAE. The results revealed that the speed tuning of the dynamic MAE is determined, not by the speed of the adaptation stimulus, but by the local motion characteristics of the dynamic test stimulus. The role of the test stimulus in determining the perceived speed of the dynamic MAE was confirmed by showing that, if one uses a test stimulus containing two sources of local speed information, observers report seeing a transparent MAE; this is despite the fact that adaptation is induced using a single-speed stimulus. Thus while the adaptation stimulus necessarily determines perceived direction of the dynamic MAE, its perceived speed is determined by the test stimulus. This dissociation of speed and direction supports the notion that the processing of these two visual attributes may be partially independent.
Abstract:
The use of strong-field (i.e. intensities in excess of 10¹³ W cm⁻²) few-cycle ultrafast (durations of 10 femtoseconds or less) laser pulses to create, manipulate and image vibrational wavepackets is investigated. Quasi-classical modelling of the initial superposition through tunnel ionization, wavepacket modification by nonadiabatically altering the nuclear environment via the transition dipole and the Stark effect, and measuring the control outcome by fragmenting the molecule is detailed. The influence of the laser intensity on strong-field ultrafast wavepacket control is discussed in detail: by modifying the distribution of laser intensities imaged, we show that focal conditions can be created that give preference to this three-pulse technique above processes induced by the pulses alone. An experimental demonstration is presented, and the nuclear dynamics inferred by the quasi-classical model discussed. Finally, we present the results of a systematic investigation of a dual-control pulse scheme, indicating that single vibrational states should be observable with high fidelity, and the populated state defined by varying the arrival time of the two control pulses. The relevance of such strong-field coherent control methods to the manipulation of electron localization and attosecond science is discussed.
Abstract:
Currently there is extensive theoretical work on inconsistencies in logic-based systems. Recently, algorithms for identifying inconsistent clauses in a single conjunctive formula have demonstrated that practical application of this work is possible. However, these algorithms have not been extended to full knowledge base systems, nor have they been applied to real-world knowledge. To address these issues, we propose a new algorithm for finding the inconsistencies in a knowledge base using existing algorithms for finding inconsistent clauses in a formula. An implementation of this algorithm is then presented as an automated tool for finding inconsistencies in a knowledge base and measuring the inconsistency of formulae. Finally, we look at a case study of a network security rule set for exploit detection (QRadar) and suggest how these automated tools can be applied.
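To illustrate the measurement step, the sketch below scores each formula of a hypothetical rule set by the number of MISes it appears in, and scores the base itself by its MIS count (the I_MI measure from the inconsistency-measures literature). The MISes are hard-coded stand-ins for the output of the clause-level extraction described above.

```python
# Minimal sketch: MIS-based inconsistency scores for a hypothetical
# rule set. The MISes would normally come from an extraction algorithm.
from collections import Counter

mises = [
    {"rule_1", "rule_7"},            # hypothetical MISes
    {"rule_1", "rule_3", "rule_9"},
]

kb_measure = len(mises)  # base-level measure I_MI: number of MISes
formula_scores = Counter(f for mis in mises for f in mis)

print(f"I_MI(KB) = {kb_measure}")
for rule, score in formula_scores.most_common():
    print(f"{rule}: appears in {score} MIS(es)")
```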
Abstract:
High-energy (∼0.5 GeV) electron beams generated by laser wakefield acceleration (LWFA) were used to create bremsstrahlung radiation by interaction with various solid targets. Secondary processes generate high-energy electrons, positrons, and neutrons, which can be measured shot-to-shot using magnetic spectrometers, short half-life activation, and Compton scattering. Presented here are proof-of-principle results from a high-resolution, high-energy gamma-ray spectrometer capable of single-shot operation, and from high repetition rate activation diagnostics. We describe the techniques used in these measurements and their potential applications in diagnosing LWFA electron beams and measuring high-energy radiation from laser-plasma interactions.