42 results for Kerridge’s inaccuracy measure
in Aston University Research Archive
Abstract:
Jaccard has been the similarity metric of choice in ecology and forensic psychology for comparing sites or offences by species or behaviour. This paper applies a more powerful hierarchical measure - taxonomic similarity (s), recently developed in marine ecology - to the task of behaviourally linking serial crime. Forensic case linkage attempts to identify behaviourally similar offences committed by the same unknown perpetrator (called linked offences). s considers progressively higher-level taxa, such that two sites show some similarity even without shared species. We apply this index by analysing 55 specific offence behaviours classified hierarchically. The behaviours are taken from 16 sexual offences committed by seven juveniles, each of whom committed two or more offences. We demonstrate that both Jaccard and s show linked offences to be significantly more similar than unlinked offences. In simulations with up to 20% of the specific behaviours removed, s distinguishes linked offences at least as effectively as Jaccard applied to the full data set. Moreover, s retains a significant difference between linked and unlinked pairs with up to 50% of the specific behaviours removed. As police decision-making often depends upon incomplete data, s has clear advantages and its application may extend to other crime types. Copyright © 2007 John Wiley & Sons, Ltd.
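As a concrete illustration of the two kinds of index compared in this abstract, the sketch below computes Jaccard similarity between two offences coded as sets of behaviours, together with a toy one-level hierarchical variant in which non-shared behaviours earn partial credit through shared higher-level categories. The behaviour labels, category map and weight are invented, and the hierarchical function is a simplification of the idea, not a reimplementation of the published taxonomic similarity index.

```python
# Minimal sketch: Jaccard vs. a toy hierarchical similarity.
# Behaviour codes and the one-level taxonomy are invented; the real
# index weights shared taxa at every level of the hierarchy.

def jaccard(a: set, b: set) -> float:
    """Shared behaviours / all behaviours seen in either offence."""
    return len(a & b) / len(a | b)

# Map each specific behaviour to a higher-level category (one level).
TAXONOMY = {
    "gagging": "control", "binding": "control",
    "weapon_knife": "violence", "weapon_blunt": "violence",
    "apology": "interaction", "compliments": "interaction",
}

def hierarchical_similarity(a: set, b: set, w: float = 0.5) -> float:
    """Jaccard plus partial credit (weight w) for non-shared behaviours
    whose higher-level categories occur in both offences."""
    shared = len(a & b)
    common_cats = {TAXONOMY[x] for x in a} & {TAXONOMY[x] for x in b}
    partial = sum(w for x in (a ^ b) if TAXONOMY[x] in common_cats)
    return (shared + partial) / len(a | b)

offence_1 = {"gagging", "weapon_knife", "apology"}
offence_2 = {"binding", "weapon_knife", "compliments"}
print(jaccard(offence_1, offence_2))                  # 0.2
print(hierarchical_similarity(offence_1, offence_2))  # 0.6
```

The toy example shows the abstract's central point: the two offences share only one specific behaviour, so Jaccard scores them low, but their non-shared behaviours fall in the same higher-level categories, so the hierarchical score still registers similarity.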
Abstract:
Background: The MacDQoL is an individualised measure of the impact of macular degeneration (MD) on quality of life (QoL). There is preliminary evidence of its psychometric properties and sensitivity to severity of MD. The aim of this study was to carry out further psychometric evaluation with a larger sample and to investigate the measure's sensitivity to MD severity. Methods: Patients with MD (n = 156: 99 women, 57 men, mean age 79 ± 13 years), recruited from eye clinics (one NHS, one private), completed the MacDQoL by telephone interview and later underwent a clinic vision assessment including near and distance visual acuity (VA), comfortable near VA, contrast sensitivity, colour recognition, recovery from glare and presence or absence of distortion or scotoma in the central 10° of the visual field. Results: The completion rate for the MacDQoL items was 99.8%. Of the 26 items, three were dropped from the measure due to redundancy. A fourth was retained in the questionnaire but excluded when computing the scale score. Principal components analysis and Cronbach's alpha (0.944) supported combining the remaining 22 items in a single scale. Lower MacDQoL scores, indicating more negative impact of MD on QoL, were associated with poorer distance VA (better eye r = -0.431, p < 0.001; worse eye r = -0.350, p < 0.001; binocular vision r = -0.419, p < 0.001) and near VA (better eye r = -0.326, p < 0.001; worse eye r = -0.226, p < 0.001; binocular vision r = -0.326, p < 0.001). Poorer MacDQoL scores were also associated with poorer contrast sensitivity (better eye r = 0.392, p < 0.001; binocular vision r = 0.423, p < 0.001), poorer colour recognition (r = 0.417, p < 0.001) and poorer comfortable near VA (r = -0.283, p < 0.001). The MacDQoL differentiated between those with and without binocular scotoma (U = 1244, p < 0.001). Conclusion: The MacDQoL 22-item scale has excellent internal consistency reliability and a single-factor structure. The measure is acceptable to respondents, and the generic QoL item, MD-specific QoL item and average weighted impact score are related to several measures of vision. The MacDQoL demonstrates that MD has considerable negative impact on many aspects of QoL, particularly independence, leisure activities, dealing with personal affairs and mobility. The measure may be valuable for use in clinical trials and routine clinical care. © 2005 Mitchell et al; licensee BioMed Central Ltd.
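For readers unfamiliar with the internal-consistency statistic reported above, a minimal sketch of the standard Cronbach's alpha computation on a respondents-by-items score matrix follows; the data are fabricated and are not the MacDQoL items.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Fabricated responses: 5 respondents x 4 items on a 1-5 scale.
scores = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 4],
    [3, 3, 2, 3],
    [1, 2, 1, 2],
])
print(round(cronbach_alpha(scores), 3))
```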
Abstract:
Background: To evaluate the accuracy of an open-field autorefractor compared with subjective refraction in pseudophakes and hence its ability to assess objective eye focus with intraocular lenses (IOLs). Methods: Objective refraction was measured at 6 m using the Shin-Nippon NVision-K 5001/Grand Seiko WR-5100K open-field autorefractor (five repeats) and by subjective refraction on 141 eyes implanted with a spherical (Softec1, n = 53), aspherical (SoftecHD, n = 37) or accommodating (1CU, n = 22; Tetraflex, n = 29) IOL. Autorefraction was repeated 2 months later. Results: The autorefractor prescription was similar (average difference: 0.09 ± 0.53 D; p = 0.19) to that found by subjective refraction, with ~71% within ±0.50 D. The horizontal cylindrical components were similar (difference: 0.00 ± 0.39 D; p = 0.96), although the oblique (J45) autorefractor cylindrical vector was slightly more negative (by -0.06 ± 0.25 D; p = 0.06) than the subjective refraction. The results were similar for each of the IOL designs except the spherical IOL, where the mean spherical equivalent difference between autorefraction and subjective refraction was more hypermetropic than for the Tetraflex accommodating IOL (F = 2.77, p = 0.04). The intrasession repeatability was
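The J45 term above comes from the standard power-vector description of a spherocylindrical refraction, in which sphere S, cylinder C and axis θ map to a spherical equivalent M and two cross-cylinder components J0 and J45. A minimal sketch of that conversion follows (negative-cylinder convention assumed; the example prescription is invented, not taken from this study).

```python
import math

def power_vector(sphere: float, cyl: float, axis_deg: float):
    """Convert sphere/cylinder/axis to (M, J0, J45) power vectors
    (Thibos notation, negative-cylinder convention assumed)."""
    m = sphere + cyl / 2.0  # spherical equivalent
    j0 = -(cyl / 2.0) * math.cos(math.radians(2 * axis_deg))
    j45 = -(cyl / 2.0) * math.sin(math.radians(2 * axis_deg))
    return m, j0, j45

# Invented example prescription: -1.00 / -0.75 x 135
print(power_vector(-1.00, -0.75, 135))
```

Working in (M, J0, J45) rather than raw sphere/cylinder/axis is what makes it legitimate to average repeated refractions and compare cylindrical components statistically, as done in this study.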
Abstract:
The concept of entropy rate is well defined in dynamical systems theory, but it is impossible to apply directly to finite real-world data sets. With this in mind, Pincus developed Approximate Entropy (ApEn), which uses ideas from Eckmann and Ruelle to create a regularity measure based on entropy rate that can be used to determine the influence of chaotic behaviour in a real-world signal. However, this measure was found not to be robust, and an improved formulation, known as Sample Entropy (SampEn), was created by Richman and Moorman to address these issues. We have developed a new, related regularity measure that is not based on the theory provided by Eckmann and Ruelle and proves to be a better-behaved measure of complexity than the previous measures, whilst still retaining a low computational cost.
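For concreteness, here is a compact, unoptimised Sample Entropy implementation following the Richman and Moorman definition of -ln(A/B); the tolerance default r = 0.2 × SD is a common convention rather than something specified in this abstract, and the authors' new measure is not reproduced here.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample Entropy (Richman & Moorman): -ln(A/B), where B counts
    pairs of length-m templates within Chebyshev tolerance r, and A is
    the same count at length m + 1. Self-matches are excluded."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * x.std()  # common default tolerance

    def count_pairs(length):
        # Use n - m templates at both lengths (canonical formulation).
        templates = np.array([x[i:i + length] for i in range(n - m)])
        pairs = 0
        for i in range(len(templates) - 1):
            dist = np.abs(templates[i + 1:] - templates[i]).max(axis=1)
            pairs += int((dist <= r).sum())
        return pairs

    b = count_pairs(m)
    a = count_pairs(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")

rng = np.random.default_rng(0)
print(sample_entropy(rng.standard_normal(500)))    # higher: irregular
print(sample_entropy(np.sin(np.arange(500) / 5)))  # lower: regular
```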
Abstract:
Conventional differential scanning calorimetry (DSC) techniques are commonly used to quantify the solubility of drugs within polymeric controlled-delivery systems. However, the nature of the DSC experiment, and in particular the relatively slow heating rates employed, limits its use to the measurement of drug solubility at the drug's melting temperature. Here, we describe the application of hyper-DSC (HDSC), a variant of DSC involving extremely rapid heating rates, to the calculation of the solubility of a model drug, metronidazole, in silicone elastomer, and demonstrate that the faster heating rates permit the solubility to be calculated under non-equilibrium conditions, such that it better approximates the solubility at the temperature of use. At a heating rate of 400°C/min (HDSC), metronidazole solubility was calculated to be 2.16 mg/g, compared with 6.16 mg/g at 20°C/min. © 2005 Elsevier B.V. All rights reserved.
Abstract:
Analysis of the production efficiency of industrialized countries - asking whether certain countries produce more output than others from the same or fewer inputs - is an example of the importance of estimating production relationships. In order to estimate efficiency, one needs an appropriate model for the two major inputs into production activity, namely labour and capital. A physical asset, once installed, is capable of contributing several years of output. This implies that investments made in previous years must be taken into account in order to produce a measure of efficiency and productivity for any given year. The purpose of this article is to introduce a dynamic efficiency model and to compare the results with previous work on the analysis of efficiency and productivity of OECD countries. The article proposes that dynamic models capture efficiency better than static models.
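The point that past investments contribute to current output is commonly operationalised with a perpetual-inventory capital stock; a minimal sketch of that standard construction follows. The depreciation rate and investment series are invented, and this is not necessarily the dynamic model the article proposes.

```python
def perpetual_inventory(investments, depreciation=0.06, k0=0.0):
    """Capital stock K_t = (1 - delta) * K_{t-1} + I_t, so each year's
    capital input carries forward (depreciated) past investments."""
    k = k0
    stocks = []
    for inv in investments:
        k = (1 - depreciation) * k + inv
        stocks.append(k)
    return stocks

# Invented annual investment series:
print(perpetual_inventory([100, 110, 105, 120]))
```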
Abstract:
Purpose - This paper provides a deeper examination of the fundamentals of commonly used techniques - such as coefficient alpha and factor analysis - in order to link the techniques used by marketing and social researchers more strongly to their underlying psychometric and statistical rationale. Design/methodology/approach - A wide-ranging review and synthesis of psychometric and other measurement literature, both within and outside the marketing field, is used to illuminate and reconsider a number of misconceptions which seem to have evolved in marketing research. Findings - The research finds that marketing scholars have generally concentrated on reporting what are essentially arbitrary figures, such as coefficient alpha, without fully understanding what these figures imply. It is argued that, if the link between theory and technique is not clearly understood, use of psychometric measure development tools actually runs the risk of detracting from the validity of the measures rather than enhancing it. Research limitations/implications - The focus on one stage of a particular form of measure development could be seen as rather specialised. The paper also runs the risk of increasing the amount of dogma surrounding measurement, which runs contrary to its own spirit. Practical implications - This paper shows that researchers may need to spend more time interpreting measurement results. Rather than simply referring to precedent, one needs to understand the link between measurement theory and actual technique. Originality/value - This paper presents psychometric measurement and item analysis theory in an easily understandable format, and offers an important set of conceptual tools for researchers in many fields. © Emerald Group Publishing Limited.
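One concrete instance of the paper's argument: in its standardised form, coefficient alpha is a simple function of the number of items k and the mean inter-item correlation, so alpha rises mechanically as items are added, whatever the figure is taken to "imply". The formula below is standard psychometric theory, not reproduced from this paper.

```latex
% Standardised coefficient alpha as a function of item count k and
% mean inter-item correlation \bar{r}:
\[
  \alpha_{\mathrm{std}} \;=\; \frac{k\,\bar{r}}{1 + (k - 1)\,\bar{r}}
\]
% With \bar{r} = 0.3: k = 5 gives \alpha \approx 0.68, while k = 20
% gives \alpha \approx 0.90 -- the scale "looks" far more reliable
% simply because it has more items.
```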
Abstract:
This thesis opens the discussion on corporate social responsibility (CSR) through a review of the literature on the conceptualisation, determinants and rewards of organisational CSR engagement. The case is made for the need to draw attention to the micro-levels of CSR, and consequently to focus on employee social responsibility at multiple levels of analysis. In order to further research efforts in this area, the prerequisite of an employee social responsibility behavioural measurement tool is acknowledged. Accordingly, the subsequent chapters outline the process of scale development and validation, resulting in a robust, reliable and valid employee social responsibility scale. This scale is then put to use in a field study, and the noteworthy roles of the antecedent and boundary conditions of transformational leadership, assigned CSR priority and CSR climate are confirmed at the group and individual levels. The directionality of these relationships is subsequently addressed in a time-lagged investigation set within a simulated business environment. The thesis collates and discusses the contributions of the findings from the research series, which highlight a consistent three-way interaction effect of transformational leadership, assigned CSR priority and CSR climate. Finally, various avenues for future research are outlined, given the infancy of the micro-level study of employee social responsibility.
Abstract:
In for-profit organizations, efficiency measurement with reference to the potential for profit augmentation is particularly important, as is its decomposition into technical and allocative components. Different profit efficiency approaches can be found in the literature to measure and decompose overall profit efficiency. In this paper, we highlight some problems within existing approaches and propose a new measure of profit efficiency based on a geometric mean of the input/output adjustments needed to maximize profits. Overall profit efficiency is calculated through this efficiency measure and is decomposed into its technical and allocative components. Technical efficiency is calculated from a non-oriented geometric distance function (GDF) that is able to incorporate all the sources of inefficiency, while allocative efficiency is retrieved residually. We also define a measure of profitability efficiency which complements profit efficiency in that it makes it possible to retrieve the scale efficiency of a unit as a component of its profitability efficiency. In addition, the measure of profitability efficiency allows for a dual profitability interpretation of the GDF measure of technical efficiency. The concepts introduced in the paper are illustrated using a numerical example.
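A minimal sketch of the geometric-mean idea, assuming (as in the GDF literature) that the score is the geometric mean of input contraction factors divided by the geometric mean of output expansion factors; the adjustment factors below are invented, and the paper's full linear-programming formulation is not shown.

```python
from math import prod

def gdf_score(input_factors, output_factors):
    """Geometric-distance-function style score: geometric mean of input
    contractions (theta_i <= 1) over geometric mean of output expansions
    (beta_r >= 1). A unit needing no adjustment scores 1."""
    m, s = len(input_factors), len(output_factors)
    return (prod(input_factors) ** (1 / m)) / (prod(output_factors) ** (1 / s))

# Invented adjustments toward the profit-maximising point: inputs could
# shrink to 80% and 90%; outputs should grow by 25% and 10%.
print(gdf_score([0.8, 0.9], [1.25, 1.10]))  # < 1, i.e. inefficient
```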
Abstract:
This paper describes the development and validation of a multidimensional measure of organizational climate, the Organizational Climate Measure (OCM), based upon Quinn and Rohrbaugh's Competing Values model. A sample of 6869 employees across 55 manufacturing organizations completed the questionnaire. The 17 scales contained within the measure had acceptable levels of reliability and were factorially distinct. Concurrent validity was measured by correlating employees' ratings with managers' and interviewers' descriptions of managerial practices and organizational characteristics. Predictive validity was established using measures of productivity and innovation. The OCM also discriminated effectively between organizations, demonstrating good discriminant validity. The measure offers researchers a relatively comprehensive and flexible approach to the assessment of organizational members' experience and promises applied and theoretical benefits. Copyright © 2005 John Wiley & Sons, Ltd.
Abstract:
Innovation has long been an area of interest to social scientists, and particularly to psychologists working in organisational settings. The Team Climate Inventory (TCI) is a facet-specific measure of team climate for innovation that provides a picture of the level and quality of teamwork in a unit using a series of Likert scales. This paper describes its Italian validation in 585 working group members employed in health-related and other contexts. The data were evaluated by means of factorial analysis (including an analysis of the internal consistency of the scales) and Pearson's product-moment correlations. The results show the internal consistency of the scales and the satisfactory factorial structure of the inventory, despite some variations in the factorial structure mainly due to cultural differences and the specific nature of Italian organisational systems.