37 results for downloading of data
in Aston University Research Archive
Abstract:
The book aims to introduce the reader to DEA in the most accessible manner possible. It is specifically aimed at those who have had no prior exposure to DEA and wish to learn its essentials, how it works, its key uses, and the mechanics of using it. The latter will include using DEA software. Students on degree or training courses will find the book especially helpful. The same is true of practitioners engaging in comparative efficiency assessments and performance management within their organisation. Examples are used throughout the book to help the reader consolidate the concepts covered. Table of contents: List of Tables. List of Figures. Preface. Abbreviations. 1. Introduction to Performance Measurement. 2. Definitions of Efficiency and Related Measures. 3. Data Envelopment Analysis Under Constant Returns to Scale: Basic Principles. 4. Data Envelopment Analysis under Constant Returns to Scale: General Models. 5. Using Data Envelopment Analysis in Practice. 6. Data Envelopment Analysis under Variable Returns to Scale. 7. Assessing Policy Effectiveness and Productivity Change Using DEA. 8. Incorporating Value Judgements in DEA Assessments. 9. Extensions to Basic DEA Models. 10. A Limited User Guide for Warwick DEA Software. Author Index. Topic Index. References.
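The mechanics the book teaches can be glimpsed in a few lines of code. Below is a minimal sketch of the input-oriented CCR (constant returns to scale) envelopment model covered in Chapters 3 and 4, solved as a linear programme with SciPy. The units and data are hypothetical, and this sketch is no substitute for the Warwick DEA software described in Chapter 10.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of unit o.

    X is (m, n) inputs and Y is (s, n) outputs, one column per unit.
    Solves: min theta  s.t.  X @ lam <= theta * x_o,  Y @ lam >= y_o,  lam >= 0.
    """
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(1 + n)
    c[0] = 1.0                                   # decision vars: [theta, lam]
    A_in = np.hstack([-X[:, [o]], X])            # X @ lam - theta * x_o <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])    # -Y @ lam <= -y_o
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[:, o]]),
                  bounds=[(0, None)] * (1 + n))
    return res.x[0]

# four hypothetical units, two inputs, one identical output
X = np.array([[2.0, 4.0, 8.0, 4.0],
              [4.0, 2.0, 2.0, 8.0]])
Y = np.array([[1.0, 1.0, 1.0, 1.0]])
scores = [round(ccr_efficiency(X, Y, o), 3) for o in range(4)]
# units 1 and 2 define the frontier; unit 3 is only weakly efficient
# (radial score 1 with input slack); unit 4 is radially inefficient
print(scores)
```

The fourth unit can radially contract both inputs by half and still reach the frontier, so its score is 0.5.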
Abstract:
The explosive growth in microprocessor technology and the increasing use of computers to store information has increased the demand for data communication channels. Because of this, data communication to mobile vehicles is increasing rapidly. In addition, data communication is seen as a method of relieving the current congestion of mobile radio telephone bands in the U.K. Highly reliable data communication over mobile radio channels is particularly difficult to achieve, primarily due to fading caused by multipath interference. In this thesis a data communication system is described for use over radio channels impaired by multipath interference. The thesis first describes radio communication in general, and multipath interference in particular. The practical aspects of fading channels are stressed because of their importance in the development of the system. The current U.K. land mobile radio scene is then reviewed, with particular emphasis on the use of existing mobile radio equipment for data communication purposes. The development of the data communication system is then described. This system is microprocessor based and uses an advanced form of automatic repeat request (ARQ) operation. It can be configured to use either existing radio-telephone equipment, totally new equipment specifically designed for data communication, or any combination of the two. Due to its adaptability, the system can automatically optimise itself for use over any channel, even if the channel parameters are changing rapidly. Results obtained from a particular implementation of the system, which is described in full, are presented. These show how the operation of the system has to change to accommodate changes in the channel. Comparisons are made between the practical results and the theoretical limits of the system.
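The basic ARQ principle underlying the system can be illustrated with a toy stop-and-wait simulation. The thesis's scheme is an adaptive, more advanced form of ARQ; the loss model and parameters below are purely illustrative.

```python
import random

def send_stop_and_wait(frames, p_loss, max_tries=10, seed=0):
    """Send frames with stop-and-wait ARQ over a channel that loses a
    frame (or its acknowledgement) with probability p_loss."""
    rng = random.Random(seed)
    delivered, transmissions = [], 0
    for frame in frames:
        for _ in range(max_tries):
            transmissions += 1
            if rng.random() > p_loss:            # frame and ACK both arrive
                delivered.append(frame)
                break                            # move on to the next frame
    return delivered, transmissions

data = list(range(20))
ok, tx = send_stop_and_wait(data, p_loss=0.3)
print(len(ok), tx)
```

On a fading channel the effective `p_loss` varies with time, which is why an adaptive system such as the one described must re-optimise its operation as the channel changes.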
Abstract:
There may be circumstances where it is necessary for microbiologists to compare variances rather than means, e.g., in analysing data from experiments to determine whether a particular treatment alters the degree of variability, or in testing the assumption of homogeneity of variance prior to other statistical tests. All of the tests described in this Statnote have their limitations. Bartlett’s test may be too sensitive, but Levene’s and the Brown-Forsythe tests also have problems. We would recommend the use of the variance-ratio test to compare two variances and the careful application of Bartlett’s test if there are more than two groups. Considering that these tests are not particularly robust, it should be remembered that the homogeneity of variance assumption is usually the least important of those considered when carrying out an ANOVA. If there is concern about this assumption, and especially if the other assumptions of the analysis are also not likely to be met, e.g., lack of normality or non-additivity of treatment effects, then it may be better either to transform the data or to carry out a non-parametric test on the data.
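The tests discussed are available off the shelf. The sketch below, using simulated data rather than microbiological measurements, applies the variance-ratio (F) test to two groups and then Bartlett's and Levene's tests, which extend to three or more groups.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
a = rng.normal(10.0, 1.0, size=30)    # control group: SD 1
b = rng.normal(10.0, 3.0, size=30)    # treatment with inflated variability

# variance-ratio (F) test for exactly two groups: larger variance on top
va, vb = a.var(ddof=1), b.var(ddof=1)
f_stat = max(va, vb) / min(va, vb)
p_f = 2 * stats.f.sf(f_stat, len(a) - 1, len(b) - 1)   # two-tailed

# Bartlett's and Levene's tests generalise to three or more groups
_, p_bartlett = stats.bartlett(a, b)
_, p_levene = stats.levene(a, b)
print(p_f < 0.05, p_bartlett < 0.05, p_levene < 0.05)
```

With a ninefold difference in true variance all three tests reject homogeneity here; their differing sensitivity matters mostly in borderline cases.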
Abstract:
This article explains, first, the reasons why a knowledge of statistics is necessary, and describes the role that statistics plays in an experimental investigation. Second, the normal distribution is introduced, which describes the natural variability shown by many measurements in optometry and vision sciences. Third, the application of the normal distribution to some common statistical problems is described, including how to determine whether an individual observation is a typical member of a population and how to determine the confidence interval for a sample mean.
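The two worked problems mentioned can be sketched in a few lines. The numbers below are hypothetical (loosely modelled on intraocular pressure in mmHg), and with a small sample and an estimated SD a t-interval would strictly be preferred to the normal-based one shown.

```python
import math
from scipy import stats

mu, sigma = 16.0, 2.0          # hypothetical population mean and SD, mmHg
x = 21.0                       # an individual observation

# is x a typical member of the population? compute its z-score
z = (x - mu) / sigma
p_more_extreme = 2 * stats.norm.sf(abs(z))   # two-tailed tail probability

# 95% confidence interval for a sample mean (normal approximation)
sample_mean, sd, n = 16.8, 2.1, 25
half_width = stats.norm.ppf(0.975) * sd / math.sqrt(n)
ci = (sample_mean - half_width, sample_mean + half_width)
print(round(z, 2), round(p_more_extreme, 3), [round(v, 2) for v in ci])
```

An observation 2.5 SDs from the mean would arise by chance only about 1.2% of the time, so it would usually be regarded as atypical.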
Abstract:
DUE TO COPYRIGHT RESTRICTIONS ONLY AVAILABLE FOR CONSULTATION AT ASTON UNIVERSITY LIBRARY AND INFORMATION SERVICES WITH PRIOR ARRANGEMENT
Abstract:
This thesis describes the development of a complete data visualisation system for large tabular databases, such as those commonly found in a business environment. A state-of-the-art 'cyberspace cell' data visualisation technique was investigated and a powerful visualisation system using it was implemented. Although allowing databases to be explored and conclusions drawn, it had several drawbacks, the majority of which were due to the three-dimensional nature of the visualisation. A novel two-dimensional generic visualisation system, known as MADEN, was then developed and implemented, based upon a 2-D matrix of 'density plots'. MADEN allows an entire high-dimensional database to be visualised in one window, while permitting close analysis in 'enlargement' windows. Selections of records can be made and examined, and dependencies between fields can be investigated in detail. MADEN was used as a tool for investigating and assessing many data processing algorithms, firstly data-reducing (clustering) methods, then dimensionality-reducing techniques. These included a new 'directed' form of principal components analysis, several novel applications of artificial neural networks, and discriminant analysis techniques which illustrated how groups within a database can be separated. To illustrate the power of the system, MADEN was used to explore customer databases from two financial institutions, resulting in a number of discoveries which would be of interest to a marketing manager. Finally, the database of results from the 1992 UK Research Assessment Exercise was analysed. Using MADEN allowed both universities and disciplines to be graphically compared, and supplied some startling revelations, including empirical evidence of the 'Oxbridge factor'.
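The core of MADEN's 2-D matrix of 'density plots' can be approximated numerically: for every pair of fields, records are binned into a small 2-D grid of counts. The sketch below is an assumption-laden stand-in for the system described (the field names and data are invented), showing only the binning step, not the windowing, selection, or enlargement features.

```python
import numpy as np

def density_matrix(data, bins=8):
    """For each pair of fields, bin the records into a 2-D grid of counts,
    a numeric stand-in for one 'density plot' cell of the matrix."""
    n_fields = data.shape[1]
    cells = {}
    for i in range(n_fields):
        for j in range(n_fields):
            counts, _, _ = np.histogram2d(data[:, i], data[:, j], bins=bins)
            cells[(i, j)] = counts
    return cells

rng = np.random.default_rng(0)
# hypothetical customer table: age and income correlated, balance noise
age = rng.uniform(18, 80, 500)
income = age * 400 + rng.normal(0, 3000, 500)
balance = rng.normal(1000, 250, 500)
cells = density_matrix(np.column_stack([age, income, balance]))
print(len(cells), int(cells[(0, 1)].sum()))
```

A dependency between two fields, such as age and income here, shows up as a concentration of counts along a ridge in the corresponding cell, which is exactly the kind of structure the enlargement windows let an analyst examine.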
Abstract:
DUE TO COPYRIGHT RESTRICTIONS ONLY AVAILABLE FOR CONSULTATION AT ASTON UNIVERSITY LIBRARY AND INFORMATION SERVICES WITH PRIOR ARRANGEMENT
Abstract:
Purpose – The purpose of this editorial is to stimulate debate and discussion amongst marketing scholars regarding the implications for scientific research of increasingly large amounts of data and sophisticated data analytic techniques. Design/methodology/approach – The authors respond to a recent editorial in WIRED magazine which heralds the demise of the scientific method in the face of the vast data sets now available. Findings – The authors propose that more data makes theory more important, not less. They differentiate between raw prediction and scientific knowledge – which is aimed at explanation. Research limitations/implications – These thoughts are preliminary and intended to spark thinking and debate, not represent editorial policy. Due to space constraints, the coverage of many issues is necessarily brief. Practical implications – Marketing researchers should find these thoughts at the very least stimulating, and may wish to investigate these issues further. Originality/value – This piece should provide some interesting food for thought for marketing researchers.
Abstract:
Data quality is a difficult notion to define precisely, and different communities have different views and understandings of the subject. This causes confusion, a lack of harmonization of data across communities and omission of vital quality information. For some existing data infrastructures, data quality standards cannot address the problem adequately and cannot fulfil all user needs or cover all concepts of data quality. In this paper we discuss some philosophical issues on data quality. We identify actual user needs on data quality, review existing standards and specifications on data quality, and propose an integrated model for data quality in the field of Earth observation. We also propose a practical mechanism for applying the integrated quality information model to large numbers of datasets through metadata inheritance. While our data quality management approach is in the domain of Earth observation, we believe the ideas and methodologies for data quality management can be applied to wider domains and disciplines to facilitate quality-enabled scientific research.
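The metadata-inheritance mechanism can be sketched abstractly: quality fields set on a parent collection apply to every dataset beneath it unless the dataset overrides them. The class and field names below are illustrative inventions, not the paper's actual model.

```python
class QualityMetadata:
    """A quality record that inherits fields from a parent record,
    e.g. a dataset inheriting from its parent collection."""

    def __init__(self, own=None, parent=None):
        self.own = own or {}     # quality fields set at this level
        self.parent = parent     # enclosing collection's metadata, if any

    def resolve(self):
        """Effective quality record: inherited fields, overridden locally."""
        merged = self.parent.resolve() if self.parent else {}
        merged.update(self.own)
        return merged

# hypothetical Earth-observation example: a granule inherits collection-level
# quality information and adds its own
collection = QualityMetadata({"sensor": "MODIS", "uncertainty": "5%"})
granule = QualityMetadata({"cloud_cover": "12%"}, parent=collection)
print(granule.resolve())
```

The benefit is that quality information recorded once at collection level propagates automatically to a large number of datasets, rather than being copied into (or omitted from) each one.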
Abstract:
Purpose - Measurements obtained from the right and left eye of a subject are often correlated whereas many statistical tests assume observations in a sample are independent. Hence, data collected from both eyes cannot be combined without taking this correlation into account. Current practice is reviewed with reference to articles published in three optometry journals, viz., Ophthalmic and Physiological Optics (OPO), Optometry and Vision Science (OVS), Clinical and Experimental Optometry (CEO) during the period 2009–2012. Recent findings - Of the 230 articles reviewed, 148/230 (64%) obtained data from one eye and 82/230 (36%) from both eyes. Of the 148 one-eye articles, the right eye, left eye, a randomly selected eye, the better eye, the worse or diseased eye, or the dominant eye were all used as selection criteria. Of the 82 two-eye articles, the analysis utilized data from: (1) one eye only rejecting data from the adjacent eye, (2) both eyes separately, (3) both eyes taking into account the correlation between eyes, or (4) both eyes using one eye as a treated or diseased eye, the other acting as a control. In a proportion of studies, data were combined from both eyes without correction. Summary - It is suggested that: (1) investigators should consider whether it is advantageous to collect data from both eyes, (2) if one eye is studied and both are eligible, then it should be chosen at random, and (3) two-eye data can be analysed incorporating eyes as a ‘within subjects’ factor.
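The between-eye correlation and the recommended 'within subjects' handling can be demonstrated on simulated data. The numbers below are invented (loosely IOP-like), and a paired t-test is used here as the simplest example of treating eye as a within-subject factor; a mixed model would be needed for more complex designs.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 40
subject = rng.normal(16.0, 2.0, n)         # per-subject level (e.g. IOP, mmHg)
right = subject + rng.normal(0.0, 0.8, n)  # both eyes share the subject
left = subject + rng.normal(0.0, 0.8, n)   # effect, so they are correlated

r = np.corrcoef(right, left)[0, 1]         # typically high between-eye r

# pooling 2n eyes as independent observations ignores that correlation;
# a paired analysis treats eye as a 'within subjects' factor
p_pooled = stats.ttest_ind(right, left).pvalue
p_paired = stats.ttest_rel(right, left).pvalue
print(round(r, 2))
```

The high value of `r` is the reason the review cautions against combining both eyes without correction: the 2n observations carry far less information than 2n independent ones.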
Abstract:
We demonstrate a novel phase noise estimation scheme for CO-OFDM, in which pilot subcarriers are deliberately correlated to the data subcarriers. This technique reduces the overhead by a factor of 2. © OSA 2014.
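For context, conventional pilot-aided common-phase-error estimation in OFDM works as sketched below; the paper's contribution is to correlate the pilots with the data subcarriers so that fewer dedicated pilots are needed, which this sketch does not implement. All parameters (subcarrier count, pilot spacing, noise level) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
n_sc = 64
pilots = np.arange(0, n_sc, 8)            # every 8th subcarrier is a pilot
symbols = np.exp(1j * rng.choice([0.0, np.pi / 2, np.pi, 3 * np.pi / 2], n_sc))
known = symbols[pilots]                   # pilot values the receiver knows

phase = 0.4                               # common phase error, radians
noise = rng.normal(0, 0.05, n_sc) + 1j * rng.normal(0, 0.05, n_sc)
rx = symbols * np.exp(1j * phase) + noise

# estimate the common phase from the pilots, then derotate every subcarrier
est = np.angle(np.sum(rx[pilots] * np.conj(known)))
corrected = rx * np.exp(-1j * est)
print(round(est, 2))
```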
Abstract:
The Electronic Product Code Information Service (EPCIS) is an EPCglobal standard that aims to bridge the gap between the physical world of RFID-tagged artifacts and the information systems that enable their tracking and tracing via the Electronic Product Code (EPC). Central to the EPCIS data model are "events" that describe specific occurrences in the supply chain. EPCIS events, recorded and registered against EPC tagged artifacts, encapsulate the "what", "when", "where" and "why" of these artifacts as they flow through the supply chain. In this paper we propose an ontological model for representing EPCIS events on the Web of data. Our model provides a scalable approach for the representation, integration and sharing of EPCIS events as linked data via RESTful interfaces, thereby facilitating interoperability, collaboration and exchange of EPC related data across enterprises on a Web scale.
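The "what/when/where/why" structure of an EPCIS event lends itself to a linked-data serialisation such as JSON-LD. The sketch below is a hypothetical illustration only: the `epcis:` vocabulary URI and event identifier are invented stand-ins, not the ontology the paper defines.

```python
import json

# a hypothetical EPCIS ObjectEvent expressed as JSON-LD
event = {
    "@context": {"epcis": "http://example.org/epcis#"},
    "@id": "http://example.org/events/1234",
    "@type": "epcis:ObjectEvent",
    "epcis:epcList": ["urn:epc:id:sgtin:0614141.107346.2017"],  # the "what"
    "epcis:eventTime": "2014-03-15T10:11:12Z",                  # the "when"
    "epcis:readPoint": "urn:epc:id:sgln:0614141.07346.1234",    # the "where"
    "epcis:bizStep": "urn:epcglobal:cbv:bizstep:shipping",      # the "why"
    "epcis:action": "OBSERVE",
}
doc = json.dumps(event, indent=2)
print("epcis:ObjectEvent" in doc)
```

Serialised this way, each event is a small RDF graph that a RESTful interface can expose and that other enterprises can merge with their own EPC-related data.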
Abstract:
Non-parametric methods for efficiency evaluation were designed to analyse industries comprising multi-input multi-output producers and lacking data on market prices. Education is a typical example. In this chapter, we review applications of DEA in secondary and tertiary education, focusing on the opportunities that this offers for benchmarking at institutional level. At secondary level, we investigate also the disaggregation of efficiency measures into pupil-level and school-level effects. For higher education, while many analyses concern overall institutional efficiency, we examine also studies that take a more disaggregated approach, centred either around the performance of specific functional areas or that of individual employees.
Abstract:
The breadth and depth of available clinico-genomic information presents an enormous opportunity for improving our ability to study disease mechanisms and meet the needs of individualised medicine. A difficulty occurs when the results are to be transferred 'from bench to bedside'. Diversity of methods is one of the causes, but the most critical one relates to our inability to share and jointly exploit data and tools. This paper presents a perspective on the current state-of-the-art in the analysis of clinico-genomic data and its relevance to medical decision support. It is an attempt to investigate the issues related to data and knowledge integration. Copyright © 2010 Inderscience Enterprises Ltd.