909 results for measurement error models


Relevance: 30.00%

Publisher:

Abstract:

The aim of this study was to determine whether an ophthalmophakometric technique could offer a feasible means of investigating ocular component contributions to residual astigmatism in human eyes. Current opinion was gathered on the prevalence, magnitude and source of residual astigmatism. It emerged that a comprehensive evaluation of the astigmatic contributions of the eye's internal ocular surfaces and their respective axial separations (effectivity) had not been carried out to date. An ophthalmophakometric technique was developed to measure astigmatism arising from the internal ocular components. Procedures included the measurement of refractive error (infra-red autorefractometry), anterior corneal surface power (computerised video keratography), axial distances (A-scan ultrasonography) and the powers of the posterior corneal surface in addition to both surfaces of the crystalline lens (multi-meridional still flash ophthalmophakometry). Computing schemes were developed to yield the required biometric data. These included (1) calculation of crystalline lens surface powers in the absence of Purkinje images arising from its anterior surface, (2) application of meridional analysis to derive spherocylindrical surface powers from notional powers calculated along four pre-selected meridians, (3) application of astigmatic decomposition and vergence analysis to calculate contributions to residual astigmatism of ocular components with obliquely related cylinder axes, and (4) calculation of the effect of random experimental errors on the calculated ocular component data. A complete set of biometric measurements was taken from both eyes of 66 undergraduate students. Effectivity due to corneal thickness made the smallest cylinder power contribution (up to 0.25DC) to residual astigmatism, followed by contributions of the anterior chamber depth (up to 0.50DC) and crystalline lens thickness (up to 1.00DC). In each case astigmatic contributions were predominantly direct. More astigmatism arose from the posterior corneal surface (up to 1.00DC) and both crystalline lens surfaces (up to 2.50DC). The astigmatic contributions of the posterior corneal and lens surfaces were found to be predominantly inverse, whilst direct astigmatism arose from the anterior lens surface. Very similar results were found for right versus left eyes and males versus females. Repeatability was assessed on 20 individuals. The ophthalmophakometric method was found to be prone to considerable accumulated experimental errors. However, these errors are random in nature, so that group-averaged data were found to be reasonably repeatable. A further confirmatory study was carried out on 10 individuals, which demonstrated that biometric measurements made with and without cycloplegia did not differ significantly.
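
To make the astigmatic-decomposition step concrete, the sketch below sums obliquely crossed cylinder contributions via power vectors (M, J0, J45). This is a generic illustration under standard power-vector conventions, not the thesis's actual computing scheme; the function names and the two example surface powers are hypothetical.

```python
import math

def to_power_vector(sphere, cyl, axis_deg):
    """Convert a sphero-cylinder (S, C, axis) to power-vector form (M, J0, J45)."""
    theta = math.radians(axis_deg)
    m = sphere + cyl / 2.0                    # spherical equivalent
    j0 = -(cyl / 2.0) * math.cos(2 * theta)   # with-/against-the-rule component
    j45 = -(cyl / 2.0) * math.sin(2 * theta)  # oblique component
    return m, j0, j45

def to_sphero_cylinder(m, j0, j45):
    """Convert a power vector back to negative-cylinder sphero-cylinder notation."""
    cyl = -2.0 * math.hypot(j0, j45)
    sphere = m - cyl / 2.0
    axis = math.degrees(0.5 * math.atan2(j45, j0)) % 180.0
    return sphere, cyl, axis

# Hypothetical astigmatic contributions (D) of two surfaces with oblique axes:
surfaces = [(0.00, -0.75, 90.0), (0.00, -0.50, 160.0)]
total = [sum(component) for component in zip(*(to_power_vector(*s) for s in surfaces))]
print(to_sphero_cylinder(*total))  # combined sphero-cylindrical contribution
```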

Relevance: 30.00%

Publisher:

Abstract:

The research described here concerns the development of metrics and models to support the development of hybrid (conventional/knowledge-based) integrated systems. The thesis argues that, although it is well known that estimating the cost, duration and quality of information systems is a difficult task, it is far from clear what sorts of tools and techniques would adequately support a project manager in the estimation of these properties. A literature review shows that metrics (measurements) and estimating tools have been developed for conventional systems since the 1960s, while there has been very little research on metrics for knowledge-based systems (KBSs). Furthermore, although there are a number of theoretical problems with many of the "classic" metrics developed for conventional systems, it also appears that the tools which such metrics can be used to develop are not widely used by project managers. A survey was carried out of large UK companies which confirmed this continuing state of affairs. Before any useful tools could be developed, therefore, it was important to find out why project managers were not using these tools already. By characterising those companies that use software cost estimating (SCE) tools against those which could but do not, it was possible to recognise the involvement of the client/customer in the process of estimation. Pursuing this point, a model of the early estimating and planning stages (the EEPS model) was developed to test exactly where estimating takes place. The EEPS model suggests that estimating could take place either before a fully-developed plan has been produced, or while this plan is being produced. If it were the former, then SCE tools would be particularly useful since there is very little other data available from which to produce an estimate. A second survey, however, indicated that project managers see estimating as being essentially the latter, at which point project management tools are available to support the process. It would seem, therefore, that SCE tools are not being used because project management tools are being used instead. The issue here is not with the method of developing an estimating model or tool, but with the way in which "an estimate" is intimately tied to an understanding of what tasks are being planned. Current SCE tools are perceived by project managers as targeting the wrong point of estimation. A model (called TABATHA) is then presented which describes how an estimating tool based on an analysis of tasks would fit into the planning stage. The issue of whether metrics can be usefully developed for hybrid systems (which also contain KBS components) is tested by extending a number of "classic" program size and structure metrics to a KBS language, Prolog. Measurements of lines of code, Halstead's operators/operands, McCabe's cyclomatic complexity, Henry & Kafura's data flow fan-in/out and post-release reported errors were taken for a set of 80 commercially developed LPA Prolog programs. By redefining the metric counts for Prolog it was found that estimates of program size and error-proneness comparable to the best conventional studies are possible. This suggests that metrics can be usefully applied to KBS languages such as Prolog, and thus the development of metrics and models to support the development of hybrid information systems is both feasible and useful.
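
As a rough illustration of the kind of "classic" size metrics being redefined for Prolog, the sketch below computes Halstead's basic measures from operator/operand counts. How Prolog tokens are classified as operators or operands is exactly the redefinition the thesis addresses, so the counts here are assumed inputs rather than values derived from real source.

```python
import math

def halstead_metrics(n1, n2, N1, N2):
    """Halstead measures from distinct/total operator and operand counts.

    n1, n2: number of distinct operators / distinct operands
    N1, N2: total occurrences of operators / operands
    """
    vocabulary = n1 + n2
    length = N1 + N2
    volume = length * math.log2(vocabulary)  # program volume (bits)
    difficulty = (n1 / 2.0) * (N2 / n2)      # Halstead difficulty
    effort = difficulty * volume             # estimated implementation effort
    return {"vocabulary": vocabulary, "length": length,
            "volume": volume, "difficulty": difficulty, "effort": effort}

# Hypothetical counts for one Prolog predicate, assuming, e.g., that ':-', ','
# and built-ins count as operators and that atoms/variables count as operands.
print(halstead_metrics(n1=12, n2=20, N1=45, N2=60))
```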

Relevance: 30.00%

Publisher:

Abstract:

Purpose: Recent studies indicate that ocular and scleral rigidity are pertinent to our understanding of glaucoma, age-related macular degeneration and the development and pathogenesis of myopia. The principal method of measuring ocular rigidity is by extrapolation of data from corneal indentation tonometry (Ko) using Friedenwald's transformation algorithms. Using scleral indentation (Schiotz tonometry), we assess whether regional variations in resistance to indentation occur in vivo across the human anterior globe directly, with reference to the deflection of Schiotz scale readings. Methods: Data were collected from both eyes of 26 normal young adult subjects with a range of refractive error (mean spherical equivalent ± S.D. of -1.77 D ± 3.28 D, range -10.56 to +4.38 D). Schiotz tonometry (5.5 g & 7.5 g) was performed on the cornea and four scleral quadrants: supero-temporal (ST) and -nasal (SN), infero-temporal (IT) and -nasal (IN), approximately 8 mm posterior to the limbus. Results: Values of Ko (mm³)⁻¹ were consistent with those previously reported (mean 0.0101 ± 0.0082, range 0.0019–0.0304). With regard to the sclera, significant differences (p < 0.001) were found across quadrants in indentation readings for both loads, between means for the cornea and ST; ST and SN; ST and IT; and ST and IN. Mean (±S.D.) scale readings for 5.5 g were: cornea 5.93 ± 1.14, ST 8.05 ± 1.58, IT 7.03 ± 1.86, SN 6.25 ± 1.10, IN 6.02 ± 1.49; and for 7.5 g: cornea 9.26 ± 1.27, ST 11.56 ± 1.65, IT 10.31 ± 1.74, SN 9.91 ± 1.20, IN 9.50 ± 1.56. Conclusions: Significant regional variation was found in the resistance of the anterior sclera to indentation produced by the Schiotz tonometer.

Relevance: 30.00%

Publisher:

Abstract:

Formative measurement has seen increasing acceptance in organizational research since the turn of the 21st century. However, in more recent times, a number of criticisms of the formative approach have appeared. Such work argues that formatively measured constructs are empirically ambiguous and thus flawed in a theory-testing context. The aim of the present paper is to examine the underpinnings of formative measurement theory in light of theories of causality and ontology in measurement in general. In doing so, a thesis is advanced which draws a distinction between reflective, formative, and causal theories of latent variables. This distinction is shown to be advantageous in that it clarifies the ontological status of each type of latent variable, and thus provides advice on appropriate conceptualization and application. The distinction also reconciles in part both recent supportive and critical perspectives on formative measurement. In light of this, advice is given on how most appropriately to model formative composites in theory-testing applications, placing the onus on the researcher to make clear their conceptualization and operationalization.

Relevance: 30.00%

Publisher:

Abstract:

We investigate the use of different direct detection modulation formats in a wavelength switched optical network. We find the minimum time it takes a tunable sampled-grating distributed Bragg reflector laser to recover after switching from one wavelength channel to another for different modulation formats. The recovery time is investigated utilizing a field-programmable gate array which operates as a time-resolved bit error rate detector. The detector offers 93 ps resolution operating at 10.7 Gb/s and allows all of the received data to contribute to the measurement, so that low bit error rates can be measured at high speed. The recovery times for 10.7 Gb/s non-return-to-zero on–off keyed modulation, 10.7 Gb/s differentially phase-shift keyed signals and 21.4 Gb/s differentially quadrature phase-shift keyed formats can be as low as 4 ns, 7 ns and 40 ns, respectively. The time-resolved phase noise associated with laser settling is simultaneously measured for 21.4 Gb/s differentially quadrature phase-shift keyed data, and it shows that phase noise coupled with frequency error is the primary limitation on transmitting immediately after a laser switching event.
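
The time-resolved BER idea can be sketched as binning detected bit errors by their delay after the switching event and reading off when the error rate settles. The sketch below is a minimal software analogue with hypothetical error-delay data, a 93 ps bin width borrowed from the detector resolution, and an arbitrary BER threshold; it is not the FPGA implementation described above.

```python
import numpy as np

def time_resolved_ber(error_times_ps, bits_per_bin, bin_width_ps=93.0, window_ns=50.0):
    """Bin detected bit errors by delay after a switching event.

    error_times_ps: delay (ps) of each errored bit relative to the switch
    bits_per_bin: bits compared per bin, accumulated over many switching events
    """
    n_bins = int(window_ns * 1000 / bin_width_ps)
    edges = np.arange(n_bins + 1) * bin_width_ps
    errors, _ = np.histogram(error_times_ps, bins=edges)
    return edges[:-1], errors / bits_per_bin

def recovery_time(bin_starts_ps, ber, threshold=1e-3):
    """First bin after which the BER stays below the threshold."""
    below = ber < threshold
    for i in range(len(ber)):
        if below[i:].all():
            return bin_starts_ps[i]
    return None

# Hypothetical error delays: errors cluster just after the switching event.
rng = np.random.default_rng(0)
errs = rng.exponential(scale=3000.0, size=5000)
t, ber = time_resolved_ber(errs, bits_per_bin=1e5)
print(recovery_time(t, ber))  # settling time estimate in ps
```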

Relevance: 30.00%

Publisher:

Abstract:

The rationale for carrying out this research was to address the clear lack of knowledge surrounding the measurement of public hospital performance in Ireland. The objectives of this research were to develop a comprehensive model for measuring hospital performance and to use this model to measure the performance of public acute hospitals in Ireland in 2007. Having assessed the advantages and disadvantages of various measurement models, the Data Envelopment Analysis (DEA) model was chosen for this research. DEA was initiated by Charnes, Cooper and Rhodes in 1978 and further developed by Färe et al. (1983) and Banker et al. (1984). The method used to choose relevant inputs and outputs to be included in the model followed that adopted by Casu et al. (2005), which included the use of focus groups. The main conclusions of the research are threefold. Firstly, it is clear that each stakeholder group has differing opinions on what constitutes good performance. It is therefore imperative that any performance measurement model be designed within parameters that are clearly understood by any intended audience. Secondly, there is a lack of publicly available qualitative information in Ireland, which inhibits detailed analysis of hospital performance. Thirdly, based on available qualitative and quantitative data, the results indicated a high level of efficiency among the public acute hospitals in Ireland in their staffing and non-pay costs, averaging 98.5%. As DEA scores are sensitive to the number of input and output variables as well as the size of the sample, it should be borne in mind that a high level of efficiency could be a result of using DEA with too many variables compared to the number of hospitals. No hospital was deemed to be scale efficient in any of the models, even though the average scale efficiency for all of the hospitals was relatively high at 90.3%. Arising from this research, the main recommendations are that information on medical outcomes, survival rates and patient satisfaction should be made publicly available in Ireland; that, despite a high average efficiency level, many individual hospitals need to focus on improving their technical and scale efficiencies; and that performance measurement models should be developed that include more qualitative data.
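
The abstract does not give the exact DEA formulation used, but the input-oriented, constant-returns-to-scale envelopment model that underlies such studies can be sketched as a small linear program. The hospital figures below are hypothetical, and scipy's linprog is used purely for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    """Input-oriented CCR efficiency of unit k.

    X: inputs, shape (n_units, n_inputs); Y: outputs, shape (n_units, n_outputs).
    Solves: min theta  s.t.  sum_j lam_j x_j <= theta x_k,  sum_j lam_j y_j >= y_k,  lam >= 0.
    """
    n, m = X.shape
    _, s = Y.shape
    # Decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.r_[1.0, np.zeros(n)]
    # Input constraints: sum_j lambda_j * x_ij - theta * x_ik <= 0
    A_in = np.c_[-X[k].reshape(m, 1), X.T]
    b_in = np.zeros(m)
    # Output constraints: -sum_j lambda_j * y_rj <= -y_rk
    A_out = np.c_[np.zeros((s, 1)), -Y.T]
    b_out = -Y[k]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.x[0]

# Hypothetical hospital data: inputs = [staff, non-pay costs], output = [discharges]
X = np.array([[120, 4.1], [95, 3.2], [150, 5.0], [80, 2.9]], dtype=float)
Y = np.array([[9000], [8200], [10100], [6100]], dtype=float)
print([round(ccr_efficiency(X, Y, k), 3) for k in range(len(X))])
```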

Relevance: 30.00%

Publisher:

Abstract:

Performance evaluation in conventional data envelopment analysis (DEA) requires crisp numerical values. However, the observed values of the input and output data in real-world problems are often imprecise or vague. These imprecise and vague data can be represented by linguistic terms characterised by fuzzy numbers in DEA to reflect the decision-makers' intuition and subjective judgements. This paper extends the conventional DEA models to a fuzzy framework by proposing a new fuzzy additive DEA model for evaluating the efficiency of a set of decision-making units (DMUs) with fuzzy inputs and outputs. The contribution of this paper is threefold: (1) we consider ambiguous, uncertain and imprecise input and output data in DEA, (2) we propose a new fuzzy additive DEA model derived from the α-level approach, and (3) we demonstrate the practical aspects of our model with two numerical examples and show its comparability with five different fuzzy DEA methods in the literature. Copyright © 2011 Inderscience Enterprises Ltd.
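
A minimal sketch of the α-level step is given below: each triangular fuzzy input or output is reduced to an interval at a chosen α, and the additive DEA model would then be solved at the interval endpoints (that optimisation step is omitted here). The fuzzy number used as an example is hypothetical.

```python
def alpha_cut_triangular(a, b, c, alpha):
    """Interval [lower, upper] of a triangular fuzzy number (a, b, c) at level alpha.

    a <= b <= c are the left endpoint, peak, and right endpoint of the membership
    function; alpha lies in [0, 1].
    """
    lower = a + alpha * (b - a)
    upper = c - alpha * (c - b)
    return lower, upper

# Hypothetical fuzzy input, e.g. labour hours described as "about 120" ~ (100, 120, 150):
for alpha in (0.0, 0.5, 1.0):
    print(alpha, alpha_cut_triangular(100, 120, 150, alpha))
```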

Relevance: 30.00%

Publisher:

Abstract:

The increasing intensity of global competition has led organizations to utilize various types of performance measurement tools for improving the quality of their products and services. Data envelopment analysis (DEA) is a methodology for evaluating and measuring the relative efficiencies of a set of decision making units (DMUs) that use multiple inputs to produce multiple outputs. In conventional DEA with input and/or output ratios, all the data are assumed to take the form of crisp numbers. However, the observed values of data in real-world problems are sometimes expressed as interval ratios. In this paper, we propose two new models: general and multiplicative non-parametric ratio models for DEA problems with interval data. The contributions of this paper are fourfold: (1) we consider input and output data expressed as interval ratios in DEA; (2) we address the gap in the DEA literature for problems not suitable or difficult to model with crisp values; (3) we propose two new DEA models for evaluating the relative efficiencies of DMUs with interval ratios; and (4) we present a case study involving 20 banks with three interval ratios to demonstrate the applicability and efficacy of the proposed models where the traditional indicators are mostly financial ratios. © 2011 Elsevier Inc.

Relevance: 30.00%

Publisher:

Abstract:

Guest editorial

Ali Emrouznejad is a Senior Lecturer at the Aston Business School in Birmingham, UK. His areas of research interest include performance measurement and management, efficiency and productivity analysis, as well as data mining. He has published widely in various international journals. He is an Associate Editor of the IMA Journal of Management Mathematics and Guest Editor of several special issues of journals including Journal of Operational Research Society, Annals of Operations Research, Journal of Medical Systems, and International Journal of Energy Management Sector. He is on the editorial board of several international journals and co-founder of Performance Improvement Management Software.

William Ho is a Senior Lecturer at the Aston University Business School. Before joining Aston in 2005, he worked as a Research Associate in the Department of Industrial and Systems Engineering at the Hong Kong Polytechnic University. His research interests include supply chain management, production and operations management, and operations research. He has published extensively in various international journals such as Computers & Operations Research, Engineering Applications of Artificial Intelligence, European Journal of Operational Research, Expert Systems with Applications, International Journal of Production Economics, International Journal of Production Research, and Supply Chain Management: An International Journal. His first authored book was published in 2006. He is an Editorial Board member of the International Journal of Advanced Manufacturing Technology and an Associate Editor of the OR Insight Journal. Currently, he is a Scholar of the Advanced Institute of Management Research.

Uses of frontier efficiency methodologies and multi-criteria decision making for performance measurement in the energy sector

This special issue focuses on holistic, applied research on performance measurement in energy sector management and on publishing relevant applied research that bridges the gap between industry and academia. After a rigorous refereeing process, seven papers were included in this special issue. The volume opens with five data envelopment analysis (DEA)-based papers. Wu et al. apply the DEA-based Malmquist index to evaluate the changes in relative efficiency and the total factor productivity of coal-fired electricity generation of 30 Chinese administrative regions from 1999 to 2007. Factors considered in the model include fuel consumption, labor, capital, sulphur dioxide emissions, and electricity generated. The authors reveal that the eastern provinces were relatively and technically more efficient, whereas the western provinces had the highest growth rate in the period studied. Ioannis E. Tsolas applies the DEA approach to assess the performance of Greek fossil fuel-fired power stations, taking undesirable outputs such as carbon dioxide and sulphur dioxide emissions into consideration. In addition, the bootstrapping approach is deployed to address the uncertainty surrounding DEA point estimates and to provide bias-corrected estimations and confidence intervals for the point estimates. The author reveals from the sample that the non-lignite-fired stations are on average more efficient than the lignite-fired stations. Maethee Mekaroonreung and Andrew L. Johnson compare the relative performance of three DEA-based measures, which estimate production frontiers and evaluate the relative efficiency of 113 US petroleum refineries while considering undesirable outputs. Three inputs (capital, energy consumption, and crude oil consumption), two desirable outputs (gasoline and distillate generation), and an undesirable output (toxic release) are considered in the DEA models. The authors discover that refineries in the Rocky Mountain region performed the best, and about 60 percent of oil refineries in the sample could improve their efficiencies further. H. Omrani, A. Azadeh, S. F. Ghaderi, and S. Abdollahzadeh present an integrated approach, combining DEA, corrected ordinary least squares (COLS), and principal component analysis (PCA) methods, to calculate the relative efficiency scores of 26 Iranian electricity distribution units from 2003 to 2006. Specifically, both DEA and COLS are used to check three internal consistency conditions, whereas PCA is used to verify and validate the final ranking results of either DEA (consistency) or DEA-COLS (non-consistency). Three inputs (network length, transformer capacity, and number of employees) and two outputs (number of customers and total electricity sales) are considered in the model. Virendra Ajodhia applies three DEA-based models to evaluate the relative performance of 20 electricity distribution firms from the UK and the Netherlands. The first model is a traditional DEA model for analyzing cost-only efficiency. The second model includes (inverse) quality by modelling total customer minutes lost as an input. The third model is based on the idea of using total social costs, including the firm's private costs and the interruption costs incurred by consumers, as an input. Both energy delivered and number of consumers are treated as outputs in the models. After the five DEA papers, Stelios Grafakos, Alexandros Flamos, Vlasis Oikonomou, and D. Zevgolis present a multiple criteria analysis weighting approach to evaluate energy and climate policy. The proposed approach is akin to the analytic hierarchy process, which consists of pairwise comparisons, consistency verification, and criteria prioritization. In the approach, stakeholders and experts in the energy policy field are incorporated in the evaluation process and provided with an interactive means of expressing verbal, numerical, and visual representations of their preferences. A total of 14 evaluation criteria were considered and classified into four objectives: climate change mitigation, energy effectiveness, socioeconomic, and competitiveness and technology. Finally, Borge Hess applies the stochastic frontier analysis approach to analyze the impact of various business strategies, including acquisition, holding structures, and joint ventures, on a firm's efficiency within a sample of 47 natural gas transmission pipelines in the USA from 1996 to 2005. The author finds that there were no significant changes in a firm's efficiency following an acquisition, and only weak evidence for efficiency improvements caused by the new shareholder. The author also discovers that parent companies appear not to influence a subsidiary's efficiency positively. In addition, the analysis shows a negative impact of a joint venture on the technical efficiency of the pipeline company. To conclude, we are grateful to all the authors for their contributions, and to all the reviewers for their constructive comments, which made this special issue possible. We hope that this issue will contribute significantly to performance improvement in the energy sector.

Relevance: 30.00%

Publisher:

Abstract:

PURPOSE. The purpose of this study was to evaluate the potential of the portable Grand Seiko FR-5000 autorefractor to allow objective, continuous, open-field measurement of accommodation and pupil size for the investigation of the visual response to real-world environments and changes in the optical components of the eye. METHODS. The FR-5000 projects a pair of infrared horizontal and vertical lines on either side of fixation, analyzing the separation of the bars in the reflected image. The measurement bars were turned on permanently and the video output of the FR-5000 fed into a PC for real-time analysis. The calibration between infrared bar separation and refractive error was assessed over a range of 10.0 D with a model eye. Tolerance to longitudinal instrument head shift was investigated over a ±15 mm range, and tolerance to eye alignment away from the visual axis over eccentricities of up to 25.0°. The minimum pupil size for measurement was determined with a model eye. RESULTS. The separation of the measurement bars changed linearly with refractive error (r = 0.99), allowing continuous online analysis of the refractive state at 60 Hz temporal resolution and approximately 0.01 D system resolution with pupils >2 mm. The pupil edge could be analyzed on the diagonal axes at the same rate with a system resolution of approximately 0.05 mm. Measurements of accommodation and pupil size were affected by eccentricity of viewing and instrument focusing inaccuracies. CONCLUSIONS. The small size of the instrument, together with its resolution, temporal properties, and ability to measure through a 2 mm pupil, makes it useful for the measurement of dynamic accommodation and pupil responses in confined environments, although good eye alignment is important. Copyright © 2006 American Academy of Optometry.
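
The calibration described above amounts to fitting a linear relation between measured bar separation and induced refractive error and then inverting it during measurement. The sketch below shows that step with hypothetical calibration data, since the instrument's actual pixel values are not given here.

```python
import numpy as np

# Hypothetical calibration data: model-eye refractive error (D) vs. measured
# separation of the infrared measurement bars (pixels).
refractive_error_d = np.array([-5.0, -2.5, 0.0, 2.5, 5.0])
bar_separation_px = np.array([142.0, 121.0, 100.0, 79.5, 58.0])

# Linear calibration (the study reports r = 0.99): error = slope * separation + offset
slope, offset = np.polyfit(bar_separation_px, refractive_error_d, 1)

def refraction_from_separation(sep_px):
    """Convert a measured bar separation into a refractive-error estimate (D)."""
    return slope * sep_px + offset

print(round(refraction_from_separation(110.0), 2))
```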

Relevance: 30.00%

Publisher:

Abstract:

Natural language understanding aims to specify a computational model that maps sentences to their semantic meaning representations. In this paper, we propose a novel framework to train statistical models without using expensive fully annotated data. In particular, the input of our framework is a set of sentences labeled with abstract semantic annotations. These annotations encode the underlying embedded semantic structural relations without explicit word/semantic tag alignment. The proposed framework can automatically induce derivation rules that map sentences to their semantic meaning representations. The learning framework is applied to two statistical models, conditional random fields (CRFs) and hidden Markov support vector machines (HM-SVMs). Our experimental results on the DARPA communicator data show that both CRFs and HM-SVMs outperform the baseline approach, the previously proposed hidden vector state (HVS) model, which is also trained on abstract semantic annotations. In addition, the proposed framework shows superior performance to two other baseline approaches, a hybrid framework combining HVS and HM-SVMs and discriminative training of HVS, with relative error reduction rates of about 25% and 15% being achieved in F-measure.
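
The reported gains are relative error reductions in F-measure, i.e., reductions in (1 − F) expressed relative to the baseline's error. A small worked sketch with hypothetical F-measures:

```python
def relative_error_reduction(f_baseline, f_new):
    """Relative reduction in (1 - F) achieved by the new system over a baseline."""
    return ((1 - f_baseline) - (1 - f_new)) / (1 - f_baseline)

# Hypothetical scores: a baseline at F = 0.80 improved to F = 0.85 corresponds to
# a 25% relative error reduction, comparable in size to the larger gain quoted above.
print(relative_error_reduction(0.80, 0.85))  # 0.25
```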

Relevance: 30.00%

Publisher:

Abstract:

To reveal the moisture migration mechanism of unsaturated red clays, which are sensitive to water content change and widely distributed in South China, and thus rationally use them as a filling material for highway embankments, a method to measure the water content of red clay cylinders using X-ray computed tomography (CT) was proposed and verified. Studies of moisture migration in the red clays under rainfall and groundwater level conditions were then performed at different degrees of compaction. The results show that the relationship between dry density, water content, and CT value determined from X-ray CT tests can be used to nondestructively measure the water content of red clay cylinders at different migration times, which avoids the error introduced by sample-to-sample variation. Rainfall, groundwater level, and degree of compaction are factors that can significantly affect the moisture migration distance and migration rate. Techniques such as lowering the groundwater table and increasing the degree of compaction of the red clays can be used to prevent or delay moisture migration in highway embankments filled with red clays.
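
The nondestructive measurement hinges on a calibration linking CT value to dry density and water content. The abstract does not state the functional form, so the sketch below assumes a simple linear relation fitted by least squares and inverted for water content at a known dry density; the calibration data are hypothetical.

```python
import numpy as np

# Hypothetical calibration specimens: dry density (g/cm^3), water content (%), CT value
dry_density = np.array([1.50, 1.50, 1.60, 1.60, 1.70, 1.70])
water_content = np.array([15.0, 25.0, 15.0, 25.0, 15.0, 25.0])
ct_value = np.array([780.0, 845.0, 850.0, 915.0, 920.0, 985.0])

# Assumed linear model: CT = b0 + b1 * dry_density + b2 * water_content
A = np.c_[np.ones_like(ct_value), dry_density, water_content]
b0, b1, b2 = np.linalg.lstsq(A, ct_value, rcond=None)[0]

def water_content_from_ct(ct, rho_d):
    """Invert the fitted relation for water content at a known dry density."""
    return (ct - b0 - b1 * rho_d) / b2

print(round(water_content_from_ct(900.0, 1.62), 1))  # estimated water content (%)
```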

Relevance: 30.00%

Publisher:

Abstract:

We present and analyze three different online algorithms for learning in discrete hidden Markov models (HMMs) and compare their performance with the Baldi–Chauvin algorithm. Using the Kullback–Leibler divergence as a measure of the generalization error, we draw learning curves in simplified situations and compare the results. The performance of one of the presented algorithms for learning drifting concepts is analyzed and compared with that of the Baldi–Chauvin algorithm in the same situations. A brief discussion of learning and symmetry breaking based on our results is also presented. © 2006 American Institute of Physics.
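
The generalization-error measure referred to above, the Kullback–Leibler divergence between a true and a learned discrete distribution, can be computed in a few lines; the two distributions below are hypothetical stand-ins for an HMM's true and estimated emission probabilities.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """D_KL(p || q) for discrete distributions p (true) and q (model)."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Hypothetical true vs. learned emission distribution for one HMM state:
p_true = [0.6, 0.3, 0.1]
q_model = [0.5, 0.35, 0.15]
print(kl_divergence(p_true, q_model))
```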

Relevance: 30.00%

Publisher:

Abstract:

This paper reviews the state of the art in measuring, modeling, and managing clogging in subsurface-flow treatment wetlands. Methods for measuring in situ hydraulic conductivity in treatment wetlands are now available, which provide valuable insight into assessing and evaluating the extent of clogging. These results, paired with information from more traditional approaches (e.g., tracer testing and composition of the clog matter), are being incorporated into the latest treatment wetland models. Recent finite element analysis models can now simulate clogging development in subsurface-flow treatment wetlands with reasonable accuracy. Various management strategies have been developed to extend the life of clogged treatment wetlands, including gravel excavation and/or washing, chemical treatment, and application of earthworms. These strategies are compared and the available cost information is reported. © 2012 Elsevier Ltd.

Relevance: 30.00%

Publisher:

Abstract:

An effective mathematical method is developed for obtaining new knowledge about the structure of complex objects with required properties. The method comprehensively takes into account information on the properties and relations of the primary objects composing the complex objects. It is based on measuring distances between groups of predicates together with an interpretation of those groups. An optimal measure for these distances, giving maximal discernibility between different groups of predicates, is constructed. The method is tested on the problem of obtaining a new compound with electro-optical properties.