38 results for In-cylinder Pressure Analysis
in Aston University Research Archive
Abstract:
For more than a century it has been known that the eye is not a perfect optical system, but rather a system that suffers from aberrations beyond conventional prescriptive descriptions of defocus and astigmatism. Whereas traditional refraction attempts to describe the error of the eye with only two parameters, namely sphere and cylinder, measurements of wavefront aberrations depict the optical error with many more parameters. What remains questionable is the impact these additional parameters have on visual function. Some authors have argued that higher-order aberrations have a considerable effect on visual function and in certain cases this effect is significant enough to induce amblyopia. This has been referred to as ‘higher-order aberration-associated amblyopia’. In such cases, correction of higher-order aberrations would not restore visual function. Others have reported that patients with binocular asymmetric aberrations display an associated unilateral decrease in visual acuity and, if the decline in acuity results from the aberrations alone, such subjects may have been erroneously diagnosed as amblyopes. In these cases, correction of higher-order aberrations would restore visual function. This refractive entity has been termed ‘aberropia’. In order to investigate these hypotheses, the distribution of higher-order aberrations in strabismic, anisometropic and idiopathic amblyopes, and in a group of visual normals, was analysed both before and after wavefront-guided laser refractive correction. 
The results show: (i) there is no significant asymmetry in higher-order aberrations between amblyopic and fixing eyes prior to laser refractive treatment; (ii) the mean magnitude of higher-order aberrations is similar within the amblyopic and visually normal populations; (iii) a significant improvement in visual acuity can be realised for adult amblyopic patients utilising wavefront-guided laser refractive surgery, and a modest increase in contrast sensitivity was observed for the amblyopic eye of anisometropes following treatment; (iv) an overall trend towards increased higher-order aberrations following wavefront-guided laser refractive treatment was observed for both visually normal and amblyopic eyes. In conclusion, while the data do not provide any direct evidence for the concepts of either ‘aberropia’ or ‘higher-order aberration-associated amblyopia’, it is clear that gains in visual acuity and contrast sensitivity may be realised following laser refractive treatment of the amblyopic adult eye. Possible mechanisms by which these gains are realised are discussed.
Abstract:
Are the perceptions of professional economists on transaction costs consistent with make-or-buy decisions made within firms? The answer may have important implications for transaction cost research. Data on firms' outsourcing during the new product development process are taken from a large-scale survey of UK, German and Irish manufacturing plants, and we test the consistency of these outsourcing decisions with the predictions derived from the transaction cost perceptions of a panel of economists. Little consistency is evident between actual outsourcing patterns and the predictions of the (Williamsonian) transaction cost model derived from the panel of economists. There is, however, evidence of a systematic pattern to the differences, suggesting that a competence or resource-based approach may be relevant to understanding firm outsourcing, and that firms are adopting a strategic approach to managing their external relationships. © Cambridge Political Economy Society 2005; all rights reserved.
Abstract:
In some applications of data envelopment analysis (DEA) there may be doubt as to whether all the DMUs form a single group with a common efficiency distribution. The Mann-Whitney rank statistic has been used to evaluate whether two groups of DMUs come from a common efficiency distribution under the assumption that they share a common frontier, and to test whether the two groups have a common frontier. These procedures have subsequently been extended using the Kruskal-Wallis rank statistic to consider more than two groups. This technical note identifies problems with the second of these applications of both the Mann-Whitney and Kruskal-Wallis rank statistics. It also considers possible alternative methods of testing whether groups have a common frontier, and the difficulties of disaggregating managerial and programmatic efficiency within a non-parametric framework. © 2007 Springer Science+Business Media, LLC.
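The group comparison described above can be sketched with standard rank tests. This is a minimal illustration only: the efficiency scores below are synthetic values invented for the example, not data from the study.

```python
from scipy.stats import mannwhitneyu, kruskal

# Hypothetical DEA efficiency scores for three groups of DMUs
# (synthetic values for illustration only).
group_a = [0.92, 0.85, 1.00, 0.78, 0.88]
group_b = [0.70, 0.81, 0.66, 0.75, 0.90]
group_c = [0.95, 0.60, 0.72, 0.83, 0.79]

# Two-group comparison: do the efficiency scores of groups A and B
# appear to come from a common distribution?
u_stat, p_two = mannwhitneyu(group_a, group_b, alternative="two-sided")

# Extension to more than two groups via the Kruskal-Wallis test.
h_stat, p_multi = kruskal(group_a, group_b, group_c)

print(f"Mann-Whitney U={u_stat:.1f}, p={p_two:.3f}")
print(f"Kruskal-Wallis H={h_stat:.2f}, p={p_multi:.3f}")
```

Note that this sketch shows only the mechanical application of the rank statistics; the note's point is that applying them to test for a common frontier (rather than a common distribution given a frontier) is problematic.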
Abstract:
Data envelopment analysis defines the relative efficiency of a decision making unit (DMU) as the ratio of the sum of its weighted outputs to the sum of its weighted inputs allowing the DMUs to freely allocate weights to their inputs/outputs. However, this measure may not reflect a DMU's true efficiency as some inputs/outputs may not contribute reasonably to the efficiency measure. Traditionally, to overcome this problem weights restrictions have been imposed. This paper offers a new approach to this problem where DMUs operate a constant returns to scale technology in a single input multi-output context. The approach is based on introducing unobserved DMUs, created by adjusting the output levels of certain observed relatively efficient DMUs, reflecting a combination of technical information of feasible production levels and the DM's value judgments. Its main advantage is that the information conveyed by the DM is local, with reference to a specific observed DMU. The approach is illustrated on a real life application. © 2003 Elsevier B.V. All rights reserved.
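The weighted-ratio efficiency measure described above (the CCR multiplier model) can be written as a small linear program once the weighted input of the evaluated DMU is normalised to one. The sketch below uses SciPy's LP solver on synthetic single-input, two-output data; all values, and the helper name `ccr_efficiency`, are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: 4 DMUs with a single input and two outputs
# (synthetic values for illustration only).
x = np.array([2.0, 3.0, 4.0, 5.0])    # input, one value per DMU
Y = np.array([[3.0, 2.0],             # outputs, one row per DMU
              [5.0, 4.0],
              [6.0, 3.0],
              [8.0, 6.0]])

def ccr_efficiency(j0):
    """CCR multiplier model for DMU j0: maximise its weighted output
    subject to its weighted input equalling 1 and no DMU's weighted
    output exceeding its weighted input.  Variables: [u1, u2, v]."""
    n, m = Y.shape
    c = np.r_[-Y[j0], 0.0]            # linprog minimises, so use -(u . y_j0)
    A_ub = np.c_[Y, -x]               # u . y_j - v x_j <= 0 for every DMU j
    b_ub = np.zeros(n)
    A_eq = [[0.0] * m + [x[j0]]]      # v x_j0 = 1 (normalisation)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (m + 1))
    return -res.fun                   # efficiency score of DMU j0

scores = [ccr_efficiency(j) for j in range(len(x))]
```

With these data the second DMU dominates on both output-to-input ratios and scores 1.0, while the others score below 1; the paper's unobserved-DMU approach would then adjust the outputs of such efficient units to encode the decision maker's local value judgments.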
Abstract:
In some contexts data envelopment analysis (DEA) gives poor discrimination on the performance of units. While this may reflect genuine uniformity of performance between units, it may also reflect lack of sufficient observations or other factors limiting discrimination on performance between units. In this paper, we present an overview of the main approaches that can be used to improve the discrimination of DEA. This includes simple methods such as the aggregation of inputs or outputs, the use of longitudinal data, more advanced methods such as the use of weight restrictions, production trade-offs and unobserved units, and a relatively new method based on the use of selective proportionality between the inputs and outputs. © 2007 Springer Science+Business Media, LLC.
Abstract:
It is still debatable whether scientific diversity is a virtue or a disadvantage for the development of a discipline. Nonetheless, diversity among scientists with respect to their journal quality perceptions plays an important role in hiring and promotion decisions. In this article we examine the degree of diversity within economics based on the journal quality perceptions of 2,103 AEA economists worldwide. Specifically, we empirically test for factors that might explain differences in an economist's journal quality perceptions. These factors include an economist's geographic origin, school of thought, journal affiliation, field of specialization and research orientation. Indeed, we find that a significant degree of diversity in journal quality perceptions exists among economists belonging to different subgroups. These results might explain the frequent debates in tenure and promotion committees where journal standings are used for the evaluation of a researcher's output.
Abstract:
This chapter demonstrates diversity in the activity of authorship and the corresponding diversity of forensic authorship analysis questions and techniques. Authorship is discussed in terms of Love’s (2002) multifunctional description of precursory, executive, declarative and revisionary authorship activities and the implications of this distinction for forensic problem solving. Four different authorship questions are considered. These are ‘How was the text produced?’, ‘How many people wrote the text?’, ‘What kind of person wrote the text?’ and ‘What is the relationship of a queried text with comparison texts?’ Different approaches to forensic authorship analysis are discussed in terms of their appropriateness to answering different authorship questions. The conclusion drawn is that no one technique will ever be appropriate to all problems.
Abstract:
Urine proteomics is emerging as a powerful tool for biomarker discovery. The purpose of this study is the development of a well-characterized "real life" sample that can be used as reference standard in urine clinical proteomics studies.
Abstract:
This paper introduces a new mathematical method for improving the discrimination power of data envelopment analysis and for completely ranking the efficient decision-making units (DMUs). Fuzzy concepts are utilised. For this purpose, all DMUs are first evaluated with the CCR model. Thereafter, the resulting weights for each output are treated as fuzzy sets and then converted to fuzzy numbers. The introduced model is a multi-objective linear model whose endpoints are the highest and lowest of the weighted values. An added advantage of the model is its ability to handle the infeasibility situations sometimes faced by previously introduced models.
Abstract:
This thesis demonstrates that the use of finite elements need not be confined to space alone, but that they may also be used in the time domain. It is shown that finite element methods may be used successfully to obtain the response of systems to applied forces, including, for example, the accelerations in a tall structure subjected to an earthquake shock. It is further demonstrated that at least one of these methods may be considered to be a practical alternative to more usual methods of solution. A detailed investigation of the accuracy and stability of finite element solutions is included, and methods of application to both single- and multi-degree-of-freedom systems are described. Solutions using two different temporal finite elements are compared with those obtained by conventional methods, and a comparison of computation times for the different methods is given. The application of finite element methods to distributed systems is described, using both separate discretizations in space and time, and a combined space-time discretization. The inclusion of both viscous and hysteretic damping is shown to add little to the difficulty of the solution. Temporal finite elements are also seen to be of considerable interest when applied to non-linear systems, both when the system parameters are time-dependent and also when they are functions of displacement. Solutions are given for many different examples, and the computer programs used for the finite element methods are included in an Appendix.
Abstract:
The aim of this project was to develop the education work of an environmental pressure group. The research devised and implemented a project to produce multi-media teaching packs on the urban environment. Whilst this involved understanding environmental education, it was necessary to research beyond this to include the various structural and dynamic constraints on change in the field. This presented a number of methodological difficulties, from the resolution of which a model of the research process involved in this project has been developed. It is argued that research oriented towards practical change requires the insights of an experienced practitioner to be combined with the rigours of controlled systematic enquiry. Together these function as a model-building process encompassing intuition, induction and deduction. Model testing is carried out through repeated intervention in the field; thus an interplay between researcher and client ensues such that the project develops in a mutually acceptable direction. In practice, this development will be both unpredictable and erratic. Although the conclusions reached here are based on a single case study, they address general methodological issues likely to be encountered in different field settings concerned with different practical problems.
Abstract:
The main advantage of Data Envelopment Analysis (DEA) is that it does not require any a priori weights for inputs and outputs, allowing each DMU to evaluate its efficiency with the input and output weights that are most favourable to it. It can be argued, however, that if DMUs are experiencing similar circumstances, then the pricing of inputs and outputs should apply uniformly across all DMUs. That is, the use of different weights for different DMUs means their efficiencies cannot be compared or ranked on the same basis. This is a significant drawback of DEA; the literature offers many remedies, including the use of a common set of weights (CSW). Moreover, conventional DEA methods require accurate measurement of both inputs and outputs, yet crisp input and output data may not be available in real-world applications. This paper develops a new model for the calculation of a CSW in fuzzy environments using fuzzy DEA. Further, a numerical example is used to show the validity and efficacy of the proposed model and to compare its results with previous models available in the literature.
Abstract:
The diagnosis of ocular disease is increasingly important in optometric practice, and there is a need for cost-effective point-of-care assays to assist in it. Although tears are a potentially valuable source of diagnostic information, difficulties associated with sample collection and limited sample size, together with sample storage and transport, have proved major limitations. Progressive developments in electronics and fibre optics, together with innovation in sensing technology, mean that the construction of inexpensive point-of-care fibre optic sensing devices is now possible. Tear electrolytes are an obvious family of target analytes, not least to complement the availability of devices that make the routine measurement of tear osmolarity possible in the clinic. In this paper we describe the design, fabrication and calibration of a fibre-optic based electrolyte sensor for the quantification of potassium in tears, using the ex vivo contact lens as the sample source. The technology is generic, and the same principles can be used in the development of calcium and magnesium sensors. An important objective of this sensor technology development is to provide information at the point of routine optometric examination, which would provide supportive evidence of tear abnormality.
Abstract:
4-Hydroxy-2-nonenal (HNE) is one of the most studied products of phospholipid peroxidation, owing to its reactivity and cytotoxicity. It can be formed by several radical-dependent oxidative routes involving the formation of hydroperoxides, alkoxyl radicals, epoxides, and fatty acyl cross-linking reactions. Cleavage of the oxidized fatty acyl chain results in formation of HNE from the methyl end, and 9-oxo-nonanoic acid from the carboxylate or esterified end of the chain, although many other products are also possible. HNE can be metabolized in tissues by a variety of pathways, leading to detoxification and excretion. HNE adducts to proteins have been detected in inflammatory situations such as atherosclerotic lesions using polyclonal and monoclonal antibodies, which have also been applied in ELISAs and western blotting. However, in order to identify the proteins modified and the exact sites and nature of the modifications, mass spectrometry approaches are required. Combinations of enrichment strategies with targeted mass spectrometry routines such as neutral loss scanning are now facilitating detection of HNE-modified proteins in complex biological samples. This is important for characterizing the interactions of HNE with redox-sensitive cell signalling proteins and understanding how it may modulate their activities either physiologically or in disease. © 2013 The Author.