53 results for Satisfaction Measurement Methods
Abstract:
PURPOSE: To determine whether letter sequences and/or lens-presentation order should be randomized when measuring defocus curves, and to assess the most appropriate criterion for calculating the subjective amplitude of accommodation (AoA) from defocus curves. SETTING: Eye Clinic, School of Life & Health Sciences, Aston University, Birmingham, United Kingdom. METHODS: Defocus curves (from +3.00 diopters [D] to -3.00 D in 0.50 D steps) for 6 possible combinations of randomized or nonrandomized letter sequences and/or lens-presentation order were measured in a random order in 20 presbyopic subjects. Subjective AoA was calculated from the defocus curves by curve fitting using various published criteria, and each was correlated with subjective push-up AoA. Objective AoA was measured for comparison of blur tolerance and pupil size. RESULTS: Randomization of lens-presentation order and/or letter sequences, or lack thereof, did not affect the measured defocus curves (P>.05, analysis of variance). The range of defocus that maintains the highest achievable visual acuity (allowing for variability of repeated measurement) was better correlated with (r = 0.84), and agreed better with (±0.50 D), subjective push-up AoA than any other relative or absolute acuity criterion used in previous studies. CONCLUSIONS: Nonrandomized letters and lens presentation on their own did not affect subjective AoA measured by defocus curves, although their combination should be avoided. Quantification of subjective AoA from defocus curves should be standardized to the range of defocus that maintains the best achievable visual acuity.
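The acuity-range criterion described in this abstract can be sketched in a few lines. Everything below is hypothetical: the defocus curve values and the 0.05 logMAR repeatability allowance are illustrative assumptions, not data or parameters from the study.

```python
import numpy as np

# Hypothetical defocus curve: lens power (D) vs. visual acuity (logMAR).
defocus = np.arange(3.0, -3.5, -0.5)          # +3.00 D to -3.00 D in 0.50 D steps
acuity = np.array([0.60, 0.45, 0.30, 0.18,    # logMAR; lower is better
                   0.10, 0.05, 0.02, 0.00,
                   0.02, 0.08, 0.20, 0.35, 0.55])

def subjective_aoa(defocus, acuity, repeatability=0.05):
    """Width of the defocus range that maintains the best achievable acuity,
    allowing `repeatability` logMAR for test-retest variability (assumed)."""
    best = acuity.min()
    within = defocus[acuity <= best + repeatability]
    return within.max() - within.min()

aoa = subjective_aoa(defocus, acuity)
```

With these illustrative values the criterion returns a 1.50 D range; the point is only that the criterion reduces to a threshold on the fitted curve, not a specific number.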
Abstract:
Changes in the radial growth rate (RGR mm/yr) through life were studied in thalli of the foliose lichen Parmelia conspersa by two methods: (1) a cross-sectional study (Study A) in which the RGR was measured in 60 thalli from 0.2 to 13 cm in diameter, and (2) by radial growth measurements over 4.5 years of fragments, consisting of a single major lobe, which were removed from large thalli and glued to pieces of slate (Study B). Both studies suggested there was a phase of increasing RGR in small thalli followed by a more constant phase, the latter beginning at approximately a thallus radius of 6-8 mm. However, in Study B significantly increased RGR was observed during the second 6-month growth period. This phase of growth was more likely to be due to an increase in lobe width than to an effect of climate. In addition, a lobe in a large thallus with both adjacent lobes removed significantly increased in width over 1 year compared with control lobes. These results suggest that (1) mean lobe width in a thallus may be determined by the intensity of marginal competition between adjacent lobes, and (2) changes in lobe width during the life of a lichen thallus may be a factor determining the establishment of the linear phase of growth in foliose lichens. © 1992.
Abstract:
Two energy grass species, switch grass, a North American tuft grass, and reed canary grass, a European native, are likely to be important sources of biomass in Western Europe for the production of biorenewable energy. Matching chemical composition to conversion efficiency is a primary goal for improvement programmes and for determining the quality of biomass feedstocks prior to use, and there is a need for methods which allow cost-effective characterisation of chemical composition at high rates of sample throughput. In this paper we demonstrate that nitrogen content and alkali index, parameters greatly influencing thermal conversion efficiency, can be accurately predicted in dried samples of these species grown under a range of agronomic conditions by partial least squares regression of Fourier transform infrared spectra (R2 values for plots of predicted vs. measured values of 0.938 and 0.937, respectively). We also discuss the prediction of carbon and ash content in these samples and the application of infrared-based predictive methods for the breeding improvement of energy grasses.
Abstract:
1. Pearson's correlation coefficient only tests whether the data fit a linear model. With large numbers of observations, quite small values of r become significant, and the X variable may account for only a minute proportion of the variance in Y. Hence, the value of r squared should always be calculated and included in a discussion of the significance of r. 2. The use of r assumes that a bivariate normal distribution is present, and this assumption should be examined prior to the study. If Pearson's r is not appropriate, then a non-parametric correlation coefficient such as Spearman's rs may be used. 3. A significant correlation should not be interpreted as indicating causation, especially in observational studies in which there is a high probability that the two variables are correlated because of their mutual correlations with other variables. 4. In studies of measurement error, there are problems in using r as a test of reliability, and the ‘intra-class correlation coefficient’ should be used as an alternative. A correlation test provides only limited information as to the relationship between two variables. Fitting a regression line to the data using the method known as ‘least squares’ provides much more information, and the methods of regression and their application in optometry will be discussed in the next article.
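Points 1 and 2 above can be demonstrated directly. The sketch below uses simulated data (sample size and effect size are arbitrary choices): with a large n, a weak correlation is highly "significant" even though r squared shows X explains only a few percent of the variance in Y, and Spearman's coefficient is computed as the non-parametric alternative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 5000                                    # large sample: small r becomes significant
x = rng.normal(size=n)
y = 0.2 * x + rng.normal(size=n)            # X explains only ~4% of the variance in Y

r, p = stats.pearsonr(x, y)                 # linear correlation and its p-value
r_squared = r ** 2                          # always report alongside r
rho, p_rho = stats.spearmanr(x, y)          # non-parametric alternative to Pearson's r
```

Here p is vanishingly small while r_squared stays below 0.1, which is exactly the trap the abstract warns about.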
Abstract:
Customer satisfaction and service quality are two important concepts in the marketing literature. However, there has been some confusion about the conceptualisation and measurement of these two concepts and the nature of the relationship between them. The primary objective of this research was to develop a more thorough understanding of these concepts, and a model that could help to explain the links between them and their relationships with post-purchase behaviour. A preliminary theoretical model was developed, based on an exhaustive review of the literature. Following exploratory research, the model was revised by incorporating "Perceived Value" and "Perceived Sacrifice" to help explain customer's post-purchase behaviour. A longitudinal survey was conducted in the context of the restaurant industry, and the data were analysed using structural equation modelling. The results provided evidence to support the main research hypotheses. However, the effect of "Normative Expectations" on "Encounter Quality" was insignificant, and "Perceived Value" had a direct effect on "Behavioural Intentions" despite expectations that such an effect would be mediated through "Customer Satisfaction". It was also found that "Normative Expectations" were relatively more stable than "Predictive Expectations". It is argued that the present research significantly contributes to the marketing literature, and in particular the role of perceived value in the formation of customers' post-purchase behaviour. Further research efforts in this area are warranted.
Abstract:
The state of the art in productivity measurement and analysis shows a gap between simple methods having little relevance in practice and sophisticated mathematical theory which is unwieldy for strategic and tactical planning purposes, particularly at company level. An extension is made in this thesis to the method of productivity measurement and analysis based on the concept of added value, appropriate to those companies in which the materials, bought-in parts and services change substantially and a number of plants and inter-related units are involved in providing components for final assembly. Reviews and comparisons of productivity measurement dealing with alternative indices and their problems have been made, and appropriate solutions put forward for productivity analysis in general and the added value method in particular. Based on this concept and method, three kinds of computerised models, two of them deterministic (called sensitivity analysis and deterministic appraisal) and the third stochastic (called risk simulation), have been developed to cope with the planning of productivity and productivity growth with reference to the changes in their component variables, ranging from a single value to a class interval of values of a productivity distribution. The models are designed to be flexible and can be adjusted according to the available computer capacity, expected accuracy and presentation of the output. The stochastic model is based on the assumption of statistical independence between individual variables and the existence of normality in their probability distributions. The component variables have been forecasted using polynomials of degree four. This model was tested by comparing its behaviour with that of a mathematical model using real historical data from British Leyland, and the results were satisfactory within acceptable levels of accuracy. Modifications to the model and its statistical treatment have been made as required.
The results of applying these measurements and planning models to the British motor vehicle manufacturing companies are presented and discussed.
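A minimal sketch of the kind of risk simulation the abstract describes, under its stated assumptions of statistically independent, normally distributed component variables. All figures, variable names and the added-value definition used here are hypothetical illustrations, not the thesis's actual model or British Leyland data.

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials = 10_000

# Hypothetical planning variables (£m), each independent and normal,
# mirroring the stochastic model's distributional assumptions.
sales = rng.normal(120.0, 8.0, n_trials)
bought_in = rng.normal(70.0, 6.0, n_trials)      # materials, parts and services
labour_costs = rng.normal(30.0, 2.0, n_trials)

added_value = sales - bought_in
productivity = added_value / labour_costs         # added value per £ of labour

# Summarise the simulated productivity distribution for planning purposes.
mean_p = productivity.mean()
p5, p95 = np.percentile(productivity, [5, 95])
```

The output is a distribution of productivity outcomes rather than a single point estimate, which is what lets the planner work with a class interval of values rather than one figure.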
Abstract:
This study is concerned with several proposals concerning multiprocessor systems and with the various possible methods of evaluating such proposals. After a discussion of the advantages and disadvantages of several performance evaluation tools, the author decides that simulation is the only tool powerful enough to develop a model which would be of practical use in the design, comparison and extension of systems. The main aims of the simulation package developed as part of this study are cost-effectiveness, ease of use and generality. The methodology on which the simulation package is based is described in detail. The fundamental principles are that model design should reflect actual systems design, that measuring procedures should be carried out alongside design, that models should be well documented and easily adaptable, and that models should be dynamic. The simulation package itself is modular, and in this way reflects current design trends. This approach also aids documentation and ensures that the model is easily adaptable. It contains a skeleton structure and a library of segments which can be added to, or directly swapped with, segments of the skeleton structure to form a model which fits a user's requirements. The study also contains the results of some experimental work carried out using the model, the first part of which tests the model's capabilities by simulating a large operating system, the ICL George 3 system; the second part deals with general questions and some of the many proposals concerning multiprocessor systems.
Abstract:
The research is concerned with the measurement of residents' evaluations of the environmental quality of residential areas. The research reflects the increased attention being given to residents' values in planning decisions affecting the residential environment. The work was undertaken in co-operation with a local authority which was in the process of revising its housing strategy, and in particular the priorities for improvement action. The study critically examines the existing evidence on environmental values and their relationship to the environment and points to a number of methodological and conceptual deficiencies. The research strategy developed on the basis of the research review was constrained by the need to keep any survey methods simple so that they could easily be repeated, when necessary, by the sponsoring authority. A basic perception model was assumed, and a social survey carried out to measure residents' responses to different environmental conditions. The data were assumed to have only ordinal properties, necessitating the extensive use of non-parametric statistics. Residents' expressions of satisfaction with the component elements of the environment (ranging from convenience to upkeep and privacy) were successfully related to 'objective' measures of the environment. However, the survey evidence did not justify the use of the 'objective' variables as environmental standards. A method of using the social survey data directly as an aid to decision-making is discussed. Alternative models of the derivation of overall satisfaction with the environment are tested, and the values implied by the additive model compared with residents' preferences as measured directly in the survey. Residents' overall satisfaction with the residential environment was most closely related to their satisfaction with the "Appearance" and the "Reputation" of their areas. By contrast, the most important directly measured preference was "Friendliness of area".
The differences point to the need to define concepts used in social research clearly in operational terms, and to take care in the use of values 'measured' by different methods.
Abstract:
The objective of this thesis is to investigate, through an empirical study, the different functions of the highways maintenance departments and to suggest methods by means of which road maintenance work could be carried out in a more efficient way by utilising its resources of men, material and plant to the utmost advantage. This is particularly important under the present circumstances of national financial difficulties which have resulted in continuous cuts in public expenditure. In order to achieve this objective, the researcher carried out a survey among several Highways Authorities by means of questionnaire and interview. The information so collected was analysed in order to understand the actual, practical situation within highways maintenance departments, highlight any existing problems, and try to answer the question of how they could become more efficient. According to the results obtained by the questionnaire and the interview, and the analysis of these results, the researcher concludes that it is the management system where least has been done, and where problems exist and are most complex. The management of highways maintenance departments argue that the reasons for their problems include both financial and organisational difficulties, apart from the political aspect and nature of the activities undertaken. The researcher believes that this ought to necessitate improving the management's analytical tools and techniques in order to achieve the most effective way of performing each activity. To this end the researcher recommends several related procedures to be adopted by the management of the highways maintenance departments. These recommendations, arising from the study, involve the technical, practical and human aspects. These are essential factors of which management should be aware - and certainly should not neglect - in order to achieve its objectives of improved productivity in the highways maintenance departments.
Abstract:
Purpose: Recent studies indicate that ocular and scleral rigidity is pertinent to our understanding of glaucoma, age-related macular degeneration and the development and pathogenesis of myopia. The principal method of measuring ocular rigidity is by extrapolation of data from corneal indentation tonometry (Ko) using Friedenwald's transformation algorithms. Using scleral indentation (Schiotz tonometry), we assess directly whether regional variations in resistance to indentation occur in vivo across the human anterior globe, with reference to the deflection of Schiotz scale readings. Methods: Data were collected from both eyes of 26 normal young adult subjects with a range of refractive error (mean spherical equivalent ± S.D. of -1.77 D ± 3.28 D, range -10.56 to +4.38 D). Schiotz tonometry (5.5 g & 7.5 g) was performed on the cornea and four scleral quadrants: supero-temporal (ST) and -nasal (SN), infero-temporal (IT) and -nasal (IN), approximately 8 mm posterior to the limbus. Results: Values of Ko (mm³)⁻¹ were consistent with those previously reported (mean 0.0101 ± 0.0082, range 0.0019–0.0304). Regarding the sclera, significant differences (p < 0.001) in indentation readings for both loads were found between means for the cornea and ST; ST and SN; ST and IT; and ST and IN. Mean (±S.D.) scale readings for 5.5 g were: cornea 5.93 ± 1.14, ST 8.05 ± 1.58, IT 7.03 ± 1.86, SN 6.25 ± 1.10, IN 6.02 ± 1.49; and for 7.5 g: cornea 9.26 ± 1.27, ST 11.56 ± 1.65, IT 10.31 ± 1.74, SN 9.91 ± 1.20, IN 9.50 ± 1.56. Conclusions: Significant regional variation was found in the resistance of the anterior sclera to indentation produced by the Schiotz tonometer.
Abstract:
An initial aim of this project was to evaluate the conventional techniques used in the analysis of newly prepared environmentally friendly water-borne automotive coatings and compare them with solvent-borne coatings having comparable formulations. The investigation was carried out on microtomed layers as well as on complete automotive multi-layer paint systems. Methods used included the very traditional methods of gloss and hardness and the commonly used photo-oxidation index (from FTIR spectral analysis). All methods enabled the durability to weathering of the automotive coatings to be initially investigated. However, a primary aim of this work was to develop methods for analysing the early stages of chemical and property changes in both the solvent-borne and water-borne coating systems that take place during outdoor natural weathering exposures and under accelerated artificial exposures. This was achieved by using dynamic mechanical analysis (DMA), in both tension mode on the microtomed films (at all depths of the coating systems, from the uppermost clear-coat right down to the electro-coat) and bending mode on the full (unmicrotomed) systems, as well as MALDI-ToF analysis of the movement of the stabilisers in the full systems. Changes in glass transition temperature and relative cross-link density were determined after weathering, and these were related to changes in the chemistries of the binder systems of the coatings after weathering. Concentration profiles of the UV stabilisers (UVA and HALS) in the coating systems were analysed as a consequence of migration, in separate microtomed layers of the paint samples (depth profiling) after weathering, and diffusion coefficients and solubility parameters were determined for the UV stabilisers in the coating systems.
The methods developed were used to determine the various physical and chemical changes that take place during weathering (photo-oxidation) of the different (water-borne and solvent-borne) systems. The solvent-borne formulations showed fewer changes after weathering (both natural and accelerated) than the corresponding water-borne formulations, due to the lower level of cross-links in the binders of the water-borne systems. The silver systems examined were more durable than the blue systems, due to the reflecting power of the aluminium and the lower temperature of the silver coatings.
Abstract:
The trend in modal extraction algorithms is to use all the available frequency response function data to obtain a global estimate of the natural frequencies, damping ratios and mode shapes. Improvements in transducer and signal processing technology allow the simultaneous measurement of many hundreds of channels of response data. The quantity of data available and the complexity of the extraction algorithms make considerable demands on the available computer power and require a powerful computer or dedicated workstation to perform satisfactorily. An alternative to waiting for faster sequential processors is to implement the algorithm in parallel, for example on a network of Transputers. Parallel architectures are a cost-effective means of increasing computational power, and a larger number of response channels would simply require more processors. This thesis considers how two typical modal extraction algorithms, the Rational Fraction Polynomial method and the Ibrahim Time Domain method, may be implemented on a network of Transputers. The Rational Fraction Polynomial method is a well-known and robust frequency-domain 'curve fitting' algorithm. The Ibrahim Time Domain method is an efficient algorithm that 'curve fits' in the time domain. This thesis reviews the algorithms, considers the problems involved in a parallel implementation, and shows how they were implemented on a real Transputer network.
Abstract:
A combination of experimental methods was applied at a clogged, horizontal subsurface flow (HSSF) municipal wastewater tertiary treatment wetland (TW) in the UK, to quantify the extent of surface and subsurface clogging which had resulted in undesirable surface flow. The three-dimensional hydraulic conductivity profile was determined using a purpose-made device which recreates the constant head permeameter test in situ. The hydrodynamic pathways were investigated by performing dye tracing tests with Rhodamine WT and a novel multi-channel, data-logging, flow-through fluorimeter which allows synchronous measurements to be taken from a matrix of sampling points. Hydraulic conductivity varied in all planes, with the lowest measurement of 0.1 m d⁻¹ corresponding to the surface layer at the inlet, and the maximum measurement of 1550 m d⁻¹ located at a 0.4 m depth at the outlet. According to dye tracing results, the region where the overland flow ceased received five times the average flow, which then vertically short-circuited below the rhizosphere. The tracer breakthrough curve obtained from the outlet showed that this preferential flow-path accounted for approximately 80% of the flow overall and arrived 8 h before a distinctly separate secondary flow-path. The overall volumetric efficiency of the clogged system was 71%, and the hydrology was simulated using a dual-path, dead-zone storage model. It is concluded that uneven inlet distribution, continuous surface loading and high rhizosphere resistance are responsible for the clog formation observed in this system. The average inlet hydraulic conductivity was 2 m d⁻¹, suggesting that current European design guidelines, which predict that the system will reach an equilibrium hydraulic conductivity of 86 m d⁻¹, do not adequately describe the hydrology of mature systems.
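Volumetric efficiency of the kind quoted in this abstract is commonly estimated as the ratio of the tracer's mean residence time (the first moment of the breakthrough curve) to the nominal retention time. A minimal sketch follows; the Gaussian pulse and the 42 h nominal retention time are assumptions for illustration, not the study's measured curve.

```python
import numpy as np

# Hypothetical tracer breakthrough curve at the wetland outlet:
# uniformly sampled time (h) and outlet dye concentration (arbitrary units).
t = np.linspace(0.0, 100.0, 201)
c = np.exp(-0.5 * ((t - 30.0) / 10.0) ** 2)   # illustrative Gaussian pulse

# Mean tracer residence time from the first moment of the curve
# (uniform time steps, so the spacing cancels in the ratio).
mean_residence = (t * c).sum() / c.sum()

nominal_retention = 42.0                       # bed volume / flow rate, assumed (h)
volumetric_efficiency = mean_residence / nominal_retention
```

With these illustrative numbers the efficiency comes out near 0.71; in the study the moment would be computed from the measured multi-channel fluorimeter record instead.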
Abstract:
Lipid peroxidation is recognized to be an important contributor to many chronic diseases, especially those of an inflammatory pathology. In addition to their value as markers of oxidative damage, lipid peroxidation products have also been shown to have a wide variety of biological and cell signalling effects. In view of this, accurate and sensitive methods for the measurement of lipid peroxidation products are essential. Although some assays have been described for many years, improvements in protocols are continually being reported and, with recent advances in instrumentation and technology, highly specialized and informative techniques are increasingly used. This article gives an overview of the most currently used methods and then addresses the recent advances in some specific approaches. The focus is on analysis of oxysterols, F2-isoprostanes and oxidized phospholipids by gas chromatography or liquid chromatography mass spectrometry techniques and immunoassays for the detection of 4-hydroxynonenal.
Abstract:
E-satisfaction as a construct has gained increasing importance in the marketing literature in recent times. The examination of consumer satisfaction in an online context follows the growing consensus that in Internet retailing, as in traditional retailing, consumer satisfaction is not only a critical performance outcome, but also a primary predictor of customer loyalty and thus, the Internet retailer's endurance and success. The current study replicates the initial examination of e-satisfaction within the U.S. by [Szymanski, David M., & Richard T. Hise (2000). E-satisfaction: An initial examination. Journal of Retailing, 76(3), 309–322] among a sample of online consumers drawn from Germany. The replication was extended to two contexts—consumer satisfaction with Internet retail shopping and consumer satisfaction with Internet financial services sites. The results yield rich insights into the validity of extending the measurement and predictors of e-satisfaction to a trans-national context.