40 results for I32 - Measurement and Analysis of Poverty
in Aston University Research Archive
Abstract:
Productivity at the macro level is a complex concept but also arguably the most appropriate measure of economic welfare. There is currently limited research on the various approaches that can be used to measure it, and especially on the relative accuracy of those approaches. This thesis has two main objectives: first, to detail some of the most common productivity measurement approaches and assess their accuracy under a number of conditions; second, to present an up-to-date application of productivity measurement and provide guidance on selecting between sometimes conflicting productivity estimates. With regard to the first objective, the thesis discusses the issues specific to macro-level productivity measurement and the strengths and weaknesses of the three main types of approach available, namely index-number approaches (represented by Growth Accounting), non-parametric distance functions (DEA-based Malmquist indices) and parametric production functions (COLS- and SFA-based Malmquist indices). The accuracy of these approaches is assessed through simulation analysis, which yielded several notable findings: deterministic approaches are quite accurate even when the data are moderately noisy; no approach remains accurate when noise is more extensive; functional-form misspecification has a severe negative effect on the accuracy of the parametric approaches; and increased volatility in inputs and prices from one period to the next adversely affects all of the approaches examined. The application was based on the EU KLEMS (2008) dataset and revealed that the different approaches do in fact produce different productivity change estimates, at least for some of the countries assessed. To assist researchers in selecting between conflicting estimates, a new three-step selection framework is proposed, based on the findings of the simulation analyses and on established diagnostics/indicators. An application of this framework, also based on the EU KLEMS dataset, is provided.
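As a concrete illustration of the index-number family mentioned above, the sketch below computes Solow-residual TFP growth by growth accounting under a Cobb-Douglas assumption; the input series, the capital share of 0.35 and the function name are invented for the example and are not taken from the thesis.

```python
import numpy as np

# Growth-accounting decomposition under Cobb-Douglas with capital share alpha:
#   dln(TFP) = dln(Y) - alpha * dln(K) - (1 - alpha) * dln(L)
def tfp_growth(y, k, l, alpha=0.35):
    """Solow-residual TFP growth from output, capital and labour series."""
    dy = np.diff(np.log(y))   # output growth
    dk = np.diff(np.log(k))   # capital input growth
    dl = np.diff(np.log(l))   # labour input growth
    return dy - alpha * dk - (1 - alpha) * dl

# Hypothetical two-period example:
y = np.array([100.0, 103.0])
k = np.array([200.0, 204.0])
l = np.array([50.0, 50.5])
print(tfp_growth(y, k, l))  # residual (TFP) growth between the two periods
```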
Abstract:
Congenital nystagmus (CN) is an ocular-motor disorder characterised by involuntary, conjugate ocular oscillations that can arise in the first months of life. The pathogenesis of congenital nystagmus is still under investigation. In general, CN patients show a considerable decrease in visual acuity: image fixation on the retina is disturbed by the continuous, mainly horizontal, oscillations of the nystagmus. However, image stabilisation is still achieved during the short periods in which eye velocity slows down while the target image is placed on the fovea (called foveation intervals). To quantify the extent of nystagmus, eye movement recordings are routinely employed, allowing physicians to extract and analyse the main features of the nystagmus, such as shape, amplitude and frequency. Using eye movement recordings, it is also possible to compute estimated visual acuity predictors: analytical functions which estimate expected visual acuity from signal features such as foveation time and foveation position variability. Use of these functions adds information to typical visual acuity measurements (e.g. the Landolt C test) and could support therapy planning or monitoring. This study focuses on the robust detection of CN patients' foveations; specifically, it proposes a method to recognise the exact signal tracts in which a subject foveates, and it also analyses foveation sequences. About 50 eye-movement recordings, either infrared-oculographic or electro-oculographic, from different CN subjects were acquired. Results suggest that an exponential interpolation for the slow phases of the nystagmus could improve the computation of foveation time and reduce the influence of braking saccades and data noise. Moreover, a concise description of foveation sequence variability can be achieved using non-fitting splines. © 2009 Springer Berlin Heidelberg.
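A minimal sketch of one common operational definition of a foveation window (eye velocity below a threshold while gaze is near the target) is given below; the thresholds and the function name are hypothetical, and the study's own method, based on exponential interpolation of the slow phases, is more refined than this.

```python
import numpy as np

def foveation_intervals(t, eye_pos, target_pos, v_max=4.0, p_max=0.5):
    """Return (start, end) index pairs where eye velocity stays below v_max
    (deg/s) and gaze is within p_max (deg) of the target; thresholds are
    illustrative, not taken from the study."""
    vel = np.gradient(eye_pos, t)  # eye velocity in deg/s
    ok = (np.abs(vel) < v_max) & (np.abs(eye_pos - target_pos) < p_max)
    # collapse the boolean mask into contiguous intervals
    edges = np.diff(ok.astype(int))
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    if ok[0]:
        starts = np.r_[0, starts]
    if ok[-1]:
        ends = np.r_[ends, ok.size]
    return list(zip(starts, ends))
```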
Abstract:
Since the original Data Envelopment Analysis (DEA) study by Charnes et al. [Measuring the efficiency of decision-making units. European Journal of Operational Research 1978;2(6):429–44], there has been rapid and continuous growth in the field. As a result, a considerable amount of published research has appeared, with a significant portion focused on DEA applications of efficiency and productivity in both public and private sector activities. While several bibliographic collections have been reported, a comprehensive listing and analysis of DEA research covering its first 30 years of history is not available. This paper thus presents an extensive, if not complete, listing of DEA research covering theoretical developments as well as "real-world" applications from inception to the year 2007. A listing of the most utilized/relevant journals, a keyword analysis, and selected statistics are presented.
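For readers unfamiliar with the model behind this literature, the following is a minimal sketch of the input-oriented CCR envelopment programme from the Charnes et al. paper cited above, solved as a linear programme with SciPy; the toy data and the function name are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o.
    X: (m inputs x n DMUs), Y: (s outputs x n DMUs).
    Solves: min theta s.t. sum(lam*x) <= theta*x_o, sum(lam*y) >= y_o."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]              # decision vars: [theta, lam_1..lam_n]
    A_in = np.c_[-X[:, [o]], X]              # input constraints
    A_out = np.c_[np.zeros((s, 1)), -Y]      # output constraints (flipped to <=)
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun                           # efficiency score in (0, 1]

# toy data: 2 inputs, 1 output, 4 DMUs
X = np.array([[2.0, 4.0, 3.0, 5.0],
              [3.0, 1.0, 2.0, 4.0]])
Y = np.array([[1.0, 1.0, 1.0, 1.0]])
print([round(ccr_efficiency(X, Y, j), 3) for j in range(4)])
```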
Abstract:
The key to the correct application of ANOVA is careful experimental design and matching the correct analysis to that design. The following points should therefore be considered before designing any experiment:
1. In a single-factor design, ensure that the factor is identified as a 'fixed' or 'random effect' factor.
2. In more complex designs with more than one factor, there may be a mixture of fixed and random effect factors present, so ensure that each factor is clearly identified.
3. Where replicates can be grouped or blocked, the advantages of a randomised blocks design should be considered. There should be evidence, however, that blocking can sufficiently reduce the error variation to counter the loss of degrees of freedom (DF) compared with a fully randomised design.
4. Where different treatments are applied sequentially to a patient, the advantages of a three-way design in which the different orders of the treatments are included as an 'effect' should be considered.
5. Combining different factors to make a more efficient experiment and to measure possible factor interactions should always be considered (a sketch of such a factorial analysis follows this list).
6. The effect of 'internal replication' should be taken into account in a factorial design when deciding the number of replicates to be used. Where possible, each error term of the ANOVA should have at least 15 DF.
7. Consider carefully whether a particular factorial design should be treated as a split-plot or a repeated-measures design. If such a design is appropriate, consider how to continue the analysis, bearing in mind the problem of using post hoc tests in this situation.
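Illustrating point 5, here is a minimal sketch of a two-factor fixed-effects factorial ANOVA; the data are simulated and the factor names hypothetical. Note that with a 2x3 design and four replicates per cell the error term has 18 DF, satisfying the guideline in point 6.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical 2x3 factorial with 4 replicates per cell; both factors
# are treated as fixed effects (point 1 above).
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "drug": np.repeat(["A", "B"], 12),
    "dose": np.tile(np.repeat(["low", "mid", "high"], 4), 2),
})
df["response"] = rng.normal(10, 1, len(df))

model = ols("response ~ C(drug) * C(dose)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects + interaction; 18 error DF
```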
Abstract:
This dissertation studies the process of operations systems design within the context of the manufacturing organization. Taking as its starting point the DRAMA (Design Routine for Adopting Modular Assembly) model developed by a team from the IDOM Research Unit at Aston University, the research employed empirically based fieldwork and a survey to investigate the process of production systems design and implementation within four UK manufacturing industries: electronics assembly, electrical engineering, mechanical engineering and carpet manufacturing. The intention was to validate the basic DRAMA model as a framework for research enquiry within the electronics industry, where the initial IDOM work was conducted, and then to test its generic applicability, further developing the model where appropriate, within the other industries selected. The thesis reviews production systems design theory and practice before presenting thirteen industrial case studies of production systems design from the four industry sectors. The results and analysis of the postal survey into production systems design are then presented. The strategic decisions of manufacturing and their relationship to production systems design, and the detailed process of production systems design and operation, are then discussed. These analyses are used to develop the generic model of production systems design entitled DRAMA II (Decision Rules for Analysing Manufacturing Activities). The model has three main constituent parts: the basic DRAMA model, the extended DRAMA II model showing the imperatives and relationships within the design process, and a benchmark generic approach for the design and analysis of each component in the design process. DRAMA II is primarily intended for use by researchers as an analytical framework of enquiry, but is also seen as having application for manufacturing practitioners.
Abstract:
The work reported in this thesis is concerned with improving and expanding the assistance given to the designer by the computer in the design of cold formed sections. The main contributions have been in four areas, which have consequently led to a fifth: the development of a methodology to optimise designs. This methodology can be considered an 'Expert Design System' for cold formed sections. A different method of determining the section properties of profiles was introduced, using the properties of line and circular elements, and graphics were introduced to show the outline of the profile on screen. The analysis of beam loading was expanded to conditions in which the number of supports, point loads and uniformly distributed loads can be specified by the designer; the profile can then be checked for suitability for the specified type of loading. Artificial Intelligence concepts were introduced to give the designer decision support from the computer, in combination with the computer aided design facilities; the more complex decision support was provided through production rules, and all of the support was based on the British Standards. A method was also introduced by which the appropriate use of stiffeners can be determined and the stiffeners then designed. Finally, a methodology was developed by which the designer is given assistance from the computer without being constrained by it: the computer advises on possible ways of improving the design, but the designer may reject that advice and analyse the profile accordingly. This enables optimisation to be achieved by the designer, who designs a variety of profiles for a particular loading and determines which is best suited.
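The line-element idealisation mentioned above can be sketched as follows: a minimal illustration of computing area, centroid height and second moment for a thin-walled profile built from straight line elements of uniform thickness (circular elements are omitted), with the dimensions and function name invented for the example.

```python
import numpy as np

def line_section_properties(segments, t):
    """Approximate area, centroid height and second moment (about the
    horizontal centroidal axis) of a thin-walled profile idealised as
    straight line elements of uniform thickness t.
    segments: list of ((x1, y1), (x2, y2)) element endpoints."""
    A, Sy, parts = 0.0, 0.0, []
    for (x1, y1), (x2, y2) in segments:
        L = np.hypot(x2 - x1, y2 - y1)
        a = t * L                          # element area
        yc = 0.5 * (y1 + y2)               # element centroid height
        i_own = a * (y2 - y1) ** 2 / 12.0  # line element's own second moment
        parts.append((a, yc, i_own))
        A += a
        Sy += a * yc
    ybar = Sy / A                          # section centroid height
    Ixx = sum(i + a * (yc - ybar) ** 2 for a, yc, i in parts)  # parallel axis
    return A, ybar, Ixx

# hypothetical channel-like profile (dimensions in mm, t = 2 mm)
segs = [((0, 0), (50, 0)), ((0, 0), (0, 100)), ((0, 100), (50, 100))]
print(line_section_properties(segs, t=2.0))
```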
Abstract:
The potential for nonlinear optical processes in nematic-liquid-crystal cells is considerable, owing to the large phase changes that result from reorientation of the nematic-liquid-crystal director. Here the combination of diffraction and self-diffraction effects is studied simultaneously by the use of a pair of focused laser beams which are coincident on a homeotropically aligned liquid-crystal cell. The result is a complicated diffraction pattern in the far field. This is analysed in terms of the continuum theory for liquid crystals, using a one-elastic-constant approximation to solve for the reorientation profile. Very good agreement between theory and experiment is obtained. An interesting transient grating, which exists because of the viscosity of the liquid-crystal material, is observed in theory and practice for large cell-tilt angles.
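As a rough numerical illustration of how such far-field patterns arise, the sketch below passes a Gaussian beam through a Gaussian self-induced phase profile and takes its Fourier transform (the Fraunhofer approximation); the paper itself solves the reorientation profile from continuum theory, and all parameters here are invented.

```python
import numpy as np

# Crude stand-in for the reorientation-induced phase grating: a Gaussian
# beam acquires a Gaussian nonlinear phase profile, and the far field is
# its Fourier transform. Parameters are hypothetical.
N, width = 4096, 2e-3                        # samples, spatial window (m)
x = np.linspace(-width / 2, width / 2, N)
w = 50e-6                                    # beam waist (m)
phi_max = 6 * np.pi                          # peak nonlinear phase shift
field = np.exp(-(x / w) ** 2) * np.exp(1j * phi_max * np.exp(-2 * (x / w) ** 2))
far = np.fft.fftshift(np.fft.fft(field))     # far-field amplitude
intensity = np.abs(far) ** 2                 # multi-lobe diffraction pattern
# rule of thumb: number of far-field rings is roughly phi_max / (2*pi)
print(f"expected rings ~ {phi_max / (2 * np.pi):.0f}")
```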
Abstract:
This thesis was concerned with investigating methods of improving the potential of the IOP pulse as a clinically useful measure. There were three principal sections to the work.
1. Optimisation of measurement and analysis of the IOP pulse. A literature review, covering the years 1960-2002 and other relevant scientific publications, provided a knowledge base on the IOP pulse. Initial studies investigated suitable instrumentation and measurement techniques. Fourier transformation was identified as a promising method of analysing the IOP pulse, and this technique was developed.
2. Investigation of ocular and systemic variables that affect IOP pulse measurements. In order to recognise clinically important changes in IOP pulse measurement, studies were performed to identify influencing factors. Fourier analysis was tested against traditional parameters to assess its ability to detect differences in the IOP pulse. In addition, it had been speculated that the waveform components of the IOP pulse contain vascular characteristics analogous to those found in arterial pulse waves; validation studies to test this hypothesis were attempted.
3. The nature of the intraocular pressure pulse in health and disease and its relation to systemic cardiovascular variables. Fourier analysis and traditional parameters were applied to IOP pulse measurements taken on diseased and healthy eyes. Only the derived parameter, pulsatile ocular blood flow (POBF), detected differences in the diseased groups. The use of an ocular pressure-volume relationship may have reduced the variance of the POBF measure in comparison with measurement of the pulse's amplitude or Fourier components.
Finally, the importance of the driving force of pulsatile blood flow, the arterial pressure pulse, is highlighted. A method of combining measurements of pulsatile blood flow and pulsatile blood pressure to create a measure of ocular vascular impedance is described, along with its advantages for future studies.
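A minimal sketch of the kind of Fourier decomposition described in section 1 above follows: it extracts the amplitudes of the first few cardiac harmonics from a sampled pulse signal. The synthetic IOP trace, sampling rate and function name are invented for illustration.

```python
import numpy as np

def pulse_harmonics(signal, fs, heart_rate_hz, n_harmonics=3):
    """Amplitudes of the first few harmonics of a pulsatile signal.
    fs: sampling rate (Hz); heart_rate_hz: fundamental (cardiac) frequency."""
    n = len(signal)
    spec = np.fft.rfft(signal - np.mean(signal)) / n * 2  # one-sided amplitudes
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    amps = []
    for k in range(1, n_harmonics + 1):
        idx = np.argmin(np.abs(freqs - k * heart_rate_hz))  # nearest bin
        amps.append(np.abs(spec[idx]))
    return amps

# synthetic IOP pulse: baseline 15 mmHg, ~1 mmHg oscillation at 1.2 Hz, noise
fs = 200
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
iop = 15 + 1.0 * np.sin(2 * np.pi * 1.2 * t) + 0.05 * rng.standard_normal(t.size)
print(pulse_harmonics(iop, fs, heart_rate_hz=1.2))
```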
Abstract:
This research sets out to compare the values in British and German political discourse, especially the discourse of social policy, and to analyse their relationship to political culture through an analysis of the values of health care reform. The work proceeds from the hypothesis that the known differences in political culture between the two countries will be reflected in the values of political discourse, and takes a comparison of two major recent legislative debates on health care reform as a case study. The starting point, in the first chapter, is a brief comparative survey of the post-war political cultures of the two countries, including a short account of the historical background to their development and an overview of explanatory theoretical models. From this are developed the contrasts in values expected under the hypothesis. The second chapter explains the basis for selecting the corpus texts and the contextual information which needs to be recorded to make a comparative analysis, including the context and content of the reform proposals which comprise the case study, and examines the contextual factors which may need to be taken into account in the analysis. The third and fourth chapters explain the analytical method, which is centred on the use of definition-based taxonomies of value items and value appeal methods to identify, on a sentence-by-sentence basis, the value items in the corpus texts and the methods used to appeal to them: the third chapter covers the classification and analysis of values, the fourth the classification and analysis of value appeal methods. The fifth chapter presents and explains the results of the analysis, and the sixth summarises the conclusions and makes suggestions for further research.
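A toy sketch of sentence-by-sentence value tagging follows; the thesis uses definition-based taxonomies rather than simple keyword lists, and the lexicon entries here are purely illustrative.

```python
import re

# Keyword-based stand-in for the sentence-by-sentence value tagging
# described above; lexicon entries are invented for illustration.
VALUE_LEXICON = {
    "solidarity": {"solidarity", "together", "community"},
    "efficiency": {"efficiency", "cost", "savings"},
    "fairness": {"fair", "fairness", "equitable", "justice"},
}

def tag_values(text):
    """Split text into sentences and list the value items each one evokes."""
    results = []
    for sentence in re.split(r"(?<=[.!?])\s+", text.strip()):
        words = set(re.findall(r"[a-z]+", sentence.lower()))
        hits = [v for v, kws in VALUE_LEXICON.items() if words & kws]
        results.append((sentence, hits))
    return results

for s, vals in tag_values("Reform must be fair to all. It will deliver real savings."):
    print(vals, "<-", s)
```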
Abstract:
Coke oven liquor is a toxic wastewater produced in large quantities by the iron and steel and coking industries, and gives rise to major effluent treatment problems in those industries. Conscious of the potentially serious environmental impact of the discharge of such wastes, pollution control agencies in many countries have imposed progressively more stringent quality requirements on the discharge of the treated waste. The most common means of treating the waste is the activated sludge process, although problems in achieving consistently satisfactory treatment by this process have been experienced in the past. The need to improve the quality of the discharged effluent prompted earlier attempts by Tomlins to model the process using adenosine triphosphate (ATP) as a measure of biomass, but these were unsuccessful. This thesis describes work carried out to determine the significance of ATP in the activated sludge treatment of the waste. The use of ATP measurements in wastewater treatment was reviewed. Investigations were conducted into the ATP behaviour of the batch activated sludge treatment of two major components of the waste, phenol and thiocyanate, and of the continuous activated sludge treatment of the liquor itself, using laboratory scale apparatus. On the basis of these results, equations were formulated to describe the significance of ATP as a measure of activity and biomass in the treatment system. These were used as the basis for proposals to use ATP as a control parameter in the activated sludge treatment of coke oven liquor, and of wastewaters in general, with relevance both to the treatment of the waste in the reactor and to the settlement of the sludge produced in the secondary settlement stage of the process.
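As a hedged illustration of the kind of relationship the thesis formalises, the sketch below runs a Monod-type batch simulation of phenol degradation in which ATP is assumed proportional to viable biomass; the kinetic parameters are invented and the equations are not those derived in the thesis.

```python
# Illustrative Monod-type batch kinetics for phenol degradation, with ATP
# taken as proportional to viable biomass. All parameters are invented.
mu_max, Ks, Y_xs, k_atp = 0.25, 30.0, 0.6, 2.0  # 1/h, mg/L, -, ug ATP/mg biomass
S, X, dt = 200.0, 50.0, 0.01                    # mg/L phenol, mg/L biomass, h

for step in range(int(24 / dt)):                # 24 h simulation, Euler steps
    mu = mu_max * S / (Ks + S)                  # Monod specific growth rate
    dX = mu * X * dt                            # biomass growth this step
    S = max(S - dX / Y_xs, 0.0)                 # substrate consumed via yield
    X += dX

print(f"after 24 h: phenol {S:.1f} mg/L, biomass {X:.1f} mg/L, "
      f"ATP ~ {k_atp * X:.0f} ug/L")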
Abstract:
Purpose – Research on the relationship between customer satisfaction and customer loyalty has advanced to a stage that requires a more thorough examination of moderator variables. Limited research shows how moderators influence the relationship between customer satisfaction and customer loyalty in a service context; this article aims to present empirical evidence of the conditions in which the satisfaction-loyalty relationship becomes stronger or weaker.
Design/methodology/approach – Using a sample of more than 700 customers of DIY retailers and multi-group structural equation modelling, the authors examine moderating effects of several firm-related variables, variables that result from firm/employee-customer interactions and individual-level variables (i.e. loyalty cards, critical incidents, customer age, gender, income, expertise).
Findings – The empirical results suggest that not all of the moderators considered influence the satisfaction-loyalty link. Specifically, critical incidents and income are important moderators of the relationship between customer satisfaction and customer loyalty.
Practical implications – Several of the moderator variables considered in this study are manageable variables.
Originality/value – This study should prove valuable to academic researchers as well as service and retailing managers. It systematically analyses the moderating effect of firm-related and individual-level variables on the relationship between customer satisfaction and loyalty. It shows the differential effect of different types of moderator variables on the satisfaction-loyalty link.
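As a simplified stand-in for the article's multi-group SEM, the sketch below tests moderation with an interaction term in an ordinary regression; the simulated data, effect sizes and the critical-incident dummy are illustrative only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Moderated regression: does a critical incident weaken the
# satisfaction -> loyalty link? Data are simulated for illustration.
rng = np.random.default_rng(7)
n = 700
sat = rng.normal(0, 1, n)
incident = rng.integers(0, 2, n)          # 1 = critical incident experienced
# satisfaction drives loyalty more weakly after a critical incident
loyalty = 0.6 * sat - 0.3 * incident * sat + rng.normal(0, 1, n)
df = pd.DataFrame({"sat": sat, "incident": incident, "loyalty": loyalty})

fit = smf.ols("loyalty ~ sat * incident", data=df).fit()
print(fit.summary().tables[1])  # the sat:incident term is the moderation test
```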