18 results for Fundamental techniques of localization
in Aston University Research Archive
Abstract:
We investigate experimentally the fundamental characteristics of space-charge waves excited in a photorefractive crystal of Bi12SiO20. Features such as their transient rise and decay as well as their steady-state frequency response are investigated. Based on this, we find the dependence of the space-charge waves' quality factor on spatial frequency and electric-field biasing. The experimental findings are compared with the linear space-charge wave theory developed previously by Sturman et al. [J. Opt. Soc. Am. B 10, 1919 (1993)].
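For orientation, a commonly quoted textbook-style form of the linear space-charge-wave results is sketched below. This is stated as an assumption for the reader, not quoted from the paper; the exact expressions should be checked against Sturman's theory. Here K is the grating wave number, l_0 = μτE_0 the carrier drift length set by the bias field E_0, and τ_M the Maxwell relaxation time.

```latex
% Hedged sketch of textbook-style SCW relations (assumed, not quoted
% from the paper): weakly damped space-charge waves require K l_0 >> 1.
\[
  \omega_K \;\approx\; \frac{1}{K\, l_0\, \tau_M},
  \qquad
  Q \;\approx\; K\, l_0
  \qquad (K l_0 \gg 1,\ \text{diffusion neglected}),
\]
% so Q grows with both the spatial frequency K and, through
% l_0 = \mu\tau E_0, with the strength of the electric-field biasing.
```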
Abstract:
This work attempts to create a systemic design framework for man-machine interfaces which is self-consistent, compatible with other concepts, and applicable to real situations. This is tackled by examining the current architecture of computer applications packages. The treatment in the main is philosophical and theoretical, and analyses the origins, assumptions and current practice of the design of applications packages. It proposes that the present form of packages is fundamentally contradictory to the notion of packaging itself, because as an indivisible, ready-to-implement solution, current package architecture displays the following major disadvantages. First, it creates problems as a result of user-package interactions, in which the designer tries to mould all potential individual users, no matter how diverse they are, into one model. This is worsened by the scant provision, if any, of important properties such as flexibility, independence and impartiality. Second, it displays a rigid structure that reduces the variety and/or multi-use of the component parts of such a package. Third, it dictates specific hardware and software configurations, which probably reduces the number of degrees of freedom of its user. Fourth, it increases the dependence of its user upon its supplier through inadequate documentation and understanding of the package. Fifth, it tends to cause a degeneration of the design expertise of data processing practitioners. In view of this understanding, an alternative methodological design framework is proposed which is consistent both with the systems approach and with the role of a package in its likely context. The proposition is based upon an extension of the identified concept of the hierarchy of holons, which facilitates the examination of the complex relationships of a package with its two principal environments: first, the user's characteristics and decision-making practice and procedures, implying an examination of the user's M.I.S. network; second, the software environment and its influence upon a package regarding support, control and operation of the package. The framework is built gradually as the discussion advances around the central theme of a compatible M.I.S., software and model design. This leads to the formation of an alternative package architecture based upon the design of a number of independent, self-contained small parts. These are believed to constitute the nucleus around which not only packages can be designed more effectively, but which is also applicable to the design of many man-machine systems.
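The proposed architecture of independent, self-contained parts can be pictured with a minimal sketch. This is purely illustrative and not taken from the thesis; all names (Part, compose, the validate/report components) are hypothetical.

```python
# Hedged illustration (not from the thesis): small, self-contained parts
# that a user assembles into their own "package", instead of accepting
# one indivisible, monolithic solution.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Part:
    """A self-contained component with a single, well-defined responsibility."""
    name: str
    run: Callable[[Dict], Dict]   # transforms a shared context dictionary

def compose(parts: List[Part]) -> Callable[[Dict], Dict]:
    """Chain independent parts into a user-specific 'package'."""
    def pipeline(context: Dict) -> Dict:
        for part in parts:
            context = part.run(context)
        return context
    return pipeline

# A user assembles only the parts that fit their own M.I.S. practice:
validate = Part("validate", lambda ctx: {**ctx, "valid": bool(ctx.get("data"))})
report   = Part("report",   lambda ctx: {**ctx, "report": f"{len(ctx.get('data', []))} records"})

custom_package = compose([validate, report])
print(custom_package({"data": [1, 2, 3]}))
```

Each part is usable on its own or in other combinations, which is the flexibility and multi-use the monolithic architecture is argued to lack.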
Abstract:
Enantioselective catalysis is an increasingly important method of providing enantiomeric compounds for the pharmaceutical and agrochemical industries. To date, heterogeneous catalysts have failed to match the industrial impact achieved by homogeneous systems. One successful approach to the creation of heterogeneous enantioselective catalysts has involved the modification of conventional metal particle catalysts by the adsorption of chiral molecules. This article examines the contribution of effects such as chiral recognition and amplification to these types of system and how insight provided by surface science model studies may be exploited in the design of more effective catalysts.
Abstract:
This article considers the role of accounting in organisational decision making. It challenges the rational nature of decisions made in organisations through the use of accounting models and the problems of predicting the future through the use of such models. The use of accounting in this manner is evaluated from an epochal postmodern stance. Issues raised by chaos theory and the uncertainty principle are used to demonstrate problems with the predictive ability of accounting models. The authors argue that any consideration of the predictive value of accounting needs to change to incorporate a recognition of the turbulent external environment if it is to be of use for organisational decision making. Thus it is argued that the role of accounting as a mechanism for knowledge creation regarding the future is fundamentally flawed. We take this as a starting point to argue for the real purpose of the predictive techniques of accounting, using their ritualistic role in the context of myth creation to argue for the cultural benefits of the use of such flawed techniques.
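The chaos-theory objection can be made concrete with a standard illustration (not from the article itself): in even a simple nonlinear system, a minute error in the initial state destroys any long-horizon forecast.

```python
# Standard chaos-theory demonstration (illustrative, not from the article):
# the logistic map shows how tiny errors in initial conditions destroy
# long-horizon prediction -- the same limit the authors raise for
# forecast-oriented accounting models.

def logistic(x: float, r: float = 4.0) -> float:
    return r * x * (1.0 - x)

x, y = 0.500000, 0.500001   # two forecasts differing by one part in a million
for step in range(1, 31):
    x, y = logistic(x), logistic(y)
    if step % 10 == 0:
        print(f"step {step:2d}: |difference| = {abs(x - y):.6f}")
# After ~30 iterations the two trajectories are uncorrelated, so the
# model's long-range "prediction" carries no information about the future.
```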
Abstract:
Measurements were carried out to determine local coefficients of heat transfer in short lengths of horizontal pipe, and in the region of a discontinuity in pipe diameter. Laminar, transitional and turbulent flow regimes were investigated, and mixtures of propylene glycol and water were used in the experiments to give a range of viscous fluids. Theoretical and empirical analyses were implemented to find how the fundamental mechanism of forced convection was modified by the secondary effects of free convection, temperature-dependent viscosity, and viscous dissipation. From experiments with the short tube it was possible to determine simple empirical relationships describing the axial distribution of the local Nusselt number and its dependence on the Reynolds and Prandtl numbers. Small corrections were made to account for the secondary effects mentioned above. Two different entrance configurations were investigated to demonstrate how conditions upstream could influence the heat transfer coefficients measured downstream. In experiments with a sudden contraction in pipe diameter, the distribution of the local Nusselt number depended on the Prandtl number of the fluid in a complicated way. Graphical data are presented describing this dependence for a range of fluids, indicating how the local Nusselt number varied with the diameter ratio. Ratios up to 3.34:1 were considered. With a sudden divergence in pipe diameter, it was possible to derive the axial distribution of the local Nusselt number for a range of Reynolds and Prandtl numbers in a similar way to the convergence experiments. Difficulty was encountered in explaining some of the measurements obtained at low Reynolds numbers, and flow visualization techniques were used to determine the complex flow patterns which could lead to the anomalous results mentioned. Tests were carried out with divergences up to 1:3.34 to find the way in which the local Nusselt number varied with the diameter ratio, and a few experiments were carried out with very large ratios up to 14.4. A limited amount of theoretical analysis of the 'divergence' system was carried out to substantiate certain explanations of the heat transfer mechanisms postulated.
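As an example of the family of correlation involved (the classic Sieder-Tate laminar-flow form, not the thesis's own result), a mean Nusselt number can be computed from the Reynolds and Prandtl numbers and the tube length, with a viscosity-ratio correction for temperature-dependent viscosity:

```python
# Hedged example (classic Sieder-Tate correlation, not the thesis's own):
# mean Nu for laminar tube flow at constant wall temperature, driven by
# Re, Pr, the geometry ratio D/L, and a bulk/wall viscosity correction.

def sieder_tate_mean_nu(re: float, pr: float, d: float, length: float,
                        mu_bulk: float, mu_wall: float) -> float:
    """Mean Nusselt number, Nu = 1.86 (Re*Pr*D/L)^(1/3) (mu_b/mu_w)^0.14."""
    gz = re * pr * d / length                 # Graetz-type group
    return 1.86 * gz ** (1 / 3) * (mu_bulk / mu_wall) ** 0.14

# Illustrative values only (e.g. a propylene glycol/water mixture):
nu = sieder_tate_mean_nu(re=800, pr=50, d=0.025, length=0.5,
                         mu_bulk=0.012, mu_wall=0.006)
print(f"Nu = {nu:.1f}")
```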
Abstract:
This thesis presents details of both theoretical and experimental aspects of UV-written fibre gratings. The main body of the thesis deals with the design, fabrication and testing of telecommunication optical fibre grating devices, but an accurate theoretical analysis of intra-core fibre gratings is also presented. For more than a decade, fibre gratings have been used extensively in the telecommunication field (as filters, dispersion compensators, and add/drop multiplexers, for instance). Gratings for telecommunication should conform to very high fabrication standards, as the presence of any imperfection raises the noise level in the transmission system, compromising its ability to transmit an intelligible sequence of bits to the receiver. Strong side-lobe suppression and a high, sharp reflection profile are therefore necessary characteristics. A fundamental part of the theoretical and experimental work reported in this thesis concerns apodisation. The physical principle of apodisation is introduced, and a number of apodisation techniques, experimental results and numerical optimisations of the shading functions and all the practical parameters involved in the fabrication are detailed. The measurement of chromatic dispersion in fibres and FBGs is detailed and an estimate of its accuracy is given. An overview of the possible methods that can be implemented for the fabrication of tunable fibre gratings is given before detailing a new dispersion compensator device based on the action of a distributed strain on a linearly chirped FBG. It is shown that tuning of the second- and third-order dispersion of the grating can be obtained by the use of a specially designed multipoint bending rig. Experiments on the recompression of optical pulses travelling long distances are detailed for 10 Gb/s and 40 Gb/s. The characterisation of a new kind of double-section LPG fabricated on a metal-clad coated fibre is reported. The fabrication of the device is made easier by writing the grating directly through the metal coating. This device may be used to overcome the recoating problems associated with standard LPGs written in step-index fibre. It can also be used as a sensor for simultaneous measurement of temperature and the refractive index of the surrounding medium.
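As a minimal sketch of the principle (a generic Gaussian shading function, not one of the optimised profiles developed in the thesis), apodisation tapers the index-modulation envelope along the grating so that the side lobes of the reflection spectrum are suppressed:

```python
# Hedged sketch (generic textbook apodisation, not the thesis's optimised
# shading functions): a Gaussian envelope for the UV-induced index
# modulation along the grating length.

import numpy as np

def gaussian_apodisation(n_points: int = 1000, fwhm_fraction: float = 0.5) -> np.ndarray:
    """Envelope of the index modulation vs. normalised grating position."""
    z = np.linspace(-0.5, 0.5, n_points)                  # -L/2 .. +L/2
    sigma = fwhm_fraction / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return np.exp(-z**2 / (2.0 * sigma**2))

envelope = gaussian_apodisation()
print(f"peak = {envelope.max():.3f}, edge = {envelope[0]:.3e}")
```

The smoother the envelope goes to zero at the grating ends, the weaker the Fabry-Perot-like interference that produces the side lobes.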
Abstract:
An introduction to the theory and practice of optometry in one succinct volume. From the fundamental science of vision to clinical techniques and the management of common ocular conditions, this book encompasses the essence of contemporary optometric practice. Now in full colour and featuring over 400 new illustrations, this popular text, which will appeal to both students and practitioners wishing to keep up to date, has been revised significantly. The new edition incorporates recent advances in technology and a complete overview of clinical procedures to improve and update everyday patient care. Contributions from well-known international experts deliver a broad perspective and understanding of current optometric practice. A useful aid for students and the newly qualified practitioner, it also provides a rapid reference guide for the more experienced clinician.
Abstract:
This thesis describes work carried out to improve the fundamental modelling of liquid flows on distillation trays. A mathematical model is presented based on the principles of computational fluid dynamics. It models the liquid flow in the horizontal directions, allowing for the effects of the vapour through the use of an increased liquid turbulence, modelled by an eddy viscosity, and a resistance to liquid flow caused by the vapour being accelerated horizontally by the liquid. The resultant equations are similar to the Navier-Stokes equations with the addition of a resistance term. A mass-transfer model is used to calculate liquid concentration profiles and tray efficiencies. A heat and mass transfer analogy is used to compare theoretical concentration profiles to experimental water-cooling data obtained from a 2.44 metre diameter air-water distillation simulation rig. The ratios of air to water flow rates are varied in order to simulate three pressures: vacuum, atmospheric pressure and moderate pressure. For simulated atmospheric and moderate-pressure distillation, the fluid mechanical model consistently over-predicts tray efficiencies, with errors of between +1.7% and +11.3%. This compares to -1.8% to -10.9% for the stagnant regions model (Porter et al. 1972) and +12.8% to +34.7% for the plug flow plus back-mixing model (Gerster et al. 1958). The model fails to predict the flow patterns and tray efficiencies for vacuum simulation owing to the change in the mechanism of liquid transport, from a continuous liquid layer to a spray, as the liquid flow rate is reduced. This spray is not taken into account in the development of the fluid mechanical model. A sensitivity analysis has shown that the fluid mechanical model is relatively insensitive to the prediction of the average height of clear liquid, and that a reduction in the resistance term results in a slight loss of tray efficiency; these effects are not great. The model is quite sensitive to the prediction of the eddy viscosity term: variations can produce up to a 15% decrease in tray efficiency. The fluid mechanical model has been incorporated into a column model so that statistical optimisation techniques can be employed to fit a theoretical column concentration profile to experimental data. Through the use of this work, mass-transfer data can be obtained.
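The form of the model equations described above can be sketched schematically. This is inferred from the abstract's description, not quoted from the thesis: a horizontal momentum balance of Navier-Stokes type with an eddy viscosity ν_e representing vapour-induced turbulence and a linear resistance term.

```latex
% Schematic form only (inferred from the abstract, not quoted):
% horizontal liquid momentum with eddy viscosity \nu_e and a
% vapour-induced resistance coefficient R.
\[
  \frac{\partial u_i}{\partial t} + u_j \frac{\partial u_i}{\partial x_j}
  \;=\; -\frac{1}{\rho}\,\frac{\partial p}{\partial x_i}
  \;+\; \nu_e \nabla^{2} u_i \;-\; R\, u_i ,
  \qquad i, j \in \{x, y\}.
\]
```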
Abstract:
The thesis presents an experimentally validated modelling study of the flow of combustion air in an industrial radiant tube burner (RTB). The RTB is typically used in industrial heat treating furnaces. The work was initiated by the need for improvements in burner lifetime and performance, which are related to the fluid mechanics of the combusting flow, and a fundamental understanding of this is therefore necessary. To achieve this, a detailed three-dimensional Computational Fluid Dynamics (CFD) model has been used, validated with experimental air flow, temperature and flue gas measurements. Initially, the work programme is presented, along with the theory behind RTB design and operation and the theory behind swirling flows and methane combustion. NOx reduction techniques are discussed, and the numerical modelling of combusting flows is detailed in this section. The importance of turbulence, radiation and combustion modelling is highlighted, as well as the numerical schemes that incorporate discretisation, finite volume theory and convergence. The study first focuses on the combustion air flow and its delivery to the combustion zone. An isothermal computational model was developed to allow the examination of the flow characteristics as the air enters the burner and progresses through the various sections prior to the discharge face in the combustion area. Important features identified include the air recuperator swirler coil, the step ring, the primary/secondary air-splitting flame tube and the fuel nozzle. It was revealed that the effectiveness of the air recuperator swirler is significantly compromised by the need for a generous assembly tolerance, and that a substantial circumferential flow maldistribution is introduced by the swirler, but this is effectively removed by the positioning of a ring constriction in the downstream passage. Computations using the k-ε turbulence model show good agreement with experimentally measured velocity profiles in the combustion zone and confirmed the modelling strategy prior to the combustion study. Reasonable mesh independence was obtained with 200,000 nodes. Agreement was poorer with the RNG k-ε and Reynolds Stress models. The study then addresses the combustion process itself and the heat transfer process internal to the RTB. A series of combustion and radiation model configurations were developed, and the optimum combination of the Eddy Dissipation (ED) combustion model and the Discrete Transfer (DT) radiation model was used successfully to validate a burner experimental test. The previously cold-flow-validated k-ε turbulence model was used, and reasonable mesh independence was obtained with 300,000 nodes. The combination showed good agreement with temperature measurements in the inner and outer walls of the burner, as well as with flue gas composition measured at the exhaust. The inner tube wall temperature predictions matched the experimental measurements at most of the thermocouple locations, highlighting a small flame bias to one side, although the model slightly over-predicts the temperatures towards the downstream end of the inner tube. NOx emissions were initially over-predicted; however, the use of a combustion flame temperature limiting subroutine allowed convergence to the experimental value of 451 ppmv.
With the validated model, the effectiveness of certain RTB features identified previously is analysed, and an analysis of the energy transfers throughout the burner is presented to identify the dominant mechanisms in each region. The optimum turbulence-combustion-radiation model selection was then the baseline for further model development. One of these models, an eccentrically positioned flame tube model, highlights the failure mode of the RTB during long-term operation. Other models were developed to address NOx reduction and improvement of the flame profile in the burner combustion zone. These included a modified fuel nozzle design, with 12 circular-section fuel ports, which demonstrates a longer and more symmetric flame, although with limited success in NOx reduction. In addition, a zero-bypass swirler coil model was developed that highlights the effect of the stronger swirling combustion flow. Reduced-diameter and 20 mm forward-displaced flame tube models show limited success in NOx reduction, although the latter demonstrated improvements in the discharge face heat distribution and in the flame symmetry. Finally, Flue Gas Recirculation (FGR) modelling attempts indicate the difficulty of applying this NOx reduction technique in the Wellman RTB. Recommendations for further work are made that include design mitigations for the fuel nozzle, and further burner modelling is suggested to improve computational validation. The introduction of fuel staging is proposed, as well as a modification of the inner tube to enhance the effect of FGR.
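For reference, the standard k-ε model used for the baseline computations has the familiar textbook form, quoted here as background rather than from the thesis itself:

```latex
% Standard k-epsilon model (textbook form, stated for background):
\[
  \mu_t = \rho\, C_\mu \frac{k^2}{\varepsilon}, \qquad
  \frac{\partial (\rho k)}{\partial t}
   + \frac{\partial (\rho k u_j)}{\partial x_j}
   = \frac{\partial}{\partial x_j}\!\left[\left(\mu + \frac{\mu_t}{\sigma_k}\right)
     \frac{\partial k}{\partial x_j}\right] + P_k - \rho \varepsilon ,
\]
\[
  \frac{\partial (\rho \varepsilon)}{\partial t}
   + \frac{\partial (\rho \varepsilon u_j)}{\partial x_j}
   = \frac{\partial}{\partial x_j}\!\left[\left(\mu + \frac{\mu_t}{\sigma_\varepsilon}\right)
     \frac{\partial \varepsilon}{\partial x_j}\right]
   + C_{1\varepsilon}\,\frac{\varepsilon}{k}\, P_k
   - C_{2\varepsilon}\,\rho\,\frac{\varepsilon^2}{k},
\]
% with the usual constants C_\mu = 0.09, C_{1\varepsilon} = 1.44,
% C_{2\varepsilon} = 1.92, \sigma_k = 1.0, \sigma_\varepsilon = 1.3.
```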
Abstract:
Substantial altimetry datasets collected by different satellites have only become available during the past five years, but the future will bring a variety of new altimetry missions, both parallel and consecutive in time. The characteristics of each dataset vary with the different orbital heights and inclinations of the spacecraft, as well as with the technical properties of the radar instrument. An integral analysis of datasets with different properties offers advantages in terms of both data quantity and data quality. This thesis is concerned with the development of the means for such integral analysis, in particular for dynamic solutions in which precise orbits for the satellites are computed simultaneously. The first half of the thesis discusses the theory and numerical implementation of dynamic multi-satellite altimetry analysis. The most important aspect of this analysis is the application of dual-satellite altimetry crossover points as a bi-directional tracking data type in simultaneous orbit solutions. The central problem is that the spatial and temporal distributions of the crossovers conflict with the time-organised nature of traditional solution methods. Their application to the adjustment of the orbits of both satellites involved in a dual crossover therefore requires several fundamental changes to the classical least-squares prediction/correction methods. The second part of the thesis applies the developed numerical techniques to the problems of precise orbit computation and gravity field adjustment, using the altimetry datasets of ERS-1 and TOPEX/Poseidon. Although the two datasets can be considered less compatible than those of planned future satellite missions, the results obtained adequately illustrate the merits of a simultaneous solution technique. In particular, the geographically correlated orbit error is partially observable from a dataset consisting of crossover differences between two sufficiently different altimetry datasets, while being unobservable from the analysis of the altimetry data of either satellite individually. This error signal, which has a substantial gravity-induced component, can be employed advantageously in simultaneous solutions for the two satellites in which the harmonic coefficients of the gravity field model are also estimated.
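The crossover observation can be sketched in a generic form (an illustration of the principle, not the thesis's exact formulation). At the intersection of the ground tracks of satellites A and B, the quasi-static sea surface height cancels, so the crossover difference observes the radial orbit errors of both satellites at once:

```latex
% Generic dual-satellite crossover difference (illustrative form):
% r = computed radial orbit height, \rho = measured altimeter range;
% the radial orbit errors \Delta r of both satellites enter one observation.
\[
  d_{AB} \;=\; \bigl[r_A(t_A) - \rho_A(t_A)\bigr]
             - \bigl[r_B(t_B) - \rho_B(t_B)\bigr]
         \;\approx\; \Delta r_A(t_A) - \Delta r_B(t_B) + \epsilon ,
\]
% which is what makes the crossover a bi-directional tracking data type:
% one measurement constrains the orbit adjustment of both satellites.
```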
Abstract:
The study investigated the potential applications and the limitations of non-standard techniques of visual field investigation utilizing automated perimetry. Normal subjects exhibited a greater sensitivity to kinetic stimuli than to static stimuli of identical size. The magnitude of physiological statokinetic dissociation (SKD) was found to be largely independent of age, stimulus size, meridian and eccentricity. The absence of a dependency on stimulus size indicated that successive lateral spatial summation could not totally account for the underlying mechanism of physiological SKD. The visual field indices MD and LV exhibited a progressive deterioration during the time course of a conventional central visual field examination, both for normal subjects and for ocular hypertensive patients. The fatigue effect was more pronounced in the latter stages and for the second eye tested. The confidence limits for the definition of abnormality should reflect the greater effect of fatigue on the second eye. A 330 cd m-2 yellow background was employed for blue-on-yellow perimetry. Instrument measurement range was preserved by positioning a concave mirror behind the stimulus bulb to increase the light output by 60%. The mean magnitude of short-wavelength-sensitive (SWS) pathway isolation was approximately 1.4 log units relative to a 460 nm stimulus filter. The absorption spectra of the ocular media exhibited an exponential increase with increasing age, whilst that of the macular pigment showed no systematic trend. The magnitude of ocular media absorption was demonstrated to reduce with increasing wavelength. Ocular media absorption was significantly greater in diabetic patients than in normal subjects. Five diabetic patients with either normal or borderline achromatic sensitivity exhibited an abnormal blue-on-yellow sensitivity; two of these patients showed no signs of retinopathy. A greater vulnerability of the SWS pathway to the diabetic disease process was hypothesized.
Abstract:
Manufacturing planning and control systems are fundamental to the successful operation of a manufacturing organisation. In order to improve their business performance, companies invest significantly in planning and control systems; however, not all companies realise the benefits sought. Many companies continue to suffer from high levels of inventory, shortages, obsolete parts, poor resource utilisation and poor delivery performance. This thesis argues that the fit between the planning and control system and the manufacturing organisation is a crucial element of success. The design of appropriate control systems is, therefore, important. The different approaches to the design of manufacturing planning and control systems are investigated. It is concluded that there is no provision within these design methodologies to properly assess the impact of a proposed design on the manufacturing facility. Consequently, an understanding of how a new (or modified) planning and control system will perform in the context of the complete manufacturing system is unlikely to be gained until after the system has been implemented and is running. There are many modelling techniques available; however, discrete-event simulation is unique in its ability to model the complex dynamics inherent in manufacturing systems, of which the planning and control system is an integral component. The existing application of simulation to manufacturing control system issues is limited: although operational issues are addressed, application to the more fundamental design of control systems is rarely, if at all, considered. The lack of a suitable simulation-based modelling tool does not help matters. The requirements of a simulation tool capable of modelling a host of different planning and control systems are presented. It is argued that only through the application of object-oriented principles can these extensive requirements be met. This thesis reports on the development of an extensible class library called WBS/Control, which is based on object-oriented principles and discrete-event simulation. The functionality, both current and future, offered by WBS/Control means that different planning and control systems can be modelled: not only the more standard implementations but also hybrid systems and new designs. The flexibility implicit in the development of WBS/Control supports its application to both design and operational issues. WBS/Control integrates wholly with an existing manufacturing simulator to provide a more complete modelling environment.
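The discrete-event principle behind such a tool can be sketched minimally. This is illustrative only and is not WBS/Control's API; all class and function names below are hypothetical. The core is a time-ordered event queue, against which a planning-and-control rule (here a toy reorder-point policy) can be exercised before any real implementation.

```python
# Hedged sketch (hypothetical names, not the WBS/Control API): the core of
# a discrete-event simulator is a time-ordered event queue; a control
# policy is modelled as events that schedule further events.

import heapq
from typing import Callable, List, Tuple

class Simulator:
    """Minimal discrete-event engine: schedule and run timestamped events."""
    def __init__(self) -> None:
        self.now = 0.0
        self._queue: List[Tuple[float, int, Callable[[], None]]] = []
        self._seq = 0   # tie-breaker for events scheduled at equal times

    def schedule(self, delay: float, action: Callable[[], None]) -> None:
        heapq.heappush(self._queue, (self.now + delay, self._seq, action))
        self._seq += 1

    def run(self, until: float) -> None:
        while self._queue and self._queue[0][0] <= until:
            self.now, _, action = heapq.heappop(self._queue)
            action()

sim = Simulator()
stock = {"level": 5, "on_order": False}

def demand() -> None:
    stock["level"] -= 1
    if stock["level"] <= 2 and not stock["on_order"]:   # reorder point
        stock["on_order"] = True
        sim.schedule(3.0, replenish)                    # replenishment lead time
    sim.schedule(1.0, demand)                           # next demand arrival

def replenish() -> None:
    stock["level"] += 5
    stock["on_order"] = False
    print(f"t={sim.now:4.1f}  replenished, stock={stock['level']}")

sim.schedule(1.0, demand)
sim.run(until=12.0)
```

Swapping the reorder-point rule for another policy object, without touching the engine or the facility model, is the kind of separation that the object-oriented design argued for above makes possible.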