49 results for application technique
Abstract:
The primary objective of this research has been to determine the potential of fluorescence spectroscopy as a method for analysing surface deposition on contact lenses. To achieve this it was first necessary to ascertain whether fluorescence analysis could detect and distinguish between protein and lipid deposited on a lens surface. In conjunction with this it was important to determine the specific excitation wavelengths at which these deposited species were detected with the greatest sensitivity. Experimental observations showed that an excitation wavelength of 360 nm would detect lipid deposited on a lens surface, and an excitation wavelength of 280 nm would detect and distinguish between protein and lipid deposited on a contact lens. It was also very important to determine whether clean, unspoilt lenses themselves showed significant levels of fluorescence. Fluorescence spectra recorded from a variety of unworn contact lenses at excitation wavelengths of 360 nm and 280 nm indicated that most contact lens materials do not fluoresce to any great extent. Following these initial experiments, various clinically and laboratory-based studies were performed using fluorescence spectroscopy as a method of analysing contact lens deposition levels. The clinically based studies enabled contact lenses with known wear backgrounds to be rapidly and individually analysed following discontinuation of wear. Deposition levels in the early stages of lens wear were determined for various lens materials. The effect of surfactant cleaning on deposition levels was also investigated. The laboratory-based studies involved comparing some of the in vivo results with those of identical lenses that had been spoilt using an in vitro method. Finally, an examination of lysozyme migration into and out of stored ionic, high-water-content contact lenses was made.
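As a rough, hypothetical illustration of the detection logic described above (not the instrumentation or analysis used in the research), a sketch in Python might flag deposition by comparing worn-lens emission intensities with unworn-lens baselines at the two excitation wavelengths; all names and numbers are invented.

```python
def flag_deposits(i_ex280, i_ex360, blank_280, blank_360, margin=2.0):
    """Flag deposition from peak emission intensities (arbitrary units).

    Per the abstract, emission excited at 360 nm responds to lipid, while
    emission excited at 280 nm responds to both protein and lipid (the two
    are distinguished there by spectral shape, which this sketch ignores).
    blank_* are readings from an unworn lens; margin is an assumed factor.
    """
    lipid_detected = i_ex360 > margin * blank_360
    protein_or_lipid_detected = i_ex280 > margin * blank_280
    return {"lipid (360 nm ex.)": lipid_detected,
            "protein/lipid (280 nm ex.)": protein_or_lipid_detected}

print(flag_deposits(i_ex280=55.0, i_ex360=3.1, blank_280=5.0, blank_360=2.5))
```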
Abstract:
We propose a novel technique for optical liquid-level sensing. The technique exploits optical spectrum spreading and measures the liquid level directly in a digital format. The performance of the sensor does not suffer from changes in environmental or system variables. Owing to its distinct measurement principle, both high resolution and a large measurement range can be achieved simultaneously.
Abstract:
Some critical aspects of a new kind of on-line measurement technique for micro- and nanoscale surface measurement are described. The technique attempts to use spatial light-wave scanning to replace mechanical stylus scanning, and an optical fibre interferometer to replace optically bulky interferometers for measuring the surfaces. The basic principle is the measurement of the phase shift of a reflected optical signal. Wavelength-division-multiplexing and fibre Bragg grating techniques are used to carry out wavelength-to-field transformation and phase-to-depth detection, allowing a large dynamic measurement ratio (range/resolution) and a high signal-to-noise ratio with remote access. In effect the paper consists of two parts: multiplexed fibre interferometry and a remote on-machine surface detection sensor (an optical dispersive probe). The paper aims to investigate the metrology properties of a multiplexed fibre interferometer and to verify its feasibility by both theoretical and experimental studies. Two types of optical probe, using a dispersive prism and a blazed grating respectively, are introduced to realize wavelength-to-spatial scanning.
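The phase-to-depth step can be illustrated with the standard reflection-interferometry relation h = Δφ·λ/(4π); the sketch below is a generic outline under that assumption, not the paper's multiplexed implementation, and the channel wavelengths and phase readings are invented.

```python
import math

def phase_to_depth(delta_phi_rad, wavelength_nm):
    """Surface height change from an interferometric phase shift.

    For reflection, an extra height h adds 2h of optical path, so
    delta_phi = 4*pi*h / lambda  =>  h = delta_phi * lambda / (4*pi).
    Valid only within an unambiguous phase range (no unwrapping here).
    """
    return delta_phi_rad * wavelength_nm / (4.0 * math.pi)

# Each WDM channel (e.g. one fibre Bragg grating wavelength) maps to one
# lateral position on the surface; three hypothetical channels and readings.
channels_nm = [1530.0, 1545.0, 1560.0]
phases_rad = [0.40, 1.10, -0.25]
depths_nm = [phase_to_depth(p, lam) for p, lam in zip(phases_rad, channels_nm)]
print([f"{d:.1f} nm" for d in depths_nm])
```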
Abstract:
This thesis describes the development of a complete data visualisation system for large tabular databases, such as those commonly found in a business environment. A state-of-the-art 'cyberspace cell' data visualisation technique was investigated and a powerful visualisation system using it was implemented. Although allowing databases to be explored and conclusions drawn, it had several drawbacks, the majority of which were due to the three-dimensional nature of the visualisation. A novel two-dimensional generic visualisation system, known as MADEN, was then developed and implemented, based upon a 2-D matrix of 'density plots'. MADEN allows an entire high-dimensional database to be visualised in one window, while permitting close analysis in 'enlargement' windows. Selections of records can be made and examined, and dependencies between fields can be investigated in detail. MADEN was used as a tool for investigating and assessing many data processing algorithms, firstly data-reducing (clustering) methods, then dimensionality-reducing techniques. These included a new 'directed' form of principal components analysis, several novel applications of artificial neural networks, and discriminant analysis techniques which illustrated how groups within a database can be separated. To illustrate the power of the system, MADEN was used to explore customer databases from two financial institutions, resulting in a number of discoveries which would be of interest to a marketing manager. Finally, the database of results from the 1992 UK Research Assessment Exercise was analysed. Using MADEN allowed both universities and disciplines to be graphically compared, and supplied some startling revelations, including empirical evidence of the 'Oxbridge factor'.
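A 2-D matrix of 'density plots' of the kind described can be approximated with standard plotting tools; the sketch below is an illustrative reconstruction of the idea (not MADEN itself), using synthetic data, where each cell is a binned density plot of one pair of fields.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# Hypothetical tabular data: 4 numeric fields, 5000 records.
data = rng.normal(size=(5000, 4))
data[:, 1] += 0.8 * data[:, 0]          # built-in dependency between fields 0 and 1
fields = ["f0", "f1", "f2", "f3"]

n = len(fields)
fig, axes = plt.subplots(n, n, figsize=(8, 8))
for i in range(n):
    for j in range(n):
        ax = axes[i, j]
        # Density plot: binned record counts for the (j, i) field pair.
        ax.hist2d(data[:, j], data[:, i], bins=40, cmap="viridis")
        ax.set_xticks([]); ax.set_yticks([])
        if i == n - 1: ax.set_xlabel(fields[j])
        if j == 0: ax.set_ylabel(fields[i])
plt.tight_layout()
plt.savefig("density_matrix.png")
```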
Abstract:
During the last decade the use of randomised gene libraries has had an enormous impact in the field of protein engineering. Such libraries comprise many variations of a single gene, in which codon replacements are used to substitute key residues of the encoded protein. The expression of such libraries generates a library of randomised proteins which can subsequently be screened for desired or novel activities. Randomisation in this fashion has predominantly been achieved by the inclusion of the codons NNN or NNG/C or NNG/T, in which N represents any of the four bases A, C, G or T. The use of these codons, however, necessitates the cloning of redundant codons at each position of randomisation, in addition to those required to encode the twenty possible amino acid substitutions. As degenerate codons must be included at each position of randomisation, this results in a progressive loss of randomisation efficiency as the number of randomised positions is increased. The ratio of genes to proteins in these libraries rises exponentially with each position of randomisation, creating large gene libraries which generate protein libraries of limited diversity upon expression. In addition to these problems of library size, the cloning of redundant codons also results in the generation of protein libraries in which substituted amino acids are unevenly represented. As several of the randomised codons may encode the same amino acid (for example serine, which is encoded six times by the codon NNN), an inherent bias may be introduced into the resulting protein library during the randomisation procedure. The work outlined here describes the development of a novel randomisation technique aimed at eliminating codon redundancy from randomised gene libraries, thus addressing the problems of library size and bias associated with the cloning of redundant codons.
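The scaling problem can be made concrete with a little arithmetic: each fully randomised NNN position multiplies the gene library by 64 (serine alone accounting for six of those codons) while the protein library grows only by 20. A short illustration:

```python
# Gene-to-protein library size ratio for n fully randomised positions.
CODONS_NNN = 64        # any of 4 bases at each of 3 codon positions
CODONS_NNK = 32        # third base restricted to G or T
AMINO_ACIDS = 20

for n in range(1, 7):
    genes_nnn = CODONS_NNN ** n
    genes_nnk = CODONS_NNK ** n
    proteins = AMINO_ACIDS ** n
    print(f"{n} positions: NNN {genes_nnn:>12,} genes, "
          f"NNK {genes_nnk:>12,} genes, {proteins:>12,} proteins "
          f"(NNN redundancy x{genes_nnn / proteins:.1f})")
```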
Abstract:
Randomisation of DNA using conventional methodology requires an excess of genes to be cloned, since with the randomised codons NNN or NNG/T, 64 or 32 genes respectively must be cloned to encode the 20 amino acids. Thus, as the number of randomised codons increases, the number of genes required to encode a full set of proteins increases exponentially. Various methods have been developed that address the problems associated with the excess of genes arising from the degeneracy of the genetic code. These range from chemical methodologies to biological methods, and all involve the replacement, insertion or deletion of codon(s) rather than individual nucleotides. The biological methods are, however, limited to random insertion/deletion or replacement. Recent work by Hughes et al. (2003) randomised three binding residues of a zinc finger gene; the drawback is that consecutive codons cannot undergo saturation mutagenesis. This thesis describes the development of a method of saturation mutagenesis that can be used to randomise any number of consecutive codons in a DNA strand. The method makes use of “MAX” oligonucleotides coding for each of the 20 amino acids, which are ligated to a conserved sequence of DNA using T4 DNA ligase. The “MAX” oligonucleotides were synthesised with an MlyI restriction site positioned so that restriction of the oligonucleotides occurs after the three nucleotides coding for the amino acid. This use of the MlyI site and the restrict, purify, ligate and amplify method allows the insertion of “MAX” codons at any position in the DNA. The methodology reduces the number of clones required to produce a representative library and has been demonstrated to be effective for up to 7 amino acid positions.
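To see why fewer clones are needed, one can compare library sizes and apply the standard per-variant coverage estimate N = ln(1-P)/ln(1-1/V), the number of clones at which a given variant appears with probability P; this is a textbook estimate assumed for illustration, not a figure from the thesis.

```python
import math

def clones_needed(num_variants, p=0.95):
    """Clones to sample so that any given variant appears with probability p,
    assuming all variants are equally likely (standard coverage estimate)."""
    return math.ceil(math.log(1 - p) / math.log(1 - 1 / num_variants))

positions = 7
max_library = 20 ** positions          # MAX codons: one codon per amino acid
nnk_library = 32 ** positions          # conventional NNK randomisation
print(f"7 positions, MAX codons : {max_library:,} variants, "
      f"~{clones_needed(max_library):,} clones for 95% coverage")
print(f"7 positions, NNK codons : {nnk_library:,} gene variants, "
      f"~{clones_needed(nnk_library):,} clones for 95% coverage")
```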
Abstract:
The study of surfactant monolayers is certainly not a new technique, but the application of monolayer studies to elucidate controlling factors in liposome design remains an underutilised resource. Using a Langmuir-Blodgett trough, pure and mixed lipid monolayers can be investigated, both for the interactions within the monolayer and for interfacial interactions with drugs in the aqueous sub-phase. Despite these monolayers effectively being only half a bilayer, with a flat rather than curved structure, information from these studies can be effectively translated into liposomal systems. Here we outline the background, general protocols and application of Langmuir studies, with a focus on their application to liposomal systems. A range of case studies is discussed to show how the technique can support the development of liposomal drug delivery. Examples include investigations into the effect of cholesterol within the liposome bilayer, understanding effective lipid packing within the bilayer to promote retention of water-soluble and poorly soluble drugs, the effect of alkyl chain length on lipid packing, and drug-monolayer electrostatic interactions that promote bilayer repacking.
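A routine mixed-monolayer analysis on a Langmuir trough compares the measured mean molecular area with the ideal-mixing value A_ideal = x1·A1 + x2·A2; deviation from ideality indicates specific interactions between the components. The sketch below uses invented areas and is not the protocol of this work.

```python
def ideal_mixed_area(x1, area1, area2):
    """Ideal (additive) mean molecular area of a two-component monolayer
    at a given surface pressure: A_ideal = x1*A1 + (1 - x1)*A2."""
    return x1 * area1 + (1 - x1) * area2

# Hypothetical areas per molecule (A^2) at 30 mN/m: a phospholipid and cholesterol.
a_lipid, a_chol = 64.0, 38.0
x_lipid = 0.7
a_measured = 55.2                      # assumed experimental value

a_ideal = ideal_mixed_area(x_lipid, a_lipid, a_chol)
excess = a_measured - a_ideal          # negative => condensing interaction
print(f"ideal {a_ideal:.1f} A^2, measured {a_measured:.1f} A^2, excess {excess:+.1f} A^2")
```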
Abstract:
Numerical techniques have been finding increasing use in all aspects of fracture mechanics, and often provide the only means for analysing fracture problems. The work presented here is concerned with the application of the finite element method to cracked structures. The present work was directed towards the establishment of a comprehensive two-dimensional finite element, linear elastic, fracture analysis package. Significant progress has been made to this end, and features which can now be studied include multi-crack-tip mixed-mode problems involving partial crack closure. The crack tip core element was refined, and special local crack tip elements were employed to reduce the element density in the neighbourhood of the core region. The work builds upon experience gained by previous research workers and, as part of the general development, the program was modified to incorporate the eight-node isoparametric quadrilateral element. Also, a more flexible solving routine was developed, which provided a very compact method of solving large sets of simultaneous equations stored in a segmented form. To complement the finite element analysis programs, an automatic mesh generation program has been developed, which enables complex problems involving fine element detail to be investigated with a minimum of input data. The scheme has proven to be versatile and reasonably easy to implement. Numerous examples are given to demonstrate the accuracy and flexibility of the finite element technique.
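The quantity ultimately extracted from such linear elastic crack-tip analyses is the stress intensity factor; as a sense of scale, the textbook centre-crack-in-a-finite-plate case can be evaluated directly (a hand check, not output from the package described here).

```python
import math

def k_i_centre_crack(stress_mpa, half_crack_m, width_m):
    """Mode I stress intensity factor for a centre crack of length 2a in a
    finite-width plate under remote tension, with the standard secant
    finite-width correction: K_I = sigma*sqrt(pi*a)*sqrt(sec(pi*a/W))."""
    a, w = half_crack_m, width_m
    correction = math.sqrt(1.0 / math.cos(math.pi * a / w))
    return stress_mpa * math.sqrt(math.pi * a) * correction  # MPa*sqrt(m)

# Hypothetical plate: 100 MPa remote stress, 2a = 20 mm crack, 100 mm wide.
print(f"K_I = {k_i_centre_crack(100.0, 0.010, 0.100):.1f} MPa*sqrt(m)")
```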
Abstract:
This study is primarily concerned with the problem of brake squeal in disc brakes using moulded organic disc pads. Moulded organic friction materials are complex composites and, owing to this complexity, it was thought that they were unlikely to be of uniform composition. Variation in composition would, under certain conditions of the braking system, cause slight changes in its vibrational characteristics, thus causing resonance in the high audio-frequency range. Dynamic mechanical properties appear to be the parameters most likely to be related to a given composition's tendency to promote squeal. Since it was necessary to test under service conditions, a review was made of all the available commercial test instruments, but as none were suitable a new instrument had to be designed and developed. The final instrument design, based on longitudinal resonance, enabled modulus and damping to be determined over a wide range of temperatures and frequencies. This apparatus has commercial value since it is not restricted to friction material testing. Both used and unused pads were tested and, although the cause of brake squeal was not definitely established, the results enabled formulation of a tentative theory of the possible conditions for brake squeal. A temperature of minimum damping was indicated, which may be of use to brake design engineers. Some auxiliary testing was also performed to establish the effect of water, oil and brake fluid, and to determine the effect of the various components of friction materials.
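The kind of result the longitudinal-resonance instrument yields can be sketched from textbook relations: for a free-free bar the nth longitudinal resonance gives E = ρ(2Lf_n/n)², and the half-power bandwidth gives the loss factor η ≈ Δf/f_n. The sample values below are invented, not measurements from this study.

```python
def modulus_from_resonance(density_kg_m3, length_m, f_n_hz, mode_n=1):
    """Young's modulus of a free-free bar from its nth longitudinal
    resonance: f_n = n*c/(2L) with c = sqrt(E/rho)."""
    c = 2.0 * length_m * f_n_hz / mode_n       # longitudinal wave speed (m/s)
    return density_kg_m3 * c ** 2              # Pa

def loss_factor(f_n_hz, half_power_bandwidth_hz):
    """Damping loss factor from the half-power (-3 dB) bandwidth."""
    return half_power_bandwidth_hz / f_n_hz

# Hypothetical friction-material sample: 100 mm bar, density 2500 kg/m^3,
# fundamental longitudinal resonance at 9.5 kHz with 250 Hz bandwidth.
E = modulus_from_resonance(2500.0, 0.100, 9500.0)
eta = loss_factor(9500.0, 250.0)
print(f"E ~ {E / 1e9:.1f} GPa, loss factor ~ {eta:.3f}")
```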
Abstract:
The absence of a definitive approach to the design of manufacturing systems signifies the importance of a control mechanism to ensure the timely application of relevant design techniques. To provide effective control, design development needs to be continually assessed in relation to the required system performance, which can only be achieved analytically through computer simulation, the only technique capable of accurately replicating the highly complex and dynamic interrelationships inherent within manufacturing facilities and realistically predicting system behaviour. Owing to these unique capabilities, the application of computer simulation should support and encourage a thorough investigation of all alternative designs, allowing attention to focus specifically on critical design areas and enabling continuous assessment of system evolution. To achieve this, system analysis needs to be efficient in terms of data requirements and both speed and accuracy of evaluation. To provide an effective control mechanism, a hierarchical or multi-level modelling procedure has therefore been developed, specifying the appropriate degree of evaluation support necessary at each phase of design. An underlying assumption of the proposal is that evaluation is quick, easy and allows models to expand in line with design developments. However, current approaches to computer simulation are totally inappropriate for supporting such hierarchical evaluation. Implementation of computer simulation through traditional approaches is typically characterized by a requirement for very specialist expertise, a lengthy model development phase and a correspondingly high expenditure, resulting in very little, and rather inappropriate, use of the technique. Simulation, when used, is generally only applied to check or verify a final design proposal; rarely is its full potential utilized to aid, support or complement the manufacturing system design procedure. To implement the proposed modelling procedure, therefore, the concept of a generic simulator was adopted, as such systems require no specialist expertise, instead facilitating quick and easy model creation, execution and modification through simple data inputs. Previously, generic simulators have tended to be too restricted, lacking the necessary flexibility to be generally applicable to manufacturing systems. Development of the ATOMS manufacturing simulator, however, has proven that such systems can be relevant to a wide range of applications, besides verifying the benefits of multi-level modelling.
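The 'simple data inputs' idea behind a generic simulator can be illustrated with a toy data-driven flow-line model (purely illustrative, not ATOMS): the stations and their cycle times are plain data, and the same engine runs any configuration built from them.

```python
def simulate_flow_line(stations, num_jobs):
    """Toy data-driven simulation of a serial flow line.

    stations : list of (name, cycle_time) pairs - the whole model definition.
    Returns the completion time of each job, assuming all jobs are available
    at time zero and buffers between stations are unlimited.
    """
    finish = [0.0] * len(stations)             # time each station becomes free
    completions = []
    for _ in range(num_jobs):
        t = 0.0
        for k, (_, cycle) in enumerate(stations):
            start = max(t, finish[k])          # wait for the station to be free
            finish[k] = start + cycle
            t = finish[k]
        completions.append(t)
    return completions

# The model is changed by editing data only, with no re-programming.
line = [("saw", 4.0), ("mill", 6.5), ("drill", 3.0), ("inspect", 2.0)]
done = simulate_flow_line(line, num_jobs=5)
print([f"{t:.1f}" for t in done])
```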
Abstract:
The present work is a study of the optical properties of some surfaces, in order to determine their applications in solar energy utilisation. An attempt has been made to investigate and measure the optical properties of two systems of surfaces: moderately selective surfaces, such as thermally grown oxide of titanium, titanium oxide on aluminium and thermally grown oxides of stainless steel; and selective surfaces of five differently coloured stainless steels (INCO surfaces) and of black nickel foil. A calorimetric instrument, based on the steady state method, has been designed for measuring the total emittance directly. Chapter 1 is an introductory survey of selective surfaces. It also includes a brief review of the various preparation techniques in use since 1955. Chapter 2 investigates the theory of selective surfaces, defining their optical properties and their figures of merit. It also outlines the method of computing the optical properties (i.e. absorptance, α, and emittance, ε) adopted for the present work. Chapter 3 describes the measuring technique and the modes of operation of the equipment used in the experimental work. Chapter 4 gives the results of the experimental work to measure the optical properties, the life testing and the chemical composition of the surfaces under study. Chapter 5 deals with the experimentation leading to the design of a calorimetric instrument for measuring the total emittance directly. Chapter 6 presents concluding remarks about the outcome of the present work and some suggestions for further work.
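The optical figures of merit referred to in Chapter 2 are conventionally computed by weighting the spectral absorptance by the solar spectrum (for α) and by the blackbody spectrum at the operating temperature (for ε). The sketch below uses a crude two-band reflectance model and a 5800 K blackbody as a stand-in for the solar spectrum; all values are assumed, not results from this work.

```python
import numpy as np

H = 6.626e-34; C = 2.998e8; KB = 1.381e-23

def planck(lam_m, temp_k):
    """Blackbody spectral radiance (normalisation irrelevant for weighting)."""
    return 1.0 / (lam_m ** 5 * (np.exp(H * C / (lam_m * KB * temp_k)) - 1.0))

lam = np.linspace(0.3e-6, 30e-6, 5000)           # 0.3-30 um
# Hypothetical selective surface: low reflectance below a 2 um cut-off
# (absorbs solar), high reflectance beyond it (suppresses thermal emission).
reflectance = np.where(lam < 2e-6, 0.10, 0.90)
absorptance = 1.0 - reflectance                   # opaque surface: alpha = 1 - rho
# By Kirchhoff's law the spectral emissivity equals the spectral absorptance.

solar_weight = planck(lam, 5800.0)                # crude solar spectrum
thermal_weight = planck(lam, 373.0)               # emitter at ~100 C

alpha = np.trapz(absorptance * solar_weight, lam) / np.trapz(solar_weight, lam)
eps = np.trapz(absorptance * thermal_weight, lam) / np.trapz(thermal_weight, lam)
print(f"alpha ~ {alpha:.2f}, epsilon ~ {eps:.2f}, selectivity alpha/eps ~ {alpha/eps:.1f}")
```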
Abstract:
A re-examination of fundamental concepts and a formal structuring of the waveform analysis problem is presented in Part I: for example, the nature of frequency is examined, and a novel alternative to the classical methods of detection is proposed and implemented which has the advantages of speed and independence from amplitude. Waveform analysis provides the link between Parts I and II. Part II is devoted to Human Factors and the Adaptive Task Technique. The historical, technical and intellectual development of the technique is traced in a review which examines the evidence of its advantages relative to non-adaptive fixed-task methods of training, skill assessment and man-machine optimisation. A second review examines research evidence on the effect of vibration on manual control ability. Findings are presented in terms of the percentage increment or decrement in performance relative to performance without vibration in the range 0-0.6 Rms 'g'. Primary task performance was found to vary by as much as 90% between tasks at the same Rms 'g'; differences in task difficulty accounted for this variation. Within tasks, vibration-added difficulty accounted for the effects of vibration intensity. Secondary tasks were found to be largely insensitive to vibration, except those which involved fine manual adjustment of minor controls. Three experiments are then reported in which an adaptive technique was used to measure the percentage task difficulty added by vertical random and sinusoidal vibration to a 'Critical Compensatory Tracking' task. At vibration intensities between 0 and 0.09 Rms 'g' it was found that random vibration added (24.5 x Rms 'g')/7.4 x 100% to the difficulty of the control task. An equivalence relationship between random and sinusoidal vibration effects was established based upon added task difficulty. Waveform analyses applied to the experimental data served to validate phase plane analysis and uncovered the development of a control strategy and possibly a vibration isolation strategy. The submission ends with an appraisal of the subjects mentioned in the thesis title.
Abstract:
A series of in-line curvature sensors on a garment are used to monitor the thoracic and abdominal movements of a human during respiration. These results are used to obtain volumetric tidal changes of the human torso in agreement with a spirometer used simultaneously at the mouth. The curvature sensors are based on long-period gratings (LPGs) written in a progressive three-layered fiber to render the LPGs insensitive to the refractive index external to the fiber. A curvature sensor consists of the fiber long-period grating laid on a carbon fiber ribbon, which is then encapsulated in a low-temperature curing silicone rubber. The sensors have a spectral sensitivity to curvature, dλ/dR, from ~7 nm m to ~9 nm m. The interrogation technique is borrowed from derivative spectroscopy and monitors the changes in the transmission spectral profile of the LPG's attenuation band due to curvature. The multiplexing of the sensors is achieved by spectrally matching a series of distributed feedback (DFB) lasers to the LPGs. The versatility of this sensing garment is confirmed by it being used on six other human subjects covering a wide range of body mass indices. Just six fully functional sensors are required to obtain a volumetric error of around 6%. (C) 2007 Society of Photo-Optical Instrumentation Engineers.
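The intensity-based interrogation can be pictured with a toy model in which a DFB laser parked on the edge of the LPG attenuation band sees its transmitted power change as curvature shifts the band. The Gaussian band shape, laser placement and band centre below are assumptions; only the ~8 nm m sensitivity is taken from the abstract, and the paper's derivative-spectroscopy processing is not reproduced.

```python
import numpy as np

def lpg_transmission(lam_nm, centre_nm, depth_db=12.0, width_nm=8.0):
    """Toy model of an LPG attenuation band: Gaussian dip in transmission (dB)."""
    return -depth_db * np.exp(-0.5 * ((lam_nm - centre_nm) / width_nm) ** 2)

SENS_NM_M = 8.0          # dlambda/dR ~ 8 nm*m (mid-range of the quoted 7-9 nm*m)
CENTRE0_NM = 1550.0      # unbent band centre (assumed)
DFB_NM = 1556.0          # interrogating DFB laser parked on the band edge (assumed)

for curvature_inv_m in [0.0, 0.5, 1.0, 2.0]:
    centre = CENTRE0_NM + SENS_NM_M * curvature_inv_m   # band shifts with curvature
    t_db = lpg_transmission(DFB_NM, centre)
    print(f"curvature {curvature_inv_m:.1f} 1/m -> transmission at DFB {t_db:6.2f} dB")
```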
Abstract:
Distributed Brillouin sensing of strain and temperature works by making spatially resolved measurements of the position of the measurand-dependent extremum of the resonance curve associated with the scattering process in the weakly nonlinear regime. Typically, measurements of backscattered Stokes intensity (the dependent variable) are made at a number of predetermined fixed frequencies covering the design measurand range of the apparatus and combined to yield an estimate of the position of the extremum. The measurand can then be found because its relationship to the position of the extremum is assumed known. We present analytical expressions relating the relative error in the extremum position to experimental errors in the dependent variable. This is done for two cases: (i) a simple non-parametric estimate of the mean based on moments and (ii) the case in which a least squares technique is used to fit a Lorentzian to the data. The question of statistical bias in the estimates is discussed and in the second case we go further and present for the first time a general method by which the probability density function (PDF) of errors in the fitted parameters can be obtained in closed form in terms of the PDFs of the errors in the noisy data.
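For case (ii), the fitting step can be sketched with a standard nonlinear least-squares fit of a Lorentzian to noisy intensities sampled at fixed frequencies; the Brillouin frequency, linewidth and noise level below are assumed for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, peak, f_b, width):
    """Brillouin gain curve: Lorentzian with centre f_b and FWHM width."""
    return peak / (1.0 + ((f - f_b) / (width / 2.0)) ** 2)

rng = np.random.default_rng(1)
freqs = np.linspace(10.60e9, 11.00e9, 41)              # fixed scan frequencies (Hz)
true = (1.0, 10.82e9, 35e6)                            # peak, centre, FWHM (assumed)
data = lorentzian(freqs, *true) + 0.02 * rng.normal(size=freqs.size)  # noisy Stokes intensity

# Least-squares estimate of the extremum position (the Brillouin frequency).
popt, pcov = curve_fit(lorentzian, freqs, data, p0=(0.8, 10.8e9, 50e6))
err = np.sqrt(np.diag(pcov))
print(f"fitted centre {popt[1]/1e9:.4f} GHz +/- {err[1]/1e6:.2f} MHz")
```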
Abstract:
The work described in this thesis concerns the application of radar altimetry, collected from the ERS-1 and TOPEX/POSEIDON missions, to precise satellite orbits computed at Aston University. The data are analysed in a long arc fashion to determine range biases, time tag biases and sea surface topographies, and to assess the radial accuracy of the generated orbits through crossover analysis. A sea surface variability study is carried out for the North Sea using repeat altimeter profiles from ERS-1 and TOPEX/POSEIDON in order to verify two local U.K. models for ocean tide and storm surge effects. An on-site technique over the English Channel is performed to compute the ERS-1, TOPEX and POSEIDON altimeter range biases by using a combination of altimetry, precise orbits determined by short arc methods, tide gauge data, GPS measurements, geoid, ocean tide and storm surge models. The remaining part of the thesis presents some techniques for the short arc correction of long arc orbits. Validation of this approach is achieved by way of comparison with actual SEASAT short arcs. Simulations are performed for the ERS-1 microwave tracking system, PRARE, using the range data to determine time-dependent orbit corrections. Finally, a brief chapter is devoted to the recovery of errors in station coordinates by the use of multiple short arcs.