946 results for Application techniques
Abstract:
The advent of personal communication systems within the last decade has depended upon the utilization of advanced digital schemes for source and channel coding and for modulation. The inherent digital nature of the communications processing has allowed the convenient incorporation of cryptographic techniques to implement security in these communications systems. There are various security requirements, of both the service provider and the mobile subscriber, which may be provided for in a personal communications system. Such security provisions include the privacy of user data, the authentication of communicating parties, the provision for data integrity, and the provision for both location confidentiality and party anonymity. This thesis investigates the private-key and public-key cryptographic techniques pertinent to the security requirements of personal communication systems, and presents an analysis of the security provisions of Second-Generation personal communication systems. Particular attention has been paid to the properties of the cryptographic protocols which have been employed in current Second-Generation systems. It has been found that certain security-related protocols implemented in the Second-Generation systems have specific weaknesses. A theoretical evaluation of these protocols has been performed using formal analysis techniques, and certain assumptions made during the development of the systems are shown to contribute to the security weaknesses. Various attack scenarios which exploit these protocol weaknesses are presented. The Fiat-Shamir zero-knowledge cryptosystem is presented as an example of how asymmetric algorithm cryptography may be employed as part of an improved security solution. Various modifications to this cryptosystem have been evaluated, and their critical parameters are shown to be capable of optimization to suit a particular application. The implementation of such a system using current smart card technology has been evaluated.
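As a minimal sketch of how a Fiat-Shamir identification round works, the following toy Python implementation shows the commit-challenge-response exchange (the small primes, variable names and round count are illustrative assumptions, not parameters from the thesis):

```python
import random

# Toy public modulus n = p*q; the factors stay secret (illustrative sizes only;
# a real system would use an RSA-sized modulus).
p, q = 1009, 1013
n = p * q

# Prover's secret s; the public key is v = s^2 mod n.
s = 123
v = (s * s) % n

def fiat_shamir_round():
    """One commit-challenge-response round of Fiat-Shamir identification."""
    r = random.randrange(1, n)     # prover's random witness
    x = (r * r) % n                # commitment sent to the verifier
    e = random.randint(0, 1)       # verifier's one-bit challenge
    y = (r * pow(s, e, n)) % n     # prover's response
    # Verifier accepts iff y^2 == x * v^e (mod n).
    return (y * y) % n == (x * pow(v, e, n)) % n

assert all(fiat_shamir_round() for _ in range(20))
```

An honest prover always satisfies y² ≡ x·v^e (mod n), while a prover who does not know s can answer at most one of the two challenges per round, so t rounds bound the cheating probability by 2⁻ᵗ; trading round count against modulus size is the sort of critical-parameter trade-off the abstract mentions.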
Abstract:
Purpose: Surfactant proteins A, B, C and D complex with (phospho)lipids to produce surfactants which provide low interfacial tensions. It is likely that similar complexation occurs in the tear film and contributes to its low surface tension. Synthetic protein-phospholipid complexes, with styrene maleic anhydrides (SMAs) as the protein analogue, have been shown to have similarly low surface tensions. This study investigates the potential of modified SMAs and/or SMA-phospholipid complexes, which form under physiological conditions, to supplement natural tear film surfactants. Method: SMAs were modified to provide structural variants which can form complexes under varying conditions. Infrared spectroscopy and Nuclear Magnetic Resonance were used to confirm SMA structure. Interfacial behaviour of the SMA and SMA-phospholipid complexes was studied using Langmuir trough, du Noüy ring and pulsating bubble methods. Factors which affect SMA-phospholipid complex formation, such as temperature and pH, were also investigated. Results: Structural manipulation of SMAs allows control over complex formation, including under physiological conditions (e.g. partial SMA esterification allowed complexation with dimyristoylphosphatidylcholine at pH 7). The low surface tensions of the SMAs (42 mN/m for static (du Noüy ring) and 34 mN/m for dynamic (Langmuir) techniques) demonstrate their surface activity at the air-aqueous interface. SMA-phospholipid complexes provide even lower surface tensions (~2 mN/m), approaching that of lung surfactant, as measured by the pulsating bubble method. Conclusions: Design of the molecular architecture of SMAs allows control over their surfactant properties. These SMAs could be used as novel tear film supplements, either alone, to complex with native tear film phospholipids, or delivered as synthetic protein-phospholipid complexes.
Abstract:
This paper investigates how existing software engineering techniques can be employed, adapted and integrated for the development of systems of systems. Starting from existing system-of-systems (SoS) studies, we identify computing paradigms and techniques that have the potential to help address the challenges associated with SoS development, and propose an SoS development framework that combines these techniques in a novel way. This framework addresses the development of a class of IT systems of systems characterised by high variability in the types of interactions between their component systems, and by relatively small numbers of such interactions. We describe how the framework supports the dynamic, automated generation of the system interfaces required to achieve these interactions, and present a case study illustrating the development of a data-centre SoS using the new framework.
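The framework is described only at this level of abstraction here; as a loose, hypothetical illustration of dynamically generating a system interface, the sketch below builds an adapter class at runtime from a declarative description (the description format and all names are invented, not taken from the paper):

```python
# Hypothetical declarative description of one interaction between two
# component systems of a data-centre SoS.
interface_spec = {
    "name": "CoolingToScheduler",
    "operations": {
        "report_temperature": lambda sys, zone: sys.sensors[zone],
        "throttle_jobs": lambda sys, factor: sys.set_load(sys.load * factor),
    },
}

def generate_interface(spec):
    """Generate, at runtime, an adapter class exposing only the declared operations."""
    methods = {
        name: (lambda fn: lambda self, *args: fn(self._system, *args))(fn)
        for name, fn in spec["operations"].items()
    }
    methods["__init__"] = lambda self, system: setattr(self, "_system", system)
    return type(spec["name"], (object,), methods)

class CoolingSystem:
    """Stand-in component system for the example."""
    def __init__(self):
        self.sensors = {"zone-a": 21.5}
        self.load = 1.0
    def set_load(self, value):
        self.load = value

Adapter = generate_interface(interface_spec)
adapter = Adapter(CoolingSystem())
print(adapter.report_temperature("zone-a"))   # 21.5
adapter.throttle_jobs(0.5)                    # halves the scheduler load
```

Because each adapter is generated from a description rather than hand-coded, new interaction types can be accommodated without modifying the component systems, which suits the paper's class of SoS (many interaction types, few interactions of each).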
Abstract:
This study is concerned with the analysis of tear proteins, paying particular attention to the state of the tears (e.g. non-stimulated, reflex, closed) created during sampling, and with the assessment of their interactions with hydrogel contact lenses. The work has involved the use of a variety of biochemical and immunological analytical techniques for the measurement of proteins (a) in tears, (b) on the contact lens, and (c) in the eluate of extracted lenses. Although a diverse range of tear components may contribute to contact lens spoilation, proteins were of particular interest in this study because of their theoretical potential for producing immunological reactions. Although normal host proteins in their natural state are generally not treated as dangerous or non-self, those which undergo denaturation or suffer a conformational change may provoke an excessive and unnecessary immune response. A novel on-lens cell-based assay has been developed and exploited in order to study the role of the ubiquitous cell adhesion glycoprotein, vitronectin, in tears and contact lens wear under various parameters. Vitronectin, whose levels are known to increase in the closed eye environment and are shown here to increase during contact lens wear, is an important immunoregulatory protein and may be a prominent marker of inflammatory activity. Immunodiffusion assays were developed and optimised for use in tear analysis, and in a series of subsequent studies were used, for example, in the measurement of albumin, lactoferrin, IgA and IgG. The immunodiffusion assays were then applied to the study of the closed eye environment, an environment which has been described as sustaining a state of sub-clinical inflammation. The role and presence of a lesser understood and investigated protein, kininogen, was also examined, in particular in relation to contact lens wear. Difficulties arise when attempting to extract proteins from the contact lens in order to examine the individual nature of the proteins involved. These problems were partly alleviated by the use of the on-lens cell assay and a UV spectrophotometry assay, which can analyse the lens surface and bulk respectively, the latter yielding only total protein values. Various lens extraction methods were investigated to remove protein from the lens, and the most efficient was employed in the analysis of lens extracts. Counter immunoelectrophoresis, an immunodiffusion assay, was then applied to the analysis of albumin, lactoferrin, IgA and IgG in the resultant eluates.
Abstract:
A typical liquid-state NMR spectrum is composed of a number of discrete absorptions which can be readily interpreted to yield detailed information about the chemical environment of the nuclei found within the sample. The same cannot be said of the spectra of solid samples. For these, the absorptions are typically broad, featureless and yield little information directly. This situation may be further exacerbated by the characteristically long T1 values of nuclei bound within a solid lattice, which consequently require long inter-sequence delays and thus lengthy experiments. This work attempts to address both of these inherent problems. Classically, the resolution of the broad-line spectra of solids into discrete resonances has been achieved by imparting to the sample coherent rotation about specific axes in relation to the polarising magnetic field, as implemented in the magic-angle spinning (MAS) [1], dynamic angle spinning (DAS) [2] and double rotation (DOR) [3] NMR experiments. Recently, an alternative method, sonically induced narrowing of the NMR spectra of solids (SINNMR) [4], has been reported which yields the same well-resolved solid-state spectra as the classic solid-state NMR experiments, but which achieves the resolution of the broad-line spectra through the promotion of incoherent motion in a suspension of solid particles. The first part of this work examines SINNMR and, in particular, concentrates on ultrasonically induced levitation, a phenomenon which is thought to be essential to the incoherent averaging mechanism. The second part of this work extends the principle of incoherent motion, implicit in SINNMR, to a new genre of particulate systems, air-fluidized beds, and examines the feasibility of using such systems to provide well-resolved solid-state NMR spectra. Samples of trisodium phosphate dodecahydrate and of aluminium granules are examined using the new method, with partially resolved spectra being reported in the case of the latter.
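For reference, the coherent-rotation experiments cited above spin the sample about an axis at the magic angle, the root of 3cos²θ − 1 = 0; this standard result (not a detail specific to the thesis) is easily checked:

```python
import math

# The second-order Legendre term (3*cos^2(theta) - 1)/2 scales the dipolar and
# chemical-shift-anisotropy broadening; spinning about an axis where it
# vanishes averages the broadening away.
theta = math.acos(1.0 / math.sqrt(3.0))
print(math.degrees(theta))            # ~54.7356 degrees
print(3 * math.cos(theta) ** 2 - 1)   # ~0.0
```

SINNMR and the fluidized-bed method pursued here achieve the same averaging statistically, through incoherent particle motion, rather than through coherent rotation at this angle.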
Abstract:
The purpose of this study is to increase our knowledge of the nature of the surface properties of polymeric materials and improve our understanding of how these factors influence the deposition of proteins to form a reactive biological/synthetic interface. A number of surface analytical techniques were identified as being of potential benefit to this investigation and included in a multidisciplinary research program. Cell adhesion in culture was the primary biological sensor of surface properties, and it showed that the cell response to different materials can be modified by adhesion-promoting protein layers: cell adhesion is a protein-mediated event. A range of surface rugosities can be produced on polystyrene, and the results presented here show that surface rugosity does not play a major role in determining a material's cell adhesiveness. Contact angle measurements showed that surface energy (specifically the polar fraction) is important in promoting cell spreading on surfaces. The immunogold labelling technique indicated that there were small but noticeable differences between the distributions of proteins on a range of surfaces. This study has shown that surface analysis techniques have different sensitivities in terms of detection limits and depth probed, and these are important in determining the usefulness of the information obtained. The techniques provide information on differing aspects of the biological/synthetic interface, and the consequence of this is that a range of techniques is needed in any full study of such a complex field as the biomaterials area.
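The abstract does not state how the polar fraction of the surface energy was obtained; one standard route from contact angle data is the Owens-Wendt two-liquid method, sketched below with literature surface-tension components for the probe liquids and invented example angles (the angles, and therefore the printed numbers, are assumptions rather than the study's data):

```python
import math
import numpy as np

# Literature dispersive/polar components (mN/m) of two probe liquids; the
# contact angles are made-up example measurements on the test polymer.
liquids = {
    "water":         {"total": 72.8, "disp": 21.8, "polar": 51.0, "angle_deg": 70.0},
    "diiodomethane": {"total": 50.8, "disp": 50.8, "polar": 0.0,  "angle_deg": 40.0},
}

# Owens-Wendt: gamma_L*(1 + cos theta) = 2*sqrt(gd_S*gd_L) + 2*sqrt(gp_S*gp_L).
# With x = sqrt(gd_S) and y = sqrt(gp_S) this is linear in (x, y).
A, rhs = [], []
for liq in liquids.values():
    A.append([2 * math.sqrt(liq["disp"]), 2 * math.sqrt(liq["polar"])])
    rhs.append(liq["total"] * (1 + math.cos(math.radians(liq["angle_deg"]))))
x, y = np.linalg.solve(np.array(A), np.array(rhs))

disp, polar = x ** 2, y ** 2
print(f"surface energy {disp + polar:.1f} mN/m, polar fraction {polar / (disp + polar):.2f}")
```

A polar fraction computed along these lines is the quantity the study links to improved cell spreading.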
Abstract:
Soft contact lens wear has become a common phenomenon in recent times. The contact lens, when placed in the eye, rapidly undergoes change: a film of biological material builds up on and in the lens matrix. The long-term wear characteristics of the lens ultimately depend on this process. With time, distinct structures made up of biological material have been found to build up on the lens. A fuller understanding of this process, and of how it relates to the lens chemistry, could lead to contact lenses that are better tolerated by the eye. The tear film is a complex biological fluid; it is this fluid that bathes the lens during wear, and it is reasonable to suppose that it is material derived from this source that accumulates on the lens. To understand this phenomenon it was decided to investigate the make-up and conformation of the protein species that are found on and in the lens. As inter-individual variations in tear fluid composition have been found, it is important to be able to study the proteins on a single lens. Many of the analytical techniques used in biological research are not suitable for this study because of their lack of sensitivity. Work with polyacrylamide electrophoresis showed the possibility of analysing the proteins extracted from a single lens. The development of a biotin-avidin electro-blot and an enzyme-linked antibody electro-blot led to the high-sensitivity detection and identification of the proteins present. The extraction of proteins from a lens is always incomplete, so a method that analyses the proteins in situ would be a great advancement. Fourier transform infrared microscopy was developed to a point where a thin section of a contact lens could yield information about the proteins present and their conformation. The three-dimensional structure of the gross macroscopic structures termed white spots was investigated using confocal laser microscopy.
Abstract:
This thesis seeks to describe the development of an inexpensive and efficient clustering technique for multivariate data analysis. The technique starts from a multivariate data matrix and ends with a graphical representation of the data and a pattern recognition discriminant function. The technique also results in a distances frequency distribution that might be useful in detecting clustering in the data or for the estimation of parameters useful in the discrimination between the different populations in the data. The technique can also be used in feature selection. The technique is essentially for the discovery of data structure by revealing the component parts of the data. The thesis offers three distinct contributions for cluster analysis and pattern recognition techniques. The first contribution is the introduction of a transformation function in the technique of nonlinear mapping. The second contribution is the use of the distances frequency distribution instead of the distances time-sequence in nonlinear mapping. The third contribution is the formulation of a new generalised and normalised error function together with its optimal step size formula for gradient method minimisation. The thesis consists of five chapters. The first chapter is the introduction. The second chapter describes multidimensional scaling as an origin of the nonlinear mapping technique. The third chapter describes the first development step in the technique of nonlinear mapping, that is, the introduction of the "transformation function". The fourth chapter describes the second development step of the nonlinear mapping technique: the use of the distances frequency distribution instead of the distances time-sequence. The chapter also includes the formulation of the new generalised and normalised error function. Finally, the fifth chapter, the conclusion, evaluates all the developments and proposes a new program for cluster analysis and pattern recognition by integrating all the new features.
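For orientation, the baseline being developed is Sammon-style nonlinear mapping; a minimal sketch with the classical normalised error function and a plain fixed gradient step is given below (the thesis's transformation function, distances frequency distribution, generalised error function and optimal step size formula are not reproduced; the step size and iteration count here are arbitrary choices):

```python
import numpy as np

def nonlinear_map(X, dims=2, iters=500, step=0.5, seed=0):
    """Sammon-style nonlinear mapping: place n points in `dims`-D so the output
    distances D approximate the input distances d, minimising the normalised
    error E = (1/sum d) * sum (d - D)^2 / d by gradient descent."""
    rng = np.random.default_rng(seed)
    n = len(X)
    d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    d /= d[np.triu_indices(n, 1)].mean()      # unit mean so a fixed step works
    np.fill_diagonal(d, 1.0)                  # dummy diagonal, never used
    c = d[np.triu_indices(n, 1)].sum()
    Y = 0.1 * rng.standard_normal((n, dims))
    for _ in range(iters):
        diff = Y[:, None] - Y[None, :]
        D = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(D, 1.0)
        coeff = (d - D) / (d * D)             # per-pair error weight
        np.fill_diagonal(coeff, 0.0)
        grad = (-2.0 / c) * np.einsum("ij,ijk->ik", coeff, diff)
        Y -= step * grad                      # gradient step on every point
    return Y

# Three well-separated 5-D clusters should map to three visible 2-D groups.
X = np.vstack([np.random.default_rng(i).normal(10 * i, 1.0, (20, 5)) for i in range(3)])
print(nonlinear_map(X)[:3])
```

The thesis's second contribution replaces the per-pair distances time-sequence that drives this gradient with a distances frequency distribution.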
Abstract:
This thesis describes the development of a complete data visualisation system for large tabular databases, such as those commonly found in a business environment. A state-of-the-art 'cyberspace cell' data visualisation technique was investigated, and a powerful visualisation system using it was implemented. Although it allowed databases to be explored and conclusions to be drawn, it had several drawbacks, the majority of which were due to the three-dimensional nature of the visualisation. A novel two-dimensional generic visualisation system, known as MADEN, was then developed and implemented, based upon a 2-D matrix of 'density plots'. MADEN allows an entire high-dimensional database to be visualised in one window, while permitting close analysis in 'enlargement' windows. Selections of records can be made and examined, and dependencies between fields can be investigated in detail. MADEN was used as a tool for investigating and assessing many data processing algorithms, firstly data-reducing (clustering) methods, then dimensionality-reducing techniques. These included a new 'directed' form of principal components analysis, several novel applications of artificial neural networks, and discriminant analysis techniques which illustrated how groups within a database can be separated. To illustrate the power of the system, MADEN was used to explore customer databases from two financial institutions, resulting in a number of discoveries which would be of interest to a marketing manager. Finally, the database of results from the 1992 UK Research Assessment Exercise was analysed. Using MADEN allowed both universities and disciplines to be graphically compared, and supplied some startling revelations, including empirical evidence of the 'Oxbridge factor'.
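MADEN itself is not documented beyond this description, but the core idea of a 2-D matrix of density plots can be sketched with standard tooling (the field names, bin count and synthetic 'customer' table below are placeholders, not MADEN's actual interface):

```python
import numpy as np
import matplotlib.pyplot as plt

def density_plot_matrix(table, fields, bins=40):
    """Draw an n-by-n matrix of 2-D density plots: each cell shows the joint
    density of one pair of fields, so the whole table is visible in one window."""
    n = len(fields)
    fig, axes = plt.subplots(n, n, figsize=(2 * n, 2 * n))
    for i, fy in enumerate(fields):
        for j, fx in enumerate(fields):
            ax = axes[i, j]
            ax.hist2d(table[fx], table[fy], bins=bins, cmap="Greys")
            ax.set_xticks([]); ax.set_yticks([])
            if i == n - 1: ax.set_xlabel(fx)
            if j == 0: ax.set_ylabel(fy)
    return fig

# Synthetic customer table with one deliberate dependency (age -> income).
rng = np.random.default_rng(1)
table = {"age": rng.normal(45, 12, 5000)}
table["income"] = table["age"] * 800 + rng.normal(0, 8000, 5000)
table["visits"] = rng.poisson(3, 5000).astype(float)
density_plot_matrix(table, ["age", "income", "visits"])
plt.show()
```

Dependencies between fields, such as the age-income correlation planted above, show up as visible structure in the corresponding off-diagonal cells.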
Abstract:
The satellite ERS-1 was launched in July 1991 in a period of high solar activity. Sparse laser tracking and the failure of the experimental microwave system (PRARE) compounded the orbital errors which resulted from mismodelling of atmospheric density and hence surface forces. Three attempts to refine the coarse laser orbits of ERS-1, made prior to the availability of the full altimetric dataset, are presented here. The results of the first attempt indicate that by geometrically modelling the satellite shape some improvement in orbital precision may be made for any satellite, especially one for which no area tables already exist. The second and third refinement attempts are based on the introduction of data from a second satellite; in these examples SPOT-2 and TOPEX/Poseidon are employed. With SPOT-2 the method makes use of the orbital similarities to produce along-track corrections for the more fully tracked SPOT-2. Transferring these corrections to ERS-1 produces improvements in the precise orbits thus determined. With TOPEX/Poseidon the greater altitude results in a more precise orbit (gravity field and atmospheric errors are of less importance). Thus, by computing height differences at crossover points of the TOPEX/Poseidon and ERS-1 ground tracks, the poorer orbit of ERS-1 may be improved by the addition of derived radial corrections. In the light of all three positive results, several potential modifications are suggested and some further avenues of investigation indicated.
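A loose illustration of the TOPEX/Poseidon step, with invented numbers: height differences at ground-track crossovers sample the ERS-1 radial error (treating the higher, better-modelled TOPEX/Poseidon orbit as truth), and interpolating those samples along the arc yields a derived radial correction:

```python
import numpy as np

# Invented example: ERS-1 arc times (s) of five ERS-1/TOPEX crossovers and the
# sea-surface height differences dh = h_ERS1 - h_TOPEX (m) observed there.
xover_t = np.array([300.0, 1500.0, 2700.0, 3900.0, 5100.0])
xover_dh = np.array([0.42, 0.31, -0.05, -0.38, -0.44])

# Interpolate the sampled radial error along the whole arc and subtract it
# from the ERS-1 heights as the derived correction.
arc_t = np.arange(0.0, 5400.0, 60.0)
radial_correction = np.interp(arc_t, xover_t, xover_dh)
h_ers1 = np.zeros(arc_t.size)                 # placeholder height profile
h_corrected = h_ers1 - radial_correction
print(radial_correction[:5])
```

A real reduction would fit a smooth error model to many crossovers rather than linearly interpolating five, but the geometry (crossover differences as point samples of the radial error) is the same.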
Abstract:
Measurements of the sea surface obtained by satellite-borne radar altimetry are irregularly spaced and contaminated with various modelling and correction errors. The largest source of uncertainty for low Earth orbiting satellites such as ERS-1 and Geosat may be attributed to orbital modelling errors. The empirical correction of such errors is investigated by examination of single and dual satellite crossovers, with a view to identifying the extent of any signal aliasing: either by removal of long wavelength ocean signals or by introduction of additional error signals. From these studies, it was concluded that sinusoidal approximation of the dominant one cycle per revolution orbit error over arc lengths of 11,500 km did not remove a significant mesoscale ocean signal. The use of TOPEX/Poseidon dual crossovers with ERS-1 was shown to substantially improve the radial accuracy of ERS-1, except for some absorption of small TOPEX/Poseidon errors. The extraction of marine geoid information is of great interest to the oceanographic community and was the subject of the second half of this thesis. Firstly, through the determination of regional mean sea surfaces using Geosat data, it was demonstrated that a dataset with 70 cm orbit error contamination could produce a marine geoid map which compares to better than 12 cm with an accurate regional high-resolution gravimetric geoid. This study was then developed into Optimal Fourier Transform Interpolation, a technique capable of analysing complete altimeter datasets for the determination of consistent global high-resolution geoid maps. This method exploits the regular nature of ascending and descending data subsets, thus making possible the application of fast Fourier transform algorithms. Quantitative assessment of this method was limited by the lack of global ground-truth gravity data, but qualitative results indicate good signal recovery from a single 35-day cycle.
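A minimal sketch of the sinusoidal approximation mentioned above: fit a bias plus a one-cycle-per-revolution sinusoid to crossover residuals by linear least squares (the residuals, noise level and approximate period below are synthetic stand-ins):

```python
import numpy as np

T = 6030.0                     # approximate ERS-1 orbital period, s
w = 2.0 * np.pi / T            # one cycle per revolution, rad/s

# Synthetic crossover residuals along one arc: a 1-cpr error plus 5 cm noise.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 1700.0, 60))     # times along an ~11,500 km arc
resid = (0.10 + 0.70 * np.cos(w * t) - 0.25 * np.sin(w * t)
         + rng.normal(0.0, 0.05, t.size))

# Model resid ~ a + b*cos(wt) + c*sin(wt) and solve by linear least squares.
A = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
(a, b, c), *_ = np.linalg.lstsq(A, resid, rcond=None)
print(f"bias {a:.2f} m, 1-cpr amplitude {np.hypot(b, c):.2f} m")   # ~0.10, ~0.74
```

Subtracting the fitted sinusoid removes the dominant orbit error; the aliasing question the thesis investigates is whether this subtraction also absorbs genuine long-wavelength ocean signal.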
Abstract:
The concept of a task is fundamental to the discipline of ergonomics. Approaches to the analysis of tasks began in the early 1900s. These approaches have evolved and developed to the present day, when there is a vast array of methods available. Some of these methods are specific to particular contexts or applications; others are more general. However, whilst many of these analyses allow tasks to be examined in detail, they do not act as tools to aid the design process or the designer. The present thesis examines the use of task analysis in a process control context, and in particular the use of task analysis to specify operator information and display requirements in such systems. The first part of the thesis examines the theoretical aspect of task analysis and presents a review of the methods, issues and concepts relating to task analysis. A review of over 80 methods of task analysis was carried out to form a basis for the development of a task analysis method to specify operator information requirements in industrial process control contexts. Of the methods reviewed, Hierarchical Task Analysis was selected to provide such a basis and was developed to meet the criteria outlined for such a method of task analysis. The second section outlines the practical application and evolution of the developed task analysis method. Four case studies were used to examine the method in an empirical context. The case studies represent a range of plant contexts and types: both complex and simple, batch and continuous, and high-risk and low-risk processes. The theoretical and empirical issues are drawn together, and a method is developed to provide a task analysis technique to specify operator information requirements and to provide the first stages of a tool to aid the design of VDU displays for process control.
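Hierarchical Task Analysis represents work as a goal hierarchy with a plan at each level; the sketch below shows such a structure extended with per-task information items, the kind of output a display-design tool would collect (the fields and the example tasks are hypothetical, not drawn from the case studies):

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """One node of a Hierarchical Task Analysis: a goal, the plan sequencing
    its subtasks, and the information the operator needs to perform it."""
    goal: str
    plan: str = ""
    info_required: list[str] = field(default_factory=list)
    subtasks: list["Task"] = field(default_factory=list)

    def information_requirements(self):
        """Collect every information item in the hierarchy, for display design."""
        items = list(self.info_required)
        for sub in self.subtasks:
            items.extend(sub.information_requirements())
        return items

hta = Task(
    goal="0. Maintain reactor temperature within limits",
    plan="Do 1 continuously; on deviation do 2, then 3",
    subtasks=[
        Task("1. Monitor temperature trend", info_required=["reactor temp", "setpoint"]),
        Task("2. Diagnose deviation", info_required=["coolant flow", "alarm history"]),
        Task("3. Adjust coolant valve", info_required=["valve position"]),
    ],
)
print(hta.information_requirements())
```

Walking the hierarchy in this way turns the analysis into a specification: the collected items become the candidates for inclusion on the operator's VDU displays.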
Abstract:
Numerical techniques have been finding increasing use in all aspects of fracture mechanics, and often provide the only means for analysing fracture problems. The work presented here is concerned with the application of the finite element method to cracked structures. The present work was directed towards the establishment of a comprehensive two-dimensional finite element, linear elastic, fracture analysis package. Significant progress has been made to this end, and features which can now be studied include multi-crack tip mixed-mode problems involving partial crack closure. The crack tip core element was refined, and special local crack tip elements were employed to reduce the element density in the neighbourhood of the core region. The work builds upon experience gained by previous research workers and, as part of the general development, the program was modified to incorporate the eight-node isoparametric quadrilateral element. Also, a more flexible solving routine was developed, which provided a very compact method of solving large sets of simultaneous equations stored in a segmented form. To complement the finite element analysis programs, an automatic mesh generation program has been developed, which enables complex problems, involving fine element detail, to be investigated with a minimum of input data. The scheme has proven to be versatile and reasonably easy to implement. Numerous examples are given to demonstrate the accuracy and flexibility of the finite element technique.
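The assemble-and-solve pattern at the heart of such a package can be shown with a deliberately tiny 1-D example (the thesis's 2-D crack-tip elements are far richer; this sketch only illustrates element assembly and the symmetric banded system a segmented solver exploits):

```python
import numpy as np

# Minimal FE illustration: an axial bar of n two-node elements, fixed at the
# left end and carrying a unit tip load.
n, EA, L = 8, 1.0, 1.0
h = L / n
ke = (EA / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])   # element stiffness

K = np.zeros((n + 1, n + 1))
for e in range(n):
    K[e:e + 2, e:e + 2] += ke        # scatter each element into the global matrix

f = np.zeros(n + 1)
f[-1] = 1.0                          # unit load at the free end
Kr, fr = K[1:, 1:], f[1:]            # impose the fixed-end boundary condition

# Kr is symmetric, positive definite and banded (bandwidth 1 here); a production
# code would factorise it in banded/segmented storage, but a dense solve suffices.
u = np.linalg.solve(Kr, fr)
print(u[-1], "vs exact FL/EA =", 1.0 * L / EA)
```

For this load case the linear elements reproduce the exact tip displacement FL/EA, which makes the example a convenient self-check of the assembly.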
Abstract:
Much research is currently centred on the detection of damage in structures using vibrational data. The work presented here examined several areas of interest in support of a practical technique for identifying and locating damage within bridge structures using apparent changes in their vibrational response to known excitation. The proposed goals of such a technique included the need for the measurement system to be operated on site by a minimum number of staff, and that the procedure should be as non-invasive to the bridge traffic-flow as possible. Initially the research investigated changes in the vibrational bending characteristics of two series of large-scale model bridge-beams in the laboratory; these included ordinary-reinforced and post-tensioned, prestressed designs. Each beam was progressively damaged at predetermined positions and its vibrational response to impact excitation was analysed. For the load-regime utilised, the results suggested that the induced damage manifested itself as a function of the span of a beam rather than of a localised area. A power-law relating apparent damage to the applied loading and prestress levels was then proposed, together with a qualitative vibrational measure of structural damage. In parallel with the laboratory experiments, a series of tests was undertaken at the sites of a number of highway bridges. The bridges selected had differing types of construction and geometric design, including composite-concrete, concrete slab-and-beam, and concrete-slab with supporting steel-troughing constructions, together with regular-rectangular, skewed and heavily-skewed geometries. Initial investigations were made of the feasibility and reliability of various methods of structural excitation, including traffic and impulse methods. It was found that localised impact using a sledge-hammer was ideal for the purposes of this work and that a cartridge `bolt-gun' could be used in some specific cases.
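A minimal sketch of the measurement idea behind such techniques: estimate a natural frequency from the impact response via the FFT, and look for the downward shift that a loss of stiffness produces (the signal, frequencies and damping below are synthetic, not data from the bridge tests):

```python
import numpy as np

fs, dur = 1000.0, 4.0                    # sample rate (Hz) and record length (s)
t = np.arange(0.0, dur, 1.0 / fs)

def impact_response(f_n, zeta=0.02):
    """Synthetic decaying free vibration following a hammer blow."""
    return np.exp(-zeta * 2 * np.pi * f_n * t) * np.sin(2 * np.pi * f_n * t)

def dominant_frequency(signal):
    """Peak frequency (Hz) of the windowed response spectrum."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(signal.size)))
    return np.fft.rfftfreq(signal.size, 1.0 / fs)[spectrum.argmax()]

# An intact-beam mode at 12.0 Hz; damage reduces stiffness, so the measured
# frequency drops (both values invented for illustration).
print(dominant_frequency(impact_response(12.0)))   # ~12.0 Hz
print(dominant_frequency(impact_response(11.3)))   # ~11.3 Hz
```

In practice the thesis found span-wide rather than localised changes in the laboratory beams, which is why its proposed damage measure is framed per span.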