31 results for linearity
Abstract:
Changes in modern structural design have created a demand for products which are light but possess high strength. The objective is a reduction in fuel consumption and in the weight of materials, to satisfy both economic and environmental criteria. Cold roll forming has the potential to fulfil this requirement. The bending process is controlled by the shape of the profile machined on the periphery of the rolls. A CNC lathe can machine complicated profiles to a high standard of precision, but the expertise of a numerical control programmer is required. A computer program was developed during this project, using the expert system concept, to calculate tool paths and consequently to expedite the procurement of the machine control tapes whilst removing the need for a skilled programmer. Codifying human expertise and encapsulating that knowledge in computer memory removes the dependency on highly trained people, whose services can be costly, inconsistent and unreliable. A successful cold roll forming operation, where the product is geometrically correct and free from visual defects, is not easy to attain. The geometry of the sheet after travelling through the rolling mill depends on the residual strains generated by the elastic-plastic deformation. Accurate evaluation of the residual strains can provide the basis for predicting the geometry of the section. A study of geometric and material non-linearity, yield criteria, material hardening and stress-strain relationships was undertaken in this research project. The finite element method was chosen to provide a mathematical model of the bending process and, to ensure efficient manipulation of the large stiffness matrices, the frontal solution was applied. A series of experimental investigations provided data to compare with corresponding values obtained from the theoretical modelling.
A computer simulation, capable of predicting that a design will be satisfactory prior to the manufacture of the rolls, would allow effort to be concentrated into devising an optimum design where costs are minimised.
Abstract:
This thesis presents the results from an investigation into the merits of analysing Magnetoencephalographic (MEG) data in the context of dynamical systems theory. MEG is the study both of methods for measuring the minute magnetic flux variations at the scalp that result from neuro-electric activity in the neocortex, and of the techniques required to process and extract useful information from these measurements. As a result of its unique mode of action - directly measuring neuronal activity via the resulting magnetic field fluctuations - MEG possesses a number of useful qualities which could potentially make it a powerful addition to any brain researcher's arsenal. Unfortunately, MEG research has so far failed to fulfil its early promise, being hindered in its progress by a variety of factors. Conventionally, the analysis of MEG has been dominated by the search for activity in certain spectral bands - the so-called alpha, beta, delta, etc. bands that are commonly referred to in both academic and lay publications. Other efforts have centred upon generating optimal fits of "equivalent current dipoles" that best explain the observed field distribution. Many of these approaches carry the implicit assumption that the dynamics which result in the observed time series are linear. This is despite a variety of reasons to suggest that nonlinearity might be present in MEG recordings. By using methods that allow for nonlinear dynamics, the research described in this thesis avoids these restrictive linearity assumptions. A crucial concept underpinning this project is the belief that MEG recordings are mere observations of the evolution of the true underlying state, which is unobservable and is assumed to reflect some abstract cognitive brain state. Further, we maintain that it is unreasonable to expect these processes to be adequately described in the traditional way: as a linear sum of a large number of frequency generators.
One of the main objectives of this thesis is to demonstrate that much more effective and powerful analysis of MEG can be achieved if one assumes the presence of both linear and nonlinear characteristics from the outset. Our position is that the combined action of a relatively small number of these generators, coupled with external and dynamic noise sources, is more than sufficient to account for the complexity observed in MEG recordings. Another problem that has plagued MEG researchers is the extremely low signal-to-noise ratios that are obtained. As the magnetic flux variations resulting from actual cortical processes can be extremely small, the measuring devices used in MEG are, necessarily, extremely sensitive. The unfortunate side effect of this is that even commonplace phenomena such as the earth's geomagnetic field can easily swamp signals of interest. This problem is commonly addressed by averaging over a large number of recordings. However, this has a number of notable drawbacks. In particular, it is difficult to synchronise high-frequency activity which might be of interest, and often these signals will be cancelled out by the averaging process. Other problems that have been encountered are the high cost and low portability of state-of-the-art multichannel machines. The result is that the use of MEG has, hitherto, been restricted to large institutions able to afford the high costs associated with the procurement and maintenance of these machines. In this project, we seek to address these issues by working almost exclusively with single-channel, unaveraged MEG data. We demonstrate the applicability of a variety of methods originating from the fields of signal processing, dynamical systems, information theory and neural networks to the analysis of MEG data.
It is noteworthy that while modern signal processing tools such as independent component analysis, topographic maps and latent variable modelling have enjoyed extensive success in a variety of research areas from financial time series modelling to the analysis of sun spot activity, their use in MEG analysis has thus far been extremely limited. It is hoped that this work will help to remedy this oversight.
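The averaging drawback described in this abstract, namely that poorly synchronised high-frequency activity is cancelled by trial averaging, can be illustrated with a short simulation. This is a hedged sketch with made-up parameters, not data or code from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000.0                      # sampling rate, Hz (illustrative)
t = np.arange(0, 0.5, 1 / fs)    # one 0.5 s epoch

# A 40 Hz oscillation whose phase jitters from trial to trial,
# plus additive sensor noise. All parameters are assumptions.
n_trials = 200
trials = np.array([
    np.sin(2 * np.pi * 40 * t + rng.uniform(0, 2 * np.pi))
    + 0.5 * rng.standard_normal(t.size)
    for _ in range(n_trials)
])

single_rms = np.sqrt(np.mean(trials[0] ** 2))
avg_rms = np.sqrt(np.mean(trials.mean(axis=0) ** 2))

# With random phase, averaging suppresses the oscillation itself,
# not just the noise: the evoked average tends towards zero.
print(f"single-trial RMS: {single_rms:.3f}")
print(f"averaged RMS:     {avg_rms:.3f}")
```

The averaged RMS collapses to a small fraction of the single-trial RMS, which is why the thesis works with unaveraged data.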
Abstract:
This thesis presents the design, fabrication and testing of novel grating-based Optical Fibre Sensor (OFS) systems interrogated using “off the shelf” interrogation systems, with the eventual development of marketable commercial systems at the forefront of the research. In both the industrial weighing and aerospace industries, there has been a drive to investigate the feasibility of deploying optical fibre sensors where their electrical or mechanical counterparts would traditionally have been used. Already, in the industrial weighing industry, commercial operators are deploying OFS-based Weigh-In-Motion (WIM) systems. Likewise, in the aerospace industry, OFS have been deployed to monitor such parameters as load history, impact detection, structural damage, overload detection, centre of gravity and the determination of blade shape. Based on the intrinsic properties of fibre Bragg gratings (FBGs) and Long Period Fibre Gratings (LPFGs), a number of novel OFS-based systems have been realised. Experimental work has shown that, in the case of static industrial weighing, FBGs can be integrated with current commercial products and used to detect applied loads. The work has also shown that embedding FBGs in e-glass to form a sensing patch allows such patches to be bonded to rail track, forming the basis of an FBG-based WIM system. The results obtained have been sufficiently encouraging to the industrial partner that this work will be progressed beyond the scope of the work presented in this thesis. Likewise, and to the best of the author’s knowledge, a novel Bragg grating based system for aircraft fuel parameter sensing has been presented.
FBG-based pressure sensors have been shown to demonstrate good sensitivity, linearity and repeatability, whilst LPFG-based systems have demonstrated far greater sensitivity than FBGs, as well as the advantage of potentially being able to detect causes of fuel adulteration through their sensitivity to refractive index (RI). In the case of the LPFG-based system, considerable work remains to be done on mechanical strengthening to improve its survivability in a live aircraft fuel tank environment. The FBG system has already been developed into an aerospace-compliant prototype and is due to be tested at the fuel testing facility at Airbus, Filton, UK. It is envisaged by the author that continued research in both application areas will lead to the eventual development of marketable commercial products.
Abstract:
Nonlinearity management is explored as a complete tool for obtaining maximum transmission reach in a WDM fiber transmission system, making it possible to optimize multiple system parameters, including the optimal dispersion pre-compensation, with fast simulations based on the continuous-wave approximation.
Abstract:
An optical liquid-level sensor (LLS) based on a long-period fiber grating (LPG) interferometer is proposed and experimentally demonstrated. Two identical 3-dB LPGs are fabricated to form an in-fiber Mach-Zehnder interferometer, and the fiber portion between the two LPGs is exposed to the liquid as the sensing element. The sensitivity and measurement range of sensors employing different orders of cladding modes are investigated both theoretically and experimentally. The experimental results show good linearity and a large measurement range. One of the significant advantages of such a sensing structure is that the measurable level is not limited to the length of the LPG itself. Also, the measurement range and sensitivity of the proposed LLS can be readily tailored for particular applications.
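As background to the sensing principle, an LPG-pair Mach-Zehnder interferometer obeys the standard two-mode interference relation (generic textbook form; the symbols below are assumptions, not taken from the paper):

```latex
% Accumulated phase difference between the core mode and the coupled
% cladding mode over the inter-grating length L:
\[
  \phi(\lambda) = \frac{2\pi}{\lambda}
  \left( n_{\mathrm{co}}^{\mathrm{eff}} - n_{\mathrm{cl}}^{\mathrm{eff}} \right) L
\]
% Immersing a length l of the inter-grating fibre perturbs the
% cladding-mode index only over that portion, so the fringe shift
% scales with the liquid level:
\[
  \Delta\phi = \frac{2\pi}{\lambda}\, \Delta n_{\mathrm{cl}}^{\mathrm{eff}}\, l
\]
```

This linear dependence on the immersed length l is consistent with the good linearity reported, and shows why the measurable level is set by the inter-grating separation rather than by the LPG length.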
Abstract:
We review the recent progress of information theory in optical communications, and describe current experimental results and associated advances in the various individual technologies which increase the information capacity. We confirm the widely held belief that the reported capacities are approaching the fundamental limits imposed by signal-to-noise ratio and the distributed non-linearity of conventional optical fibres, resulting in a reduction in the growth rate of communication capacity. We also discuss promising techniques for increasing and/or approaching the information capacity limit.
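The signal-to-noise-ratio limit referred to here is the linear Shannon bound. A minimal sketch of that calculation follows; the channel numbers are illustrative assumptions, not figures from the review:

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_db: float) -> float:
    """Linear AWGN Shannon limit, C = B * log2(1 + SNR).

    In optical fibre the achievable rate additionally rolls off at high
    launch power because of Kerr non-linearity; this function gives only
    the linear bound that nonlinear capacities are compared against.
    """
    snr = 10 ** (snr_db / 10)          # convert dB to a linear ratio
    return bandwidth_hz * math.log2(1 + snr)

# Illustrative numbers: a 50 GHz channel at 20 dB SNR.
c = shannon_capacity_bps(50e9, 20.0)
print(f"{c / 1e9:.1f} Gbit/s")  # prints "332.9 Gbit/s"
```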
Abstract:
Gastric absorption of feruloylquinic acid and di-O-caffeoylquinic acid analogs has never been investigated despite their potential contribution to the proposed beneficial health effects leading to reduced risk of type 2 diabetes. Using a cultured gastric epithelial model, with an acidic apical pH, the relative permeability coefficients (P(app)) and metabolic fate of a series of chlorogenic acids (CGAs) were investigated. Mechanistic studies were performed in the apical to basal direction and demonstrated differential rates of absorption for different CGA subgroups. For the first time, we show intact absorption of feruloylquinic acids and caffeoylquinic acid lactones across the gastric epithelium (P(app) ~ 0.2 cm/s). Transport seemed to be mainly by passive diffusion, because good linearity was observed over the incubation period and test concentrations, and we speculate that a potential carrier-mediated component may be involved in uptake of certain 4-acyl CGA isomers. In contrast, absorption of intact di-O-caffeoylquinic acids was rapid (P(app) ~ 2-10 cm/s) but nonlinear with respect to time and concentration dependence, which was potentially limited by interaction with an efflux transporter and/or pH gradient dependence. For the first time, methylation is shown in gastric mucosa. Furthermore, isoferulic acid, dimethoxycinnamic acid, and ferulic acid were identified as novel gastric metabolites of CGA biotransformation. We propose that the stomach is the first location for the release of hydroxycinnamic acids, which could explain their early detection after coffee consumption.
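The apparent permeability coefficient P(app) quoted above is conventionally computed from the steady-state transport rate. A minimal sketch under that standard definition follows; the abstract does not state the study's exact calculation, and the numbers below are hypothetical:

```python
def apparent_permeability(dq_dt: float, area_cm2: float, c0: float) -> float:
    """Standard epithelial-transport formula: P_app = (dQ/dt) / (A * C0).

    dq_dt    : steady-state appearance rate on the receiving side (amount/s)
    area_cm2 : surface area of the epithelial monolayer (cm^2)
    c0       : initial donor (apical) concentration (amount/cm^3)
    Returns P_app in cm/s.
    """
    return dq_dt / (area_cm2 * c0)

# Hypothetical example: 2 nmol/s across 1 cm^2 from a 10 nmol/cm^3 donor.
print(apparent_permeability(2.0, 1.0, 10.0))  # → 0.2 (cm/s)
```

The reported linearity of transport with time and concentration is what licenses treating P(app) as a constant for the passively absorbed subgroups; the nonlinear di-O-caffeoylquinic acid data fall outside this simple treatment.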
Abstract:
The development of ultra-long (UL) cavity (hundreds of metres to several kilometres) mode-locked fibre lasers for the generation of high-energy light pulses with relatively low (sub-megahertz) repetition rates has emerged as a new, rapidly advancing area of laser physics. The first demonstration of a high-pulse-energy laser of this type was followed by a number of publications from many research groups on long-cavity Ytterbium and Erbium lasers featuring a variety of configurations with rather different mode-locked operations. The substantial interest in this new approach is stimulated both by non-trivial underlying physics and by the potential of high-pulse-energy laser sources with unique parameters for a range of applications in industry, bio-medicine, metrology and telecommunications. It is well known that pulse generation regimes in mode-locked fibre lasers are determined by the intra-cavity balance between the effects of dispersion and non-linearity, and the processes of energy attenuation and amplification. The highest per-pulse energy has been achieved in normal-dispersion UL fibre lasers mode-locked through nonlinear polarization evolution (NPE); such lasers generate the so-called dissipative optical solitons. The uncompensated net normal dispersion in long-cavity resonators usually leads to a very high chirp and, consequently, to a relatively long duration of the generated pulses. This thesis presents the results of research on Er-doped ultra-long (more than 1 km cavity length) fibre lasers mode-locked via NPE. A self-mode-locked erbium-based 3.5-km-long all-fibre laser with 1.7 µJ pulse energy at a wavelength of 1.55 µm was developed as part of this research. It has resulted in the direct generation of short laser pulses with an ultralow repetition rate of 35.1 kHz. The laser cavity has net normal dispersion and has been fabricated from commercially available telecom fibres and optical-fibre elements.
Its unconventional linear-ring design with compensation for polarization instability ensures high reliability of the self-mode-locking operation, despite the use of non-polarization-maintaining fibres. The single-pulse generation regime in an all-fibre erbium mode-locked laser based on NPE with a record cavity length of 25 km was demonstrated. Mode-locked lasers with such a long cavity have never been studied before. Our result shows the feasibility of stable mode-locked operation even for an ultra-long cavity length. A new design of fibre laser cavity, the “y-configuration”, which offers a range of new functionalities for the optimization and stabilization of mode-locked lasing regimes, was proposed. This novel cavity configuration has been successfully implemented in a long-cavity normal-dispersion self-mode-locked Er-fibre laser. In particular, it features compensation for polarization instability, suppression of ASE, reduction of pulse duration, prevention of in-cavity wave breaking, and stabilization of the lasing wavelength. This laser, along with a specially designed double-pass EDFA, has allowed us to demonstrate an environmentally stable all-fibre laser system able to deliver sub-nanosecond high-energy pulses with a low level of ASE noise.
Abstract:
A fully distributed temperature sensor consisting of a chirped fibre Bragg grating has been demonstrated. By fitting a numerical model of the grating response that includes the temperature change, position and width of localized heating applied to the grating, we achieve measurements of these parameters to within 2.2 K, 149 μm and 306 μm of the applied values, respectively. Assuming that deviation from linearity is accounted for in making measurements, much higher precision is achievable, and the standard deviations for these measurements are 0.6 K, 28.5 μm and 56.0 μm, respectively. © 2004 IOP Publishing Ltd.
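The fitting approach described, recovering the temperature rise and the hot-spot position from the grating response, can be sketched as a least-squares search over a forward model. The model constants, the Gaussian hot-spot shape, and the brute-force minimiser below are all illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def bragg_profile(z, d_temp, pos, width,
                  lam0=1550.0, chirp=0.5, k_t=0.01):
    """Local Bragg wavelength (nm) along a linearly chirped grating.
    z in mm; lam0, chirp (nm/mm) and k_t (nm/K) are made-up constants."""
    hot = np.exp(-0.5 * ((z - pos) / width) ** 2)   # Gaussian hot spot
    return lam0 + chirp * z + k_t * d_temp * hot

z = np.linspace(0, 50, 501)                          # 50 mm grating
measured = bragg_profile(z, d_temp=30.0, pos=20.0, width=2.0)

# Brute-force least squares over the two parameters we recover here
# (width is held fixed for brevity; the paper fits width as well).
temps = np.linspace(0, 60, 121)
positions = np.linspace(0, 50, 201)
best = min(((np.sum((bragg_profile(z, t, p, 2.0) - measured) ** 2), t, p)
            for t in temps for p in positions))
_, t_fit, p_fit = best
print(t_fit, p_fit)  # → 30.0 20.0
```

A real implementation would use a continuous optimiser and fit to the measured reflection spectrum rather than to a noiseless synthetic profile.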
Abstract:
As optical coherence tomography (OCT) becomes widespread, validation and characterization of systems becomes important. Reference standards are required to qualitatively and quantitatively compare the performance of different systems, and would also allow the performance degradation of a system over time to be monitored. In this report, the properties of femtosecond-inscribed structures from three different systems are analyzed for their suitability as OCT characterization artefacts (phantoms). The parameter test samples are directly inscribed inside transparent materials. The structures are characterized using an optical microscope and a swept-source OCT. The high reproducibility of the inscribed structures shows high potential for producing multi-modality OCT calibration and characterization phantoms, such that a single artefact can be used to characterize multiple performance parameters such as resolution, linearity, distortion, and imaging depth. © 2012 SPIE.
Abstract:
The Bragg wavelength of a PMMA based fiber grating is determined by the effective core index and the grating pitch, which, in temperature sensing, depend on the thermo-optic and thermal expansion coefficients of PMMA. These two coefficients are a function of the surrounding temperature and humidity. Amorphous polymers, including PMMA, exhibit a certain degree of anisotropic thermal expansion, the nature of which mainly depends on the polymer's processing history. The expansion coefficient is believed to be lower in the direction of molecular orientation than in the direction perpendicular to the draw direction. Such anisotropic behavior can be expected in drawn PMMA based optical fiber, and will lead to a reduced thermal expansion coefficient and a larger temperature sensitivity than would be the case were the fiber isotropic. Extensive work has been carried out to identify these factors. The temperature responses of gratings have been measured at different relative humidities. Gratings fabricated on annealed and non-annealed PMMA optical fibers are used to compare the sensitivity performance, as annealing is considered able to mitigate the anisotropic effect in PMMA optical fiber. Furthermore, an experiment has been designed to eliminate the thermal expansion contribution to the grating wavelength change, leading to increased temperature sensitivity and improved response linearity. © 2014 Copyright SPIE.
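The dependence described in the opening sentences follows the standard FBG relations (textbook form, not reproduced from the paper):

```latex
% Bragg condition and its normalised temperature derivative:
\[
  \lambda_B = 2\, n_{\mathrm{eff}}\, \Lambda
\]
\[
  \frac{1}{\lambda_B}\frac{\mathrm{d}\lambda_B}{\mathrm{d}T}
  = \frac{1}{n_{\mathrm{eff}}}\frac{\mathrm{d}n_{\mathrm{eff}}}{\mathrm{d}T}
  + \frac{1}{\Lambda}\frac{\mathrm{d}\Lambda}{\mathrm{d}T}
  = \xi + \alpha
\]
```

For PMMA the thermo-optic coefficient ξ is negative while the thermal expansion coefficient α is positive, so the two terms partly cancel; a reduced α along the draw direction of an anisotropic fiber therefore leaves a larger net (negative) temperature sensitivity, which is the effect the abstract describes.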
Abstract:
This thesis describes advances in the characterisation, calibration and data processing of optical coherence tomography (OCT) systems. Femtosecond (fs) laser inscription was used to produce OCT-phantoms. Transparent materials are generally inert to infra-red radiation, but with fs lasers material modification occurs via non-linear processes when the highly focused light source interacts with the material. This modification is confined to the focal volume and is highly reproducible. In order to select the best inscription parameters, combinations of different inscription parameters were tested, using three fs laser systems with different operating properties, on a variety of materials. This facilitated an understanding of the key characteristics of the produced structures, with the aim of producing viable OCT-phantoms. Finally, OCT-phantoms were successfully designed and fabricated in fused silica. The use of these phantoms to characterise many properties (resolution, distortion, sensitivity decay, scan linearity) of an OCT system was demonstrated. Quantitative methods were developed to support the characterisation of an OCT system collecting images from phantoms and also to improve the quality of the OCT images. Characterisation methods include the measurement of the spatially variant resolution (point spread function (PSF) and modulation transfer function (MTF)), sensitivity and distortion. Processing of OCT data is computationally intensive. Standard central processing unit (CPU) based processing might take several minutes to a few hours to process acquired data, making data processing a significant bottleneck. An alternative is to use expensive hardware-based processing such as field programmable gate arrays (FPGAs). However, graphics processing unit (GPU) based data processing methods have recently been developed to minimize this processing and rendering time.
These processing techniques include standard processing methods, comprising a set of algorithms that process the raw interference data obtained by the detector and generate A-scans. The work presented here describes accelerated data processing and post-processing techniques for OCT systems. The GPU-based processing developed during the PhD was later implemented in a custom-built Fourier domain optical coherence tomography (FD-OCT) system. This system currently processes and renders data in real time, with throughput limited by the camera capture rate. OCT-phantoms have been heavily used for the qualitative characterization and adjustment/fine-tuning of the operating conditions of the OCT system. Currently, investigations are under way to characterize OCT systems using our phantoms. The work presented in this thesis demonstrates several novel techniques for fabricating OCT-phantoms and for accelerating OCT data processing using GPUs. In the process of developing the phantoms and quantitative methods, a thorough understanding and practical knowledge of OCT and fs laser processing systems was developed. This understanding has led to several pieces of research that are not only relevant to OCT but have broader importance. For example, an extensive understanding of the properties of fs-inscribed structures will be useful in other photonic applications such as the fabrication of phase masks, waveguides and microfluidic channels. Acceleration of data processing with GPUs is also useful in other fields.
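The "standard processing" chain mentioned in this abstract (raw interference spectra to A-scans) can be sketched in a few lines. This is a generic FD-OCT outline over synthetic data, not the thesis' GPU implementation; real pipelines also resample to linear wavenumber and compensate dispersion, and the GPU version would map the same steps onto device kernels:

```python
import numpy as np

def process_a_scan(spectrum: np.ndarray, background: np.ndarray) -> np.ndarray:
    fringe = spectrum - background            # remove DC / reference term
    fringe = fringe * np.hanning(fringe.size) # suppress FFT side lobes
    return np.abs(np.fft.ifft(fringe))        # depth profile (A-scan)

# Synthetic interferogram: a single reflector produces a cosine fringe
# whose frequency encodes its depth (all numbers are made up).
n = 1024
k = np.arange(n)
background = np.full(n, 100.0)
depth_bin = 120
spectrum = background + 10.0 * np.cos(2 * np.pi * depth_bin * k / n)

a_scan = process_a_scan(spectrum, background)
print(int(np.argmax(a_scan[: n // 2])))  # → 120, the reflector's depth bin
```

Because each A-scan is an independent windowed FFT, the chain is embarrassingly parallel across camera lines, which is what makes GPU acceleration effective.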
Abstract:
In this talk we will review some of the key enabling technologies of optical communications and potential future bottlenecks. Single-mode fibre (SMF) has long been the preferred waveguide for long-distance communication, largely due to its low loss, low cost and relative linearity over a wide bandwidth. As capacity demands have grown, SMF has largely been able to keep pace. Several groups have identified the possibility of exhausting the bandwidth provided by SMF [1,2,3]. This so-called “capacity crunch” has potentially vast economic and social consequences and will be discussed in detail. As demand grows, the optical power launched into the fibre can cause nonlinearities that are detrimental to transmission. There has been considerable work on identifying this nonlinear limit [4,5], with strong research interest currently in the topic of nonlinear compensation [6,7]. Embracing and compensating for nonlinear transmission is one potential solution that may extend the lifetime of the current waveguide technology. However, at sufficiently high powers the waveguide will fail due to heat-induced mechanical failure. Moving forward, it becomes necessary to address the waveguide itself, with several promising contenders discussed, including few-mode fibre and multi-core fibre.