31 results for linearity
in Aston University Research Archive
Abstract:
Blurred edges appear sharper in motion than when they are stationary. We (Vision Research 38 (1998) 2108) have previously shown how such distortions in perceived edge blur may be accounted for by a model which assumes that luminance contrast is encoded by a local contrast transducer whose response becomes progressively more compressive as speed increases. If the form of the transducer is fixed (independent of contrast) for a given speed, then a strong prediction of the model is that motion sharpening should increase with increasing contrast. We measured the sharpening of periodic patterns over a large range of contrasts, blur widths and speeds. The results indicate that whilst sharpening increases with speed it is practically invariant with contrast. The contrast invariance of motion sharpening is not explained by an early, static compressive non-linearity alone. However, several alternative explanations are also inconsistent with these results. We show that if a dynamic contrast gain control precedes the static non-linear transducer then motion sharpening, its speed dependence, and its invariance with contrast, can be predicted with reasonable accuracy. © 2003 Elsevier Science Ltd. All rights reserved.
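A minimal numerical sketch of the core idea (not the authors' implementation): a sign-preserving power-law transducer whose exponent falls with speed steepens the zero crossing of a blurred edge profile, which is the sense in which compression predicts sharpening. The `tanh` edge profile and the exponent values are illustrative assumptions.

```python
import numpy as np

def transducer(c, p):
    """Compressive local contrast transducer: sign-preserving power law (illustrative)."""
    return np.sign(c) * np.abs(c) ** p

# Blurred edge: a smooth ramp from -1 to +1 standing in for a Gaussian-blurred step.
x = np.linspace(-3, 3, 601)
edge = np.tanh(x)

slow = transducer(edge, p=0.9)   # mild compression (assumed low-speed exponent)
fast = transducer(edge, p=0.5)   # strong compression (assumed high-speed exponent)

def max_slope(y):
    # Steepest gradient of the profile; a steeper zero crossing reads as a sharper edge.
    return np.max(np.abs(np.gradient(y, x)))

print(max_slope(edge), max_slope(slow), max_slope(fast))
```

Stronger compression produces a steeper transition, so under a fixed transducer the predicted sharpening would also grow with contrast, which is the prediction the paper tests and rejects.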
Abstract:
This thesis describes an experimental and analytic study of the effects of magnetic non-linearity and finite length on the loss and field distribution in solid iron due to a travelling mmf wave. In the first half of the thesis, a two-dimensional solution is developed which accounts for the effects of both magnetic non-linearity and eddy-current reaction; this solution is extended, in the second half, to a three-dimensional model. In the two-dimensional solution, new equations for loss and flux/pole are given; these equations contain the primary excitation, the machine parameters and factors describing the shape of the normal B-H curve. The solution applies to machines of any air-gap length. The conditions for maximum loss are defined, and generalised torque/frequency curves are obtained. A relationship between the peripheral component of magnetic field on the surface of the iron and the primary excitation is given. The effects of magnetic non-linearity and finite length are combined analytically by introducing an equivalent constant permeability into a linear three-dimensional analysis. The equivalent constant permeability is defined from the non-linear solution for the two-dimensional magnetic field at the axial centre of the machine to avoid iterative solutions. In the linear three-dimensional analysis, the primary excitation in the passive end-regions of the machine is set equal to zero and the secondary end faces are developed onto the air-gap surface. The analyses, and the assumptions on which they are based, were verified on an experimental machine which consists of a three-phase rotor and alternative solid iron stators, one with copper end rings, and one without copper end rings; the main dimensions of the two stators are identical. Measurements of torque, flux/pole, surface current density and radial power flow were obtained for both stators over a range of frequencies and excitations.
Comparison of the measurements on the two stators enabled the individual effects of finite length and saturation to be identified, and the definition of constant equivalent permeability to be verified. The penetration of the peripheral flux into the stator with copper end rings was measured and compared with theoretical penetration curves. Agreement between measured and theoretical results was generally good.
Abstract:
Since 1996 direct femtosecond inscription in transparent dielectrics has become the subject of intensive research. This enabling technology significantly expands the technological boundaries for direct fabrication of 3D structures in a wide variety of materials. It allows modification of non-photosensitive materials, which opens the door to numerous practical applications. In this work we explored the direct femtosecond inscription of waveguides and demonstrated at least one order of magnitude enhancement in the most critical parameter - the induced contrast of the refractive index in a standard borosilicate optical glass. A record high induced refractive index contrast of 2.5×10⁻² is demonstrated. The waveguides fabricated possess some of the lowest losses reported, approaching the level of Fresnel reflection losses at the glass-air interface. High refractive index contrast allows the fabrication of curvilinear waveguides with low bend losses. We also demonstrated the optimisation of the inscription regimes in BK7 glass over a broad range of experimental parameters and observed a counter-intuitive increase of the induced refractive index contrast with increasing translation speed of the sample. Examples of inscription in a number of transparent dielectric hosts (both glasses and crystals) using a high repetition rate fs laser system are also presented. Sub-wavelength scale periodic inscription inside any material often demands supercritical propagation regimes, in which the pulse peak power exceeds the critical power for self-focusing, sometimes several times over. For the sub-critical regime, in which the pulse peak power is less than the critical power for self-focusing, we derive analytic expressions for Gaussian beam focusing in the presence of Kerr non-linearity, as well as for a number of other beam shapes commonly used in experiments, including astigmatic and ring-shaped ones.
In the part devoted to the fabrication of periodic structures, we report on the recent development of our point-by-point method, demonstrating the shortest periodic perturbation created in the bulk of a pure fused silica sample, by using the third harmonic (λ = 267 nm) of the fundamental laser wavelength (λ = 800 nm) and a 1 kHz femtosecond laser system. To overcome the fundamental limitations of the point-by-point method we suggested and experimentally demonstrated the micro-holographic inscription method, which is based on the combination of a diffractive optical element and standard micro-objectives. Sub-500 nm periodic structures with a much higher aspect ratio were demonstrated. From the applications point of view, we demonstrate examples of photonic devices made by the direct femtosecond fabrication method, including various vectorial bend-sensors fabricated in standard optical fibres, as well as highly birefringent long-period gratings made by the direct modulation method. To address the intrinsic limitations of femtosecond inscription at very shallow depths we suggested the hybrid mask-less lithography method. The method is based on precision ablation of a thin metal layer deposited on the surface of the sample to create a mask. After that, an ion-exchange process in the melt of Ag-containing salts allows quick and low-cost fabrication of shallow waveguides and other components of integrated optics. This approach covers the gap in direct fs inscription of shallow waveguides. Perspectives and future developments of direct femtosecond micro-fabrication are also discussed.
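As a rough illustration of the sub-/supercritical distinction the abstract draws, the critical power for self-focusing of a Gaussian beam is commonly estimated as P_cr ≈ 3.77 λ² / (8π n₀ n₂). The material constants below are textbook values for fused silica at 800 nm, assumed for illustration and not taken from the thesis:

```python
import math

def critical_power(wavelength, n0, n2):
    """Common estimate of the Gaussian-beam critical power for self-focusing (watts)."""
    return 3.77 * wavelength**2 / (8 * math.pi * n0 * n2)

# Assumed textbook constants for fused silica at 800 nm (not the thesis's numbers):
P_cr = critical_power(wavelength=800e-9, n0=1.45, n2=2.7e-20)
print(f"P_cr ≈ {P_cr / 1e6:.1f} MW")
```

Pulses with peak power several times this value are in the supercritical regime the abstract associates with sub-wavelength periodic inscription.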
Abstract:
Principal component analysis (PCA) is one of the most popular techniques for processing, compressing and visualising data, although its effectiveness is limited by its global linearity. While nonlinear variants of PCA have been proposed, an alternative paradigm is to capture data complexity by a combination of local linear PCA projections. However, conventional PCA does not correspond to a probability density, and so there is no unique way to combine PCA models. Previous attempts to formulate mixture models for PCA have therefore to some extent been ad hoc. In this paper, PCA is formulated within a maximum-likelihood framework, based on a specific form of Gaussian latent variable model. This leads to a well-defined mixture model for probabilistic principal component analysers, whose parameters can be determined using an EM algorithm. We discuss the advantages of this model in the context of clustering, density modelling and local dimensionality reduction, and we demonstrate its application to image compression and handwritten digit recognition.
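The paper's mixture of probabilistic principal component analysers is fitted by EM; as a sketch of the underlying latent variable model, the single-analyser maximum-likelihood solution has a closed form (loadings from the top eigenvectors of the sample covariance, noise variance from the discarded eigenvalues). The synthetic data below are an assumption for demonstration:

```python
import numpy as np

def ppca_ml(X, q):
    """Closed-form maximum-likelihood PPCA fit (single analyser, not the mixture).

    Returns the d x q loading matrix W and the isotropic noise variance sigma2."""
    S = np.cov(X, rowvar=False, bias=True)            # sample covariance
    eigval, eigvec = np.linalg.eigh(S)                # ascending eigenvalues
    eigval, eigvec = eigval[::-1], eigvec[:, ::-1]    # reorder to descending
    sigma2 = eigval[q:].mean()                        # mean of discarded eigenvalues
    W = eigvec[:, :q] * np.sqrt(np.maximum(eigval[:q] - sigma2, 0.0))
    return W, sigma2

rng = np.random.default_rng(0)
# Synthetic data: 2 latent dimensions embedded in 5 observed dimensions plus noise.
Z = rng.normal(size=(2000, 2))
A = rng.normal(size=(2, 5))
X = Z @ A + 0.1 * rng.normal(size=(2000, 5))

W, sigma2 = ppca_ml(X, q=2)
# The model covariance W W^T + sigma2 I should approximate the sample covariance.
C = W @ W.T + sigma2 * np.eye(5)
```

Because this defines a proper Gaussian density per analyser, responsibilities and mixture weights follow directly, which is what makes the mixture formulation well defined.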
Abstract:
Are the learning procedures of genetic algorithms (GAs) able to generate optimal architectures for artificial neural networks (ANNs) in high frequency data? In this experimental study, GAs are used to identify the best architecture for ANNs. Additional learning is undertaken by the ANNs to forecast daily excess stock returns. No ANN architectures were able to outperform a random walk, despite the finding of non-linearity in the excess returns. This failure is attributed to the absence of suitable ANN structures and further implies that researchers need to be cautious when making inferences from ANN results that use high frequency data.
Abstract:
This empirical study examines the extent of non-linearity in a multivariate model of monthly financial series. To capture the conditional heteroscedasticity in the series, both the GARCH(1,1) and GARCH(1,1)-in-mean models are employed. The conditional errors are assumed to follow the normal and Student-t distributions. The non-linearity in the residuals of a standard OLS regression is also assessed. It is found that the OLS residuals as well as the conditional errors of the GARCH models exhibit strong non-linearity. Under the Student-t density, the extent of non-linearity in the GARCH conditional errors was generally similar to that of the standard OLS. The GARCH-in-mean regression generated the worst out-of-sample forecasts.
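For reference, the GARCH(1,1) model the study employs specifies the conditional variance by the recursion h_t = ω + α·ε²_{t-1} + β·h_{t-1}. A minimal simulation sketch (parameter values are assumptions for illustration, not the study's estimates):

```python
import numpy as np

# GARCH(1,1) conditional variance: h_t = omega + alpha * eps_{t-1}^2 + beta * h_{t-1}
omega, alpha, beta = 0.1, 0.1, 0.8    # assumed illustrative parameters
n = 5000
rng = np.random.default_rng(1)
eps = np.zeros(n)
h = np.zeros(n)
h[0] = omega / (1.0 - alpha - beta)   # start at the unconditional variance
eps[0] = np.sqrt(h[0]) * rng.standard_normal()
for t in range(1, n):
    h[t] = omega + alpha * eps[t - 1] ** 2 + beta * h[t - 1]
    eps[t] = np.sqrt(h[t]) * rng.standard_normal()

# The long-run sample variance should approach omega / (1 - alpha - beta) = 1.
print(eps.var())
```

The "-in-mean" variant additionally feeds h_t (or its square root) back into the conditional mean equation; the Student-t case replaces the Gaussian draw with a scaled t variate.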
Abstract:
This thesis is concerned with approximate inference in dynamical systems, from a variational Bayesian perspective. When modelling real world dynamical systems, stochastic differential equations appear as a natural choice, mainly because of their ability to model the noise of the system by adding a variant of some stochastic process to the deterministic dynamics. Hence, inference in such processes has drawn much attention. Here two new extended frameworks are derived and presented that are based on basis function expansions and local polynomial approximations of a recently proposed variational Bayesian algorithm. It is shown that the new extensions converge to the original variational algorithm and can be used for state estimation (smoothing). However, the main focus is on estimating the (hyper-) parameters of these systems (i.e. drift parameters and diffusion coefficients). The new methods are numerically validated on a range of different systems which vary in dimensionality and non-linearity. These are the Ornstein-Uhlenbeck process, for which the exact likelihood can be computed analytically, the univariate and highly non-linear, stochastic double well and the multivariate chaotic stochastic Lorenz '63 (3-dimensional model). The algorithms are also applied to the 40 dimensional stochastic Lorenz '96 system. In this investigation these new approaches are compared with a variety of other well known methods such as the ensemble Kalman filter/smoother, a hybrid Monte Carlo sampler, the dual unscented Kalman filter (for jointly estimating the system's states and model parameters) and full weak-constraint 4D-Var. Empirical analysis of their asymptotic behaviour as the observation density or the length of the time window increases is provided.
Abstract:
A self-reference fiber Michelson interferometer measurement system, which employs fiber Bragg gratings (FBGs) as in-fiber reflective mirrors and interleaves two fiber Michelson interferometers that share a common interferometric optical path, is presented. One of the fiber interferometers is used to stabilise the system through an electronic feedback loop that compensates for environmental disturbances, while the other is used to perform the measurement task. Because the influences of environmental disturbances are eliminated by the compensating action of the feedback loop, the system is suitable for on-line precision measurement. By means of the homodyne phase-tracking technique, the displacement measurements achieve very high linearity.
Abstract:
The object of this thesis is to develop a method for calculating the losses developed in steel conductors of circular cross-section and at temperatures below 100 °C, by the direct passage of a sinusoidally alternating current. Three cases are considered. 1. Isolated solid or tubular conductor. 2. Concentric arrangement of tube and solid return conductor. 3. Concentric arrangement of two tubes. These cases find applications in process temperature maintenance of pipelines, resistance heating of bars and design of bus-bars. The problems associated with the non-linearity of steel are examined. Resistance heating of bars and methods of surface heating of pipelines are briefly described. Magnetic-linear solutions based on Maxwell's equations are critically examined and conditions under which various formulae apply investigated. The conditions under which a tube is electrically equivalent to a solid conductor and to a semi-infinite plate are derived. Existing solutions for the calculation of losses in isolated steel conductors of circular cross-section are reviewed, evaluated and compared. Two methods of solution are developed for the three cases considered. The first is based on the magnetic-linear solutions and offers an alternative to the available methods which are not universal. The second solution extends the existing B/H step-function approximation method to small diameter conductors and to tubes in isolation or in a concentric arrangement. A comprehensive experimental investigation is presented for cases 1 and 2 above which confirms the validity of the proposed methods of solution. These are further supported by experimental results reported in the literature. Good agreement is obtained between measured and calculated loss values for surface field strengths beyond the linear part of the d.c. magnetisation characteristic.
It is also shown that there is a difference in the electrical behaviour of a small diameter conductor or thin tube under resistance or induction heating conditions.
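A basic quantity underlying the magnetic-linear loss formulae the thesis reviews is the electromagnetic skin depth δ = √(2ρ / (ω μ₀ μᵣ)), which sets the scale on which current penetrates the conductor. The sketch below uses assumed illustrative constants for mild steel at 50 Hz, not values from the thesis:

```python
import math

def skin_depth(rho, mu_r, freq):
    """Skin depth delta = sqrt(2*rho / (omega * mu0 * mu_r)), in metres."""
    mu0 = 4e-7 * math.pi          # permeability of free space
    omega = 2 * math.pi * freq    # angular frequency
    return math.sqrt(2 * rho / (omega * mu0 * mu_r))

# Assumed illustrative values for mild steel at 50 Hz (not the thesis's data):
delta = skin_depth(rho=1.5e-7, mu_r=500, freq=50.0)
print(f"skin depth ≈ {delta * 1000:.2f} mm")
```

When the wall thickness is several skin depths, a tube behaves like the semi-infinite plate limit the thesis derives; when it is much thinner, the tube and solid-conductor solutions diverge.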
Abstract:
In Information Filtering (IF) a user may be interested in several topics in parallel. But IF systems have been built on representational models derived from Information Retrieval and Text Categorization, which assume independence between terms. The linearity of these models results in user profiles that can only represent one topic of interest. We present a methodology that takes into account term dependencies to construct a single profile representation for multiple topics, in the form of a hierarchical term network. We also introduce a series of non-linear functions for evaluating documents against the profile. Initial experiments produced positive results.
Abstract:
A fully distributed temperature sensor consisting of a chirped fibre Bragg grating has been demonstrated. By fitting a numerical model of the grating response that includes the temperature change, position and width of localized heating applied to the grating, we achieve measurements of these parameters to within 2.2 K, 149 µm and 306 µm of applied values, respectively. Assuming that deviation from linearity is accounted for in making measurements, much higher precision is achievable; the standard deviations for these measurements are 0.6 K, 28.5 µm and 56.0 µm, respectively.
Abstract:
This work is concerned with approximate inference in dynamical systems, from a variational Bayesian perspective. When modelling real world dynamical systems, stochastic differential equations appear as a natural choice, mainly because of their ability to model the noise of the system by adding a variant of some stochastic process to the deterministic dynamics. Hence, inference in such processes has drawn much attention. Here a new extended framework is derived that is based on a local polynomial approximation of a recently proposed variational Bayesian algorithm. The paper begins by showing that the new extension of this variational algorithm can be used for state estimation (smoothing) and converges to the original algorithm. However, the main focus is on estimating the (hyper-) parameters of these systems (i.e. drift parameters and diffusion coefficients). The new approach is validated on a range of different systems which vary in dimensionality and non-linearity. These are the Ornstein–Uhlenbeck process, the exact likelihood of which can be computed analytically, the univariate and highly non-linear, stochastic double well and the multivariate chaotic stochastic Lorenz ’63 (3D model). As a special case the algorithm is also applied to the 40 dimensional stochastic Lorenz ’96 system. In our investigation we compare this new approach with a variety of other well known methods, such as hybrid Monte Carlo, the dual unscented Kalman filter and the full weak-constraint 4D-Var algorithm, and analyse empirically their asymptotic behaviour as the observation density or the length of the time window increases. In particular we show that we are able to estimate parameters in both the drift (deterministic) and the diffusion (stochastic) part of the model evolution equations using our new methods.
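The Ornstein–Uhlenbeck benchmark is exactly the case where the likelihood is available in closed form, because the transition density is Gaussian: X_{t+Δ} | X_t ~ N(μ + (X_t − μ)e^{−θΔ}, σ²(1 − e^{−2θΔ})/(2θ)). A sketch of that exact likelihood, maximised over the drift parameter on simulated data (parameter values and grid are assumptions for illustration):

```python
import numpy as np

def ou_loglik(x, dt, theta, mu, sigma):
    """Exact log-likelihood of a discretely observed OU path via its Gaussian transition."""
    a = np.exp(-theta * dt)
    mean = mu + (x[:-1] - mu) * a                 # conditional mean of each step
    var = sigma**2 * (1 - a**2) / (2 * theta)     # conditional variance of each step
    resid = x[1:] - mean
    return -0.5 * np.sum(np.log(2 * np.pi * var) + resid**2 / var)

# Simulate an OU path exactly using the same transition (illustrative parameters).
rng = np.random.default_rng(2)
theta_true, mu, sigma, dt, n = 1.0, 0.0, 0.5, 0.1, 20000
x = np.zeros(n)
a = np.exp(-theta_true * dt)
sd = sigma * np.sqrt((1 - a**2) / (2 * theta_true))
for t in range(1, n):
    x[t] = mu + (x[t - 1] - mu) * a + sd * rng.standard_normal()

# Grid search for the drift parameter; the maximiser should sit near theta_true.
thetas = np.linspace(0.2, 3.0, 57)
best = thetas[np.argmax([ou_loglik(x, dt, th, mu, sigma) for th in thetas])]
print(best)
```

This closed form is what makes OU a useful ground-truth check for the approximate variational estimates of drift and diffusion parameters.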
Abstract:
This paper presents some forecasting techniques for energy demand and price prediction, one day ahead. These techniques combine wavelet transform (WT) with fixed and adaptive machine learning/time series models (multi-layer perceptron (MLP), radial basis functions, linear regression, or GARCH). To create an adaptive model, we use an extended Kalman filter or particle filter to update the parameters continuously on the test set. The adaptive GARCH model is a new contribution, broadening the applicability of GARCH methods. We empirically compared two approaches of combining the WT with prediction models: multicomponent forecasts and direct forecasts. These techniques are applied to large sets of real data (both stationary and non-stationary) from the UK energy markets, so as to provide comparative results that are statistically stronger than those previously reported. The results showed that the forecasting accuracy is significantly improved by using the WT and adaptive models. The best models on the electricity demand/gas price forecast are the adaptive MLP/GARCH with the multicomponent forecast; their MSEs are 0.02314 and 0.15384 respectively.
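As a sketch of the "update the parameters continuously on the test set" idea, the simplest case is a linear model whose weights are treated as a slowly drifting random-walk state and corrected by a Kalman filter at each observation. This is an illustrative stand-in, not the paper's MLP/GARCH filters, and the noise settings are assumptions:

```python
import numpy as np

def kalman_update(w, P, x, y, q=1e-5, r=1.0):
    """One Kalman step: regression weights modelled as a random-walk state.

    q is the assumed weight-drift variance, r the assumed observation noise variance."""
    P = P + q * np.eye(len(w))      # predict: weights allowed to drift slowly
    s = x @ P @ x + r               # innovation variance
    K = P @ x / s                   # Kalman gain
    w = w + K * (y - x @ w)         # correct weights with the prediction error
    P = P - np.outer(K, x) @ P      # update weight covariance
    return w, P

rng = np.random.default_rng(3)
w_true = np.array([1.0, -2.0, 0.5])   # assumed ground-truth weights
w, P = np.zeros(3), np.eye(3)
for _ in range(2000):
    x = rng.normal(size=3)
    y = x @ w_true + 0.1 * rng.standard_normal()
    w, P = kalman_update(w, P, x, y)
print(w)
```

For non-linear models such as an MLP the same recursion runs through an extended Kalman filter (linearising the model around the current weights) or a particle filter, which is the adaptive scheme the paper describes.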
Abstract:
We demonstrate a novel glucose sensor based on an optical fiber grating with an excessively tilted index fringe structure and its surface modified by glucose oxidase (GOD). Aminopropyltriethoxysilane (APTES) was utilized as the binding site for the subsequent GOD immobilization. Confocal and fluorescence microscopy were used to assess the effectiveness of the fiber surface modification. The resonance wavelength of the sensor exhibited a red-shift after the binding of the APTES and GOD to the fiber surface and also in the glucose detection process. The red-shift of the resonance wavelength showed a good linear response to the glucose concentration, with a sensitivity of 0.298 nm(mg/ml)⁻¹ in the very low concentration range of 0.0-3.0 mg/ml. Compared to the previously reported glucose sensor based on a GOD-immobilized long period grating (LPG), the 81° tilted fiber grating (81°-TFG) based sensor has shown a lower thermal cross-talk effect, better linearity and a higher Q-factor in its sensing response. In addition, its sensitivity to glucose concentration can be further improved by increasing the grating length and/or choosing a higher-order cladding mode for detection. Potentially, the proposed techniques based on the 81°-TFG can be developed into sensitive, label-free and micro-structural sensors for applications in food safety, disease diagnosis, clinical analysis and environmental monitoring.
Abstract:
This study presents a detailed contrastive description of the textual functioning of connectives in English and Arabic. Particular emphasis is placed on the organisational force of connectives and their role in sustaining cohesion. The description is intended as a contribution to a better understanding of the variations in the dominant tendencies for text organisation in each language. The findings are expected to be utilised for pedagogical purposes, particularly in improving EFL teaching of writing at the undergraduate level. The study is based on an empirical investigation of the phenomenon of connectivity and, for optimal efficiency, employs computer-aided procedures, particularly those adopted in corpus linguistics, for investigatory purposes. One important methodological requirement is the establishment of two comparable and statistically adequate corpora, as well as the design of software and the use of existing packages to achieve the basic analysis. Each corpus comprises ca 250,000 words of newspaper material sampled in accordance with a specific set of criteria and assembled in machine readable form prior to the computer-assisted analysis. A suite of programmes has been written in SPITBOL to accomplish a variety of analytical tasks, and in particular to perform a battery of measurements intended to quantify the textual functioning of connectives in each corpus. Concordances and some word lists are produced by using OCP. Results of this research confirm the existence of fundamental differences in text organisation in Arabic in comparison to English. This manifests itself in the way textual operations of grouping and sequencing are performed and in the intensity of the textual role of connectives in imposing linearity and continuity and in maintaining overall stability.
Furthermore, computation of connective functionality and range of operationality has identified fundamental differences in the way favourable choices for text organisation are made and implemented.