979 results for Piecewise linear techniques
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
We compared growth rates of the lemon shark, Negaprion brevirostris, from Bimini, Bahamas and the Marquesas Keys (MK), Florida using data obtained in a multi-year annual census. In Bimini we marked new neonate and juvenile sharks with unique electronic identity tags, and in the MK we likewise tagged neonate and juvenile sharks. Sharks were tagged with tiny, subcutaneous transponders, a type of tagging thought to cause little, if any, disruption to normal growth patterns when compared to conventional external tagging. Within the first 2 years of this project, no age data were recorded for sharks caught for the first time in Bimini. Therefore, we applied and tested two methods of age analysis: (1) a modified 'minimum convex polygon' method and (2) a new age-assigning method, the 'cut-off technique'. The cut-off technique proved to be the more suitable one, enabling us to identify the age of 134 of the 642 sharks of previously unknown age. This maximised the usable growth data included in our analysis. Annual absolute growth rates of juvenile, nursery-bound lemon sharks were almost constant for the two Bimini nurseries and are best described by a simple linear model (growth data were only available for age-0 sharks in the MK). Annual absolute growth for age-0 sharks was much greater in the MK than in either the North Sound (NS) or Shark Land (SL) at Bimini. Growth of SL sharks was significantly faster during the first 2 years of life than that of sharks in the NS population. However, in the MK, only growth in the first year was considered to be reliably estimated due to low recapture rates. Analyses indicated no significant differences in growth rates between males and females for any area.
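The growth comparison above essentially amounts to fitting a straight line of body length against age for each nursery and comparing the slopes. The sketch below illustrates that step in Python; the lengths, ages and values are invented for illustration and are not the study's data.

```python
import numpy as np

# Hypothetical pre-caudal lengths (cm) by age (years) for two nurseries;
# the numbers are invented for illustration, not data from the study.
age = np.array([0, 0, 1, 1, 2, 2, 3, 3], dtype=float)
length_ns = np.array([45, 47, 52, 53, 58, 60, 64, 66], dtype=float)  # North Sound
length_sl = np.array([46, 48, 55, 57, 63, 65, 71, 73], dtype=float)  # Shark Land

# A simple linear growth model, length = intercept + slope * age, per site;
# the slope is the (near-constant) annual absolute growth rate.
slope_ns, intercept_ns = np.polyfit(age, length_ns, 1)
slope_sl, intercept_sl = np.polyfit(age, length_sl, 1)

print(f"NS growth rate: {slope_ns:.1f} cm/yr, SL growth rate: {slope_sl:.1f} cm/yr")
```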
Abstract:
This thesis is concerned with approximate inference in dynamical systems, from a variational Bayesian perspective. When modelling real-world dynamical systems, stochastic differential equations appear as a natural choice, mainly because of their ability to model the noise of the system by adding a stochastic process to the deterministic dynamics. Hence, inference in such processes has drawn much attention. Here two new extended frameworks are derived and presented that are based on basis function expansions and local polynomial approximations of a recently proposed variational Bayesian algorithm. It is shown that the new extensions converge to the original variational algorithm and can be used for state estimation (smoothing). However, the main focus is on estimating the (hyper-) parameters of these systems (i.e. drift parameters and diffusion coefficients). The new methods are numerically validated on a range of systems which vary in dimensionality and non-linearity: the Ornstein-Uhlenbeck process, for which the exact likelihood can be computed analytically; the univariate, highly non-linear stochastic double-well system; and the multivariate, chaotic stochastic Lorenz '63 system (a three-dimensional model). The algorithms are also applied to the 40-dimensional stochastic Lorenz '96 system. In this investigation these new approaches are compared with a variety of other well-known methods, such as the ensemble Kalman filter/smoother, a hybrid Monte Carlo sampler, the dual unscented Kalman filter (for jointly estimating the system states and model parameters) and full weak-constraint 4D-Var. An empirical analysis of their asymptotic behaviour as the observation density or the length of the time window increases is also provided.
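As a point of reference for the benchmark mentioned above, the Ornstein-Uhlenbeck process is the one system in the list whose likelihood is available in closed form. The sketch below simulates an OU path with Euler-Maruyama and evaluates that exact Gaussian transition likelihood; it is only the analytic benchmark, not the thesis's variational algorithm, and the parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ornstein-Uhlenbeck SDE: dX = -theta * (X - mu) dt + sigma dW (illustrative parameters).
theta, mu, sigma = 2.0, 0.0, 1.0
dt, n_steps = 0.01, 5000

# Euler-Maruyama simulation of one sample path.
x = np.empty(n_steps + 1)
x[0] = 1.0
for i in range(n_steps):
    x[i + 1] = x[i] - theta * (x[i] - mu) * dt + sigma * np.sqrt(dt) * rng.standard_normal()

def ou_exact_loglik(path, dt, theta, mu, sigma):
    """Exact log-likelihood from the Gaussian transition density of the OU process."""
    x0, x1 = path[:-1], path[1:]
    mean = mu + (x0 - mu) * np.exp(-theta * dt)
    var = sigma**2 / (2.0 * theta) * (1.0 - np.exp(-2.0 * theta * dt))
    return -0.5 * np.sum(np.log(2.0 * np.pi * var) + (x1 - mean) ** 2 / var)

# This exact value is what approximate (e.g. variational) schemes can be checked against.
print(ou_exact_loglik(x, dt, theta, mu, sigma))
```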
Abstract:
This thesis is a study of three techniques to improve the performance of some standard forecasting models, with application to energy demand and prices. We focus on forecasting demand and price one day ahead. First, the wavelet transform was used as a pre-processing procedure with two approaches: multicomponent forecasts and direct forecasts. We compared these approaches empirically and found that the former consistently outperformed the latter. Second, adaptive models were introduced to continuously update model parameters in the testing period by combining filters with standard forecasting methods. Among these adaptive models, the adaptive LR-GARCH model was proposed for the first time in this thesis. Third, with regard to the noise distributions of the dependent variables in the forecasting models, we used either Gaussian or Student-t distributions. This thesis proposed a novel algorithm to infer the parameters of Student-t noise models. The method is an extension of earlier work for models that are linear in parameters to the non-linear multilayer perceptron. Therefore, the proposed method broadens the range of models that can use a Student-t noise distribution. Because these techniques cannot stand alone, they must be combined with prediction models to improve their performance. We combined these techniques with some standard forecasting models: the multilayer perceptron, radial basis functions, linear regression, and linear regression with GARCH. These techniques and forecasting models were applied to two datasets from the UK energy markets: daily electricity demand (which is stationary) and gas forward prices (non-stationary). The results showed that these techniques provided a good improvement in prediction performance.
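A minimal sketch of the multicomponent-versus-direct idea is given below: the series is split into a smooth component and a detail component (a single Haar-style low-pass split standing in for the thesis's full wavelet decomposition), each component is forecast one step ahead with a simple AR(1) predictor, and the component forecasts are summed. The data, the filter and the AR(1) predictors are illustrative stand-ins, not the models used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "demand-like" series: a weekly cycle plus noise (illustrative only).
t = np.arange(400)
y = 50 + 10 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 2, t.size)

def haar_style_smooth(x):
    """One level of a simple two-point (Haar-style) low-pass filter."""
    padded = np.concatenate([x[:1], x])           # crude boundary handling
    return 0.5 * (padded[:-1] + padded[1:])

def ar1_one_step(x):
    """Fit x[t] = a * x[t-1] + b by least squares and return the next-step forecast."""
    a, b = np.polyfit(x[:-1], x[1:], 1)
    return a * x[-1] + b

smooth = haar_style_smooth(y)
detail = y - smooth                               # additive split: y = smooth + detail

# Multicomponent forecast: predict each component separately, then recombine.
forecast_multi = ar1_one_step(smooth) + ar1_one_step(detail)
# Direct forecast: predict the raw series in one go.
forecast_direct = ar1_one_step(y)

print(f"multicomponent: {forecast_multi:.2f}  direct: {forecast_direct:.2f}")
```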
Abstract:
This paper presents a new method for the optimisation of the mirror element spacing arrangement and operating temperature of linear Fresnel reflectors (LFR). The specific objective is to maximise available power output (i.e. exergy) and operational hours whilst minimising cost. The method is described in detail and compared to an existing design method prominent in the literature. Results are given in terms of the exergy per total mirror area (W/m²) and the cost per exergy (US $/W). The new method is applied principally to the optimisation of an LFR in Gujarat, India, for which cost data have been gathered. It is recommended to use a spacing arrangement such that the onset of shadowing among mirror elements occurs at a transversal angle of 45°. This results in a cost per exergy of 2.3 $/W. Compared to the existing design approach, the exergy averaged over the year is increased by 9% to 50 W/m² and an additional 122 h of operation per year are predicted. The ideal operating temperature at the surface of the absorber tubes is found to be 300 °C. It is concluded that the new method is an improvement over existing techniques and a significant tool for any future design work on LFR systems.
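The two figures of merit above are related by simple arithmetic: the cost per exergy ($/W) is the cost per unit mirror area divided by the exergy per unit mirror area, since the total mirror area cancels. The short sketch below illustrates this using the reported 50 W/m² and 2.3 $/W; the implied cost per mirror area is a back-calculation for illustration, not a figure from the paper.

```python
# Minimal sketch of the figures of merit: exergy per total mirror area (W/m^2)
# and cost per exergy ($/W). The 50 W/m^2 and 2.3 $/W values are the abstract's
# reported results; the implied cost per mirror area is derived here only as an
# illustration of how the quantities relate.

def cost_per_exergy(cost_per_mirror_area, exergy_per_mirror_area):
    """$/W = ($/m^2) / (W/m^2); the total mirror area cancels out."""
    return cost_per_mirror_area / exergy_per_mirror_area

exergy_per_area = 50.0          # W/m^2, annual average reported for the new method
reported_cost_per_exergy = 2.3  # $/W, reported for the 45 degree shadowing-onset spacing

implied_cost_per_area = reported_cost_per_exergy * exergy_per_area
print(f"implied cost per mirror area: {implied_cost_per_area:.0f} $/m^2")
print(cost_per_exergy(implied_cost_per_area, exergy_per_area))  # recovers 2.3 $/W
```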
Abstract:
Many of the recent improvements in the capacity of data cartridge systems have been achieved through the use of narrower tracks, higher linear densities and continuous servo tracking with multi-channel heads. These changes have produced new tribological problems at the head/tape interface. It is crucial that the tribology of such systems is understood, and this will remain the case as increasing storage capacities and faster transfer rates are constantly sought. Chemical changes in the surface of single- and dual-layer MP tape have been correlated with signal performance. An accelerated tape tester, consisting of a custom-made cycler (the "loop tester"), was used to ascertain whether results could be produced that were representative of a real tape drive system. A second set of experiments used a modified tape drive (Georgens cycler), which allowed the effects of the tape transport system on the tape surface to be studied. To isolate any effects on the tape surface due to the head/tape interface, read/write heads were not fitted to this cycler. Two further sets of experiments were conducted which included a head in the tape path. This allowed the effects of the head/tape interface on the physical and chemical properties of the head and tape surfaces to be investigated. During the final set of experiments, the effect of an energised MR element on the head/tape interface was investigated. The effect of operating each cycler at extreme relative humidity and temperature was investigated through the use of an environmental chamber. Extensive use was made of surface-specific analytical techniques such as XPS, AFM, AES and SEM to study the physical and chemical changes that occur at the head/tape interface. Results showed that cycling improved the signal performance of all the tapes tested. The data cartridge drive belt had an effect on the chemical properties of the tape surface with which it was in contact. Binder degradation also occurred for each tape and appeared to be greater at higher humidity. Lubricant was generally seen to migrate to the tape surface with cycling. Any surface changes likely to affect signal output occurred at the head surface rather than the tape.
Abstract:
This thesis presents several advanced optical techniques that are crucial for improving high-capacity transmission systems. The basic theory of optical fibre communications is introduced before optical solitons and their use in optically amplified fibre systems are discussed. The design, operation, limitations and importance of the recirculating loop are illustrated. The crucial role of dispersion management in transmission systems is then considered. Two of the most popular dispersion compensation methods, dispersion compensating fibres and fibre Bragg gratings, are emphasised. A tunable dispersion compensator is fabricated using linearly chirped fibre Bragg gratings and a bending rig. Results show that it is capable of compensating not only second-order dispersion but also higher-order dispersion. Stimulated Raman Scattering (SRS) is studied and discussed. Different dispersion maps are evaluated for an all-Raman-amplified standard fibre link to obtain maximum transmission distances. Raman amplification is used in most of our loop experiments since it improves the optical signal-to-noise ratio (OSNR) and significantly reduces the nonlinear intrachannel effects in the transmission systems. The main body of the experimental work is concerned with nonlinear optical switching using nonlinear optical loop mirrors (NOLMs). A number of different types of optical loop mirror are built, tested and implemented in the transmission systems for noise suppression and 2R regeneration. The results show that, for 2R regeneration, the NOLM does improve system performance, while the NILM degrades system performance due to its sensitivity to the input pulse width, and the NALM built is unstable and therefore affects system performance.
Abstract:
Non-linear relationships are common in microbiological research and often necessitate the use of the statistical techniques of non-linear regression or curve fitting. In some circumstances, the investigator may wish to fit an exponential model to the data, i.e., to test the hypothesis that a quantity Y either increases or decays exponentially with increasing X. This type of model is straightforward to fit, as taking logarithms of the Y variable linearises the relationship, which can then be treated by the methods of linear regression.
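A minimal sketch of this log-transform approach, on simulated exponential-decay data with made-up parameter values:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative data following Y = a * exp(b * X) with multiplicative noise.
x = np.linspace(0, 10, 30)
a_true, b_true = 5.0, -0.3                      # exponential decay
y = a_true * np.exp(b_true * x) * np.exp(rng.normal(0, 0.05, x.size))

# Taking logs linearises the model: log(Y) = log(a) + b * X,
# which ordinary linear regression can then fit.
b_hat, log_a_hat = np.polyfit(x, np.log(y), 1)
a_hat = np.exp(log_a_hat)

print(f"estimated model: Y = {a_hat:.2f} * exp({b_hat:.2f} * X)")
```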
Abstract:
1. The techniques associated with regression, whether linear or non-linear, are some of the most useful statistical procedures that can be applied in clinical studies in optometry. 2. In some cases, there may be no scientific model of the relationship between X and Y that can be specified in advance and the objective may be to provide a ‘curve of best fit’ for predictive purposes. In such cases, the fitting of a general polynomial type curve may be the best approach. 3. An investigator may have a specific model in mind that relates Y to X and the data may provide a test of this hypothesis. Some of these curves can be reduced to a linear regression by transformation, e.g., the exponential and negative exponential decay curves. 4. In some circumstances, e.g., the asymptotic curve or logistic growth law, a more complex process of curve fitting involving non-linear estimation will be required.
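The sketch below illustrates points 2 and 4 on simulated data: a general second-order polynomial fitted as an empirical 'curve of best fit', and a logistic growth law fitted by non-linear least squares (here via scipy's curve_fit; the data and starting values are illustrative).

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 40)

# Point 2: an empirical 'curve of best fit' using a second-order polynomial.
y_poly = 1.0 + 0.8 * x - 0.05 * x**2 + rng.normal(0, 0.2, x.size)
poly_coeffs = np.polyfit(x, y_poly, 2)

# Point 4: a specific non-linear model, the logistic growth law, fitted by
# non-linear least squares (no transformation linearises this one).
def logistic(x, L, k, x0):
    return L / (1.0 + np.exp(-k * (x - x0)))

y_logit = logistic(x, 10.0, 1.2, 5.0) + rng.normal(0, 0.3, x.size)
params, _ = curve_fit(logistic, x, y_logit, p0=[8.0, 1.0, 4.0])

print("polynomial coefficients:", np.round(poly_coeffs, 3))
print("logistic parameters (L, k, x0):", np.round(params, 3))
```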
Abstract:
Purpose: The use of PHMB as a disinfectant in contact lens multipurpose solutions has been at the centre of much debate in recent times, particularly in relation to the issue of solution-induced corneal staining. Clinical studies have been carried out which suggest different effects with individual contact lens materials used in combination with specific PHMB-containing care regimes. There does not appear to be, however, a reliable analytical technique that would detect and quantify with any degree of accuracy the specific levels of PHMB that are taken up and released from individual solutions by the various contact lens materials. Methods: PHMB is a mixture of positively charged polymer units of varying molecular weight that has a maximum absorbance wavelength of 236 nm. On the basis of these properties a range of assays, including capillary electrophoresis, HPLC, a nickel-nioxime colorimetric technique, mass spectrometry, UV spectroscopy and ion chromatography, were assessed, paying particular attention to each of their constraints and detection levels. Particular interest was focused on the relative advantage of contactless conductivity compared to UV and mass spectrometry detection in capillary electrophoresis (CE). This study provides an overview of the comparative performance of these techniques. Results: The UV absorbance of PHMB solutions, ranging from 0.0625 to 50 ppm, was measured at 236 nm. Within this range the calibration curve appears to be linear; however, absorbance values below 1 ppm (0.0001%) were extremely difficult to reproduce. The concentration of PHMB in solutions is in the range of 0.0002–0.00005%, and our investigations suggest that levels of PHMB below 0.0001% (levels encountered in uptake and release studies) cannot be accurately estimated, in particular when analysing complex lens care solutions, which can contain competitively absorbing, and thus interfering, species. The use of separative methodologies, such as CE using UV detection alone, is similarly limited. Alternative techniques, including contactless conductivity detection, offer greater discrimination in complex solutions together with the opportunity for dual-channel detection. Preliminary results achieved by TraceDec contactless conductivity detection (gain 150%, offset 150) in conjunction with the Agilent capillary electrophoresis system, using a bare fused silica capillary (extended light path, 50 µm i.d., total length 64.5 cm, effective length 56 cm) and a cationic buffer at pH 3.2, exhibit great potential with reproducible PHMB split peaks. Conclusions: PHMB-based solutions are commonly associated with the potential to invoke corneal staining in combination with certain contact lens materials. However, the terminology 'PHMB-based solution' is used primarily because PHMB itself has yet to be adequately implicated as the causative agent of the staining and compromised corneal cell integrity. The lack of well-characterised, adequately sensitive assays, coupled with the range of additional components that characterise individual care solutions, poses a major barrier to the investigation of PHMB interactions in the lens-wearing eye.
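The reproducibility problem described above can be seen from the calibration arithmetic alone: if absorbance is proportional to concentration over the 0.0625–50 ppm range, the signal near 1 ppm is tiny, and instrument noise of a few milli-absorbance units becomes a large relative error once the line is inverted. The sketch below illustrates this with a simulated linear calibration; the slope, noise level and data are assumed for illustration and are not measured values.

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative UV calibration at 236 nm: absorbance assumed proportional to
# PHMB concentration over the working range (values are made up, not measured).
conc_ppm = np.array([0.0625, 0.125, 0.25, 0.5, 1, 2, 5, 10, 25, 50], dtype=float)
absorbance = 0.02 * conc_ppm + rng.normal(0, 0.002, conc_ppm.size)

# Linear calibration: A = slope * C + intercept.
slope, intercept = np.polyfit(conc_ppm, absorbance, 1)

def predict_conc(a):
    """Invert the calibration line to estimate concentration from absorbance."""
    return (a - intercept) / slope

# Near 1 ppm (0.0001%) the signal here is only ~0.02 AU, so noise of a few
# milli-AU translates into a large relative error in the estimated concentration.
print(f"estimated conc at A = 0.02: {predict_conc(0.02):.2f} ppm")
```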
Using interior point algorithms for the solution of linear programs with special structural features
Abstract:
Linear Programming (LP) is a powerful decision-making tool extensively used in various economic and engineering activities. In the early stages the success of LP was mainly due to the efficiency of the simplex method. After the appearance of Karmarkar's paper, the focus of most research shifted to the field of interior point methods. The present work is concerned with investigating and efficiently implementing the latest techniques in this field, taking sparsity into account. The performance of these implementations on different classes of LP problems is reported here. The preconditioned conjugate gradient method is one of the most powerful tools for the solution of the least squares problem present in every iteration of all interior point methods. The effect of using different preconditioners on a range of problems with various condition numbers is presented. Decomposition algorithms have been one of the main fields of research in linear programming over the last few years. After reviewing the latest decomposition techniques, three promising methods were chosen and implemented. Sparsity is again a consideration and suggestions have been included to allow improvements when solving problems with these methods. Finally, experimental results on randomly generated data are reported and compared with an interior point method. The efficient implementation of the decomposition methods considered in this study requires the solution of quadratic subproblems. A review of recent work on algorithms for convex quadratic programming was performed. The most promising algorithms are discussed and implemented, taking sparsity into account. The relative performance of these algorithms on randomly generated separable and non-separable problems is also reported.
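To make the role of the preconditioned conjugate gradient method concrete, the sketch below applies a Jacobi-preconditioned CG solver to a small dense normal-equations system of the form A D Aᵀ y = b, the kind of system that arises in each interior point iteration. The random problem data, the dense storage and the simple diagonal preconditioner are illustrative simplifications of the sparse implementations investigated in the thesis.

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradient for A x = b with a diagonal (Jacobi)
    preconditioner; A must be symmetric positive definite."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

rng = np.random.default_rng(5)
m, n = 30, 60
A_lp = rng.normal(size=(m, n))                 # LP constraint matrix (dense here)
d = rng.uniform(0.1, 10.0, n)                  # positive scaling from the current iterate
N = A_lp @ np.diag(d) @ A_lp.T                 # normal-equations matrix A D A^T (SPD)
rhs = rng.normal(size=m)

y = pcg(N, rhs, M_inv_diag=1.0 / np.diag(N))   # Jacobi preconditioner: inverse diagonal
print(np.linalg.norm(N @ y - rhs))             # residual should be near zero
```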
Abstract:
Methods of dynamic modelling and analysis of structures, for example the finite element method, are well developed. However, it is generally agreed that accurate modelling of complex structures is difficult, and for critical applications it is necessary to validate or update the theoretical models using data measured from actual structures. Techniques for identifying the parameters of linear dynamic models using vibration test data have attracted considerable interest recently. However, no method has received general acceptance, due to a number of difficulties. These difficulties are mainly due to (i) the incomplete number of vibration modes that can be excited and measured, (ii) the incomplete number of coordinates that can be measured, (iii) inaccuracy in the experimental data and (iv) inaccuracy in the model structure. This thesis reports on a new approach to updating the parameters of a finite element model, as well as a lumped parameter model with a diagonal mass matrix. The structure and its theoretical model are equally perturbed by adding mass or stiffness, and an incomplete set of eigen-data is measured. The parameters are then identified by iterative updating of the initial estimates, by sensitivity analysis, using the eigenvalues, or both the eigenvalues and eigenvectors, of the structure before and after perturbation. It is shown that, with a suitable choice of the perturbing coordinates, exact parameters can be identified if the data and the model structure are exact. The theoretical basis of the technique is presented. To cope with measurement errors and possible inaccuracies in the model structure, a well-known Bayesian approach is used to minimise the least-squares difference between the updated and the initial parameters. The eigen-data of the structure with added mass or stiffness are also determined, using the frequency response data of the unmodified structure, by a structural modification technique. Thus, mass or stiffness does not have to be added physically. The mass-stiffness addition technique is demonstrated by simulation examples and laboratory experiments on beams and an H-frame.
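A toy numerical sketch of the central idea, identifying stiffness parameters from eigenvalues measured before and after adding a known mass, is given below. It uses a 2-DOF spring-mass chain, finite-difference sensitivities and a plain least-squares (Gauss-Newton) update; the system, parameter values and update details are illustrative and omit the Bayesian weighting the thesis uses to handle measurement error.

```python
import numpy as np

def eigvals_model(k, added_mass=0.0):
    """Eigenvalues of a 2-DOF spring-mass chain (unit masses), optionally with
    an extra mass added at coordinate 1 to perturb the structure."""
    k1, k2 = k
    K = np.array([[k1 + k2, -k2], [-k2, k2]])
    m = np.array([1.0 + added_mass, 1.0])
    Minv_sqrt = np.diag(1.0 / np.sqrt(m))
    return np.linalg.eigvalsh(Minv_sqrt @ K @ Minv_sqrt)

def residual(k, measured, added_mass):
    model = np.concatenate([eigvals_model(k), eigvals_model(k, added_mass)])
    return measured - model

# "Measured" eigen-data generated from the true parameters (illustrative only).
k_true = np.array([5.0, 3.0])
added_mass = 0.5
measured = np.concatenate([eigvals_model(k_true), eigvals_model(k_true, added_mass)])

# Iterative updating of the initial estimates using a finite-difference
# sensitivity matrix (Gauss-Newton on the eigenvalue residuals).
k = np.array([8.0, 1.5])                       # deliberately poor initial model
for _ in range(20):
    r = residual(k, measured, added_mass)
    S = np.empty((r.size, k.size))             # sensitivity of eigenvalues to parameters
    for j in range(k.size):
        dk = np.zeros_like(k)
        dk[j] = 1e-6 * max(abs(k[j]), 1.0)
        S[:, j] = (residual(k + dk, measured, added_mass) - r) / (-dk[j])
    k = k + np.linalg.lstsq(S, r, rcond=None)[0]

print("identified stiffnesses:", np.round(k, 4))   # should recover k_true
```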
Abstract:
This thesis applies a hierarchical latent trait model system to a large quantity of data. The motivation for it was the lack of viable approaches to analysing High Throughput Screening datasets, which may include thousands of high-dimensional data points. High Throughput Screening (HTS) is an important tool in the pharmaceutical industry for discovering leads which can be optimised and further developed into candidate drugs. Since the development of new robotic technologies, the ability to test the activities of compounds has increased considerably in recent years. Traditional methods, looking at tables and graphical plots to analyse relationships between measured activities and the structure of compounds, are not feasible when facing a large HTS dataset. Instead, data visualisation provides a method for analysing such large datasets, especially those with high dimensions. So far, a few visualisation techniques for drug design have been developed, but most of them cope with only several properties of compounds at a time. We believe that a latent trait model (LTM) with a non-linear mapping from the latent space to the data space is a preferred choice for visualising a complex high-dimensional data set. As a type of latent variable model, the latent trait model can deal with either continuous or discrete data, which makes it particularly useful in this domain. In addition, with the aid of differential geometry, we can visualise the distribution of the data using magnification factor and curvature plots. Rather than obtaining useful information from just a single plot, a hierarchical LTM arranges a set of LTMs and their corresponding plots in a tree structure. We model the whole data set with an LTM at the top level, which is broken down into clusters at deeper levels of the hierarchy. In this manner, refined visualisation plots can be displayed at deeper levels and sub-clusters may be found. The hierarchy of LTMs is trained using the expectation-maximisation (EM) algorithm to maximise its likelihood with respect to the data sample. Training proceeds interactively in a recursive fashion (top-down). The user subjectively identifies interesting regions on the visualisation plot that they would like to model in greater detail. At each stage of hierarchical LTM construction, the EM algorithm alternates between the E- and M-step. Another problem that can occur when visualising a large data set is that there may be significant overlaps of data clusters. It is very difficult for the user to judge where the centres of regions of interest should be placed. We address this problem by employing the minimum message length technique, which can help the user to decide the optimal structure of the model. In this thesis we also demonstrate the applicability of the hierarchy of latent trait models in the field of document data mining.
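The sketch below shows the E-/M-step alternation for a single model at one level of such a hierarchy, using a GTM-style continuous-data latent variable model (a close relative of the latent trait model, which extends the same construction to discrete data). The synthetic data, latent grid sizes, RBF basis and small ridge term are all illustrative choices, not the thesis's settings.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic data: a noisy 1-D curve embedded in 2-D (a stand-in for compounds).
N = 300
t = rng.uniform(-1, 1, N)
X = np.column_stack([t, t**2]) + rng.normal(0, 0.05, (N, 2))

# Latent grid and RBF basis defining the non-linear map from latent to data space.
K, M = 20, 6                                   # latent grid points, basis functions
Z = np.linspace(-1, 1, K)[:, None]             # 1-D latent space for brevity
C = np.linspace(-1, 1, M)[:, None]             # RBF centres
sigma_rbf = 0.3
Phi = np.exp(-((Z - C.T) ** 2) / (2 * sigma_rbf**2))    # K x M design matrix

W = rng.normal(0, 0.1, (M, X.shape[1]))        # mapping weights
beta = 1.0                                     # inverse noise variance

for _ in range(50):                            # EM iterations
    Y = Phi @ W                                # K x D images of the latent points
    # E-step: responsibilities of each latent point for each data point.
    d2 = ((X[None, :, :] - Y[:, None, :]) ** 2).sum(-1)    # K x N squared distances
    logR = -0.5 * beta * d2
    logR -= logR.max(axis=0, keepdims=True)
    R = np.exp(logR)
    R /= R.sum(axis=0, keepdims=True)
    # M-step: re-estimate the mapping weights (small ridge for stability) and beta.
    G = np.diag(R.sum(axis=1))
    W = np.linalg.solve(Phi.T @ G @ Phi + 1e-3 * np.eye(M), Phi.T @ R @ X)
    d2_new = ((X[None, :, :] - (Phi @ W)[:, None, :]) ** 2).sum(-1)
    beta = X.size / (R * d2_new).sum()

# Visualisation coordinate for each data point: posterior mean in latent space.
latent_mean = (R * Z).sum(axis=0)
print("fitted noise precision beta:", round(beta, 2))
print("first five latent coordinates:", np.round(latent_mean[:5], 3))
```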
Abstract:
Methods of solving the neuro-electromagnetic inverse problem are examined and developed, with specific reference to the human visual cortex. The anatomy, physiology and function of the human visual system are first reviewed. Mechanisms by which the visual cortex gives rise to external electric and magnetic fields are then discussed, and the forward problem is described mathematically, first for the case of an isotropic, piecewise homogeneous volume conductor and then for an anisotropic, concentric, spherical volume conductor. Methods of solving the inverse problem are reviewed, before a new technique is presented. This technique combines prior anatomical information, gained from stereotaxic studies, with a probabilistic distributed-source algorithm to yield accurate, realistic inverse solutions. The solution accuracy is enhanced by using both visual evoked electric and magnetic responses simultaneously. The numerical algorithm is then modified to perform equivalent current dipole fitting and minimum norm estimation, and these three techniques are implemented on a transputer array for fast computation. Due to the linear nature of the techniques, they can be executed on up to 22 transputers with close to linear speed-up. The latter part of the thesis describes the application of the inverse methods to the analysis of visual evoked electric and magnetic responses. The CIIm peak of the pattern-onset evoked magnetic response is deduced to be a product of current flowing away from the surface in areas 17, 18 and 19, while the pattern-reversal P100m response originates in the same areas, but from oppositely directed current. Cortical retinotopy is examined using sectorial stimuli; the CI and CIm peaks of the pattern-onset electric and magnetic responses are found to originate from areas V1 and V2 simultaneously, and they therefore do not conform to a simple cruciform model of the primary visual cortex.
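Of the three inverse techniques mentioned, minimum norm estimation has the simplest closed form: the regularised estimate j = Lᵀ(LLᵀ + λI)⁻¹b for a lead-field matrix L. The sketch below applies it to a random toy lead field; the matrix, source patch and regularisation value are illustrative, not a realistic head model.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative forward model: measurements b = L @ j + noise, where L is a
# (sensors x sources) lead-field matrix. L here is random, not a real head model.
n_sensors, n_sources = 32, 200
L = rng.normal(size=(n_sensors, n_sources))
j_true = np.zeros(n_sources)
j_true[50:55] = 1.0                              # a small patch of active sources
b = L @ j_true + rng.normal(0, 0.05, n_sensors)

def minimum_norm_estimate(L, b, lam):
    """Regularised minimum norm inverse: j = L^T (L L^T + lam I)^-1 b."""
    gram = L @ L.T + lam * np.eye(L.shape[0])
    return L.T @ np.linalg.solve(gram, b)

j_hat = minimum_norm_estimate(L, b, lam=1.0)
mask = np.zeros(n_sources, dtype=bool)
mask[50:55] = True
print("mean |estimate| in true patch:", np.abs(j_hat[mask]).mean())
print("mean |estimate| elsewhere:    ", np.abs(j_hat[~mask]).mean())
```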
Abstract:
Background: Evaluation of anterior chamber depth (ACD) can potentially identify those patients at risk of angle-closure glaucoma. We aimed to: compare van Herick’s limbal chamber depth (LCDvh) grades with LCDorb grades calculated from the Orbscan anterior chamber angle values; determine Smith’s technique ACD and compare it to Orbscan ACD; and calculate a constant for Smith’s technique using Orbscan ACD. Methods: Eighty participants free from eye disease underwent LCDvh grading, Smith’s technique ACD, and Orbscan anterior chamber angle and ACD measurement. Results: LCDvh overestimated grades by a mean of 0.25 (coefficient of repeatability [CR] 1.59) compared to LCDorb. Smith’s technique (constants 1.40 and 1.31) overestimated ACD by a mean of 0.33 mm (CR 0.82) and 0.12 mm (CR 0.79), respectively, compared to Orbscan. Using linear regression, we determined a constant of 1.22 for Smith’s slit-length method. Conclusions: Smith’s technique (constant 1.31) provided an ACD that is closer to that found with Orbscan than a constant of 1.40 or LCDvh. Our findings also suggest that Smith’s technique would produce values closer to those obtained with Orbscan by using a constant of 1.22.
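Since Smith’s technique estimates ACD as a constant multiplied by the measured slit length, the constant reported above comes from a regression of Orbscan ACD on slit length through the origin. The sketch below reproduces the form of that calculation on simulated readings; the data are invented and only the shape of the calculation reflects the study.

```python
import numpy as np

rng = np.random.default_rng(8)

# Illustrative data: Smith slit-length readings (mm) and reference Orbscan ACD (mm).
# Values are simulated, not the study's measurements.
slit_length = rng.uniform(2.0, 3.2, 80)
acd_orbscan = 1.22 * slit_length + rng.normal(0, 0.1, 80)

# Smith's technique models ACD = constant * slit length, so the constant is
# found by least-squares regression through the origin.
constant = np.sum(slit_length * acd_orbscan) / np.sum(slit_length**2)
print(f"fitted constant: {constant:.2f}")

# Mean error of ACD predicted with the fitted constant vs. the traditional 1.40:
print(np.mean(constant * slit_length - acd_orbscan))   # near-zero bias
print(np.mean(1.40 * slit_length - acd_orbscan))        # systematic overestimate
```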