29 results for Abbott, Andrew: Methods of discovery


Relevance: 100.00%

Abstract:

There are several methods of providing series compensation for transmission lines using power electronic switches. Four methods of series compensation are examined in this thesis: the thyristor-controlled series capacitor, a voltage-sourced inverter series compensator using a capacitor as the series element, a current-sourced inverter series compensator, and a voltage-sourced inverter using an inductor as the series element. All the compensators examined provide a continuously variable series voltage controlled by the switching of the electronic devices. Two of the circuits, the thyristor-controlled series capacitor and the current-sourced inverter series compensator, offer both capacitive and inductive compensation; the other two each produce either capacitive or inductive series compensation only. The thyristor-controlled series capacitor offers the widest range of series compensation, although there is a band of unavailable compensation between 0 and 1 pu capacitive compensation, and compared with the other compensators examined the harmonic content of its compensating voltage is quite high. An algebraic analysis showed that the thyristor-controlled series capacitor can operate in more than one state, one of which has the undesirable effect of introducing large losses. The voltage-sourced inverter series compensator using a capacitor as the series element provides only capacitive compensation. It uses two capacitors, which increases its cost significantly above the other three, but it has the advantage of very low harmonic distortion. The current-sourced inverter series compensator provides both capacitive and inductive series compensation; the harmonic content of its compensating voltage is second only to that of the voltage-sourced inverter series compensator using a capacitor as the series element. The voltage-sourced inverter series compensator using an inductor as the series element provides only inductive compensation and is the least expensive compensator examined; unfortunately, the harmonics introduced by this circuit are considerable.
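For orientation alongside this abstract, the per-unit figures quoted above refer to the conventional degree of series compensation. The relations below are standard textbook definitions, not the thesis's own formulation: the degree of compensation is the ratio of the inserted capacitive reactance to the line's inductive reactance, and a capacitive compensator injects a series voltage in quadrature with the line current.

```latex
% Degree of series compensation (standard definition, not taken from the thesis):
\[ k = \frac{X_C}{X_L}, \qquad 0 \le k < 1 \ \text{(capacitive)} \]
% Series voltage injected by a capacitive compensator carrying line current \bar{I}:
\[ \bar{V}_{\mathrm{comp}} = -\, j X_C \, \bar{I} \]
```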

Relevance: 100.00%

Abstract:

Lipid peroxidation products like malondialdehyde, 4-hydroxynonenal and F(2)-isoprostanes are widely used as markers of oxidative stress in vitro and in vivo. This study reports the results of a multi-laboratory validation study by COST Action B35 to assess inter-laboratory and intra-laboratory variation in the measurement of lipid peroxidation. Human plasma samples were exposed to UVA irradiation at different doses (0, 15 J, 20 J), encoded and shipped to 15 laboratories, where analyses of malondialdehyde, 4-hydroxynonenal and isoprostanes were conducted. The results demonstrate a low within-day variation and a good correlation between results observed on two different days. However, high coefficients of variation were observed between the laboratories. Malondialdehyde determined by HPLC was found to be the most sensitive and reproducible lipid peroxidation product in plasma upon UVA treatment. It is concluded that measurement of malondialdehyde by HPLC has good analytical validity for inter-laboratory studies on lipid peroxidation in human EDTA-plasma samples, although it is acknowledged that this may not translate to biological validity.

Relevance: 100.00%

Abstract:

This work presents a two-dimensional risk assessment method based on quantifying the probability of occurrence of contaminant source terms as well as assessing the resultant impacts. The risk is calculated using Monte Carlo simulation, whereby synthetic contaminant source terms are generated according to the same distribution as historically occurring pollution events, or according to an a priori probability distribution. The spatial and temporal distributions of the resulting contaminant concentrations at pre-defined monitoring points within the aquifer are then simulated over repeated realisations using integrated mathematical models. The number of times user-defined concentration thresholds are exceeded is quantified as the risk. The utility of the method was demonstrated using hypothetical scenarios, and the risk of pollution from a number of sources all occurring by chance together was evaluated. The results are presented as charts and spatial maps. The generated risk maps show the risk of pollution at each observation borehole, as well as the trends within the study area. The capability to generate synthetic pollution events from numerous potential sources of pollution, based on the historical frequency of their occurrence, proved to be a great asset of the method and a significant advantage over contemporary approaches.
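A minimal sketch of the exceedance-counting idea described above, assuming entirely hypothetical names and numbers: the transport model here is a simple placeholder standing in for the integrated mathematical models used in the work, and the lognormal source distribution is illustrative rather than fitted to any real event record.

```python
import numpy as np

rng = np.random.default_rng(42)

def transport_model(source_mass, distance):
    """Placeholder transport model: exponential attenuation of a synthetic
    source term with distance to the monitoring borehole (illustrative only)."""
    return source_mass * np.exp(-0.05 * distance)

# Hypothetical monitoring boreholes (distance from the source, in metres)
boreholes = {"BH1": 50.0, "BH2": 120.0, "BH3": 300.0}
threshold = 5.0           # user-defined concentration limit (e.g. mg/L)
n_realisations = 10_000

exceedances = {name: 0 for name in boreholes}
for _ in range(n_realisations):
    # Synthetic source term drawn from a distribution assumed to match
    # historical pollution events (lognormal chosen for illustration)
    source = rng.lognormal(mean=3.0, sigma=1.0)
    for name, dist in boreholes.items():
        if transport_model(source, dist) > threshold:
            exceedances[name] += 1

# Risk at each borehole = fraction of realisations exceeding the threshold
risk = {name: count / n_realisations for name, count in exceedances.items()}
print(risk)
```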

Relevance: 100.00%

Abstract:

In this paper, we discuss some practical implications for implementing adaptable network algorithms applied to non-stationary time series problems. Two real-world data sets, containing electricity load demands and foreign exchange market prices, are used to test several different methods, ranging from linear models with fixed parameters to non-linear models which adapt both parameters and model order on-line. Training with the extended Kalman filter, we demonstrate that the dynamic model-order increment procedure of the resource-allocating RBF network (RAN) is highly sensitive to the parameters of the novelty criterion. We investigate the use of system noise for increasing the plasticity of the Kalman filter training algorithm, and discuss the consequences for on-line model order selection. The results of our experiments show that there are advantages to be gained in tracking real-world non-stationary data through the use of more complex adaptive models.
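A compact sketch of the model-order increment test the abstract refers to, restating the usual RAN novelty criterion rather than reproducing the paper's code: a new radial basis centre is allocated only when the prediction error and the distance to the nearest existing centre both exceed their thresholds, which is why the procedure is sensitive to those two parameters. All names and thresholds below are illustrative.

```python
import numpy as np

def should_allocate(x, y, centres, predict, err_thresh=0.05, dist_thresh=0.5):
    """RAN-style novelty test: allocate a new RBF centre at input x only if the
    prediction error AND the distance to the nearest existing centre are both
    larger than their thresholds (the two 'novelty criterion' parameters)."""
    error = abs(y - predict(x))
    nearest = min(np.linalg.norm(x - c) for c in centres) if centres else np.inf
    return error > err_thresh and nearest > dist_thresh

# Toy usage: a constant predictor and a single existing centre at the origin
centres = [np.array([0.0, 0.0])]
predict = lambda x: 0.0
print(should_allocate(np.array([2.0, 2.0]), 1.0, centres, predict))  # True
```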

Relevance: 100.00%

Abstract:

DUE TO COPYRIGHT RESTRICTIONS ONLY AVAILABLE FOR CONSULTATION AT ASTON UNIVERSITY LIBRARY AND INFORMATION SERVICES WITH PRIOR ARRANGEMENT

Relevance: 100.00%

Abstract:

DUE TO COPYRIGHT RESTRICTIONS ONLY AVAILABLE FOR CONSULTATION AT ASTON UNIVERSITY LIBRARY AND INFORMATION SERVICES WITH PRIOR ARRANGEMENT

Relevance: 100.00%

Abstract:

DUE TO COPYRIGHT RESTRICTIONS ONLY AVAILABLE FOR CONSULTATION AT ASTON UNIVERSITY LIBRARY AND INFORMATION SERVICES WITH PRIOR ARRANGEMENT

Relevance: 100.00%

Abstract:

Two contrasting multivariate statistical methods, viz., principal components analysis (PCA) and cluster analysis were applied to the study of neuropathological variations between cases of Alzheimer's disease (AD). To compare the two methods, 78 cases of AD were analyzed, each characterised by measurements of 47 neuropathological variables. Both methods of analysis revealed significant variations between AD cases. These variations were related primarily to differences in the distribution and abundance of senile plaques (SP) and neurofibrillary tangles (NFT) in the brain. Cluster analysis classified the majority of AD cases into five groups which could represent subtypes of AD. However, PCA suggested that variation between cases was more continuous with no distinct subtypes. Hence, PCA may be a more appropriate method than cluster analysis in the study of neuropathological variations between AD cases.
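An illustrative sketch of how the two analyses can be run on a case-by-variable matrix of the kind described (78 cases by 47 neuropathological measures). The data here are random placeholders, not the study data, and the library choices (scikit-learn, SciPy) are assumptions rather than the tools used in the paper.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from scipy.cluster.hierarchy import linkage, fcluster

# Placeholder data matrix: 78 cases x 47 neuropathological variables
rng = np.random.default_rng(0)
X = StandardScaler().fit_transform(rng.normal(size=(78, 47)))

# PCA: look for continuous variation along the leading components
pca = PCA(n_components=2)
scores = pca.fit_transform(X)
print("variance explained:", pca.explained_variance_ratio_)

# Agglomerative (Ward) clustering: look for discrete groups of cases
Z = linkage(X, method="ward")
labels = fcluster(Z, t=5, criterion="maxclust")   # force five clusters
print("cases per cluster:", np.bincount(labels)[1:])
```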

Relevance: 100.00%

Abstract:

Gas absorption, the removal of one or more constituents from a gas mixture, is widely used in chemical processes. In many gas absorption processes the gas mixture is already at high pressure, and in recent years organic solvents have been developed for physical absorption at high pressure followed by low-pressure regeneration of the solvent and recovery of the absorbed gases. Until now the discovery of new solvents has usually been by expensive and time-consuming trial-and-error laboratory tests. This work describes a new approach, whereby a solvent is selected from considerations of its molecular structure by applying recently published methods of predicting gas solubility from the molecular groups which make up the solvent molecule. The removal of the acid gases carbon dioxide and hydrogen sulfide from methane or hydrogen was used as a commercially important example. After a preliminary assessment to identify promising molecular groups, more than eighty new solvent molecules were designed and evaluated by predicting gas solubility. The other important physical properties were also predicted by appropriate theoretical procedures, and a commercially promising new solvent was chosen to have a high solubility for acid gases, a low solubility for methane and hydrogen, a low vapour pressure, and a low viscosity. The solvent chosen, of molecular structure CH3-CO-CH2-CH2-CO-CH3, was tested in the laboratory and shown to have physical properties close to those predicted, with the exception of vapour pressure: gas solubilities were within 10% of predictions but lower, viscosity was within 10% but higher, and the vapour pressure was significantly lower than predicted. A computer program was written to predict gas solubility in the new solvent at the high pressures (25 bar) used in practice, based on the group-contribution method of Skjold-Jørgensen (1984). Before using it with the new solvent, acetonylacetone, the method was shown to be sufficiently accurate by comparing predicted values of gas solubility with experimental solubilities from the literature for 14 systems up to 50 bar. A test of the commercial potential of the new solvent was made by means of two design studies which compared the size of plant and approximate relative costs of absorbing acid gases with the new solvent against other commonly used solvents: refrigerated methanol (Rectisol process) and the dimethyl ethers of polyethylene glycol (Selexol process). Both studies showed some significant advantage, in terms of capital and operating cost, for a plant designed for the new solvent process.
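To illustrate the group-contribution idea underlying the solvent screening (not the Skjold-Jørgensen equation of state itself), a first-order estimate simply sums a tabulated contribution for each functional group in the candidate molecule. The coefficients and the quantity estimated below are entirely hypothetical and chosen only to show the mechanics of the calculation.

```python
# Hypothetical first-order group-contribution estimate of ln(Henry's constant)
# for an acid gas in a candidate solvent.  Coefficients are illustrative only
# and are NOT taken from Skjold-Jørgensen (1984) or any published correlation.
group_contrib_lnH = {"CH3": 0.40, "CH2": 0.25, "C=O": -0.90}

def estimate_lnH(groups):
    """Sum group contributions for a molecule given as {group: count}."""
    return sum(group_contrib_lnH[g] * n for g, n in groups.items())

# Acetonylacetone, CH3-CO-CH2-CH2-CO-CH3: two CH3, two CH2 and two C=O groups
print(estimate_lnH({"CH3": 2, "CH2": 2, "C=O": 2}))
```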

Relevance: 100.00%

Abstract:

The main aim of this thesis is to investigate the application of methods of differential geometry to the constraint analysis of relativistic high spin field theories. As a starting point the coordinate dependent descriptions of the Lagrangian and Dirac-Bergmann constraint algorithms are reviewed for general second order systems. These two algorithms are then respectively employed to analyse the constraint structure of the massive spin-1 Proca field from the Lagrangian and Hamiltonian viewpoints. As an example of a coupled field theoretic system the constraint analysis of the massive Rarita-Schwinger spin-3/2 field coupled to an external electromagnetic field is then reviewed in terms of the coordinate dependent Dirac-Bergmann algorithm for first order systems. The standard Velo-Zwanziger and Johnson-Sudarshan inconsistencies that this coupled system seemingly suffers from are then discussed in light of this full constraint analysis and it is found that both these pathologies degenerate to a field-induced loss of degrees of freedom. A description of the geometrical version of the Dirac-Bergmann algorithm developed by Gotay, Nester and Hinds begins the geometrical examination of high spin field theories. This geometric constraint algorithm is then applied to the free Proca field and to two Proca field couplings; the first of which is the minimal coupling to an external electromagnetic field whilst the second is the coupling to an external symmetric tensor field. The onset of acausality in this latter coupled case is then considered in relation to the geometric constraint algorithm.
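For concreteness, the Proca analysis referred to above starts from the massive spin-1 Lagrangian density shown below. These are standard textbook relations rather than material reproduced from the thesis: the momentum conjugate to A_0 vanishes identically, giving the primary constraint, and demanding that it be preserved in time yields the secondary constraint (mostly-minus metric convention assumed).

```latex
% Massive spin-1 (Proca) Lagrangian density:
\[ \mathcal{L} = -\tfrac{1}{4} F_{\mu\nu} F^{\mu\nu} + \tfrac{1}{2} m^{2} A_{\mu} A^{\mu},
   \qquad F_{\mu\nu} = \partial_{\mu} A_{\nu} - \partial_{\nu} A_{\mu} \]
% Primary constraint: no velocity \dot{A}_0 appears in the Lagrangian
\[ \pi^{0} = \frac{\partial \mathcal{L}}{\partial \dot{A}_{0}} \approx 0 \]
% Secondary constraint, from requiring the primary constraint to be preserved in time:
\[ \partial_{i} \pi^{i} + m^{2} A^{0} \approx 0 \]
```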

Relevance: 100.00%

Abstract:

Conventional structured methods of software engineering are often based on the use of functional decomposition coupled with the Waterfall development process model. This approach is argued to be inadequate for coping with the evolutionary nature of large software systems. Alternative development paradigms, including the operational paradigm and the transformational paradigm, have been proposed to address the inadequacies of this conventional view of software development, and these are reviewed. JSD is presented as an example of an operational approach to software engineering and is contrasted with other well-documented examples. The thesis shows how aspects of JSD can be characterised with reference to formal language theory and automata theory. In particular, it is noted that Jackson structure diagrams are equivalent to regular expressions and can be thought of as specifying corresponding finite automata. The thesis discusses the automatic transformation of structure diagrams into finite automata using an algorithm adapted from compiler theory, and then extends the technique to deal with areas of JSD which are not strictly formalisable in terms of regular languages. In particular, an elegant and novel method for dealing with so-called recognition (or parsing) difficulties is described. Various applications of the extended technique are described, including a new method of automatically implementing the dismemberment transformation; an efficient way of implementing inversion in languages lacking a goto statement; and a new in-the-large implementation strategy.
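A minimal sketch of the compiler-theory idea mentioned above: a Thompson-style construction that turns the three structure-diagram constructs (sequence, selection, iteration) into an epsilon-NFA. The representation and names are illustrative assumptions, not the thesis's implementation.

```python
from itertools import count

_new_state = count()

def nfa_symbol(sym):
    """Elementary NFA accepting a single leaf component (symbol)."""
    s, t = next(_new_state), next(_new_state)
    return {"start": s, "accept": t, "edges": [(s, sym, t)]}

def nfa_sequence(a, b):
    """JSD sequence: a then b (epsilon edge from a's accept to b's start)."""
    return {"start": a["start"], "accept": b["accept"],
            "edges": a["edges"] + b["edges"] + [(a["accept"], None, b["start"])]}

def nfa_selection(a, b):
    """JSD selection: either a or b (like regular-expression alternation)."""
    s, t = next(_new_state), next(_new_state)
    edges = a["edges"] + b["edges"] + [
        (s, None, a["start"]), (s, None, b["start"]),
        (a["accept"], None, t), (b["accept"], None, t)]
    return {"start": s, "accept": t, "edges": edges}

def nfa_iteration(a):
    """JSD iteration: zero or more repetitions of a (like the Kleene star)."""
    s, t = next(_new_state), next(_new_state)
    edges = a["edges"] + [
        (s, None, a["start"]), (a["accept"], None, a["start"]),
        (s, None, t), (a["accept"], None, t)]
    return {"start": s, "accept": t, "edges": edges}

# Example: a record is a header followed by zero or more detail lines
record = nfa_sequence(nfa_symbol("header"), nfa_iteration(nfa_symbol("detail")))
print(len(record["edges"]), "transitions")
```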

Relevance: 100.00%

Abstract:

This thesis investigated methods of improving the IOP pulse's potential as a clinically useful measure. There were three principal sections to the work.

1. Optimisation of measurement and analysis of the IOP pulse. A literature review, covering the years 1960-2002 and other relevant scientific publications, provided a knowledge base on the IOP pulse. Initial studies investigated suitable instrumentation and measurement techniques. Fourier transformation was identified as a promising method of analysing the IOP pulse, and this technique was developed.

2. Investigation of ocular and systemic variables that affect IOP pulse measurements. In order to recognise clinically important changes in IOP pulse measurements, studies were performed to identify influencing factors. Fourier analysis was tested against traditional parameters to assess its ability to detect differences in the IOP pulse. In addition, it had been speculated that the waveform components of the IOP pulse contain vascular characteristics analogous to those found in arterial pulse waves; validation studies to test this hypothesis were attempted.

3. The nature of the intraocular pressure pulse in health and disease and its relation to systemic cardiovascular variables. Fourier analysis and traditional parameters were applied to IOP pulse measurements taken on diseased and healthy eyes. Only the derived parameter, pulsatile ocular blood flow (POBF), detected differences in the diseased groups. The use of an ocular pressure-volume relationship may have improved the POBF measure's variance in comparison to measurement of the pulse's amplitude or Fourier components. Finally, the importance of the driving force of pulsatile blood flow, the arterial pressure pulse, is highlighted. A method of combining measurements of pulsatile blood flow and pulsatile blood pressure to create a measure of ocular vascular impedance is described, along with its advantages for future studies.
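An illustrative sketch of the Fourier treatment of a sampled pressure pulse, using a synthetic waveform in place of real IOP data; the sampling rate, heart rate and noise level are assumptions. The amplitude of the component at the cardiac frequency is the kind of quantity that can be compared against traditional pulse-amplitude parameters.

```python
import numpy as np

fs = 200.0                      # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)    # ten seconds of recording
heart_rate = 1.2                # cardiac frequency in Hz (~72 bpm, assumed)

# Synthetic IOP trace: mean level plus a pulsatile component and noise (mmHg)
iop = 15.0 + 1.5 * np.sin(2 * np.pi * heart_rate * t) \
           + 0.2 * np.random.default_rng(1).normal(size=t.size)

# Discrete Fourier transform; amplitudes rescaled to mmHg per component
spectrum = np.fft.rfft(iop - iop.mean())
freqs = np.fft.rfftfreq(iop.size, d=1 / fs)
amplitude = 2 * np.abs(spectrum) / iop.size

fundamental = freqs[np.argmax(amplitude)]
print(f"dominant component at {fundamental:.2f} Hz, "
      f"amplitude {amplitude.max():.2f} mmHg")
```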

Relevance: 100.00%

Abstract:

As more of the economy moves from traditional manufacturing to the service sector, the nature of work is becoming less tangible and, thus, the representation of human behaviour in models is becoming more important. Representing human behaviour and decision making in models is challenging, both in terms of capturing the essence of the processes and in terms of the way those behaviours and decisions are, or can be, represented in the models themselves. In order to advance understanding in this area, a useful first step is to evaluate and start to classify the various types of behaviour and decision making that need to be modelled. This talk will attempt to set out and provide an initial classification of the different types of behaviour and decision making that a modeller might want to represent in a model. It will then be useful to start to assess the main simulation methods in terms of their capability to represent these various aspects. The three main simulation methods, System Dynamics, Agent Based Modelling and Discrete Event Simulation, all achieve this to varying degrees, and there is some evidence that all three can, within limits, represent the key aspects of the system being modelled. The three approaches are then assessed for their suitability in modelling these various aspects. Illustrations of behavioural modelling are provided from cases in supply chain management, evacuation modelling and rail disruption.

Relevance: 100.00%

Abstract:

Background: Major Depressive Disorder (MDD) is among the most prevalent and disabling medical conditions worldwide. Identification of clinical and biological markers ("biomarkers") of treatment response could personalize clinical decisions and lead to better outcomes. This paper describes the aims, design, and methods of a discovery study of biomarkers of antidepressant treatment response, conducted by the Canadian Biomarker Integration Network in Depression (CAN-BIND). The CAN-BIND research program investigates and identifies biomarkers that help to predict outcomes in patients with MDD treated with antidepressant medication. The primary objective of this initial study (known as CAN-BIND-1) is to identify individual and integrated neuroimaging, electrophysiological, molecular, and clinical predictors of response to sequential antidepressant monotherapy and adjunctive therapy in MDD.

Methods: CAN-BIND-1 is a multisite initiative involving 6 academic health centres working collaboratively with other universities and research centres. In the 16-week protocol, patients with MDD are treated with a first-line antidepressant (escitalopram 10-20 mg/d) that, if clinically warranted after eight weeks, is augmented with an evidence-based add-on medication (aripiprazole 2-10 mg/d). Comprehensive datasets are obtained using clinical rating scales; behavioural, dimensional, and functioning/quality-of-life measures; neurocognitive testing; genomic, genetic, and proteomic profiling from blood samples; combined structural and functional magnetic resonance imaging; and electroencephalography. De-identified data from all sites are aggregated within a secure neuroinformatics platform for data integration, management, storage, and analyses. Statistical analyses will include multivariate and machine-learning techniques to identify predictors, moderators, and mediators of treatment response.

Discussion: From June 2013 to February 2015, a cohort of 134 participants (85 outpatients with MDD and 49 healthy participants) was evaluated at baseline. The clinical characteristics of this cohort are similar to those in other studies of MDD. Recruitment at all sites is ongoing towards a target sample of 290 participants. CAN-BIND will identify biomarkers of treatment response in MDD through extensive clinical, molecular, and imaging assessments, in order to improve treatment practice and clinical outcomes. It will also create an innovative, robust platform and database for future research.

Trial registration: ClinicalTrials.gov identifier NCT01655706. Registered July 27, 2012.