43 results for Stochastic Subspace System Identification
Abstract:
Stochastic differential equations arise naturally in a range of contexts, from financial to environmental modeling. Current solution methods are limited in their representation of the posterior process in the presence of data. In this work, we present a novel Gaussian process approximation to the posterior measure over paths for a general class of stochastic differential equations in the presence of observations. The method is applied to two simple problems: the Ornstein-Uhlenbeck process, for which the exact solution is known and can be used for comparison, and the double-well system, for which standard approaches such as the ensemble Kalman smoother fail to provide a satisfactory result. Experiments show that our variational approximation is viable and that the results are very promising, as the variational approximate solution outperforms standard Gaussian process regression for non-Gaussian Markov processes.
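The Ornstein-Uhlenbeck process serves as the benchmark above precisely because its moments are available in closed form. As an illustration only (this code is not from the paper; parameters and discretisation are assumptions), a minimal Euler-Maruyama simulator can be checked against the exact conditional mean:

```python
import math
import random

def simulate_ou(theta, mu, sigma, x0, dt, n_steps, rng):
    """Euler-Maruyama simulation of dX = theta*(mu - X) dt + sigma dW."""
    x = x0
    path = [x]
    for _ in range(n_steps):
        x += theta * (mu - x) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

def exact_ou_mean(theta, mu, x0, t):
    """Exact conditional mean E[X_t | X_0 = x0] of the OU process."""
    return mu + (x0 - mu) * math.exp(-theta * t)
```

Averaging the endpoint of many simulated paths should reproduce the exact conditional mean up to Monte Carlo and discretisation error, which is the kind of sanity check the exact OU solution enables.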
Abstract:
Recently identified genes located downstream (3') of the msmEF (transport encoding) gene cluster, msmGH, and located 5' of the structural genes for methanesulfonate monooxygenase (MSAMO) are described from Methylosulfonomonas methylovora. Sequence analysis of the derived polypeptide sequences encoded by these genes revealed a high degree of identity to ABC-type transporters. MsmE showed similarity to a putative periplasmic substrate-binding protein, MsmF resembled an integral membrane-associated protein, and MsmG was a putative ATP-binding enzyme. MsmH was thought to be the cognate permease component of the sulfonate transport system. The close association of these putative transport genes with the MSAMO structural genes msmABCD suggested a role for these genes in the transport of methanesulfonic acid (MSA) into M. methylovora. msmEFGH and msmABCD constituted two operons for the coordinated expression of the MSAMO and MSA transporter systems. Reverse-transcription-PCR analysis of msmABCD and msmEFGH revealed differential expression of these genes during growth on MSA and methanol. The msmEFGH operon was constitutively expressed, whereas MSA induced expression of msmABCD. A mutant defective in msmE had considerably slower growth rates than the wild type, thus supporting the proposed role of MsmE in the transport of MSA into M. methylovora.
Abstract:
The aim of this work was to design and build equipment that can detect ferrous and non-ferrous objects in conveyed commodities, discriminate between them, and locate the object both along the belt and across the width of the belt. The magnetic induction mechanism was used as the means of achieving the objectives of this research. In order to choose the appropriate geometry and size of the induction field source, the field distributions of different source geometries and sizes were studied in detail. From these investigations it was found that the square loop geometry is the most appropriate field-generating source for the purpose of this project. The phenomenon of field distribution in the conductors was also investigated. An instrument was designed and built at the preliminary stages of the work, based on a flux-gate magnetometer, with the ability to detect only ferrous objects. The instrument was designed such that it could be used to detect ferrous objects in the coal conveyors of power stations. The advantages of employing this detector in the power industry over the present ferrous metal electromagnetic separators were also considered. The objectives of this project culminated in the design and construction of a ferrous and non-ferrous detector with the ability to discriminate between ferrous and non-ferrous metals and to locate the objects on the conveying system. An experimental study was carried out to test the performance of the equipment in the detection of ferrous and non-ferrous objects of a given size carried on the conveyor belt. The ability of the equipment to discriminate between the types of metals and to locate the object on the belt was also evaluated experimentally. The benefits which can be gained from industrial implementation of the equipment were considered. Further topics which may be investigated as an extension of this work are given.
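The choice of a square loop as the field-generating source can be illustrated numerically. The sketch below is not the thesis's own software; the discretisation and parameters are assumptions. It evaluates the on-axis field of a square current loop by a discretised Biot-Savart sum and checks the centre value against the textbook closed form B = 2*sqrt(2)*mu0*I/(pi*L):

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability (T*m/A)

def square_loop_bz(side, current, z, n_seg=1000):
    """Axial field B_z of a square current loop via a discretised Biot-Savart sum.

    The loop lies in the z = 0 plane, centred on the origin, traversed
    counterclockwise; the field point is (0, 0, z)."""
    a = side / 2.0
    corners = [(-a, -a), (a, -a), (a, a), (-a, a)]
    bz = 0.0
    for (x0, y0), (x1, y1) in zip(corners, corners[1:] + corners[:1]):
        for k in range(n_seg):
            # midpoint of the k-th sub-segment of this side
            t = (k + 0.5) / n_seg
            mx = x0 + (x1 - x0) * t
            my = y0 + (y1 - y0) * t
            dlx, dly = (x1 - x0) / n_seg, (y1 - y0) / n_seg
            rx, ry, rz = -mx, -my, z  # vector from segment to field point
            r = math.sqrt(rx * rx + ry * ry + rz * rz)
            # z-component of dl x r, divided by r^3
            bz += (dlx * ry - dly * rx) / r ** 3
    return MU0 * current / (4 * math.pi) * bz
```

Evaluating the same sum at points off the loop plane shows how quickly the field decays with distance, which is the kind of source-geometry comparison the investigation describes.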
Abstract:
National meteorological offices are largely concerned with synoptic-scale forecasting, where weather predictions are produced for a whole country for 24 hours ahead. In practice, many local organisations (such as emergency services, construction industries, forestry, farming, and sports) require only local, short-term, bespoke weather predictions and warnings. This thesis shows that these less-demanding requirements do not call for exceptional computing power and can be met by a modern, desk-top system which monitors site-specific ground conditions (such as temperature, pressure, wind speed and direction, etc.) augmented with above-ground information from satellite images to produce `nowcasts'. The emphasis in this thesis has been towards the design of such a real-time system for nowcasting. Local site-specific conditions are monitored using a custom-built, stand-alone, Motorola 6809 based sub-system. Above-ground information is received from the METEOSAT 4 geo-stationary satellite using a sub-system based on commercially available equipment. The information is ephemeral and must be captured in real-time. The real-time nowcasting system for localised weather handles the data as a transparent task using the limited capabilities of the PC system. Ground data produces a time series of measurements at a specific location which represents the past-to-present atmospheric conditions of the particular site, from which much information can be extracted. The novel approach adopted in this thesis is one of constructing stochastic models based on the AutoRegressive Integrated Moving Average (ARIMA) technique. The satellite images contain features (such as cloud formations) which evolve dynamically and may be subject to movement, growth, distortion, bifurcation, superposition, or elimination between images.
The process of extracting a weather feature, following its motion and predicting its future evolution involves algorithms for normalisation, partitioning, filtering, image enhancement, and correlation of multi-dimensional signals in different domains. To limit the processing requirements, the analysis in this thesis concentrates on an `area of interest'. By this rationale, only a small fraction of the total image needs to be processed, leading to a major saving in time. The thesis also proposes an extension to an existing manual cloud classification technique for its implementation in automatically classifying a cloud feature over the `area of interest' for nowcasting using the multi-dimensional signals.
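The ARIMA-based modelling of the ground-data time series can be illustrated with a much-reduced sketch. The code below fits only an AR(1) recursion by least squares and iterates it forward as a short-range forecast; it is a hypothetical simplification of the ARIMA technique, not the thesis's implementation:

```python
def fit_ar1(series):
    """Least-squares fit of x_t = a + b * x_{t-1} (a minimal AR(1) stand-in for ARIMA)."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    b = cov / var
    a = my - b * mx
    return a, b

def forecast(series, a, b, steps):
    """Iterate the fitted AR(1) map from the last observation to produce a short-range forecast."""
    out, x = [], series[-1]
    for _ in range(steps):
        x = a + b * x
        out.append(x)
    return out
```

A full ARIMA model would add differencing and a moving-average term on top of this autoregressive core; the point of the sketch is only that a past-to-present measurement series at one site supports a stochastic one-step-ahead prediction.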
Abstract:
This thesis is concerned with the measurement of the characteristics of nonlinear systems by crosscorrelation, using pseudorandom input signals based on m sequences. The systems are characterised by Volterra series, and analytical expressions relating the rth order Volterra kernel to r-dimensional crosscorrelation measurements are derived. It is shown that the two-dimensional crosscorrelation measurements are related to the corresponding second order kernel values by a set of equations which may be structured into a number of independent subsets. The m sequence properties determine how the maximum order of the subsets for off-diagonal values is related to the upper bound of the arguments for nonzero kernel values. The upper bound of the arguments is used as a performance index, and the performance of antisymmetric pseudorandom binary, ternary and quinary signals is investigated. The performance indices obtained above are small in relation to the periods of the corresponding signals. To achieve higher performance with ternary signals, a method is proposed for combining the estimates of the second order kernel values so that the effects of some of the undesirable nonzero values in the fourth order autocorrelation function of the input signal are removed. The identification of the dynamics of two-input, single-output systems with multiplicative nonlinearity is investigated. It is shown that the characteristics of such a system may be determined by crosscorrelation experiments using phase-shifted versions of a common signal as inputs. The effects of nonlinearities on the estimates of system weighting functions obtained by crosscorrelation are also investigated. Results obtained by correlation testing of an industrial process are presented, and the differences between theoretical and experimental results are discussed for this case.
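The first-order (linear) case of the crosscorrelation identification described above can be sketched as follows. A maximal-length +/-1 sequence is generated by an LFSR, and the normalised input-output crosscorrelation recovers the first-order Volterra kernel up to an O(1/N) bias arising from the m-sequence's off-peak autocorrelation of -1. The tap positions, test system and tolerances below are illustrative assumptions, not taken from the thesis:

```python
def m_sequence(degree, taps, length):
    """Generate a +/-1 m-sequence from a Fibonacci LFSR (taps are 1-indexed stages)."""
    state = [1] * degree
    seq = []
    for _ in range(length):
        out = state[-1]
        seq.append(1 if out else -1)
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return seq

def estimate_kernel(u, y, max_lag):
    """First-order Volterra kernel (impulse response) via crosscorrelation.

    For an m-sequence input of period N, (1/N) * sum_n y[n] * u[n-k]
    equals h[k] minus an O(1/N) bias from the other kernel values."""
    N = len(u)
    return [sum(y[n] * u[(n - k) % N] for n in range(N)) / N
            for k in range(max_lag)]
```

Higher-order kernels require the r-dimensional crosscorrelations and the subset-structured equation systems the thesis derives; the sketch shows only the underlying correlation mechanism.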
Abstract:
The identification of disease clusters in space or space-time is of vital importance for public health policy and action. In the case of methicillin-resistant Staphylococcus aureus (MRSA), it is particularly important to distinguish between community and health care-associated infections, and to identify reservoirs of infection. 832 cases of MRSA in the West Midlands (UK) were tested for clustering and evidence of community transmission, after being geo-located to the centroids of UK unit postcodes (postal areas roughly equivalent to Zip+4 zip code areas). An age-stratified analysis was also carried out at the coarser spatial resolution of UK Census Output Areas. Stochastic simulation and kernel density estimation were combined to identify significant local clusters of MRSA (p<0.025), which were supported by SaTScan spatial and spatio-temporal scans. In order to investigate local sampling effort, a spatial 'random labelling' approach was used, with MRSA as cases and MSSA (methicillin-sensitive S. aureus) as controls. Heavy sampling in general was a response to MRSA outbreaks, which in turn appeared to be associated with medical care environments. The significance of clusters identified by kernel estimation was independently supported by information on the locations and client groups of nursing homes, and by preliminary molecular typing of isolates. In the absence of occupational/lifestyle data on patients, the assumption was made that an individual's location and consequent risk is adequately represented by their residential postcode. The problems of this assumption are discussed, with recommendations for future data collection.
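The combination of kernel density estimation with a 'random labelling' Monte Carlo test can be sketched in one dimension (the study worked with two-dimensional postcode centroids; the data, bandwidth and simulation count below are invented for illustration):

```python
import math
import random

def gaussian_kde(points, x, bandwidth):
    """Gaussian kernel density estimate at location x (1-D for brevity)."""
    n = len(points)
    return sum(math.exp(-0.5 * ((x - p) / bandwidth) ** 2)
               for p in points) / (n * bandwidth * math.sqrt(2 * math.pi))

def relabelling_pvalue(cases, controls, x, bandwidth, n_sim, rng):
    """Random-labelling Monte Carlo test: is the case density at x higher
    than expected if 'case' labels were assigned at random among all isolates?"""
    observed = gaussian_kde(cases, x, bandwidth)
    pooled = cases + controls
    n_cases = len(cases)
    exceed = 0
    for _ in range(n_sim):
        sample = rng.sample(pooled, n_cases)  # random relabelling of the pool
        if gaussian_kde(sample, x, bandwidth) >= observed:
            exceed += 1
    return (exceed + 1) / (n_sim + 1)
```

Using MSSA isolates as the control set, as in the study, makes this a test of case clustering over and above local sampling effort rather than over a uniform background.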
Abstract:
This work is concerned with approximate inference in dynamical systems, from a variational Bayesian perspective. When modelling real world dynamical systems, stochastic differential equations appear as a natural choice, mainly because of their ability to model the noise of the system by adding a variation of some stochastic process to the deterministic dynamics. Hence, inference in such processes has drawn much attention. Here a new extended framework is derived that is based on a local polynomial approximation of a recently proposed variational Bayesian algorithm. The paper begins by showing that the new extension of this variational algorithm can be used for state estimation (smoothing) and converges to the original algorithm. However, the main focus is on estimating the (hyper-) parameters of these systems (i.e. drift parameters and diffusion coefficients). The new approach is validated on a range of different systems which vary in dimensionality and non-linearity. These are the Ornstein–Uhlenbeck process, the exact likelihood of which can be computed analytically, the univariate and highly non-linear stochastic double well, and the multivariate chaotic stochastic Lorenz ’63 (3D model). As a special case the algorithm is also applied to the 40-dimensional stochastic Lorenz ’96 system. In our investigation we compare this new approach with a variety of other well known methods, such as hybrid Monte Carlo, the dual unscented Kalman filter and the full weak-constraint 4D-Var algorithm, and analyse empirically their asymptotic behaviour as the observation density or the length of the time window increases. In particular we show that we are able to estimate parameters in both the drift (deterministic) and the diffusion (stochastic) part of the model evolution equations using our new methods.
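For the Ornstein-Uhlenbeck benchmark mentioned above, drift and diffusion parameters can be estimated in closed form from discretely observed data, which is what makes it a useful reference for approximate methods. A minimal sketch of this standard AR(1)-based maximum-likelihood estimate (not the paper's variational algorithm; parameters are illustrative):

```python
import math
import random

def simulate_ou(theta, sigma, x0, dt, n, rng):
    """Exact discrete-time sampling of dX = -theta*X dt + sigma dW."""
    a = math.exp(-theta * dt)
    s = sigma * math.sqrt((1 - a * a) / (2 * theta))
    xs = [x0]
    for _ in range(n):
        xs.append(a * xs[-1] + s * rng.gauss(0.0, 1.0))
    return xs

def estimate_ou(xs, dt):
    """Closed-form MLE of (theta, sigma) from a discretely observed zero-mean OU path."""
    num = sum(x * y for x, y in zip(xs[:-1], xs[1:]))
    den = sum(x * x for x in xs[:-1])
    a_hat = num / den                       # AR(1) coefficient e^{-theta*dt}
    theta_hat = -math.log(a_hat) / dt
    resid = [y - a_hat * x for x, y in zip(xs[:-1], xs[1:])]
    var = sum(r * r for r in resid) / len(resid)
    sigma_hat = math.sqrt(var * 2 * theta_hat / (1 - a_hat * a_hat))
    return theta_hat, sigma_hat
```

Recovering both the drift parameter theta and the diffusion coefficient sigma from one path mirrors, in the simplest possible setting, the drift-and-diffusion estimation problem the paper addresses for nonlinear systems.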
Abstract:
The work presents a new method that combines plasma etching with extrinsic techniques to simultaneously measure matrix and surface protein and lipid deposits. The acronym for this technique is PEEMS - Plasma Etching and Emission Monitoring System. Previous work has identified the presence of proteinaceous and lipoidal deposition on the surface of contact lenses and highlighted the probability that penetration of these spoilants will occur. The technique developed here allows unambiguous identification of the depth of penetration of spoilants for various material types, and it is for this reason that the technique has been employed in this thesis. The technique is applied as a 'molecular' scalpel, removing known amounts of material from the target, in this case from both the anterior and posterior surfaces of a 'soft' contact lens. The residual material is then characterised by other analytical techniques such as UV/visible and fluorescence spectroscopy. Several studies have been carried out for both in vivo and in vitro spoilt materials. The analysis and identification of absorbed protein and lipid on the substrate revealed the importance of many factors in the absorption and adsorption process. The effects of material structure, protein nature (in terms of size, shape and charge) and environmental conditions were examined in order to determine the relative uptake of tear proteins. The studies were extended to real cases in order to study patient-dependent factors and lipoidal penetration.
Abstract:
The thermal oxidation of two model compounds representing the aromatic polyamide MXD6 (poly m-xylylene adipamide) has been investigated. The model compounds (having different chemical structures, viz. one corresponding to the aromatic part of the chain and the other to the aliphatic part), based on the structure of MXD6, were prepared and reactions with different concentrations of cobalt ions examined with the aim of identifying the role of the different structural components of MXD6 in the mechanism of oxidation. The study showed that cobalt, in the presence of sodium phosphite (which acts as an antioxidant for MXD6 and the model compounds), increases the oxidation of the model compounds. It is believed that the cobalt acts predominantly as a catalyst for the decomposition of hydroperoxides, formed during oxidation of the models in the melt phase, to free radical products and, to a lesser extent, as a catalyst for the initiation of the oxidation reaction by complex formation with the amide, which is more likely to take place in the solid phase. An oxidation cycle has been proposed consisting of two parts, both of which will occur to some extent under all conditions of oxidation (in the melt and in the solid phase), but their individual predominance must be determined by the prevailing oxygen pressure at the reaction site. The different aspects of this proposed mechanism were examined through extensive model compound studies, with the evidence based on the nature of product formation and the kinetics of these reactions. The main techniques used to compare the rates of oxidation and to study the kinetics included oxygen absorption, FT-IR, UV and TGA. HPLC was used for product separation and identification.
Abstract:
A critical review of previous research revealed that visual attention tests, such as the Useful Field of View (UFOV) test, provided the best means of detecting age-related changes to the visual system that could potentially increase crash risk. However, the question was raised as to whether the UFOV, which was regarded as a static visual attention test, could be improved by inclusion of kinetic targets that more closely represent the driving task. A computer program was written to provide more information about the derivation of UFOV test scores. Although this investigation succeeded in providing new information, some of the commercially protected UFOV test procedures still remain unknown. Two kinetic visual attention tests (DRTS1 and 2), developed at Aston University to investigate the inclusion of kinetic targets in visual attention tests, were introduced. The UFOV was found to be more repeatable than either of the kinetic visual attention tests, and neither learning effects nor age influenced these findings. Determinants of static and kinetic visual attention were explored. Increasing target eccentricity led to reduced performance on the UFOV and DRTS1 tests. The DRTS2 was not affected by eccentricity, but this may have been due to the style of presentation of its targets. This might also have explained why only the DRTS2 showed laterality effects (i.e. better performance for targets presented on the left hand side of the road). Radial location, explored using the UFOV test, showed that subjects responded best to targets positioned on the horizontal meridian. Distraction had opposite effects on static and kinetic visual attention. While UFOV test performance declined with distraction, DRTS1 performance increased. Previous research had shown that this striking difference was to be expected.
Whereas the detection of static targets is attenuated in the presence of distracting stimuli, distracting stimuli that move in a structured flow field enhance the detection of moving targets. Subjects reacted more slowly to kinetic compared to static targets, to longitudinal motion compared to angular motion, and to increased self-motion. However, the effects of longitudinal motion, angular motion, self-motion and even target eccentricity were caused by target edge speed variations arising because of optic flow field effects. The UFOV test was more able to detect age-related changes to the visual system than were either of the kinetic visual attention tests. The driving samples investigated were too limited to draw firm conclusions. Nevertheless, the results presented showed that neither the DRTS2 nor the UFOV tests were powerful tools for the identification of drivers prone to crashes or poor driving performance.
Abstract:
The warehouse is an essential component of the supply chain, linking the chain partners and providing them with functions of product storage, inbound and outbound operations, along with value-added processes. Allocation of warehouse resources should be efficient and effective to achieve optimum productivity and reduce operational costs. Radio frequency identification (RFID) is a technology capable of providing real-time information about supply chain operations. It has been used by warehousing and logistic enterprises to achieve reduced shrinkage, improved material handling and tracking, as well as increased accuracy of data collection. However, both academics and practitioners express concerns about challenges to RFID adoption in the supply chain. This paper provides a comprehensive analysis of the problems encountered in RFID implementation at warehouses, discussing the theoretical and practical adoption barriers and the causes of not achieving the full potential of the technology. Lack of foreseeable return on investment (ROI) and high costs are the most commonly reported obstacles. The variety of standards and radio wave frequencies is identified as a source of concern for decision makers. Inaccurate performance of RFID within the warehouse environment is examined. A description of the integration challenges between warehouse management systems and RFID technology is given. The paper discusses the existing solutions to technological, investment and performance RFID adoption barriers. Factors to consider when implementing the RFID technology are given to help alleviate implementation problems. By illustrating the challenges of RFID in the warehouse environment and discussing possible solutions, the paper aims to help both academics and practitioners to focus on the key areas constituting an obstacle to the technology's growth. As more studies address these challenges, the realisation of RFID benefits for warehouses and the supply chain will become a reality.
Abstract:
Control design for stochastic uncertain nonlinear systems is traditionally based on minimizing the expected value of a suitably chosen loss function. Moreover, most control methods usually assume the certainty equivalence principle to simplify the problem and make it computationally tractable. We offer an improved probabilistic framework which is not constrained by these previous assumptions, and provides a more natural framework for incorporating and dealing with uncertainty. The focus of this paper is on developing this framework to obtain an optimal control law strategy using a fully probabilistic approach for information extraction from process data, which does not require detailed knowledge of system dynamics. Moreover, the proposed control framework allows handling the problem of input-dependent noise. A basic paradigm is proposed and the resulting algorithm is discussed. The proposed probabilistic control method applies to the general nonlinear class of discrete-time systems. It is demonstrated theoretically on the affine class. A nonlinear simulation example is also provided to validate the theoretical development.
Abstract:
The problem of separating structured information representing phenomena of differing natures is considered. A structure is assumed to be independent of the others if it can be represented in a complementary subspace. When the concomitant subspaces are well separated, the problem is readily solvable by a linear technique. Otherwise, the linear approach fails to correctly discriminate the required information. Hence, a non-extensive approach is proposed. The resulting nonlinear technique is shown to be suitable for dealing with cases that cannot be tackled by the linear one.
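The linear technique referred to above amounts to an oblique projection: if the two structures span complementary subspaces, a signal splits uniquely into one component in each, found by solving a small linear system in the combined basis. A sketch under assumed toy bases (the paper's nonlinear, non-extensive method is not reproduced here):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def separate(v_basis, w_basis, x):
    """Split x into its components in span(v_basis) and span(w_basis).

    Assumes the two bases together span the whole space (complementary
    subspaces), so the decomposition x = v + w is unique."""
    cols = v_basis + w_basis  # basis vectors become the columns of A
    n = len(x)
    A = [[cols[j][i] for j in range(n)] for i in range(n)]
    coeffs = solve(A, list(x))
    k = len(v_basis)
    v = [sum(coeffs[j] * v_basis[j][i] for j in range(k)) for i in range(n)]
    w = [xi - vi for xi, vi in zip(x, v)]
    return v, w
```

When the subspaces are nearly parallel the matrix A becomes ill-conditioned and this direct solve degrades, which is precisely the regime motivating the nonlinear approach in the abstract.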
Abstract:
Nearest feature line-based subspace analysis is first proposed in this paper. Compared with conventional methods, the newly proposed one offers better generalization performance and supports incremental analysis. The projection point and feature line distance are expressed as a function of a subspace, which is obtained by minimizing the mean square feature line distance. Moreover, by adopting a stochastic approximation rule to minimize the objective function in a gradient manner, the new method can be performed in an incremental mode, which makes it work well on future data. Experimental results on the FERET face database and the UCI satellite image database demonstrate its effectiveness.
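The projection point and feature-line distance at the heart of the method can be written down directly: the feature line through two prototypes p1, p2 is parameterised as p1 + t*(p2 - p1), and the projection of a query x onto it has t = <x - p1, p2 - p1> / ||p2 - p1||^2. A minimal sketch (notation and test vectors are illustrative, not from the paper):

```python
import math

def feature_line_distance(x, p1, p2):
    """Distance from query x to the feature line through prototypes p1 and p2.

    Returns (distance, projection point), where the projection point is
    p1 + t*(p2 - p1) with t = <x - p1, p2 - p1> / <p2 - p1, p2 - p1>."""
    d = [b - a for a, b in zip(p1, p2)]
    t = sum((xi - a) * di for xi, a, di in zip(x, p1, d)) / sum(di * di for di in d)
    proj = [a + t * di for a, di in zip(p1, d)]
    return math.dist(x, proj), proj
```

In the paper this distance is further expressed as a function of a learned subspace and minimised on average over the training set; the sketch shows only the underlying geometry.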
Abstract:
The airway epithelium is the first point of contact in the lung for inhaled material, including infectious pathogens and particulate matter, and protects against toxicity from these substances by trapping and clearance via the mucociliary escalator, the presence of a protective barrier with tight junctions, and initiation of a local inflammatory response. The inflammatory response involves recruitment of phagocytic cells to neutralise and remove invading materials and is often modelled using rodents. However, development of valid in vitro airway epithelial models is of great importance due to the restrictions on animal studies for cosmetic compound testing implicit in the 7th amendment to the European Union Cosmetics Directive. Further, rodent innate immune responses differ fundamentally from those of humans. Pulmonary endothelial cells and leukocytes are also involved in the innate response initiated during pulmonary inflammation. Co-culture models of the airways, in particular where epithelial cells are cultured at air liquid interface with the presence of tight junctions and differentiated mucociliary cells, offer a solution to this problem. Ideally, validated models will allow for detection of early biomarkers of response to exposure and investigation into the inflammatory response during exposure. This thesis describes the approaches taken towards developing an in vitro epithelial/endothelial cell model of the human airways and the identification of biomarkers of response to exposure to xenobiotics. The model comprised normal human primary microvascular endothelial cells and the bronchial epithelial cell line BEAS-2B or normal human bronchial epithelial cells. BEAS-2B were chosen because, although their characterisation at air liquid interface is limited, they are robust in culture and were therefore predicted to provide a more reliable test system. Proteomics analysis was undertaken on challenged cells to investigate biomarkers of exposure.
BEAS-2B morphology was characterised at air liquid interface and compared with normal human bronchial epithelial cells. The results indicate that BEAS-2B cells at an air liquid interface form tight junctions, as shown by expression of the tight junction protein zonula occludens-1. To this author's knowledge this is the first time this result has been reported. The inflammatory response of BEAS-2B (measured as secretion of the inflammatory mediators interleukin-8 and -6) air liquid interface mono-cultures to Escherichia coli lipopolysaccharide or particulate matter (fine and ultrafine titanium dioxide) was comparable to published data for epithelial cells. Cells were also exposed to polymers of "commercial interest" which were in the nanoparticle range (and referred to as particles hereafter). BEAS-2B mono-cultures showed an increased secretion of inflammatory mediators after challenge. Inclusion of microvascular endothelial cells resulted in protection against LPS- and particle-induced epithelial toxicity, measured as cell viability and inflammatory response, indicating the importance of co-cultures for investigations into toxicity. Two-dimensional proteomic analysis of lysates from particle-challenged cells failed to identify biomarkers of toxicity due to assay interference and experimental variability. Separately, decreased plasma concentrations of serine protease inhibitors, and of the negative acute phase proteins transthyretin, histidine-rich glycoprotein and alpha2-HS glycoprotein, were identified as potential biomarkers of methyl methacrylate/ethyl methacrylate/butylacrylate treatment in rats.