19 results for thermo-dynamical
in Aston University Research Archive
Abstract:
In the analysis and prediction of many real-world time series, the assumption of stationarity is not valid. A special form of non-stationarity, where the underlying generator switches between (approximately) stationary regimes, seems particularly appropriate for financial markets. We introduce a new model which combines dynamic switching (controlled by a hidden Markov model) with a non-linear dynamical system. We show how to train this hybrid model within a maximum likelihood framework and evaluate its performance on both synthetic and financial data.
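As a rough illustration of this construction (not the authors' implementation), the sketch below combines a two-state hidden Markov chain with two non-linear autoregressive maps and evaluates the maximum-likelihood objective with the forward algorithm; all names, dynamics and parameter values are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of a hidden-Markov-switching non-linear dynamical system.
# Two regimes, each a non-linear AR(1) map with Gaussian noise; everything
# here (transition matrix, maps, noise scale) is an illustrative assumption.

rng = np.random.default_rng(0)

A = np.array([[0.95, 0.05],               # regime transition matrix
              [0.10, 0.90]])
fs = [lambda x: 0.9 * x,                  # regime 0: near-linear drift
      lambda x: 0.5 * np.tanh(2 * x)]     # regime 1: saturating non-linearity
sigma = 0.1

def simulate(T):
    z, x, xs = 0, 0.0, []
    for _ in range(T):
        z = rng.choice(2, p=A[z])         # switch regime
        x = fs[z](x) + sigma * rng.normal()
        xs.append(x)
    return np.array(xs)

def log_likelihood(xs):
    """Forward algorithm: marginalise over the hidden regime sequence."""
    alpha = np.array([0.5, 0.5])          # filtered regime posterior
    ll, prev = 0.0, 0.0                   # first step conditions on prev = 0
    for x in xs:
        # per-regime predictive density of x given the previous observation
        like = np.array([np.exp(-(x - f(prev))**2 / (2 * sigma**2))
                         / np.sqrt(2 * np.pi * sigma**2) for f in fs])
        alpha = (alpha @ A) * like
        c = alpha.sum()
        ll += np.log(c)
        alpha /= c
        prev = x
    return ll

print("log-likelihood:", log_likelihood(simulate(500)))
```

Maximum likelihood training would then optimise the transition matrix, map parameters and noise scale against this objective, e.g. by EM or gradient ascent.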
Abstract:
A 21-residue peptide in explicit water has been simulated using classical molecular dynamics. The system's trajectory has been analysed with a novel approach that quantifies how the trajectories of an atom's environment are explored. The approach is based on the measure of Statistical Complexity, which extracts complete dynamical information from the signal. The introduced characteristic quantifies the system's dynamics on the nanosecond time scale. It has been found that the peptide exhibits nanosecond-long periods that differ significantly in the rate at which the dynamically allowed configurations of the environment are explored. Within each such period the rate remains constant, but it differs both from the rates of other periods and from the rate observed for water. Periods of dynamical frustration are detected, during which only limited routes in the space of possible trajectories of the surrounding atoms are realised.
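The paper's Statistical Complexity measure is specific to the authors' construction; as a hedged stand-in, the sketch below illustrates the underlying idea of an exploration rate by counting the distinct symbol "words" a discretised trajectory visits per sliding window.

```python
import numpy as np

# Hedged sketch: one simple proxy for how fast a trajectory explores its
# allowed configurations -- the fraction of distinct "words" appearing in a
# symbolised signal over sliding windows. The paper's actual Statistical
# Complexity measure is more elaborate; this only illustrates the idea.

def symbolise(x, n_bins=8):
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    return np.digitize(x, edges)          # map samples to symbols 0..n_bins-1

def exploration_rate(sym, word_len=4, window=500):
    """Fraction of distinct word_len-grams seen in each sliding window."""
    words = [tuple(sym[i:i + word_len]) for i in range(len(sym) - word_len)]
    rates = []
    for start in range(0, len(words) - window, window):
        chunk = words[start:start + window]
        rates.append(len(set(chunk)) / window)
    return np.array(rates)

x = np.sin(0.01 * np.arange(20000)) + 0.3 * np.random.randn(20000)
r = exploration_rate(symbolise(x))
print(r[:10])   # plateaus in r mark periods with a constant exploration rate
```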
Abstract:
This thesis is concerned with approximate inference in dynamical systems, from a variational Bayesian perspective. When modelling real-world dynamical systems, stochastic differential equations appear as a natural choice, mainly because of their ability to model the noise of the system by adding a stochastic-process term to the deterministic dynamics. Hence, inference in such processes has drawn much attention. Here two new extended frameworks are derived and presented that are based on basis function expansions and local polynomial approximations of a recently proposed variational Bayesian algorithm. It is shown that the new extensions converge to the original variational algorithm and can be used for state estimation (smoothing). However, the main focus is on estimating the (hyper-)parameters of these systems (i.e. drift parameters and diffusion coefficients). The new methods are numerically validated on a range of systems which vary in dimensionality and non-linearity: the Ornstein-Uhlenbeck process, for which the exact likelihood can be computed analytically; the univariate, highly non-linear stochastic double well; and the multivariate, chaotic stochastic Lorenz '63 (3-dimensional) model. The algorithms are also applied to the 40-dimensional stochastic Lorenz '96 system. These new approaches are compared with a variety of well-known methods, such as the ensemble Kalman filter/smoother, a hybrid Monte Carlo sampler, the dual unscented Kalman filter (for jointly estimating the system states and model parameters) and full weak-constraint 4D-Var. An empirical analysis of their asymptotic behaviour as the observation density or the length of the time window increases is provided.
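The Ornstein-Uhlenbeck benchmark is tractable because its transition density is Gaussian; a minimal sketch of the exact discrete-observation log-likelihood (standard parameterisation, not the thesis's notation) is:

```python
import numpy as np

# Exact log-likelihood of discretely observed Ornstein-Uhlenbeck data,
#   dX_t = theta * (mu - X_t) dt + sigma dW_t.
# The OU transition density is Gaussian, so the ground-truth likelihood
# referred to in the abstract is available in closed form.

def ou_log_likelihood(x, dt, theta, mu, sigma):
    a = np.exp(-theta * dt)
    mean = mu + (x[:-1] - mu) * a                 # conditional mean
    var = sigma**2 / (2 * theta) * (1 - a**2)     # conditional variance
    resid = x[1:] - mean
    return -0.5 * np.sum(np.log(2 * np.pi * var) + resid**2 / var)

# simulate a path with the exact transition and evaluate the likelihood
rng = np.random.default_rng(1)
theta, mu, sigma, dt, T = 2.0, 0.5, 0.3, 0.01, 5000
a = np.exp(-theta * dt)
sd = np.sqrt(sigma**2 / (2 * theta) * (1 - a**2))
x = np.empty(T); x[0] = mu
for t in range(1, T):
    x[t] = mu + (x[t-1] - mu) * a + sd * rng.normal()

print(ou_log_likelihood(x, dt, theta, mu, sigma))
```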
Abstract:
When making predictions with complex simulators it can be important to quantify the various sources of uncertainty. Errors in the structural specification of the simulator, for example due to missing processes or incorrect mathematical specification, can be a major source of uncertainty, but are often ignored. We introduce a methodology for inferring the discrepancy between the simulator and the system in discrete-time dynamical simulators. We assume a structural form for the discrepancy function, and show how to infer the maximum-likelihood parameter estimates using a particle filter embedded within a Monte Carlo expectation maximization (MCEM) algorithm. We illustrate the method on a conceptual rainfall-runoff simulator (logSPM) used to model the Abercrombie catchment in Australia. We assess the simulator and discrepancy model on the basis of their predictive performance using proper scoring rules.
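A minimal sketch of the filtering machinery, assuming a toy one-dimensional simulator g and a linear discrepancy form b (both placeholders, not the logSPM model): a bootstrap particle filter returning the log-likelihood estimate that the MCEM loop would maximise over the discrepancy parameters.

```python
import numpy as np

# Hedged sketch of a bootstrap particle filter for a discrete-time simulator
# g() with an additive state discrepancy b(x; beta). g, b and the noise
# scales are placeholders, not the paper's rainfall-runoff model.

rng = np.random.default_rng(0)

def g(x):             # placeholder simulator step
    return 0.8 * x + 1.0

def b(x, beta):       # assumed structural form of the discrepancy
    return beta[0] + beta[1] * x

def particle_filter(y, beta, n=1000, q=0.5, r=0.3):
    """Return a log-likelihood estimate for observations y."""
    x = rng.normal(size=n)                               # initial particles
    ll = 0.0
    for yt in y:
        x = g(x) + b(x, beta) + q * rng.normal(size=n)   # propagate
        w = (np.exp(-(yt - x)**2 / (2 * r**2))
             / np.sqrt(2 * np.pi * r**2) + 1e-300)       # observation weights
        ll += np.log(w.mean())                           # marginal lik. term
        x = x[rng.choice(n, size=n, p=w / w.sum())]      # resample
    return ll

y = np.cumsum(rng.normal(size=50))                       # dummy observations
print(particle_filter(y, beta=np.array([0.1, -0.05])))
```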
Abstract:
This paper reviews the nitrogen (N) cycle of effluent-irrigated energy crop plantations, from wastewater treatment through to thermo-chemical conversion processes. In wastewater, N compounds contribute to eutrophication and toxicity in the water cycle. Removal of N via vegetative filters, and specifically in short-rotation energy plantations, is a relatively new approach to managing nitrogenous effluents. Though combustion of energy crops is in principle carbon neutral, in practice the N content may contribute to NOx emissions with significant global warming potential. Intermediate pyrolysis produces advanced fuels while reducing such emissions. By operating at an intermediate temperature (500°C), it retains most N in the char as pyrrolic-N, pyridinic-N, quaternary-N and amines. In addition, the biochar provides long-term sequestration of carbon in soils.
Abstract:
Different species and genotypes of Miscanthus were analysed to determine the influence of genotypic variation and harvest time on cell wall composition and on the products which may be refined via pyrolysis. Wet chemical, thermo-gravimetric (TGA) and pyrolysis-gas chromatography–mass spectrometry (Py-GC–MS) methods were used to identify the main pyrolysis products and to determine the extent to which genotypic differences in cell wall composition influence the range and yield of pyrolysis products. Significant genotypic variation in composition was identified between species and genotypes, and a clear relationship was observed between the biomass composition, the yields of pyrolysis products, and the composition of the volatile fraction. Results indicated that genotypes other than the commercially cultivated Miscanthus x giganteus may have greater potential for use in bio-refining of fuels and chemicals, and several genotypes were identified as excellent candidates for the generation of genetic mapping families and for the breeding of new genotypes with improved conversion quality characteristics.
Abstract:
In recent years structured packings have become more widely used in the process industries because of their improved volumetric efficiency. Most structured packings consist of corrugated sheets placed in the vertical plane. The corrugations provide a regular network of channels for vapour-liquid contact. Until recently it has been necessary to develop new packings by trial and error, testing new shapes in the laboratory. The orderly, repetitive nature of the channel network produced by a structured packing suggests it may be possible to develop improved structured packings by applying computational fluid dynamics (CFD) to calculate the packing performance and evaluate changes in shape, so as to reduce the need for laboratory testing. In this work the CFD package PHOENICS has been used to predict the flow patterns produced in the vapour phase as it passes through the channel network. A particular novelty of the approach is to set up a method of solving the Navier-Stokes equations for any particular intersection of channels. The flow pattern of the streams leaving the intersection is then made the input to the downstream intersection. In this way the flow pattern within a section of packing can be calculated, and the resulting heat or mass transfer performance can be calculated by other standard CFD procedures. The CFD predictions revealed a circulation developing within the channels which produces a loss in mass transfer efficiency. The calculations explained and predicted a change in mass transfer efficiency with the depth of the sheets; this effect was also shown experimentally. New shapes of packing were proposed to remove the circulation and these were evaluated using CFD. A new shape was chosen and manufactured. This was tested experimentally and found to have a higher mass transfer efficiency than the standard packing.
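Schematically (a toy surrogate, not the PHOENICS computation), the network calculation can be pictured as each crossing mixing two inlet streams and passing its two outlets downstream; in the actual work the flow pattern at each intersection comes from solving the Navier-Stokes equations.

```python
import numpy as np

# Toy sketch of the channel-network idea: the flow leaving one intersection
# becomes the inlet of the intersection downstream. Each crossing is reduced
# to a single mixing/split rule; the split parameter stands in for the full
# CFD solution at that crossing.

def intersection(a, b, split=0.6):
    """Toy crossing: combine two inlet streams, split them to two outlets."""
    total = a + b
    return split * total, (1.0 - split) * total

def propagate(flows, n_rows=6):
    flows = np.asarray(flows, dtype=float)
    for _ in range(n_rows):
        nxt = np.empty_like(flows)
        for i in range(0, len(flows) - 1, 2):   # pair up adjacent channels
            nxt[i], nxt[i + 1] = intersection(flows[i], flows[i + 1])
        flows = np.roll(nxt, 1)                 # offset crossings row to row
    return flows

# a split away from 0.5 biases flow to one side, mimicking how a circulation
# can build up through the sheet depth
print(propagate([1.0] * 8))
```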
Abstract:
This thesis is about the study of relationships between experimental dynamical systems. The basic approach is to fit radial basis function maps between time delay embeddings of manifolds. We have shown that under certain conditions these maps are generically diffeomorphisms, and can be analysed to determine whether or not the manifolds in question are diffeomorphically related to each other. If not, a study of the distribution of errors may provide information about the lack of equivalence between the two. The method has applications wherever two or more sensors are used to measure a single system, or where a single sensor can respond on more than one time scale: their respective time series can be tested to determine whether or not they are coupled, and to what degree. One application which we have explored is the determination of a minimum embedding dimension for dynamical system reconstruction. In this special case the diffeomorphism in question is closely related to the predictor for the time series itself. Linear transformations of delay embedded manifolds can also be shown to have nonlinear inverses under the right conditions, and we have used radial basis functions to approximate these inverse maps in a variety of contexts. This method is particularly useful when the linear transformation corresponds to the delay embedding of a finite impulse response filtered time series. One application of fitting an inverse to this linear map is the detection of periodic orbits in chaotic attractors, using suitably tuned filters. This method has also been used to separate signals with known bandwidths from deterministic noise, by tuning a filter to stop the signal and then recovering the chaos with the nonlinear inverse. The method may have applications to the cancellation of noise generated by mechanical or electrical systems. In the course of this research a sophisticated piece of software has been developed. The program allows the construction of a hierarchy of delay embeddings from scalar and multi-valued time series. The embedded objects can be analysed graphically, and radial basis function maps can be fitted between them asynchronously, in parallel, on a multi-processor machine. In addition to a graphical user interface, the program can be driven by a batch mode command language, incorporating the concept of parallel and sequential instruction groups and enabling complex sequences of experiments to be performed in parallel in a resource-efficient manner.
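A minimal sketch of the core construction, with simple assumed choices of embedding parameters, kernel width and centre placement: delay-embed two simultaneously measured series, fit a Gaussian radial basis function map from one embedding to the other, and inspect the error distribution as evidence for or against equivalence.

```python
import numpy as np

# Sketch: fit an RBF map between time delay embeddings of two signals.
# Embedding dimension, lag, centre choice and kernel width are assumptions.

def delay_embed(x, dim=3, tau=2):
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def fit_rbf(X, Y, n_centres=50, width=1.0, reg=1e-8):
    """Regularised least-squares fit Y ~ Phi(X) W with Gaussian kernels."""
    centres = X[np.linspace(0, len(X) - 1, n_centres, dtype=int)]
    def phi(Z):
        d2 = ((Z[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * width ** 2))
    P = phi(X)
    W = np.linalg.solve(P.T @ P + reg * np.eye(n_centres), P.T @ Y)
    return lambda Z: phi(Z) @ W

t = np.arange(0, 60, 0.05)
x, y = np.sin(t), np.sin(t + 0.7) ** 3        # two "sensors" of one system
Ex, Ey = delay_embed(x), delay_embed(y)
f = fit_rbf(Ex, Ey)
err = np.linalg.norm(f(Ex) - Ey, axis=1)      # error distribution: evidence
print(err.mean())                             # for (or against) equivalence
```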
Abstract:
This thesis presents the results of an investigation into the merits of analysing magnetoencephalographic (MEG) data in the context of dynamical systems theory. MEG is the study both of methods for measuring the minute magnetic flux variations at the scalp that result from neuro-electric activity in the neocortex, and of the techniques required to process and extract useful information from these measurements. As a result of its unique mode of action - directly measuring neuronal activity via the resulting magnetic field fluctuations - MEG possesses a number of useful qualities which could potentially make it a powerful addition to any brain researcher's arsenal. Unfortunately, MEG research has so far failed to fulfil its early promise, being hindered in its progress by a variety of factors. Conventionally, the analysis of MEG has been dominated by the search for activity in certain spectral bands - the so-called alpha, delta, beta, etc. bands that are commonly referred to in both academic and lay publications. Other efforts have centred upon generating optimal fits of "equivalent current dipoles" that best explain the observed field distribution. Many of these approaches carry the implicit assumption that the dynamics which give rise to the observed time series are linear. This is despite a variety of reasons to suspect that nonlinearity might be present in MEG recordings. By using methods that allow for nonlinear dynamics, the research described in this thesis avoids these restrictive linearity assumptions. A crucial concept underpinning this project is the belief that MEG recordings are mere observations of the evolution of the true underlying state, which is unobservable and is assumed to reflect some abstract brain cognitive state. Further, we maintain that it is unreasonable to expect these processes to be adequately described in the traditional way: as a linear sum of a large number of frequency generators. One of the main objectives of this thesis is to demonstrate that much more effective and powerful analysis of MEG can be achieved if one assumes the presence of both linear and nonlinear characteristics from the outset. Our position is that the combined action of a relatively small number of these generators, coupled with external and dynamic noise sources, is more than sufficient to account for the complexity observed in MEG recordings. Another problem that has plagued MEG researchers is the extremely low signal-to-noise ratios that are obtained. As the magnetic flux variations resulting from actual cortical processes can be extremely minute, the measuring devices used in MEG are, necessarily, extremely sensitive. The unfortunate side-effect of this is that even commonplace phenomena such as the earth's geomagnetic field can easily swamp signals of interest. This problem is commonly addressed by averaging over a large number of recordings. However, this has a number of notable drawbacks: in particular, it is difficult to synchronise high-frequency activity which might be of interest, and such signals are often cancelled out by the averaging process. Other problems that have been encountered are the high cost and low portability of state-of-the-art multichannel machines. The result is that the use of MEG has, hitherto, been restricted to large institutions able to afford the high costs associated with the procurement and maintenance of these machines.
In this project, we seek to address these issues by working almost exclusively with single channel, unaveraged MEG data. We demonstrate the applicability of a variety of methods originating from the fields of signal processing, dynamical systems, information theory and neural networks, to the analysis of MEG data. It is noteworthy that while modern signal processing tools such as independent component analysis, topographic maps and latent variable modelling have enjoyed extensive success in a variety of research areas from financial time series modelling to the analysis of sun spot activity, their use in MEG analysis has thus far been extremely limited. It is hoped that this work will help to remedy this oversight.
Abstract:
Fifteen Miscanthus genotypes grown in five locations across Europe were analysed to investigate the influence of genetic and environmental factors on cell wall composition. Chemometric techniques combining near infrared reflectance spectroscopy and conventional chemical analyses were used to construct calibration models for determining acid detergent lignin, acid detergent fibre, and neutral detergent fibre from sample spectra. The developed equations were shown to predict cell wall components with a good degree of accuracy, and significant genetic and environmental variation was identified. The influence of nitrogen and potassium fertiliser on the dry matter yield and cell wall composition of M. x giganteus was investigated. A detrimental effect on feedstock quality was observed to result from application of these inputs, which caused an overall reduction in concentrations of cell wall components and increased accumulation of ash within the biomass. Pyrolysis-gas chromatography-mass spectrometry and thermo-gravimetric analysis indicated that genotypes other than the commercially cultivated M. x giganteus have potential for use in energy conversion processes and in bio-refining. The yields and quality parameters of the pyrolysis liquids produced from Miscanthus compared favourably with those produced from SRC willow, giving a more stable pyrolysis liquid with a higher lower heating value (LHV). Overall, genotype had a more significant effect on cell wall composition than environment. This indicates good potential for dissection of this trait by QTL analysis, and also for plant breeding to produce new genotypes with improved feedstock characteristics for energy conversion.
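The abstract does not name the regression method behind the calibration models; partial least squares is the usual chemometric choice and is assumed in the sketch below, which predicts a wet-chemistry reference value from synthetic placeholder NIR spectra.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

# Hedged sketch of a chemometric calibration: predict a wet-chemistry
# reference value (e.g. acid detergent lignin) from NIR spectra. PLS is an
# assumed method choice, and the data below are synthetic placeholders.

rng = np.random.default_rng(0)
n_samples, n_wavelengths = 120, 400
spectra = rng.normal(size=(n_samples, n_wavelengths)).cumsum(axis=1)
lignin = spectra[:, 150] * 0.02 + rng.normal(scale=0.05, size=n_samples)

pls = PLSRegression(n_components=8)
r2 = cross_val_score(pls, spectra, lignin, cv=5, scoring="r2")
print("cross-validated R^2:", r2.mean())

pls.fit(spectra, lignin)
print("predicted value for first sample:", pls.predict(spectra[:1]).ravel()[0])
```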
Abstract:
Fundamental analytical pyrolysis studies of biomass from Polar seaweeds, which exhibit a different biomass composition from terrestrial and micro-algal biomass, were performed via thermogravimetric analysis (TGA) and pyrolysis-gas chromatography/mass spectrometry (Py-GC/MS). The main motivation for this study is the adaptation of these species to very harsh environments, which makes them an interesting source for thermo-chemical processing for bioenergy generation and the production of biochemicals via intermediate pyrolysis. Several macroalgal species from the Arctic region Kongsfjorden, Spitsbergen/Norway (Prasiola crispa, Monostroma arcticum, Polysiphonia arctica, Devaleraea ramentacea, Odonthalia dentata, Phycodrys rubens, Sphacelaria plumosa) and from the Antarctic Peninsula, Potter Cove, King George Island (Gigartina skottsbergii, Plocamium cartilagineum, Myriogramme manginii, Hymencladiopsis crustigena, Kallymenia antarctica) were investigated under intermediate pyrolysis conditions. TGA of the Polar seaweeds revealed three stages of degradation, representing dehydration, devolatilization and decomposition of carbonaceous solids. The maximum degradation temperatures of Prasiola crispa were observed within the range of 220-320°C and are lower than those typically obtained for terrestrial biomass, owing to divergent polysaccharide compositions. Biochar residues accounted for 33-46%, and ash contents of 27-45% were obtained. Identification of volatile products by Py-GC/MS revealed a complex array of generated chemical compounds and significant differences between the species. A widespread occurrence of aromatics (toluene, styrene, phenol and 4-methylphenol) and acids (acetic acid, benzoic acid alkyl ester derivatives, 2-propenoic acid esters and octadecanoic acid octyl esters) in the pyrolysates was detected. Ubiquitous furan-derived products included furfural and 5-methyl-2-furaldehyde. As a pyran-derived compound, maltol was obtained from one red algal species (P. rubens), and the monosaccharide d-allose was detected in the pyrolysates of one green alga (P. crispa). Further unique chemicals detected were dianhydromannitol from brown algae and isosorbide from green algal biomass. In contrast, the anhydrosugar levoglucosan and the triterpene squalene were detected in a large number of the pyrolysates analysed.
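As a small illustration of how maximum degradation temperatures are read off TGA data, the sketch below computes the derivative thermogravimetric (DTG) curve of a synthetic three-stage mass-loss profile and locates the temperature of fastest mass loss; all numbers are made up.

```python
import numpy as np

# Sketch: locate the maximum degradation temperature from a TGA curve via
# derivative thermogravimetry (DTG). The synthetic curve mimics the three
# stages the abstract describes (dehydration, devolatilization, char
# decomposition); temperatures, widths and mass fractions are invented.

T = np.linspace(30, 800, 2000)                  # temperature, deg C

def stage(T0, w, frac):                         # smooth mass-loss step
    return frac / (1 + np.exp(-(T - T0) / w))

mass = 100 - 8*stage(80, 12, 1) - 40*stage(270, 25, 1) - 15*stage(550, 40, 1)

dtg = np.gradient(mass, T)                      # d(mass)/dT, % per deg C
devol = (T > 150) & (T < 450)                   # devolatilization window
T_max = T[devol][np.argmin(dtg[devol])]         # steepest mass loss
print(f"maximum degradation temperature ~ {T_max:.0f} C")
```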
Abstract:
The influence of the comonomer content in a series of metallocene-based ethylene-1-octene copolymers (m-LLDPE) on their thermo-mechanical, rheological, and thermo-oxidative behaviours during melt processing was examined using a range of characterisation techniques. The amount of branching was calculated from 13C NMR, and differential scanning calorimetry (DSC) and dynamic mechanical analysis (DMA) were employed to determine the effect of short chain branching (SCB, comonomer content) on the thermal and mechanical characteristics of the polymer. The effect of melt processing at different temperatures on the thermo-oxidative behaviour of the polymers was investigated by examining the changes in rheological properties, using both melt flow and capillary rheometry, and the evolution of oxidation products during processing using infrared spectroscopy. The results show that the comonomer content and catalyst type greatly affect the thermal, mechanical and oxidative behaviour of the polymers. For the metallocene polymer series, it was shown from both DSC and DMA that, with higher comonomer content, (i) crystallinity and melting temperatures decreased linearly, (ii) the intensity of the β-transition increased, and (iii) the position of the tan δ maximum corresponding to the α-transition shifted to lower temperatures. In contrast, a corresponding Ziegler polymer containing the same level of SCB as one of the m-LLDPE polymers showed different characteristics due to its more heterogeneous nature: a higher elongational viscosity, and a broader double melting peak that occurred at higher temperature (from the DSC endotherm), indicating a much broader short chain branch distribution. The thermo-oxidative behaviour of the polymers after melt processing was similarly influenced by the comonomer content. Rheological characteristics, and changes in the concentrations of carbonyl and of the different unsaturated groups (particularly vinyl, vinylidene and trans-vinylene) during processing of the m-LLDPE polymers, showed that polymers with lower levels of SCB gave rise to predominantly crosslinking reactions at all processing temperatures. By contrast, chain scission reactions became more favoured at higher processing temperatures in the higher comonomer-containing polymers. Compared to its metallocene analogue, the Ziegler polymer showed a much higher degree of crosslinking at all temperatures because of the high levels of vinyl unsaturation initially present.
Abstract:
Haloclean, a performance-enhanced low-temperature pyrolysis process for biomass developed by Forschungszentrum Karlsruhe and Sea Marconi, is closing the gap between classical and fast pyrolysis approaches. For the pyrolysis of straw (chaffed, finely ground, or pelletised), temperatures between 320 and 420°C and residence times of only 1 to 5 minutes can be realised. Liquid yields of up to 45 wt% and solid yields of 35 wt% are possible. Solid yields can be increased to as much as 73 wt% while losing only 4.5% of the feed energy to the pyrolysis gases. Toxicity tests of the fractions did not reveal significant levels.