46 results for third-order non-linearity
Abstract:
Are the learning procedures of genetic algorithms (GAs) able to generate optimal architectures for artificial neural networks (ANNs) in high frequency data? In this experimental study, GAs are used to identify the best architecture for ANNs. Additional learning is undertaken by the ANNs to forecast daily excess stock returns. No ANN architecture was able to outperform a random walk, despite the finding of non-linearity in the excess returns. This failure is attributed to the absence of suitable ANN structures and further implies that researchers need to be cautious when making inferences from ANN results that use high frequency data.
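As a rough illustration of the kind of search described above, the sketch below evolves ANN hidden-layer sizes with a simple genetic algorithm; the fitness function is a hypothetical placeholder for the network's out-of-sample forecast error, and none of the names or settings are taken from the study.

```python
# Minimal GA sketch searching over ANN hidden-layer sizes.
# The fitness function is a stand-in for out-of-sample forecast error;
# in the study it would be the ANN's validation error on excess returns.
import random

def fitness(architecture):                     # hypothetical placeholder
    return sum(architecture) * random.random()  # lower is treated as better

def mutate(arch):
    i = random.randrange(len(arch))
    return arch[:i] + [max(1, arch[i] + random.choice([-2, 2]))] + arch[i + 1:]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

# Population of candidate architectures: two hidden layers, 2-32 units each
population = [[random.randint(2, 32) for _ in range(2)] for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness)
    parents = population[:10]                   # keep the fittest half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children

print("best architecture (hidden units per layer):", min(population, key=fitness))
```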
Abstract:
A method for inscribing fiber Bragg gratings (FBGs) by direct, point-by-point writing with an infrared femtosecond laser is described. The method requires neither phase masks nor photosensitized fibers and hence offers remarkable technological flexibility. It requires a very short inscription time of less than 60 s per grating. Gratings of first to third order were produced in non-photosensitized, standard telecommunication fiber (SMF) and dispersion-shifted fiber (DSF). The gratings produced by this method showed low insertion loss, narrow linewidth and strong fundamental or higher-order resonances.
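For reference, the resonance wavelength of an m-th order grating (the abstract reports first- to third-order gratings) is set by the standard Bragg condition:

```latex
% Bragg condition for an m-th order fibre grating
% \lambda_B: resonance wavelength, n_{\mathrm{eff}}: effective mode index,
% \Lambda: grating period, m = 1, 2, 3, \dots: grating order
m\,\lambda_B = 2\,n_{\mathrm{eff}}\,\Lambda
```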
Abstract:
We review the recent progress of information theory in optical communications, and describe the current experimental results and associated advances in the various individual technologies that increase the information capacity. We confirm the widely held belief that the reported capacities are approaching the fundamental limits imposed by the signal-to-noise ratio and the distributed non-linearity of conventional optical fibres, resulting in a reduction in the growth rate of communication capacity. We also discuss techniques that show promise for increasing, or more closely approaching, the information capacity limit.
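The fundamental limit alluded to here is the Shannon capacity of a band-limited channel; in optical fibre the effective signal-to-noise ratio is itself eroded by the distributed Kerr non-linearity at high launch powers, which is what caps the reported capacities. The linear-channel formula, for reference:

```latex
% Shannon capacity of a band-limited channel with additive Gaussian noise
% C: capacity (bit/s), B: bandwidth (Hz), \mathrm{SNR}: signal-to-noise ratio
C = B \log_2\!\left(1 + \mathrm{SNR}\right)
```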
Abstract:
The kinetic parameters of the pyrolysis of miscanthus and its acid hydrolysis residue (AHR) were determined using thermogravimetric analysis (TGA). The AHR was produced at the University of Limerick by treating miscanthus with 5 wt.% sulphuric acid at 175 °C, as representative of a lignocellulosic acid hydrolysis product. For the TGA experiments, 3 to 6 g of sample, milled and sieved to a particle size below 250 μm, were placed in the TGA ceramic crucible. The experiments were carried out under non-isothermal conditions, heating the samples from 50 to 900 °C at heating rates of 2.5, 5, 10, 17 and 25 °C/min. The activation energy (E_A) of the decomposition process was determined from the TGA data by differential analysis (Friedman) and three isoconversional methods of integral analysis (Kissinger–Akahira–Sunose, Ozawa–Flynn–Wall, Vyazovkin). The activation energy ranged from 129 to 156 kJ/mol for miscanthus and from 200 to 376 kJ/mol for AHR, increasing with increasing conversion. The reaction model was selected using the non-linear least squares method and the pre-exponential factor was calculated from the Arrhenius approximation. The results showed that the best-fitting reaction model was the third-order reaction for both feedstocks. The pre-exponential factor was in the range of 5.6 × 10¹⁰ to 3.9 × 10¹³ min⁻¹ for miscanthus and 2.1 × 10¹⁶ to 7.7 × 10²⁵ min⁻¹ for AHR.
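As a reminder of how such isoconversional estimates are obtained, the Kissinger–Akahira–Sunose method fits, at each fixed conversion α, a straight line in 1/T whose slope yields the activation energy (standard form, not reproduced from the paper):

```latex
% Kissinger–Akahira–Sunose (KAS) relation at fixed conversion \alpha
% \beta: heating rate, T_\alpha: temperature at conversion \alpha,
% A: pre-exponential factor, g(\alpha): integral reaction model, R: gas constant
\ln\!\frac{\beta}{T_\alpha^{2}}
  = \ln\!\frac{A\,R}{E_A\,g(\alpha)} - \frac{E_A}{R\,T_\alpha}
```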
Abstract:
The research presented in this thesis was developed as part of DIBANET, an EC-funded project aiming to develop an energetically self-sustainable process for the production of diesel-miscible biofuels (i.e. ethyl levulinate) via acid hydrolysis of selected biomass feedstocks. Three thermal conversion technologies, pyrolysis, gasification and combustion, were evaluated in the present work with the aim of recovering the energy stored in the acid hydrolysis solid residue (AHR). Consisting mainly of lignin and humins, the AHR can contain up to 80% of the energy in the original feedstock. Pyrolysis of AHR proved unsatisfactory, so attention focussed on gasification and combustion with the aim of producing heat and/or power to supply the energy demanded by the ethyl levulinate production process. A thermal processing rig consisting of a Laminar Entrained Flow Reactor (LEFR) equipped with solid and liquid collection and online gas analysis systems was designed and built to explore pyrolysis, gasification and air-blown combustion of AHR. The maximum liquid yield for pyrolysis of AHR was 30 wt% with a volatile conversion of 80%. The gas yield for AHR gasification was 78 wt%, with 8 wt% tar yield and conversion of volatiles close to 100%. In combustion, 90 wt% of the AHR was transformed into gas, with volatile conversions above 90%. Gasification with 5 vol% O2 in 95 vol% N2 resulted in a nitrogen-diluted, low heating value gas (2 MJ/m³). Steam and oxygen-blown gasification of AHR were additionally investigated in a batch gasifier at KTH in Sweden. Steam promoted the formation of hydrogen (25 vol%) and methane (14 vol%), improving the gas heating value to 10 MJ/m³, below the values typical of steam gasification due to equipment limitations. Arrhenius kinetic parameters were calculated using data collected with the LEFR to provide reaction rate information for process design and optimisation. The activation energy (E_A) and pre-exponential factor (k₀, in s⁻¹) for pyrolysis (E_A = 80 kJ/mol, ln k₀ = 14), gasification (E_A = 69 kJ/mol, ln k₀ = 13) and combustion (E_A = 42 kJ/mol, ln k₀ = 8) were calculated after linearly fitting the data using the random pore model. Kinetic parameters for pyrolysis and combustion were also determined by dynamic thermogravimetric analysis (TGA), including studies of the original biomass feedstocks for comparison. Results obtained by differential and integral isoconversional methods for activation energy determination were compared. The activation energy calculated by the Vyazovkin method was 103-204 kJ/mol for pyrolysis of untreated feedstocks and 185-387 kJ/mol for AHRs. The combustion activation energy was 138-163 kJ/mol for biomass and 119-158 kJ/mol for AHRs. The non-linear least squares method was used to determine the reaction model and pre-exponential factor. Pyrolysis and combustion of biomass were best modelled by a combination of third-order reaction and three-dimensional diffusion models, while AHR decomposed following the third-order reaction model for pyrolysis and the three-dimensional diffusion model for combustion.
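The random pore model used for the LEFR fits is usually written in the Bhatia–Perlmutter form below, with an Arrhenius rate constant; the structural parameter ψ and the exact fitting procedure of the thesis are not given in the abstract, so this is only the generic form:

```latex
% Random pore model (Bhatia–Perlmutter form) with Arrhenius rate constant
% X: conversion, \psi: structural parameter, k_0: pre-exponential factor
\frac{dX}{dt} = k\,(1 - X)\sqrt{1 - \psi \ln(1 - X)},
\qquad k = k_0 \exp\!\left(-\frac{E_A}{R\,T}\right)
```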
Abstract:
This paper reviews evidence from previous growth-rate studies on lichens of the yellow-green species of subgenus Rhizocarpon, the group most commonly used in lichenometric dating. New data are presented from Rhizocarpon section Rhizocarpon thalli growing on a moraine in southern Iceland over a period of 4.33 yr. Measurements of 38 lichen thalli, between 2001 and 2005, show that diametral growth rate (DGR, mm yr⁻¹) is a function of thallus size. Growth rates increase rapidly in small thalli (<10 mm diameter), remain high (ca. 0.8 mm yr⁻¹) and then decrease gradually in larger thalli (>50 mm diameter). Mean DGR in southern Iceland, between 2001 and 2005, was 0.64 mm yr⁻¹ (SD = 0.24). The resultant growth-rate curve is parabolic and is best described by a third-order polynomial function. The striking similarity between these findings in Iceland and those of Armstrong (1983) in Wales implies that the shape of the growth-rate curve may be characteristic of Rhizocarpon geographicum lichens. The difference in absolute growth rate between southern Iceland and Wales (ca. 66% faster) is probably a function of differences in climate and micro-environment between the two sites. These findings have implications for previous lichenometric-dating studies, namely that those studies which assume constant lichen growth rates over many decades are probably unreliable. © British Geological Survey/Natural Environment Research Council copyright 2006.
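Fitting the kind of third-order polynomial growth-rate curve described here takes only a few lines; the sketch below uses invented placeholder measurements, not the Iceland data.

```python
# Fit a third-order polynomial growth-rate curve (DGR vs thallus diameter).
# The arrays below are illustrative placeholders, not the measured data.
import numpy as np

diameter = np.array([5, 10, 20, 30, 40, 50, 60, 70])                 # mm
dgr = np.array([0.35, 0.70, 0.82, 0.80, 0.75, 0.68, 0.55, 0.40])     # mm/yr

coeffs = np.polyfit(diameter, dgr, deg=3)      # cubic least-squares fit
curve = np.poly1d(coeffs)

print("fitted coefficients (highest order first):", coeffs)
print("predicted DGR at 25 mm:", round(curve(25.0), 3), "mm/yr")
```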
Abstract:
This empirical study examines the extent of non-linearity in a multivariate model of monthly financial series. To capture the conditional heteroscedasticity in the series, both the GARCH(1,1) and GARCH(1,1)-in-mean models are employed. The conditional errors are assumed to follow the normal and Student-t distributions. The non-linearity in the residuals of a standard OLS regression is also assessed. It is found that the OLS residuals, as well as the conditional errors of the GARCH models, exhibit strong non-linearity. Under the Student-t density, the extent of non-linearity in the GARCH conditional errors was generally similar to that of the standard OLS residuals. The GARCH-in-mean regression generated the worst out-of-sample forecasts.
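For reference, the GARCH(1,1)-in-mean specification being estimated has the standard form below (symbols are generic, not the paper's notation):

```latex
% GARCH(1,1)-in-mean specification (standard form; symbols are generic)
r_t = \mu + \delta\,\sigma_t^2 + \varepsilon_t, \qquad
\varepsilon_t = \sigma_t z_t,\; z_t \sim N(0,1)\ \text{or Student-}t, \qquad
\sigma_t^2 = \omega + \alpha\,\varepsilon_{t-1}^{2} + \beta\,\sigma_{t-1}^{2}
```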
Abstract:
This thesis is concerned with approximate inference in dynamical systems, from a variational Bayesian perspective. When modelling real-world dynamical systems, stochastic differential equations appear as a natural choice, mainly because of their ability to model the noise of the system by adding a variant of some stochastic process to the deterministic dynamics. Hence, inference in such processes has drawn much attention. Here two new extended frameworks are derived and presented that are based on basis function expansions and local polynomial approximations of a recently proposed variational Bayesian algorithm. It is shown that the new extensions converge to the original variational algorithm and can be used for state estimation (smoothing). However, the main focus is on estimating the (hyper-)parameters of these systems (i.e. drift parameters and diffusion coefficients). The new methods are numerically validated on a range of different systems which vary in dimensionality and non-linearity. These are the Ornstein-Uhlenbeck process, for which the exact likelihood can be computed analytically, the univariate and highly non-linear stochastic double-well system, and the multivariate chaotic stochastic Lorenz '63 (3-dimensional model). The algorithms are also applied to the 40-dimensional stochastic Lorenz '96 system. In this investigation these new approaches are compared with a variety of other well-known methods, such as the ensemble Kalman filter/smoother, a hybrid Monte Carlo sampler, the dual unscented Kalman filter (for jointly estimating the system states and model parameters) and full weak-constraint 4D-Var. An empirical analysis of their asymptotic behaviour as the observation density or the length of the time window increases is provided.
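The Ornstein-Uhlenbeck benchmark mentioned above can be simulated in a few lines with the Euler-Maruyama scheme; parameter values in the sketch are illustrative, not those used in the thesis.

```python
# Euler-Maruyama simulation of the Ornstein-Uhlenbeck process
#   dx = -theta * x dt + sigma dW
# used here as the analytically tractable benchmark mentioned in the abstract.
import numpy as np

rng = np.random.default_rng(0)
theta, sigma = 2.0, 0.5          # drift and diffusion parameters (illustrative)
dt, n_steps = 0.01, 2000

x = np.empty(n_steps)
x[0] = 1.0
for t in range(1, n_steps):
    dW = rng.normal(0.0, np.sqrt(dt))            # Wiener increment
    x[t] = x[t - 1] - theta * x[t - 1] * dt + sigma * dW

print("sample mean:", x.mean(), " sample variance:", x.var())
print("stationary variance sigma^2/(2 theta):", sigma**2 / (2 * theta))
```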
Abstract:
This thesis investigates the physical behaviour of solitons in wavelength division multiplexed (WDM) systems with dispersion management in a wide range of dispersion regimes. Background material is presented to show how solitons propagate in optical fibres, and key problems associated with real systems are outlined. Problems due to collision-induced frequency shifts are calculated using numerical simulation, and these results are compared with analytical techniques where possible. Different two-step dispersion regimes, as well as the special cases of uniform and exponentially profiled systems, are identified and investigated. In the shallow-profile regime, the constituent second-order dispersions in the system are always close to the average soliton value. It is shown that collision-induced frequency shifts in WDM soliton transmission systems are reduced with increasing dispersion management. New resonances in the collision dynamics are illustrated, due to the relative motion induced by the dispersion map. Third-order dispersion is shown to modify the effects of collision-induced timing jitter, and third-order compensation is investigated. In all cases pseudo-phase-matched four-wave mixing was found to be insignificant compared with collision-induced frequency shifts in causing deterioration of the data. It is also demonstrated that all these effects are additive with that of Gordon-Haus jitter.
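The propagation model underlying such simulations is, in essence, the generalised nonlinear Schrödinger equation with second- and third-order dispersion and Kerr non-linearity; loss, amplification and the piecewise-constant dispersion map are omitted from this reference form:

```latex
% Generalised NLSE with second- and third-order dispersion and Kerr non-linearity
% A(z,T): pulse envelope, \beta_2, \beta_3: dispersion coefficients,
% \gamma: non-linear coefficient (loss and amplification omitted)
\frac{\partial A}{\partial z}
  = -\frac{i\beta_2}{2}\frac{\partial^2 A}{\partial T^2}
    + \frac{\beta_3}{6}\frac{\partial^3 A}{\partial T^3}
    + i\gamma\,|A|^2 A
```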
Abstract:
This thesis presents details of both theoretical and experimental aspects of UV-written fibre gratings. The main body of the thesis deals with the design, fabrication and testing of telecommunication optical fibre grating devices, but an accurate theoretical analysis of intra-core fibre gratings is also presented. For more than a decade, fibre gratings have been extensively used in the telecommunications field (as filters, dispersion compensators and add/drop multiplexers, for instance). Gratings for telecommunications should conform to very high fabrication standards, as the presence of any imperfection raises the noise level in the transmission system, compromising its ability to transmit an intelligible sequence of bits to the receiver. Strong side-lobe suppression and a high, sharp reflection profile are therefore necessary characteristics. A fundamental part of the theoretical and experimental work reported in this thesis concerns apodisation. The physical principle of apodisation is introduced, and a number of apodisation techniques, experimental results and numerical optimisation of the shading functions and all the practical parameters involved in the fabrication are detailed. The measurement of chromatic dispersion in fibres and FBGs is detailed and an estimation of its accuracy is given. An overview of the possible methods that can be implemented for the fabrication of tunable fibre gratings is given before detailing a new dispersion compensator device based on the action of a distributed strain on a linearly chirped FBG. It is shown that tuning of the second- and third-order dispersion of the grating can be obtained by the use of a specially designed multipoint bending rig. Experiments on the recompression of optical pulses travelling long distances are detailed for 10 Gb/s and 40 Gb/s. The characterisation of a new kind of double-section LPG fabricated on a metal-clad coated fibre is reported. The fabrication of the device is made easier by directly writing the grating through the metal coating. This device may be used to overcome the recoating problems associated with standard LPGs written in step-index fibre. It can also be used as a sensor for simultaneous measurement of temperature and surrounding-medium refractive index.
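Apodisation is commonly described by writing the grating's index modulation with a slowly varying shading envelope; the generic form below, with a raised-cosine profile as one common example, is given only for orientation and is not taken from the thesis:

```latex
% Apodised index modulation of a fibre grating (generic form)
% \overline{\delta n}: peak index change, f(z): shading (apodisation) profile,
% \Lambda: nominal period, \phi(z): slowly varying phase (chirp), |z| \le L/2
\delta n(z) = \overline{\delta n}\; f(z)\,
  \cos\!\left(\frac{2\pi z}{\Lambda} + \phi(z)\right),
\qquad f(z) = \cos^{2}\!\left(\frac{\pi z}{L}\right)\ \text{(raised-cosine example)}
```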
Abstract:
The object of this thesis is to develop a method for calculating the losses developed in steel conductors of circular cross-section, at temperatures below 100 °C, by the direct passage of a sinusoidally alternating current. Three cases are considered: 1. an isolated solid or tubular conductor; 2. a concentric arrangement of tube and solid return conductor; 3. a concentric arrangement of two tubes. These cases find applications in process temperature maintenance of pipelines, resistance heating of bars and the design of bus-bars. The problems associated with the non-linearity of steel are examined. Resistance heating of bars and methods of surface heating of pipelines are briefly described. Magnetic-linear solutions based on Maxwell's equations are critically examined, and the conditions under which various formulae apply are investigated. The conditions under which a tube is electrically equivalent to a solid conductor and to a semi-infinite plate are derived. Existing solutions for the calculation of losses in isolated steel conductors of circular cross-section are reviewed, evaluated and compared. Two methods of solution are developed for the three cases considered. The first is based on the magnetic-linear solutions and offers an alternative to the available methods, which are not universal. The second solution extends the existing B/H step-function approximation method to small-diameter conductors and to tubes in isolation or in a concentric arrangement. A comprehensive experimental investigation is presented for cases 1 and 2 above, which confirms the validity of the proposed methods of solution. These are further supported by experimental results reported in the literature. Good agreement is obtained between measured and calculated loss values for surface field strengths beyond the linear part of the d.c. magnetisation characteristic. It is also shown that there is a difference in the electrical behaviour of a small-diameter conductor or thin tube under resistance or induction heating conditions.
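The magnetic-linear solutions referred to are governed by the classical skin depth; in steel the effective permeability varies with field strength, which is precisely the non-linearity the thesis has to address. For reference:

```latex
% Skin depth governing the a.c. loss in a conductor (magnetic-linear case)
% \omega = 2\pi f: angular frequency, \mu: permeability,
% \sigma: conductivity, \rho = 1/\sigma: resistivity
\delta = \sqrt{\frac{2}{\omega\,\mu\,\sigma}} = \sqrt{\frac{\rho}{\pi f \mu}}
```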
Abstract:
This work is concerned with approximate inference in dynamical systems, from a variational Bayesian perspective. When modelling real-world dynamical systems, stochastic differential equations appear as a natural choice, mainly because of their ability to model the noise of the system by adding a variant of some stochastic process to the deterministic dynamics. Hence, inference in such processes has drawn much attention. Here a new extended framework is derived that is based on a local polynomial approximation of a recently proposed variational Bayesian algorithm. The paper begins by showing that the new extension of this variational algorithm can be used for state estimation (smoothing) and converges to the original algorithm. However, the main focus is on estimating the (hyper-)parameters of these systems (i.e. drift parameters and diffusion coefficients). The new approach is validated on a range of different systems which vary in dimensionality and non-linearity. These are the Ornstein–Uhlenbeck process, the exact likelihood of which can be computed analytically, the univariate and highly non-linear stochastic double-well system, and the multivariate chaotic stochastic Lorenz '63 (3D model). As a special case the algorithm is also applied to the 40-dimensional stochastic Lorenz '96 system. In our investigation we compare this new approach with a variety of other well-known methods, such as the hybrid Monte Carlo sampler, the dual unscented Kalman filter and the full weak-constraint 4D-Var algorithm, and analyse empirically their asymptotic behaviour as the observation density or the length of the time window increases. In particular we show that we are able to estimate parameters in both the drift (deterministic) and the diffusion (stochastic) parts of the model evolution equations using our new methods.
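The model class treated here is the Itô diffusion below; the variational scheme estimates the drift parameters θ and the diffusion covariance Σ together with the state path (generic notation, not the paper's):

```latex
% Itô diffusion with parametric drift and constant diffusion covariance
% x_t \in \mathbb{R}^d: state, f(\cdot;\theta): drift, \Sigma: diffusion covariance,
% W_t: d-dimensional Wiener process
dx_t = f(x_t;\theta)\,dt + \Sigma^{1/2}\,dW_t
```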
Abstract:
The kinetics and mechanisms of the ring-opening polymerization of oxetane were studied using cationic and coordinated anionic catalysts. The cationic initiators used were BF3·OEt2/ethanol, BF3·OEt2/ethanediol and BF3·OEt2/propanetriol. Kinetic determinations with the BF3·OEt2/diol system indicated that a 1:1 BF3:OH ratio gave the maximum rate of polymerization, and this ratio was employed to determine the overall rates of polymerization. An overall second-order dependence was obtained when the system involved ethanediol or propanetriol as co-catalyst, and a 3/2-order dependence with ethanol; in each case the monomer gave a first-order relationship. This suggested that two mechanisms accounted for the cationic polymerization. These mechanisms were investigated and further evidence for them was obtained from the study of the complex formation of BF3·OEt2 and the co-catalysts by 1H NMR. Molecular weight studies (using size-exclusion chromatography) indicated that the hydroxyl ion acted as a chain transfer reagent when [OH] > [BF3]. A linear relationship was observed when the number average molecular weight was plotted against [oxetane] at constant [BF3:OH], and similarly a linear dependence was observed on the 1:1 BF3:OH adduct at constant oxetane concentration. Copolymerization of oxetane and THF was carried out using the BF3·OEt2/ethanol system. The reactivity ratios were calculated as rOXT = 1.2 ± 0.30 and rTHF = 0.14 ± 0.03. These copolymers were random copolymers with no evidence of oligomer formation. The coordinated anionic catalyst, porphinato-aluminium chloride [(TPP)AlCl], was used to produce a living polymerization of oxetane. Overall third-order kinetics were obtained, with a second-order dependence with respect to [(TPP)AlCl] and a first-order dependence with respect to [oxetane], and a mechanism was postulated using these results. The stereochemistry of the [(TPP)AlCl] catalyst was investigated using cyclohexene oxide and cyclopentene oxide monomers; using extensive 1H NMR, 2-D COSY and decoupling NMR techniques it was concluded that [(TPP)AlCl] gave rise to stereoregular polymers.
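The reported kinetics for the coordinated anionic system correspond to the overall third-order rate law, written out explicitly:

```latex
% Overall third-order rate law for the (TPP)AlCl-catalysed polymerisation
% [M]: oxetane (monomer) concentration, k: observed rate constant
-\frac{d[\mathrm{M}]}{dt} = k\,[(\mathrm{TPP})\mathrm{AlCl}]^{2}\,[\mathrm{M}]
```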
Abstract:
This paper presents some forecasting techniques for energy demand and price prediction, one day ahead. These techniques combine the wavelet transform (WT) with fixed and adaptive machine learning/time series models (multi-layer perceptron (MLP), radial basis functions, linear regression, or GARCH). To create an adaptive model, we use an extended Kalman filter or particle filter to update the parameters continuously on the test set. The adaptive GARCH model is a new contribution, broadening the applicability of GARCH methods. We empirically compared two approaches to combining the WT with prediction models: multicomponent forecasts and direct forecasts. These techniques are applied to large sets of real data (both stationary and non-stationary) from the UK energy markets, so as to provide comparative results that are statistically stronger than those previously reported. The results showed that forecasting accuracy is significantly improved by using the WT and adaptive models. The best models for the electricity demand and gas price forecasts are the adaptive MLP and the adaptive GARCH, respectively, with the multicomponent forecast; their MSEs are 0.02314 and 0.15384 respectively.
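A minimal sketch of the multicomponent approach is given below: the series is decomposed with a discrete wavelet transform, each component is forecast separately, and the component forecasts are summed. The data, wavelet and MLP settings are illustrative assumptions, not those of the paper, and the adaptive Kalman/particle-filter updating is omitted.

```python
# Sketch of the "multicomponent" wavelet approach: decompose the series into
# per-level components, forecast each component's final point with an MLP on
# lagged values, then sum the component forecasts.
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
series = np.sin(np.linspace(0.0, 20.0, 512)) + 0.1 * rng.standard_normal(512)

# Reconstruct one time-domain component per decomposition level
coeffs = pywt.wavedec(series, "db4", level=3)
components = []
for i in range(len(coeffs)):
    kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
    components.append(pywt.waverec(kept, "db4")[: len(series)])

def one_step_forecast(x, n_lags=8):
    """Forecast the last point of x from lagged values, trained on earlier data."""
    X = np.array([x[t - n_lags:t] for t in range(n_lags, len(x))])
    y = x[n_lags:]
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    model.fit(X[:-1], y[:-1])          # hold out the final target
    return model.predict(X[-1:])[0]

forecast = sum(one_step_forecast(c) for c in components)
print("one-step-ahead forecast:", forecast, " actual:", series[-1])
```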