937 results for additive variance
Abstract:
Electrospinning is a route to polymer fibres with diameters considerably smaller than those available from most fibre-producing techniques. We explore the use of a low molecular weight compound as an effective control additive during the electrospinning of poly(epsilon-caprolactone). This approach extends the control variables for the electrospinning of nanoscale fibres beyond the more usual ones such as the polymer molecular weight, solvent and concentration. We show that through the use of dual solvent systems, we can alter the impact of the additive on the electrospinning process so that finer as well as thicker fibres can be prepared under otherwise identical conditions. In addition to the size of the fibres and the number of beads, the additive allows us to alter the level of crystallinity as well as the level of preferred orientation of the poly(epsilon-caprolactone) crystals. This approach, involving the use of a dual solvent and a low molar mass compound, offers considerable potential for application to other polymer systems. (C) 2010 Society of Chemical Industry
Abstract:
In this paper, observations by a ground-based vertically pointing Doppler lidar and sonic anemometer are used to investigate the diurnal evolution of boundary-layer turbulence in cloudless, cumulus and stratocumulus conditions. When turbulence is driven primarily by surface heating, such as in cloudless and cumulus-topped boundary layers, both the vertical velocity variance and skewness follow similar profiles, on average, to previous observational studies of turbulence in convective conditions, with a peak skewness of around 0.8 in the upper third of the mixed layer. When the turbulence is driven primarily by cloud-top radiative cooling, such as in the presence of nocturnal stratocumulus, it is found that the skewness is inverted in both sign and height: its minimum value of around −0.9 occurs in the lower third of the mixed layer. The profile of variance is consistent with a cloud-top cooling rate of around 30 W m−2. This is also consistent with the evolution of the thermodynamic profile and the rate of growth of the mixed layer into the stable nocturnal boundary layer from above. In conditions where surface heating occurs simultaneously with cloud-top cooling, the skewness is found to be useful for diagnosing the source of the turbulence, suggesting that long-term Doppler lidar observations would be valuable for evaluating boundary-layer parametrization schemes. Copyright © 2009 Royal Meteorological Society
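The variance and skewness diagnostics this abstract relies on are the second and third central moments of the vertical-velocity time series. A minimal sketch (the sample data are invented for illustration; the sign convention matches the abstract, where narrow strong updraughts over broad weak downdraughts give positive skewness):

```python
def moments(w):
    """Return (variance, skewness) of a vertical-velocity sample w."""
    n = len(w)
    mean = sum(w) / n
    m2 = sum((x - mean) ** 2 for x in w) / n   # variance
    m3 = sum((x - mean) ** 3 for x in w) / n   # third central moment
    return m2, m3 / m2 ** 1.5                  # skewness = m3 / m2^(3/2)

# Surface-heated convection: one strong updraught, several weak
# downdraughts -> positively skewed w (hypothetical sample).
var, skew = moments([2.0, -0.5, -0.5, -0.5, -0.5])
```

Cloud-top cooling mirrors this picture, which is why the sign of the skewness diagnoses the turbulence source.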
Abstract:
A neural network enhanced self-tuning controller is presented, which combines the attributes of neural network mapping with a generalised minimum variance self-tuning control (STC) strategy. In this way the controller can deal with nonlinear plants that may exhibit uncertainties, nonminimum phase behaviour, coupling effects and unmodelled dynamics, and whose nonlinearities are assumed to be globally bounded. The unknown nonlinear plants to be controlled are approximated by an equivalent model composed of a simple linear submodel plus a nonlinear submodel. A generalised recursive least squares algorithm is used to identify the linear submodel, and a layered neural network is used to detect the unknown nonlinear submodel, in which the weights are updated based on the error between the plant output and the output from the linear submodel. The procedure for controller design is based on the equivalent model, and therefore the nonlinear submodel is naturally accommodated within the control law. Two simulation studies are provided to demonstrate the effectiveness of the control algorithm.
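The recursive least squares identification step can be illustrated in its simplest form. The sketch below is not the paper's generalised RLS (which identifies a full parameter vector); it is the scalar special case, estimating a single unknown gain theta in y = theta*u + noise, with an assumed forgetting factor:

```python
import random

def rls_scalar(data, lam=0.99):
    """Scalar recursive least squares for y = theta*u + noise.

    lam is the forgetting factor; P plays the role of the parameter covariance.
    """
    theta, P = 0.0, 1e6
    for u, y in data:
        k = P * u / (lam + u * P * u)   # gain
        theta += k * (y - u * theta)    # correct estimate by prediction error
        P = (P - k * u * P) / lam       # covariance update with forgetting
    return theta

random.seed(0)
true_gain = 2.5  # hypothetical plant gain
data = [(u, true_gain * u + random.gauss(0, 0.01))
        for u in (random.uniform(-1, 1) for _ in range(200))]
theta_hat = rls_scalar(data)
```

In the paper's scheme, the residual that this linear fit leaves behind is exactly what the layered neural network is trained on.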
Abstract:
A self-tuning controller which automatically assigns weightings to control and set-point following is introduced. This discrete-time single-input single-output controller is based on a generalized minimum-variance control strategy. The automatic on-line selection of weightings is very convenient, especially when the system parameters are unknown or slowly varying with respect to time, which is generally considered to be the type of systems for which self-tuning control is useful. This feature also enables the controller to overcome difficulties with non-minimum phase systems.
Abstract:
A neural network enhanced proportional, integral and derivative (PID) controller is presented that combines the attributes of neural network learning with a generalized minimum-variance self-tuning control (STC) strategy. The neuro PID controller is structured around plant model identification and PID parameter tuning. The plants to be controlled are approximated by an equivalent model composed of a simple linear submodel to approximate plant dynamics around operating points, plus an error agent to accommodate the errors induced by linear submodel inaccuracy due to non-linearities and other complexities. A generalized recursive least-squares algorithm is used to identify the linear submodel, and a layered neural network is used to detect the error agent, in which the weights are updated on the basis of the error between the plant output and the output from the linear submodel. The procedure for controller design is based on the equivalent model, and therefore the error agent is naturally incorporated within the control law. In this way the controller can deal not only with a wide range of linear dynamic plants but also with complex plants characterized by severe non-linearity, uncertainties and non-minimum phase behaviours. Two simulation studies are provided to demonstrate the effectiveness of the controller design procedure.
Abstract:
This paper introduces a new neurofuzzy model construction algorithm for nonlinear dynamic systems based upon basis functions that are Bezier-Bernstein polynomial functions. The algorithm is general in that it copes with n-dimensional inputs, utilising an additive decomposition construction to overcome the curse of dimensionality associated with large n. This new construction algorithm also introduces univariate Bezier-Bernstein polynomial functions for the completeness of the generalized procedure. Like B-spline expansion based neurofuzzy systems, Bezier-Bernstein polynomial function based neurofuzzy networks hold desirable properties such as nonnegativity of the basis functions, partition of unity, and interpretability of the basis functions as fuzzy membership functions, with the additional advantages of structural parsimony and Delaunay input space partition, essentially overcoming the curse of dimensionality associated with conventional fuzzy and RBF networks. This new modeling network is based on an additive decomposition approach together with two separate basis function formation approaches for the univariate and bivariate Bezier-Bernstein polynomial functions used in model construction. The overall network weights are then learnt using conventional least squares methods. Numerical examples are included to demonstrate the effectiveness of this new data based modeling approach.
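The two properties the abstract highlights, nonnegativity and partition of unity, are easy to check for the univariate Bernstein basis B_{i,n}(t) = C(n,i) t^i (1-t)^(n-i) on [0, 1]; this is a generic sketch of that standard basis, not the paper's full construction:

```python
from math import comb

def bernstein(i, n, t):
    """Univariate Bernstein basis polynomial B_{i,n}(t) on [0, 1]."""
    return comb(n, i) * t**i * (1 - t) ** (n - i)

# Nonnegativity and partition of unity: the properties that let the
# basis functions be read as fuzzy membership functions.
t = 0.3
basis = [bernstein(i, 4, t) for i in range(5)]
total = sum(basis)   # sums to 1 for any t in [0, 1]
```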
Abstract:
An alternative blind deconvolution algorithm for white-noise driven minimum phase systems is presented and verified by computer simulation. This algorithm uses a cost function based on a novel idea: variance approximation and series decoupling (VASD), and suggests that not all autocorrelation function values are necessary to implement blind deconvolution.
Abstract:
A bit-level processing (BLP) based linear CDMA detector is derived following the principle of minimum variance distortionless response (MVDR). The combining taps for the MVDR detector are determined from (1) the covariance matrix of the matched filter output, and (2) the corresponding row (or column) of the user correlation matrix. Due to the interference suppression capability of MVDR and the fact that no inversion of the user correlation matrix is involved, the influence of the synchronisation errors is greatly reduced. The detector performance is demonstrated via computer simulations (both synchronisation errors and intercell interference are considered).
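The MVDR combining rule referred to above has the standard closed form w = R⁻¹d / (dᵀR⁻¹d), where R is the covariance matrix of the matched-filter output and d the relevant row of the user correlation matrix; the distortionless constraint is wᵀd = 1. A real-valued two-user toy case (the matrices are invented for illustration):

```python
def mvdr_weights(R, d):
    """MVDR combining taps w = R^{-1} d / (d^T R^{-1} d), real 2x2 case.

    R : 2x2 covariance matrix of the matched-filter output
    d : length-2 steering vector (row of the user correlation matrix)
    """
    (a, b), (c, e) = R
    det = a * e - b * c
    Rinv_d = [(e * d[0] - b * d[1]) / det,     # explicit 2x2 inverse
              (-c * d[0] + a * d[1]) / det]
    denom = d[0] * Rinv_d[0] + d[1] * Rinv_d[1]
    return [x / denom for x in Rinv_d]

R = [[2.0, 0.5],
     [0.5, 1.0]]
d = [1.0, 0.3]                         # hypothetical correlation column
w = mvdr_weights(R, d)
gain = w[0] * d[0] + w[1] * d[1]       # distortionless response: exactly 1
```

Note the denominator normalises the weights rather than inverting the user correlation matrix itself, which is the point the abstract makes about robustness to synchronisation errors.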
Abstract:
We consider the finite sample properties of model selection by information criteria in conditionally heteroscedastic models. Recent theoretical results show that certain popular criteria are consistent in that they will select the true model asymptotically with probability 1. To examine the empirical relevance of this property, Monte Carlo simulations are conducted for a set of non-nested data generating processes (DGPs) with the set of candidate models consisting of all types of model used as DGPs. In addition, not only is the best model considered but also those with similar values of the information criterion, called close competitors, thus forming a portfolio of eligible models. To supplement the simulations, the criteria are applied to a set of economic and financial series. In the simulations, the criteria are largely ineffective at identifying the correct model, either as best or a close competitor, the parsimonious GARCH(1, 1) model being preferred for most DGPs. In contrast, asymmetric models are generally selected to represent actual data. This leads to the conjecture that the properties of parameterizations of processes commonly used to model heteroscedastic data are more similar than may be imagined and that more attention needs to be paid to the behaviour of the standardized disturbances of such models, both in simulation exercises and in empirical modelling.
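The GARCH(1, 1) process that dominates the paper's selection results is defined by the recursion sigma2_t = omega + alpha*eps_{t-1}^2 + beta*sigma2_{t-1}. A minimal simulation sketch, with illustrative parameter values chosen so the unconditional variance omega/(1 - alpha - beta) equals 1:

```python
import random

def simulate_garch11(n, omega=0.05, alpha=0.1, beta=0.85, seed=42):
    """Simulate n returns from a GARCH(1,1) process.

    Conditional variance: sigma2_t = omega + alpha*eps^2 + beta*sigma2.
    Started at the unconditional variance omega / (1 - alpha - beta).
    """
    rng = random.Random(seed)
    sigma2 = omega / (1 - alpha - beta)
    eps = []
    for _ in range(n):
        e = rng.gauss(0, 1) * sigma2 ** 0.5
        eps.append(e)
        sigma2 = omega + alpha * e * e + beta * sigma2
    return eps

returns = simulate_garch11(1000)
```

Series like this, generated under each candidate DGP, are what the information criteria are asked to discriminate between in the Monte Carlo study.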
Abstract:
Using the formalism of the Ruelle response theory, we study how the invariant measure of an Axiom A dynamical system changes as a result of adding noise, and describe how the stochastic perturbation can be used to explore the properties of the underlying deterministic dynamics. We first find the expression for the change in the expectation value of a general observable when a white noise forcing is introduced in the system, both in the additive and in the multiplicative case. We also show that the difference between the expectation value of the power spectrum of an observable in the stochastically perturbed case and of the same observable in the unperturbed case is equal to the variance of the noise times the square of the modulus of the linear susceptibility describing the frequency-dependent response of the system to perturbations with the same spatial patterns as the considered stochastic forcing. This provides a conceptual bridge between the change in the fluctuation properties of the system due to the presence of noise and the response of the unperturbed system to deterministic forcings. Using Kramers-Kronig theory, it is then possible to derive the real and imaginary part of the susceptibility and thus deduce the Green function of the system for any desired observable. We then extend our results to rather general patterns of random forcing, from the case of several white noise forcings, to noise terms with memory, up to the case of a space-time random field. Explicit formulas are provided for each relevant case analysed. As a general result, we find, using an argument of positive-definiteness, that the power spectrum of the stochastically perturbed system is larger at all frequencies than the power spectrum of the unperturbed system. We provide an example of application of our results by considering the spatially extended chaotic Lorenz 96 model. 
These results clarify the property of stochastic stability of SRB measures in Axiom A flows, provide tools for analysing stochastic parameterisations and related closure ansätze to be implemented in modelling studies, and introduce new ways to study the response of a system to external perturbations. Taking into account the chaotic hypothesis, we expect that our results have practical relevance for a more general class of systems than those belonging to Axiom A.
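The central spectral identity summarised above can be restated compactly (notation assumed here: $S_{\sigma}$ and $S_{0}$ are the power spectra of the observable in the perturbed and unperturbed system, $\sigma^{2}$ the noise variance, and $\chi(\omega)$ the linear susceptibility for the forcing's spatial pattern):

```latex
S_{\sigma}(\omega) \;=\; S_{0}(\omega) \;+\; \sigma^{2}\,\lvert \chi(\omega) \rvert^{2}
\quad\Longrightarrow\quad
S_{\sigma}(\omega) \;\geq\; S_{0}(\omega) \;\; \forall\,\omega ,
```

which makes explicit both the bridge between noise-induced fluctuations and the deterministic response, and the positive-definiteness argument for the spectrum being larger at all frequencies.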
Abstract:
We study the empirical performance of the classical minimum-variance hedging strategy, comparing several econometric models for estimating hedge ratios of crude oil, gasoline and heating oil crack spreads. Given the great variability and large jumps in both spot and futures prices, considerable care is required when processing the relevant data and accounting for the costs of maintaining and re-balancing the hedge position. We find that the variance reduction produced by all models is statistically and economically indistinguishable from the one-for-one “naïve” hedge. However, minimum-variance hedging models, especially those based on GARCH, generate much greater margin and transaction costs than the naïve hedge. Therefore we encourage hedgers to use a naïve hedging strategy on the crack spread bundles now offered by the exchange; this strategy is the cheapest and easiest to implement. Our conclusion contradicts the majority of the existing literature, which favours the implementation of GARCH-based hedging strategies.
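The classical minimum-variance hedge ratio against which the naïve (one-for-one) hedge is compared is the OLS slope h* = Cov(ΔS, ΔF) / Var(ΔF) of spot changes on futures changes. A minimal sketch with invented price-change data:

```python
def min_variance_hedge_ratio(dS, dF):
    """OLS hedge ratio h* = Cov(dS, dF) / Var(dF) for spot/futures changes."""
    n = len(dS)
    mS = sum(dS) / n
    mF = sum(dF) / n
    cov = sum((s - mS) * (f - mF) for s, f in zip(dS, dF)) / n
    var = sum((f - mF) ** 2 for f in dF) / n
    return cov / var

# If spot and futures move one-for-one, the minimum-variance estimate
# collapses to the naive hedge, h = 1.
dF = [0.5, -0.2, 0.1, 0.3, -0.4]   # hypothetical futures price changes
dS = list(dF)                       # spot tracks futures exactly
h = min_variance_hedge_ratio(dS, dF)
```

GARCH-based variants make cov and var time-varying, which is precisely what generates the extra re-balancing and margin costs the abstract warns about.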
Abstract:
Although the potential to adapt to warmer climate is constrained by genetic trade-offs, our understanding of how selection and mutation shape genetic (co)variances in thermal reaction norms is poor. Using 71 isofemale lines of the fly Sepsis punctum, originating from northern, central, and southern European climates, we tested for divergence in juvenile development rate across latitude at five experimental temperatures. To investigate effects of evolutionary history in different climates on standing genetic variation in reaction norms, we further compared genetic (co)variances between regions. Flies were reared on either high or low food resources to explore the role of energy acquisition in determining genetic trade-offs between different temperatures. Although the latter had only weak effects on the strength and sign of genetic correlations, genetic architecture differed significantly between climatic regions, implying that evolution of reaction norms proceeds via different trajectories at high latitude versus low latitude in this system. Accordingly, regional genetic architecture was correlated to region-specific differentiation. Moreover, hot development temperatures were associated with low genetic variance and stronger genetic correlations compared to cooler temperatures. We discuss the evolutionary potential of thermal reaction norms in light of their underlying genetic architectures, evolutionary histories, and the materialization of trade-offs in natural environments.