41 results for Discrete function theory
in CentAUR: Central Archive University of Reading - UK
Abstract:
Vertically pointing Doppler radar has been used to study the evolution of ice particles as they sediment through a cirrus cloud. The measured Doppler fall speeds, together with radar-derived estimates for the altitude of cloud top, are used to estimate a characteristic fall time t_c for the 'average' ice particle. The change in radar reflectivity Z is studied as a function of t_c, and is found to increase exponentially with fall time. We use the idea of dynamically scaling particle size distributions to show that this behaviour implies exponential growth of the average particle size, and argue that this exponential growth is a signature of ice crystal aggregation.
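The key inference step in this abstract is that exponential growth of Z in fall time shows up as a straight line in ln(Z) versus t_c. A minimal sketch of that fit, on simulated data (the growth rate, noise level and units below are hypothetical, not the paper's measurements):

```python
import numpy as np

# Illustrative only: if reflectivity Z grows exponentially with fall time
# t_c, then ln(Z) is linear in t_c and the growth rate is recovered by a
# straight-line fit.
rng = np.random.default_rng(0)
t_c = np.linspace(0.0, 30.0, 50)          # fall time, minutes (hypothetical)
growth_rate = 0.12                        # per minute (hypothetical)
Z = 1e-3 * np.exp(growth_rate * t_c) * np.exp(rng.normal(0, 0.05, t_c.size))

slope, intercept = np.polyfit(t_c, np.log(Z), 1)
print(f"fitted growth rate: {slope:.3f} per minute")
```

The fitted slope recovers the assumed growth rate; in the paper the analogous slope is the observational signature attributed to aggregation.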
Abstract:
Flow and turbulence above urban terrain is more complex than above rural terrain, due to the different momentum and heat transfer characteristics that are affected by the presence of buildings (e.g. pressure variations around buildings). The applicability of similarity theory (as developed over rural terrain) is tested using observations of flow from a sonic anemometer located at 190.3 m height in London, UK, using about 6500 h of data. Turbulence statistics—dimensionless wind speed and temperature, standard deviations and correlation coefficients for momentum and heat transfer—were analysed in three ways. First, turbulence statistics were plotted as a function only of a local stability parameter z/Λ (where Λ is the local Obukhov length and z is the height above ground); the σ_i/u_* values (i = u, v, w) for neutral conditions are 2.3, 1.85 and 1.35, respectively, similar to canonical values. Second, analysis of urban mixed-layer formulations during daytime convective conditions over London was undertaken, showing that atmospheric turbulence at high altitude over large cities might not behave dissimilarly from that over rural terrain. Third, correlation coefficients for heat and momentum were analysed with respect to local stability. The results give confidence in using the framework of local similarity for turbulence measured over London, and perhaps other cities. However, the following caveats for our data are worth noting: (i) the terrain is reasonably flat, (ii) building heights vary little over a large area, and (iii) the sensor height is above the mean roughness sublayer depth.
Abstract:
The calculation of accurate and reliable vibrational potential functions and normal co-ordinates is discussed for those simple polyatomic molecules for which it may be possible. Such calculations should be corrected for the effects of anharmonicity and of resonance interactions between the vibrational states, and should be fitted to all the available information on all isotopic species: particularly the vibrational frequencies, Coriolis zeta constants and centrifugal distortion constants. The difficulties of making these corrections, and of making use of the observed data, are reviewed. A programme for the Ferranti Mercury Computer is described by means of which harmonic vibration frequencies and normal co-ordinate vectors, zeta factors and centrifugal distortion constants can be calculated, from a given force field and from given G-matrix elements, etc. The programme has been used on up to 5 × 5 secular equations, for which a single calculation and output of results takes approximately 1 min; it can readily be extended to larger determinants. The best methods of using such a programme and the possibility of reversing the direction of calculation are discussed. The methods are applied to calculating the best possible vibrational potential function for the methane molecule, making use of all the observed data.
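The secular-equation step the abstract describes is the classic Wilson GF eigenvalue problem: harmonic frequencies follow from the eigenvalues of the product G·F. A minimal sketch with hypothetical 2×2 matrices (these are illustrative values, not methane's force field):

```python
import numpy as np

# Wilson GF method, in outline: F holds force constants, G the kinetic
# (inverse-mass) terms; the secular equation |GF - lam*I| = 0 gives the
# squared harmonic frequencies.  Matrices below are hypothetical.
F = np.array([[5.0, 0.3],
              [0.3, 1.2]])   # force-constant matrix (arbitrary units)
G = np.array([[1.1, 0.1],
              [0.1, 2.0]])   # G-matrix elements (arbitrary units)

lam = np.linalg.eigvals(G @ F).real   # eigenvalues of the secular equation
freqs = np.sqrt(np.sort(lam))         # harmonic frequencies (arbitrary units)
print(freqs)
```

The same eigendecomposition also yields the normal co-ordinate vectors (the eigenvectors of GF), from which zeta factors and distortion constants can in turn be computed.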
Abstract:
Asynchronous Optical Sampling (ASOPS) [1,2] and frequency comb spectrometry [3] based on dual Ti:sapphire resonators operated in a master/slave mode have the potential to improve the signal-to-noise ratio in THz transient and IR spectrometry. The multimode Brownian oscillator time-domain response function described by state-space models is a mathematically robust framework that can be used to describe the dispersive phenomena governed by Lorentzian, Debye and Drude responses. In addition, the optical properties of an arbitrary medium can be expressed as a linear combination of simple multimode Brownian oscillator functions. The suitability of a range of signal processing schemes adopted from the Systems Identification and Control Theory community for further processing the recorded THz transients in the time or frequency domain will be outlined [4,5]. Since a femtosecond duration pulse is capable of persistent excitation of the medium within which it propagates, such an approach is fully justified. Several de-noising routines based on system identification will be shown. Furthermore, specifically developed apodization structures will be discussed. These are necessary because, owing to dispersion, the time-domain background and sample interferograms are non-symmetrical [6-8]. These procedures can lead to a more precise estimation of the complex insertion loss function. The algorithms are applicable to femtosecond spectroscopies across the EM spectrum. Finally, a methodology for femtosecond pulse shaping using genetic algorithms, aiming to map and control molecular relaxation processes, will be mentioned.
Abstract:
This study suggests a statistical strategy for explaining how food purchasing intentions are influenced by different levels of risk perception and trust in food safety information. The modelling process is based on Ajzen's Theory of Planned Behaviour and includes trust and risk perception as additional explanatory factors. Interaction and endogeneity across these determinants is explored through a system of simultaneous equations, while the SPARTA equation is estimated through an ordered probit model. Furthermore, parameters are allowed to vary as a function of socio-demographic variables. The application explores chicken purchasing intentions both in a standard situation and conditional on a hypothetical salmonella scare. Data were collected through a nationally representative, UK-wide survey of 533 respondents in face-to-face, in-home interviews. Empirical findings show that interactions exist among the determinants of planned behaviour, and that socio-demographic variables improve the model's performance. Attitudes emerge as the key determinant of intention to purchase chicken, while trust in food safety information provided by the media reduces the likelihood of purchasing.
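The ordered probit model mentioned above treats a discrete ordered response (here, purchase intention) as a thresholded latent variable: P(y = j | x) = Φ(c_j − xβ) − Φ(c_{j−1} − xβ). A minimal maximum-likelihood sketch on simulated data (the covariate, thresholds and sample are hypothetical, not the paper's survey):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Simulated data: one covariate (e.g. a trust/risk score), a latent
# intention, and three ordered response categories 0 < 1 < 2.
rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=n)
y_star = 0.8 * x + rng.normal(size=n)      # latent intention, beta = 0.8
y = np.digitize(y_star, [-0.5, 0.7])       # observed ordered category

def neg_loglik(params):
    beta, c1, log_gap = params             # log_gap keeps cutpoints ordered
    cuts = np.array([-np.inf, c1, c1 + np.exp(log_gap), np.inf])
    p = norm.cdf(cuts[y + 1] - beta * x) - norm.cdf(cuts[y] - beta * x)
    return -np.sum(np.log(np.clip(p, 1e-12, None)))

res = minimize(neg_loglik, x0=[0.0, -0.5, 0.0], method="Nelder-Mead")
beta_hat = res.x[0]
print(f"estimated beta: {beta_hat:.2f}")
```

The paper additionally embeds this equation in a simultaneous system and lets the parameters vary with socio-demographics; that machinery is not reproduced here.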
Abstract:
A construction algorithm for multioutput radial basis function (RBF) network modelling is introduced by combining a locally regularised orthogonal least squares (LROLS) model selection with a D-optimality experimental design. The proposed algorithm aims to achieve maximised model robustness and sparsity via two effective and complementary approaches. The LROLS method alone is capable of producing a very parsimonious RBF network model with excellent generalisation performance. The D-optimality design criterion enhances the model efficiency and robustness. A further advantage of the combined approach is that the user only needs to specify a weighting for the D-optimality cost in the combined RBF model selection criterion, and the entire model construction procedure becomes automatic. The value of this weighting does not influence the model selection procedure critically, and it can be chosen with ease from a wide range of values.
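For background, the basic construction that LROLS and the D-optimality criterion build on is an RBF network fitted by least squares over a set of candidate centres. A plain sketch of that baseline on a toy problem (target function, centres and widths are hypothetical; the paper's regularised subset selection is not reproduced):

```python
import numpy as np

# Baseline RBF regression: Gaussian basis functions at candidate centres,
# weights by ordinary least squares.  LROLS would instead select a sparse
# subset of these columns with local regularisation plus a D-optimality
# cost on the selected design.
rng = np.random.default_rng(2)
x = np.linspace(-3, 3, 80)
y = np.sinc(x) + rng.normal(0, 0.02, x.size)   # toy target (hypothetical)

centres = np.linspace(-3, 3, 12)               # candidate RBF centres
width = 0.8
Phi = np.exp(-(x[:, None] - centres[None, :])**2 / (2 * width**2))

w, *_ = np.linalg.lstsq(Phi, y, rcond=None)    # network weights
resid = y - Phi @ w
rms = np.sqrt(np.mean(resid**2))
print(f"RMS residual: {rms:.4f}")
```

In the full algorithm the regression matrix Phi is orthogonalised column by column and terms are admitted according to the combined regularised-error plus D-optimality criterion.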
Abstract:
A modified radial basis function (RBF) neural network and its identification algorithm, based on observational data with heterogeneous noise, are introduced. The Box-Cox transformed system output is represented by the RBF neural network. To identify the model from observational data, the singular value decomposition of the full regression matrix, consisting of basis functions formed by system input data, is first carried out; a fast identification method is then developed using the Gauss-Newton algorithm to derive the required Box-Cox transformation, based on a maximum likelihood estimator (MLE) over a model base spanned by the largest eigenvectors. Finally, the Box-Cox transformation-based RBF neural network is identified using the derived optimal Box-Cox transformation and an orthogonal forward regression algorithm with a pseudo-PRESS statistic, yielding a sparse RBF model with good generalisation. The proposed algorithm and its efficacy are demonstrated with numerical examples.
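The Box-Cox step in isolation can be illustrated with scipy's maximum-likelihood estimate of the transformation parameter λ on positive-valued output data (the data below are simulated; the paper's coupling to an RBF model and orthogonal forward regression is not reproduced):

```python
import numpy as np
from scipy import stats

# Box-Cox transform: y -> (y**lam - 1)/lam (log for lam = 0), with lam
# chosen by maximum likelihood.  For lognormal data the MLE should sit
# near lam = 0, i.e. a log transform.
rng = np.random.default_rng(3)
y = np.exp(rng.normal(0.0, 0.5, 500))      # lognormal sample (hypothetical)
y_transformed, lam = stats.boxcox(y)
print(f"MLE lambda: {lam:.2f}")
```

In the paper's algorithm the analogous λ is found by Gauss-Newton iterations on the likelihood of the reduced (SVD-projected) model rather than on the raw sample.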
Abstract:
In a previous paper (J. of Differential Equations, Vol. 249 (2010), 3081-3098) we examined a family of periodic Sturm-Liouville problems with boundary and interior singularities which are highly non-self-adjoint but have only real eigenvalues. We now establish Schatten class properties of the associated resolvent operator.
Abstract:
This study explores the implications of an organization moving toward service-dominant logic (S-D logic) for the sales function. Driven by its customers' needs, a service orientation by its nature requires personal interaction, and sales personnel are in an ideal position to develop offerings with the customer. However, the development of S-D logic may require sales staff to develop additional skills. Employing a single case study, the study identified that sales personnel are quick to appreciate the advantages of S-D logic for customer satisfaction, and six specific skills were highlighted and explored. Further, three propositions were identified: in an organization adopting S-D logic, the sales process needs to elicit needs at both embedded-value and value-in-use levels. In addition, the sales process needs to coproduce not just goods and service attributes but also attributes of the customer's usage processes.
Abstract:
A neural network enhanced self-tuning controller is presented, which combines the attributes of neural network mapping with a generalised minimum variance self-tuning control (STC) strategy. In this way the controller can deal with nonlinear plants which exhibit features such as uncertainties, nonminimum phase behaviour, coupling effects and possibly unmodelled dynamics, and whose nonlinearities are assumed to be globally bounded. The unknown nonlinear plants to be controlled are approximated by an equivalent model composed of a simple linear submodel plus a nonlinear submodel. A generalised recursive least squares algorithm is used to identify the linear submodel, and a layered neural network is used to capture the unknown nonlinear submodel, with the weights updated based on the error between the plant output and the output of the linear submodel. The procedure for controller design is based on the equivalent model; the nonlinear submodel is therefore naturally accommodated within the control law. Two simulation studies are provided to demonstrate the effectiveness of the control algorithm.
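The linear-submodel identification step can be sketched with a standard recursive least squares (RLS) update; the plant coefficients, forgetting factor and noise level below are hypothetical, and the neural sub-model of the abstract is omitted:

```python
import numpy as np

# RLS identification of a first-order linear submodel
#   y_k = a*y_{k-1} + b*u_{k-1} + noise
# theta converges to the true parameters under persistent excitation.
rng = np.random.default_rng(4)
true_theta = np.array([0.7, 0.2])          # [a, b], hypothetical plant

theta = np.zeros(2)                        # parameter estimate
P = np.eye(2) * 100.0                      # covariance, large initial value
lam = 0.99                                 # forgetting factor

y_prev, u_prev = 0.0, 0.0
for _ in range(500):
    u = rng.normal()                       # exciting input signal
    y = true_theta @ np.array([y_prev, u_prev]) + rng.normal(0, 0.01)
    phi = np.array([y_prev, u_prev])       # regressor vector
    k = P @ phi / (lam + phi @ P @ phi)    # gain vector
    theta = theta + k * (y - phi @ theta)  # parameter update
    P = (P - np.outer(k, phi @ P)) / lam   # covariance update
    y_prev, u_prev = y, u

print(theta)
```

In the controller described above, the residual between the plant output and this linear submodel's output is what drives the neural network's weight updates.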
Abstract:
An efficient numerical self-consistent field theory (SCFT) algorithm is developed for treating structured polymers on spherical surfaces. The method solves the diffusion equations of SCFT with a pseudospectral approach that combines a spherical-harmonics expansion for the angular coordinates with a modified real-space Crank–Nicolson method for the radial direction. The self-consistent field equations are solved with Anderson-mixing iterations using dynamical parameters and an alignment procedure to prevent angular drift of the solution. A demonstration of the algorithm is provided for thin films of diblock copolymer grafted to the surface of a spherical core, in which the sequence of equilibrium morphologies is predicted as a function of diblock composition. The study reveals an array of interesting behaviors as the block copolymer pattern is forced to adapt to the finite surface area of the sphere.
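The radial half of the pseudospectral scheme can be illustrated in heavily simplified form: one Crank-Nicolson step of a 1D diffusion equation on a radial grid (the grid, initial condition and boundary treatment are hypothetical; the spherical-harmonics angular part and the SCFT fields are omitted):

```python
import numpy as np

# One Crank-Nicolson step for dq/ds = d2q/dr2 on a uniform radial grid
# with Dirichlet ends, via a dense tridiagonal operator for clarity.
n, dr, ds = 101, 0.1, 0.01
r = np.arange(n) * dr
q = np.exp(-(r - 5.0)**2)                  # initial condition (hypothetical)

# Second-difference operator with Dirichlet boundaries
L = (np.diag(np.full(n - 1, 1.0), -1) - 2 * np.eye(n)
     + np.diag(np.full(n - 1, 1.0), 1)) / dr**2
A = np.eye(n) - 0.5 * ds * L               # implicit half-step
B = np.eye(n) + 0.5 * ds * L               # explicit half-step
q_next = np.linalg.solve(A, B @ q)         # one Crank-Nicolson step
print(q_next.sum() * dr)                   # mass approximately conserved
```

In the full algorithm this radial sweep alternates with a spherical-harmonics treatment of the angular coordinates, and the resulting propagators feed the Anderson-mixed self-consistent field iterations.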
Abstract:
This paper compares a number of different extreme value models for determining the value at risk (VaR) of three LIFFE futures contracts. A semi-nonparametric approach is also proposed, where the tail events are modeled using the generalised Pareto distribution, and normal market conditions are captured by the empirical distribution function. The value at risk estimates from this approach are compared with those of standard nonparametric extreme value tail estimation approaches, with a small sample bias-corrected extreme value approach, and with those calculated from bootstrapping the unconditional density and bootstrapping from a GARCH(1,1) model. The results indicate that, for a holdout sample, the proposed semi-nonparametric extreme value approach yields superior results to other methods, but the small sample tail index technique is also accurate.
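The tail-modelling idea can be sketched as follows: losses beyond a high threshold are fitted with a generalised Pareto distribution (GPD), the empirical distribution supplies the exceedance rate, and VaR is read off the fitted tail quantile. The returns below are simulated from a fat-tailed distribution, not LIFFE data:

```python
import numpy as np
from scipy import stats

# Peaks-over-threshold VaR: fit a GPD to exceedances over a high threshold
# u, then invert the tail formula for the p-quantile.
rng = np.random.default_rng(5)
losses = stats.t.rvs(df=4, size=5000, random_state=rng)  # fat-tailed sample

u = np.quantile(losses, 0.95)              # tail threshold (95th percentile)
excess = losses[losses > u] - u
xi, loc, beta = stats.genpareto.fit(excess, floc=0.0)

p, n, n_u = 0.99, losses.size, excess.size
# GPD tail quantile: VaR_p = u + (beta/xi) * (((n/n_u)*(1-p))**(-xi) - 1)
var_99 = u + (beta / xi) * (((n / n_u) * (1 - p)) ** (-xi) - 1)
print(f"99% VaR: {var_99:.2f}")
```

The semi-nonparametric approach of the paper uses exactly this split: GPD for the tail events, empirical distribution function for normal market conditions below the threshold.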