36 results for Analytic function theory,
in CentAUR: Central Archive, University of Reading - UK
Abstract:
Details are given of a boundary-fitted mesh generation method for use in modelling free surface flow and water quality. A numerical method has been developed for generating conformal meshes for curvilinear polygonal and multiply-connected regions. The method is based on the Cauchy-Riemann conditions for the analytic function and is able to map a curvilinear polygonal region directly onto a regular polygonal region with horizontal and vertical sides. A set of equations has been derived for determining the lengths of these sides, and the least-squares method has been used in solving the equations. Several numerical examples are presented to illustrate the method.
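The Cauchy-Riemann conditions that underpin the mesh generator can be illustrated with a quick numerical check. This is not the paper's method, only a sketch of the property it exploits: for an analytic (conformal) map f = u + iv, the partial derivatives satisfy u_x = v_y and u_y = -v_x. Here f(z) = z**2 is an arbitrary stand-in analytic map.

```python
# Numerical check of the Cauchy-Riemann conditions u_x = v_y, u_y = -v_x
# for an analytic map; f(z) = z**2 is a stand-in, any analytic f would do.
h = 1e-6  # central-difference step

def f(z):
    return z ** 2

def cauchy_riemann_residual(x, y):
    u = lambda a, b: f(complex(a, b)).real
    v = lambda a, b: f(complex(a, b)).imag
    ux = (u(x + h, y) - u(x - h, y)) / (2 * h)
    uy = (u(x, y + h) - u(x, y - h)) / (2 * h)
    vx = (v(x + h, y) - v(x - h, y)) / (2 * h)
    vy = (v(x, y + h) - v(x, y - h)) / (2 * h)
    return abs(ux - vy) + abs(uy + vx)

print(cauchy_riemann_residual(0.7, -0.3))  # ~0 up to finite-difference error
```

A non-analytic map (e.g. f(z) = conj(z)) would give an O(1) residual instead.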
Abstract:
This paper extends the singular value decomposition to a path of matrices E(t). An analytic singular value decomposition of a path of matrices E(t) is an analytic path of factorizations E(t) = X(t)S(t)Y(t)^T, where X(t) and Y(t) are orthogonal and S(t) is diagonal. To maintain differentiability, the diagonal entries of S(t) are allowed to be either positive or negative and to appear in any order. This paper investigates the existence and uniqueness of analytic SVDs and develops an algorithm for computing them. We show that a real analytic path E(t) always admits a real analytic SVD, and that a full-rank, smooth path E(t) with distinct singular values admits a smooth SVD. We derive a differential equation for the left factor, develop Euler-like and extrapolated Euler-like numerical methods for approximating an analytic SVD, and prove that the Euler-like method converges.
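A crude discrete stand-in for the idea (not the paper's Euler-like scheme) is to take pointwise SVDs along the path and align the column signs of the factors between consecutive steps. Flipping matching columns of X and Y leaves X S Y^T unchanged, so each factorization remains valid; the paper's analytic SVD goes further, letting diagonal entries of S go negative so the path stays smooth through zero crossings.

```python
import numpy as np

def path_svd(E, ts):
    """Pointwise SVDs along a matrix path, with column signs aligned
    between consecutive steps for continuity (a sketch, not an ASVD)."""
    Xs, Ss, Ys = [], [], []
    X_prev = None
    for t in ts:
        X, s, Yt = np.linalg.svd(E(t))
        Y = Yt.T
        if X_prev is not None:
            # Flip matching columns of X and Y so X(t) stays close to the
            # previous X; flipping both leaves X S Y^T unchanged.
            signs = np.sign(np.sum(X_prev * X, axis=0))
            signs[signs == 0] = 1.0
            X = X * signs
            Y = Y * signs
        X_prev = X
        Xs.append(X); Ss.append(np.diag(s)); Ys.append(Y)
    return Xs, Ss, Ys

# Invented example path: E(t) = diag(cos t, 1 + t)
E = lambda t: np.array([[np.cos(t), 0.0], [0.0, 1.0 + t]])
ts = np.linspace(0.0, 1.0, 11)
Xs, Ss, Ys = path_svd(E, ts)
# every factorization still reproduces E(t)
err = max(np.linalg.norm(X @ S @ Y.T - E(t))
          for X, S, Y, t in zip(Xs, Ss, Ys, ts))
print(err)
```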
Abstract:
By eliminating the short range negative divergence of the Debye–Hückel pair distribution function, but retaining the exponential charge screening known to operate at large interparticle separation, the thermodynamic properties of one-component plasmas of point ions or charged hard spheres can be well represented even in the strong coupling regime. Predicted electrostatic free energies agree within 5% of simulation data for typical Coulomb interactions up to a factor of 10 times the average kinetic energy. Here, this idea is extended to the general case of a uniform ionic mixture, comprising an arbitrary number of components, embedded in a rigid neutralizing background. The new theory is implemented in two ways: (i) by an unambiguous iterative algorithm that requires numerical methods and breaks the symmetry of cross correlation functions; and (ii) by invoking generalized matrix inverses that maintain symmetry and yield completely analytic solutions, but which are not uniquely determined. The extreme computational simplicity of the theory is attractive when considering applications to complex inhomogeneous fluids of charged particles.
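Route (ii) of the abstract rests on generalized matrix inverses. A minimal sketch of that idea (with an invented matrix, not the theory's actual equations): when the linear system for the correlation amplitudes is symmetric but singular, for instance because of a neutrality constraint, the Moore-Penrose pseudoinverse still yields a consistent, symmetry-preserving solution.

```python
import numpy as np

# Symmetric, rank-deficient system (rows sum to zero, mimicking a
# neutrality constraint); matrix and right-hand side are illustrative only.
M = np.array([[ 2.0, -1.0, -1.0],
              [-1.0,  2.0, -1.0],
              [-1.0, -1.0,  2.0]])
b = np.array([1.0, 0.0, -1.0])   # consistent: orthogonal to the null vector (1,1,1)

# Moore-Penrose generalized inverse gives the minimum-norm solution.
x = np.linalg.pinv(M) @ b
print(x, np.allclose(M @ x, b))
```

As the abstract notes, such solutions are analytic but not unique: adding any multiple of the null vector (1, 1, 1) to x solves the same system.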
Abstract:
Retrieving a subset of items can cause the forgetting of other items, a phenomenon referred to as retrieval-induced forgetting. According to some theorists, retrieval-induced forgetting is the consequence of an inhibitory mechanism that acts to reduce the accessibility of non-target items that interfere with the retrieval of target items. Other theorists argue that inhibition is unnecessary to account for retrieval-induced forgetting, contending instead that the phenomenon can be best explained by non-inhibitory mechanisms, such as strength-based competition or blocking. The current paper provides the first major meta-analysis of retrieval-induced forgetting, conducted with the primary purpose of quantitatively evaluating the multitude of findings that have been used to contrast these two theoretical viewpoints. The results largely supported inhibition accounts, but also provided some challenging evidence, with the nature of the results often varying as a function of how retrieval-induced forgetting was assessed. Implications for further research and theory development are discussed.
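The pooling step at the core of any meta-analysis can be sketched in a few lines. This is illustrative only, with invented effect sizes; the paper's analysis involves far more elaborate moderator comparisons across assessment methods.

```python
import numpy as np

# Toy fixed-effect meta-analysis: pool study effect sizes with
# inverse-variance weights. All numbers are invented for illustration.
effects = np.array([0.42, 0.31, 0.55, 0.18, 0.40])    # per-study effect sizes
variances = np.array([0.02, 0.03, 0.05, 0.01, 0.02])  # their sampling variances

w = 1.0 / variances
pooled = np.sum(w * effects) / np.sum(w)  # weighted mean effect
se = np.sqrt(1.0 / np.sum(w))             # its standard error
print(pooled, se)
```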
Abstract:
Vertically pointing Doppler radar has been used to study the evolution of ice particles as they sediment through a cirrus cloud. The measured Doppler fall speeds, together with radar-derived estimates for the altitude of cloud top, are used to estimate a characteristic fall time tc for the 'average' ice particle. The change in radar reflectivity Z is studied as a function of tc, and is found to increase exponentially with fall time. We use the idea of dynamically scaling particle size distributions to show that this behaviour implies exponential growth of the average particle size, and argue that this exponential growth is a signature of ice crystal aggregation.
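The key empirical step, detecting exponential growth of Z with fall time, amounts to a straight-line fit of ln(Z) against tc. A sketch with synthetic data standing in for the radar retrievals (growth rate and noise level are invented):

```python
import numpy as np

# Synthetic reflectivity data: Z grows exponentially with fall time tc
# at an assumed rate of 0.12, with multiplicative noise.
rng = np.random.default_rng(0)
tc = np.linspace(0.0, 30.0, 40)                       # fall time (assumed units)
Z = 0.5 * np.exp(0.12 * tc) * np.exp(rng.normal(0, 0.05, tc.size))

# Exponential growth <=> linear trend in log space.
growth_rate, log_Z0 = np.polyfit(tc, np.log(Z), 1)
print(growth_rate)  # recovers a value close to the true rate 0.12
```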
Abstract:
We study generalised prime systems P (1 < p_1 ≤ p_2 ≤ …, with p_j ∈ ℝ tending to infinity) and the associated Beurling zeta function ζ_P(s) = ∏_{j=1}^∞ (1 − p_j^{−s})^{−1}. Under appropriate assumptions, we establish various analytic properties of ζ_P(s), including its analytic continuation, and we characterise the existence of a suitable generalised functional equation. In particular, we examine the relationship between a counterpart of the Prime Number Theorem (with error term) and the properties of the analytic continuation of ζ_P(s). Further, we study 'well-behaved' g-prime systems, namely systems for which both the prime and the integer counting functions are asymptotically well behaved. Finally, we show that there exists a natural correspondence between generalised prime systems and suitable orders on N². Some of the above results are relevant to the second author's theory of 'fractal membranes', whose spectral partition functions are given by Beurling-type zeta functions, as well as to joint work of that author and R. Nest on zeta functions attached to quasicrystals.
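For concreteness (this is illustration, not the paper's analysis): taking the ordinary primes as the g-prime system, a truncated Euler product for ζ_P(s) approaches the Riemann zeta value, here ζ(2) = π²/6.

```python
import math

# Truncated Beurling/Euler product over the ordinary primes up to 47.
primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]

def zeta_P(s, primes):
    """prod_j (1 - p_j**-s)**-1 over the given finite g-prime system."""
    prod = 1.0
    for p in primes:
        prod *= 1.0 / (1.0 - p ** (-s))
    return prod

approx = zeta_P(2.0, primes)
print(approx, math.pi ** 2 / 6)  # truncation slightly undershoots zeta(2)
```

Every omitted factor exceeds 1, so the truncation always underestimates the full product.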
Abstract:
The radar scattering properties of realistic aggregate snowflakes have been calculated using the Rayleigh-Gans theory. We find that the effect of the snowflake geometry on the scattering may be described in terms of a single universal function, which depends only on the overall shape of the aggregate and not the geometry or size of the pristine ice crystals which compose the flake. This function is well approximated by a simple analytic expression at small sizes; for larger snowflakes we fit a curve to our numerical data. We then demonstrate how this allows a characteristic snowflake radius to be derived from dual wavelength radar measurements without knowledge of the pristine crystal size or habit, while at the same time showing that this detail is crucial to using such data to estimate ice water content. We also show that the 'effective radius', characterizing the ratio of particle volume to projected area, cannot be inferred from dual wavelength radar data for aggregates. Finally, we consider the errors involved in approximating snowflakes by 'air-ice spheres', and show that for small enough aggregates the predicted dual wavelength ratio typically agrees to within a few percent, provided some care is taken in choosing the radius of the sphere and the dielectric constant of the air-ice mixture; at larger sizes the radar becomes more sensitive to particle shape, and the errors associated with the sphere model are found to increase accordingly.
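In Rayleigh-Gans theory the particle shape enters through a single form factor, which is the structure of the "universal function" idea. As a sketch (using a homogeneous sphere as the simplest stand-in shape, not the aggregate function fitted in the paper), the sphere form factor tends to 1 at small size parameter, the Rayleigh limit in which a dual wavelength ratio carries no size information; away from that limit the ratio departs from 1 and becomes size-sensitive. Wavelengths roughly corresponding to 35 and 94 GHz cloud radars are assumed.

```python
import math

def sphere_form_factor(x):
    """Rayleigh-Gans form factor of a homogeneous sphere, x = k * r."""
    if x < 1e-6:
        return 1.0  # Rayleigh limit
    return (3.0 * (math.sin(x) - x * math.cos(x)) / x ** 3) ** 2

# Assumed radar wavelengths: ~8.6 mm (35 GHz) and ~3.2 mm (94 GHz).
k1 = 2 * math.pi / 8.6e-3
k2 = 2 * math.pi / 3.2e-3
r = 0.5e-3  # 0.5 mm particle radius (illustrative)

dwr = sphere_form_factor(k1 * r) / sphere_form_factor(k2 * r)
print(dwr)  # > 1: the shorter wavelength scatters relatively less
```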
Abstract:
Flow and turbulence above urban terrain is more complex than above rural terrain, due to the different momentum and heat transfer characteristics that are affected by the presence of buildings (e.g. pressure variations around buildings). The applicability of similarity theory (as developed over rural terrain) is tested using observations of flow from a sonic anemometer located at 190.3 m height in London, UK, using about 6500 h of data. Turbulence statistics—dimensionless wind speed and temperature, standard deviations and correlation coefficients for momentum and heat transfer—were analysed in three ways. First, turbulence statistics were plotted as a function only of a local stability parameter z/Λ (where Λ is the local Obukhov length and z is the height above ground); the σ_i/u_* values (i = u, v, w) for neutral conditions are 2.3, 1.85 and 1.35 respectively, similar to canonical values. Second, analysis of urban mixed-layer formulations during daytime convective conditions over London was undertaken, showing that atmospheric turbulence at high altitude over large cities might not behave dissimilarly from that over rural terrain. Third, correlation coefficients for heat and momentum were analysed with respect to local stability. The results give confidence in using the framework of local similarity for turbulence measured over London, and perhaps other cities. However, the following caveats for our data are worth noting: (i) the terrain is reasonably flat, (ii) building heights vary little over a large area, and (iii) the sensor height is above the mean roughness sublayer depth.
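The first analysis step, forming a dimensionless statistic such as σ_u/u_*, can be sketched with synthetic sonic-anemometer data (these numbers are invented, not the London measurements). The friction velocity is u_* = (−⟨u′w′⟩)^(1/2), taken from the covariance of streamwise and vertical velocity fluctuations.

```python
import numpy as np

# Synthetic "sonic anemometer" series: vertical velocity w and a streamwise
# wind u correlated with it, so that <u'w'> < 0 as in a shear flow.
rng = np.random.default_rng(1)
n = 200_000
w = rng.normal(0.0, 0.5, n)
u = 5.0 - 0.4 * w + rng.normal(0.0, 1.0, n)

u_p = u - u.mean()                       # fluctuations about the mean
w_p = w - w.mean()
u_star = (-np.mean(u_p * w_p)) ** 0.5    # local friction velocity
sigma_u = u_p.std()

print(sigma_u / u_star)  # the dimensionless statistic plotted against z/Lambda
```

With real neutral-condition data this ratio would sit near the canonical value of 2.3 quoted in the abstract; the synthetic parameters here give a larger value and only illustrate the computation.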
Abstract:
The calculation of accurate and reliable vibrational potential functions and normal co-ordinates is discussed, for those simple polyatomic molecules for which this may be possible. Such calculations should be corrected for the effects of anharmonicity and of resonance interactions between the vibrational states, and should be fitted to all the available information on all isotopic species: particularly the vibrational frequencies, Coriolis zeta constants and centrifugal distortion constants. The difficulties of making these corrections, and of making use of the observed data, are reviewed. A programme for the Ferranti Mercury Computer is described by means of which harmonic vibration frequencies and normal co-ordinate vectors, zeta factors and centrifugal distortion constants can be calculated, from a given force field and from given G-matrix elements, etc. The programme has been used on up to 5 × 5 secular equations, for which a single calculation and output of results takes approximately 1 min; it can readily be extended to larger determinants. The best methods of using such a programme and the possibility of reversing the direction of calculation are discussed. The methods are applied to calculating the best possible vibrational potential function for the methane molecule, making use of all the observed data.
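The core computation such a programme performs is the Wilson GF secular equation: harmonic frequencies follow from the eigenvalues λ of the product G F of the kinetic (G-matrix) and force-constant matrices. A sketch with invented 2×2 numbers (real G and F elements come from molecular geometry and the fitted force field):

```python
import numpy as np

# Wilson GF method sketch: eigenvalues of G @ F give lambda_i, from which
# harmonic frequencies follow. All matrix entries are invented.
G = np.array([[1.05, 0.10],
              [0.10, 0.95]])   # assumed inverse-kinetic-energy (G) elements
F = np.array([[5.0, 0.5],
              [0.5, 4.0]])     # assumed force constants

lam = np.sort(np.linalg.eigvals(G @ F).real)
freqs = np.sqrt(lam)           # frequencies in arbitrary consistent units
print(freqs)
```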
Abstract:
Asynchronous Optical Sampling (ASOPS) [1,2] and frequency comb spectrometry [3] based on dual Ti:sapphire resonators operated in a master/slave mode have the potential to improve the signal-to-noise ratio in THz transient and IR spectrometry. The multimode Brownian oscillator time-domain response function described by state-space models is a mathematically robust framework that can be used to describe the dispersive phenomena governed by Lorentzian, Debye and Drude responses. In addition, the optical properties of an arbitrary medium can be expressed as a linear combination of simple multimode Brownian oscillator functions. The suitability of a range of signal processing schemes adopted from the Systems Identification and Control Theory community for further processing the recorded THz transients in the time or frequency domain will be outlined [4,5]. Since a femtosecond-duration pulse is capable of persistent excitation of the medium within which it propagates, such an approach is perfectly justifiable. Several de-noising routines based on system identification will be shown. Furthermore, specifically developed apodization structures will be discussed; these are necessary because, owing to dispersion, the time-domain background and sample interferograms are non-symmetrical [6-8]. These procedures can lead to a more precise estimation of the complex insertion loss function. The algorithms are applicable to femtosecond spectroscopies across the EM spectrum. Finally, a methodology for femtosecond pulse shaping using genetic algorithms, aiming to map and control molecular relaxation processes, will be mentioned.
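The simplest of the dispersive responses mentioned, Debye relaxation, can be written as the frequency response of a first-order state-space model dx/dt = -x/τ + u/τ, y = x. A sketch with an illustrative (not fitted) relaxation time:

```python
import numpy as np

# Debye relaxation as a first-order frequency response H(w) = 1/(1 + i w tau);
# the relaxation time below is illustrative, not a fitted value.
tau = 8e-12                               # relaxation time, seconds
omega = np.logspace(10, 13, 50)           # angular frequency, rad/s (THz range)
H = 1.0 / (1.0 + 1j * omega * tau)

# |H| rolls off monotonically from ~1 (w*tau << 1) towards 0 (w*tau >> 1).
print(abs(H[0]), abs(H[-1]))
```

Lorentzian and Drude terms are second- and first-order state-space blocks of the same kind, and a medium's full response is modelled as a linear combination of such blocks.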
Abstract:
This study suggests a statistical strategy for explaining how food purchasing intentions are influenced by different levels of risk perception and trust in food safety information. The modelling process is based on Ajzen's Theory of Planned Behaviour and includes trust and risk perception as additional explanatory factors. Interaction and endogeneity across these determinants are explored through a system of simultaneous equations, while the SPARTA equation is estimated through an ordered probit model. Furthermore, parameters are allowed to vary as a function of socio-demographic variables. The application explores chicken purchasing intentions both in a standard situation and conditional on a hypothetical salmonella scare. Data were collected through a nationally representative UK-wide survey of 533 respondents in face-to-face, in-home interviews. Empirical findings show that interactions exist among the determinants of planned behaviour and that socio-demographic variables improve the model's performance. Attitudes emerge as the key determinant of intention to purchase chicken, while trust in food safety information provided by media reduces the likelihood of purchasing.
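The estimation step can be sketched as an ordered probit log-likelihood. This is a hedged, minimal version with one covariate and synthetic data; the paper's SPARTA specification involves a full simultaneous-equation system with many covariates.

```python
import numpy as np
from scipy.stats import norm

def ordered_probit_loglik(beta, c0, c1, x, y):
    """Log-likelihood of a 3-category ordered probit with cut-points
    c0 < c1 and a single linear predictor beta * x (illustrative only)."""
    xb = beta * x
    p = np.where(y == 0, norm.cdf(c0 - xb),
        np.where(y == 1, norm.cdf(c1 - xb) - norm.cdf(c0 - xb),
                 1.0 - norm.cdf(c1 - xb)))
    return np.sum(np.log(p))

# Synthetic intention data generated from the model itself.
rng = np.random.default_rng(2)
x = rng.normal(size=500)
latent = 0.8 * x + rng.normal(size=500)
y = np.digitize(latent, [-0.5, 0.5])   # categories 0, 1, 2

ll_true = ordered_probit_loglik(0.8, -0.5, 0.5, x, y)
ll_bad = ordered_probit_loglik(0.0, -0.5, 0.5, x, y)
print(ll_true > ll_bad)  # the generating parameters fit better
```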
Abstract:
A construction algorithm for multioutput radial basis function (RBF) network modelling is introduced by combining a locally regularised orthogonal least squares (LROLS) model selection with a D-optimality experimental design. The proposed algorithm aims to achieve maximised model robustness and sparsity via two effective and complementary approaches. The LROLS method alone is capable of producing a very parsimonious RBF network model with excellent generalisation performance. The D-optimality design criterion enhances the model efficiency and robustness. A further advantage of the combined approach is that the user only needs to specify a weighting for the D-optimality cost in the combined RBF model selection criterion, and the entire model construction procedure becomes automatic. The value of this weighting does not influence the model selection procedure critically, and it can be chosen with ease from a wide range of values.
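The forward-selection core of orthogonal least squares can be sketched in a few lines. This is a matching-pursuit-style simplification (it deflates only the residual, not the candidate regressors) with none of the local regularisation or D-optimality weighting of the full LROLS algorithm, and all data are synthetic.

```python
import numpy as np

# Synthetic 1-D regression target.
rng = np.random.default_rng(3)
x = np.linspace(-3, 3, 80)
y = np.sin(x) + rng.normal(0, 0.05, x.size)

# Gaussian RBF candidate regressors, one centre per data point (width 0.8).
centres = x
P = np.exp(-(x[:, None] - centres[None, :]) ** 2 / (2 * 0.8 ** 2))

selected, residual = [], y.copy()
for _ in range(6):  # greedily pick 6 terms
    # Error-reduction score of each candidate against the current residual.
    scores = (P.T @ residual) ** 2 / np.sum(P ** 2, axis=0)
    k = int(np.argmax(scores))
    selected.append(k)
    p = P[:, k]
    residual = residual - p * (p @ residual) / (p @ p)  # deflate residual

print(selected, float(residual @ residual))
```

Each deflation step can only shrink the residual sum of squares, so the fit improves monotonically as terms are added.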
Abstract:
A modified radial basis function (RBF) neural network and its identification algorithm, based on observational data with heterogeneous noise, are introduced. The Box-Cox transformed system output is represented by the RBF neural network. To identify the model from observational data, the singular value decomposition of the full regression matrix, consisting of basis functions formed from the system input data, is first carried out; a new fast identification method is then developed, using a Gauss-Newton algorithm to derive the required Box-Cox transformation based on a maximum likelihood estimator (MLE) for a model base spanned by the largest eigenvectors. Finally, the Box-Cox transformation-based RBF neural network is identified using the derived optimal Box-Cox transformation and an orthogonal forward regression algorithm with a pseudo-PRESS statistic, yielding a sparse RBF model with good generalisation. The proposed algorithm and its efficacy are demonstrated with numerical examples.
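The transformation at the heart of the method can be illustrated with a maximum likelihood Box-Cox fit, here via SciPy's built-in estimator rather than the paper's Gauss-Newton/eigenvector procedure. Lognormal data are used because their MLE should sit near λ = 0 (the log transform).

```python
import numpy as np
from scipy import stats

# Right-skewed, strictly positive synthetic data: lognormal, so the
# maximum likelihood Box-Cox parameter should be close to 0.
rng = np.random.default_rng(4)
z = rng.lognormal(0.0, 1.0, 2000)

# lmbda=None makes scipy.stats.boxcox return the MLE of lambda.
transformed, lam = stats.boxcox(z)
print(lam)  # near 0 for lognormal data
```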