35 results for General-method
in CentAUR: Central Archive University of Reading - UK
Abstract:
Recent interest in the validation of general circulation models (GCMs) has been devoted to objective methods. A small number of authors have used the direct synoptic identification of phenomena together with a statistical analysis to perform the objective comparison between various datasets. This paper describes a general method for performing the synoptic identification of phenomena that can be used for an objective analysis of atmospheric, or oceanographic, datasets obtained from numerical models and remote sensing. Methods usually associated with image processing have been used to segment the scene and to identify suitable feature points to represent the phenomena of interest. This is performed for each time level. A technique from dynamic scene analysis is then used to link the feature points to form trajectories. The method is fully automatic and should be applicable to a wide range of geophysical fields. An example is shown of results obtained with this method applied to data from a run of the Universities Global Atmospheric Modelling Project GCM.
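As an illustration of the trajectory-building step, a minimal sketch of linking feature points across time levels by greedy nearest-neighbour association is shown below; the paper's dynamic-scene-analysis technique is more sophisticated, and `max_dist` and the greedy strategy are assumptions made purely for illustration:

```python
import math

def link_trajectories(frames, max_dist=2.0):
    """Greedily link feature points across successive time levels into
    trajectories. `frames` is a list of time levels, each a list of (x, y)
    feature points. A point continues a trajectory if it is the nearest
    unclaimed point within `max_dist` of the trajectory's last position."""
    trajectories = [[p] for p in frames[0]]
    for frame in frames[1:]:
        unclaimed = list(frame)
        for traj in trajectories:
            last = traj[-1]
            best, best_d = None, max_dist
            for p in unclaimed:
                d = math.dist(last, p)
                if d < best_d:
                    best, best_d = p, d
            if best is not None:
                traj.append(best)
                unclaimed.remove(best)
        # points that continue no existing trajectory seed new ones
        trajectories.extend([p] for p in unclaimed)
    return trajectories
```

Applied to feature points extracted at successive time levels, each returned list is one trajectory; unmatched points start new trajectories, so features may appear and disappear during the sequence.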
Abstract:
A simple general route to very stable octacoordinated non-oxovanadium(IV) complexes of the general formula VL2 (where H2L is a tetradentate ONNO donor) is presented. Six such complexes (1-6) are adequately characterized by elemental analysis, mass spectrometry, and various spectroscopic techniques. One of these compounds (1) has been structurally characterized. The molecule has crystallographic 4̄ symmetry and has a dodecahedral structure, existing in the tetragonal space group P4̄n2. The non-oxo character and VL2 stoichiometry of all of the complexes are established from analytical and mass spectrometric data. In addition, the non-oxo character is clearly indicated by the complete absence of the strong ν(V=O) band in the 925-1025 cm⁻¹ region, which is a signature of all oxovanadium species. The complexes are quite stable in open air in the solid state and in solution, a phenomenon rarely observed in non-oxovanadium(IV) or bare vanadium(IV) complexes.
Abstract:
In addition to the Hamiltonian functional itself, non-canonical Hamiltonian dynamical systems generally possess integral invariants known as ‘Casimir functionals’. In the case of the Euler equations for a perfect fluid, the Casimir functionals correspond to the vortex topology, whose invariance derives from the particle-relabelling symmetry of the underlying Lagrangian equations of motion. In a recent paper, Vallis, Carnevale & Young (1989) have presented algorithms for finding steady states of the Euler equations that represent extrema of energy subject to given vortex topology, and are therefore stable. The purpose of this note is to point out a very general method for modifying any Hamiltonian dynamical system into an algorithm that is analogous to those of Vallis et al. in that it will systematically increase or decrease the energy of the system while preserving all of the Casimir invariants. By incorporating momentum into the extremization procedure, the algorithm is able to find steadily-translating as well as steady stable states. The method is applied to a variety of perfect-fluid systems, including Euler flow as well as compressible and incompressible stratified flow.
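The type of modification described can be sketched abstractly; the following is a minimal sketch with α a free relaxation parameter, not necessarily the exact form used in the paper. For a non-canonical system with antisymmetric Poisson operator J, one replaces the dynamics by

```latex
\dot{u} \;=\; J\frac{\delta \mathcal{H}}{\delta u}
\;+\; \alpha\, J\!\left( J\frac{\delta \mathcal{H}}{\delta u} \right).
```

Because J is antisymmetric, the added term gives dℋ/dt = −α‖J δℋ/δu‖², so the energy decreases monotonically for α > 0 (or increases for α < 0), while every Casimir 𝒞, which by definition satisfies J δ𝒞/δu = 0, remains exactly conserved along the modified flow.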
Abstract:
The ever-increasing demand for high image quality requires fast and efficient methods for noise reduction, and the best-known order-statistics filter is the median filter. A method is presented to calculate the median of a set of N W-bit integers in W/B time steps. Blocks operating on B-bit slices are used to find B bits of the median at a time; a novel quantum-like representation allows the median to be computed in an accelerated manner compared with the best-known method (W time steps). The general method allows a variety of designs to be synthesised systematically. A further novel architecture to calculate the median for a moving set of N integers is also discussed.
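For contrast with the W/B-step architecture, the W-time-step baseline (decide one bit of the median per step, most significant bit first) can be sketched in software; this is an illustrative reconstruction of the bit-serial approach, not the paper's hardware design:

```python
def bitwise_median(values, width):
    """Median of an odd-length list of `width`-bit integers, deciding one
    bit per step from the most significant bit down (the W-step baseline)."""
    target = len(values) // 2   # 0-based rank of the median
    prefix = 0                  # bits of the median fixed so far
    rank_below = 0              # values already known to be smaller
    candidates = values
    for bit in range(width - 1, -1, -1):
        ones = [v for v in candidates if (v >> bit) & 1]
        zeros = [v for v in candidates if not (v >> bit) & 1]
        if rank_below + len(zeros) > target:
            candidates = zeros              # median has a 0 in this bit
        else:
            rank_below += len(zeros)
            candidates = ones               # median has a 1 in this bit
            prefix |= 1 << bit
    return prefix
```

Each loop iteration corresponds to one time step of the bit-serial scheme; the architecture in the paper processes B such bits per step.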
Abstract:
The different types of surface intersection which may occur in linear configurations of triatomic molecules are reviewed, particularly with regard to the way in which the degeneracy is split as the molecule bends. The Renner-Teller effect in states of symmetry Π, Δ, Φ, etc., and intersections between Σ and Π, Σ and Δ, and Π and Δ states are discussed. A general method of modelling such intersecting potential surfaces is proposed, as a development of the model previously used by Murrell and Carter and co-workers for single-valued surfaces. Some of the lower energy surfaces of H2O, NH2, O3, C3, and HNO are discussed as examples.
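A common starting point for modelling a pair of intersecting surfaces, more generic than the specific functional forms developed in the paper, is the pair of eigenvalues of a 2×2 symmetric diabatic potential matrix:

```latex
V_{\pm} \;=\; \tfrac{1}{2}\left( V_{11} + V_{22} \right)
\;\pm\; \tfrac{1}{2}\sqrt{\left( V_{11} - V_{22} \right)^{2} + 4\,V_{12}^{2}} .
```

The two sheets touch where V11 = V22 and V12 = 0; in a Renner-Teller intersection the coupling V12 vanishes at linearity and grows with the bending coordinate, which controls how the degeneracy is split as the molecule bends.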
Abstract:
We have developed a general method for multiplexed quantitative proteomics using differential metabolic stable isotope labeling and mass spectrometry. The method was successfully used to study the dynamics of the heat-shock response in Arabidopsis thaliana. A number of known heat-shock proteins were confirmed, and some proteins not previously associated with heat shock were discovered. The method is broadly applicable with stable isotope labeling and allows for high degrees of multiplexing.
Abstract:
This article introduces a new general method for genealogical inference that samples independent genealogical histories using importance sampling (IS) and then samples other parameters with Markov chain Monte Carlo (MCMC). It is then possible to more easily utilize the advantages of importance sampling in a fully Bayesian framework. The method is applied to the problem of estimating recent changes in effective population size from temporally spaced gene frequency data. The method gives the posterior distribution of effective population size at the time of the oldest sample and at the time of the most recent sample, assuming a model of exponential growth or decline during the interval. The effect of changes in number of alleles, number of loci, and sample size on the accuracy of the method is described using test simulations, and it is concluded that these have an approximately equivalent effect. The method is used on three example data sets and problems in interpreting the posterior densities are highlighted and discussed.
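The IS-within-MCMC idea can be sketched on a toy problem; everything below (the Gaussian latent-variable model, the proposal, the prior) is an illustrative assumption, not the genealogical model of the paper. An importance-sampling estimate of the likelihood is plugged into a Metropolis-Hastings sampler over the remaining parameter:

```python
import math, random

def normal_pdf(z, mu, sd):
    return math.exp(-0.5 * ((z - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def is_likelihood(theta, y, m, rng):
    """Importance-sampling estimate of p(y | theta) for the toy model
    x ~ N(theta, 1), y | x ~ N(x, 1), using proposal x ~ N(y, 1)."""
    total = 0.0
    for _ in range(m):
        x = rng.gauss(y, 1.0)
        total += normal_pdf(y, x, 1.0) * normal_pdf(x, theta, 1.0) / normal_pdf(x, y, 1.0)
    return total / m

def is_within_mcmc(y, n_steps=2000, m=50, seed=1):
    """Random-walk Metropolis on theta with the likelihood replaced by a
    fresh IS estimate for each proposal; the estimate attached to the
    current state is retained, as pseudo-marginal validity requires."""
    rng = random.Random(seed)
    theta = 0.0
    like = is_likelihood(theta, y, m, rng)
    chain = []
    for _ in range(n_steps):
        prop = theta + rng.gauss(0.0, 0.5)
        prop_like = is_likelihood(prop, y, m, rng)
        prior_ratio = normal_pdf(prop, 0.0, 10.0) / normal_pdf(theta, 0.0, 10.0)
        if rng.random() < min(1.0, prop_like * prior_ratio / like):
            theta, like = prop, prop_like
        chain.append(theta)
    return chain
```

The independent IS estimates keep the genealogical (here, latent-variable) integration out of the MCMC state, which is what makes it easy to combine the two samplers in a fully Bayesian framework.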
Abstract:
Time/frequency and temporal analyses have been widely used in biomedical signal processing. These methods represent important characteristics of a signal in both the time and frequency domains. In this way, essential features of the signal can be viewed and analysed in order to understand or model the physiological system. Historically, Fourier spectral analysis has provided a general method for examining global energy/frequency distributions. However, an assumption inherent to these methods is the stationarity of the signal. As a result, Fourier methods are not generally appropriate for investigating signals with transient components. This work presents the application of a new signal processing technique, empirical mode decomposition and the Hilbert spectrum, to the analysis of electromyographic signals. The results show that this method may provide not only an increase in spectral resolution but also insight into the underlying process of muscle contraction.
Abstract:
We study boundary value problems for a linear evolution equation with spatial derivatives of arbitrary order, on the domain 0 < x < L, 0 < t < T, with L and T positive finite constants. We present a general method for identifying well-posed problems, as well as for constructing an explicit representation of the solution of such problems. This representation has explicit x and t dependence, and it consists of an integral in the complex k-plane and of a discrete sum. As illustrative examples we solve some two-point boundary value problems for the equations iq_t + q_xx = 0 and q_t + q_xxx = 0.
Abstract:
Improved crop yield forecasts could enable more effective adaptation to climate variability and change. Here, we explore how to combine historical observations of crop yields and weather with climate model simulations to produce crop yield projections for decision-relevant timescales. Firstly, the effects on historical crop yields of improved technology, precipitation and daily maximum temperatures are modelled empirically, accounting for a nonlinear technology trend and interactions between temperature and precipitation, and applied specifically for a case study of maize in France. The relative importance of precipitation variability for maize yields in France has decreased significantly since the 1960s, likely due to increased irrigation. In addition, heat stress is found to be as important for yield as precipitation since around 2000. A significant reduction in maize yield is found for each day with a maximum temperature above 32 °C, in broad agreement with previous estimates. The recent increase in such hot days has likely contributed to the observed yield stagnation. Furthermore, a general method for producing near-term crop yield projections, based on climate model simulations, is developed and utilized. We use projections of future daily maximum temperatures to assess the likely change in yields due to variations in climate. Importantly, we calibrate the climate model projections using observed data to ensure both reliable temperature mean and daily variability characteristics, and demonstrate that these methods work using retrospective predictions. We conclude that, to offset the projected increase in daily maximum temperatures over France, improved technology will need to increase base level yields by 12% to be confident about maintaining current levels of yield for the period 2016-2035; the current rate of yield technology increase is not sufficient to meet this target.
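The day-counting step of such an empirical yield model can be sketched as follows; the 0.5% penalty per hot day used here is a placeholder chosen for illustration, not the coefficient fitted in the paper:

```python
def hot_day_yield_loss(tmax_series, threshold=32.0, loss_per_day=0.005):
    """Fractional yield reduction from days whose maximum temperature
    exceeds `threshold` degrees C. `loss_per_day` (0.5% per hot day here)
    is an illustrative value, not the paper's fitted coefficient."""
    hot_days = sum(1 for t in tmax_series if t > threshold)
    return min(1.0, hot_days * loss_per_day)
```

Counting threshold exceedances day by day, rather than using seasonal mean temperature, is what lets the model capture the disproportionate damage from runs of hot days.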
Abstract:
The Fourier series can be used to describe periodic phenomena such as the one-dimensional crystal wave function. Through the trigonometric treatments in Hückel theory it is shown that Hückel theory is a special case of Fourier series theory; thus, the conjugated π system is in fact a periodic system. This explains why so simple a theory as Hückel theory can be so powerful in organic chemistry: although it only considers the immediate neighboring interactions, it implicitly takes account of the periodicity of the complete picture in which all the interactions are considered. Furthermore, the success of the trigonometric methods in Hückel theory is not accidental, as it is based on the fact that Hückel theory is a specific example of the more general method of Fourier series expansion. It is also valuable for educational purposes to expand a specific approach such as Hückel theory into a more general method such as Fourier series expansion.
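The correspondence can be made explicit with the standard result for a linear polyene of N carbon atoms, whose Hückel orbital coefficients and energies are exactly a finite sine (Fourier) series:

```latex
c_{kj} \;=\; \sqrt{\frac{2}{N+1}}\,\sin\!\frac{jk\pi}{N+1},
\qquad
E_{k} \;=\; \alpha + 2\beta\cos\!\frac{k\pi}{N+1},
\qquad k = 1,\dots,N,
```

while for a cyclic π system E_k = α + 2β cos(2πk/N), which is precisely the discrete Fourier basis of a periodic lattice.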
Abstract:
Mean field models (MFMs) of cortical tissue incorporate salient, average features of neural masses in order to model activity at the population level, thereby linking microscopic physiology to macroscopic observations, e.g., with the electroencephalogram (EEG). One of the common aspects of MFM descriptions is the presence of a high-dimensional parameter space capturing neurobiological attributes deemed relevant to the brain dynamics of interest. We study the physiological parameter space of a MFM of electrocortical activity and discover robust correlations between physiological attributes of the model cortex and its dynamical features. These correlations are revealed by the study of bifurcation plots, which show that the model responses to changes in inhibition belong to two archetypal categories or “families”. After investigating and characterizing them in depth, we discuss their essential differences in terms of four important aspects: power responses with respect to the modeled action of anesthetics, reaction to exogenous stimuli such as thalamic input, and distributions of model parameters and oscillatory repertoires when inhibition is enhanced. Furthermore, while the complexity of sustained periodic orbits differs significantly between families, we are able to show how metamorphoses between the families can be brought about by exogenous stimuli. We here unveil links between measurable physiological attributes of the brain and dynamical patterns that are not accessible by linear methods. They instead emerge when the nonlinear structure of parameter space is partitioned according to bifurcation responses. We call this general method “metabifurcation analysis”. The partitioning cannot be achieved by the investigation of only a small number of parameter sets and is instead the result of an automated bifurcation analysis of a representative sample of 73,454 physiologically admissible parameter sets. 
Our approach generalizes straightforwardly and is well suited to probing the dynamics of other models with large and complex parameter spaces.
Abstract:
Traditional derivations of available potential energy, in a variety of contexts, involve combining some form of mass conservation together with energy conservation. This raises the questions of why such constructions are required in the first place, and whether there is some general method of deriving the available potential energy for an arbitrary fluid system. By appealing to the underlying Hamiltonian structure of geophysical fluid dynamics, it becomes clear why energy conservation is not enough, and why other conservation laws such as mass conservation need to be incorporated in order to construct an invariant, known as the pseudoenergy, that is a positive‐definite functional of disturbance quantities. The available potential energy is just the non‐kinetic part of the pseudoenergy, the construction of which follows a well defined algorithm. Two notable features of the available potential energy defined thereby are first, that it is a locally defined quantity, and second, that it is inherently definable at finite amplitude (though one may of course always take the small‐amplitude limit if this is appropriate). The general theory is made concrete by systematic derivations of available potential energy in a number of different contexts. All the well known expressions are recovered, and some new expressions are obtained. The possibility of generalizing the concept of available potential energy to dynamically stable basic flows (as opposed to statically stable basic states) is also discussed.
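In outline (following the standard pseudoenergy construction; the notation here is generic), for a basic state U one adds to the Hamiltonian a Casimir chosen to cancel the first variation at U:

```latex
\mathcal{A}[u] \;=\; \mathcal{H}[u] - \mathcal{H}[U]
+ \mathcal{C}[u] - \mathcal{C}[U],
\qquad
\left. \delta\big(\mathcal{H} + \mathcal{C}\big) \right|_{U} = 0 .
```

The invariant 𝒜 then vanishes at the basic state, is quadratic to leading order in the disturbance, and is positive-definite when U is a genuine extremum; its non-kinetic part is the available potential energy.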
Abstract:
Weeds tend to aggregate in patches within fields and there is evidence that this is partly owing to variation in soil properties. Because the processes driving soil heterogeneity operate at different scales, the strength of the relationships between soil properties and weed density would also be expected to be scale-dependent. Quantifying these effects of scale on weed patch dynamics is essential to guide the design of discrete sampling protocols for mapping weed distribution. We have developed a general method that uses novel within-field nested sampling and residual maximum likelihood (REML) estimation to explore scale-dependent relationships between weeds and soil properties. We have validated the method using a case study of Alopecurus myosuroides in winter wheat. Using REML, we partitioned the variance and covariance into scale-specific components and estimated the correlations between the weed counts and soil properties at each scale. We used variograms to quantify the spatial structure in the data and to map variables by kriging. Our methodology successfully captured the effect of scale on a number of edaphic drivers of weed patchiness. The overall Pearson correlations between A. myosuroides and soil organic matter and clay content were weak and masked the stronger correlations at >50 m. Knowing how the variance was partitioned across the spatial scales we optimized the sampling design to focus sampling effort at those scales that contributed most to the total variance. The methods have the potential to guide patch spraying of weeds by identifying areas of the field that are vulnerable to weed establishment.
Abstract:
In this paper we consider the problem of time-harmonic acoustic scattering in two dimensions by convex polygons. Standard boundary or finite element methods for acoustic scattering problems have a computational cost that grows at least linearly as a function of the frequency of the incident wave. Here we present a novel Galerkin boundary element method, which uses an approximation space consisting of the products of plane waves with piecewise polynomials supported on a graded mesh, with smaller elements closer to the corners of the polygon. We prove that the best approximation from the approximation space requires a number of degrees of freedom to achieve a prescribed level of accuracy that grows only logarithmically as a function of the frequency. Numerical results demonstrate the same logarithmic dependence on the frequency for the Galerkin method solution. Our boundary element method is a discretization of a well-known second kind combined-layer-potential integral equation. We provide a proof that this equation and its adjoint are well-posed and equivalent to the boundary value problem in a Sobolev space setting for general Lipschitz domains.
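The mesh grading can be sketched simply; node placement of the form x_j = (j/n)^q, with q > 1 clustering nodes toward a corner at x = 0, is a typical polynomial grading (the specific grading exponent used in the paper is not reproduced here):

```python
def graded_mesh(n, q, length=1.0):
    """Mesh of n elements on [0, length], graded toward x = 0:
    node j is placed at length * (j/n)**q, so q > 1 concentrates
    elements near the corner where the solution is singular."""
    return [length * (j / n) ** q for j in range(n + 1)]
```

For example, `graded_mesh(4, 2)` places the nodes at 0, 1/16, 1/4, 9/16 and 1 of the side length; on a polygon one grades each side toward both of its corners in this fashion.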