17 results for Generalized Procrustes Analysis
in CentAUR: Central Archive University of Reading - UK
Abstract:
The semi-distributed, dynamic INCA-N model was used to simulate the behaviour of dissolved inorganic nitrogen (DIN) in two Finnish research catchments. Parameter sensitivity and model structural uncertainty were analysed using generalized sensitivity analysis. The Mustajoki catchment is a forested upstream catchment, while the Savijoki catchment represents intensively cultivated lowlands. In general, there were more influential parameters in Savijoki than in Mustajoki. Model results were sensitive to N-transformation rates, vegetation dynamics, and soil and river hydrology. Values of the sensitive parameters were based on long-term measurements covering both warm and cold years. The highest measured DIN concentrations fell between minimum and maximum values estimated during the uncertainty analysis. The lowest measured concentrations fell outside these bounds, suggesting that some retention processes may be missing from the current model structure. The lowest concentrations occurred mainly during low-flow periods, so effects on total loads were small.
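The generalized sensitivity analysis used here is, in broad outline, a Monte Carlo behavioural/non-behavioural screening of parameter space in the Hornberger-Spear style. The sketch below is a generic illustration of that procedure, not the INCA-N application itself: `run_model`, `toy_model`, the parameter names and the behavioural criterion are all hypothetical placeholders.

```python
import numpy as np
from scipy import stats

def generalized_sensitivity_analysis(run_model, bounds, n_samples=2000, rng=None):
    """Monte Carlo generalized (regionalized) sensitivity analysis.

    Draws parameter sets uniformly from `bounds` (name -> (low, high)), splits
    the runs into behavioural / non-behavioural using the boolean returned by
    `run_model`, and scores each parameter by the two-sample Kolmogorov-Smirnov
    distance between the two groups: the larger the distance, the more that
    parameter controls model behaviour.
    """
    rng = np.random.default_rng(rng)
    names = list(bounds)
    samples = {p: rng.uniform(*bounds[p], n_samples) for p in names}
    behavioural = np.array([
        run_model({p: samples[p][i] for p in names}) for i in range(n_samples)
    ])
    return {p: stats.ks_2samp(samples[p][behavioural],
                              samples[p][~behavioural]).statistic
            for p in names}

# Toy stand-in for a catchment model: a run counts as behavioural when the
# simulated concentration stays within plausible limits (illustrative only).
def toy_model(params):
    conc = params["rate"] * 10.0 / (params["flow"] + 0.1)
    return 0.5 < conc < 5.0

print(generalized_sensitivity_analysis(
    toy_model, {"rate": (0.0, 1.0), "flow": (0.1, 5.0)}, n_samples=500, rng=0))
```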
Abstract:
A new dynamic model of water quality, Q2, has recently been developed, capable of simulating large branched river systems. This paper describes the application of a generalized sensitivity analysis (GSA) to Q2 for single reaches of the River Thames in southern England. Focusing on the simulation of dissolved oxygen (DO), since this may be regarded as a proxy for the overall health of a river, the GSA is used to identify key parameters controlling model behavior and to provide a probabilistic procedure for model calibration. It is shown that, in the River Thames at least, it is more important to obtain high-quality forcing functions than to obtain improved parameter estimates once approximate values have been estimated. Furthermore, there is a need to ensure reasonable simulation of a range of water quality determinands, since a focus only on DO increases predictive uncertainty in the DO simulations. The Q2 model has been applied here to the River Thames, but it has broad utility for evaluating other systems in Europe and around the world.
Abstract:
Fine sediment delivery to and storage in stream channel reaches can disrupt aquatic habitats, impact river hydromorphology, and transfer adsorbed nutrients and pollutants from catchment slopes to the fluvial system. This paper presents a modelling tool for simulating the time-dependent response of the fine sediment system in catchments, using an integrated approach that incorporates both land-phase and in-stream processes of sediment generation, storage and transfer. The performance of the model is demonstrated by applying it to simulate in-stream suspended sediment concentrations in two lowland catchments in southern England, the Enborne and the Lambourn, which exhibit contrasting hydrological and sediment responses due to differences in substrate permeability. The sediment model performs well in the Enborne catchment, where direct runoff events are frequent and peak suspended sediment concentrations can exceed 600 mg l⁻¹. The general trends in the in-stream concentrations in the Lambourn catchment are also reproduced by the model, although the observed concentrations are low (rarely exceeding 50 mg l⁻¹) and the background variability in the concentrations is not fully characterized by the model. Direct runoff events are rare in this highly permeable catchment, resulting in a weak coupling between the sediment delivery system and the catchment hydrology. The generic performance of the model is also assessed using a generalized sensitivity analysis based on the parameter bounds identified in the catchment applications. Results indicate that the hydrological parameters contributing to the sediment response include those controlling (1) the partitioning of runoff between surface and soil zone flows and (2) the fractional loss of direct runoff volume prior to channel delivery. The principal sediment processes controlling model behaviour in the simulations are the transport capacity of direct runoff and the in-stream generation, storage and release of the fine sediment fraction. The in-stream processes appear to be important in maintaining the suspended sediment concentrations during low flows in the River Enborne and throughout much of the year in the River Lambourn. Copyright (c) 2007 John Wiley & Sons, Ltd.
Abstract:
In order to examine metacognitive accuracy (i.e., the relationship between metacognitive judgment and memory performance), researchers often rely on by-participant analysis, where metacognitive accuracy (e.g., resolution, as measured by the gamma coefficient or signal detection measures) is computed for each participant and the computed values are entered into group-level statistical tests such as the t-test. In the current work, we argue that the by-participant analysis, regardless of the accuracy measure used, produces a substantial inflation of Type-1 error rates when a random item effect is present. A mixed-effects model is proposed as a way to effectively address the issue, and our simulation studies examining Type-1 error rates indeed showed superior performance of the mixed-effects model analysis compared with the conventional by-participant analysis. We also present real-data applications to illustrate further strengths of the mixed-effects model analysis. Our findings imply that caution is needed when using the by-participant analysis, and we recommend the mixed-effects model analysis instead.
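For concreteness, a minimal sketch of the by-participant analysis criticised above: a Goodman-Kruskal gamma is computed per participant from item-level judgments and recall accuracy, and the gammas are then entered into a group-level t-test. The simulated data, scales and sample sizes are hypothetical placeholders; the recommended alternative would instead fit a single mixed-effects model with crossed random effects for participants and items.

```python
import numpy as np
from scipy import stats

def gamma_coefficient(judgments, accuracy):
    """Goodman-Kruskal gamma between judgments and accuracy:
    (concordant - discordant) / (concordant + discordant) over all item pairs."""
    conc = disc = 0
    for i in range(len(judgments)):
        for j in range(i + 1, len(judgments)):
            s = np.sign(judgments[i] - judgments[j]) * np.sign(accuracy[i] - accuracy[j])
            if s > 0:
                conc += 1
            elif s < 0:
                disc += 1
    return (conc - disc) / (conc + disc) if (conc + disc) else np.nan

# By-participant analysis: one gamma per participant, then a one-sample t-test.
rng = np.random.default_rng(1)
gammas = []
for _ in range(30):                          # 30 hypothetical participants
    judgment = rng.integers(1, 7, size=40)   # 40 items, 6-point judgment scale
    recalled = rng.binomial(1, 0.5, size=40) # binary recall accuracy
    gammas.append(gamma_coefficient(judgment, recalled))
print(stats.ttest_1samp([g for g in gammas if not np.isnan(g)], 0.0))
```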
Abstract:
This letter presents an accurate delay analysis in prioritised wireless sensor networks (WSN). The analysis is an enhancement of the existing analysis proposed by Choobkar and Dilmaghani, which is only applicable to the case where the lower priority nodes always have packets to send in the empty slots of the higher priority node. The proposed analysis is applicable for any pattern of packet arrival, which includes the general case where the lower priority nodes may or may not have packets to send in the empty slots of the higher priority nodes. Evaluation of both analyses showed that the proposed delay analysis has better accuracy over the full range of loads and provides an excellent match to simulation results.
Abstract:
Pharmacogenetic trials investigate the effect of genotype on treatment response. When there are two or more treatment groups and two or more genetic groups, investigation of gene-treatment interactions is of key interest. However, calculation of the power to detect such interactions is complicated because this depends not only on the treatment effect size within each genetic group, but also on the number of genetic groups, the size of each genetic group, and the type of genetic effect that is both present and tested for. The scale chosen to measure the magnitude of an interaction can also be problematic, especially for the binary case. Elston et al. proposed a test for detecting the presence of gene-treatment interactions for binary responses, and gave appropriate power calculations. This paper shows how the same approach can also be used for normally distributed responses. We also propose a method for analysing and performing sample size calculations based on a generalized linear model (GLM) approach. The power of the Elston et al. and GLM approaches are compared for the binary and normal case using several illustrative examples. While more sensitive to errors in model specification than the Elston et al. approach, the GLM approach is much more flexible and in many cases more powerful. Copyright © 2005 John Wiley & Sons, Ltd.
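As an illustration of the GLM route described above, the sketch below fits a logistic regression with a genotype-by-treatment interaction to simulated binary responses and tests the interaction with a likelihood-ratio test. The trial layout, genotype labels and effect sizes are hypothetical and are not taken from the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(0)
n = 400

# Hypothetical trial: two treatment arms crossed with three genotype groups.
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, n),
    "genotype": rng.choice(["AA", "Aa", "aa"], n),
})
# Simulate a binary response whose treatment effect depends on genotype.
effect = {"AA": 0.2, "Aa": 0.8, "aa": 1.5}
logit = -0.5 + df["treatment"] * df["genotype"].map(effect)
df["response"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

# Full model (gene-by-treatment interaction) vs. main-effects-only model.
full = smf.logit("response ~ treatment * C(genotype)", data=df).fit(disp=False)
reduced = smf.logit("response ~ treatment + C(genotype)", data=df).fit(disp=False)

# Likelihood-ratio test of the interaction terms.
lr = 2.0 * (full.llf - reduced.llf)
print("LR =", lr, "p =", stats.chi2.sf(lr, full.df_model - reduced.df_model))
```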
Abstract:
The determination of the minimum size of a k-neighborhood (i.e., a neighborhood of a set of k nodes) in a given graph is essential in the analysis of diagnosability and fault tolerance of multicomputer systems. The generalized cubes include the hypercube and most hypercube variants as special cases. In this paper, we present a lower bound on the size of a k-neighborhood in n-dimensional generalized cubes, where 2n + 1 ≤ k ≤ 3n - 2. This lower bound is tight in that it is met by the n-dimensional hypercube. Our result is an extension of two previously known results. (c) 2005 Elsevier Inc. All rights reserved.
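The quantity studied above can be checked exhaustively for small cases. The sketch below brute-forces the minimum neighbourhood size over all k-vertex sets of the n-dimensional hypercube (the case in which the bound is tight), taking the neighbourhood of a set S to be the vertices outside S adjacent to at least one vertex of S; this convention and the exhaustive search are illustrative only and do not reflect the paper's proof technique.

```python
from itertools import combinations

def neighbors(v, n):
    """Neighbours of vertex v (a bitmask) in the n-dimensional hypercube."""
    return [v ^ (1 << i) for i in range(n)]

def min_k_neighborhood(n, k):
    """Minimum |N(S)| over all k-vertex sets S of the n-cube, where N(S) is
    the set of vertices outside S adjacent to at least one vertex of S."""
    best = None
    for S in combinations(range(2 ** n), k):
        s = set(S)
        nbhd = {u for v in S for u in neighbors(v, n) if u not in s}
        if best is None or len(nbhd) < best:
            best = len(nbhd)
    return best

if __name__ == "__main__":
    n = 4
    for k in range(2 * n + 1, 3 * n - 1):  # the range studied: 2n+1 <= k <= 3n-2
        print(f"n={n}, k={k}: min |N(S)| = {min_k_neighborhood(n, k)}")
```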
Abstract:
A novel radix-3/9 algorithm for type-III generalized discrete Hartley transform (GDHT) is proposed, which applies to length-3^p sequences. This algorithm is especially efficient in the case that multiplication is much more time-consuming than addition. A comparison analysis shows that the proposed algorithm outperforms a known algorithm when one multiplication is more time-consuming than five additions. When combined with any known radix-2 type-III GDHT algorithm, the new algorithm also applies to length-2^q·3^p sequences.
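For reference, a direct O(N²) evaluation of a type-III GDHT is sketched below, assuming the kernel cas(2πn(k + 1/2)/N) with cas t = cos t + sin t; this is one common indexing convention for the type-III transform and may differ from the one used in the paper. A radix-3/9 algorithm computes the same kind of transform for N = 3^p with far fewer multiplications.

```python
import numpy as np

def gdht3(x):
    """Direct type-III generalized discrete Hartley transform, O(N^2).

    Assumes the kernel cas(2*pi*n*(k + 0.5)/N), cas(t) = cos(t) + sin(t); a
    fast radix-3/9 scheme would reproduce this result for N = 3**p with far
    fewer multiplications.
    """
    x = np.asarray(x, dtype=float)
    N = x.size
    n = np.arange(N)
    k = np.arange(N)[:, None]
    theta = 2.0 * np.pi * n * (k + 0.5) / N
    return (np.cos(theta) + np.sin(theta)) @ x

# Example: transform a length-27 (= 3**3) sequence.
print(gdht3(np.arange(27.0)))
```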
Abstract:
This article introduces generalized beta-generated (GBG) distributions. Sub-models include all classical beta-generated, Kumaraswamy-generated and exponentiated distributions. They are maximum entropy distributions under three intuitive conditions, which show that the classical beta generator skewness parameters only control tail entropy and an additional shape parameter is needed to add entropy to the centre of the parent distribution. This parameter controls skewness without necessarily differentiating tail weights. The GBG class also has tractable properties: we present various expansions for moments, generating function and quantiles. The model parameters are estimated by maximum likelihood and the usefulness of the new class is illustrated by means of some real data sets.
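One way to make the construction concrete: assuming the usual GBG definition in which the cdf is the regularized incomplete beta function evaluated at F(x)^c (so that c = 1 recovers the beta-generated class and a = 1 the Kumaraswamy-generated class), GBG variates can be drawn by a single inverse-transform step. The sketch below is illustrative; the parent distribution and parameter values are arbitrary.

```python
import numpy as np
from scipy import stats

def gbg_rvs(parent, a, b, c, size=1000, rng=None):
    """Sample from a generalized beta-generated (GBG) distribution.

    Assumes the GBG cdf G(x) = I_{F(x)^c}(a, b), i.e. the regularized
    incomplete beta function evaluated at F(x)^c, where F is the parent cdf.
    Then X = F^{-1}(U^{1/c}) with U ~ Beta(a, b) follows the GBG law; c = 1
    gives the beta-generated class and a = 1 the Kumaraswamy-generated class.
    """
    rng = np.random.default_rng(rng)
    u = rng.beta(a, b, size=size)
    return parent.ppf(u ** (1.0 / c))

# Example: a GBG-normal sample with a standard normal parent.
x = gbg_rvs(stats.norm(), a=2.0, b=3.0, c=1.5, size=5000, rng=42)
print(x.mean(), x.std())
```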
Abstract:
We consider the general response theory recently proposed by Ruelle for describing the impact of small perturbations to the non-equilibrium steady states resulting from Axiom A dynamical systems. We show that the causality of the response functions entails the possibility of writing a set of Kramers-Kronig (K-K) relations for the corresponding susceptibilities at all orders of nonlinearity. Nonetheless, only a special class of directly observable susceptibilities obeys K-K relations. Specific results are provided for the case of arbitrary-order harmonic response, which allows for a very comprehensive K-K analysis and the establishment of sum rules connecting the asymptotic behavior of the harmonic generation susceptibility to the short-time response of the perturbed system. These results place previous findings obtained for optical systems and simple mechanical models in a more general theoretical framework, and shed light on the very general impact of considering the principle of causality for testing self-consistency: the described dispersion relations constitute unavoidable benchmarks that any experimental and model-generated dataset must obey. The theory exposed in the present paper is dual to the time-dependent theory of perturbations to equilibrium states and to non-equilibrium steady states, and has in principle a similar range of applicability and limitations. In order to connect the equilibrium and the non-equilibrium steady-state cases, we show how to rewrite the classical response theory by Kubo so that response functions formally identical to those proposed by Ruelle, apart from the measure involved in the phase-space integration, are obtained. These results, taking into account the chaotic hypothesis of Gallavotti and Cohen, might be relevant in several fields, including climate research. In particular, whereas the fluctuation-dissipation theorem does not work for non-equilibrium systems because of the non-equivalence between internal and external fluctuations, K-K relations might be robust tools for the definition of a self-consistent theory of climate change.
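For orientation, the familiar first-order Kramers-Kronig relations for a causal susceptibility χ(ω) are recalled below; the paper establishes analogous dispersion relations and sum rules for the harmonic susceptibilities at all orders of nonlinearity.

```latex
% First-order Kramers-Kronig relations for a causal susceptibility \chi(\omega);
% \mathcal{P} denotes the Cauchy principal value.
\operatorname{Re}\chi(\omega) = \frac{1}{\pi}\,\mathcal{P}\!\int_{-\infty}^{\infty}
  \frac{\operatorname{Im}\chi(\omega')}{\omega' - \omega}\,d\omega' ,
\qquad
\operatorname{Im}\chi(\omega) = -\frac{1}{\pi}\,\mathcal{P}\!\int_{-\infty}^{\infty}
  \frac{\operatorname{Re}\chi(\omega')}{\omega' - \omega}\,d\omega' .
```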
Abstract:
By eliminating the short range negative divergence of the Debye–Hückel pair distribution function, but retaining the exponential charge screening known to operate at large interparticle separation, the thermodynamic properties of one-component plasmas of point ions or charged hard spheres can be well represented even in the strong coupling regime. Predicted electrostatic free energies agree within 5% of simulation data for typical Coulomb interactions up to a factor of 10 times the average kinetic energy. Here, this idea is extended to the general case of a uniform ionic mixture, comprising an arbitrary number of components, embedded in a rigid neutralizing background. The new theory is implemented in two ways: (i) by an unambiguous iterative algorithm that requires numerical methods and breaks the symmetry of cross correlation functions; and (ii) by invoking generalized matrix inverses that maintain symmetry and yield completely analytic solutions, but which are not uniquely determined. The extreme computational simplicity of the theory is attractive when considering applications to complex inhomogeneous fluids of charged particles.
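The short-range defect referred to above is visible in the linearized Debye–Hückel pair distribution, recalled here in Gaussian units as a reminder of the form being modified (symbols: β = 1/kT, q_i the ionic charges, n_k the number densities, κ the inverse screening length).

```latex
% Linearized Debye-Hueckel pair distribution for species i and j (Gaussian units):
% exponentially screened at large r, but divergent (negative for like charges)
% as r -> 0, which is the defect removed in the theory described above.
g_{ij}(r) \simeq 1 - \frac{\beta\, q_i q_j}{r}\, e^{-\kappa r},
\qquad
\kappa^{2} = 4\pi\beta \sum_{k} n_k q_k^{2} .
```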
Abstract:
Gamow's explanation of the exponential decay law uses complex 'eigenvalues' and exponentially growing 'eigenfunctions'. This raises the question of how Gamow's description fits into the quantum mechanical description of nature, which is based on real eigenvalues and square-integrable wavefunctions. Observing that the time evolution of any wavefunction is given by its expansion in generalized eigenfunctions, we shall answer this question in the most straightforward manner, which at the same time is accessible to graduate students and specialists. Moreover, the presentation can readily be used in physics lectures.
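The essential step in Gamow's description, recalled here for orientation: a decaying state is assigned a complex 'eigenvalue' whose imaginary part yields the exponential decay law (standard textbook notation, with Γ the decay width and τ the lifetime).

```latex
% Gamow's complex "eigenvalue" and the resulting exponential decay law:
E = E_R - \tfrac{i}{2}\Gamma ,
\qquad
\psi(t) = e^{-iEt/\hbar}\,\psi(0)
\;\Longrightarrow\;
|\psi(t)|^{2} = e^{-\Gamma t/\hbar}\,|\psi(0)|^{2},
\qquad \tau = \hbar/\Gamma .
```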
Abstract:
We consider a generic basic semi-algebraic subset S of the space of generalized functions, that is a set given by (not necessarily countably many) polynomial constraints. We derive necessary and sufficient conditions for an infinite sequence of generalized functions to be realizable on S, namely to be the moment sequence of a finite measure concentrated on S. Our approach combines the classical results about the moment problem on nuclear spaces with the techniques recently developed to treat the moment problem on basic semi-algebraic sets of Rd. In this way, we determine realizability conditions that can be more easily verified than the well-known Haviland type conditions. Our result completely characterizes the support of the realizing measure in terms of its moments. As concrete examples of semi-algebraic sets of generalized functions, we consider the set of all Radon measures and the set of all the measures having bounded Radon–Nikodym density w.r.t. the Lebesgue measure.
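For readers unfamiliar with the terminology, realizability is used here in the standard moment-problem sense; in the usual formulation (notation illustrative, with test functions f_i and the space of generalized functions written as D'), the n-th moment of a finite measure μ concentrated on S is the functional

```latex
% n-th moment of a finite measure \mu concentrated on S \subseteq \mathcal{D}':
\langle m^{(n)}, f_1 \otimes \cdots \otimes f_n \rangle
  = \int_{S} \langle \omega, f_1 \rangle \cdots \langle \omega, f_n \rangle \, d\mu(\omega) ,
```

and a sequence (m^{(n)}) is realizable on S precisely when some such μ reproduces it.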