85 results for POLYNOMIAL CHAOS
Abstract:
None of the current surveillance streams monitoring the presence of scrapie in Great Britain provides a comprehensive and unbiased estimate of the prevalence of the disease at the holding level. Previous work to estimate the under-ascertainment-adjusted prevalence of scrapie in Great Britain applied multiple-list capture–recapture methods. The enforcement of new control measures on scrapie-affected holdings in 2004 has stopped the overlap between surveillance sources and, hence, the application of multiple-list capture–recapture models. Alternative methods, still within the capture–recapture methodology, relying on repeated entries in one single list have been suggested for these situations. In this article, we apply one-list capture–recapture approaches to data held on the Scrapie Notifications Database to estimate the undetected population of scrapie-affected holdings with clinical disease in Great Britain for the years 2002, 2003, and 2004. To do so, we develop a new diagnostic tool for indicating heterogeneity, as well as a new understanding of the Zelterman and Chao lower-bound estimators that accounts for potential unobserved heterogeneity. We demonstrate that the Zelterman estimator can be viewed as a maximum likelihood estimator for a special, locally truncated Poisson likelihood equivalent to a binomial likelihood. This understanding allows the extension of the Zelterman approach by means of logistic regression to include observed heterogeneity in the form of covariates: in the case studied here, holding size and country of origin. Our results confirm the presence of substantial unobserved heterogeneity, supporting the application of our two estimators. The total scrapie-affected holding population in Great Britain is around 300 holdings per year. None of the covariates appears to inform the model significantly.
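The two one-list estimators named in this abstract have simple closed forms built from the frequency-of-frequencies counts (f1 units seen exactly once, f2 seen exactly twice). A minimal sketch, using illustrative counts rather than the Scrapie Notifications Database figures:

```python
import math

def chao_lower_bound(f1, f2, n_obs):
    """Chao's lower-bound estimate of total population size:
    N_hat = n + f1**2 / (2*f2), where n is the number of distinct
    units observed at least once."""
    return n_obs + f1 * f1 / (2.0 * f2)

def zelterman(f1, f2, n_obs):
    """Zelterman's estimator: a locally estimated Poisson rate
    lambda_hat = 2*f2/f1, then N_hat = n / (1 - exp(-lambda_hat))."""
    lam = 2.0 * f2 / f1
    return n_obs / (1.0 - math.exp(-lam))

# Illustrative counts only (not data from the paper):
f1, f2, n = 50, 20, 90
print(chao_lower_bound(f1, f2, n))
print(zelterman(f1, f2, n))
```

Both estimators rely only on the low counts (f1, f2), which is what makes them robust to the unobserved heterogeneity the abstract discusses.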
Abstract:
A key concern for conservation biologists is whether populations of plants and animals are likely to fluctuate widely in number or remain relatively stable around some steady-state value. In our study of 634 populations of mammals, birds, fish and insects, we find that most can be expected to remain stable despite year-to-year fluctuations caused by environmental factors. Mean return rates were generally around one, but were higher in insects (1.09 +/- 0.02 SE) and declined with body size in mammals. In general, this is good news for conservation, as stable populations are less likely to go extinct. However, the lower return rates of the large mammals may make them more vulnerable to extinction. Our estimates of return rates were generally well below the threshold for chaos, which makes it unlikely that chaotic dynamics occur in natural populations, addressing one of ecology's key unanswered questions.
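The link between return rate and stability can be illustrated with the Ricker map, a standard single-species model (our illustration for this listing; the study itself analysed empirical time series):

```python
import math

def ricker(n, r, k=1.0):
    """One step of the Ricker map: N_{t+1} = N_t * exp(r * (1 - N_t/K))."""
    return n * math.exp(r * (1.0 - n / k))

def final_state(n0, r, steps=200):
    """Iterate the map and return the final population size."""
    n = n0
    for _ in range(steps):
        n = ricker(n, r)
    return n

# A modest return rate (r = 0.5) settles on the steady state K = 1,
# matching the picture of stability for most populations.
stable = final_state(0.5, r=0.5)
# Well above the chaos threshold (about 2.69 for this map), the same
# population fluctuates irregularly and never settles.
chaotic = final_state(0.5, r=3.0)
print(stable, chaotic)
```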
Abstract:
We have studied growth and estimated recruitment of massive coral colonies at three sites, Kaledupa, Hoga and Sampela, separated by about 1.5 km in the Wakatobi Marine National Park, S.E. Sulawesi, Indonesia. There was significantly higher species richness (P<0.05), coral cover (P<0.05) and rugosity (P<0.01) at Kaledupa than at Sampela. A model for coral reef growth has been developed based on a rational polynomial function, where dW/dt is an index of coral growth with time; W is the variable (for example, coral weight, coral length or coral area), raised up to the power n in the numerator and m in the denominator; and a1…an and b1…bm are constants. The values of n and m represent the degree of the polynomial and can relate to the morphology of the coral. The model was used to simulate typical coral growth curves, and was tested using published data obtained by weighing coral colonies underwater in reefs on the south-west coast of Curaçao [Neth. J. Sea Res. 10 (1976) 285]. The model proved an accurate fit to the data, and parameters were obtained for a number of coral species. Surface area data were obtained on over 1200 massive corals at the three sites. The year of an individual's recruitment was calculated from knowledge of the growth rate, modified by application of the rational polynomial model. The estimated pattern of recruitment was variable, with few massive corals settling and growing before 1950 at the heavily used site, Sampela, relative to the reef site with little or no human use, Kaledupa, and the intermediate site, Hoga. There was a significantly greater sedimentation rate at Sampela than at either Kaledupa (P<0.0001) or Hoga (P<0.0005).
The relative mean abundance of fish families present at the reef crests at the three sites, determined using digital video photography, did not correlate with sedimentation rates, underwater visibility or the lack of large non-branching coral colonies. Radial growth rates of three genera of non-branching corals were significantly lower at Sampela than at Kaledupa or Hoga, and there was a high correlation (r=0.89) between radial growth rates and underwater visibility. Porites spp. were the most abundant corals over all the sites and at all depths, followed by Favites (P<0.04) and Favia spp. (P<0.03). Colony ages of Porites corals were significantly lower at the 5 m reef flat on the Sampela reef than at the same depth on the other two reefs (P<0.005). At Sampela, only 2.8% of corals on the 5 m reef crest are of a size to have survived from before 1950. The scleractinian coral community of Sampela is severely impacted by deposited sediments, which can smother corals while also decreasing light penetration, resulting in reduced growth and calcification rates. The net loss of material from Sampela, if not checked, could result in the loss of this protective barrier, to the detriment of the sublittoral sand flats and hence the Sampela village.
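One plausible reading of the rational-polynomial growth law described in this abstract (the exact form is not given there, so the numerator, denominator and coefficients below are assumptions) can be integrated with a simple forward-Euler scheme:

```python
def rational_poly_rate(w, a, b):
    """dW/dt as a rational polynomial in W (an assumed form):
    (a[0]*W + a[1]*W**2 + ...) / (1 + b[0]*W + b[1]*W**2 + ...)."""
    num = sum(ai * w ** (i + 1) for i, ai in enumerate(a))
    den = 1.0 + sum(bj * w ** (j + 1) for j, bj in enumerate(b))
    return num / den

def grow(w0, a, b, dt=0.01, steps=5000):
    """Forward-Euler integration of the growth law from size w0."""
    w = w0
    for _ in range(steps):
        w += dt * rational_poly_rate(w, a, b)
    return w

# Illustrative coefficients: the numerator W - 0.1*W**2 gives a carrying
# capacity of 10, and the denominator 1 + 0.5*W slows the approach,
# producing the decelerating growth curve typical of massive corals.
print(grow(0.1, a=[1.0, -0.1], b=[0.5]))
```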
Abstract:
This article is the second part of a review of the historical evolution of mathematical models applied in the development of building technology. The first part described the current state of the art and contrasted various models with regard to their applications to conventional and intelligent buildings. It concluded that the mathematical techniques adopted in neural networks, expert systems, fuzzy logic and genetic models, which can be used to address model uncertainty, are well suited to modelling intelligent buildings. Despite this progress, the likely future development of intelligent buildings based on current trends implies some potential limitations of these models. This paper attempts to uncover the fundamental limitations inherent in these models and provides some insights into future modelling directions, with special focus on the techniques of semiotics and chaos. Finally, by presenting an example of an intelligent building system together with the mathematical models that have been developed for it, this review addresses the influence of mathematical models as a potential aid in developing intelligent buildings and perhaps even more advanced buildings of the future.
Abstract:
The aim of this study was to evaluate the survivability of Bifidobacterium breve NCIMB 702257 in three malt-based media supplemented with cysteine and yeast extract, and to determine the protective effect of these growth factors. A number of parameterised mathematical models were used to predict the kinetics of viability and total acidity during storage at different temperatures. The results demonstrated a good fit between the experimental data and the mathematical models. The Arrhenius equations showed only reasonable fits, and the polynomial plots contained a large area without data between 4 and 25 degrees C. In addition, it was shown that cysteine promotes growth and acid production by bifidobacteria, but does not extend survivability. On the other hand, increasing the yeast extract content of the fermentation media enhances the survivability of B. breve. To our knowledge, this is the first study to address the modelling of the survivability of probiotic bacteria in a cereal-based fermentation medium at different temperatures, introducing a more quantitative approach to the study of the shelf-life of a probiotic product. (C) 2009 Elsevier B.V. All rights reserved.
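The Arrhenius component of such models relates a first-order viability-loss rate constant to absolute temperature, k = A exp(-Ea/RT). A hedged sketch with invented parameters (not the fitted values for B. breve):

```python
import math

R = 8.314  # universal gas constant, J mol^-1 K^-1

def arrhenius_rate(temp_c, a_factor, ea):
    """First-order viability-loss rate constant k = A * exp(-Ea/(R*T))."""
    t_kelvin = temp_c + 273.15
    return a_factor * math.exp(-ea / (R * t_kelvin))

def shelf_life_days(temp_c, a_factor, ea):
    """Days until the viable count falls by one log10 cycle, assuming
    first-order kinetics N(t) = N0 * exp(-k*t)."""
    k = arrhenius_rate(temp_c, a_factor, ea)  # per day
    return math.log(10.0) / k

# Invented parameters, chosen only to illustrate the temperature
# sensitivity an Arrhenius plot quantifies:
A_FACTOR, EA = 1e12, 8.0e4  # per day; J/mol
print(shelf_life_days(4, A_FACTOR, EA))   # refrigerated storage
print(shelf_life_days(25, A_FACTOR, EA))  # ambient storage
```

The gap the abstract notes between 4 and 25 degrees C matters precisely because the rate is exponential in 1/T, so interpolation across that range is poorly constrained without data.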
Abstract:
Inverse problems for dynamical system models of cognitive processes comprise the determination of synaptic weight matrices or kernel functions for neural networks or neural/dynamic field models, respectively. We introduce dynamic cognitive modeling as a three-tier top-down approach in which cognitive processes are first described as algorithms that operate on complex symbolic data structures. Second, symbolic expressions and operations are represented by states and transformations in abstract vector spaces. Third, prescribed trajectories through representation space are implemented in neurodynamical systems. We discuss the Amari equation for a neural/dynamic field theory as a special case and show that the kernel construction problem is particularly ill-posed. We suggest a Tikhonov-Hebbian learning method as a regularization technique and demonstrate its validity and robustness for basic examples of cognitive computations.
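The effect of Tikhonov regularization on an ill-posed learning problem can be caricatured in one dimension (a deliberately simplified sketch, not the paper's neural-field kernel construction):

```python
def tikhonov_hebbian_weight(xs, ys, alpha):
    """Scalar Hebbian/least-squares weight with Tikhonov penalty alpha:
    w = sum(x*y) / (sum(x*x) + alpha).  alpha = 0 is the unregularized
    case, which blows up when the inputs barely vary."""
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + alpha)

# An ill-posed case: inputs are almost zero, so the unregularized weight
# is huge and noise-dominated; a small alpha keeps it bounded.
xs = [1e-6, -1e-6, 2e-6]
ys = [0.1, -0.1, 0.2]
w_plain = tikhonov_hebbian_weight(xs, ys, alpha=0.0)
w_reg = tikhonov_hebbian_weight(xs, ys, alpha=1e-3)
print(w_plain, w_reg)
```

The same trade-off, applied to kernel functions rather than a scalar, is what makes the Tikhonov term a regularizer for the ill-posed construction problem.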
Abstract:
We present the symbolic resonance analysis (SRA) as a viable method for addressing the problem of enhancing a weakly dominant mode in a mixture of impulse responses obtained from a nonlinear dynamical system. We demonstrate this using results from a numerical simulation with Duffing oscillators in different domains of their parameter space, and by analyzing event-related brain potentials (ERPs) from a language processing experiment in German as a representative application. In this paradigm, the averaged ERPs exhibit an N400 followed by a sentence final negativity. Contemporary sentence processing models predict a late positivity (P600) as well. We show that the SRA is able to unveil the P600 evoked by the critical stimuli as a weakly dominant mode from the covering sentence final negativity. (c) 2007 American Institute of Physics.
Abstract:
The emergence of mental states from neural states by partitioning the neural phase space is analyzed in terms of symbolic dynamics. Well-defined mental states provide contexts inducing a criterion of structural stability for the neurodynamics that can be implemented by particular partitions. This leads to distinguished subshifts of finite type that are either cyclic or irreducible. Cyclic shifts correspond to asymptotically stable fixed points or limit tori, whereas irreducible shifts are obtained from generating partitions of mixing hyperbolic systems. These stability criteria are applied to the discussion of neural correlates of consciousness, to the definition of macroscopic neural states, and to aspects of the symbol grounding problem. In particular, it is shown that compatible mental descriptions, topologically equivalent to the neurodynamical description, emerge if the partition of the neural phase space is generating. If this is not the case, mental descriptions are incompatible or complementary. Consequences of this result for an integration or unification of cognitive science or psychology, respectively, will be indicated.
Abstract:
We model the large-scale fading of wireless THz communications links deployed in a metropolitan area, taking into account reception through direct line of sight, ground or wall reflection, and diffraction. The movement of the receiver in three dimensions is modelled by an autonomous dynamic linear system in state space, whereas the geometric relations involved in the attenuation and multi-path propagation of the electric field are described by a static non-linear mapping. A subspace algorithm in conjunction with polynomial regression is used to identify a Wiener model from time-domain measurements of the field intensity.
Abstract:
We discuss the feasibility of wireless terahertz communications links deployed in a metropolitan area and model the large-scale fading of such channels. The model takes into account reception through direct line of sight, ground and wall reflection, as well as diffraction around a corner. The movement of the receiver is modeled by an autonomous dynamic linear system in state space, whereas the geometric relations involved in the attenuation and multipath propagation of the electric field are described by a static nonlinear mapping. A subspace algorithm in conjunction with polynomial regression is used to identify a single-output Wiener model from time-domain measurements of the field intensity when the receiver motion is simulated using a constant angular speed and an exponentially decaying radius. The identification procedure is validated by using the model to perform q-step-ahead predictions. The sensitivity of the algorithm to small-scale fading, detector noise, and atmospheric changes is discussed. The performance of the algorithm is tested in the diffraction zone assuming a range of emitter frequencies (2, 38, 60, 100, 140, and 400 GHz). Extensions of the simulation results to situations where a more complicated trajectory describes the motion of the receiver are also implemented, providing information on the performance of the algorithm under a worst-case scenario. Finally, a sensitivity analysis to model parameters for the identified Wiener system is proposed.
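A Wiener model is a linear dynamic system followed by a static nonlinearity. The sketch below illustrates only the polynomial-regression stage of such an identification, with the linear state assumed known; the subspace identification of the linear part, and every name and coefficient here, are simplified assumptions rather than the paper's algorithm:

```python
def solve(a_mat, b_vec):
    """Solve a small dense linear system by Gaussian elimination with
    partial pivoting."""
    n = len(a_mat)
    m = [row[:] + [bv] for row, bv in zip(a_mat, b_vec)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def polyfit(v, y, degree):
    """Least-squares polynomial regression via the normal equations."""
    a_mat = [[sum(vi ** (i + j) for vi in v) for j in range(degree + 1)]
             for i in range(degree + 1)]
    b_vec = [sum(yi * vi ** i for vi, yi in zip(v, y))
             for i in range(degree + 1)]
    return solve(a_mat, b_vec)

# Wiener structure: a linear filter followed by a static nonlinearity.
# Linear part (assumed identified): one-pole low-pass x_{k+1} = 0.9*x_k + u_k.
# Static nonlinearity (unknown to the identifier): f(x) = 1 + 0.5*x**2.
u = [((k * 37) % 11 - 5) / 5.0 for k in range(200)]  # deterministic excitation
x, xs = 0.0, []
for uk in u:
    x = 0.9 * x + uk
    xs.append(x)
y = [1.0 + 0.5 * xi ** 2 for xi in xs]

coeffs = polyfit(xs, y, degree=2)  # should recover roughly [1.0, 0.0, 0.5]
print(coeffs)
```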
Abstract:
The large-scale fading of wireless mobile communications links is modelled assuming the mobile receiver's motion is described by a dynamic linear system in state space. The geometric relations involved in the attenuation and multi-path propagation of the electric field are described by a static non-linear mapping. A Wiener-system subspace identification algorithm in conjunction with polynomial regression is used to identify a model from time-domain estimates of the field intensity, assuming a multitude of emitters and an antenna array at the receiver end.
Abstract:
We introduce and describe the Multiple Gravity Assist problem, a global optimisation problem of great interest in the design of spacecraft and their trajectories. We discuss its formalisation and show, for one particular problem instance, the performance of selected state-of-the-art heuristic global optimisation algorithms. A deterministic search-space pruning algorithm is then developed and its polynomial time and space complexity derived. The algorithm is shown to achieve search-space reductions of more than six orders of magnitude, thus significantly reducing the complexity of the subsequent optimisation.
Abstract:
In this paper, we study a class of neural networks with recent-history distributed delays. A sufficient condition is derived for the global exponential periodicity of the proposed neural networks, which has the advantage that it assumes neither the differentiability nor the monotonicity of the activation function of each neuron, nor the symmetry of the feedback matrix or the delayed feedback matrix. Our criterion is shown to be valid by applying it to an illustrative system. (c) 2005 Elsevier Ltd. All rights reserved.
Abstract:
In this paper the meteorological processes responsible for transporting tracer during the second ETEX (European Tracer EXperiment) release are determined using the UK Met Office Unified Model (UM). The UM-predicted distribution of tracer is also compared with observations from the ETEX campaign. The dominant meteorological process is a warm conveyor belt which transports large amounts of tracer away from the surface up to a height of 4 km over a 36 h period. Convection is also an important process, transporting tracer to heights of up to 8 km. Potential sources of error when using an operational numerical weather prediction model to forecast air quality are also investigated. These potential sources of error include model dynamics, model resolution and model physics. In the UM a semi-Lagrangian monotonic advection scheme is used with cubic polynomial interpolation. This can predict unrealistic negative values of tracer which are subsequently set to zero, and hence results in an overprediction of tracer concentrations. In order to conserve mass in the UM tracer simulations it was necessary to include a flux-corrected transport method. Model resolution can also affect the accuracy of predicted tracer distributions. Low-resolution simulations (50 km grid length) were unable to resolve a change in wind direction observed during ETEX 2; this led to an error in the transport direction and hence an error in tracer distribution. High-resolution simulations (12 km grid length) captured the change in wind direction and hence produced a tracer distribution that compared better with the observations. The representation of convective mixing was found to have a large effect on the vertical transport of tracer. Turning off the convective mixing parameterisation in the UM significantly reduced the vertical transport of tracer. Finally, air quality forecasts were found to be sensitive to the timing of synoptic scale features.
Errors in the position of the cold front relative to the tracer release location of only 1 h resulted in changes in the predicted tracer concentrations that were of the same order of magnitude as the absolute tracer concentrations.
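The negative-tracer artefact of cubic interpolation described above is easy to reproduce in one dimension: interpolating a sharp, non-negative front can undershoot below zero, and zeroing the undershoot adds mass. A minimal illustration (not the UM's actual scheme):

```python
def lagrange_cubic(xs, ys, xq):
    """Cubic Lagrange interpolation through four points, evaluated at xq."""
    total = 0.0
    for i in range(4):
        li = 1.0
        for j in range(4):
            if i != j:
                li *= (xq - xs[j]) / (xs[i] - xs[j])
        total += ys[i] * li
    return total

# A sharp, non-negative tracer front sampled on a unit grid:
grid = [0.0, 1.0, 2.0, 3.0]
tracer = [0.0, 0.0, 1.0, 1.0]
value = lagrange_cubic(grid, tracer, 0.5)  # undershoots to -0.25
clipped = max(value, 0.0)  # "set to zero", as described in the abstract ...
print(value, clipped)
# ... which adds mass, motivating the flux-corrected transport step.
```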
Abstract:
We consider problems of splitting and connectivity augmentation in hypergraphs. In a hypergraph G = (V + s, E), to split two edges su, sv is to replace them with a single edge uv. We are interested in doing this in such a way as to preserve a defined level of connectivity in V. The splitting technique is often used as a way of adding new edges into a graph or hypergraph, so as to augment the connectivity to some prescribed level. We begin by providing a short history of work done in this area. Then several preliminary results are given in a general form so that they may be used to tackle several problems. We then analyse the hypergraphs G = (V + s, E) for which there is no split preserving the local-edge-connectivity present in V. We provide two structural theorems, one of which implies a slight extension to Mader's classical splitting theorem. We also provide a characterisation of the hypergraphs for which there is no such "good" split and a splitting result concerned with a specialisation of the local-connectivity function. We then use our splitting results to provide an upper bound on the smallest number of size-two edges we must add to any given hypergraph to ensure that in the resulting hypergraph we have λ(x, y) ≥ r(x, y) for all x, y in V, where r is an integer-valued, symmetric requirement function on V × V. This is the so-called "local-edge-connectivity augmentation problem" for hypergraphs. We also provide an extension to a theorem of Szigeti, about augmenting to satisfy a requirement r, but using hyperedges. Next, in a result born of collaborative work with Zoltán Király from Budapest, we show that the local-connectivity augmentation problem is NP-complete for hypergraphs. Lastly we concern ourselves with an augmentation problem that includes a locational constraint.
The premise is that we are given a hypergraph H = (V, E) with a bipartition P = {P1, P2} of V and asked to augment it with size-two edges, so that the result is k-edge-connected and has no new edge contained in either P1 or P2. We consider the splitting technique and describe the obstacles that prevent us from forming "good" splits. From this we deduce results about which hypergraphs have a complete Pk-split. This leads to a minimax result on the optimal number of edges required and a polynomial algorithm to provide an optimal augmentation.
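The basic splitting operation is simple to state in code; a toy sketch for size-two edges incident to s (the hard part, checking that a split preserves the required connectivity, is omitted):

```python
def split_off(edges, s, u, v):
    """Replace edges {s,u} and {s,v} with the single edge {u,v}.
    Edges are frozensets; raises ValueError if either edge is absent."""
    edges = list(edges)
    edges.remove(frozenset({s, u}))
    edges.remove(frozenset({s, v}))
    edges.append(frozenset({u, v}))
    return edges

# s has four incident size-two edges; two splits detach s entirely while
# leaving its former neighbours connected to each other by the new edges.
E = [frozenset(e) for e in ({"s", "a"}, {"s", "b"}, {"s", "c"}, {"s", "d"})]
E = split_off(E, "s", "a", "b")
E = split_off(E, "s", "c", "d")
print(sorted(tuple(sorted(e)) for e in E))
```

Choosing which pairs (u, v) to split so that local edge-connectivity in V is preserved is exactly what the structural theorems above characterise.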