886 results for Finite-time stochastic stability


Relevance:

30.00%

Publisher:

Abstract:

This paper proposes a new reconstruction method for diffuse optical tomography using reduced-order models of light transport in tissue. The models, which directly map optical tissue parameters to optical flux measurements at the detector locations, are derived from data generated by numerical simulation of a reference model. The reconstruction algorithm based on the reduced-order models is a few orders of magnitude faster than one based on a finite element approximation on a fine mesh incorporating a priori anatomical information acquired by magnetic resonance imaging. We demonstrate the accuracy and speed of the approach using a phantom experiment and through numerical simulation of brain activation in a rat's head. The applicability of the approach for real-time monitoring of brain hemodynamics is demonstrated through a hypercapnic experiment. We show that our results agree with the expected physiological changes and with the results of a similar experimental study. By using our approach, however, a three-dimensional tomographic reconstruction can be performed in ∼3 s per time point instead of the 1 to 2 h required by the conventional finite element modeling approach.
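As a rough illustration of the reduced-order idea described above, the sketch below builds a cheap surrogate from simulations of a reference forward model and inverts it, assuming a POD basis plus a linear parameter-to-coefficient map. The forward model `simulate_reference`, the dimensions and all parameter values are hypothetical stand-ins, not the paper's method.

```python
import numpy as np

# Hypothetical reduced-order reconstruction sketch (illustrative only).
rng = np.random.default_rng(0)
n_params, n_meas, n_train = 50, 200, 300

def simulate_reference(mu):
    # Stand-in for an expensive finite-element light-transport solve.
    A = np.sin(0.01 * np.outer(np.arange(n_meas), np.arange(n_params)))
    return A @ mu + 0.05 * (A @ mu) ** 2

M = rng.uniform(0.5, 1.5, size=(n_train, n_params))   # sampled tissue parameters
Y = np.stack([simulate_reference(mu) for mu in M])    # simulated detector fluxes

# Reduced-order surrogate: POD basis for the measurement space plus a
# least-squares linear map from parameters to POD coefficients.
Y0 = Y.mean(axis=0)
_, _, Vt = np.linalg.svd(Y - Y0, full_matrices=False)
B = Vt[:10].T                                         # POD basis, n_meas x 10
C = (Y - Y0) @ B                                      # training coefficients
W, *_ = np.linalg.lstsq(M, C, rcond=None)             # fits M @ W ~ C

# Reconstruction: invert the cheap surrogate instead of the full model.
y_new = simulate_reference(np.full(n_params, 1.2))    # "measured" data
c_new = (y_new - Y0) @ B
mu_hat, *_ = np.linalg.lstsq(W.T, c_new, rcond=None)  # minimum-norm estimate
print("first recovered parameters:", mu_hat[:5])
```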

Relevance:

30.00%

Publisher:

Abstract:

Energy storage is a potential alternative to conventional network reinforcement of the low voltage (LV) distribution network to ensure the grid's infrastructure remains within its operating constraints. This paper presents a study on the control of such storage devices, owned by distribution network operators. A deterministic model predictive control (MPC) controller and a stochastic receding horizon controller (SRHC) are presented, where the objective is to achieve the greatest peak reduction in demand, for a given storage device specification, taking into account the high level of uncertainty in the prediction of LV demand. The algorithms presented in this paper are compared to a standard set-point controller and benchmarked against a control algorithm with a perfect forecast. A specific case study, using storage on the LV network, is presented, and the results of each algorithm are compared. A comprehensive analysis is then carried out simulating a large number of LV networks of varying numbers of households. The results show that the performance of each algorithm is dependent on the number of aggregated households. However, on a typical aggregation, the novel SRHC algorithm presented in this paper is shown to outperform each of the comparable storage control techniques.
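A minimal sketch of the deterministic MPC ingredient, posed as the linear program "minimise the peak import subject to battery power and energy limits". The demand profile, battery parameters and one-hour time step are invented for illustration; the stochastic (SRHC) part is omitted, and a receding-horizon controller would apply only the first decision before re-solving.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative peak-shaving LP over one horizon with a perfect forecast.
T = 24
d = 2.0 + 1.5 * np.exp(-0.5 * ((np.arange(T) - 18) / 2.0) ** 2)  # evening peak (kW)
P_max, E_max, e0 = 1.0, 4.0, 2.0       # power limit (kW), capacity / initial (kWh)

# Variables x = [p_0..p_{T-1}, z]: p_t = battery power (+ = charging),
# z = peak import.  Minimise z subject to d_t + p_t <= z and SOC limits.
c = np.r_[np.zeros(T), 1.0]
L = np.tril(np.ones((T, T)))           # cumulative-energy operator (dt = 1 h)
A_ub = np.block([[np.eye(T), -np.ones((T, 1))],   # d_t + p_t - z <= 0
                 [L,          np.zeros((T, 1))],  # e0 + cumsum(p) <= E_max
                 [-L,         np.zeros((T, 1))]]) # e0 + cumsum(p) >= 0
b_ub = np.r_[-d, np.full(T, E_max - e0), np.full(T, e0)]
bounds = [(-P_max, P_max)] * T + [(0, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print("peak without storage:", round(d.max(), 3),
      "with storage:", round(res.x[-1], 3))
```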

Relevance:

30.00%

Publisher:

Abstract:

In theory, enrichment of resources in a predator-prey model destabilizes the system, thereby collapsing the trophic interaction, a phenomenon referred to as "the paradox of enrichment". Since it was first proposed by Rosenzweig (1971), a number of studies have addressed this dilemma over many decades. In this article, we review these theoretical and experimental works and give a brief overview of the proposed solutions to the paradox. The mechanisms that have been discussed are modifications of simple predator-prey models in the presence of prey that is inedible, invulnerable, unpalatable or toxic. Another class of mechanisms includes the incorporation of a ratio-dependent functional form, inducible defences of prey and density-dependent mortality of the predator. Moreover, we find a third set of explanations based on complex population dynamics, including chaos in space and time. We conclude that, although any one of the various mechanisms proposed so far might potentially prevent destabilization of the predator-prey dynamics following enrichment, in nature different mechanisms may combine to confer stability even when a system is enriched. The exact mechanisms, which may differ among systems, need to be disentangled through extensive field studies and laboratory experiments coupled with realistic theoretical models.
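The destabilization the paradox refers to is easy to reproduce in the classic Rosenzweig-MacArthur model. The sketch below, with invented parameter values, shows steady coexistence at low carrying capacity giving way to large-amplitude cycles as the system is enriched (carrying capacity K raised).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Rosenzweig-MacArthur predator-prey model: logistic prey growth and a
# Holling type-II functional response.  Parameters are illustrative.
r, a, h, e, m = 1.0, 1.0, 1.0, 0.5, 0.2

def rm(t, y, K):
    N, P = y
    feeding = a * N / (1 + a * h * N)
    return [r * N * (1 - N / K) - feeding * P,
            e * feeding * P - m * P]

for K in (1.5, 4.0, 8.0):              # progressively "enriched" systems
    sol = solve_ivp(rm, (0, 400), [0.3, 0.2], args=(K,), max_step=0.1)
    N_late = sol.y[0, sol.t > 300]
    print(f"K={K}: late-run prey range [{N_late.min():.3f}, {N_late.max():.3f}]")
# Small K: a narrow range (steady coexistence).  Large K: a wide range,
# i.e. sustained limit cycles with deep troughs -- the destabilization.
```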

Relevance:

30.00%

Publisher:

Abstract:

Observational evidence is scarce concerning the distribution of plant pathogen population sizes or densities as a function of time-scale or spatial scale. For wild pathosystems we can only obtain indirect evidence from evolutionary patterns and the consequences of biological invasions. We have little or no evidence bearing on extermination of hosts by pathogens, or on successful escape of a host from a pathogen. Evidence from crops over the last couple of centuries suggests that the abundance of particular pathogens in the spectrum affecting a given host can vary hugely on decadal timescales. However, this may be an artefact of domestication and intensive cultivation. Host-pathogen dynamics can be formulated mathematically fairly easily, for example as SIR-type differential equation or difference equation models, and this has been the (successful) focus of recent work in crops. "Long-term" is then discussed in terms of the time taken to relax from a perturbation to the asymptotic state. However, both host and pathogen dynamics are driven by environmental factors as well as by their mutual interactions, and both host and pathogen co-evolve, and evolve in response to external factors. We have virtually no information about the importance and natural role of higher trophic levels (hyperpathogens) and competitors, but they too could induce long-scale fluctuations in the abundance of pathogens on particular hosts. In wild pathosystems the host distribution cannot be modelled as either a uniform density or a uniform distribution of fields (which could then be treated as individuals). Patterns of short-term density-dependence and the detail of host distribution are therefore critical to long-term dynamics. Host density distributions are not usually scale-free, but are rarely uniform or clearly structured on a single scale. In a (multiply structured) metapopulation with coevolution and external disturbances it could well be the case that the time required to attain equilibrium (if it exists) under conditions stable over a specified time-scale is longer than that time-scale. Alternatively, local equilibria may be reached fairly rapidly following perturbations while the metapopulation equilibrium is attained only very slowly. In either case, meta-stability on various time-scales is more relevant than equilibrium concepts in explaining observed patterns.
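A minimal sketch of the SIR-type formulation mentioned above, with invented rate constants. The crude "relaxation time" at the end corresponds to the abstract's reading of "long-term" as the time taken to relax from a perturbation towards the asymptotic state.

```python
import numpy as np
from scipy.integrate import solve_ivp

# SIR-type host-pathogen model; beta and gamma are illustrative values.
beta, gamma = 0.4, 0.1          # transmission and removal rates

def sir(t, y):
    S, I, R = y
    return [-beta * S * I, beta * S * I - gamma * I, gamma * I]

sol = solve_ivp(sir, (0, 160), [0.99, 0.01, 0.0], max_step=0.5)
I = sol.y[1]
# Crude relaxation time: first time the epidemic falls below 0.1% of its peak.
t_relax = sol.t[np.argmax(I < 1e-3 * I.max())]
print(f"epidemic peak I = {I.max():.3f}, relaxed below 0.1% of peak by t = {t_relax:.1f}")
```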

Relevance:

30.00%

Publisher:

Abstract:

Objectives A pharmacy Central Intravenous Additives Service (CIVAS) provides ready-to-use injectable medicines. However, manipulation of a licensed injectable medicine may significantly alter the stability of the drug(s) in the final product. The aim of this study was to develop a stability-indicating assay for CIVAS-produced dobutamine 500 mg in 50 ml dextrose 1% (w/v) prefilled syringes, and to allocate a suitable shelf life. Methods A stability-indicating high performance liquid chromatography (HPLC) assay was established for dobutamine. The stability of dobutamine prefilled syringes was evaluated under storage conditions of 4°C (protected from light), room temperature (protected from light), room temperature (exposed to light) and 40°C (protected from light) at various time points (up to 42 days). Results An HPLC method employing a Hypersil column, a mobile phase (pH 4.0) consisting of 82:12:6 (v/v/v) 0.05 M KH2PO4:acetonitrile:methanol plus 0.3% (v/v) triethylamine, and UV detection at λ=280 nm was specific for dobutamine. Among the storage conditions, only samples stored at 40°C showed greater than 5% degradation (5.08%) at 42 days, and this condition had the shortest T95% on this criterion (44.6 days, compared with 111.4 days at 4°C). Exposure to light also reduced dobutamine stability. Discolouration on storage was the limiting factor in shelf-life allocation, even when dobutamine remained within 5% of the initial concentration. Conclusions A stability-indicating HPLC assay for dobutamine was developed. The shelf life recommended for the CIVAS product was 42 days at 4°C and 35 days at room temperature when protected from light.
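For illustration, a T95% of the kind reported here can be obtained by fitting a degradation rate and solving for the time at which 95% of the initial concentration remains. The sketch below assumes first-order kinetics and uses invented assay data; the paper's actual fitting procedure is not specified in the abstract.

```python
import numpy as np

# Invented assay data: concentration as % of initial over storage time.
t = np.array([0, 7, 14, 28, 42])                  # days
conc = np.array([100.0, 99.2, 98.4, 96.9, 94.9])  # % of initial

# First-order kinetics (assumption): ln(C/C0) = -k t, so fit the slope.
k = -np.polyfit(t, np.log(conc / 100.0), 1)[0]    # rate constant (1/day)
t95 = np.log(100.0 / 95.0) / k                    # time until 95% remains
print(f"k = {k:.5f} /day, T95% = {t95:.1f} days")
```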

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we study jumps in commodity prices. Contrary to what is assumed in existing models of commodity price dynamics, a simple analysis of the data reveals that the probability of tail events is not constant but depends on the time of the year, i.e. it exhibits seasonality. We propose a stochastic volatility jump-diffusion model to capture this seasonal variation. Applying the Markov chain Monte Carlo (MCMC) methodology, we estimate our model using 20 years of futures data from four different commodity markets. We find strong statistical evidence that our model with seasonal jump intensity outperforms models featuring a constant jump intensity. To demonstrate the practical relevance of our findings, we show that our model typically improves Value-at-Risk (VaR) forecasts.
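A hedged sketch of the central modelling ingredient: a jump-diffusion whose jump intensity varies with the time of year, simulated by per-step Poisson thinning. All parameter values are invented, and the specification is a simplified stand-in for the paper's stochastic volatility jump-diffusion model (stochastic volatility itself is omitted here).

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n_days = 1 / 252, 252 * 5
mu, sigma = 0.02, 0.25                  # drift and (constant) diffusion vol
lam0, lam1 = 2.0, 1.5                   # mean jump intensity and seasonal swing
jump_scale = 0.05                       # jump-size standard deviation

t = np.arange(n_days) * dt
lam = lam0 + lam1 * np.sin(2 * np.pi * (t % 1.0))   # jumps/year, seasonal
jumps = rng.poisson(lam * dt)                        # jump counts per step
log_ret = ((mu - 0.5 * sigma**2) * dt
           + sigma * np.sqrt(dt) * rng.standard_normal(n_days)
           + jumps * rng.normal(0.0, jump_scale, n_days))
price = 100 * np.exp(np.cumsum(log_ret))

# Seasonality check: jump counts cluster in the high-intensity quarters.
print("jumps per quarter of year:",
      [int(jumps[(t % 1.0 >= q / 4) & (t % 1.0 < (q + 1) / 4)].sum())
       for q in range(4)])
```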

Relevance:

30.00%

Publisher:

Abstract:

The goal of this work is the efficient solution of the heat equation with Dirichlet or Neumann boundary conditions using the Boundary Element Method (BEM). Solving the heat equation efficiently is useful because it is a simple model problem for other types of parabolic problems. In the complicated spatial domains often found in engineering, BEM can be beneficial since only the boundary of the domain has to be discretised. This makes BEM simpler to apply than domain methods such as finite elements and finite differences, which are conventionally combined with time-stepping schemes to solve this problem. The contribution of this work is to further decrease the complexity of solving the heat equation, leading both to speed gains (in CPU time) and to smaller memory requirements for the same problem. To do this we combine the complexity gains of boundary reduction by integral equation formulations with a discretisation using wavelet bases. This reduces the total work to O(h

Relevance:

30.00%

Publisher:

Abstract:

The performance of three urban land surface models, run in offline mode with their default external parameters, is evaluated for two distinctly different sites in Helsinki: Torni and Kumpula. The former is a dense city centre site with 22% vegetation, while the latter is a suburban site with over 50% vegetation. At both locations the models are compared against sensible and latent heat fluxes measured using the eddy covariance technique, along with snow depth observations. The cold climate experienced by the city causes strong seasonal variations that include snow cover and stable atmospheric conditions. Most of the time the three models are able to account for the differences between the study areas as well as for the seasonal and diurnal variability of the energy balance components. However, performance is not consistent across modelled components, seasons and surface types. The net all-wave radiation is well simulated, with the greatest uncertainties related to the timing of snowmelt, when the fraction of snow cover plays a key role, particularly in determining the surface albedo. For the turbulent fluxes, more variation between the models is seen, which can be explained partly by the different methods used in their calculation and partly by surface parameter values. For the sensible heat flux, the simulation of wintertime values was the main problem, which in turn leads to issues in predicting near-surface stability, particularly at the dense city centre site. All models have the most difficulty simulating the latent heat flux. This study particularly emphasizes that improvements are needed in the parameterization of anthropogenic heat flux and thermal parameters in winter, snow cover in spring, and evapotranspiration, in order to improve surface energy balance modelling in cold climate cities.

Relevance:

30.00%

Publisher:

Abstract:

The horizontal gradient of potential vorticity (PV) across the tropopause typically declines with lead time in global numerical weather forecasts and tends towards a steady value dependent on model resolution. This paper examines how spreading the tropopause PV contrast over a broader frontal zone affects the propagation of Rossby waves. The approach taken is to analyse Rossby waves on a PV front of finite width in a simple single-layer model. The dispersion relation for linear Rossby waves on a PV front of infinitesimal width is well known; here an approximate correction is derived for the case of a finite-width front, valid in the limit that the front is narrow compared to the zonal wavelength. Broadening the front causes a decrease both in the jet speed and in the ability of waves to propagate upstream. The contributions of these changes to the Rossby wave phase speed cancel at leading order. At second order the decrease in jet speed dominates, meaning that phase speeds are slower on broader PV fronts. This asymptotic phase speed result is shown to hold for a wide class of single-layer dynamics with a varying range of PV inversion operators. The dependence of phase speed on frontal width is verified by numerical simulations and shown to be robust at finite wave amplitude, and estimates are made of the error in Rossby wave propagation speeds due to the PV gradient error present in numerical weather forecast models.
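For reference, the "well-known" infinitesimal-width dispersion relation takes the following form in one standard single-layer notation; the symbols, and the use of the Rossby radius L_R, are my assumptions about conventions, and the paper's may differ.

```latex
% Single-layer QG flow with Rossby radius $L_R$ and a PV jump $\Delta q$
% across a front at $y = 0$.  The front supports a jet
% $u(y) = (\Delta q\, L_R / 2)\, e^{-|y|/L_R}$, and linear waves of zonal
% wavenumber $k$ on the front travel at
\[
  c(k) = \underbrace{\tfrac{1}{2}\,\Delta q\, L_R}_{\text{jet speed}}
       \;-\; \underbrace{\frac{\Delta q}{2\sqrt{k^{2} + L_R^{-2}}}}_{\text{counter-propagation}},
\]
% so short waves are nearly advected with the jet while long waves can
% propagate upstream.  The abstract's finite-width correction perturbs both
% terms, with the jet-speed reduction dominating at second order.
```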

Relevance:

30.00%

Publisher:

Abstract:

The Madden-Julian oscillation (MJO) is the most prominent form of tropical intraseasonal variability. This study investigated the following questions. Do interannual-to-decadal variations in tropical sea surface temperature (SST) lead to substantial changes in MJO activity? Was there a change in the MJO in the 1970s? Can this change be associated with SST anomalies? What was the level of MJO activity in the pre-reanalysis era? These questions were investigated with a stochastic model of the MJO. Reanalysis data (1948-2008) were used to develop a nine-state first-order Markov model capable of simulating the non-stationarity of the MJO. The model is driven by observed SST anomalies, and a large ensemble of simulations was performed to infer the activity of the MJO in the instrumental period (1880-2008). The model is capable of reproducing the activity of the MJO during the reanalysis period. The simulations indicate that the MJO exhibited a regime of near-normal activity in 1948-1972 (3.4 events per year) and two regimes of high activity in 1973-1989 (3.9 events) and 1990-2008 (4.6 events). Stochastic simulations indicate decadal shifts with near-normal levels in 1880-1895 (3.4 events), low activity in 1896-1917 (2.6 events) and a return to near-normal levels during 1918-1947 (3.3 events). The results also point to significant decadal changes in the probability of very active years (5 or more MJO events): 0.214 (1880-1895), 0.076 (1896-1917), 0.197 (1918-1947) and 0.193 (1948-1972). After the change in behavior in the 1970s, this probability increased to 0.329 (1973-1989) and 0.510 (1990-2008). The observational and stochastic simulations presented here call attention to the need to further understand the variability of the MJO on a wide range of time scales.
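A minimal sketch of the first-order Markov machinery: estimate a transition matrix from a state sequence by row-normalised counts, then simulate from it. Nine states as in the study, but the "observed" sequence below is synthetic, and the paper's key ingredient, transition probabilities driven by SST anomalies, is omitted.

```python
import numpy as np

rng = np.random.default_rng(2)
n_states = 9
states = rng.integers(0, n_states, size=5000)   # stand-in for observed states

# Maximum-likelihood transition matrix: row-normalised transition counts.
counts = np.zeros((n_states, n_states))
for a, b in zip(states[:-1], states[1:]):
    counts[a, b] += 1
P = counts / counts.sum(axis=1, keepdims=True)

def simulate(P, s0, n):
    out = [s0]
    for _ in range(n - 1):
        out.append(rng.choice(len(P), p=P[out[-1]]))
    return np.array(out)

sim = simulate(P, 0, 10000)
print("occupancy of state 0 (simulated vs data):",
      round((sim == 0).mean(), 3), round((states == 0).mean(), 3))
```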

Relevance:

30.00%

Publisher:

Abstract:

Flickering is a phenomenon related to mass accretion observed in many classes of astrophysical objects. In this paper we present a study of flickering in the emission lines and continuum of the cataclysmic variable V3885 Sgr. The flickering behavior was first analyzed through statistical analysis and the power spectra of lightcurves. Autocorrelation techniques were then employed to estimate the flickering timescale of flares. A cross-correlation study between the line and its underlying continuum variability is presented. The cross-correlation between the photometric and spectroscopic data is also discussed. Periodograms calculated using emission-line data show behavior similar to that obtained from photometric datasets found in the literature, with a plateau at lower frequencies and a power law at higher frequencies. The power-law index is consistent with stochastic events. The cross-correlation study indicates the presence of a correlation between the variability in Hα and that of its underlying continuum. Flickering timescales derived from the photometric data were estimated to be 25 min for two lightcurves and 10 min for one of them. The average timescale of the line flickering is 40 min, while for its underlying continuum it drops to 20 min.
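A sketch of the autocorrelation-based timescale estimate, applied to a synthetic AR(1) "lightcurve" whose true decorrelation time is known. The 1/e criterion used below is one common convention and an assumption here, not necessarily the paper's.

```python
import numpy as np

rng = np.random.default_rng(3)
n, tau_true = 4000, 50.0                 # samples, decorrelation time
x = np.zeros(n)
for i in range(1, n):                    # AR(1) stand-in for flickering
    x[i] = np.exp(-1 / tau_true) * x[i - 1] + rng.standard_normal()

# Normalised autocorrelation function and its 1/e crossing.
x = x - x.mean()
acf = np.correlate(x, x, mode="full")[n - 1:]
acf /= acf[0]
timescale = np.argmax(acf < np.exp(-1))  # first lag below 1/e
print(f"estimated flickering timescale ~ {timescale} samples (true {tau_true:.0f})")
```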

Relevance:

30.00%

Publisher:

Abstract:

A particle filter method is presented for the discrete-time filtering problem with nonlinear Itô stochastic ordinary differential equations (SODEs) with additive noise, assumed to be analytically integrable as a function of the underlying vector Wiener process and time. The Diffusion Kernel Filter is arrived at by a parametrization of small noise-driven state fluctuations within branches of prediction and a local use of this parametrization in the Bootstrap Filter. The method applies for small noise and short prediction steps. With explicit numerical integrators, the operation count in the Diffusion Kernel Filter is shown to be smaller than in the Bootstrap Filter whenever the initial state for the prediction step has sufficiently few moments. The established parametrization is a dual formula for the analysis of sensitivity to Gaussian initial perturbations and of sensitivity to noise perturbations in deterministic models, showing in particular how the stability of deterministic dynamics is modeled by noise on short times and how the diffusion matrix of an SODE should be modeled (i.e. defined) for a Gaussian-initial deterministic problem to be cast as an SODE problem. From it, a novel definition of prediction may be proposed that coincides with the deterministic path within the branch of prediction whose information entropy at the end of the prediction step is closest to the average information entropy over all branches. Tests are made with the Lorenz-63 equations, showing good results both for the filter and for the definition of prediction.
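For orientation, the sketch below implements the baseline Bootstrap Filter that the Diffusion Kernel Filter is compared against (not the DKF itself), on a one-dimensional Ornstein-Uhlenbeck SODE with additive noise. The model, parameters and observation schedule are invented.

```python
import numpy as np

rng = np.random.default_rng(4)
dt, n_steps, n_particles = 0.01, 200, 500
theta, sigma, obs_std = 1.0, 0.1, 0.2      # OU rate, small noise, obs noise

# Synthetic truth and noisy observations every 20 steps.
x_true = np.zeros(n_steps)
for i in range(1, n_steps):
    x_true[i] = (x_true[i - 1] - theta * x_true[i - 1] * dt
                 + sigma * np.sqrt(dt) * rng.standard_normal())
obs_idx = np.arange(20, n_steps, 20)
y = x_true[obs_idx] + obs_std * rng.standard_normal(len(obs_idx))

particles = rng.normal(1.0, 0.5, n_particles)   # misspecified initial ensemble
k = 0
for i in range(1, n_steps):
    # Prediction: propagate each particle through the SODE (Euler-Maruyama).
    particles += (-theta * particles * dt
                  + sigma * np.sqrt(dt) * rng.standard_normal(n_particles))
    if k < len(obs_idx) and i == obs_idx[k]:
        # Update: Gaussian likelihood weights, then multinomial resampling.
        w = np.exp(-0.5 * ((y[k] - particles) / obs_std) ** 2)
        w /= w.sum()
        particles = particles[rng.choice(n_particles, n_particles, p=w)]
        k += 1
print("final filter mean:", particles.mean(), "truth:", x_true[-1])
```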

Relevance:

30.00%

Publisher:

Abstract:

We analyzed the structure of a multispecific network of interacting ants and plants bearing extrafloral nectaries, recorded in 1990 and again in 2000 in La Mancha, Veracruz, Mexico. We assessed the replicability of the number of interactions found among species and also whether there had been changes in the network structure associated with the appearance of new ant and plant species during that 10-year period. Our results show that the nested topology of the network was similar between sampling dates, group dissimilarity increased, the mean number of interactions for ant species increased, the frequency distribution of standardized degrees reached higher values for plant species, more ant species and fewer plant species constituted the core of the more recent network, and the presence of new ant and plant species increased while their contribution to nestedness remained the same. Generalist species (i.e., those with the most links or interactions) appeared to maintain the stability of the network, because the new species incorporated into the communities were linked to this core of generalists. Camponotus planatus was the most extreme generalist ant species (the one with the most links) in both networks, followed by four other ant species; other species, however, changed either their position along the continuum from generalists to specialists or their presence within the network. Even though new species moved into the area during the decade between the surveys, the overall network structure remained unmodified.

Relevance:

30.00%

Publisher:

Abstract:

In this paper we make use of stochastic volatility models to analyse the behaviour of a weekly series of average ozone measurements. The models considered here have previously been used in problems related to financial time series. Two models are considered and their parameters are estimated using a Bayesian approach based on Markov chain Monte Carlo (MCMC) methods. Both models are applied to data provided by the monitoring network of the Metropolitan Area of Mexico City. The selection of the best model for this specific data set is performed using the Deviance Information Criterion and the Conditional Predictive Ordinate method.
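As a sketch of one of the model-selection tools named above, the snippet below computes the Deviance Information Criterion, DIC = Dbar + pD with pD = Dbar - D(theta_bar), from posterior draws. The simple Gaussian likelihood and the fake "MCMC output" are placeholders, not the paper's ozone models.

```python
import numpy as np

rng = np.random.default_rng(5)
data = rng.normal(1.0, 2.0, size=200)

def loglik(mu, sigma, y):
    # Gaussian log-likelihood (placeholder model).
    return -0.5 * np.sum(np.log(2 * np.pi * sigma**2) + ((y - mu) / sigma) ** 2)

# Stand-in for MCMC output: posterior draws of (mu, sigma).
mu_draws = rng.normal(data.mean(), 0.1, 2000)
sg_draws = np.abs(rng.normal(data.std(), 0.1, 2000))

# Deviance D = -2 log L at each draw; pD = Dbar - D(posterior mean).
D = np.array([-2 * loglik(m, s, data) for m, s in zip(mu_draws, sg_draws)])
D_bar = D.mean()
D_hat = -2 * loglik(mu_draws.mean(), sg_draws.mean(), data)
p_D = D_bar - D_hat
print(f"DIC = {D_bar + p_D:.1f} (p_D = {p_D:.2f}); lower DIC is preferred")
```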

Relevance:

30.00%

Publisher:

Abstract:

This study investigates the numerical simulation of three-dimensional time-dependent viscoelastic free surface flows using the Upper-Convected Maxwell (UCM) constitutive equation and an explicit algebraic model. The investigation was carried out to develop a simplified approach that can be applied to the extrudate swell problem. The relevant physics of this flow phenomenon is discussed in the paper and an algebraic model to predict extrudate swell is presented. It is based on an explicit algebraic representation of the non-Newtonian extra-stress through a kinematic tensor formed with the scaled dyadic product of the velocity field. The elasticity of the fluid is governed by a single transport equation for a scalar quantity which has the dimension of strain rate. Mass and momentum conservation and the constitutive equation (UCM or algebraic model) were solved by a three-dimensional time-dependent finite difference method. The free surface of the fluid was modeled using a marker-and-cell approach. The algebraic model was validated by comparing the numerical predictions with analytic solutions for pipe flow. In comparison with the classical UCM model, one advantage of this approach is that the computational workload is substantially reduced: the UCM model employs six differential equations while the algebraic model uses only one. The results showed stable flows with very large extrudate growth, beyond that usually obtained with standard differential viscoelastic models.
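For reference, the UCM constitutive equation named above, written in one common notation; the symbols are my choice and the paper's conventions may differ.

```latex
% Extra-stress $\boldsymbol{\tau}$, relaxation time $\lambda$, polymer
% viscosity $\eta_p$, rate-of-strain $\mathbf{D} = (\nabla\mathbf{u} + (\nabla\mathbf{u})^{\mathsf{T}})/2$.
\[
  \boldsymbol{\tau} + \lambda\,\overset{\nabla}{\boldsymbol{\tau}} = 2\,\eta_p\,\mathbf{D},
  \qquad
  \overset{\nabla}{\boldsymbol{\tau}}
    = \frac{\partial\boldsymbol{\tau}}{\partial t}
    + (\mathbf{u}\cdot\nabla)\boldsymbol{\tau}
    - (\nabla\mathbf{u})^{\mathsf{T}}\boldsymbol{\tau}
    - \boldsymbol{\tau}\,(\nabla\mathbf{u}).
\]
% In 3-D the symmetric tensor $\boldsymbol{\tau}$ has six independent
% components, hence the abstract's count of six transport equations for the
% UCM model versus a single scalar equation for the algebraic model.
```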