931 results for Spatial conditional autoregressive model
Abstract:
Lack of access to insurance exacerbates the impact of climate variability on smallholder farmers in Africa. Unlike traditional insurance, which compensates proven agricultural losses, weather index insurance (WII) pays out in the event that a weather index is breached. In principle, WII could be provided to farmers throughout Africa. There are two data-related hurdles to this. First, most farmers do not live close enough to a rain gauge with a sufficiently long record of observations. Second, mismatches between weather indices and yield may expose farmers to uncompensated losses, and insurers to unfair payouts – a phenomenon known as basis risk. In essence, basis risk results from complexities in the progression from meteorological drought (rainfall deficit) to agricultural drought (low soil moisture). In this study, we use a land-surface model to describe the transition from meteorological to agricultural drought. We demonstrate that spatial and temporal aggregation of rainfall results in a clearer link with soil moisture, and hence a reduction in basis risk. We then use an advanced statistical method to show how optimal aggregation of satellite-based rainfall estimates can reduce basis risk, enabling remotely sensed data to be utilized robustly for WII.
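The index-trigger mechanism the abstract describes can be illustrated with a minimal payout rule. This is a generic sketch, not the contract design used in the study: the strike, exit, and payout values below are invented, and real WII contracts use indices calibrated to local crops.

```python
# Minimal sketch of a weather index insurance payout rule: the insurer pays
# when a rainfall index falls below a "strike" level, scaling linearly up to
# a full payout at an "exit" level. All threshold values are illustrative.

def wii_payout(rain_index_mm, strike_mm=100.0, exit_mm=40.0, max_payout=1000.0):
    """Linear proportional payout between the strike and exit rainfall levels."""
    if rain_index_mm >= strike_mm:
        return 0.0                      # enough rain: no payout
    if rain_index_mm <= exit_mm:
        return max_payout               # severe deficit: full payout
    # Partial payout, proportional to the deficit below the strike
    return max_payout * (strike_mm - rain_index_mm) / (strike_mm - exit_mm)
```

Basis risk, in these terms, is the gap between what this function pays and the loss the farmer actually experienced; aggregating the rainfall index in space and time, as the study proposes, tightens the link between the two.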
Abstract:
Land cover data derived from satellites are commonly used to prescribe inputs to models of the land surface. Since such data inevitably contain errors, quantifying how uncertainties in the data affect a model's output is important. To do so, a spatial distribution of possible land cover values is required to propagate through the model's simulation. However, at large scales, such as those required for climate models, such spatial modelling can be difficult. Also, computer models often require land cover proportions at sites larger than the original map scale as inputs, and it is the uncertainty in these proportions that this article discusses. This paper describes a Monte Carlo sampling scheme that generates realisations of land cover proportions from the posterior distribution implied by a Bayesian analysis that combines spatial information in the land cover map and its associated confusion matrix. The technique is computationally simple and has been applied previously to the Land Cover Map 2000 for the region of England and Wales. This article demonstrates the ability of the technique to scale up to large (global) satellite-derived land cover maps and reports its application to the GlobCover 2009 data product. The results show that, in general, the GlobCover data possess only small biases, with the largest belonging to non-vegetated surfaces. In vegetated surfaces, the most prominent area of uncertainty is Southern Africa, which represents a complex heterogeneous landscape. It is also clear from this study that greater resources need to be devoted to the construction of comprehensive confusion matrices.
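The core of such a scheme can be sketched in a few lines: draw a "true" class for each mapped pixel with probability proportional to the confusion-matrix column for its mapped class, then aggregate to proportions. This is a non-spatial simplification (the paper's Bayesian analysis also uses spatial information), and the classes and counts below are invented.

```python
import random

# Sketch of Monte Carlo realisations of land cover proportions from a
# confusion matrix, ignoring spatial correlation. P(true | mapped) is taken
# proportional to the confusion-matrix column for the mapped class.

CLASSES = ["forest", "crop", "urban"]
# CONFUSION[i][j] = validation pixels of true class i mapped as class j
# (hypothetical counts for illustration)
CONFUSION = [[80, 15, 5],
             [10, 85, 5],
             [2, 8, 90]]

def sample_true_class(mapped_j, rng):
    """Draw a true class for a pixel mapped as class j, using column j."""
    col = [CONFUSION[i][mapped_j] for i in range(len(CLASSES))]
    return rng.choices(range(len(CLASSES)), weights=col, k=1)[0]

def realisation(mapped_pixels, rng):
    """One Monte Carlo realisation of true-class proportions for a site."""
    counts = [0] * len(CLASSES)
    for j in mapped_pixels:
        counts[sample_true_class(j, rng)] += 1
    n = len(mapped_pixels)
    return [c / n for c in counts]

rng = random.Random(42)
# A hypothetical site: 50 pixels mapped forest, 30 crop, 20 urban
props = realisation([0] * 50 + [1] * 30 + [2] * 20, rng)
```

Repeating `realisation` many times yields an ensemble of proportion vectors whose spread quantifies the land cover uncertainty to propagate through the land-surface model.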
Abstract:
Field observations of new particle formation and the subsequent particle growth are typically only possible at a fixed measurement location, and hence do not follow the temporal evolution of an air parcel in a Lagrangian sense. Standard analysis for determining formation and growth rates requires that the time-dependent formation rate and growth rate of the particles are spatially invariant; air parcel advection means that the observed temporal evolution of the particle size distribution at a fixed measurement location may not represent the true evolution if there are spatial variations in the formation and growth rates. Here we present a zero-dimensional aerosol box model coupled with one-dimensional atmospheric flow to describe the impact of advection on the evolution of simulated new particle formation events. Wind speed, particle formation rates and growth rates are input parameters that can vary as a function of time and location, using wind speed to connect location to time. The output simulates measurements at a fixed location; formation and growth rates of the particle mode can then be calculated from the simulated observations at a stationary point for different scenarios and be compared with the 'true' input parameters. Hence, we can investigate how spatial variations in the formation and growth rates of new particles would appear in observations of particle number size distributions at a fixed measurement site. We show that the particle size distribution and growth rate at a fixed location are dependent on the formation and growth parameters upwind, even if local conditions do not vary. We also show that different input parameters may result in very similar simulated measurements. Erroneous interpretation of observations in terms of particle formation and growth rates, and the time span and areal extent of new particle formation, is possible if the spatial effects are not accounted for.
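The key coupling, using wind speed to connect location to time, can be illustrated with a toy integration: a parcel arriving at the fixed site accumulated growth along its upwind trajectory, so the observed diameter depends on upwind conditions even when local conditions are constant. This is a hypothetical sketch, not the paper's box model; the starting diameter, time step, and rate functions are invented.

```python
# Toy sketch of the Lagrangian-to-Eulerian mapping: a parcel observed at a
# fixed site at time t_obs was at x = x_site - u * (t_obs - t) at earlier
# time t, so observed particle size integrates spatially varying growth.

def observed_diameter(t_obs, t_form, u, x_site, growth_rate_at, dt=0.1):
    """Integrate growth (nm per unit time) along the parcel trajectory
    that arrives at x_site at t_obs, having nucleated at t_form."""
    d = 1.0  # nm, assumed nucleation-mode starting diameter
    n = int(round((t_obs - t_form) / dt))
    for k in range(n):
        t = t_form + k * dt
        x = x_site - u * (t_obs - t)       # parcel position at time t
        d += growth_rate_at(x) * dt        # spatially varying growth rate
    return d
```

With a growth rate that is higher upwind of the site, this returns a larger observed diameter than the local rate alone would suggest, which is the advection effect the abstract describes.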
Abstract:
In arthropods, most cases of morphological dimorphism within males are the result of a conditional evolutionarily stable strategy (ESS) with status-dependent tactics. In conditionally male-dimorphic species, the status distributions of male morphs often overlap, and the environmentally cued threshold model (ET) states that the degree of overlap depends on the genetic variation in the distribution of the switchpoints that determine which morph is expressed at each value of status. Here we describe male dimorphism and alternative mating behaviors in the harvestman Serracutisoma proximum. Majors express elongated second legs and use them in territorial fights; minors possess short second legs and do not fight, but rather sneak into majors' territories and copulate with egg-guarding females. The static allometry of second legs reveals that major phenotype expression depends on body size (status), and that the switchpoint underlying the dimorphism presents a large amount of genetic variation in the population, which probably results from weak selective pressure on this trait. With a mark-recapture study, we show that major phenotype expression does not result in survival costs, which is consistent with our hypothesis that there is weak selection on the switchpoint. Finally, we demonstrate that the switchpoint is independent of the status distribution. In conclusion, our data support the ET model prediction that the genetic correlation between status and switchpoint is low, allowing the status distribution to evolve or to fluctuate seasonally, without any effect on the position of the mean switchpoint.
Abstract:
Voluntary physical activity improves memory and learning ability in rodents, whereas status epilepticus has been associated with memory impairment. Physical activity and seizures have been associated with enhanced hippocampal expression of BDNF, indicating that this protein may have a dual role in epilepsy. The influence of voluntary physical activity on memory and BDNF expression has been poorly studied in experimental models of epilepsy. In this paper, we have investigated the effect of voluntary physical activity on memory and BDNF expression in mice with pilocarpine-induced epilepsy. Male Swiss mice were assigned to four experimental groups: pilocarpine sedentary (PS), pilocarpine runners (PRs), saline sedentary (SS) and saline runners (SRs). Two days after pilocarpine-induced status epilepticus, the affected mice (PR) and their running controls (SR) were housed with access to a running wheel for 28 days. After that, the spatial memory and the expression of the precursor and mature forms of hippocampal BDNF were assessed. PR mice performed better than PS mice in the water maze test. In addition, PR mice had a higher amount of mature BDNF (14 kDa) relative to the total BDNF (14 kDa + 28 kDa + 32 kDa forms) content when compared with PS mice. These results show that voluntary physical activity improved the spatial memory and increased the hippocampal content of mature BDNF in mice with pilocarpine-induced status epilepticus.
Abstract:
The transition to turbulence (spatio-temporal chaos) in a wide class of spatially extended dynamical systems is due to the loss of transversal stability of a chaotic attractor lying on a homogeneous manifold (in the Fourier phase space of the system), causing spatial mode excitation. Since the latter manifests as intermittent spikes, this has been called a bubbling transition. We present numerical evidence that this transition occurs due to the so-called blowout bifurcation, whereby the attractor as a whole loses transversal stability and becomes a chaotic saddle. We used a nonlinear three-wave interacting model with spatial diffusion as an example of this transition.
Abstract:
The retrieval of aerosol optical depth (τa) over land by satellite remote sensing is still a challenge when a high spatial resolution is required. This study presents a tool that uses satellite measurements to dynamically identify the aerosol optical model that best represents the optical properties of the aerosol present in the atmosphere. We use aerosol critical reflectance to identify the single scattering albedo of the aerosol layer. Two case studies show that the Sao Paulo region can have different aerosol properties and demonstrate how the dynamic methodology works to identify those differences to obtain a better τa retrieval. The methodology assigned the high single scattering albedo aerosol model (ϖ0(λ = 0.55) = 0.90) to the case where the aerosol source was dominated by biomass burning and the lower ϖ0 model (ϖ0(λ = 0.55) = 0.85) to the case where the local urban aerosol had the dominant influence on the region, as expected. The dynamic methodology was applied using cloud-free data from 2002 to 2005 in order to retrieve τa with the Moderate Resolution Imaging Spectroradiometer (MODIS). These results were compared with collocated data measured by AERONET in Sao Paulo. The comparison shows better results when the dynamic methodology using two aerosol optical models is applied (slope 1.06 ± 0.08, offset 0.01 ± 0.02, r² = 0.6) than when a single fixed aerosol model is used (slope 1.48 ± 0.11, offset -0.03 ± 0.03, r² = 0.6). In conclusion, the dynamic methodology is shown to work well with two aerosol models. Further studies are necessary to evaluate the methodology in other regions and under different conditions.
Abstract:
We study a stochastic process describing the onset of spreading dynamics of an epidemic in a population composed of individuals of three classes: susceptible (S), infected (I), and recovered (R). The stochastic process is defined by local rules and involves the following cyclic process: S -> I -> R -> S (SIRS). The open process S -> I -> R (SIR) is studied as a particular case of the SIRS process. The epidemic process is analyzed at different levels of description: by a stochastic lattice gas model and by a birth and death process. By means of Monte Carlo simulations and dynamical mean-field approximations we show that the SIRS stochastic lattice gas model exhibits a line of critical points separating two phases: an absorbing phase where the lattice is completely full of S individuals and an active phase where S, I and R individuals coexist, which may or may not present population cycles. The critical line, which corresponds to the onset of epidemic spreading, is shown to belong to the directed percolation universality class. By considering the birth and death process we analyze the role of noise in stabilizing the oscillations.
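The lattice-gas side of such a study can be sketched with a small Monte Carlo simulation. This is a generic SIRS sketch under assumed local rules and illustrative rates, not a reproduction of the paper's exact dynamics or its mean-field analysis.

```python
import random

# Minimal Monte Carlo sketch of an SIRS stochastic lattice gas on a periodic
# square grid: S -> I by contact with infected nearest neighbours, I -> R
# and R -> S spontaneously. Rates and lattice size are illustrative.

S, I, R = 0, 1, 2

def step(grid, L, p_inf, p_rec, p_loss, rng):
    """One Monte Carlo sweep: L*L random single-site updates."""
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        state = grid[i][j]
        if state == S:
            nn = [grid[(i + 1) % L][j], grid[(i - 1) % L][j],
                  grid[i][(j + 1) % L], grid[i][(j - 1) % L]]
            n_inf = sum(1 for s in nn if s == I)
            # infection probability proportional to infected neighbours
            if n_inf and rng.random() < p_inf * n_inf / 4:
                grid[i][j] = I
        elif state == I and rng.random() < p_rec:
            grid[i][j] = R
        elif state == R and rng.random() < p_loss:
            grid[i][j] = S

rng = random.Random(0)
L = 20
grid = [[I if rng.random() < 0.1 else S for _ in range(L)] for _ in range(L)]
for _ in range(50):
    step(grid, L, p_inf=0.8, p_rec=0.2, p_loss=0.1, rng=rng)
```

Sweeping the rates and recording whether the system ends in the all-S absorbing state or sustains coexistence is how the critical line between the two phases is located numerically; setting `p_loss = 0` recovers the open SIR case.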
Abstract:
We have considered a Bayesian approach for the nonlinear regression model by replacing the normal distribution on the error term with some skewed distributions, which account for both skewness and heavy tails or skewness alone. The type of data considered in this paper concerns repeated measurements taken in time on a set of individuals. Such multiple observations on the same individual generally produce serially correlated outcomes; thus, our model additionally allows for correlation between observations made on the same individual. We have illustrated the procedure using a data set to study the growth curves of a clinical measurement of a group of pregnant women from an obstetrics clinic in Santiago, Chile. Parameter estimation and prediction were carried out using appropriate posterior simulation schemes based on Markov chain Monte Carlo methods. Besides the deviance information criterion (DIC) and the conditional predictive ordinate (CPO), we suggest the use of proper scoring rules based on the posterior predictive distribution for comparing models. For our data set, all these criteria chose the skew-t model as the best model for the errors. The DIC and CPO criteria are also validated, for the model proposed here, through a simulation study. As a conclusion of this study, the DIC criterion is not trustworthy for this kind of complex model.
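The CPO criterion mentioned above has a standard Monte Carlo estimator, the harmonic mean of each observation's likelihood across posterior draws, which is easy to sketch. The likelihood values in the test below are invented; in practice they would come from the MCMC output of the fitted model.

```python
import math

# Sketch of the conditional predictive ordinate (CPO) estimated from MCMC
# output: CPO_i is the harmonic mean over posterior draws of the likelihood
# of observation i, and models are compared via the sum of log CPO_i.

def cpo(lik_draws_i):
    """Harmonic-mean estimator: CPO_i = M / sum_m (1 / p(y_i | theta_m))."""
    m = len(lik_draws_i)
    return m / sum(1.0 / lik for lik in lik_draws_i)

def log_pseudo_marginal_likelihood(lik_matrix):
    """Sum of log CPO_i over observations; higher indicates a better model."""
    return sum(math.log(cpo(row)) for row in lik_matrix)
```

Comparing this quantity between, say, a normal-error and a skew-t-error fit is the kind of check that led the authors to prefer the skew-t model.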
Abstract:
The objective of this article is to find out the influence of the parameters of ARIMA-GARCH models on the prediction performance of artificial neural networks (ANN) of the feed-forward type, trained with the Levenberg-Marquardt algorithm, through Monte Carlo simulations. The paper presents a study of the relationship between ANN performance and ARIMA-GARCH model parameters, i.e. the fact that depending on the stationarity and other parameters of the time series, the ANN structure should be selected differently. Neural networks have been widely used to predict time series, and their capacity for dealing with non-linearities is usually an outstanding advantage. However, the values of the parameters of generalized autoregressive conditional heteroscedasticity models have an influence on ANN prediction performance. The combination of the values of the GARCH parameters with the ARIMA autoregressive terms also leads to variation in ANN performance. Combining the parameters of the ARIMA-GARCH models and changing the ANN's topologies, we used the Theil inequality coefficient to measure the prediction of the feed-forward ANN.
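Two ingredients of such a Monte Carlo study, generating a GARCH(1,1) series and scoring forecasts with the Theil inequality coefficient, can be sketched directly. The parameter values are illustrative; the ANN training itself is not reproduced here.

```python
import math
import random

# Sketch of the simulation side of the study: Monte Carlo generation of a
# GARCH(1,1) series and the Theil inequality coefficient used to score a
# forecast against the realised series. Parameters are illustrative.

def simulate_garch(n, omega=0.1, alpha=0.1, beta=0.8, rng=None):
    """GARCH(1,1): sigma2_t = omega + alpha*e_{t-1}^2 + beta*sigma2_{t-1}."""
    rng = rng or random.Random(0)
    sigma2 = omega / (1.0 - alpha - beta)   # unconditional variance start
    e, series = 0.0, []
    for _ in range(n):
        sigma2 = omega + alpha * e * e + beta * sigma2
        e = math.sqrt(sigma2) * rng.gauss(0.0, 1.0)
        series.append(e)
    return series

def theil_u(actual, predicted):
    """Theil inequality coefficient: 0 is a perfect forecast."""
    n = len(actual)
    rmse = math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)
    denom = (math.sqrt(sum(a * a for a in actual) / n)
             + math.sqrt(sum(p * p for p in predicted) / n))
    return rmse / denom
```

Varying `alpha` and `beta` (and the ARIMA terms, omitted here) across simulated series, and comparing `theil_u` across ANN topologies, mirrors the experiment the abstract outlines.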
Abstract:
This paper studies a special class of vector smooth-transition autoregressive (VSTAR) models that contains common nonlinear features (CNFs), for which we propose a triangular representation and develop a procedure for testing CNFs in a VSTAR model. We first test a unit root against a stable STAR process for each individual time series and then, if the unit root is rejected in the first step, examine whether CNFs exist in the system by a Lagrange multiplier (LM) test. The LM test has a standard chi-squared asymptotic distribution. The critical values of our unit root tests and the small-sample properties of the F form of our LM test are studied by Monte Carlo simulations. We illustrate how to test and model CNFs using monthly growth data for consumption and income in the United States (1985:1 to 2011:11).
Abstract:
The p-median model is used to locate P facilities to serve a geographically distributed population. Conventionally, it is assumed that the population always travels to the nearest facility. Drezner and Drezner (2006, 2007) provide three arguments on why this assumption might be incorrect, and they introduce the gravity p-median model to relax the assumption. We favour the gravity p-median model, but we note that in an applied setting, Drezner and Drezner's arguments are incomplete. In this communication, we point to the existence of a fourth compelling argument for the gravity p-median model.
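The difference between the two models comes down to the allocation rule. A minimal sketch of a gravity-type rule is below; the exponential distance-decay form and the `theta` value are one common choice for illustration, not necessarily the specification Drezner and Drezner use.

```python
import math

# Sketch of the gravity allocation rule: instead of sending all demand to
# the nearest facility, demand at a point splits across facilities with
# probability proportional to a decreasing function of distance
# (here exp(-theta * d); the decay form and theta are illustrative).

def gravity_shares(distances, theta=1.0):
    """Probability that a customer patronises each facility."""
    weights = [math.exp(-theta * d) for d in distances]
    total = sum(weights)
    return [w / total for w in weights]

def expected_distance(distances, theta=1.0):
    """Expected travel distance under the gravity rule; summing this over
    demand points gives the gravity p-median objective."""
    return sum(p * d for p, d in zip(gravity_shares(distances, theta), distances))
```

Under the conventional rule the customer's cost is simply `min(distances)`; the gravity objective replaces it with this expectation, which is what relaxes the nearest-facility assumption.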
Abstract:
Millions of unconscious calculations are made daily by pedestrians walking through the Colby College campus. I used ArcGIS to make a predictive spatial model that chose paths similar to those that are actually used by people on a regular basis. To make a viable model of how most travelers choose their way, I considered both the distance required and the type of traveling surface. I used an iterative process to develop a scheme for weighting travel costs, which allowed accurate least-cost paths to be predicted by ArcMap. The accuracy was confirmed when the calculated routes were compared to satellite photography and were found to overlap well-worn "shortcuts" taken between the paved paths throughout campus.
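The least-cost path computation that ArcMap performs over a weighted cost surface is essentially Dijkstra's algorithm on a grid, which can be sketched compactly. The cost surface below is invented (low values for pavement, high for lawn), and real cost-distance tools also handle diagonal moves and cell-to-cell averaging, which this sketch omits.

```python
import heapq

# Sketch of least-cost path accumulation over a raster cost surface:
# Dijkstra's algorithm on a 4-connected grid, where each cell's value is
# the cost of entering it. The surface below is hypothetical.

def least_cost(grid, start, goal):
    """Minimum accumulated cost from start to goal over 4-connected cells."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue                        # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + grid[nr][nc]       # cost of entering the next cell
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf")

# 1 = paved path, 5 = grass: the cheap route follows the pavement
surface = [[1, 5, 5],
           [1, 1, 5],
           [5, 1, 1]]
```

Tuning the relative weights (here 1 versus 5) until the predicted routes match observed shortcuts is the iterative calibration the abstract describes.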
Abstract:
The reliable evaluation of flood forecasting is a crucial problem for assessing flood risk and consequent damages. Different hydrological models (distributed, semi-distributed or lumped) have been proposed in order to deal with this issue. The choice of the proper model structure has been investigated by many authors and is one of the main sources of uncertainty for a correct evaluation of the outflow hydrograph. In addition, the recent increase in data availability makes it possible to update hydrological models in response to real-time observations. For these reasons, the aim of this work is to evaluate the effect of different structures of a semi-distributed hydrological model on the assimilation of distributed uncertain discharge observations. The study was applied to the Bacchiglione catchment, located in Italy. The first methodological step was to divide the basin into different sub-basins according to topographic characteristics. Secondly, two different structures of the semi-distributed hydrological model were implemented in order to estimate the outflow hydrograph. Then, synthetic observations of uncertain values of discharge were generated, as a function of the observed and simulated values of flow at the basin outlet, and assimilated into the semi-distributed models using a Kalman filter. Finally, different spatial patterns of sensor locations were assumed to update the model state in response to the uncertain discharge observations. The results of this work pointed out that, overall, the assimilation of uncertain observations can improve the hydrological model performance. In particular, it was found that the model structure is an important factor, which is difficult to characterize, since it can induce different forecasts in terms of outflow discharge. This study is partly supported by the FP7 EU Project WeSenseIt.
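The Kalman filter update at the heart of such an assimilation step can be shown in scalar form: the analysis state is a variance-weighted blend of the model forecast and the uncertain observation. This is a textbook sketch with illustrative numbers; the paper's semi-distributed models carry richer state vectors and covariances.

```python
# Minimal sketch of the Kalman filter update used when assimilating an
# uncertain discharge observation: blend the model forecast (x_prior, with
# variance p_prior) and the observation (z_obs, with variance r_obs).

def kalman_update(x_prior, p_prior, z_obs, r_obs):
    """Return the updated state and variance after one observation."""
    k = p_prior / (p_prior + r_obs)          # Kalman gain
    x_post = x_prior + k * (z_obs - x_prior) # pull toward the observation
    p_post = (1.0 - k) * p_prior             # uncertainty always shrinks
    return x_post, p_post
```

Note the behaviour that matters for uncertain observations: as `r_obs` grows, the gain shrinks and the update trusts the model forecast more, which is why noisy crowd-sourced or synthetic discharge data can still improve, rather than degrade, the forecast.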
Abstract:
This paper proposes a spatial-temporal downscaling approach to construction of the intensity-duration-frequency (IDF) relations at a local site in the context of climate change and variability. More specifically, the proposed approach is based on a combination of a spatial downscaling method to link large-scale climate variables given by General Circulation Model (GCM) simulations with daily extreme precipitations at a site and a temporal downscaling procedure to describe the relationships between daily and sub-daily extreme precipitations based on the scaling General Extreme Value (GEV) distribution. The feasibility and accuracy of the suggested method were assessed using rainfall data available at eight stations in Quebec (Canada) for the 1961-2000 period and climate simulations under four different climate change scenarios provided by the Canadian (CGCM3) and UK (HadCM3) GCM models. Results of this application have indicated that it is feasible to link sub-daily extreme rainfalls at a local site with large-scale GCM-based daily climate predictors for the construction of the IDF relations for present (1961-1990) and future (2020s, 2050s, and 2080s) periods at a given site under different climate change scenarios. In addition, it was found that annual maximum rainfalls downscaled from the HadCM3 displayed a smaller change in the future, while those values estimated from the CGCM3 indicated a large increasing trend for future periods. This result has demonstrated the presence of high uncertainty in climate simulations provided by different GCMs. In summary, the proposed spatial-temporal downscaling method provided an essential tool for the estimation of extreme rainfalls that are required for various climate-related impact assessment studies for a given region.
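The temporal-downscaling step based on the scaling GEV distribution can be sketched as follows. Under simple scaling, the GEV location and scale parameters at a sub-daily duration follow a power law of the duration ratio while the shape parameter is unchanged; the daily parameters and scaling exponent below are illustrative, not estimates from the Quebec stations.

```python
import math

# Sketch of GEV-based temporal downscaling for IDF construction: under
# simple scaling, mu_d = mu_D * (d/D)**eta and sigma_d = sigma_D * (d/D)**eta,
# with the shape parameter xi unchanged. Parameter values are illustrative.

def scale_gev_params(mu_daily, sigma_daily, xi, d_hours, eta, d_ref=24.0):
    """GEV parameters at duration d_hours from daily (24 h) parameters."""
    factor = (d_hours / d_ref) ** eta
    return mu_daily * factor, sigma_daily * factor, xi

def gev_quantile(mu, sigma, xi, T):
    """T-year return level of a GEV(mu, sigma, xi) distribution."""
    p = 1.0 - 1.0 / T
    if abs(xi) < 1e-12:                      # Gumbel limit
        return mu - sigma * math.log(-math.log(p))
    return mu + sigma / xi * ((-math.log(p)) ** (-xi) - 1.0)
```

Evaluating `gev_quantile` at the scaled parameters for a range of durations and return periods yields the intensity-duration-frequency relations, with the daily GEV parameters themselves supplied by the spatial (GCM-driven) downscaling step.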