948 results for Random Coefficient Autoregressive Model (RCAR(1))
Abstract:
Nitrogen and phosphorus losses from the catchment of Slapton Ley, a small coastal lake in SW England, were calculated using an adaptation of a model developed by Jorgensen (1980). A detailed survey of the catchment revealed that its land use is dominated by permanent and temporary grassland (38% and 32% of its total area, respectively), with the remainder made up of cereal and field-vegetable cultivation and market gardening. Livestock in the catchment comprise ca. 6600 head of cattle, 10,000 sheep, 590 pigs, 1700 poultry and 58 horses. The permanent human population of the area is ca. 2000, served by two small gravity-fed sewage treatment works (STWs). Inputs to, and losses from, farmland in the catchment were computed using Jorgensen's model and coefficients derived from the data of Cooke (1976), Gostick (1982), Rast and Lee (1983) and Vollenweider (1968). Allowing for outputs from STWs, the total annual external load of N and P upon Slapton Ley is 160 t (35 kg ha⁻¹) a⁻¹ of N and 4.8 t (1.05 kg ha⁻¹) a⁻¹ of P. According to Vollenweider (1968, 1975), such loadings exceed the OECD permissible levels by a factor of ca. 50 in the case of N and ca. 5 in that of P. To reduce nutrient loads, attention would need to be paid to both STW and agricultural sources.
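As a rough arithmetic check of the loadings quoted above (a minimal sketch; the catchment area and OECD permissible loads are back-calculated from the stated totals and exceedance factors, not taken from the source):

```python
# Rough check of the areal nutrient loadings quoted in the abstract.
# Catchment area and permissible loads are back-calculated from the
# stated totals and exceedance factors (assumptions, not source data).

total_n_kg = 160_000.0   # 160 t N per year
total_p_kg = 4_800.0     # 4.8 t P per year
areal_n = 35.0           # kg ha-1 a-1 (stated)
areal_p = 1.05           # kg ha-1 a-1 (stated)

area_ha = total_n_kg / areal_n      # implied catchment area, ~4570 ha
permissible_n = areal_n / 50.0      # exceeded by ~50x -> ~0.7 kg ha-1 a-1
permissible_p = areal_p / 5.0       # exceeded by ~5x  -> ~0.21 kg ha-1 a-1

print(f"Implied catchment area: {area_ha:.0f} ha")
print(f"Implied permissible N load: {permissible_n:.2f} kg ha-1 a-1")
print(f"Implied permissible P load: {permissible_p:.2f} kg ha-1 a-1")
```

Note that the P figures imply the same catchment area (4,800 / 1.05 ≈ 4,570 ha), so the two stated areal loads are mutually consistent.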
Abstract:
Existing numerical characterizations of the optimal income tax have been based on a limited number of model specifications. As a result, they do not reveal which properties are general. We determine the optimal tax in the quasi-linear model under weaker assumptions than have previously been used; in particular, we remove the assumption of a lower bound on the utility of zero consumption and the need to permit negative labor incomes. A Monte Carlo analysis is then conducted in which economies are selected at random and the optimal tax function constructed. The results show that in a significant proportion of economies the marginal tax rate rises at low skills and falls at high. The average tax rate is equally likely to rise or fall with skill at low skill levels, rises in the majority of cases in the centre of the skill range, and falls at high skills. These results are consistent across all the specifications we test. We then extend the analysis to show that these results also hold for Cobb-Douglas utility.
Abstract:
We present a model of market participation in which the presence of non-negligible fixed costs leads to random censoring of the traditional double-hurdle model. Fixed costs arise when household resources must be devoted a priori to the decision to participate in the market. These costs, usually of time, are manifested in non-negligible minimum efficient supplies and a supply correspondence that requires modification of the traditional Tobit regression. The costs also complicate econometric estimation of household behavior. These complications are overcome by application of the Gibbs sampler. The algorithm thus derived provides robust estimates of the fixed-costs double-hurdle model. The model and procedures are demonstrated in an application to milk-market participation in the Ethiopian highlands.
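A minimal sketch of the censoring structure described above, simulating data in which a fixed participation cost implies a minimum efficient supply; this illustrates the two hurdles only and is not the authors' Gibbs-sampler estimator, and all parameter values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical household covariate and parameters (illustrative only).
x = rng.normal(size=n)
beta_participate, beta_supply = 0.8, 1.2
fixed_cost = 0.5   # fixed (time) cost of market participation
s_min = 0.3        # minimum efficient supply implied by the fixed cost

# Hurdle 1: participate only if the latent net benefit exceeds the fixed cost.
net_benefit = beta_participate * x + rng.normal(size=n)
participates = net_benefit > fixed_cost

# Hurdle 2: latent desired supply; it is observed only when the household
# participates and the supply clears the minimum efficient scale.
latent_supply = beta_supply * x + rng.normal(size=n)
observed = np.where(participates & (latent_supply > s_min), latent_supply, 0.0)

print(f"share of zero observations: {np.mean(observed == 0):.2f}")
```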
Abstract:
In order to validate the reported precision of space‐based atmospheric composition measurements, validation studies often focus on measurements in the tropical stratosphere, where natural variability is weak. The scatter in tropical measurements can then be used as an upper limit on single‐profile measurement precision. Here we introduce a method of quantifying the scatter of tropical measurements which aims to minimize the effects of short‐term atmospheric variability while maintaining large enough sample sizes that the results can be taken as representative of the full data set. We apply this technique to measurements of O3, HNO3, CO, H2O, NO, NO2, N2O, CH4, CCl2F2, and CCl3F produced by the Atmospheric Chemistry Experiment–Fourier Transform Spectrometer (ACE‐FTS). Tropical scatter in the ACE‐FTS retrievals is found to be consistent with the reported random errors (RREs) for H2O and CO at altitudes above 20 km, validating the RREs for these measurements. Tropical scatter in measurements of NO, NO2, CCl2F2, and CCl3F is roughly consistent with the RREs as long as the effect of outliers in the data set is reduced through the use of robust statistics. The scatter in measurements of O3, HNO3, CH4, and N2O in the stratosphere, while larger than the RREs, is shown to be consistent with the variability simulated in the Canadian Middle Atmosphere Model. This result implies that, for these species, stratospheric measurement scatter is dominated by natural variability, not random error, which provides added confidence in the scientific value of single‐profile measurements.
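A minimal sketch of the kind of scatter-versus-reported-error comparison described above, using a robust statistic (the scaled median absolute deviation) to limit the influence of outliers; the measurements and reported error below are synthetic placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "tropical" measurements at one altitude: small natural
# variability plus instrument noise, with a few outliers mixed in.
reported_random_error = 0.05   # assumed RRE (arbitrary units)
values = 1.0 + rng.normal(0.0, reported_random_error, size=500)
values[:5] += 1.0              # outliers

# Robust scatter: scaled median absolute deviation (MAD), consistent with
# the standard deviation for Gaussian data.
mad = np.median(np.abs(values - np.median(values)))
robust_scatter = 1.4826 * mad

print(f"std (outlier-sensitive): {values.std(ddof=1):.3f}")
print(f"robust scatter (MAD):    {robust_scatter:.3f}")
print(f"reported random error:   {reported_random_error:.3f}")
```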
Abstract:
In this study a gridded hourly 1-km precipitation dataset for a meso-scale catchment (4,062 km²) of the Upper Severn River, UK, was constructed by using rainfall radar data to disaggregate a daily precipitation (rain gauge) dataset. The dataset was compared to an hourly precipitation dataset created entirely from rainfall radar data. When assessed against gauge readings and as input to the Lisflood-RR hydrological model, the gauge/radar disaggregated dataset performed best, suggesting that this simple method of combining rainfall radar data with rain gauge readings can provide temporally detailed precipitation datasets for calibrating hydrological models.
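A minimal sketch of the disaggregation idea, assuming the method amounts to distributing each daily gauge total across the 24 hours in proportion to the hourly radar accumulations; the function name and example values are hypothetical:

```python
import numpy as np

def disaggregate_daily(daily_gauge_mm, hourly_radar_mm):
    """Split a daily gauge total into 24 hourly values using the
    hourly radar accumulations as temporal weights."""
    hourly_radar_mm = np.asarray(hourly_radar_mm, dtype=float)
    radar_total = hourly_radar_mm.sum()
    if radar_total <= 0.0:
        # No radar signal: fall back to a uniform split of the gauge total.
        return np.full(24, daily_gauge_mm / 24.0)
    weights = hourly_radar_mm / radar_total
    return daily_gauge_mm * weights

# Example: radar sees a short afternoon event; the gauge total is preserved.
radar = np.zeros(24)
radar[14:17] = [2.0, 5.0, 1.0]
hourly = disaggregate_daily(12.0, radar)
assert abs(hourly.sum() - 12.0) < 1e-9
```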
Abstract:
In this paper we report on a study conducted using the Middle Atmospheric Nitrogen TRend Assessment (MANTRA) balloon measurements of stratospheric constituents and temperature and the Canadian Middle Atmosphere Model (CMAM). Three different kinds of data are used to assess the inter-consistency of the combined dataset: single profiles of long-lived species from MANTRA 1998, sparse climatologies from the ozonesonde measurements during the four MANTRA campaigns and from HALOE satellite measurements, and the CMAM climatology. In doing so, we evaluate the ability of the model to reproduce the measured fields and thereby test our ability to describe mid-latitude summertime stratospheric processes. The MANTRA campaigns were conducted at Vanscoy, Saskatchewan, Canada (52° N, 107° W) in late August and early September of 1998, 2000, 2002 and 2004. During late summer at mid-latitudes, the stratosphere is close to photochemical control, providing an ideal scenario for the study reported here. From this analysis we find that: (1) reducing the vertical diffusion coefficient in CMAM to a more physically reasonable value results in the model better reproducing the measured profiles of long-lived species; (2) the existence of compact correlations among the constituents, as expected from independent measurements in the literature and from models, confirms the self-consistency of the MANTRA measurements; and (3) the 1998 measurements show structures in the chemical species profiles that can be associated with transport, adding to the growing evidence that the summertime stratosphere can be much more disturbed than anticipated. The mechanisms responsible for such disturbances need to be understood in order to assess the representativeness of the measurements and to isolate long-term trends.
Abstract:
Remote sensing observations often have correlated errors, but the correlations are typically ignored in data assimilation for numerical weather prediction. The assumption of zero correlations is often used with data thinning methods, resulting in a loss of information. As operational centres move towards higher-resolution forecasting, there is a requirement to retain data providing detail on appropriate scales. Thus an alternative approach to dealing with observation error correlations is needed. In this article, we consider several approaches to approximating observation error correlation matrices: diagonal approximations, eigendecomposition approximations and Markov matrices. These approximations are applied in incremental variational assimilation experiments with a 1-D shallow water model using synthetic observations. Our experiments quantify analysis accuracy in comparison with a reference or ‘truth’ trajectory, as well as with analyses using the ‘true’ observation error covariance matrix. We show that it is often better to include an approximate correlation structure in the observation error covariance matrix than to incorrectly assume error independence. Furthermore, by choosing a suitable matrix approximation, it is feasible and computationally cheap to include error correlation structure in a variational data assimilation algorithm.
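A minimal sketch of two of the approximations named above: a Markov (first-order autoregressive) correlation matrix, and an eigendecomposition-based approximation in which trailing eigenvalues are raised to a floor (one simple variant, not necessarily the formulation used in the article); dimensions, error variance and decay length are arbitrary:

```python
import numpy as np

def markov_correlation(n, lengthscale):
    """Markov (first-order autoregressive) correlation: rho_ij = exp(-|i-j|/L)."""
    idx = np.arange(n)
    return np.exp(-np.abs(idx[:, None] - idx[None, :]) / lengthscale)

# "True" observation error covariance: correlated errors with std 0.5.
n, sigma = 40, 0.5
R_true = sigma**2 * markov_correlation(n, lengthscale=3.0)

# Diagonal approximation: discard all correlations.
R_diag = np.diag(np.diag(R_true))

# Eigendecomposition approximation (one simple variant): keep the leading
# eigenpairs and raise the remaining eigenvalues to a floor so the matrix
# stays well conditioned and invertible.
vals, vecs = np.linalg.eigh(R_true)     # ascending eigenvalues
k = 10
floor = vals[-k]                        # smallest retained eigenvalue
R_eig = (vecs * np.maximum(vals, floor)) @ vecs.T

print("condition numbers (true, diag, eig):",
      np.linalg.cond(R_true), np.linalg.cond(R_diag), np.linalg.cond(R_eig))
```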
Abstract:
We report numerical results from a study of balance dynamics using a simple model of atmospheric motion that is designed to help address the question of why balance dynamics is so stable. The non-autonomous Hamiltonian model has a chaotic slow degree of freedom (representing vortical modes) coupled to one or two linear fast oscillators (representing inertia-gravity waves). The system is said to be balanced when the fast and slow degrees of freedom are separated. We find adiabatic invariants that drift slowly in time. This drift is consistent with a random-walk behaviour at a speed which qualitatively scales, even for modest time scale separations, as the upper bound given by Neishtadt’s and Nekhoroshev’s theorems. Moreover, a similar type of scaling is observed for solutions obtained using a singular perturbation (‘slaving’) technique in resonant cases where Nekhoroshev’s theorem does not apply. We present evidence that the smaller Lyapunov exponents of the system scale exponentially as well. The results suggest that the observed stability of nearly-slow motion is a consequence of the approximate adiabatic invariance of the fast motion.
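An illustrative slow–fast system in the same spirit as the model described above (not the paper's model): a forced pendulum weakly coupled to a fast linear oscillator, with the fast-oscillator action monitored as an approximate adiabatic invariant; the frequency, forcing and coupling strength are arbitrary choices:

```python
import numpy as np
from scipy.integrate import solve_ivp

OMEGA = 20.0   # fast-oscillator frequency (sets the time-scale separation)
EPS = 0.05     # weak coupling between the slow and fast degrees of freedom

def rhs(t, y):
    # Slow degree of freedom (q, p): forced pendulum; fast (Q, P): linear oscillator.
    q, p, Q, P = y
    dq = p
    dp = -np.sin(q) + 0.3 * np.cos(t) - EPS * Q
    dQ = P
    dP = -OMEGA**2 * Q - EPS * q
    return [dq, dp, dQ, dP]

sol = solve_ivp(rhs, (0.0, 200.0), [1.0, 0.0, 0.1, 0.0], rtol=1e-9, atol=1e-12)

# Action of the fast oscillator: an approximate adiabatic invariant that
# should drift only slowly relative to the fast and slow time scales.
Q, P = sol.y[2], sol.y[3]
action = (P**2 + OMEGA**2 * Q**2) / (2.0 * OMEGA)
print(f"relative drift of the fast action: {np.ptp(action) / action[0]:.2e}")
```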
Abstract:
Peatland habitats are important carbon stocks that also have the potential to be significant sources of greenhouse gases, particularly when subject to changes such as artificial drainage and application of fertilizer. Models aiming to estimate greenhouse gas release from peatlands require an accurate estimate of the diffusion coefficient of gas transport through soil (Ds). The availability of such measurements for peatland soils is currently limited. This study measured Ds for a peat soil with an overlying clay horizon and compared the values with those from widely available models. The Ds value of a sandy loam reference soil was measured for comparison. Using the Currie (1960) method, Ds was measured over an air-filled porosity (ε) range of 0 to 0.5 cm³ cm⁻³. Values of Ds for the peat cores ranged between 3.2 × 10⁻⁴ and 4.4 × 10⁻³ m² hour⁻¹, for the loamy clay cores between 0 and 4.7 × 10⁻³ m² hour⁻¹, and for the sandy reference soil between 5.4 × 10⁻⁴ and 3.4 × 10⁻³ m² hour⁻¹. The agreement of measured and modelled values of relative diffusivity (Ds/D0, with D0 the diffusion coefficient through free air) varied with soil type; however, the Campbell (1985) model provided the best replication of measured values for all soils. This research therefore suggests that use of the Campbell model would be appropriate in the absence of accurately measured Ds and porosity values for a study soil. Future research into methods to reduce shrinkage of peat during measurement, and therefore allow measurement of Ds over a greater range of ε, would be beneficial.
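A minimal sketch of the kind of model comparison described above; the Campbell-type power law is quoted in its commonly cited form Ds/D0 ≈ 0.9 ε^2.3 (treated here as an assumption, not a value from the study), and the Buckingham relation ε² is included for contrast:

```python
import numpy as np

def campbell_1985(eps):
    """Campbell-type power law, commonly quoted as Ds/D0 = 0.9 * eps**2.3
    (an assumed form, not a value taken from the study)."""
    return 0.9 * eps**2.3

def buckingham_1904(eps):
    """Buckingham relation, Ds/D0 = eps**2, shown for contrast."""
    return eps**2

# Air-filled porosity range used in the study (0 to 0.5 cm3 cm-3).
eps = np.linspace(0.0, 0.5, 6)

for model in (campbell_1985, buckingham_1904):
    print(model.__name__, np.round(model(eps), 4))
```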
The Joint UK Land Environment Simulator (JULES), model description – Part 1: Energy and water fluxes
Abstract:
This manuscript describes the energy and water components of a new community land surface model called the Joint UK Land Environment Simulator (JULES), developed from the Met Office Surface Exchange Scheme (MOSES). It can be used as a stand-alone land surface model driven by observed forcing data, or coupled to an atmospheric general circulation model. JULES has been coupled to the Met Office Unified Model (UM) and as such provides a unique opportunity for the research community to contribute research that improves both world-leading operational weather forecasting and climate change prediction systems. In addition, JULES and its forerunner MOSES have been the basis for a number of high-profile papers concerning the land surface and climate over the last decade. JULES has a modular structure aligned to physical processes, providing the basis for a flexible modelling platform.
Abstract:
The global cycle of multicomponent aerosols including sulfate, black carbon (BC), organic matter (OM), mineral dust, and sea salt is simulated in the Laboratoire de Météorologie Dynamique general circulation model (LMDZT GCM). The seasonal open biomass burning emissions for simulation years 2000–2001 are scaled from climatological emissions in proportion to satellite-detected fire counts. The emissions of dust and sea salt are parameterized online in the model. Comparison of model-predicted monthly mean aerosol optical depth (AOD) at 500 nm with Aerosol Robotic Network (AERONET) data shows good agreement, with a correlation coefficient of 0.57 (N = 1324) and 76% of data points falling within a factor of 2. The correlation coefficient for daily mean values drops to 0.49 (N = 23,680). The absorption AOD (τa at 670 nm) estimated in the model is poorly correlated with measurements (r = 0.27, N = 349) and is biased low by 24% compared to AERONET. The model reproduces the prominent features in the monthly mean AOD retrievals from the Moderate Resolution Imaging Spectroradiometer (MODIS). The agreement between the model and MODIS is better over source and outflow regions (i.e., within a factor of 2). There is an underestimation by the model of up to a factor of 3 to 5 over some remote oceans. The largest contribution to the global annual average AOD (0.12 at 550 nm) is from sulfate (0.043 or 35%), followed by sea salt (0.027 or 23%), dust (0.026 or 22%), OM (0.021 or 17%), and BC (0.004 or 3%). Atmospheric aerosol absorption is contributed predominantly by BC and is about 3% of the total AOD. The globally and annually averaged shortwave (SW) direct aerosol radiative perturbation (DARP) in clear-sky conditions is −2.17 W m⁻², about a factor of 2 larger in magnitude than in all-sky conditions (−1.04 W m⁻²). The net DARP (SW + LW) by all aerosols is −1.46 and −0.59 W m⁻² in clear- and all-sky conditions, respectively. Use of realistic, less SW-absorbing optical properties for dust results in negative forcing over the dust-dominated regions.
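A minimal sketch of the two skill metrics used above, the correlation coefficient and the fraction of model–observation pairs agreeing within a factor of 2; the AOD values below are synthetic:

```python
import numpy as np

def aod_skill(model_aod, obs_aod):
    """Correlation and fraction of pairs agreeing within a factor of 2."""
    model_aod, obs_aod = np.asarray(model_aod), np.asarray(obs_aod)
    r = np.corrcoef(model_aod, obs_aod)[0, 1]
    ratio = model_aod / obs_aod
    within_2x = np.mean((ratio >= 0.5) & (ratio <= 2.0))
    return r, within_2x

rng = np.random.default_rng(2)
obs = rng.lognormal(mean=np.log(0.12), sigma=0.6, size=1324)      # synthetic AERONET AOD
model = obs * rng.lognormal(mean=0.0, sigma=0.5, size=obs.size)   # synthetic model AOD

r, frac = aod_skill(model, obs)
print(f"r = {r:.2f}, within factor of 2: {100 * frac:.0f}%")
```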
Abstract:
A stand-alone sea ice model is tuned and validated using satellite-derived, basinwide observations of sea ice thickness, extent, and velocity from the years 1993 to 2001. This is the first time that basin-scale measurements of sea ice thickness have been used for this purpose. The model is based on the CICE sea ice model code developed at the Los Alamos National Laboratory, with some minor modifications, and forcing consists of 40-yr ECMWF Re-Analysis (ERA-40) and Polar Exchange at the Sea Surface (POLES) data. Three parameters are varied in the tuning process: Ca, the air–ice drag coefficient; P*, the ice strength parameter; and α, the broadband albedo of cold bare ice, with the aim being to determine the subset of this three-dimensional parameter space that gives the best simultaneous agreement with observations with this forcing set. It is found that observations of sea ice extent and velocity alone are not sufficient to unambiguously tune the model, and that sea ice thickness measurements are necessary to locate a unique subset of parameter space in which simultaneous agreement is achieved with all three observational datasets.
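A minimal sketch of the tuning strategy described above: scan a grid over (Ca, P*, α) and retain the subset of parameter space whose combined misfit to the thickness, extent and velocity observations falls below a threshold. The parameter ranges and cost function are placeholders, not those of the CICE experiments:

```python
import itertools
import numpy as np

# Placeholder parameter ranges (illustrative, not from the study).
drag_coeffs = np.linspace(1.0e-3, 2.0e-3, 5)    # Ca, air-ice drag coefficient
ice_strengths = np.linspace(1.5e4, 4.0e4, 5)    # P*, ice strength parameter
albedos = np.linspace(0.55, 0.75, 5)            # alpha, cold bare-ice albedo

def misfit(ca, pstar, alpha):
    """Stand-in for the combined thickness/extent/velocity misfit that a real
    tuning experiment would compute from model output against observations."""
    return (((ca - 1.4e-3) / 1.0e-3) ** 2
            + ((pstar - 2.75e4) / 1.0e4) ** 2
            + ((alpha - 0.65) / 0.1) ** 2)

results = [
    (misfit(ca, p, a), ca, p, a)
    for ca, p, a in itertools.product(drag_coeffs, ice_strengths, albedos)
]
threshold = 0.2
acceptable = [r for r in results if r[0] < threshold]
print(f"{len(acceptable)} of {len(results)} parameter triples are acceptable")
```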
Abstract:
In this paper we develop and apply methods for the spectral analysis of non-selfadjoint tridiagonal infinite and finite random matrices, and for the spectral analysis of analogous deterministic matrices which are pseudo-ergodic in the sense of E. B. Davies (Commun. Math. Phys. 216 (2001), 687–704). As a major application to illustrate our methods we focus on the “hopping sign model” introduced by J. Feinberg and A. Zee (Phys. Rev. E 59 (1999), 6433–6443), in which the main objects of study are random tridiagonal matrices which have zeros on the main diagonal and random ±1s as the other entries. We explore the relationship between spectral sets in the finite and infinite matrix cases, and between the semi-infinite and bi-infinite matrix cases, for example showing that the numerical range and p-norm ε-pseudospectra (ε > 0, p ∈ [1, ∞]) of the random finite matrices converge almost surely to their infinite matrix counterparts, and that the finite matrix spectra are contained in the infinite matrix spectrum Σ. We also propose a sequence of inclusion sets for Σ which we show is convergent to Σ, with the nth element of the sequence computable by calculating smallest singular values of (large numbers of) n×n matrices. We propose similar convergent approximations for the 2-norm ε-pseudospectra of the infinite random matrices, these approximations sandwiching the infinite matrix pseudospectra from above and below.
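A minimal sketch of the finite-matrix computation underpinning the inclusion sets described above: build an n × n random tridiagonal matrix with zero diagonal and random ±1 off-diagonal entries (following the description in the abstract) and evaluate the smallest singular value of A − zI at sample points z; the matrix size and sample points are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)

def hopping_sign_matrix(n, rng):
    """Random tridiagonal matrix with zero diagonal and random ±1
    off-diagonal entries, as described in the abstract."""
    lower = rng.choice([-1.0, 1.0], size=n - 1)
    upper = rng.choice([-1.0, 1.0], size=n - 1)
    return np.diag(lower, k=-1) + np.diag(upper, k=1)

def smallest_singular_value(A, z):
    """sigma_min(A - zI); small values indicate z is close to the spectrum
    (equivalently, z lies in an epsilon-pseudospectrum for small epsilon)."""
    n = A.shape[0]
    return np.linalg.svd(A - z * np.eye(n), compute_uv=False)[-1]

n = 200
A = hopping_sign_matrix(n, rng)
for z in (0.0, 1.0 + 1.0j, 3.0):
    print(z, smallest_singular_value(A, z))
```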
Abstract:
A parameterization of mesoscale eddies in coarse-resolution ocean general circulation models (GCMs) is formulated and implemented using a residual-mean formalism. In that framework, mean buoyancy is advected by the residual velocity (the sum of the Eulerian and eddy-induced velocities) and modified by a residual flux which accounts for the diabatic effects of mesoscale eddies. The residual velocity is obtained by stepping forward a residual-mean momentum equation in which eddy stresses appear as forcing terms. Study of the spatial distribution of eddy stresses, derived by using them as control parameters to “fit” the residual-mean model to observations, supports the idea that eddy stresses can be likened to a vertical down-gradient flux of momentum with a coefficient which is constant in the vertical. The residual eddy flux is set to zero in the ocean interior, where mesoscale eddies are assumed to be quasi-adiabatic, but is parameterized by a horizontal down-gradient diffusivity near the surface, where eddies develop a diabatic component as they stir properties horizontally across steep isopycnals. The residual-mean model is implemented and tested in the MIT general circulation model. It is shown that the resulting model (1) has a climatology that is superior to that obtained using the Gent and McWilliams parameterization scheme with a spatially uniform diffusivity and (2) allows one to significantly reduce the (spurious) horizontal viscosity used in coarse-resolution GCMs.
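A minimal sketch of the vertical down-gradient closure mentioned above, with eddy stress proportional to the vertical shear of the mean flow and a depth-constant coefficient; the velocity profile and coefficient are illustrative:

```python
import numpy as np

# Illustrative mean zonal flow decaying with depth (z positive upward, in m).
z = np.linspace(-2000.0, 0.0, 101)
u = 0.1 * np.exp(z / 500.0)      # m s-1, surface-intensified

# Down-gradient closure: kinematic eddy stress = -kappa * du/dz,
# with a coefficient kappa that is constant in the vertical.
kappa = 1.0e-3                   # illustrative coefficient, m2 s-1
dudz = np.gradient(u, z)
eddy_stress = -kappa * dudz      # m2 s-2

print(f"surface stress: {eddy_stress[-1]:.2e}   deep stress: {eddy_stress[0]:.2e}")
```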
Abstract:
Flash floods pose a significant danger to life and property. Unfortunately, in arid and semiarid environments runoff generation shows complex non-linear behavior with strong spatial and temporal non-uniformity. As a result, the predictions made by physically based simulations in semiarid areas are subject to great uncertainty, and a failure in the predictive behavior of existing models is common. Better descriptions of physical processes at the watershed scale therefore need to be incorporated into hydrological model structures. For example, terrain relief has been systematically treated as static in flood modelling at the watershed scale. Here, we show that the integrated effect of small, distributed relief variations generated by concurrent hydrological processes within a storm event was significant for the watershed-scale hydrograph. We model these observations by introducing dynamic formulations of two relief-related parameters at diverse scales: maximum depression storage, and the roughness coefficient in channels. In the final (a posteriori) model structure these parameters are allowed to be either time-constant or time-varying. The case under study is a convective storm in a semiarid Mediterranean watershed with ephemeral channels and high agricultural pressure (the Rambla del Albujón watershed; 556 km²), which showed a complex multi-peak response. First, to obtain reasonably sensible simulations with the (a priori) model with time-constant relief-related parameters, a spatially distributed parameterization was strictly required. Second, a generalized likelihood uncertainty estimation (GLUE) inference applied to the improved model structure, and conditioned on observed nested hydrographs, showed that accounting for dynamic relief-related parameters led to improved simulations. The discussion is finally broadened by considering the use of the calibrated model both to analyze the sensitivity of the watershed to storm motion and to attempt flood forecasting of a stratiform event with markedly different behavior.
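A minimal sketch of the GLUE procedure referred to above: sample parameter sets, run a model, score each set with an informal likelihood and retain the "behavioural" sets above a threshold. The rainfall-runoff model, likelihood measure and thresholds below are placeholders, not those of the study:

```python
import numpy as np

rng = np.random.default_rng(4)

def toy_model(params, rainfall):
    """Placeholder rainfall-runoff model: a linear store with two parameters."""
    storage_coeff, loss = params
    return np.maximum(rainfall - loss, 0.0) * storage_coeff

def nse(sim, obs):
    """Nash-Sutcliffe efficiency, used here as the informal likelihood."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Synthetic storm and "observed" hydrograph (placeholder data).
rain = rng.gamma(2.0, 2.0, size=48)
observed = toy_model((0.6, 1.0), rain) + rng.normal(0.0, 0.2, size=48)

# GLUE: Monte Carlo sampling of parameter sets, likelihood scoring, and
# retention of the behavioural sets above a threshold.
n_samples, threshold = 5_000, 0.7
samples = np.column_stack([rng.uniform(0.1, 1.0, n_samples),
                           rng.uniform(0.0, 3.0, n_samples)])
likelihoods = np.array([nse(toy_model(p, rain), observed) for p in samples])
behavioural = samples[likelihoods > threshold]
print(f"{len(behavioural)} behavioural parameter sets out of {n_samples}")
```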