157 results for Simple overlap model
Abstract:
A Lagrangian model of photochemistry and mixing is described (CiTTyCAT, stemming from the Cambridge Tropospheric Trajectory model of Chemistry And Transport), which is suitable for transport and chemistry studies throughout the troposphere. Over the last five years, the model has been developed in parallel at several different institutions and here those developments have been incorporated into one "community" model and documented for the first time. The key photochemical developments include a new scheme for biogenic volatile organic compounds and updated emissions schemes. The key physical development is to evolve composition following an ensemble of trajectories within neighbouring air-masses, including a simple scheme for mixing between them via an evolving "background profile", both within the boundary layer and free troposphere. The model runs along trajectories pre-calculated using winds and temperature from meteorological analyses. In addition, boundary layer height and precipitation rates, output from the analysis model, are interpolated to trajectory points and used as inputs to the mixing and wet deposition schemes. The model is most suitable in regimes when the effects of small-scale turbulent mixing are slow relative to advection by the resolved winds so that coherent air-masses form with distinct composition and strong gradients between them. Such air-masses can persist for many days while stretching, folding and thinning. Lagrangian models offer a useful framework for picking apart the processes of air-mass evolution over inter-continental distances, without being hindered by the numerical diffusion inherent to global Eulerian models. The model, including different box and trajectory modes, is described and some output for each of the modes is presented for evaluation. The model is available for download from a Subversion-controlled repository by contacting the corresponding authors.
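In its simplest form, the "evolving background profile" mixing described above amounts to relaxing each air parcel's composition toward a background value along the trajectory. A minimal sketch of that idea, assuming forward-Euler stepping and a single mixing timescale `tau` (the names and numerics are illustrative, not the CiTTyCAT implementation):

```python
import numpy as np

def mix_toward_background(c0, c_bg, tau, dt, nsteps):
    """Relax a tracer concentration toward an evolving background profile.

    A minimal sketch of air-mass mixing via relaxation: dc/dt = -(c - c_bg)/tau.
    The single timescale `tau` and forward-Euler stepping are illustrative
    assumptions, not the scheme used in the model described above.
    """
    c = c0
    out = [c]
    for k in range(nsteps):
        # forward-Euler step of dc/dt = -(c - c_bg[k]) / tau
        c = c + dt * (c_bg[k] - c) / tau
        out.append(c)
    return np.array(out)
```

With a constant background, the parcel concentration decays exponentially toward the background value, which is the qualitative behaviour any such relaxation scheme should reproduce.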
Abstract:
A reduced dynamical model is derived which describes the interaction of weak inertia–gravity waves with nonlinear vortical motion in the context of rotating shallow-water flow. The formal scaling assumptions are (i) that there is a separation in timescales between the vortical motion and the inertia–gravity waves, and (ii) that the divergence is weak compared to the vorticity. The model is Hamiltonian, and possesses conservation laws analogous to those in the shallow-water equations. Unlike the shallow-water equations, the energy invariant is quadratic. Nonlinear stability theorems are derived for this system, and its linear eigenvalue properties are investigated in the context of some simple basic flows.
Abstract:
Neurovascular coupling in response to stimulation of the rat barrel cortex was investigated using concurrent multichannel electrophysiology and laser Doppler flowmetry. The data were used to build a linear dynamic model relating neural activity to blood flow. Local field potential time series were subjected to current source density analysis, and the time series of a layer IV sink of the barrel cortex was used as the input to the model. The model output was the time series of the changes in regional cerebral blood flow (CBF). We show that this model can provide an excellent fit to the CBF responses for stimulus durations of up to 16 s. The structure of the model consisted of two coupled components representing vascular dilation and constriction. The complex temporal characteristics of the CBF time series were reproduced by the relatively simple balance of these two components. We show that the impulse response obtained under the 16-s duration stimulation condition generalised to provide a good prediction of the data from the shorter duration stimulation conditions. Furthermore, by optimising three out of the total of nine model parameters, the variability in the data can be well accounted for over a wide range of stimulus conditions. By establishing linearity, classic system analysis methods can be used to generate and explore a range of equivalent model structures (e.g., feed-forward or feedback) to guide the experimental investigation of the control of vascular dilation and constriction following stimulation.
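Because the model is linear and time-invariant, the CBF response to any stimulus can be predicted by convolving the input with an impulse response, which is why an impulse response estimated from the 16-s condition generalises to shorter stimuli. A sketch of that principle, using an invented dilation-minus-constriction kernel purely for illustration (not the paper's fitted nine-parameter model):

```python
import numpy as np

def predict_response(stimulus, impulse_response, dt):
    """Predict the output of a linear time-invariant system by discrete
    convolution of an input time series with an impulse response."""
    return np.convolve(stimulus, impulse_response)[: len(stimulus)] * dt

dt = 0.1
t = np.arange(0, 25, dt)
# illustrative dilation-minus-constriction kernel (an assumption, not the
# model fitted in the paper): fast positive lobe minus slower negative lobe
h = t * np.exp(-t / 1.5) - 0.4 * t * np.exp(-t / 3.0)
long_stim = (t < 16.0).astype(float)   # 16 s stimulus used to estimate h
short_stim = (t < 2.0).astype(float)   # shorter stimulus: same h reused
cbf_long = predict_response(long_stim, h, dt)
cbf_short = predict_response(short_stim, h, dt)
```

The defining property exploited here is superposition: the response to a sum of inputs equals the sum of the individual responses, so one kernel serves all stimulus durations.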
Abstract:
Agro-hydrological models have been widely used for optimizing resource use and minimizing environmental consequences in agriculture. SMCRN is a recently developed sophisticated model which simulates crop response to nitrogen fertilizer for a wide range of crops, and the associated leaching of nitrate from arable soils. In this paper, we describe improvements to this model made by replacing the existing approximate hydrological cascade algorithm with a new simple and explicit algorithm for the basic soil water flow equation, which not only enhanced the model's hydrological simulation but was also essential for extending its application to situations where capillary flow is important. As a result, the updated SMCRN model can be used for more accurate study of water dynamics in the soil-crop system. The success of the model update was demonstrated by simulation results: the updated model consistently out-performed the original model in drainage simulations and in predicting the time course of soil water content in different layers of the soil-wheat system. Tests of the updated SMCRN model against data from 4 field crop experiments showed that crop nitrogen offtakes and soil mineral nitrogen in the top 90 cm were in good agreement with the measured values, indicating that the model can make more reliable predictions of nitrogen fate in the crop-soil system, and thus provides a useful platform to assess the impacts of nitrogen fertilizer on crop yield and nitrogen leaching from different production systems.
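A simple explicit scheme for a basic soil water flow (diffusion) equation, of the general kind referred to above, can be sketched as follows. The uniform grid, constant diffusivity, and zero-gradient boundaries are assumptions for illustration, not the SMCRN algorithm:

```python
import numpy as np

def explicit_diffusion_step(theta, D, dz, dt):
    """One explicit finite-difference step of the 1-D soil water diffusion
    equation d(theta)/dt = D * d2(theta)/dz2 with no-flux boundaries.

    Generic sketch only: the actual SMCRN scheme (and any capillary-flow
    terms) will differ.
    """
    r = D * dt / dz ** 2
    assert r <= 0.5, "explicit scheme stability requires D*dt/dz^2 <= 0.5"
    # mirror the end values to impose zero-gradient (no-flux) boundaries
    padded = np.concatenate(([theta[0]], theta, [theta[-1]]))
    return theta + r * (padded[2:] - 2.0 * theta + padded[:-2])
```

With no-flux boundaries the scheme conserves total water exactly (the flux divergences telescope), and repeated steps flatten the profile toward its mean, which is a useful sanity check on any such implementation.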
Abstract:
We propose a new sparse model construction method aimed at maximizing a model's generalisation capability for a large class of linear-in-the-parameters models. The coordinate descent optimization algorithm is employed with a modified l1-penalized least squares cost function in order to estimate a single parameter and its regularization parameter simultaneously based on the leave-one-out mean square error (LOOMSE). Our original contribution is to derive a closed form of the optimal LOOMSE regularization parameter for a single-term model, for which we show that the LOOMSE can be computed analytically, without actually splitting the data set, leading to a very simple parameter estimation method. We then integrate the new results within the coordinate descent optimization algorithm to update model parameters one at a time for linear-in-the-parameters models. Consequently a fully automated procedure is achieved without resorting to any other validation data set for iterative model evaluation. Illustrative examples are included to demonstrate the effectiveness of the new approaches.
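The underlying principle, that leave-one-out errors of a least-squares fit have a closed form and need no actual data splitting, can be illustrated with the standard ordinary-least-squares identity e_i / (1 - h_ii), where h_ii are the diagonal entries of the hat matrix. This is a generic textbook illustration; the paper's l1-penalized, single-term version differs in detail:

```python
import numpy as np

def loomse_closed_form(X, y):
    """Leave-one-out mean square error of ordinary least squares, computed
    without refitting, via the identity e_loo_i = e_i / (1 - h_ii)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    H = X @ np.linalg.pinv(X.T @ X) @ X.T   # hat matrix
    h = np.diag(H)
    return np.mean((resid / (1.0 - h)) ** 2)

def loomse_brute_force(X, y):
    """Reference implementation: actually refit with each point held out."""
    n = len(y)
    errs = []
    for i in range(n):
        keep = np.arange(n) != i
        b, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        errs.append((y[i] - X[i] @ b) ** 2)
    return np.mean(errs)
```

The two routines agree to machine precision, which is precisely why closed-form LOO criteria make cheap, fully automated model selection possible.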
Abstract:
Geomagnetic activity has long been known to exhibit approximately 27 day periodicity, resulting from solar wind structures repeating each solar rotation. Thus a very simple near-Earth solar wind forecast is 27 day persistence, wherein the near-Earth solar wind conditions today are assumed to be identical to those 27 days previously. Effective use of such a persistence model as a forecast tool, however, requires the performance and uncertainty to be fully characterized. The first half of this study determines which solar wind parameters can be reliably forecast by persistence and how the forecast skill varies with the solar cycle. The second half of the study shows how persistence can provide a useful benchmark for more sophisticated forecast schemes, namely physics-based numerical models. Point-by-point assessment methods, such as correlation and mean-square error, find persistence skill comparable to numerical models during solar minimum, despite the 27 day lead time of persistence forecasts, versus 2–5 days for numerical schemes. At solar maximum, however, the dynamic nature of the corona means 27 day persistence is no longer a good approximation and skill scores suggest persistence is out-performed by numerical models for almost all solar wind parameters. But point-by-point assessment techniques are not always a reliable indicator of usefulness as a forecast tool. An event-based assessment method, which focusses on key solar wind structures, finds persistence to be the most valuable forecast throughout the solar cycle. This reiterates the fact that the means of assessing the "best" forecast model must be specifically tailored to its intended use.
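A 27 day persistence forecast and a mean-square-error skill score against a climatological reference are straightforward to sketch. The synthetic solar wind speed series below, a 27 day periodic component plus noise, is purely illustrative:

```python
import numpy as np

def persistence_forecast(series, lag):
    """Forecast a time series as its own value `lag` steps earlier."""
    return series[:-lag]

def skill_vs_reference(obs, fcst, ref):
    """MSE skill score: 1 is a perfect forecast, 0 matches the reference,
    negative values are worse than the reference."""
    mse_f = np.mean((obs - fcst) ** 2)
    mse_r = np.mean((obs - ref) ** 2)
    return 1.0 - mse_f / mse_r

rng = np.random.default_rng(1)
n = 27 * 40                      # ~40 solar rotations of daily values
t = np.arange(n)
# toy solar wind speed (km/s): recurrent 27 day structure plus noise
v = 450 + 80 * np.sin(2 * np.pi * t / 27) + 20 * rng.normal(size=n)
fcst = persistence_forecast(v, 27)
obs = v[27:]
s = skill_vs_reference(obs, fcst, np.full_like(obs, obs.mean()))
```

When the series really does recur every 27 days, persistence scores well above climatology despite its long lead time; the recurrent component cancels exactly and only the noise contributes to the forecast error.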
Abstract:
An underestimate of atmospheric blocking occurrence is a well-known limitation of many climate models. This article presents an analysis of Northern Hemisphere winter blocking in an atmospheric model with increased horizontal resolution. European blocking frequency increases with model resolution, and this results from an improvement in the atmospheric patterns of variability as well as a simple improvement in the mean state. There is some evidence that the transient eddy momentum forcing of European blocks is increased at high resolution, which could account for this. However, it is also shown that the increase in resolution of the orography is needed to realise the improvement in blocking, consistent with the increase in height of the Rocky Mountains acting to increase the tilt of the Atlantic jet stream and giving higher mean geopotential heights over northern Europe. Blocking frequencies in the Pacific sector are also increased with atmospheric resolution, but in this case the improvement in orography actually leads to a decrease in blocking.
Abstract:
Since the advent of wide-angle imaging of the inner heliosphere, a plethora of techniques have been developed to investigate the three-dimensional structure and kinematics of solar wind transients, such as coronal mass ejections, from their signatures in single- and multi-spacecraft imaging observations. These techniques, which range from the highly complex and computationally intensive to methods based on simple curve fitting, all have their inherent advantages and limitations. In the analysis of single-spacecraft imaging observations, much use has been made of the fixed φ fitting (FPF) and harmonic mean fitting (HMF) techniques, in which the solar wind transient is considered to be a radially propagating point source (fixed φ, FP, model) and a radially expanding circle anchored at Sun centre (harmonic mean, HM, model), respectively. Initially, we compare the radial speeds and propagation directions derived from application of the FPF and HMF techniques to a large set of STEREO/Heliospheric Imager (HI) observations. As the geometries on which these two techniques are founded constitute extreme descriptions of solar wind transients in terms of their extent along the line of sight, we describe a single-spacecraft fitting technique based on a more generalized model for which the FP and HM geometries form the limiting cases. In addition to providing estimates of a transient’s speed and propagation direction, the self-similar expansion fitting (SSEF) technique provides, in theory, the capability to estimate the transient’s angular extent in the plane orthogonal to the field of view. Using the HI observations, and also by performing a Monte Carlo simulation, we assess the potential of the SSEF technique.
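As an illustration of the fixed-phi (FP) geometry, the sketch below generates an elongation time series for a radially moving point source and recovers its speed and direction by a crude grid search. The elongation formula is the standard FP relation; the brute-force fit is a toy stand-in for the FPF procedures used in practice, and all parameter values are assumptions:

```python
import numpy as np

def fp_elongation(t, v, phi, r_obs=1.496e11):
    """Elongation angle (rad) of a point source moving radially at speed v
    (m/s) at angle phi (rad) from the observer's Sun-spacecraft line, for an
    observer at heliocentric distance r_obs (default: 1 AU in metres):
        tan(eps) = v*t*sin(phi) / (r_obs - v*t*cos(phi))
    """
    return np.arctan2(v * t * np.sin(phi), r_obs - v * t * np.cos(phi))

def fpf_grid_fit(t, eps):
    """Toy FPF: recover (v, phi) by brute-force least squares over a grid."""
    best_v, best_phi, best_cost = None, None, np.inf
    for v in np.linspace(200e3, 800e3, 61):          # 200-800 km/s
        for phi in np.linspace(np.deg2rad(20.0), np.deg2rad(90.0), 71):
            cost = np.sum((fp_elongation(t, v, phi) - eps) ** 2)
            if cost < best_cost:
                best_v, best_phi, best_cost = v, phi, cost
    return best_v, best_phi
```

With noise-free synthetic data the fit recovers the input speed and direction, which is the minimal consistency check before applying any such technique to real Heliospheric Imager time-elongation profiles.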
Abstract:
A number of urban land-surface models have been developed in recent years to satisfy the growing requirements for urban weather and climate interactions and prediction. These models vary considerably in their complexity and the processes that they represent. Although the models have been evaluated, the observational datasets have typically been of short duration and so are not suitable to assess the performance over the seasonal cycle. The First International Urban Land-Surface Model comparison used an observational dataset that spanned a period greater than a year, which enables an analysis over the seasonal cycle, whilst the variety of models that took part in the comparison allows the analysis to include a full range of model complexity. The results show that, in general, urban models do capture the seasonal cycle for each of the surface fluxes, but have larger errors in the summer months than in the winter. The net all-wave radiation has the smallest errors at all times of the year but with a negative bias. The latent heat flux and the net storage heat flux are also underestimated, whereas the sensible heat flux generally has a positive bias throughout the seasonal cycle. A representation of vegetation is a necessary, but not sufficient, condition for modelling the latent heat flux and associated sensible heat flux at all times of the year. Models that include a temporal variation in anthropogenic heat flux show some increased skill in the sensible heat flux at night during the winter, although their daytime values are consistently overestimated at all times of the year. Models that use the net all-wave radiation to determine the net storage heat flux have the best agreement with observed values of this flux during the daytime in summer, but perform worse during the winter months. The latter could result from a bias of summer periods in the observational datasets used to derive the relations with net all-wave radiation. Apart from these models, all of the other model categories considered in the analysis result in a mean net storage heat flux that is close to zero throughout the seasonal cycle, which is not seen in the observations. Models with a simple treatment of the physical processes generally perform at least as well as models with greater complexity.
Abstract:
We derive simple analytic expressions for the continuum light curves and spectra of flaring and flickering events that occur over a wide range of astrophysical systems. We compare these results to data taken from the cataclysmic variable SS Cygni and also from SN 1987A, deriving physical parameters for the material involved. Fits to the data indicate a nearly time-independent photospheric temperature arising from the strong temperature dependence of opacity when hydrogen is partially ionized.
Abstract:
Future climate change projections are often derived from ensembles of simulations from multiple global circulation models using heuristic weighting schemes. This study provides a more rigorous justification for such schemes by introducing a nested family of three simple analysis of variance frameworks. Statistical frameworks are essential in order to quantify the uncertainty associated with the estimate of the mean climate change response. The most general framework yields the "one model, one vote" weighting scheme often used in climate projection. However, a simpler additive framework is found to be preferable when the climate change response is not strongly model dependent. In such situations, the weighted multimodel mean may be interpreted as an estimate of the actual climate response, even in the presence of shared model biases. Statistical significance tests are derived to choose the most appropriate framework for specific multimodel ensemble data. The framework assumptions are explicit and can be checked using simple tests and graphical techniques. The frameworks can be used to test for evidence of nonzero climate response and to construct confidence intervals for the size of the response. The methodology is illustrated by application to North Atlantic storm track data from the Coupled Model Intercomparison Project phase 5 (CMIP5) multimodel ensemble. Despite large variations in the historical storm tracks, the cyclone frequency climate change response is not found to be model dependent over most of the region. This gives high confidence in the response estimates. Statistically significant decreases in cyclone frequency are found on the flanks of the North Atlantic storm track and in the Mediterranean basin.
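Two ingredients of such frameworks, the "one model, one vote" multimodel mean and a one-way analysis-of-variance statistic for testing whether the response is model dependent, can be sketched generically as follows. This is a textbook version, not the paper's nested frameworks:

```python
import numpy as np

def one_way_anova_F(groups):
    """One-way ANOVA F statistic: between-group vs within-group variance.
    Here each 'group' is the set of climate-change responses from one model's
    runs; a large F suggests the response is model dependent."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = np.mean(np.concatenate(groups))
    ss_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(np.sum((np.asarray(g) - np.mean(g)) ** 2) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

def one_model_one_vote(groups):
    """'One model, one vote' multimodel mean: the average of per-model means,
    so that models contributing many runs do not dominate the estimate."""
    return np.mean([np.mean(g) for g in groups])
```

Note how the vote-weighted mean differs from simple pooling when ensemble sizes are unequal: a model with four runs at 1.0 and a model with one run at 3.0 pool to 1.4 but vote to 2.0.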
Abstract:
Global wetlands are believed to be climate sensitive, and are the largest natural emitters of methane (CH4). Increased wetland CH4 emissions could act as a positive feedback to future warming. The Wetland and Wetland CH4 Inter-comparison of Models Project (WETCHIMP) investigated our present ability to simulate large-scale wetland characteristics and corresponding CH4 emissions. To ensure inter-comparability, we used a common experimental protocol driving all models with the same climate and carbon dioxide (CO2) forcing datasets. The WETCHIMP experiments were conducted for model equilibrium states as well as transient simulations covering the last century. Sensitivity experiments investigated model response to changes in selected forcing inputs (precipitation, temperature, and atmospheric CO2 concentration). Ten models participated, covering the spectrum from simple to relatively complex, including models tailored for either regional or global simulations. The models also varied in methods to calculate wetland size and location, with some models simulating wetland area prognostically, while other models relied on remotely sensed inundation datasets, or an approach intermediate between the two. Four major conclusions emerged from the project. First, the suite of models demonstrates extensive disagreement in their simulations of wetland areal extent and CH4 emissions, in both space and time. Simple metrics of wetland area, such as the latitudinal gradient, show large variability, principally between models that use inundation dataset information and those that independently determine wetland area. Agreement between the models improves for zonally summed CH4 emissions, but large variation between the models remains. For annual global CH4 emissions, the models vary by ±40% of the all-model mean (190 Tg CH4 yr−1). Second, all models show a strong positive response to increased atmospheric CO2 concentrations (857 ppm) in both CH4 emissions and wetland area. In response to increasing global temperatures (+3.4 °C globally spatially uniform), on average, the models decreased wetland area and CH4 fluxes, primarily in the tropics, but the magnitude and sign of the response varied greatly. Models were least sensitive to increased global precipitation (+3.9% globally spatially uniform), with a consistent small positive response in CH4 fluxes and wetland area. Results from the 20th century transient simulation show that interactions between climate forcings could have strong non-linear effects. Third, we presently lack wetland methane observation datasets adequate to evaluate model fluxes at a spatial scale comparable to model grid cells (commonly 0.5°). This limitation severely restricts our ability to model global wetland CH4 emissions with confidence. Our simulated wetland extents are also difficult to evaluate due to extensive disagreements between wetland mapping and remotely sensed inundation datasets. Fourth, the large range in predicted CH4 emission rates leads to the conclusion that there is both substantial parameter and structural uncertainty in large-scale CH4 emission models, even after uncertainties in wetland areas are accounted for.
Abstract:
Persistent contrails are believed to currently have a relatively small but significant positive radiative forcing on climate. With air travel predicted to continue its rapid growth over the coming years, the contrail warming effect on climate is expected to increase. Nevertheless, there remains a high level of uncertainty in the current estimates of contrail radiative forcing. Contrail formation depends mostly on aircraft flying in sufficiently cold and moist air masses. Most studies to date have relied on simple parameterizations using averaged meteorological conditions. In this paper we take into account the short-term variability in background cloudiness by developing an on-line contrail parameterization for the UK Met Office climate model. With this parameterization, we estimate that for the air traffic of year 2002 the global mean annual linear contrail coverage was approximately 0.11%. Assuming a global mean contrail optical depth of 0.2 or smaller and assuming hexagonal ice crystals, the corresponding contrail radiative forcing was calculated to be less than 10 mW m−2 in all-sky conditions. We find that the natural cloud masking effect on contrails may be significantly higher than previously believed. This new result is explained by the fact that contrails seem to form preferentially in cloudy conditions, which reduces their overall climate impact by approximately 40%.
Abstract:
Urbanization, the expansion of built-up areas, is an important yet less-studied aspect of land use/land cover change in climate science. To date, most global climate models used to evaluate effects of land use/land cover change on climate do not include an urban parameterization. Here, the authors describe the formulation and evaluation of a parameterization of urban areas that is incorporated into the Community Land Model, the land surface component of the Community Climate System Model. The model is designed to be simple enough to be compatible with the structural and computational constraints of a land surface model coupled to a global climate model, yet complex enough to explore physically based processes known to be important in determining urban climatology. The city representation is based upon the "urban canyon" concept, which consists of roofs, sunlit and shaded walls, and the canyon floor. The canyon floor is divided into pervious (e.g., residential lawns, parks) and impervious (e.g., roads, parking lots, sidewalks) fractions. Trapping of longwave radiation by canyon surfaces, and the absorption and reflection of solar radiation, are determined by accounting for multiple reflections. Separate energy balances and surface temperatures are determined for each canyon facet. A one-dimensional heat conduction equation is solved numerically for a 10-layer column to determine conduction fluxes into and out of canyon surfaces. Model performance is evaluated against measured fluxes and temperatures from two urban sites. Results indicate the model does a reasonable job of simulating the energy balance of cities.
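A 10-layer conduction solve of the general kind described above can be illustrated with a generic explicit finite-difference scheme. The boundary treatment (prescribed temperatures at both ends), stability limit, and parameter values are assumptions for illustration, not the model's numerics:

```python
import numpy as np

def conduction_column(T0, kappa, dz, dt, nsteps, T_surf, T_inner):
    """Explicit finite-difference solution of 1-D heat conduction through a
    layered column (e.g. a roof), with prescribed temperatures at the exposed
    surface and at the innermost boundary. Generic sketch only."""
    T = np.asarray(T0, dtype=float).copy()
    r = kappa * dt / dz ** 2
    assert r <= 0.5, "explicit scheme stability requires kappa*dt/dz^2 <= 0.5"
    for _ in range(nsteps):
        # ghost nodes hold the two boundary temperatures
        padded = np.concatenate(([T_surf], T, [T_inner]))
        T = T + r * (padded[2:] - 2.0 * T + padded[:-2])
    return T
```

Run to steady state, the column relaxes to a linear temperature profile between the two boundary values, the expected behaviour for constant-diffusivity conduction and a simple check on the implementation.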
Abstract:
Nocturnal cooling of air within a forest canopy and the resulting temperature profile may drive local thermally driven motions, such as drainage flows, which are believed to impact measurements of ecosystem–atmosphere exchange. To model such flows, it is necessary to accurately predict the rate of cooling. Cooling occurs primarily due to radiative heat loss. However, much of the radiative loss occurs at the surface of canopy elements (leaves, branches, and boles of trees), while radiative divergence in the canopy air space is small due to high transmissivity of air. Furthermore, sensible heat exchange between the canopy elements and the air space is slow relative to radiative fluxes. Therefore, canopy elements initially cool much more quickly than the canopy air space after the switch from radiative gain during the day to radiative loss during the night. Thus in modeling air cooling within a canopy, it is not appropriate to neglect the storage change of heat in the canopy elements or even to assume equal rates of cooling of the canopy air and canopy elements. Here a simple parameterization of radiatively driven cooling of air within the canopy is presented, which accounts implicitly for radiative cooling of the canopy volume, heat storage in the canopy elements, and heat transfer between the canopy elements and the air. Simulations using this parameterization are compared to temperature data from the Morgan–Monroe State Forest (IN, USA) FLUXNET site. While the model does not perfectly reproduce the measured rates of cooling, particularly near the top of the canopy, the simulated cooling rates are of the correct order of magnitude.
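The storage-and-exchange argument above, that canopy elements cool radiatively and faster than the air, which in turn cools through sensible heat exchange with them, can be sketched as a two-reservoir model. All coefficients below are illustrative assumptions, not the paper's parameterization:

```python
import numpy as np

def canopy_cooling(T_e0, T_a0, q_rad, h, c_e, c_a, dt, nsteps):
    """Two-reservoir sketch of nocturnal canopy cooling: canopy elements
    (heat capacity c_e, J m-2 K-1) lose heat radiatively at rate q_rad
    (W m-2) and exchange sensible heat with the canopy air (capacity c_a)
    at rate h*(T_a - T_e). Forward-Euler integration; returns the
    (T_elements, T_air) history."""
    T_e, T_a = T_e0, T_a0
    hist = [(T_e, T_a)]
    for _ in range(nsteps):
        flux = h * (T_a - T_e)            # sensible heat, air -> elements
        T_e += dt * (-q_rad + flux) / c_e
        T_a += dt * (-flux) / c_a
        hist.append((T_e, T_a))
    return np.array(hist)
```

The qualitative behaviour matches the mechanism described above: the elements drop below the air temperature almost immediately, and the air then cools more slowly by losing sensible heat to the colder elements.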