944 results for Developed model
Abstract:
[1] During the Northern Hemisphere summer, absorbed solar radiation melts snow and the upper surface of Arctic sea ice to generate meltwater that accumulates in ponds. The melt ponds reduce the albedo of the sea ice cover during the melting season, with a significant impact on the heat and mass budget of the sea ice and the upper ocean. We have developed a model, designed to be suitable for inclusion into a global circulation model (GCM), which simulates the formation and evolution of the melt pond cover. In order to be compatible with existing GCM sea ice models, our melt pond model builds upon the existing theory of the evolution of the sea ice thickness distribution. Since this theory does not describe the topography of the ice cover, which is crucial to determining the location, extent, and depth of individual ponds, we have needed to introduce some assumptions. We describe our model, present calculations and a sensitivity analysis, and discuss our results.
Abstract:
A multithickness sea ice model explicitly accounting for the ridging and sliding friction contributions to sea ice stress is developed. Both ridging and sliding contributions depend on the deformation type through functions adopted from the Ukita and Moritz kinematic model of floe interaction. In contrast to most previous work, the ice strength of a uniform ice sheet of constant ice thickness is taken to be proportional to the ice thickness raised to the 3/2 power, as is revealed in discrete element simulations by Hopkins. The new multithickness sea ice model for sea ice stress has been implemented into the Los Alamos “CICE” sea ice model code and is shown to improve agreement between model predictions and observed spatial distribution of sea ice thickness in the Arctic.
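The h^(3/2) strength law mentioned above can be illustrated with a minimal sketch; the scaling constant `p_star` is a hypothetical placeholder, not a value from the paper:

```python
def ice_strength(thickness_m, p_star=27.5e3):
    """Strength of a uniform ice sheet, proportional to thickness^(3/2).

    p_star is an illustrative scaling constant, not a value taken
    from the paper or from the CICE code.
    """
    return p_star * thickness_m ** 1.5

# Doubling the thickness raises the strength by 2^1.5 (about 2.83),
# compared with a factor of 2 under a linear (h^1) strength law.
ratio = ice_strength(2.0) / ice_strength(1.0)
```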
Abstract:
A stand-alone sea ice model is tuned and validated using satellite-derived, basinwide observations of sea ice thickness, extent, and velocity from the years 1993 to 2001. This is the first time that basin-scale measurements of sea ice thickness have been used for this purpose. The model is based on the CICE sea ice model code developed at the Los Alamos National Laboratory, with some minor modifications, and forcing consists of 40-yr ECMWF Re-Analysis (ERA-40) and Polar Exchange at the Sea Surface (POLES) data. Three parameters are varied in the tuning process: Ca, the air–ice drag coefficient; P*, the ice strength parameter; and α, the broadband albedo of cold bare ice, with the aim being to determine the subset of this three-dimensional parameter space that gives the best simultaneous agreement with observations with this forcing set. It is found that observations of sea ice extent and velocity alone are not sufficient to unambiguously tune the model, and that sea ice thickness measurements are necessary to locate a unique subset of parameter space in which simultaneous agreement is achieved with all three observational datasets.
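The tuning procedure described above amounts to searching a three-dimensional parameter space (Ca, P*, α) for the best simultaneous fit. A minimal sketch, with a hypothetical misfit function and illustrative parameter grids standing in for full model runs against the observational datasets:

```python
import itertools

def misfit(ca, p_star, albedo):
    """Hypothetical combined model-observation misfit. A real run would
    compare simulated thickness, extent and velocity fields against the
    satellite datasets; the optimum placed here is purely illustrative."""
    return ((ca - 1.3e-3) ** 2
            + (p_star - 2.75e4) ** 2 / 1e8
            + (albedo - 0.65) ** 2)

# Coarse grid over the three-dimensional parameter space.
ca_vals = [1.0e-3, 1.3e-3, 1.6e-3]      # air-ice drag coefficient
p_vals = [2.0e4, 2.75e4, 3.5e4]          # ice strength parameter
alb_vals = [0.55, 0.65, 0.75]            # cold bare-ice albedo

best = min(itertools.product(ca_vals, p_vals, alb_vals),
           key=lambda params: misfit(*params))
```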
Abstract:
Radiative forcing and climate sensitivity have been widely used as concepts to understand climate change. This work performs climate change experiments with an intermediate general circulation model (IGCM) to examine the robustness of the radiative forcing concept for carbon dioxide and solar constant changes. This IGCM has been specifically developed as a computationally fast model, but one that allows an interaction between physical processes and large-scale dynamics; the model allows many long integrations to be performed relatively quickly. It employs a fast and accurate radiative transfer scheme, as well as simple convection and surface schemes, and a slab ocean, to model the effects of climate change mechanisms on the atmospheric temperatures and dynamics with a reasonable degree of complexity. The climatology of the IGCM run at T-21 resolution with 22 levels is compared to European Centre for Medium Range Weather Forecasting Reanalysis data. The response of the model to changes in carbon dioxide and solar output is examined when these changes are applied globally and when constrained geographically (e.g. over land only). The CO2 experiments have a roughly 17% higher climate sensitivity than the solar experiments. It is also found that a forcing at high latitudes causes a 40% higher climate sensitivity than a forcing only applied at low latitudes. It is found that, despite differences in the model feedbacks, climate sensitivity is roughly constant over a range of distributions of CO2 and solar forcings. Hence, in the IGCM at least, the radiative forcing concept is capable of predicting global surface temperature changes to within 30%, for the perturbations described here. It is concluded that radiative forcing remains a useful tool for assessing the natural and anthropogenic impact of climate change mechanisms on surface temperature.
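The radiative forcing concept tested above can be written as ΔT = λF, where λ is the climate sensitivity parameter and F the forcing. A minimal sketch with illustrative numbers (not values from the paper):

```python
def surface_temp_change(forcing_wm2, sensitivity_k_per_wm2):
    """Radiative forcing concept: the global surface temperature change
    is the climate sensitivity parameter times the radiative forcing."""
    return sensitivity_k_per_wm2 * forcing_wm2

# Illustrative numbers only, not results from the paper: a forcing of
# 3.7 W m^-2 with a sensitivity parameter of 0.8 K (W m^-2)^-1.
dT = surface_temp_change(3.7, 0.8)
```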
Abstract:
Initial results are presented from a middle atmosphere extension to a version of the European Centre For Medium Range Weather Forecasting tropospheric model. The extended version of the model has been developed as part of the UK Universities Global Atmospheric Modelling Project and extends from the ground to approximately 90 km. A comprehensive solar radiation scheme is included which uses monthly averaged climatological ozone values. A linearised infrared cooling scheme is employed. The basic climatology of the model is described; the parametrization of drag due to orographically forced gravity waves is shown to have a dramatic effect on the simulations of the winter hemisphere.
Abstract:
A novel analytical model for mixed-phase, unblocked and unseeded orographic precipitation with embedded convection is developed and evaluated. The model takes an idealised background flow and terrain geometry, and calculates the area-averaged precipitation rate and other microphysical quantities. The results provide insight into key physical processes, including cloud condensation, vapour deposition, evaporation, sublimation, as well as precipitation formation and sedimentation (fallout). To account for embedded convection in nominally stratiform clouds, diagnostics for purely convective and purely stratiform clouds are calculated independently and combined using weighting functions based on relevant dynamical and microphysical time scales. An in-depth description of the model is presented, as well as a quantitative assessment of its performance against idealised, convection-permitting numerical simulations with a sophisticated microphysics parameterisation. The model is found to accurately reproduce the simulation diagnostics over most of the parameter space considered.
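The combination of purely convective and purely stratiform diagnostics via time-scale-based weights can be sketched as follows; the specific weighting form is a hypothetical illustration, not the paper's formulation:

```python
def blended_precip(p_conv, p_strat, tau_dyn, tau_micro):
    """Blend purely convective and purely stratiform precipitation
    diagnostics with a weight built from a ratio of dynamical and
    microphysical time scales. The weighting form is an illustrative
    assumption, not the paper's actual functions."""
    w_conv = tau_micro / (tau_micro + tau_dyn)
    return w_conv * p_conv + (1.0 - w_conv) * p_strat

# Equal time scales give an even blend of the two end-member diagnostics
# (units here are arbitrary, e.g. mm/h for rates and seconds for scales).
rate = blended_precip(p_conv=4.0, p_strat=2.0, tau_dyn=600.0, tau_micro=600.0)
```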
Abstract:
A continuum model describing sea ice as a layer of granulated thick ice, consisting of many rigid, brittle floes, intersected by long and narrow regions of thinner ice, known as leads, is developed. We consider the evolution of mesoscale leads, formed under extension, whose lengths span many floes, so that the surrounding ice is treated as a granular plastic. The leads are sufficiently small with respect to basin scales of sea ice deformation that they may be modelled using a continuum approach. The model includes evolution equations for the orientational distribution of leads, their thickness and width expressed through second-rank tensors and terms requiring closures. The closing assumptions are constructed for the case of negligibly small lead ice thickness and the canonical deformation types of pure and simple shear, pure divergence and pure convergence. We present a new continuum-scale sea ice rheology that depends upon the isotropic, material rheology of sea ice, the orientational distribution of lead properties and the thick ice thickness. A new model of lead and thick ice interaction is presented that successfully describes a number of effects: (i) because of its brittle nature, thick ice does not thin under extension and (ii) the consideration of the thick sea ice as a granular material determines finite lead opening under pure shear, when granular dilation is unimportant.
Abstract:
A new model has been developed for assessing multiple sources of nitrogen in catchments. The model (INCA) is process based and uses reaction kinetic equations to simulate the principal mechanisms operating. The model allows for plant uptake, surface and sub-surface pathways and can simulate up to six land uses simultaneously. The model can be applied to a catchment as a semi-distributed simulation and has an inbuilt multi-reach structure for river systems. Sources of nitrogen can be from atmospheric deposition, from the terrestrial environment (e.g. agriculture, leakage from forest systems etc.), from urban areas or from direct discharges via sewage or intensive farm units. The model is a daily simulation model and can provide information in the form of time series at key sites, or as profiles down river systems or as statistical distributions. The process model is described and in a companion paper the model is applied to the River Tywi catchment in South Wales and the Great Ouse in Bedfordshire.
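The reaction-kinetic, daily-timestep structure described above can be sketched as a single first-order nutrient store; the process names and rate values are hypothetical illustrations, not INCA parameters:

```python
def step_nitrate(store_kg, inputs_kg, uptake_rate, leach_rate, dt_days=1.0):
    """One daily step of a first-order nutrient store: deposition and
    fertiliser inputs, with plant uptake and leaching as first-order
    losses. Rates (per day) are hypothetical, not INCA values."""
    losses = (uptake_rate + leach_rate) * store_kg * dt_days
    return store_kg + inputs_kg * dt_days - losses

store = 100.0
for _ in range(30):  # one month of daily steps
    store = step_nitrate(store, inputs_kg=2.0,
                         uptake_rate=0.01, leach_rate=0.005)
```

With these rates the store relaxes toward an equilibrium of inputs divided by total loss rate (2.0 / 0.015 ≈ 133 kg).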
Abstract:
In recent years both developed and developing countries have experienced an increasing number of government initiatives dedicated to reducing the administrative costs (AC) imposed on businesses by regulation. We use a bi-linear fixed-effects model to analyze the extent to which government initiatives to reduce AC through the Standard Cost Model (SCM) attract Foreign Direct Investment (FDI) among 32 developing countries. Controlling for standard determinants of the SCM, we find that the SCM in most cases leads to higher FDI and that the benefits are more significant where the SCM has been implemented for a longer period.
Abstract:
A process-based fire regime model (SPITFIRE) has been developed, coupled with ecosystem dynamics in the LPJ Dynamic Global Vegetation Model, and used to explore fire regimes and the current impact of fire on the terrestrial carbon cycle and associated emissions of trace atmospheric constituents. The model estimates an average release of 2.24 Pg C yr−1 as CO2 from biomass burning during the 1980s and 1990s. Comparison with observed active fire counts shows that the model reproduces where fire occurs and can mimic broad geographic patterns in the peak fire season, although the predicted peak is 1–2 months late in some regions. Modelled fire season length is generally overestimated by about one month, but shows a realistic pattern of differences among biomes. Comparisons with remotely sensed burnt-area products indicate that the model reproduces broad geographic patterns of annual fractional burnt area over most regions, including the boreal forest, although interannual variability in the boreal zone is underestimated.
Abstract:
We describe Global Atmosphere 4.0 (GA4.0) and Global Land 4.0 (GL4.0): configurations of the Met Office Unified Model and JULES (Joint UK Land Environment Simulator) community land surface model developed for use in global and regional climate research and weather prediction activities. GA4.0 and GL4.0 are based on the previous GA3.0 and GL3.0 configurations, with the inclusion of developments made by the Met Office and its collaborators during its annual development cycle. This paper provides a comprehensive technical and scientific description of GA4.0 and GL4.0 as well as details of how these differ from their predecessors. We also present the results of some initial evaluations of their performance. Overall, performance is comparable with that of GA3.0/GL3.0; the updated configurations include improvements to the science of several parametrisation schemes, however, and will form a baseline for further ongoing development.
Abstract:
Earthworms are important organisms in soil communities and so are used as model organisms in environmental risk assessments of chemicals. However, current risk assessments of soil invertebrates are based on short-term laboratory studies, of limited ecological relevance, supplemented if necessary by site-specific field trials, which sometimes are challenging to apply across the whole agricultural landscape. Here, we investigate whether population responses to environmental stressors and pesticide exposure can be accurately predicted by combining energy budget and agent-based models (ABMs), based on knowledge of how individuals respond to their local circumstances. A simple energy budget model was implemented within each earthworm Eisenia fetida in the ABM, based on a priori parameter estimates. From broadly accepted physiological principles, simple algorithms specify how energy acquisition and expenditure drive life cycle processes. Each individual allocates energy between maintenance, growth and/or reproduction under varying conditions of food density, soil temperature and soil moisture. When simulating published experiments, good model fits were obtained to experimental data on individual growth, reproduction and starvation. Using the energy budget model as a platform we developed methods to identify which of the physiological parameters in the energy budget model (rates of ingestion, maintenance, growth or reproduction) are primarily affected by pesticide applications, producing four hypotheses about how toxicity acts. We tested these hypotheses by comparing model outputs with published toxicity data on the effects of copper oxychloride and chlorpyrifos on E. fetida. Both growth and reproduction were directly affected in experiments in which sufficient food was provided, whilst maintenance was targeted under food limitation.
Although we only incorporate toxic effects at the individual level we show how ABMs can readily extrapolate to larger scales by providing good model fits to field population data. The ability of the presented model to fit the available field and laboratory data for E. fetida demonstrates the promise of the agent-based approach in ecology, by showing how biological knowledge can be used to make ecological inferences. Further work is required to extend the approach to populations of more ecologically relevant species studied at the field scale. Such a model could help extrapolate from laboratory to field conditions and from one set of field conditions to another or from species to species.
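The energy allocation rule at the heart of such an energy budget model can be sketched as follows; the maintenance-first priority and the even growth/reproduction split are illustrative assumptions, not fitted E. fetida parameters:

```python
def allocate_energy(intake_j, maintenance_j, growth_fraction=0.5):
    """Allocate one day's assimilated energy: maintenance is paid first,
    and any surplus is split between growth and reproduction. A negative
    balance (starvation) is recorded as a drain on reserves. The split
    fraction is an illustrative assumption, not a fitted parameter."""
    surplus = intake_j - maintenance_j
    if surplus <= 0:
        return {"growth": 0.0, "reproduction": 0.0, "reserve_drain": -surplus}
    return {"growth": surplus * growth_fraction,
            "reproduction": surplus * (1.0 - growth_fraction),
            "reserve_drain": 0.0}

fed = allocate_energy(intake_j=10.0, maintenance_j=4.0)
starved = allocate_energy(intake_j=2.0, maintenance_j=4.0)
```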
Abstract:
The magnetization properties of aggregated ferrofluids are calculated by combining the chain formation model developed by Zubarev with the modified mean-field theory. Using moderate assumptions for the inter- and intrachain interactions we obtain expressions for the magnetization and initial susceptibility. When comparing the results of our theory to molecular dynamics simulations of the same model we find that at large dipolar couplings (λ > 3) the chain formation model appears to give better predictions than other analytical approaches. This supports the idea that chain formation is an important structural ingredient of strongly interacting dipolar particles.
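The non-interacting Langevin magnetization that mean-field and chain-formation theories build upon can be sketched directly; this is the textbook baseline, not Zubarev's chain model itself:

```python
import math

def langevin(x):
    """Langevin function L(x) = coth(x) - 1/x: the magnetization of a
    non-interacting dipolar gas, which modified mean-field theories
    correct for interparticle (and here, interchain) interactions."""
    if abs(x) < 1e-6:
        return x / 3.0  # small-argument limit avoids catastrophic cancellation
    return 1.0 / math.tanh(x) - 1.0 / x

# Weak fields: L(x) ~ x/3, which sets the initial susceptibility slope.
weak = langevin(1e-8)
# Strong fields: L(x) approaches saturation, L -> 1.
strong = langevin(50.0)
```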
Abstract:
Transport of pollution and heat out of streets into the boundary layer above is not currently understood and so fluxes cannot be quantified. Scalar concentration within the street is determined by the flux out of it, and so quantifying fluxes for turbulent flow over a rough urban surface is essential. We have developed a naphthalene sublimation technique to measure transfer from a two-dimensional street canyon in a wind tunnel for the case of flow perpendicular to the street. The street was coated with naphthalene, which sublimes at room temperature, so that the vapour represented the scalar source. The transfer velocity wT relates the flux out of the canyon to the concentration within it and is shown to be linearly related to wind speed above the street. The dimensionless transfer coefficient wT/Uδ represents the ventilation efficiency of the canyon (here, wT is a transfer velocity and Uδ is the wind speed at the boundary-layer top). Observed values are between 1.5 and 2.7 × 10−3 and, for the case where H/W → 0 (ratio of building height to street width), values are in the same range as estimates of transfer from a flat plate, giving confidence that the technique yields accurate values for street canyon scalar transfer. wT/Uδ varies with aspect ratio (H/W), reaching a maximum in the wake interference regime (0.3 < H/W < 0.65). However, when upstream roughness is increased, the maximum in wT/Uδ reduces, suggesting that street ventilation is less sensitive to H/W when the flow is in equilibrium with the urban surface. The results suggest that using naphthalene sublimation with wind-tunnel models of urban surfaces can provide a direct measure of area-averaged scalar fluxes.
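The definition of the transfer velocity, F = wT · C, together with the reported dimensionless coefficient, supports a back-of-envelope calculation; the wind speed and concentration below are assumed values for illustration only:

```python
def transfer_velocity(flux, concentration):
    """Transfer velocity w_T defined by F = w_T * C: the flux out of
    the canyon per unit scalar concentration within it."""
    return flux / concentration

# Mid-range of the reported coefficient w_T/U_delta (1.5-2.7e-3), with
# an assumed boundary-layer top wind speed of 5 m/s (not from the paper):
w_t = 2.0e-3 * 5.0          # transfer velocity, m/s
# Flux out of a canyon holding a scalar concentration of 100 units/m^3:
flux = w_t * 100.0          # units per m^2 per s
```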
Abstract:
Our digital universe is rapidly expanding: more and more daily activities are digitally recorded, data arrives in streams, it needs to be analyzed in real time and may evolve over time. In the last decade many adaptive learning algorithms and prediction systems, which can automatically update themselves with the new incoming data, have been developed. The majority of those algorithms focus on improving the predictive performance and assume that model update is always desired as soon as possible and as frequently as possible. In this study we consider a potential model update as an investment decision, which, as in the financial markets, should be taken only if a certain return on investment is expected. We introduce and motivate a new research problem for data streams: cost-sensitive adaptation. We propose a reference framework for analyzing adaptation strategies in terms of costs and benefits. Our framework allows us to characterize and decompose the costs of model updates, and to assess and interpret the gains in performance due to model adaptation for a given learning algorithm on a given prediction task. Our proof-of-concept experiment demonstrates how the framework can aid in analyzing and managing adaptation decisions in the chemical industry.
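The update-as-investment idea can be sketched as a simple return-on-investment rule; the threshold and the cost/gain units are illustrative assumptions, not the paper's framework:

```python
def should_update(expected_gain, update_cost, min_roi=0.2):
    """Treat a model update as an investment: update only when the
    expected performance gain exceeds the update cost by a required
    return on investment. The 20% threshold is illustrative."""
    if update_cost <= 0:
        return expected_gain > 0  # free updates: take any positive gain
    return (expected_gain - update_cost) / update_cost >= min_roi

# Three candidate updates with the same cost but shrinking expected gains.
decisions = [should_update(gain, cost)
             for gain, cost in [(10.0, 5.0), (5.5, 5.0), (3.0, 5.0)]]
```

Only the first update clears the required return; the second is profitable but below the threshold, and the third loses more than it gains.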