952 results for subgrid-scale models


Relevance:

80.00%

Publisher:

Abstract:

Microbial processes in soil are moisture, nutrient and temperature dependent and, consequently, accurate calculation of soil temperature is important for modelling nitrogen processes. Microbial activity in soil occurs even at sub-zero temperatures so that, in northern latitudes, a method to calculate soil temperature under snow cover and in frozen soils is required. This paper describes a new and simple model to calculate daily values for soil temperature at various depths in both frozen and unfrozen soils. The model requires four parameters: average soil thermal conductivity, specific heat capacity of soil, specific heat capacity due to freezing and thawing, and an empirical snow parameter. Precipitation, air temperature and snow depth (measured or calculated) are needed as input variables. The proposed model was applied to five sites in different parts of Finland representing different climates and soil types. Observed soil temperatures at depths of 20 and 50 cm (September 1981–August 1990) were used for model calibration. The calibrated model was then tested using observed soil temperatures from September 1990 to August 2001. R²-values for the calibration period varied between 0.87 and 0.96 at a depth of 20 cm and between 0.78 and 0.97 at 50 cm. R²-values for the testing period were between 0.87 and 0.94 at a depth of 20 cm and between 0.80 and 0.98 at 50 cm. Thus, despite the simplifications made, the model was able to simulate soil temperature at these study sites. This simple model simulates soil temperature well in the uppermost soil layers, where most of the nitrogen processes occur. The small number of parameters required means that the model is suitable for addition to catchment-scale models.
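The ingredients of such a scheme — four parameters plus air temperature and snow depth as drivers — can be sketched as a one-line daily update. This is a minimal sketch assuming one plausible formulation; the function name, the snow-damping form and all parameter values below are illustrative, not the calibrated values from the paper:

```python
import math

def soil_temperature_step(t_soil, t_air, snow_depth, depth,
                          k_t=0.8,       # soil thermal conductivity (W m-1 K-1)
                          c_soil=1.0e6,  # specific heat capacity of soil (J m-3 K-1)
                          c_ice=4.0e6,   # extra apparent capacity from freeze/thaw (J m-3 K-1)
                          f_s=-2.0):     # empirical snow damping parameter (m-1)
    """Advance soil temperature (deg C) at a given depth (m) by one day."""
    day = 86400.0
    # Freezing and thawing add apparent heat capacity near 0 deg C.
    c_app = c_soil + (c_ice if -5.0 < t_soil < 0.0 else 0.0)
    # Conduction relaxes the soil temperature towards the air temperature.
    t_new = t_soil + day * k_t / (c_app * (2.0 * depth) ** 2) * (t_air - t_soil)
    # Snow cover damps the coupling to the atmosphere (f_s < 0, factor <= 1).
    return t_new * math.exp(f_s * snow_depth)
```

With no snow the update is a plain relaxation towards air temperature; a snow pack pulls the result towards 0 °C, mimicking the insulating effect of the snow parameter described above.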

Obstacles considerably influence boundary layer processes, and their influence has been included in mesoscale models (MeM) for a long time. Methods used to parameterise obstacle effects in a MeM are summarised in this paper using results of the mesoscale model METRAS as examples. Besides the parameterisation of obstacle influences, it is also possible to use a joint modelling approach to describe obstacle-induced and mesoscale changes. Three different methods may be used for joint modelling approaches: the first is a time-slice approach, where steady basic-state profiles are used in an obstacle-resolving microscale model (MiM, example model MITRAS) and diurnal cycles are derived by joining steady-state MITRAS results. The second is one-way nesting, where the MeM results are used to initialise the MiM and to drive the time-dependent boundary values of the MiM. The third is to apply multi-scale models or two-way nesting approaches, which include feedbacks from the MiM to the MeM. The advantages and disadvantages of the different approaches, and remaining problems with joint Reynolds-averaged Navier–Stokes modelling approaches, are summarised in the paper.
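In its simplest form, the one-way nesting idea — coarse-model output driving the fine-model boundary as a function of time — reduces to time interpolation of stored MeM fields at the MiM boundary. A minimal 1-D sketch with hypothetical names (real nesting also interpolates in space and handles every prognostic variable):

```python
from bisect import bisect_right

def mim_boundary_value(mem_times, mem_values, t):
    """Linearly interpolate stored MeM output in time to provide a
    time-dependent boundary value for the MiM (one-way nesting sketch:
    information flows only from the coarse to the fine model)."""
    i = bisect_right(mem_times, t) - 1
    i = max(0, min(i, len(mem_times) - 2))   # clamp to the stored interval
    t0, t1 = mem_times[i], mem_times[i + 1]
    frac = (t - t0) / (t1 - t0)
    return (1.0 - frac) * mem_values[i] + frac * mem_values[i + 1]
```

Two-way nesting would additionally feed the MiM solution back into the MeM grid cells it overlaps, which is exactly the feedback the third approach above introduces.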

High-quality wind measurements in cities are needed for numerous applications, including wind engineering. Such data-sets are rare, and measurement platforms may not be optimal for meteorological observations. Two years' wind data were collected on the BT Tower, London, UK, showing an upward deflection on average for all wind directions. Wind tunnel simulations were performed to investigate flow distortion around two scale models of the Tower. Using a 1:160 scale model, it was shown that the Tower causes a small deflection (ca. 0.5°) compared to the lattice on top, on which the instruments were placed (ca. 0–4°). These deflections may have been underestimated due to wind tunnel blockage. Using a 1:40 model, the observed flow pattern was consistent with streamwise vortex pairs shed from the upstream lattice edge. Correction factors were derived for different wind directions and reduced the deflection in the full-scale data-set by up to 3°. Instrumental tilt caused a sinusoidal variation in deflection of ca. 2°. The residual deflection (ca. 3°) was attributed to the Tower itself. The correction to the wind-speeds was small (average 1%); it was therefore deduced that flow distortion does not significantly affect the measured wind-speeds and that the wind climate statistics are reliable.
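The quantity being corrected here is the vertical deflection of the measured wind vector, from which a sinusoidal instrument-tilt signature can be removed. A sketch with hypothetical helper names (the study's actual correction factors were derived per wind direction from the wind tunnel runs):

```python
import math

def deflection_angle(u, v, w):
    """Vertical flow-deflection angle in degrees (positive = upward)
    from measured wind components."""
    return math.degrees(math.atan2(w, math.hypot(u, v)))

def remove_tilt(deflection_deg, wind_dir_deg, tilt_deg, tilt_dir_deg):
    """Remove a sinusoidal instrument-tilt signature of amplitude
    tilt_deg peaking at wind direction tilt_dir_deg — a sketch of the
    ca. 2 deg sinusoidal component described above."""
    return deflection_deg - tilt_deg * math.cos(
        math.radians(wind_dir_deg - tilt_dir_deg))
```

Subtracting a ca. 2° sinusoid from a ca. 5° measured deflection leaves the ca. 3° residual attributed to the Tower itself.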

In the present paper we characterize the statistical properties of non-precipitating tropical ice clouds (deep ice anvils resulting from deep convection and cirrus clouds) over Niamey, Niger, West Africa, and Darwin, northern Australia, using ground-based radar–lidar observations from the Atmospheric Radiation Measurement (ARM) programme. The ice cloud properties analysed in this paper are the frequency of ice cloud occurrence, cloud fraction, the morphological properties (cloud-top height, base height, and thickness), the microphysical and radiative properties (ice water content, visible extinction, effective radius, terminal fall speed, and concentration), and the internal cloud dynamics (in-cloud vertical air velocity). The main highlight of the paper is that it characterizes for the first time the probability density functions of the tropical ice cloud properties, their vertical variability and their diurnal variability at the same time. This is particularly important over West Africa, since the ARM deployment in Niamey provides the first vertically resolved observations of non-precipitating ice clouds in this crucial area in terms of redistribution of water and energy in the troposphere. The comparison between the two sites also provides an additional observational basis for the evaluation of the parametrization of clouds in large-scale models, which should be able to reproduce both the statistical properties at each site and the differences between the two sites. The frequency of ice cloud occurrence is found to be much larger over Darwin when compared to Niamey, and with a much larger diurnal variability, which is well correlated with the diurnal cycle of deep convective activity. The diurnal cycle of the ice cloud occurrence over Niamey is also much less correlated with that of deep convective activity than over Darwin, probably owing to the fact that Niamey is further away from the deep convective sources of the region. 
The frequency distributions of cloud fraction are strongly bimodal and broadly similar over the two sites, with a predominance of clouds characterized either by a very small cloud fraction (less than 0.3) or a very large cloud fraction (larger than 0.9). The ice clouds over Darwin are also much thicker (by 1 km or more statistically) and are characterized by a much larger diurnal variability than ice clouds over Niamey. Ice clouds over Niamey are also characterized by smaller particle sizes and fall speeds but much larger concentrations, thereby carrying more ice water and producing more visible extinction than the ice clouds over Darwin. It is also found that downward in-cloud air motions stronger than 1 m s−1 occur much more frequently over Darwin, which, together with the larger fall speeds retrieved over Darwin, indicates that the life cycle of ice clouds is probably shorter over Darwin than over Niamey.
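Statistics of this kind start from height-resolved histograms of each retrieved property. A minimal sketch of that bookkeeping (the function name, binning and layering are arbitrary choices for illustration):

```python
def height_resolved_pdfs(values, heights, value_bins, height_bins):
    """Empirical PDFs (normalised histograms) of a retrieved cloud
    property within each height layer — the kind of vertically
    resolved statistics described above."""
    pdfs = {}
    for z0, z1 in zip(height_bins[:-1], height_bins[1:]):
        sample = [v for v, h in zip(values, heights) if z0 <= h < z1]
        counts = [0] * (len(value_bins) - 1)
        for v in sample:
            for j in range(len(value_bins) - 1):
                if value_bins[j] <= v < value_bins[j + 1]:
                    counts[j] += 1
                    break
        total = sum(counts)
        # Normalise so that sum(pdf_j * bin_width_j) == 1 for each layer.
        pdfs[(z0, z1)] = [
            c / (total * (value_bins[j + 1] - value_bins[j])) if total else 0.0
            for j, c in enumerate(counts)
        ]
    return pdfs
```

Repeating the same binning per hour of day yields the joint vertical and diurnal variability the comparison between the two sites relies on.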

This paper presents a completely new design of a bogie frame made of glass-fibre-reinforced composites, together with its performance under various loading conditions as predicted by finite element analysis. The bogie consists of two frames, one placed on top of the other, and two axle ties connecting the axles. Each frame consists of two side arms with a transom between them. The top frame is thinner and more compliant and has a higher curvature than the bottom frame. Variable vertical stiffness can be achieved before and after contact between the two frames at the central section of the bogie, to cope with different load levels. Finite element analysis played a very important role in the design of this structure. Stiffness and stress levels of the full-scale bogie under various loading conditions have been predicted using Marc, provided by MSC Software. In order to verify the finite element analysis (FEA) models, a fifth-scale prototype of the bogie was made and tested under quasi-static loading conditions. Results of testing the fifth-scale bogie were used to fine-tune details such as contact and friction in the fifth-scale FEA models; these conditions were then applied to the full-scale models. Finite element analysis results show that the stress levels in all directions are low compared with the material strengths.
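When the fifth-scale prototype and the full-scale bogie share material and geometry, textbook similitude relations connect the two test results: for equal stress, loads scale with the square of the length scale and deflections scale linearly, so stiffness scales linearly. A sketch of those hand relations only (the paper itself transfers the verified contact and friction settings through FEA, not through these relations, which also ignore scale effects in the composite):

```python
def scale_to_full(force_model_n, deflection_model_mm, scale=5.0):
    """Map one quasi-static test point from a 1/scale geometric model
    to full scale, assuming the same material and geometric similitude:
    equal stress implies F ~ L^2 and deflection ~ L, so stiffness ~ L."""
    force_full = force_model_n * scale ** 2
    deflection_full = deflection_model_mm * scale
    return force_full, deflection_full
```

So a full-scale bogie under equal-stress loading is, by these relations, five times stiffer than the fifth-scale prototype.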

The present workshop constitutes the 5th in the annual series on “Concepts for Convective Parameterizations in Large-Scale Models”. The purpose of the workshop series has been to discuss the fundamental theoretical issues of convection parameterization with a small number of European scientists. The workshop series has been funded by European Cooperation in the Field of Scientific and Technical Research (COST) Action ES0905. The theme of the workshop for 2012 was decided from a main conclusion of the previous workshop, which focused on the convective organization problem and sought a means of implementing such effects in convection parameterizations (Yano et al. 2012). As it turned out, in order to discuss this implementation issue in any concrete manner, we first have to know the bells and whistles of convection parameterizations very well. This was the purpose of the 5th workshop. The title of the workshop is rather metaphorically tagged “Bulk or Spectrum?”, because this is a typical decision we have to face at the outset of any parameterization development. The following report discusses selected issues of bells and whistles addressed during the meeting.

Regional- to global-scale modelling of N flux from land to ocean has progressed to date through the development of simple empirical models representing bulk N flux rates from large watersheds, regions or continents on the basis of a limited selection of model parameters. Watershed-scale N flux modelling has developed a range of approaches, from physically-based models in which N flux rates are predicted through a physical representation of the processes involved, through to catchment-scale models which provide a simplified representation of true system behaviour. Generally, these watershed-scale models describe within their structure the dominant process controls on N flux at the catchment or watershed scale, and take into account variations in the extent to which these processes control N flux rates as a function of landscape sensitivity to N cycling and export. This paper addresses the nature of the errors and uncertainties inherent in existing regional- to global-scale models, and the nature of error propagation associated with upscaling from the small-catchment to the regional scale, through a suite of spatial aggregation and conceptual lumping experiments conducted on a validated watershed-scale model, the export coefficient model. Results from the analysis support the findings of other researchers developing macroscale models in allied research fields. Conclusions from the study confirm that reliable and accurate regional-scale N flux modelling needs to take account of the heterogeneity of landscapes and the impact that this has on N cycling processes within homogeneous landscape units.
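At its core, the export coefficient model named above reduces to a sum over land-use types of area times an export coefficient. A minimal sketch (the coefficient values in the usage below are illustrative, not calibrated values from any catchment):

```python
def export_coefficient_n_flux(land_uses, point_sources_kg=0.0):
    """Total annual N flux (kg N yr-1) from a catchment as the sum over
    land-use types of area (ha) times an export coefficient
    (kg N ha-1 yr-1), plus any point-source inputs.

    land_uses: iterable of (area_ha, coefficient) pairs.
    """
    return sum(area_ha * coeff for area_ha, coeff in land_uses) + point_sources_kg

# Usage: 100 ha exporting 5 kg N/ha/yr plus 50 ha exporting 2 kg N/ha/yr.
total = export_coefficient_n_flux([(100.0, 5.0), (50.0, 2.0)])
```

The aggregation and lumping experiments in the paper amount to varying how finely such land-use units are resolved before the coefficients are applied.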

Simulating spiking neural networks is of great interest to scientists wanting to model the functioning of the brain. However, large-scale models are expensive to simulate due to the number and interconnectedness of neurons in the brain. Furthermore, where such simulations are used in an embodied setting, the simulation must be real-time in order to be useful. In this paper we present NeMo, a platform for such simulations which achieves high performance through the use of highly parallel commodity hardware in the form of graphics processing units (GPUs). NeMo makes use of the Izhikevich neuron model which provides a range of realistic spiking dynamics while being computationally efficient. Our GPU kernel can deliver up to 400 million spikes per second. This corresponds to a real-time simulation of around 40 000 neurons under biologically plausible conditions with 1000 synapses per neuron and a mean firing rate of 10 Hz.
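The Izhikevich model that NeMo implements couples a fast membrane potential v to a recovery variable u, with a reset on spiking; its efficiency (two ODEs and four parameters per neuron) is what makes simulations at this scale tractable. A sketch of one forward-Euler step with the standard regular-spiking parameter set (the GPU kernel itself is far more elaborate):

```python
def izhikevich_step(v, u, i_syn, a=0.02, b=0.2, c=-65.0, d=8.0, dt=0.5):
    """One Euler step (dt in ms) of the Izhikevich neuron model.
    Returns the updated (v, u) and whether a spike occurred."""
    if v >= 30.0:                 # spike threshold reached: reset
        return c, u + d, True
    dv = 0.04 * v * v + 5.0 * v + 140.0 - u + i_syn
    du = a * (b * v - u)
    return v + dt * dv, u + dt * du, False
```

Driving this neuron with a steady input current produces tonic spiking; swapping the (a, b, c, d) quadruple reproduces the other firing patterns the model is known for.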

Convective equilibrium is a long-standing and useful concept for understanding many aspects of the behaviour of deep moist convection. For example, it is often invoked in developing parameterizations for large-scale models. However, the equilibrium assumption may begin to break down as models are increasingly used with shorter timesteps and finer resolutions. Here we perform idealized cloud-system resolving model simulations of deep convection with imposed time variations in the surface forcing. A range of rapid forcing timescales from 1 to 36 h is used, in order to induce systematic departures from equilibrium. For the longer forcing timescales, the equilibrium assumption remains valid, at least in the limited sense that cycle-integrated measures of convective activity are very similar from cycle to cycle. For shorter forcing timescales, cycle-integrated convection becomes more variable, with enhanced activity on one cycle being correlated with reduced activity on the next, suggesting a role for convective memory. Further investigation shows that the memory does not appear to be carried by the domain-mean thermodynamic fields but rather by structures on horizontal scales of 5–20 km. Such structures are produced by the convective clouds and can persist beyond the lifetime of the cloud, even through to the next forcing cycle.
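The cycle-to-cycle compensation described here can be quantified as a negative lag-1 correlation of the cycle-integrated convective activity. A minimal diagnostic (a plain Pearson correlation between successive cycles; the model diagnostics in the study are more involved):

```python
def lag1_correlation(series):
    """Pearson correlation between successive cycle-integrated values.
    A negative value indicates that an active cycle tends to be
    followed by a suppressed one."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)
```

A perfectly alternating active/suppressed sequence gives a correlation of −1, the idealised signature of the convective memory effect.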

We have extensively evaluated the response of cloud-base drizzle rate (Rcb; mm day–1) in warm clouds to liquid water path (LWP; g m–2) and to cloud condensation nuclei (CCN) number concentration (NCCN; cm–3), an aerosol proxy. This evaluation is based on a 19-month dataset of Doppler radar, lidar, microwave radiometer and aerosol observing systems from the Atmospheric Radiation Measurement (ARM) Mobile Facility deployments at the Azores and in Germany. Assuming 0.55% supersaturation to calculate NCCN, we found a power-law dependence of Rcb on NCCN, indicating that Rcb decreases by a factor of 2–3 as NCCN increases from 200 to 1000 cm–3 for fixed LWP. Additionally, the precipitation susceptibility to NCCN ranges between 0.5 and 0.9, in agreement with values from simulations and aircraft measurements. Surprisingly, the susceptibility of the probability of precipitation from our analysis is much higher than that from CloudSat estimates, but agrees well with simulations from a multi-scale high-resolution aerosol–climate model. Although scale issues are not completely resolved in the intercomparisons, our results are encouraging, suggesting that it is possible for multi-scale models to accurately simulate the response of LWP to aerosol perturbations.
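The precipitation susceptibility quoted here is the standard diagnostic S = −d ln R / d ln N at fixed LWP; estimated from two points it reduces to a ratio of logarithms:

```python
import math

def precipitation_susceptibility(r1, n1, r2, n2):
    """S = -d ln(R) / d ln(N_CCN) at fixed LWP, estimated from two
    (drizzle rate, CCN concentration) points on the power law."""
    return -math.log(r2 / r1) / math.log(n2 / n1)
```

For example, a factor-of-2.5 reduction in Rcb as NCCN rises from 200 to 1000 cm–3 gives S = ln 2.5 / ln 5 ≈ 0.57, inside the 0.5–0.9 range reported above.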

The activation of aerosols to form cloud droplets depends on vertical velocities whose local variability is not typically resolved at the GCM grid scale. Consequently, it is necessary to represent the subgrid-scale variability of vertical velocity in the calculation of cloud droplet number concentration. This study uses the UK Chemistry and Aerosols community model (UKCA) within the Hadley Centre Global Environmental Model (HadGEM3), coupled for the first time to an explicit aerosol activation parameterisation and hence known as UKCA-Activate. We explore the range of uncertainty in estimates of the indirect aerosol effects attributable to the choice of parameterisation of the subgrid-scale variability of vertical velocity in HadGEM-UKCA. Results of simulations demonstrate that the use of a characteristic vertical velocity cannot replicate results derived with a distribution of vertical velocities, and is to be discouraged in GCMs. This study focuses on the effect of the variance (σw²) of a Gaussian pdf (probability density function) of vertical velocity. Fixed values of σw (spanning the range measured in situ by nine flight campaigns found in the literature) and a configuration in which σw depends on turbulent kinetic energy (TKE) are tested. Results from the mid-range fixed-σw and TKE-based configurations both compare well with observed vertical velocity distributions and cloud droplet number concentrations. The radiative flux perturbation due to the total effects of anthropogenic aerosol is estimated at −1.9 W m−2 with σw = 0.1 m s−1, −2.1 W m−2 with σw derived from TKE, −2.25 W m−2 with σw = 0.4 m s−1, and −2.3 W m−2 with σw = 0.7 m s−1. The breadth of this range, 0.4 W m−2, is a substantial fraction of the total diversity of current aerosol forcing estimates. Reducing the uncertainty in the parameterisation of σw would therefore be an important step towards reducing the uncertainty in estimates of the indirect aerosol effects.
Detailed examination of regional radiative flux perturbations reveals that aerosol microphysics can be responsible for some climate-relevant radiative effects, highlighting the importance of including microphysical aerosol processes in GCMs.
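The contrast the study draws — a single characteristic updraught versus a distribution — can be sketched by averaging an activation scheme over a Gaussian pdf of vertical velocity. The activation function below is a made-up monotone placeholder, not the UKCA-Activate parameterisation:

```python
import math

def droplets_from_w_pdf(n_ccn, sigma_w, activate, n_steps=400, w_max=4.0):
    """Average activate(w, n_ccn) over the positive half of a zero-mean
    Gaussian pdf of vertical velocity with standard deviation sigma_w,
    renormalised over w > 0 (simple midpoint quadrature)."""
    dw = w_max / n_steps
    num = den = 0.0
    for i in range(n_steps):
        w = (i + 0.5) * dw
        p = math.exp(-0.5 * (w / sigma_w) ** 2)   # unnormalised Gaussian
        num += activate(w, n_ccn) * p * dw
        den += p * dw
    return num / den

# Placeholder activation: activated number rises with updraught speed.
toy_activate = lambda w, n: n * w / (w + 0.3)
```

Because activation increases with w, a broader pdf (larger σw) activates more droplets than a narrow one — the sensitivity behind the −1.9 to −2.3 W m−2 spread quoted above.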

We present ocean model sensitivity experiments aimed at separating the influence of the projected changes in the “thermal” (near-surface air temperature) and “wind” (near-surface winds) forcing on the patterns of sea level and ocean heat content. In the North Atlantic, the distribution of sea level change is more due to the “thermal” forcing, whereas it is more due to the “wind” forcing in the North Pacific; in the Southern Ocean, the “thermal” and “wind” forcing have a comparable influence. In the ocean adjacent to Antarctica the “thermal” forcing leads to an inflow of warmer waters on the continental shelves, which is somewhat attenuated by the “wind” forcing. The structure of the vertically integrated heat uptake is set by different processes at low and high latitudes: at low latitudes it is dominated by the heat transport convergence, whereas at high latitudes it represents a small residual of changes in the surface flux and advection of heat. The structure of the horizontally integrated heat content tendency is set by the increase of downward heat flux by the mean circulation and comparable decrease of upward heat flux by the subgrid-scale processes; the upward eddy heat flux decreases and increases by almost the same magnitude in response to, respectively, the “thermal” and “wind” forcing. Regionally, the surface heat loss and deep convection weaken in the Labrador Sea, but intensify in the Greenland Sea in the region of sea ice retreat. The enhanced heat flux anomaly in the subpolar Atlantic is mainly caused by the “thermal” forcing.

Introducing a parameterization of the interactions between wind-driven snow depth changes and melt pond evolution allows us to improve large-scale models. In this paper we have implemented an explicit melt pond scheme and, for the first time, a wind-dependent snow redistribution model and new snow thermophysics into a coupled ocean–sea ice model. The comparison of long-term mean statistics of melt pond fractions against observations demonstrates realistic melt pond cover on average over Arctic sea ice, but a clear underestimation of the pond coverage on the multi-year ice (MYI) of the western Arctic Ocean. The latter shortcoming originates from the concealing effect of persistent snow on forming ponds, impeding their growth. Analyzing a second simulation with intensified snow drift enables the identification of two distinct modes of sensitivity in the melt pond formation process. First, the larger proportion of wind-transported snow that is lost in leads directly curtails the late-spring snow volume on sea ice and facilitates the early development of melt ponds on MYI. In contrast, a combination of higher air temperatures and thinner snow prior to the onset of melting sometimes makes the snow cover switch to a regime where it melts entirely and rapidly. In the latter situation, seemingly more frequent on first-year ice (FYI), a smaller snow volume directly relates to a reduced melt pond cover. Notwithstanding, changes in snow and water accumulation on seasonal sea ice are naturally limited, which lessens the impacts of wind-blown snow redistribution on FYI compared with those on MYI. At the basin scale, the overall increased melt pond cover results in decreased ice volume via the ice–albedo feedback in summer, which is experienced almost exclusively by MYI.

The degree to which habitat fragmentation affects bird incidence is species specific and may depend on varying spatial scales. Selecting the correct scale of measurement is essential to appropriately assess the effects of habitat fragmentation on bird occurrence. Our objective was to determine which spatial scale of landscape measurement best describes the incidence of three bird species (Pyriglena leucoptera, Xiphorhynchus fuscus and Chiroxiphia caudata) in the fragmented Brazilian Atlantic forest and test if multi-scalar models perform better than single-scalar ones. Bird incidence was assessed in 80 forest fragments. The surrounding landscape structure was described with four indices measured at four spatial scales (400-, 600-, 800- and 1,000-m buffers around the sample points). The explanatory power of each scale in predicting bird incidence was assessed using logistic regression, bootstrapped with 1,000 repetitions. The best results varied between species (1,000-m radius for P. leucoptera; 800-m for X. fuscus and 600-m for C. caudata), probably due to their distinct feeding habits and foraging strategies. Multi-scale models always resulted in better predictions than single-scale models, suggesting that different aspects of the landscape structure are related to different ecological processes influencing bird incidence. In particular, our results suggest that local extinction and (re)colonisation processes might simultaneously act at different scales. Thus, single-scale models may not be good enough to properly describe complex pattern-process relationships. Selecting variables at multiple ecologically relevant scales is a reasonable procedure to optimise the accuracy of species incidence models.
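The workhorse of such an analysis is a logistic regression of species incidence on landscape predictors measured in buffers of different radii. A minimal gradient-descent sketch (a stand-in for the bootstrapped regressions in the study; rows of X could hold, e.g., forest cover measured in the 600-m and 1,000-m buffers for a multi-scale model):

```python
import math

def fit_logistic(X, y, lr=0.5, epochs=300):
    """Plain stochastic-gradient logistic regression on presence/absence
    data; returns [intercept, coef_1, ..., coef_k]."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - yi                     # gradient of the log-loss
            w[0] -= lr * err
            for j, xj in enumerate(xi):
                w[j + 1] -= lr * err * xj
    return w

def predict(w, xi):
    """Predicted incidence probability for one fragment."""
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1.0 / (1.0 + math.exp(-z))
```

Refitting on resampled fragments (the study's 1,000 bootstrap repetitions) then gives the stability of each scale's explanatory power.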

In Brazil and around the world, oil companies are seeking new technologies and processes that can increase the oil recovery factor in mature reservoirs in a simple and inexpensive way. Recent research has developed a new process called Gas Assisted Gravity Drainage (GAGD), classified as a gas-injection IOR method. The process, which is undergoing pilot testing in the field, is being extensively studied through physical scale models and laboratory core-floods, owing to the high oil recoveries achieved relative to other gas-injection IOR methods. It consists of injecting gas at the top of a reservoir through horizontal or vertical injector wells and displacing the oil, taking advantage of the natural gravity segregation of the fluids, towards a horizontal producer well placed at the bottom of the reservoir. To study this process, a homogeneous reservoir was modelled with a multi-component fluid model having characteristics similar to the light oil of Brazilian fields, using a compositional simulator to optimize the operational parameters. The process was simulated in GEM (CMG, 2009.10). The operational parameters studied were the gas injection rate, the type of gas injected, and the locations of the injector and producer wells; the presence of a water drive was also studied. The results showed that the maximum vertical spacing between the two wells yielded the maximum oil recovery in GAGD, and that the highest injection rate gave the largest recovery factors. This parameter controls the speed of the injected-gas front and determines whether or not the gravitational force dominates the oil recovery process. Natural gas performed better than CO2, and the presence of an aquifer in the reservoir had little influence on the process. The economic analysis found that injecting natural gas is more economically beneficial than injecting CO2.