112 results for Microscopic simulation models


Relevance: 30.00%

Abstract:

The parameterization of surface heat-flux variability in urban areas relies on adequate representation of surface characteristics. Given the horizontal resolutions (e.g. ≈0.1–1 km) currently used in numerical weather prediction (NWP) models, properties of the urban surface (e.g. vegetated/built surfaces, street-canyon geometries) often have large spatial variability. Here, a new approach based on Urban Zones to characterize Energy partitioning (UZE) is tested within an NWP model (Weather Research and Forecasting model; WRF v3.2.1) for Greater London. The urban land-surface scheme is the Noah/Single-Layer Urban Canopy Model (SLUCM). Detailed surface information (horizontal resolution 1 km) in central London shows that the UZE offers better characterization of surface properties and their variability compared to the default WRF-SLUCM input parameters. In situ observations of the surface energy fluxes and near-surface meteorological variables are used to select the radiation and turbulence parameterization schemes and to evaluate the land-surface scheme.
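
The energy-partitioning idea behind a zone-based classification can be illustrated with a minimal sketch: surface cover fractions determine how available energy is split between storage, sensible and latent heat. The zone compositions, Bowen ratios and storage fractions below are hypothetical illustrations, not the values used in the study or in WRF-SLUCM.

```python
# Minimal sketch of energy partitioning by urban surface cover
# (hypothetical values, not the WRF-SLUCM/UZE configuration).

# Assumed Bowen ratios (H/LE) for generic surface types (illustrative only).
BOWEN_RATIO = {"vegetated": 0.4, "built": 4.0, "water": 0.1}

def partition_energy(q_star, storage_fraction, cover_fractions):
    """Split net all-wave radiation q_star (W m-2) into storage, sensible
    and latent heat fluxes for a grid cell described by surface cover
    fractions (which must sum to 1)."""
    assert abs(sum(cover_fractions.values()) - 1.0) < 1e-6
    delta_qs = storage_fraction * q_star          # heat storage flux
    available = q_star - delta_qs                 # energy left for H + LE
    h = le = 0.0
    for surface, frac in cover_fractions.items():
        beta = BOWEN_RATIO[surface]
        h += frac * available * beta / (1.0 + beta)   # sensible heat
        le += frac * available * 1.0 / (1.0 + beta)   # latent heat
    return {"Q_storage": delta_qs, "Q_H": h, "Q_E": le}

# Example: a densely built central-city cell vs. a suburban cell.
print(partition_energy(400.0, 0.35, {"vegetated": 0.1, "built": 0.9}))
print(partition_energy(400.0, 0.25, {"vegetated": 0.5, "built": 0.5}))
```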

Relevance: 30.00%

Abstract:

This paper presents single-column model (SCM) simulations of a tropical squall-line case observed during the Coupled Ocean-Atmosphere Response Experiment of the Tropical Ocean/Global Atmosphere Programme. This case-study was part of an international model intercomparison project organized by Working Group 4 ‘Precipitating Convective Cloud Systems’ of the GEWEX (Global Energy and Water-cycle Experiment) Cloud System Study. Eight SCM groups using different deep-convection parametrizations participated in this project. The SCMs were forced by temperature and moisture tendencies that had been computed from a reference cloud-resolving model (CRM) simulation using open boundary conditions. The comparison of the SCM results with the reference CRM simulation provided insight into the ability of current convection and cloud schemes to represent organized convection. The CRM results enabled a detailed evaluation of the SCMs in terms of the thermodynamic structure and the convective mass flux of the system, the latter being closely related to the surface convective precipitation. It is shown that the SCMs could reproduce reasonably well the time evolution of the surface convective and stratiform precipitation, the convective mass flux, and the thermodynamic structure of the squall-line system. The thermodynamic structure simulated by the SCMs depended on how the models partitioned the precipitation between convective and stratiform. However, structural differences persisted in the thermodynamic profiles simulated by the SCMs and the CRM. These differences could be attributed to the fact that the total mass flux used to compute the SCM forcing differed from the convective mass flux. The SCMs could not adequately represent the organized mesoscale circulations and the microphysical/radiative forcing associated with the stratiform region. This issue is generally known as the ‘scale-interaction’ problem, which can only be properly addressed in fully three-dimensional simulations. Sensitivity simulations run by several groups showed that the time evolution of the surface convective precipitation was considerably smoothed when the convective closure was based on convective available potential energy instead of moisture convergence. Finally, additional SCM simulations without a convection parametrization indicated that the impact of a convection parametrization in forced SCM runs was more visible in the moisture profiles than in the temperature profiles, because convective transport was particularly important in the moisture budget.
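
The forcing strategy described here, prescribing CRM-derived temperature and moisture tendencies while the SCM's own physics adds its tendencies, can be sketched in a few lines. The profiles, forcing rates and the toy "convection scheme" below are placeholders, not the GCSS case setup.

```python
import numpy as np

# Minimal sketch of a forced single-column integration: prescribed large-scale
# (CRM-derived) tendencies are added to the tendencies from a stand-in
# "convection scheme". All profiles and rates are hypothetical placeholders.

nz, nt, dt = 20, 48, 1800.0                 # levels, steps, time step (s)
T_ref = np.linspace(300.0, 220.0, nz)       # reference temperature profile (K)
q_ref = np.linspace(16e-3, 1e-4, nz)        # reference specific humidity (kg/kg)
T, q = T_ref.copy(), q_ref.copy()

# Prescribed forcing tendencies (per second), as would be diagnosed from a CRM.
dT_forcing = np.full(nz, -2.0 / 86400.0)    # cooling of 2 K/day
dq_forcing = np.full(nz, 1e-3 / 86400.0)    # moistening of 1 g/kg/day

def convection_tendency(T, q):
    """Toy stand-in for a convection parametrization: relax the column back
    toward the reference profiles on a 6-hour time scale."""
    tau = 6 * 3600.0
    return (T_ref - T) / tau, (q_ref - q) / tau

for _ in range(nt):
    dT_conv, dq_conv = convection_tendency(T, q)
    T += dt * (dT_forcing + dT_conv)
    q += dt * (dq_forcing + dq_conv)

print("Surface T after 1 day: %.2f K, q: %.2f g/kg" % (T[0], 1e3 * q[0]))
```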

Relevance: 30.00%

Abstract:

Earth system models (ESMs) are increasing in complexity by incorporating more processes than their predecessors, making them potentially important tools for studying the evolution of climate and associated biogeochemical cycles. However, their coupled behaviour has only recently been examined in any detail, and has yielded a very wide range of outcomes. For example, coupled climate–carbon cycle models that represent land-use change simulate total land carbon stores at 2100 that vary by as much as 600 Pg C, given the same emissions scenario. This large uncertainty is associated with differences in how key processes are simulated in different models, and illustrates the necessity of determining which models are most realistic using rigorous methods of model evaluation. Here we assess the state-of-the-art in evaluation of ESMs, with a particular emphasis on the simulation of the carbon cycle and associated biospheric processes. We examine some of the new advances and remaining uncertainties relating to (i) modern and palaeodata and (ii) metrics for evaluation. We note that the practice of averaging results from many models is unreliable and no substitute for proper evaluation of individual models. We discuss a range of strategies, such as the inclusion of pre-calibration, combined process- and system-level evaluation, and the use of emergent constraints, that can contribute to the development of more robust evaluation schemes. An increasingly data-rich environment offers more opportunities for model evaluation, but also presents a challenge. Improved knowledge of data uncertainties is still necessary to move the field of ESM evaluation away from a "beauty contest" towards the development of useful constraints on model outcomes.
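
The emergent-constraint strategy mentioned above amounts to an across-ensemble regression between an observable quantity and a projected quantity, evaluated at the observed value. Below is a minimal sketch with entirely synthetic numbers (no real models or observations are used).

```python
import numpy as np

# Minimal sketch of an emergent constraint: regress a projected quantity Y
# (e.g. future land carbon change) on an observable X (e.g. a present-day
# sensitivity) across an ensemble of models, then read off the constrained
# value of Y at the observed X. All numbers are synthetic.

rng = np.random.default_rng(0)
n_models = 12
x = rng.uniform(0.5, 2.0, n_models)              # observable, model by model
y = 150.0 * x + rng.normal(0.0, 20.0, n_models)  # projection, correlated with x

slope, intercept = np.polyfit(x, y, 1)           # across-ensemble regression
x_obs, x_obs_err = 1.1, 0.1                      # hypothetical observation

y_constrained = slope * x_obs + intercept
y_err = abs(slope) * x_obs_err                   # first-order propagated error

print("Unconstrained ensemble mean: %.1f +/- %.1f" % (y.mean(), y.std()))
print("Emergent-constraint estimate: %.1f +/- %.1f" % (y_constrained, y_err))
```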

Relevance: 30.00%

Abstract:

Understanding the sources of systematic errors in climate models is challenging because of coupled feedbacks and error compensation. The developing seamless approach proposes that the identification and correction of short-term climate model errors have the potential to improve the modeled climate on longer time scales. In previous studies, initialised atmospheric simulations of a few days have been used to compare fast physics processes (convection, cloud processes) among models. The present study explores how initialised seasonal to decadal hindcasts (re-forecasts) relate transient week-to-month errors of the ocean and atmospheric components to the coupled model's long-term pervasive SST errors. A protocol is designed to attribute the SST biases to their source processes. It includes five steps: (1) identify and describe biases in a coupled stabilized simulation, (2) determine the time scale over which the bias emerges and how it propagates, (3) find the geographical origin of the bias, (4) evaluate the degree of coupling in the development of the bias, and (5) find the field responsible for the bias. This strategy has been implemented with a set of experiments based on the initial adjustment of initialised simulations and exploring various degrees of coupling. In particular, hindcasts give the time scale over which biases emerge, regionally restored experiments show their geographical origin, and ocean-only simulations isolate the field responsible for the bias and evaluate the degree of coupling in its development. This strategy is applied to four prominent SST biases of the IPSL-CM5A-LR coupled model in the tropical Pacific, which are largely shared by other coupled models, including the Southeast Pacific warm bias and the equatorial cold tongue bias. Using the proposed protocol, we demonstrate that the East Pacific warm bias appears in a few months and is caused by a lack of upwelling due to too-weak meridional coastal winds off Peru. The cold equatorial bias, which surprisingly takes 30 years to develop, is the result of equatorward advection of midlatitude cold SST errors. Despite large development efforts, the current generation of coupled models shows little improvement. The strategy proposed in this study is a further step towards moving from the current random, ad hoc approach to a bias-targeted, priority-setting, systematic model-development approach.
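
Step (2) of the protocol, finding the time scale over which a bias emerges, is essentially a composite of hindcast drift as a function of forecast lead time. Below is a minimal sketch with synthetic hindcasts; the drift amplitude, e-folding time and noise level are invented for illustration.

```python
import numpy as np

# Minimal sketch of protocol step (2): estimate the time scale over which an
# SST bias emerges by averaging the drift of many initialised hindcasts as a
# function of forecast lead time. The "hindcasts" here are synthetic.

rng = np.random.default_rng(1)
n_starts, n_leads = 20, 24                       # start dates, monthly leads
true_bias, tau = 2.0, 4.0                        # asymptotic bias (K), e-folding (months)

lead = np.arange(1, n_leads + 1)
# Each hindcast drifts toward the coupled-model attractor, plus noise.
drift = true_bias * (1.0 - np.exp(-lead / tau))
hindcasts = drift + rng.normal(0.0, 0.3, (n_starts, n_leads))

mean_drift = hindcasts.mean(axis=0)              # composite over start dates
emergence = lead[mean_drift > 0.63 * true_bias][0]
print("Bias reaches ~63%% of its asymptotic value after %d months" % emergence)
```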

Relevance: 30.00%

Abstract:

Global syntheses of palaeoenvironmental data are required to test climate models under conditions different from the present. Data sets for this purpose contain data from spatially extensive networks of sites. The data are either directly comparable to model output or readily interpretable in terms of modelled climate variables. Data sets must contain sufficient documentation to distinguish between raw (primary) and interpreted (secondary, tertiary) data, to evaluate the assumptions involved in interpretation of the data, to exercise quality control, and to select data appropriate for specific goals. Four data bases for the Late Quaternary, documenting changes in lake levels since 30 kyr BP (the Global Lake Status Data Base), vegetation distribution at 18 kyr and 6 kyr BP (BIOME 6000), aeolian accumulation rates during the last glacial-interglacial cycle (DIRTMAP), and tropical terrestrial climates at the Last Glacial Maximum (the LGM Tropical Terrestrial Data Synthesis) are summarised. Each has been used to evaluate simulations of Last Glacial Maximum (LGM: 21 calendar kyr BP) and/or mid-Holocene (6 cal. kyr BP) environments. Comparisons have demonstrated that changes in radiative forcing and orography due to orbital and ice-sheet variations explain the first-order, broad-scale (in space and time) features of global climate change since the LGM. However, atmospheric models forced by 6 cal. kyr BP orbital changes with unchanged surface conditions fail to capture quantitative aspects of the observed climate, including the greatly increased magnitude and northward shift of the African monsoon during the early to mid-Holocene. Similarly, comparisons with palaeoenvironmental datasets show that atmospheric models have underestimated the magnitude of cooling and drying of much of the land surface at the LGM. The inclusion of feedbacks due to changes in ocean- and land-surface conditions at both times, and atmospheric dust loading at the LGM, appears to be required in order to produce a better simulation of these past climates. The development of Earth system models incorporating the dynamic interactions among ocean, atmosphere, and vegetation is therefore mandated by Quaternary science results as well as climatological principles. For greatest scientific benefit, this development must be paralleled by continued advances in palaeodata analysis and synthesis, which in turn will help to define questions that call for new focused data collection efforts.

Relevance: 30.00%

Abstract:

This study assesses the influence of the El Niño–Southern Oscillation (ENSO) on global tropical cyclone activity using a 150-yr-long integration with a high-resolution coupled atmosphere–ocean general circulation model [High-Resolution Global Environmental Model (HiGEM); with N144 resolution: ~90 km in the atmosphere and ~40 km in the ocean]. Tropical cyclone activity is compared to an atmosphere-only simulation using the atmospheric component of HiGEM (HiGAM). Observations of tropical cyclones in the International Best Track Archive for Climate Stewardship (IBTrACS) and tropical cyclones identified in the Interim ECMWF Re-Analysis (ERA-Interim) are used to validate the models. Composite anomalies of tropical cyclone activity in El Niño and La Niña years are used. HiGEM is able to capture the shift in tropical cyclone locations in response to ENSO in the Pacific and Indian Oceans. However, HiGEM does not capture the expected ENSO–tropical cyclone teleconnection in the North Atlantic. HiGAM shows more skill in simulating the global ENSO–tropical cyclone teleconnection; however, variability in the Pacific is too pronounced. HiGAM is able to capture the ENSO–tropical cyclone teleconnection in the North Atlantic more accurately than HiGEM. An investigation into the large-scale environmental conditions known to influence tropical cyclone activity is used to further understand the response of tropical cyclone activity to ENSO in the North Atlantic and western North Pacific. The vertical wind shear response over the Caribbean is not captured in HiGEM compared to HiGAM and ERA-Interim. Biases in the mean ascent at 500 hPa in HiGEM remain in HiGAM over the western North Pacific; however, a more realistic low-level vorticity in HiGAM results in a more accurate ENSO–tropical cyclone teleconnection.
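
The compositing method referred to above can be sketched as follows: classify years by an ENSO index and difference the tropical-cyclone statistics between the two composites. The ±0.5 K index threshold is a common convention, and the data below are synthetic stand-ins rather than HiGEM, HiGAM or IBTrACS output.

```python
import numpy as np

# Minimal sketch of ENSO compositing: classify years by an ENSO index
# (e.g. a Nino-3.4 SST anomaly) and difference basin tropical-cyclone counts
# between El Nino and La Nina composites. Data are synthetic stand-ins.

rng = np.random.default_rng(2)
n_years = 150
nino34 = rng.normal(0.0, 1.0, n_years)                           # ENSO index (K)
tc_counts = rng.poisson(lam=np.clip(26 + 3 * nino34, 1, None))   # yearly TC counts

el_nino = nino34 > 0.5
la_nina = nino34 < -0.5

composite_anomaly = tc_counts[el_nino].mean() - tc_counts[la_nina].mean()
print("El Nino years: %d, La Nina years: %d" % (el_nino.sum(), la_nina.sum()))
print("El Nino minus La Nina TC-count composite: %.1f" % composite_anomaly)
```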

Relevance: 30.00%

Abstract:

The UPSCALE (UK on PRACE: weather-resolving Simulations of Climate for globAL Environmental risk) project, using PRACE (Partnership for Advanced Computing in Europe) resources, constructed and ran an ensemble of atmosphere-only global climate model simulations using the Met Office Unified Model GA3 configuration. Each simulation is 27 years in length for both the present climate and an end-of-century future climate, at resolutions of N96 (130 km), N216 (60 km) and N512 (25 km), in order to study the impact of model resolution on high-impact climate features such as tropical cyclones. Increased model resolution is found to improve the simulated frequency of explicitly tracked tropical cyclones, and correlations of interannual variability in the North Atlantic and North West Pacific lie between 0.6 and 0.75. Improvements in the deficit of genesis in the eastern North Atlantic as resolution increases appear to be related to the representation of African Easterly Waves and the African Easterly Jet. However, the intensity of the modelled tropical cyclones as measured by 10 m wind speed remains weak, and there is no indication of convergence over this range of resolutions. In the future climate ensemble, there is a 50% reduction in the frequency of Southern Hemisphere tropical cyclones, while in the Northern Hemisphere there is a reduction in the North Atlantic and a shift in the Pacific, with peak intensities becoming more common in the Central Pacific. There is also a change in tropical cyclone intensities, with the future climate having fewer weak storms and proportionally more strong storms.

Relevance: 30.00%

Abstract:

Many of the next generation of global climate models will include aerosol schemes which explicitly simulate the microphysical processes that determine the particle size distribution. These models enable aerosol optical properties and cloud condensation nuclei (CCN) concentrations to be determined by fundamental aerosol processes, which should lead to a more physically based simulation of aerosol direct and indirect radiative forcings. This study examines the global variation in particle size distribution simulated by 12 global aerosol microphysics models to quantify model diversity and to identify any common biases against observations. Evaluation against size distribution measurements from a new European network of aerosol supersites shows that the mean model agrees quite well with the observations at many sites on the annual mean, but there are some seasonal biases common to many sites. In particular, at many of these European sites, the accumulation mode number concentration is biased low during winter and Aitken mode concentrations tend to be overestimated in winter and underestimated in summer. At high northern latitudes, the models strongly underpredict Aitken and accumulation particle concentrations compared to the measurements, consistent with previous studies that have highlighted the poor performance of global aerosol models in the Arctic. In the marine boundary layer, the models capture the observed meridional variation in the size distribution, which is dominated by the Aitken mode at high latitudes, with an increasing concentration of accumulation particles with decreasing latitude. Considering vertical profiles, the models reproduce the observed peak in total particle concentrations in the upper troposphere due to new particle formation, although modelled peak concentrations tend to be biased high over Europe. Overall, the multi-model-mean data set simulates the global variation of the particle size distribution with a good degree of skill, suggesting that most of the individual global aerosol microphysics models are performing well, although the large model diversity indicates that some models are in poor agreement with the observations. Further work is required to better constrain size-resolved primary and secondary particle number sources, and an improved understanding of nucleation and growth (e.g. the role of nitrate and secondary organics) will improve the fidelity of simulated particle size distributions.
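
The quantity being compared here, the particle number size distribution with its Aitken and accumulation modes, is commonly represented as a sum of lognormal modes. Below is a minimal sketch with generic, illustrative mode parameters that are not taken from any of the twelve models or the supersite measurements.

```python
import numpy as np

# Minimal sketch of a multi-modal lognormal particle size distribution, the
# quantity on which models and supersite measurements are compared. Mode
# parameters are generic illustrative values.

def dN_dlnD(D, N, Dg, sigma_g):
    """Lognormal number size distribution dN/dlnD for one mode.
    N: total number (cm-3), Dg: geometric mean diameter (nm), sigma_g: GSD."""
    return (N / (np.sqrt(2.0 * np.pi) * np.log(sigma_g))
            * np.exp(-0.5 * (np.log(D / Dg) / np.log(sigma_g)) ** 2))

D = np.logspace(0, 3, 200)                        # 1 nm to 1 um
aitken = dN_dlnD(D, N=1500.0, Dg=40.0, sigma_g=1.7)
accum = dN_dlnD(D, N=600.0, Dg=150.0, sigma_g=1.5)
total = aitken + accum

# Integrated number in each mode (trapezoidal rule over ln D) as a check.
print("Aitken N ~ %.0f cm-3" % np.trapz(aitken, np.log(D)))
print("Accum. N ~ %.0f cm-3" % np.trapz(accum, np.log(D)))
```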

Relevance: 30.00%

Abstract:

This paper evaluates the current status of global modeling of the organic aerosol (OA) in the troposphere and analyzes the differences between models as well as between models and observations. Thirty-one global chemistry transport models (CTMs) and general circulation models (GCMs) have participated in this intercomparison, in the framework of AeroCom phase II. The simulation of OA varies greatly between models in terms of the magnitude of primary emissions, secondary OA (SOA) formation, the number of OA species used (2 to 62), the complexity of OA parameterizations (gas-particle partitioning, chemical aging, multiphase chemistry, aerosol microphysics), and the OA physical, chemical and optical properties. The diversity of the global OA simulation results has increased since earlier AeroCom experiments, mainly due to the increasing complexity of the SOA parameterization in models, and the implementation of new, highly uncertain, OA sources. Diversity of over one order of magnitude exists in the modeled vertical distribution of OA concentrations, a spread that deserves a dedicated future study. Furthermore, although the OA / OC ratio depends on OA sources and atmospheric processing, and is important for model evaluation against OA and OC observations, it is resolved only by a few global models. The median global primary OA (POA) source strength is 56 Tg a−1 (range 34–144 Tg a−1) and the median SOA source strength (natural and anthropogenic) is 19 Tg a−1 (range 13–121 Tg a−1). Among the models that take into account the semi-volatile SOA nature, the median source is calculated to be 51 Tg a−1 (range 16–121 Tg a−1), much larger than the median value of the models that calculate SOA in a more simplistic way (19 Tg a−1; range 13–20 Tg a−1, with one model at 37 Tg a−1). The median atmospheric burden of OA is 1.4 Tg (24 models in the range of 0.6–2.0 Tg and 4 between 2.0 and 3.8 Tg), with a median OA lifetime of 5.4 days (range 3.8–9.6 days). In models that reported both OA and sulfate burdens, the median value of the OA/sulfate burden ratio is calculated to be 0.77; 13 models calculate a ratio lower than 1, and 9 models higher than 1. For 26 models that reported OA deposition fluxes, the median wet removal is 70 Tg a−1 (range 28–209 Tg a−1), which is on average 85% of the total OA deposition. Fine aerosol organic carbon (OC) and OA observations from continuous monitoring networks and individual field campaigns have been used for model evaluation. At urban locations, the model–observation comparison indicates missing knowledge on anthropogenic OA sources, both strength and seasonality. The combined model–measurement analysis suggests the existence of increased OA levels during summer due to biogenic SOA formation over large areas of the USA that can be of the same order of magnitude as the POA, even at urban locations, and contribute to the measured urban seasonal pattern. Global models are able to simulate the high secondary character of OA observed in the atmosphere as a result of SOA formation and POA aging, although the amount of OA present in the atmosphere remains largely underestimated, with a mean normalized bias (MNB) equal to −0.62 (−0.51) based on the comparison of all models against urban OC (OA) surface data, −0.15 (+0.51) when compared with remote measurements, and −0.30 for marine locations with OC data. The mean temporal correlations across all stations are low when compared with OC (OA) measurements: 0.47 (0.52) for urban stations, 0.39 (0.37) for remote stations, and 0.25 for marine stations with OC data. The combination of high (negative) MNB and higher correlation at urban stations, compared with the low MNB and lower correlation at remote sites, suggests that knowledge about the processes that govern aerosol processing, transport and removal, in addition to their sources, is important at the remote stations. There is no clear change in model skill with increasing model complexity with regard to OC or OA mass concentration. However, this complexity is needed in models in order to distinguish between anthropogenic and natural OA, as required for climate mitigation, and to calculate the impact of OA on climate accurately.
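
The two headline metrics, the mean normalized bias MNB = mean((model − obs)/obs) and the temporal correlation at a station, can be sketched as follows; the time series below are synthetic stand-ins for station OC data, not AeroCom output.

```python
import numpy as np

# Minimal sketch of the two evaluation metrics quoted above: the mean
# normalized bias, MNB = mean((model - obs) / obs), and the temporal
# correlation between modelled and observed OC at a station.
# The time series are synthetic stand-ins for station data.

rng = np.random.default_rng(3)
obs = rng.lognormal(mean=1.0, sigma=0.5, size=365)       # daily OC (ug m-3)
model = 0.5 * obs * rng.lognormal(0.0, 0.3, size=365)    # biased-low model

mnb = np.mean((model - obs) / obs)
r = np.corrcoef(model, obs)[0, 1]

print("MNB = %.2f" % mnb)          # negative: model underestimates OC
print("temporal correlation r = %.2f" % r)
```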

Relevance: 30.00%

Abstract:

A new frontier in weather forecasting is emerging as operational forecast models are now run at convection-permitting resolutions at many national weather services. However, this is not a panacea; significant systematic errors remain in the character of convective storms and rainfall distributions. The DYMECS project (Dynamical and Microphysical Evolution of Convective Storms) is taking a fundamentally new approach to evaluate and improve such models: rather than relying on a limited number of cases, which may not be representative, we have gathered a large database of 3D storm structures on 40 convective days using the Chilbolton radar in southern England. We have related these structures to storm life cycles derived by tracking features in the rainfall from the UK radar network, and compared them statistically to storm structures in the Met Office model, which we ran at horizontal grid lengths between 1.5 km and 100 m, including simulations with different subgrid mixing lengths. We also evaluated the scale and intensity of convective updrafts using a new radar technique. We find that the horizontal size of simulated convective storms and the updrafts within them is much too large at 1.5-km resolution, such that the convective mass flux of individual updrafts can be too large by an order of magnitude. The scale of precipitation cores and updrafts decreases steadily with decreasing grid length, as does the typical storm lifetime. The 200-m grid-length simulation with the standard mixing length performs best across all diagnostics, although a greater mixing length improves the representation of deep convective storms.
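
The per-updraft convective mass flux discussed above scales with updraft area, M = ρwA, which is why overly wide simulated updrafts inflate it so strongly. Below is a minimal sketch with illustrative numbers, not DYMECS retrievals.

```python
import math

# Minimal sketch of the convective mass flux of a single updraft,
# M = rho * w * A, showing how too-wide simulated updrafts inflate the
# per-updraft mass flux in proportion to their area. Values are illustrative.

def updraft_mass_flux(radius_m, w_ms, rho=0.7):
    """Mass flux (kg s-1) of a cylindrical updraft of given radius (m),
    vertical velocity (m s-1) and air density (kg m-3)."""
    area = math.pi * radius_m ** 2
    return rho * w_ms * area

observed = updraft_mass_flux(radius_m=1000.0, w_ms=5.0)    # ~1 km wide updraft
simulated = updraft_mass_flux(radius_m=3000.0, w_ms=5.0)   # too-wide model updraft

print("observed-like:  %.1e kg/s" % observed)
print("simulated-like: %.1e kg/s" % simulated)
print("ratio: %.1f (scales with area, i.e. radius squared)" % (simulated / observed))
```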

Relevance: 30.00%

Abstract:

Simulation of the lifting of dust from the planetary surface is of substantially greater importance on Mars than on Earth, due to the fundamental role that atmospheric dust plays in the former’s climate, yet the dust emission parameterisations used to date in martian global climate models (MGCMs) lag, understandably, behind their terrestrial counterparts in terms of sophistication. Recent developments in estimating surface roughness length over all martian terrains and in modelling atmospheric circulations at regional to local scales (less than O(100 km)) present an opportunity to formulate an improved wind stress lifting parameterisation. We have upgraded the conventional scheme by including the spatially varying roughness length in the lifting parameterisation in a fully consistent manner (thereby correcting a possible underestimation of the true threshold level for wind stress lifting), and used a modification to account for deviations from neutral stability in the surface layer. Following these improvements, it is found that wind speeds at typical MGCM resolution never reach the lifting threshold at most gridpoints: winds fall particularly short in the southern midlatitudes, where mean roughness is large. Sub-grid scale variability, manifested in both the near-surface wind field and the surface roughness, is then considered, and is found to be a crucial means of bridging the gap between model winds and thresholds. Both forms of small-scale variability contribute to the formation of dust emission ‘hotspots’: areas within the model gridbox with particularly favourable conditions for lifting, namely a smooth surface combined with strong near-surface gusts. Such small-scale emission could in fact be particularly influential on Mars, due both to the intense positive radiative feedbacks that can drive storm growth and to a strong hysteresis effect on saltation. By modelling this variability, dust lifting is predicted at the locations at which dust storms are frequently observed, including the flushing storm sources of Chryse and Utopia, and southern midlatitude areas from which larger storms tend to initiate, such as Hellas and Solis Planum. The seasonal cycle of emission, which includes a double-peaked structure in northern autumn and winter, also appears realistic. Significant increases to lifting rates are produced for any sensible choices of parameters controlling the sub-grid distributions used, but results are sensitive to the smallest scale of variability considered, which high-resolution modelling suggests should be O(1 km) or less. Use of such models in future will permit the use of a diagnosed (rather than prescribed) variable gustiness intensity, which should further enhance dust lifting in the southern hemisphere in particular.
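
Two ingredients of this kind of scheme, the roughness-dependent relation between a grid-scale wind and the surface friction velocity, and a sub-grid gust distribution used to estimate how often the lifting threshold is exceeded, can be sketched as follows. The Weibull gust model, the 1.0 m/s threshold friction velocity and the roughness value are illustrative assumptions, not the MGCM parameterisation itself.

```python
import math
import numpy as np

# Minimal sketch: (i) neutral log-law friction velocity for a given roughness
# length, and (ii) threshold exceedance by Weibull-distributed sub-grid gusts.
# All parameter values are illustrative assumptions.

KAPPA = 0.4            # von Karman constant

def friction_velocity(u, z, z0):
    """Neutral-stability friction velocity (m/s) from wind speed u (m/s)
    at height z (m) over roughness length z0 (m)."""
    return KAPPA * u / np.log(z / z0)

def exceedance_fraction(u_mean, z, z0, ustar_t, k_shape=2.0, n=100_000):
    """Fraction of Weibull-distributed sub-grid gusts (mean u_mean) whose
    friction velocity exceeds the lifting threshold ustar_t."""
    rng = np.random.default_rng(4)
    scale = u_mean / math.gamma(1.0 + 1.0 / k_shape)
    gusts = scale * rng.weibull(k_shape, n)
    return float(np.mean(friction_velocity(gusts, z, z0) > ustar_t))

# A 15 m/s grid-scale wind at 10 m gives a friction velocity below an assumed
# 1.0 m/s lifting threshold ...
print("u* of the mean wind: %.2f m/s" % friction_velocity(15.0, 10.0, 0.01))
# ... yet a sizeable fraction of sub-grid gusts exceeds that threshold.
print("gust exceedance fraction: %.2f" % exceedance_fraction(15.0, 10.0, 0.01, 1.0))
```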

Relevance: 30.00%

Abstract:

In this paper, Bond Graphs are employed to develop a novel mathematical model of conventional switched-mode DC-DC converters valid for both continuous and discontinuous conduction modes. A unique-causality bond graph model of the hybrid system is proposed, with the operation of the switch and the diode represented by a Modulated Transformer with a binary input and a resistor with fixed conductance causality. The operation of the diode is controlled using an if-then function within the model. The extracted hybrid model is implemented for Boost and Buck converters whose operation changes from CCM to DCM and back to CCM. The vector fields of the models show validity over a wide operating area, and comparison with PSPICE simulations of the converters reveals the high accuracy of the proposed model, with the Normalised Root Mean Square Error and the Maximum Absolute Error remaining adequately low. The model is also experimentally tested on a Buck topology.
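
The if-then treatment of the diode that lets a single model cover both CCM and DCM can be illustrated with a plain state-space simulation of an ideal Buck converter. This is a forward-Euler sketch, not the bond graph formulation of the paper, and all component values are hypothetical.

```python
# Minimal numerical sketch of the hybrid switch/diode logic for an ideal Buck
# converter. The diode is handled with an explicit if-then condition so the
# same model covers CCM and DCM. Plain forward-Euler integration; all
# component values are illustrative.

Vin, L, C, R = 12.0, 100e-6, 100e-6, 50.0     # volts, henries, farads, ohms
fs, duty = 20e3, 0.4                          # switching frequency (Hz), duty cycle
dt, t_end = 1e-7, 20e-3                       # Euler step and simulated time (s)

iL, vC = 0.0, 0.0                             # inductor current, capacitor voltage
for k in range(int(t_end / dt)):
    t = k * dt
    switch_on = (t * fs) % 1.0 < duty         # binary switch command
    if switch_on:
        diL = (Vin - vC) / L                  # switch conducts
    elif iL > 0.0:
        diL = -vC / L                         # diode conducts (freewheeling)
    else:
        diL, iL = 0.0, 0.0                    # diode blocks: DCM interval
    dvC = (iL - vC / R) / C
    iL += dt * diL
    vC += dt * dvC

# With this light load the converter operates in DCM, so the output settles
# above the ideal CCM value duty * Vin = 4.8 V.
print("output voltage after 20 ms: %.2f V" % vC)
```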

Relevance: 30.00%

Abstract:

Instrumental observations, palaeo-proxies, and climate models suggest significant decadal variability within the North Atlantic subpolar gyre (NASPG). However, a poorly sampled observational record and a diversity of model behaviours mean that the precise nature and mechanisms of this variability are unclear. Here, we analyse an exceptionally large multi-model ensemble of 42 present-generation climate models to test whether NASPG mean state biases systematically affect the representation of decadal variability. Temperature and salinity biases in the Labrador Sea co-vary and influence whether density variability is controlled by temperature or salinity variations. Ocean horizontal resolution is a good predictor of the biases and the location of the dominant dynamical feedbacks within the NASPG. However, we find no link to the spectral characteristics of the variability. Our results suggest that the mean state and mechanisms of variability within the NASPG are not independent. This represents an important caveat for decadal predictions using anomaly-assimilation methods.

Relevance: 30.00%

Abstract:

This paper investigates the challenge of representing structural differences in river channel cross-section geometry for regional to global scale river hydraulic models and the effect this can have on simulations of wave dynamics. Classically, channel geometry is defined using data, yet at larger scales the necessary information and model structures do not exist to take this approach. We therefore propose a fundamentally different approach where the structural uncertainty in channel geometry is represented using a simple parameterization, which could then be estimated through calibration or data assimilation. This paper first outlines the development of a computationally efficient numerical scheme to represent generalised channel shapes using a single parameter, which is then validated using a simple straight channel test case and shown to predict wetted perimeter to within 2% for the channels tested. An application to the River Severn, UK is also presented, along with an analysis of model sensitivity to channel shape, depth and friction. The channel shape parameter was shown to improve model simulations of river level, particularly for more physically plausible channel roughness and depth parameter ranges. Calibrating channel Manning’s coefficient in a rectangular channel provided similar water level simulation accuracy in terms of Nash-Sutcliffe efficiency to a model where friction and shape or depth were calibrated. However, the calibrated Manning coefficient in the rectangular channel model was ~2/3 greater than the likely physically realistic value for this reach and this erroneously slowed wave propagation times through the reach by several hours. Therefore, for large scale models applied in data sparse areas, calibrating channel depth and/or shape may be preferable to assuming a rectangular geometry and calibrating friction alone.
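
One way to represent generalised channel shapes with a single parameter is a power-law bank profile, z(x) = H(2|x|/W)^s, which spans triangular (s = 1) through parabolic (s = 2) to near-rectangular (large s) sections. This family is offered purely for illustration and is not necessarily the exact scheme developed in the paper; the sketch below computes flow area and wetted perimeter for a given shape parameter.

```python
import numpy as np

# Minimal sketch of a one-parameter channel cross-section family: a power-law
# bank profile z(x) = H * (2|x|/W)**s (illustrative choice, not necessarily
# the paper's scheme). Flow area and wetted perimeter follow by quadrature.

def channel_properties(depth, width, bankfull_depth, s, n=2000):
    """Flow area (m2) and wetted perimeter (m) of a power-law channel of top
    width `width`, bankfull depth `bankfull_depth` and shape parameter `s`,
    for a water depth `depth` <= bankfull_depth."""
    half_top = 0.5 * width * (depth / bankfull_depth) ** (1.0 / s)
    x = np.linspace(0.0, half_top, n)
    z = bankfull_depth * (2.0 * x / width) ** s        # bed elevation
    area = 2.0 * np.trapz(depth - z, x)
    dzdx = np.gradient(z, x)
    perimeter = 2.0 * np.trapz(np.sqrt(1.0 + dzdx ** 2), x)
    return area, perimeter

for s in (1.0, 2.0, 10.0):
    a, p = channel_properties(depth=2.0, width=30.0, bankfull_depth=2.5, s=s)
    print("s = %4.1f: area = %6.1f m2, wetted perimeter = %5.1f m" % (s, a, p))
```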

Relevance: 30.00%

Abstract:

How tropical cyclone (TC) activity in the northwestern Pacific might change in a future climate is assessed using multidecadal Atmospheric Model Intercomparison Project (AMIP)-style and time-slice simulations with the ECMWF Integrated Forecast System (IFS) at 16-km and 125-km global resolution. Both models reproduce many aspects of the present-day TC climatology and variability well, although the 16-km IFS is far more skillful in simulating the full intensity distribution and genesis locations, including their changes in response to El Niño–Southern Oscillation. Both IFS models project a small change in TC frequency at the end of the twenty-first century related to distinct shifts in genesis locations. In the 16-km IFS, this shift is southward and is likely driven by the southeastward penetration of the monsoon trough/subtropical high circulation system and the southward shift in activity of the synoptic-scale tropical disturbances in response to the strengthening of deep convective activity over the central equatorial Pacific in a future climate. The 16-km IFS also projects about a 50% increase in the power dissipation index, mainly due to significant increases in the frequency of the more intense storms, which is comparable to the natural variability in the model. Based on composite analysis of large samples of supertyphoons, both the development rate and the peak intensities of these storms increase in a future climate, which is consistent with their tendency to develop more to the south, within an environment that is thermodynamically more favorable for faster development and higher intensities. Coherent changes in the vertical structure of supertyphoon composites show system-scale amplification of the primary and secondary circulations with signs of contraction, a deeper warm core, and an upward shift in the outflow layer and the frequency of the most intense updrafts. Considering the large differences in the projections of TC intensity change between the 16-km and 125-km IFS, this study further emphasizes the need for high-resolution modeling in assessing potential changes in TC activity.
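
The power dissipation index quoted above accumulates the cube of each storm's maximum surface wind over its lifetime and sums over all storms, PDI = sum over storms of sum over time of v_max^3 * dt. Below is a minimal sketch with synthetic tracks; the storm counts, intensity distributions and 6-hourly sampling are illustrative, not IFS output.

```python
import numpy as np

# Minimal sketch of the power dissipation index (PDI): the cube of the maximum
# surface wind accumulated over each storm's lifetime and summed over all
# storms in a season. The "tracks" below are synthetic stand-ins.

rng = np.random.default_rng(5)
dt_hours = 6.0

def season_pdi(n_storms, mean_vmax):
    pdi = 0.0
    for _ in range(n_storms):
        lifetime = rng.integers(8, 40)                                      # 6-hourly points
        vmax = rng.gamma(shape=4.0, scale=mean_vmax / 4.0, size=lifetime)   # m/s
        pdi += np.sum(vmax ** 3) * dt_hours * 3600.0                        # m3 s-2
    return pdi

present = season_pdi(n_storms=26, mean_vmax=35.0)
future = season_pdi(n_storms=26, mean_vmax=40.0)   # same count, stronger storms
print("present-day PDI: %.2e" % present)
print("future/present ratio: %.2f" % (future / present))
```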