951 results for Sellen, Abigail J.: The myth of the paperless office


Relevance:

100.00%

Publisher:

Abstract:

Many climate models have problems simulating Indian summer monsoon rainfall and its variability, resulting in considerable uncertainty in future projections. Problems may relate to many factors, such as local effects of the formulation of physical parametrisation schemes, while common model biases that develop elsewhere within the climate system may also be important. Here we examine the extent and impact of cold sea surface temperature (SST) biases developing in the northern Arabian Sea in the CMIP5 multi-model ensemble, where such SST biases are shown to be common. Such biases have previously been shown to reduce monsoon rainfall in the Met Office Unified Model (MetUM) by weakening moisture fluxes incident upon India. The Arabian Sea SST biases in CMIP5 models consistently develop in winter, via strengthening of the winter monsoon circulation, and persist into spring and summer. A clear relationship exists between Arabian Sea cold SST bias and weak monsoon rainfall in CMIP5 models, similar to effects in the MetUM. Part of this effect may also relate to other factors, such as forcing of the early monsoon by spring-time excessive equatorial precipitation. Atmosphere-only future time-slice experiments show that Arabian Sea cold SST biases have the potential to weaken future monsoon rainfall increases by limiting moisture flux acceleration through non-linearity of the Clausius-Clapeyron relationship. Analysis of CMIP5 model future scenario simulations suggests that, while such effects are likely small compared to other sources of uncertainty, models with large Arabian Sea cold SST biases suppress the range of potential outcomes for changes to future early monsoon rainfall.
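
The projected weakening operates through the non-linearity of the Clausius-Clapeyron relationship: saturation vapour pressure grows roughly exponentially with temperature (about 7% per kelvin at typical sea surface temperatures), so a cold SST bias reduces the absolute increase in saturation humidity, and hence in moisture flux, obtained for a given warming. A minimal Python sketch of this effect using the standard Bolton (1980) approximation; the SST values and the 3 K warming are illustrative and not taken from the study.

    import numpy as np

    def saturation_vapour_pressure(t_celsius):
        """Bolton (1980) approximation to saturation vapour pressure (hPa)."""
        return 6.112 * np.exp(17.67 * t_celsius / (t_celsius + 243.5))

    # Illustrative SSTs: an unbiased Arabian Sea value and a 2 K cold-biased one.
    sst_true, sst_biased, warming = 28.0, 26.0, 3.0  # degrees Celsius

    for label, sst in [("unbiased", sst_true), ("cold-biased", sst_biased)]:
        de = saturation_vapour_pressure(sst + warming) - saturation_vapour_pressure(sst)
        print(f"{label}: increase in e_s for +{warming} K warming = {de:.2f} hPa")
    # The cold-biased state gives a smaller absolute increase in saturation vapour
    # pressure, hence a weaker increase in moisture flux for the same warming.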

Relevance:

100.00%

Publisher:

Abstract:

Many factors, both mesoscale and larger scale, often come together for a particular convective initiation to take place. The authors describe a modeling study of a case from the Convective Storms Initiation Project (CSIP) in which a single thunderstorm formed behind a front in the southern United Kingdom. The key features of the case were a tongue of low-level high-θw air associated with a forward-sloping split front (overrunning lower-θw air above), a convergence line, and a “lid” of high static stability air, beneath which the shower was initially constrained before later breaking through. In this paper, the authors analyze the initiation of the storm, which can be traced back to a region of high ground (Dartmoor) at around 0700 UTC, in more detail using model sensitivity studies with the Met Office Unified Model (MetUM). It is established that the convergence line was initially caused by roughness effects but had a significant thermal component later. Dartmoor had a key role in the development of the thunderstorm. A period of asymmetric flow over the high ground, with stronger low-level descent in the lee, led to a hole in a layer of low-level clouds downstream. The surface solar heating through this hole, in combination with the tongue of low-level high-θw air associated with the front, caused the shower to initiate with sufficient lifting to enable it later to break through the lid.

Relevance:

100.00%

Publisher:

Abstract:

The formulation and performance of the Met Office visibility analysis and prediction system are described. The visibility diagnostic within the limited-area Unified Model is a function of humidity and a prognostic aerosol content. The aerosol model includes advection, industrial and general urban sources, plus boundary-layer mixing and removal by rain. The assimilation is a three-dimensional variational scheme in which the visibility observation operator is a very nonlinear function of humidity, aerosol and temperature. A quality control scheme for visibility data is included. Visibility observations can give rise to humidity increments of significant magnitude compared with the direct impact of humidity observations. We present the results of sensitivity studies which show the contribution of different components of the system to improved skill in visibility forecasts. Visibility assimilation is most important within the first 6-12 hours of the forecast and for visibilities below 1 km, while modelling of aerosol sources and advection is important for slightly higher visibilities (1-5 km) and is still significant at longer forecast times.
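
The visibility observation operator is described above only as a very nonlinear function of humidity, aerosol and temperature. As a rough illustration of that kind of non-linearity (not the MetUM's actual formulation), the sketch below combines a hypothetical aerosol extinction law, in which extinction grows with aerosol mass and swells sharply as relative humidity approaches saturation, with Koschmieder's relation between extinction and visibility; all constants are placeholders.

    import numpy as np

    def extinction_per_km(aerosol_mass, rel_humidity, k=0.005, growth=0.6):
        """Hypothetical extinction coefficient (per km): proportional to aerosol mass
        (micrograms per cubic metre), swelling as relative humidity approaches 1."""
        humidity_factor = (1.0 - np.minimum(rel_humidity, 0.99)) ** (-growth)
        return k * aerosol_mass * humidity_factor

    def visibility_km(aerosol_mass, rel_humidity, liminal_contrast=0.02):
        """Koschmieder's law: range at which contrast falls to the liminal value."""
        return -np.log(liminal_contrast) / extinction_per_km(aerosol_mass, rel_humidity)

    print(visibility_km(20.0, 0.60))   # drier air: visibility of a few tens of km
    print(visibility_km(20.0, 0.97))   # near-saturated air: strongly reduced visibility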

Relevance:

100.00%

Publisher:

Abstract:

We compare the characteristics of synthetic European droughts generated by the HiGEM coupled climate model run with present-day atmospheric composition with observed drought events extracted from the CRU TS3 data set. The results demonstrate consistency in both the rate of drought occurrence and the spatiotemporal structure of the events. Estimates of the probability density functions for event area, duration and severity are shown to be similar with confidence > 90%. Encouragingly, HiGEM is shown to replicate the extreme tails of the observed distributions and thus the most damaging European drought events. The soil moisture state is shown to play an important role in drought development. Once a large-scale drought has been initiated, it is found to be 50% more likely to continue if the local soil moisture is below the 40th percentile. In response to increased concentrations of atmospheric CO2, the modelled droughts are found to increase in duration, area and severity. The drought response can be largely attributed to temperature-driven changes in relative humidity. (HiGEM is based on the latest climate configuration of the Met Office Hadley Centre Unified Model, HadGEM1, with the horizontal resolution increased to 1.25 x 0.83 degrees in longitude and latitude in the atmosphere and 1/3 x 1/3 degrees in the ocean.)
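
The persistence result is a conditional probability: once a large-scale drought is under way, it is about 50% more likely to continue when local soil moisture sits below the 40th percentile. A minimal sketch of how such a conditional statistic could be estimated from monthly drought and soil-moisture series; the random placeholder arrays stand in for a real drought catalogue and soil-moisture record.

    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical monthly series: drought[t] is True while a large-scale event persists,
    # soil_moisture[t] is the co-located soil-moisture value for that month.
    drought = rng.random(600) < 0.3
    soil_moisture = rng.random(600)

    dry = soil_moisture < np.percentile(soil_moisture, 40)
    active = drought[:-1]          # months with an event already under way
    continues = drought[1:]        # whether the event is still active the next month

    p_dry = continues[active & dry[:-1]].mean()
    p_other = continues[active & ~dry[:-1]].mean()
    print(f"continuation probability, dry soils: {p_dry:.2f}")
    print(f"continuation probability, other:     {p_other:.2f}")
    print(f"relative increase: {100 * (p_dry / p_other - 1):.0f}%")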

Relevance:

100.00%

Publisher:

Abstract:

On 8 January 2005 the city of Carlisle in north-west England was severely flooded following 2 days of almost continuous rain over the nearby hills. Orographic enhancement of the rain through the seeder–feeder mechanism led to the very high rainfall totals. This paper shows the impact of running the Met Office Unified Model (UM) with a grid spacing of 4 and 1 km compared to the 12 km available at the time of the event. These forecasts, and forecasts from the Nimrod nowcasting system, were fed into the Probability Distributed Model (PDM) to predict river flow at the outlets of two catchments important for flood warning. The results show the benefit of increased resolution in the UM, the benefit of coupling the high-resolution rainfall forecasts to the PDM, and the improvement in timeliness of flood warning that might have been possible. Copyright © 2008 Royal Meteorological Society
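
River flow was predicted by feeding the rainfall forecasts into the Probability Distributed Model (PDM). A highly simplified sketch of the core PDM idea, assuming the usual Pareto distribution of point storage capacities, under which the saturated (fast-runoff-producing) fraction of the catchment grows non-linearly as the basin wets up; the parameter values are placeholders, and drainage, evaporation and flow routing are ignored.

    def saturated_fraction(c_star, c_max=100.0, b=0.5):
        """Pareto distribution of store capacities: fraction of the catchment whose
        point storage capacity is below the current critical capacity c_star (mm)."""
        return 1.0 - (1.0 - min(c_star, c_max) / c_max) ** b

    def runoff_step(c_star, rainfall, c_max=100.0, b=0.5):
        """One crude time step: rain on already-saturated stores becomes fast runoff;
        the remainder raises the critical capacity (i.e. wets the catchment up)."""
        fast_runoff = saturated_fraction(c_star, c_max, b) * rainfall
        return min(c_star + rainfall, c_max), fast_runoff

    c_star = 40.0                            # placeholder initial wetness (mm)
    for rain in [2.0, 10.0, 25.0, 25.0]:     # hypothetical rainfall per step (mm)
        c_star, q = runoff_step(c_star, rain)
        print(f"rain {rain:5.1f} mm -> fast runoff {q:5.1f} mm")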

Relevance:

100.00%

Publisher:

Abstract:

It is becoming increasingly important to be able to verify the spatial accuracy of precipitation forecasts, especially with the advent of high-resolution numerical weather prediction (NWP) models. In this article, the fractions skill score (FSS) approach has been used to perform a scale-selective evaluation of precipitation forecasts during 2003 from the Met Office mesoscale model (12 km grid length). The investigation shows how skill varies with spatial scale, the scales over which the data assimilation (DA) adds most skill, and how the loss of that skill is dependent on both the spatial scale and the rainfall coverage being examined. Although these results come from a specific model, they demonstrate how this verification approach can provide a quantitative assessment of the spatial behaviour of new finer-resolution models and DA techniques.
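
The fractions skill score compares forecast and observed rainfall-exceedance fractions computed over square neighbourhoods: FSS = 1 - MSE / MSE_ref, where MSE_ref is the sum of the mean squared forecast and observed fractions. A minimal sketch for fields on a regular grid; the synthetic rain field, threshold and neighbourhood sizes are illustrative only.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def fractions_skill_score(forecast, observed, threshold, neighbourhood):
        """Fractions skill score of neighbourhood exceedance fractions."""
        f = uniform_filter((forecast >= threshold).astype(float), size=neighbourhood, mode="constant")
        o = uniform_filter((observed >= threshold).astype(float), size=neighbourhood, mode="constant")
        mse = np.mean((f - o) ** 2)
        mse_ref = np.mean(f ** 2) + np.mean(o ** 2)
        return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan

    rng = np.random.default_rng(1)
    obs = rng.gamma(0.5, 2.0, size=(200, 200))    # synthetic rain field (mm/h)
    fcst = np.roll(obs, 5, axis=1)                # the same field displaced by 5 grid points
    for n in (1, 5, 11, 21):
        print(n, round(fractions_skill_score(fcst, obs, threshold=1.0, neighbourhood=n), 3))
    # Skill rises as the neighbourhood grows, which is the scale-selective behaviour
    # that the article exploits to assess where data assimilation adds skill.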

Relevance:

100.00%

Publisher:

Abstract:

We present the results of simulations carried out with the Met Office Unified Model at 12km, 4km and 1.5km resolution for a large region centred on West Africa, using several different representations of the convection processes. These span the range of resolutions from much coarser than the size of the convective processes to the cloud-system resolving, and thus encompass the intermediate "grey zone". The diurnal cycle in the extent of convective regions in the models is tested against observations from the Geostationary Earth Radiation Budget instrument on Meteosat-8. By this measure, the two best-performing simulations are a 12km model without convective parametrization, using Smagorinsky-style sub-grid-scale mixing in all three dimensions, and a 1.5km simulation with two-dimensional Smagorinsky mixing. Of these, the 12km model produces a better match to the magnitude of the total cloud fraction, but the 1.5km simulation gives better timing for its peak value. The results suggest that the previously reported improvement in the representation of the diurnal cycle of convective organisation in the 4km model, compared to the standard 12km configuration, is principally a result of the convection scheme employed rather than the improved resolution per se. The details of and implications for high-resolution model simulations are discussed.
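
The evaluation measure is the diurnal cycle in the areal extent of convective regions, compared against the GERB observations. A minimal sketch of forming a mean diurnal composite of convective fraction from hourly fields and extracting its amplitude and time of peak, the two aspects contrasted between the 12km and 1.5km simulations above; the hourly series here is a synthetic placeholder.

    import numpy as np

    rng = np.random.default_rng(2)
    # Hypothetical hourly convective-cloud fraction over the analysis domain for 30 days.
    hours = np.arange(30 * 24)
    conv_fraction = (0.2 + 0.1 * np.sin(2 * np.pi * (hours % 24 - 9) / 24)
                     + 0.02 * rng.standard_normal(hours.size))

    composite = conv_fraction.reshape(-1, 24).mean(axis=0)   # mean diurnal cycle (24 values)
    peak_hour = int(np.argmax(composite))
    amplitude = composite.max() - composite.min()
    print(f"peak at {peak_hour:02d} UTC, diurnal amplitude {amplitude:.3f}")
    # Comparing peak hour and amplitude between model output and GERB-derived fractions
    # gives the timing and magnitude measures discussed above.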

Relevance:

100.00%

Publisher:

Abstract:

The RAPID-MOCHA array has observed the Atlantic Meridional Overturning Circulation (AMOC) at 26.5°N since 2004. During 2009/2010, there was a transient 30% weakening of the AMOC, driven by anomalies in geostrophic and Ekman transports. Here, we use simulations based on the Met Office Forecast Ocean Assimilation Model (FOAM) to diagnose the relative importance of atmospheric forcings and internal ocean dynamics in driving the anomalous geostrophic circulation of 2009/10. Data-assimilating experiments with FOAM accurately reproduce the mean strength and depth of the AMOC at 26.5°N. In addition, agreement between simulated and observed stream functions in the deep ocean is improved when we calculate the AMOC using a method that approximates the RAPID observations. The main features of the geostrophic circulation anomaly are captured by an ensemble of simulations without data assimilation. These model results suggest that the atmosphere played a dominant role in driving recent interannual variability of the AMOC.
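
The AMOC strength quoted from such simulations is usually the maximum over depth of the meridional overturning stream function, the zonally and vertically accumulated northward transport across the section. A minimal sketch of that basic diagnostic on a regular model section (not the RAPID-style decomposition into Ekman, western-boundary and geostrophic interior components mentioned above); the velocity field and grid spacings are hypothetical placeholders.

    import numpy as np

    rng = np.random.default_rng(3)
    # Hypothetical meridional velocity section at 26.5N: v[depth, longitude] in m/s,
    # with uniform cell widths dx (m) and layer thicknesses dz (m).
    n_z, n_x = 50, 120
    v = 0.02 * rng.standard_normal((n_z, n_x))
    dx = np.full(n_x, 50e3)
    dz = np.full(n_z, 100.0)

    transport_per_level = (v * dx).sum(axis=1) * dz     # northward transport per level (m^3/s)
    psi = np.cumsum(transport_per_level) / 1e6          # stream function in Sv, integrated downwards
    print(f"AMOC strength {psi.max():.1f} Sv at level {int(np.argmax(psi))}")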

Relevance:

100.00%

Publisher:

Abstract:

There are significant discrepancies between observational datasets of Arctic sea ice concentrations covering the last three decades, which result in differences of over 20% in Arctic summer sea ice extent/area and 5%–10% in winter. Previous modeling studies have shown that idealized sea ice anomalies have the potential to make a substantial impact on climate. In this paper, this theory is further developed by performing a set of simulations using the Hadley Centre Atmospheric Model version 3 (HadAM3). The model was driven with monthly climatologies of sea ice fractions derived from three of these records to investigate potential implications of sea ice inaccuracies for climate simulations. The standard sea ice climatology from the Met Office provided a control. This study focuses on the effects of actual inaccuracies of concentration retrievals, which vary spatially and are larger in summer than winter. The smaller sea ice discrepancies in winter have a much larger influence on climate than the much greater summer sea ice differences. High sensitivity to sea ice prescription was observed, even though no SST feedbacks were included. Significant effects on surface fields were observed in the Arctic, North Atlantic, and North Pacific. Arctic average surface air temperature anomalies in winter vary by 2.5°C, and locally exceed 12°C. Arctic mean sea level pressure varies by up to 5 mb locally. Anomalies extend to 45°N over North America and Eurasia but not to lower latitudes, with limited changes in circulation above the boundary layer. No statistically significant impact on climate variability was simulated, in terms of the North Atlantic Oscillation. Results suggest that the uncertainty in summer sea ice prescription is not critical but that winter values require greater accuracy, with the caveat that the influences of ocean–sea ice feedbacks were not included in this study.

Relevance:

100.00%

Publisher:

Abstract:

The warm conveyor belt (WCB) of an extratropical cyclone generally splits into two branches. One branch (WCB1) turns anticyclonically into the downstream upper-level tropospheric ridge, while the second branch (WCB2) wraps cyclonically around the cyclone centre. Here, the WCB split in a typical North Atlantic cold-season cyclone is analysed using two numerical models: the Met Office Unified Model and the COSMO model. The WCB flow is defined using off-line trajectory analysis. The two models represent the WCB split consistently. The split occurs early in the evolution of the WCB, with WCB1 experiencing maximum ascent at lower latitudes and with higher moisture content than WCB2. WCB1 ascends abruptly along the cold front, where the resolved ascent rates are greatest and there is also line convection. In contrast, WCB2 remains at lower levels for longer before undergoing saturated large-scale ascent over the system's warm front. The greater moisture in the WCB1 inflow results in a greater net potential temperature change from latent heat release, which determines the final isentropic level of each branch. WCB1 also exhibits lower outflow potential vorticity values than WCB2. Complementary diagnostics in the two models are utilised to study the influence of individual diabatic processes on the WCB. Total diabatic heating rates along the WCB branches are comparable in the two models, with microphysical processes in the large-scale cloud schemes being the major contributor to this heating. However, the different convective parameterisation schemes used by the models cause significantly different contributions to the total heating. These results have implications for studies on the influence of the WCB outflow on Rossby wave evolution and breaking. Key aspects are the net potential temperature change and the isentropic level of the outflow, which together will influence the relative mass going into each WCB branch and the associated negative PV anomalies in the tropopause-level flow.
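
Two of the quantities highlighted, the net potential temperature change from latent heating along each branch and the isentropic level of the outflow, are straightforward to diagnose from off-line trajectory output. A minimal sketch assuming hypothetical arrays of potential temperature sampled along each trajectory; the synthetic numbers merely mimic WCB1 ascending from a warmer, moister inflow than WCB2.

    import numpy as np

    rng = np.random.default_rng(4)

    def summarise(theta, label):
        """theta: (trajectories, times) potential temperature in K along each trajectory."""
        net_heating = theta[:, -1] - theta[:, 0]   # net change in theta per trajectory
        print(f"{label}: mean net d(theta) = {net_heating.mean():.1f} K, "
              f"median outflow theta = {np.median(theta[:, -1]):.1f} K")

    # Hypothetical 48 h trajectory sets for the two branches (hourly sampling).
    wcb1 = 300.0 + np.cumsum(np.abs(rng.normal(0.5, 0.2, size=(200, 48))), axis=1)
    wcb2 = 298.0 + np.cumsum(np.abs(rng.normal(0.4, 0.2, size=(200, 48))), axis=1)
    summarise(wcb1, "WCB1")
    summarise(wcb2, "WCB2")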

Relevance:

100.00%

Publisher:

Abstract:

The scientific understanding of the Earth’s climate system, including the central question of how the climate system is likely to respond to human-induced perturbations, is comprehensively captured in GCMs and Earth System Models (ESMs). Diagnosing the simulated climate response, and comparing responses across different models, is crucially dependent on transparent assumptions of how the GCM/ESM has been driven – especially because the implementation can involve subjective decisions and may differ between modelling groups performing the same experiment. This paper outlines the climate forcings and setup of the Met Office Hadley Centre ESM, HadGEM2-ES, for the CMIP5 set of centennial experiments. We document the prescribed greenhouse gas concentrations, aerosol precursors, stratospheric and tropospheric ozone assumptions, as well as the implementation of land-use change and natural forcings for the HadGEM2-ES historical and future experiments following the Representative Concentration Pathways. In addition, we provide details of how HadGEM2-ES ensemble members were initialised from the control run and how the palaeoclimate and AMIP experiments, as well as the “emission-driven” RCP experiments, were performed.

Relevance:

100.00%

Publisher:

Abstract:

Useful probabilistic climate forecasts on decadal timescales should be reliable (i.e. forecast probabilities match the observed relative frequencies), but this is seldom examined. This paper assesses a necessary condition for reliability, that the ratio of ensemble spread to forecast error be close to one, for seasonal to decadal sea surface temperature retrospective forecasts from the Met Office Decadal Prediction System (DePreSys). Factors which may affect reliability are diagnosed by comparing this spread-error ratio for an initial condition ensemble and two perturbed physics ensembles, for both initialized and uninitialized predictions. At lead times of less than 2 years, the initialized ensembles tend to be under-dispersed, and hence produce overconfident and therefore unreliable forecasts. For longer lead times, all three ensembles are predominantly over-dispersed. Such over-dispersion is primarily related to excessive inter-annual variability in the climate model. These findings highlight the need to carefully evaluate simulated variability in seasonal and decadal prediction systems.
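
The reliability condition assessed is that the ratio of ensemble spread to the error of the ensemble mean be close to one: ratios below one indicate under-dispersion (overconfidence), ratios above one indicate over-dispersion. A minimal sketch of the diagnostic computed over a set of hindcast start dates; the ensembles and verification series are synthetic placeholders, not DePreSys output.

    import numpy as np

    def spread_error_ratio(ensemble, verification):
        """ensemble: (start_dates, members); verification: (start_dates,).
        Ratio of mean ensemble spread to the RMSE of the ensemble mean."""
        spread = ensemble.std(axis=1, ddof=1).mean()
        rmse = np.sqrt(((ensemble.mean(axis=1) - verification) ** 2).mean())
        return spread / rmse

    rng = np.random.default_rng(5)
    signal = rng.standard_normal(200)                 # predictable component
    truth = signal + rng.standard_normal(200)         # verification = signal + unpredictable error
    under = signal[:, None] + 0.5 * rng.standard_normal((200, 10))  # spread smaller than the error
    over = signal[:, None] + 2.0 * rng.standard_normal((200, 10))   # spread larger than the error
    print(spread_error_ratio(under, truth))   # < 1: under-dispersed, overconfident forecasts
    print(spread_error_ratio(over, truth))    # > 1: over-dispersed forecasts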

Relevance:

100.00%

Publisher:

Abstract:

This research is a preliminary study which examines the impact of connectivity on value in the managed office and conventional office sectors. The research is based on an extensive literature review, online survey and case studies carried out during 2005. The research shows, for the first time in the UK, how connectivity can increase value in office buildings, and how technology is priced in the market place.

Relevance:

100.00%

Publisher:

Abstract:

The paper analyses the evolving corporate real estate supply chain and the interaction of this evolution with emerging business models in the serviced office sector. An enhanced model of the corporate real estate portfolio is first presented, incorporating vacant, alienated and transitory space. It is argued that the serviced office sector has evolved in response to an increasingly diverse corporate real estate portfolio. For the peripheral corporate real estate portfolio, the core serviced workspace product provides the ability to rapidly acquire high-quality workspace and associated support services on very flexible terms. Whilst it is arguably a beta product, the core workspace offer is now being augmented by managed office or back-to-back leases, which enable clients to complement the advantages of serviced offices with a wider choice of premises. Joint venture business models are aligned with solutions to problems of vacant space.