Abstract:
The first multi-model study to estimate the predictability of a boreal Sudden Stratospheric Warming (SSW) is performed using five NWP systems. During the 2012-2013 boreal winter, anomalous upward-propagating planetary wave activity was observed towards the end of December, which was followed by a rapid deceleration of the westerly circulation around 2 January 2013; on 7 January 2013 the zonal-mean zonal wind at 60°N and 10 hPa reversed to easterly. This stratospheric dynamical activity was followed by an equatorward shift of the tropospheric jet stream and by a high-pressure anomaly over the North Atlantic, which resulted in severe cold conditions in the UK and Northern Europe. Most of the five models predicted the SSW event 10 days in advance. However, when the models were initialized 15 days in advance of the SSW, only some ensemble members in most of the models predicted the weakening of the westerly wind. Further dynamical analysis shows that this event was characterized by anomalous planetary wave-1 amplification followed by anomalous wave-2 amplification in the stratosphere, which resulted in a split vortex between 6 and 8 January 2013. The models had some success in reproducing the wave-1 activity when initialized 15 days in advance, but they generally failed to reproduce the wave-2 activity during the final days of the event. Detailed analysis shows that the models have reasonably good skill in forecasting the tropospheric blocking features that stimulate wave-2 amplification in the troposphere, but limited skill in reproducing the wave-2 amplification in the stratosphere.
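The reversal criterion above (zonal-mean zonal wind at 60°N and 10 hPa turning easterly) is the standard rule used to date an SSW. Below is a minimal sketch of how such a central date could be identified from a daily wind time series; the function name and interface are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def ssw_central_date(u_wind, dates):
    """Return the first date on which the daily zonal-mean zonal wind at
    60N and 10 hPa (u_wind, m/s) reverses from westerly to easterly.

    A minimal sketch of the wind-reversal criterion only; it omits the
    usual extra conditions (e.g. excluding stratospheric final warmings).
    """
    u = np.asarray(u_wind, dtype=float)
    for i in range(1, len(u)):
        if u[i - 1] >= 0.0 and u[i] < 0.0:
            return dates[i]
    return None  # no reversal in this record

# Illustrative synthetic data mimicking the early-January 2013 deceleration.
days = [f"2013-01-{d:02d}" for d in range(1, 11)]
u = [22.0, 15.0, 9.0, 5.0, 2.0, 0.5, -3.0, -8.0, -6.0, -2.0]
print(ssw_central_date(u, days))  # -> 2013-01-07
```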
Abstract:
Human-induced land-use change (LUC) alters the biogeophysical characteristics of the land surface, influencing the surface energy balance. The level of atmospheric CO2 is expected to increase in the coming century and beyond, modifying temperature and precipitation patterns and altering the distribution and physiology of natural vegetation. It is therefore important to constrain how CO2-induced climate and vegetation change may influence the regional extent to which LUC alters climate. This sensitivity study uses the HadCM3 coupled climate model under a range of equilibrium forcings to show that the impact of LUC declines under increasing atmospheric CO2, specifically in temperate and boreal regions. A surface energy balance analysis is used to diagnose how these changes occur. In Northern Hemisphere winter this pattern is attributed in part to the decline in snow cover, and in summer to a reduction in latent cooling at higher levels of CO2. The CO2-induced change in natural vegetation distribution is also shown to play a significant role: simulations run at elevated CO2 but with present-day vegetation show a significantly increased sensitivity to LUC, driven in part by an increase in latent cooling. This study shows that modelling the impact of LUC requires accurate simulation of CO2-driven changes in precipitation and snowfall, and the incorporation of an accurate, dynamic vegetation distribution.
Abstract:
Terrain-following coordinates are widely used in operational models, but the cut-cell method has been proposed as an alternative that can more accurately represent atmospheric dynamics over steep orography. Because the type of grid is usually chosen during model implementation, comparing the accuracy of different grids has normally required comparing different models. In contrast, here a C-grid finite-volume model enables a like-for-like comparison of terrain-following and cut-cell grids. A series of standard two-dimensional tests using idealised terrain are performed: tracer advection in a prescribed horizontal velocity field, a test starting from resting initial conditions, and orographically induced gravity waves described by nonhydrostatic dynamics. In addition, three new tests are formulated: a more challenging resting-atmosphere case, and two new advection tests having a velocity field that is everywhere tangential to the terrain-following coordinate surfaces. These new tests present a challenge on cut-cell grids. The results of the advection tests demonstrate that accuracy depends primarily upon alignment of the flow with the grid rather than upon grid orthogonality. A resting atmosphere is well maintained on all grids. In the gravity-wave test, results on all grids are in good agreement with existing results from the literature, although terrain-following velocity fields lead to errors on cut-cell grids. Due to semi-implicit timestepping and an upwind-biased, explicit advection scheme, there are no timestep restrictions associated with small cut cells. We do not find the significant advantages of cut cells or smoothed coordinates that other authors find.
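For readers unfamiliar with the grids being compared: in a basic terrain-following (BTF) coordinate the coordinate surfaces are displaced by the terrain, with the displacement decaying linearly to zero at the domain top, whereas cut cells keep horizontal coordinate surfaces and intersect them with the orography. A minimal sketch of the BTF surface heights, assuming the standard linear decay and an illustrative cosine-squared hill (neither taken from the paper):

```python
import numpy as np

def btf_height(x, eta, H, terrain):
    """Physical height z of the basic terrain-following (BTF) coordinate
    surface eta over terrain h(x); the terrain influence decays linearly
    to zero at the domain top H."""
    h = terrain(x)
    return h + eta * (H - h) / H

def hill(x, h0=1000.0, a=5000.0):
    """Illustrative cosine-squared hill of height h0 and half-width a."""
    return np.where(np.abs(x) < a, h0 * np.cos(np.pi * x / (2 * a)) ** 2, 0.0)

x = np.linspace(-20e3, 20e3, 9)
print(btf_height(x, eta=0.0, H=20e3, terrain=hill))   # lowest surface follows the terrain
print(btf_height(x, eta=20e3, H=20e3, terrain=hill))  # top surface is flat
```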
Abstract:
The general 1-D theory of waves propagating on a zonally varying flow is developed from basic wave theory, and equations are derived for the variation of wavenumber and energy along ray paths. Different categories of behaviour are found, depending on the sign of the group velocity (cg) and of a wave property, B. For B positive, the wave energy and the wavenumber vary in the same sense, with maxima in relative easterlies or westerlies, depending on the sign of cg. Also, the wave accumulation of Webster and Chang (1988) occurs where cg goes to zero. However, for B negative they vary in opposite senses and wave accumulation does not occur. The zonal propagation of the gravest equatorial waves is analysed in detail using the theory. For non-dispersive Kelvin waves, B reduces to 2, and an analytic solution is possible. B is positive for all the waves considered, except for the westward-moving mixed Rossby-gravity (WMRG) wave, which can have negative as well as positive B. Comparison is made between the observed climatologies of the individual equatorial waves and the result of pure propagation on the climatological upper-tropospheric flow. The Kelvin wave distribution is in remarkable agreement, considering the approximations made. Some aspects of the WMRG and Rossby wave distributions are also in qualitative agreement. However, the observed maxima in these waves in the winter westerlies in the eastern Pacific and Atlantic are not consistent with the theory, which points to the importance of sources of equatorial waves in these westerly duct regions associated with higher-latitude wave activity.
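For the non-dispersive Kelvin wave, the simplest consequence of this ray theory can be written down directly: in steady flow the absolute frequency ω = (U + c)k is conserved along a ray, so the local wavenumber grows as the group speed cg = U + c decreases. The sketch below illustrates only this conservation property, under assumed parameter values; it does not reproduce the paper's energy equations or the property B.

```python
import numpy as np

def kelvin_wavenumber(U, c, k0, U0):
    """Local zonal wavenumber of a steady, non-dispersive Kelvin wave
    on a slowly varying zonal flow U(x).

    The absolute frequency omega = (U + c) * k is conserved along a ray
    in steady flow, so k(x) = omega / (U(x) + c): the wavenumber grows
    where the group speed cg = U + c decreases, illustrating wave
    accumulation as cg -> 0.
    """
    omega = (U0 + c) * k0  # conserved along the ray
    return omega / (U + c)

# Illustrative flow decelerating from westerlies into easterlies.
U = np.linspace(10.0, -15.0, 6)   # m/s
c = 20.0                          # intrinsic phase speed (m/s), assumed
k0 = 2 * np.pi / 1.0e7            # initial wavenumber (10,000 km wavelength)
print(kelvin_wavenumber(U, c, k0, U0=U[0]))
```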
Abstract:
This study uses large-eddy simulation to investigate the structure of the ocean surface boundary layer (OSBL) in the presence of Langmuir turbulence and stabilizing surface heat fluxes. The OSBL consists of a weakly stratified layer, despite the stabilizing surface heat flux, above a stratified thermocline. The weakly stratified (mixed) layer is maintained by a combination of a turbulent heat flux produced by the wave-driven Stokes drift and downgradient turbulent diffusion. The scaling of turbulence statistics, such as dissipation and vertical velocity variance, is affected by the surface heat flux only through changes in the mixed layer depth. Diagnostic models are proposed for the equilibrium boundary layer and mixed layer depths in the presence of surface heating; the models are functions of the initial mixed layer depth before heating is imposed and of the Langmuir stability length. In the presence of radiative heating, the models are extended to account for the depth profile of the heating.
Abstract:
The variation of wind-optimal transatlantic flight routes and their turbulence potential is investigated to understand how upper-level winds and large-scale flow patterns can affect the efficiency and safety of long-haul flights. In this study, the wind-optimal routes (WORs) that minimize the total flight time by considering wind variations are modeled for flights between John F. Kennedy International Airport (JFK) in New York, New York, and Heathrow Airport (LHR) in London, United Kingdom, during two distinct winter periods of abnormally high and low phases of the North Atlantic Oscillation (NAO) teleconnection pattern. Eastbound WORs approximate the JFK–LHR great circle (GC) route, following the northerly shifted jets, in the +NAO period; they deviate southward, following the southerly shifted jets, during the −NAO period, because eastbound WORs fly close to the prevailing westerly jets to maximize tailwinds. Westbound WORs, however, spread meridionally to avoid the jets near the GC in the +NAO period, to minimize headwinds. In the −NAO period, westbound WORs are north of the GC because of the southerly shifted jets. Consequently, eastbound WORs are faster but have higher probabilities of encountering clear-air turbulence than westbound ones, because eastbound WORs are close to the jet streams, especially near the cyclonic-shear side of the jets in the northern (southern) part of the GC in the +NAO (−NAO) period. This study suggests how predicted teleconnection weather patterns can be used for long-haul strategic flight planning, ultimately contributing to minimizing aviation's impact on the environment.
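The trade-off exploited by a WOR can be made concrete with the ground-speed calculation for a single route segment: the wind's along-track component adds to (or subtracts from) the achievable ground speed, while the cross-track component must be cancelled by crabbing. The sketch below uses illustrative airspeed and wind values, not data from the study.

```python
import math

def segment_time(distance_m, airspeed_ms, wind_ms, wind_angle_deg):
    """Flight time over a segment for a given true airspeed and a wind
    blowing at wind_angle_deg relative to the track (0 = pure tailwind).

    The aircraft crabs to cancel the cross-track wind component, so the
    ground speed is the remaining along-track airspeed plus the
    along-track wind component.
    """
    cross = wind_ms * math.sin(math.radians(wind_angle_deg))
    along = wind_ms * math.cos(math.radians(wind_angle_deg))
    ground_speed = math.sqrt(airspeed_ms**2 - cross**2) + along
    return distance_m / ground_speed

# 500 km segment, 250 m/s airspeed, 50 m/s jet-stream wind (illustrative).
d = 500e3
print(segment_time(d, 250.0, 50.0, 0.0) / 60.0)    # tailwind: ~27.8 min
print(segment_time(d, 250.0, 50.0, 180.0) / 60.0)  # headwind: ~41.7 min
```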
Abstract:
On 23 November 1981, a strong cold front swept across the U.K., producing tornadoes from the west coast to the east coast. An extensive campaign to collect tornado reports by the Tornado and Storm Research Organisation (TORRO) resulted in 104 reports, the largest U.K. outbreak. The front was simulated with a convection-permitting numerical model down to 200-m horizontal grid spacing to better understand its evolution and meteorological environment. The event was typical of tornadoes in the U.K., with convective available potential energy (CAPE) less than 150 J kg⁻¹, 0-1-km wind shear of 10-20 m s⁻¹, and a narrow cold-frontal rainband forming precipitation cores and gaps. A line of cyclonic absolute vorticity existed along the front, with maxima as large as 0.04 s⁻¹. Some hook-shaped misovortices bore kinematic similarity to supercells. The narrow swath along which the line was tornadic was bounded on the equatorward side by weak vorticity along the line and on the poleward side by zero CAPE, enclosing a region where the environment was otherwise favorable for tornadogenesis. To determine whether the 104 tornado reports were plausible, possible duplicate reports were first eliminated, reducing the count to between 58 and 90 tornadoes. The number of possible parent misovortices that may have spawned tornadoes was then estimated from model output: the number of plausible tornado reports in the 200-m grid-spacing domain was between 22 and 44, whereas the model simulation yielded an estimate of 30 possible parent misovortices within this domain. These results suggest that 90 reports is plausible.
Abstract:
The co-polar correlation coefficient (ρhv) has many applications, including hydrometeor classification, ground clutter and melting layer identification, interpretation of ice microphysics and the retrieval of rain drop size distributions (DSDs). However, we currently lack the quantitative error estimates that are necessary if these applications are to be fully exploited. Previous error estimates of ρhv rely on knowledge of the unknown "true" ρhv and implicitly assume a Gaussian probability distribution function of ρhv samples. We show that frequency distributions of ρhv estimates are in fact highly negatively skewed. A new variable, L = −log10(1 − ρhv), is defined, which has Gaussian error statistics and a standard deviation that depends only on the number of independent radar pulses. This is verified using observations of spherical drizzle drops, allowing, for the first time, the construction of rigorous confidence intervals in estimates of ρhv. In addition, we demonstrate how the imperfect co-location of the horizontal and vertical polarisation sample volumes may be accounted for. The possibility of using L to estimate the dispersion parameter (µ) in the gamma drop size distribution is investigated. We find that including drop oscillations is essential for this application; otherwise there could be biases in retrieved µ of up to ~8. Preliminary results in rainfall are presented. In a convective rain case study, our estimates show µ to be substantially larger than 0 (an exponential DSD). In this particular rain event, the rain rate would be overestimated by up to 50% if a simple exponential DSD were assumed.
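A minimal sketch of the proposed transform and its use: a symmetric Gaussian confidence interval in L maps back to an asymmetric interval in ρhv. The value of σL is left as an input here, since the paper's expression for it in terms of the number of independent pulses is not reproduced; the function names are illustrative.

```python
import numpy as np

def rhohv_to_L(rhohv):
    """L = -log10(1 - rhohv); sampling errors of L are close to Gaussian."""
    return -np.log10(1.0 - rhohv)

def L_to_rhohv(L):
    """Inverse transform back to the co-polar correlation coefficient."""
    return 1.0 - 10.0 ** (-L)

def rhohv_confidence_interval(rhohv_est, sigma_L, n_sigma=1.96):
    """Confidence interval for rhohv built from a symmetric interval in L.
    sigma_L is assumed given (per the paper, it depends only on the
    number of independent radar pulses)."""
    L = rhohv_to_L(rhohv_est)
    return L_to_rhohv(L - n_sigma * sigma_L), L_to_rhohv(L + n_sigma * sigma_L)

# Illustrative drizzle-like estimate rhohv = 0.995 (L = 2.30), sigma_L = 0.1.
lo, hi = rhohv_confidence_interval(0.995, sigma_L=0.1)
print(lo, hi)  # interval is asymmetric in rhohv, symmetric in L
```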
Abstract:
At the beginning of the Medieval Climate Anomaly, in the ninth and tenth centuries, the medieval eastern Roman empire, more usually known as Byzantium, was recovering from its early medieval crisis and experiencing climatic conditions favourable to agricultural and demographic growth. Although such favourable climate conditions prevailed in the Balkans and Anatolia during the eleventh century, parts of the imperial territories were facing significant challenges as a result of external political/military pressure. The apogee of medieval Byzantine socio-economic development, around AD 1150, coincides with a period of adverse climatic conditions for its economy, so the winter dryness and high climate variability at this time evidently did not hinder Byzantine society and economy from achieving that level of expansion. Soon after this peak, towards the end of the twelfth century, the populations of the Byzantine world were experiencing unusual climatic conditions with marked dryness and cooler phases. The weakened Byzantine socio-political system must have contributed to the events leading to the fall of Constantinople in AD 1204 and the sack of the city. The final collapse of Byzantine political control over western Anatolia took place half a century later, contemporaneous with the strong cooling that followed a tropical volcanic eruption in AD 1257. We suggest that, alongside a range of other influential factors, climate change was an important contributing factor to the socio-economic changes that took place in Byzantium during the Medieval Climate Anomaly. Crucially, therefore, while the relatively sophisticated and complex Byzantine society was certainly influenced by climatic conditions, and while it displayed a significant degree of resilience, external pressures as well as tensions within Byzantine society more broadly contributed to an increasing vulnerability to climate impacts. Our interdisciplinary analysis is based on all available sources of information on the climate and society of Byzantium: textual (documentary), archaeological, environmental, climate and climate model-based evidence about the nature and extent of climate variability in the eastern Mediterranean. The key challenge was, therefore, to assess the relative influence to be ascribed to climate variability and change on the one hand, and to anthropogenic factors (such as invasions, changes in international or regional market demand and patterns of production and consumption, etc.) in the evolution of the Byzantine state and society on the other. The focus of this interdisciplinary
Abstract:
The food industry is critical to any nation's health and well-being; it is also critical to the economic health of a nation, since it can typically constitute over a fifth of the nation's manufacturing GDP. Food engineering is a discipline that ought to be at the heart of the food industry. Unfortunately, this discipline is not playing its rightful role today: engineering has been relegated to the role of a service provider to the food industry, instead of being a strategic driver for the very growth of the industry. This paper hypothesises that the food engineering discipline today seems to be continuing the way it was in the last century, and has not risen to the challenges that it really faces. The paper therefore categorises the challenges as those posed by: 1. business dynamics, 2. market forces, 3. the manufacturing environment and 4. environmental considerations, and finds the current scope and subject-knowledge competencies of food engineering to be inadequate in meeting these challenges. The paper identifies (a) health, (b) environment and (c) security as the three key drivers of the discipline, and proposes a new definition of food engineering. This definition requires food engineering to have a broader science base which includes biophysical, biochemical and health sciences, in addition to engineering sciences. This definition, in turn, leads to the discipline acquiring a new set of subject-knowledge competencies that is fit for purpose for this day and age, and hopefully for the foreseeable future. The possibility of this approach leading to the development of a higher-education programme in food engineering is demonstrated by adopting a theme-based curriculum development with five core themes, supplemented by appropriate enabling and knowledge-integrating courses. At the heart of this theme-based approach is an attempt to combine the engineering of process and product in a purposeful way, termed here Food Product Realisation Engineering. Finally, the paper also recommends the future development of two possible niche specialisation programmes, in Nutrition and Functional Food Engineering and in Gastronomic Engineering. It is hoped that this reconceptualisation of the discipline will not only make it more purposeful for the food industry, but will also make the subject more intellectually challenging and attract bright young minds to the discipline.
Abstract:
Atmosphere-only and ocean-only variational data assimilation (DA) schemes are able to use window lengths that are optimal for the error growth rate, non-linearity and observation density of the respective systems: typical window lengths are 6-12 hours for the atmosphere and 2-10 days for the ocean. However, in the implementation of coupled DA schemes it has been necessary to match the window length of the ocean to that of the atmosphere, which may potentially sacrifice the accuracy of the ocean analysis in order to provide a more balanced coupled state. This paper investigates how extending the window length in the presence of model error affects both the analysis of the coupled state and the initialized forecast when using coupled DA with differing degrees of coupling. Results are illustrated using an idealized single-column model of the coupled atmosphere-ocean system. It is found that the analysis error from an uncoupled DA scheme can be smaller than that from a coupled analysis at the initial time, due to faster error growth in the coupled system. However, this does not necessarily lead to a more accurate forecast, because of imbalances in the coupled state. Instead, coupled DA is better able to update the initial state to reduce the impact of the model error on the accuracy of the forecast. The effect of model error is potentially most detrimental in the weakly coupled formulation, due to the inconsistency between the coupled model used in the outer loop and the uncoupled models used in the inner loop.
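For context, the window length enters variational DA through the number of observation times spanned by the strong-constraint 4D-Var cost function; the notation below is the generic textbook form, not specific to this study:

```latex
J(\mathbf{x}_0) =
\tfrac{1}{2}\,(\mathbf{x}_0 - \mathbf{x}^b)^{\mathrm{T}} \mathbf{B}^{-1} (\mathbf{x}_0 - \mathbf{x}^b)
+ \tfrac{1}{2}\sum_{i=0}^{N}
\big(\mathbf{y}_i - \mathcal{H}_i(\mathbf{x}_i)\big)^{\mathrm{T}} \mathbf{R}_i^{-1}
\big(\mathbf{y}_i - \mathcal{H}_i(\mathbf{x}_i)\big),
\qquad \mathbf{x}_i = \mathcal{M}_{0 \to i}(\mathbf{x}_0)
```

Extending the window increases the number of observation terms N, but it also makes the perfect-model constraint x_i = M_{0→i}(x_0) harder to satisfy when model error is present, which is the tension this paper investigates.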
Abstract:
The impact of two different coupled cirrus microphysics-radiation parameterizations on the zonally averaged temperature and humidity biases in the tropical tropopause layer (TTL) of a Met Office climate model configuration is assessed. One parameterization is based on a linear coupling between a model prognostic variable, the ice mass mixing ratio, qi, and the integral optical properties. The second is based on the integral optical properties being parameterized as functions of qi and temperature, Tc, where the mass coefficients (i.e. scattering and extinction) are parameterized as nonlinear functions of the ratio between qi and Tc. The cirrus microphysics parameterization is based on a moment-estimation parameterization of the particle size distribution (PSD), which relates the mass moment of the PSD (i.e. the second moment, if mass is proportional to the square of particle size) to all other PSD moments through the magnitude of the second moment and Tc. This same microphysics PSD parameterization is applied to calculate the integral optical properties used in both radiation parameterizations and thus ensures PSD and mass consistency between the cirrus microphysics and radiation schemes. In this paper, the temperature-independent and temperature-dependent parameterizations are shown to increase and decrease, respectively, the zonally averaged temperature biases in the TTL by about 1 K. The temperature-dependent radiation parameterization is further demonstrated to have a positive impact on the specific humidity biases in the TTL, as well as to decrease the shortwave and longwave biases in the cloudy radiative effect. The temperature-dependent radiation parameterization is shown to be more consistent with TTL and global radiation observations.
Abstract:
The Southern Ocean is a critical region for global climate, yet large cloud and solar radiation biases over the Southern Ocean are a long-standing problem in climate models and are poorly understood, leading to biases in simulated sea surface temperatures. This study shows that supercooled liquid clouds are central to understanding and simulating the Southern Ocean environment. A combination of satellite observational data and detailed radiative transfer calculations is used to quantify the impact of cloud phase and cloud vertical structure on the reflected solar radiation in the Southern Hemisphere summer. It is found that clouds with supercooled liquid tops dominate the population of liquid clouds. The observations show that clouds with supercooled liquid tops contribute between 27% and 38% to the total reflected solar radiation between 40° and 70°S, and climate models are found to poorly simulate these clouds. The results quantify the importance of supercooled liquid clouds in the Southern Ocean environment and highlight the need to improve understanding of the physical processes that control these clouds in order to improve their simulation in numerical models. This is not only important for improving the simulation of present-day climate and climate variability, but also relevant for increasing confidence in climate feedback processes and future climate projections.
Abstract:
Network diagnosis in Wireless Sensor Networks (WSNs) is a difficult task due to their ad hoc nature, the invisibility of their internal running status, and particularly because the network structure can change frequently due to link failure. To solve this problem, we propose a Mobile Sink (MS) based distributed fault diagnosis algorithm for WSNs. An MS, or mobile fault detector, is usually a mobile robot or vehicle equipped with a wireless transceiver that performs the task of a mobile base station while also diagnosing the hardware and software status of deployed network sensors. Our MS mobile fault detector moves through the network area, polling each static sensor node to diagnose the hardware and software status of nearby sensor nodes using only single-hop communication. Therefore, the fault detection accuracy and the functionality of the network are significantly increased. In order to maintain an excellent Quality of Service (QoS), we employ an optimal fault diagnosis tour planning algorithm. In addition to saving energy and time, the tour planning algorithm excludes faulty sensor nodes from the next diagnosis tour. We demonstrate the effectiveness of the proposed algorithms through simulation and real-life experimental results.
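The abstract does not specify the tour-planning algorithm, so the sketch below is a simple nearest-neighbour stand-in that illustrates the stated behaviour: the MS visits each static node for single-hop polling and excludes nodes already diagnosed as faulty from the next tour. All names and the purely distance-based formulation are assumptions.

```python
import math

def plan_diagnosis_tour(base, nodes, faulty):
    """Greedy nearest-neighbour diagnosis tour for a mobile sink (MS).

    base   -- (x, y) starting position of the MS
    nodes  -- dict of node_id -> (x, y) for the deployed static sensors
    faulty -- node_ids already diagnosed as faulty, excluded from the
              next tour to save energy and time

    Returns the visiting order; a stand-in, not the authors' optimal
    tour planner.
    """
    remaining = {nid: p for nid, p in nodes.items() if nid not in faulty}
    tour, pos = [], base
    while remaining:
        nid = min(remaining, key=lambda n: math.dist(pos, remaining[n]))
        tour.append(nid)
        pos = remaining.pop(nid)
    return tour

nodes = {"s1": (0, 5), "s2": (4, 1), "s3": (9, 3), "s4": (2, 8)}
print(plan_diagnosis_tour((0, 0), nodes, faulty={"s3"}))
# -> ['s2', 's1', 's4'], skipping the faulty node
```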
Abstract:
This article presents a method employing stir bar sorptive extraction (SBSE) with in situ derivatization, in combination with either thermal or liquid desorption coupled on-line to gas chromatography-mass spectrometry, for the analysis of fluoxetine in plasma samples. Ethyl chloroformate was employed as the derivatizing agent, producing symmetrical peaks. Parameters such as solvent polarity, analyte desorption time, and extraction time were evaluated. During the validation process, the developed method showed specificity, linearity (R² > 0.99), precision (R.S.D. < 15%), and limits of quantification (LOQ) of 30 and 1.37 pg mL⁻¹ when liquid and thermal desorption were employed, respectively. This simple and highly sensitive method was shown to be adequate for the measurement of fluoxetine at typical and trace concentration levels.