876 results for 2447: modelling and forecasting


Relevance:

100.00%

Publisher:

Abstract:

The Weather Research and Forecasting model was applied to analyze variations in the planetary boundary layer (PBL) structure over Southeast England, including central and suburban London. The parameterizations and predictive skills of two nonlocal mixing PBL schemes, YSU and ACM2, and two local mixing PBL schemes, MYJ and MYNN2, were evaluated over a variety of stability conditions, with model predictions at a 3 km grid spacing. The PBL height predictions, which are critical for scaling turbulence and diffusion in meteorological and air quality models, show significant intra-scheme variance (> 20%), and the reasons are presented. ACM2 diagnoses the PBL height thermodynamically using the bulk Richardson number method, which leads to good agreement with the lidar data for both unstable and stable conditions. The modeled vertical profiles in the PBL, such as wind speed, turbulent kinetic energy (TKE), and heat flux, exhibit large spreads across the PBL schemes. The TKE predicted by MYJ was found to be too small and showed much less diurnal variation than observed over London. MYNN2 produces better TKE predictions at low levels than MYJ, but its turbulent length scale increases with height in the upper part of the strongly convective PBL, where it should decrease. The local PBL schemes considerably underestimate the entrainment heat fluxes for convective cases. The nonlocal PBL schemes exhibit stronger mixing in the mean wind fields under convective conditions than the local PBL schemes and agree better with large-eddy simulation (LES) studies.
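
The bulk Richardson number method used by ACM2 is standard enough to sketch. The following minimal Python illustration is not the ACM2 implementation; it assumes NumPy arrays of height, virtual potential temperature, and wind components, and a critical Richardson number of 0.25, a common but not universal choice.

```python
import numpy as np

G = 9.81  # gravitational acceleration (m s^-2)

def pbl_height_bulk_richardson(z, theta_v, u, v, ri_crit=0.25):
    """Diagnose the PBL height as the lowest level where the bulk
    Richardson number, computed relative to the surface, reaches ri_crit.

    z       : heights above ground (m), ascending
    theta_v : virtual potential temperature profile (K)
    u, v    : horizontal wind components (m s^-1)
    """
    ri_prev = 0.0
    for k in range(1, len(z)):
        wind2 = max(u[k]**2 + v[k]**2, 1e-6)  # guard against calm layers
        ri = G * (theta_v[k] - theta_v[0]) * z[k] / (theta_v[0] * wind2)
        if ri >= ri_crit:
            # interpolate linearly between the two bracketing levels
            frac = (ri_crit - ri_prev) / (ri - ri_prev)
            return z[k - 1] + frac * (z[k] - z[k - 1])
        ri_prev = ri
    return z[-1]  # PBL top not reached below the model top
```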

Relevance:

100.00%

Publisher:

Abstract:

It is becoming increasingly important that we can understand and model flow processes in urban areas. Applications such as weather forecasting, air quality and sustainable urban development rely on accurate modelling of the interface between an urban surface and the atmosphere above. This review gives an overview of current understanding of turbulence generated by an urban surface up to a few building heights, the layer called the roughness sublayer (RSL). High-quality datasets are also identified which can be used in the development of suitable parameterisations of the urban RSL. Datasets derived from physical and numerical modelling, and from full-scale observations in urban areas, now exist across a range of urban-type morphologies (e.g. street canyons, cubes, idealised and realistic building layouts). Results show that the urban RSL depth falls within 2–5 times the mean building height and is not easily related to morphology. Systematic perturbations away from uniform layouts (e.g. varying building heights) have a significant impact on RSL structure and depth. Considerable fetch is required to develop an overlying inertial sublayer, where turbulence is more homogeneous, and some authors have suggested that the “patchiness” of urban areas may prevent inertial sublayers from developing at all. Turbulence statistics suggest similarities between vegetation and urban canopies, but key differences are emerging. There is no consensus as to suitable scaling variables, e.g. friction velocity above the canopy vs. the square root of maximum Reynolds stress, or mean vs. maximum building height. The review includes a summary of existing modelling practices and highlights research priorities.

Relevance:

100.00%

Publisher:

Abstract:

The quantification of uncertainty is an increasingly popular topic, with clear importance for climate change policy. However, uncertainty assessments are open to a range of interpretations, each of which may lead to a different policy recommendation. In the EQUIP project researchers from the UK climate modelling, statistical modelling, and impacts communities worked together on ‘end-to-end’ uncertainty assessments of climate change and its impacts. Here, we use an experiment in peer review amongst project members to assess variation in the assessment of uncertainties between EQUIP researchers. We find overall agreement on key sources of uncertainty but a large variation in the assessment of the methods used for uncertainty assessment. Results show that communication aimed at specialists makes the methods used harder to assess. There is also evidence of individual bias, which is partially attributable to disciplinary backgrounds. However, varying views on the methods used to quantify uncertainty did not preclude consensus on the consequential results produced using those methods. Based on our analysis, we make recommendations for developing and presenting statements on climate and its impacts. These include the use of a common uncertainty reporting format in order to make assumptions clear; presentation of results in terms of processes and trade-offs rather than only numerical ranges; and reporting multiple assessments of uncertainty in order to elucidate a more complete picture of impacts and their uncertainties. This in turn implies research should be done by teams of people with a range of backgrounds and time for interaction and discussion, with fewer but more comprehensive outputs in which the range of opinions is recorded.

Relevance:

100.00%

Publisher:

Abstract:

The statistical properties and skill in predictions of objectively identified and tracked cyclonic features (frontal waves and cyclones) are examined in MOGREPS-15, the global 15-day version of the Met Office Global and Regional Ensemble Prediction System (MOGREPS). The number density of cyclonic features is found to decline with increasing lead time, with analysis fields containing weak features which are not sustained past the first day of the forecast. This loss of cyclonic features is associated with a decline in area-averaged enstrophy with increasing lead time. Both feature number density and area-averaged enstrophy saturate by around 7 days into the forecast. It is found that the feature number density and area-averaged enstrophy of forecasts produced using model versions that include stochastic energy backscatter saturate at higher values than forecasts produced without stochastic physics. The ability of MOGREPS-15 to predict the locations of cyclonic features of different strengths is evaluated at different spatial scales by examining the Brier skill score (relative to the analysis climatology) of strike probability forecasts: the probability that a cyclonic feature center is located within a specified radius. The radius at which skill is maximised increases with lead time, from 650 km at 12 h to 950 km at 7 days. The skill is greatest for the most intense features. Forecast skill remains above zero at these scales out to 14 days for the most intense cyclonic features, but only out to 8 days when all features are included irrespective of intensity.
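
As a pointer to how this kind of verification works, the sketch below computes a Brier skill score for strike probability forecasts against a fixed climatological base rate. The arrays and the base-rate value are invented for the example; they are not MOGREPS-15 data.

```python
import numpy as np

def brier_score(p, o):
    """Mean squared error of probability forecasts p against binary
    outcomes o (1 = a cyclonic feature center fell within the radius)."""
    return np.mean((np.asarray(p, dtype=float) - np.asarray(o, dtype=float)) ** 2)

def brier_skill_score(p_forecast, outcomes, p_climatology):
    """Skill relative to always forecasting the climatological strike
    probability; positive values indicate skill over climatology."""
    bs = brier_score(p_forecast, outcomes)
    bs_ref = brier_score(np.full(len(outcomes), p_climatology), outcomes)
    return 1.0 - bs / bs_ref

# Toy example: five ensemble strike probabilities vs. observed strikes
p = [0.8, 0.1, 0.6, 0.3, 0.9]
o = [1, 0, 1, 0, 1]
print(brier_skill_score(p, o, p_climatology=0.4))
```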

Relevance:

100.00%

Publisher:

Abstract:

An improved understanding of present-day climate variability and change relies on high-quality data sets from the past 2 millennia. Global efforts to model regional climate modes are in the process of being validated against, and integrated with, records of past vegetation change. For South America, however, the full potential of vegetation records for evaluating and improving climate models has hitherto not been sufficiently acknowledged due to an absence of information on the spatial and temporal coverage of study sites. This paper therefore serves as a guide to high-quality pollen records that capture environmental variability during the last 2 millennia. We identify 60 vegetation (pollen) records from across South America which satisfy geochronological requirements set out for climate modelling, and we discuss their sensitivity to the spatial signature of climate modes throughout the continent. Diverse patterns of vegetation response to climate change are observed, with more similar patterns of change in the lowlands and varying intensity and direction of responses in the highlands. Pollen records display local-scale responses to climate modes; thus, it is necessary to understand how vegetation–climate interactions might diverge under variable settings. We provide a qualitative translation from pollen metrics to climate variables. Additionally, pollen is an excellent indicator of human impact through time. We discuss evidence for human land use in pollen records and provide an overview considered useful for archaeological hypothesis testing and important in distinguishing natural from anthropogenically driven vegetation change. We stress the need for the palynological community to be more familiar with climate variability patterns to correctly attribute the potential causes of observed vegetation dynamics. This manuscript forms part of the wider LOng-Term multi-proxy climate REconstructions and Dynamics in South America – 2k initiative that provides the ideal framework for the integration of the various palaeoclimatic subdisciplines and palaeo-science, thereby jump-starting and fostering multidisciplinary research into environmental change on centennial and millennial timescales.

Relevance:

100.00%

Publisher:

Abstract:

With the increase in e-commerce and the digitisation of design data and information, the construction sector has become reliant upon IT infrastructure and systems. The design and production process is more complex, more interconnected, and reliant upon greater information mobility, with seamless exchange of data and information in real time. Construction small and medium-sized enterprises (CSMEs), in particular the speciality contractors, can effectively utilise cost-effective collaboration-enabling technologies, such as cloud computing, to help in the effective transfer of information and data and so improve productivity. The system dynamics (SD) approach offers a perspective and tools that enable a better understanding of the dynamics of complex systems. This research focuses upon the SD methodology as a modelling and analysis tool with which to understand and identify the key drivers in the absorption of cloud computing by CSMEs. The aim of this paper is to determine how the use of SD can improve the management of information flow through collaborative technologies, leading to improved productivity. The data supporting the use of SD were obtained through a pilot study consisting of questionnaires and interviews with five CSMEs in the UK house-building sector.
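
To give a flavour of the system dynamics approach, here is a deliberately minimal stock-and-flow sketch: a single stock of adopting firms fed by a word-of-mouth adoption flow, integrated with Euler steps. The structure, parameter values, and names are illustrative assumptions, not the paper's model.

```python
# Minimal SD sketch: one stock ("CSMEs using cloud collaboration tools")
# filled by a Bass-style internal-influence adoption flow.
def simulate_adoption(total_firms=100, adopters0=5,
                      contact_rate=0.03, dt=0.25, t_end=40.0):
    adopters = float(adopters0)
    history = []
    t = 0.0
    while t <= t_end:
        history.append((t, adopters))
        potential = total_firms - adopters
        # word-of-mouth adoption flow (firms per unit time)
        adoption_flow = contact_rate * adopters * potential / total_firms
        adopters += adoption_flow * dt   # Euler integration of the stock
        t += dt
    return history

for t, a in simulate_adoption()[::40]:
    print(f"t={t:5.1f}  adopters={a:6.2f}")
```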

Relevance:

100.00%

Publisher:

Abstract:

The topography of many floodplains in the developed world has now been surveyed with high resolution sensors such as airborne LiDAR (Light Detection and Ranging), giving accurate Digital Elevation Models (DEMs) that facilitate accurate flood inundation modelling. This is not always the case for remote rivers in developing countries. However, the accuracy of DEMs produced for modelling studies on such rivers should be enhanced in the near future by the high resolution TanDEM-X WorldDEM. In a parallel development, increasing use is now being made of flood extents derived from high resolution Synthetic Aperture Radar (SAR) images for calibrating, validating and assimilating observations into flood inundation models in order to improve them. This paper discusses an additional use of SAR flood extents, namely to improve the accuracy of the TanDEM-X DEM in the floodplain covered by the flood extents, thereby permanently improving this DEM for future flood modelling and other studies. The method is based on the fact that for larger rivers the water elevation generally changes only slowly along a reach, so that the boundary of the flood extent (the waterline) can be regarded locally as a quasi-contour. As a result, heights of adjacent pixels along a small section of waterline can be regarded as samples with a common population mean. The height of the central pixel in the section can be replaced with the average of these heights, leading to a more accurate estimate. While this will result in a reduction in the height errors along a waterline, the waterline is a linear feature in a two-dimensional space. However, improvements to the DEM heights between adjacent pairs of waterlines can also be made, because DEM heights enclosed by the higher waterline of a pair must generally be no higher than the corrected heights along the higher waterline, whereas DEM heights not enclosed by the lower waterline must in general be no lower than the corrected heights along the lower waterline. In addition, DEM heights between the higher and lower waterlines can also be assigned smaller errors because of the reduced errors on the corrected waterline heights. The method was tested on a section of the TanDEM-X Intermediate DEM (IDEM) covering an 11 km reach of the Warwickshire Avon, England. Flood extents from four COSMO-SkyMed images were available at various stages of a flood in November 2012, and a LiDAR DEM was available for validation. In the area covered by the flood extents, the original IDEM heights had a mean difference from the corresponding LiDAR heights of 0.5 m with a standard deviation of 2.0 m, while the corrected heights had a mean difference of 0.3 m with a standard deviation of 1.2 m. These figures show that significant reductions in IDEM height bias and error can be made using the method, with the corrected error being only 60% of the original. Even if only a single SAR image obtained near the peak of the flood was used, the corrected error was only 66% of the original. The method should also be capable of improving the final TanDEM-X DEM and other DEMs, and may also be of use with data from the SWOT (Surface Water and Ocean Topography) satellite.
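
The core of the correction can be sketched in a few lines: average heights along a short section of waterline (treating it as a quasi-contour), then use a pair of corrected waterlines to bound the DEM heights between them. This is a simplified sketch of the idea as described, with invented array inputs; the published method handles geometry and error propagation more carefully.

```python
import numpy as np

def smooth_waterline_heights(heights, half_window=5):
    """Replace each waterline pixel height with the mean over a small
    section centred on it, treating the section as samples with a
    common population mean (the waterline as a quasi-contour)."""
    heights = np.asarray(heights, dtype=float)
    smoothed = np.empty_like(heights)
    n = len(heights)
    for i in range(n):
        lo, hi = max(0, i - half_window), min(n, i + half_window + 1)
        smoothed[i] = heights[lo:hi].mean()
    return smoothed

def clamp_between_waterlines(dem, inside_higher, inside_lower, h_high, h_low):
    """Bound DEM pixels using two corrected waterline heights:
    pixels flooded at the higher stage should be no higher than h_high;
    pixels dry at the lower stage should be no lower than h_low."""
    dem = dem.copy()
    dem[inside_higher] = np.minimum(dem[inside_higher], h_high)
    dem[~inside_lower] = np.maximum(dem[~inside_lower], h_low)
    return dem
```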

Relevance:

100.00%

Publisher:

Abstract:

The iso-score curves graph (iSCG) and mathematical relationships between scoring parameters (SPs) and forecasting parameters (FPs) can be used in the economic scoring formulas (ESFs) applied in tendering to distribute the score among bidders in the economic part of a proposal. Each contracting authority must set an ESF when publishing tender specifications, and each bidder's strategy will differ depending on the ESF selected and its weight in the overall proposal scoring. The various mathematical relationships and density distributions that describe the main SPs and FPs, and the representation of tendering data by means of iSCGs, enable the generation of two new types of graph that can be very useful for bidders who want to be more competitive: scoring probability graphs and position probability graphs.
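
For readers unfamiliar with economic scoring formulas, the sketch below implements one common linear ESF form: score proportional to the bid's discount on the tender budget, normalised so the lowest bid earns the maximum economic points. This particular formula and its parameters are illustrative assumptions, not the ones analysed in the paper.

```python
def linear_esf_scores(bids, tender_budget, max_points=50.0):
    """Assign economic scores proportional to each bid's discount on the
    tender budget, with the lowest (best) bid earning max_points."""
    best_discount = tender_budget - min(bids)
    return [max_points * (tender_budget - b) / best_discount for b in bids]

# Three bids against a budget of 100 (arbitrary units):
print(linear_esf_scores([90.0, 95.0, 99.0], tender_budget=100.0))
# -> [50.0, 25.0, 5.0]
```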

Relevance:

100.00%

Publisher:

Abstract:

On 23 November 1981, a strong cold front swept across the U.K., producing tornadoes from the west coast to the east coast. An extensive campaign to collect tornado reports by the Tornado and Storm Research Organisation (TORRO) resulted in 104 reports, the largest U.K. outbreak on record. The front was simulated with a convection-permitting numerical model down to 200-m horizontal grid spacing to better understand its evolution and meteorological environment. The event was typical of tornadoes in the U.K., with convective available potential energy (CAPE) less than 150 J kg⁻¹, 0–1-km wind shear of 10–20 m s⁻¹, and a narrow cold-frontal rainband forming precipitation cores and gaps. A line of cyclonic absolute vorticity existed along the front, with maxima as large as 0.04 s⁻¹. Some hook-shaped misovortices bore kinematic similarity to supercells. The narrow swath along which the line was tornadic was bounded on the equatorward side by weak vorticity along the line and on the poleward side by zero CAPE, enclosing a region where the environment was otherwise favorable for tornadogenesis. To determine whether the 104 tornado reports were plausible, possible duplicate reports were first eliminated, leaving as few as 58 and as many as 90 tornadoes. Second, the number of possible parent misovortices that may have spawned tornadoes was estimated from model output. The number of plausible tornado reports within the 200-m grid-spacing domain was between 22 and 44, whereas the model simulation yielded an estimate of 30 possible parent misovortices within this domain. These results suggest that the estimate of 90 reports is plausible.

Relevance:

100.00%

Publisher:

Abstract:

Insect digestive chymotrypsins are present in a large variety of insect orders, but their substrate specificity remains unclear. Four insect chymotrypsins from three different insect orders (Dictyoptera, Coleoptera, and two Lepidoptera) were isolated using affinity chromatography. The enzymes presented molecular masses in the range of 20 to 31 kDa and pH optima in the range of 7.5 to 10.0. Kinetic characterization using different colorimetric and fluorescent substrates indicated that insect chymotrypsins differ from bovine chymotrypsin in their primary specificity toward small substrates (like N-benzoyl-L-Tyr p-nitroanilide) rather than in their preference for large substrates (exemplified by succinyl-Ala-Ala-Pro-Phe p-nitroanilide). Chloromethyl ketones (TPCK, N-alpha-tosyl-L-Phe chloromethyl ketone, and Z-GGF-CK, N-carbobenzoxy-Gly-Gly-Phe chloromethyl ketone) inactivated all chymotrypsins tested. Inactivation rates follow apparent first-order kinetics, with variable second-order rates (TPCK, 42 to 130 M⁻¹ s⁻¹; Z-GGF-CK, 150 to 450 M⁻¹ s⁻¹) that can be remarkably low for S. frugiperda chymotrypsin (TPCK, 6 M⁻¹ s⁻¹; Z-GGF-CK, 6.1 M⁻¹ s⁻¹). Homology modelling and sequence alignment showed that, in lepidopteran chymotrypsins, differences in the amino acid residues in the neighborhood of the catalytic His 57 may affect its pKa value. This is proposed as the cause of the decrease in His 57 reactivity toward chloromethyl ketones. Such amino acid replacement in the active site is proposed to be an adaptation to the presence of dietary ketones. (C) 2009 Wiley Periodicals, Inc.
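
The second-order inactivation rates quoted here follow the usual pseudo-first-order treatment. As a hedged sketch of how such rates can be estimated, the code below fits ln(A/A0) = -k_obs·t by least squares through the origin and divides k_obs by the inhibitor concentration; the toy data are invented, not the paper's measurements.

```python
import math

def second_order_inactivation_rate(times_s, residual_activity, inhibitor_M):
    """Fit ln(A/A0) = -k_obs * t through the origin, then convert the
    pseudo-first-order k_obs to a second-order rate k2 = k_obs / [I]
    (valid when [I] >> [E] so [I] stays effectively constant)."""
    ys = [math.log(a) for a in residual_activity]  # ln of A/A0 (dimensionless)
    k_obs = -sum(t * y for t, y in zip(times_s, ys)) / sum(t * t for t in times_s)
    return k_obs / inhibitor_M

# Toy data: activity decaying with k_obs ~ 1e-3 s^-1 at [inhibitor] = 10 uM
t = [0, 600, 1200, 1800]
a = [1.0, 0.55, 0.30, 0.17]
print(second_order_inactivation_rate(t, a, inhibitor_M=1e-5))  # ~100 M^-1 s^-1
```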

Relevance:

100.00%

Publisher:

Abstract:

This paper is concerned with cost efficiency in achieving the Swedish national air quality objectives under uncertainty. To realize an ecologically sustainable society, the parliament has approved a set of interim and long-term pollution reduction targets. However, there is considerable quantification uncertainty about the effectiveness of the proposed pollution reduction measures. In this paper, we develop a multivariate stochastic control framework to deal with the cost efficiency problem with multiple pollutants. Based on the cost and technological data collected by several national authorities, we explore the implications of alternative probabilistic constraints. It is found that a composite probabilistic constraint induces considerably lower abatement cost than separable probabilistic restrictions. This trend is reinforced by the presence of positive correlations between reductions in the multiple pollutants.
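
To make the notion of a probabilistic (chance) constraint concrete, the sketch below computes the deterministic equivalent of a single constraint P(effectiveness · x ≥ target) ≥ α under a normality assumption, a standard reformulation. The single-pollutant setting and all numbers are illustrative, not the paper's multivariate formulation.

```python
from statistics import NormalDist

def required_abatement(target, mu_eff, sigma_eff, alpha=0.9):
    """Deterministic equivalent of P(eff * x >= target) >= alpha with
    normally distributed effectiveness eff ~ N(mu_eff, sigma_eff^2):
    choose x so that (mu_eff - z_alpha * sigma_eff) * x = target."""
    z = NormalDist().inv_cdf(alpha)
    certainty_equivalent = mu_eff - z * sigma_eff
    return target / certainty_equivalent

# A measure removing 1.0 +/- 0.3 kt of pollutant per unit effort,
# targeting 10 kt of reduction with 90% reliability:
x = required_abatement(target=10.0, mu_eff=1.0, sigma_eff=0.3, alpha=0.9)
print(f"effort needed: {x:.2f} units")  # ~16.25, vs. 10.0 if ignoring risk
```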

Relevance:

100.00%

Publisher:

Abstract:

Many solutions to AI problems require the task to be represented in one of a multitude of rigorous mathematical formalisms. The construction of such mathematical models forms a difficult problem which is often left to the user of the problem solver. This void between problem solvers and the problems is studied by the eclectic field of automated modelling. Within this field, compositional modelling, a knowledge-based methodology for system modelling, has established itself as a leading approach. In general, a compositional modeller organises knowledge in a structure of composable fragments that relate to particular system components or processes. Its embedded inference mechanism chooses the appropriate fragments with respect to a given problem, instantiates and assembles them into a consistent system model. Many different types of compositional modeller exist, however, with significant differences in their knowledge representation and approach to inference. This paper examines compositional modelling. It presents a general framework for building and analysing compositional modellers. Based on this framework, a number of influential compositional modellers are examined and compared. The paper also identifies the strengths and weaknesses of compositional modelling and discusses some typical applications.
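
The general pattern described here, composable fragments plus an inference mechanism that selects and assembles them into a consistent model, can be caricatured in a few lines. The fragments and the greedy forward-chaining selection below are hypothetical illustrations of the idea, not any particular compositional modeller.

```python
# Each fragment declares the quantities it requires and the quantities
# its model relations provide; inference instantiates fragments whose
# requirements are already covered until the goal quantities appear.
FRAGMENTS = [
    {"name": "tank",       "requires": set(),             "provides": {"level"}},
    {"name": "outflow",    "requires": {"level"},         "provides": {"flow"}},
    {"name": "controller", "requires": {"level", "flow"}, "provides": {"valve"}},
]

def assemble_model(goal_quantities):
    known, chosen = set(), []
    progress = True
    while progress:
        progress = False
        for frag in FRAGMENTS:
            if frag["name"] not in [c["name"] for c in chosen] \
               and frag["requires"] <= known:
                chosen.append(frag)
                known |= frag["provides"]
                progress = True
    return chosen, goal_quantities - known  # model fragments, unresolved goals

model, missing = assemble_model({"valve"})
print([f["name"] for f in model], "unresolved:", missing)
```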

Relevance:

100.00%

Publisher:

Abstract:

Cooperation is the fundamental underpinning of multi-agent systems, allowing agents to interact to achieve their goals. Where agents are self-interested, or potentially unreliable, there must be appropriate mechanisms to cope with the uncertainty that arises. In particular, agents must manage the risk associated with interacting with others who have different objectives, or who may fail to fulfil their commitments. Previous work has utilised the notions of motivation and trust in engendering successful cooperation between self-interested agents. Motivations provide a means for representing and reasoning about agents' overall objectives, and trust offers a mechanism for modelling and reasoning about reliability, honesty, veracity and so forth. This paper extends that work to address some of its limitations. In particular, we introduce the concept of a clan: a group of agents who trust each other and have similar objectives. Clan members treat each other favourably when making private decisions about cooperation, in order to gain mutual benefit. We describe mechanisms for agents to form, maintain, and dissolve clans in accordance with their self-interested nature, along with giving details of how clan membership influences individual decision making. Finally, through some simulation experiments we illustrate the effectiveness of clan formation in addressing some of the inherent problems with cooperation among self-interested agents.
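
A minimal sketch of the clan-formation decision described here might look as follows, assuming trust is a number in [0, 1] and motivational similarity is measured by comparing importance weights. The thresholds and representations are invented for illustration, not the paper's mechanism.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    motivations: dict                            # motivation -> importance in [0, 1]
    trust: dict = field(default_factory=dict)    # other agent name -> trust in [0, 1]

def motivation_similarity(a, b):
    """1 minus the mean absolute difference of motivation weights."""
    keys = set(a.motivations) | set(b.motivations)
    diffs = [abs(a.motivations.get(k, 0.0) - b.motivations.get(k, 0.0)) for k in keys]
    return 1.0 - sum(diffs) / len(keys)

def should_form_clan(a, b, trust_min=0.7, similarity_min=0.8):
    """Join only if trust is mutual and objectives are similar enough."""
    return (a.trust.get(b.name, 0.0) >= trust_min
            and b.trust.get(a.name, 0.0) >= trust_min
            and motivation_similarity(a, b) >= similarity_min)

a = Agent("a1", {"deliver": 0.9, "profit": 0.6}, {"a2": 0.8})
b = Agent("a2", {"deliver": 0.8, "profit": 0.7}, {"a1": 0.9})
print(should_form_clan(a, b))  # True
```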

Relevance:

100.00%

Publisher:

Abstract:

We discuss the development and performance of a low-power sensor node (hardware, software and algorithms) that autonomously controls the sampling interval of a suite of sensors based on local state estimates and future predictions of water flow. The problem is motivated by the need to accurately reconstruct abrupt state changes in urban watersheds and stormwater systems. Presently, the detection of these events is limited by the temporal resolution of sensor data. It is often infeasible, however, to increase measurement frequency due to energy and sampling constraints. This is particularly true for real-time water quality measurements, where sampling frequency is limited by reagent availability, sensor power consumption, and, in the case of automated samplers, the number of available sample containers. These constraints pose a significant barrier to the ubiquitous and cost effective instrumentation of large hydraulic and hydrologic systems. Each of our sensor nodes is equipped with a low-power microcontroller and a wireless module to take advantage of urban cellular coverage. The node persistently updates a local, embedded model of flow conditions while IP-connectivity permits each node to continually query public weather servers for hourly precipitation forecasts. The sampling frequency is then adjusted to increase the likelihood of capturing abrupt changes in a sensor signal, such as the rise in the hydrograph – an event that is often difficult to capture through traditional sampling techniques. Our architecture forms an embedded processing chain, leveraging local computational resources to assess uncertainty by analyzing data as it is collected. A network is presently being deployed in an urban watershed in Michigan and initial results indicate that the system accurately reconstructs signals of interest while significantly reducing energy consumption and the use of sampling resources. We also expand our analysis by discussing the role of this approach for the efficient real-time measurement of stormwater systems.
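
The control logic described can be caricatured as a simple rule that shortens the sampling interval when a precipitation forecast or the local flow estimate indicates an imminent change. The thresholds, inputs, and scaling below are hypothetical, not the deployed node's algorithm.

```python
def next_sampling_interval(flow_rate_of_change, forecast_rain_mm,
                           base_interval_s=900, min_interval_s=60):
    """Shorten the sampling interval as conditions become more dynamic."""
    urgency = 0.0
    if forecast_rain_mm > 1.0:          # rain forecast for the next hour
        urgency += 0.5
    if abs(flow_rate_of_change) > 0.1:  # hydrograph already changing fast
        urgency += 0.5
    interval = base_interval_s * (1.0 - urgency)
    return max(int(interval), min_interval_s)

print(next_sampling_interval(0.02, 0.0))   # quiescent conditions: 900 s
print(next_sampling_interval(0.25, 4.2))   # storm onset: 60 s
```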

Relevance:

100.00%

Publisher:

Abstract:

Parametric term structure models have been successfully applied to numerous problems in fixed income markets, including pricing, hedging, risk management, and the study of monetary policy implications. Dynamic term structure models, in turn, equipped with stronger economic structure, have mainly been adopted to price derivatives and explain empirical stylized facts. In this paper, we combine flavors of these two classes of models to test whether no-arbitrage restrictions affect forecasting. We construct cross-section (arbitrage-allowing) and arbitrage-free versions of a parametric polynomial model and analyze how well they predict out-of-sample interest rates. Based on U.S. Treasury yield data, we find that no-arbitrage restrictions significantly improve forecasts. Arbitrage-free versions achieve smaller overall biases and root mean square errors for most maturities and forecasting horizons. Furthermore, a decomposition of forecasts into forward rates and holding return premia indicates that the superior performance of the no-arbitrage versions is due to better identification of the bond risk premium.
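
The headline comparison, bias and root mean square error of out-of-sample forecasts maturity by maturity, is simple to reproduce on any yield data set. The sketch below uses invented numbers purely to show the computation; they are not the paper's results.

```python
import numpy as np

def bias_and_rmse(forecast, realized):
    """Bias and root mean square error of out-of-sample yield forecasts
    for one maturity and horizon (arrays over forecast dates)."""
    err = np.asarray(forecast) - np.asarray(realized)
    return err.mean(), np.sqrt((err ** 2).mean())

# Toy comparison of cross-section vs. arbitrage-free forecasts (percent yields)
realized       = np.array([4.10, 4.25, 4.40, 4.30])
cross_section  = np.array([4.30, 4.50, 4.55, 4.60])
arbitrage_free = np.array([4.15, 4.30, 4.35, 4.40])
for name, f in [("cross-section", cross_section), ("arbitrage-free", arbitrage_free)]:
    b, r = bias_and_rmse(f, realized)
    print(f"{name:15s} bias={b:+.3f}  RMSE={r:.3f}")
```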