91 results for E27 - Forecasting and Simulation
Abstract:
An extensive experimental and simulation study is carried out on conventional magnetorheological fluids formulated by dispersing mixtures of carbonyl iron particles of different sizes in Newtonian carriers. Apparent yield stress data are reported for a wide range of polydispersity indexes (PDI) from PDI = 1.63 to PDI = 3.31, which for a log-normal distribution corresponds to the standard deviation ranging from to . These results demonstrate that the effect of polydispersity is negligible in this range even though the suspensions exhibit very different microstructures. Experimental data in the magnetic saturation regime are in good quantitative agreement with particle-level simulations under the assumption of dipolar magnetostatic forces. The insensitivity of the yield stress to polydispersity can be understood from the interplay between the particle cluster size distribution and the packing density of particles inside the clusters.
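For readers unfamiliar with the particle-level modelling mentioned above, the sketch below shows the standard point-dipole magnetostatic force that such simulations typically use for pairs of magnetized spheres. It is a generic textbook expression, not code from the study, and the particle moment and separation in the usage line are hypothetical placeholders.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (SI units)

def dipole_dipole_force(m1, m2, r_vec):
    """Magnetostatic force on dipole 2 exerted by dipole 1, for point dipoles
    m1, m2 (A m^2) separated by r_vec (m, pointing from 1 to 2). This is the
    standard point-dipole expression that particle-level magnetorheological
    simulations commonly build on."""
    r = np.linalg.norm(r_vec)
    rhat = r_vec / r
    pref = 3.0 * MU0 / (4.0 * np.pi * r**4)
    return pref * (m2 * np.dot(m1, rhat) + m1 * np.dot(m2, rhat)
                   + rhat * np.dot(m1, m2)
                   - 5.0 * rhat * np.dot(m1, rhat) * np.dot(m2, rhat))

# Two equal dipoles aligned head-to-tail along z attract each other
m = np.array([0.0, 0.0, 1e-13])  # hypothetical particle moment (A m^2)
print(dipole_dipole_force(m, m, np.array([0.0, 0.0, 2e-6])))  # force on particle 2 (N)
```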
Abstract:
The use of kilometre-scale ensembles in operational forecasting provides new challenges for forecast interpretation and evaluation to account for uncertainty on the convective scale. A new neighbourhood-based method is presented for evaluating and characterising the local predictability variations from convective-scale ensembles. Spatial scales over which ensemble forecasts agree (agreement scales, S^A) are calculated at each grid point ij, providing a map of the spatial agreement between forecasts. By comparing the average agreement scale obtained from ensemble member pairs (S^A(mm)_ij) with that between members and radar observations (S^A(mo)_ij), this approach allows the location-dependent spatial spread-skill relationship of the ensemble to be assessed. The properties of the agreement scales are demonstrated using an idealised experiment. To demonstrate the methods in an operational context, the S^A(mm)_ij and S^A(mo)_ij are calculated for six convective cases run with the Met Office UK Ensemble Prediction System. The S^A(mm)_ij highlight predictability differences between cases, which can be linked to physical processes. Maps of S^A(mm)_ij are found to summarise the spatial predictability in a compact and physically meaningful manner that is useful for forecasting and for model interpretation. Comparison of S^A(mm)_ij and S^A(mo)_ij demonstrates the case-by-case and temporal variability of the spatial spread-skill, which can again be linked to physical processes.
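As a rough illustration of the agreement-scale idea, the sketch below grows a square neighbourhood around a grid point until two non-negative forecast fields (e.g. rainfall) agree under a simple squared-difference criterion with a scale-relaxed threshold; the threshold form and the alpha parameter are simplifying assumptions rather than the paper's exact definition, and S^A(mm)_ij is then approximated by averaging over member pairs.

```python
import numpy as np

def agreement_scale(field_a, field_b, i, j, s_max=20, alpha=0.5):
    """Smallest neighbourhood half-width S at which two non-negative fields
    agree at grid point (i, j), using the squared-difference measure
    D = (a - b)^2 / (a^2 + b^2) and an assumed scale-relaxed threshold
    alpha + (1 - alpha) * S / s_max (a simplification of the paper's criterion)."""
    for s in range(s_max + 1):
        a = field_a[max(i - s, 0):i + s + 1, max(j - s, 0):j + s + 1].mean()
        b = field_b[max(i - s, 0):i + s + 1, max(j - s, 0):j + s + 1].mean()
        denom = a * a + b * b
        d = (a - b) ** 2 / denom if denom > 0 else 0.0
        if d <= alpha + (1.0 - alpha) * s / s_max:
            return s
    return s_max

def mean_pairwise_agreement(members, s_max=20):
    """Approximate S^A(mm)_ij: the agreement scale averaged over all member
    pairs, evaluated at every grid point."""
    n = len(members)
    ny, nx = members[0].shape
    out = np.zeros((ny, nx))
    for i in range(ny):
        for j in range(nx):
            scales = [agreement_scale(members[p], members[q], i, j, s_max)
                      for p in range(n) for q in range(p + 1, n)]
            out[i, j] = np.mean(scales)
    return out

# Toy usage with two hypothetical 40 x 40 "rainfall" members
rng = np.random.default_rng(0)
m1, m2 = rng.gamma(2.0, size=(40, 40)), rng.gamma(2.0, size=(40, 40))
print(mean_pairwise_agreement([m1, m2], s_max=10)[20, 20])
```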
Abstract:
The Gauss–Newton algorithm is an iterative method regularly used for solving nonlinear least squares problems. It is particularly well suited to the treatment of very large scale variational data assimilation problems that arise in atmosphere and ocean forecasting. The procedure consists of a sequence of linear least squares approximations to the nonlinear problem, each of which is solved by an “inner” direct or iterative process. In comparison with Newton’s method and its variants, the algorithm is attractive because it does not require the evaluation of second-order derivatives in the Hessian of the objective function. In practice the exact Gauss–Newton method is too expensive to apply operationally in meteorological forecasting, and various approximations are made in order to reduce computational costs and to solve the problems in real time. Here we investigate the effects on the convergence of the Gauss–Newton method of two types of approximation used commonly in data assimilation. First, we examine “truncated” Gauss–Newton methods where the inner linear least squares problem is not solved exactly, and second, we examine “perturbed” Gauss–Newton methods where the true linearized inner problem is approximated by a simplified, or perturbed, linear least squares problem. We give conditions ensuring that the truncated and perturbed Gauss–Newton methods converge and also derive rates of convergence for the iterations. The results are illustrated by a simple numerical example. A practical application to the problem of data assimilation in a typical meteorological system is presented.
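The truncated variant discussed above can be made concrete with a small sketch: each outer Gauss–Newton step solves the linearized normal equations only approximately, here with a few conjugate-gradient iterations. This is a generic illustration on a toy curve-fitting problem (the exponential model, data and iteration counts are invented for the example), not the operational data assimilation configuration.

```python
import numpy as np

def truncated_gauss_newton(residual, jacobian, x0, outer_iters=10, inner_iters=5):
    """Sketch of a truncated Gauss-Newton iteration for min 0.5*||r(x)||^2.
    The inner solve of J^T J s = -J^T r is only approximated by a fixed,
    small number of conjugate-gradient steps ('truncated' inner loop)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(outer_iters):
        r = residual(x)
        J = jacobian(x)
        A = J.T @ J
        b = -J.T @ r
        # inexact inner solve: a few CG iterations on the normal equations
        s = np.zeros_like(x)
        res = b - A @ s
        p = res.copy()
        for _ in range(inner_iters):
            Ap = A @ p
            denom = p @ Ap
            if denom <= 0:
                break
            alpha = (res @ res) / denom
            s += alpha * p
            res_new = res - alpha * Ap
            beta = (res_new @ res_new) / (res @ res)
            p = res_new + beta * p
            res = res_new
        x = x + s
    return x

# Toy example: fit y = a*exp(b*t) to noisy data (illustrative only)
t = np.linspace(0, 1, 50)
y = 2.0 * np.exp(1.5 * t) + 0.01 * np.random.default_rng(0).standard_normal(50)
r = lambda x: x[0] * np.exp(x[1] * t) - y
Jf = lambda x: np.column_stack([np.exp(x[1] * t), x[0] * t * np.exp(x[1] * t)])
print(truncated_gauss_newton(r, Jf, [1.0, 1.0]))
```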
Abstract:
Data assimilation – the set of techniques whereby information from observing systems and models is combined optimally – is rapidly becoming prominent in endeavours to exploit Earth Observation for Earth sciences, including climate prediction. This paper explains the broad principles of data assimilation, outlining different approaches (optimal interpolation, three-dimensional and four-dimensional variational methods, the Kalman Filter), together with the approximations that are often necessary to make them practicable. After pointing out a variety of benefits of data assimilation, the paper then outlines some practical applications of the exploitation of Earth Observation by data assimilation in the areas of operational oceanography, chemical weather forecasting and carbon cycle modelling. Finally, some challenges for the future are noted.
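The assimilation schemes named above all build on the same best linear unbiased analysis step; the sketch below shows that textbook update (optimal interpolation, equivalently the Kalman filter analysis step) on a two-variable toy state. The matrices and observation values are invented for illustration and are not taken from the paper.

```python
import numpy as np

def optimal_interpolation_update(xb, B, y, H, R):
    """Textbook best linear unbiased analysis step:
        x_a = x_b + K (y - H x_b),  K = B H^T (H B H^T + R)^(-1)
    with background state xb, background error covariance B, observations y,
    observation operator H and observation error covariance R."""
    innovation = y - H @ xb
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
    xa = xb + K @ innovation                        # analysis state
    A = (np.eye(len(xb)) - K @ H) @ B               # analysis error covariance
    return xa, A

# Tiny illustrative example: 2-variable state, one observation of the first variable
xb = np.array([1.0, 2.0])
B = np.array([[1.0, 0.5], [0.5, 1.0]])
H = np.array([[1.0, 0.0]])
R = np.array([[0.25]])
y = np.array([1.8])
xa, A = optimal_interpolation_update(xb, B, y, H, R)
print(xa)
```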
Abstract:
A new field of study, “decadal prediction,” is emerging in climate science. Decadal prediction lies between seasonal/interannual forecasting and longer-term climate change projections, and focuses on time-evolving regional climate conditions over the next 10–30 yr. Numerous assessments of climate information user needs have identified this time scale as being important to infrastructure planners, water resource managers, and many others. It is central to the information portfolio required to adapt effectively to and through climatic changes. At least three factors influence time-evolving regional climate at the decadal time scale: 1) climate change commitment (further warming as the coupled climate system comes into adjustment with increases of greenhouse gases that have already occurred), 2) external forcing, particularly from future increases of greenhouse gases and recovery of the ozone hole, and 3) internally generated variability. Some decadal prediction skill has been demonstrated to arise from the first two of these factors, and there is evidence that initialized coupled climate models can capture mechanisms of internally generated decadal climate variations, thus increasing predictive skill globally and particularly regionally. Several methods have been proposed for initializing global coupled climate models for decadal predictions, all of which involve global time-evolving three-dimensional ocean data, including temperature and salinity. An experimental framework to address decadal predictability/prediction is described in this paper and has been incorporated into the coordinated Coupled Model Intercomparison Project, phase 5 (CMIP5) experiments, some of which will be assessed for the IPCC Fifth Assessment Report (AR5). These experiments will likely guide work in this emerging field over the next 5 yr.
Abstract:
The level of insolvencies in the construction industry is high when compared to other industry sectors. Given the management expertise and experience that is available to the construction industry, it seems strange that, according to the literature, the major causes of failure are lack of financial control and poor management. This indicates that with good cash flow management, companies could be kept operating and financially healthy; it is possible to prevent failure. Although there are financial models that can be used to predict failure, they are based on company accounts, which have been shown to be an unreliable source of data. There are models available for cash flow management and forecasting, and these could be used as a starting point for managers in rethinking their cash flow management practices. The research reported here has reached the stage of formulating researchable questions for an in-depth study, including issues such as how contractors manage their cash flow, how payment practices can be managed without damaging others in the supply chain, and the relationships between companies' financial structures and the payment regimes to which they are subjected.
Abstract:
This paper describes a novel numerical algorithm for simulating the evolution of fine-scale conservative fields in layer-wise two-dimensional flows, the most important examples of which are the Earth's atmosphere and oceans. The algorithm combines two radically different algorithms, one Lagrangian and the other Eulerian, to achieve an unexpected gain in computational efficiency. The algorithm is demonstrated for multi-layer quasi-geostrophic flow, and results are presented for a simulation of a tilted stratospheric polar vortex and of nearly-inviscid quasi-geostrophic turbulence. The turbulence results contradict previous arguments and simulation results that have suggested an ultimate two-dimensional, vertically-coherent character of the flow. Ongoing extensions of the algorithm to the generally ageostrophic flows characteristic of planetary fluid dynamics are outlined.
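To illustrate only the Lagrangian half of the hybrid idea, the sketch below advects contour nodes using velocities held on an Eulerian grid (bilinear interpolation, midpoint time stepping, periodic domain). The Eulerian part of the actual algorithm, recovering the velocity from the advected field by quasi-geostrophic inversion, is not reproduced here; the velocity arrays are assumed given, and the solid-body rotation example is purely illustrative.

```python
import numpy as np

def bilinear(field, x, y, dx):
    """Bilinear interpolation of a gridded field at points (x, y),
    assuming uniform spacing dx and periodic boundaries."""
    ny, nx = field.shape
    fx, fy = x / dx, y / dx
    i0, j0 = np.floor(fx).astype(int), np.floor(fy).astype(int)
    tx, ty = fx - i0, fy - j0
    i0, j0 = i0 % nx, j0 % ny
    i1, j1 = (i0 + 1) % nx, (j0 + 1) % ny
    return ((1 - tx) * (1 - ty) * field[j0, i0] + tx * (1 - ty) * field[j0, i1]
            + (1 - tx) * ty * field[j1, i0] + tx * ty * field[j1, i1])

def advect_contour(nodes, u, v, dx, dt, steps):
    """Advance Lagrangian contour nodes (N x 2 array of x, y positions) with a
    velocity field held on an Eulerian grid, using a simple midpoint step."""
    x, y = nodes[:, 0].copy(), nodes[:, 1].copy()
    for _ in range(steps):
        xm = x + 0.5 * dt * bilinear(u, x, y, dx)
        ym = y + 0.5 * dt * bilinear(v, x, y, dx)
        x = x + dt * bilinear(u, xm, ym, dx)
        y = y + dt * bilinear(v, xm, ym, dx)
    return np.column_stack([x, y])

# Toy usage: a circle of nodes in a solid-body rotation field on a 64 x 64 grid
n = 64
dx = 1.0 / n
yy, xx = np.meshgrid(np.arange(n) * dx, np.arange(n) * dx, indexing="ij")
u, v = -(yy - 0.5), (xx - 0.5)
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
nodes = np.column_stack([0.5 + 0.2 * np.cos(theta), 0.5 + 0.2 * np.sin(theta)])
print(advect_contour(nodes, u, v, dx, dt=0.01, steps=100)[:2])
```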
Abstract:
The effects of the 2003 European heat wave have highlighted the need for society to prepare itself for and cope more effectively with heat waves. This is particularly important in the context of predicted climate change and the likelihood of more frequent extreme climate events; to date, heat as a natural hazard has been largely ignored. In order to develop better coping strategies, this report explores the factors that shape the social impacts of heat waves, and sets out a programme of research to address the considerable knowledge gaps in this area. Heat waves, or periods of anomalous warmth, do not affect everyone; it is the vulnerable individuals or sectors of society who will most experience their effects. The main factors of vulnerability are being elderly, living alone, having a pre-existing disease, being immobile or suffering from mental illness and being economically disadvantaged. The synergistic effects of such factors may prove fatal for some. Heat waves have discernible impacts on society including a rise in mortality, an increased strain on infrastructure (power, water and transport) and a possible rise in social disturbance. Wider impacts may include effects on the retail industry, ecosystem services and tourism. Adapting to more frequent heat waves should include soft engineering options and, where possible, avoid the widespread use of air conditioning which could prove unsustainable in energy terms. Strategies for coping with heat include changing the way in which urban areas are developed or re-developed, and setting up heat watch warning systems based around weather and seasonal climate forecasting and intervention strategies. Although heat waves have discernible effects on society, much remains unknown about their wider social impacts, diffuse health issues and how to manage them.
Abstract:
The Konstanz Information Miner is a modular environment which enables easy visual assembly and interactive execution of a data pipeline. It is designed as a teaching, research and collaboration platform, which enables easy integration of new algorithms, data manipulation or visualization methods as new modules or nodes. In this paper we describe some of the design aspects of the underlying architecture and briefly sketch how new nodes can be incorporated.
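The modular node/pipeline idea can be caricatured in a few lines. The sketch below is a toy dataflow abstraction in Python and deliberately does not use the real KNIME API (KNIME nodes are Java classes with configuration dialogs, data tables and views); it only mirrors the notion of assembling interchangeable processing nodes into a pipeline.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Node:
    """Toy stand-in for a pipeline node: a named, self-contained processing step."""
    name: str
    func: Callable

def run_pipeline(data, nodes: List[Node]):
    """Execute nodes in sequence, passing each node's output to the next."""
    for node in nodes:
        data = node.func(data)
    return data

# Example: filter -> summarise, assembled from interchangeable nodes
pipeline = [
    Node("filter_positive", lambda rows: [r for r in rows if r > 0]),
    Node("mean", lambda rows: sum(rows) / len(rows)),
]
print(run_pipeline([-1.0, 2.0, 4.0], pipeline))
```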
Abstract:
A score test is developed for binary clinical trial data, which incorporates patient non-compliance while respecting randomization. It is assumed in this paper that compliance is all-or-nothing, in the sense that a patient either accepts all of the treatment assigned as specified in the protocol, or none of it. Direct analytic comparisons of the adjusted test statistic for both the score test and the likelihood ratio test are made with the corresponding test statistics that adhere to the intention-to-treat principle. It is shown that no gain in power is possible over the intention-to-treat analysis, by adjusting for patient non-compliance. Sample size formulae are derived and simulation studies are used to demonstrate that the sample size approximation holds. Copyright © 2003 John Wiley & Sons, Ltd.
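The paper's compliance-adjusted sample size formulae cannot be reconstructed from the abstract alone. As a reference point only, the sketch below implements the standard two-sided sample size formula for comparing two proportions under an intention-to-treat design; the response rates in the usage line are chosen purely for illustration.

```python
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.8):
    """Textbook sample size per arm for comparing two proportions with a
    two-sided z-test; shown only to illustrate the kind of calculation the
    abstract refers to (the paper's compliance-adjusted formulae differ)."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return num / (p1 - p2) ** 2

# Hypothetical control vs treatment response rates of 60% and 75%
print(round(n_per_group(0.6, 0.75)))
```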
Abstract:
Uplands around the world are facing significant social, economic and environmental changes, and decision-makers need to better understand what the future may hold if they are to adapt and maintain upland goods and services. This paper draws together all major research comprising eight studies that have used scenarios to describe possible futures for UK uplands. The paper evaluates which scenarios are perceived by stakeholders to be most likely and desirable, and assesses the benefits and drawbacks of the scenario methods used in UK uplands to date. Stakeholders agreed that the most desirable and likely scenario would be a continuation of hill farming (albeit at reduced levels) based on cross-compliance with environmental measures. The least desirable scenario is a withdrawal of government financial support for hill farming. Although this was deemed by stakeholders to be the least likely scenario, the loss of government support warrants close attention due to its potential implications for the local economy. Stakeholders noted that the environmental implications of this scenario are much less clear-cut. As such, there is an urgent need to understand the full implications of this scenario, so that upland stakeholders can adequately prepare, and policy-makers can better evaluate the likely implications of different policy options. The paper concludes that in future, upland scenario research needs to: (1) better integrate in-depth and representative participation from stakeholders during both scenario development and evaluation; and (2) make more effective use of visualisation techniques and simulation models. © 2009 Elsevier Ltd. All rights reserved.
Abstract:
Heterogeneity in lifetime data may be modelled by multiplying an individual's hazard by an unobserved frailty. We test for the presence of frailty of this kind in univariate and bivariate data with Weibull-distributed lifetimes, using statistics based on the ordered Cox-Snell residuals from the null model of no frailty. The form of the statistics is suggested by outlier testing in the gamma distribution. We find through simulation that the sum of the k largest or k smallest order statistics, for suitably chosen k, provides a powerful test when the frailty distribution is assumed to be gamma or positive stable, respectively. We provide recommended values of k for sample sizes up to 100 and simple formulae for estimated critical values for tests at the 5% level.
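A simplified version of the proposed statistics can be sketched for uncensored univariate data: fit the Weibull null model, compute Cox-Snell residuals (the fitted cumulative hazard at each lifetime, unit exponential under the null), and sum the k largest and k smallest order statistics. The fitting route and the toy data below are assumptions for illustration; critical values would come from simulation, as in the paper, and are not reproduced here.

```python
import numpy as np
from scipy.stats import weibull_min

def frailty_screen_statistics(times, k=5):
    """Sum of the k largest and k smallest ordered Cox-Snell residuals from a
    Weibull null (no-frailty) fit, assuming uncensored univariate data.
    For a Weibull fit, the Cox-Snell residual is the fitted cumulative hazard
    (t / scale) ** shape evaluated at each observed lifetime."""
    shape, _, scale = weibull_min.fit(times, floc=0)          # null model fit
    residuals = np.sort((np.asarray(times) / scale) ** shape)  # ordered residuals
    return residuals[-k:].sum(), residuals[:k].sum()

# Toy usage with simulated Weibull lifetimes (hypothetical data)
t = weibull_min.rvs(1.5, scale=2.0, size=60, random_state=1)
print(frailty_screen_statistics(t, k=5))
```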
Abstract:
A supersaturated design (SSD) is an experimental plan, useful for evaluating the main effects of m factors with n experimental units when m > n - 1, each factor has two levels, and the first-order effects of only a few factors are expected to have dominant effects on the response. Use of these plans can be extremely cost-effective when it is necessary to screen hundreds or thousands of factors with a limited amount of resources. In this article we describe how to use cyclic balanced incomplete block designs and regular graph designs to construct E(s²)-optimal and near-optimal SSDs when m is a multiple of n - 1. We also provide a table that can be used to construct these designs for screening thousands of factors. In addition, we explain how to obtain SSDs when m is not a multiple of n - 1. Using the table and the approaches given in this paper, SSDs can be developed for designs with up to 24 runs and up to 12,190 factors.
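The E(s²) criterion itself is easy to state in code: it is the average squared off-diagonal entry of X'X for the n × m matrix of ±1 factor settings. The sketch below computes it for an arbitrary two-level design; the random matrix in the usage line is only a placeholder, not one of the paper's cyclic balanced incomplete block constructions.

```python
import numpy as np

def e_s2(X):
    """E(s^2) criterion for a two-level supersaturated design: the mean of the
    squared off-diagonal entries s_ij of X'X, where X is the n x m matrix of
    +/-1 factor settings."""
    X = np.asarray(X, dtype=float)
    S = X.T @ X                       # m x m matrix of column inner products
    m = S.shape[0]
    off = S[np.triu_indices(m, k=1)]  # s_ij for i < j
    return (off ** 2).mean()

# Toy usage: a random +/-1 "design" with n = 8 runs and m = 14 factors
rng = np.random.default_rng(0)
X = rng.choice([-1, 1], size=(8, 14))
print(e_s2(X))
```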
Abstract:
Information technology in construction (ITC) has been gaining wide acceptance and is being implemented in construction research domains as a tool to assist decision makers. Most of the research into visualization technologies (VT) has been on the wide range of 3D and simulation applications suitable for construction processes. Despite advances in interoperability and standardization of products, VT usage has remained very low when it comes to communicating with and addressing the needs of building end-users (BEU). This paper argues that building end-users are a source of experience and expertise that can be brought into the briefing stage for the evaluation of design proposals. It also suggests that the end-user is a source of new ideas promoting innovation. In this research a positivistic methodology is proposed that includes the comparison of 3D models and traditional 2D methods. It will help to identify "how much", if anything, a non-spatial specialist can gain in terms of "understanding" of a particular design proposal presented using both methods.