899 results for "Models performance"
Abstract:
Several methods that produce forecasts for time series in the form of probability assignments are examined. The necessary concepts are presented, addressing questions such as how to assess the performance of a probabilistic forecast. One class of models, cluster weighted models (CWMs), receives particular attention. CWMs, originally proposed for deterministic forecasts, can be employed for probabilistic forecasting with little modification. Two examples are presented. The first involves estimating the state of (numerically simulated) dynamical systems from noise-corrupted measurements, a problem also known as filtering. This problem has an optimal solution, called the optimal filter, against which the considered time series models are compared. (The optimal filter requires the dynamical equations to be known.) In the second example, we aim at forecasting the chaotic oscillations of an experimental bronze spring system. Both examples demonstrate that the considered time series models, and especially the CWMs, provide useful probabilistic information about the underlying dynamical relations. In particular, they provide more than just an approximation to the conditional mean.
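One standard way to assess a probabilistic forecast of the kind this abstract describes is the ignorance (logarithmic) score. The following is a minimal sketch, not the paper's actual evaluation code; the function names and the discretisation into bins are illustrative assumptions.

```python
import numpy as np

def log_score(prob_forecasts, outcomes):
    """Mean ignorance (negative log2) score of probability assignments.
    prob_forecasts: (n, k) array of predicted probabilities over k bins.
    outcomes: (n,) array of indices of the bins that actually occurred.
    Lower is better; a perfect, fully confident forecast scores 0."""
    p = np.asarray(prob_forecasts)[np.arange(len(outcomes)), outcomes]
    return float(-np.mean(np.log2(p)))

# A climatological forecast spreads probability evenly over k bins,
# so it scores log2(k) bits regardless of what actually happens.
rng = np.random.default_rng(0)
outcomes = rng.integers(0, 4, size=100)
climatology = np.full((100, 4), 0.25)
```

Any model whose mean score beats the climatological baseline is extracting genuine probabilistic information, which is the sense in which the abstract says CWMs provide "more than just an approximation to the conditional mean".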
Abstract:
This paper reports the results of a parametric CFD study on idealized city models to investigate the potential of slope flow in ventilating a city located in a mountainous region when the background synoptic wind is absent. Examples of such a city include Tokyo in Japan, Los Angeles and Phoenix in the US, and Hong Kong. Two types of buoyancy-driven flow are considered, i.e., slope flow from the mountain slope (katabatic wind at night and anabatic wind in the daytime), and wall flow due to heated/cooled urban surfaces. The combined buoyancy-driven flow system can serve the purpose of dispersing the accumulated urban air pollutants when the background wind is weak or absent. The microscopic picture of ventilation performance within the urban structures was evaluated in terms of air change rate (ACH) and age of air. The simulation results reveal that the slope flow plays an important role in ventilating the urban area, especially in calm conditions. Katabatic flow at night is conducive to mitigating the nocturnal urban heat island. In the present parametric study, the mountain slope angle and mountain height are assumed to be constant, and the changing variables are heating/cooling intensity and building height. For a typical mountain of 500 m inclined at an angle of 20° to the horizontal, the interactive structure is very much dependent on the ratio of heating/cooling intensity as well as building height. When the building is lower than 60 m, the slope wind dominates. When the building is as high as 100 m, the contribution from the urban wall flow cannot be ignored. It is found that katabatic wind can be very beneficial to the thermal environment as well as air quality at the pedestrian level. The air change rate for the pedestrian volume can be as high as 300 ACH.
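The air change rate (ACH) metric used in this abstract is simply the ventilating volumetric flow divided by the ventilated volume. A minimal sketch, with the example numbers chosen only to reproduce the 300 ACH figure quoted above:

```python
def air_change_rate(volumetric_flow_m3_per_h, volume_m3):
    """Air changes per hour (ACH): how many times per hour the air
    inside a volume is completely replaced by the ventilating flow."""
    return volumetric_flow_m3_per_h / volume_m3

# Illustrative: a pedestrian-level volume of 1e6 m^3 flushed by a
# slope-flow-driven exchange of 3e8 m^3/h gives 300 ACH.
pedestrian_ach = air_change_rate(3e8, 1e6)
```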
Abstract:
The Kalpana Very High Resolution Radiometer (VHRR) water vapour (WV) channel is very similar to the WV channel of the Meteosat Visible and Infrared Radiation Imager (MVIRI) on Meteosat-7, and both satellites observe the Indian subcontinent. Thus it is possible to compare the performance of VHRR and MVIRI in numerical weather prediction (NWP) models. In order to do so, the impact of Kalpana- and Meteosat-7-measured WV radiances was evaluated using analyses and forecasts of moisture, temperature, geopotential and winds, using the European Centre for Medium-range Weather Forecasts (ECMWF) NWP model. Compared with experiments using Meteosat-7, the experiments using Kalpana WV radiances show a similar fit to all observations and produce very similar forecasts.
Abstract:
The extra-tropical response to El Niño in configurations of a coupled model with increased horizontal resolution in the oceanic component is shown to be more realistic than in configurations with a low resolution oceanic component. This general conclusion is independent of the atmospheric resolution. Resolving small-scale processes in the ocean produces a more realistic oceanic mean state, with a reduced cold tongue bias, which in turn allows the atmospheric model component to be forced more realistically. A realistic atmospheric basic state is critical in order to represent Rossby wave propagation in response to El Niño, and hence the extra-tropical response to El Niño. Through the use of high and low resolution configurations of the forced atmospheric-only model component we show that, in isolation, atmospheric resolution does not significantly affect the simulation of the extra-tropical response to El Niño. It is demonstrated, through perturbations to the SST forcing of the atmospheric model component, that biases in the climatological SST field typical of coupled model configurations with low oceanic resolution can account for the erroneous atmospheric basic state seen in these coupled model configurations. These results highlight the importance of resolving small-scale oceanic processes in producing a realistic large-scale mean climate in coupled models, and suggest that it may be possible to “squeeze out” valuable extra performance from coupled models through increases to oceanic resolution alone.
Abstract:
Recent activity in the development of future weather data for building performance simulation follows recognition of the limitations of traditional methods, which have been based on a stationary (observed) climate. In the UK, such developments have followed on from the availability of regional climate models as delivered in UKCIP02 and recently the probabilistic projections released under UKCP09. One major area of concern is the future performance and adaptability of buildings which employ exclusively passive or low-energy cooling systems. One such method which can be employed in an integral or retrofit situation is direct or indirect evaporative cooling. The effectiveness of evaporative cooling is most strongly influenced by the wet-bulb depression of the ambient air, hence is generally regarded as most suited to hot, dry climates. However, this technology has been shown to be effective in the UK, primarily in mixed-mode buildings or as a retrofit to industrial/commercial applications. Climate projections for the UK generally indicate an increase in the summer wet-bulb depression, suggesting an enhanced potential for the application of evaporative cooling. The paper illustrates this potential by an analysis of the probabilistic scenarios released under UKCP09, together with a detailed building/plant simulation of a case study building located in the South-East of England. The results indicate a high probability that evaporative cooling will still be a viable low-energy technique in the 2050s.
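The wet-bulb depression that drives evaporative cooling effectiveness can be estimated from dry-bulb temperature and relative humidity. A sketch using Stull's (2011) empirical fit for wet-bulb temperature, which is a standard approximation valid at sea-level pressure and moderate humidities, not the method used in the paper:

```python
import math

def wet_bulb_stull(t_c, rh_pct):
    """Wet-bulb temperature (deg C) from dry-bulb temperature (deg C)
    and relative humidity (%), via Stull's (2011) empirical fit."""
    return (t_c * math.atan(0.151977 * math.sqrt(rh_pct + 8.313659))
            + math.atan(t_c + rh_pct)
            - math.atan(rh_pct - 1.676331)
            + 0.00391838 * rh_pct ** 1.5 * math.atan(0.023101 * rh_pct)
            - 4.686035)

def wet_bulb_depression(t_c, rh_pct):
    """Dry-bulb minus wet-bulb temperature: a larger depression means
    more headroom for direct or indirect evaporative cooling."""
    return t_c - wet_bulb_stull(t_c, rh_pct)
```

Drier summer air (lower RH at a given temperature) increases the depression, which is the mechanism behind the enhanced UK potential the abstract describes.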
Abstract:
This paper evaluates the extent to which the performance of English Premier League football club managers can be attributed to skill or luck when measured separately from the characteristics of the team. We first use a specification that models managerial skill as a fixed effect and we examine the relationship between the number of points earned in league matches and the club’s wage bill, transfer spending, and the extent to which they were affected by player absences through injuries, suspensions or unavailability. We next implement a bootstrapping approach to generate a simulated distribution of average points that could have taken place after the impact of the manager has been removed. The findings suggest that there are a considerable number of highly skilled managers but also several who perform below expectations. The paper proceeds to illustrate how the approach adopted could be used to determine the optimal time for a club to part company with its manager. We are able to identify in advance several managers who the analysis suggests could have been fired earlier and others whose sackings were hard to justify based on their performances.
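The bootstrapping step the abstract describes can be sketched as follows. This is an illustrative stand-in, not the paper's code: `residual_points` is assumed to be the per-match points left over after wages, transfer spending and absences are regressed out.

```python
import numpy as np

def bootstrap_mean_points(residual_points, n_boot=5000, seed=0):
    """Resample per-match points residuals with replacement to build
    the distribution of average points attainable by luck alone, i.e.
    with the manager's impact removed."""
    residual_points = np.asarray(residual_points, float)
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(residual_points),
                       size=(n_boot, len(residual_points)))
    return residual_points[idx].mean(axis=1)

def skill_p_value(observed_mean, boot_means):
    """Fraction of luck-only seasons at least as good as the observed
    one; a small value suggests genuine managerial skill."""
    return float(np.mean(boot_means >= observed_mean))
```

A manager whose observed average sits far in the upper tail of the bootstrap distribution is credited with skill; one deep in the lower tail is a candidate for the "could have been fired earlier" group.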
Abstract:
The internal variability and coupling between the stratosphere and troposphere in CCMVal‐2 chemistry‐climate models are evaluated through analysis of the annular mode patterns of variability. Computation of the annular modes in long data sets with secular trends requires refinement of the standard definition of the annular mode, and a more robust procedure that allows for slowly varying trends is established and verified. The spatial and temporal structure of the models’ annular modes is then compared with that of reanalyses. As a whole, the models capture the key features of observed intraseasonal variability, including the sharp vertical gradients in structure between stratosphere and troposphere, the asymmetries in the seasonal cycle between the Northern and Southern hemispheres, and the coupling between the polar stratospheric vortices and tropospheric midlatitude jets. It is also found that the annular mode variability changes little in time throughout simulations of the 21st century. There are, however, both common biases and significant differences in performance in the models. In the troposphere, the annular mode in models is generally too persistent, particularly in the Southern Hemisphere summer, a bias similar to that found in CMIP3 coupled climate models. In the stratosphere, the periods of peak variance and coupling with the troposphere are delayed by about a month in both hemispheres. The relationship between increased variability of the stratosphere and increased persistence in the troposphere suggests that some tropospheric biases may be related to stratospheric biases and that a well‐simulated stratosphere can improve simulation of tropospheric intraseasonal variability.
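An annular mode is conventionally the leading EOF of a detrended anomaly field. The abstract's refined, trend-tolerant procedure is not reproduced here; the sketch below uses a crude running-mean trend removal purely to illustrate the structure of the computation.

```python
import numpy as np

def annular_mode(field, trend_window=31):
    """Leading EOF of a (time, space) field after removing a slowly
    varying trend, here approximated by a centred running mean.
    Returns the spatial pattern and its principal-component series."""
    field = np.asarray(field, float)
    kernel = np.ones(trend_window) / trend_window
    trend = np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0, field)
    anom = field - trend                # detrended anomalies
    anom = anom - anom.mean(axis=0)     # remove residual time mean
    u, s, vt = np.linalg.svd(anom, full_matrices=False)
    return vt[0], u[:, 0] * s[0]
```

The persistence bias the abstract mentions would show up in the autocorrelation timescale of the returned principal-component series.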
Abstract:
Current methods and techniques used in designing organisational performance measurement systems do not consider the multiple aspects of business processes or the semantics of data generated during the lifecycle of a product. In this paper, we propose an organisational performance measurement systems design model that is based on the semantics of an organisation, business process and products lifecycle. Organisational performance measurement is examined from academic and practice disciplines. The multi-discipline approach is used as a research tool to explore the weaknesses of current models that are used to design organisational performance measurement systems. This helped in identifying the gaps in research and practice concerning the issues and challenges in designing information systems for measuring the performance of an organisation. The knowledge sources investigated include on-going and completed research project reports; scientific and management literature; and practitioners’ magazines.
Abstract:
There is large uncertainty about the magnitude of warming and how rainfall patterns will change in response to any given scenario of future changes in atmospheric composition and land use. The models used for future climate projections were developed and calibrated using climate observations from the past 40 years. The geologic record of environmental responses to climate changes provides a unique opportunity to test model performance outside this limited climate range. Evaluation of model simulations against palaeodata shows that models reproduce the direction and large-scale patterns of past changes in climate, but tend to underestimate the magnitude of regional changes. As part of the effort to reduce model-related uncertainty and produce more reliable estimates of twenty-first century climate, the Palaeoclimate Modelling Intercomparison Project is systematically applying palaeoevaluation techniques to simulations of the past run with the models used to make future projections. This evaluation will provide assessments of model performance, including whether a model is sufficiently sensitive to changes in atmospheric composition, as well as providing estimates of the strength of biosphere and other feedbacks that could amplify the model response to these changes and modify the characteristics of climate variability.
Abstract:
We present a benchmark system for global vegetation models. This system provides a quantitative evaluation of multiple simulated vegetation properties, including primary production; seasonal net ecosystem production; vegetation cover, composition and height; fire regime; and runoff. The benchmarks are derived from remotely sensed gridded datasets and site-based observations. The datasets allow comparisons of annual average conditions and seasonal and inter-annual variability, and they allow the impact of spatial and temporal biases in means and variability to be assessed separately. Specifically designed metrics quantify model performance for each process, and are compared to scores based on the temporal or spatial mean value of the observations and a “random” model produced by bootstrap resampling of the observations. The benchmark system is applied to three models: a simple light-use efficiency and water-balance model (the Simple Diagnostic Biosphere Model: SDBM), and the Lund-Potsdam-Jena (LPJ) and Land Processes and eXchanges (LPX) dynamic global vegetation models (DGVMs). SDBM reproduces observed CO2 seasonal cycles, but its simulation of independent measurements of net primary production (NPP) is too high. The two DGVMs show little difference for most benchmarks (including the interannual variability in the growth rate and seasonal cycle of atmospheric CO2), but LPX represents burnt fraction demonstrably more accurately. Benchmarking also identified several weaknesses common to both DGVMs. The benchmarking system provides a quantitative approach for evaluating how adequately processes are represented in a model, identifying errors and biases, tracking improvements in performance through model development, and discriminating among models. Adoption of such a system would do much to improve confidence in terrestrial model predictions of climate change impacts and feedbacks.
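A common choice for the "specifically designed metrics" compared against a mean-value score in benchmarking exercises of this kind is the normalised mean error. A minimal sketch, assuming this particular metric only for illustration:

```python
import numpy as np

def nme(obs, sim):
    """Normalised mean error: mean absolute model-data mismatch divided
    by the mismatch of always predicting the observed mean. 0 is a
    perfect score; values below 1 beat the mean-value benchmark."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.abs(sim - obs).sum() / np.abs(obs - obs.mean()).sum())
```

By construction, a model that always predicts the observed mean scores exactly 1, which is what makes the mean-value and bootstrap "random" baselines natural reference points.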
Abstract:
Activities like the Coupled Model Intercomparison Project (CMIP) have revolutionized climate modelling in terms of our ability to compare models and to process information about climate projections and their uncertainties. The evaluation of models against observations is now considered a key component of multi-model studies. While there are a number of outstanding scientific issues surrounding model evaluation, notably the open question of how to link model performance to future projections, here we highlight a specific but growing problem in model evaluation: that of uncertainties in the observational data that are used to evaluate the models. We highlight the problem using an example obtained from studies of the South Asian Monsoon but we believe the problem is a generic one which arises in many different areas of climate model evaluation and which requires some attention by the community.
Abstract:
Geomagnetic activity has long been known to exhibit approximately 27 day periodicity, resulting from solar wind structures repeating each solar rotation. Thus a very simple near-Earth solar wind forecast is 27 day persistence, wherein the near-Earth solar wind conditions today are assumed to be identical to those 27 days previously. Effective use of such a persistence model as a forecast tool, however, requires the performance and uncertainty to be fully characterized. The first half of this study determines which solar wind parameters can be reliably forecast by persistence and how the forecast skill varies with the solar cycle. The second half of the study shows how persistence can provide a useful benchmark for more sophisticated forecast schemes, namely physics-based numerical models. Point-by-point assessment methods, such as correlation and mean-square error, find persistence skill comparable to numerical models during solar minimum, despite the 27 day lead time of persistence forecasts, versus 2–5 days for numerical schemes. At solar maximum, however, the dynamic nature of the corona means 27 day persistence is no longer a good approximation and skill scores suggest persistence is out-performed by numerical models for almost all solar wind parameters. But point-by-point assessment techniques are not always a reliable indicator of usefulness as a forecast tool. An event-based assessment method, which focusses on key solar wind structures, finds persistence to be the most valuable forecast throughout the solar cycle. This reiterates the fact that the means of assessing the “best” forecast model must be specifically tailored to its intended use.
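The 27 day persistence benchmark is simple enough to state in a few lines. A sketch of the forecast construction and one of the point-by-point measures the abstract names (function names are illustrative):

```python
import numpy as np

def persistence_pairs(series, lag=27):
    """Pair each observation with the value one solar rotation
    (27 days) earlier: the persistence forecast and its verification."""
    series = np.asarray(series, float)
    return series[:-lag], series[lag:]

def mse(forecast, truth):
    """Mean-square error, one of the point-by-point skill measures."""
    return float(np.mean((np.asarray(forecast) - np.asarray(truth)) ** 2))

# A perfectly 27-day-periodic "solar wind" series is forecast exactly,
# mimicking the stable recurrence of solar-minimum conditions.
t = np.arange(270)
periodic = np.sin(2 * np.pi * t / 27.0)
forecast, verification = persistence_pairs(periodic, lag=27)
```

At solar maximum the recurrence breaks down, so the same construction yields large errors, which is why the benchmark's value depends on the phase of the solar cycle.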
Effects of temporal resolution of input precipitation on the performance of hydrological forecasting
Abstract:
Flood prediction systems rely on good quality precipitation input data and forecasts to drive hydrological models. Most precipitation data comes from daily stations with a good spatial coverage. However, some flood events occur on sub-daily time scales and flood prediction systems could benefit from using models calibrated on the same time scale. This study compares precipitation data aggregated from hourly stations (HP) and data disaggregated from daily stations (DP) with 6-hourly forecasts from ECMWF over the time period 1 October 2006–31 December 2009. The HP and DP data sets were then used to calibrate two hydrological models, LISFLOOD-RR and HBV, and the latter was used in a flood case study. The HP scored better than the DP when evaluated against the forecast for lead times up to 4 days. However, this advantage did not carry over to the hydrological modelling, where the models gave similar scores for simulated runoff with the two datasets. The flood forecasting study showed that both datasets gave similar hit rates whereas the HP data set gave much smaller false alarm rates (FAR). This indicates that using sub-daily precipitation in the calibration and initiation of hydrological models can improve flood forecasting.
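The hit rate and false alarm rate (FAR) used in the flood case study come from a standard contingency table over paired forecast/observed event series. A minimal sketch under the usual definitions (hits over observed events, false alarms over issued forecasts):

```python
def hit_rate_and_far(forecast_events, observed_events):
    """Contingency-table scores for event forecasts.
    hit rate = hits / observed events
    FAR      = false alarms / issued forecasts"""
    pairs = list(zip(forecast_events, observed_events))
    hits = sum(1 for f, o in pairs if f and o)
    false_alarms = sum(1 for f, o in pairs if f and not o)
    return hits / sum(observed_events), false_alarms / sum(forecast_events)
```

Two forecasting systems can share a hit rate yet differ sharply in FAR, which is exactly the pattern the abstract reports for the HP and DP datasets.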
Abstract:
We consider the forecasting performance of two SETAR exchange rate models proposed by Kräger and Kugler [J. Int. Money Fin. 12 (1993) 195]. Assuming that the models are good approximations to the data generating process, we show that whether the non-linearities inherent in the data can be exploited to forecast better than a random walk depends on both how forecast accuracy is assessed and on the ‘state of nature’. Evaluation based on traditional measures, such as (root) mean squared forecast errors, may mask the superiority of the non-linear models. Generalized impulse response functions are also calculated as a means of portraying the asymmetric response to shocks implied by such models.
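A SETAR (self-exciting threshold autoregressive) model switches between autoregressions depending on where the last observation falls relative to a threshold. A sketch of a one-step forecast from a two-regime SETAR of order 1; the coefficients are illustrative placeholders, not the estimates from the paper:

```python
def setar_one_step(y_last, threshold, lower_regime, upper_regime):
    """One-step forecast from a two-regime SETAR model of order 1:
    the AR(1) coefficients switch depending on whether the last
    observation lies at or below the threshold.
    Each regime is an (intercept, slope) pair."""
    c, phi = lower_regime if y_last <= threshold else upper_regime
    return c + phi * y_last
```

The regime switch is what produces the asymmetric response to shocks that the generalized impulse response functions in the paper are designed to portray.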
Abstract:
We consider the impact of data revisions on the forecast performance of a SETAR regime-switching model of U.S. output growth. The impact of data uncertainty in real-time forecasting will affect a model's forecast performance via the effect on the model parameter estimates as well as via the forecast being conditioned on data measured with error. We find that benchmark revisions do affect the performance of the non-linear model of the growth rate, and that the performance relative to a linear comparator deteriorates in real-time compared to a pseudo out-of-sample forecasting exercise.