Abstract:
The requirement to forecast volcanic ash concentrations was amplified as a response to the 2010 Eyjafjallajökull eruption when ash safety limits for aviation were introduced in the European area. The ability to provide accurate quantitative forecasts relies to a large extent on the source term, i.e. the emission of ash as a function of time and height. This study presents source term estimations of the ash emissions from the Eyjafjallajökull eruption derived with an inversion algorithm which constrains modeled ash emissions with satellite observations of volcanic ash. The algorithm is tested with input from two different dispersion models, run on three different meteorological input data sets. The results are robust to which dispersion model and meteorological data are used. Modeled ash concentrations are compared quantitatively to independent measurements from three different research aircraft and one surface measurement station. These comparisons show that the models perform reasonably well in simulating the ash concentrations, and simulations using the source term obtained from the inversion are in overall better agreement with the observations (rank correlation = 0.55, Figure of Merit in Time (FMT) = 25–46%) than simulations using simplified source terms (rank correlation = 0.21, FMT = 20–35%). The vertical structures of the modeled ash clouds mostly agree with lidar observations, and the modeled ash particle size distributions agree reasonably well with observed size distributions. There are occasionally large differences between simulations, but the model mean usually outperforms any individual model. The results emphasize the benefits of using an ensemble-based forecast for improved quantification of uncertainties in future ash crises.
Abstract:
The prediction of Northern Hemisphere (NH) extratropical cyclones by nine different ensemble prediction systems (EPSs), archived as part of The Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble (TIGGE), has recently been explored using a cyclone tracking approach. This paper provides a continuation of this work, extending the analysis to the Southern Hemisphere (SH). While the EPSs have larger errors in all cyclone properties in the SH, the relative performance of the different EPSs remains broadly consistent between the two hemispheres. Some interesting differences are also shown. The Chinese Meteorological Administration (CMA) EPS has a significantly lower level of performance in the SH compared to the NH. Previous NH results showed that the Centro de Previsao de Tempo e Estudos Climaticos (CPTEC) EPS underpredicts cyclone intensity. The results of this current study show that this bias is significantly larger in the SH. The CPTEC EPS also has very little spread in both hemispheres. As with the NH results, cyclone propagation speed is underpredicted by all the EPSs in the SH. To investigate this further, the bias was also computed for the ECMWF high-resolution deterministic forecast. The bias was significantly smaller than in the lower-resolution ECMWF EPS.
Abstract:
In numerical weather prediction (NWP), data assimilation (DA) methods are used to combine available observations with numerical model estimates. This is done by minimising measures of the error in both observations and model estimates, with more weight given to data that can be more trusted. For any DA method, an estimate of the initial forecast error covariance matrix is required. For convective-scale data assimilation, however, the properties of the error covariances are not well understood. An effective way to investigate covariance properties in the presence of convection is to use an ensemble-based method, for which an estimate of the error covariance is readily available at each time step. In this work, we investigate the performance of the ensemble square root filter (EnSRF) in the presence of cloud growth, applied to an idealised 1D convective-column model of the atmosphere. We show that the EnSRF performs well in capturing cloud growth, but the ensemble does not cope well with discontinuities introduced into the system by parameterised rain. The state estimates lose accuracy, and, more importantly, the ensemble is unable to capture the spread (variance) of the estimates correctly. We also find, counter-intuitively, that by reducing the spatial frequency of the observations and/or the accuracy of the observations, the ensemble is able to capture the states and their variability successfully across all regimes.
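The deterministic analysis step of an EnSRF can be sketched for the simplest case. The following is a minimal, generic Whitaker-Hamill-style square root update for a single direct observation of the state (observation operator H = 1); it illustrates the technique only and is not the specific convective-column configuration studied above:

```python
import numpy as np

def ensrf_update_scalar(ens, y_obs, r):
    """EnSRF analysis for one scalar observation with H = 1.

    The ensemble mean is updated with the full Kalman gain; the
    perturbations are rescaled with a reduced gain so that the analysis
    variance matches Kalman filter theory without sampling observation
    noise (the defining property of square root filters).
    """
    xb = ens.mean()
    Xp = ens - xb                                # background perturbations
    p = Xp @ Xp / (ens.size - 1)                 # ensemble background variance
    k = p / (p + r)                              # Kalman gain
    xa = xb + k * (y_obs - xb)                   # analysis mean
    alpha = 1.0 / (1.0 + np.sqrt(r / (p + r)))   # gain reduction factor
    Xa = (1.0 - alpha * k) * Xp                  # analysis perturbations
    return xa + Xa
```

By construction the analysis perturbation variance equals (1 − k) times the background variance, the value a Kalman filter would give.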
Abstract:
This paper describes the implementation of a 3D variational (3D-Var) data assimilation scheme for a morphodynamic model applied to Morecambe Bay, UK. A simple decoupled hydrodynamic and sediment transport model is combined with a data assimilation scheme to investigate the ability of such methods to improve the accuracy of the predicted bathymetry. The inverse forecast error covariance matrix is modelled using a Laplacian approximation which is calibrated for the length scale parameter required. Calibration is also performed for the Soulsby-van Rijn sediment transport equations. The data used for assimilation purposes comprises waterlines derived from SAR imagery covering the entire period of the model run, and swath bathymetry data collected by a ship-borne survey for one date towards the end of the model run. A LiDAR survey of the entire bay carried out in November 2005 is used for validation purposes. The comparison of the predictive ability of the model alone with the model-forecast-assimilation system demonstrates that using data assimilation significantly improves the forecast skill. An investigation of the assimilation of the swath bathymetry as well as the waterlines demonstrates that the overall improvement is initially large, but decreases over time as the bathymetry evolves away from that observed by the survey. The result of combining the calibration runs into a pseudo-ensemble provides a higher skill score than for a single optimized model run. A brief comparison of the Optimal Interpolation assimilation method with the 3D-Var method shows that the two schemes give similar results.
Abstract:
High-resolution ensemble simulations (Δx = 1 km) are performed with the Met Office Unified Model for the Boscastle (Cornwall, UK) flash-flooding event of 16 August 2004. Forecast uncertainties arising from imperfections in the forecast model are analysed by comparing the simulation results produced by two types of perturbation strategy. Motivated by the meteorology of the event, one type of perturbation alters relevant physics choices or parameter settings in the model's parametrization schemes. The other type of perturbation is designed to account for representativity error in the boundary-layer parametrization. It makes direct changes to the model state and provides a lower bound against which to judge the spread produced by other uncertainties. The Boscastle ensemble has genuine skill at scales of approximately 60 km and an ensemble spread that can be estimated to within ∼10% with only eight members. Differences between the model-state perturbation and physics-modification strategies are discussed, the former being more important for triggering and the latter for subsequent cell development, including the average internal structure of convective cells. Despite such differences, the spread in rainfall evaluated at skilful scales is shown to be only weakly sensitive to the perturbation strategy. This suggests that relatively simple strategies for treating model uncertainty may be sufficient for practical, convective-scale ensemble forecasting.
Abstract:
The ability to run General Circulation Models (GCMs) at ever-higher horizontal resolutions has meant that tropical cyclone simulations are increasingly credible. A hierarchy of atmosphere-only GCMs, based on the Hadley Centre Global Environmental Model (HadGEM1), with horizontal resolution increasing from approximately 270 km to 60 km (at 50°N), is used to systematically investigate the impact of spatial resolution on the simulation of global tropical cyclone activity, independent of model formulation. Tropical cyclones are extracted from ensemble simulations and reanalyses of comparable resolutions using a feature-tracking algorithm. Resolution is critical for simulating storm intensity, and convergence to observed storm intensities is not achieved with the model hierarchy. Resolution is less critical for simulating the annual number of tropical cyclones and their geographical distribution, which are well captured at resolutions of 135 km or higher, particularly for Northern Hemisphere basins. Simulating the interannual variability of storm occurrence requires resolutions of 100 km or higher; however, the level of skill is basin dependent. Higher-resolution GCMs are increasingly able to capture the interannual variability of the large-scale environmental conditions that contribute to tropical cyclogenesis. Different environmental factors contribute to the interannual variability of tropical cyclones in the different basins: in the North Atlantic basin the vertical wind shear, potential intensity and low-level absolute vorticity are dominant, while in the North Pacific basins mid-level relative humidity and low-level absolute vorticity are dominant. Model resolution is crucial for a realistic simulation of tropical cyclone behaviour, and high-resolution GCMs are found to be valuable tools for investigating the global location and frequency of tropical cyclones.
Abstract:
This dissertation deals with aspects of sequential data assimilation (in particular ensemble Kalman filtering) and numerical weather forecasting. In the first part, the recently formulated Ensemble Kalman-Bucy filter (EnKBF) is revisited. It is shown that the previously used numerical integration scheme fails when the magnitude of the background error covariance grows beyond that of the observational error covariance in the forecast window. Therefore, we present a suitable integration scheme that handles the stiffening of the differential equations involved and does not incur further computational expense. Moreover, a transform-based alternative to the EnKBF is developed: under this scheme, the operations are performed in the ensemble space instead of in the state space. Advantages of this formulation are explained. For the first time, the EnKBF is implemented in an atmospheric model. The second part of this work deals with ensemble clustering, a phenomenon that arises when performing data assimilation using deterministic ensemble square root filters (EnSRFs) in highly nonlinear forecast models. Namely, an M-member ensemble splits into an outlier and a cluster of M−1 members. Previous works may suggest that this issue represents a failure of EnSRFs; this work dispels that notion. It is shown that ensemble clustering can also be reverted by nonlinear processes, in particular by the alternation between nonlinear expansion and compression of the ensemble in different regions of the attractor. Some EnSRFs that use random rotations have been developed to overcome this issue; these formulations are analyzed and their advantages and disadvantages with respect to common EnSRFs are discussed. The third and last part contains the implementation of the Robert-Asselin-Williams (RAW) filter in an atmospheric model.
The RAW filter is an improvement to the widely used Robert-Asselin filter that successfully suppresses spurious computational waves while avoiding any distortion in the mean value of the function. Using statistical significance tests at both the local and the field level, it is shown that the climatology of the SPEEDY model is not modified by the changed time-stepping scheme; hence, no retuning of the parameterizations is required. It is found that the accuracy of medium-term forecasts is increased by using the RAW filter.
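The RAW scheme itself is compact: the Robert-Asselin displacement d = (ν/2)(x_{n−1} − 2x_n + x_{n+1}) is split between the current and the next time level instead of being applied to the current level alone. A minimal sketch follows, assuming a toy decay equation rather than the SPEEDY model; α = 1 recovers the classical Robert-Asselin filter, and the parameter values are illustrative:

```python
import numpy as np

def leapfrog_raw(f, x0, dt, nsteps, nu=0.2, alpha=0.53):
    """Leapfrog time stepping with the Robert-Asselin-Williams (RAW) filter.

    alpha = 1.0 gives the classical Robert-Asselin filter; alpha slightly
    above 0.5 damps the computational mode while largely preserving the
    mean of the three time levels. Illustrative sketch only.
    """
    x_prev = x0
    x_curr = x0 + dt * f(x0)                             # Euler start-up step
    for _ in range(nsteps - 1):
        x_next = x_prev + 2.0 * dt * f(x_curr)           # leapfrog step
        d = 0.5 * nu * (x_prev - 2.0 * x_curr + x_next)  # filter displacement
        x_curr += alpha * d                              # RA part on level n
        x_next += (alpha - 1.0) * d                      # Williams correction on n+1
        x_prev, x_curr = x_curr, x_next
    return x_curr

# dx/dt = -x, integrated to t = 1; the exact solution is exp(-1)
approx = leapfrog_raw(lambda x: -x, 1.0, 0.01, 100)
```

Note that for pure decay the unfiltered leapfrog scheme is unstable; it is the filter's damping of the computational mode that keeps the integration well behaved.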
Abstract:
Ensemble clustering (EC) can arise in data assimilation with ensemble square root filters (EnSRFs) using non-linear models: an M-member ensemble splits into a single outlier and a cluster of M−1 members. The stochastic Ensemble Kalman Filter does not present this problem. Modifications to the EnSRFs involving a periodic resampling of the ensemble through random rotations have been proposed to address it. We introduce a metric to quantify the presence of EC and present evidence to dispel the notion that EC leads to filter failure. Starting from a univariate model, we show that EC is not a permanent but a transient phenomenon; it occurs intermittently in non-linear models. We perform a series of data assimilation experiments using a standard EnSRF and an EnSRF modified by resampling through random rotations. The modified EnSRF alleviates the issues associated with EC, but at the cost of the traceability of individual ensemble trajectories, and it cannot use some of the algorithms that enhance the performance of the standard EnSRF. In the non-linear regimes of low-dimensional models, the analysis root mean square error of the standard EnSRF slowly grows with ensemble size if the size is larger than the dimension of the model state. However, we do not observe this problem in a more complex model that uses an ensemble size much smaller than the dimension of the model state, along with inflation and localisation. Overall, we find that transient EC does not handicap the performance of the standard EnSRF.
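The random-rotation resampling mentioned above can be illustrated with a mean-preserving rotation of the ensemble perturbation matrix. This is one standard generic construction, assuming only that the rotation must leave the ones vector fixed; the operational variants analysed in the paper may differ in detail:

```python
import numpy as np

def mean_preserving_rotation(M, rng):
    """Random M x M orthogonal matrix that leaves the ensemble mean unchanged.

    It acts as the identity on the span of the ones vector and as a random
    orthogonal transformation on its (M-1)-dimensional complement, so
    applying it to the perturbation matrix re-mixes the members without
    changing the ensemble mean or covariance.
    """
    # orthonormal basis whose first column is proportional to the ones vector
    Q, _ = np.linalg.qr(np.column_stack([np.ones(M),
                                         rng.standard_normal((M, M - 1))]))
    # random orthogonal block on the complement, via SVD of a Gaussian matrix
    U, _, Vt = np.linalg.svd(rng.standard_normal((M - 1, M - 1)))
    R = U @ Vt
    B = np.block([[np.ones((1, 1)), np.zeros((1, M - 1))],
                  [np.zeros((M - 1, 1)), R]])
    return Q @ B @ Q.T
```

If `Xp` is the n × M matrix of member deviations from the mean (columns sum to zero), then `Xp @ Omega` has the same zero column sum and the same covariance `Xp @ Xp.T`, which is why such resampling leaves the analysis statistics intact while scrambling individual trajectories.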
Abstract:
A set of random variables is exchangeable if its joint distribution function is invariant under permutation of the arguments. The concept of exchangeability is discussed, with a view towards potential application in evaluating ensemble forecasts. It is argued that the paradigm of ensembles being an independent draw from an underlying distribution function is probably too narrow; allowing ensemble members to be merely exchangeable might be a more versatile model. The question is discussed whether established methods of ensemble evaluation need alteration under this model, with reliability being given particular attention. It turns out that the standard methodology of rank histograms can still be applied. As a first application of the exchangeability concept, it is shown that the method of minimum spanning trees to evaluate the reliability of high dimensional ensembles is mathematically sound.
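The rank histogram, which the abstract argues remains valid under mere exchangeability, is straightforward to compute. A minimal sketch follows; the standard-normal ensemble and verification below are illustrative assumptions chosen so that the ensemble is statistically consistent and the histogram is flat in expectation:

```python
import numpy as np

def rank_histogram(ens, obs, rng):
    """Count the rank of each verification within its ensemble.

    ens has shape (n_forecasts, K); ties are broken at random. For an
    exchangeable, statistically consistent ensemble the K + 1 ranks are
    equally likely, giving a flat histogram in expectation.
    """
    K = ens.shape[1]
    counts = np.zeros(K + 1, dtype=int)
    for members, y in zip(ens, obs):
        rank = np.sum(members < y) + rng.integers(0, np.sum(members == y) + 1)
        counts[rank] += 1
    return counts

rng = np.random.default_rng(0)
ens = rng.standard_normal((20000, 4))   # K = 4 members per forecast
obs = rng.standard_normal(20000)        # verification from the same N(0, 1)
hist = rank_histogram(ens, obs, rng)    # 5 bins, each near 4000 counts
```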
Abstract:
An ensemble forecast is a collection of runs of a numerical dynamical model, initialized with perturbed initial conditions. In modern weather prediction for example, ensembles are used to retrieve probabilistic information about future weather conditions. In this contribution, we are concerned with ensemble forecasts of a scalar quantity (say, the temperature at a specific location). We consider the event that the verification is smaller than the smallest, or larger than the largest ensemble member. We call these events outliers. If a K-member ensemble accurately reflected the variability of the verification, outliers should occur with a base rate of 2/(K + 1). In operational forecast ensembles though, this frequency is often found to be higher. We study the predictability of outliers and find that, exploiting information available from the ensemble, forecast probabilities for outlier events can be calculated which are more skilful than the unconditional base rate. We prove this analytically for statistically consistent forecast ensembles. Further, the analytical results are compared to the predictability of outliers in an operational forecast ensemble by means of model output statistics. We find the analytical and empirical results to agree both qualitatively and quantitatively.
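The 2/(K + 1) base rate is easy to verify by simulation for a statistically consistent ensemble, since the verification is then equally likely to fall in any of the K + 1 intervals defined by the ordered members, and the outlier event corresponds to the two outermost intervals. The choice K = 9 and the Gaussian draws below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
K, N = 9, 100000
ens = rng.standard_normal((N, K))     # K-member statistically consistent ensemble
verif = rng.standard_normal(N)        # verification drawn from the same N(0, 1)

# outlier: verification below the smallest or above the largest member
outlier = (verif < ens.min(axis=1)) | (verif > ens.max(axis=1))
rate = outlier.mean()                 # should be close to 2 / (K + 1) = 0.2
```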
Abstract:
The continuous ranked probability score (CRPS) is a frequently used scoring rule. In contrast with many other scoring rules, the CRPS evaluates cumulative distribution functions. An ensemble of forecasts can easily be converted into a piecewise constant cumulative distribution function with steps at the ensemble members. This renders the CRPS a convenient scoring rule for the evaluation of ‘raw’ ensembles, obviating the need for sophisticated ensemble model output statistics or dressing methods prior to evaluation. In this article, a relation between the CRPS score and the quantile score is established. The evaluation of ‘raw’ ensembles using the CRPS is discussed in this light. It is shown that latent in this evaluation is an interpretation of the ensemble as quantiles but with non-uniform levels. This needs to be taken into account if the ensemble is evaluated further, for example with rank histograms.
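The CRPS of a 'raw' ensemble, interpreted as the piecewise-constant empirical CDF described above, can be computed in closed form via the standard kernel identity CRPS(F, y) = E|X − y| − ½E|X − X′|, with X, X′ independent draws from the empirical distribution. A minimal generic sketch, not tied to the article's quantile-level interpretation:

```python
import numpy as np

def crps_ensemble(members, y):
    """CRPS of the empirical CDF with steps at the ensemble members.

    Evaluates the kernel form of the CRPS exactly (the i = j terms in the
    second average contribute zero). O(K^2) in the ensemble size K.
    """
    x = np.asarray(members, dtype=float)
    return (np.mean(np.abs(x - y))
            - 0.5 * np.mean(np.abs(x[:, None] - x[None, :])))
```

For a one-member ensemble this reduces to the absolute error |x − y|, consistent with the CRPS generalising the absolute error to probabilistic forecasts.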
Abstract:
Generally, classifiers tend to overfit if there is noise in the training data or there are missing values. Ensemble learning methods are often used to improve a classifier's classification accuracy. Most ensemble learning approaches aim to improve the classification accuracy of decision trees. However, alternative classifiers to decision trees exist. The recently developed Random Prism ensemble learner for classification aims to improve an alternative classification rule induction approach, the Prism family of algorithms, which addresses some of the limitations of decision trees. However, Random Prism suffers, like any ensemble learner, from a high computational overhead due to the replication of the data and the induction of multiple base classifiers. Hence even modest-sized datasets may impose a computational challenge to ensemble learners such as Random Prism. Parallelism is often used to scale up algorithms to deal with large datasets. This paper investigates parallelisation for Random Prism, implements a prototype and evaluates it empirically using a Hadoop computing cluster.
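The data-replication overhead the abstract refers to is common to bagging-style ensemble learners in general. A minimal generic sketch follows; the `fit` argument is a hypothetical stand-in for any base learner (Random Prism would use a Prism rule-induction classifier here, whose details are not given in this abstract):

```python
import numpy as np

def bagging_predict(X_train, y_train, X_test, n_models, fit, rng):
    """Generic bagged ensemble classifier with majority voting.

    Each base classifier is trained on a bootstrap replicate of the
    training data (the source of the replication overhead); predictions
    on X_test are combined by majority vote. Labels are assumed 0/1, and
    `fit` must return a prediction function.
    """
    votes = []
    for _ in range(n_models):
        idx = rng.integers(0, len(y_train), len(y_train))  # bootstrap sample
        predict = fit(X_train[idx], y_train[idx])
        votes.append(predict(X_test))
    return (np.asarray(votes).mean(axis=0) >= 0.5).astype(int)
```

Because each bootstrap replicate and each base classifier is independent of the others, the loop body is embarrassingly parallel, which is what makes MapReduce-style parallelisation on a Hadoop cluster a natural fit.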
Abstract:
The translation of an ensemble of model runs into a probability distribution is a common task in model-based prediction. Common methods for such ensemble interpretations proceed as if verification and ensemble were draws from the same underlying distribution, an assumption not viable for most, if any, real-world ensembles. An alternative is to consider an ensemble as merely a source of information rather than as the possible scenarios of reality. This approach, which looks for maps between ensembles and probability distributions, is investigated and extended. Common methods are revisited, and an improvement to standard kernel dressing, called 'affine kernel dressing' (AKD), is introduced. AKD assumes an affine mapping between ensemble and verification, typically not acting on individual ensemble members but on the entire ensemble as a whole. The parameters of this mapping are determined in parallel with the other dressing parameters, including a weight assigned to the unconditioned (climatological) distribution. These amendments to standard kernel dressing, albeit simple, can improve performance significantly and are shown to be appropriate for both overdispersive and underdispersive ensembles, unlike standard kernel dressing, which exacerbates overdispersion. Studies are presented using operational numerical weather predictions for two locations and data from the Lorenz63 system, demonstrating both effectiveness given operational constraints and statistical significance given a large sample.
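Standard (non-affine) kernel dressing, the baseline AKD improves upon, simply places a smoothing kernel on each ensemble member. A minimal sketch, assuming Gaussian kernels and a Silverman-style bandwidth; AKD's affine map of the whole ensemble and its climatological weight are deliberately omitted here:

```python
import numpy as np

def kernel_dressed_pdf(ens, x, bandwidth=None):
    """Predictive density from Gaussian kernel dressing of an ensemble.

    Each of the K members contributes a Gaussian kernel of the given
    bandwidth; the default bandwidth follows Silverman's rule of thumb
    (an illustrative choice, not the dressing parameters fitted in AKD).
    """
    ens = np.asarray(ens, dtype=float)
    K = ens.size
    if bandwidth is None:
        bandwidth = ens.std(ddof=1) * (4.0 / (3.0 * K)) ** 0.2
    z = (np.asarray(x)[..., None] - ens) / bandwidth
    return np.exp(-0.5 * z**2).sum(axis=-1) / (K * bandwidth * np.sqrt(2.0 * np.pi))
```

Because every member gets the same kernel, this construction can only widen the ensemble spread, which is the mechanism behind the overdispersion problem noted above.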
Abstract:
The atmospheric response to the evolution of the global sea surface temperatures from 1979 to 1992 is studied using the Max-Planck-Institut 19-level atmospheric general circulation model, ECHAM3, at T42 resolution. Five separate 14-year integrations are performed and results are presented for each individual realization and for the ensemble-averaged response. The results are compared to a 30-year control integration using a climatological monthly mean state of the sea surface temperatures, and to analysis data. It is found that the ECHAM3 model, by and large, does reproduce the observed response pattern to El Niño and La Niña. During the El Niño events, the subtropical jet streams in both hemispheres are intensified and displaced equatorward, and there is a tendency towards weak upper easterlies over the equator. The Southern Oscillation is a very stable feature of the integrations and is accurately reproduced in all experiments. The interannual variability at middle and high latitudes, on the other hand, is strongly dominated by chaotic dynamics, and the tropical SST forcing only modulates the atmospheric circulation. The potential predictability of the model is investigated for six different regions. The signal-to-noise ratio is large in most parts of the tropical belt, of medium strength in the western hemisphere and generally small over the European area. The ENSO signal is most pronounced during the boreal spring. A particularly strong signal in the precipitation field in the extratropics during spring can be found over the southern United States. Western Canada is normally warmer during the warm ENSO phase, while northern Europe is warmer than normal during the ENSO cold phase. The reason is advection of warm air due to a more intense Pacific low than normal during the warm ENSO phase and a more intense Icelandic low than normal during the cold ENSO phase, respectively.
Abstract:
Although ensemble prediction systems (EPS) are increasingly promoted as the scientific state-of-the-art for operational flood forecasting, the communication, perception, and use of the resulting alerts have received much less attention. Using a variety of qualitative research methods, including direct user feedback at training workshops, participant observation during site visits to 25 forecasting centres across Europe, and in-depth interviews with 69 forecasters, civil protection officials, and policy makers involved in operational flood risk management in 17 European countries, this article discusses the perception, communication, and use of European Flood Alert System (EFAS) alerts in operational flood management. In particular, this article describes how the design of EFAS alerts has evolved in response to user feedback and desires for a hydrographic-like way of visualizing EFAS outputs. It also documents a variety of forecaster perceptions about the value and skill of EFAS forecasts and the best way of using them to inform operational decision making. EFAS flood alerts were generally welcomed by flood forecasters as a sort of ‘pre-alert’ to spur greater internal vigilance. In most cases, however, they did not lead, by themselves, to further preparatory action or to earlier warnings to the public or emergency services. Their hesitancy to act in response to medium-term, probabilistic alerts highlights some wider institutional obstacles to the hopes in the research community that EPS will be readily embraced by operational forecasters and lead to immediate improvements in flood incident management. The EFAS experience offers lessons for other hydrological services seeking to implement EPS operationally for flood forecasting and warning. Copyright © 2012 John Wiley & Sons, Ltd.