30 results for Full spatial domain computation
in CentAUR: Central Archive University of Reading - UK
Abstract:
Satellite-based rainfall monitoring is widely used for climatological studies because of its full global coverage, but it is also of great importance for operational purposes, especially in areas such as Africa where there is a lack of ground-based rainfall data. Satellite rainfall estimates have enormous potential benefits as input to hydrological and agricultural models because of their real-time availability, low cost and full spatial coverage. One issue that needs to be addressed is the uncertainty on these estimates. This is particularly important in assessing the likely errors on the output from non-linear models (rainfall-runoff or crop yield) which make use of the rainfall estimates, aggregated over an area, as input. Correct assessment of the uncertainty on the rainfall is non-trivial, as it must take account of:
• the difference in spatial support between the satellite information and the independent data used for calibration
• uncertainties on the independent calibration data
• the non-Gaussian distribution of rainfall amount
• the spatial intermittency of rainfall
• the spatial correlation of the rainfall field
This paper describes a method for estimating the uncertainty on satellite-based rainfall values taking account of these factors. The method involves firstly a stochastic calibration, which completely describes the probability of rainfall occurrence and the pdf of rainfall amount for a given satellite value, and secondly the generation of an ensemble of rainfall fields based on the stochastic calibration but with the correct spatial correlation structure within each ensemble member. This is achieved by the use of geostatistical sequential simulation. The ensemble generated in this way may be used to estimate uncertainty at larger spatial scales. A case study of daily rainfall monitoring in The Gambia, West Africa, for the purpose of crop yield forecasting is presented to illustrate the method.
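The two-step idea described above, a stochastic calibration giving a rain probability and an amount pdf, followed by spatially correlated ensemble members, can be sketched in miniature. The sketch below substitutes a simple Cholesky-based Gaussian simulation for the paper's geostatistical sequential simulation, and all parameters (p_rain, mu, sigma, the correlation length) are invented for illustration:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(42)

# Hypothetical 1-D transect of n pixels with an exponential correlation model.
n, n_ens, length_scale = 50, 200, 10.0
x = np.arange(n)
cov = np.exp(-np.abs(x[:, None] - x[None, :]) / length_scale)
L = np.linalg.cholesky(cov + 1e-9 * np.eye(n))

# Stand-in stochastic calibration for one satellite value: probability of rain
# and a lognormal amount model (placeholder parameters, not from the paper).
p_rain, mu, sigma = 0.4, 1.0, 0.5
z_thresh = NormalDist().inv_cdf(1.0 - p_rain)    # occurrence threshold

def ensemble_member():
    """One spatially correlated, intermittent rainfall field (mm)."""
    z = L @ rng.standard_normal(n)               # correlated Gaussian field
    amount = np.exp(mu + sigma * z)              # amount where it rains
    return np.where(z > z_thresh, amount, 0.0)   # dry below the threshold

members = np.stack([ensemble_member() for _ in range(n_ens)])
areal_mean = members.mean(axis=1)    # aggregate each member to a larger scale
spread = areal_mean.std()            # uncertainty at the aggregated scale
```

Because each member honours both the calibration pdf and the spatial correlation, the spread of the areally aggregated values is a usable uncertainty estimate at the larger scale.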
Abstract:
Williams syndrome (WS) is a rare genetic disorder. At a cognitive level, this population displays poor visuo-spatial cognition when compared to verbal ability. Within the visuo-spatial domain, it is now accepted that individuals with WS are able to perceive both local and global aspects of an image, albeit at a low level. The present study examines the manner in which local elements are grouped into a global whole in WS. Fifteen individuals with WS and 15 typically developing controls, matched for non-verbal ability, were presented with a matrix of local elements and asked whether these elements were perceptually grouped horizontally or vertically. The WS group was at the same level as the control group when grouping by luminance, closure, and alignment. However, their ability to group by shape, orientation and proximity was significantly poorer than that of controls. This unusual profile of grouping abilities in WS suggests that these individuals do not form a global percept in a typical manner.
Abstract:
A new database of weather and circulation type catalogs is presented, comprising 17 automated classification methods and five subjective classifications. It was compiled within COST Action 733 "Harmonisation and Applications of Weather Type Classifications for European regions" in order to evaluate different methods for weather and circulation type classification. This paper gives a technical description of the included methods using a new conceptual categorization for classification methods reflecting the strategy for the definition of types. Methods using predefined types include manual and threshold-based classifications, while methods producing types derived from the input data include those based on eigenvector techniques, leader algorithms and optimization algorithms. In order to allow direct comparisons between the methods, the circulation input data and the methods' configuration were harmonized for producing a subset of standard catalogs of the automated methods. The harmonization includes the data source, the climatic parameters used, the classification period as well as the spatial domain and the number of types. Frequency-based characteristics of the resulting catalogs are presented, including variation of class sizes, persistence, seasonal and inter-annual variability as well as trends of the annual frequency time series. The methodological concept of the classifications is partly reflected by these properties of the resulting catalogs. It is shown that, compared to automated methods, the types of subjective classifications show higher persistence, inter-annual variation and long-term trends. Among the automated classifications, optimization methods show a tendency for longer persistence and higher seasonal variation. However, it is also concluded that the distance metric used and the data preprocessing play at least as important a role for the properties of the resulting classification as the algorithm used for type definition and assignment.
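Frequency-based catalog characteristics such as class sizes and persistence are straightforward to compute from a type catalog; a minimal sketch on an invented daily catalog (the type labels and values are not from the database):

```python
from collections import Counter
from itertools import groupby

# Invented daily circulation-type catalog (one type label per day).
catalog = ["W", "W", "W", "NW", "NW", "C", "W", "W", "C", "C", "C", "C"]

# Class sizes: relative frequency of each type.
freq = {t: c / len(catalog) for t, c in Counter(catalog).items()}

# Persistence: mean run length, i.e. the average number of consecutive
# days on which the same type persists.
runs = [len(list(g)) for _, g in groupby(catalog)]
persistence = sum(runs) / len(runs)
```

Here freq["W"] is 5/12 and the mean run length is 2.4 days; on the abstract's findings, subjective catalogs would tend to show longer runs than automated ones.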
Abstract:
The realistic representation of rainfall on the local scale in climate models remains a key challenge. Realism encompasses the full spatial and temporal structure of rainfall, and is a key indicator of model skill in representing the underlying processes. In particular, if rainfall is more realistic in a climate model, there is greater confidence in its projections of future change. In this study, the realism of rainfall in a very high-resolution (1.5 km) regional climate model (RCM) is compared to a coarser-resolution 12-km RCM. This is the first time a convection-permitting model has been run for an extended period (1989–2008) over a region of the United Kingdom, allowing the characteristics of rainfall to be evaluated in a climatological sense. In particular, the duration and spatial extent of hourly rainfall across the southern United Kingdom is examined, with a key focus on heavy rainfall. Rainfall in the 1.5-km RCM is found to be much more realistic than in the 12-km RCM. In the 12-km RCM, heavy rain events are not heavy enough, and tend to be too persistent and widespread. While the 1.5-km model does have a tendency for heavy rain to be too intense, it still gives a much better representation of its duration and spatial extent. Long-standing problems in climate models, such as the tendency for too much persistent light rain and errors in the diurnal cycle, are also considerably reduced in the 1.5-km RCM. Biases in the 12-km RCM appear to be linked to deficiencies in the representation of convection.
Abstract:
Global NDVI data are routinely derived from the AVHRR, SPOT-VGT, and MODIS/Terra earth observation records for a range of applications from terrestrial vegetation monitoring to climate change modeling. This has led to a substantial interest in the harmonization of multisensor records. Most evaluations of the internal consistency and continuity of global multisensor NDVI products have focused on time-series harmonization in the spectral domain, often neglecting the spatial domain. We fill this void by applying variogram modeling (a) to evaluate the differences in spatial variability between 8-km AVHRR, 1-km SPOT-VGT, and 1-km, 500-m, and 250-m MODIS NDVI products over eight EOS (Earth Observing System) validation sites, and (b) to characterize the decay of spatial variability as a function of pixel size (i.e. data regularization) for spatially aggregated Landsat ETM+ NDVI products and a real multisensor dataset. First, we demonstrate that the conjunctive analysis of two variogram properties – the sill and the mean length scale metric – provides a robust assessment of the differences in spatial variability between multiscale NDVI products that are due to spatial (nominal pixel size, point spread function, and view angle) and non-spatial (sensor calibration, cloud clearing, atmospheric corrections, and length of multi-day compositing period) factors. Next, we show that as the nominal pixel size increases, the decay of spatial information content follows a logarithmic relationship with a stronger fit for the spatially aggregated NDVI products (R2 = 0.9321) than for the native-resolution AVHRR, SPOT-VGT, and MODIS NDVI products (R2 = 0.5064). This relationship serves as a reference for evaluating the differences in spatial variability and length scales in multiscale datasets at native or aggregated spatial resolutions.
The outcomes of this study suggest that multisensor NDVI records cannot be integrated into a long-term data record without proper consideration of all factors affecting their spatial consistency. Hence, we propose an approach for selecting the spatial resolution, at which differences in spatial variability between NDVI products from multiple sensors are minimized. This approach provides practical guidance for the harmonization of long-term multisensor datasets.
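A minimal empirical-variogram sketch illustrating the two properties used above, the sill and a length scale, on a synthetic 1-D transect (the field, its correlation length, and the 95%-of-sill length-scale definition are assumptions for illustration, not the authors' estimator):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 1-D NDVI-like transect with a known correlation length.
n, true_scale = 400, 15.0
x = np.arange(n, dtype=float)
cov = np.exp(-np.abs(x[:, None] - x[None, :]) / true_scale)
field = np.linalg.cholesky(cov + 1e-9 * np.eye(n)) @ rng.standard_normal(n)

def semivariogram(values, max_lag):
    """Empirical semivariance: gamma(h) = 0.5 * mean((z(x+h) - z(x))**2)."""
    lags = np.arange(1, max_lag + 1)
    gamma = np.array([0.5 * np.mean((values[h:] - values[:-h]) ** 2)
                      for h in lags])
    return lags, gamma

lags, gamma = semivariogram(field, 60)
sill = field.var()                       # plateau of the variogram
# One possible length-scale metric: first lag reaching 95% of the sill.
length_scale = int(lags[np.argmax(gamma >= 0.95 * sill)])

# Regularization: aggregating to coarser pixels removes spatial variability.
coarse = field.reshape(-1, 4).mean(axis=1)   # 4-pixel block averages
```

Comparing sill and length scale between products at matched pixel sizes is the kind of diagnostic the abstract's harmonization approach relies on.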
Abstract:
Optimal estimation (OE) is applied as a technique for retrieving sea surface temperature (SST) from thermal imagery obtained by the Spinning Enhanced Visible and Infra-Red Imager (SEVIRI) on Meteosat 9. OE requires simulation of observations as part of the retrieval process, and this is done here using numerical weather prediction fields and a fast radiative transfer model. Bias correction of the simulated brightness temperatures (BTs) is found to be a necessary step before retrieval, and is achieved by filtered averaging of simulations minus observations over a time period of 20 days and spatial scale of 2.5° in latitude and longitude. Throughout this study, BT observations are clear-sky averages over cells of size 0.5° in latitude and longitude. Results for the OE SST are compared to results using a traditional non-linear retrieval algorithm (“NLSST”), both validated against a set of 30108 night-time matches with drifting buoy observations. For the OE SST the mean difference with respect to drifter SSTs is − 0.01 K and the standard deviation is 0.47 K, compared to − 0.38 K and 0.70 K respectively for the NLSST algorithm. Perhaps more importantly, systematic biases in NLSST with respect to geographical location, atmospheric water vapour and satellite zenith angle are greatly reduced for the OE SST. However, the OE SST is calculated to have a lower sensitivity of retrieved SST to true SST variations than the NLSST. This feature would be a disadvantage for observing SST fronts and diurnal variability, and raises questions as to how best to exploit OE techniques at SEVIRI's full spatial resolution.
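For a linear forward model, the OE retrieval step is the standard formula x̂ = x_a + S_a Kᵀ (K S_a Kᵀ + S_e)⁻¹ (y − K x_a). The sketch below uses invented Jacobian and covariance numbers, not SEVIRI values, and also computes the averaging kernel whose leading diagonal element is the retrieval sensitivity discussed above:

```python
import numpy as np

# State x = [SST (K), water vapour]; y = two channel brightness temperatures.
# All numbers are invented for illustration.
K = np.array([[0.9, -0.3],          # assumed Jacobian dBT/dx
              [0.8, -0.5]])
x_a = np.array([290.0, 30.0])       # prior (background) state
S_a = np.diag([4.0, 25.0])          # prior error covariance
S_e = np.diag([0.04, 0.04])         # observation error covariance

x_true = np.array([291.5, 28.0])
y = K @ x_true                      # noise-free synthetic observation

# OE gain, retrieved state, and averaging kernel.
G = S_a @ K.T @ np.linalg.inv(K @ S_a @ K.T + S_e)
x_hat = x_a + G @ (y - K @ x_a)
A = G @ K                           # A[0, 0] = sensitivity of retrieved SST
```

The fact that A[0, 0] falls below 1 is the toy analogue of the reduced sensitivity of retrieved SST to true SST variations noted in the abstract.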
Abstract:
In the present paper we study the approximation of functions with bounded mixed derivatives by sparse tensor product polynomials in positive order tensor product Sobolev spaces. We introduce a new sparse polynomial approximation operator which exhibits optimal convergence properties in L2 and tensorized H1 simultaneously on a standard k-dimensional cube. In the special case k=2 the suggested approximation operator is also optimal in L2 and tensorized H1 (without essential boundary conditions). This allows us to construct an optimal sparse p-version FEM with sparse piecewise continuous polynomial splines, reducing the number of unknowns from O(p^2), needed for the full tensor product computation, to O(p log p), required for the suggested sparse technique, while preserving the same optimal convergence rate in terms of p. We apply this result to an elliptic differential equation and an elliptic integral equation with random loading and compute the covariances of the solutions with O(p log p) unknowns. Several numerical examples support the theoretical estimates.
Abstract:
The European Centre for Medium-range Weather Forecast (ECMWF) provides an aerosol re-analysis starting from year 2003 for the Monitoring Atmospheric Composition and Climate (MACC) project. The re-analysis assimilates total aerosol optical depth retrieved by the Moderate Resolution Imaging Spectroradiometer (MODIS) to correct for model departures from observed aerosols. The reanalysis therefore combines satellite retrievals with the full spatial coverage of a numerical model. Re-analysed products are used here to estimate the shortwave direct and first indirect radiative forcing of anthropogenic aerosols over the period 2003–2010, using methods previously applied to satellite retrievals of aerosols and clouds. The best estimate of globally-averaged, all-sky direct radiative forcing is −0.7±0.3Wm−2. The standard deviation is obtained by a Monte-Carlo analysis of uncertainties, which accounts for uncertainties in the aerosol anthropogenic fraction, aerosol absorption, and cloudy-sky effects. Further accounting for differences between the present-day natural and pre-industrial aerosols provides a direct radiative forcing estimate of −0.4±0.3Wm−2. The best estimate of globally-averaged, all-sky first indirect radiative forcing is −0.6±0.4Wm−2. Its standard deviation accounts for uncertainties in the aerosol anthropogenic fraction, and in cloud albedo and cloud droplet number concentration susceptibilities to aerosol changes. The distribution of first indirect radiative forcing is asymmetric and is bounded by −0.1 and −2.0Wm−2. In order to decrease uncertainty ranges, better observational constraints on aerosol absorption and sensitivity of cloud droplet number concentrations to aerosol changes are required.
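The Monte-Carlo treatment of uncertainties can be illustrated with a toy error model: sample each uncertain factor from an assumed distribution and read the forcing's best estimate and standard deviation off the resulting ensemble (all distributions and values below are invented, not the MACC ones):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Invented toy model: forcing = -(efficiency * fraction * AOD * cloud factor).
efficiency = rng.normal(25.0, 5.0, n)   # forcing per unit AOD (W m^-2)
anth_frac = rng.normal(0.35, 0.05, n)   # anthropogenic fraction of AOD
aod = rng.normal(0.12, 0.01, n)         # aerosol optical depth
cloud_factor = rng.normal(0.6, 0.1, n)  # damping of the effect in cloudy sky

forcing = -(efficiency * anth_frac * aod * cloud_factor)
best_estimate = forcing.mean()          # toy analogue of the central value
uncertainty = forcing.std()             # toy analogue of the quoted spread
```

Note how the product of several modestly uncertain factors produces a skewed, fairly wide forcing distribution, which is why the abstract reports asymmetric bounds for the indirect forcing.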
Abstract:
In spite of trying to understand processes in the same spatial domain, the catchment hydrology and water quality scientific communities are relatively disconnected, and so are their respective models. This is emphasized by an inadequate representation of transport processes in both catchment-scale hydrological and water quality models. While many hydrological models at the catchment scale only account for pressure propagation and not for mass transfer, catchment-scale water quality models are typically limited by overly simplistic representations of flow processes. With the objective of raising awareness of this issue and outlining potential ways forward, we provide a non-technical overview of (1) the importance of hydrology-controlled transport through catchment systems as the link between hydrology and water quality; (2) the limitations of current generation catchment-scale hydrological and water quality models; (3) the concept of transit times as tools to quantify transport and (4) the benefits of transit time based formulations of solute transport for catchment-scale hydrological and water quality models. There is emerging evidence that an explicit formulation of transport processes, based on the concept of transit times, has the potential to improve the understanding of the integrated system dynamics of catchments and to provide a stronger link between catchment-scale hydrological and water quality models.
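The transit-time concept in points (3) and (4) amounts to treating the outflow solute signal as the inflow signal convolved with a transit time distribution (TTD); a minimal sketch with an assumed exponential TTD (the well-mixed-store case):

```python
import numpy as np

dt = 1.0                                  # daily time step
t = np.arange(0.0, 365.0, dt)
tau = 30.0                                # mean transit time in days (assumed)
ttd = np.exp(-t / tau) / tau              # exponential TTD (well-mixed store)
ttd /= ttd.sum() * dt                     # normalise to unit mass

c_in = np.zeros_like(t)
c_in[10] = 1.0                            # unit solute pulse entering on day 10

# Outflow concentration: convolution of the input signal with the TTD.
c_out = np.convolve(c_in, ttd)[:t.size] * dt
peak_day = float(t[np.argmax(c_out)])     # pulse emerges damped and spread out
```

The same convolution, with a TTD inferred from tracer data, is what lets a water quality model inherit realistic transport from the hydrology rather than assuming instantaneous mixing.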
Abstract:
Approximate Bayesian computation (ABC) is a highly flexible technique that allows the estimation of parameters under demographic models that are too complex to be handled by full-likelihood methods. We assess the utility of this method to estimate the parameters of range expansion in a two-dimensional stepping-stone model, using samples from either a single deme or multiple demes. A minor modification to the ABC procedure is introduced, which leads to an improvement in the accuracy of estimation. The method is then used to estimate the expansion time and migration rates for five natural common vole populations in Switzerland typed for a sex-linked marker and a nuclear marker. Estimates based on both markers suggest that expansion occurred < 10,000 years ago, after the most recent glaciation, and that migration rates are strongly male biased.
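The core of rejection ABC, simulating under parameters drawn from the prior and keeping those whose summary statistics fall within a tolerance of the observed ones, can be shown on a toy problem (a normal mean standing in for the demographic parameters; all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy stand-in for an intractable model: data are N(theta, 1); we "forget"
# the likelihood and use rejection ABC with the sample mean as summary.
observed = rng.normal(2.0, 1.0, 50)
s_obs = observed.mean()

n_sims, eps = 50_000, 0.05
theta = rng.uniform(-5.0, 5.0, n_sims)          # draws from a flat prior
sims = rng.normal(theta[:, None], 1.0, (n_sims, 50))
s_sim = sims.mean(axis=1)                       # summary of each simulation

accepted = theta[np.abs(s_sim - s_obs) < eps]   # keep near-matching parameters
posterior_mean = accepted.mean()                # approximate posterior mean
```

The paper's "minor modification" to the ABC procedure is not reproduced here; this is the plain rejection scheme that such refinements improve upon.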
Abstract:
The climate belongs to the class of non-equilibrium forced and dissipative systems, for which most results of quasi-equilibrium statistical mechanics, including the fluctuation-dissipation theorem, do not apply. In this paper we show for the first time how the Ruelle linear response theory, developed for studying rigorously the impact of perturbations on general observables of non-equilibrium statistical mechanical systems, can be applied with great success to analyze the climatic response to general forcings. The crucial value of the Ruelle theory lies in the fact that it allows one to compute the response of the system in terms of expectation values of explicit and computable functions of the phase space averaged over the invariant measure of the unperturbed state. We choose as test bed a classical version of the Lorenz 96 model, which, in spite of its simplicity, has a well-recognized prototypical value as it is a spatially extended one-dimensional model and presents the basic ingredients, such as dissipation, advection and the presence of an external forcing, of the actual atmosphere. We recapitulate the main aspects of the general response theory and propose some new general results. We then analyze the frequency dependence of the response of both local and global observables to perturbations having localized as well as global spatial patterns. We derive analytically several properties of the corresponding susceptibilities, such as asymptotic behavior, validity of Kramers-Kronig relations, and sum rules, whose main ingredient is the causality principle. We show that all the coefficients of the leading asymptotic expansions as well as the integral constraints can be written as linear functions of parameters that describe the unperturbed properties of the system, such as its average energy.
Some newly obtained empirical closure equations for such parameters allow us to define such properties as explicit functions of the unperturbed forcing parameter alone for a general class of chaotic Lorenz 96 models. We then verify the theoretical predictions from the outputs of the simulations up to a high degree of precision. The theory is used to explain differences in the response of local and global observables, to define the intensive properties of the system, which do not depend on the spatial resolution of the Lorenz 96 model, and to generalize the concept of climate sensitivity to all time scales. We also show how to reconstruct the linear Green function, which maps perturbations of general time patterns into changes in the expectation value of the considered observable for finite as well as infinite time. Finally, we propose a simple yet general methodology to study general climate change problems on virtually any time scale by resorting to only well-selected simulations, and by taking full advantage of ensemble methods. The specific case of globally averaged surface temperature response to a general pattern of change of the CO2 concentration is discussed. We believe that the proposed approach may constitute a mathematically rigorous and practically very effective way to approach the problem of climate sensitivity, climate prediction, and climate change from a radically new perspective.
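The central object above, the linear Green function mapping a forcing time pattern into the response of an observable, can be illustrated numerically: with an assumed causal, exponentially decaying G, the response to a step forcing converges to the time integral of G, the analogue of the long-time climate sensitivity:

```python
import numpy as np

# Assumed causal Green function with decay time 5 (illustrative, not from
# the Lorenz 96 computations).
dt = 0.1
t = np.arange(0.0, 50.0, dt)
G = np.exp(-t / 5.0)                       # G(t) for t >= 0; zero before

# Response to a step forcing f(t) = 1 switched on at t = 0:
# <dA>(t) = integral of G(t - s) f(s) ds, here a discrete convolution.
f_step = np.ones_like(t)
response = np.convolve(G, f_step)[:t.size] * dt

# The long-time limit approaches the integral of G (about 5 here), i.e. the
# steady-state sensitivity; shorter times give the transient response.
long_time = response[-1]
```

Replacing f_step by any other time pattern (for example a CO2 concentration ramp) gives the corresponding response at all time scales from the same G, which is the practical appeal of the approach.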
Abstract:
We survey observations of the radial magnetic field in the heliosphere as a function of position, sunspot number, and sunspot cycle phase. We show that most of the differences between pairs of simultaneous observations, normalized using the square of the heliocentric distance and averaged over solar rotations, are consistent with the kinematic "flux excess" effect whereby the radial component of the frozen-in heliospheric field is increased by longitudinal solar wind speed structure. In particular, the survey shows that, as expected, the flux excess effect at high latitudes is almost completely absent during sunspot minimum but is almost the same as within the streamer belt at sunspot maximum. We study the uncertainty inherent in the use of the Ulysses result that the radial field is independent of heliographic latitude in the computation of the total open solar flux: we show that after the kinematic correction for the excess flux effect has been made it causes errors that are smaller than 4.5%, with a most likely value of 2.5%. The importance of this result for understanding temporal evolution of the open solar flux is reviewed.
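The r² normalisation underlying such comparisons is simple: scale the observed radial field to its 1 AU equivalent and multiply by the area of the sphere to get the total unsigned open flux (the numbers below are illustrative, not survey values):

```python
import math

# Illustrative numbers, not survey data.
AU = 1.496e11                       # astronomical unit (m)
r = 2.5 * AU                        # heliocentric distance of the observation
br = 0.48e-9                        # observed radial field magnitude (T)

br_1au = br * (r / AU) ** 2         # r^2-normalised field at 1 AU (3 nT here)
# Ulysses result: |Br| is nearly latitude-independent, so a single normalised
# value yields the total unsigned open solar flux over the sphere.
open_flux = 4 * math.pi * AU**2 * abs(br_1au)   # in Wb
```

The kinematic "flux excess" correction discussed in the abstract would be applied to br before this step; here the raw value is used for simplicity.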
Abstract:
Two-dimensional flood inundation modelling is a widely used tool to aid flood risk management. In urban areas, where asset value and population density are greatest, the model spatial resolution required to represent flows through a typical street network (i.e. < 10 m) often results in impractical computational cost at the whole-city scale. Explicit diffusive storage cell models become very inefficient at such high resolutions, relative to shallow water models, because the stable time step in such schemes scales as a quadratic of resolution. This paper presents the calibration and evaluation of a recently developed new formulation of the LISFLOOD-FP model, where stability is controlled by the Courant–Friedrichs–Lewy condition for the shallow water equations, such that the stable time step instead scales linearly with resolution. The case study used is based on observations during the summer 2007 floods in Tewkesbury, UK. Aerial photography is available for model evaluation on three separate days from the 24th to the 31st of July. The model covered a 3.6 km by 2 km domain and was calibrated using gauge data from high flows during the previous month. The new formulation was benchmarked against the original version of the model at 20 m and 40 m resolutions, demonstrating equally accurate performance given the available validation data but with a 67x faster computation time. The July event was then simulated at the 2 m resolution of the available airborne LiDAR DEM. This resulted in a significantly more accurate simulation of the drying dynamics compared to that simulated by the coarse-resolution models, although estimates of peak inundation depth were similar.
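The scaling argument above is easy to make concrete: halving the grid spacing halves a CFL-limited shallow-water time step but quarters a diffusive one (the coefficients below are placeholders, not LISFLOOD-FP values):

```python
import math

g, h = 9.81, 2.0                        # gravity (m s^-2), typical depth (m)

def dt_cfl(dx, courant=0.7):
    """Shallow-water stable step: scales linearly with dx."""
    return courant * dx / math.sqrt(g * h)

def dt_diffusive(dx, k=0.01):
    """Diffusive storage-cell stable step: scales with dx squared (toy k)."""
    return k * dx ** 2

# Refining from 20 m to 10 m halves one step but quarters the other.
ratio_cfl = dt_cfl(20.0) / dt_cfl(10.0)                    # 2.0
ratio_diffusive = dt_diffusive(20.0) / dt_diffusive(10.0)  # 4.0
```

At street-network resolutions of 2 m this quadratic-versus-linear gap compounds, which is why the CFL-based formulation remains practical where the diffusive scheme does not.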
Abstract:
In the 1990s the Message Passing Interface Forum defined MPI bindings for Fortran, C, and C++. With the success of MPI these relatively conservative languages have continued to dominate in the parallel computing community. There are compelling arguments in favour of more modern languages like Java. These include portability, better runtime error checking, modularity, and multi-threading. But these arguments have not converted many HPC programmers, perhaps due to the scarcity of full-scale scientific Java codes, and the lack of evidence for performance competitive with C or Fortran. This paper tries to redress this situation by porting two scientific applications to Java. Both of these applications are parallelized using our thread-safe Java messaging system, MPJ Express. The first application is the Gadget-2 code, which is a massively parallel structure formation code for cosmological simulations. The second application uses the finite-difference time-domain method for simulations in the area of computational electromagnetics. We evaluate and compare the performance of the Java and C versions of these two scientific applications, and demonstrate that the Java codes can achieve performance comparable with legacy applications written in conventional HPC languages.
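The point-to-point semantics that MPJ Express supplies to Java codes can be mimicked in a few lines with threads and queues, just to show the pattern (this is an illustration of blocking send/receive between two "ranks", not the MPJ Express API):

```python
import queue
import threading

# One queue per "rank" plays the role of the MPI message buffer.
inbox = {0: queue.Queue(), 1: queue.Queue()}

def send(dest, msg):
    inbox[dest].put(msg)

def recv(rank):
    return inbox[rank].get()          # blocks, like a blocking Recv

results = {}

def rank0():
    send(1, [1, 2, 3, 4])             # distribute work to rank 1
    results["total"] = recv(0)        # wait for the reduced result

def rank1():
    data = recv(1)
    send(0, sum(data))                # local reduction, send back

t0 = threading.Thread(target=rank0)
t1 = threading.Thread(target=rank1)
t0.start(); t1.start()
t0.join(); t1.join()
print(results["total"])               # -> 10
```

Real MPI implementations run the ranks as separate processes across nodes; the blocking send/receive discipline shown here is the part the two ported applications rely on.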