931 results for visualisation of acoustic data


Relevance:

100.00%

Publisher:

Abstract:

We describe a model-data fusion (MDF) inter-comparison project (REFLEX), which compared various algorithms for estimating carbon (C) model parameters consistent with both measured carbon fluxes and states and a simple C model. Participants were provided with the model and with both synthetic net ecosystem exchange (NEE) of CO2 and leaf area index (LAI) data, generated from the model with added noise, and observed NEE and LAI data from two eddy covariance sites. Participants endeavoured to estimate model parameters and states consistent with the model for all cases over the two years for which data were provided, and generate predictions for one additional year without observations. Nine participants contributed results using Metropolis algorithms, Kalman filters and a genetic algorithm. For the synthetic data case, parameter estimates compared well with the true values. The results of the analyses indicated that parameters linked directly to gross primary production (GPP) and ecosystem respiration, such as those related to foliage allocation and turnover, or temperature sensitivity of heterotrophic respiration, were best constrained and characterised. Poorly estimated parameters were those related to the allocation to and turnover of fine root/wood pools. Estimates of confidence intervals varied among algorithms, but several algorithms successfully located the true values of annual fluxes from synthetic experiments within relatively narrow 90% confidence intervals, achieving a >80% success rate and mean NEE confidence intervals <110 gC m−2 year−1. Annual C flux estimates generated by participants generally agreed with gap-filling approaches using half-hourly data. The estimation of ecosystem respiration and GPP through MDF agreed well with outputs from partitioning studies using half-hourly data. Confidence limits on annual NEE increased by an average of 88% in the prediction year compared to the previous year, when data were available. Confidence intervals on annual NEE increased by 30% when observed data were used instead of synthetic data, reflecting and quantifying the addition of model error. Finally, our analyses indicated that incorporating additional constraints, using data on C pools (wood, soil and fine roots), would help to reduce uncertainties for model parameters poorly served by eddy covariance data.
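
As a concrete illustration of the Metropolis-type estimation used by several participants, the sketch below fits two parameters of a toy carbon model to noisy synthetic NEE with a random-walk Metropolis sampler and reports a posterior 90% interval. The toy model, forcing, priors and noise level are illustrative assumptions, not the REFLEX model or protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "carbon model": NEE = respiration - GPP, with GPP driven by light and
# respiration by temperature. Parameters and forcing are illustrative only.
def nee_model(params, light, temp):
    gpp_max, q10 = params
    gpp = gpp_max * light / (light + 200.0)        # saturating light response
    resp = 1.5 * q10 ** ((temp - 10.0) / 10.0)     # Q10 temperature response
    return resp - gpp

days = np.arange(365)
light = 400 * np.sin(np.pi * days / 365)
temp = 10 + 8 * np.sin(2 * np.pi * (days - 120) / 365)

true_params = np.array([12.0, 2.0])
obs = nee_model(true_params, light, temp) + rng.normal(0, 0.5, days.size)

def log_post(params):
    if np.any(params <= 0):                        # flat prior on positives
        return -np.inf
    resid = obs - nee_model(params, light, temp)
    return -0.5 * np.sum((resid / 0.5) ** 2)       # Gaussian likelihood

# Random-walk Metropolis: propose, then accept with probability
# min(1, posterior ratio).
chain = np.empty((20000, 2))
current = np.array([5.0, 1.2])
lp = log_post(current)
for i in range(chain.shape[0]):
    proposal = current + rng.normal(0, [0.2, 0.05])
    lp_new = log_post(proposal)
    if np.log(rng.uniform()) < lp_new - lp:
        current, lp = proposal, lp_new
    chain[i] = current

burned = chain[5000:]                              # discard burn-in
print("posterior mean:", burned.mean(axis=0))
print("90% CI for gpp_max:", np.percentile(burned[:, 0], [5, 95]))
```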

Relevance:

100.00%

Publisher:

Abstract:

The collection of wind speed time series by means of digital data loggers occurs in many domains, including civil engineering, environmental sciences and wind turbine technology. Since averaging intervals are often significantly larger than typical system time scales, the information lost has to be recovered in order to reconstruct the true dynamics of the system. In the present work we present a simple algorithm capable of generating a real-time wind speed time series from data logger records containing the average, maximum, and minimum values of the wind speed in a fixed interval, as well as the standard deviation. The signal is generated from a generalized random Fourier series. The spectrum can be matched to any desired theoretical or measured frequency distribution. Extreme values are specified through a postprocessing step based on the concept of constrained simulation. Applications of the algorithm to 10-min wind speed records logged at a test site at 60 m height above the ground show that the recorded 10-min values can be reproduced by the simulated time series to a high degree of accuracy.
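
A minimal sketch of the Fourier-synthesis step, assuming a Kaimal-like spectral shape: random phases are imposed on the spectrum and the result is rescaled so that the sample mean and standard deviation reproduce the logged 10-min statistics. Matching the logged maximum and minimum would require the additional constrained-simulation postprocessing described above.

```python
import numpy as np

rng = np.random.default_rng(1)

def synth_wind(mean, std, n=600, dt=1.0):
    """Generate an n-sample gust series whose sample mean and standard
    deviation match the logged 10-min statistics. Random Fourier series
    with random phases; the spectral shape below is an assumption, not
    the paper's choice."""
    freqs = np.fft.rfftfreq(n, dt)[1:]             # skip the mean (f = 0)
    spectrum = freqs / (1.0 + 50.0 * freqs) ** (5.0 / 3.0)  # Kaimal-like shape
    amp = np.sqrt(spectrum)
    phases = rng.uniform(0, 2 * np.pi, freqs.size)
    coeffs = np.zeros(freqs.size + 1, dtype=complex)
    coeffs[1:] = amp * np.exp(1j * phases)         # random-phase Fourier series
    u = np.fft.irfft(coeffs, n)
    u = (u - u.mean()) / u.std()                   # enforce logged statistics
    return mean + std * u

series = synth_wind(mean=8.3, std=1.4)
print(series.mean(), series.std())   # matches the logged values to rounding error
```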

Relevance:

100.00%

Publisher:

Abstract:

This dissertation deals with aspects of sequential data assimilation (in particular ensemble Kalman filtering) and numerical weather forecasting. In the first part, the recently formulated Ensemble Kalman-Bucy (EnKBF) filter is revisited. It is shown that the previously used numerical integration scheme fails when the magnitude of the background error covariance grows beyond that of the observational error covariance in the forecast window. Therefore, we present a suitable integration scheme that handles the stiffening of the differential equations involved and does not add further computational expense. Moreover, a transform-based alternative to the EnKBF is developed: under this scheme, the operations are performed in the ensemble space instead of in the state space. Advantages of this formulation are explained. For the first time, the EnKBF is implemented in an atmospheric model. The second part of this work deals with ensemble clustering, a phenomenon that arises when performing data assimilation using deterministic ensemble square root filters (EnSRFs) in highly nonlinear forecast models. Namely, an M-member ensemble detaches into an outlier and a cluster of M-1 members. Previous works may suggest that this issue represents a failure of EnSRFs; this work dispels that notion. It is shown that ensemble clustering can also be reversed by nonlinear processes, in particular the alternation between nonlinear expansion and compression of the ensemble in different regions of the attractor. Some EnSRFs that use random rotations have been developed to overcome this issue; these formulations are analyzed and their advantages and disadvantages with respect to common EnSRFs are discussed. The third and last part contains the implementation of the Robert-Asselin-Williams (RAW) filter in an atmospheric model. The RAW filter is an improvement to the widely used Robert-Asselin filter that successfully suppresses spurious computational waves while avoiding any distortion in the mean value of the function. Using statistical significance tests at both the local and field level, it is shown that the climatology of the SPEEDY model is not modified by the changed time-stepping scheme; hence, no retuning of the parameterizations is required. It is also found that the accuracy of medium-term forecasts is increased by using the RAW filter.
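
The RAW filter itself is compact enough to sketch. Below, a leapfrog integration of a damped oscillation applies the Williams modification: the Robert-Asselin displacement is split between time levels n and n+1, and alpha = 1 recovers the classical filter. The test equation and coefficient values are illustrative, not the SPEEDY configuration.

```python
import numpy as np

def leapfrog_raw(f, x0, dt, nsteps, nu=0.2, alpha=0.53):
    """Leapfrog integration with the Robert-Asselin-Williams (RAW) filter.
    The displacement d is split between time levels n and n+1, which is
    what preserves the three-level mean; alpha slightly above 0.5 is the
    usual choice, and alpha = 1 gives the classical Robert-Asselin filter."""
    x_prev = x0
    x_curr = x0 + dt * f(x0)                       # Euler start-up step
    out = [x_prev, x_curr]
    for _ in range(nsteps - 1):
        x_next = x_prev + 2 * dt * f(x_curr)       # leapfrog step
        d = 0.5 * nu * (x_prev - 2 * x_curr + x_next)
        x_prev = x_curr + alpha * d                # filtered level n
        x_curr = x_next + (alpha - 1.0) * d        # filtered level n+1
        out.append(x_curr)
    return np.array(out)

# Damped oscillation test problem (illustrative); the filter suppresses the
# spurious computational mode that plain leapfrog excites for such equations.
f = lambda x: 1j * x - 0.05 * x
traj = leapfrog_raw(f, 1.0 + 0j, dt=0.1, nsteps=500)
print(abs(traj[-1]))
```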

Relevance:

100.00%

Publisher:

Abstract:

Data assimilation refers to the problem of finding trajectories of a prescribed dynamical model in such a way that the output of the model (usually some function of the model states) follows a given time series of observations. Typically though, these two requirements cannot both be met at the same time: tracking the observations is not possible without the trajectory deviating from the proposed model equations, while adherence to the model requires deviations from the observations. Thus, data assimilation faces a trade-off. In this contribution, the sensitivity of the data assimilation with respect to perturbations in the observations is identified as the parameter which controls the trade-off. A relation between the sensitivity and the out-of-sample error is established, which allows the latter to be calculated under operational conditions. A minimum out-of-sample error is proposed as a criterion to set an appropriate sensitivity and to settle this trade-off. Two approaches to data assimilation are considered, namely variational data assimilation and Newtonian nudging, also known as synchronization. Numerical examples demonstrate the feasibility of the approach.
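
A minimal sketch of the nudging side of this trade-off, with the Lorenz-63 system standing in for the forecast model: the nudging gain plays the role of the sensitivity, and sweeping it shows the tracking error first falling and then saturating near the observation noise. The model, gain values and noise level are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

dt, nsteps = 0.01, 2000

# Truth run and noisy observations of the x component only
truth = np.empty((nsteps, 3))
truth[0] = (1.0, 1.0, 1.05)
for n in range(nsteps - 1):
    truth[n + 1] = truth[n] + dt * lorenz63(truth[n])
obs = truth[:, 0] + rng.normal(0, 0.5, nsteps)

def nudged_run(gain):
    """Newtonian nudging: add gain * (obs - model) to the observed component.
    The gain sets the sensitivity to the observations, i.e. the trade-off."""
    x = np.array([0.0, 2.0, 20.0])                 # wrong initial condition
    err = 0.0
    for n in range(nsteps):
        err += (x[0] - truth[n, 0]) ** 2
        tend = lorenz63(x)
        tend[0] += gain * (obs[n] - x[0])          # nudging term
        x = x + dt * tend
    return np.sqrt(err / nsteps)

for gain in (0.0, 1.0, 10.0, 100.0):
    print(f"gain={gain:6.1f}  RMS error in x: {nudged_run(gain):.3f}")
```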

Relevance:

100.00%

Publisher:

Abstract:

1. Nutrient concentrations (particularly N and P) determine the extent to which water bodies are or may become eutrophic. Direct determination of nutrient content on a wide scale is labour intensive, but the main sources of N and P are well known. This paper describes and tests an export coefficient model for prediction of total N and total P from: (i) land use, stock headage and human population; (ii) the export rates of N and P from these sources; and (iii) the river discharge. Such a model might be used to forecast the effects of changes in land use in the future and to hindcast past water quality to establish comparative or baseline states for the monitoring of change.

2. The model has been calibrated against observed data for 1988 and validated against sets of observed data for a sequence of earlier years in ten British catchments ranging from uplands through rolling, fertile lowlands to the flat topography of East Anglia.

3. The model predicted total N and total P concentrations with high precision (95% of the variance in observed data explained). It has been used in two forms: the first on a specific catchment basis; the second for a larger natural region which contains the catchment, with the assumption that all catchments within that region will be similar. Both forms gave similar results, with little loss of precision in the latter case. This implies that it will be possible to describe the overall pattern of nutrient export in the UK with only a fraction of the effort needed to carry out the calculations for each individual water body.

4. Comparison of land use, stock headage, population numbers and nutrient export for the ten catchments between the pre-war year of 1931, 1970 and 1988 shows that there has been a substantial loss of rough grazing to fertilized temporary and permanent grasslands, an increase in the hectarage devoted to arable, consistent increases in the stocking of cattle and sheep, and a marked movement of people to these rural catchments.

5. All of these trends have increased the flows of nutrients, with more than a doubling of both total N and total P loads during the period. On average in these rural catchments, stock wastes have been the greatest contributors to both N and P exports, with cultivation the next most important source of N and people of P. Ratios of N to P were high in 1931 and remain little changed, so that, in these catchments, phosphorus continues to be the nutrient most likely to control algal crops in standing waters supplied by the rivers studied.
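
The arithmetic of an export coefficient model is simple enough to show directly. The sketch below sums source-by-source P exports for a catchment and converts the load to a concentration using the annual discharge; every coefficient and inventory figure is invented for the example, not taken from the paper's calibration.

```python
# Illustrative export-coefficient calculation; the coefficients and the
# catchment inventory below are invented for the example, not the paper's
# calibrated 1988 values.
export_coeff_p = {          # kg P exported per unit source per year
    "arable_ha": 0.3,
    "grassland_ha": 0.1,
    "cattle_head": 0.5,
    "sheep_head": 0.1,
    "people": 0.4,
}

catchment = {               # hypothetical catchment inventory
    "arable_ha": 12000,
    "grassland_ha": 8000,
    "cattle_head": 5000,
    "sheep_head": 20000,
    "people": 3000,
}

discharge_m3 = 60e6         # annual river discharge in cubic metres

# Total load is the sum of (export coefficient x source magnitude);
# the mean concentration follows by dividing by the discharge.
load_kg = sum(export_coeff_p[k] * catchment[k] for k in catchment)
conc_mg_l = load_kg * 1e6 / (discharge_m3 * 1e3)   # kg -> mg, m3 -> litres
print(f"total P load: {load_kg:.0f} kg/yr, concentration: {conc_mg_l:.3f} mg/l")
```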

Relevance:

100.00%

Publisher:

Abstract:

OBJECTIVES: The prediction of protein structure and the precise understanding of protein folding and unfolding processes remain among the greatest challenges in structural biology and bioinformatics. Computer simulations based on molecular dynamics (MD) are at the forefront of the effort to gain a deeper understanding of these complex processes. Currently, these MD simulations are usually on the order of tens of nanoseconds, generate a large amount of conformational data and are computationally expensive. More and more groups run such simulations and generate a myriad of data, which raises new challenges in managing and analyzing these data. Because of the vast range of proteins researchers want to study and simulate, the computational effort needed to generate data, the large data volumes involved, and the different types of analyses scientists need to perform, it is desirable to provide a public repository allowing researchers to pool and share protein unfolding data. METHODS: To adequately organize, manage, and analyze the data generated by unfolding simulation studies, we designed a data warehouse system that is embedded in a grid environment to facilitate the seamless sharing of available computer resources and thus enable many groups to share complex molecular dynamics simulations on a more regular basis. RESULTS: To gain insight into the conformational fluctuations and stability of the monomeric forms of the amyloidogenic protein transthyretin (TTR), molecular dynamics unfolding simulations of the monomer of human TTR have been conducted. Trajectory data and meta-data of the wild-type (WT) protein and the highly amyloidogenic variant L55P-TTR represent the test case for the data warehouse. CONCLUSIONS: Web and grid services, especially pre-defined data mining services that can run on or 'near' the data repository of the data warehouse, are likely to play a pivotal role in the analysis of molecular dynamics unfolding data.

Relevance:

100.00%

Publisher:

Abstract:

With the increasing awareness of protein folding disorders, the explosion of genomic information, and the need for efficient ways to predict protein structure, protein folding and unfolding has become a central issue in molecular sciences research. Molecular dynamics computer simulations are increasingly employed to understand the folding and unfolding of proteins. Running protein unfolding simulations is computationally expensive, and finding ways to enhance performance is a grid research issue in its own right. However, more and more groups run such simulations and generate a myriad of data, which raises new challenges in managing and analyzing these data. Because of the vast range of proteins researchers want to study and simulate, the computational effort needed to generate data, the large data volumes involved, and the different types of analyses scientists need to perform, it is desirable to provide a public repository allowing researchers to pool and share protein unfolding data. This paper describes efforts to provide a grid-enabled data warehouse for protein unfolding data. We outline the challenges and present first results on the design and implementation of the data warehouse.

Relevance:

100.00%

Publisher:

Abstract:

The P-found protein folding and unfolding simulation repository is designed to allow scientists to perform data mining and other analyses across large, distributed simulation data sets. There are two storage components in P-found: a primary repository of simulation data that is used to populate the second component, and a data warehouse that contains important molecular properties. These properties may be used for data mining studies. Here we demonstrate how grid technologies can support multiple, distributed P-found installations. In particular, we look at two aspects: firstly, how grid data management technologies can be used to access the distributed data warehouses; and secondly, how the grid can be used to transfer analysis programs to the primary repositories — this is an important and challenging aspect of P-found, due to the large data volumes involved and the desire of scientists to maintain control of their own data. The grid technologies we are developing with the P-found system will allow new large data sets of protein folding simulations to be accessed and analysed in novel ways, with significant potential for enabling scientific discovery.

Relevance:

100.00%

Publisher:

Abstract:

Considerable progress has taken place in numerical weather prediction over the last decade. It has been possible to extend predictive skill in the extra-tropics of the Northern Hemisphere during the winter from less than five days to seven days. Similar improvements, albeit at a lower level, have taken place in the Southern Hemisphere. Another example of improvement in the forecasts is the prediction of intense synoptic phenomena such as cyclogenesis, which on the whole is quite successful with the most advanced operational models (Bengtsson, 1989; Gadd and Kruze, 1988). A careful examination shows that there is no single cause for the improvements in predictive skill; instead, they are due to several different factors encompassing the forecasting system as a whole (Bengtsson, 1985). In this paper we will focus our attention on the role of data assimilation and the effect it may have on reducing the initial error and hence improving the forecast. The first part of the paper contains a theoretical discussion of error growth in simple data assimilation systems, following Leith (1983). In the second part we will apply the results to actual forecast data from ECMWF. The potential for further forecast improvements within the framework of the present observing system in the two hemispheres will be discussed.
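
A minimal sketch of cycled error growth in the spirit of Leith (1983): forecast error grows logistically between analyses, and each analysis optimally combines the background with the observations. The growth rate, saturation level and observation error below are illustrative numbers, not ECMWF values.

```python
import numpy as np

# Minimal sketch of error growth in a cycled assimilation system; the
# growth rate, saturation level and observation error are illustrative.
e_sat = 1.0        # saturation (climatological) error
growth = 0.4       # growth rate per assimilation cycle
e_obs = 0.15       # observation error

def forecast_error(e):
    """Logistic error growth: exponential at small error, saturating at e_sat."""
    return e + growth * e * (1.0 - e / e_sat)

e = 0.5
for cycle in range(20):
    e_b = forecast_error(e)                                  # background error
    e = np.sqrt((e_b**2 * e_obs**2) / (e_b**2 + e_obs**2))   # optimal analysis
print(f"asymptotic analysis error: {e:.3f}")
```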

Relevance:

100.00%

Publisher:

Abstract:

This paper is concerned with the problem of propagation from a monofrequency coherent line source above a plane of homogeneous surface impedance. The solution of this problem occurs in the kernel of certain boundary integral equation formulations of acoustic propagation above an impedance boundary, and the discussion of the paper is motivated by this application. The paper starts by deriving representations, as Laplace-type integrals, of the solution and its first partial derivatives. The evaluation of these integral representations by Gauss-Laguerre quadrature is discussed, and theoretical bounds on the truncation error are obtained. Specific approximations are proposed which are shown to be accurate except in the very near field, for all angles of incidence and a wide range of values of surface impedance. The paper finishes with derivations of partial results and analogous Laplace-type integral representations for the case of a point source.
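
For orientation, Gauss-Laguerre quadrature approximates integrals of the form ∫0^∞ e^(−t) f(t) dt by a weighted sum over Laguerre nodes. The sketch below shows the mechanics on a stand-in integrand; it does not reproduce the paper's kernel or its truncation-error bounds.

```python
import numpy as np
from numpy.polynomial.laguerre import laggauss

# Gauss-Laguerre quadrature evaluates Laplace-type integrals
#   I = int_0^inf exp(-t) f(t) dt  ~=  sum_k w_k f(t_k).
# Test integrand chosen for illustration (not the paper's kernel):
# f(t) = 1 / (1 + t^2).
for n in (4, 8, 16, 32):
    t, w = laggauss(n)                 # nodes and weights for an n-point rule
    approx = np.sum(w / (1.0 + t**2))
    print(f"n={n:2d}  I ~= {approx:.10f}")
```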

Relevance:

100.00%

Publisher:

Abstract:

The Advanced Along-Track Scanning Radiometer (AATSR) was launched on Envisat in March 2002. The AATSR instrument is designed to retrieve precise and accurate global sea surface temperature (SST) that, combined with the large data set collected by its predecessors, ATSR and ATSR-2, will provide a long-term record of SST data spanning more than 15 years. This record can be used for independent monitoring and detection of climate change. The AATSR validation programme has successfully completed its initial phase. The programme involves validation of the AATSR-derived SST values using in situ radiometers, in situ buoys and global SST fields from other data sets. The results of the initial programme presented here demonstrate that the AATSR instrument is currently close to meeting its scientific objective of determining global SST to an accuracy of 0.3 K (one sigma). For night-time data, the analysis gives warm biases of between +0.04 K (0.28 K) for buoys and +0.06 K (0.20 K) for radiometers, with slightly higher errors observed for daytime data, which shows warm biases of between +0.02 K (0.39 K) for buoys and +0.11 K (0.33 K) for radiometers. These results show that the ATSR series of instruments continues to be the world leader in delivering accurate space-based observations of SST, a key climate parameter.
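
The headline statistics have a simple form: for a set of satellite-minus-in-situ matchups, the bias is the mean difference and the bracketed figure is, presumably, the standard deviation. The sketch below reproduces that calculation on synthetic differences whose scale merely mimics the reported night-time buoy numbers.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical matchup data: satellite SST minus in situ SST (kelvin).
# The numbers are synthetic, chosen only to mimic the scale of the
# reported night-time buoy statistics (+0.04 K bias, 0.28 K scatter).
diff = rng.normal(0.04, 0.28, 5000)

bias = diff.mean()                 # mean satellite-minus-in-situ difference
sd = diff.std(ddof=1)              # scatter about the bias
print(f"bias = {bias:+.2f} K, standard deviation = {sd:.2f} K")
```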

Relevance:

100.00%

Publisher:

Abstract:

A number of methods of evaluating the validity of interval forecasts of financial data are analysed, and illustrated using intraday FTSE100 index futures returns. Some existing interval forecast evaluation techniques, such as the Markov chain approach of Christoffersen (1998), are shown to be inappropriate in the presence of periodic heteroscedasticity. Instead, we consider a regression-based test, and a modified version of Christoffersen's Markov chain test for independence, and analyse their properties when the financial time series exhibit periodic volatility. These approaches lead to different conclusions when interval forecasts of FTSE100 index futures returns generated by various GARCH(1,1) and periodic GARCH(1,1) models are evaluated.
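
As a concrete illustration of the machinery under evaluation, the sketch below computes Christoffersen's (1998) unconditional-coverage and first-order Markov independence likelihood-ratio tests from a 0/1 hit sequence. The synthetic hit sequence is an assumption; the abstract's point is precisely that the independence test can mislead under periodic volatility.

```python
import numpy as np
from scipy.stats import chi2

def christoffersen_tests(hits, p=0.90):
    """Likelihood-ratio tests on a 0/1 hit sequence (1 = return fell inside
    the forecast interval), following Christoffersen (1998)."""
    hits = np.asarray(hits, dtype=int)
    n1 = hits.sum()
    n0 = hits.size - n1
    pi_hat = n1 / hits.size

    # Unconditional coverage: H0 is that the hit rate equals p
    lr_uc = -2 * ((n0 * np.log(1 - p) + n1 * np.log(p))
                  - (n0 * np.log(1 - pi_hat) + n1 * np.log(pi_hat)))

    # Independence: first-order Markov transition counts
    t = hits[:-1] * 2 + hits[1:]                   # 0->0, 0->1, 1->0, 1->1
    n00, n01, n10, n11 = [(t == k).sum() for k in range(4)]
    p01 = n01 / max(n00 + n01, 1)
    p11 = n11 / max(n10 + n11, 1)
    p1 = (n01 + n11) / (n00 + n01 + n10 + n11)
    ll_markov = (n00 * np.log(1 - p01) + n01 * np.log(p01)
                 + n10 * np.log(1 - p11) + n11 * np.log(p11))
    ll_indep = (n00 + n10) * np.log(1 - p1) + (n01 + n11) * np.log(p1)
    lr_ind = -2 * (ll_indep - ll_markov)

    return {"LR_uc": lr_uc, "p_uc": 1 - chi2.cdf(lr_uc, 1),
            "LR_ind": lr_ind, "p_ind": 1 - chi2.cdf(lr_ind, 1)}

# Synthetic i.i.d. hit sequence with the nominal 90% coverage
rng = np.random.default_rng(4)
print(christoffersen_tests(rng.uniform(size=2000) < 0.90))
```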

Relevance:

100.00%

Publisher:

Abstract:

We examine how the accuracy of real-time forecasts from models that include autoregressive terms can be improved by estimating the models on ‘lightly revised’ data instead of using data from the latest-available vintage. The benefits of estimating autoregressive models on lightly revised data are related to the nature of the data revision process and the underlying process for the true values. Empirically, we find improvements in root mean square forecasting error of 2–4% when forecasting output growth and inflation with univariate models, and of 8% with multivariate models. We show that multiple-vintage models, which explicitly model data revisions, require large estimation samples to deliver competitive forecasts.
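
A toy illustration of why the vintage matters when estimating autoregressions: measurement noise in early releases attenuates the estimated persistence (classical errors-in-variables bias), and lightly revised data carry less of it. The AR(1) setup and noise levels are assumptions, not the paper's real-time data set or multivariate design.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulate a "true" AR(1) series and two data vintages: a noisy first
# release and a lightly revised estimate with most of the noise removed.
T, phi = 2000, 0.6
truth = np.zeros(T)
for t in range(1, T):
    truth[t] = phi * truth[t - 1] + rng.normal()

first_release = truth + rng.normal(0, 0.8, T)    # noisy initial estimates
lightly_revised = truth + rng.normal(0, 0.2, T)  # after one revision round

def fit_ar1(y):
    """OLS estimate of the AR(1) coefficient."""
    return y[:-1] @ y[1:] / (y[:-1] @ y[:-1])

# Measurement noise attenuates the estimated persistence, degrading forecasts.
print("phi fitted on first releases:  ", round(fit_ar1(first_release), 3))
print("phi fitted on lightly revised: ", round(fit_ar1(lightly_revised), 3))
print("true phi:                      ", phi)
```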

Relevance:

100.00%

Publisher:

Abstract:

This technique paper describes a novel method for quantitatively and routinely identifying auroral breakup following substorm onset using the Time History of Events and Macroscale Interactions During Substorms (THEMIS) all-sky imagers (ASIs). Substorm onset is characterised by a brightening of the aurora that is followed by auroral poleward expansion and auroral breakup. This breakup can be identified by a sharp increase in the auroral intensity i(t) and the time derivative of auroral intensity i'(t). Utilising both i(t) and i'(t), we have developed an algorithm for identifying the time interval and spatial location of auroral breakup during the substorm expansion phase within the field of view of ASI data, based solely on quantifiable characteristics of the optical auroral emissions. We compare the time interval determined by the algorithm to independently identified auroral onset times from three previously published studies. In each case the time interval determined by the algorithm is within error of the onset independently identified by the prior studies. We further show the utility of the algorithm by comparing the breakup intervals determined using the automated algorithm to an independent list of substorm onset times. We demonstrate that up to 50% of the breakup intervals characterised by the algorithm are within the uncertainty of the times identified in the independent list. The quantitative description and routine identification of an interval of auroral brightening during the substorm expansion phase provides a foundation for unbiased statistical analysis of the aurora, offering a new scientific tool for probing the physics of the auroral substorm and for identifying the processes leading to substorm onset.
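
A minimal sketch of the two-threshold idea on a synthetic intensity series: breakup is flagged where both i(t) and i'(t) exceed multiples of their quiet-time statistics. The cadence, rise profile and threshold multipliers are illustrative assumptions, not the calibrated values used with the THEMIS ASIs.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic auroral intensity time series at a 3-s imager cadence: quiet
# background, then a sharp expansion-phase brightening at t = 600 s.
t = np.arange(0.0, 1200.0, 3.0)
i_t = 100 + rng.normal(0, 3, t.size)
rise = t >= 600
i_t[rise] += 80 * (1 - np.exp(-(t[rise] - 600) / 15.0))

di_dt = np.gradient(i_t, t)                     # time derivative i'(t)

# Flag breakup where both the intensity and its derivative exceed
# multiples of their quiet-time statistics.
quiet = t < 300
i_thresh = i_t[quiet].mean() + 5 * i_t[quiet].std()
d_thresh = 3 * di_dt[quiet].std()
breakup = (i_t > i_thresh) & (di_dt > d_thresh)

if breakup.any():
    print(f"auroral breakup flagged from t = {t[breakup][0]:.0f} s")
```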

Relevance:

100.00%

Publisher:

Abstract:

Palaeodata in synthesis form are needed as benchmarks for the Palaeoclimate Modelling Intercomparison Project (PMIP). Advances since the last synthesis of terrestrial palaeodata from the last glacial maximum (LGM) call for a new evaluation, especially of data from the tropics. Here pollen, plant-macrofossil, lake-level, noble gas (from groundwater) and δ18O (from speleothems) data are compiled for 18±2 ka (14C), 32 °N–33 °S. The reliability of the data was evaluated using explicit criteria and some types of data were re-analysed using consistent methods in order to derive a set of mutually consistent palaeoclimate estimates of mean temperature of the coldest month (MTCO), mean annual temperature (MAT), plant available moisture (PAM) and runoff (P-E). Cold-month temperature (MTCO) anomalies from plant data range from −1 to −2 K near sea level in Indonesia and the S Pacific, through −6 to −8 K at many high-elevation sites to −8 to −15 K in S China and the SE USA. MAT anomalies from groundwater or speleothems seem more uniform (−4 to −6 K), but the data are as yet sparse; a clear divergence between MAT and cold-month estimates from the same region is seen only in the SE USA, where cold-air advection is expected to have enhanced cooling in winter. Regression of all cold-month anomalies against site elevation yielded an estimated average cooling of −2.5 to −3 K at modern sea level, increasing to ≈−6 K by 3000 m. However, Neotropical sites showed larger-than-average sea-level cooling (−5 to −6 K) and a non-significant elevation effect, whereas W and S Pacific sites showed much less sea-level cooling (−1 K) and a stronger elevation effect. These findings support the inference that tropical sea-surface temperatures (SSTs) were lower than the CLIMAP estimates, but they limit the plausible average tropical sea-surface cooling, and they support the existence of CLIMAP-like geographic patterns in SST anomalies. Trends of PAM and lake levels indicate wet LGM conditions in the W USA, and at the highest elevations, with generally dry conditions elsewhere. These results suggest a colder-than-present ocean surface producing a weaker hydrological cycle, more arid continents, and arguably steeper-than-present terrestrial lapse rates. Such linkages are supported by recent observations on freezing-level height and tropical SSTs; moreover, simulations of “greenhouse” and LGM climates point to several possible feedback processes by which low-level temperature anomalies might be amplified aloft.
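
The elevation regression is straightforward to illustrate: a straight-line fit of cold-month anomaly against site elevation gives the sea-level cooling as the intercept and the steepening aloft as the slope. The site values below are invented to mimic the reported pattern, not the compiled data.

```python
import numpy as np

# Hypothetical site data: LGM cold-month temperature anomaly (K) versus
# site elevation (m), invented to resemble the reported pattern of roughly
# -2.5 to -3 K cooling at sea level steepening to about -6 K at 3000 m.
elev = np.array([0, 200, 500, 900, 1500, 2200, 2800, 3200])
anom = np.array([-2.6, -3.0, -3.4, -3.9, -4.6, -5.2, -5.8, -6.3])

slope, intercept = np.polyfit(elev, anom, 1)    # least-squares straight line
print(f"sea-level cooling: {intercept:.1f} K")
print(f"steepening with height: {slope * 1000:.2f} K per 1000 m")
print(f"predicted cooling at 3000 m: {intercept + slope * 3000:.1f} K")
```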