915 results for Sampling method


Relevance:

30.00%

Publisher:

Abstract:

A revised Bayesian algorithm for estimating surface rain rate, convective rain proportion, and latent heating profiles from satellite-borne passive microwave radiometer observations over ocean backgrounds is described. The algorithm searches a large database of cloud-radiative model simulations to find cloud profiles that are radiatively consistent with a given set of microwave radiance measurements. The properties of these radiatively consistent profiles are then composited to obtain best estimates of the observed properties. The revised algorithm is supported by an expanded and more physically consistent database of cloud-radiative model simulations. The algorithm also features a better quantification of the convective and nonconvective contributions to total rainfall, a new geographic database, and an improved representation of background radiances in rain-free regions. Bias and random error estimates are derived from applications of the algorithm to synthetic radiance data, based upon a subset of cloud-resolving model simulations, and from the Bayesian formulation itself. Synthetic rain-rate and latent heating estimates exhibit a trend of high (low) bias for low (high) retrieved values. The Bayesian estimates of random error are propagated to represent errors at coarser time and space resolutions, based upon applications of the algorithm to TRMM Microwave Imager (TMI) data. Errors in TMI instantaneous rain-rate estimates at 0.5°-resolution range from approximately 50% at 1 mm h−1 to 20% at 14 mm h−1. Errors in collocated spaceborne radar rain-rate estimates are roughly 50%–80% of the TMI errors at this resolution. The estimated algorithm random error in TMI rain rates at monthly, 2.5° resolution is relatively small (less than 6% at 5 mm day−1) in comparison with the random error resulting from infrequent satellite temporal sampling (8%–35% at the same rain rate). Percentage errors resulting from sampling decrease with increasing rain rate, and sampling errors in latent heating rates follow the same trend. Averaging over 3 months reduces sampling errors in rain rates to 6%–15% at 5 mm day−1, with proportionate reductions in latent heating sampling errors.
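
The compositing step described above is, in essence, a Bayesian weighted average over the database of simulated cloud profiles. A minimal sketch of that idea, assuming a Gaussian radiance-error model and illustrative variable names (this is not the operational implementation), could look like:

```python
import numpy as np

def bayesian_composite(tb_obs, tb_db, rain_db, obs_err_cov):
    """Composite database rain rates weighted by radiative consistency.

    tb_obs      : (n_chan,) observed brightness temperatures
    tb_db       : (n_prof, n_chan) simulated brightness temperatures
    rain_db     : (n_prof,) surface rain rates of the database profiles
    obs_err_cov : (n_chan, n_chan) radiance error covariance (assumed Gaussian)
    """
    d = tb_db - tb_obs                                  # radiance mismatch per profile
    s_inv = np.linalg.inv(obs_err_cov)
    # Mahalanobis distance of each simulated profile from the observation
    dist2 = np.einsum('ij,jk,ik->i', d, s_inv, d)
    w = np.exp(-0.5 * dist2)                            # Bayesian weights
    w /= w.sum()
    rain_est = np.sum(w * rain_db)                      # posterior-mean rain rate
    rain_var = np.sum(w * (rain_db - rain_est) ** 2)    # spread as a random-error proxy
    return rain_est, np.sqrt(rain_var)
```

The weighted spread of the database rain rates is the kind of formulation-based random-error estimate referred to in the abstract.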

Relevance:

30.00%

Publisher:

Abstract:

A sampling oscilloscope is one of the main units in an automatic pulse measurement system (APMS). Time jitter in waveform samplers is an important error source that affects the precision of data acquisition. In this paper, this kind of error is greatly reduced by using a deconvolution method. First, the probability density function (PDF) of the time jitter distribution is determined by a statistical approach; then this PDF is used as the convolution kernel to deconvolve the acquired waveform data (with additional averaging). The result is waveform data from which the effect of time jitter has been removed, so the measurement precision of the APMS is greatly improved. In addition, computer simulations are presented that demonstrate the success of the method.
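
The abstract does not specify the deconvolution algorithm. A minimal sketch of the idea, using a regularized spectral division with the jitter PDF as the convolution kernel (the function names and the regularization choice are illustrative, not taken from the paper), is:

```python
import numpy as np

def remove_jitter(avg_waveform, jitter_pdf, eps=1e-3):
    """Deconvolve the averaged waveform with the jitter PDF (illustrative sketch).

    avg_waveform : averaged sampled waveform (jitter acts as a smoothing kernel)
    jitter_pdf   : sampled probability density of the time jitter, same grid spacing
    eps          : regularization to suppress noise amplification at weak frequencies
    """
    n = len(avg_waveform)
    kernel = np.zeros(n)
    kernel[:len(jitter_pdf)] = jitter_pdf / np.sum(jitter_pdf)   # normalized kernel at zero lag
    W = np.fft.rfft(avg_waveform)
    K = np.fft.rfft(kernel)
    # Regularized (Wiener-like) spectral division instead of a bare 1/K
    deconv = W * np.conj(K) / (np.abs(K) ** 2 + eps)
    return np.fft.irfft(deconv, n)
```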

Relevance:

30.00%

Publisher:

Abstract:

Following a malicious or accidental atmospheric release in an outdoor environment, it is essential for first responders to ensure safety by identifying areas where human life may be in danger. For this to happen quickly, reliable information is needed on the source strength and location, and on the type of chemical agent released. We present here an inverse modelling technique that estimates the source strength and location of such a release, together with the uncertainty in those estimates, using a limited number of concentration measurements from a network of chemical sensors, considering a single, steady, ground-level source. The technique is evaluated using data from a set of dispersion experiments conducted in a meteorological wind tunnel, where simultaneous measurements of concentration time series were obtained in the plume from a ground-level point-source emission of a passive tracer. In particular, we analyze the sensitivity to the number of sensors deployed and their arrangement, and to sampling and model errors. We find that the inverse algorithm can generate acceptable estimates of the source characteristics with as few as four sensors, provided these are well placed and the sampling error is controlled. Configurations with at least three sensors in a profile across the plume were found to be superior to other arrangements examined. Analysis of the influence of sampling error due to the use of short averaging times showed that the uncertainty in the source estimates grew as the sampling time decreased. This demonstrated that averaging times greater than about 5 min (full-scale time) lead to acceptable accuracy.
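
As a rough illustration of the inversion idea only (not the paper's Bayesian formulation, which also quantifies uncertainty), a point estimate of source strength and location can be obtained by fitting a forward dispersion model to the sensor concentrations. The sketch below assumes a crude steady ground-level Gaussian-plume model with fixed spread parameters; every parameter value and name is a placeholder:

```python
import numpy as np
from scipy.optimize import least_squares

def plume_model(x, y, q, xs, ys, u=2.0, sigma_y=5.0, sigma_z=3.0):
    """Very simple steady ground-level Gaussian-plume forward model
    (an assumption for illustration, not the model used in the paper)."""
    dx = x - xs
    dy = y - ys
    c = q / (np.pi * u * sigma_y * sigma_z) * np.exp(-0.5 * (dy / sigma_y) ** 2)
    return np.where(dx > 0, c, 0.0)          # no concentration upwind of the source

def estimate_source(sensor_xy, measured_c):
    """Fit source strength q and location (xs, ys) to concentrations at
    sensor_xy, an (n, 2) array of sensor coordinates."""
    def residuals(p):
        q, xs, ys = p
        return plume_model(sensor_xy[:, 0], sensor_xy[:, 1], q, xs, ys) - measured_c
    sol = least_squares(residuals, x0=[1.0, 0.0, 0.0])
    return sol.x  # (q, xs, ys)
```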

Relevance:

30.00%

Publisher:

Abstract:

The task of this paper is to develop a Time-Domain Probe Method for the reconstruction of impenetrable scatterers. The basic idea of the method is to use pulses in the time domain and the time-dependent response of the scatterer to reconstruct its location and shape. The method is based on the basic causality principle of time-dependent scattering. The method is independent of the boundary condition and is applicable to limited-aperture scattering data. In particular, we discuss the reconstruction of the shape of a rough surface in three dimensions from time-domain measurements of the scattered field. In practice, measurement data are collected where the incident field is given by a pulse. We formulate the time-domain field reconstruction problem equivalently via frequency-domain integral equations or via a retarded boundary integral equation based on results of Bamberger, Ha-Duong, and Lubich. In contrast to pure frequency-domain methods, here we use a time-domain characterization of the unknown shape for its reconstruction. Our paper describes the Time-Domain Probe Method and relates it to previous frequency-domain approaches on sampling and probe methods by Colton, Kirsch, Ikehata, Potthast, Luke, Sylvester et al. The approach significantly extends recent work of Chandler-Wilde and Lines (2005) and Luke and Potthast (2006) on the time-domain point source method. We provide a complete convergence analysis of the method for the rough surface scattering case and present numerical simulations and examples.

Relevance:

30.00%

Publisher:

Abstract:

Currently there are few observations of the urban wind field at heights other than rooftop level. Remote sensing instruments such as Doppler lidars provide wind speed data at many heights, which would be useful in determining wind loadings of tall buildings, and predicting local air quality. Studies comparing remote sensing with traditional anemometers carried out in flat, homogeneous terrain often use scan patterns which take several minutes. In an urban context the flow changes quickly in space and time, so faster scans are required to ensure little change in the flow over the scan period. We compare 3993 h of wind speed data collected using a three-beam Doppler lidar wind profiling method with data from a sonic anemometer (190 m). Both instruments are located in central London, UK; a highly built-up area. Based on wind profile measurements every 2 min, the uncertainty in the hourly mean wind speed due to the sampling frequency is 0.05–0.11 m s−1. The lidar tended to overestimate the wind speed by ≈0.5 m s−1 for wind speeds below 20 m s−1. Accuracy may be improved by increasing the scanning frequency of the lidar. This method is considered suitable for use in urban areas.
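
With one wind profile every 2 min, a full hour contains about 30 samples, and the quoted sampling uncertainty of the hourly mean is of the standard-error form. A minimal sketch, assuming independent samples (variable names are illustrative):

```python
import numpy as np

def hourly_mean_uncertainty(wind_speeds_2min):
    """Hourly mean wind speed and its standard error from 2-min profile samples
    (about 30 samples per hour); a sketch of the sampling-uncertainty estimate."""
    ws = np.asarray(wind_speeds_2min, dtype=float)
    n = ws.size
    mean = ws.mean()
    stderr = ws.std(ddof=1) / np.sqrt(n)   # sampling uncertainty of the hourly mean
    return mean, stderr
```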

Relevance:

30.00%

Publisher:

Abstract:

The hybrid Monte Carlo (HMC) method is a popular and rigorous method for sampling from a canonical ensemble. The HMC method is based on classical molecular dynamics simulations combined with a Metropolis acceptance criterion and a momentum resampling step. While the HMC method completely resamples the momentum after each Monte Carlo step, the generalized hybrid Monte Carlo (GHMC) method can be implemented with a partial momentum refreshment step. This property seems desirable for retaining some of the dynamic information throughout the sampling process, similar to stochastic Langevin and Brownian dynamics simulations. It is, however, essential to the success of the GHMC method that the rejection rate in the molecular dynamics part is kept at a minimum; otherwise an undesirable Zitterbewegung in the Monte Carlo samples is observed. In this paper, we describe a method to achieve very low rejection rates by using a modified energy, which is preserved to high order along molecular dynamics trajectories. The modified energy is based on backward error results for symplectic time-stepping methods. The proposed generalized shadow hybrid Monte Carlo (GSHMC) method is applicable to NVT as well as NPT ensemble simulations.
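
A minimal sketch of a single GHMC step for 1-D arrays q, p with unit masses is given below: partial momentum refreshment, a leapfrog molecular-dynamics trajectory, and a Metropolis test, with a momentum flip on rejection. It is illustrative only; the GSHMC variant of the paper evaluates the acceptance test with a modified (shadow) energy rather than the true energy used here.

```python
import numpy as np

def ghmc_step(q, p, U, grad_U, dt=0.01, n_steps=20, phi=0.2, rng=np.random):
    """One generalized hybrid Monte Carlo step (illustrative sketch)."""
    # Partial momentum refreshment: mix old momentum with fresh Gaussian noise
    xi = rng.standard_normal(p.shape)
    p = np.cos(phi) * p + np.sin(phi) * xi

    # Leapfrog molecular dynamics trajectory
    q_new, p_new = q.copy(), p.copy()
    p_new -= 0.5 * dt * grad_U(q_new)
    for _ in range(n_steps - 1):
        q_new += dt * p_new
        p_new -= dt * grad_U(q_new)
    q_new += dt * p_new
    p_new -= 0.5 * dt * grad_U(q_new)

    # Metropolis acceptance on the change in total energy
    dH = (U(q_new) + 0.5 * np.dot(p_new, p_new)) - (U(q) + 0.5 * np.dot(p, p))
    if np.log(rng.uniform()) < -dH:
        return q_new, p_new          # accept
    return q, -p                     # reject: keep state, flip momentum
```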

Relevance:

30.00%

Publisher:

Abstract:

Many applications, such as intermittent data assimilation, lead to a recursive application of Bayesian inference within a Monte Carlo context. Popular data assimilation algorithms include sequential Monte Carlo methods and ensemble Kalman filters (EnKFs). These methods differ in the way Bayesian inference is implemented. Sequential Monte Carlo methods rely on importance sampling combined with a resampling step, while EnKFs utilize a linear transformation of Monte Carlo samples based on the classic Kalman filter. While EnKFs have proven to be quite robust even for small ensemble sizes, they are not consistent since their derivation relies on a linear regression ansatz. In this paper, we propose another transform method, which does not rely on any a priori assumptions on the underlying prior and posterior distributions. The new method is based on solving an optimal transportation problem for discrete random variables. © 2013, Society for Industrial and Applied Mathematics
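
A small sketch of the resulting ensemble transform for scalar ensemble members: importance weights are computed from the likelihood, a discrete optimal transport problem couples the weighted prior ensemble to uniform posterior weights, and each posterior member becomes a weighted average of the prior members. This is an illustrative reading of the approach, solved here with a generic linear-programming routine rather than a dedicated transport solver:

```python
import numpy as np
from scipy.optimize import linprog

def etpf_update(ensemble, log_lik):
    """Ensemble transform via discrete optimal transport (sketch of the idea).

    ensemble : (m,) scalar prior ensemble members with equal weights 1/m
    log_lik  : (m,) log-likelihood of the observation for each member
    Returns an equally weighted posterior ensemble obtained by a linear
    transformation of the prior members.
    """
    ensemble = np.asarray(ensemble, dtype=float)
    log_lik = np.asarray(log_lik, dtype=float)
    m = ensemble.size
    w = np.exp(log_lik - log_lik.max())
    w /= w.sum()                                   # importance weights

    # Optimal transport between weights w and uniform 1/m, quadratic cost
    cost = (ensemble[:, None] - ensemble[None, :]) ** 2
    A_eq = np.zeros((2 * m, m * m))
    for i in range(m):
        A_eq[i, i * m:(i + 1) * m] = 1.0           # sum_j T[i, j] = w[i]
        A_eq[m + i, i::m] = 1.0                    # sum_i T[i, j] = 1/m
    b_eq = np.concatenate([w, np.full(m, 1.0 / m)])
    res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None),
                  method='highs')
    T = res.x.reshape(m, m)
    # Each posterior member is a weighted average of prior members
    return m * (T.T @ ensemble)
```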

Relevance:

30.00%

Publisher:

Abstract:

The high computational cost of calculating the radiative heating rates in numerical weather prediction (NWP) and climate models requires that calculations are made infrequently, leading to poor sampling of the fast-changing cloud field and a poor representation of the feedback that would occur. This paper presents two related schemes for improving the temporal sampling of the cloud field. Firstly, the ‘split time-stepping’ scheme takes advantage of the independent nature of the monochromatic calculations of the ‘correlated-k’ method to split the calculation into gaseous absorption terms that are highly dependent on changes in cloud (the optically thin terms) and those that are not (optically thick). The small number of optically thin terms can then be calculated more often to capture changes in the grey absorption and scattering associated with cloud droplets and ice crystals. Secondly, the ‘incremental time-stepping’ scheme uses a simple radiative transfer calculation using only one or two monochromatic calculations representing the optically thin part of the atmospheric spectrum. These are found to be sufficient to represent the heating rate increments caused by changes in the cloud field, which can then be added to the last full calculation of the radiation code. We test these schemes in an operational forecast model configuration and find a significant improvement is achieved, for a small computational cost, over the current scheme employed at the Met Office. The ‘incremental time-stepping’ scheme is recommended for operational use, along with a new scheme to correct the surface fluxes for the change in solar zenith angle between radiation calculations.
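
A schematic sketch of the 'incremental time-stepping' idea described above, with placeholder callables standing in for the full radiation calculation and the cheap optically thin calculation (this is an illustration, not the Met Office code):

```python
def incremental_heating(step, full_period, cache, full_calc, thin_calc, state):
    """Full radiation call every `full_period` steps; in between, only the cheap
    optically thin calculation supplies heating-rate increments caused by the
    changing cloud field. `full_calc` and `thin_calc` are placeholder callables."""
    if step % full_period == 0:
        cache['heating'] = full_calc(state)            # expensive, all k-terms
        cache['thin'] = thin_calc(state)               # thin terms at the same time
        return cache['heating']
    # Cheap update: add the change in the optically thin terms since the full call
    return cache['heating'] + (thin_calc(state) - cache['thin'])
```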

Relevance:

30.00%

Publisher:

Abstract:

Weeds tend to aggregate in patches within fields and there is evidence that this is partly owing to variation in soil properties. Because the processes driving soil heterogeneity operate at different scales, the strength of the relationships between soil properties and weed density would also be expected to be scale-dependent. Quantifying these effects of scale on weed patch dynamics is essential to guide the design of discrete sampling protocols for mapping weed distribution. We have developed a general method that uses novel within-field nested sampling and residual maximum likelihood (REML) estimation to explore scale-dependent relationships between weeds and soil properties. We have validated the method using a case study of Alopecurus myosuroides in winter wheat. Using REML, we partitioned the variance and covariance into scale-specific components and estimated the correlations between the weed counts and soil properties at each scale. We used variograms to quantify the spatial structure in the data and to map variables by kriging. Our methodology successfully captured the effect of scale on a number of edaphic drivers of weed patchiness. The overall Pearson correlations between A. myosuroides and soil organic matter and clay content were weak and masked the stronger correlations at >50 m. Knowing how the variance was partitioned across the spatial scales we optimized the sampling design to focus sampling effort at those scales that contributed most to the total variance. The methods have the potential to guide patch spraying of weeds by identifying areas of the field that are vulnerable to weed establishment.
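
The REML variance partitioning itself requires a mixed-model fit, but the spatial-structure step mentioned above can be illustrated compactly. A minimal sketch of an empirical semivariogram of the kind used to quantify spatial structure before kriging (variable names are illustrative):

```python
import numpy as np

def empirical_variogram(coords, values, bin_edges):
    """Empirical semivariogram: half the mean squared difference of the variable
    between pairs of points, grouped by separation distance.

    coords : (n, 2) sample locations;  values : (n,) measured variable
    """
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    iu = np.triu_indices(len(values), k=1)             # all unordered point pairs
    dist = d[iu]
    gamma = 0.5 * (values[iu[0]] - values[iu[1]]) ** 2
    centers, semivar = [], []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        mask = (dist >= lo) & (dist < hi)
        if mask.any():
            centers.append(0.5 * (lo + hi))
            semivar.append(gamma[mask].mean())
    return np.array(centers), np.array(semivar)
```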

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a critique of current methods of sampling and analyzing soils for metals in archaeological prospection. Commonly used methodologies in soil science are shown to be suitable for archaeological investigations, with a concomitant improvement in their resolution. Understanding the soil-fraction location, concentration range, and spatial distribution of autochthonous (native) soil metals is shown to be a vital precursor to archaeological-site investigations, as this is the background upon which anthropogenic deposition takes place. Nested sampling is suggested as the most cost-effective method of investigating the spatial variability in the autochthonous metal concentrations. The use of the appropriate soil horizon (or sampling depth) and point sampling are critical in the preparation of a sampling regime. Simultaneous extraction is proposed as the most efficient method of identifying the location and eventual fate of autochthonous and anthropogenic metals, respectively.

Relevance:

30.00%

Publisher:

Abstract:

An efficient and robust method to measure vitamin D (25-hydroxy vitamin D3 (25(OH)D3) and 25-hydroxy vitamin D2) in dried blood spots (DBS) has been developed and applied in the pan-European multi-centre, internet-based, personalised nutrition intervention study Food4Me. The method includes calibration with blood containing endogenous 25(OH)D3, spotted as DBS and corrected for haematocrit content. The methodology was validated following international standards. The performance characteristics did not reach those of the current gold standard, liquid chromatography-MS/MS in plasma, for all parameters, but were found to be very suitable for status-level determination under field conditions. DBS sample quality was very high, and 3778 measurements of 25(OH)D3 were obtained from 1465 participants. The study centre and the season within the study centre were very good predictors of 25(OH)D3 levels (P<0.001 for each case). Seasonal effects were modelled by fitting a sine function with a minimum 25(OH)D3 level on 20 January and a maximum on 21 July. The seasonal amplitude varied from centre to centre. The largest difference between winter and summer levels was found in Germany and the smallest in Poland. The model was cross-validated to determine the consistency of the predictions and the performance of the DBS method. The Pearson correlation between the measured values and the predicted values was r = 0.65, and the SD of their differences was 21.2 nmol/l. This includes the analytical variation and the biological variation within subjects. Overall, DBS obtained by unsupervised sampling of the participants at home was a viable methodology for obtaining vitamin D status information in a large nutritional study.
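
A minimal sketch of the seasonal model described above, a sinusoid with its minimum on 20 January (day 20) and maximum roughly half a year later on 21 July, fitted per study centre; parameter names and starting values are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def seasonal_25ohd(doy, mean_level, amplitude):
    """Sinusoidal seasonal model with minimum on day 20 and maximum ~day 203."""
    return mean_level - amplitude * np.cos(2 * np.pi * (doy - 20) / 365.25)

def fit_centre(doy, conc):
    """Fit centre-specific mean level and seasonal amplitude to measured
    25(OH)D3 values (doy = day of year of sampling, conc in nmol/l)."""
    params, _ = curve_fit(seasonal_25ohd, doy, conc, p0=[50.0, 15.0])
    return params  # (mean_level, amplitude)
```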

Relevance:

30.00%

Publisher:

Abstract:

Activities involving fauna monitoring are usually limited by a lack of resources; therefore, the choice of a proper and efficient methodology is fundamental to maximize the cost-benefit ratio. Both direct and indirect methods can be used to survey mammals, but the latter are preferred due to the difficulty of sighting and/or capturing the individuals, besides being cheaper. We compared the performance of two methods to survey medium and large-sized mammals, track plot recording and camera trapping, and assessed their costs. At Jatai Ecological Station (21°31'15"S, 47°34'42"W, Brazil) we installed ten camera traps along a dirt road, directly in front of ten track plots, and monitored them for 10 days. We cleaned the plots, adjusted the cameras, and noted down the recorded species daily. Records taken by both methods showed that they sample the local richness in different ways (Wilcoxon, T=231; p<0.01). The track plot method performed better at registering individuals, whereas camera trapping provided records that permitted more accurate species identification. The type of infra-red sensor camera used showed a strong bias towards individual body mass (R(2)=0.70; p=0.017), and the variable expenses of this method in a 10-day survey were estimated to be about 2.04 times higher than those of the track plot method; however, in the long run camera trapping becomes cheaper than track plot recording. In conclusion, track plot recording is good enough for quick surveys under a limited budget, and camera trapping is best for precise species identification and the investigation of species details, performing better for large animals. When used together, these methods can be complementary.

Relevance:

30.00%

Publisher:

Abstract:

MCNP has stood so far as one of the main Monte Carlo radiation transport codes. Its use, as with any other Monte Carlo based code, has increased as computers have become faster and more affordable over time. However, the use of the Monte Carlo method to tally events in volumes that represent a small fraction of the whole system may turn out to be unfeasible if a straight analogue transport procedure (no use of variance reduction techniques) is employed and precise results are demanded. Calculation of reaction rates in activation foils placed in critical systems is one such case. The present work takes advantage of the fixed source representation in MCNP to perform the above-mentioned task in a more effective sampling way (characterizing the neutron population in the vicinity of the tallying region and using it in a geometrically reduced coupled simulation). An extended analysis of source-dependent parameters is presented in order to understand their influence on simulation performance and on the validity of the results. Although discrepant results have been observed for small enveloping regions, the procedure presents itself as very efficient, giving adequate and precise results in shorter times than the standard analogue procedure. (C) 2007 Elsevier Ltd. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

The analytical determination of atmospheric pollutants still presents challenges due to the low-level concentrations (frequently in the mu g m(-3) range) and their variations with sampling site and time. In this work, a capillary membrane diffusion scrubber (CMDS) was scaled down to match with capillary electrophoresis (CE), a quick separation technique that requires nothing more than some nanoliters of sample and, when combined with capacitively coupled contactless conductometric detection (C(4)D), is particularly favorable for ionic species that do not absorb in the UV-vis region, like the target analytes formaldehyde, formic acid, acetic acid, and ammonium. The CMDS was coaxially assembled inside a PTFE tube and fed with acceptor phase (deionized water for species with a high Henry's constant, such as formaldehyde and carboxylic acids, or acidic solution for ammonia sampling, with equilibrium displacement to the non-volatile ammonium ion) at a low flow rate (8.3 nL s(-1)), while the sample was aspirated through the annular gap of the concentric tubes at 25 mL s(-1). A second unit, similar in every respect to the CMDS, was operated as a capillary membrane diffusion emitter (CMDE), generating a gas flow with known concentrations of ammonia for the evaluation of the CMDS. The fluids of the system were driven with inexpensive aquarium air pumps, and the collected samples were stored in vials cooled by a Peltier element. Complete protocols were developed for the analysis in air of NH(3), CH(3)COOH, HCOOH, and, with a derivatization setup, CH(2)O, by associating the CMDS collection with the determination by CE-C(4)D. The ammonia concentrations obtained by electrophoresis were checked against the reference spectrophotometric method based on Berthelot's reaction. Sensitivity enhancements of this reference method were achieved by using a modified Berthelot reaction, solenoid micro-pumps for liquid propulsion, and a long optical path cell based on a liquid core waveguide (LCW). All techniques and methods of this work are in line with green analytical chemistry trends. (C) 2010 Elsevier B.V. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

This work describes the development and optimization of a sequential injection method to automate the determination of paraquat by square-wave voltammetry employing a hanging mercury drop electrode. Automation by sequential injection enhanced the sampling throughput, improving the sensitivity and precision of the measurements as a consequence of the highly reproducible and efficient conditions of mass transport of the analyte toward the electrode surface. For instance, 212 analyses can be made per hour if the sample/standard solution is prepared off-line and the sequential injection system is used just to inject the solution towards the flow cell. In-line sample conditioning reduces the sampling frequency to 44 h(-1). Experiments were performed in 0.10 M NaCl, which was the carrier solution, using a frequency of 200 Hz, a pulse height of 25 mV, a potential step of 2 mV, and a flow rate of 100 mu L s(-1). For a concentration range between 0.010 and 0.25 mg L(-1), the current (i(p), mu A) read at the potential corresponding to the peak maximum fitted the following linear equation with the paraquat concentration (mg L(-1)): i(p) = (-20.5 +/- 0.3) C(paraquat) - (0.02 +/- 0.03). The limits of detection and quantification were 2.0 and 7.0 mu g L(-1), respectively. The accuracy of the method was evaluated by recovery studies using spiked water samples that were also analyzed by molecular absorption spectrophotometry after reduction of paraquat with sodium dithionite in an alkaline medium. No evidence of statistically significant differences between the two methods was observed at the 95% confidence level.
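
For illustration, the reported calibration line can be inverted to convert a measured peak current into a paraquat concentration. This is a sketch using the published coefficients and is only meaningful within the stated linear range:

```python
def paraquat_concentration(i_p_uA, slope=-20.5, intercept=-0.02):
    """Invert the reported calibration line i(p) = slope * C + intercept
    (peak current in microamperes, concentration in mg/L).
    Valid only within the stated linear range, 0.010-0.25 mg/L."""
    return (i_p_uA - intercept) / slope

# Example: a peak current of -2.0 microamperes corresponds to about 0.097 mg/L.
print(paraquat_concentration(-2.0))
```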