988 results for Sampling rates


Relevance: 30.00%

Abstract:

Background: There is a paucity of data describing the prevalence of childhood refractive error in the United Kingdom. The Northern Ireland Childhood Errors of Refraction study, along with its sister study the Aston Eye Study, are the first population-based surveys of children using both random cluster sampling and cycloplegic autorefraction to quantify levels of refractive error in the United Kingdom. Methods: Children aged 6–7 years and 12–13 years were recruited from a stratified random sample of primary and post-primary schools, representative of the population of Northern Ireland as a whole. Measurements included assessment of visual acuity, oculomotor balance, ocular biometry and cycloplegic binocular open-field autorefraction. Questionnaires were used to identify putative risk factors for refractive error. Results: 399 (57%) of the 6–7-year-olds and 669 (60%) of the 12–13-year-olds participated. School participation rates did not vary statistically significantly with the size of the school, whether the school was urban or rural, or whether it was in a deprived or non-deprived area. The gender balance, ethnicity and type of schooling of participants reflect those of the Northern Ireland population. Conclusions: The study design, sample size and methodology will ensure accurate measures of the prevalence of refractive errors in the target population and will facilitate comparisons with other population-based refractive data.
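The sampling design described above, a stratified random cluster sample in which whole schools are drawn within strata and all eligible pupils in a selected school are invited, can be illustrated with a short sketch. The school list, strata and sample sizes below are hypothetical and stand in for the study's actual sampling frame.

    import random

    # Hypothetical sampling frame: one record per school with its stratum
    # (e.g. urban/rural crossed with deprived/non-deprived) and enrolment.
    rng = random.Random(1)
    schools = [
        {"id": f"school_{i}",
         "stratum": rng.choice(["urban-deprived", "urban-nondeprived",
                                "rural-deprived", "rural-nondeprived"]),
         "pupils": rng.randint(20, 200)}
        for i in range(500)
    ]

    def stratified_cluster_sample(frame, clusters_per_stratum, seed=2):
        """Draw whole schools (clusters) at random within each stratum."""
        draw = random.Random(seed)
        sample = []
        for stratum in sorted({s["stratum"] for s in frame}):
            members = [s for s in frame if s["stratum"] == stratum]
            sample.extend(draw.sample(members, min(clusters_per_stratum, len(members))))
        return sample

    selected = stratified_cluster_sample(schools, clusters_per_stratum=5)
    print(len(selected), "schools selected; every eligible pupil in each is invited")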

Relevance: 30.00%

Abstract:

Human scent and human remains detection canines are used to locate living or deceased humans under many circumstances. Human scent canines locate individual humans on the basis of their unique scent profile, while human remains detection canines locate the general scent of decomposing human remains. Scent evidence is often collected by law enforcement agencies using a Scent Transfer Unit (STU-100), a dynamic headspace concentration device. The goals of this research were to evaluate the STU-100 for the collection of human scent samples, and to apply this method to the collection of samples from living and deceased humans and to the creation of canine training aids. The airflow rate and collection material used with the STU-100 were evaluated using a novel scent delivery method. Controlled Odor Mimic Permeation Systems were created containing representative standard compounds delivered at known rates, improving the reproducibility of optimization experiments. Flow rates and collection materials were compared. Higher airflow rates usually yielded significantly fewer total volatile compounds, owing to compound breakthrough through the collection material. Collection from polymer and cellulose-based materials demonstrated that the molecular backbone of the material is a factor in the trapping and release of compounds. The weave of the material also affects compound collection, as materials with a tighter weave demonstrated enhanced collection efficiencies. Using the optimized method, volatiles were efficiently collected from living and deceased humans. Replicates of the living human samples showed good reproducibility; however, the odor profiles from individuals were not always distinguishable from one another. Analysis of the human remains samples revealed similarity in the type and ratio of compounds. Two types of prototype training aids were developed utilizing combinations of pure compounds as well as volatiles from actual human samples concentrated onto sorbents, which were subsequently used in field tests. The pseudo-scent aids had moderate success in field tests, and the odor-pad aids had significant success. This research demonstrates that the STU-100 is a valuable tool for dog handlers and as a field instrument; however, modifications are warranted in order to improve its performance as a method for instrumental detection.

Relevance: 30.00%

Abstract:

The growing need for fast sampling of explosives in high-throughput areas has increased the demand for improved technology for the trace detection of illicit compounds. Detection of the volatiles associated with the presence of illicit compounds offers a different approach to sensitive trace detection of these compounds without increasing the false positive alarm rate. This study evaluated the performance of non-contact sampling and detection systems using statistical analysis, through the construction of Receiver Operating Characteristic (ROC) curves, in real-world scenarios for the detection of volatiles in the headspace of smokeless powder, used as the model system for generalizing explosives detection. A novel sorbent-coated disk, termed planar solid phase microextraction (PSPME), was previously used for rapid, non-contact sampling of the headspace of containers. The limits of detection for PSPME coupled to ion mobility spectrometry (IMS) detection were determined to be 0.5-24 ng for vapor sampling of volatile chemical compounds associated with illicit compounds, and the substrate demonstrated an extraction efficiency three times greater than other commercially available substrates, retaining >50% of the analyte after 30 minutes of sampling of an analyte spike, compared with a non-detect for the unmodified filters. Both static and dynamic PSPME sampling were used, coupled with two IMS detection systems, in which 10-500 mg quantities of smokeless powders were detected within 5-10 minutes of static sampling and 1 minute of dynamic sampling in 1-45 L closed systems, resulting in faster sampling and analysis times than conventional solid phase microextraction-gas chromatography-mass spectrometry (SPME-GC-MS) analysis. Similar real-world scenarios were sampled in low- and high-clutter environments with zero false positive rates. Excellent PSPME-IMS detection of the volatile analytes was evident from the ROC curves, with areas under the curve (AUC) of 0.85-1.0 and 0.81-1.0 for portable and bench-top IMS systems, respectively. ROC curves were also constructed for SPME-GC-MS, yielding AUCs of 0.95-1.0, comparable with PSPME-IMS detection. The PSPME-IMS technique produces fewer false positive results for non-contact vapor sampling, cutting costs and providing the effective sampling and detection needed in high-throughput scenarios, with performance similar to well-established techniques and the added advantage of fast detection in the field.
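For reference, the ROC analysis used here to summarize detection performance can be reproduced in a few lines; the labels and detector scores below are invented for illustration and are not the study's data.

    import numpy as np
    from sklearn.metrics import roc_curve, auc

    # Hypothetical detector responses: 1 = smokeless powder present, 0 = blank.
    labels = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])
    scores = np.array([0.92, 0.85, 0.70, 0.66, 0.45, 0.40, 0.30, 0.22, 0.15, 0.05])

    # False positive rate and true positive rate at every score threshold.
    fpr, tpr, thresholds = roc_curve(labels, scores)
    print("AUC =", auc(fpr, tpr))  # values near 1.0 indicate near-perfect separation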

Relevance: 30.00%

Abstract:

This research explores Bayesian updating as a tool for estimating parameters probabilistically by dynamic analysis of data sequences. Two distinct Bayesian updating methodologies are assessed. The first approach focuses on Bayesian updating of failure rates for primary events in fault trees. A Poisson Exponentially Weighted Moving Average (PEWMA) model is implemented to carry out Bayesian updating of failure rates for individual primary events in the fault tree. To provide a basis for testing the PEWMA model, a fault tree is developed based on the Texas City Refinery incident, which occurred in 2005. A qualitative fault tree analysis is then carried out to obtain a logical expression for the top event. A dynamic fault tree analysis is carried out by evaluating the top event probability at each Bayesian updating step by Monte Carlo sampling from the posterior failure rate distributions. It is demonstrated that PEWMA modeling is advantageous over conventional conjugate Poisson-Gamma updating techniques when failure data are collected over long time spans. The second approach focuses on Bayesian updating of parameters in non-linear forward models. Specifically, the technique is applied to the hydrocarbon material balance equation. In order to test the accuracy of the implemented Bayesian updating models, a synthetic data set is developed using the Eclipse reservoir simulator. Both structured-grid and MCMC-sampling-based solution techniques are implemented and are shown to model the synthetic data set with good accuracy. Furthermore, a graphical analysis shows that the implemented MCMC model displays good convergence properties. A case study demonstrates that the likelihood variance affects the rate at which the posterior assimilates information from the measured data sequence. Error in the measured data significantly affects the accuracy of the posterior parameter distributions. Increasing the likelihood variance mitigates random measurement errors, but causes the overall variance of the posterior to increase. Bayesian updating is shown to be advantageous over deterministic regression techniques as it allows for incorporation of prior belief and full modeling of uncertainty over the parameter ranges. As such, the Bayesian approach to estimation of parameters in the material balance equation shows utility for incorporation into reservoir engineering workflows.
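As a point of reference for the comparison above, the conventional conjugate Poisson-Gamma update of a failure rate can be written in a few lines; the prior parameters and failure counts below are hypothetical, and the PEWMA discounting itself is not shown.

    import numpy as np

    # Conjugate Poisson-Gamma updating of a failure rate (failures per unit time).
    # Prior: rate ~ Gamma(alpha, beta); after observing k failures over exposure t,
    # the posterior is Gamma(alpha + k, beta + t). All values below are hypothetical.
    alpha, beta = 1.0, 2.0                            # prior shape and rate
    observations = [(2, 1.0), (0, 1.0), (1, 1.0)]     # (failures, exposure) per period

    for k, t in observations:
        alpha, beta = alpha + k, beta + t

    # Monte Carlo draws from the posterior failure-rate distribution, e.g. for
    # propagating basic-event uncertainty through to the top event probability.
    rng = np.random.default_rng(0)
    rate_samples = rng.gamma(shape=alpha, scale=1.0 / beta, size=10_000)
    print("posterior mean failure rate:", rate_samples.mean())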

Relevance: 30.00%

Abstract:

Four firn cores were retrieved in 2007 at two ridges in the area of the Ekström Ice Shelf, Dronning Maud Land, coastal East Antarctica, in order to investigate the recent regional climate variability and the potential for future extraction of an intermediate-depth core. Stable water-isotope analysis, tritium content and electrical conductivity were used to date the cores. For the period 1981-2006 a strong and significant correlation was found between the stable-isotope composition of the firn cores in the hinterland and mean monthly air temperatures at Neumayer station (r=0.54-0.71). No atmospheric warming or cooling trend is inferred from our stable-isotope data for the period 1962-2006. The stable-isotope record of the ice/firn cores could extend well beyond the meteorological record of the region. No significant temporal variation of accumulation rates was detected. However, decreasing accumulation rates were found from coast to hinterland, as well as from east (Halvfarryggen) to west (Søråsen). The deuterium excess (d) exhibits similar differences (higher d at Søråsen, lower d at Halvfarryggen), with a weak negative temporal trend at Halvfarryggen (0.04 per mil/a), probably implying increasing oceanic input. We conclude that Halvfarryggen acts as a natural barrier for moisture-carrying air masses circulating in the region from east to west.

Relevance: 30.00%

Abstract:

To study the effects of temperature, salinity, and life processes (growth rates, size, metabolic effects, and physiological/genetic effects) on newly precipitated bivalve carbonate, we quantified the shell isotopic chemistry of adult and juvenile animals of the intertidal bivalve Mytilus edulis (blue mussel) collected alive from western Greenland and the central Gulf of Maine and cultured under controlled conditions. Data for juvenile and adult M. edulis bivalves cultured in this study, and previously by Wanamaker et al. (2006, doi:10.1029/2005GC001189), yielded statistically identical paleotemperature relationships. On the basis of these experiments we have developed a species-specific paleotemperature equation for the bivalve M. edulis [T °C = 16.28 (±0.10) - 4.57 (±0.15) {d18Oc VPDB - d18Ow VSMOW} + 0.06 (±0.06) {d18Oc VPDB - d18Ow VSMOW}^2; r^2 = 0.99; N = 323; p < 0.0001]. Compared to the Kim and O'Neil (1997) inorganic calcite equation, M. edulis deposits its shell in isotope equilibrium (d18Ocalcite) with ambient water. Carbon isotopes (d13Ccalcite) from sampled shells were substantially more negative than predicted values, indicating an uptake of metabolic carbon into shell carbonate, and d13Ccalcite disequilibrium increased with increasing salinity. Sampled shells of M. edulis showed no significant trends in d18Ocalcite based on size, cultured growth rates, or geographic collection location, suggesting that vital effects do not affect d18Ocalcite in M. edulis. The broad modern and paleogeographic distribution of this bivalve, its abundance during the Holocene, and the lack of an intraspecies physiologic isotope effect demonstrated here make it an ideal nearshore paleoceanographic proxy throughout much of the North Atlantic Ocean.
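The paleotemperature equation above can be applied directly once the carbonate and water isotope values are known; the sketch below only illustrates the arithmetic, and the example input values are hypothetical.

    def m_edulis_temperature(d18O_calcite_vpdb, d18O_water_vsmow):
        """Species-specific M. edulis paleotemperature equation, using the
        coefficients quoted above (uncertainty terms omitted)."""
        x = d18O_calcite_vpdb - d18O_water_vsmow
        return 16.28 - 4.57 * x + 0.06 * x ** 2

    # Hypothetical example: shell calcite of +1.0 per mil (VPDB) grown in
    # water of -1.5 per mil (VSMOW) implies roughly 5.2 degrees C.
    print(round(m_edulis_temperature(1.0, -1.5), 2))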

Relevance: 30.00%

Abstract:

With tweet volumes reaching 500 million a day, sampling is inevitable for any application using Twitter data. Realizing this, data providers such as Twitter, Gnip and Boardreader license sampled data streams priced in accordance with the sample size. Big Data applications working with sampled data would be interested in working with a sample large enough to be representative of the universal dataset. Previous work focusing on the representativeness issue has considered ensuring that the global occurrence rates of key terms can be reliably estimated from the sample. Present technology allows sample size estimation in accordance with probabilistic bounds on occurrence rates for the case of uniform random sampling. In this paper, we consider the problem of further improving sample size estimates by leveraging stratification in Twitter data. We analyze our estimates through an extensive study using simulations and real-world data, establishing the superiority of our method over uniform random sampling. Our work provides the technical know-how for data providers to expand their portfolio to include stratified sampled datasets, whereas applications benefit by being able to monitor more topics/events at the same data and computing cost.
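As an illustration of sample size estimation under a probabilistic bound on occurrence-rate error, the sketch below uses a standard Hoeffding-style bound for uniform random sampling; this is a generic calculation, not necessarily the specific bound used in the paper, and the tolerance values are arbitrary.

    import math

    def uniform_sample_size(epsilon, delta):
        """Sample size n such that, by Hoeffding's inequality, the empirical
        occurrence rate of a term deviates from the true rate by more than
        epsilon with probability at most delta under uniform random sampling."""
        return math.ceil(math.log(2.0 / delta) / (2.0 * epsilon ** 2))

    # Example: estimate occurrence rates to within +/-1 percentage point,
    # with at most a 5% chance of exceeding that error.
    print(uniform_sample_size(epsilon=0.01, delta=0.05))  # 18445 tweets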

Relevance: 30.00%

Abstract:

This study was undertaken in Napoleon Gulf and part of the offshore area of Lake Victoria, Uganda, at the Lufu landing site in Buvuma district, over three days of sampling in October 2015. It was conducted at four landing sites, including Busana and Kikondo in Buikwe district and Lufu in Buvuma district. The main aim was to determine the effect of the Lampara net on the catch rate and size of Rastrineobola argentea (mukene) harvested on Lake Victoria using various mesh sizes. The study focused on the 5 mm and 10 mm mesh sizes of the Lampara net. A total of 109 boats were sampled; the 5 mm mesh size gave catch rates of 78-200 kg/boat/day, while the 10 mm mesh size gave 248 kg/boat/day. Statistical tests on these two mesh sizes using one-way ANOVA indicated significant differences in catches (F=7.476; P<0.05) and in price values (F=5.488; P<0.05). This indicates that although the mukene fishery is currently expanding, a time will come when it too is depleted, so use of the appropriate 10 mm mesh size fishing gear is advisable for biodiversity conservation.
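A one-way ANOVA of the kind reported above can be reproduced with standard tools; the daily catch-rate figures below are invented for illustration and are not the study's raw data.

    from scipy import stats

    # Hypothetical daily catch rates (kg/boat/day) grouped by mesh size.
    catch_5mm = [78, 95, 120, 150, 180, 200]
    catch_10mm = [210, 230, 248, 260, 275]

    f_stat, p_value = stats.f_oneway(catch_5mm, catch_10mm)
    print(f"F = {f_stat:.3f}, P = {p_value:.4f}")  # P < 0.05 indicates a significant difference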

Relevance: 30.00%

Abstract:

Many exchange rate papers articulate the view that instabilities constitute a major impediment to exchange rate predictability. In this thesis we implement Bayesian and other techniques to account for such instabilities, and examine some of the main obstacles to exchange rate models' predictive ability. We first consider in Chapter 2 a time-varying parameter model in which fluctuations in exchange rates are related to short-term nominal interest rates ensuing from monetary policy rules, such as Taylor rules. Unlike existing exchange rate studies, the parameters of our Taylor rules are allowed to change over time, in light of the widespread evidence of shifts in fundamentals - for example in the aftermath of the Global Financial Crisis. Focusing on quarterly data from the crisis onwards, we detect forecast improvements upon a random walk (RW) benchmark for at least half, and for as many as seven out of 10, of the currencies considered. Results are stronger when we allow the time-varying parameters of the Taylor rules to differ between countries. In Chapter 3 we look closely at the role of time variation in parameters and other sources of uncertainty in hindering exchange rate models' predictive power. We apply a Bayesian setup that incorporates the notion that the relevant set of exchange rate determinants, and their corresponding coefficients, change over time. Using statistical and economic measures of performance, we first find that predictive models which allow for sudden, rather than smooth, changes in the coefficients yield significant forecast improvements and economic gains at horizons beyond 1 month. At shorter horizons, however, our methods fail to forecast better than the RW. We identify uncertainty in the estimation of the coefficients, and uncertainty about the precise degree of coefficient variability to incorporate in the models, as the main factors obstructing predictive ability. Chapter 4 focuses on the problem of the time-varying predictive ability of economic fundamentals for exchange rates. It uses bootstrap-based methods to uncover the time-specific conditioning information for predicting fluctuations in exchange rates. Employing several metrics for statistical and economic evaluation of forecasting performance, we find that our approach, based on pre-selecting and validating fundamentals across bootstrap replications, generates more accurate forecasts than the RW. The approach, known as bumping, robustly reveals parsimonious models with out-of-sample predictive power at the 1-month horizon, and outperforms alternative methods, including Bayesian, bagging, and standard forecast combinations. Chapter 5 exploits the predictive content of daily commodity prices for monthly commodity-currency exchange rates. It builds on the idea that the effect of daily commodity price fluctuations on commodity currencies is short-lived, and therefore harder to pin down at low frequencies. Using MIxed DAta Sampling (MIDAS) models, and Bayesian estimation methods to account for time variation in predictive ability, the chapter demonstrates the usefulness of suitably exploiting such short-lived effects to improve exchange rate forecasts. It further shows that the usual low-frequency predictors, such as money supplies and interest rate differentials, typically receive little support from the data at the monthly frequency, whereas MIDAS models featuring daily commodity prices are highly likely a posteriori. The chapter also introduces the random walk Metropolis-Hastings technique as a new tool to estimate MIDAS regressions.
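For context, a minimal random walk Metropolis-Hastings sampler of the kind referred to above is sketched below; the target log-posterior is a generic stand-in and the tuning constants are arbitrary, so this illustrates the mechanics rather than the thesis's actual MIDAS estimation.

    import numpy as np

    def log_posterior(theta):
        # Stand-in target: a standard normal log-density; a real application
        # would evaluate the MIDAS regression likelihood plus the prior here.
        return -0.5 * np.sum(theta ** 2)

    def rw_metropolis_hastings(n_draws=5000, dim=2, step=0.5, seed=0):
        rng = np.random.default_rng(seed)
        theta = np.zeros(dim)
        draws = np.empty((n_draws, dim))
        for i in range(n_draws):
            proposal = theta + step * rng.standard_normal(dim)  # random walk step
            # Accept with probability min(1, posterior ratio); the symmetric
            # proposal density cancels out of the acceptance ratio.
            if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(theta):
                theta = proposal
            draws[i] = theta
        return draws

    samples = rw_metropolis_hastings()
    print(samples.mean(axis=0))  # close to zero for the stand-in target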

Relevance: 30.00%

Abstract:

Coprime and nested sampling are well-known deterministic sampling techniques that operate at rates significantly lower than the Nyquist rate, and yet allow perfect reconstruction of the spectra of wide-sense stationary (WSS) signals. However, theoretical guarantees for these samplers assume ideal conditions such as synchronous sampling and the ability to perfectly compute statistical expectations. This thesis studies the performance of coprime and nested samplers in the spatial and temporal domains when these assumptions are violated. In the spatial domain, the robustness of these samplers is studied by considering arrays with perturbed sensor locations (with unknown perturbations). Simplified expressions for the Fisher Information matrix for perturbed coprime and nested arrays are derived, which explicitly highlight the role of the co-array. It is shown that even in the presence of perturbations, it is possible to resolve $O(M^2)$ sources under appropriate conditions on the size of the grid. The assumption of small perturbations leads to a novel "bi-affine" model in terms of source powers and perturbations. The redundancies in the co-array are then exploited to eliminate the nuisance perturbation variable and reduce the bi-affine problem to a linear underdetermined (sparse) problem in source powers. This thesis also studies the robustness of coprime sampling to a finite number of samples and to sampling jitter, by analyzing their effects on the quality of the estimated autocorrelation sequence. A variety of bounds on the error introduced by such non-ideal sampling schemes are computed by considering a statistical model for the perturbation. They indicate that coprime sampling leads to stable estimation of the autocorrelation sequence in the presence of small perturbations. Under appropriate assumptions on the distribution of the WSS signals, sharp bounds on the estimation error are established, which indicate that the error decays exponentially with the number of samples. The theoretical claims are supported by extensive numerical experiments.
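As a small illustration of the co-array idea discussed above, the sketch below enumerates the difference set generated by a coprime pair of sub-samplers; the particular choice M=4, N=5 and the 2N/M split are illustrative only, and the snippet simply checks which lags the differences cover for this configuration.

    import numpy as np

    M, N = 4, 5  # coprime undersampling factors (illustrative choice)

    # Sub-sampler 1: 2N samples spaced by M; sub-sampler 2: M samples spaced by N.
    sub1 = M * np.arange(2 * N)
    sub2 = N * np.arange(M)
    positions = np.union1d(sub1, sub2)

    # Difference set (co-array): each pairwise difference is a lag at which the
    # autocorrelation can be estimated from the sparse samples.
    diffs = np.unique(positions[:, None] - positions[None, :])
    lags = np.arange(-M * N, M * N + 1)
    print("every lag up to MN covered for this configuration:",
          np.isin(lags, diffs).all())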

Relevance: 30.00%

Abstract:

Compressed covariance sensing using quadratic samplers is gaining increasing interest in the recent literature. The covariance matrix often plays the role of a sufficient statistic in many signal and information processing tasks. However, owing to the large dimension of the data, it may become necessary to obtain a compressed sketch of the high-dimensional covariance matrix to reduce the associated storage and communication costs. Nested sampling has been proposed in the past as an efficient sub-Nyquist sampling strategy that enables perfect reconstruction of the autocorrelation sequence of Wide-Sense Stationary (WSS) signals, as though they were sampled at the Nyquist rate. The key idea behind nested sampling is to exploit properties of the difference set that naturally arises in the quadratic measurement model associated with covariance compression. In this thesis, we focus on developing novel versions of nested sampling for low-rank Toeplitz covariance estimation and for phase retrieval, where the latter problem finds many applications in high-resolution optical imaging, X-ray crystallography and molecular imaging. The problem of low-rank compressive Toeplitz covariance estimation is first shown to be fundamentally related to that of line spectrum recovery. In the absence of noise, this connection can be exploited to develop a particular kind of sampler called the Generalized Nested Sampler (GNS), which can achieve optimal compression rates. In the presence of bounded noise, we develop a regularization-free algorithm that provably leads to stable recovery of the high-dimensional Toeplitz matrix from its order-wise minimal sketch acquired using a GNS. Contrary to existing TV-norm and nuclear-norm based reconstruction algorithms, our technique does not use any tuning parameters, which can be of great practical value. The idea of nested sampling also finds a surprising use in the problem of phase retrieval, which has been of great interest in recent times for its convex formulation via PhaseLift. By using another modified version of nested sampling, namely the Partial Nested Fourier Sampler (PNFS), we show that with probability one, it is possible to achieve a certain conjectured lower bound on the necessary measurement size. Moreover, for sparse data, an l1-minimization based algorithm is proposed that can lead to stable phase retrieval using an order-wise minimal number of measurements.
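As an illustration of the difference-set property mentioned above, the sketch below builds a two-level nested sampler and checks that its pairwise differences cover a contiguous range of lags, which is what allows the autocorrelation (and hence the Toeplitz covariance) to be estimated as if sampled at the Nyquist rate; the level sizes N1=3 and N2=4 are arbitrary.

    import numpy as np

    N1, N2 = 3, 4  # sizes of the dense and sparse levels (illustrative choice)

    # Two-level nested sampler: a dense level at unit spacing, then a sparse
    # level at spacing (N1 + 1).
    level1 = np.arange(1, N1 + 1)               # 1, 2, ..., N1
    level2 = (N1 + 1) * np.arange(1, N2 + 1)    # (N1+1), 2(N1+1), ..., N2(N1+1)
    positions = np.concatenate([level1, level2])

    # The difference set gives the lags at which autocorrelation entries of the
    # Toeplitz covariance can be estimated from only N1 + N2 physical samples.
    diffs = np.unique(positions[:, None] - positions[None, :])
    max_lag = N2 * (N1 + 1) - 1
    print("lags 0 to", max_lag, "all covered:",
          np.isin(np.arange(max_lag + 1), diffs).all())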

Relevance: 30.00%

Abstract:

We identified and quantified the effect of season, depth, and inner and outer panel mesh size on the trammel net catch species composition and catch rates in four southern European areas (Northeast Atlantic: Basque Country, Spain; Algarve, Portugal; Gulf of Cadiz, Spain; Mediterranean: Cyclades, Greece), all of which are characterised by important trammel net fisheries. In each area, we conducted, in 1999-2000, seasonal experimental fishing trials at various depths with trammel nets of six different inner/outer panel mesh combinations (i.e., two large outer panel meshes and three small inner panel meshes). Overall, our study covered some of the most commonly used inner panel mesh sizes, ranging from 40 to 140 mm (stretched). We analysed the species composition and catch rates of the different inner/outer panel combinations with regression, multivariate analysis (cluster analysis and multidimensional scaling) and other 'community' techniques (number of species, dominance curves). All our analyses indicated that the outer panel mesh sizes used in the present study did not significantly affect the catch characteristics in terms of number of species, catch rates and species composition. Multivariate analyses and seasonal dominance plots indicated that in Basque, Algarve and Cyclades waters, where sampling covered wide depth ranges, both season and depth strongly affected catch species compositions. For the Gulf of Cadiz, where sampling was restricted to depths of 10-30 m, season was the only factor affecting catch species composition and thus group formation. In contrast, the inner panel mesh size generally did not affect multidimensional group formation in any of the areas, but it did affect the dominance of the species caught in the Algarve and the Gulf of Cadiz. Multivariate analyses also revealed 11 different metiers (i.e., season-depth-species-inner panel mesh size combinations) in the four areas. This clearly indicated the existence of trammel net 'hot spots', which represent essential habitats (e.g., spawning, nursery or wintering grounds) in the life history of the targeted and associated species. The number of specimens caught declined significantly with inner panel mesh size in all areas. We attributed this to the exponential decline in abundance with size, both within and between species. In contrast, the number of species caught in each area was not related to the inner mesh size. This was unexpected and might be a consequence of the wide size-selective range of trammel nets. (c) 2006 Elsevier B.V. All rights reserved.
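The multivariate workflow mentioned above (cluster analysis and multidimensional scaling of catch composition) can be sketched roughly as follows; the catch matrix, dissimilarity measure and group count below are hypothetical choices for illustration, not the study's actual analysis settings.

    import numpy as np
    from scipy.spatial.distance import pdist, squareform
    from scipy.cluster.hierarchy import linkage, fcluster
    from sklearn.manifold import MDS

    # Hypothetical catch matrix: rows = season/depth/mesh samples, columns = species counts.
    rng = np.random.default_rng(0)
    catch = rng.poisson(lam=5, size=(12, 8)).astype(float)

    # Bray-Curtis dissimilarities are a common choice for species-composition data.
    condensed = pdist(catch, metric="braycurtis")
    dissim = squareform(condensed)

    # Hierarchical clustering into tentative groups (candidate metiers).
    groups = fcluster(linkage(condensed, method="average"), t=3, criterion="maxclust")

    # MDS ordination of the same dissimilarity matrix for a two-dimensional plot.
    coords = MDS(n_components=2, dissimilarity="precomputed",
                 random_state=0).fit_transform(dissim)
    print(groups, coords.shape)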

Relevance: 30.00%

Abstract:

Knowledge of the geographical distribution of timber tree species in the Amazon is still scarce. This is especially true at the local level, thereby limiting natural resource management actions. Forest inventories are key sources of information on the occurrence of such species. However, areas with approved forest management plans are mostly located near access roads and the main industrial centers. The present study aimed to assess the spatial-scale effects of forest inventories used as sources of occurrence data in the interpolation of potential species distribution models. The occurrence data for a group of six forest tree species were divided into four geographical areas during the modeling process. Several sampling schemes were then tested by applying the maximum entropy algorithm, using the following predictor variables: elevation, slope, exposure, normalized difference vegetation index (NDVI) and height above the nearest drainage (HAND). The results revealed that using occurrence data from only one geographical area with unique environmental characteristics increased both model overfitting to the input data and omission error rates. The use of a diagonal systematic sampling scheme and lower threshold values led to improved model performance. Forest inventories may be used to predict areas with a high probability of species occurrence, provided they are located in forest management plan regions representative of the environmental range of the model projection area.
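As a rough sketch of this kind of presence/background distribution modeling, the example below fits a simple logistic model on the named predictors; sklearn's LogisticRegression is used here only as a stand-in for the maximum entropy (MaxEnt) software, and all records and values are invented.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical predictor table for presence records (1) and background points (0):
    # elevation, slope, exposure, NDVI and HAND.
    rng = np.random.default_rng(42)
    n = 200
    X = np.column_stack([
        rng.uniform(20, 200, n),    # elevation (m)
        rng.uniform(0, 30, n),      # slope (degrees)
        rng.uniform(0, 360, n),     # exposure / aspect (degrees)
        rng.uniform(0.2, 0.9, n),   # NDVI
        rng.uniform(0, 40, n),      # HAND (m)
    ])
    y = rng.integers(0, 2, n)       # presence/background labels (invented)

    model = LogisticRegression(max_iter=1000).fit(X, y)
    suitability = model.predict_proba(X)[:, 1]  # relative occurrence probability
    threshold = 0.5                             # lower thresholds admit larger predicted areas
    print(f"{(suitability >= threshold).mean():.2f} of points predicted suitable")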