988 results for Sampling rates
Abstract:
Sampling devices differing greatly in shape, size, and operating conditions have been used to collect air samples to determine rates of emission of volatile substances, including odour. However, physical chemistry principles, in particular the partitioning of volatile substances between two phases as described by Henry's Law and the relationship between wind velocity and emission rate, suggest that different devices cannot be expected to provide equivalent emission rate estimates. Several problems are consequently associated with the use of static and dynamic emission chambers, whereas more turbulent devices such as wind tunnels do not appear to be subject to these problems. In general, the ability to relate emission rate estimates obtained from wind tunnel measurements to those derived from device-independent techniques supports the use of wind tunnels to determine emission rates that can be used as input data for dispersion models.
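The physical argument can be made concrete with a simple two-film mass-transfer formulation; this is a minimal sketch, assuming a generic power-law dependence of the transfer coefficient on air velocity (the exponent n is illustrative, not taken from the abstract):

```latex
% Emission flux E driven by Henry's-law disequilibrium at the interface:
%   C_l = liquid-phase concentration, C_g = gas-phase concentration,
%   H   = dimensionless Henry's constant, k(u) = mass-transfer coefficient.
E = k(u)\,\bigl(H\,C_l - C_g\bigr), \qquad k(u) \propto u^{\,n}, \quad 0 < n < 1
```

Because k(u) depends on the air velocity inside the device, a chamber and a wind tunnel operating at different internal velocities sample different points on this curve and so cannot be expected to yield equivalent emission rate estimates.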
Abstract:
Deficiencies in sardine post-harvest handling methods were seen as major impediments to the development of a value-adding sector supplying Australian bait and human consumption markets. Factors affecting sardine deterioration rates in the immediate post-harvest period were investigated and recommendations made for alternative handling procedures to optimise sardine quality. Net-to-factory sampling showed that post-mortem autolysis, probably caused by digestive enzyme activity, contributed to the observed temporal increase in the sardine Quality Index. Belly burst was not an issue. Sardine quality could be maintained by reducing tank loading and by rapid temperature reduction using dedicated on-board value-adding tanks. Fish should be iced between the jetty and the processing factory, and transport bins chilled using an efficient cooling medium such as flow ice.
Abstract:
Telomere length has been proposed as a biomarker for age and could offer a non-lethal method for determining the age of wild-caught individuals. Molluscs, including oysters and abalone, are the basis of important fisheries globally and have been problematic to age accurately. To determine whether telomere length could provide an alternative means of ageing molluscs, we evaluated the relationship between telomere length and age using the commercially important Sydney rock oyster (Saccostrea glomerata). Telomere lengths were estimated from tissues of known-age individuals from different age classes, locations, and sampling times. Telomere length tended to decrease with age only in young oysters less than 18 months old; no decrease was observed in older oysters aged 2-4 years. Regional and temporal differences in telomere attrition rates were also observed. The relationship between telomere length and age was weak, however, with individuals of identical age varying significantly in telomere length, making it an imprecise age biomarker in oysters.
Abstract:
Trawl surveys to assess the stocks of Lake Victoria (Tanzania) for estimates of biomass and yield, together with the establishment of exploitation patterns, are being undertaken under the Lake Victoria Fisheries Research Project. Preliminary surveys to establish the sampling stations and strategy were carried out between October 1997 and February 1998. Three cruises covering the whole of Tanzanian waters were undertaken, with 133 sampling stations. Data on catch rates, species composition, and distribution were collected. Three sampling areas were designated: areas A, B, and C. In each area, almost the same distribution pattern over depth was found. Lates niloticus (L.) formed over 90% of the total catch. Most L. niloticus were 5-40 cm TL. Abundance decreased with depth; few fish were found deeper than 40 m, and most fish were caught at depths of less than 20 m. Catch rates varied considerably between stations and areas. Area A had the highest catch rates, with little variation over the stations. There is an indication of recovery of species diversity compared with the surveys of RV Kiboko (1985 and 1989).
Abstract:
Trawling was conducted in the Charleston, South Carolina, shipping channel between May and August during 2004–07 to evaluate loggerhead sea turtle (Caretta caretta) catch rates and demographic distributions. Two hundred and twenty individual loggerheads were captured in 432 trawling events during eight sampling periods lasting 2–10 days each. Catch was analyzed by using a generalized linear model. Data were fitted to a negative binomial distribution, with the log of standardized sampling effort (i.e., an hour of sampling with a net head rope length standardized to 30.5 m) for each event treated as an offset term. Among 21 variables, factors, and interactions, five terms were significant in the final model, which accounted for 45% of model deviance. Highly significant differences in catch were noted among sampling periods and sampling locations within the channel, with the greatest catch furthest seaward, consistent with historical observations. Loggerhead sea turtle catch rates in 2004–07 were greater than in 1991–92, when mandatory use of turtle excluder devices was beginning to be phased in. Concurrent with increased catch rates, loggerheads captured in 2004–07 were larger than in 1991–92. Eighty-five percent of loggerheads captured were ≤75.0 cm straight-line carapace length (nuchal notch to tip of carapace), and there was a 3.9:1 female-to-male bias, consistent with limited data for this location two decades earlier. Only juvenile loggerheads ≤75.0 cm possessed haplotypes other than CC-A01 or CC-A02, which dominate in the region. Six rare haplotypes and one undescribed haplotype were found predominantly in June 2004.
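The model structure described above (negative binomial counts with a log-effort offset) can be sketched as follows; the variable names, placeholder data, and dispersion value are illustrative, not taken from the study:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Illustrative event-level data: turtle catch per trawling event,
# with sampling effort in standardized net-hours (head rope = 30.5 m).
rng = np.random.default_rng(0)
n = 432
df = pd.DataFrame({
    "catch": rng.negative_binomial(1, 0.4, n),   # placeholder counts
    "effort_hours": rng.uniform(0.5, 3.0, n),    # standardized effort
    "period": rng.integers(1, 9, n),             # 8 sampling periods
    "location": rng.integers(1, 4, n),           # channel sections
})

# Design matrix from categorical terms; log(effort) enters as an offset,
# so the model describes catch *rate* per unit of standardized effort.
X = pd.get_dummies(df[["period", "location"]].astype("category"),
                   drop_first=True, dtype=float)
X = sm.add_constant(X)
model = sm.GLM(df["catch"], X,
               family=sm.families.NegativeBinomial(alpha=1.0),  # alpha illustrative
               offset=np.log(df["effort_hours"]))
result = model.fit()
print(result.summary())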
Abstract:
The hypothesis that heavy fishing pressure has led to changes in the biological characteristics of the estuary cobbler (Cnidoglanis macrocephalus) was tested in a large seasonally open estuary in southwestern Australia, where this species completes its life cycle and is the most valuable commercial fish species. Comparisons were made between seasonal data collected for this plotosid (eeltail catfish) in Wilson Inlet during 2005–08 and those recorded with the same fishery-independent sampling regime during 1987–89. These comparisons show that the proportions of larger and older individuals and the catch rates in the more recent period were far lower, i.e., they constituted reductions of 40% for fish ≥430 mm total length, 62% for fish ≥4 years of age, and 80% for catch rate. In addition, total mortality and fishing-induced mortality estimates increased by factors of ~2 and 2.5, respectively. The indications that the abundance and proportion of older C. macrocephalus declined between the two periods are consistent with the perception of long-term commercial fishermen and their shift toward using a smaller maximum gill net mesh to target this species. The sustained heavy fishing pressure on C. macrocephalus between 1987–89 and 2005–08 was accompanied by a marked reduction in length and age at maturity of this species. The shift in probabilistic maturation reaction norms toward smaller fish in 2005–08 and the lack of a conspicuous change in growth between the two periods indicate that the maturity changes were related to fishery-induced evolution rather than to compensatory responses to reduced fish densities.
Abstract:
Seasonal trawling was conducted randomly in coastal (depths of 4.6–17 m) waters from St. Augustine, Florida, (29.9°N) to Winyah Bay, South Carolina (33.1°N), during 2000–03, 2008–09, and 2011 to assess annual trends in the relative abundance of sea turtles. A total of 1262 loggerhead sea turtles (Caretta caretta) were captured in 23% (951) of 4207 sampling events. Capture rates (overall and among prevalent 5-cm size classes) were analyzed through the use of a generalized linear model with log link function for the 4097 events that had complete observations for all 25 model parameters. Final models explained 6.6% (70.1–75.0 cm minimum straight-line carapace length [SCLmin]) to 14.9% (75.1–80.0 cm SCLmin) of deviance in the data set. Sampling year, geographic subregion, and distance from shore were retained as significant terms in all final models, and these terms collectively accounted for 6.2% of overall model deviance (range: 4.5–11.7% of variance among 5-cm size classes). We retained 18 parameters only in a subset of final models: 4 as exclusively significant terms, 5 as a mixture of significant or nonsignificant terms, and 9 as exclusively nonsignificant terms. Four parameters also were dropped completely from all final models. The generalized linear model proved appropriate for monitoring trends for this data set that was laden with zero values for catches and was compiled for a globally protected species. Because we could not account for much model deviance, metrics other than those examined in our study may better explain catch variability and, once elucidated, their inclusion in the generalized linear model should improve model fits.
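The "percent of deviance explained" figures quoted above correspond to the usual GLM pseudo-R²; a minimal, self-contained sketch of the computation (the deviance values below are made up to reproduce the 6.6% figure, not the study's numbers):

```python
# Proportion of deviance explained by a fitted GLM (a pseudo-R^2):
# 1 - residual deviance / null (intercept-only) deviance.
def deviance_explained(residual_deviance, null_deviance):
    return 1.0 - residual_deviance / null_deviance

# e.g. the 6.6% reported for the 70.1-75.0 cm class corresponds to a
# residual deviance that is 93.4% the size of the null deviance:
print(f"{100 * deviance_explained(934.0, 1000.0):.1f}%")  # -> 6.6%
```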
Abstract:
Wind erosion is one of the major environmental problems in semi-arid and arid regions. Here we established the Tariat-Xilin Gol transect from northwest to southeast across the Mongolian Plateau and selected seven sampling sites along the transect. We then estimated soil wind erosion rates using the Cs-137 tracing technique and examined their spatial dynamics. Our results showed that the Cs-137 inventories of the sampling sites ranged from 265.63 +/- 44.91 to 1279.54 +/- 166.53 Bq·m⁻², and the wind erosion rates accordingly varied from 64.58 to 419.63 t·km⁻²·a⁻¹. In the Mongolia section of the transect (from Tariat to Sainshand), the wind erosion rate increased gradually along the gradient of vegetation types and climatic regimes; the wind erosion process was controlled by physical factors such as annual precipitation and vegetation coverage, and the impact of human activities was negligible. In the China section of the transect (Inner Mongolia), by contrast, the wind erosion rates of Xilin Hot and Zhengxiangbai Banner were three times those of Bayannur in Mongolia, although all three sites are dominated by typical steppe. Besides the physical factors, higher population density and livestock carrying levels are likely responsible for the higher wind erosion rates in these two regions of Inner Mongolia.
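Converting a Cs-137 inventory deficit into an erosion rate requires a calibration model; a minimal sketch using the simple proportional model (in the spirit of Walling and Quine), with all parameter values illustrative rather than taken from the study:

```python
# Proportional model: soil loss is assumed proportional to the fraction of
# the reference Cs-137 inventory lost since fallout peaked (~1963).
def wind_erosion_rate(inventory, reference_inventory,
                      sampled_depth_m=0.2,       # soil depth sampled (illustrative)
                      bulk_density_kg_m3=1300,   # topsoil bulk density (illustrative)
                      years_since_1963=50):
    """Return an erosion rate in t·km⁻²·a⁻¹ from a Cs-137 inventory (Bq·m⁻²)."""
    loss_fraction = 1.0 - inventory / reference_inventory
    mass_depth_t_ha = 10.0 * sampled_depth_m * bulk_density_kg_m3  # t/ha of soil
    rate_t_ha_a = loss_fraction * mass_depth_t_ha / years_since_1963
    return rate_t_ha_a * 100.0  # 1 t/ha = 100 t/km²

# Illustrative use with an inventory near the top of the reported range:
print(wind_erosion_rate(inventory=1280.0, reference_inventory=1400.0))
```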
Abstract:
Background: Many European countries, including Ireland, lack high-quality, ongoing, population-based estimates of maternal behaviours and experiences during pregnancy. PRAMS is a CDC surveillance program that was established in the United States in 1987 to generate high-quality, population-based data to reduce infant mortality rates and improve maternal and infant health. PRAMS is the only ongoing population-based surveillance system of maternal behaviours and experiences that occur before, during, and after pregnancy worldwide. Methods: The objective of this study was to adapt, test, and evaluate a modified CDC PRAMS methodology in Ireland. The birth certificate file, which is the standard sampling frame for PRAMS in the United States, was not available for the PRAMS Ireland study. Consequently, delivery record books for the period between 3 and 5 months before the study start date at a large urban obstetric hospital [8,900 births per year] were used to randomly sample 124 women. Name, address, maternal age, infant sex, gestational age at delivery, delivery method, APGAR score, and birth weight were manually extracted from records. Stillbirths and early neonatal deaths were excluded using APGAR scores and hospital records. Women were sent a letter of invitation to participate, including an option to opt out, followed by a modified PRAMS survey, a reminder letter, and a final survey. Results: The response rate for the pilot was 67%. Two per cent of women refused the survey, 7% opted out of the study, and 24% did not respond. Survey items were at least 88% complete for all 82 respondents. Prevalence estimates of socially undesirable behaviours such as alcohol consumption during pregnancy were high [>50%] and comparable with international estimates. Conclusion: PRAMS is a feasible and valid method of collecting information on maternal experiences and behaviours during pregnancy in Ireland. With further work, PRAMS may offer a potential solution to data deficits in maternal health behaviour indicators in Ireland. This study is important to researchers in Europe and elsewhere who may be interested in new ways of tailoring an established CDC methodology to their unique settings to resolve data deficits in maternal health.
Abstract:
Closing feedback loops over an IEEE 802.11b ad hoc wireless communication network incurs many challenges: sensitivity to varying channel conditions and lower physical transmission rates tend to limit the bandwidth of the communication channel. Given that bandwidth usage and control performance are linked, a method of adapting the sampling interval based on an a priori static sampling policy has been proposed and, more significantly, stability in the mean square sense has been assured using discrete-time Markov jump linear system theory. Practical issues, including current limitations of the 802.11b protocol, the sampling policy, and stability, are highlighted. Simulation results on a cart-mounted inverted pendulum show that closed-loop stability can be improved using sample rate adaptation and that the control design criteria can be met in the presence of channel errors and severe channel contention.
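Mean-square stability of such a sampled-data loop can be checked numerically; a minimal sketch, assuming two hypothetical closed-loop modes (e.g., fast and slow sampling) switched by a Markov chain, with illustrative matrices and transition probabilities:

```python
import numpy as np

# Discrete-time Markov jump linear system x_{k+1} = A_i x_k, where the mode i
# (here: which sampling interval is active) evolves as a Markov chain.
A = [np.array([[0.9, 0.2], [0.0, 0.8]]),   # mode 1: fast sampling (illustrative)
     np.array([[1.1, 0.3], [0.0, 0.7]])]   # mode 2: slow sampling (illustrative)
P = np.array([[0.8, 0.2],                  # P[i, j] = Pr(next mode j | mode i)
              [0.4, 0.6]])

# Standard criterion (Costa/Fragoso/Marques): the MJLS is mean-square stable
# iff the spectral radius of the block matrix whose (j, i) block is
# P[i, j] * kron(A_i, A_i) is strictly less than 1.
N = len(A)
blocks = [[P[i, j] * np.kron(A[i], A[i]) for i in range(N)] for j in range(N)]
big = np.block(blocks)
rho = max(abs(np.linalg.eigvals(big)))
print(f"spectral radius = {rho:.3f} ->", "MS stable" if rho < 1 else "not MS stable")
```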
Abstract:
Energy efficiency is an essential requirement for all contemporary computing systems. We thus need tools to measure the energy consumption of computing systems and to understand how workloads affect it. Significant recent research effort has targeted direct power measurements on production computing systems using on-board sensors or external instruments. These direct methods have in turn guided studies of software techniques to reduce energy consumption via workload allocation and scaling. Unfortunately, direct energy measurements are hampered by the low power sampling frequency of power sensors. The coarse granularity of power sensing limits our understanding of how power is allocated in systems and our ability to optimize energy efficiency via workload allocation.
We present ALEA, a tool to measure power and energy consumption at the granularity of basic blocks, using a probabilistic approach. ALEA provides fine-grained energy profiling via statistical sampling, which overcomes the limitations of power sensing instruments. Compared to state-of-the-art energy measurement tools, ALEA provides finer granularity without sacrificing accuracy. ALEA achieves low overhead energy measurements with mean error rates between 1.4% and 3.5% in 14 sequential and parallel benchmarks tested on both Intel and ARM platforms. The sampling method caps execution time overhead at approximately 1%. ALEA is thus suitable for online energy monitoring and optimization. Finally, ALEA is a user-space tool with a portable, machine-independent sampling method. We demonstrate two use cases of ALEA, where we reduce the energy consumption of a k-means computational kernel by 37% and an ocean modelling code by 33%, compared to high-performance execution baselines, by varying the power optimization strategy between basic blocks.
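The core idea, attributing measured energy to basic blocks in proportion to how often execution is observed in each block, can be sketched as follows; the sample data and block names are hypothetical, not ALEA's actual internals:

```python
from collections import Counter

# Hypothetical profile: at each sampling tick we record which basic block
# the program counter was in and the instantaneous power reading (watts).
samples = [("bb_loop", 11.8), ("bb_loop", 12.1), ("bb_init", 9.5),
           ("bb_loop", 12.4), ("bb_reduce", 10.7), ("bb_loop", 11.9)]
dt = 0.001  # sampling interval in seconds (illustrative)

# Energy per block = sum over its samples of power * interval; with enough
# samples this converges to the block's true share of total energy.
energy = Counter()
for block, watts in samples:
    energy[block] += watts * dt

total = sum(energy.values())
for block, joules in energy.most_common():
    print(f"{block}: {joules:.4f} J ({100 * joules / total:.0f}%)")
```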
Abstract:
This paper introduces a novel method of estimating the Fourier transform of deterministic continuous-time signals from a finite number N of their nonuniformly spaced measurements. These samples, located at a mixture of deterministic and random time instants, are collected at sub-Nyquist rates since no constraints are imposed on either the bandwidth or the spectral support of the processed signal. It is shown that the proposed estimation approach converges uniformly for all frequencies at the rate N^−5 or faster. This implies that it significantly outperforms its alias-free-sampling-based predecessors, namely stratified and antithetical stratified estimates, which are shown to converge uniformly at a rate of N^−1. Simulations are presented to demonstrate the superior performance and low complexity of the introduced technique.
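As a baseline for what such estimators compute, here is a naive Riemann-sum Fourier transform estimate from nonuniformly spaced samples; this is an illustrative sketch, not the paper's method, whose construction and convergence rate are more sophisticated:

```python
import numpy as np

def ft_from_nonuniform_samples(t, x, omegas):
    """Naive estimate of X(w) = ∫ x(t) e^{-jwt} dt from samples at times t."""
    t = np.asarray(t)
    order = np.argsort(t)
    t, x = t[order], np.asarray(x)[order]
    dt = np.diff(t, append=t[-1])          # local spacing as quadrature weight
    return np.array([np.sum(x * np.exp(-1j * w * t) * dt) for w in omegas])

# Illustrative use: N nonuniform times (deterministic grid + random jitter).
rng = np.random.default_rng(1)
N = 200
t = np.sort(np.linspace(0, 10, N) + rng.uniform(0, 10 / N, N))
x = np.cos(2 * np.pi * 1.5 * t)            # test signal
X = ft_from_nonuniform_samples(t, x, omegas=2 * np.pi * np.array([1.5]))
print(abs(X))                               # large response at the signal frequency
```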
Abstract:
Coal contains trace quantities of natural radionuclides such as Th-232, U-235, and U-238, as well as their radioactive decay products and K-40. These radionuclides can be released as fly ash in atmospheric emissions from coal-fired power plants, dispersed into the environment, and deposited on the surrounding topsoils. The natural radiation background level is thereby enhanced, and the total dose for the nearby population consequently increases. A radiation monitoring programme was used to assess the external dose contribution to the natural radiation background potentially resulting from the dispersion of coal ash in past atmospheric emissions. Radiation measurements were carried out by gamma spectrometry in the vicinity of a Portuguese coal-fired power plant. The radiation monitoring was performed both on and off site, with the boundary delimited by a 20-km circle centred on the stacks of the coal plant. The measured radionuclide concentrations for the uranium and thorium series ranged from 7.7 to 41.3 Bq/kg for Ra-226 and from 4.7 to 71.6 Bq/kg for Th-232, while K-40 concentrations ranged from 62.3 to 795.1 Bq/kg. The highest values were registered near the power plant and at distances between 6 and 20 km from the stacks, mainly in the prevailing wind direction. The absorbed dose rates calculated for each sampling location ranged from 13.97 to 84.00 nGy/h, while measurements from previous studies carried out in 1993 registered values in the range of 16.6-77.6 nGy/h. The highest values were registered at locations in the prevailing wind direction (NW-SE). This study was primarily done to assess the radiation dose rates and the exposure of the nearby population in the surroundings of a coal-fired power plant. The results suggest an enhancement of, or at least an influence on, the background radiation due to the coal plant's past activities.
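Absorbed dose rate in air is conventionally derived from measured activity concentrations with fixed conversion coefficients; a minimal sketch using the widely cited UNSCEAR 2000 coefficients (the input values are illustrative picks from the reported ranges, not a reproduction of the study's results):

```python
# Outdoor absorbed dose rate in air (nGy/h) from soil activity (Bq/kg),
# using UNSCEAR 2000 conversion coefficients for Ra-226, Th-232, and K-40.
def absorbed_dose_rate(c_ra226, c_th232, c_k40):
    return 0.462 * c_ra226 + 0.604 * c_th232 + 0.0417 * c_k40

# Illustrative values drawn from within the reported concentration ranges:
print(f"{absorbed_dose_rate(20.0, 30.0, 400.0):.1f} nGy/h")  # -> 44.0 nGy/h
```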
Abstract:
Supervised learning of large-scale hierarchical networks is currently enjoying spectacular success. Despite this momentum, unsupervised learning is still regarded by many researchers as a key element of Artificial Intelligence, where agents must learn from a potentially limited amount of data. This thesis is set within that line of thought and addresses various research topics related to the density estimation problem through Boltzmann machines (BMs), probabilistic graphical models at the heart of deep learning. Our contributions touch on sampling, partition function estimation, optimization, and the learning of invariant representations. The thesis begins by presenting a new adaptive sampling algorithm, which automatically adjusts the temperature of the simulated Markov chains in order to maintain a high convergence speed throughout learning. When used in the context of stochastic maximum likelihood (SML) learning, our algorithm yields increased robustness to the choice of learning rate, as well as faster convergence. Our results are presented in the domain of BMs, but the method is general and applicable to learning any probabilistic model that relies on Markov chain sampling. While the maximum likelihood gradient can be approximated by sampling, evaluating the log-likelihood requires an estimate of the partition function. In contrast to traditional approaches that treat a given model as a black box, we propose instead to exploit the dynamics of learning by estimating the successive changes in log-partition incurred at each parameter update. The estimation problem is reformulated as an inference problem similar to Kalman filtering, but over a two-dimensional graph whose dimensions correspond to the time axis and to the temperature parameter. On the topic of optimization, we also present an algorithm for efficiently applying the natural gradient to Boltzmann machines with thousands of units. Until now, its adoption had been limited by its high computational cost and memory requirements. Our algorithm, Metric-Free Natural Gradient (MFNG), avoids explicitly computing the Fisher information matrix (and its inverse) by exploiting a linear solver combined with an efficient matrix-vector product. The algorithm is promising: in terms of the number of function evaluations, MFNG converges faster than SML. Its implementation unfortunately remains inefficient in wall-clock time. This work also explores the mechanisms underlying the learning of invariant representations. To this end, we use the family of "spike & slab" restricted Boltzmann machines (ssRBM), which we modify so that they can model binary and sparse distributions. The binary latent variables of the ssRBM can be made invariant to a vector subspace by associating with each of them a vector of continuous latent variables (called "slabs"). This translates into increased invariance at the representation level and a better classification rate when few labelled data are available.
We conclude this thesis with an ambitious topic: learning representations that can separate the factors of variation present in the input signal. We propose a solution based on a bilinear ssRBM (with two groups of latent factors) and formulate the problem as one of pooling in complementary vector subspaces.
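The MFNG idea, solving F d = ∇L with an iterative linear solver that only ever needs Fisher-vector products, can be sketched generically; the empirical-Fisher product and damping below are illustrative stand-ins, not the thesis's exact construction:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)
n_params, n_samples = 500, 64
J = rng.normal(size=(n_samples, n_params))   # per-sample gradients (illustrative)
grad = J.mean(axis=0)                        # mini-batch gradient

# Fisher-vector product without forming the n_params x n_params matrix:
# F v ~= (1/N) J^T (J v), plus a small damping term for numerical stability.
damping = 1e-3
def fisher_vec(v):
    return J.T @ (J @ v) / n_samples + damping * v

F = LinearOperator((n_params, n_params), matvec=fisher_vec)
natural_grad, info = cg(F, grad, maxiter=50)  # solve F d = grad iteratively
print("CG converged" if info == 0 else f"CG stopped early (info={info})",
      "-> update direction norm:", np.linalg.norm(natural_grad))
```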
Abstract:
Models developed to identify the rates and origins of nutrient export from land to stream require an accurate assessment of the nutrient load present in the water body in order to calibrate model parameters and structure. These data are rarely available at a representative scale and in an appropriate chemical form except in research catchments. Observational errors associated with nutrient load estimates based on these data lead to a high degree of uncertainty in modelling and nutrient budgeting studies. Here, daily paired instantaneous P and flow data for 17 UK research catchments covering a total of 39 water years (WY) have been used to explore the nature and extent of the observational error associated with nutrient flux estimates based on partial fractions and infrequent sampling. The daily records were artificially decimated to create 7 stratified sampling records, 7 weekly records, and 30 monthly records from each WY and catchment. These were used to evaluate the impact of sampling frequency on load estimate uncertainty. The analysis underlines the high uncertainty of load estimates based on monthly data and individual P fractions rather than total P. Catchments with a high baseflow index and/or low population density were found to return a lower RMSE on load estimates when sampled infrequently than those with a low baseflow index and high population density. Catchment size was not shown to be important, though a limitation of this study is that daily records may fail to capture the full range of P export behaviour in smaller catchments with flashy hydrographs, leading to an underestimate of uncertainty in load estimates for such catchments. Further analysis of sub-daily records is needed to investigate this fully. Here, recommendations are given on load estimation methodologies for different catchment types sampled at different frequencies, and on the ways in which this analysis can be used to identify observational error and uncertainty for model calibration and nutrient budgeting studies.
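The decimation experiment can be reproduced in outline: compute a reference load from the daily record, re-estimate it from subsampled records with a standard interpolation estimator, and summarize the error across subsampling offsets. A minimal sketch with synthetic data (the estimator shown is the common flow-weighted-mean-concentration method, one plausible choice rather than the study's exact procedure):

```python
import numpy as np

rng = np.random.default_rng(42)
days = 365
Q = rng.lognormal(mean=0.0, sigma=0.8, size=days)   # daily flow (synthetic)
C = 0.05 + 0.02 * Q + rng.normal(0, 0.01, days)     # daily P conc. (synthetic)

true_load = np.sum(C * Q)                            # reference annual load

def interpolated_load(sample_idx):
    """Flow-weighted mean concentration times total annual flow."""
    fwmc = np.sum(C[sample_idx] * Q[sample_idx]) / np.sum(Q[sample_idx])
    return fwmc * np.sum(Q)

# Monthly sampling: one sample every ~30 days, tried at every start offset.
estimates = np.array([interpolated_load(np.arange(start, days, 30))
                      for start in range(30)])
rmse = np.sqrt(np.mean((estimates - true_load) ** 2))
print(f"relative RMSE of monthly load estimates: {100 * rmse / true_load:.1f}%")
```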