984 results for SAMPLING STRATEGY
Abstract:
The precise sampling of soil, biological, or microclimatic attributes in tropical forests, which are characterized by a high diversity of species and complex spatial variability, is a difficult task. We found few basic studies to guide sampling procedures. The objective of this study was to define a sampling strategy and data analysis for some parameters frequently used in nutrient cycling studies, i.e., litter amount, total nutrient amounts in litter and its composition (Ca, Mg, K, N and P), and soil attributes at three depths (organic matter, P content, cation exchange capacity, and base saturation). A natural remnant forest in the west of São Paulo State (Brazil) was selected as the study area, and samples were collected in July 1989. The total amount of litter and its total nutrient amounts had a high spatially independent variance. Conversely, the variance of litter composition was lower, and the spatial dependency was peculiar to each nutrient. The sampling strategy for the estimation of litter amounts and of nutrient amounts in litter should therefore differ from the sampling strategy for nutrient composition. For the estimation of litter amounts and of nutrient amounts in litter (related to quantity), a large number of randomly distributed determinations is needed. Conversely, for the estimation of litter nutrient composition (related to quality), a smaller number of spatially located samples should be analyzed. The appropriate sampling of soil attributes differed according to depth. Overall, surface samples (0-5 cm) showed high short-distance spatially dependent variance, whereas subsurface samples exhibited spatial dependency over longer distances. Short transects with a sampling interval of 5-10 m are recommended for surface sampling. Subsurface samples must also be spatially located, but with transects or grids with longer distances between sampling points over the entire area.
Composite soil samples would not provide a complete understanding of the relation between soil properties and surface dynamic processes or landscape aspects. The precise distribution of P was difficult to estimate.
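The short-distance spatial dependence described above is typically diagnosed with an empirical semivariogram along a transect. As a minimal sketch (on synthetic data, not the study's measurements), the classical Matheron estimator in Python:

```python
import numpy as np

def empirical_semivariogram(x, z, lags, tol):
    """Matheron estimator: gamma(h) = sum (z_i - z_j)^2 / (2 N(h))
    over pairs whose separation |x_i - x_j| lies within tol of lag h."""
    x = np.asarray(x, dtype=float)
    z = np.asarray(z, dtype=float)
    d = np.abs(x[:, None] - x[None, :])          # all pairwise separations
    gamma = []
    for h in lags:
        mask = np.triu((np.abs(d - h) <= tol) & (d > 0))  # each pair once
        i, j = np.where(mask)
        sq = (z[i] - z[j]) ** 2
        gamma.append(float(sq.mean() / 2.0) if sq.size else float("nan"))
    return np.array(gamma)

# Synthetic transect: 100 points every 5 m with spatially correlated values.
rng = np.random.default_rng(0)
x = np.arange(100) * 5.0
z = np.cumsum(rng.normal(size=100))   # random walk -> strong spatial dependence
g = empirical_semivariogram(x, z, lags=[5.0, 50.0], tol=1.0)
```

A semivariance that is low at the 5 m lag and higher at the 50 m lag is the signature of the short-distance spatial dependence that motivates the 5-10 m surface sampling interval.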
Abstract:
Cloud cover is conventionally estimated from satellite images as the observed fraction of cloudy pixels. Active instruments such as radar and Lidar observe in narrow transects that sample only a small percentage of the area over which the cloud fraction is estimated. As a consequence, the fraction estimate has an associated sampling uncertainty, which usually remains unspecified. This paper extends a Bayesian method of cloud fraction estimation, which also provides an analytical estimate of the sampling error. This method is applied to test the sensitivity of this error to sampling characteristics, such as the number of observed transects and the variability of the underlying cloud field. The dependence of the uncertainty on these characteristics is investigated using synthetic data simulated to have properties closely resembling observations from the spaceborne NASA-LITE Lidar mission. Results suggest that the variance of the cloud fraction is greatest for medium cloud cover and least when conditions are mostly cloudy or clear. However, there is a bias in the estimation, which is greatest around 25% and 75% cloud cover. The sampling uncertainty is also affected by the mean lengths of clouds and of clear intervals; shorter lengths decrease uncertainty, primarily because there are more cloud observations in a transect of a given length. Uncertainty also falls with increasing number of transects. Therefore a sampling strategy aimed at minimizing the uncertainty in transect-derived cloud fraction will have to take into account both the cloud and clear-sky length distributions as well as the cloud fraction of the observed field. These conclusions have implications for the design of future satellite missions. This paper describes the first integrated methodology for the analytical assessment of sampling uncertainty in cloud fraction observations from forthcoming spaceborne radar and Lidar missions such as NASA's Calipso and CloudSat.
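The Bayesian flavor of such an estimate can be sketched in miniature. The version below assumes, for illustration only, independent pixels and a uniform Beta prior (the paper's method additionally models the spatial correlation of the cloud field), so the posterior follows from Beta-Binomial conjugacy:

```python
import math

def cloud_fraction_posterior(n_cloudy, n_total, a=1.0, b=1.0):
    """Posterior mean and standard deviation of cloud fraction given
    n_cloudy cloudy pixels out of n_total transect pixels, under a
    Beta(a, b) prior and an independent-pixel (Binomial) likelihood."""
    a_post = a + n_cloudy
    b_post = b + n_total - n_cloudy
    n = a_post + b_post
    mean = a_post / n
    var = a_post * b_post / (n * n * (n + 1.0))  # Beta posterior variance
    return mean, math.sqrt(var)
```

Even this toy version reproduces two of the qualitative findings above: the uncertainty peaks near 50% cover and falls as more transect pixels are observed.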
Abstract:
DNA-based studies have been one of the major interests in conservation biology of endangered species and in population genetics. As species and population genetic assessment requires a source of biological material, sampling can rely on non-destructive procedures for DNA isolation. An improved method for obtaining DNA from fish fins and scales, using an extraction buffer containing urea followed by DNA purification with phenol-chloroform, is described. The methodology combines the benefits of non-destructive DNA sampling with high efficiency. In addition, comparisons with other methodologies for isolating DNA from fish demonstrated that the present procedure is also a very attractive alternative for obtaining large amounts of high-quality DNA for use in different molecular analyses. The DNA samples, isolated from different fish species, have been successfully used in random amplified polymorphic DNA (RAPD) experiments, as well as in amplification of specific ribosomal and mitochondrial DNA sequences. The present DNA extraction procedure represents an alternative for population approaches and genetic studies of rare or endangered taxa.
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
The aim of this study was to determine the most informative sampling time(s) providing a precise prediction of tacrolimus area under the concentration-time curve (AUC). Fifty-four concentration-time profiles of tacrolimus from 31 adult liver transplant recipients were analyzed. Each profile contained 5 tacrolimus whole-blood concentrations (predose and 1, 2, 4, and 6 or 8 hours postdose), measured using liquid chromatography-tandem mass spectrometry. The concentration at 6 hours was interpolated for each profile, and 54 values of AUC(0-6) were calculated using the trapezoidal rule. The best sampling times were then determined using limited sampling strategies and sensitivity analysis. Linear mixed-effects modeling was performed to estimate regression coefficients of equations incorporating each concentration-time point (C0, C1, C2, C4, interpolated C5, and interpolated C6) as a predictor of AUC(0-6). Predictive performance was evaluated by assessment of the mean error (ME) and root mean square error (RMSE). Limited sampling strategy (LSS) equations with C2, C4, and C5 provided similar results for prediction of AUC(0-6) (R-2 = 0.869, 0.844, and 0.832, respectively). These 3 time points were superior to C0 in the prediction of AUC. The ME was similar for all time points; the RMSE was smallest for C2, C4, and C5. The highest sensitivity index was determined to be 4.9 hours postdose at steady state, suggesting that this time point provides the most information about the AUC(0-12). The results from limited sampling strategies and sensitivity analysis supported the use of a single blood sample at 5 hours postdose as a predictor of both AUC(0-6) and AUC(0-12). A jackknife procedure was used to evaluate the predictive performance of the model, and this demonstrated that collecting a sample at 5 hours after dosing could be considered as the optimal sampling time for predicting AUC(0-6).
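The trapezoidal-rule step is simple enough to sketch. The concentration values below are hypothetical, chosen only to illustrate the arithmetic, not patient data from the study:

```python
import numpy as np

def auc_trapezoid(times_h, concs):
    """AUC by the linear trapezoidal rule: sum of (C_i + C_{i+1})/2 * dt."""
    t = np.asarray(times_h, dtype=float)
    c = np.asarray(concs, dtype=float)
    return float(np.sum((c[1:] + c[:-1]) / 2.0 * np.diff(t)))

# Hypothetical tacrolimus whole-blood profile (ng/mL) at the study's
# sampling times: predose, 1, 2, 4, and 6 hours postdose.
auc_0_6 = auc_trapezoid([0, 1, 2, 4, 6], [5.0, 20.0, 15.0, 10.0, 8.0])
```

A limited sampling strategy then regresses such AUC(0-6) values on a single concentration (e.g., C5), so the full profile need not be drawn in routine practice.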
Abstract:
Archaeological fish otoliths have the potential to serve as proxies for both season of site occupation and palaeoclimate conditions. By sampling along the distinctive sub-annual seasonal bands of the otolith and completing a stable isotope (δ¹⁸O, δ¹³C) analysis, variations within the fish's environment can be identified. Through the analysis of cod otoliths from two archaeological sites on Kiska Island, Gertrude Cove (KIS-010) and Witchcraft Point (KIS-005), this research evaluates a micromilling methodological approach to extracting climatic data from archaeological cod otoliths. In addition, δ¹⁸O otolith data and radiocarbon dates frame a discussion of Pacific cod harvesting, site occupation, and changing climatic conditions on Kiska Island. To aid in the interpretation of the archaeological Pacific cod results, archaeological and modern Atlantic cod otoliths were also analyzed as a component of this study. The Atlantic cod otoliths provided the methodological and interpretative framework for the study, and also served to assess the efficacy of this sampling strategy for archaeological materials and to add time-depth to existing datasets. The δ¹⁸O otolith values successfully illustrate relative variation in ambient water temperature. The Pacific cod δ¹⁸O values demonstrate a weak seasonal signal identifiable up to year 3, followed by relatively stable values until year 6/7, when values continuously increase. Based on the δ¹⁸O values, the Pacific cod were exposed to the coldest water temperatures immediately prior to capture. The lack of a clear cycle of seasonal variation and the continued increase in values towards the otolith edge obscure the season of capture and indicate that other behavioural, environmental, or methodological factors influenced the otolith δ¹⁸O values.
It is suggested that Pacific cod would have been harvested throughout the year, and the presence of cod remains in Aleutian archaeological sites cannot be used as a reliable indicator of summer occupation. In addition, when the δ¹⁸O otolith values are integrated with radiocarbon dates and known climatic regimes, it is demonstrated that climatic conditions play an integral role in the pattern of occupation at Gertrude Cove. Initial site occupation coincides with the end of a neoglacial cooling period, and the most recent and continuous occupation coincides with the end of a localized warming period and the onset of the Little Ice Age (LIA).
Abstract:
Compressed covariance sensing using quadratic samplers is gaining increasing interest in recent literature. The covariance matrix often plays the role of a sufficient statistic in many signal and information processing tasks. However, owing to the large dimension of the data, it may become necessary to obtain a compressed sketch of the high-dimensional covariance matrix to reduce the associated storage and communication costs. Nested sampling has been proposed in the past as an efficient sub-Nyquist sampling strategy that enables perfect reconstruction of the autocorrelation sequence of Wide-Sense Stationary (WSS) signals, as though they were sampled at the Nyquist rate. The key idea behind nested sampling is to exploit properties of the difference set that naturally arises in the quadratic measurement model associated with covariance compression. In this thesis, we will focus on developing novel versions of nested sampling for low-rank Toeplitz covariance estimation and phase retrieval, where the latter problem finds many applications in high-resolution optical imaging, X-ray crystallography, and molecular imaging. The problem of low-rank compressive Toeplitz covariance estimation is first shown to be fundamentally related to that of line spectrum recovery. In the absence of noise, this connection can be exploited to develop a particular kind of sampler called the Generalized Nested Sampler (GNS), which can achieve optimal compression rates. In the presence of bounded noise, we develop a regularization-free algorithm that provably leads to stable recovery of the high-dimensional Toeplitz matrix from its order-wise minimal sketch acquired using a GNS. Contrary to existing TV-norm and nuclear-norm based reconstruction algorithms, our technique does not use any tuning parameters, which can be of great practical value.
The idea of nested sampling also finds a surprising use in the problem of phase retrieval, which has been of great interest in recent times for its convex formulation via PhaseLift. By using another modified version of nested sampling, namely the Partial Nested Fourier Sampler (PNFS), we show that with probability one, it is possible to achieve a certain conjectured lower bound on the necessary measurement size. Moreover, for sparse data, an l1-minimization based algorithm is proposed that can lead to stable phase retrieval using an order-wise minimal number of measurements.
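The difference-set property behind nested sampling can be checked in a few lines. The sketch below is a generic two-level nested sampler (not the GNS or PNFS variants developed in the thesis):

```python
def nested_sampler(n1, n2):
    """Two-level nested sampling: a dense level {1, ..., n1} plus a sparse
    level {(n1+1)*m : m = 1, ..., n2}. The pairwise differences of the
    combined index set cover every lag 0 .. n2*(n1+1) - 1, so the
    autocorrelation of a WSS signal can be recovered at all Nyquist-rate
    lags from far fewer physical samples."""
    samples = list(range(1, n1 + 1)) + [(n1 + 1) * m for m in range(1, n2 + 1)]
    lags = {abs(a - b) for a in samples for b in samples}
    return samples, lags
```

With n1 = n2 = 3, six samples per period already yield all 12 lags that Nyquist-rate sampling would provide, which is the compression the abstract refers to.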
Abstract:
Core collections are of strategic importance as they allow the use of a small part of a germplasm collection that is representative of the total collection. The objective of this study was to develop a soybean core collection of the USDA Soybean Germplasm Collection by comparing the results of random, proportional, logarithmic, multivariate proportional and multivariate logarithmic sampling strategies. All but the random sampling strategy used stratification of the entire collection based on passport data and maturity group classification. The multivariate proportional and multivariate logarithmic strategies made further use of qualitative and quantitative trait data to select diverse accessions within each stratum. The 18 quantitative trait data distribution parameters were calculated for each core and for the entire collection for pairwise comparison to validate the sampling strategies. All strategies were adequate for assembling a core collection. The random core collection best represented the entire collection in statistical terms. Proportional and logarithmic strategies did not maximize statistical representation but were better in selecting maximum variability. Multivariate proportional and multivariate logarithmic strategies produced the best core collections as measured by maximum variability conservation. The soybean core collection was established using the multivariate proportional selection strategy. (C) 2010 Elsevier B.V. All rights reserved.
Abstract:
The principle of using induction rules based on spatial environmental data to model a soil map has previously been demonstrated. Whilst the general pattern of classes of large spatial extent, and those with close association with geology, were delineated, small classes and the detailed spatial pattern of the map were less well rendered. Here we examine several strategies to improve the quality of the soil map models generated by rule induction. Terrain attributes that are better suited to landscape description at a resolution of 250 m are introduced as predictors of soil type. A map sampling strategy is developed. Classification error is reduced by using boosting rather than cross-validation to improve the model. Further, the benefit of incorporating the local spatial context for each environmental variable into the rule induction is examined. The best model was achieved by sampling in proportion to the spatial extent of the mapped classes, boosting the decision trees, and using spatial contextual information extracted from the environmental variables.
Abstract:
In Brazil, sugarcane fields are often burned to facilitate manual harvesting, and this burning causes environmental pollution from the large amounts of soot released into the atmosphere. This material contains numerous organic compounds such as PAHs. In this study, the concentrations of PAHs in two particulate-matter fractions (PM(2.5) and PM(10)) in the city of Araraquara (SE Brazil, with around 200,000 inhabitants and surrounded by sugarcane plantations) were determined during the sugarcane harvest (HV) and non-harvest (NHV) seasons in 2008 and 2009. The sampling strategy included four campaigns, with 60 samples in the NHV season and 220 samples in the HV season. The PM(2.5) and PM(10) fractions were collected using a dichotomous sampler (10 L min(-1), 24 h) with Teflon (TM) filters. The filter sets were extracted (ultrasonic bath with hexane/acetone (1:1 v/v)) and analyzed by HPLC/Fluorescence. The median concentration of total PAHs (PM(2.5) in 2009) was 0.99 ng m(-3) (NHV) and 3.3 ng m(-3) (HV). In the HV season, the total concentration of carcinogenic PAHs (benz(a)anthracene, benzo(b)fluoranthene, benzo(k)fluoranthene, and benzo(a)pyrene) was 5 times higher than in the NHV season. B(a)P median concentrations were 0.017 ng m(-3) and 0.12 ng m(-3) for the NHV and HV seasons, respectively. The potential cancer risk associated with exposure through inhalation of these compounds was estimated based on the benzo[a]pyrene toxic equivalence (BaP(eq)), where the overall toxicity of a PAH mixture is defined as the concentration of each compound multiplied by its relative toxic equivalence factor (TEF). Median BaP(eq) (2008 and 2009) ranged from 0.65 to 1.0 ng m(-3) for the NHV season and from 1.2 to 1.4 ng m(-3) for the HV season.
Considering that the maximum permissible BaP(eq) in ambient air is 1 ng m(-3), based on increased carcinogenic risk, our data suggest that the level of human exposure to PAHs in cities surrounded by sugarcane crops where burning is practiced is cause for concern. (C) 2010 Published by Elsevier Ltd.
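The BaP(eq) arithmetic described above is a TEF-weighted sum. The TEF values below follow the commonly used Nisbet and LaGoy scale but are stated here as illustrative assumptions, not values taken from the study; the concentrations are likewise hypothetical:

```python
# Illustrative toxic equivalence factors (assumed, Nisbet & LaGoy-style).
TEF = {
    "benzo(a)pyrene": 1.0,
    "benz(a)anthracene": 0.1,
    "benzo(b)fluoranthene": 0.1,
    "benzo(k)fluoranthene": 0.1,
}

def bap_equivalent(concs_ng_m3, tef=TEF):
    """BaP(eq) = sum over compounds of concentration * TEF, in ng m^-3."""
    return sum(conc * tef[name] for name, conc in concs_ng_m3.items())

# Hypothetical harvest-season median concentrations (ng m^-3):
bap_eq = bap_equivalent({
    "benzo(a)pyrene": 0.12,
    "benz(a)anthracene": 0.5,
    "benzo(b)fluoranthene": 0.3,
    "benzo(k)fluoranthene": 0.2,
})
```

Because BaP carries a TEF of 1, its concentration dominates BaP(eq) even when other carcinogenic PAHs are present at higher absolute levels.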
Abstract:
Asthma is characterised by an increased airway smooth muscle (ASM) area (ASMarea) within the airway wall. The present study examined the relationship of factors including severity and duration of asthma to ASMarea. The perimeter of the basement membrane (PBM) and ASMarea were measured on transverse sections of large and small airways from post mortem cases of fatal (n=107) and nonfatal asthma (n=37) and from control subjects (n=69). The thickness of ASM (ASMarea/PBM) was compared between asthma groups using multivariate linear regression. When all airways were considered together, ASMarea/PBM (in millimetres) was increased in nonfatal (median 0.04; interquartile range 0.013-0.051; p=0.034) and fatal cases of asthma (0.048; 0.025-0.078; p<0.001) compared with controls (0.036; 0.024-0.042). Compared with cases of nonfatal asthma, ASMarea/PBM was greater in cases of fatal asthma in large (p<0.001) and medium (p<0.001), but not small, airways. ASMarea/PBM was not related to duration of asthma, age of onset of asthma, sex or smoking. No effect due to study centre, other than that due to sampling strategy, was found. The thickness of the ASM layer is increased in asthma and is related to the severity of asthma but not its duration.
Abstract:
Objectives: Although monitoring of cyclosporin (CsA) is standard clinical practice after renal transplantation, mycophenolic acid (MPA) concentrations are not routinely measured. There is evidence that a relationship exists between MPA area under the concentration-time curve (AUC) and rejection. In this study, a retrospective analysis was undertaken of 27 adult renal transplant recipients. Methods: Patients received CsA and MPA therapy and had a four-point MPA AUC investigation. The relationship between MPA AUC performed in the first week after transplantation, as well as median trough cyclosporin concentrations, and clinical outcomes in the first month posttransplant was evaluated. Results: A total of 12 patients experienced biopsy-proven rejection (44.4%) and 4 patients had gastrointestinal adverse events (14.8%). A statistically significant relationship was observed between the incidence of biopsy-proven rejection and both MPA AUC (p = 0.02) and median trough CsA concentration (p = 0.008). No relationship between trough MPA concentration and rejection was observed (p = 0.21). Only 3 of 11 (27%) patients with an MPA AUC > 30 mg.h/L and a median trough CsA > 175 µg/L experienced acute rejection, compared with a 56% incidence of rejection for the remaining 16 patients. Patients who experienced adverse gastrointestinal events had significantly lower MPA AUC (p = 0.04), but median trough CsA concentrations were not significantly different (p = 0.24). Further, 3 of these 4 patients had rejection episodes. Conclusions: In addition to standard CsA monitoring, we propose further investigation of the use of a 4-point sampling strategy to predict MPA AUC in the first week posttransplant, which may facilitate optimization of mycophenolate mofetil dose at a time when patients are most vulnerable to acute rejection. (C) 2001 The Canadian Society of Clinical Chemists. All rights reserved.
Abstract:
In this work, kriging with covariates is used to model and map the spatial distribution of salinity measurements gathered by an autonomous underwater vehicle in a sea outfall monitoring campaign, aiming to distinguish the effluent plume from the receiving waters and characterize its spatial variability in the vicinity of the discharge. Four different geostatistical linear models for salinity were assumed, where the distance to the diffuser, the west-east positioning, and the south-north positioning were used as covariates. Sample variograms were fitted by Matérn models using weighted least squares and maximum likelihood estimation methods as a way to detect eventual discrepancies. Typically, the maximum likelihood method estimated very low ranges, which limited the kriging process. So, at least for these data sets, weighted least squares proved to be the most appropriate estimation method for variogram fitting. The kriged maps clearly show the spatial variation of salinity, and it is possible to identify the effluent plume in the area studied. The results obtained offer some guidelines for sewage monitoring when a geostatistical analysis of the data is intended. It is important to treat properly the existence of anomalous values and to adopt a sampling strategy that includes transects parallel and perpendicular to the effluent dispersion.
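The weighted-least-squares fitting step can be sketched with a simpler exponential model standing in for the Matérn family (an assumption made here for brevity), using Cressie's weights N(h)/γ(h)² and a coarse candidate search in place of a full optimizer:

```python
import numpy as np

def exp_variogram(h, nugget, sill, a):
    """Exponential variogram model (a stand-in for the Matérn family)."""
    return nugget + sill * (1.0 - np.exp(-np.asarray(h, dtype=float) / a))

def fit_wls(lags, gamma_hat, n_pairs, candidates):
    """Pick the (nugget, sill, range) triple minimizing Cressie's WLS
    criterion: sum over lags of N(h) * (gamma_hat / gamma_model - 1)^2."""
    best, best_obj = None, np.inf
    for nugget, sill, a in candidates:
        g = exp_variogram(lags, nugget, sill, a)
        obj = float(np.sum(n_pairs * (gamma_hat / g - 1.0) ** 2))
        if obj < best_obj:
            best, best_obj = (nugget, sill, a), obj
    return best

# Noise-free sample variogram generated from known parameters, so the
# true triple should win over the two distractor candidates.
lags = np.array([2.0, 5.0, 10.0, 20.0, 40.0])
truth = (0.1, 1.0, 10.0)
gamma_hat = exp_variogram(lags, *truth)
best = fit_wls(lags, gamma_hat, np.ones_like(lags),
               [truth, (0.0, 1.2, 5.0), (0.2, 0.8, 20.0)])
```

Weighting by the number of pairs N(h) down-weights the long, sparsely estimated lags, one reason WLS can behave more robustly than maximum likelihood on data sets like these.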
Abstract:
In order to evaluate the efficiency of different mammalian survey methods, we compared traditional sampling techniques (use of camera-traps on roads and artificial trails, track censuses, and direct field visualization) with an alternative sampling design (camera-traps positioned in natural areas such as natural trails and shelters). We conducted the study in a deciduous Atlantic Forest park in southern Brazil, and additionally compared our results with a previous intensive study carried out in the same area. Our considerably smaller sampling effort (e.g., 336 trap-days for our camera-traps versus 2,154 trap-days in the earlier study) registered the presence of 85% of the locally known species, with camera-traps being 68% efficient. Moreover, shelter camera-traps revealed a species composition different from that of most other sampling methods. This sampling strategy involving natural forest sites was therefore able to effectively optimize the chances of evaluating species composition in a shorter period, especially with respect to lower-density and cryptic species, as well as to detect species that avoid open, disturbed sites such as roads and man-made forest trails.