990 results for sampling methodology


Relevance: 30.00%

Abstract:

DNA-based studies have been one of the major interests in the conservation biology of endangered species and in population genetics. Because species and population genetic assessment requires a source of biological material, destructive sampling can be avoided by using non-destructive procedures for DNA isolation. An improved method is described for obtaining DNA from fish fins and scales, using an extraction buffer containing urea followed by DNA purification with phenol-chloroform. The methodology combines the benefits of non-destructive DNA sampling with high extraction efficiency. In addition, comparisons with other methodologies for isolating DNA from fish demonstrated that the present procedure is also a very attractive alternative for obtaining large amounts of high-quality DNA for use in different molecular analyses. The DNA samples, isolated from different fish species, have been successfully used in random amplified polymorphic DNA (RAPD) experiments, as well as in the amplification of specific ribosomal and mitochondrial DNA sequences. The present DNA extraction procedure thus represents a practical alternative for population-level approaches and genetic studies of rare or endangered taxa.

Relevance: 30.00%

Abstract:

Food webs have been used to understand the trophic relationships among organisms within an ecosystem; however, the extent to which sampling efficiency affects food web responses remains poorly understood. There is also a lack of long-term sampling data for many insect groups, particularly for the interactions between herbivores and their host plants. In the first chapter, I describe a source food web based on the plant Senegalia tenuifolia by identifying the associated insect species and the interactions among them and with the host plant. Furthermore, I assess the robustness of the data for each trophic level and propose a cost-efficient methodology. The results from this chapter show that the collected dataset and the methodology presented are a good tool for sampling most of the insect richness of a source food web. In total, the food web comprises 27 species belonging to four trophic levels. In the second chapter, I describe the temporal variation in species richness and abundance at each trophic level, as well as the relationships among distinct trophic levels. Moreover, I investigate the diversity patterns of the second and third trophic levels by assessing the contributions of the alpha- and beta-diversity components across years. This chapter shows that, in our system, parasitoid abundance is regulated by herbivore abundance, and that the species richness and abundance of the trophic levels vary temporally. It also shows that alpha-diversity was the component that contributed most to herbivore species diversity (second trophic level), while the relative contributions of alpha- and beta-diversity to parasitoid diversity (third level) changed over the years. Overall, this dissertation describes a source food web and brings insights into food web challenges related to the sampling effort needed to capture enough species from all trophic levels. It also discusses the relationships among communities associated with distinct trophic levels, their temporal variation, and their diversity patterns. Finally, this dissertation contributes to the global food web database and to the understanding of the interactions among trophic levels and of each trophic level's patterns over time and space.
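The alpha/beta analysis in the second chapter rests on additive diversity partitioning, in which gamma (total) richness is split into a mean within-occasion (alpha) component and a between-occasion (beta) component. The sketch below is a minimal, hypothetical illustration of that decomposition, not the dissertation's own code; the occasion-by-species incidence matrix is made up:

```python
# Additive diversity partitioning of species richness: alpha + beta = gamma.
import numpy as np

# rows = sampling occasions (e.g., years), columns = species (1 = present)
incidence = np.array([
    [1, 1, 0, 1, 0, 0, 1],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 0, 1, 1, 1],
])

alpha = incidence.sum(axis=1).mean()         # mean richness per occasion
gamma = (incidence.sum(axis=0) > 0).sum()    # total richness in the pool
beta = gamma - alpha                         # additive beta component
print(f"alpha = {alpha:.2f}, beta = {beta:.2f}, gamma = {gamma}")
```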

Relevance: 30.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 30.00%

Abstract:

This manuscript proposes a methodology for correlating soil porosity with the respective geological units using geostatistical analysis techniques, including data interpolation by kriging. The site studied is in the Lorena municipality, Paraíba do Sul Valley, southeastern Brazil; specifically, all studies were carried out within a 12 km² area at the Santa Edwirges farm. The database comprised 41 soil samples taken in different geological and geomorphological units at three depths: the surface, 50 cm, and 100 cm. The geostatistical analysis results were correlated with a geological map elaborated specifically for the site. This map records two different geological formations and a geological contact characterized by a shear zone. The results indicate a significant relationship between soil porosity and the respective geological units. The studies revealed that residual soils from weathered granitic rocks tend to have higher porosities than residual soils from weathered biotite gneiss, while the soil porosity within the shear zone is relatively insensitive to the respective geological formation. The spatial patterns observed were effective for evaluating the relationships among soil porosity, geological unit, and geomorphology, showing good potential for correlation with other soil properties such as hydraulic conductivity, soil water retention curves, and erosion potential.
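As an illustration of the interpolation step, the following is a minimal ordinary-kriging sketch in Python. It is not the manuscript's implementation; the spherical variogram parameters (nugget, sill, range) and the sample porosity values are hypothetical:

```python
# Minimal ordinary-kriging sketch (illustrative only).
import numpy as np

def spherical_variogram(h, nugget=0.01, sill=0.10, range_m=500.0):
    """Spherical variogram gamma(h), with gamma(0) = 0 by convention."""
    h = np.asarray(h, dtype=float)
    g = nugget + (sill - nugget) * (1.5 * h / range_m - 0.5 * (h / range_m) ** 3)
    return np.where(h == 0.0, 0.0, np.where(h >= range_m, sill, g))

def ordinary_krige(xy, z, xy0):
    """Predict z at location xy0 by solving the ordinary-kriging system."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)  # pairwise distances
    A = np.empty((n + 1, n + 1))
    A[:n, :n] = spherical_variogram(d)
    A[n, :n] = 1.0                      # unbiasedness constraint (Lagrange row)
    A[:n, n] = 1.0
    A[n, n] = 0.0
    b = np.append(spherical_variogram(np.linalg.norm(xy - xy0, axis=1)), 1.0)
    w = np.linalg.solve(A, b)
    return w[:n] @ z                    # kriged porosity estimate at xy0

# Hypothetical example: porosity (volume fraction) at 41 sample locations.
gen = np.random.default_rng(0)
xy = gen.uniform(0, 2000, size=(41, 2))        # coordinates in metres
z = 0.35 + 0.05 * gen.standard_normal(41)      # noisy porosity values
print(ordinary_krige(xy, z, np.array([1000.0, 1000.0])))
```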

Relevance: 30.00%

Abstract:

We consider a fully model-based approach for the analysis of distance sampling data. Distance sampling has been widely used to estimate the abundance (or density) of animals or plants in a spatially explicit study area. There is, however, no readily available method of making statistical inference on the relationships between abundance and environmental covariates. Spatial Poisson process likelihoods can be used to simultaneously estimate detection and intensity parameters by modeling distance sampling data as a thinned spatial point process. A model-based spatial approach to distance sampling data has three main benefits: it allows complex and opportunistic transect designs to be employed, it allows estimation of abundance in small subregions, and it provides a framework to assess the effects of habitat or experimental manipulation on density. We demonstrate the model-based methodology with a small simulation study and an analysis of the Dubbo weed data set. In addition, a simple ad hoc method for handling overdispersion is proposed. The simulation study showed that the model-based approach compared favorably to conventional distance sampling methods for abundance estimation, and the overdispersion correction performed adequately when the number of transects was high. Analysis of the Dubbo data set indicated a transect effect on abundance via Akaike's information criterion model selection. Further goodness-of-fit analysis, however, indicated some potential confounding of intensity with the detection function.
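To make the thinning idea concrete, here is a hedged simulation sketch (not the paper's implementation): a homogeneous Poisson process of plants on a strip is thinned by a half-normal detection function of perpendicular distance, and abundance is recovered by Horvitz-Thompson weighting. The intensity and detection-scale values are made up:

```python
# Distance sampling as a thinned spatial Poisson process (toy example).
import numpy as np

gen = np.random.default_rng(1)
lam, sigma = 0.5, 5.0                # intensity (per m^2), detection scale (m)
W, L = 40.0, 200.0                   # strip half-width and transect length (m)

n = gen.poisson(lam * 2 * W * L)             # true number of plants
d = gen.uniform(-W, W, n)                    # perpendicular distances
p_detect = np.exp(-d**2 / (2 * sigma**2))    # half-normal detection function
seen = gen.uniform(size=n) < p_detect        # independent thinning

# Horvitz-Thompson-style abundance estimate from the detected points
# (using the true detection probabilities, for illustration):
N_hat = np.sum(1.0 / p_detect[seen])
print(f"true N = {n}, detected = {seen.sum()}, estimate = {N_hat:.0f}")
```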

Relevance: 30.00%

Abstract:

Geostatistics involves the fitting of spatially continuous models to spatially discrete data (Chilès and Delfiner, 1999). Preferential sampling arises when the process that determines the data-locations and the process being modelled are stochastically dependent. Conventional geostatistical methods assume, if only implicitly, that sampling is non-preferential. However, these methods are often used in situations where sampling is likely to be preferential. For example, in mineral exploration samples may be concentrated in areas thought likely to yield high-grade ore. We give a general expression for the likelihood function of preferentially sampled geostatistical data and describe how this can be evaluated approximately using Monte Carlo methods. We present a model for preferential sampling, and demonstrate through simulated examples that ignoring preferential sampling can lead to seriously misleading inferences. We describe an application of the model to a set of bio-monitoring data from Galicia, northern Spain, in which making allowance for preferential sampling materially changes the inferences.
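The following toy simulation (not the paper's model fit) illustrates why ignoring preferential sampling is misleading: when sampling locations favour high values of a Gaussian random field, the naive sample mean overestimates the spatial average. The covariance model and the preferential-selection rule below are assumptions for illustration only:

```python
# Toy illustration of preferential-sampling bias.
import numpy as np

gen = np.random.default_rng(2)
m = 40                                           # grid is m x m
xx, yy = np.meshgrid(np.arange(m), np.arange(m))
pts = np.column_stack([xx.ravel(), yy.ravel()]).astype(float)
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
C = np.exp(-d / 10.0)                            # assumed covariance, range 10
S = np.linalg.cholesky(C + 1e-8 * np.eye(m * m)) @ gen.standard_normal(m * m)

beta = 2.0                                       # strength of preferentiality
p = np.exp(beta * S)
p /= p.sum()                                     # P(select site) ~ exp(beta * S)
idx = gen.choice(m * m, size=100, replace=False, p=p)

print(f"true spatial mean : {S.mean():+.3f}")
print(f"preferential mean : {S[idx].mean():+.3f}  (biased upward)")
```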

Relevance: 30.00%

Abstract:

In this paper, we consider estimation of the causal effect of a treatment on an outcome from observational data collected in two phases. In the first phase, a simple random sample of individuals is drawn from a population. On these individuals, information is obtained on treatment, outcome, and a few low-dimensional confounders. These individuals are then stratified according to these factors. In the second phase, a random sub-sample of individuals is drawn from each stratum, with known, stratum-specific selection probabilities. On these individuals, a rich set of confounding factors is collected. In this setting, we introduce four estimators: (1) simple inverse weighted, (2) locally efficient, (3) doubly robust, and (4) enriched inverse weighted. We evaluate the finite-sample performance of these estimators in a simulation study. We also use our methodology to estimate the causal effect of trauma care on in-hospital mortality using data from the National Study of Cost and Outcomes of Trauma.
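A minimal sketch of the design-weighting idea behind the simple inverse weighted estimator is given below. It is a generic illustration, not the paper's estimator: phase-2 individuals are weighted by the inverse of their stratum-specific selection probabilities, and a crude weighted risk difference is computed (ignoring confounder adjustment); all data are simulated:

```python
# Design weighting for a two-phase sample (generic illustration).
import numpy as np

gen = np.random.default_rng(3)
n = 20_000
u = gen.normal(size=n)                                     # confounder
treat = gen.binomial(1, 1 / (1 + np.exp(-u)))              # treatment
y = gen.binomial(1, 1 / (1 + np.exp(-(0.5 * treat + u))))  # binary outcome

# Phase-1 strata from (treatment, outcome); hypothetical sampling rates.
stratum = 2 * treat + y
pi = np.array([0.05, 0.50, 0.05, 0.50])[stratum]
phase2 = gen.uniform(size=n) < pi
w = 1.0 / pi[phase2]                                       # design weights

# Design-weighted risk difference in the phase-2 subsample (a crude
# effect estimate that skips confounder adjustment, for illustration):
t2, y2 = treat[phase2], y[phase2]
rd = (np.sum(w * y2 * t2) / np.sum(w * t2)
      - np.sum(w * y2 * (1 - t2)) / np.sum(w * (1 - t2)))
print(f"design-weighted risk difference: {rd:+.3f}")
```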

Relevance: 30.00%

Abstract:

OBJECTIVES Respondent-driven sampling (RDS) is a new data collection methodology used to estimate characteristics of hard-to-reach groups, such as the HIV prevalence in drug users. Many national public health systems and international organizations rely on RDS data. However, RDS reporting quality and available reporting guidelines are inadequate. We carried out a systematic review of RDS studies and present Strengthening the Reporting of Observational Studies in Epidemiology for RDS Studies (STROBE-RDS), a checklist of essential items to present in RDS publications, justified by an explanation and elaboration document. STUDY DESIGN AND SETTING We searched the MEDLINE (1970-2013), EMBASE (1974-2013), and Global Health (1910-2013) databases to assess the number and geographical distribution of published RDS studies. STROBE-RDS was developed based on STROBE guidelines, following Guidance for Developers of Health Research Reporting Guidelines. RESULTS RDS has been used in over 460 studies from 69 countries, including the USA (151 studies), China (70), and India (32). STROBE-RDS includes modifications to 12 of the 22 items on the STROBE checklist. The two key areas that required modification concerned the selection of participants and statistical analysis of the sample. CONCLUSION STROBE-RDS seeks to enhance the transparency and utility of research using RDS. If widely adopted, STROBE-RDS should improve global infectious diseases public health decision making.
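For context, a common RDS analysis weights each respondent by the inverse of their reported network degree (the RDS-II, or Volz-Heckathorn, estimator). The sketch below is a generic illustration with made-up data; it is not part of STROBE-RDS itself:

```python
# RDS-II (Volz-Heckathorn) prevalence estimator with inverse-degree weights.
import numpy as np

degree = np.array([4, 10, 3, 8, 25, 6, 12, 5])   # hypothetical network degrees
hiv    = np.array([1,  0, 1, 0,  0, 1,  0, 1])   # hypothetical outcomes (0/1)

w = 1.0 / degree                                 # inverse-degree weights
p_hat = np.sum(w * hiv) / np.sum(w)
print(f"RDS-II prevalence estimate: {p_hat:.3f}")
```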

Relevance: 30.00%

Abstract:

The Tara Oceans Expedition (2009-2013) sampled the world's oceans on board a 36 m long schooner, collecting environmental data and organisms from viruses to planktonic metazoans for later analyses using modern sequencing and state-of-the-art imaging technologies. Tara Oceans data are particularly suited to studying the genetic, morphological, and functional diversity of plankton. The present data set provides continuous measurements made with a fast repetition rate fluorometer (FRRF) operating in flow-through mode during the 2009-2012 part of the expedition. The instrument operates by exciting chlorophyll fluorescence using a series of short flashes of controlled energy and time intervals (Kolber et al., 1998). The fluorescence transients produced by this excitation signal were analysed in real time to provide estimates of the abundance of photosynthetic pigments, the photosynthetic yield (Fv/Fm), the functional absorption cross section (a proxy for the efficiency of photosynthetic energy acquisition), the kinetics of photosynthetic electron transport between Photosystem II and Photosystem I, and the size of the plastoquinone (PQ) pool. These parameters were measured at excitation wavelengths of 445 nm, 470 nm, 505 nm, and 535 nm, allowing the presence and photosynthetic performance of different phytoplankton taxa to be assessed based on the spectral composition of their light-harvesting pigments. The FRRF-derived photosynthetic characteristics were used to calculate the initial slope, the half saturation, and the maximum level of the photosynthesis vs. irradiance relationship. FRRF data were acquired continuously, at 1-minute intervals.
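The initial slope, half saturation, and maximum of the photosynthesis-irradiance (P-I) relationship can be recovered by fitting a saturating curve to P-I data. The sketch below uses the hyperbolic-tangent form of Jassby and Platt (1976), a common choice; whether this is the exact functional form applied to the Tara data is an assumption, and the data points are made up:

```python
# Fitting a hyperbolic-tangent P-I curve to recover alpha, Pmax and
# the half-saturation irradiance.
import numpy as np
from scipy.optimize import curve_fit

def p_vs_i(I, p_max, alpha):
    """Jassby-Platt form: P = Pmax * tanh(alpha * I / Pmax)."""
    return p_max * np.tanh(alpha * I / p_max)

I = np.array([0, 25, 50, 100, 200, 400, 800, 1600], float)  # umol photons/m2/s
P = np.array([0.0, 0.9, 1.7, 2.9, 4.1, 4.8, 5.0, 5.1])      # arbitrary units

(p_max, alpha), _ = curve_fit(p_vs_i, I, P, p0=(5.0, 0.05))
half_sat = np.arctanh(0.5) * p_max / alpha   # irradiance where P = Pmax / 2
print(f"Pmax = {p_max:.2f}, alpha = {alpha:.3f}, I(half-sat) = {half_sat:.0f}")
```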

Relevance: 30.00%

Abstract:

For an adequate assessment of the safety margins of a nuclear facility, e.g. a nuclear power plant, it is necessary to consider all the uncertainties that affect its design, operation, and accident-response calculations. Nuclear data are one such source of uncertainty: they enter the neutronics, fuel depletion, and material activation calculations that predict the response functions essential to correct behaviour during operation and in the event of an accident, such as the neutron multiplication factor or the decay heat after reactor shutdown. The impact of nuclear data uncertainties on these calculations therefore needs to be evaluated. To perform uncertainty propagation calculations, methodologies capable of assessing the impact of nuclear data uncertainties must be implemented, and the available uncertainty data must be understood in order to handle them. Great efforts are currently being invested in improving the capability to analyse, process, and produce uncertainty data, especially for isotopes of importance to advanced reactors, and new codes are being developed and implemented to use these data and analyse their impact. These were among the objectives of the European ANDES (Accurate Nuclear Data for nuclear Energy Sustainability) project, which provided the framework for this PhD thesis.
Accordingly, a review of the state of the art of nuclear data and their uncertainties was first carried out, focusing on three kinds of data: decay data, fission yields, and cross sections. A review of the state of the art of methodologies for propagating nuclear data uncertainties was also performed. The Nuclear Engineering Department (DIN) of UPM had proposed a methodology for propagating uncertainties in depletion (isotopic evolution) calculations, the Hybrid Method, which was taken as the starting point of this thesis; it has been implemented, developed, and extended, and its advantages, drawbacks, and limitations have been analysed. The Hybrid Method is used in conjunction with the ACAB depletion code and is based on Monte Carlo sampling of the nuclear data with uncertainties. Different approaches are presented depending on the energy-group structure of the cross sections: one-group, one-group with correlated sampling, and multi-group; their differences and applicability criteria are discussed. Sequences have been developed for using nuclear data libraries stored in different formats: ENDF-6 (for evaluated libraries), COVERX (for the multi-group libraries of SCALE), and EAF (for activation libraries).
The review of the state of the art of fission yield data identified a lack of uncertainty information, specifically of complete covariance matrices. Given the renewed interest of the international community, expressed through Subgroup 37 (SG37) of the Working Party on International Nuclear Data Evaluation Co-operation (WPEC), which assesses the needs for improved nuclear data, a review of methodologies for generating covariance data was carried out. The Bayesian/generalised least squares (GLS) updating sequence was selected and implemented to address this lack of complete covariance matrices for fission yields.
Once the Hybrid Method had been implemented, developed, and extended, together with the capability of generating complete covariance matrices for fission yields, different nuclear applications were studied. The decay heat after a fission pulse was addressed first, because of its importance for any event after reactor shutdown and because it is a clean exercise for showing the impact and importance of decay data and fission yield uncertainties in conjunction with the new complete covariance matrices. Two fuel cycles of advanced reactors were then studied, those of the European Facility for Industrial Transmutation (EFIT) and the European Sodium Fast Reactor (ESFR), in which the impact of nuclear data uncertainties on response functions such as isotopic composition, decay heat, and radiotoxicity was analysed. Different nuclear data libraries were used in these studies, allowing the impact of their uncertainties to be compared. These applications also served as frameworks for comparing the different approaches of the Hybrid Method with each other and with other methodologies for propagating nuclear data uncertainties: Total Monte Carlo (TMC), developed at NRG by A.J. Koning and D. Rochman, and NUDUNA, developed at AREVA GmbH by O. Buss and A. Hoefer. These comparisons reveal the advantages of the Hybrid Method, as well as its limitations and its range of application.
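To illustrate the Monte Carlo sampling idea at the heart of the Hybrid Method's one-group approach, here is a toy sketch: a single one-group cross section is drawn from an assumed uncertainty distribution and propagated through a single-nuclide depletion equation. This is emphatically not the ACAB code; the flux, cross section, and uncertainty values are hypothetical:

```python
# Monte Carlo propagation of a one-group cross-section uncertainty
# through a toy single-nuclide depletion calculation.
import numpy as np

gen = np.random.default_rng(4)
phi = 1e14              # neutron flux (n/cm^2/s), hypothetical
t = 3.15e7              # irradiation time: one year in seconds
sigma_mean = 50.0e-24   # one-group capture cross section (cm^2), hypothetical
sigma_rel_u = 0.05      # 5% relative standard deviation, hypothetical

# Sample the cross section, then solve dN/dt = -sigma*phi*N analytically:
sigma = gen.normal(sigma_mean, sigma_rel_u * sigma_mean, size=10_000)
N_ratio = np.exp(-sigma * phi * t)      # N(t)/N(0) per Monte Carlo history

print(f"N(t)/N(0): mean = {N_ratio.mean():.4f}, "
      f"rel. std = {N_ratio.std() / N_ratio.mean():.4%}")
```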

Relevance: 30.00%

Abstract:

Biotic indices have been developed to summarise the information provided by benthic macroinvertebrates, but their use can require specialized taxonomic expertise and time-consuming sample processing. Using a higher taxonomic level in biotic indices reduces sample-processing time but should be considered with caution, since assigning tolerance levels to higher taxa introduces uncertainty. A methodology for family-level tolerance categorization, based on the affinity of each family for disturbed or undisturbed conditions, was employed. This family tolerance classification approach was tested in two areas of the Mediterranean Sea affected by sewage discharges. Biotic indices computed at the family level responded correctly to the presence of sewage. However, in areas with different communities among stations and a high diversity of species within each family, assigning the same tolerance level to a whole family could introduce errors. Thus, the use of higher taxonomic levels in biotic indices should be restricted to areas with a homogeneous community, where families have similar species composition across sites.
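As an example of a tolerance-weighted biotic index, the sketch below computes a biotic coefficient in the spirit of AMBI (Borja et al., 2000), weighting the relative abundances of five ecological groups from sensitive to opportunistic. The paper's own index and tolerance assignments are not reproduced here; the station data are hypothetical:

```python
# AMBI-style biotic coefficient from ecological-group abundances.
import numpy as np

# Ecological groups I (sensitive) .. V (first-order opportunists) and the
# AMBI weighting 0, 1.5, 3, 4.5, 6 applied to their relative abundances.
weights = np.array([0.0, 1.5, 3.0, 4.5, 6.0])

def biotic_coefficient(abundance_by_group):
    """Abundance per ecological group -> biotic coefficient in [0, 6]."""
    a = np.asarray(abundance_by_group, float)
    return float(weights @ (a / a.sum()))

station_clean  = [120, 40, 20,  5,  0]   # dominated by sensitive families
station_sewage = [  2,  5, 30, 60, 90]   # dominated by opportunists
print(biotic_coefficient(station_clean))    # low value -> undisturbed
print(biotic_coefficient(station_sewage))   # high value -> disturbed
```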

Relevance: 30.00%

Abstract:

Background There is a paucity of data describing the prevalence of childhood refractive error in the United Kingdom. The Northern Ireland Childhood Errors of Refraction study, along with its sister study, the Aston Eye Study, are the first population-based surveys of children using both random cluster sampling and cycloplegic autorefraction to quantify levels of refractive error in the United Kingdom. Methods Children aged 6–7 years and 12–13 years were recruited from a stratified random sample of primary and post-primary schools, representative of the population of Northern Ireland as a whole. Measurements included assessment of visual acuity, oculomotor balance, ocular biometry, and cycloplegic binocular open-field autorefraction. Questionnaires were used to identify putative risk factors for refractive error. Results 399 (57%) of the 6–7-year-olds and 669 (60%) of the 12–13-year-olds participated. School participation rates did not vary statistically significantly with the size of the school, whether the school was urban or rural, or whether it was in a deprived or non-deprived area. The gender balance, ethnicity, and type of schooling of participants are reflective of the Northern Ireland population. Conclusions The study design, sample size, and methodology will ensure accurate measures of the prevalence of refractive errors in the target population and will facilitate comparisons with other population-based refractive data.
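A design-weighted prevalence estimate under stratified cluster sampling can be sketched as follows. The study's actual weighting scheme is not given in the abstract, so the per-school counts and weights below are purely illustrative:

```python
# Design-weighted prevalence from a school-based cluster sample (toy data).
import numpy as np

# Per-school (cluster) counts: children examined and myopes, plus a design
# weight = (children enrolled in stratum) / (children examined in stratum).
examined = np.array([55, 48, 62, 40, 51])
myopic   = np.array([ 6,  4,  9,  3,  5])
weight   = np.array([18.0, 18.0, 25.0, 25.0, 25.0])   # hypothetical

p_hat = np.sum(weight * myopic) / np.sum(weight * examined)
print(f"design-weighted myopia prevalence: {p_hat:.1%}")
```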

Relevance: 30.00%

Abstract:

Purpose - To evaluate adherence to prescribed antiepileptic drugs (AEDs) in children with epilepsy using a combination of adherence-assessment methods. Methods - A total of 100 children with epilepsy (≤17 years old) were recruited. Medication adherence was determined via parental and child self-reporting (≥9 years old), medication refill data from general practitioner (GP) prescribing records, and via AED concentrations in dried blood spot (DBS) samples obtained from children at the clinic and via self- or parental-led sampling in children's own homes. The latter were assessed using population pharmacokinetic modeling. Patients were deemed nonadherent if any of these measures were indicative of nonadherence with the prescribed treatment. In addition, beliefs about medicines, parental confidence in seizure management, and the presence of depressed mood in parents were evaluated to examine their association with nonadherence in the participating children. Key Findings - The overall rate of nonadherence in children with epilepsy was 33%. Logistic regression analysis indicated that children with generalized epilepsy (vs. focal epilepsy) were more likely (odds ratio [OR] 4.7, 95% confidence interval [CI] 1.37–15.81) to be classified as nonadherent as were children whose parents have depressed mood (OR 3.6, 95% CI 1.16–11.41). Significance - This is the first study to apply the novel methodology of determining adherence via AED concentrations in clinic and home DBS samples. The present findings show that the latter, with further development, could be a useful approach to adherence assessment when combined with other measures including parent and child self-reporting. Seizure type and parental depressed mood were strongly predictive of nonadherence.
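One way a pharmacokinetic model can support adherence assessment is by comparing an observed drug concentration against the range predicted at steady state under full adherence. The sketch below uses a standard one-compartment oral-dosing model, not the study's population PK model; all parameters and the flagging threshold are hypothetical:

```python
# Flagging possible non-adherence from a measured drug level (toy model).
import numpy as np

def css_bounds(dose, tau, CL, V, ka, F=1.0):
    """Steady-state min/max concentration for repeated oral dosing
    (one-compartment model with first-order absorption)."""
    ke = CL / V
    t = np.linspace(0.0, tau, 500)
    c = (F * dose * ka / (V * (ka - ke))) * (
        np.exp(-ke * t) / (1 - np.exp(-ke * tau))
        - np.exp(-ka * t) / (1 - np.exp(-ka * tau))
    )
    return c.min(), c.max()

lo, hi = css_bounds(dose=300.0, tau=12.0, CL=1.8, V=45.0, ka=1.5)  # mg, h, L/h, L
observed = 2.1   # mg/L from the dried blood spot sample (hypothetical)
print(f"predicted steady-state range: {lo:.1f}-{hi:.1f} mg/L")
print("possible non-adherence" if observed < 0.5 * lo else "consistent with dosing")
```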

Relevance: 30.00%

Abstract:

In an effort to achieve greater consistency and comparability in state‐wide seat belt use reporting, the National Highway Traffic Safety Administration (NHTSA) issued new requirements in 2011 for observing and reporting future seat belt use. The requirements included the involvement of a qualified statistician in the sampling and weighting portions of the process as well as a variety of operational details. The Iowa Governor’s Traffic Safety Bureau contracted with Iowa State University’s Survey & Behavioral Research Services (SBRS) in 2011 to develop the study design and data collection plan for the State of Iowa annual survey that would meet the new requirements of the NHTSA. A seat belt survey plan for Iowa was developed by SBRS with statistical expertise provided by Zhengyuan Zhu, Ph.D., Associate Professor of Statistics at Iowa State University. The Iowa plan was submitted to NHTSA in December of 2011 and official approval was received on March 19, 2012.

Relevance: 30.00%

Abstract:

The recently reported Monte Carlo Random Path Sampling method (RPS) is here improved and its application is expanded to the study of the 2D and 3D Ising and discrete Heisenberg models. The methodology was implemented to allow use in both CPU-based high-performance computing infrastructures (C/MPI) and GPU-based (CUDA) parallel computation, with significant computational performance gains. Convergence is discussed in terms of both the free energy and the dependence of magnetization on field and temperature. From the calculated magnetization-energy joint density of states, fast calculations of field- and temperature-dependent thermodynamic properties are performed, including the effects of anisotropy on coercivity, and the magnetocaloric effect. The emergence of first-order magneto-volume transitions in the compressible Ising model is interpreted using the Landau theory of phase transitions. Using metallic gadolinium as a real-world example, the possibility of using RPS as a tool for computational magnetic materials design is discussed. Experimental magnetic and structural properties of a gadolinium single crystal are compared to RPS-based calculations using microscopic parameters obtained from Density Functional Theory.
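The reweighting step the abstract refers to, computing field- and temperature-dependent properties from the joint density of states g(E, M), can be sketched as follows. Here g(E, M) comes from brute-force enumeration of a tiny 4x4 periodic 2D Ising lattice rather than from RPS, purely to keep the example self-contained:

```python
# Thermodynamics from a joint density of states g(E, M) (toy 2D Ising).
import itertools
from collections import defaultdict
import numpy as np

L = 4
g = defaultdict(int)                       # g[(E, M)] = number of states
for spins in itertools.product((-1, 1), repeat=L * L):
    s = np.array(spins).reshape(L, L)
    # periodic nearest-neighbour energy; each bond counted once
    E = -int(np.sum(s * np.roll(s, 1, 0)) + np.sum(s * np.roll(s, 1, 1)))
    g[(E, int(s.sum()))] += 1

def magnetization(T, H):
    """<M> per spin at temperature T and field H (units J = kB = 1)."""
    EM = np.array(list(g.keys()), float)   # columns: energy E, moment M
    w = np.array(list(g.values()), float) * np.exp(-(EM[:, 0] - H * EM[:, 1]) / T)
    return float((w @ EM[:, 1]) / w.sum() / (L * L))

print(magnetization(T=2.0, H=0.1))   # a small field breaks the +/- symmetry
```

Once g(E, M) is stored, any (T, H) point costs only a reweighting sum, which is what makes density-of-states methods such as RPS attractive for scanning phase diagrams.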