908 results for convenience sampling
Abstract:
The Biscayne Bay Benthic Sampling Program was divided into two phases. In Phase I, sixty sampling stations were established in Biscayne Bay (including Dumfoundling Bay and Card Sound) representing diverse habitats. The stations were visited in the wet season (late fall of 1981) and in the dry season (midwinter of 1982). At each station certain abiotic conditions were measured or estimated, including depth, sources of freshwater inflow and pollution, bottom characteristics, current direction and speed, surface and bottom temperature, salinity, and dissolved oxygen; water clarity was estimated with a Secchi disk. Seagrass blades and macroalgae were counted in a 0.1-m² grid placed so as to best represent the bottom community within a 50-foot radius. Underwater 35-mm photographs were made of the bottom using flash apparatus. Benthic samples were collected using a petite Ponar dredge. These samples were washed through a 5-mm mesh screen, fixed in formalin in the field, and later sorted and identified by experts to a pre-agreed taxonomic level. During the wet season sampling period, a nonquantitative one-meter-wide trawl was made of the epibenthic community. These samples were also washed, fixed, sorted, and identified. During the dry season sampling period, sediment cores were collected at each station not located on bare rock. These cores were analyzed for sediment size and organic composition by personnel of the University of Miami. Data resulting from the sampling were entered into a computer and subjected to cluster analyses, Shannon-Weaver diversity analysis, multiple regression analysis of variance and covariance, and factor analysis. In Phase II of the program, fifteen stations were selected from among the sixty of Phase I. These stations were sampled quarterly. At each quarter, five petite Ponar dredge samples were collected from each station. As in Phase I, observations and measurements, including seagrass blade counts, were made at each station. In Phase II, polychaete specimens collected were given to a separate contractor for analysis to the species level. These analyses included mean, standard deviation, coefficient of dispersion, percent of total, and numeric rank for each organism at each station, as well as number of species, Shannon-Weaver taxa diversity, and dominance (the complement of Simpson's Index) for each station. Multiple regression analysis of variance and covariance, and factor analysis were applied to the data to determine the effect of abiotic factors measured at each station. (PDF contains 96 pages)
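As an illustration of the two community indices named above, the sketch below computes Shannon-Weaver diversity and a Simpson-type dominance value from hypothetical per-taxon counts at a single station; the counts, and the exact Simpson convention (the abstract does not specify which complement is intended), are assumptions for illustration only.

```python
import math

def shannon_weaver(counts):
    """Shannon-Weaver diversity H' = -sum(p_i * ln p_i) over taxa with nonzero counts."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def simpson_concentration(counts):
    """Simpson's concentration sum(p_i^2); its complement 1 - sum(p_i^2) is the
    other common convention, so both values are printed below."""
    total = sum(counts)
    return sum((c / total) ** 2 for c in counts)

# Hypothetical counts of individuals per taxon in one dredge sample
counts = [120, 45, 30, 5, 1]
print("H'       :", round(shannon_weaver(counts), 3))
print("Simpson  :", round(simpson_concentration(counts), 3),
      "| complement:", round(1 - simpson_concentration(counts), 3))
```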
Abstract:
Polycyclic aromatic hydrocarbons, butyltins, polychlorinated biphenyls, DDT and metabolites, other chlorinated pesticides, trace and major elements, and a number of measures of contaminant effects are quantified in bivalves and sediments collected as part of the NOAA National Status and Trends (NS&T) Program. This document contains descriptions of some of the sampling and analytical protocols used by NS&T contract laboratories from 1993 through 1996. (PDF contains 257 pages)
Abstract:
[EN]The Mallows and Generalized Mallows models are compact yet powerful and natural ways of representing a probability distribution over the space of permutations. In this paper we deal with the problems of sampling and learning (estimating) such distributions when the metric on permutations is the Cayley distance. We propose new methods for both operations, whose performance is shown through several experiments. We also introduce novel procedures to count and randomly generate permutations at a given Cayley distance, both with and without certain structural restrictions. An application in the field of biology is given to motivate interest in this model.
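For readers unfamiliar with the metric, the following minimal sketch (not taken from the paper) computes the Cayley distance between two permutations as n minus the number of cycles of their composition; the example permutations are arbitrary.

```python
def cayley_distance(sigma, pi):
    """Cayley distance: minimum number of transpositions turning sigma into pi,
    which equals n minus the number of cycles of sigma composed with pi^{-1}."""
    n = len(sigma)
    pi_inv = [0] * n
    for i, v in enumerate(pi):          # invert pi (permutations of 0..n-1)
        pi_inv[v] = i
    comp = [sigma[pi_inv[i]] for i in range(n)]
    seen, cycles = [False] * n, 0
    for start in range(n):
        if not seen[start]:
            cycles += 1
            j = start
            while not seen[j]:
                seen[j] = True
                j = comp[j]
    return n - cycles

# A 3-cycle is two transpositions away from the identity
print(cayley_distance((0, 1, 2, 3, 4), (1, 2, 0, 3, 4)))  # 2
```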
Abstract:
[EN]In this paper we deal with distributions over permutation spaces. The Mallows model is the model in use; the associated distance for permutations is the Hamming distance.
Abstract:
[EN]In this paper we deal with probability distributions over permutation spaces. The probability model in use is the Mallows model; the distance for permutations that the model uses is the Ulam distance.
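As a point of reference (not taken from the paper), the Ulam distance between two permutations can be computed as n minus the length of their longest common subsequence, which for permutations reduces to a longest-increasing-subsequence computation; the example permutations below are arbitrary.

```python
from bisect import bisect_left

def ulam_distance(sigma, pi):
    """Ulam distance: minimum number of delete-and-reinsert moves turning sigma into pi,
    equal to n minus the length of their longest common subsequence."""
    n = len(sigma)
    pos_in_pi = [0] * n
    for i, v in enumerate(pi):
        pos_in_pi[v] = i
    relabeled = [pos_in_pi[v] for v in sigma]   # sigma expressed in pi's ordering
    tails = []                                  # patience sorting for the LIS length
    for x in relabeled:
        k = bisect_left(tails, x)
        if k == len(tails):
            tails.append(x)
        else:
            tails[k] = x
    return n - len(tails)

print(ulam_distance((0, 1, 2, 3, 4), (4, 0, 1, 2, 3)))  # 1: move the 4 to the front
```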
Abstract:
Atlantic menhaden, Brevoortia tyrannus, the object of a major purse-seine fishery along the U.S. east coast, are landed at plants from northern Florida to central Maine. The National Marine Fisheries Service has sampled these landings since 1955 for length, weight, and age. Together with records of landings at each plant, the samples are used to estimate numbers of fish landed at each age. This report analyzes the sampling design in terms of probability sampling theory. The design is classified as two-stage cluster sampling, the first stage consisting of purse-seine sets randomly selected from the population of all sets landed, and the second stage consisting of fish randomly selected from each sampled set. Implicit assumptions of this design are discussed with special attention to current sampling procedures. Methods are developed for estimating mean fish weight, numbers of fish landed, and age composition of the catch, with approximate 95% confidence intervals. Based on specific results from three ports (Port Monmouth, N.J., Reedville, Va., and Beaufort, N.C.) for the 1979 fishing season, recommendations are made for improving sampling procedures to comply more exactly with assumptions of the sampling design. These recommendations include adopting more formal methods for randomizing set and fish selection, increasing the number of sets sampled, considering the bias introduced by unequal set sizes, and developing methods to optimize the use of funds and personnel. (PDF file contains 22 pages.)
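To make the two-stage cluster design concrete, here is a deliberately simplified sketch (not the report's estimators) that treats each sampled purse-seine set as an equally weighted primary unit and estimates mean fish weight with an approximate 95% confidence interval from between-set variation; the weights are invented for illustration.

```python
import math

def two_stage_mean_weight(sets):
    """Mean fish weight and an approximate 95% CI from two-stage cluster samples.
    `sets` is a list of lists: each inner list holds the weights of the fish measured
    from one randomly selected purse-seine set (the primary unit). Sets are treated as
    equally weighted, so between-set variation drives the variance estimate."""
    n = len(sets)
    set_means = [sum(w) / len(w) for w in sets]
    ybar = sum(set_means) / n
    s2_between = sum((m - ybar) ** 2 for m in set_means) / (n - 1)
    se = math.sqrt(s2_between / n)
    return ybar, (ybar - 1.96 * se, ybar + 1.96 * se)

# Hypothetical fish weights (grams) measured from three sampled sets
sampled_sets = [[210, 225, 198, 240], [180, 195, 205], [230, 244, 251, 238]]
print(two_stage_mean_weight(sampled_sets))
```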
Abstract:
ENGLISH: The staff of the Inter-American Tropical Tuna Commission is collecting and analyzing catch statistics of the Eastern Pacific fishery for yellowfin tuna (Neothunnus macropterus) and skipjack (Katsuwonus pelamis) in order to provide the factual information required for maintaining the catch of these species at maximum sustainable levels (Shimada and Schaefer, 1956). Careful, systematic and continued studies of the population structure, life history, and ecology of these species are needed for a proper and adequate interpretation of the catch statistics so that a sound conservation program may be achieved (Schaefer, 1956). SPANISH: El personal científico de la Comisión Interamericana del Atún Tropical cumple, entre sus tareas, la de reunir y analizar las estadísticas de pesca del atún aleta amarilla (Neothunnus macropterus) y del barrilete (Katsuwonus pelamis) de la pesquería del Pacífico Oriental, a fin de adquirir la información necesaria para mantener la pesca de estas especies a niveles de producción máxima sostenible (Shimada y Schaefer, 1956). Estudios cuidadosos, sistemáticos y continuos de la estructura de la población y ciclo de vida y ecología de estas especies, son necesarios para lograr una interpretación adecuada de las estadísticas de pesca, de modo que éstas, a su vez, permitan realizar un programa conservacionista serio (Schaefer, 1956). (PDF contains 73 pages.)
Abstract:
The direct simulation Monte Carlo (DSMC) method is a widely used approach for simulating flows with rarefied or nonequilibrium effects. It relies heavily on sampling instantaneous values from prescribed distributions using random numbers. In this note, we briefly review the sampling techniques typically employed in the DSMC method and present two techniques to speed up the related sampling processes. One technique is very efficient for sampling the geometric locations of new particles, and the other is useful for the Larsen-Borgnakke energy distribution.
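The speedup techniques of the note are not reproduced here, but the following sketch illustrates the standard acceptance-rejection approach that DSMC codes commonly use to draw samples from a prescribed distribution; the linearly increasing target density is a made-up example.

```python
import random

def sample_acceptance_rejection(pdf, x_min, x_max, pdf_max):
    """Draw one value from a prescribed (possibly unnormalized) density on [x_min, x_max]
    by acceptance-rejection: propose x uniformly, accept with probability pdf(x) / pdf_max."""
    while True:
        x = random.uniform(x_min, x_max)
        if random.random() * pdf_max <= pdf(x):
            return x

# Made-up example: particle positions in a cell whose target density grows linearly
linear_pdf = lambda x: x                      # unnormalized density on [0, 1]
samples = [sample_acceptance_rejection(linear_pdf, 0.0, 1.0, 1.0) for _ in range(10000)]
print(sum(samples) / len(samples))            # should be close to 2/3 for this density
```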
Abstract:
ENGLISH: A two-stage sampling design is used to estimate the variances of the numbers of yellowfin in different age groups caught in the eastern Pacific Ocean. For purse seiners, the primary sampling unit (n) is a brine well containing fish from a month-area stratum; the number of fish lengths (m) measured from each well is the secondary unit. The fish cannot be selected at random from the wells because of practical limitations. The effects of different sampling methods and other factors on the reliability and precision of statistics derived from the length-frequency data were therefore examined. Modifications are recommended where necessary. Lengths of fish measured during the unloading of six test wells revealed two forms of inherent size stratification: 1) short-term disruptions of the existing pattern of sizes, and 2) transition zones between long-term trends in sizes. To some degree, all wells exhibited cyclic changes in mean size and variance during unloading. In half of the wells, it was observed that size selection by the unloaders induced a change in mean size. As a result of stratification, the sequence of sizes removed from all wells was non-random, regardless of whether a well contained fish from a single set or from more than one set. The number of modal sizes in a well was not related to the number of sets. In an additional well composed of fish from several sets, an experiment on vertical mixing indicated that a representative sample of the contents may be restricted to the bottom half of the well. The contents of the test wells were used to generate 25 simulated wells and to compare the results of three sampling methods applied to them. The methods were: (1) random sampling (also used as a standard), (2) protracted sampling, in which the selection process was extended over a large portion of a well, and (3) measuring fish consecutively during removal from the well. Repeated sampling by each method and different combinations of n and m indicated that, because the principal source of size variation occurred among primary units, increasing n was the most effective way to reduce the variance estimates of both the age-group sizes and the total number of fish in the landings. Protracted sampling largely circumvented the effects of size stratification, and its performance was essentially comparable to that of random sampling. Sampling by this method is recommended. Consecutive-fish sampling produced more biased estimates with greater variances. Analysis of the 1988 length-frequency samples indicated that, for age groups that appear most frequently in the catch, a minimum sampling frequency of one primary unit in six for each month-area stratum would reduce the coefficients of variation (CV) of their size estimates to approximately 10 percent or less. Additional stratification of samples by set type, rather than month-area alone, further reduced the CV's of scarce age groups, such as the recruits, and potentially improved their accuracy. The CV's of recruitment estimates for completely-fished cohorts during the 1981-84 period were in the vicinity of 3 to 8 percent. Recruitment estimates and their variances were also relatively insensitive to changes in the individual quarterly catches and variances, respectively, of which they were composed. SPANISH: Se usa un diseño de muestreo de dos etapas para estimar las varianzas de los números de aletas amarillas en distintos grupos de edad capturados en el Océano Pacífico oriental.
Para barcos cerqueros, la unidad primaria de muestreo (n) es una bodega de salmuera que contenía peces de un estrato de mes-área; el número de tallas de peces (m) medidas de cada bodega es la unidad secundaria. Limitaciones de carácter práctico impiden la selección aleatoria de peces de las bodegas. Por lo tanto, fueron examinados los efectos de distintos métodos de muestreo y otros factores sobre la confiabilidad y precisión de las estadísticas derivadas de los datos de frecuencia de talla. Se recomiendan modificaciones donde sean necesarias. Las tallas de peces medidas durante la descarga de seis bodegas de prueba revelaron dos formas de estratificación inherente por talla: 1) perturbaciones a corto plazo en la pauta de tallas existente, y 2) zonas de transición entre las tendencias a largo plazo en las tallas. En cierto grado, todas las bodegas mostraron cambios cíclicos en talla media y varianza durante la descarga. En la mitad de las bodegas, se observó que la selección por talla por los descargadores indujo un cambio en la talla media. Como resultado de la estratificación, la secuencia de tallas sacadas de todas las bodegas no fue aleatoria, sin considerar si una bodega contenía peces de un solo lance o de más de uno. El número de tallas modales en una bodega no estaba relacionado al número de lances. En una bodega adicional compuesta de peces de varios lances, un experimento de mezcla vertical indicó que una muestra representativa del contenido podría estar limitada a la mitad inferior de la bodega. Se usó el contenido de las bodegas de prueba para generar 25 bodegas simuladas y comparar los resultados de tres métodos de muestreo aplicados a estas. Los métodos fueron: (1) muestreo aleatorio (usado también como norma), (2) muestreo extendido, en el cual el proceso de selección fue extendido sobre una porción grande de una bodega, y (3) medición consecutiva de peces durante la descarga de la bodega. El muestreo repetido con cada método y distintas combinaciones de n y m indicó que, puesto que la fuente principal de variación de talla ocurría entre las unidades primarias, aumentar n fue la manera más eficaz de reducir las estimaciones de la varianza de las tallas de los grupos de edad y el número total de peces en los desembarcos. El muestreo extendido evitó mayormente los efectos de la estratificación por talla, y su desempeño fue esencialmente comparable a aquel del muestreo aleatorio. Se recomienda muestrear con este método. El muestreo de peces consecutivos produjo estimaciones más sesgadas con mayores varianzas. Un análisis de las muestras de frecuencia de talla de 1988 indicó que, para los grupos de edad que aparecen con mayor frecuencia en la captura, una frecuencia de muestreo mínima de una unidad primaria de cada seis para cada estrato de mes-área reduciría los coeficientes de variación (CV) de las estimaciones de talla correspondientes a aproximadamente 10% o menos. Una estratificación adicional de las muestras por tipo de lance, y no solamente mes-área, redujo aún más los CV de los grupos de edad escasos, tales como los reclutas, y mejoró potencialmente su precisión. Los CV de las estimaciones del reclutamiento para las cohortes completamente pescadas durante 1981-1984 fueron alrededor de 3-8%. Las estimaciones del reclutamiento y sus varianzas fueron también relativamente insensibles a cambios en las capturas de trimestres individuales y las varianzas, respectivamente, de las cuales fueron derivadas. (PDF contains 70 pages)
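The well-simulation comparison described above is not reproduced here, but the toy simulation below (with invented numbers) illustrates why measuring fish consecutively from a size-stratified well gives far more variable estimates of mean length than selecting fish at random.

```python
import random, statistics

random.seed(0)

# Toy "well": fish lengths drift upward during unloading (size stratification),
# so the order in which fish are removed is not random.
well = [60 + 0.02 * i + random.gauss(0, 3) for i in range(2000)]   # lengths in cm
m = 50                                                              # fish measured per well

consecutive_means, random_means = [], []
for _ in range(500):
    start = random.randrange(len(well) - m)
    consecutive_means.append(statistics.mean(well[start:start + m]))  # consecutive fish
    random_means.append(statistics.mean(random.sample(well, m)))      # randomly chosen fish

print("true mean of the well       :", round(statistics.mean(well), 2))
print("consecutive: sd of estimates:", round(statistics.stdev(consecutive_means), 2))
print("random     : sd of estimates:", round(statistics.stdev(random_means), 2))
```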
Abstract:
A central objective in signal processing is to infer meaningful information from a set of measurements or data. While most signal models have an overdetermined structure (fewer unknowns than equations), traditionally very few statistical estimation problems have considered a data model which is underdetermined (more unknowns than equations). In recent times, however, an explosion of theoretical and computational methods has been developed primarily to study underdetermined systems by imposing sparsity on the unknown variables. This is motivated by the observation that, in spite of the huge volume of data that arises in sensor networks, genomics, imaging, particle physics, web search, etc., the information content is often much smaller than the number of raw measurements. This has given rise to the possibility of reducing the number of measurements by downsampling the data, which automatically gives rise to underdetermined systems.
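As a concrete, generic illustration of sparse recovery from an underdetermined system (not one of the algorithms proposed in the thesis), the sketch below uses orthogonal matching pursuit, a standard greedy method, to recover a 3-sparse vector from 30 random measurements of 100 unknowns.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily estimate a k-sparse x from y = A @ x
    when A has more columns (unknowns) than rows (measurements)."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))       # most correlated column
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)      # refit on the support
        residual = y - A[:, support] @ x_s
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = x_s
    return x_hat

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 100))            # 30 measurements, 100 unknowns
x_true = np.zeros(100)
x_true[[5, 42, 77]] = [1.0, -2.0, 0.5]        # 3-sparse ground truth
x_rec = omp(A, A @ x_true, k=3)
print(sorted(np.nonzero(x_rec)[0]))           # recovered support, ideally [5, 42, 77]
```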
In this thesis, we provide new directions for estimation in underdetermined systems, both for a class of parameter estimation problems and for the problem of sparse recovery in compressive sensing. The thesis makes two main contributions: the design of new sampling and statistical estimation algorithms for array processing, and the development of improved guarantees for sparse reconstruction by introducing a statistical framework into the recovery problem.
We consider underdetermined observation models in array processing where the number of unknown sources simultaneously received by the array can be considerably larger than the number of physical sensors. We study new sparse spatial sampling schemes (array geometries) and propose new recovery algorithms that can exploit priors on the unknown signals and unambiguously identify all the sources. The proposed sampling structure is generic enough to be extended to multiple dimensions and to exploit different kinds of priors in the model, such as correlation, higher-order moments, etc.
Recognizing the role of correlation priors and suitable sampling schemes for underdetermined estimation in array processing, we introduce a correlation-aware framework for recovering sparse support in compressive sensing. We show that it is possible to strictly increase the size of the recoverable sparse support using this framework, provided the measurement matrix is suitably designed. The proposed nested and coprime arrays are shown to be appropriate candidates in this regard. We also provide new guarantees for convex and greedy formulations of the support recovery problem and demonstrate that it is possible to strictly improve upon existing guarantees.
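A nested array is one example of the sparse geometries mentioned above; the sketch below (with arbitrarily chosen parameters) builds a two-level nested array of six sensors and counts the distinct lags of its difference coarray, showing that the virtual aperture is much larger than the physical one.

```python
import numpy as np

def nested_array(n1, n2):
    """Positions (in units of the base spacing) of a two-level nested array:
    a dense inner ULA of n1 sensors plus a sparse outer ULA of n2 sensors."""
    inner = np.arange(1, n1 + 1)              # 1, 2, ..., n1
    outer = (n1 + 1) * np.arange(1, n2 + 1)   # (n1+1), 2(n1+1), ..., n2(n1+1)
    return np.concatenate([inner, outer])

positions = nested_array(3, 3)                # 6 physical sensors
lags = np.unique([p - q for p in positions for q in positions])
print("physical sensors     :", len(positions))   # 6
print("distinct coarray lags:", len(lags))         # 23: a filled virtual ULA from -11 to 11
```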
This new paradigm of underdetermined estimation that explicitly establishes the fundamental interplay between sampling, statistical priors and the underlying sparsity, leads to exciting future research directions in a variety of application areas, and also gives rise to new questions that can lead to stand-alone theoretical results in their own right.
Abstract:
Curve samplers are sampling algorithms that proceed by viewing the domain as a vector space over a finite field, and randomly picking a low-degree curve in it as the sample. Curve samplers exhibit a nice property besides the sampling property: the restriction of low-degree polynomials over the domain to the sampled curve is still low-degree. This property is often used in combination with the sampling property and has found many applications, including PCP constructions, local decoding of codes, and algebraic PRG constructions.
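The construction in the thesis is not sketched here, but the toy example below illustrates the sampling property of the simplest curve sampler, a random line (degree-1 curve) in F_p^m: the fraction of the line's points that land in a fixed subset approximates the subset's density. The field size, dimension, and subset are arbitrary choices for illustration.

```python
import random

p, m = 499, 2                      # the domain is the vector space F_p^m, p prime
random.seed(1)

# A fixed subset A of the domain; a sampler should estimate its density |A| / p^m.
A = {tuple(random.randrange(p) for _ in range(m)) for _ in range(50000)}
true_density = len(A) / p ** m

# Degree-1 curve sampler: choose a random line c(t) = a + t*b and inspect its p points.
a = [random.randrange(p) for _ in range(m)]
b = [random.randrange(p) for _ in range(m)]
while all(x == 0 for x in b):      # avoid the degenerate constant "curve"
    b = [random.randrange(p) for _ in range(m)]
hits = sum(tuple((a[i] + t * b[i]) % p for i in range(m)) in A for t in range(p))
print("true density:", round(true_density, 3), "| line-sampler estimate:", round(hits / p, 3))
```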
The randomness complexity of curve samplers is a crucial parameter for their applications. It is known that (non-explicit) curve samplers using O(log N + log(1/δ)) random bits exist, where N is the domain size and δ is the confidence error. The question of explicitly constructing randomness-efficient curve samplers was first raised in [TU06], where curve samplers with near-optimal randomness complexity were obtained.
In this thesis, we present an explicit construction of low-degree curve samplers with optimal randomness complexity (up to a constant factor) that sample curves of degree (m log_q(1/δ))^{O(1)} in F_q^m. Our construction is a delicate combination of several components, including extractor machinery, limited independence, iterated sampling, and list-recoverable codes.