990 results for Sub-sampling


Relevance:

20.00%

Publisher:

Abstract:

In 1998 the long time series of standardized bottom trawl surveys conducted in the western Baltic (ICES Sub-divisions (SD) 22 and 24, since 1978) during spring and autumn, and in the eastern Baltic (since 1993) during spring, was continued. The results of these surveys, together with those of a sampling programme carried out on board commercial cutters (mainly financed by an EU study project) and on the market, an investigation of the survival rate of discards, and the landing statistics, form the basis of the analysis of the German and international cod fishery in 1998. The German cod fishery was concentrated in ICES SD 22 and 24 (total landings 9722 t). Total landings from the fishing grounds east of Bornholm, the traditional German fishing ground, amounted to only about 1270 t. The German cod quota was utilized at 64 %. The fishery in ICES SD 22 and 24 was characterized by a discard rate of undersized cod of 13 %, corresponding to about 7.3 million specimens of 0-group, one- and two-year-old young fish. The total international cod landings from the Baltic amounted to 101 500 t. Compared with 1997 (total landings 129 600 t), landings decreased by 23 %. Utilization of the cod TAC (1998: 140 000 t) amounted to 74 % in 1998.

Relevance:

20.00%

Publisher:

Abstract:

Size effects in the mechanical behavior of materials refer to the variation of mechanical behavior as the sample size changes from the macroscale to the micro-/nanoscale. At the micro-/nanoscale, because a sample has a relatively high specific surface area (SSA, the ratio of surface area to volume), the surface energy effect, although it is often neglected at the macroscale, becomes prominent in governing the mechanical behavior. In the present research, a continuum model considering the surface energy effect is developed by introducing the surface energy into the total potential energy, and a corresponding finite element method is developed alongside it. The model is used to analyze the axial equilibrium strain of a Cu nanowire in the external-loading-free state. As another application of the model, based on dimensional analysis, the size effects observed in uniform compression tests on microscale cylindrical specimens of Ni and Au single crystals are analyzed and compared with experiments in the literature. (C) 2009 Elsevier B.V. All rights reserved.
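
A rough illustration of how such a model can be formulated (the specific energy functionals used by the authors are not given in the abstract and are assumed here): the surface energy is added to the bulk strain energy in the total potential, which is then made stationary.

% Minimal sketch, assuming a bulk strain energy density W, a surface energy
% density gamma defined on the surface strain, and external work W_ext;
% the cited model's actual forms are not reproduced here.
\Pi(\mathbf{u}) = \int_{V} W\big(\boldsymbol{\varepsilon}(\mathbf{u})\big)\,\mathrm{d}V
                + \int_{S} \gamma\big(\boldsymbol{\varepsilon}_{s}(\mathbf{u})\big)\,\mathrm{d}S
                - W_{\mathrm{ext}}(\mathbf{u}),
\qquad \delta\Pi = 0 .

Because the surface integral scales with area while the bulk integral scales with volume, the relative weight of the surface term grows as the specimen shrinks, which is the origin of the size effect described above.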

Relevance:

20.00%

Publisher:

Abstract:

The direct simulation Monte Carlo (DSMC) method is a widely used approach for simulating flows with rarefied or nonequilibrium effects. It relies heavily on sampling instantaneous values from prescribed distributions using random numbers. In this note, we briefly review the sampling techniques typically employed in the DSMC method and present two techniques to speed up the related sampling processes. One technique is very efficient for sampling the geometric locations of new particles, and the other is useful for the Larsen-Borgnakke energy distribution.
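
The note's own techniques are not reproduced here, but the kind of sampling it speeds up can be illustrated with the standard acceptance-rejection method commonly used in DSMC codes for prescribed distributions. The target density below is a placeholder, not the Larsen-Borgnakke form.

import numpy as np

def sample_acceptance_rejection(pdf, x_min, x_max, pdf_max, count, rng):
    """Draw `count` samples from an (unnormalized) pdf on [x_min, x_max] by
    acceptance-rejection: propose x uniformly, accept with probability
    pdf(x) / pdf_max."""
    out = np.empty(count)
    filled = 0
    while filled < count:
        x = rng.uniform(x_min, x_max, size=count - filled)
        u = rng.uniform(0.0, 1.0, size=count - filled)
        accepted = x[u * pdf_max < pdf(x)]
        out[filled:filled + accepted.size] = accepted
        filled += accepted.size
    return out

# Placeholder target density (not the Larsen-Borgnakke distribution):
rng = np.random.default_rng(0)
target = lambda x: x * np.exp(-x)          # peaks at x = 1, maximum value exp(-1)
samples = sample_acceptance_rejection(target, 0.0, 10.0, np.exp(-1), 100_000, rng)
print(samples.mean())                      # close to 2 for this density

The cost of such loops depends strongly on how many proposals are rejected, which is one reason tailored sampling schemes such as those discussed in the note pay off.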

Relevance:

20.00%

Publisher:

Abstract:

ENGLISH: A two-stage sampling design is used to estimate the variances of the numbers of yellowfin in different age groups caught in the eastern Pacific Ocean. For purse seiners, the primary sampling unit (n) is a brine well containing fish from a month-area stratum; the fish lengths (m) measured from each well are the secondary units. The fish cannot be selected at random from the wells because of practical limitations. The effects of different sampling methods and other factors on the reliability and precision of statistics derived from the length-frequency data were therefore examined, and modifications are recommended where necessary. Lengths of fish measured during the unloading of six test wells revealed two forms of inherent size stratification: 1) short-term disruptions of the existing pattern of sizes, and 2) transition zones between long-term trends in sizes. To some degree, all wells exhibited cyclic changes in mean size and variance during unloading. In half of the wells, size selection by the unloaders was observed to induce a change in mean size. As a result of stratification, the sequence of sizes removed from all wells was non-random, regardless of whether a well contained fish from a single set or from more than one set. The number of modal sizes in a well was not related to the number of sets. In an additional well containing fish from several sets, an experiment on vertical mixing indicated that a representative sample of the contents may be restricted to the bottom half of the well. The contents of the test wells were used to generate 25 simulated wells and to compare the results of three sampling methods applied to them. The methods were: (1) random sampling (also used as a standard), (2) protracted sampling, in which the selection process was extended over a large portion of a well, and (3) measuring fish consecutively during removal from the well. Repeated sampling by each method and different combinations of n and m indicated that, because the principal source of size variation occurred among primary units, increasing n was the most effective way to reduce the variance estimates of both the age-group sizes and the total number of fish in the landings. Protracted sampling largely circumvented the effects of size stratification, and its performance was essentially comparable to that of random sampling; sampling by this method is recommended. Consecutive-fish sampling produced more biased estimates with greater variances. Analysis of the 1988 length-frequency samples indicated that, for the age groups that appear most frequently in the catch, a minimum sampling frequency of one primary unit in six for each month-area stratum would reduce the coefficients of variation (CV) of their size estimates to approximately 10 percent or less. Additional stratification of samples by set type, rather than by month-area alone, further reduced the CVs of scarce age groups, such as the recruits, and potentially improved their accuracy. The CVs of recruitment estimates for completely-fished cohorts during the 1981-1984 period were in the vicinity of 3 to 8 percent. Recruitment estimates and their variances were also relatively insensitive to changes in the individual quarterly catches and variances, respectively, of which they were composed.
(PDF contains 70 pages)
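
A minimal sketch of the kind of two-stage (wells as primary units, measured fish as secondary units) variance calculation described above. The well counts, length distributions, and the simple between-well estimator used here are illustrative assumptions, not the report's actual data or estimator.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: n wells (primary units), m length measurements per well.
n_wells, m_fish = 12, 50
true_well_means = rng.normal(90.0, 8.0, size=n_wells)      # between-well variation (cm)
lengths = rng.normal(true_well_means[:, None], 4.0,         # within-well variation
                     size=(n_wells, m_fish))

well_means = lengths.mean(axis=1)          # per-well mean length
overall_mean = well_means.mean()

# Between-primary-unit ("ultimate cluster") variance estimator of the mean:
# most of the variation is among wells, so the variance of the estimate is
# driven by the spread of the well means and shrinks roughly as 1/n_wells.
var_mean = well_means.var(ddof=1) / n_wells
cv = np.sqrt(var_mean) / overall_mean

print(f"mean length {overall_mean:.1f} cm, CV {100 * cv:.1f} %")

Increasing the number of wells sampled reduces this variance directly, which mirrors the report's recommendation to increase n rather than m.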

Relevance:

20.00%

Publisher:

Abstract:

Gold Coast Water is responsible for the management of the water and wastewater assets of the City of the Gold Coast on Australia's east coast. Treated wastewater is released at the Gold Coast Seaway on an outgoing tide so that the plume is dispersed before the tide changes and re-enters the Broadwater estuary. Rapid population growth over the past decade has placed increasing demands on the receiving waters for the release of the City's effluent. The Seaway SmartRelease Project is designed to optimise the release of effluent from the City's main wastewater treatment plant (WWTP) in order to minimise the impact on estuarine water quality and maximise the cost efficiency of pumping. To this end, an optimisation study involving water quality monitoring, numerical modelling and a web-based decision support system was conducted. An intensive monitoring campaign provided information on water levels, currents, winds, waves, nutrients and bacterial levels within the Broadwater. These data were then used to calibrate and verify numerical models built with the MIKE by DHI suite of software. The decision support system collects continually measured data such as water levels, interacts with the WWTP SCADA system, runs the models in forecast mode and provides the optimal time window in which to release the required amount of effluent from the WWTP. The City's increasing population means that the length of time available for releasing the water with minimal impact may be exceeded within 5 years. Optimising the release of the treated water through monitoring, modelling and a decision support system has been an effective way of demonstrating the limited environmental impact of the expected short-term increase in effluent disposal. (PDF contains 5 pages)
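
The abstract does not give the scheduling logic itself; the sketch below only illustrates the general idea of choosing a release window on an outgoing tide from a forecast water-level series. The function name, the tide model and the 4-hour pumping requirement are hypothetical, not taken from the SmartRelease system.

import numpy as np

def outgoing_tide_windows(times_h, levels_m, min_hours):
    """Return (start, end) times of contiguous periods in which the forecast
    water level is falling (outgoing tide) for at least min_hours."""
    falling = np.diff(levels_m) < 0.0
    windows, start = [], None
    for i, is_falling in enumerate(falling):
        if is_falling and start is None:
            start = times_h[i]
        elif not is_falling and start is not None:
            if times_h[i] - start >= min_hours:
                windows.append((start, times_h[i]))
            start = None
    if start is not None and times_h[-1] - start >= min_hours:
        windows.append((start, times_h[-1]))
    return windows

# Hypothetical 48 h forecast of a semi-diurnal tide.
t = np.arange(0.0, 48.0, 0.25)                       # hours
level = 0.5 * np.sin(2 * np.pi * t / 12.42)          # metres about mean sea level

# Suppose pumping the day's effluent volume takes 4 h; take the first window.
window = outgoing_tide_windows(t, level, min_hours=4.0)[0]
print(f"release between t = {window[0]:.1f} h and t = {window[1]:.1f} h")

In the real system the choice would also weigh forecast water quality from the calibrated models and pumping costs, not just the tidal phase.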

Relevance:

20.00%

Publisher:

Abstract:

A time-domain spectrometer for use in the terahertz (THz) spectral range was designed and constructed. Because few methods exist for generating and detecting THz radiation, the spectrometer is expected to have a wide range of applications to solid-, liquid-, and gas-phase samples. In particular, knowledge of complex organic chemistry and chemical abundances in the interstellar medium (ISM) can be obtained when laboratory spectra are compared to astronomical data. The THz spectral region is of particular interest because of its reduced line density compared with the millimeter-wave spectrum, the existence of high-resolution observatories, and the potentially strong transitions arising from the lowest-lying vibrational modes of large molecules.

The heart of the THz time-domain spectrometer (THz-TDS) is the ultrafast laser. Because of the femtosecond duration of ultrafast laser pulses and the energy-time uncertainty relationship, the pulses typically have a bandwidth of several THz. By various means of optical rectification, the optical pulse carrier-envelope shape, i.e. the intensity-time profile, can be transferred to the phase of the resulting THz pulse. As a consequence, optical pump-THz probe spectroscopy is readily achieved, as demonstrated in studies of dye-sensitized TiO2 discussed in chapter 4. Detection of the terahertz radiation is commonly based on electro-optic sampling and provides full phase information. This allows accurate determination of both the real and imaginary parts of the index of refraction, the so-called optical constants, without additional analysis. A suite of amino acids and sugars, all of which have been found in meteorites, was studied in crystalline form embedded in a polyethylene matrix. As the temperature was varied between 10 and 310 K, various strong vibrational modes were found to shift in spectral intensity and frequency. Such modes can be attributed to intramolecular, intermolecular, or phonon modes, or to some combination of the three.
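
As an illustration of the kind of analysis that full phase information enables, the sketch below extracts approximate optical constants from a reference and a sample time trace using the standard thick-slab, normal-incidence approximation. The toy traces, the 1 mm thickness and the analysis band are placeholders; etalon corrections used in practice are omitted.

import numpy as np

c = 299_792_458.0          # speed of light (m/s)

def optical_constants(t_s, e_ref, e_sam, thickness):
    """Approximate refractive index n(w) and absorption coefficient alpha(w)
    from THz-TDS traces of a slab, using the standard thick-sample,
    normal-incidence approximation (Fresnel losses included, echoes ignored)."""
    E_ref = np.fft.rfft(e_ref)
    E_sam = np.fft.rfft(e_sam)
    w = 2 * np.pi * np.fft.rfftfreq(t_s.size, d=t_s[1] - t_s[0])
    T = E_sam / E_ref
    phase = -np.unwrap(np.angle(T))                    # delay gives positive phase
    with np.errstate(divide="ignore", invalid="ignore"):
        n = 1.0 + c * phase / (w * thickness)
        alpha = -(2.0 / thickness) * np.log(np.abs(T) * (n + 1.0) ** 2 / (4.0 * n))
    return w, n, alpha

# Toy traces standing in for measured data: a reference pulse and a copy that is
# delayed and attenuated as if it passed through a 1 mm slab with n ~ 1.5.
t = np.arange(0.0, 40e-12, 0.05e-12)
pulse = lambda t0: np.exp(-((t - t0) / 0.3e-12) ** 2) * np.cos(2 * np.pi * 1e12 * (t - t0))
d = 1e-3
e_ref = pulse(5e-12)
e_sam = 0.8 * pulse(5e-12 + (1.5 - 1.0) * d / c)

w, n, alpha = optical_constants(t, e_ref, e_sam, d)
band = (w > 2 * np.pi * 0.5e12) & (w < 2 * np.pi * 1.5e12)
print(f"retrieved index in band: n ~ {n[band].mean():.2f}")    # close to 1.5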

Relevance:

20.00%

Publisher:

Abstract:

Women all over the world have contributed in various ways to the social, political and economic development of society. Indeed, the World Resource Institute recognizes that "women have profound and pervasive effects on the well-being of their families, communities and local ecosystems" (Gamble and Well 1997:211). Women constitute more than 50 percent of the agricultural labour force (fisheries being a sub-sector). A study on women in fisheries showed that they participate in all aspects of the sector (capture, culture, processing, marketing, research, training and extension services). This paper reports the results of a study on women's contributions to the development of the fisheries industry, particularly their roles in fish food security and poverty alleviation and the high rates of women's adoption of fisheries technologies. A case-study research methodology is used to examine how and why women's contribution to fish food security and poverty alleviation is at the index level recorded for the gender. The study made use of case-study research instruments: documents, interviews, artefacts, direct observation and archival records. The sampling techniques were purposive for the research audiences and simple random for fisher-folk in the chosen locations. The analysed data showed, among other things, that in fisheries research women occupy very important positions as heads of divisions/sections, fisheries liaison/extension officers and fisheries laboratory chiefs. The paper also presents statistics on women's production, processing, marketing and other services; it discusses the reasons for women's low capacity in the nation's fisheries development and finally suggests ways of improving women's optimal capacity utilization in fisheries development.

Relevance:

20.00%

Publisher:

Abstract:

In this reservoir, the parameters assessed are very important for fish culture. They are: physical parameters, including temperature (°C) and transparency (m); chemical parameters, including dissolved oxygen (mg/l) and pH; and biological parameters, including phytoplankton and zooplankton. The phytoplankton and zooplankton identification and counts were carried out in the NIFFR Limnology Laboratory (Green House), New Bussa. Each identified zooplankton and phytoplankton species was assigned to its major group; the zooplankton, for example, were grouped into Rotifera, Cladocera and Copepoda. During the study period it was observed that copepods had the highest total number among the zooplankton at both stations, beside the poultry house (station 'A') and near the monk (station 'B'). The water temperature at station 'A' (beside the poultry house) ranged from 27°C to 29.5°C, as it did at station 'B' (near the monk). Dissolved oxygen at station 'A' ranged from 6.30 mg/l to 7.40 mg/l, while at station 'B' it ranged from 6.20 mg/l to 7.50 mg/l; transparency readings at station 'A' ranged from 0.19 m to 0.3 m, while at station 'B' they ranged from 0.22 m to 0.37 m. The last parameter, pH, was 8.2 at both stations, indicating that the pH was constant. According to the literature reviewed, all the water quality values obtained are suitable for fish culture.

Relevance:

20.00%

Publisher:

Abstract:

A central objective in signal processing is to infer meaningful information from a set of measurements or data. While most signal models have an overdetermined structure (fewer unknowns than equations), traditionally very few statistical estimation problems have considered a data model that is underdetermined (more unknowns than equations). In recent times, however, an explosion of theoretical and computational methods has been developed, primarily to study underdetermined systems by imposing sparsity on the unknown variables. This is motivated by the observation that, in spite of the huge volume of data that arises in sensor networks, genomics, imaging, particle physics, web search, etc., the information content is often much smaller than the number of raw measurements. This has given rise to the possibility of reducing the number of measurements by downsampling the data, which automatically gives rise to underdetermined systems.
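
As a small, generic illustration of the sparsity-based recovery alluded to above (not taken from the thesis), an underdetermined system Ax = b with a sufficiently sparse x can often be recovered exactly by minimizing the l1 norm, which reduces to a linear program.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k = 25, 60, 4                      # 25 equations, 60 unknowns, 4 nonzeros

A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true

# Basis pursuit: min ||x||_1 subject to Ax = b, as an LP in x = u - v with u, v >= 0.
cost = np.ones(2 * n)
res = linprog(cost, A_eq=np.hstack([A, -A]), b_eq=b, bounds=(0, None), method="highs")
x_hat = res.x[:n] - res.x[n:]

print("max recovery error:", np.abs(x_hat - x_true).max())   # typically ~1e-9 or smaller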

In this thesis, we provide new directions for estimation in an underdetermined system, both for a class of parameter estimation problems and also for the problem of sparse recovery in compressive sensing. There are two main contributions of the thesis: design of new sampling and statistical estimation algorithms for array processing, and development of improved guarantees for sparse reconstruction by introducing a statistical framework to the recovery problem.

We consider underdetermined observation models in array processing where the number of unknown sources simultaneously received by the array can be considerably larger than the number of physical sensors. We study new sparse spatial sampling schemes (array geometries) as well as propose new recovery algorithms that can exploit priors on the unknown signals and unambiguously identify all the sources. The proposed sampling structure is generic enough to be extended to multiple dimensions as well as to exploit different kinds of priors in the model such as correlation, higher order moments, etc.
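
One way to see how a sparse array geometry can resolve more sources than it has sensors is to count the distinct lags in its difference coarray, since correlation-based processing effectively operates on those lags. The coprime-style geometry below is only an illustration, not the specific designs proposed in the thesis.

import numpy as np

def difference_coarray(positions):
    """Distinct pairwise differences (lags) generated by a set of sensor positions."""
    p = np.asarray(positions)
    return np.unique(p[:, None] - p[None, :])

# Coprime-style geometry with M = 3, N = 5 (positions in units of the base spacing).
M, N = 3, 5
positions = sorted(set(range(0, M * N, M)) | set(range(0, 2 * M * N, N)))

lags = difference_coarray(positions)
print(len(positions), "physical sensors")      # 10 sensors for this choice
print(len(lags), "distinct coarray lags")      # considerably more virtual lags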

Recognizing the role of correlation priors and suitable sampling schemes for underdetermined estimation in array processing, we introduce a correlation aware framework for recovering sparse support in compressive sensing. We show that it is possible to strictly increase the size of the recoverable sparse support using this framework provided the measurement matrix is suitably designed. The proposed nested and coprime arrays are shown to be appropriate candidates in this regard. We also provide new guarantees for convex and greedy formulations of the support recovery problem and demonstrate that it is possible to strictly improve upon existing guarantees.
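
A rough sketch of the correlation-aware idea in a toy setting (the thesis's guarantees and algorithms are not reproduced, and the random matrix below stands in for a designed one): if the sources are uncorrelated, the vectorized covariance matrix acts as a measurement of the source powers through a much taller Khatri-Rao product matrix, so a support larger than the number of sensors can be identified.

import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
m, n, k, snapshots = 6, 20, 8, 20000        # 8 active sources > 6 sensors

A = rng.standard_normal((m, n))
support = rng.choice(n, k, replace=False)
powers = np.zeros(n)
powers[support] = rng.uniform(1.0, 2.0, k)

# Snapshots y = A x + noise, with uncorrelated sources of power `powers`.
X = rng.standard_normal((n, snapshots)) * np.sqrt(powers)[:, None]
Y = A @ X + 0.1 * rng.standard_normal((m, snapshots))
R = Y @ Y.T / snapshots                     # sample covariance (m x m)

# vec(R) = (A Khatri-Rao A) p + sigma^2 vec(I): m^2 equations for n powers.
KR = np.column_stack([np.kron(A[:, j], A[:, j]) for j in range(n)])
B = np.column_stack([KR, np.eye(m).ravel()])       # last column absorbs noise power
p_hat, _ = nnls(B, R.ravel())

estimated = set(np.argsort(p_hat[:n])[-k:].tolist())
print("support recovered:", estimated == set(support.tolist()))   # typically True here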

This new paradigm of underdetermined estimation that explicitly establishes the fundamental interplay between sampling, statistical priors and the underlying sparsity, leads to exciting future research directions in a variety of application areas, and also gives rise to new questions that can lead to stand-alone theoretical results in their own right.

Relevance:

20.00%

Publisher:

Abstract:

A spectral-filter method for obtaining sub-5 fs pulses via femtosecond filamentation in fused silica is demonstrated numerically. Instead of employing spectral phase compensation, a high-pass filter is used to select the broadened high-frequency spectral components, which lie almost in phase in the trailing edge of the self-compressed pulse owing to self-steepening; in this way, pulses as short as a single cycle can be obtained. For instance, for an input pulse with a duration of 50 fs and an energy of 2.2 μJ, the minimum pulse duration can reach ~4 fs (about 1.5 cycles) when a proper spectral filter is applied. (C) 2008 Optical Society of America
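
The filamentation simulation itself is beyond the scope of this summary, but the filtering step can be illustrated generically: given a spectrally broadened pulse whose high-frequency components are nearly in phase while the rest is strongly chirped (synthesized artificially below, not taken from the paper), a spectral high-pass filter alone yields a much shorter pulse. All numbers are placeholders.

import numpy as np

def fwhm(t, intensity):
    """Full width at half maximum (first to last half-maximum crossing)."""
    above = t[intensity >= 0.5 * intensity.max()]
    return above[-1] - above[0]

# Time (fs) and angular frequency (rad/fs) grids.
n, dt = 4096, 0.2
t = (np.arange(n) - n // 2) * dt
w = 2 * np.pi * np.fft.fftfreq(n, d=dt)

# Toy analytic-signal spectrum standing in for the self-compressed filament
# output: a strong, strongly chirped low-frequency part plus a weaker
# high-frequency part that is nearly in phase above the cutoff w_c.
w_c = 2.4
amp = np.exp(-((w - 1.5) / 0.6) ** 2) + 0.15 * np.exp(-((w - 3.2) / 0.5) ** 2)
phase = np.where((w > 0) & (w < w_c), 25.0 * (w - w_c) ** 2, 0.0)
spectrum = np.where(w > 0, amp * np.exp(1j * phase), 0.0)

pulse = np.fft.fftshift(np.fft.ifft(spectrum))
filtered = np.fft.fftshift(np.fft.ifft(np.where(w > w_c, spectrum, 0.0)))

print(f"before filtering: FWHM = {fwhm(t, np.abs(pulse) ** 2):5.1f} fs")     # tens of fs
print(f"after high-pass:  FWHM = {fwhm(t, np.abs(filtered) ** 2):5.1f} fs")  # a few fs

The price of the filter is discarded pulse energy, but no spectral phase compensation is needed, which is the point made in the abstract.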