920 results for kernel estimators
Abstract:
Growth of a temperate reef-associated fish, the purple wrasse (Notolabrus fucicola), was examined from two sites on the east coast of Tasmania by using age- and length-based models. Models based on the von Bertalanffy growth function, in the standard and a reparameterized form, were constructed by using otolith-derived age estimates. Growth trajectories from tag-recaptures were used to construct length-based growth models derived from the GROTAG model, in turn a reparameterization of the Fabens model. Likelihood ratio tests (LRTs) determined the optimal parameterization of the GROTAG model, including estimators of individual growth variability, seasonal growth, measurement error, and outliers for each data set. Growth models and parameter estimates were compared by using bootstrap confidence intervals, LRTs, randomization tests, and plots of bootstrap parameter estimates. The relative merit of these methods for comparing models and parameters was evaluated; LRTs combined with bootstrapping and randomization tests provided the most insight into the relationships between parameter estimates. Significant differences in growth of purple wrasse were found between sites in both length- and age-based models. A significant difference in the peak growth season was found between sites, and a large difference in growth rate between sexes was found at one site with the use of length-based models.
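For reference, the standard von Bertalanffy growth function and the Fabens increment form that GROTAG reparameterizes are usually written as below; this is the textbook statement, not the specific parameterization fitted in the study.

```latex
% Standard von Bertalanffy growth function: length at age t
L(t) = L_\infty \left( 1 - e^{-K\,(t - t_0)} \right)

% Fabens increment form for a tagged animal of length L_1 recaptured after time \Delta t
\Delta L = \left( L_\infty - L_1 \right) \left( 1 - e^{-K\,\Delta t} \right)
```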
Abstract:
Size distribution within reported landings is an important aspect of northern Gulf of Mexico penaeid shrimp stock assessments. It reflects shrimp population characteristics such as numerical abundance of various sizes, age structure, and vital rates (e.g. recruitment, growth, and mortality), as well as effects of fishing, fishing power, fishing practices, sampling, size-grading, etc. The usual measure of shrimp size in archived landings data is count (C), the number of shrimp tails (abdomen or edible portion) per pound (0.4536 kg). Shrimp are marketed and landings reported in pounds within tail count categories. Statistically, these count categories are count class intervals or bins with upper and lower limits expressed in C. Count categories vary in width, overlap, and frequency of occurrence within the landings. The upper and lower limits of most count class intervals can be transformed to lower and upper limits (respectively) of class intervals expressed in pounds per shrimp tail, w, the reciprocal of C (i.e. w = 1/C). Age-based stock assessments have relied on various algorithms to estimate numbers of shrimp from pounds landed within count categories. These algorithms required underlying explicit or implicit assumptions about the distribution of C or w. However, no attempts were made to assess the actual distribution of C or w. Therefore, validity of the algorithms and assumptions could not be determined. When different algorithms were applied to landings within the same size categories, they produced different estimates of numbers of shrimp. This paper demonstrates a method of simulating the distribution of w in reported biological year landings of shrimp. We used, as examples, landings of brown shrimp, Farfantepenaeus aztecus, from the northern Gulf of Mexico fishery in biological years 1986–2006. Brown shrimp biological year, Ti, is defined as beginning on 1 May of the same calendar year as Ti and ending on 30 April of the next calendar year, where subscript i is the place marker for biological year. Biological year landings encompass most if not all of the brown shrimp life cycle and life span. Simulated distributions of w reflect all factors influencing sizes of brown shrimp in the landings within a given biological year. Our method does not require a priori assumptions about the parent distributions of w or C, and it takes into account the variability in width, overlap, and frequency of occurrence of count categories within the landings. Simulated biological year distributions of w can be transformed to equivalent distributions of C. Our method may be useful in future testing of previously applied algorithms and development of new estimators based on statistical estimation theory and the underlying distribution of w or C. We also examine some applications of biological year distributions of w, and additional variables derived from them.
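A minimal sketch of the count-to-weight transformation and the bin structure described above follows. It is not the authors' simulation method: the uniform-within-bin draw and the midpoint-based shrimp count are illustrative assumptions, and the category limits and pounds landed are made-up example values.

```python
import numpy as np

# Illustrative sketch only (not the authors' algorithm): transform tail-count
# category limits C into weight-per-tail limits w = 1/C and draw simulated tail
# weights within each category, weighted by pounds landed.
rng = np.random.default_rng(0)

# Hypothetical count categories (tails per pound) and pounds landed in each.
categories = [  # (C_lower, C_upper, pounds_landed)
    (15, 20, 120_000.0),
    (21, 25, 250_000.0),
    (26, 30, 180_000.0),
]

samples = []
for c_lo, c_hi, pounds in categories:
    w_lo, w_hi = 1.0 / c_hi, 1.0 / c_lo       # note: the limits swap under w = 1/C
    n_est = int(pounds / ((w_lo + w_hi) / 2))  # crude shrimp count via midpoint weight
    n_draw = max(1, n_est // 1000)             # thin the draws for illustration
    samples.append(rng.uniform(w_lo, w_hi, size=n_draw))  # uniform-within-bin assumption

w = np.concatenate(samples)
print(f"simulated w: n={w.size}, mean={w.mean():.4f} lb/tail, "
      f"equivalent mean C={np.mean(1.0 / w):.1f} tails/lb")
```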
Abstract:
This work presents the development of a dynamic positioning system for a small vessel based on variable structure control with computer vision feedback. Several techniques from the literature were investigated, and variable structure control was chosen mainly because of the actuation mode of the thrusters on the boat used in the experiments. In addition, the robustness of the chosen control technique was considered important, since the model used has uncertainty in its dynamics. The design of the sliding surface used to realize the variable structure control is also presented. As the measurement instrument, computer vision techniques were applied to images captured by a webcam; this type of system was chosen because of the high precision of its measurements combined with its low cost. Simulations and experiments with discrete-time variable structure control are presented, using the integral of the position error to eliminate steady-state error. Since the controller requires the full state, four discrete-time state estimators are compared: an approximate differentiator; an asymptotic observer with a sampling frequency equal to that of the camera; an asymptotic observer with a sampling frequency higher than that of the camera; and a Kalman filter.
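As an illustration of the last of the compared estimators, a minimal discrete-time Kalman filter for a single position coordinate measured by a camera is sketched below, assuming a constant-velocity model; the sampling period, noise covariances, and example measurements are placeholder values, not those of the thesis.

```python
import numpy as np

# Minimal discrete-time Kalman filter for one position axis measured by a camera.
dt = 1.0 / 30.0                          # assumed camera sample period [s]
F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition (position, velocity)
H = np.array([[1.0, 0.0]])               # camera measures position only
Q = np.diag([1e-4, 1e-3])                # assumed process noise covariance
R = np.array([[1e-2]])                   # assumed measurement noise covariance

x = np.zeros((2, 1))                     # state estimate [position; velocity]
P = np.eye(2)                            # estimate covariance

def kalman_step(x, P, z):
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with camera measurement z
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

for z in [0.02, 0.05, 0.07, 0.11]:       # example position measurements [m]
    x, P = kalman_step(x, P, np.array([[z]]))
print("estimated position, velocity:", x.ravel())
```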
Where are we headed with carbon sequestration? The sociotechnical network of carbon assimilated by mangroves
Abstract:
This study defends the thesis that developing the capacity to approach reality in an integrated way, through interdisciplinary and/or transdisciplinary exercises, makes science more interesting, more complete and, at the same time, more commonplace. To that end, instead of treating "carbon stocks and sequestration in mangroves" as a natural object of study belonging to Oceanography, Ecology, Forest Engineering and other disciplines in the same field, we chose to describe the sociotechnical network of the carbon assimilated by mangroves. This is an immersion in scientific practice and a tracing of the social ties created and recreated by our relationship with technoscientific objects. The text resulting from these movements, which ranges from plant biomass estimators and forest peoples to the Climate Convention, large "green" corporations and national emission inventories, offers a reflection on how science has developed and presented itself to the contemporary (modern) world, regulating the political order. It is precisely to this reflection that the question "Where are we headed with carbon sequestration?" invites us. After perceiving carbon as a hybrid of nature and culture, in its natural-social, scientific-political and local-global aspects, we conclude that developing methodologies and producing estimates of forest carbon stocks and sequestration can be a path toward mitigating global warming, provided they are not posited in advance as knowledge that is essential to that end and that, for being natural/scientific, subjugates all other forms of knowledge.
Abstract:
In this dissertation we present a class of preconditioners based on sparse approximations of the inverse of the coefficient matrix, for solving large sparse linear systems by iterative methods, more specifically Krylov methods. For a Krylov method to be efficient, the use of preconditioners is essential. In the current context, in which computers with hybrid architectures are increasingly common, there is a growing demand for parallelizable preconditioners. The approximate inverse methods described here lend themselves to parallel application, since they depend only on a matrix-vector product, an operation that is highly parallelizable. Moreover, some of the methods can also be constructed in parallel. The main idea is to present an alternative to the traditional preconditioners based on approximate LU factors, which, although robust, are difficult to parallelize.
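A minimal sketch of how an approximate inverse is applied as a preconditioner inside a Krylov solve is shown below, using the most trivial member of the family (a diagonal approximate inverse) with SciPy's GMRES; the dissertation's actual constructions are more elaborate, but they are applied the same way, through a matrix-vector product.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Build a simple sparse test system (1-D Laplacian-like tridiagonal matrix).
n = 1000
main = 4.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
A = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csr")
b = np.ones(n)

# Diagonal (Jacobi) approximate inverse: the simplest sparse approximate inverse.
M_inv = sp.diags(1.0 / A.diagonal())
# Applied through a mat-vec, which is the parallel-friendly operation noted above.
M = spla.LinearOperator((n, n), matvec=M_inv.dot)

x, info = spla.gmres(A, b, M=M)
print("GMRES converged" if info == 0 else f"GMRES info={info}",
      "| residual norm:", np.linalg.norm(b - A @ x))
```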
Abstract:
MOTIVATION: Synthetic lethal interactions represent pairs of genes whose individual mutations are not lethal, whereas the simultaneous mutation of both genes is lethal. Several studies have shown a correlation between functional similarity of genes and their distances in networks based on synthetic lethal interactions. However, there is a lack of algorithms for predicting gene function from synthetic lethal interaction networks. RESULTS: In this article, we present a novel technique called kernelROD for gene function prediction from synthetic lethal interaction networks based on kernel machines. We apply our novel algorithm to Gene Ontology functional annotation prediction in yeast. Our experiments show that our method leads to improved gene function prediction compared with state-of-the-art competitors and that combining genetic and congruence networks leads to a further improvement in prediction accuracy.
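The sketch below illustrates only the general pattern of combining kernels derived from two networks and training a kernel machine on the combination; it is not the kernelROD algorithm, and the linear kernels, random stand-in data, and fixed combination weight are all placeholders.

```python
import numpy as np
from sklearn.svm import SVC

# Generic weighted-sum kernel combination with an SVM, as a stand-in for
# "kernel machine on combined genetic and congruence networks".
rng = np.random.default_rng(0)
n = 60
X1 = rng.normal(size=(n, 10))             # stand-ins for network-derived features
X2 = rng.normal(size=(n, 10))
y = rng.integers(0, 2, size=n)            # hypothetical GO-term labels (has / has not)

K_genetic = X1 @ X1.T                      # linear kernels as placeholders for
K_congruence = X2 @ X2.T                   # e.g. diffusion kernels on each network

alpha = 0.5                                # combination weight (would be tuned)
K = alpha * K_genetic + (1 - alpha) * K_congruence

clf = SVC(kernel="precomputed").fit(K, y)  # train on the combined kernel matrix
print("training accuracy (smoke test on fake data):", clf.score(K, y))
```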
Abstract:
A critical process in assessing the impact of marine sanctuaries on fish stocks is the movement of fish out into surrounding fished areas. A method is presented for estimating the yearly rate of emigration of animals from a protected (“no-take”) zone. Movement rates for exploited populations are usually inferred from tag-recovery studies, where tagged individuals are released into the sea at known locations and their location of recapture is reported by fishermen. There are three drawbacks, however, with this method of estimating movement rates: 1) if animals are tagged and released into both protected and fished areas, movement rates will be overestimated if the prohibition on recapturing tagged fish later from within the protected area is not made explicit; 2) the times of recapture are random; and 3) an unknown proportion of tagged animals are recaptured but not reported back to researchers. An estimation method is proposed which addresses these three drawbacks of tag-recovery data. An analytic formula and an associated double-hypergeometric likelihood method were derived. These two estimators of emigration rate were applied to tag recoveries from southern rock lobsters (Jasus edwardsii) released into a sanctuary and into its surrounding fished area in South Australia.
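For contrast with the estimators derived in the paper (which are not reproduced here), the naive emigration-rate estimator that ignores drawback 1 would simply be the fraction of recaptures occurring outside the protected zone among all recaptures of animals released inside it; since recapture inside a no-take zone is prohibited, the inside recaptures are effectively unobservable and this naive form presumably overestimates emigration, which is what the corrected estimators address.

```latex
% Naive emigration-rate estimator (not the paper's); r_out and r_in are recaptures
% outside and inside the protected zone of animals released inside it.
\hat{E}_{\text{naive}} = \frac{r_{\text{out}}}{r_{\text{out}} + r_{\text{in}}}
```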
Abstract:
Bycatch, or the incidental catch of nontarget organisms during fishing operations, is a major issue in U.S. shrimp trawl fisheries. Because bycatch is typically discarded at sea, total bycatch is usually estimated by extrapolating from an observed bycatch sample to the entire fleet with either mean-per-unit or ratio estimators. Using both field observations of commercial shrimp trawlers and computer simulations, I compared five methods for generating bycatch estimates that were used in past studies, a mean-per-unit estimator and four forms of the ratio estimator, respectively: 1) the mean fish catch per unit of effort, where unit effort was a proxy for sample size, 2) the mean of the individual fish to shrimp ratios, 3) the ratio of mean fish catch to mean shrimp catch, 4) the mean of the ratios of fish catch per time fished (a variable measure of effort), and 5) the ratio of mean fish catch per mean time fished. For field data, different methods used to estimate bycatch of Atlantic croaker, spot, and weakfish yielded extremely different results, with no discernible pattern in the estimates by method, geographic region, or species. Simulated fishing fleets were used to compare bycatch estimated by the five methods with "actual" (simulated) bycatch. Simulations were conducted by using both normal and delta lognormal distributions of fish and shrimp and employed a range of values for several parameters, including mean catches of fish and shrimp, variability in the catches of fish and shrimp, variability in fishing effort, number of observations, and correlations between fish and shrimp catches. Results indicated that only the mean-per-unit estimators provided statistically unbiased estimates, while all other methods overestimated bycatch. The mean of the individual fish to shrimp ratios, the method used in the South Atlantic Bight before the 1990s, gave the most biased estimates. Because of the statistically significant two- and three-way interactions among parameters, it is unlikely that estimates generated by one method can be converted or corrected to estimates made by another method; therefore bycatch estimates obtained with different methods should not be compared directly.
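The five per-tow estimators named above can be written compactly as in the sketch below; the function and variable names are mine, the example numbers are made up, and the fleet-level expansion (multiplying each rate by total fleet effort or shrimp landings) is omitted.

```python
import numpy as np

# The five bycatch estimators compared above, for arrays of per-tow fish catch,
# shrimp catch, and time fished from observed tows.
def bycatch_estimators(fish, shrimp, time_fished):
    fish, shrimp, time_fished = map(np.asarray, (fish, shrimp, time_fished))
    return {
        # 1) mean-per-unit: mean fish catch per unit of effort (the tow as the unit)
        "mean_per_unit": fish.mean(),
        # 2) mean of the individual fish-to-shrimp ratios
        "mean_of_ratios_shrimp": np.mean(fish / shrimp),
        # 3) ratio of mean fish catch to mean shrimp catch
        "ratio_of_means_shrimp": fish.mean() / shrimp.mean(),
        # 4) mean of the ratios of fish catch per time fished
        "mean_of_ratios_time": np.mean(fish / time_fished),
        # 5) ratio of mean fish catch to mean time fished
        "ratio_of_means_time": fish.mean() / time_fished.mean(),
    }

# Example with made-up tow observations (kg of fish, kg of shrimp, hours fished).
est = bycatch_estimators(fish=[12.0, 30.5, 8.2], shrimp=[3.1, 6.4, 2.0],
                         time_fished=[2.5, 4.0, 1.5])
for name, value in est.items():
    print(f"{name}: {value:.3f}")
```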
Abstract:
Longitudinal surveys of anglers or boat owners are widely used in recreational fishery management to estimate total catch over a fishing season. Survey designs with repeated measures of the same random sample over time are effective if the goal is to show statistically significant differences among point estimates for successive time intervals. However, estimators for total catch over the season that are based on longitudinal sampling will be less precise than stratified estimators based on successive independent samples. Conventional stratified variance estimators would be negatively biased if applied to such data because the samples for different time strata are not independent. We formulated new general estimators for catch rate, total catch, and respective variances that sum across time strata but also account for the correlation among stratum samples. A case study of the Japanese recreational fishery for ayu (Plecoglossus altivelis) showed that the conventional stratified variance estimate of total catch was about 10% of the variance estimated by our new method. Combining the catch data for each angler or boat owner throughout the season reduced the variance of the total catch estimate by about 75%. For successive independent surveys based on random independent samples, catch and variance estimators derived from combined data would be the same as conventional stratified estimators when sample allocation is proportional to strata size. We are the first to report annual catch estimates for ayu in a Japanese river by formulating modified estimators for day-permit anglers.
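In generic form (not the authors' exact estimator), a seasonal total summed over time strata and its variance with the covariance terms that conventional stratified formulas drop can be written as:

```latex
% Seasonal total summed over H time strata, with covariances induced by
% repeatedly sampling the same anglers or boat owners
\hat{T} = \sum_{h=1}^{H} \hat{T}_h,
\qquad
\operatorname{Var}(\hat{T})
  = \sum_{h=1}^{H} \operatorname{Var}(\hat{T}_h)
  + 2 \sum_{h<k} \operatorname{Cov}(\hat{T}_h, \hat{T}_k)
```

Dropping the covariance terms is what makes the conventional stratified variance negatively biased here, since repeated measures of the same panel induce positive covariances between stratum estimates.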
Abstract:
In recent years there has been a growing interest amongst the speech research community in the use of spectral estimators which circumvent the traditional quasi-stationary assumption and provide greater time-frequency (t-f) resolution than conventional spectral estimators, such as the short time Fourier power spectrum (STFPS). One distribution in particular, the Wigner distribution (WD), has attracted considerable interest. However, experimental studies have indicated that, despite its improved t-f resolution, employing the WD as the front end of a speech recognition system actually reduces recognition performance; only by explicitly re-introducing t-f smoothing into the WD are recognition rates improved. In this paper we provide an explanation for these findings. By treating the spectral estimation problem as one of optimizing a bias-variance trade-off, we show why additional t-f smoothing improves recognition rates, despite reducing the t-f resolution of the spectral estimator. A practical adaptive smoothing algorithm is presented, which attempts to match the degree of smoothing introduced into the WD with the time-varying quasi-stationary regions within the speech waveform. The recognition performance of the resulting adaptively smoothed estimator is found to be comparable to that of conventional filterbank estimators, yet the average temporal sampling rate of the resulting spectral vectors is reduced by around a factor of 10. © 1992.
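For reference, the continuous-time Wigner distribution and a smoothed pseudo-Wigner form in which separable time and lag smoothing windows g(s) and h(τ) are re-introduced are commonly written as below; the paper's adaptive algorithm controls the effective extent of this smoothing, and the particular windows are not specified here.

```latex
% Wigner distribution of a signal x(t)
W_x(t, f) = \int_{-\infty}^{\infty}
  x\!\left(t + \tfrac{\tau}{2}\right) x^{*}\!\left(t - \tfrac{\tau}{2}\right)
  e^{-j 2\pi f \tau} \, d\tau

% Smoothed pseudo-Wigner distribution with time window g(s) and lag window h(\tau)
SPW_x(t, f) = \iint
  g(s)\, h(\tau)\,
  x\!\left(t - s + \tfrac{\tau}{2}\right) x^{*}\!\left(t - s - \tfrac{\tau}{2}\right)
  e^{-j 2\pi f \tau} \, ds \, d\tau
```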
Abstract:
The long term goal of our work is to enable rapid prototyping design optimization to take place on geometries of arbitrary size in the spirit of a real-time computer game. In recent papers we have reported the integration of a Level Set based geometry kernel with an octree-based cut-Cartesian mesh generator, RANS flow solver and post-processing, all within a single piece of software - and all implemented in parallel with commodity PC clusters as the target. This work has shown that it is possible to eliminate all serial bottlenecks from the CFD Process. This paper reports further progress towards our goal; in particular we report on the generation of viscous layer meshes to bridge the body to the flow across the cut-cells. The Level Set formulation, which underpins the geometry representation, is used as a natural mechanism to allow rapid construction of conformal layer meshes. The guiding principle is to construct the mesh which most closely approximates the body but remains solvable. This apparently novel approach is described and examples given.
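What makes the Level Set a natural mechanism here is presumably that a signed-distance field supplies a surface normal everywhere; in that standard setting, layer nodes can be offset from a surface point along the normal with a geometric height progression (the first-layer height and growth ratio below are generic parameters, not values from the paper):

```latex
% Surface normal from the Level Set \phi and geometric offset of layer node k
\mathbf{n}(\mathbf{x}) = \frac{\nabla \phi(\mathbf{x})}{\lVert \nabla \phi(\mathbf{x}) \rVert},
\qquad
\mathbf{x}_k = \mathbf{x}_0 + \delta_1 \, \frac{r^{k} - 1}{r - 1}\, \mathbf{n}(\mathbf{x}_0),
\quad k = 1, \dots, N_{\text{layers}}
```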
Abstract:
Cluster analysis of ranking data, which arise in consumer questionnaires, voting forms or other surveys of preferences, attempts to identify typical groups of rank choices. Empirically measured rankings are often incomplete, i.e. different numbers of filled rank positions cause heterogeneity in the data. We propose a mixture approach for clustering of heterogeneous rank data. Rankings of different lengths can be described and compared by means of a single probabilistic model. A maximum entropy approach avoids hidden assumptions about missing rank positions. Parameter estimators and an efficient EM algorithm for unsupervised inference are derived for the ranking mixture model. Experiments on both synthetic data and real-world data demonstrate significantly improved parameter estimates on heterogeneous data when the incomplete rankings are included in the inference process.
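In generic mixture-model terms (the paper's specific maximum entropy parameterization is not reproduced here), the model and the E-step responsibilities take the familiar form below, where for an incomplete ranking the component probability is understood as the marginal over the unobserved rank positions:

```latex
% Mixture over K ranking components, with mixing weights w_k
p(\pi \mid \theta) = \sum_{k=1}^{K} w_k \, p_k(\pi \mid \theta_k)

% E-step responsibility of component k for observed (possibly incomplete) ranking \pi_i
\gamma_{ik} = \frac{w_k \, p_k(\pi_i \mid \theta_k)}
                   {\sum_{j=1}^{K} w_j \, p_j(\pi_i \mid \theta_j)}
```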
Abstract:
The application of automated design optimization to real-world, complex geometry problems is a significant challenge - especially if the topology is not known a priori, as in turbine internal cooling. The long term goal of our work is to focus on an end-to-end integration of the whole CFD Process, from solid model through meshing, solving and post-processing, to enable this type of design optimization to become viable & practical. In recent papers we have reported the integration of a Level Set based geometry kernel with an octree-based cut-Cartesian mesh generator, RANS flow solver, post-processing & geometry editing all within a single piece of software - and all implemented in parallel with commodity PC clusters as the target. The cut-cells which characterize the approach are eliminated by exporting a body-conformal mesh guided by the underpinning Level Set. This paper extends this work still further with a simple scoping study showing how the basic functionality can be scripted & automated and then used as the basis for automated optimization of a generic gas turbine cooling geometry. Copyright © 2008 by W.N.Dawes.
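The scripted loop described above can be pictured with the hypothetical driver below; every function name is a placeholder standing in for a scripted step of the software (not its real API), and the random-perturbation search is just the simplest possible optimizer.

```python
import random

# Hypothetical geometry -> mesh -> solve -> post optimization driver. The stub
# functions stand in for scripted steps; none of these names are the real API.
def build_geometry(params):   return {"params": params}   # Level Set geometry edit
def generate_mesh(geom):      return {"geom": geom}       # cut-Cartesian meshing
def run_solver(mesh):         return {"mesh": mesh}       # RANS solve
def extract_metric(result):                               # toy objective (sum of squares)
    return -sum(p * p for p in result["mesh"]["geom"]["params"])

def optimize(params, n_iter=20, step=0.1):
    best_metric, best_params = float("-inf"), params
    for _ in range(n_iter):
        metric = extract_metric(run_solver(generate_mesh(build_geometry(params))))
        if metric > best_metric:
            best_metric, best_params = metric, params
        # simple random-perturbation search; a real study would plug in a proper optimizer
        params = [p + random.uniform(-step, step) for p in best_params]
    return best_metric, best_params

print(optimize([1.0, -0.5]))
```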
Abstract:
Real-world simulation challenges are getting bigger: virtual aero-engines with multistage blade rows coupled with their secondary air systems & with fully featured geometry; environmental flows at meta-scales over resolved cities; synthetic battlefields. It is clear that the future of simulation is scalable, end-to-end parallelism. To address these challenges we have reported in a sequence of papers a series of inherently parallel building blocks based on the integration of a Level Set based geometry kernel with an octree-based cut-Cartesian mesh generator, RANS flow solver, post-processing and geometry management & editing. The cut-cells which characterize the approach are eliminated by exporting a body-conformal mesh driven by the underpinning Level Set and managed by mesh quality optimization algorithms; this permits third party flow solvers to be deployed. This paper continues this sequence by reporting & demonstrating two main novelties: variable depth volume mesh refinement enabling variable surface mesh refinement and a radical rework of the mesh generation into a bottom-up system based on Space Filling Curves. Also reported are the associated extensions to body-conformal mesh export. Everything is implemented in a scalable, parallel manner. As a practical demonstration, meshes of guaranteed quality are generated for a fully resolved, generic aircraft carrier geometry, a cooled disc brake assembly and a B747 in landing configuration. Copyright © 2009 by W.N.Dawes.
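Space Filling Curve orderings of this kind are typically built from interleaved integer cell coordinates; a Morton (Z-order) key, sketched below, is one common choice, offered purely as an illustration since the paper's actual encoding and partitioning scheme is not described in the abstract.

```python
# Morton (Z-order) key: the kind of space-filling-curve ordering that a bottom-up
# octree mesh generator can use to sort and partition cells in parallel.
def morton3d(ix: int, iy: int, iz: int, bits: int = 21) -> int:
    """Interleave the bits of integer cell coordinates into a single Z-order key."""
    key = 0
    for b in range(bits):
        key |= ((ix >> b) & 1) << (3 * b)
        key |= ((iy >> b) & 1) << (3 * b + 1)
        key |= ((iz >> b) & 1) << (3 * b + 2)
    return key

# Cells sorted by Morton key are spatially coherent, so contiguous chunks of the
# sorted list make reasonable parallel partitions.
cells = [(3, 1, 0), (0, 0, 0), (1, 2, 3), (7, 7, 7)]
print(sorted(cells, key=lambda c: morton3d(*c)))
```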
Abstract:
The background to this review paper is research we have performed over recent years aimed at developing a simulation system capable of handling large scale, real world applications implemented in an end-to-end parallel, scalable manner. The particular focus of this paper is the use of a Level Set solid modeling geometry kernel within this parallel framework to enable automated design optimization without topological restrictions and on geometries of arbitrary complexity. Also described is another interesting application of Level Sets: their use in guiding the export of a body-conformal mesh from our basic cut-Cartesian background octree mesh; this permits third party flow solvers to be deployed. As practical demonstrations, meshes of guaranteed quality are generated and flow-solved for a B747 in full landing configuration and an automated optimization is performed on a cooled turbine tip geometry. Copyright © 2009 by W.N.Dawes.