120 results for Resampling
Abstract:
Coffee is one of the main products of Brazilian agriculture, and Brazil is currently its largest producer and exporter. Knowing the growth pattern of a fruit can assist crop management by indicating, for example, the periods of greatest fruit weight gain and the optimum harvest time, which is essential to improve the management and quality of coffee. Some authors indicate that the growth curve of the coffee fruit has a double-sigmoid shape; however, this claim rests only on visual observation, without the use of regression models. The aims of this study were: i) to determine whether the growth pattern of the coffee fruit is really double sigmoidal; ii) to propose a new approach to weighted importance resampling to estimate the parameters of regression models and to select the most suitable double-sigmoidal model to describe the growth of coffee fruits; iii) to study the effect of the spatial arrangement of the crop on the growth curve of coffee fruits. The first article aimed to determine whether the growth pattern of the coffee fruit is really double sigmoidal. The double Gompertz and double Logistic models fit significantly better than simple sigmoid models, confirming that the growth pattern of coffee fruits is indeed double sigmoidal. In the second article we propose using an approximation of the likelihood as the candidate distribution in weighted importance resampling, in order to simplify obtaining samples from the marginal distribution of each parameter. This technique was effective, providing parameters with practical interpretation at low computational cost, and can therefore be used to estimate parameters of double-sigmoidal growth curves. The double Logistic nonlinear model was the most appropriate to describe the growth curve of coffee fruits. The third article aimed to verify the influence of different planting alignments and sun-exposure faces on the fruit growth curve.
A difference between the growth rates in the two stages of fruit development was identified, regardless of the exposure face. Although differences in coffee productivity and quality have been demonstrated, there was no difference between the growth curves for the different planting alignments studied here.
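The weighted importance resampling idea mentioned above can be illustrated with a minimal sampling-importance-resampling (SIR) sketch for a double-logistic curve. This is not the thesis's actual procedure; the data, prior ranges, and noise level below are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

def double_logistic(t, a1, b1, c1, a2, b2, c2):
    # Sum of two logistic phases: the classic double-sigmoid growth shape.
    return a1 / (1 + np.exp(-(t - b1) / c1)) + a2 / (1 + np.exp(-(t - b2) / c2))

# Hypothetical fruit-growth data (days after flowering vs. fresh mass, g).
t = np.linspace(0, 220, 23)
true = (0.6, 60, 10, 0.9, 160, 12)
y = double_logistic(t, *true) + rng.normal(0, 0.03, t.size)

# SIR: draw candidates from a broad prior, weight each by a Gaussian
# likelihood, then resample proportionally to the normalized weights.
n = 20000
cand = np.column_stack([
    rng.uniform(0.1, 2.0, n),   # a1: first-phase asymptote
    rng.uniform(20, 110, n),    # b1: first-phase inflection time
    rng.uniform(2, 30, n),      # c1: first-phase scale
    rng.uniform(0.1, 2.0, n),   # a2
    rng.uniform(110, 210, n),   # b2
    rng.uniform(2, 30, n),      # c2
])
sigma = 0.03
resid = y[None, :] - np.array([double_logistic(t, *p) for p in cand])
log_w = -0.5 * np.sum(resid**2, axis=1) / sigma**2
w = np.exp(log_w - log_w.max())
w /= w.sum()

idx = rng.choice(n, size=2000, replace=True, p=w)
posterior = cand[idx]            # approximate draws from the marginals
print(posterior.mean(axis=0))
```

The resampled rows approximate draws from the joint posterior, so each column gives a sample from one parameter's marginal distribution, which is the property the abstract exploits.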
Abstract:
This work addresses the filtering and reconstruction of intermediate-frequency signals using an FPGA. Algorithms based on digital signal processing are developed and implemented, covering everything from printed circuit board design to assembly and testing. The text presents a brief study of signal sampling and reconstruction in general. Special attention is given to bandpass signal sampling and to practical issues in the reconstruction of intermediate-frequency signals. Two signal reconstruction systems based on digital signal processing, more specifically resampling in the discrete domain, are presented and analyzed. Theories of electronic board assembly and soldering processes are also described, with the goal of defining a methodology for the design, assembly, and soldering of electronic boards. This methodology is applied to the design and manufacture of a prototype digital filtering module for cellular telephony repeaters. The design, implemented on an FPGA, is based on the two systems mentioned above. Finally, results obtained with the developed prototype in digital filtering and intermediate-frequency signal reconstruction experiments are presented.
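The discrete-domain resampling this abstract refers to is commonly realized as rational (polyphase) resampling: upsample by an integer factor, low-pass filter, and downsample by another. A minimal host-side sketch using SciPy is shown below; the sampling rates, tone frequency, and factors are hypothetical, not those of the FPGA prototype.

```python
import numpy as np
from scipy.signal import resample_poly

fs_in = 48_000          # hypothetical input sampling rate (Hz)
up, down = 3, 2         # rational factor: fs_out = fs_in * up / down
fs_out = fs_in * up // down

t = np.arange(0, 0.01, 1 / fs_in)
x = np.sin(2 * np.pi * 5_000 * t)   # 5 kHz test tone

# Polyphase resampling: insert `up - 1` zeros between samples, apply an
# anti-imaging/anti-aliasing low-pass FIR, keep every `down`-th sample.
y = resample_poly(x, up, down)
print(len(x), len(y))   # output length scales by up/down
```

In a hardware implementation the same structure maps onto a polyphase FIR filter bank, which is why it is attractive for FPGA designs.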
Abstract:
This Master's thesis studies the problem of seismic inversion based on plane reflectors for common-source (CS) and common-midpoint (CMP) gathers. The forward model is described by homogeneous, isotropic layers with plane-horizontal interfaces. The problem is related to NMO stacking based on optimization of the semblance function for CMP sections corrected for normal moveout (NMO). The study was based on two principles. The first was to combine two groups of inversion methods: a global method and a local method. The second was the cascade principle, following the Wichert-Herglotz-Bateman theory, which establishes that to know a lower layer one must first know the layer above it (layer stripping). The study is applied to seismic simulation of the Solimões sedimentary basin and of a marine basin, in order to obtain a local 1D distribution of velocities and thicknesses for the subsurface at target horizons. Accordingly, we limited the inversion to between 4 and 11 reflectors, since in practice the industry limits interpretation to the equivalent of 3 to 4 main reflectors. This model is applicable as an initial condition for the imaging of seismic sections in geologically complex regions with smooth horizontal velocity variation. The synthetic data were generated from models based on geological information, which corresponds to strong a priori information in the inversion model. To build the models related to the ongoing Rede Risco Exploratório (FINEP) projects and the ANP human-resources training program, we analyzed the following relevant topics: (1) the geology of the onshore Solimões and marine sedimentary basins (stratigraphic, structural, tectonic, and petroleum); (2) the physics of vertical and horizontal resolution; and (3) temporal-spatial discretization in the multi-coverage cube.
The inversion process depends on the effect of the time-space discretization of the wavefield, on the physical parameters of the seismic survey, and on the subsequent resampling in the multi-coverage cube. The forward model employed corresponds to the 1D NMO stacking operator, assuming a flat observation topography. The basic criterion taken as a reference for the inversion and curve fitting is the 2-norm (least squares). Inversion using this simple model is computationally attractive because it is fast, and convenient because several other features can be included with a logical physical interpretation; for example, the Projected Fresnel Zone (PFZ), direct computation of spherical divergence, Dix inversion, linear inversion by reparameterization, a priori information, and regularization. The PFZ proves to be a useful concept for establishing the spatial window aperture of the inversion in the time-distance section, and represents the influence of the data on horizontal resolution. The PFZ estimate indicates a minimum aperture based on an adopted, updatable model. Spherical divergence is a smooth function with a physical basis for use in defining the data weighting matrix in tomographic inversion methods. The need for robustness in the inversion can be analyzed in seismic sections (CS, CMP) subjected to filtering (corner frequencies 5, 15, 75, 85 Hz; trapezoidal bandpass), in which the information content can be identified, compared, and interpreted. From these sections, we conclude that the data are contaminated with isolated points, which calls for methods in the class considered robust, taking 2-norm (least-squares) curve fitting as a reference. The algorithms were developed in FORTRAN 90/95, using MATLAB to present results and the CWP/SU system for synthetic seismic modeling, event picking, and presentation of results.
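The semblance optimization underlying the NMO stacking described above can be sketched as a coherence scan over trial velocities along the hyperbolic traveltime t(x) = sqrt(t0^2 + (x/v)^2). The synthetic single-reflector gather, geometry, and velocity grid below are hypothetical and use FORTRAN-free Python for brevity.

```python
import numpy as np

dt = 0.004                              # sample interval (s)
offsets = np.arange(100, 1100, 100.0)   # receiver offsets (m)
nt = 500
t0_true, v_true = 0.8, 2000.0           # reflector zero-offset time, velocity

# Synthetic CMP gather: a unit spike on each trace at the NMO hyperbola.
gather = np.zeros((len(offsets), nt))
for i, x in enumerate(offsets):
    tx = np.sqrt(t0_true**2 + (x / v_true)**2)
    gather[i, int(round(tx / dt))] = 1.0

def semblance(t0, v):
    # Pick one amplitude per trace along the trial hyperbola and form
    # the semblance ratio: (stack energy) / (N * total pick energy).
    picks = []
    for i, x in enumerate(offsets):
        j = int(round(np.sqrt(t0**2 + (x / v)**2) / dt))
        picks.append(gather[i, j] if j < nt else 0.0)
    a = np.array(picks)
    den = len(a) * np.sum(a**2)
    return (a.sum()**2 / den) if den > 0 else 0.0

# Global scan: the trial velocity maximizing semblance recovers v_true.
vels = np.arange(1500, 2600, 50.0)
scores = [semblance(t0_true, v) for v in vels]
best = vels[int(np.argmax(scores))]
print(best)
```

In practice the scan runs over a (t0, v) grid with a short time window rather than a single sample, and the global maximum seeds a local refinement, mirroring the global-plus-local strategy the abstract describes.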
Abstract:
Graduate program in Agronomy (Agricultural Entomology) - FCAV
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
DIGITAL ELEVATION MODEL VALIDATION WITH NO GROUND CONTROL: APPLICATION TO THE TOPODATA DEM IN BRAZIL
Abstract:
Digital Elevation Model (DEM) validation is often carried out by comparing the data with a set of ground control points. However, the quality of a DEM can also be considered in terms of shape realism. Beyond visual analysis, one can verify that known physical and statistical properties of terrestrial relief are satisfied. This approach is applied to an extract of Topodata, a DEM obtained by resampling the SRTM DEM over the Brazilian territory with a geostatistical approach. Several statistical indicators are computed, and they show that the quality of Topodata in terms of shape rendering is improved relative to SRTM.
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Gastric cancer is the second leading cause of cancer-related death worldwide. The identification of new cancer biomarkers is necessary to reduce mortality rates through the development of new screening assays and early diagnosis, as well as new targeted therapies. In this study, we performed a proteomic analysis of noncardia gastric neoplasias of individuals from Northern Brazil. The proteins were analyzed by two-dimensional electrophoresis and mass spectrometry. For the identification of differentially expressed proteins, we used statistical tests with bootstrap resampling to control the type I error in the multiple-comparison analyses. We identified 111 proteins involved in gastric carcinogenesis. The computational analysis revealed several proteins involved in energy production processes and reinforced the Warburg effect in gastric cancer. ENO1 and HSPB1 expression were further evaluated. ENO1 was selected due to its role in aerobic glycolysis, which may contribute to the Warburg effect. Although we observed two up-regulated spots of ENO1 in the proteomic analysis, the mean expression of ENO1, as assessed by western blot, was reduced in gastric tumors. However, mean ENO1 expression seems to increase in more invasive tumors. This lack of correlation between the proteomic and western blot analyses may be due to the presence of other ENO1 spots with slightly reduced expression but a large impact on mean protein expression. In neoplasias, HSPB1 is induced by cellular stress to protect cells against apoptosis. In the present study, HSPB1 presented elevated protein and mRNA expression in a subset of gastric cancer samples. However, no association was observed between HSPB1 expression and clinicopathological characteristics. Here, we identified several possible biomarkers of gastric cancer in individuals from Northern Brazil.
These biomarkers, if validated in larger clinical studies, may be useful for assessing prognosis and stratifying patients for therapy.
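Bootstrap resampling to control the family-wise type I error across many protein comparisons can be sketched with a maxT-style procedure: resample centered residuals, record the maximum |t| across all proteins, and compare each observed statistic against that null maximum. The intensities, group sizes, and effect below are simulated, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical spot intensities: 20 proteins, tumor vs. normal, n = 8 each.
n_prot, n = 20, 8
normal = rng.normal(0, 1, (n_prot, n))
tumor = rng.normal(0, 1, (n_prot, n))
tumor[0] += 2.0        # one truly differential protein

def t_stats(a, b):
    # Welch-style t statistic per protein (row).
    va, vb = a.var(axis=1, ddof=1), b.var(axis=1, ddof=1)
    return (a.mean(axis=1) - b.mean(axis=1)) / np.sqrt(va / a.shape[1] + vb / b.shape[1])

obs = t_stats(tumor, normal)

# Bootstrap the null: center each group, resample columns (the same column
# indices for every protein, preserving between-protein correlation), and
# record the maximum |t| over all proteins.
B = 2000
ca = tumor - tumor.mean(axis=1, keepdims=True)
cb = normal - normal.mean(axis=1, keepdims=True)
max_null = np.empty(B)
for b in range(B):
    ia = rng.integers(0, n, n)
    ib = rng.integers(0, n, n)
    max_null[b] = np.abs(t_stats(ca[:, ia], cb[:, ib])).max()

# Family-wise adjusted p-value per protein.
p_adj = (np.abs(obs)[:, None] <= max_null[None, :]).mean(axis=1)
print(p_adj.round(3))
```

Because the null distribution is that of the maximum statistic, rejecting whenever the adjusted p-value falls below the nominal level controls the probability of any false positive across the whole panel.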
Abstract:
Genetic characterization helps to assure breed integrity and to assign individuals to defined populations. The objective of this study was to characterize genetic diversity in six horse breeds and to analyse the population structure of the Franches-Montagnes breed, especially with regard to the degree of introgression with Warmblood. A total of 402 alleles from 50 microsatellite loci were used. The average number of alleles per locus was significantly lower in Thoroughbreds and Arabians. Average heterozygosities between breeds ranged from 0.61 to 0.72. The overall average of the coefficient of gene differentiation because of breed differences was 0.100, with a range of 0.036-0.263. No significant correlation was found between this parameter and the number of alleles per locus. An increase in the number of homozygous loci with increasing inbreeding could not be shown for the Franches-Montagnes horses. The proportion of shared alleles, combined with the neighbour-joining method, defined clusters for Icelandic Horse, Comtois, Arabians and Franches-Montagnes. A more disparate clustering could be seen for European Warmbloods and Thoroughbreds, presumably from frequent grading-up of Warmbloods with Thoroughbreds. Grading-up effects were also observed when Bayesian and Monte Carlo resampling approaches were used for individual assignment to a given population. Individual breed assignments to defined reference populations will be very difficult when introgression has occurred. The Bayesian approach within the Franches-Montagnes breed differentiated individuals with varied proportions of Warmblood.
Abstract:
Despite the widespread popularity of linear models for correlated outcomes (e.g., linear mixed models and time series models), distribution diagnostic methodology remains relatively underdeveloped in this context. In this paper we present an easy-to-implement approach that lends itself to graphical displays of model fit. Our approach involves multiplying the estimated marginal residual vector by the Cholesky decomposition of the inverse of the estimated marginal variance matrix. Linear functions of the resulting "rotated" residuals are used to construct an empirical cumulative distribution function (ECDF), whose stochastic limit is characterized. We describe a resampling technique that serves as a computationally efficient parametric bootstrap for generating representatives of the stochastic limit of the ECDF. Through functionals, such representatives are used to construct global tests for the hypothesis of normal marginal errors. In addition, we demonstrate that the ECDF of the predicted random effects, as described by Lange and Ryan (1989), can be formulated as a special case of our approach. Thus, our method supports both omnibus and directed tests. Our method works well in a variety of circumstances, including models having independent units of sampling (clustered data) and models in which all observations are correlated (e.g., a single time series).
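The rotation-plus-parametric-bootstrap idea can be illustrated on a toy model with a known marginal covariance: if V^{-1} = CC^T (Cholesky), then C^T applied to residuals with covariance V yields approximately iid N(0,1) values, and simulating from the fitted model calibrates a distance between their ECDF and the normal CDF. This is a simplified sketch, not the paper's estimator; the AR(1) covariance and sample sizes are assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Toy correlated-outcome model: y = X beta + e, e ~ N(0, V), AR(1) V.
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
rho = 0.5
V = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
A = np.linalg.cholesky(V)                    # for simulating errors
C = np.linalg.cholesky(np.linalg.inv(V))     # for rotating residuals
y = X @ np.array([1.0, 2.0]) + A @ rng.normal(size=n)

beta = np.linalg.lstsq(X, y, rcond=None)[0]
rot = C.T @ (y - X @ beta)    # rotated residuals: ~ iid N(0,1) if model holds

def ks_normal(z):
    # Kolmogorov distance between the ECDF of z and the standard normal CDF.
    z = np.sort(z)
    u = norm.cdf(z)
    k = np.arange(1, len(z) + 1)
    return max((k / len(z) - u).max(), (u - (k - 1) / len(z)).max())

obs = ks_normal(rot)

# Parametric bootstrap: simulate from the fitted model, refit, re-rotate,
# and recompute the statistic to approximate its null distribution.
B = 200
boot = np.empty(B)
for b in range(B):
    yb = X @ beta + A @ rng.normal(size=n)
    bb = np.linalg.lstsq(X, yb, rcond=None)[0]
    boot[b] = ks_normal(C.T @ (yb - X @ bb))
p = (boot >= obs).mean()
print(obs, p)
```

The paper's global tests replace this single Kolmogorov functional with general functionals of the ECDF process, but the calibration logic is the same.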
Abstract:
Power calculations in a small sample comparative study, with a continuous outcome measure, are typically undertaken using the asymptotic distribution of the test statistic. When the sample size is small, this asymptotic result can be a poor approximation. An alternative approach, using a rank based test statistic, is an exact power calculation. When the number of groups is greater than two, the number of calculations required to perform an exact power calculation is prohibitive. To reduce the computational burden, a Monte Carlo resampling procedure is used to approximate the exact power function of a k-sample rank test statistic under the family of Lehmann alternative hypotheses. The motivating example for this approach is the design of animal studies, where the number of animals per group is typically small.
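The Monte Carlo approximation described above can be sketched for the Kruskal-Wallis k-sample rank test under Lehmann alternatives F_j = F^{theta_j}: because a rank test depends on F only through the ranks, a uniform base distribution suffices, and drawing U^{1/theta} yields a sample from F^theta. The group sizes, thetas, and simulation budget below are hypothetical.

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(7)

# Hypothetical small animal study: k = 3 groups, 6 animals per group.
k, n_per, alpha = 3, 6, 0.05

def draw_stats(thetas, n_sim):
    # Simulate Kruskal-Wallis statistics under Lehmann alternatives
    # F_j = F^{theta_j}; U**(1/theta) has CDF x**theta on [0, 1].
    h = np.empty(n_sim)
    for s in range(n_sim):
        groups = [rng.uniform(size=n_per) ** (1.0 / th) for th in thetas]
        h[s] = kruskal(*groups).statistic
    return h

# Approximate the exact null distribution, take its (1 - alpha) quantile
# as the critical value, then estimate power under the alternative.
null_h = draw_stats([1.0] * k, 3000)
crit = np.quantile(null_h, 1 - alpha)
alt_h = draw_stats([1.0, 2.0, 3.0], 3000)
power = (alt_h > crit).mean()
print(round(power, 3))
```

Using the simulated null quantile rather than the chi-squared approximation is what makes this an approximation to the exact (rather than asymptotic) power, which matters precisely in the small-sample designs the abstract targets.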
Abstract:
Various inference procedures for linear regression models with censored failure times have been studied extensively. Recent developments on efficient algorithms to implement these procedures enhance the practical usage of such models in survival analysis. In this article, we present robust inferences for certain covariate effects on the failure time in the presence of "nuisance" confounders under a semiparametric, partial linear regression setting. Specifically, the estimation procedures for the regression coefficients of interest are derived from a working linear model and are valid even when the function of the confounders in the model is not correctly specified. The new proposals are illustrated with two examples and their validity for cases with practical sample sizes is demonstrated via a simulation study.