905 results for Algorithms
Abstract:
The objective of this work was to compare soybean crop mapping in western Paraná State from MODIS/Terra and TM/Landsat 5 images. First, a soybean crop mask was generated from six TM images covering the crop season and used as a reference. The images were classified with the Parallelepiped and Maximum Likelihood algorithms, followed by visual inspection. Four MODIS images covering the vegetative peak were classified using the Parallelepiped method. The quality of the MODIS and TM classifications was assessed through an error matrix, considering 100 sample points labeled soybean or non-soybean, randomly allocated in each of the eight municipalities within the study area. The results showed that both the Overall Classification (OC) and the Kappa Index (KI) produced values ranging from 0.55 to 0.80, considered good to very good performance, for both the TM and MODIS images. When the OC and KI of the two sensors were compared, no statistically significant difference was found between them. The soybean mapping using MODIS produced a user's accuracy of 70%. The main conclusion is that mapping soybean with MODIS is feasible, with the advantages of better temporal resolution than Landsat and of the images being freely available on the internet.
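The accuracy assessment described above (error matrix, Overall Classification and Kappa Index) reduces to a short computation; a minimal sketch is given below, assuming an illustrative 2x2 soybean / non-soybean matrix rather than the study's actual counts, with the function name chosen here for convenience.

```python
# Hedged sketch: overall accuracy and Cohen's Kappa from a 2x2 error matrix
# (soybean / non-soybean). The matrix values are illustrative, not the study's data.
import numpy as np

def accuracy_and_kappa(error_matrix):
    m = np.asarray(error_matrix, dtype=float)
    n = m.sum()
    overall = np.trace(m) / n                                 # Overall Classification (OC)
    expected = (m.sum(axis=0) * m.sum(axis=1)).sum() / n**2   # chance agreement
    kappa = (overall - expected) / (1 - expected)             # Kappa Index (KI)
    return overall, kappa

# Example: rows = reference (TM mask), columns = MODIS classification
oc, ki = accuracy_and_kappa([[42, 8],
                             [12, 38]])
print(f"OC = {oc:.2f}, KI = {ki:.2f}")
```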
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física
Abstract:
In this paper, space adaptivity is introduced to control the error in the numerical solution of hyperbolic systems of conservation laws. The reference numerical scheme is a new version of the discontinuous Galerkin method, which uses an implicit diffusive term in the direction of the streamlines for stability purposes. The decision whether to refine or unrefine the grid at a given location is made according to the magnitude of wavelet coefficients, which are indicators of the local smoothness of the numerical solution. Numerical solutions of the nonlinear Euler equations illustrate the efficiency of the method.
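As a rough illustration of the adaptivity criterion described above, the sketch below flags 1D cells for refinement or unrefinement from the magnitude of Haar-type detail coefficients; the thresholds, the Haar choice and the step-like test profile are assumptions of this sketch, not the authors' scheme.

```python
# Illustrative sketch (not the authors' implementation): the magnitude of
# Haar-type detail coefficients of a 1D numerical solution is used as a local
# smoothness indicator to decide where to refine or unrefine the grid.
import numpy as np

def refinement_flags(u, eps_refine=1e-2, eps_coarsen=1e-4):
    u = np.asarray(u, dtype=float)
    pairs = u[: len(u) // 2 * 2].reshape(-1, 2)
    detail = np.abs(pairs[:, 0] - pairs[:, 1]) / 2.0    # Haar-type detail coefficients
    flags = np.where(detail > eps_refine, "refine",
                     np.where(detail < eps_coarsen, "unrefine", "keep"))
    return detail, flags

# Example: a step-like profile triggers refinement only near the discontinuity
x = np.linspace(0.0, 1.0, 64)
u = np.where(x < 0.48, 1.0, 0.0)
detail, flags = refinement_flags(u)
print("cells flagged for refinement:", np.where(flags == "refine")[0])
```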
Abstract:
Due to the imprecise nature of biological experiments, biological data are often characterized by the presence of redundant and noisy data. This may be due to errors that occurred during data collection, such as contamination of laboratory samples. This is the case for gene expression data, where the equipment and tools currently in use frequently produce noisy measurements. Machine Learning algorithms have been successfully used in gene expression data analysis. Although many Machine Learning algorithms can deal with noise, detecting and removing noisy instances from the training data set can help the induction of the target hypothesis. This paper evaluates the use of distance-based pre-processing techniques for noise detection in gene expression data classification problems. The evaluation analyzes the effectiveness of the investigated techniques in removing noisy data, measured by the accuracy obtained by different Machine Learning classifiers on the pre-processed data.
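A hedged sketch of a distance-based noise filter of the kind evaluated in the paper is given below: an edited-nearest-neighbours-style rule that drops training instances whose k nearest neighbours mostly disagree with their label, followed by a check of classifier accuracy on held-out data. The synthetic dataset, the value of k, the agreement threshold and the kNN classifier are assumptions, not the paper's exact setup.

```python
# Synthetic illustration of distance-based noise filtering before classification.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors, KNeighborsClassifier

X, y = make_classification(n_samples=400, n_features=50, flip_y=0.15, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def remove_noisy(X, y, k=5):
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)                  # idx[:, 0] is the point itself
    neigh_labels = y[idx[:, 1:]]
    agree = (neigh_labels == y[:, None]).mean(axis=1)
    keep = agree >= 0.5                        # drop instances whose neighbourhood disagrees
    return X[keep], y[keep]

X_cl, y_cl = remove_noisy(X_tr, y_tr)
for name, (Xf, yf) in {"raw": (X_tr, y_tr), "filtered": (X_cl, y_cl)}.items():
    acc = KNeighborsClassifier().fit(Xf, yf).score(X_te, y_te)
    print(name, round(acc, 3))
```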
Abstract:
The delineation of family plots in agrarian reform projects involves technical and social issues, mainly associated with the different agricultural suitability classes of the land in these projects. The objective of this work was to present a method for carrying out the territorial planning process in agrarian reform settlements using a Genetic Algorithm (GA). The GA was tested in the Veredas Settlement Project, in Minas Gerais, and was implemented based on the land agricultural suitability system.
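A minimal sketch of how a Genetic Algorithm could allocate land units to family plots while balancing agricultural suitability is shown below; the suitability scores, fitness function and GA parameters are illustrative assumptions, not the method or data of the Veredas project.

```python
# Toy GA: assign land units to plots so total suitability is balanced across plots.
import random

random.seed(1)
N_UNITS, N_PLOTS = 60, 6
suitability = [random.uniform(0.2, 1.0) for _ in range(N_UNITS)]  # per land unit (assumed)

def fitness(chrom):
    # Lower variance of per-plot total suitability is better (negated for maximization)
    totals = [0.0] * N_PLOTS
    for unit, plot in enumerate(chrom):
        totals[plot] += suitability[unit]
    mean = sum(totals) / N_PLOTS
    return -sum((t - mean) ** 2 for t in totals)

def evolve(pop_size=80, generations=200, mut_rate=0.05):
    pop = [[random.randrange(N_PLOTS) for _ in range(N_UNITS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_UNITS)                 # one-point crossover
            child = a[:cut] + b[cut:]
            child = [random.randrange(N_PLOTS) if random.random() < mut_rate else g
                     for g in child]                           # mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print("best fitness:", round(fitness(best), 4))
```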
Abstract:
PURPOSE: To develop a computer simulation of ablation for producing customized contact lenses to correct high-order aberrations. METHODS: Using real data from a patient with keratoconus, measured with a wavefront aberrometer based on a Hartmann-Shack sensor, the contact lens thicknesses that compensate for these aberrations were determined, as well as the number of pulses required to ablate the lenses specifically for this patient. RESULTS: The correction maps are presented and the numbers of pulses were calculated, using beams 0.5 mm wide and an ablation depth of 0.3 µm. CONCLUSIONS: The simulated results were promising, but still need to be refined for the actual ablation system to reach the desired precision.
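The pulse-count step described above is essentially an element-wise division of the required thickness removal by the ablation depth per pulse; the sketch below illustrates it with a synthetic thickness map, using the 0.5 mm beam width and 0.3 µm depth quoted in the abstract (all other values are assumptions).

```python
# Illustrative pulse-count arithmetic; the thickness map is synthetic, not patient data.
import numpy as np

BEAM_WIDTH_MM = 0.5
DEPTH_PER_PULSE_UM = 0.3

# Synthetic thickness-correction map (µm) over a 6 mm x 6 mm lens, sampled on a
# grid whose spacing matches the beam width.
coords = np.arange(-3.0, 3.0 + BEAM_WIDTH_MM, BEAM_WIDTH_MM)
xx, yy = np.meshgrid(coords, coords)
thickness_um = 5.0 * np.exp(-(xx**2 + yy**2) / 4.0)   # smooth bump of up to 5 µm

pulses = np.ceil(thickness_um / DEPTH_PER_PULSE_UM).astype(int)
print("max pulses per spot:", pulses.max())
print("total pulses:", pulses.sum())
```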
Abstract:
PURPOSE: To develop the instrumentation and software for wide-angle corneal topography using the traditional Placido disc. The goal is to allow the mapping of a larger region of the cornea by corneal topographers that use the Placido technique, through a simple adaptation of the target. METHODS: Using the traditional Placido disc of a conventional corneal topographer, 9 LEDs (Light Emitting Diodes) were fitted to the conical housing so that the volunteer patient could fixate in different directions. For each direction, Placido images were digitized and processed to build, through an algorithm involving sophisticated computer graphics elements, a complete three-dimensional map of the whole cornea. RESULTS: The results presented in this work show that a region up to 100% larger can be mapped using this technique, allowing the clinician to map up to near the corneal limbus. Results are presented for a spherical calibration surface and also for an in vivo cornea with a high degree of astigmatism, showing curvature and elevation. CONCLUSION: It is believed that this new technique can improve several procedures, such as contact lens fitting and algorithms for customized ablations for hyperopia, among others.
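A heavily simplified sketch of the merging idea is shown below: surface points reconstructed for each fixation direction are rotated into a common frame around the corneal apex and concatenated into one wide-angle cloud. The angles, random point sets and helper names are assumptions for illustration only, not the authors' algorithm.

```python
# Hypothetical merging step for multi-fixation topography (not the authors' code).
import numpy as np

def rot_y(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def merge_views(views):
    """views: list of (gaze_angle_rad, Nx3 points) measured in the instrument frame."""
    merged = [pts @ rot_y(angle).T for angle, pts in views]
    return np.vstack(merged)

# Toy example: central view plus two lateral fixations of +/- 20 degrees
central = np.random.rand(100, 3)
left = np.random.rand(100, 3)
right = np.random.rand(100, 3)
cloud = merge_views([(0.0, central), (np.radians(20), left), (np.radians(-20), right)])
print(cloud.shape)   # (300, 3)
```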
Abstract:
OBJECTIVE: To estimate reference values and a ranking function for faculty in Public Health (Saúde Coletiva) in Brazil by analyzing the distribution of the h-index. METHODS: From the portal of the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior, 934 faculty members were identified in 2008, of whom 819 were analyzed. The h-index of each faculty member was obtained from the Web of Science using search algorithms with control for homonyms and alternative name spellings. For each region and for Brazil as a whole, an exponential probability density function was fitted using the mean and the decay rate per region as parameters. Measures of position were identified and, using the complement of the cumulative probability function, a ranking function among authors according to the h-index was obtained for each region. RESULTS: Of the faculty members, 29.8% had no citation record (h = 0). The mean h for the country was 3.1, with the highest mean in the South region (4.7). The median h for the country was 2.1, also with the highest median in the South (3.2). For a population of authors standardized to one hundred, the top-ranked author for the country should have h = 16; in the stratification by region, the first position requires higher values in the Northeast, Southeast and South, reaching h = 24 in the latter. CONCLUSIONS: Evaluated by their Web of Science h-indices, most authors in Public Health do not exceed h = 5. There are differences among regions, with better performance in the South and similar values in the Southeast and Northeast.
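The ranking reasoning above can be sketched with an exponential survival function S(h) = exp(-h/mean): the top position in a standardized population of 100 authors corresponds to the h at which 100·S(h) = 1. The snippet below uses the reported national mean of 3.1; the exact fitting procedure of the study is not reproduced here.

```python
# Minimal sketch of the exponential ranking argument (assumed simplification).
import math

def top_rank_h(mean_h, population=100):
    rate = 1.0 / mean_h                   # exponential MLE for the decay rate
    return math.log(population) / rate    # h such that population * exp(-rate*h) = 1

print(round(top_rank_h(3.1), 1))          # on the order of the reported reference value (h = 16)
```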
Abstract:
Diagnostic methods have been an important tool in regression analysis to detect anomalies, such as departures from the error assumptions and the presence of outliers and influential observations, in fitted models. Assuming censored data, we considered a classical analysis and a Bayesian analysis assuming non-informative priors for the parameters of a model with a cure fraction. The Bayesian approach used Markov Chain Monte Carlo methods with Metropolis-Hastings steps to obtain the posterior summaries of interest. Several influence measures, such as local influence, the total local influence of an individual, local influence on predictions and generalized leverage, were derived, analyzed and discussed for survival data with a cure fraction and covariates. The relevance of the approach is illustrated with a real data set, where it is shown that, by removing the most influential observations, the decision about which model best fits the data changes.
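A hedged sketch of the MCMC machinery mentioned above is given below: a random-walk Metropolis-Hastings sampler for a simple mixture cure model with exponential survival for the non-cured fraction and flat priors on the transformed parameters. The data are synthetic and the model is a stand-in, not the authors' cure-fraction model with covariates.

```python
# Sketch: Metropolis-Hastings for a mixture cure model with right-censored data.
# Events: f(t) = (1-p)*lam*exp(-lam*t); censored: S(t) = p + (1-p)*exp(-lam*t).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic right-censored data with a cured subgroup (assumed true values)
n, true_p, true_lam = 300, 0.3, 0.5
cured = rng.random(n) < true_p
t_event = rng.exponential(1 / true_lam, n)
c_time = rng.exponential(4.0, n)
time = np.where(cured, c_time, np.minimum(t_event, c_time))
delta = np.where(cured, 0, (t_event <= c_time).astype(int))

def log_post(theta):
    p = 1 / (1 + np.exp(-theta[0]))    # cure fraction (logit scale)
    lam = np.exp(theta[1])             # event rate (log scale)
    ll_event = np.log(1 - p) + np.log(lam) - lam * time
    ll_cens = np.log(p + (1 - p) * np.exp(-lam * time))
    return np.sum(np.where(delta == 1, ll_event, ll_cens))

theta = np.array([0.0, 0.0])
samples, lp = [], log_post(theta)
for _ in range(5000):
    prop = theta + rng.normal(scale=0.1, size=2)     # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:          # Metropolis-Hastings acceptance
        theta, lp = prop, lp_prop
    samples.append(theta)

post = np.array(samples[2500:])                      # discard burn-in
p_samples = 1.0 / (1.0 + np.exp(-post[:, 0]))
print("posterior mean cure fraction:", round(float(p_samples.mean()), 2))
```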
Abstract:
The objective of this manuscript is to discuss the existing barriers to the dissemination of medical guidelines and to present strategies that facilitate the adoption of the recommendations into clinical practice. The literature shows that it usually takes several years until new scientific evidence is adopted in current practice, even when there is an obvious impact on patients' morbidity and mortality. There are examples in which more than thirty years elapsed between the publication of the first case reports on the use of an effective therapy and its routine utilization; that is the case of fibrinolysis for the treatment of acute myocardial infarction. Some of the main barriers to the implementation of new recommendations are: lack of knowledge of a new guideline, personal resistance to change, uncertainty about the efficacy of the proposed recommendation, fear of potential side effects, difficulty remembering the recommendations, absence of institutional policies reinforcing the recommendation and even economic constraints. To overcome these barriers, a strategy involving a program with multiple tools is always best. This must include the implementation of easy-to-use algorithms, continuing medical education materials and lectures, electronic or paper alerts, tools to facilitate evaluation and prescription, and periodic audits to show results to the practitioners involved in the process. It is also fundamental that the medical societies involved with the specific medical issue support the program for its scientific and ethical soundness. The creation of multidisciplinary committees in each institution and the inclusion of opinion leaders with proactive and lasting attitudes are the key points for the program's success. In this manuscript we use as an example the implementation of a guideline for venous thromboembolism prophylaxis, but the concepts described here can easily be applied to any other guideline. Therefore, these concepts could be very useful for institutions and services that aim at improving the quality of patient care. Changes in current medical practice recommended by guidelines may take some time; however, with broader participation of opinion leaders and the use of the tools listed here, they have a greater probability of reaching the main objectives: improvement in the medical care provided and in patient safety.
Abstract:
Background: Genome-wide association studies (GWAS) are becoming the approach of choice to identify genetic determinants of complex phenotypes and common diseases. The astonishing amount of generated data and the use of distinct genotyping platforms with variable genomic coverage are still analytical challenges. Imputation algorithms combine information from directly genotyped markers with the haplotypic structure of the population of interest to infer poorly genotyped or missing markers, and are considered a near-zero-cost approach to allow the comparison and combination of data generated in different studies. Several reports have stated that imputed markers have an overall acceptable accuracy, but no published report has performed a pairwise comparison of imputed and empirical association statistics for a complete set of GWAS markers. Results: In this report we identified a total of 73 imputed markers that yielded a nominally statistically significant association at P < 10^-5 for type 2 Diabetes Mellitus and compared them with results obtained from empirical allelic frequencies. Interestingly, despite their overall high correlation, association statistics based on imputed frequencies were discordant for 35 of the 73 (47%) associated markers, considerably inflating the type I error rate of imputed markers. We comprehensively tested several quality thresholds, the haplotypic structure underlying imputed markers and the use of flanking markers as predictors of inaccurate association statistics derived from imputed markers. Conclusions: Our results suggest that association statistics from imputed markers in specific MAF (Minor Allele Frequency) ranges, located in weak linkage disequilibrium blocks or strongly deviating from local patterns of association, are prone to inflated false positive association signals. The present study highlights the potential of imputation procedures and proposes simple procedures for selecting the best imputed markers for follow-up genotyping studies.
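To make the imputed-versus-empirical comparison concrete, the sketch below computes allelic chi-square P-values from empirical and imputation-perturbed allele counts and flags markers whose significance at P < 10^-5 is discordant; the counts, the noise model and the threshold logic are synthetic illustrations, not the study's pipeline.

```python
# Synthetic illustration of discordance between empirical and imputed association tests.
import numpy as np
from scipy.stats import chi2_contingency

def allelic_p(case_alt, case_ref, ctrl_alt, ctrl_ref):
    table = np.array([[case_alt, case_ref], [ctrl_alt, ctrl_ref]])
    return chi2_contingency(table, correction=False)[1]

rng = np.random.default_rng(42)
discordant = []
for marker in range(1000):
    counts_emp = rng.integers(50, 500, size=4)        # empirical allele counts (toy)
    noise = rng.normal(0, 20, size=4)                 # assumed imputation error
    counts_imp = np.clip(counts_emp + noise, 1, None)
    p_emp = allelic_p(*counts_emp)
    p_imp = allelic_p(*counts_imp)
    if (p_emp < 1e-5) != (p_imp < 1e-5):
        discordant.append(marker)
print(len(discordant), "markers with discordant significance at P < 1e-5")
```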