904 results for Sweep algorithms
Abstract:
The definition of family plots in agrarian reform projects involves technical and social issues, associated mainly with the differing agricultural aptitudes of the soil within these projects. The objective of this work was to present a method for carrying out the territorial ordering process in agrarian reform settlements using a Genetic Algorithm (GA). The GA was tested in the Veredas Settlement Project, in Minas Gerais, and implemented on the basis of the land agricultural aptitude system.
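The abstract does not spell out the encoding or fitness function; below is a minimal genetic-algorithm sketch for assigning families to parcels, with a purely hypothetical aptitude matrix standing in for the land agricultural aptitude system used in the paper.

```python
# Minimal GA sketch for assigning families to land parcels. The aptitude
# matrix and all parameters are hypothetical stand-ins, not the paper's setup.
import random

random.seed(1)
N = 8  # toy problem: N families, N parcels

# aptitude[f][p]: suitability of parcel p for family f's intended land use
aptitude = [[random.random() for _ in range(N)] for _ in range(N)]

def fitness(perm):
    """Total aptitude when family f receives parcel perm[f]."""
    return sum(aptitude[f][perm[f]] for f in range(N))

def crossover(a, b):
    """Order crossover (OX): keeps every parcel assigned exactly once."""
    i, j = sorted(random.sample(range(N), 2))
    child = [None] * N
    child[i:j] = a[i:j]
    rest = [p for p in b if p not in child[i:j]]
    for k in range(N):
        if child[k] is None:
            child[k] = rest.pop(0)
    return child

def mutate(perm, rate=0.2):
    """Swap two parcels with probability `rate`."""
    if random.random() < rate:
        i, j = random.sample(range(N), 2)
        perm[i], perm[j] = perm[j], perm[i]

pop = [random.sample(range(N), N) for _ in range(40)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:20]  # truncation selection
    children = []
    for _ in range(20):
        child = crossover(*random.sample(parents, 2))
        mutate(child)
        children.append(child)
    pop = parents + children

pop.sort(key=fitness, reverse=True)
print("best assignment:", pop[0], "fitness:", round(fitness(pop[0]), 3))
```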
Abstract:
PURPOSE: To develop a computer simulation of ablation in order to produce customized contact lenses that correct high-order aberrations. METHODS: Using real data from a patient with keratoconus, measured on a wavefront aberrometer with a Hartmann-Shack sensor, we determined the contact lens thicknesses that compensate for these aberrations, as well as the number of pulses required to ablate the lenses specifically for this patient. RESULTS: The correction maps are presented and the numbers of pulses were calculated for beams 0.5 mm wide with an ablation depth of 0.3 µm. CONCLUSIONS: The simulated results were promising, but still need to be refined before a "real" ablation system can reach the desired precision.
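The pulse count follows from dividing the required removal at each point by the per-pulse ablation depth. A minimal sketch using the figures quoted above (0.5 mm beam, 0.3 µm per pulse), with a made-up thickness map standing in for the Hartmann-Shack-derived one:

```python
# Back-of-the-envelope pulse-count map: pulses(x, y) = thickness(x, y) / depth.
# Beam width (0.5 mm) and ablation depth (0.3 µm) come from the abstract; the
# thickness map itself is a toy stand-in for the patient's correction profile.
import numpy as np

depth_per_pulse_um = 0.3        # µm removed per laser pulse
beam_width_mm = 0.5             # beam diameter -> grid spacing

# Hypothetical lens-thickness correction map (µm) on a 0.5 mm grid over 8x8 mm
x = np.arange(-4, 4.01, beam_width_mm)
X, Y = np.meshgrid(x, x)
thickness_um = 5.0 * np.exp(-(X**2 + Y**2) / 8.0)   # toy correction profile

pulses = np.ceil(thickness_um / depth_per_pulse_um).astype(int)
print("max pulses at one spot:", pulses.max())
print("total pulses:", pulses.sum())
```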
Abstract:
PURPOSE: To develop the instrumentation and software for wide-angle corneal topography using the traditional Placido disk. The aim is to allow a larger region of the cornea to be mapped by Placido-based corneal topographers through a simple adaptation of the target. METHODS: Using the traditional Placido disk of a conventional corneal topographer, 9 LEDs (Light Emitting Diodes) were fitted to the conical housing so that the volunteer patient could fixate in different directions. For each direction, Placido images were digitized and processed to build, by means of an algorithm involving sophisticated computer-graphics elements, a complete three-dimensional map of the entire cornea. RESULTS: The results presented here show that a region up to 100% larger can be mapped with this technique, allowing the clinician to map almost as far as the corneal limbus. Results are shown for a spherical calibration surface and for an in vivo cornea with a high degree of astigmatism, including curvature and elevation. CONCLUSION: This new technique is believed to improve several procedures, such as contact lens fitting and algorithms for customized hyperopia ablations, among others.
Abstract:
OBJECTIVE: To estimate reference values and a ranking function for faculty members in Collective Health in Brazil through analysis of the h-index distribution. METHODS: From the portal of the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior, 934 faculty members were identified in 2008, of whom 819 were analyzed. The h-index of each faculty member was obtained from the Web of Science using search algorithms with control for homonyms and name spelling variants. For each region, and for Brazil as a whole, an exponential probability density function was fitted using the regional mean and decay rate as parameters. Position measures were identified and, using the complement of the cumulative probability function, a ranking function among authors according to h-index was derived for each region. RESULTS: Of the faculty members, 29.8% had no citation record at all (h = 0). The mean h for the country was 3.1, with the highest mean in the South region (4.7). The median h for the country was 2.1, also with the highest median in the South (3.2). For a standardized population of one hundred authors, the top-ranked author for the country should have h = 16; stratifying by region, the first position demands higher values in the Northeast, Southeast and South, the latter requiring h = 24. CONCLUSIONS: As measured by Web of Science h-indices, most authors in Collective Health do not exceed h = 5. There are differences between regions, with the best performance in the South and similar values in the Southeast and Northeast.
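The ranking function follows directly from the exponential fit: with mean µ, the survival function is S(h) = exp(-h/µ), so the h needed to rank first among n authors solves S(h) = 1/n, i.e. h* = µ ln n. A small sketch using the national mean quoted above; the published thresholds were derived from the regionally fitted parameters, so the numbers differ somewhat:

```python
# Rank threshold under an exponential fit to the h-index distribution.
# mu = 3.1 is the national mean from the abstract; the computed ~14.3 is only
# close to the published h = 16, which used the fitted regional model.
import math

mu = 3.1   # national mean h-index
n = 100    # standardized author population

h_top = mu * math.log(n)
print(f"h needed for first place among {n}: {h_top:.1f}")

# Median implied by the same fit, for comparison with the reported 2.1:
print(f"implied median: {mu * math.log(2):.1f}")
```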
Abstract:
Diagnostic methods have been an important tool in regression analysis to detect anomalies, such as departures from the error assumptions and the presence of outliers and influential observations, in the fitted models. Assuming censored data, we considered a classical analysis and a Bayesian analysis assuming noninformative priors for the parameters of the model with a cure fraction. The Bayesian approach was implemented using Markov Chain Monte Carlo methods with Metropolis-Hastings algorithm steps to obtain the posterior summaries of interest. Some influence methods, such as local influence, total local influence of an individual, local influence on predictions and generalized leverage, were derived, analyzed and discussed for survival data with a cure fraction and covariates. The relevance of the approach was illustrated with a real data set, where it is shown that, by removing the most influential observations, the decision about which model best fits the data is changed.
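As a rough illustration of the sampling machinery only (not the paper's cure-fraction model), here is a minimal random-walk Metropolis-Hastings sketch for a plain exponential survival model with right censoring:

```python
# Minimal random-walk Metropolis-Hastings for censored survival data.
# Targets a plain exponential model with a flat prior on log(lambda);
# the MCMC mechanics are the same for richer cure-fraction models.
import math, random

random.seed(0)
times    = [1.2, 0.7, 3.4, 2.2, 5.1, 0.9, 4.0, 2.8]   # toy follow-up times
observed = [1,   1,   0,   1,   0,   1,   1,   0]     # 1 = event, 0 = censored

def log_post(log_lam):
    lam = math.exp(log_lam)
    # events contribute log(lam) - lam*t; censored observations only -lam*t
    return sum(d * math.log(lam) - lam * t for t, d in zip(times, observed))

cur, samples = 0.0, []
for it in range(20000):
    prop = cur + random.gauss(0, 0.5)                 # random-walk proposal
    if math.log(random.random()) < log_post(prop) - log_post(cur):
        cur = prop                                    # accept
    if it >= 5000:                                    # discard burn-in
        samples.append(math.exp(cur))

samples.sort()
print("posterior mean rate:", sum(samples) / len(samples))
print("95% credible interval:",
      samples[int(0.025 * len(samples))], samples[int(0.975 * len(samples))])
```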
Abstract:
The objective of this manuscript is to discuss the existing barriers to the dissemination of medical guidelines, and to present strategies that facilitate the adoption of the recommendations into clinical practice. The literature shows that it usually takes several years until new scientific evidence is adopted in current practice, even when there is an obvious impact on patients' morbidity and mortality. There are examples in which more than thirty years elapsed between the publication of the first case reports on an effective therapy and its routine utilization; that is the case of fibrinolysis for the treatment of acute myocardial infarction. Some of the main barriers to the implementation of new recommendations are: lack of knowledge of a new guideline, personal resistance to change, uncertainty about the efficacy of the proposed recommendation, fear of potential side effects, difficulty in remembering the recommendations, absence of institutional policies reinforcing the recommendation, and even economic constraints. To overcome these barriers, a strategy involving a program with multiple tools is always best. It must include the implementation of easy-to-use algorithms, continuing medical education materials and lectures, electronic or paper alerts, tools that facilitate evaluation and prescription, and periodic audits to show results to the practitioners involved in the process. It is also fundamental that the medical societies concerned with the specific medical issue support the program for its scientific and ethical soundness. The creation of multidisciplinary committees in each institution and the inclusion of opinion leaders with proactive and lasting attitudes are the key points for the program's success. In this manuscript we use the implementation of a guideline for venous thromboembolism prophylaxis as an example, but the concepts described here can easily be applied to any other guideline. These concepts could therefore be very useful for institutions and services aiming at quality improvement in patient care. Changes in current medical practice recommended by guidelines may take some time; however, with broader participation of opinion leaders and the use of the several tools listed here, they surely have a greater probability of reaching the main objectives: improvement in the medical care provided and in patient safety.
Abstract:
Background: Genome-wide association studies (GWAS) are becoming the approach of choice to identify genetic determinants of complex phenotypes and common diseases. The astonishing amount of generated data and the use of distinct genotyping platforms with variable genomic coverage are still analytical challenges. Imputation algorithms combine information from directly genotyped markers with the haplotypic structure of the population of interest to infer a poorly genotyped or missing marker, and are considered a near-zero-cost approach for comparing and combining data generated in different studies. Several reports have stated that imputed markers have an overall acceptable accuracy, but no published report has performed a pairwise comparison of imputed and empirical association statistics for a complete set of GWAS markers. Results: In this report we identified a total of 73 imputed markers that yielded a nominally statistically significant association at P < 10⁻⁵ for type 2 Diabetes Mellitus and compared them with results obtained from empirical allelic frequencies. Interestingly, despite their overall high correlation, association statistics based on imputed frequencies were discordant in 35 of the 73 (47%) associated markers, considerably inflating the type I error rate of imputed markers. We comprehensively tested several quality thresholds, the haplotypic structure underlying imputed markers, and the use of flanking markers as predictors of inaccurate association statistics derived from imputed markers. Conclusions: Our results suggest that association statistics from imputed markers within specific MAF (minor allele frequency) ranges, located in weak linkage disequilibrium blocks, or strongly deviating from local patterns of association are prone to inflated false positive association signals. The present study highlights the potential of imputation procedures and proposes simple procedures for selecting the best imputed markers for follow-up genotyping studies.
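A sketch of the pairwise check described above: compare the allelic association statistic computed from empirical genotype counts with the one computed from imputed allele frequencies for the same marker. All counts below are hypothetical, and the test used here is a standard 2x2 allelic chi-square via scipy:

```python
# Compare empirical vs imputed allelic association for one marker.
# Counts are made up; a marker is flagged as discordant when only one of the
# two P values crosses the study's significance threshold (P < 1e-5).
from scipy.stats import chi2_contingency

n_cases, n_controls = 1000, 1000

# Empirical minor-allele counts (2 alleles per genotyped individual)
empirical = [[460, 2 * n_cases - 460],        # cases:    minor, major
             [380, 2 * n_controls - 380]]     # controls: minor, major

# Counts reconstructed from imputed allele frequencies (rounded dosages)
imputed   = [[495, 2 * n_cases - 495],
             [372, 2 * n_controls - 372]]

for label, table in (("empirical", empirical), ("imputed", imputed)):
    chi2, p, _, _ = chi2_contingency(table, correction=False)
    print(f"{label:9s} chi2 = {chi2:6.2f}   P = {p:.2e}")
```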
Abstract:
Despite the wide distribution of transposable elements (TEs) in mammalian genomes, part of their evolutionary significance remains to be discovered. Today there is a substantial amount of evidence showing that TEs are involved in the generation of new exons in different species. In the present study, we searched 22,805 genes and reported the occurrence of TE cassettes in the coding sequences of 542 cow genes using the RepeatMasker program. Despite the significant number (542) of genes with TE insertions in exons, only 14 (2.6%) of them were translated into protein, which we characterized as chimeric genes. Of these chimeric genes, only the FAST kinase domains 3 (FASTKD3) gene, present on chromosome BTA 20, is a functional gene and showed evidence of an exaptation event. Genome sequence analysis showed that the coding sequence of the last exon of bovine FASTKD3 is approximately 85% similar to the ART2A retrotransposon sequence. In addition, comparison among FASTKD3 proteins shows that the last exon is very divergent from those of Homo sapiens, Pan troglodytes and Canis familiaris. We suggest that the structure of the bovine FASTKD3 gene could have originated through several ectopic recombinations between TE copies. Additionally, the absence of TE sequences in all the other species analyzed suggests that the TE insertion is clade-specific, mainly in the ruminant lineage.
Abstract:
This paper presents a new statistical algorithm to estimate rainfall over the Amazon Basin region using the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI). The algorithm relies on empirical relationships derived for different raining-type systems between coincident measurements of surface rainfall rate and 85-GHz polarization-corrected brightness temperature as observed by the precipitation radar (PR) and TMI on board the TRMM satellite. The scheme includes rain/no-rain area delineation (screening) and system-type classification routines for rain retrieval. The algorithm is validated against independent measurements of the TRMM-PR and S-band dual-polarization Doppler radar (S-Pol) surface rainfall data for two different periods. Moreover, the performance of this rainfall estimation technique is evaluated against well-known methods, namely, the TRMM-2A12 [the Goddard profiling algorithm (GPROF)], the Goddard scattering algorithm (GSCAT), and the National Environmental Satellite, Data, and Information Service (NESDIS) algorithms. The proposed algorithm shows a normalized bias of approximately 23% for both PR and S-Pol ground truth datasets and a mean error of 0.244 mm h⁻¹ (PR) and -0.157 mm h⁻¹ (S-Pol). For rain volume estimates using PR as reference, a correlation coefficient of 0.939 and a normalized bias of 0.039 were found. With respect to rainfall distributions and rain area comparisons, the results showed that the formulation proposed is efficient and compatible with the physics and dynamics of the observed systems over the area of interest. The performance of the other algorithms showed that GSCAT presented low normalized bias for rain areas and rain volume [0.346 (PR) and 0.361 (S-Pol)], and GPROF showed rainfall distribution similar to that of the PR and S-Pol but with a bimodal distribution. Last, the five algorithms were evaluated during the TRMM-Large-Scale Biosphere-Atmosphere Experiment in Amazonia (LBA) 1999 field campaign to verify the precipitation characteristics observed during the easterly and westerly Amazon wind flow regimes. The proposed algorithm presented a cumulative rainfall distribution similar to the observations during the easterly regime, but it underestimated for the westerly period for rainfall rates above 5 mm h⁻¹. NESDIS(1) overestimated for both wind regimes but presented the best westerly representation. NESDIS(2), GSCAT, and GPROF underestimated in both regimes, but GPROF was closer to the observations during the easterly flow.
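For reference, the quoted verification statistics can be computed as below; the arrays are synthetic stand-ins for the TMI retrieval and the PR ground truth:

```python
# Validation statistics of a rain retrieval against a radar reference:
# mean error, normalized bias, and correlation. Data are synthetic.
import numpy as np

rng = np.random.default_rng(7)
reference = rng.gamma(shape=2.0, scale=2.0, size=500)     # "PR" rain rates, mm/h
estimate  = reference * 1.2 + rng.normal(0, 1.0, 500)     # biased "TMI" retrieval
estimate  = np.clip(estimate, 0, None)

mean_error      = np.mean(estimate - reference)                    # mm/h
normalized_bias = (estimate.sum() - reference.sum()) / reference.sum()
correlation     = np.corrcoef(estimate, reference)[0, 1]

print(f"mean error:      {mean_error:+.3f} mm/h")
print(f"normalized bias: {normalized_bias:+.1%}")
print(f"correlation:     {correlation:.3f}")
```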
Abstract:
Context. Classical Be stars are rapid rotators of spectral type late O to early A and luminosity class V-III, which exhibit Balmer emission lines and often a near-infrared excess originating in an equatorially concentrated circumstellar envelope, both produced by sporadic mass ejection episodes. The causes of the abnormal mass loss (the so-called Be phenomenon) are as yet unknown. Aims. For the first time, we can now study Be stars in detail from outside the Earth's atmosphere with sufficient temporal resolution. We investigate the variability of the Be star CoRoT-ID 102761769 observed with the CoRoT satellite in the exoplanet field during the initial run. Methods. One low-resolution spectrum of the star was obtained with the INT telescope at the Observatorio del Roque de los Muchachos. A time series analysis was performed by applying both the CLEANest and singular spectrum analysis algorithms to the CoRoT light curve. To identify the pulsation modes of the observed frequencies, we computed a set of models representative of CoRoT-ID 102761769 by varying its main physical parameters within the uncertainties discussed. Results. We found two close frequencies related to the star: 2.465 c d⁻¹ (28.5 µHz) and 2.441 c d⁻¹ (28.2 µHz). The precision to which those frequencies were determined is 0.018 c d⁻¹ (0.2 µHz). The projected stellar rotation was estimated to be 120 km s⁻¹ from the Fourier transform of spectral lines. If CoRoT-ID 102761769 is a typical Galactic Be star, it rotates near the critical velocity. The critical rotation frequency of a typical B5-6 star is about 3.5 c d⁻¹ (40.5 µHz), which implies that the above frequencies are really caused by stellar pulsations rather than by the star's rotation.
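As the simplest stand-in for the frequency analysis (the study used the CLEANest and singular spectrum analysis algorithms), here is a Fourier-based sketch with one prewhitening step on a synthetic, evenly sampled ~60 d light curve; frequencies and rough amplitudes are toy values chosen to match the two signals quoted above:

```python
# Oversampled FFT plus one prewhitening step to separate two close frequencies.
import numpy as np

f1, f2 = 2.465, 2.441                      # cycles/day, from the abstract
rng = np.random.default_rng(0)
t = np.arange(0.0, 60.0, 0.01)             # toy even sampling over ~60 d
flux = (1e-3 * np.sin(2 * np.pi * f1 * t)
        + 8e-4 * np.sin(2 * np.pi * f2 * t)
        + 5e-4 * rng.normal(size=t.size))

def strongest_peak(y, oversample=8):
    """Frequency and amplitude of the highest peak of a zero-padded FFT."""
    n = y.size * oversample
    freqs = np.fft.rfftfreq(n, d=0.01)
    amp = np.abs(np.fft.rfft(y, n)) * 2 / y.size
    k = amp.argmax()
    return freqs[k], amp[k]

fa, aa = strongest_peak(flux)
# Prewhitening: least-squares removal of the first sinusoid, then search
# the residuals for the second, nearby frequency.
A = np.column_stack([np.sin(2 * np.pi * fa * t), np.cos(2 * np.pi * fa * t)])
coef, *_ = np.linalg.lstsq(A, flux, rcond=None)
fb, ab = strongest_peak(flux - A @ coef)
print(f"first  peak: {fa:.3f} c/d  (amplitude {aa:.1e})")
print(f"second peak: {fb:.3f} c/d  (amplitude {ab:.1e})")
# The Rayleigh resolution 1/T ~ 0.017 c/d is just below the 0.024 c/d
# separation, so a ~60 d run can in principle distinguish the two signals.
```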
Abstract:
Context. CoRoT is a pioneering space mission devoted to the analysis of stellar variability and the photometric detection of extrasolar planets. Aims. We present the list of planetary transit candidates detected in the first field observed by CoRoT, IRa01, the initial run toward the Galactic anticenter, which lasted for 60 days. Methods. We analysed 3898 sources in the coloured bands and 5974 in the monochromatic band. Instrumental noise and stellar variability were taken into account using detrending tools before applying various transit search algorithms. Results. Fifty sources were classified as planetary transit candidates and the most reliable 40 detections were declared targets for follow-up ground-based observations. Two of these targets have so far been confirmed as planets, CoRoT-1b and CoRoT-4b, for which a complete characterization and specific studies were performed.
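The specific detection algorithms are not named in the abstract; as an illustration of one widely used transit search, here is a box least squares (BLS) sketch on a synthetic light curve at a CoRoT-like 512 s cadence, assuming astropy's `astropy.timeseries.BoxLeastSquares` is available. This is an illustration, not the mission pipeline:

```python
# Inject a toy transit into a flat light curve, then recover it with BLS.
import numpy as np
from astropy.timeseries import BoxLeastSquares

rng = np.random.default_rng(42)
t = np.arange(0.0, 60.0, 512 / 86400.0)          # 60 d at 512 s cadence
flux = 1.0 + 2e-4 * rng.normal(size=t.size)      # flat light curve + noise

period, t0, duration, depth = 3.1, 1.2, 0.12, 1.5e-3   # toy hot Jupiter
in_transit = np.abs((t - t0 + period / 2) % period - period / 2) < duration / 2
flux[in_transit] -= depth

bls = BoxLeastSquares(t, flux)
pg = bls.autopower(0.1)                          # trial duration of 0.1 d
best = np.argmax(pg.power)
print(f"recovered period: {pg.period[best]:.3f} d (injected {period} d)")
print(f"recovered depth:  {pg.depth[best]:.1e}  (injected {depth:.1e})")
```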
Abstract:
Aims. In this work, we describe the pipeline for the fast supervised classification of light curves observed by the CoRoT exoplanet CCDs. We present the classification results obtained for the first four measured fields, which represent one year of in-orbit operation. Methods. The basis of the adopted supervised classification methodology has been described in detail in a previous paper, as has its application to the OGLE database. Here, we present the modifications of the algorithms and of the training set that optimize the performance when applied to the CoRoT data. Results. Classification results are presented for the observed fields IRa01, SRc01, LRc01, and LRa01 of the CoRoT mission. Statistics on the number of variables and the number of objects per class are given and typical light curves of high-probability candidates are shown. We also report on new stellar variability types discovered in the CoRoT data. The full classification results are publicly available.
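The pipeline's own feature set and classifier are described in the cited papers; the sketch below only shows the general shape of such a supervised workflow, with generic stand-in features and a random forest rather than the pipeline's actual method:

```python
# Generic supervised light-curve classification: extract a fixed feature
# vector per curve, train on labelled examples, predict with probabilities.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)

def features(time, flux):
    """Toy features: dominant frequency, its amplitude, and flux variance."""
    amp = np.abs(np.fft.rfft(flux - flux.mean())) * 2 / flux.size
    freqs = np.fft.rfftfreq(flux.size, d=time[1] - time[0])
    k = amp[1:].argmax() + 1          # skip the DC bin
    return [freqs[k], amp[k], flux.var()]

def make_curve(kind):
    t = np.arange(0, 30, 0.02)
    if kind == 0:   # "pulsator": coherent sinusoid
        y = 1 + 0.01 * np.sin(2 * np.pi * rng.uniform(5, 15) * t)
    else:           # "irregular": drifting red-ish noise
        y = 1 + np.cumsum(rng.normal(0, 3e-4, t.size))
    return t, y + 1e-3 * rng.normal(size=t.size)

labels = rng.integers(0, 2, 300)
X = np.array([features(*make_curve(k)) for k in labels])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:200], labels[:200])
print("held-out accuracy:", clf.score(X[200:], labels[200:]))
print("class probabilities for one curve:", clf.predict_proba(X[200:201]))
```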
Abstract:
The VISTA near-infrared survey of the Magellanic System (VMC) will provide deep YJKs photometry reaching stars at the oldest turn-off point throughout the Magellanic Clouds (MCs). As part of the preparation for the survey, we aim to assess the accuracy of the star formation history (SFH) that can be expected from VMC data, in particular for the Large Magellanic Cloud (LMC). To this aim, we first simulate VMC images containing not only the LMC stellar populations but also the foreground Milky Way (MW) stars and background galaxies. The simulations cover the whole range of densities of LMC field stars. We then perform aperture photometry on these simulated images, assess the expected levels of photometric errors and incompleteness, and apply the classical technique of SFH recovery based on the reconstruction of colour-magnitude diagrams (CMD) via the minimisation of a chi-squared-like statistic. We verify that the foreground MW stars are accurately recovered by the minimisation algorithms, whereas the background galaxies can be largely eliminated from the CMD analysis owing to their particular colours and morphologies. We then evaluate the expected errors in the recovered star formation rate as a function of stellar age, SFR(t), starting from models with a known age-metallicity relation (AMR). It turns out that, for a given sky area, the random errors for ages older than ~0.4 Gyr seem to be independent of the crowding. This can be explained by a counterbalancing effect between the loss of stars from a decrease in completeness and the gain of stars from an increase in stellar density. For a spatial resolution of ~0.1 deg², the random errors in SFR(t) will be below 20% for this wide range of ages. On the other hand, owing to the lower stellar statistics for stars younger than ~0.4 Gyr, the outer LMC regions will require larger areas to achieve the same level of accuracy in the SFR(t). If we consider the AMR as unknown, the SFH-recovery algorithm is able to accurately recover the input AMR, at the price of an increase in the random errors in SFR(t) by a factor of about 2.5. Experiments of SFH recovery performed with varying distance modulus and reddening indicate that these parameters can be determined with (relative) accuracies of Δ(m−M)₀ ≈ 0.02 mag and ΔE(B−V) ≈ 0.01 mag, for each individual field over the LMC. The propagation of these errors into SFR(t) implies systematic errors below 30%. This level of accuracy in SFR(t) can reveal significant imprints in the dynamical evolution of this unique and nearby stellar system, as well as possible signatures of the past interaction between the MCs and the MW.
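The heart of the CMD-based recovery is the comparison of binned observed and model Hess diagrams through a chi-squared-like statistic. A minimal sketch scoring one synthetic model against synthetic "observations" (the real method minimises this over linear combinations of partial models, which is not reproduced here):

```python
# Bin observed and model CMDs into Hess diagrams, then evaluate a
# chi-squared-like statistic over the bins. All data are synthetic.
import numpy as np

rng = np.random.default_rng(5)

def hess(colour, mag, bins):
    h, _, _ = np.histogram2d(colour, mag, bins=bins,
                             range=[[-0.5, 2.0], [16.0, 24.0]])
    return h

bins = (25, 40)
obs_col, obs_mag = rng.normal(0.8, 0.4, 5000), rng.uniform(16, 24, 5000)
mod_col, mod_mag = rng.normal(0.8, 0.4, 50000), rng.uniform(16, 24, 50000)

obs = hess(obs_col, obs_mag, bins)
mod = hess(mod_col, mod_mag, bins)
mod = mod * obs.sum() / mod.sum()      # normalise model to observed counts

mask = mod > 0                          # avoid empty model bins
chi2 = np.sum((obs[mask] - mod[mask]) ** 2 / mod[mask])
print(f"chi2 = {chi2:.1f} over {mask.sum()} bins")
```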
Abstract:
We study the star/galaxy classification efficiency of 13 different decision tree algorithms applied to photometric objects in the Sloan Digital Sky Survey Data Release Seven (SDSS-DR7). Each algorithm is defined by a set of parameters which, when varied, produce different final classification trees. We extensively explore the parameter space of each algorithm, using the set of 884,126 SDSS objects with spectroscopic data as the training set. The efficiency of star-galaxy separation is measured using the completeness function. We find that the Functional Tree algorithm (FT) yields the best results as measured by the mean completeness in two magnitude intervals: 14 ≤ r ≤ 21 (85.2%) and r ≥ 19 (82.1%). We compare the performance of the tree generated with the optimal FT configuration to the classifications provided by the SDSS parametric classifier, 2DPHOT, and Ball et al. We find that our FT classifier is comparable to or better in completeness over the full magnitude range 15 ≤ r ≤ 21, with much lower contamination than all but the Ball et al. classifier. At the faintest magnitudes (r > 19), our classifier is the only one that maintains high completeness (>80%) while simultaneously achieving low contamination (~2.5%). We also examine the SDSS parametric classifier (psfMag - modelMag) to see if the dividing line between stars and galaxies can be adjusted to improve the classifier. We find that currently stars in close pairs are often misclassified as galaxies, and suggest a new cut to improve the classifier. Finally, we apply our FT classifier to separate stars from galaxies in the full set of 69,545,326 SDSS photometric objects in the magnitude range 14 ≤ r ≤ 21.
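The two figures of merit quoted above can be computed from a labelled validation set as follows; the labels here are random stand-ins for the spectroscopic data:

```python
# Completeness and contamination for the "galaxy" class: completeness is the
# fraction of true galaxies classified as galaxies; contamination is the
# fraction of galaxy-classified objects that are really stars.
import numpy as np

rng = np.random.default_rng(11)
true_is_galaxy = rng.random(10000) < 0.6
pred_is_galaxy = np.where(rng.random(10000) < 0.9,   # 90%-correct toy classifier
                          true_is_galaxy,
                          ~true_is_galaxy)

tp = np.sum(true_is_galaxy & pred_is_galaxy)
fp = np.sum(~true_is_galaxy & pred_is_galaxy)
fn = np.sum(true_is_galaxy & ~pred_is_galaxy)

completeness = tp / (tp + fn)
contamination = fp / (tp + fp)
print(f"completeness:  {completeness:.1%}")
print(f"contamination: {contamination:.1%}")
```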
Abstract:
We study the spin-1/2 Ising model on a Bethe lattice in the mean-field limit, with the interaction constants following one of two deterministic aperiodic sequences, the Fibonacci or period-doubling one. New algorithms of sequence generation were implemented, which were fundamental in obtaining long sequences and, therefore, precise results. We calculate the exact critical temperature for both sequences, as well as the critical exponents β, γ, and δ. For the Fibonacci sequence, the exponents are classical, while for the period-doubling one they depend on the ratio between the two exchange constants. The usual relations between critical exponents are satisfied, within error bars, for the period-doubling sequence. Therefore, we show that mean-field-like procedures may lead to nonclassical critical exponents.
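Both aperiodic sequences can be generated by simple substitution rules (Fibonacci: A → AB, B → A; period-doubling: A → AB, B → AA), with each letter selecting one of the two exchange constants along the lattice. A minimal generation sketch (the paper's own, more efficient algorithms are not described in the abstract):

```python
# Substitution rules for the two deterministic aperiodic coupling sequences.
RULES = {
    "fibonacci":       {"A": "AB", "B": "A"},
    "period-doubling": {"A": "AB", "B": "AA"},
}

def sequence(name, iterations, seed="A"):
    s = seed
    for _ in range(iterations):
        s = "".join(RULES[name][c] for c in s)
    return s

for name in RULES:
    s = sequence(name, 6)
    print(f"{name:16s} {s}  (length {len(s)})")
# Fibonacci lengths follow the Fibonacci numbers (1, 2, 3, 5, 8, ...);
# the period-doubling sequence doubles in length each iteration.
```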