56 results for "Algoritmo Genético" (Genetic Algorithm)
Abstract:
This work addresses the optimization problem in high-dose-rate brachytherapy for the treatment of cancer patients, aiming at the definition of the set of dwell times. The solution technique adopted was Computational Transgenetics supported by the L-BFGS method. The developed algorithm was employed to generate non-dominated solutions whose dose distributions were able to eliminate the cancer while at the same time preserving the normal regions.
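As an illustration of the role L-BFGS plays in this hybrid scheme, the sketch below refines a vector of dwell times against a prescribed dose using SciPy's L-BFGS-B (the bounded variant of L-BFGS); the least-squares objective and the names dose_matrix and prescribed are assumptions for the example, not the thesis's actual criteria.

    import numpy as np
    from scipy.optimize import minimize

    def refine_dwell_times(t0, dose_matrix, prescribed):
        # dose_matrix @ t approximates the dose delivered at each
        # calculation point for dwell-time vector t (illustrative model).
        objective = lambda t: np.sum((dose_matrix @ t - prescribed) ** 2)
        result = minimize(objective, t0, method='L-BFGS-B',
                          bounds=[(0.0, None)] * len(t0))  # dwell times >= 0
        return result.x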
Abstract:
Web services are computational solutions designed according to the principles of Service-Oriented Computing. Web services can be built upon pre-existing services available on the Internet by using composition languages. We propose a method to generate WS-BPEL processes from abstract specifications provided with high-level control-flow information. The proposed method allows the composition designer to concentrate on high-level specifications, in order to increase productivity and generate specifications that are independent of specific web services. We consider service orchestrations, that is, compositions where a central process coordinates all the operations of the application. The process of generating compositions is based on a rule-rewriting algorithm, which has been extended to support basic control-flow information. We created a prototype of the extended refinement method and performed experiments on simple case studies.
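A minimal sketch of the rule-rewriting idea, assuming a toy abstract specification made of 'seq', 'flow', and 'invoke' nodes rewritten into the corresponding WS-BPEL sequence, flow, and invoke constructs; the spec format and operation names are hypothetical, not the thesis's rules.

    import xml.etree.ElementTree as ET

    # Hypothetical abstract spec: ('invoke', operation) leaves, or
    # ('seq' | 'flow', [children]) composite control-flow nodes.
    def rewrite(node):
        if node[0] == 'invoke':
            return ET.Element('invoke', operation=node[1])
        elem = ET.Element('sequence' if node[0] == 'seq' else 'flow')
        for child in node[1]:
            elem.append(rewrite(child))
        return elem

    spec = ('seq', [('invoke', 'getQuote'),
                    ('flow', [('invoke', 'checkStock'),
                              ('invoke', 'checkCredit')])])
    print(ET.tostring(rewrite(spec), encoding='unicode'))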
Abstract:
Data clustering is applied in various fields, such as data mining, image processing, and pattern recognition. Clustering algorithms split a data set into clusters such that elements within the same cluster have a high degree of similarity, while elements belonging to different clusters have a high degree of dissimilarity. The Fuzzy C-Means (FCM) algorithm is one of the most used and discussed fuzzy clustering algorithms in the literature. The performance of FCM is strongly affected by the selection of the initial cluster centers; therefore, the choice of a good set of initial centers is very important for the performance of the algorithm. However, in FCM the choice of initial centers is made randomly, making it difficult to find a good set. This paper proposes three new methods to obtain initial cluster centers deterministically for the FCM algorithm, which can also be used in variants of FCM. In this work, these initialization methods were also applied to the ckMeans variant. With the proposed methods, we intend to obtain a set of initial centers that are close to the real cluster centers. With these new initialization approaches, we aim to reduce the number of iterations these algorithms need to converge, as well as their processing time, without affecting the quality of the clustering, or even improving it in some cases. Accordingly, cluster validation indices were used to measure the quality of the clusters obtained by the modified FCM and ckMeans algorithms with the proposed initialization methods when applied to various data sets.
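The abstract does not detail the three proposed methods, but the sketch below illustrates one generic way to pick initial centers deterministically (a maximin rule: start near the data mean, then greedily pick the point farthest from the centers chosen so far):

    import numpy as np

    def maximin_init(X, c):
        # First center: the point closest to the data mean (deterministic).
        centers = [X[np.argmin(np.linalg.norm(X - X.mean(axis=0), axis=1))]]
        # Remaining centers: points farthest from all centers chosen so far.
        while len(centers) < c:
            d = np.min([np.linalg.norm(X - m, axis=1) for m in centers], axis=0)
            centers.append(X[np.argmax(d)])
        return np.array(centers)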
Abstract:
The Traveling Purchaser Problem (TPP) is a variant of the Traveling Salesman Problem where there is a set of markets and a set of products. Each product is available in a subset of the markets, and its unit cost depends on the market where it is available. The objective is to buy all the products, departing from and returning to a domicile, at the least possible cost, defined as the sum of the weights of the edges in the tour plus the cost paid to acquire the products. A Transgenetic Algorithm, an evolutionary algorithm based on endosymbiosis, is applied to the capacitated and uncapacitated versions of this problem. Evolution in Transgenetic Algorithms is simulated through the interaction and information sharing between populations of individuals from distinct species. The computational results show that this is a very effective approach for the TPP regarding solution quality and runtime. Seventeen and nine new best results are presented for instances of the capacitated and uncapacitated versions, respectively.
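The TPP objective described above can be written down directly; in the sketch below (hypothetical data layout), dist is the market-to-market distance matrix and price[p] maps each market offering product p to its unit cost, with the tour starting and ending at the domicile, index 0:

    def tpp_cost(tour, dist, price):
        # Routing cost: sum of edge weights along the closed tour.
        routing = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        # Purchase cost: each product bought at the cheapest visited market
        # (assumes every product is available in at least one visited market).
        visited = set(tour)
        purchase = sum(min(cost for market, cost in offers.items()
                           if market in visited)
                       for offers in price.values())
        return routing + purchase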
Abstract:
This dissertation describes the construction of an alternative didactic resource incorporating a historical approach, using the Roman abacus to teach multiplication to students in the 2nd year of elementary school, through activities ranging from the representation of numbers to multiplying with the Roman abacus, in order to support the learning of the multiplication algorithm. Qualitative research was used as the methodological approach, since the research object fits the goals of this research mode. Concerning the procedures, the research can be seen as a teaching experiment developed within the school environment. The instruments used for data collection were observation, a logbook, questionnaires, interviews, and document analysis. The data collected through the activities were classified and quantified in tables, for ease of viewing, interpretation, understanding, and analysis, and then transposed to charts. The analysis confirmed the research objectives and contributed to indicating the pedagogical use of the Roman abacus for teaching the multiplication algorithm through several activities. Thus, this educational product can be considered an important contribution to the teaching of this mathematical content in Basic Education, particularly regarding the multiplication process.
Abstract:
The objective of the present thesis was to use the manipulation of oocytes enclosed in preantral follicles (MOEPF) as a tool for the rescue and optimization of female gametes from wild species of the Caatinga biome. The thesis was divided into four experiments. In the first experiment, the estimation and description of the histological and ultrastructural features of agouti (Dasyprocta leporina) preantral follicles (PF) were performed, with an estimated 4419.8 ± 532.26 and 5397.52 ± 574.91 follicles for the right and left ovary, respectively, the majority of which (86.63%) belonged to the primordial follicle category (P<0.05). Most of the population consisted of morphologically normal follicles (70.78%), presenting a large, central nucleus and uniform cytoplasm. Ultrastructural evaluation verified the presence of a great number of round mitochondria associated with lipid droplets. In the second experiment, the estimation and description of the PF characteristics of yellow-toothed cavies (Galea spixii) were performed, along with the evaluation of the effect of solid surface vitrification (SSV) on in situ PF morphology. A total of 416.0 ± 342.8 PF was estimated for the ovary pair, and a large quantity of primary follicles (P<0.05) was evidenced. Most of the PF were morphologically normal (94.6%), with oocyte nuclei presenting condensed granules of heterochromatin. Round or elongated mitochondria were the most abundant organelles. Regarding SSV, the protocol using dimethyl sulfoxide (DMSO) at 3 M allowed the preservation of 69.5% of morphologically normal PF, as evidenced by light and transmission electron microscopy. In the third experiment, the effect of the SSV procedure on the morphology and viability of in situ PF from collared peccaries (Pecari tajacu) was evaluated. No differences were observed among treatments: the use of DMSO, ethylene glycol (EG), and dimethylformamide (DMF) as cryoprotectants, regardless of concentration, preserved the morphology of more than 70% of PF. Concerning PF viability, DMSO and EG promoted the best preservation. The fourth experiment aimed to evaluate the effect of α-MEM+ or TCM199, supplemented or not with 50 ng of FSHr, on the morphology, activation, and growth of collared peccary PF cultured in vitro (IVC) for 1 or 7 days, and the effect on the extracellular matrix (ECM). After 7 days of IVC, only the use of TCM199/FSH maintained the proportion of intact PF similar to day 1 (63.2%); however, no differences were observed among treatments (P>0.05). An improvement in the proportion of intact growing PF was also verified (P>0.05). Ag-NOR analysis showed that only the treatment using TCM199/FSH maintained cell proliferation similar to day 1 (P>0.05). Picrosirius red staining revealed that the ECM remained intact in all treatments (P>0.05). Thus, as a general conclusion, the use of MOEPF in the referred species provided knowledge of aspects related to their reproductive morphology and physiology, enabling germplasm conservation, with the possibility of germplasm bank formation, as well as the elucidation of mechanisms related to PF survival and in vitro development.
Abstract:
The reverse time migration (RTM) algorithm has been widely used in the seismic industry to generate images of the subsurface and thus reduce the risk of oil and gas exploration. Its widespread use is due to the high quality of its subsurface imaging. RTM is also known for its high computational cost; therefore, parallel computing techniques have been used in its implementations. In general, parallel approaches to RTM use coarse granularity, distributing the processing of subsets of seismic shots among the nodes of a distributed system. Coarse-grained parallel approaches to RTM have been shown to be very efficient, since the processing of each seismic shot can be performed independently. For this reason, RTM performance can be considerably improved by also using a finer-grained parallel approach for the processing assigned to each node. This work presents an efficient fine-grained parallel algorithm for 3D reverse time migration using OpenMP. The 3D acoustic wave propagation algorithm makes up much of the RTM; different load-balancing schemes were analyzed in order to minimize possible parallel performance losses at this stage. The results served as a basis for the implementation of the other RTM phases: backpropagation and the imaging condition. The proposed algorithm was tested with synthetic data representing some of the possible subsurface structures. Metrics such as speedup and efficiency were used to analyze its parallel performance. The migrated sections show that the algorithm performed satisfactorily in identifying subsurface structures. As for the parallel performance, the analysis clearly demonstrates the scalability of the algorithm, which achieved a speedup of 22.46 for the wave propagation and 16.95 for the full RTM, both with 24 threads.
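As a sketch of the imaging-condition phase mentioned above, the standard zero-lag cross-correlation condition accumulates, at each grid point, the product of the forward-propagated source wavefield and the backpropagated receiver wavefield over time; the array shapes are an assumption, and the thesis's OpenMP kernels are not reproduced here.

    import numpy as np

    def imaging_condition(source_wf, receiver_wf):
        # Both wavefields have shape (nt, nz, ny, nx); the image is the
        # zero-lag cross-correlation summed over the nt time steps.
        return np.einsum('tzyx,tzyx->zyx', source_wf, receiver_wf)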
Abstract:
There are authentication models that use passwords, keys, or personal identifiers (cards, tags, etc.) to authenticate a particular user in the authentication/identification process. However, other systems can use biometric data, such as signature, fingerprint, or voice, to authenticate an individual. On the other hand, the storage of biometric data can bring risks, such as consistency and protection problems for these data. Given this problem, it is necessary to protect these biometric databases to ensure the integrity and reliability of the system. For this purpose, there are template protection models for biometric identification, for example, the Fuzzy Vault and Fuzzy Commitment schemes. Currently, these models are the ones most used for the protection of biometric data, but they have fragile elements in the protection process. Therefore, increasing the level of security of these methods, through changes in their structure or even by inserting new layers of protection, is one of the goals of this thesis. In other words, this work proposes the simultaneous use of encryption (the Papilio encryption algorithm) with template protection models (Fuzzy Vault and Fuzzy Commitment) in biometric identification systems. The objective of this work is to improve two aspects of biometric systems: security and accuracy. Furthermore, it is necessary to maintain a reasonable level of efficiency in handling these data through the use of more elaborate classification structures, known as committees (ensembles). Therefore, we intend to propose a safer model for biometric identification systems.
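For reference, a minimal sketch of the Fuzzy Commitment scheme named above (the Juels-Wattenberg construction): the template hides an error-correcting codeword, and only the codeword's hash plus the XOR difference are stored. The decode parameter stands in for an ECC decoder, and the Papilio cipher layer proposed in the thesis is not shown.

    import hashlib

    def commit(template, codeword):
        # Store only the hash of the codeword and its XOR with the template.
        delta = bytes(t ^ c for t, c in zip(template, codeword))
        return hashlib.sha256(codeword).digest(), delta

    def verify(sample, stored_hash, delta, decode):
        # A fresh (noisy) sample yields a corrupted codeword; the ECC
        # decoder removes the noise if the sample is close enough.
        candidate = decode(bytes(s ^ d for s, d in zip(sample, delta)))
        return hashlib.sha256(candidate).digest() == stored_hash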
Abstract:
NAVSTAR/GPS (NAVigation System with Timing And Ranging / Global Positioning System), better known as GPS, is a satellite-based navigation system developed by the United States Department of Defense in the mid-1970s. Initially created for military purposes, GPS was later adapted for civilian use. To compute a position, the receiver needs to acquire the signals of the visible satellites. This step is extremely important, since it is responsible for detecting the visible satellites and computing their respective frequencies and initial phases. This process can demand considerable processing time and needs to be implemented efficiently. Several techniques are currently used, but most of them trade off design concerns such as computational complexity, acquisition time, and computational resources. Aiming to balance these concerns, a method was developed that reduces the complexity of the acquisition process using a few strategies, namely, reduction of the Doppler effect and of the number of samples and signal length used, in addition to parallelism. This strategy is divided into two steps: a coarse search over the entire search space and a fine search only in the region previously identified by the first step. Because of the coarse search, the threshold of the conventional algorithm was no longer acceptable. Therefore, a new threshold was established, based on the variance of the correlation peaks. Initially, a low-precision search is performed, comparing the variance of the five largest correlation peaks found. If the variance exceeds a certain threshold, the region of the largest peak becomes a detection candidate. Finally, this region goes through a refinement step to confirm the detection. The results show a significant reduction in complexity and execution time, without the need to resort to very complex algorithms.
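A sketch of the coarse-search decision rule described above, assuming corr holds the correlation magnitudes over the (code phase x Doppler) search grid and threshold is calibrated offline:

    import numpy as np

    def coarse_candidate(corr, threshold):
        # Compare the variance of the five largest correlation peaks
        # against the threshold; if exceeded, the region of the largest
        # peak becomes a candidate for the fine search.
        top5 = np.sort(corr.ravel())[-5:]
        candidate = np.var(top5) > threshold
        return candidate, np.unravel_index(np.argmax(corr), corr.shape)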
Abstract:
Recently, several evolutionary computation techniques have been used in areas such as parameter estimation of linear and nonlinear dynamic processes, or even processes subject to uncertainties. This motivates the use of algorithms such as Particle Swarm Optimization (PSO) in these areas. However, little is known about the convergence of this algorithm, and most analyses and studies have concentrated on experimental results. Therefore, the objective of this work is to propose a new structure for PSO that allows a better analytical study of the algorithm's convergence. To this end, PSO is restructured into a matrix form and reformulated as a piecewise linear system. The pieces are analyzed separately, and the insertion of a forgetting factor is proposed, which guarantees that the most significant part of this system has eigenvalues inside the unit circle. The convergence of the algorithm as a whole is also analyzed, using an almost-sure convergence criterion applicable to switched systems. Next, experimental tests are carried out to verify the behavior of the eigenvalues after the insertion of the forgetting factor. Subsequently, traditional parameter identification algorithms are combined with the matrix PSO, so as to make the identification results as good as or better than identification with PSO alone or with the traditional algorithms alone. The results show the convergence of the particles within a bounded region, and that the functions obtained after combining the matrix PSO with the conventional algorithms generalize better for the system presented. The conclusions are that the hybridization, despite limiting the search for a fitter PSO particle, guarantees a minimum performance for the algorithm and still makes it possible to improve the result obtained with the traditional algorithms, allowing the approximate system to be represented over a wider range of frequencies.
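To make the eigenvalue argument concrete, the sketch below tests the deterministic per-particle PSO dynamics in state-space (matrix) form; the forgetting factor lam scaling the dynamics is an illustrative stand-in for the mechanism proposed in the work, and c lumps the two acceleration terms together.

    import numpy as np

    def pso_converges(w, c, lam=1.0):
        # State [x(t), v(t)] evolves (deterministic part) as
        #   x(t+1) = (1 - c) x(t) + w v(t) + c p
        #   v(t+1) =      -c x(t) + w v(t) + c p
        # Convergence requires the eigenvalues of the homogeneous part,
        # scaled by the forgetting factor lam, to lie inside the unit circle.
        A = lam * np.array([[1.0 - c, w],
                            [-c,      w]])
        return bool(np.all(np.abs(np.linalg.eigvals(A)) < 1.0))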
Abstract:
With the growth of energy consumption worldwide, conventional reservoirs, the so-called "easy exploration and production" reservoirs, are no longer meeting the global energy demand. This has led many researchers to develop projects that address these needs, and companies in the oil sector have invested in techniques that help in locating and drilling wells. One of the techniques employed in the oil exploration process is reverse time migration (RTM), a seismic imaging method that produces excellent images of the subsurface. It is an algorithm based on the computation of the wave equation, and it is considered one of the most advanced seismic imaging techniques. The economic value of the oil reserves that require RTM to be located is very high, which means that the development of these algorithms becomes a competitive differentiator for seismic processing companies. However, RTM requires great computational power, which still somewhat hinders its practical success. The objective of this work is to explore the implementation of this algorithm on unconventional architectures, specifically GPUs using CUDA, analyzing the difficulties in developing it, as well as the performance of the algorithm in its sequential and parallel versions.
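As an illustration of the GPU kernel at the core of RTM, the sketch below advances the 2D acoustic wave equation one explicit time step; it uses Numba's CUDA backend for brevity, whereas the thesis targets CUDA C directly, and the 2D second-order stencil is a simplification of the full 3D scheme.

    from numba import cuda

    @cuda.jit
    def wave_step(p_prev, p_curr, p_next, vel2dt2, inv_h2):
        # One thread per grid point; skip the boundary layer.
        i, j = cuda.grid(2)
        if 1 <= i < p_curr.shape[0] - 1 and 1 <= j < p_curr.shape[1] - 1:
            lap = (p_curr[i - 1, j] + p_curr[i + 1, j] +
                   p_curr[i, j - 1] + p_curr[i, j + 1] -
                   4.0 * p_curr[i, j]) * inv_h2
            # Leapfrog update: vel2dt2 holds (velocity**2 * dt**2) per point.
            p_next[i, j] = (2.0 * p_curr[i, j] - p_prev[i, j]
                            + vel2dt2[i, j] * lap)

A kernel like this would be launched once per time step over a 2D thread grid, e.g. wave_step[blocks, threads_per_block](p0, p1, p2, v2dt2, 1.0 / h**2), with the three pressure buffers rotated between steps.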