887 results for Boosting Algorithm
Abstract:
This work proposes a collaborative system for marking dangerous points along transport routes and generating alerts to drivers. It consists of a proximity warning system for danger points that is fed by drivers via mobile devices equipped with GPS. The system consolidates the data provided by several different drivers and generates a set of common points to be used in the warning system. Although the application is designed to protect drivers, the data it generates can also serve as input for the responsible authorities to improve signage and the recovery of public roads.
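As a rough illustration of the proximity check such a warning system performs, the sketch below compares the driver's GPS position against consolidated danger points using the haversine distance; the structure names and the 200 m alert radius are illustrative assumptions, not details from the original work:

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// A consolidated danger point (latitude/longitude in degrees).
struct DangerPoint { double lat, lon; };

// Great-circle distance in meters between two GPS coordinates (haversine formula).
double haversineMeters(double lat1, double lon1, double lat2, double lon2) {
    const double R = 6371000.0;                  // mean Earth radius, meters
    const double rad = 3.14159265358979323846 / 180.0;
    double dLat = (lat2 - lat1) * rad, dLon = (lon2 - lon1) * rad;
    double a = std::sin(dLat / 2) * std::sin(dLat / 2) +
               std::cos(lat1 * rad) * std::cos(lat2 * rad) *
               std::sin(dLon / 2) * std::sin(dLon / 2);
    return 2.0 * R * std::asin(std::sqrt(a));
}

// Warn when the driver is within `radius` meters of any consolidated point.
void checkProximity(double lat, double lon,
                    const std::vector<DangerPoint>& points, double radius) {
    for (const auto& p : points) {
        double d = haversineMeters(lat, lon, p.lat, p.lon);
        if (d <= radius)
            std::printf("Warning: danger point %.0f m away\n", d);
    }
}

int main() {
    std::vector<DangerPoint> points = {{-5.7945, -35.2110}};  // example point
    checkProximity(-5.7940, -35.2105, points, 200.0);         // driver position
}
```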
Abstract:
The seismic method is of extreme importance in geophysics. Mainly associated with oil exploration, this line of research attracts most of the investment in the area. The acquisition, processing, and interpretation of seismic data are the parts that constitute a seismic study. Seismic processing in particular is focused on imaging the geological structures in the subsurface. It has evolved significantly in recent decades due to the demands of the oil industry and also to technological advances in hardware, which achieved higher storage and digital information processing capabilities and enabled the development of more sophisticated processing algorithms, such as those that make use of parallel architectures. One of the most important steps in seismic processing is imaging. Migration of seismic data is one of the techniques used for imaging, with the goal of obtaining a seismic section image that represents the geological structures as accurately and faithfully as possible. The result of migration is a 2D or 3D image in which it is possible to identify faults and salt domes, among other structures of interest such as potential hydrocarbon reservoirs. However, a migration performed with quality and accuracy can be a very time-consuming process, due to the heuristics of the mathematical algorithm and the extensive amount of input and output data involved, which may take days, weeks, or even months of uninterrupted execution on supercomputers, representing large computational and financial costs that could derail the application of these methods. Aiming at performance improvement, this work parallelized the core of a Reverse Time Migration (RTM) algorithm using the Open Multi-Processing (OpenMP) parallel programming model, given the large computational effort required by this migration technique. Furthermore, speedup and efficiency analyses were performed and, ultimately, the degree of algorithmic scalability was identified with respect to the technological advances expected in future processors.
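The abstract does not reproduce the RTM kernel itself, but loop-level OpenMP parallelization of a finite-difference wave-propagation stencil, the kernel that dominates RTM, typically looks like the following minimal sketch (array names and the 5-point Laplacian are assumptions):

```cpp
#include <omp.h>
#include <vector>

// One time step of a 2D acoustic finite-difference stencil, the kind of kernel
// that dominates Reverse Time Migration. The grid loops are shared among
// threads with OpenMP; vel2dt2 holds the precomputed v^2 * dt^2 per cell.
void waveStep(const std::vector<float>& prev, const std::vector<float>& cur,
              std::vector<float>& next, const std::vector<float>& vel2dt2,
              int nx, int nz, float inv_h2) {
    #pragma omp parallel for collapse(2)
    for (int i = 1; i < nz - 1; ++i) {
        for (int j = 1; j < nx - 1; ++j) {
            int k = i * nx + j;
            float lap = (cur[k - 1] + cur[k + 1] + cur[k - nx] + cur[k + nx]
                         - 4.0f * cur[k]) * inv_h2;   // 5-point Laplacian
            // Leapfrog update: u(t+dt) = 2u(t) - u(t-dt) + v^2 dt^2 Lap(u)
            next[k] = 2.0f * cur[k] - prev[k] + vel2dt2[k] * lap;
        }
    }
}
```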
Abstract:
This work presents a scalable and efficient parallel implementation of the standard Simplex algorithm on multicore architectures for solving large-scale linear programming problems. We present a general scheme explaining how each step of the standard Simplex algorithm was parallelized, indicating some important points of the parallel implementation. Performance analyses were conducted by comparing the sequential time of the Simplex tableau with that of IBM's CPLEX Simplex. The experiments were executed on a shared-memory machine with 24 cores. The scalability analysis was performed with problems of different dimensions, finding evidence that our parallel standard Simplex algorithm has better parallel efficiency for problems with more variables than constraints. In comparison with CPLEX, the proposed parallel algorithm achieved an efficiency up to 16 times better.
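A minimal sketch of the kind of loop-level parallelization the abstract describes, applied to the pivot step of the Simplex tableau, where most of the work concentrates; this is an illustrative OpenMP version, not the authors' code:

```cpp
#include <omp.h>
#include <vector>

// One pivot (row-reduction) step of the standard Simplex tableau.
// Each non-pivot row update is independent of the others, so the row loop
// parallelizes cleanly across cores.
void pivot(std::vector<std::vector<double>>& T, int pr, int pc) {
    int rows = T.size(), cols = T[0].size();
    double pivotVal = T[pr][pc];

    // Normalize the pivot row.
    for (int j = 0; j < cols; ++j) T[pr][j] /= pivotVal;

    // Eliminate the pivot column from all other rows in parallel.
    #pragma omp parallel for
    for (int i = 0; i < rows; ++i) {
        if (i == pr) continue;
        double factor = T[i][pc];
        for (int j = 0; j < cols; ++j)
            T[i][j] -= factor * T[pr][j];
    }
}
```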
Abstract:
This work presents improvement strategies for a successful evolutionary metaheuristic for the Asymmetric Traveling Salesman Problem: a Memetic Algorithm designed specifically for this problem. The improvements apply optimization techniques known as Path-Relinking and Vocabulary Building, the latter used in two different ways in order to evaluate the effects of the improvements on the evolutionary metaheuristic. These methods were implemented in C++ and the experiments were carried out on instances from the TSPLIB library; the results show that the proposed procedures were successful in the tests performed.
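As an illustration of the Path-Relinking step mentioned above, the sketch below walks from one ATSP tour toward a guiding tour, fixing one position per swap and keeping the best intermediate solution; the function names and the swap-based relinking move are assumptions, not the authors' exact design:

```cpp
#include <algorithm>
#include <vector>

// Tour cost under an asymmetric distance matrix d.
double tourCost(const std::vector<int>& t, const std::vector<std::vector<double>>& d) {
    double c = 0.0;
    for (size_t i = 0; i < t.size(); ++i)
        c += d[t[i]][t[(i + 1) % t.size()]];
    return c;
}

// Path-Relinking between two ATSP tours: walk from `cur` toward `guide`,
// fixing one position per step via a swap, and keep the best intermediate tour.
std::vector<int> pathRelink(std::vector<int> cur, const std::vector<int>& guide,
                            const std::vector<std::vector<double>>& d) {
    std::vector<int> best = cur;
    double bestCost = tourCost(cur, d);
    for (size_t i = 0; i < cur.size(); ++i) {
        if (cur[i] == guide[i]) continue;
        // Positions before i already match guide, so guide[i] lies further on.
        auto it = std::find(cur.begin() + i + 1, cur.end(), guide[i]);
        std::iter_swap(cur.begin() + i, it);
        double c = tourCost(cur, d);
        if (c < bestCost) { bestCost = c; best = cur; }
    }
    return best;
}
```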
Abstract:
Individuals with hearing loss frequently have difficulty understanding speech in noisy environments. OBJECTIVE: The objective of this study was to clinically evaluate the speech-perception performance of adult individuals with sensorineural hearing loss, using a digital hearing aid with a noise-reduction algorithm called Speech Sensitive Processing, activated and deactivated in the presence of noise. MATERIALS AND METHODS: This case study was carried out on 32 individuals with sensorineural hearing loss of mild, moderate, or mild-to-moderate degree. The evaluation used a speech-perception test that assessed sentence recognition in the presence of noise in order to obtain the signal-to-noise ratio while the subjects used the digital hearing aid. RESULTS: The algorithm provided a benefit for most of the hearing-impaired individuals in the signal-to-noise ratio assessment, and the results showed a statistically significant difference between the condition in which the algorithm was activated and the condition in which it was not. CONCLUSION: The use of the noise-reduction algorithm should be considered as a clinical alternative, since we observed the effectiveness of this system in reducing noise and improving speech perception.
Abstract:
This work proposes and evaluates a modification of Ant Colony Optimization based on the results of experiments performed on the Selective Ride Robot problem (PRS), a new problem also proposed in this paper. Four metaheuristics are implemented, GRASP, VNS, and two versions of Ant Colony Optimization, and their results are analyzed by running the algorithms on 32 instances created during this work. The metaheuristics also have their results compared to an exact approach. The results show that the algorithm implemented with the GRASP metaheuristic achieves good results, and that the multicolony version of the ant colony algorithm, proposed and evaluated in this work, shows the best results.
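For reference, the classic Ant Colony Optimization transition rule that both ACO versions build on can be sketched as follows; the parameter names and the roulette-wheel selection are standard textbook choices, not details taken from the paper:

```cpp
#include <cmath>
#include <random>
#include <vector>

// Classic ACO transition rule: an ant at node i picks the next node j with
// probability proportional to tau[i][j]^alpha * eta[i][j]^beta, where tau is
// the pheromone level and eta a heuristic desirability (e.g. 1/distance).
int chooseNext(int i, const std::vector<bool>& visited,
               const std::vector<std::vector<double>>& tau,
               const std::vector<std::vector<double>>& eta,
               double alpha, double beta, std::mt19937& rng) {
    int n = tau.size();
    std::vector<double> w(n, 0.0);
    double total = 0.0;
    for (int j = 0; j < n; ++j)
        if (!visited[j] && j != i)
            total += w[j] = std::pow(tau[i][j], alpha) * std::pow(eta[i][j], beta);

    // Roulette-wheel selection over the unvisited candidates.
    std::uniform_real_distribution<double> u(0.0, total);
    double r = u(rng), acc = 0.0;
    for (int j = 0; j < n; ++j)
        if (w[j] > 0.0 && (acc += w[j]) >= r) return j;
    return -1;  // no feasible successor
}
```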
Abstract:
This work addresses the optimization problem in high-dose-rate brachytherapy for the treatment of cancer patients, aiming at the definition of the set of dwell times. The solution technique adopted was Computational Transgenetics supported by the L-BFGS method. The developed algorithm was employed to generate non-dominated solutions whose dose distributions are capable of eliminating the cancer while preserving the normal regions.
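The dwell-time objective handed to a quasi-Newton solver such as L-BFGS is typically a smooth penalty on dose violations; a minimal sketch is shown below, assuming a linear dose model and a quadratic penalty (the kernel K, the bounds, and the penalty form are illustrative, not the authors' formulation):

```cpp
#include <algorithm>
#include <vector>

// Dose at calculation point p is linear in the dwell times t:
//   dose_p = sum_j K[p][j] * t[j],  with K the dose-rate kernel.
// Penalty objective for a gradient-based solver: squared underdose below
// `lower` (tumor points) and squared overdose above `upper` (normal tissue).
double objective(const std::vector<double>& t,
                 const std::vector<std::vector<double>>& K,
                 const std::vector<double>& lower,
                 const std::vector<double>& upper,
                 std::vector<double>& grad) {
    size_t nPts = K.size(), nDwell = t.size();
    std::fill(grad.begin(), grad.end(), 0.0);
    double f = 0.0;
    for (size_t p = 0; p < nPts; ++p) {
        double dose = 0.0;
        for (size_t j = 0; j < nDwell; ++j) dose += K[p][j] * t[j];
        double viol = 0.0;
        if (dose < lower[p]) viol = dose - lower[p];       // underdose (negative)
        else if (dose > upper[p]) viol = dose - upper[p];  // overdose (positive)
        f += viol * viol;
        // Gradient of viol^2 with respect to t[j] is 2 * viol * K[p][j].
        if (viol != 0.0)
            for (size_t j = 0; j < nDwell; ++j) grad[j] += 2.0 * viol * K[p][j];
    }
    return f;
}
```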
Abstract:
Web services are computational solutions designed according to the principles of Service-Oriented Computing. Web services can be built upon pre-existing services available on the Internet by using composition languages. We propose a method to generate WS-BPEL processes from abstract specifications provided with high-level control-flow information. The proposed method allows the composition designer to concentrate on high-level specifications, in order to increase productivity and generate specifications that are independent of specific web services. We consider service orchestrations, that is, compositions where a central process coordinates all the operations of the application. The process of generating compositions is based on a rule-rewriting algorithm, which has been extended to support basic control-flow information. We created a prototype of the extended refinement method and performed experiments on simple case studies.
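A toy illustration of rule rewriting in the spirit of the method: repeatedly apply pattern-replacement rules to an abstract specification until a fixpoint is reached. A real refinement would operate on a process tree and emit WS-BPEL; all names here are assumptions:

```cpp
#include <string>
#include <vector>

// A rewrite rule maps an abstract activity pattern to a concrete fragment.
// This sketch rewrites flat text; it assumes the rule set is terminating,
// i.e. no replacement reintroduces a pattern.
struct Rule { std::string pattern, replacement; };

// Repeatedly apply the first matching rule until no rule matches (fixpoint).
std::string refine(std::string spec, const std::vector<Rule>& rules) {
    bool changed = true;
    while (changed) {
        changed = false;
        for (const auto& r : rules) {
            auto pos = spec.find(r.pattern);
            if (pos != std::string::npos) {
                spec.replace(pos, r.pattern.size(), r.replacement);
                changed = true;
            }
        }
    }
    return spec;
}

// Example: refine an abstract "sequence(A,B)" into WS-BPEL-like markup.
// refine("sequence(getQuote,placeOrder)",
//        {{"sequence(", "<sequence><invoke op=\""},
//         {",", "\"/><invoke op=\""},
//         {")", "\"/></sequence>"}});
```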
Abstract:
Data clustering is applied in various fields, such as data mining, image processing, and pattern recognition. Clustering algorithms split a data set into clusters such that elements within the same cluster have a high degree of similarity, while elements belonging to different clusters have a high degree of dissimilarity. The Fuzzy C-Means algorithm (FCM) is the fuzzy clustering algorithm most used and discussed in the literature. The performance of FCM is strongly affected by the selection of the initial cluster centers, so choosing a good set of initial centers is very important for the performance of the algorithm. However, in FCM the initial centers are chosen randomly, making it difficult to find a good set. This paper proposes three new methods to obtain initial cluster centers deterministically for the FCM algorithm, which can also be used in variants of FCM; in this work, the initialization methods were applied to the ckMeans variant. With the proposed methods, we intend to obtain a set of initial centers close to the real cluster centers, and thereby to reduce the number of iterations these algorithms need to converge and their processing time, without degrading the quality of the clustering, or even improving it in some cases. Accordingly, cluster validation indices were used to measure the quality of the clusters obtained by the modified FCM and ckMeans algorithms with the proposed initialization methods when applied to various data sets.
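Since the proposal concerns initialization, the sketch below shows the standard FCM iteration whose convergence speed those initial centers govern, here on one-dimensional data with the usual fuzzifier m = 2; this is the textbook update, not the paper's variant:

```cpp
#include <cmath>
#include <vector>

// One FCM iteration on 1-D data: update memberships u[i][k] from the current
// centers, then recompute the centers as membership-weighted means.
// m > 1 is the fuzzifier; 2.0 is the usual choice.
void fcmStep(const std::vector<double>& x, std::vector<double>& centers,
             std::vector<std::vector<double>>& u, double m) {
    size_t n = x.size(), c = centers.size();
    double e = 2.0 / (m - 1.0);

    // Membership update: u[i][k] = 1 / sum_j (d_ik / d_ij)^(2/(m-1)).
    for (size_t i = 0; i < n; ++i)
        for (size_t k = 0; k < c; ++k) {
            double dik = std::fabs(x[i] - centers[k]) + 1e-12;
            double s = 0.0;
            for (size_t j = 0; j < c; ++j)
                s += std::pow(dik / (std::fabs(x[i] - centers[j]) + 1e-12), e);
            u[i][k] = 1.0 / s;
        }

    // Center update: weighted mean with weights u^m.
    for (size_t k = 0; k < c; ++k) {
        double num = 0.0, den = 0.0;
        for (size_t i = 0; i < n; ++i) {
            double w = std::pow(u[i][k], m);
            num += w * x[i]; den += w;
        }
        centers[k] = num / den;
    }
}
```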
Abstract:
The Traveling Purchaser Problem is a variant of the Traveling Salesman Problem in which there is a set of markets and a set of products. Each product is available in a subset of the markets, and its unit cost depends on the market where it is available. The objective is to buy all the products, departing from and returning to a domicile, at the least possible cost, defined as the sum of the weights of the edges in the tour and the prices paid to acquire the products. A Transgenetic Algorithm, an evolutionary algorithm based on endosymbiosis, is applied to the capacitated and uncapacitated versions of this problem. Evolution in Transgenetic Algorithms is simulated through the interaction and information sharing between populations of individuals from distinct species. The computational results show that this is a very effective approach for the TPP regarding solution quality and runtime. Seventeen and nine new best results are presented for instances of the capacitated and uncapacitated versions, respectively.
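For concreteness, evaluating a TPP solution combines routing and purchasing costs; a minimal sketch for the uncapacitated version, with illustrative names, might look like this:

```cpp
#include <algorithm>
#include <limits>
#include <vector>

// Cost of a Traveling Purchaser solution: travel cost of the market tour
// (starting and ending at the domicile, index 0) plus, for each product, the
// cheapest price among the visited markets. INF marks "not sold here".
const double INF = std::numeric_limits<double>::infinity();

double tppCost(const std::vector<int>& tour,                      // visited markets
               const std::vector<std::vector<double>>& dist,      // travel distances
               const std::vector<std::vector<double>>& price) {   // price[product][market]
    double cost = 0.0;
    int prev = 0;                                // domicile
    for (int mkt : tour) { cost += dist[prev][mkt]; prev = mkt; }
    cost += dist[prev][0];                       // return to the domicile

    for (const auto& p : price) {                // cheapest offer per product
        double best = INF;
        for (int mkt : tour) best = std::min(best, p[mkt]);
        if (best == INF) return INF;             // infeasible: product unavailable
        cost += best;
    }
    return cost;
}
```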
Abstract:
Considering the growing use of digital signal processing techniques in electronic and power systems applications, this article discusses the use of the Recursive Discrete Fourier Transform (RDFT) to identify the phase angle, frequency, and amplitude of the fundamental grid voltages, regardless of waveform distortions or amplitude transients. It is shown that, if the fundamental frequency of the measured voltages coincides with the frequency for which the DFT was designed, a simple RDFT algorithm is fully capable of providing the required phase, frequency, and amplitude information. Two additional algorithms are proposed to guarantee correct performance when the frequency differs from its nominal value: one to correct the phase error of the output signal and another to identify the amplitude of the fundamental component. Furthermore, with the proposed algorithms, regardless of the input signal, the fundamental component can be identified within at most two grid cycles. An analysis of the RDFT results was carried out through computational simulations. Experimental results are also presented for the synchronization of a synchronous generator with the power grid, using the signals provided by the RDFT.
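The core recursive update behind a sliding DFT of this kind can be sketched as follows, for the fundamental bin of an N-sample window; this is the textbook recurrence, not the article's complete scheme with its two correction algorithms:

```cpp
#include <complex>
#include <vector>

// Sliding (recursive) DFT for the fundamental bin: each new sample updates the
// previous spectral value in O(1) instead of recomputing an N-point DFT:
//   X <- (X + x_new - x_oldest) * e^{j*2*pi/N}
// From X one reads the amplitude (2|X|/N, for a real sinusoid) and the phase.
class RecursiveDFT {
    int N_, pos_ = 0;
    std::vector<double> buf_;           // circular buffer of the last N samples
    std::complex<double> X_{0.0, 0.0};  // running fundamental-bin value
    std::complex<double> twiddle_;
public:
    explicit RecursiveDFT(int N) : N_(N), buf_(N, 0.0) {
        const double PI = 3.14159265358979323846;
        twiddle_ = std::polar(1.0, 2.0 * PI / N);
    }
    void push(double x) {
        X_ = (X_ + std::complex<double>(x - buf_[pos_], 0.0)) * twiddle_;
        buf_[pos_] = x;
        pos_ = (pos_ + 1) % N_;
    }
    double amplitude() const { return 2.0 * std::abs(X_) / N_; }
    double phase() const { return std::arg(X_); }
};
```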
Abstract:
In the tool switching minimization problem, one seeks a sequence in which to process a set of jobs such that the required number of tool switches is as small as possible. This work proposes an algorithm to solve this problem based on a partial ordering of the jobs. An optimal sequence is obtained by expanding the enumerated partial sequences. Computational tests are presented.
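Given a fixed job sequence, the number of tool switches is classically evaluated with the Keep Tool Needed Soonest (KTNS) policy, which evicts the loaded tool whose next use lies furthest in the future; a minimal sketch is shown below, assuming each job needs at most C tools, though the abstract does not state which evaluation the authors use:

```cpp
#include <algorithm>
#include <set>
#include <vector>

// Count tool switches for a fixed job sequence with magazine capacity C using
// the KTNS policy. Initial loading is free; each later insertion into a full
// magazine evicts one tool and counts as one switch.
int countSwitches(const std::vector<std::vector<int>>& jobs, int C) {
    std::set<int> magazine;
    int switches = 0;
    for (size_t j = 0; j < jobs.size(); ++j) {
        for (int tool : jobs[j]) {
            if (magazine.count(tool)) continue;
            if ((int)magazine.size() == C) {
                // Evict the loaded tool (not needed by the current job) whose
                // next use is furthest away.
                int evict = -1; size_t far = 0;
                for (int t : magazine) {
                    if (std::find(jobs[j].begin(), jobs[j].end(), t) != jobs[j].end())
                        continue;               // needed now, keep it
                    size_t next = jobs.size();  // "never needed again"
                    for (size_t k = j + 1; k < jobs.size(); ++k)
                        if (std::find(jobs[k].begin(), jobs[k].end(), t) != jobs[k].end()) {
                            next = k; break;
                        }
                    if (next >= far) { far = next; evict = t; }
                }
                magazine.erase(evict);
                ++switches;
            }
            magazine.insert(tool);
        }
    }
    return switches;
}
```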
Abstract:
This paper presents the Benders decomposition technique and a Branch and Bound algorithm applied to reactive power planning in electric energy systems. Benders decomposition separates the planning problem into two subproblems, an investment subproblem (master) and an operation subproblem (slave), which are solved alternately. The operation subproblem is solved using a successive linear programming (SLP) algorithm, while the investment subproblem, an integer linear programming (ILP) problem with discrete variables, is solved using a Branch and Bound algorithm developed especially for this type of problem.
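To make the master/slave alternation concrete, the toy below runs a complete Benders loop on a one-variable capacity-planning problem: the master (enumeration standing in for Branch and Bound) minimizes investment cost plus the cut-approximated operating cost, and the slave returns the true operating cost and a subgradient cut. All numbers are illustrative:

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Toy Benders decomposition: choose capacity x in {0,...,5} at investment cost
// c per unit; the "operation subproblem" charges a penalty for unmet demand,
//   q(x) = penalty * max(0, demand - x),
// which is convex in x, so each evaluation yields a valid Benders cut
//   theta >= q0 + g*(x - x0), with subgradient g.
struct Cut { double g, x0, q0; };

int main() {
    const double c = 2.0, penalty = 5.0, demand = 3.0;
    std::vector<Cut> cuts;
    double lower = -1e18, upper = 1e18;

    while (upper - lower > 1e-9) {
        // Master: pick integer x minimizing c*x + theta(x), theta from the cuts.
        double bestObj = 1e18, bestTheta = 0.0; int bestX = 0;
        for (int x = 0; x <= 5; ++x) {
            double theta = 0.0;  // q >= 0 always holds in this toy
            for (const Cut& k : cuts)
                theta = std::max(theta, k.q0 + k.g * (x - k.x0));
            if (c * x + theta < bestObj) { bestObj = c * x + theta; bestX = x; bestTheta = theta; }
        }
        lower = bestObj;                              // master value = lower bound

        // Slave: true operating cost and a subgradient at the proposed plan.
        double shortfall = std::max(0.0, demand - bestX);
        double q = penalty * shortfall;
        double g = (shortfall > 0.0) ? -penalty : 0.0;
        upper = std::min(upper, c * bestX + q);       // feasible plan = upper bound
        cuts.push_back({g, (double)bestX, q});        // Benders cut tightens the master
        std::printf("x=%d  bounds [%g, %g]\n", bestX, lower, upper);
    }
}
```

On this instance the loop converges in two iterations: the first master ignores operation and proposes x = 0, the slave returns a shortfall cut, and the second master picks x = 3, closing the gap.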