32 results for algoritmo, localizzazione, sonar


Relevance:

20.00%

Publisher:

Abstract:

This work proposes a collaborative system for marking dangerous points along transport routes and generating alerts for drivers. It consists of a proximity warning system for danger points that is fed by drivers via a mobile device equipped with GPS. The system consolidates the data provided by several different drivers and generates a set of common points to be used by the warning system. Although the application is designed to protect drivers, the data it generates can also serve as input for the responsible authorities to improve signage and the maintenance of public roads.
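
As a concrete illustration of the consolidation step, here is a minimal C++ sketch that groups driver reports falling within a fixed radius of each other and averages their coordinates. The names (`Point`, `consolidate`) and the 50 m grouping radius are assumptions for illustration; the abstract does not specify the actual consolidation rule.

```cpp
#include <cmath>
#include <vector>

struct Point { double lat, lon; int reports = 1; };

// Great-circle (haversine) distance in meters between two GPS fixes.
double haversine(const Point& a, const Point& b) {
    const double R = 6371000.0;                      // Earth radius, meters
    const double rad = 3.14159265358979323846 / 180.0;
    double dlat = (b.lat - a.lat) * rad, dlon = (b.lon - a.lon) * rad;
    double h = std::sin(dlat / 2) * std::sin(dlat / 2) +
               std::cos(a.lat * rad) * std::cos(b.lat * rad) *
               std::sin(dlon / 2) * std::sin(dlon / 2);
    return 2.0 * R * std::asin(std::sqrt(h));
}

// Merge each report into the first consolidated point within `radius`
// meters (averaging coordinates); otherwise start a new point.
std::vector<Point> consolidate(const std::vector<Point>& reports,
                               double radius = 50.0) {  // assumed radius
    std::vector<Point> merged;
    for (const Point& r : reports) {
        bool grouped = false;
        for (Point& m : merged) {
            if (haversine(r, m) < radius) {
                m.lat = (m.lat * m.reports + r.lat) / (m.reports + 1);
                m.lon = (m.lon * m.reports + r.lon) / (m.reports + 1);
                ++m.reports;
                grouped = true;
                break;
            }
        }
        if (!grouped) merged.push_back(r);
    }
    return merged;
}
```

A warning would then be triggered whenever the distance from the current GPS position to a consolidated point with enough reports drops below an alert radius.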

Relevance:

20.00%

Publisher:

Abstract:

The seismic method is of extreme importance in geophysics. Mainly associated with oil exploration, this line of research attracts most of the investment in the area. Acquisition, processing and interpretation of seismic data are the stages that make up a seismic study. Seismic processing in particular is focused on producing images that represent the geological structures in the subsurface. It has evolved significantly in recent decades due to the demands of the oil industry and to technological advances in hardware, whose greater storage and digital processing capabilities enabled the development of more sophisticated processing algorithms, such as those that use parallel architectures. One of the most important steps in seismic processing is imaging. Migration of seismic data is one of the techniques used for imaging, with the goal of obtaining a seismic section image that represents the geological structures as accurately and faithfully as possible. The result of migration is a 2D or 3D image in which it is possible to identify faults, salt domes and other structures of interest, such as potential hydrocarbon reservoirs. However, performing a migration with quality and accuracy can be very time-consuming, due to the heuristics of the mathematical algorithms and the extensive amount of input and output data involved; it may take days, weeks or even months of uninterrupted execution on supercomputers, representing computational and financial costs that could derail the adoption of these methods. Aiming at performance improvement, this work parallelized the core of a Reverse Time Migration (RTM) algorithm using the Open Multi-Processing (OpenMP) parallel programming model, given the large computational effort required by this migration technique. Furthermore, speedup and efficiency analyses were performed and, finally, the degree of algorithmic scalability was identified with respect to the technological advances expected in future processors.
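
The computational core that such an RTM parallelization targets is the time-stepped wave propagation. Below is a minimal sketch of one time step of a 2D acoustic wave update parallelized with OpenMP; the grid layout, variable names and second-order stencil are illustrative assumptions, not the dissertation's actual kernel.

```cpp
#include <omp.h>
#include <vector>

// One explicit time step: next = 2*cur - prev + v^2 * dt^2/h^2 * laplacian(cur).
void wave_step(std::vector<float>& next, const std::vector<float>& cur,
               const std::vector<float>& prev, const std::vector<float>& vel2,
               int nx, int nz, float dt2_over_h2) {
    // Each grid point depends only on the previous time levels, so the
    // spatial loops parallelize without data races.
    #pragma omp parallel for collapse(2) schedule(static)
    for (int ix = 1; ix < nx - 1; ++ix) {
        for (int iz = 1; iz < nz - 1; ++iz) {
            int i = ix * nz + iz;
            float lap = cur[i - nz] + cur[i + nz] + cur[i - 1] + cur[i + 1]
                        - 4.0f * cur[i];
            next[i] = 2.0f * cur[i] - prev[i] + vel2[i] * dt2_over_h2 * lap;
        }
    }
}
```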

Relevance:

20.00%

Publisher:

Abstract:

This work presents a scalable and efficient parallel implementation of the standard Simplex algorithm on multicore architectures for solving large-scale linear programming problems. We present a general scheme explaining how each step of the standard Simplex algorithm was parallelized, indicating some important points of the parallel implementation. Performance analyses were conducted by comparing against the sequential time of the Simplex tableau and against IBM CPLEX®. The experiments were executed on a shared-memory machine with 24 cores. The scalability analysis was performed with problems of different dimensions, finding evidence that our parallel standard Simplex algorithm has better parallel efficiency for problems with more variables than constraints. In comparison with CPLEX®, the proposed parallel algorithm achieved an efficiency up to 16 times better.
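
The step that dominates each standard Simplex iteration is the pivot (elimination) update of the tableau, and it is the natural target for parallelization. The sketch below shows one hedged way to parallelize it with OpenMP; the row-major tableau layout and function names are assumptions, not the paper's actual scheme.

```cpp
#include <omp.h>
#include <vector>

// tableau T is (m+1) x (n+1), row-major; pr/pc are the pivot row and column.
void pivot(std::vector<double>& T, int m, int n, int pr, int pc) {
    int cols = n + 1;
    double p = T[pr * cols + pc];
    for (int j = 0; j <= n; ++j)
        T[pr * cols + j] /= p;                   // normalize the pivot row
    // Rows are updated independently of each other, so the elimination
    // loop parallelizes across rows without synchronization.
    #pragma omp parallel for schedule(static)
    for (int i = 0; i <= m; ++i) {
        if (i == pr) continue;
        double f = T[i * cols + pc];
        for (int j = 0; j <= n; ++j)
            T[i * cols + j] -= f * T[pr * cols + j];
    }
}
```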

Relevance:

20.00%

Publisher:

Abstract:

This work presents improvement strategies for a successful evolutionary metaheuristic for the Asymmetric Traveling Salesman Problem: a Memetic Algorithm designed specifically for this problem. The improvement basically applies the optimization techniques known as Path-Relinking and Vocabulary Building; the latter was used in two different ways, in order to evaluate the effects of the improvements on the evolutionary metaheuristic. These methods were implemented in C++, and the experiments were carried out on instances from the TSPLIB library, showing that the proposed procedures were successful in the tests performed.
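
To make the Path-Relinking idea concrete, here is a minimal C++ sketch: starting from one tour, walk toward a guiding tour one position at a time, keeping the best intermediate solution. The tour encoding, `cost` function and step rule are illustrative assumptions, not the dissertation's implementation.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

using Tour = std::vector<int>;

// Tour cost under an asymmetric distance matrix (d[i][j] may differ from d[j][i]).
double cost(const Tour& t, const std::vector<std::vector<double>>& d) {
    double c = 0.0;
    for (std::size_t i = 0; i < t.size(); ++i)
        c += d[t[i]][t[(i + 1) % t.size()]];
    return c;
}

// Move `cur` toward `guide` by fixing one position per step; return the
// best intermediate tour found along the relinking path.
Tour path_relink(Tour cur, const Tour& guide,
                 const std::vector<std::vector<double>>& d) {
    Tour best = cur;
    double best_c = cost(cur, d);
    for (std::size_t i = 0; i < cur.size(); ++i) {
        if (cur[i] == guide[i]) continue;
        auto it = std::find(cur.begin() + i, cur.end(), guide[i]);
        std::iter_swap(cur.begin() + i, it);     // place guide[i] at position i
        double c = cost(cur, d);
        if (c < best_c) { best_c = c; best = cur; }
    }
    return best;
}
```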

Relevance:

20.00%

Publisher:

Abstract:

This work proposes and evaluates a modification to Ant Colony Optimization based on the results of experiments performed on the Selective Ride Robot problem (PRS), a new problem also proposed in this work. Four metaheuristics are implemented, GRASP, VNS and two versions of Ant Colony Optimization, and their results are analyzed by running the algorithms over 32 instances created during this work. The metaheuristics also have their results compared to an exact approach. The results show that the algorithm implemented with the GRASP metaheuristic performs well, and that the multicolony version of the ant colony algorithm, proposed and evaluated in this work, achieves the best results.
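
For reference, the two ingredients any Ant Colony Optimization variant builds on are the probabilistic next-node choice and the pheromone update. The sketch below shows the standard single-colony versions only; the multicolony modification proposed in the work is not reproduced, and all names and parameter values are assumptions.

```cpp
#include <cmath>
#include <cstddef>
#include <random>
#include <vector>

// Pick the next node with probability proportional to
// pheromone^alpha * (1/distance)^beta over unvisited nodes.
// Assumes at least one unvisited node remains.
int choose_next(int cur, const std::vector<bool>& visited,
                const std::vector<std::vector<double>>& tau,
                const std::vector<std::vector<double>>& dist,
                std::mt19937& rng, double alpha = 1.0, double beta = 2.0) {
    std::vector<double> w(tau.size(), 0.0);
    for (std::size_t j = 0; j < tau.size(); ++j)
        if (!visited[j] && (int)j != cur)
            w[j] = std::pow(tau[cur][j], alpha) *
                   std::pow(1.0 / dist[cur][j], beta);
    std::discrete_distribution<int> pick(w.begin(), w.end());
    return pick(rng);
}

// Evaporate all trails, then deposit pheromone along a tour in
// proportion to its quality (cheaper tour => more pheromone).
void update_pheromone(std::vector<std::vector<double>>& tau,
                      const std::vector<int>& tour, double tour_cost,
                      double rho = 0.1) {
    for (auto& row : tau)
        for (double& t : row) t *= (1.0 - rho);
    for (std::size_t i = 0; i < tour.size(); ++i)
        tau[tour[i]][tour[(i + 1) % tour.size()]] += 1.0 / tour_cost;
}
```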

Relevance:

20.00%

Publisher:

Abstract:

This work addresses the optimization problem in high-dose-rate brachytherapy for the treatment of cancer patients, aiming to define the set of dwell times. The solution technique adopted was Computational Transgenetics supported by the L-BFGS method. The developed algorithm was employed to generate non-dominated solutions whose dose distributions are capable of eliminating the cancer while preserving the normal regions.
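
The abstract does not give the dose model, but the general shape of such an objective is well known: the dose at each control point is linear in the dwell times, and weighted deviations from the prescribed dose are penalized, yielding a smooth function that L-BFGS can minimize. The sketch below only illustrates that shape; the kernel matrix `K`, `target` and weights `w` are assumptions, not the dissertation's model.

```cpp
#include <cstddef>
#include <vector>

// dose_i = sum_j K[i][j] * t[j];   f(t) = sum_i w_i * (dose_i - target_i)^2
double objective(const std::vector<std::vector<double>>& K,
                 const std::vector<double>& t,        // dwell times
                 const std::vector<double>& target,   // prescribed doses
                 const std::vector<double>& w) {      // per-point weights
    double f = 0.0;
    for (std::size_t i = 0; i < K.size(); ++i) {
        double dose = 0.0;
        for (std::size_t j = 0; j < t.size(); ++j)
            dose += K[i][j] * t[j];                   // dose is linear in t
        double dev = dose - target[i];
        f += w[i] * dev * dev;                        // penalize deviation
    }
    return f;                                         // smooth, L-BFGS-friendly
}
```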

Relevance:

20.00%

Publisher:

Abstract:

Web services are computational solutions designed according to the principles of Service-Oriented Computing. Web services can be built upon pre-existing services available on the Internet by using composition languages. We propose a method to generate WS-BPEL processes from abstract specifications provided with high-level control-flow information. The proposed method allows the composition designer to concentrate on high-level specifications, in order to increase productivity and generate specifications that are independent of specific web services. We consider service orchestrations, that is, compositions where a central process coordinates all the operations of the application. The process of generating compositions is based on a rule-rewriting algorithm, which has been extended to support basic control-flow information. We created a prototype of the extended refinement method and performed experiments on simple case studies.
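
As a toy illustration of the refinement idea (not the method's actual rule set), the sketch below rewrites a high-level control-flow tree with sequence, flow and invoke nodes into WS-BPEL structured activities. The node kinds and the emitted markup are simplified assumptions.

```cpp
#include <memory>
#include <string>
#include <vector>

struct Node {
    std::string kind;                          // "seq", "flow" or "invoke"
    std::string op;                            // operation name for invokes
    std::vector<std::unique_ptr<Node>> kids;
};

// Recursively rewrite the abstract control-flow tree into BPEL markup:
// "seq" -> <sequence>, "flow" -> <flow>, "invoke" -> <invoke .../>.
std::string to_bpel(const Node& n) {
    if (n.kind == "invoke")
        return "<invoke operation=\"" + n.op + "\"/>";
    std::string tag = (n.kind == "seq") ? "sequence" : "flow";
    std::string out = "<" + tag + ">";
    for (const auto& k : n.kids) out += to_bpel(*k);
    return out + "</" + tag + ">";
}
```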

Relevance:

20.00%

Publisher:

Abstract:

Data clustering is applied in various fields, such as data mining, image processing and pattern recognition. Clustering algorithms split a data set into clusters such that elements within the same cluster have a high degree of similarity, while elements belonging to different clusters have a high degree of dissimilarity. The Fuzzy C-Means (FCM) algorithm is the fuzzy clustering algorithm most used and discussed in the literature. The performance of FCM is strongly affected by the selection of the initial cluster centers, so the choice of a good set of initial centers is very important for the performance of the algorithm. However, in FCM the choice of initial centers is made randomly, making it difficult to find a good set. This work proposes three new methods to obtain initial cluster centers deterministically for the FCM algorithm, which can also be used in variants of FCM; here, they were applied to the ckMeans variant. With the proposed methods, we intend to obtain a set of initial centers close to the real cluster centers, thereby reducing the number of iterations these algorithms need to converge and their processing time, without degrading, and in some cases even improving, the quality of the clustering. Accordingly, cluster validation indices were used to measure the quality of the clusters obtained by the modified FCM and ckMeans algorithms with the proposed initialization methods when applied to various data sets.
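
The abstract does not detail the three proposed methods, but the sketch below shows one plausible deterministic initialization in the same spirit: a max-min "farthest point" selection that spreads the initial centers over the data set instead of drawing them at random. Names and the choice of the first center are assumptions.

```cpp
#include <algorithm>
#include <cstddef>
#include <limits>
#include <vector>

using Pt = std::vector<double>;

double sqdist(const Pt& a, const Pt& b) {
    double s = 0.0;
    for (std::size_t k = 0; k < a.size(); ++k)
        s += (a[k] - b[k]) * (a[k] - b[k]);
    return s;
}

// Deterministic max-min initialization: repeatedly pick the data point
// farthest from all centers chosen so far. Assumes X is non-empty and
// c <= X.size().
std::vector<Pt> init_centers(const std::vector<Pt>& X, int c) {
    std::vector<Pt> centers{X[0]};               // deterministic first center
    while ((int)centers.size() < c) {
        std::size_t far = 0;
        double far_d = -1.0;
        for (std::size_t i = 0; i < X.size(); ++i) {
            double d = std::numeric_limits<double>::max();
            for (const Pt& m : centers) d = std::min(d, sqdist(X[i], m));
            if (d > far_d) { far_d = d; far = i; }
        }
        centers.push_back(X[far]);
    }
    return centers;
}
```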

Relevance:

20.00%

Publisher:

Abstract:

The Traveling Purchaser Problem is a variant of the Traveling Salesman Problem in which there is a set of markets and a set of products. Each product is available in a subset of the markets, and its unit cost depends on the market where it is available. The objective is to buy all the products, departing from and returning to a domicile, at the least possible cost, defined as the sum of the weights of the edges in the tour plus the cost paid to acquire the products. A Transgenetic Algorithm, an evolutionary algorithm based on endosymbiosis, is applied to the capacitated and uncapacitated versions of this problem. Evolution in Transgenetic Algorithms is simulated through the interaction and information sharing between populations of individuals from distinct species. The computational results show that this is a very effective approach to the TPP regarding solution quality and runtime. Seventeen and nine new best results are presented for instances of the capacitated and uncapacitated versions, respectively.
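
The two-part cost described above (tour edges plus cheapest purchases) is easy to state in code. The sketch below evaluates an uncapacitated TPP solution under assumed encodings: a route as a sequence of markets starting and ending at domicile 0, and a price matrix with infinity where a product is unavailable.

```cpp
#include <algorithm>
#include <cstddef>
#include <limits>
#include <vector>

// route: visited markets, starting and ending at the domicile (index 0).
// price[p][m]: unit cost of product p at market m (infinity if absent).
double tpp_cost(const std::vector<int>& route,
                const std::vector<std::vector<double>>& edge,
                const std::vector<std::vector<double>>& price) {
    double c = 0.0;
    for (std::size_t i = 0; i + 1 < route.size(); ++i)
        c += edge[route[i]][route[i + 1]];             // routing cost
    for (const auto& p : price) {                      // each product...
        double best = std::numeric_limits<double>::max();
        for (int m : route) best = std::min(best, p[m]);
        c += best;                                     // ...bought where cheapest
    }
    return c;
}
```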

Relevance:

20.00%

Publisher:

Abstract:

This dissertation describes the construction of an alternative didactic approach that incorporates a historical perspective and the use of the Roman abacus for teaching multiplication to 2nd-year elementary school students, through activities ranging from the representation of numbers to multiplying with the Roman abacus, leading to the learning of the multiplication algorithm. Qualitative research was adopted as the methodological approach, since the research object fits the goals of this research mode. Concerning the procedures, the research can be seen as a teaching experiment developed within the school environment. The instruments used for data collection were observation, a logbook, questionnaires, interviews and document analysis. The data collected through the activities were classified and quantified in tables, for ease of viewing, interpretation, understanding and analysis, and then transposed to charts. The analysis confirmed the research objectives and indicated the pedagogical value of the Roman abacus for teaching the multiplication algorithm through several activities. Thus, this educational product can be considered an important contribution to the teaching of this mathematical content in basic education, particularly regarding the multiplication process.

Relevance:

20.00%

Publisher:

Abstract:

The reverse time migration (RTM) algorithm has been widely used in the seismic industry to generate images of the subsurface and thus reduce the risks of oil and gas exploration. Its widespread use is due to the high quality of its subsurface imaging. RTM is also known for its high computational cost; therefore, parallel computing techniques have been used in its implementations. In general, parallel approaches to RTM use coarse granularity, distributing the processing of a subset of seismic shots among the nodes of a distributed system. Coarse-grained parallel approaches to RTM have been shown to be very efficient, since the processing of each seismic shot can be performed independently. For this reason, RTM performance can be considerably improved by additionally using a finer-grained parallel approach for the processing assigned to each node. This work presents an efficient fine-grained parallel algorithm for 3D reverse time migration using OpenMP. The 3D acoustic wave propagation algorithm makes up much of RTM, so different load-balancing strategies were analyzed in order to minimize possible parallel performance losses at this stage. The results served as a basis for the implementation of the other RTM phases: backpropagation and the imaging condition. The proposed algorithm was tested with synthetic data representing some of the possible subsurface structures. Metrics such as speedup and efficiency were used to analyze its parallel performance. The migrated sections show that the algorithm performed satisfactorily in identifying subsurface structures. As for the parallel performance, the analysis clearly demonstrates the scalability of the algorithm, which achieved a speedup of 22.46 for the wave propagation and 16.95 for the full RTM, both with 24 threads.
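
A hedged sketch of the fine-grained stage discussed above: the 3D acoustic wave update with OpenMP, where the `schedule` clause is the kind of load-balancing knob whose variants can be compared. The stencil order, grid layout and names are illustrative assumptions, not the dissertation's kernel.

```cpp
#include <omp.h>
#include <vector>

void wave3d_step(std::vector<float>& next, const std::vector<float>& cur,
                 const std::vector<float>& prev, const std::vector<float>& v2,
                 int nx, int ny, int nz, float dt2_over_h2) {
    // collapse(2) exposes nx*ny independent chunks of work;
    // schedule(dynamic) vs schedule(static) is one load-balancing choice.
    #pragma omp parallel for collapse(2) schedule(dynamic)
    for (int ix = 1; ix < nx - 1; ++ix)
        for (int iy = 1; iy < ny - 1; ++iy)
            for (int iz = 1; iz < nz - 1; ++iz) {
                long i = ((long)ix * ny + iy) * nz + iz;
                float lap = cur[i - 1] + cur[i + 1]              // z neighbors
                          + cur[i - nz] + cur[i + nz]            // y neighbors
                          + cur[i - (long)ny * nz]               // x neighbors
                          + cur[i + (long)ny * nz]
                          - 6.0f * cur[i];
                next[i] = 2.0f * cur[i] - prev[i] + v2[i] * dt2_over_h2 * lap;
            }
}
```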

Relevance:

20.00%

Publisher:

Abstract:

There are authentication models that use passwords, keys or personal identifiers (cards, tags, etc.) to authenticate a particular user in the authentication/identification process. Other systems can use biometric data, such as signature, fingerprint or voice, to authenticate an individual. On the other hand, the storage of biometric data brings risks, such as consistency and protection problems. It is therefore necessary to protect biometric databases to ensure the integrity and reliability of the system. For this purpose there are template-protection models for biometric identification, for example the Fuzzy Vault and Fuzzy Commitment schemes. These models are currently the most used for the protection of biometric data, but they have fragile elements in the protection process. Increasing the level of security of these methods, through changes in their structure or by inserting new layers of protection, is therefore one of the goals of this thesis. In other words, this work proposes the simultaneous use of encryption (the Papilio encryption algorithm) with template-protection models (Fuzzy Vault and Fuzzy Commitment) in biometric identification systems. The objective is to improve two aspects of biometric systems, security and accuracy, while maintaining a reasonable level of efficiency through the use of more elaborate classification structures known as committees. In short, we propose a model for safer biometric identification systems.
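
For orientation, the sketch below shows the standard Fuzzy Commitment construction the thesis builds on, not the combined Papilio scheme itself: a random codeword c is bound to the biometric template b as delta = b XOR c, and only a hash of c is stored; a fresh template close enough to b recovers c through error-correction decoding. `std::hash` stands in for a cryptographic hash, and `ecc_decode` for a real error-correcting decoder.

```cpp
#include <cstdint>
#include <cstddef>
#include <functional>
#include <string>
#include <vector>

struct Commitment {
    std::vector<uint8_t> delta;     // b XOR c: reveals neither b nor c alone
    std::size_t hash_of_codeword;   // placeholder for a cryptographic hash
};

Commitment commit(const std::vector<uint8_t>& biometric,
                  const std::vector<uint8_t>& codeword) {
    Commitment com;
    for (std::size_t i = 0; i < biometric.size(); ++i)
        com.delta.push_back(biometric[i] ^ codeword[i]);   // bind b to c
    std::string s(codeword.begin(), codeword.end());
    com.hash_of_codeword = std::hash<std::string>{}(s);
    return com;
}

// A fresh template within the code's error-correction radius of the
// enrolled one yields the same codeword, hence the same hash.
bool verify(const Commitment& com, const std::vector<uint8_t>& fresh,
            const std::function<std::vector<uint8_t>(
                std::vector<uint8_t>)>& ecc_decode) {
    std::vector<uint8_t> noisy;
    for (std::size_t i = 0; i < fresh.size(); ++i)
        noisy.push_back(fresh[i] ^ com.delta[i]);          // noisy codeword
    std::vector<uint8_t> c = ecc_decode(noisy);            // correct the noise
    std::string s(c.begin(), c.end());
    return std::hash<std::string>{}(s) == com.hash_of_codeword;
}
```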

Relevance:

20.00%

Publisher:

Abstract:

NAVSTAR/GPS (NAVigation System with Timing And Ranging / Global Positioning System), better known as GPS, is a satellite-based navigation system developed by the United States Department of Defense in the mid-1970s. Initially created for military purposes, GPS was later adapted for civilian use. To compute a position, the receiver must acquire the signals of the visible satellites. This step is extremely important, since it is responsible for detecting the visible satellites and computing their respective frequencies and initial phases. This process can demand considerable processing time and must be implemented efficiently. Several techniques are currently in use, but most of them trade off design concerns such as computational complexity, acquisition time and computational resources. Aiming to balance these concerns, a method was developed that reduces the complexity of the acquisition process through a few strategies, namely reducing the Doppler effect and the number of samples and signal length used, in addition to parallelism. The strategy is divided into two steps: a coarse search over the entire search space and a fine search restricted to the region identified by the first step. Because of the coarse search, the threshold of the conventional algorithm was no longer acceptable, so a new threshold was established based on the variance of the correlation peaks. First, a low-precision search is performed, comparing the variance of the five highest correlation peaks found. If the variance exceeds a certain threshold, the region of the highest peak becomes a detection candidate. Finally, this region is refined to confirm the detection. The results show a significant reduction in complexity and execution time, without resorting to overly complex algorithms.
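
The coarse-stage decision rule described above can be sketched directly: compare the variance of the five highest correlation values against a threshold to flag a candidate region. This is a simplified illustration; a real implementation would separate peaks (excluding samples adjacent to the main peak), and the threshold value is an assumption.

```cpp
#include <algorithm>
#include <functional>
#include <vector>

// Returns the index of the highest correlation value if the variance of
// the five largest values exceeds `threshold`, otherwise -1 (no candidate).
// Assumes corr.size() >= 5.
long candidate_region(const std::vector<double>& corr, double threshold) {
    long best = std::max_element(corr.begin(), corr.end()) - corr.begin();
    std::vector<double> peaks(corr);
    std::partial_sort(peaks.begin(), peaks.begin() + 5, peaks.end(),
                      std::greater<double>());          // five largest values
    double mean = 0.0;
    for (int i = 0; i < 5; ++i) mean += peaks[i] / 5.0;
    double var = 0.0;
    for (int i = 0; i < 5; ++i)
        var += (peaks[i] - mean) * (peaks[i] - mean) / 5.0;
    // A dominant peak spreads the top-5 values apart, raising the variance;
    // noise-only regions keep the top values close together.
    return var > threshold ? best : -1;
}
```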

Relevance:

20.00%

Publisher:

Abstract:

Recently, several evolutionary computing techniques have been used in areas such as parameter estimation of linear and nonlinear dynamic processes, including processes subject to uncertainties. This motivates the use of algorithms such as Particle Swarm Optimization (PSO) in these fields. However, little is known about the convergence of this algorithm, and most analyses and studies have concentrated on experimental results. The goal of this work is therefore to propose a new structure for PSO that allows a better analytical study of the algorithm's convergence. To that end, PSO is restructured into a matrix form and reformulated as a piecewise linear system. The pieces are analyzed separately, and the insertion of a forgetting factor is proposed to guarantee that the most significant part of this system has eigenvalues inside the unit circle. The convergence of the algorithm as a whole is also analyzed, using an almost-sure convergence criterion applicable to switched systems. Next, experimental tests are carried out to verify the behavior of the eigenvalues after the insertion of the forgetting factor. Subsequently, traditional parameter identification algorithms are combined with the matrix PSO, so that the identification results are as good as or better than identification using only PSO or only the traditional algorithms. The results show the convergence of the particles within a bounded region, and that the functions obtained after combining the matrix PSO with the conventional algorithms generalize better for the system under study. The conclusion is that the hybridization, despite limiting the search for a fitter particle in PSO, guarantees a minimum performance for the algorithm and still improves the result obtained with the traditional algorithms, allowing the approximate system to be represented over larger frequency ranges.
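
For context, here is a hedged sketch of a standard PSO velocity/position update with a forgetting factor lambda damping the velocity term, in the spirit of the contraction argument described above. The thesis's exact matrix formulation is not reproduced; all names and constants are assumptions.

```cpp
#include <cstddef>
#include <random>
#include <vector>

struct Particle {
    std::vector<double> x, v, pbest;  // position, velocity, personal best
    double pbest_f;                   // fitness of the personal best
};

// One PSO update step. With lambda < 1, the velocity's own dynamics are
// contractive, which is the role the forgetting factor plays in keeping
// the dominant eigenvalues inside the unit circle.
void pso_step(Particle& p, const std::vector<double>& gbest,
              std::mt19937& rng,
              double lambda = 0.7,    // forgetting factor on velocity
              double c1 = 1.5, double c2 = 1.5) {
    std::uniform_real_distribution<double> U(0.0, 1.0);
    for (std::size_t k = 0; k < p.x.size(); ++k) {
        p.v[k] = lambda * p.v[k]                        // damped memory
               + c1 * U(rng) * (p.pbest[k] - p.x[k])    // cognitive pull
               + c2 * U(rng) * (gbest[k] - p.x[k]);     // social pull
        p.x[k] += p.v[k];
    }
}
```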

Relevance:

20.00%

Publisher:

Abstract:

This work proposes a new autonomous navigation strategy for terrestrial mobile robots, assisted by a genetic algorithm with dynamic planning, called DPNA-GA (Dynamic Planning Navigation Algorithm optimized with Genetic Algorithm). The strategy was applied in both static and dynamic environments in which the location and shape of the obstacles are not known in advance. On each displacement event, a control algorithm minimizes the distance between the robot and the target while maximizing the distance from the obstacles, rescheduling the route. Using a spatial location sensor and a set of distance sensors, the proposed navigation strategy is able to dynamically plan optimal collision-free paths. Simulations performed in different environments demonstrated that the technique provides a high degree of flexibility and robustness. To this end, several variations of the genetic parameters were applied, such as crossover rate and population size, among others. Finally, the simulation results successfully demonstrate the effectiveness and robustness of the DPNA-GA technique, validating it for real applications in terrestrial mobile robots.
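
A minimal sketch of a fitness function in the spirit of DPNA-GA's stated objective, minimizing distance to the target while maximizing clearance from obstacles; the weights, names and linear combination are illustrative assumptions, not the paper's actual formulation.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct P2 { double x, y; };

double dist(P2 a, P2 b) { return std::hypot(a.x - b.x, a.y - b.y); }

// Higher is better: rewards being close to the target and far from the
// nearest obstacle. A GA would evaluate candidate waypoints with this.
double fitness(P2 robot, P2 target, const std::vector<P2>& obstacles,
               double w_goal = 1.0, double w_obs = 0.5) {  // assumed weights
    double clearance = 1e9;
    for (const P2& o : obstacles)
        clearance = std::min(clearance, dist(robot, o));
    return -w_goal * dist(robot, target) + w_obs * clearance;
}
```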