948 results for Neural algorithm
Abstract:
This paper presents a genetic algorithm for the Resource Constrained Project Scheduling Problem (RCPSP). The chromosome representation of the problem is based on random keys. The schedule is constructed using a heuristic priority rule in which the priorities of the activities are defined by the genetic algorithm. The heuristic generates parameterized active schedules. The approach was tested on a set of standard problems taken from the literature and compared with other approaches. The computational results validate the effectiveness of the proposed algorithm.
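The random-keys idea can be sketched in a few lines of Python. The toy precedence graph and the plain serial decoding below are illustrative stand-ins; the paper's heuristic additionally builds parameterized active schedules, with timing decisions omitted here.

```python
# Illustrative sketch of random-key decoding for project scheduling.
# The precedence graph and the decoding scheme are simplified stand-ins,
# not the paper's exact parameterized heuristic.
import random

def decode(keys, predecessors):
    """Order activities by their random-key priority, respecting precedence."""
    n = len(keys)
    scheduled, order = set(), []
    while len(order) < n:
        # activities whose predecessors are all scheduled are eligible
        eligible = [a for a in range(n)
                    if a not in scheduled and predecessors[a] <= scheduled]
        nxt = max(eligible, key=lambda a: keys[a])  # highest key = highest priority
        order.append(nxt)
        scheduled.add(nxt)
    return order

# toy instance: activity 2 needs 0 and 1; activity 3 needs 2
preds = {0: set(), 1: set(), 2: {0, 1}, 3: {2}}
chromosome = [random.random() for _ in range(4)]  # the GA evolves these keys
print(decode(chromosome, preds))
```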
Abstract:
Master's degree in Medical Computation and Instrumentation
Abstract:
This work addresses signal propagation and fractional-order dynamics during the evolution of a genetic algorithm (GA). In order to investigate the phenomena involved in the evolution of the GA population, the mutation operator is exposed to excitation perturbations during some generations and the corresponding fitness variations are evaluated. Three distinct fitness functions are used to study their influence on the GA dynamics. The input and output signals are studied, revealing fractional-order dynamics characteristic of a system with long-term memory.
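As an illustration of the experimental procedure described, the sketch below pulses the mutation probability of a toy one-max GA for a few generations and records the best fitness per generation as the output signal. All parameters (population size, pulse window, rates) are invented for the example and unrelated to the paper's setup.

```python
# Minimal sketch of the perturbation experiment: the mutation probability
# is pulsed for a few generations and the best fitness is logged as the
# system's output signal. The GA itself is a toy one-max optimizer.
import random

def evolve(generations=60, pop_size=30, n_bits=20, pulse=range(20, 25)):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    fitness_signal = []
    for g in range(generations):
        p_mut = 0.2 if g in pulse else 0.02   # excitation perturbation on mutation
        pop.sort(key=sum, reverse=True)
        pop = pop[:pop_size // 2]             # truncation selection
        children = []
        for parent in pop:
            child = [1 - b if random.random() < p_mut else b for b in parent]
            children.append(child)
        pop += children
        fitness_signal.append(sum(pop[0]))    # output signal: best fitness
    return fitness_signal

print(evolve()[-5:])
```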
Abstract:
This work introduces the concepts associated with neural networks and their application to system control, in this case in the field of autonomous robotics. An AGV was used to experimentally test control through an artificial neural network. The great advantage of artificial neural networks is that they can be taught to behave as intended. Building on this characteristic, two approaches were taken in the implementation on the available AGV. The first approach taught the neural network to behave like the fuzzy logic controller implemented on the AGV when it was developed. The second approach taught the artificial neural network to operate from data collected from a simple remote control implemented on the AGV. Both approaches were first implemented and simulated in MATLAB before being deployed on the AGV. MATLAB is used to train the multilayer feedforward neural networks with the Levenberg-Marquardt backpropagation training algorithm. In the first approach, the artificial neural network was implemented in three stages, first in MATLAB, then in the C programming language on a computer, and finally on a PIC microcontroller on the AGV, which made it possible to compare the development of these techniques across different platforms. During the development of the second approach, an Android application was created to monitor and control the AGV remotely. The results obtained with the neural networks derived from the fuzzy controller and from the remote control were satisfactory, since in both cases the AGV followed the test routes correctly. Finally, it was concluded that applying neural networks to the control of an AGV is viable. Moreover, the system developed can be used to implement and test new ANNs.
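The imitation-learning idea behind both approaches, recording input/output pairs from an existing controller and fitting a network to them, can be sketched in Python. The reference_controller below is a made-up stand-in for the AGV's fuzzy or remote controller, and scikit-learn's MLPRegressor replaces the MATLAB training (scikit-learn does not provide Levenberg-Marquardt, so the L-BFGS solver is used instead).

```python
# Sketch of the "teach a network to imitate an existing controller" idea.
# reference_controller is a hypothetical stand-in for the AGV's fuzzy or
# remote controller; L-BFGS substitutes for Levenberg-Marquardt here.
import numpy as np
from sklearn.neural_network import MLPRegressor

def reference_controller(left, right):
    # invented rule: steer away from the closer obstacle
    return np.tanh(right - left)

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(2000, 2))    # two distance sensors
y = reference_controller(X[:, 0], X[:, 1])   # recorded control action

net = MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs", max_iter=2000)
net.fit(X, y)
print("imitation error:", np.abs(net.predict(X) - y).mean())
```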
Abstract:
Dissertation presented as a partial requirement for obtaining the degree of Master in Geographic Information Science and Systems
Abstract:
IEEE International Symposium on Circuits and Systems, pp. 724–727, Seattle, USA
Abstract:
The computations performed by the brain ultimately rely on the functional connectivity between neurons embedded in complex networks. It is well known that the neuronal connections, the synapses, are plastic, i.e., the contribution of each presynaptic neuron to the firing of a postsynaptic neuron can be independently adjusted. The modulation of effective synaptic strength can occur on time scales that range from tens or hundreds of milliseconds, to tens of minutes or hours, to days, and may involve pre- and/or post-synaptic modifications. These mechanisms are generally believed to underlie learning and memory and, hence, it is fundamental to understand their consequences for the behavior of neurons. (...)
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain the degree of Master in Informatics Engineering
Abstract:
Recent integrated circuit technologies have opened the possibility of designing parallel architectures with hundreds of cores on a single chip. The design space of these parallel architectures is huge, with many architectural options. Exploring the design space gets even more difficult if, beyond performance and area, we also consider extra metrics like performance and area efficiency, where the designer aims for the best performance per chip area and the best sustainable performance. In this paper we present an algorithm-oriented approach to designing a many-core architecture. Instead of exploring the design space of the many-core architecture based on the experimental execution results of a particular benchmark of algorithms, our approach is to make a formal analysis of the algorithms considering the main architectural aspects and to determine how each particular architectural aspect is related to the performance of the architecture when running an algorithm or set of algorithms. The architectural aspects considered include the number of cores, the local memory available in each core, the communication bandwidth between the many-core architecture and the external memory, and the memory hierarchy. To exemplify the approach, we carried out a theoretical analysis of a dense matrix multiplication algorithm and determined an equation that relates the number of execution cycles to the architectural parameters. Based on this equation, a many-core architecture has been designed. The results obtained indicate that a 100 mm² integrated circuit implementing the proposed architecture in a 65 nm technology is able to achieve 464 GFLOPs (double-precision floating point) for a memory bandwidth of 16 GB/s. This corresponds to a performance efficiency of 71%. In a 45 nm technology, a 100 mm² chip attains 833 GFLOPs, which corresponds to 84% of peak performance. These figures are better than those obtained by previous many-core architectures, except for the area efficiency, which is limited by the lower memory bandwidth considered. The results achieved are also better than those of previous state-of-the-art many-core architectures designed specifically to achieve high performance for matrix multiplication.
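For a flavor of the kind of analysis described, the sketch below evaluates a generic roofline-style cycle estimate for blocked dense matrix multiplication: execution time is bounded either by the aggregate compute throughput of the cores or by traffic to external memory. This formula is a textbook-style approximation, not the paper's actual equation, and all parameter values are hypothetical.

```python
# Back-of-the-envelope model in the spirit of the analysis above, NOT the
# paper's equation: time (in cycles) is the larger of the compute time
# (2*n^3 flops spread over the cores) and the external-memory time (each
# block-sized tile of C causes ~2*n*block words of A/B traffic).
def matmul_cycles(n, cores, flops_per_cycle, words_per_cycle, block):
    flops = 2.0 * n ** 3
    compute_time = flops / (cores * flops_per_cycle)
    tiles = (n / block) ** 2
    words_moved = tiles * (2.0 * n * block) + n * n   # A/B streams + C writeback
    memory_time = words_moved / words_per_cycle
    return max(compute_time, memory_time)             # whichever resource saturates

# hypothetical chip: 64 cores, 8 flops/cycle/core, 2 words/cycle off-chip
print(matmul_cycles(n=4096, cores=64, flops_per_cycle=8,
                    words_per_cycle=2, block=256))
```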
Abstract:
An adaptive antenna array combines the signals of its elements, subject to a set of constraints, to produce the radiation pattern of the antenna while maximizing the performance of the system. Direction of arrival (DOA) algorithms are applied to determine the directions of impinging signals, whereas beamforming techniques are employed to determine the appropriate weights for the array elements that create the desired pattern. In this paper, a detailed analysis of both categories of algorithms is made for a planar antenna array. Several simulation results show that it is possible to point an antenna array in a desired direction based on the DOA estimation and on the beamforming algorithms. The algorithms used are also compared in terms of runtime and accuracy; both characteristics depend on the SNR of the incoming signal.
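A minimal numerical example of the DOA/beamforming interplay: a delay-and-sum beamformer is steered over candidate angles and its output power peaks near the true arrival direction. The sketch uses a uniform linear array to stay short, whereas the paper considers a planar array, and the SNR and geometry are arbitrary choices.

```python
# Toy direction-of-arrival scan for a uniform LINEAR array (the paper
# uses a planar array; 1-D keeps the sketch short). The beamformer
# output power peaks near the true arrival direction.
import numpy as np

n_elems, d = 8, 0.5                 # elements, spacing in wavelengths
true_doa = np.deg2rad(25.0)
rng = np.random.default_rng(1)

def steering(theta):
    return np.exp(2j * np.pi * d * np.arange(n_elems) * np.sin(theta))

# snapshots of one narrowband source plus noise
s = rng.standard_normal(200) + 1j * rng.standard_normal(200)
X = np.outer(steering(true_doa), s)
X += 0.1 * (rng.standard_normal(X.shape) + 1j * rng.standard_normal(X.shape))

angles = np.deg2rad(np.linspace(-90, 90, 361))
power = [np.mean(np.abs(steering(a).conj() @ X) ** 2) for a in angles]
print("estimated DOA:", np.rad2deg(angles[int(np.argmax(power))]), "deg")
```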
Abstract:
Predicting the time and the efficiency of the remediation of contaminated soils using soil vapor extraction remains a difficult challenge for the scientific community and consultants. This work reports the development of multiple linear regression and artificial neural network models to predict the remediation time and efficiency of soil vapor extractions performed in soils contaminated separately with benzene, toluene, ethylbenzene, xylene, trichloroethylene, and perchloroethylene. The results demonstrated that the artificial neural network approach performs better than the multiple linear regression models. The artificial neural network model allowed an accurate prediction of remediation time and efficiency based only on soil and pollutant characteristics, thus allowing a simple and quick preliminary evaluation of the process viability.
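Schematically, the comparison between the two model families can look like the Python sketch below. The features and the data-generating function are invented placeholders (the paper's models were fitted to real soil vapor extraction experiments); the nonlinear target simply makes the expected outcome, the network outperforming the linear model, visible.

```python
# Schematic MLR vs. ANN comparison on synthetic data; feature names and
# the generating function are invented placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 3))      # e.g. permeability, water content, vapor pressure
y = 10 * X[:, 0] * np.exp(-2 * X[:, 1]) + X[:, 2] + rng.normal(0, 0.1, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for model in (LinearRegression(),
              MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, "R2:", round(model.score(X_te, y_te), 3))
```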
Abstract:
The container loading problem (CLP) is a combinatorial optimization problem for the spatial arrangement of cargo inside containers so as to maximize the usage of space. The algorithms for this problem are of limited practical applicability if real-world constraints are not considered, one of the most important of which is deemed to be stability. This paper addresses static stability, as opposed to dynamic stability, looking at the stability of the cargo during container loading. This paper proposes two algorithms. The first is a static stability algorithm based on static mechanical equilibrium conditions that can be used as a stability evaluation function embedded in CLP algorithms (e.g. constructive heuristics, metaheuristics). The second proposed algorithm is a physical packing sequence algorithm that, given a container loading arrangement, generates the actual sequence by which each box is placed inside the container, considering static stability and loading operation efficiency constraints.
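A drastically simplified version of a static stability test: the sketch replaces the paper's static mechanical equilibrium conditions with a common geometric proxy, judging a box stable if the centre of its base lies inside the rectangle spanned by its supporting boxes. It is only meant to illustrate how such a check can plug into a CLP heuristic as an evaluation function.

```python
# Simplified stability test: a box is judged stable when the centre of
# its base lies inside the bounding rectangle of its supports. The
# paper's algorithm checks real static mechanical equilibrium; this
# geometric proxy only illustrates the evaluation-function role.
def is_statically_stable(box, supports):
    """box/supports: dicts with corner (x, y) and footprint (w, d)."""
    if not supports:                # box resting directly on the floor
        return True
    cx = box["x"] + box["w"] / 2
    cy = box["y"] + box["d"] / 2
    min_x = min(s["x"] for s in supports)
    max_x = max(s["x"] + s["w"] for s in supports)
    min_y = min(s["y"] for s in supports)
    max_y = max(s["y"] + s["d"] for s in supports)
    return min_x <= cx <= max_x and min_y <= cy <= max_y

box = {"x": 2, "y": 0, "w": 4, "d": 2}
support = {"x": 0, "y": 0, "w": 3, "d": 2}
print(is_statically_stable(box, [support]))   # False: the box overhangs
```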
Abstract:
Dissertation for the degree of Master in Electrical Engineering, Energy branch
Abstract:
“Many-core” systems based on a Network-on-Chip (NoC) architecture offer various opportunities in terms of performance and computing capabilities, but at the same time they pose many challenges for the deployment of real-time systems, which must fulfill specific timing requirements at runtime. It is therefore essential to identify, at design time, the parameters that have an impact on the execution time of the tasks deployed on these systems, and the upper bounds on the other key parameters. The focus of this work is to determine an upper bound on the traversal time of a packet when it is transmitted over the NoC infrastructure. Towards this aim, we first identify and explore some limitations in the existing recursive-calculus-based approaches to computing the Worst-Case Traversal Time (WCTT) of a packet. Then, we extend the existing model by integrating the characteristics of the tasks that generate the packets. For this extended model, we propose an algorithm called “Branch and Prune” (BP). Our proposed method provides tighter and safe estimates than the existing recursive-calculus-based approaches. Finally, we introduce a more general approach, namely “Branch, Prune and Collapse” (BPC), which offers a configurable parameter providing a flexible trade-off between the computational complexity and the tightness of the computed estimate. The recursive-calculus methods and BP are two special cases of BPC, obtained when the trade-off parameter is set to 1 or ∞, respectively. Through simulations, we analyze this trade-off, reason about the implications of certain choices, and also provide some case studies to observe the impact of task parameters on the WCTT estimates.
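For context, the sketch below implements the kind of recursive-calculus WCTT bound that the paper takes as its starting point, in a deliberately simplified form: each flow has a base latency C, a period T, and a set of links, and suffers interference from directly contending higher-priority flows, computed as a fixed point. Indirect interference and the BP/BPC refinements are not modeled.

```python
# Simplified recursive-calculus WCTT bound: only direct interference from
# higher-priority flows sharing a link is counted, via fixed-point
# iteration. The paper's BP/BPC refinements are not shown here.
import math

def wctt(i, flows):
    """flows: list of dicts with keys C, T, links; sorted by priority."""
    C, links = flows[i]["C"], flows[i]["links"]
    hp = [f for f in flows[:i] if f["links"] & links]  # direct contenders
    r = C
    while True:
        r_next = C + sum(math.ceil(r / f["T"]) * f["C"] for f in hp)
        if r_next == r:          # fixed point reached (assumes low load)
            return r
        r = r_next

flows = [{"C": 4, "T": 20, "links": {("a", "b")}},            # highest priority
         {"C": 6, "T": 50, "links": {("a", "b"), ("b", "c")}}]
print(wctt(1, flows))   # bound for the lower-priority flow
```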
Abstract:
This paper presents a new parallel implementation of a previously proposed hyperspectral coded aperture (HYCA) algorithm for compressive sensing on graphics processing units (GPUs). The HYCA method combines the ideas of spectral unmixing and compressive sensing, exploiting the high spatial correlation that can be observed in the data and the generally low number of endmembers needed to explain the data. The proposed implementation exploits the GPU architecture at a low level, taking full advantage of the computational power of GPUs through shared memory and coalesced memory accesses. The proposed algorithm is evaluated not only in terms of reconstruction error but also in terms of computational performance, using two different GPU architectures from NVIDIA: the GeForce GTX 590 and the GeForce GTX TITAN. Experimental results using real data reveal significant speedups with regard to the serial implementation.
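The two low-level techniques named above, shared memory and coalesced accesses, can be illustrated with a tiny reduction kernel written in Numba's CUDA dialect (the actual HYCA implementation is native CUDA and far more involved): consecutive threads load consecutive addresses into a shared-memory tile, then reduce it cooperatively.

```python
# Tiny illustration of shared memory + coalesced loads, unrelated to the
# HYCA kernels themselves: each block sums a tile of the input.
import numpy as np
from numba import cuda, float32

TPB = 128  # threads per block

@cuda.jit
def block_sum(x, partial):
    tile = cuda.shared.array(TPB, dtype=float32)  # per-block scratch tile
    tid = cuda.threadIdx.x
    i = cuda.grid(1)              # consecutive threads read consecutive
    if i < x.shape[0]:            # addresses: a coalesced global load
        tile[tid] = x[i]
    else:
        tile[tid] = 0.0
    cuda.syncthreads()
    stride = TPB // 2
    while stride > 0:             # tree reduction inside shared memory
        if tid < stride:
            tile[tid] += tile[tid + stride]
        cuda.syncthreads()
        stride //= 2
    if tid == 0:
        partial[cuda.blockIdx.x] = tile[0]

x = np.ones(10_000, dtype=np.float32)
blocks = (x.size + TPB - 1) // TPB
partial = np.zeros(blocks, dtype=np.float32)
block_sum[blocks, TPB](x, partial)   # Numba copies the arrays to/from the GPU
print(partial.sum())                 # 10000.0
```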