983 results for "decision algorithm"


Relevance: 20.00%

Abstract:

Recent integrated circuit technologies have opened the possibility of designing parallel architectures with hundreds of cores on a single chip. The design space of these parallel architectures is huge, with many architectural options. Exploring the design space gets even more difficult if, beyond performance and area, we also consider metrics like performance and area efficiency, where the designer tries to obtain the best performance per chip area and the best sustainable performance. In this paper we present an algorithm-oriented approach to designing a many-core architecture. Instead of exploring the design space of the many-core architecture based on the experimental execution results of a particular benchmark of algorithms, our approach is to make a formal analysis of the algorithms considering the main architectural aspects, and to determine how each particular architectural aspect is related to the performance of the architecture when running an algorithm or set of algorithms. The architectural aspects considered include the number of cores, the local memory available in each core, the communication bandwidth between the many-core architecture and external memory, and the memory hierarchy. To exemplify the approach, we carried out a theoretical analysis of a dense matrix multiplication algorithm and determined an equation that relates the number of execution cycles to the architectural parameters. Based on this equation, a many-core architecture has been designed. The results obtained indicate that a 100 mm² integrated circuit design of the proposed architecture, using a 65 nm technology, is able to achieve 464 GFLOPS (double-precision floating point) for a memory bandwidth of 16 GB/s. This corresponds to a performance efficiency of 71%. Considering a 45 nm technology, a 100 mm² chip attains 833 GFLOPS, which corresponds to 84% of peak performance. These figures are better than those obtained by previous many-core architectures, except for the area efficiency, which is limited by the lower memory bandwidth considered. The results achieved are also better than those of previous state-of-the-art many-core architectures designed specifically to achieve high performance for matrix multiplication.
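
The equation itself is not reproduced in the abstract. As a purely illustrative sketch, a roofline-style cycle model for blocked dense n×n matrix multiplication often takes the form below; the symbols P (cores), F (flops per core per cycle), M (per-core local memory in words) and B (external-memory bandwidth in words per cycle) are assumptions of this sketch, not the authors' notation:

```latex
% Illustrative roofline-style bound, not the paper's derived equation.
% Compute term: 2n^3 flops shared by P cores at F flops/cycle each.
% Memory term: blocked matmul with local memory M moves Theta(n^3/sqrt(M))
% words over a channel sustaining B words/cycle (Hong-Kung bound); c is a
% small constant depending on the blocking scheme.
\[
  T_{\text{cycles}} \approx \max\!\left(
      \frac{2 n^{3}}{P F},\;
      \frac{c\, n^{3}}{\sqrt{M}\, B}
  \right)
\]
```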

Relevance: 20.00%

Abstract:

An adaptive antenna array combines the signals of its elements, using some constraints to produce the radiation pattern of the antenna while maximizing the performance of the system. Direction-of-arrival (DOA) algorithms are applied to determine the directions of impinging signals, whereas beamforming techniques are employed to determine the appropriate weights for the array elements so as to create the desired pattern. In this paper, a detailed analysis of both categories of algorithms is made for the case of a planar antenna array. Several simulation results show that it is possible to point an antenna array in a desired direction based on the DOA estimation and on the beamforming algorithms. A comparison of the algorithms' runtime and accuracy is also made; both characteristics depend on the SNR of the incoming signal.
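
As a minimal sketch of the beamforming side, the snippet below steers a uniform planar array toward a chosen direction with conventional (delay-and-sum) weights; the array size, element spacing and signal model are assumptions of this example, not the configuration evaluated in the paper:

```python
# Hedged sketch: conventional (delay-and-sum) beamforming for a uniform
# planar array. Geometry and parameters are illustrative assumptions.
import numpy as np

def steering_vector(nx, ny, d, wavelength, theta, phi):
    """Steering vector of an nx-by-ny planar array with element spacing d.

    theta: elevation from broadside (rad), phi: azimuth (rad).
    """
    k = 2 * np.pi / wavelength
    ix, iy = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
    # Phase of each element for a plane wave arriving from (theta, phi).
    phase = k * d * np.sin(theta) * (ix * np.cos(phi) + iy * np.sin(phi))
    return np.exp(1j * phase).ravel() / np.sqrt(nx * ny)

# Point an 8x8 half-wavelength-spaced array toward (30 deg, 45 deg).
wl = 1.0
w = steering_vector(8, 8, wl / 2, wl, np.deg2rad(30), np.deg2rad(45))
# The normalized response toward the look direction is maximal: w^H w = 1.
print(abs(np.vdot(w, w)))  # -> 1.0
```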

Relevance: 20.00%

Abstract:

Demand response can play a very relevant role in the context of power systems with intensive use of distributed energy resources, of which renewable intermittent sources are a significant part. More active consumer participation can help improve system reliability and decrease or defer the required investments. Adequate use and management of demand response is even more important in competitive electricity markets. However, experience shows that it is difficult to use demand response adequately in this context, which points to the need for research in this area. The most important difficulties seem to be caused by inadequate business models and by inadequate management of demand response programs. This paper contributes to developing methodologies and a computational infrastructure able to provide the involved players with adequate decision support on the design and use of demand response programs and contracts. The presented work uses DemSi, a demand response simulator developed by the authors to simulate demand response actions and programs, which includes realistic power system simulation. It includes an optimization module for the application of demand response programs and contracts using deterministic and metaheuristic approaches. The proposed methodology is an important improvement of the simulator, providing adequate tools for the adoption of demand response programs by the involved players. A machine learning method based on clustering and classification techniques, resulting in a rule base concerning the use of DR programs and contracts, is also used. A case study concerning the use of demand response in an incident situation is presented.
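
As a hedged illustration of the cluster-then-classify idea behind such a rule base (not DemSi's actual module; the features, labels and data below are invented for the example):

```python
# Hedged sketch of a cluster-then-classify pipeline of the kind the abstract
# describes (clustering plus classification yielding readable rules).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
# Illustrative features per DR event: [load reduction (MW), duration (h),
# price (EUR/MWh)] -- invented, not DemSi data.
X = rng.random((200, 3)) * [50.0, 4.0, 120.0]

# Step 1: cluster events into groups of similar DR situations.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Step 2: fit a shallow tree so each cluster is described by readable rules.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, labels)
print(export_text(tree, feature_names=["reduction_mw", "duration_h", "price"]))
```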

Relevance: 20.00%

Abstract:

The container loading problem (CLP) is a combinatorial optimization problem concerning the spatial arrangement of cargo inside containers so as to maximize the usage of space. Algorithms for this problem are of limited practical applicability if real-world constraints are not considered, one of the most important of which is deemed to be stability. This paper addresses static stability, as opposed to dynamic stability, focusing on the stability of the cargo during container loading operations. The paper proposes two algorithms. The first is a static stability algorithm based on static mechanical equilibrium conditions that can be used as a stability evaluation function embedded in CLP algorithms (e.g., constructive heuristics, metaheuristics). The second is a physical packing sequence algorithm that, given a container loading arrangement, generates the actual sequence by which each box is placed inside the container, considering static stability and loading operation efficiency constraints.
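
The paper's equilibrium-based evaluation function is not detailed in the abstract; the sketch below implements the simpler full-base-support check often used as a static stability surrogate in CLP heuristics, with an assumed box representation:

```python
# Hedged sketch: the common "full base support" simplification of static
# stability, a stand-in for (not a reproduction of) the equilibrium-based
# evaluation the paper proposes. Box representation and tolerances assumed.
from dataclasses import dataclass

@dataclass
class Box:
    x: float; y: float; z: float      # min corner
    w: float; d: float; h: float      # width, depth, height

def fully_supported(box: Box, placed: list[Box], eps: float = 1e-9) -> bool:
    """True if the box rests on the floor or its base area is fully covered
    by the top faces of boxes directly beneath it."""
    if box.z <= eps:                  # resting on the container floor
        return True
    supported = 0.0
    for b in placed:
        if abs(b.z + b.h - box.z) > eps:
            continue                  # top face not at the box's base level
        ox = max(0.0, min(box.x + box.w, b.x + b.w) - max(box.x, b.x))
        oy = max(0.0, min(box.y + box.d, b.y + b.d) - max(box.y, b.y))
        supported += ox * oy          # overlap area contributed by this box
    return supported >= box.w * box.d - eps
```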

Relevance: 20.00%

Abstract:

Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa in fulfilment of the requirements for the degree of Master in Electrical and Computer Engineering.

Relevance: 20.00%

Abstract:

“Many-core” systems based on a Network-on-Chip (NoC) architecture offer various opportunities in terms of performance and computing capabilities, but at the same time they pose many challenges for the deployment of real-time systems, which must fulfill specific timing requirements at runtime. It is therefore essential to identify, at design time, the parameters that have an impact on the execution time of the tasks deployed on these systems, and upper bounds on the other key parameters. The focus of this work is to determine an upper bound on the traversal time of a packet when it is transmitted over the NoC infrastructure. Towards this aim, we first identify and explore some limitations of the existing recursive-calculus-based approaches to computing the Worst-Case Traversal Time (WCTT) of a packet. Then, we extend the existing model by integrating the characteristics of the tasks that generate the packets. For this extended model, we propose an algorithm called “Branch and Prune” (BP). Our proposed method provides tighter, yet safe, estimates than the existing recursive-calculus-based approaches. Finally, we introduce a more general approach, namely “Branch, Prune and Collapse” (BPC), which offers a configurable parameter that provides a flexible trade-off between computational complexity and the tightness of the computed estimate. The recursive-calculus methods and BP are special cases of BPC, obtained when the trade-off parameter is set to 1 or ∞, respectively. Through simulations, we analyze this trade-off, reason about the implications of certain choices, and provide some case studies to observe the impact of task parameters on the WCTT estimates.
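
As a loose illustration of the recursive-calculus starting point that the paper refines (not the paper's NoC model; topology, flow set and latency figures are invented), a flow's bound can be computed as its contention-free latency plus the bounds of the flows that may block it:

```python
# Hedged sketch of recursive-calculus WCTT bounding: a flow pays its own
# contention-free latency plus, pessimistically, the full worst case of each
# higher-priority flow it shares a link with. Purely illustrative.
from functools import lru_cache

# flow -> (contention-free traversal time, set of links used)
FLOWS = {
    "f1": (10, {"AB", "BC"}),
    "f2": (8,  {"BC", "CD"}),
    "f3": (6,  {"CD"}),
}
# Static priority order: lower number blocks higher.
PRIORITY = {"f1": 0, "f2": 1, "f3": 2}

@lru_cache(maxsize=None)
def wctt(flow: str) -> int:
    base, links = FLOWS[flow]
    # Direct interference: higher-priority flows sharing at least one link.
    blockers = [g for g in FLOWS
                if PRIORITY[g] < PRIORITY[flow] and FLOWS[g][1] & links]
    # Safe but pessimistic recursion -- the kind of looseness BP/BPC tighten.
    return base + sum(wctt(g) for g in blockers)

for f in FLOWS:
    print(f, wctt(f))
```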

Relevance: 20.00%

Abstract:

This paper presents a new parallel implementation of a previously developed hyperspectral coded aperture (HYCA) algorithm for compressive sensing on graphics processing units (GPUs). The HYCA method combines the ideas of spectral unmixing and compressive sensing, exploiting the high spatial correlation that can be observed in the data and the generally low number of endmembers needed to explain the data. The proposed implementation exploits the GPU architecture at a low level, thus taking full advantage of the computational power of GPUs through shared memory and coalesced memory accesses. The proposed algorithm is evaluated not only in terms of reconstruction error but also in terms of computational performance, using two different NVIDIA GPU architectures: the GeForce GTX 590 and the GeForce GTX TITAN. Experimental results using real data reveal significant speedups with regard to the serial implementation.
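
As a hedged sketch of the underlying model (not the GPU implementation), the snippet below simulates compressive measurements of a pixel that follows the linear mixing model and recovers the abundances in the compressed domain; the sizes, matrices and NNLS solver are assumptions of this example:

```python
# Hedged sketch of HYCA-style reconstruction: measure a pixel through a
# compressive operator, then recover abundances over a known endmember
# matrix. Illustrative only; not the paper's algorithm or data.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
L, p, m = 200, 5, 40          # bands, endmembers, compressive measurements

E = rng.random((L, p))        # endmember signatures (columns)
a = rng.dirichlet(np.ones(p)) # true abundances: nonnegative, sum to one
x = E @ a                     # noiseless pixel spectrum

H = rng.standard_normal((m, L)) / np.sqrt(m)   # measurement matrix
y = H @ x                                      # m << L measurements

# Recover abundances directly in the compressed domain: y ~= (H E) a, a >= 0.
a_hat, _ = nnls(H @ E, y)
a_hat /= a_hat.sum()          # re-impose the sum-to-one constraint
print(np.round(a, 3), np.round(a_hat, 3))
```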

Relevance: 20.00%

Abstract:

This paper presents a step count algorithm designed to work in real time using low computational power. This proposal is our first step in the development of an indoor navigation system based on Pedestrian Dead Reckoning (PDR). We present two approaches to solve this problem and compare them based on their step-counting error, as well as on their suitability for use in a real-time system.
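
A minimal sketch of one common low-cost approach, thresholded peak detection on the acceleration magnitude, is shown below; the abstract does not identify the two approaches compared, so the thresholds and timing here are assumptions:

```python
# Hedged sketch of a low-cost step counter: threshold-crossing peak
# detection on the accelerometer magnitude, cheap enough for real time.
import numpy as np

def count_steps(acc, fs=50.0, thresh=1.2, min_gap_s=0.3):
    """acc: (N, 3) accelerometer samples in g; returns detected step count."""
    mag = np.linalg.norm(acc, axis=1)
    min_gap = int(min_gap_s * fs)     # refractory period between steps
    steps, last = 0, -min_gap
    for i in range(1, len(mag) - 1):
        # A local maximum above the threshold, far enough from the last step.
        if (mag[i] > thresh and mag[i] >= mag[i - 1]
                and mag[i] >= mag[i + 1] and i - last >= min_gap):
            steps += 1
            last = i
    return steps
```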

Relevance: 20.00%

Abstract:

This paper presents an ankle-mounted Inertial Navigation System (INS) used to estimate the distance traveled by a pedestrian. This distance is estimated from the number of steps taken by the user. The proposed method uses force sensors to enhance the results obtained from the INS. Experimental results have shown that, depending on the step frequency, the traveled-distance error varies between 2.7% and 5.6%.
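
A minimal sketch of the force-sensor side of such a system, assuming a simple contact threshold and a fixed step length (the paper's INS/force-sensor fusion is richer than this):

```python
# Hedged sketch: use a foot force sensor to segment stance phases and
# estimate distance as steps x step length. Thresholds and the fixed step
# length are illustrative assumptions.
import numpy as np

def traveled_distance(force, contact_n=50.0, step_len_m=0.7):
    """force: 1-D force-sensor samples (N); returns estimated distance (m)."""
    in_contact = force > contact_n                 # stance loads the sensor
    # Count a step at each swing-to-stance transition (heel strike).
    strikes = np.flatnonzero(~in_contact[:-1] & in_contact[1:])
    return len(strikes) * step_len_m

force = np.array([0, 0, 80, 80, 0, 0, 90, 90, 0], dtype=float)  # toy trace
print(traveled_distance(force))  # -> 1.4 (two detected steps)
```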

Relevance: 20.00%

Abstract:

Multi-criteria decision analysis (MCDA) has been one of the fastest-growing areas of operations research during the last decades. The academic attention devoted to MCDA has motivated the development of a great variety of approaches and methods within the field. These methods distinguish themselves in terms of procedures, theoretical assumptions and the type of decision addressed. This diversity poses challenges to the process of selecting the most suitable method for a specific real-world decision problem. In this paper we present a case study of a real-world decision problem arising in the painting sector of an automobile plant. We tackle the problem by resorting to the well-known AHP method and to the MCDA method proposed by Pereira and Fontes (2012) (MMASSI). By relying on two MCDA methods, rather than one, we expect to improve the confidence and robustness of the obtained results. The contributions of this paper are twofold: first, we investigate the contrasts and similarities of the results obtained by distinct MCDA approaches (AHP and MMASSI); secondly, we enrich the literature of the field with a real-world MCDA case study on a complex decision-making problem, since there is a paucity of applied research addressing real decision problems faced by organizations.
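
As a brief illustration of the AHP step, the snippet below derives a priority vector from a pairwise comparison matrix via its principal eigenvector and computes the consistency ratio; the matrix is a made-up example, not data from the case study:

```python
# Standard AHP priority derivation (principal eigenvector) on an
# illustrative 3x3 reciprocal comparison matrix.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])      # reciprocal pairwise comparisons

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w /= w.sum()                          # priority vector (criterion weights)

n = A.shape[0]
ci = (vals[k].real - n) / (n - 1)     # consistency index
cr = ci / 0.58                        # CR using Saaty's random index for n = 3
print(np.round(w, 3), round(cr, 3))   # judgments acceptable when CR < 0.1
```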


Relevance: 20.00%

Abstract:

This paper introduces a new method to blindly unmix hyperspectral data, termed dependent component analysis (DECA). This method decomposes a hyperspectral image into a collection of reflectance (or radiance) spectra of the materials present in the scene (endmember signatures) and the corresponding abundance fractions at each pixel. DECA assumes that each pixel is a linear mixture of the endmember signatures weighted by the corresponding abundance fractions. These abundances are modeled as mixtures of Dirichlet densities, thus enforcing the constraints on abundance fractions imposed by the acquisition process, namely non-negativity and constant sum. The mixing matrix is inferred by a generalized expectation-maximization (GEM) type algorithm. This method overcomes the limitations of unmixing methods based on Independent Component Analysis (ICA) and on geometry-based approaches. The effectiveness of the proposed method is illustrated using simulated data based on U.S.G.S. laboratory spectra and real hyperspectral data collected by the AVIRIS sensor over Cuprite, Nevada.
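
In symbols, the assumed observation model can be written as follows (the notation is chosen here for illustration, consistent with the abstract's description):

```latex
% Linear mixture model assumed by DECA (notation mine):
% x_i -- observed spectrum of pixel i (L bands)
% M   -- L x p matrix of endmember signatures
% a_i -- abundance vector of pixel i, modeled as a Dirichlet mixture
% n_i -- acquisition noise
\[
  \mathbf{x}_i = \mathbf{M}\,\mathbf{a}_i + \mathbf{n}_i,
  \qquad
  \mathbf{a}_i \ge \mathbf{0}, \quad \mathbf{1}^{\top}\mathbf{a}_i = 1 .
\]
```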

Relevance: 20.00%

Abstract:

Master's degree in Informatics Engineering, specialization in Knowledge and Decision Technologies.

Relevance: 20.00%

Abstract:

Optimization in current decision support systems has a strongly interdisciplinary character, related to the need to integrate different techniques and paradigms to solve complex real-world problems, for many of which computing optimal solutions is intractable. Heuristic search methods are known to produce good results within an acceptable time frame, but they often require their parameterization to be tuned in order to do so. Learning strategies can therefore improve a system's performance by giving it the ability to learn, for example, which optimization technique is the most suitable for a particular class of problems, or which parameterization of a given algorithm best fits a given scenario. Some of the optimization methods most used to solve real-world problems resulted from adapting ideas from several research areas, mainly inspired by nature: metaheuristics. Selecting a metaheuristic to solve a given problem is itself an optimization problem. Hyper-heuristics arise in this context as efficient methodologies for selecting or generating heuristics (or metaheuristics) to solve NP-hard optimization problems. This dissertation aims to contribute to the problem of selecting metaheuristics and their respective parameterization. To this end, it describes the specification of a hyper-heuristic for selecting nature-inspired techniques to solve the task scheduling problem in manufacturing systems, based on previous experience. The developed hyper-heuristic module uses a reinforcement learning algorithm (Q-Learning), which gives the system the ability to automatically select the metaheuristic to use in the optimization process, as well as its parameterization. Finally, computational tests are carried out to evaluate the influence of the hyper-heuristic on the performance of the AutoDynAgents scheduling system. As a general conclusion, the results obtained show a significant performance advantage when the Q-Learning-based hyper-heuristic is introduced.
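
A minimal sketch of Q-Learning used as a hyper-heuristic selector, in the spirit described above; the states, actions, reward and solver stub are assumptions of this example, not the AutoDynAgents integration:

```python
# Hedged sketch: Q-Learning chooses which metaheuristic to run, observes the
# reward (solution quality), and updates its value estimates. Illustrative.
import random

ACTIONS = ["tabu_search", "genetic_algorithm", "simulated_annealing"]
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

Q = {}  # (state, action) -> estimated value

def choose(state):
    if random.random() < EPS:                       # explore
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))  # exploit

def update(state, action, reward, next_state):
    best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

def solve(instance, metaheuristic):
    # Stub: run the chosen metaheuristic; reward = negated makespan here.
    return -random.uniform(80, 120)

state = "medium_instance"          # e.g., a class of scheduling instances
for episode in range(100):
    action = choose(state)
    reward = solve(None, action)
    update(state, action, reward, state)   # single-state, bandit-like setting
print(max(ACTIONS, key=lambda a: Q.get((state, a), 0.0)))
```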

Relevance: 20.00%

Abstract:

The main objective of this work is to report on the development of a multi-criteria methodology to support the assessment and selection of an Information System (IS) framework in a business context. The goal is to select a technological partner that provides the engine to serve as the basis for the development of a customized application for shrinkage reduction in supply chain management. Furthermore, the proposed methodology differs from most of those previously proposed in the sense that 1) it provides the decision makers with a set of pre-defined criteria, along with their descriptions and suggestions on how to measure them, and 2) it uses a continuous scale with two reference levels, so no normalization of the valuations is required. The methodology proposed here has been designed to be easy to understand and use, without the specific support of a decision analyst.
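
As a small illustration of scoring on a continuous scale anchored by two reference levels, where no further normalization is needed; the criteria, weights and values are invented for the example:

```python
# Hedged sketch of additive multi-criteria scoring with two reference levels
# ("neutral" and "good"), of the kind the abstract mentions. Illustrative.
def score(values, neutral, good, weights):
    """Map each criterion value linearly so neutral -> 0 and good -> 100,
    then aggregate with a weighted sum (no extra normalization needed)."""
    total = 0.0
    for c, v in values.items():
        partial = 100.0 * (v - neutral[c]) / (good[c] - neutral[c])
        total += weights[c] * partial
    return total

weights = {"cost": 0.4, "fit": 0.35, "support": 0.25}          # sum to 1
neutral = {"cost": 100_000, "fit": 5.0, "support": 5.0}
good    = {"cost": 60_000,  "fit": 8.0, "support": 8.0}
vendor  = {"cost": 75_000,  "fit": 7.5, "support": 6.0}
print(round(score(vendor, neutral, good, weights), 1))          # -> 62.5
```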