957 results for Graphic Processing Units, GPUs
Abstract:
Tool path generation is one of the most complex problems in Computer Aided Manufacturing. Although some efficient strategies have been developed, most of them are only useful for standard machining. However, the algorithms used for tool path computation demand high computational performance, which makes their implementation on many existing systems very slow or even impractical. Hardware acceleration is an incremental solution that can be cleanly added to these systems while keeping everything else intact. It is completely transparent to the user. The cost is much lower and the development time is much shorter than replacing the computers with faster ones. This paper presents an optimisation that exploits the power of multi-core Graphic Processing Units (GPUs) to accelerate tool path computation. This improvement is applied to a highly accurate and robust tool path generation algorithm. The paper presents, as a case study, a fully implemented algorithm for lathe turning of shoe lasts. A comparative study shows the gain achieved in total computing time: execution is almost two orders of magnitude faster than on modern PCs.
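The abstract gives no implementation details, but the per-point independence it relies on is easy to picture. Below is a minimal CUDA sketch, with all names hypothetical, that offsets each sampled surface point along its unit normal by the tool radius, one thread per point; the paper's actual tool path algorithm is considerably more involved.

    // Minimal sketch: offset each sampled surface point along its unit normal
    // by the tool radius -- one GPU thread per point. Illustrative only; the
    // paper's actual algorithm is not shown in the abstract.
    #include <cuda_runtime.h>
    #include <cstdio>

    struct Point3 { float x, y, z; };

    __global__ void offsetPoints(const Point3* p, const Point3* n,
                                 Point3* out, float toolRadius, int count)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= count) return;
        out[i].x = p[i].x + toolRadius * n[i].x;  // every point is independent,
        out[i].y = p[i].y + toolRadius * n[i].y;  // so the computation maps
        out[i].z = p[i].z + toolRadius * n[i].z;  // cleanly onto the GPU
    }

    int main()
    {
        const int count = 1 << 20;                // ~1M sampled surface points
        Point3 *p, *n, *out;                      // unified memory for brevity
        cudaMallocManaged(&p, count * sizeof(Point3));
        cudaMallocManaged(&n, count * sizeof(Point3));
        cudaMallocManaged(&out, count * sizeof(Point3));
        for (int i = 0; i < count; ++i) { p[i] = {1.f, 0.f, 0.f}; n[i] = {1.f, 0.f, 0.f}; }

        offsetPoints<<<(count + 255) / 256, 256>>>(p, n, out, 3.0f, count);
        cudaDeviceSynchronize();
        printf("first offset point: %.1f %.1f %.1f\n", out[0].x, out[0].y, out[0].z);
        cudaFree(p); cudaFree(n); cudaFree(out);
        return 0;
    }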
Abstract:
Many models exist in the literature to explain the success of technological innovation. However, no studies have been made of the graphic formats used to represent technological innovation models and their impact, or of how these models are understood by non-specialists in technology management. Thus, the main objective of this paper is to propose a new graphic configuration to represent technological innovation management. Based on the literature, the innovation model is first presented in the traditional format. Next, the same model is designed in a graphic format, named 'the see-saw of competitiveness', showing the interfaces among the identified factors. The two graphic formats were compared by a group of graduate students in terms of ease of understanding the conceptual model of innovation. The statistical analysis shows that the see-saw of competitiveness is preferred.
Abstract:
Various authors have written about the importance of drawing in design methodology. Their general conclusion points to drawing as an essential tool for design research, as it allows the investigation of several alternative solutions in the design process (Cross, 2007). The recent profound changes in the nature of design (Norman, 2011) justify a discussion of the purpose of drawing in design courses. As a consequence of this new reality, educational institutions face the challenge of defining their curricular structures and teaching methodologies. Among others, concepts such as collaboration and multidisciplinary design approaches have been discussed as strategies for design education (Heller and Talarico, 2011, pp. 82-85). In this context, and drawing on our teaching experience in the Drawing and Design areas, the authors discuss: How can drawing methods be included in current design teaching? Can drawing be considered an interdisciplinary approach? What contributions can these methodologies provide to the educational/learning process? Based on these concerns, we developed an interdisciplinary project in the Graphic Design Course involving two curricular units: Drawing 1 and Aesthetic and Design Theory 1. In this article the authors present the aims and the process developed, and discuss the outcomes of this pedagogical experience.
Abstract:
During the SELEAG project, an educational graphic adventure game was developed to teach history, culture and social relations to students. The game was evaluated in classroom contexts in several countries, with positive results. However, for technical reasons, some of the project's goals could not be properly explored, such as allowing the game to be extended by other educators or supporting online collaboration between players. In particular, the tools used to develop the game were too complicated to be used outside the development team, which limited the project's extensibility and made it impossible for educators without programming skills to translate their own educational content into this format. Moreover, although the game had some online collaboration features, all interaction took place outside the game, through a message forum, which proved unmotivating for players; many of them did not even realise the game had a collaborative component. This thesis addresses these two problems: it consisted of developing a game editor and engine with a simple interface that requires no prior programming knowledge and allows the creation of graphic adventure games with an online collaboration component truly embedded in the gameplay. The application was tested by users from several fields, and the results demonstrate its accessibility and simplicity regardless of the user's previous programming experience. The online collaboration component was also very well received by users, who showed considerable interest in seeing graphic adventure games with online collaboration developed in the future.
Abstract:
This letter presents a new parallel method for hyperspectral unmixing based on the efficient combination of two popular methods: vertex component analysis (VCA) and sparse unmixing by variable splitting and augmented Lagrangian (SUNSAL). First, VCA extracts the endmember signatures; then SUNSAL is used to estimate the abundance fractions. Both techniques are highly parallelizable, which significantly reduces the computing time. A design of the two methods for commodity graphics processing units (GPUs) is presented and evaluated. Experimental results obtained for simulated and real hyperspectral data sets reveal speedups of up to 100 times, which delivers the real-time response required by many remotely sensed hyperspectral applications.
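SUNSAL itself solves a constrained sparse-regression problem via ADMM, which is not reproduced here. As a minimal sketch of why the abundance-estimation step parallelises so well, the following CUDA kernel applies a precomputed unconstrained least-squares solve (the pseudoinverse of the endmember matrix times each pixel's spectrum) with one thread per pixel; all dimensions and names are illustrative assumptions.

    // Sketch of per-pixel parallelism in abundance estimation: each thread
    // multiplies a precomputed p x b pseudoinverse of the endmember matrix
    // by its pixel's b-band spectrum. SUNSAL's actual ADMM iterations and
    // constraints are omitted; this only shows why the step is parallel.
    #include <cuda_runtime.h>
    #include <cstdio>

    __global__ void abundances(const float* pinvM,  // p x b, row-major
                               const float* Y,      // b x numPixels, column-major
                               float* A,            // p x numPixels
                               int p, int b, int numPixels)
    {
        int px = blockIdx.x * blockDim.x + threadIdx.x;
        if (px >= numPixels) return;
        for (int e = 0; e < p; ++e) {           // one abundance per endmember
            float acc = 0.f;
            for (int k = 0; k < b; ++k)
                acc += pinvM[e * b + k] * Y[px * b + k];
            A[px * p + e] = acc;
        }
    }

    int main()
    {
        const int p = 3, b = 5, numPixels = 4;  // toy dimensions
        float *pinvM, *Y, *A;
        cudaMallocManaged(&pinvM, p * b * sizeof(float));
        cudaMallocManaged(&Y, b * numPixels * sizeof(float));
        cudaMallocManaged(&A, p * numPixels * sizeof(float));
        for (int i = 0; i < p * b; ++i) pinvM[i] = 0.1f;
        for (int i = 0; i < b * numPixels; ++i) Y[i] = 1.0f;
        abundances<<<1, 64>>>(pinvM, Y, A, p, b, numPixels);
        cudaDeviceSynchronize();
        printf("abundance[0][0] = %.2f\n", A[0]);  // 5 bands * 0.1 * 1.0 = 0.50
        return 0;
    }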
Abstract:
Master's dissertation in Informatics Engineering
Abstract:
The application of compressive sensing (CS) to hyperspectral images has been an active area of research over the past few years, both in terms of hardware and of signal processing algorithms. However, CS algorithms can be computationally very expensive due to the extremely large volumes of data collected by imaging spectrometers, a fact that compromises their use in applications under real-time constraints. This paper proposes four efficient implementations of hyperspectral coded aperture (HYCA) for CS on commodity graphics processing units (GPUs): two of them, termed P-HYCA and P-HYCA-FAST, and two additional implementations of its constrained version (CHYCA), termed P-CHYCA and P-CHYCA-FAST. The HYCA algorithm exploits the high correlation existing among the spectral bands of hyperspectral data sets and the generally low number of endmembers needed to explain the data, which largely reduces the number of measurements necessary to correctly reconstruct the original data. The proposed P-HYCA and P-CHYCA implementations have been developed using the compute unified device architecture (CUDA) and the cuFFT library. In the P-HYCA-FAST and P-CHYCA-FAST implementations, this library is replaced by a fast iterative method that yields very significant speedup factors and allows real-time requirements to be met. The proposed algorithms are evaluated not only in terms of reconstruction error for different compression ratios but also in terms of computational performance using two different GPU architectures by NVIDIA: 1) GeForce GTX 590; and 2) GeForce GTX TITAN. Experiments conducted using both simulated and real data reveal considerable acceleration factors and good results in the task of compressing remotely sensed hyperspectral data sets.
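The abstract names CUDA and the cuFFT library. The following is a hedged sketch, not the HYCA code, of how a per-pixel spectral transform can be batched across an entire image in a single cuFFT call using cufftPlanMany; dimensions and data are placeholders.

    // Minimal batched-FFT sketch with cuFFT: one 1-D complex transform per
    // hyperspectral pixel vector in a single call. HYCA's measurement and
    // reconstruction steps are not shown; dimensions are placeholders.
    // Compile with: nvcc sketch.cu -lcufft
    #include <cufft.h>
    #include <cuda_runtime.h>
    #include <cstdio>

    int main()
    {
        const int bands = 256;                  // FFT length (spectral bands)
        const int pixels = 1024;                // batch size (one FFT per pixel)

        cufftComplex* data;
        cudaMallocManaged(&data, sizeof(cufftComplex) * bands * pixels);
        for (int i = 0; i < bands * pixels; ++i) data[i] = {1.0f, 0.0f};

        cufftHandle plan;
        int n = bands;
        cufftPlanMany(&plan, 1, &n, nullptr, 1, bands,  // input: contiguous
                      nullptr, 1, bands,                // output: contiguous
                      CUFFT_C2C, pixels);               // batch = pixel count
        cufftExecC2C(plan, data, data, CUFFT_FORWARD);  // in-place, whole batch
        cudaDeviceSynchronize();

        printf("DC bin of pixel 0: %.1f\n", data[0].x); // = bands for all-ones input
        cufftDestroy(plan);
        cudaFree(data);
        return 0;
    }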
Abstract:
Dissertation submitted to obtain the Master's degree in Informatics Engineering
Abstract:
Simulations that aim to model real phenomena with high precision within a useful time frame demand enormous amounts of computational resources, whether processing, memory, or communication. While until recently these capabilities were confined to large supercomputers, with the advent of multicore processors and manycore GPUs the resources needed for this type of problem are now available at reasonable prices not only to researchers but to users in general. This work focuses on optimising an application that simulates the dynamic behaviour of dry granular materials, a problem from Civil Engineering, more specifically from Geotechnics, in which such simulations make it possible, for example, to investigate the displacement of large solid masses caused by slope collapse. There has therefore been interest in addressing this topic and producing simulations representative of real situations, notably by the CGSE (Australian Research Council Centre of Excellence for Geotechnical Science and Engineering) of the University of Newcastle, in collaboration with a member of UNIC (Centro de Investigação em Estruturas de Construção da FCT/UNL) who has been developing their own line of research, which materialised in the implementation, in CUDA, of a GPU algorithm that enables simulations of systems with a very large number of particles. The work presented here consists of optimising that implementation, under the premise of leaving the original code unchanged (or minimally changed), so as to obtain significant improvements both in the application's overall execution time and in the number of particles that can be simulated. At the same time, the proposed formulation is validated by obtaining simulations that reflect the physical phenomena with great precision. With the optimisations carried out, a reduction of about 30% of the initial execution time was achieved while meeting the necessary correctness and precision requirements.
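The thesis text does not reproduce the CUDA implementation. As a toy illustration of the kind of kernel such a discrete-element simulation spends its time in, the sketch below accumulates linear-spring contact forces with one thread per particle and a brute-force O(N²) neighbour check; the real code uses neighbour lists, damping, friction and the optimisations discussed above.

    // Toy discrete-element kernel: one thread per particle, brute-force O(N^2)
    // contact check, linear-spring normal force. Real DEM codes (including the
    // implementation discussed above) are far more sophisticated.
    #include <cuda_runtime.h>
    #include <cstdio>

    __global__ void contactForces(const float2* pos, float2* force,
                                  float radius, float stiffness, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        float2 f = {0.f, 0.f};
        for (int j = 0; j < n; ++j) {
            if (j == i) continue;
            float dx = pos[i].x - pos[j].x, dy = pos[i].y - pos[j].y;
            float dist = sqrtf(dx * dx + dy * dy);
            float overlap = 2.f * radius - dist;
            if (overlap > 0.f && dist > 1e-6f) { // particles in contact
                f.x += stiffness * overlap * dx / dist;
                f.y += stiffness * overlap * dy / dist;
            }
        }
        force[i] = f;
    }

    int main()
    {
        const int n = 3;
        float2 *pos, *force;
        cudaMallocManaged(&pos, n * sizeof(float2));
        cudaMallocManaged(&force, n * sizeof(float2));
        pos[0] = {0.f, 0.f}; pos[1] = {0.15f, 0.f}; pos[2] = {10.f, 0.f};
        contactForces<<<1, 32>>>(pos, force, 0.1f, 1000.f, n);
        cudaDeviceSynchronize();
        printf("force on particle 0: (%.1f, %.1f)\n", force[0].x, force[0].y);
        return 0;
    }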
Abstract:
This project presents a study of the technology provided by graphics cards (GPUs) in the field of programming applications traditionally executed on the CPU, otherwise known as GPGPU. An in-depth analysis of the current technological landscape is carried out, explaining part of the graphics card hardware and what GPGPU consists of. The different options available for running the performance tests used to evaluate the software are also studied, along with which software is designed to be executed with this technology and the procedure to follow in order to use it. Several tests were carried out to evaluate the performance of software designed for, or compatible with, execution on the GPU, producing comparative tables of computation times. Once the different software tests were completed, it can be concluded that not every application processed on the GPU brings a benefit. To see improvements, the application must meet a series of requirements: it must have a large number of operations that can be performed in parallel, there must be no conditional dependencies on the execution of the operations, and it must be an arithmetic-intensive computation.
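As a hedged sketch of a workload meeting those three criteria — massively parallel, free of data-dependent branching, and arithmetic-intensive — the CUDA kernel below applies a long chain of transcendental operations independently to each element; a memory-bound kernel with divergent branches would be the opposite case and would typically show little or no GPU benefit.

    // Sketch of a GPU-friendly workload: every element is independent (massive
    // parallelism), the inner loop has no data-dependent branches, and it does
    // many floating-point ops per byte moved (high arithmetic intensity).
    #include <cuda_runtime.h>
    #include <cstdio>

    __global__ void heavyMath(float* data, int n, int iters)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        float v = data[i];
        for (int k = 0; k < iters; ++k)     // branch-free, compute-dense loop
            v = sinf(v) * cosf(v) + sqrtf(fabsf(v) + 1.0f);
        data[i] = v;
    }

    int main()
    {
        const int n = 1 << 20;
        float* data;
        cudaMallocManaged(&data, n * sizeof(float));
        for (int i = 0; i < n; ++i) data[i] = 0.5f;
        heavyMath<<<(n + 255) / 256, 256>>>(data, n, 1000);
        cudaDeviceSynchronize();
        printf("data[0] = %f\n", data[0]);
        cudaFree(data);
        return 0;
    }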
Abstract:
Genomic sequence analysis tools allow biologists to identify and understand fundamental regions implicated in genetic diseases. There is currently a need to provide the scientific community with efficient analysis tools. This project carries out a characterisation and performance analysis of algorithms used in the comparison of complete genomic sequences, executed on MultiCore and ManyCore architectures. Based on this analysis, the suitability of these architectures for solving the problem of comparing genomic sequences is evaluated. Finally, a series of modifications to the implementations of these algorithms is proposed with the aim of improving performance.
Abstract:
Sequence alignment applications are an important tool for the scientific community. These bioinformatics applications are used in many different fields, such as medicine, biology, pharmacology and genetics. Today, sequence alignment algorithms have high computational complexity and must handle an ever-growing volume of data. For this reason, alternatives must be sought so that these applications can cope with the growth that sequence databases are undergoing day by day. This project studies and investigates improvements to this type of application, such as the use of parallel systems, which can improve performance considerably.
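One common coarse-grained parallelisation for such workloads assigns one GPU thread to each query/database pair. The sketch below, a deliberate simplification, scores fixed-length pairs by plain match counting; production aligners run dynamic programming (e.g. Smith-Waterman) with far more elaborate GPU mappings.

    // Sketch of coarse-grained parallel sequence comparison: one thread scores
    // one query/database pair. The score here is a plain match count over
    // fixed-length sequences, standing in for a real alignment algorithm.
    #include <cuda_runtime.h>
    #include <cstdio>
    #include <cstring>

    __global__ void scorePairs(const char* query, const char* db,
                               int* score, int len, int numSeqs)
    {
        int s = blockIdx.x * blockDim.x + threadIdx.x;
        if (s >= numSeqs) return;
        int matches = 0;
        for (int k = 0; k < len; ++k)
            matches += (query[k] == db[s * len + k]);
        score[s] = matches;
    }

    int main()
    {
        const int len = 8, numSeqs = 2;
        char *query, *db; int *score;
        cudaMallocManaged(&query, len);
        cudaMallocManaged(&db, len * numSeqs);
        cudaMallocManaged(&score, numSeqs * sizeof(int));
        memcpy(query, "ACGTACGT", len);
        memcpy(db, "ACGTACGA" "TTTTTTTT", len * numSeqs);
        scorePairs<<<1, 32>>>(query, db, score, len, numSeqs);
        cudaDeviceSynchronize();
        printf("scores: %d %d\n", score[0], score[1]);  // expect 7 and 2
        return 0;
    }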
Abstract:
A graphics processing unit (GPU) is a hardware device normally used to manipulate computer memory for the display of images. GPU computing is the practice of using a GPU device for scientific or general-purpose computations that are not necessarily related to the display of images. Many problems in econometrics have a structure that allows for successful use of GPU computing. We explore two examples. The first is simple: repeated evaluation of a likelihood function at different parameter values. The second is a more complicated estimator that involves simulation and nonparametric fitting. We find speedups from 1.5 up to 55.4 times, compared to computations done on a single CPU core. These speedups can be obtained with very little expense, energy consumption, and time dedicated to system maintenance, compared to CPU-based solutions of equivalent performance. Code for the examples is provided.
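The article's own code is not reproduced here, but the first example's pattern, the same likelihood evaluated at many parameter values, is easy to sketch. In the following CUDA kernel, assumed rather than taken from the article, each thread computes the Gaussian log-likelihood of one fixed data set at its own (mu, sigma) candidate.

    // Sketch of embarrassingly parallel likelihood evaluation: each thread
    // computes the Gaussian log-likelihood of the same data set at its own
    // (mu, sigma) candidate, so a whole grid search takes one kernel launch.
    #include <cuda_runtime.h>
    #include <cstdio>

    __global__ void gaussianLogLik(const float* data, int n,
                                   const float* mu, const float* sigma,
                                   float* logLik, int numParams)
    {
        int t = blockIdx.x * blockDim.x + threadIdx.x;
        if (t >= numParams) return;
        float m = mu[t], s = sigma[t];
        float acc = -0.5f * n * logf(2.0f * 3.14159265f * s * s);
        for (int i = 0; i < n; ++i) {
            float d = data[i] - m;
            acc -= 0.5f * d * d / (s * s);
        }
        logLik[t] = acc;
    }

    int main()
    {
        const int n = 1000, numParams = 256;
        float *data, *mu, *sigma, *logLik;
        cudaMallocManaged(&data, n * sizeof(float));
        cudaMallocManaged(&mu, numParams * sizeof(float));
        cudaMallocManaged(&sigma, numParams * sizeof(float));
        cudaMallocManaged(&logLik, numParams * sizeof(float));
        for (int i = 0; i < n; ++i) data[i] = 1.0f;      // toy data
        for (int t = 0; t < numParams; ++t) { mu[t] = t * 0.01f; sigma[t] = 1.0f; }
        gaussianLogLik<<<(numParams + 127) / 128, 128>>>(data, n, mu, sigma,
                                                         logLik, numParams);
        cudaDeviceSynchronize();
        int best = 0;                                    // grid-search maximiser
        for (int t = 1; t < numParams; ++t) if (logLik[t] > logLik[best]) best = t;
        printf("best mu = %.2f\n", mu[best]);            // expect ~1.00
        return 0;
    }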
Abstract:
This article details the use of photographic rectification as support for the graphic documentation of historical and archaeological heritage, specifically the southern facade of the Torre del Pretori (Praetorium Tower) in Tarragona. The Praetorium Tower is part of a larger monumental complex and one of the towers that connected different parts of the Tarraco Provincial Forum, the political-administrative centre of the ancient capital of Hispania Citerior. It is therefore a valuable example of the evolution of Roman urban architecture. The aim of this project is to provide accurate graphic documentation of the structure to facilitate the restoration and conservation of the tower, as well as to provide a more profound architectural and archaeological understanding of the Roman forum. The use of photographic rectification enabled us to overcome the spatial and temporal difficulties in data collection caused by the size and location of the building. Specific software made it easier to obtain accurate two-dimensional images. For this reason, in our case, photographic rectification helped us to make a direct analysis of the monument and facilitated interpretation of the architectural stratigraphy. We currently separate the line of research into two concepts: the construction processes and the architecture of the building. The documentation collected permitted various analyses: the characterisation of the building modules, identification of the tools used to work the building materials, etc. In conclusion, the use of orthoimages is a powerful tool that permits the systematic study of a Roman building that has evolved over the centuries and now stands in a modern urban context.
Abstract:
The purpose of this thesis was to investigate the Job Definition Format (JDF) and how it could be used in printing houses' systems. JDF is a very new information exchange standard that offers many opportunities to the printing industry. It is the first standard with the ability to carry a print job from genesis through completion, and it can also bridge the communication gap between production and management information services. In the study of JDF we focused on examining how JDF will affect the printing industry. The thesis also examines the ability of printing houses' systems to work with the JDF standard. The result of the study is a comprehensive picture of what JDF is. We also researched system developers' visions of how JDF will affect their products in the future.