957 results for Graphic Processing Units, GPUs


Relevance:

20.00%

Publisher:

Abstract:

Summer Graphic Design workshops, co-sponsored by the Continuing Education Division and the Graphic Design Dept. Schedules C & D, three weeks each; tuition $550.00–$650.00; New Times in London, $1,735.00.

Relevance:

20.00%

Publisher:

Abstract:

Summer Graphic Design workshops, co-sponsored by the Continuing Education Division and the Graphic Design Dept. Schedules A & B; tuition $410.00; Applied Semiotics for Design Seminar, $625.00.

Relevance:

20.00%

Publisher:

Abstract:

Summer Graphic Design workshops, co-sponsored by the Continuing Education Division and the Graphic Design Dept. Schedules B, C & D; tuition $450.00.

Relevance:

20.00%

Publisher:

Abstract:

Dissertation presented to the Graduate Program in Communication (Master's) of the Universidade Municipal de São Caetano do Sul.

Relevance:

20.00%

Publisher:

Abstract:

Objective: The purpose of this study was to compare the dental movement that occurs during the processing of maxillary complete dentures with 3 different base thicknesses, using 2 investment methods and microwave polymerization.

Methods: A sample of 42 denture models was randomly divided into 6 groups (n = 7), with base thicknesses of 1.25, 2.50, and 3.75 mm and gypsum or silicone flask investment. Points were demarcated on the distal surface of the second molars and on the back of the gypsum cast at the alveolar ridge level to allow linear and angular measurement using AutoCAD software. The data were subjected to two-factor analysis of variance, with Tukey and Fisher post hoc tests.

Results: Angular analysis of the varying methods and their interactions showed a statistical difference (P = 0.023) when the magnitudes of molar inclination were compared. Tooth movement was greater for thin-base prostheses, 1.25 mm (-0.234), than for thick, 3.75 mm (0.2395), with antagonistic behavior. Prosthesis investment with silicone (0.053) showed greater vertical change than gypsum investment (0.032). There was a difference between the points of analysis, demonstrating that the changes were not symmetric.

Conclusions: All groups evaluated showed changes in the position of the artificial teeth after processing. The complete denture with a thin base (1.25 mm) and silicone investment showed the worst results, whereas the intermediate thickness (2.50 mm) proved ideal for the denture base.

Relevance:

20.00%

Publisher:

Abstract:

A CMOS audio equalizer based on a parallel array of 2nd-order bandpass sections is presented and realized with triode transconductors. It has a programmable 12 dB boost/cut on each of its three decade bands, easily achieved through the linear dependence of gm on VDS. In accordance with a 0.8 μm n-well double-metal fabrication process, a range of simulations supports the theoretical analysis and circuit performance in different boost/cut scenarios. For VDD = 3.3 V, full-boost stand-by power consumption is 1.05 mW. THD is -42.61 dB @ 1 Vpp and may be improved by balanced structures. Thermal- and 1/f-noise spectral densities are 3.2 μV/√Hz and 18.2 μV/√Hz @ 20 Hz, respectively, for a dynamic range of 52.3 dB @ 1 Vpp. The equalizer's effective area is 2.4 mm². The drawback of the transmission zero due to the feedthrough capacitance of a triode input device is also addressed. The proposed topology can be extended to the design of more complex graphic equalizers and hearing aids.
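The parallel-array topology described in this abstract, bandpass sections whose outputs are summed with programmable boost/cut gains, can be illustrated with a minimal digital sketch. The following Python code is an assumption-laden analogue, not the paper's analog triode-transconductor circuit: it builds 2nd-order bandpass sections from the standard RBJ biquad formulas and scales each section's output by its band gain. All function names and the choice of Q are illustrative.

```python
import numpy as np

def bandpass_biquad(f0, q, fs):
    """RBJ-cookbook 2nd-order bandpass coefficients (0 dB peak gain at f0)."""
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([alpha, 0.0, -alpha])
    a = np.array([1 + alpha, -2 * np.cos(w0), 1 - alpha])
    return b / a[0], a / a[0]

def filt(b, a, x):
    """Direct-form I difference equation for a biquad."""
    y = np.zeros_like(x)
    for n in range(len(x)):
        acc = b[0] * x[n]
        if n >= 1:
            acc += b[1] * x[n - 1] - a[1] * y[n - 1]
        if n >= 2:
            acc += b[2] * x[n - 2] - a[2] * y[n - 2]
        y[n] = acc
    return y

def equalize(x, fs, bands, gains_db):
    """Parallel-array equalizer: sum of bandpass outputs, each boosted/cut."""
    y = np.zeros_like(x)
    for f0, g in zip(bands, gains_db):
        b, a = bandpass_biquad(f0, 1.0, fs)
        y += 10 ** (g / 20.0) * filt(b, a, x)
    return y
```

A 12 dB boost on one band scales that section's contribution by a factor of about 3.98, mirroring the programmable boost/cut described for each decade band of the analog design.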

Relevance:

20.00%

Publisher:

Abstract:

SAFT techniques are based on the sequential activation, in emission and reception, of the array elements and the post-processing of all the received signals to compose the image. Thus, image generation can be divided into two stages: (1) the excitation and acquisition stage, where the signals received by each element or group of elements are stored; and (2) the beamforming stage, where the signals are combined to obtain the image pixels. The use of Graphics Processing Units (GPUs), which are programmable devices with a high level of parallelism, can accelerate the computation of the beamforming process, which usually includes functions such as dynamic focusing, band-pass filtering, spatial filtering, and envelope detection. This work shows that GPU technology can accelerate the beamforming and post-processing algorithms in SAFT imaging by more than one order of magnitude with respect to CPU implementations. ©2009 IEEE.
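The two-stage pipeline this abstract describes, acquisition followed by beamforming, can be sketched with a minimal delay-and-sum SAFT beamformer. This Python sketch is illustrative only, not the authors' GPU implementation: for each firing element, every pixel accumulates the A-scan sample at the round-trip time of flight, which is exactly the per-pixel operation a GPU kernel would evaluate for many pixels in parallel. Function and parameter names are assumptions.

```python
import numpy as np

def saft_beamform(signals, elem_x, pixels, c, fs):
    """Delay-and-sum SAFT beamforming (illustrative sketch).

    signals: (n_elem, n_samples) A-scans, element i both firing and receiving.
    elem_x:  (n_elem,) element positions along the array [m].
    pixels:  (n_pix, 2) image points (x, z) [m].
    c: sound speed [m/s]; fs: sampling rate [Hz].
    """
    n_elem, n_samples = signals.shape
    image = np.zeros(len(pixels))
    for i in range(n_elem):
        # round-trip distance: element -> pixel -> element
        dx = pixels[:, 0] - elem_x[i]
        dist = 2.0 * np.hypot(dx, pixels[:, 1])
        idx = np.round(dist / c * fs).astype(int)
        valid = idx < n_samples
        image[valid] += signals[i, idx[valid]]
    return image
```

On a GPU, the per-pixel accumulation inside the loop would be mapped to one thread per pixel; the CPU version above makes the data parallelism of the beamforming stage explicit.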

Relevance:

20.00%

Publisher:

Abstract:

Supply chain management, postponement, and demand management are of strategic importance to the economic success of organizations because they directly influence the production process. The aim of this paper is to analyze the influence of postponement in an enterprise make-to-stock production system with seasonal demand. The research method used was a case study; the instruments of data collection were semi-structured interviews, document analysis, and site visits. The research is based on the following concepts. Demand management can be understood as a practice that allows managing and coordinating the supply chain in reverse, in which consumers trigger the actions that lead to product delivery. Supply chain management makes it possible to add value and exceed the expectations of consumers, developing a win-win relationship with suppliers and customers. The postponement strategy must fit the characteristics of markets that demand a variety of customized products and services, lower cost, and higher quality, aiming to support decision making. The make-to-stock production system is of interest to organizations operating in markets with high demand variability. © 2011 IEEE.

Relevance:

20.00%

Publisher:

Abstract:

Huge image collections have recently become available. In this scenario, Content-Based Image Retrieval (CBIR) systems have emerged as a promising approach to support image searches. The objective of CBIR systems is to retrieve the images in a collection most similar to a given query image, taking into account visual properties such as texture, color, and shape. In these systems, the effectiveness of the retrieval process depends heavily on the accuracy of the ranking approach. Recently, re-ranking approaches have been proposed to improve the effectiveness of CBIR systems by taking into account the relationships among all images in a given dataset. These approaches typically demand a huge amount of computational power, which hampers their use in practical situations. On the other hand, they can be massively parallelized. In this paper, we propose to speed up the computation of the RL-Sim algorithm, a recently proposed image re-ranking approach, by using the computational power of Graphics Processing Units (GPUs). GPUs are emerging as relatively inexpensive parallel processors that are becoming available on a wide range of computer systems. We address the performance challenges of image re-ranking with a parallel solution designed to fit the computational model of GPUs. We conducted an experimental evaluation considering different implementations and devices. Experimental results demonstrate that significant performance gains can be obtained: our approach achieves a speedup of 7x over a serial implementation for the overall algorithm, and up to 36x on its core steps.
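The data-parallel structure that makes this kind of re-ranking GPU-friendly can be illustrated with a small sketch. The code below is not the published RL-Sim algorithm; it only shows a rank-list intersection measure of the kind such re-ranking methods iterate on, expressed as a matrix product, the pattern that maps directly onto GPU matrix-multiply kernels. All names are illustrative.

```python
import numpy as np

def ranklist_similarity(ranks, k):
    """Pairwise similarity of ranked lists by top-k intersection size.

    ranks: (n, n) integer matrix; ranks[i] is image i's ranked list of
    image ids, most similar first. Returns an (n, n) matrix whose (i, j)
    entry is |top-k(i) ∩ top-k(j)| / k.
    """
    n = ranks.shape[0]
    topk = ranks[:, :k]                      # (n, k) neighborhood of each image
    # member[i, x] = 1 if image x appears in i's top-k list
    member = np.zeros((n, n))
    member[np.arange(n)[:, None], topk] = 1.0
    # each pair's intersection size is an inner product of membership rows:
    # an embarrassingly parallel computation well suited to a GPU
    return member @ member.T / k
```

Because every pairwise score is independent, the whole similarity matrix can be computed by one dense matrix product, which is why such re-ranking steps parallelize so well.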

Relevance:

20.00%

Publisher:

Abstract:

Graduate Program in Molecular Biophysics - IBILCE

Relevance:

20.00%

Publisher:

Abstract:

This PhD thesis was entirely developed at the Telescopio Nazionale Galileo (TNG, Roque de los Muchachos, La Palma, Canary Islands) with the aim of designing, developing and implementing a new Graphical User Interface (GUI) for the Near Infrared Camera Spectrometer (NICS) installed at the Nasmyth A focus of the telescope. The idea of a new GUI for NICS arose from the wish to optimize the astronomers' work through a set of powerful tools not present in the existing GUI, such as the possibility to automatically move an object onto the slit or to perform a preliminary image analysis and spectrum extraction. The new GUI also provides a wide and versatile image display, an automatic procedure to find astronomical objects, and a facility for automatic image crosstalk correction. In order to test the overall functioning of the new GUI for NICS, and to provide some information on the atmospheric extinction at the TNG site, two telluric standard stars, Hip031303 and Hip031567, were observed spectroscopically during engineering time. The NICS set-up used was as follows: Large Field (0.25''/pixel) mode, 0.5'' slit, spectral dispersion through the AMICI prism (R~100), and the higher-resolution (R~1000) JH and HK grisms.

Relevance:

20.00%

Publisher:

Abstract:

Microprocessors based on a single processor (CPU) saw rapid performance growth and falling costs for about twenty years. These microprocessors brought computing power on the order of GFLOPS (giga floating-point operations per second) to desktop PCs and hundreds of GFLOPS to server clusters. This rise brought new program features, better user interfaces, and many other benefits. However, this growth slowed sharply in 2003 because of ever-higher power consumption and heat-dissipation problems, which prevented further clock-frequency increases. The physical limits of silicon were drawing ever closer. To work around the problem, CPU (Central Processing Unit) manufacturers began designing multicore microprocessors, a choice that had a considerable impact on the developer community, accustomed to thinking of software as a series of sequential instructions. Programs that had always benefited from performance improvements with each new CPU generation thus saw no performance gains, since, running on a single core, they could not exploit the full power of the CPU. To fully exploit the power of the new CPUs, concurrent programming, previously used only on expensive systems or supercomputers, became an increasingly common practice among developers. At the same time, the video-game industry captured a considerable market share: in 2013 alone, nearly 100 billion dollars will be spent on gaming hardware and software. To make their titles more attractive, software houses developing video games rely on ever more powerful and often poorly optimized graphics engines, which makes those titles extremely demanding in terms of performance.
For this reason GPU (Graphics Processing Unit) manufacturers, especially in the last decade, have engaged in a true performance race that has led to products with staggering computing power. But unlike the CPUs, which in the early 2000s took the multicore path in order to keep supporting sequential programs, GPUs have become manycore, that is, equipped with hundreds upon hundreds of small cores executing computations in parallel. Can this immense computing power be used in other application fields? The answer is yes, and the goal of this thesis is precisely to assess, at the current state of the art, how and with what efficiency a generic piece of software can make use of the GPU instead of the CPU.