927 results for graphics processor
Abstract:
Optical activity is the ability of chiral substances to rotate the plane of plane-polarized light, and it is measured using an instrument called a polarimeter. An educational software application was developed to explore, both interactively and visually, the concepts related to polarimetry and to facilitate their understanding. The software was field-tested, and a questionnaire evaluating the graphical interface, the usability, and the software as an educational tool was answered by students. The results characterize the computer application developed as an auxiliary tool for assisting teachers in lectures and students in the learning process.
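The quantity a polarimeter reports is usually reduced to the specific rotation, [α] = α / (l·c), where α is the observed rotation, l the path length in decimetres, and c the concentration in g/mL. The minimal Python sketch of that calculation below is illustrative only; the function name and the sample values are not taken from the software described above.

```python
def specific_rotation(observed_deg, path_dm, conc_g_per_ml):
    """Specific rotation [alpha] = alpha / (l * c), with the path length
    l in decimetres and the concentration c in grams per millilitre."""
    return observed_deg / (path_dm * conc_g_per_ml)

# Example: a 0.10 g/mL sucrose-like solution in a 2 dm tube rotating
# the plane of polarization by +13.3 degrees.
print(specific_rotation(13.3, 2.0, 0.10))  # ~66.5 deg.mL/(g.dm)
```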
Abstract:
This report describes a study of the feasibility of using a conventional digital camera, a cell-phone camera, an optical microscope, and a scanner as digital image capture devices for printed microzones. An array containing nine circular zones was drawn using graphics software and printed onto transparency film by a laser printer. Owing to its superior analytical performance, the scanner was chosen for the quantitative determination of Fe2+ in pharmaceutical samples. The data obtained from the scanned images did not differ statistically from those obtained by the reference spectrophotometric method at the 0.05 confidence level.
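Image-based quantification of this kind typically reduces each circular zone to a mean colour-channel intensity and converts it into an absorbance-like signal before building a calibration curve against standards. The sketch below assumes a NumPy RGB image and hypothetical zone coordinates; it is not the authors' processing code.

```python
import numpy as np

def zone_signal(rgb_image, cx, cy, radius, channel=1, white=255.0):
    """Absorbance-like signal of one circular zone: the mean intensity of
    the chosen colour channel inside the zone, referenced to white."""
    h, w = rgb_image.shape[:2]
    yy, xx = np.ogrid[:h, :w]
    mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2
    mean_intensity = rgb_image[..., channel][mask].mean()
    return -np.log10(max(float(mean_intensity), 1.0) / white)

# Hypothetical use with nine zones scanned into `img`:
# signals = [zone_signal(img, x, y, 40) for (x, y) in zone_centers]
# A linear fit of `signals` against Fe2+ standards gives the calibration curve.
```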
Abstract:
The use of spreadsheet software as a computational educational tool is not widespread in Chemical Education in Brazil. In turn, Qualitative Analytical Chemistry is considered a discipline with classical and inflexible content. Thus, in this work the spreadsheet software Excel® was evaluated as a teaching tool in a Qualitative Analytical Chemistry course for calculating the concentrations of the species in equilibrium in acid solutions. After the theory involved in such calculations was presented, the students were invited to represent the distribution of these species graphically using the spreadsheet software. The teaching team then evaluated the resulting graphs with regard to form and content. Graphs with conceptual and/or formal errors were returned for correction, and in all cases the second submission showed significant improvement. The software proved motivating for the content of the discipline, improving interest in learning, and showed that even in classical disciplines it is possible to introduce new technologies to support the teaching process.
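For a monoprotic acid, the distribution diagram the students built in Excel® follows directly from the fractions α_HA = [H⁺]/([H⁺]+Ka) and α_A⁻ = Ka/([H⁺]+Ka). A minimal Python equivalent of that spreadsheet exercise, with an illustrative pKa, is shown below.

```python
import numpy as np

def monoprotic_fractions(pH, pKa):
    """Distribution fractions of HA and A- as a function of pH."""
    h = 10.0 ** (-np.asarray(pH, dtype=float))
    ka = 10.0 ** (-pKa)
    alpha_ha = h / (h + ka)
    alpha_a = ka / (h + ka)
    return alpha_ha, alpha_a

pH = np.linspace(0.0, 14.0, 141)
alpha_ha, alpha_a = monoprotic_fractions(pH, pKa=4.76)  # e.g. acetic acid
# Plotting alpha_ha and alpha_a against pH reproduces the classic
# distribution-of-species diagram produced in the spreadsheet.
```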
Abstract:
In Finland, electricity is produced from various energy sources and by various methods. Most of the electricity is produced by means of nuclear energy, hydropower and natural gas. Electricity is used in both consumer and professional applications in the form of various electrical devices. Many electrical devices operate on either AC or DC current, within a specific voltage range. If manufacturers of electrical devices want to market their products with CE approval, the products must meet certain requirements. One directive to be complied with is the Low Voltage Directive; by fulfilling it, an electrical device can be marketed in the European Economic Area and thereby meets the requirements of the CE marking. It must be possible to perform electrical safety measurements on the manufactured devices with electrical safety testers, which themselves also evolve and make use of modern technology. Thanks to modern technology, electrical safety testers can also be applied to other kinds of electrical measurements, such as voltage and current measurements. This Master's thesis presents how the processor board, together with its peripheral components, of an electrical safety tester featuring modern optical data transfer was designed and implemented into a finished prototype.
Abstract:
When designing an electric drive, the control can in many cases be tested with a real-time simulator instead of the actual hardware. Many of the algorithms used as the basis of real-time simulations are suited to a fully controlled inverter bridge. In some applications, however, a half-controlled bridge is preferred. With a half-controlled bridge the causality of the model can reverse, which conventional real-time simulators are unable to simulate. The goal of this work was to develop a real-time simulator for a half-controlled permanent magnet synchronous machine drive. The permanent magnet synchronous machine and the inverter bridge of the actual drive were modeled in the emulator. The simulator was implemented on a digital signal processor (DSP), and the peripherals related to the measurements were modeled on an FPGA. A separate controller, which was also used to control the actual electric drive, was connected to the emulator. Measurements made with the emulator and with the actual drive were compared, and the emulated results corresponded fairly well to those measured from the actual drive.
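As a rough illustration of the kind of machine model such a real-time simulator advances on the DSP, the sketch below performs one forward-Euler step of the standard permanent magnet synchronous machine equations in the rotor (dq) reference frame. The parameter values and names are placeholders, not those of the drive studied in the thesis.

```python
from dataclasses import dataclass

@dataclass
class PMSMState:
    i_d: float = 0.0   # d-axis current (A)
    i_q: float = 0.0   # q-axis current (A)

def pmsm_step(s, u_d, u_q, w_e, dt, R=0.05, L_d=1e-3, L_q=1e-3, psi_pm=0.1, p=4):
    """One forward-Euler step of the dq-frame PMSM current model.
    Returns the updated state and the electromagnetic torque."""
    did = (u_d - R * s.i_d + w_e * L_q * s.i_q) / L_d
    diq = (u_q - R * s.i_q - w_e * (L_d * s.i_d + psi_pm)) / L_q
    s = PMSMState(s.i_d + dt * did, s.i_q + dt * diq)
    torque = 1.5 * p * (psi_pm * s.i_q + (L_d - L_q) * s.i_d * s.i_q)
    return s, torque

# Example step with placeholder voltages and electrical speed:
state, T_e = pmsm_step(PMSMState(), u_d=0.0, u_q=10.0, w_e=100.0, dt=1e-5)
```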
Abstract:
App Engine is an abbreviation of the English terms application and engine. It is a commercial service implemented by Google, Inc. that follows the principles of cloud computing and enables customers to develop their own applications. A self-devised service can be programmed into the system and used over the Internet, either privately or publicly. It is thus a distributed server system that provides an application platform which adapts dynamically to the load and in which the customer does not rent virtual machines. The storage capacity offered by the system is also available flexibly. The Bachelor's thesis itself looks in more detail at implementing an application on the service, at its restrictions and at its suitability. The beginning reviews the cloud concept, of which many computer users have an unclear understanding. Different kinds of systems can be built in a great many ways, of which we limit our treatment to feasible, common solutions.
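For context, an application running on App Engine's Python standard environment is typically a small WSGI web app to which the platform routes incoming HTTP requests. The snippet below is a generic Flask hello-world of that kind, not code from the thesis, and it omits the deployment configuration (the app.yaml file that declares the runtime).

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # App Engine forwards HTTP requests to this WSGI application object.
    return "Hello from a cloud-hosted application!"

if __name__ == "__main__":
    # Local development server only; in production the platform serves `app`.
    app.run(host="127.0.0.1", port=8080, debug=True)
```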
Abstract:
The junction temperatures of an inverter's IGBT module cannot be measured directly, so a real-time thermal model is needed to estimate them. The goal of this work is to develop for this purpose a solution implemented in C that is sufficiently accurate and at the same time as computationally efficient as possible. The software implementation must also suit different module types and, when necessary, take into account the mutual heating effect of the chips within the same module. Based on a literature review, a model based on a thermal impedance matrix is selected from the existing thermal models as the basis of the practical implementation. From the thermal impedance matrix, an s-domain simulation model is built in Simulink and used as a reference, among other things, for verifying the accuracy of the implementation. The thermal model needs information about the inverter's losses, so different alternatives for the loss calculation are examined in the work. The development of the thermal model from the s-domain model into finished C code is described in detail. First, the s-domain model is discretized into the z-domain. The z-domain transfer functions are in turn converted into first-order difference equations. The multi-rate thermal model developed in this work is obtained by distributing the first-order difference equations across different time levels for execution, according to the update rate required by the term each equation describes. At best, such an implementation can consume less than one fifth of the clock cycles of a straightforward single-rate implementation. The accuracy of the implementation is good. The execution times required by the implementation were tested on a Texas Instruments TMS320C6727 processor (300 MHz). Computing the example model was determined to consume only 0.4% of the processor's clock cycles when the inverter operates at a 5 kHz switching frequency. The accuracy of the implementation and its low demand on computing capacity make it possible to use the thermal model for thermal protection and to integrate it into the other software already running on the processor.
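One common realization of such a first-order difference equation is a Foster RC stage discretized with a zero-order hold, ΔT[k+1] = a·ΔT[k] + b·P[k] with a = exp(-Δt/τ) and b = R_th·(1-a). The thesis implements its model in C on the DSP; the Python sketch below, with placeholder R_th and τ values, only conveys the shape of the update.

```python
import math

class FosterStage:
    """One first-order term of a thermal impedance: dT' = a*dT + b*P."""
    def __init__(self, r_th, tau, dt):
        self.a = math.exp(-dt / tau)    # decay factor over one sample period
        self.b = r_th * (1.0 - self.a)  # gain towards the steady state R_th*P
        self.dT = 0.0                   # temperature rise of this stage (K)

    def step(self, power_w):
        self.dT = self.a * self.dT + self.b * power_w
        return self.dT

# Junction temperature rise = sum of the stages driven by the chip's loss power.
# Stages with long time constants could be updated at a slower rate (multi-rate).
stages = [FosterStage(0.05, 0.01, 1e-3), FosterStage(0.10, 0.10, 1e-3)]
t_rise = sum(stage.step(200.0) for stage in stages)
```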
Abstract:
The objective of the thesis was to create three tutorials for MeVEA Simulation Software to introduce new users to the modeling methodology used in the MeVEA Simulation Software. MeVEA Simulation Software is real-time simulation software based on multibody dynamics. The simulation software is designed to create simulation models of complete mechatronic systems. The thesis begins with a more detailed description of the MeVEA Simulation Software and its components. The thesis then presents the three simulation models and the written theory behind the steps of model creation. The first tutorial introduces the basic features which are used in most simulation models; the basic features include bodies, constraints, forces, basic hydraulics and motors. The second tutorial introduces the power transmission components, tyres and user input definitions for the different components in power transmission systems. The third tutorial introduces the definitions of two different types of collisions and the collision graphics used in MeVEA Simulation Software.
Abstract:
Engraved illustrations are based on the original oil paintings of several Finnish artists: A. v. Becker, A. Edelfelt, R. W. Ekman, W. Holmberg, K. E. Jansson, O. Kleineh, J. Knutson, B. Lindholm, H. Munsterhjelm and B. Reinhold.
Abstract:
As technology geometries have shrunk to the deep submicron regime, the communication delay and power consumption of global interconnections in high performance Multi-Processor Systems-on-Chip (MPSoCs) are becoming a major bottleneck. The Network-on-Chip (NoC) architecture paradigm, based on a modular packet-switched mechanism, can address many of the on-chip communication issues such as the performance limitations of long interconnects and the integration of a large number of Processing Elements (PEs) on a chip. The choice of routing protocol and NoC structure can have a significant impact on performance and power consumption in on-chip networks. In addition, building a high performance, area and energy efficient on-chip network for multicore architectures requires a novel on-chip router allowing a larger network to be integrated on a single die with reduced power consumption. On top of that, network interfaces are employed to decouple computation resources from communication resources, to provide synchronization between them, and to achieve backward compatibility with existing IP cores. Three adaptive routing algorithms are presented as part of this thesis. The first presented routing protocol is a congestion-aware adaptive routing algorithm for 2D mesh NoCs which does not support multicast (one-to-many) traffic, while the other two protocols are adaptive routing models supporting both unicast (one-to-one) and multicast traffic. A streamlined on-chip router architecture is also presented for avoiding congested areas in 2D mesh NoCs by employing efficient input and output selection. The output selection utilizes an adaptive routing algorithm based on the congestion condition of neighboring routers, while the input selection allows packets to be serviced from each input port according to its congestion level. Moreover, in order to increase memory parallelism and bring compatibility with existing IP cores in network-based multiprocessor architectures, adaptive network interface architectures are presented to use multiple SDRAMs which can be accessed simultaneously. In addition, a smart memory controller is integrated in the adaptive network interface to improve memory utilization and reduce both memory and network latencies. Three-Dimensional Integrated Circuits (3D ICs) have emerged as a viable candidate for achieving better performance and package density compared to traditional 2D ICs. In addition, combining the benefits of the 3D IC and NoC schemes provides a significant performance gain for 3D architectures. In recent years, inter-layer communication across multiple stacked layers (the vertical channel) has attracted a lot of interest. In this thesis, a novel adaptive pipeline bus structure is proposed for inter-layer communication to improve the performance by reducing the delay and complexity of traditional bus arbitration. In addition, two mesh-based topologies for 3D architectures are also introduced to mitigate the inter-layer footprint and the power dissipation on each layer with a small performance penalty.
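As a schematic illustration of congestion-aware output selection in a 2D mesh (a Python sketch, not the router hardware described above), the function below restricts a packet to its minimal X/Y directions and prefers the neighbouring router with the lowest reported congestion; the congestion metric and all names are hypothetical.

```python
def select_output(cur, dst, congestion):
    """Pick an output port for a packet at router `cur` heading to `dst`
    in a 2D mesh, restricted to minimal (X or Y) directions and breaking
    ties by the neighbours' reported congestion (e.g. buffer occupancy)."""
    (cx, cy), (dx, dy) = cur, dst
    candidates = []
    if dx != cx:
        candidates.append(("EAST" if dx > cx else "WEST",
                           (cx + (1 if dx > cx else -1), cy)))
    if dy != cy:
        candidates.append(("NORTH" if dy > cy else "SOUTH",
                           (cx, cy + (1 if dy > cy else -1))))
    if not candidates:
        return "LOCAL"  # packet has reached its destination router
    port, _ = min(candidates, key=lambda c: congestion.get(c[1], 0))
    return port

# congestion = {(1, 0): 3, (0, 1): 1}
# select_output((0, 0), (2, 2), congestion)  -> "NORTH" (less congested neighbour)
```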
Abstract:
This work presents a geometrically nonlinear dynamic analysis of plates and shells using eight-node hexahedral isoparametric elements. The main features of the present formulation are: (a) the element matrices are obtained using reduced integration with hourglass control; (b) an explicit Taylor-Galerkin scheme is used to carry out the dynamic analysis, solving the corresponding equations of motion in terms of velocity components; (c) the Truesdell stress rate tensor is used; (d) the vector processor facilities existing in modern supercomputers are used. The results obtained are comparable with previous solutions in terms of accuracy and computational performance.
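A hedged sketch of the flavour of explicit time integration used in such velocity-based dynamic analyses: the semi-discrete equation of motion M·dv/dt = f_ext − f_int is advanced with a simple explicit step for a lumped (diagonal) mass matrix. The arrays below are placeholders; this is not the Taylor-Galerkin hexahedral formulation of the paper.

```python
import numpy as np

def explicit_velocity_step(v, m_lumped, f_ext, f_int, dt):
    """One explicit step of M*dv/dt = f_ext - f_int with a lumped
    (diagonal) mass matrix; returns the updated velocity vector."""
    accel = (f_ext - f_int) / m_lumped
    return v + dt * accel

# In a geometrically nonlinear analysis the coordinates are updated from the
# velocities (x += dt * v) and f_int is recomputed from the new configuration.
```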
Abstract:
It is well known that numerical solutions of incompressible viscous flows are of great importance in Fluid Dynamics. The graphics output capabilities of their computational codes have revolutionized the communication of ideas to the non-specialist public. In general, those codes include, among their hydrodynamic features, the visualization of flow streamlines - essentially a form of contour plot showing the line patterns of the flow - and the magnitudes and orientations of their velocity vectors. However, the standard finite element formulation for computing streamlines suffers from the disadvantage of requiring the determination of boundary integrals, leading to cumbersome implementations in the construction of the finite element code. In this article, we introduce an efficient way - via an alternative variational formulation - to determine the streamlines of fluid flows which does not need the computation of contour integrals. In order to illustrate the good performance of the alternative formulation proposed, we capture the streamlines of three viscous models: Stokes, Navier-Stokes and viscoelastic flows.
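For a 2D incompressible flow the streamlines are level curves of the streamfunction ψ, which satisfies ∇²ψ = -ω with ω the vorticity computed from the velocity field. The finite-difference sketch below only illustrates that relationship on a regular grid; it is not the alternative variational finite element formulation proposed in the article.

```python
import numpy as np

def streamfunction(u, v, h, iters=5000):
    """Solve grad^2(psi) = -omega by Jacobi iteration on a regular grid
    with spacing h and psi = 0 on the boundary (illustrative settings)."""
    # Vorticity omega = dv/dx - du/dy from central differences (arrays indexed [y, x]).
    omega = np.gradient(v, h, axis=1) - np.gradient(u, h, axis=0)
    psi = np.zeros_like(u)
    for _ in range(iters):
        psi[1:-1, 1:-1] = 0.25 * (psi[2:, 1:-1] + psi[:-2, 1:-1] +
                                  psi[1:-1, 2:] + psi[1:-1, :-2] +
                                  h * h * omega[1:-1, 1:-1])
    return psi  # contour lines of psi are the flow streamlines
```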
Abstract:
With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically left up to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, the node can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural one within this field; digital filters are typically described with boxes and arrows in textbooks as well. Dataflow is also becoming more interesting in other domains, and in principle any application working on an information stream fits the dataflow paradigm. Such applications include, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used. The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes are running concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable models, with minimal scheduling overhead, to dynamic models where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language and in the general case requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run-time, while most decisions are pre-calculated. The result is then an as-small-as-possible set of static schedules that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information needed to generate a minimal but complete model to be used for model checking.
The model must describe everything that may affect scheduling of the application while omitting everything else in order to avoid state space explosion; this kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications. The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools, in the context of design space exploration, to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
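As a schematic Python illustration of the dataflow execution model discussed above (not RVC-CAL and not the thesis' scheduler), the snippet below connects actors with FIFO queues and fires an actor only when every input holds enough tokens; pre-computing such firing orders is exactly what a quasi-static scheduler does.

```python
from collections import deque

class Actor:
    """A dataflow node that fires only when every input FIFO holds
    at least as many tokens as the node consumes per firing."""
    def __init__(self, name, inputs, outputs, consume, kernel):
        self.name, self.inputs, self.outputs = name, inputs, outputs
        self.consume, self.kernel = consume, kernel

    def can_fire(self):
        return all(len(q) >= n for q, n in zip(self.inputs, self.consume))

    def fire(self):
        args = [[q.popleft() for _ in range(n)]
                for q, n in zip(self.inputs, self.consume)]
        for q, tokens in zip(self.outputs, self.kernel(*args)):
            q.extend(tokens)

# A two-actor pipeline: `scale` doubles each sample, `acc` sums pairs.
q_in, q_mid, q_out = deque([1, 2, 3, 4]), deque(), deque()
scale = Actor("scale", [q_in], [q_mid], [1], lambda xs: ([2 * xs[0]],))
acc = Actor("acc", [q_mid], [q_out], [2], lambda xs: ([sum(xs)],))

# A trivial dynamic scheduler; a quasi-static one would pre-compute this order.
while scale.can_fire() or acc.can_fire():
    for actor in (scale, acc):
        if actor.can_fire():
            actor.fire()
print(list(q_out))  # [6, 14]
```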