48 results for Application specific instruction-set processor

in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland


Relevance: 100.00%

Abstract:

As the development of integrated circuit technology continues to follow Moore’s law, the complexity of circuits increases exponentially. Traditional hardware description languages such as VHDL and Verilog are no longer powerful enough to cope with this level of complexity and do not provide facilities for hardware/software codesign. Languages such as SystemC are intended to solve these problems by combining the expressive power of high-level programming languages with the hardware-oriented facilities of hardware description languages. To fully replace the older languages in the design flow of digital systems, SystemC should also be synthesizable. The devices required by modern high-speed networks often share the tight constraints of embedded systems, for example on size, power consumption and price, but also have very demanding real-time and quality-of-service requirements that are difficult to satisfy with general-purpose processors. The dedicated hardware blocks of an application-specific instruction set processor are one way to combine fast processing speed, energy efficiency, flexibility and a relatively short time-to-market. Common features can be identified in the network processing domain, making it possible to develop specialized but configurable processor architectures. One such architecture is TACO, which is based on the transport triggered architecture. The architecture offers a high degree of parallelism and modularity as well as greatly simplified instruction decoding. For this M.Sc. (Tech.) thesis, a simulation environment for the TACO architecture was developed in SystemC 2.2, using an older version written in SystemC 1.0 as a starting point. The environment enables rapid design space exploration by providing facilities for hardware/software codesign and simulation and an extendable library of automatically configured, reusable hardware blocks. Other topics covered are the differences between SystemC 1.0 and 2.2 from the viewpoint of hardware modeling, and the compilation of a SystemC model into synthesizable VHDL with the Celoxica Agility SystemC Compiler. As a test case for the environment, a simulation model of a processor for TCP/IP packet validation was designed and tested.
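
To give a flavour of the modelling style, the following minimal SystemC sketch shows a toy transport-triggered functional unit where writing the trigger port starts the operation. It is my own illustration: the module name, ports and operation are hypothetical and not taken from the TACO block library.

```cpp
// Minimal SystemC sketch (illustrative only, not a TACO library block).
#include <systemc.h>

// A toy functional unit in the transport-triggered spirit: a data transport
// to the trigger port launches the operation on the operand port.
SC_MODULE(CounterFU) {
    sc_in<bool>          clk;
    sc_in<bool>          trigger;   // transport to this port triggers the operation
    sc_in<sc_uint<32> >  operand;
    sc_out<sc_uint<32> > result;

    void exec() {
        if (trigger.read())
            result.write(operand.read() + 1);   // trivial "operation"
    }

    SC_CTOR(CounterFU) {
        SC_METHOD(exec);
        sensitive << clk.pos();
    }
};

int sc_main(int, char*[]) {
    sc_clock clk("clk", 10, SC_NS);
    sc_signal<bool> trig;
    sc_signal<sc_uint<32> > op, res;

    CounterFU fu("fu");
    fu.clk(clk); fu.trigger(trig); fu.operand(op); fu.result(res);

    trig = true;
    op = 41;
    sc_start(100, SC_NS);
    cout << "result = " << res.read() << endl;   // expected: 42
    return 0;
}
```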

Relevance: 100.00%

Abstract:

Reusability has become an increasingly important factor in modern software engineering, mainly because object orientation has brought methods that make reuse easier. Today, more and more application developers consider how they can reuse existing applications in their work. A developer who wants to use existing components outside the current project can turn to design patterns, class libraries or frameworks, which provide solutions to specific or general problems that have already been encountered. Application frameworks are collections of classes that provide a base for the developer. They are mostly tools for the implementation phase, but can also be used in application design. The main purpose of a framework is to separate domain-specific functionality from application-specific functionality. Frameworks are usually divided into two categories, black box and white box, which differ in how reuse is done. Application frameworks have properties that can be examined and compared between different frameworks: extensibility, reusability, modularity and scalability. These describe how a framework handles different platforms, changes in the framework, increasing demand for resources, and so on. In general, application frameworks exhibit these properties to a good degree. When a general-purpose framework is compared with a more specific-purpose framework, the main difference lies in reusability: a framework designed for a specific domain may be constrained by external systems and resources, whereas with a general-purpose framework such constraints are set by the application developed on top of the framework.
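
The black-box/white-box distinction can be made concrete with a small C++ sketch (my own example, not taken from the thesis): white-box reuse extends the framework by inheriting and overriding its internals, while black-box reuse only plugs behaviour into the framework's public interface.

```cpp
// Sketch of white-box vs. black-box framework reuse (illustrative example).
#include <functional>
#include <iostream>

// Framework class with a fixed control flow ("don't call us, we call you").
class Framework {
public:
    explicit Framework(std::function<void()> hook = nullptr) : hook_(hook) {}
    void run() {                     // the framework owns the main loop
        std::cout << "framework: generic setup\n";
        if (hook_) hook_();          // black-box extension point (composition)
        step();                      // white-box extension point (inheritance)
        std::cout << "framework: generic teardown\n";
    }
    virtual ~Framework() = default;
protected:
    virtual void step() {}           // default does nothing
private:
    std::function<void()> hook_;
};

// White-box reuse: the developer must know the framework internals to override them.
class MyApp : public Framework {
protected:
    void step() override { std::cout << "app: domain-specific work\n"; }
};

int main() {
    MyApp whiteBox;
    whiteBox.run();
    // Black-box reuse: only the public interface is used, behaviour is plugged in.
    Framework blackBox([] { std::cout << "app: plugged-in work\n"; });
    blackBox.run();
}
```

In both cases the framework keeps control of the overall flow, which is what separates a framework from an ordinary class library.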

Relevance: 100.00%

Abstract:

As part of the Universal Converter (UNICON) project, an embedded system based on a digital signal processor (DSP) was designed for the control and measurement of electric motor drives. To ensure sufficient computational power, a multiprocessor system was chosen. When selecting the DSP chip for the processor system, the selection criteria were the processing power offered by the chips and their multiprocessor support. The SHARC-series DSPs from Analog Devices best met the requirements: in addition to an efficient instruction set, they offer a large internal memory and built-in multiprocessor support. Because the system is essentially a measurement instrument, a central design goal was to create fast data transfer links from the measurement sensors to the DSP system. This was implemented using programmable FPGA logic circuits to receive and preprocess the digital measurement data. The data link to a PC was implemented using a dedicated interface card between the DSP system and the PC. The main task of the interface card is to buffer the transferred data. This arrangement prevents the PC from affecting the operation of the DSP system, so that real-time operation can be guaranteed under all conditions.
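
The decoupling role of the interface card can be pictured with a generic single-producer/single-consumer ring buffer; this is only a sketch of the buffering principle, with assumed sizes and types, not the UNICON hardware or its firmware.

```cpp
// Illustrative SPSC ring buffer: the real-time side pushes samples without
// blocking, and the PC side drains them whenever it gets around to it.
#include <array>
#include <atomic>
#include <cstdint>
#include <cstdio>
#include <optional>

template <std::size_t N>
class RingBuffer {
    std::array<uint32_t, N> data_{};
    std::atomic<std::size_t> head_{0};   // written by the producer (DSP side)
    std::atomic<std::size_t> tail_{0};   // written by the consumer (PC side)
public:
    bool push(uint32_t sample) {         // real-time side never blocks
        std::size_t h = head_.load(std::memory_order_relaxed);
        std::size_t next = (h + 1) % N;
        if (next == tail_.load(std::memory_order_acquire))
            return false;                // buffer full: drop or count an overrun
        data_[h] = sample;
        head_.store(next, std::memory_order_release);
        return true;
    }
    std::optional<uint32_t> pop() {
        std::size_t t = tail_.load(std::memory_order_relaxed);
        if (t == head_.load(std::memory_order_acquire))
            return std::nullopt;         // empty
        uint32_t v = data_[t];
        tail_.store((t + 1) % N, std::memory_order_release);
        return v;
    }
};

int main() {
    RingBuffer<8> buf;
    for (uint32_t s = 0; s < 5; ++s) buf.push(s);   // measurement side fills
    while (auto v = buf.pop())                      // PC side drains later
        std::printf("got sample %u\n", *v);
}
```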

Relevance: 100.00%

Abstract:

Advances in integrated circuit manufacturing and design techniques make it possible to produce ever more complex circuits. Verification has consequently become the most time-consuming part of the process, since the need for verification grows exponentially as complexity increases. Although various strategies for verifying circuit integration have been proposed, such as spreading verification across the whole design process, more than half of the total effort spent on designing and manufacturing a circuit can go into verification. Reusable components play a central role in circuit design, but in verification, reusability has not been properly exploited, at least not in the verification software. This Master's thesis presents a reusable verification software architecture for integrated circuits that reduces the verification burden by removing the need to redesign and re-implement the software used in verification.

Relevance: 100.00%

Abstract:

The past few decades have seen a considerable increase in the number of parallel and distributed systems. With the development of more complex applications, the need for more powerful systems has emerged, and various parallel and distributed environments have been designed and implemented. Each of these environments, including hardware and software, has unique strengths and weaknesses, and no single parallel environment can be identified as the best for all applications with respect to hardware and software properties. The main goal of this thesis is to provide a novel way of performing data-parallel computation in parallel and distributed environments by utilizing the best characteristics of different aspects of parallel computing. For the purposes of this thesis, three aspects of parallel computing were identified and studied. First, three parallel environments (shared memory, distributed memory, and a network of workstations) are evaluated to quantify their suitability for different parallel applications. Due to the parallel and distributed nature of the environments, the networks connecting the processors in these environments were investigated with respect to their performance characteristics. Second, scheduling algorithms are studied in order to make them more efficient and effective. A concept of application-specific information scheduling is introduced: the application-specific information is data about the workload extracted from an application, which is provided to a scheduling algorithm. Three scheduling algorithms are enhanced to utilize this application-specific information to further refine their scheduling properties. A more accurate description of the workload is especially important when the work units are heterogeneous and the parallel environment is heterogeneous and/or non-dedicated. The results obtained show that the additional information regarding the workload has a positive impact on the performance of applications. Third, a programming paradigm for networks of symmetric multiprocessor (SMP) workstations is introduced. The MPIT programming paradigm combines the Message Passing Interface (MPI) with threads to provide a methodology for writing parallel applications that efficiently utilize the available resources and minimize the overhead. MPIT allows communication and computation to overlap by deploying a dedicated thread for communication. Furthermore, the programming paradigm implements an application-specific scheduling algorithm that is executed by the communication thread, so the scheduling does not interfere with the execution of the parallel application. Performance results obtained with MPIT show considerable improvements over conventional MPI applications.
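
The overlap idea can be sketched with plain MPI and a C++ thread; this is my own simplified illustration of the dedicated communication/computation split, not the actual MPIT runtime or its application-specific scheduler.

```cpp
// Overlapping computation and communication: the main thread drives MPI while
// a worker thread computes (illustrative sketch only).
#include <mpi.h>
#include <cstdio>
#include <thread>
#include <vector>

int main(int argc, char** argv) {
    int provided = 0;
    // Only the main thread calls MPI, so FUNNELED support is enough.
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    std::vector<double> work(1 << 20, 1.0), halo(1024, 0.0);
    double partial = 0.0;

    // Computation runs in its own thread...
    std::thread compute([&] {
        for (double x : work) partial += x;
    });

    // ...while the main thread exchanges boundary data around a ring.
    if (size > 1) {
        int next = (rank + 1) % size;
        int prev = (rank - 1 + size) % size;
        int n = static_cast<int>(halo.size());
        MPI_Request reqs[2];
        MPI_Irecv(halo.data(), n, MPI_DOUBLE, prev, 0, MPI_COMM_WORLD, &reqs[0]);
        MPI_Isend(work.data(), n, MPI_DOUBLE, next, 0, MPI_COMM_WORLD, &reqs[1]);
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    }

    compute.join();
    std::printf("rank %d partial sum %.1f\n", rank, partial);
    MPI_Finalize();
    return 0;
}
```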

Relevance: 100.00%

Abstract:

A power electronics device is a control and regulation system that converts electricity from the available form into a desired new form while controlling the flow of electric power from the source to the load. It thus differs from signal electronics, where electricity is typically used to transfer information encoded in different states. Power electronics devices are usually compared in terms of their reliability, size, efficiency, control accuracy and, of course, price. Typical power electronics devices include frequency converters, UPS (Uninterruptible Power Supply) units, welding machines, induction heaters and various power supplies. Traditionally, the control of these devices has been implemented using microprocessors, ASICs (Application Specific Integrated Circuits) or ICs (Integrated Circuits), and analog controllers. This study analyses the suitability of FPGAs (Field Programmable Gate Arrays) for the control of power electronics. An FPGA consists of various logic elements and the interconnections between them. The logic elements are gates and flip-flops. The interconnections and logic elements are fixed on the chip, and their composition or number cannot be changed afterwards. Programmability arises from the connections between the elements: the chip contains numerous switches, up to millions, whose state can be set, so a countless variety of functional configurations can be built from the chip's basic elements. FPGAs have long been used in communications products, and their development has therefore been rapid in recent years while prices have fallen. As a result, the FPGA has become an interesting alternative also for the control of power electronics devices. In this doctoral work, the suitability of FPGAs was studied using two demanding and different practical power electronics devices: a frequency converter and a welding machine. For both test cases, prototypes were built together with Finnish industrial companies in the field, and their control electronics were converted to an FPGA-based implementation. In addition, new control methods exploiting this new technology were developed. The operation of the prototypes was compared with corresponding commercial products controlled by conventional methods, and the benefits brought by the parallel computation enabled by FPGAs were observed in the operation of both power electronics devices. The work also presents new methods and tools for developing and testing FPGA-based control systems; with the presented methods, product development can be made as fast and efficient as possible. Furthermore, an internal FPGA control and communication bus structure was developed to serve power electronics control applications. The new communication structure also promotes the reuse of existing subsystems in future applications and product generations.

Relevance: 100.00%

Abstract:

Wind power is the fastest-growing form of energy production in Europe. The wind power industry in Finland is expected to grow considerably in the coming years as a result of the anticipated feed-in tariff decision, which will increase competition in the field. The goal was to develop the manufacturing of wind turbine towers at Levator Oy by making welding production more efficient and improving production controllability. The development work included planning the commissioning of a second welding line and drawing up a set of instructions for the supervisors. The purpose of the commissioning plan was to design the changes to current production needed to enable the introduction of the new line. Planning started with the selection of the welding processes, after which the equipment needs were planned on the basis of work-phase analyses. The production layout was changed from the current functional layout to a production line consisting of production cells, which considerably improved the flow of materials. Bottleneck-based (theory of constraints) control was chosen as the production control method. The purpose of the instructions was to collect and document all the information needed in production. The instructions contain sections on quality control, material flow control and work supervision, intended to make supervision easier. They define uniform production practices, which makes production easier to control. The goals were met when the changes required for commissioning the second production line were started according to plan in September 2009. The content of the instructions was defined and the pilots of the different sections were completed during December. Production controllability developed considerably and, at the same time, productivity improved significantly.

Relevance: 100.00%

Abstract:

The growing population on earth, along with diminishing fossil deposits and the climate change debate, calls for better utilization of renewable, bio-based materials. In a biorefinery, renewable biomass is converted into many different products such as fuels, chemicals, and materials, quite similarly to the petroleum refining industry. Since forests cover about one third of the land surface on earth, lignocellulosic biomass is the most abundant renewable resource available. The natural first step in a biorefinery is the separation and isolation of the different compounds the biomass is composed of. The major components in wood are cellulose, hemicellulose, and lignin, all of which can be made into various end products. Today, the focus normally lies on utilizing only one component, e.g., the cellulose in the Kraft pulping process. It would be highly desirable, from both an economic and an environmental point of view, to utilize all of the different compounds, so the separation process should be optimized. Hemicelluloses can partly be extracted with hot water prior to pulping. Depending on the severity of the extraction, the hemicelluloses are degraded to varying degrees; to keep a variety of end products possible, the hemicelluloses should remain as intact as possible after the extraction. The main focus of this work has been on preserving the hemicellulose molar mass throughout the extraction at a high yield by actively controlling the extraction pH at the high temperatures used. Since it has not been possible to measure pH during an extraction due to the high temperatures, the extraction pH has remained a “black box”. Therefore, a high-temperature in-line pH measuring system was developed, validated, and tested for hot-water wood extractions. One crucial step in the measurements is calibration, and extensive effort was therefore put into developing a reliable calibration procedure. Initial extractions with wood showed that the actual extraction pH was ~0.35 pH units higher than previously believed. The measuring system was also equipped with a controller connected to a pump, which made it possible to control the extraction to any desired pH set point. When the pH dropped below the set point, the controller started pumping in alkali, and the desired set point was thereby maintained very accurately. Analyses of the extracted hemicelluloses showed that at higher pH, fewer hemicelluloses were extracted but with a higher molar mass. Monomer formation could, at a certain pH level, be completely inhibited. Increasing the temperature while maintaining a specific pH set point speeds up the extraction without degrading the molar mass of the hemicelluloses, thereby intensifying the extraction. The diffusion of the dissolved hemicelluloses out of the wood particle is a major part of the extraction process. Therefore, a particle size study ranging from 0.5 mm wood particles to industrial-size wood chips was conducted to investigate the internal mass transfer of the hemicelluloses. Unsurprisingly, it showed that hemicelluloses were extracted faster from smaller wood particles than from larger ones, although particle size did not seem to have a substantial effect on the average molar mass of the extracted hemicelluloses. However, smaller particle sizes require more energy to manufacture and thus increase the cost. Since bark comprises 10–15 % of a tree, it is important to also consider it in a biorefinery concept. Spruce inner and outer bark were hot-water extracted separately to investigate the possibility of isolating the bark hemicelluloses. It was shown that the bark hemicelluloses consisted mostly of pectic material and differed considerably from the wood hemicelluloses. The bark hemicelluloses, or pectins, could be extracted at lower temperatures than the wood hemicelluloses. A chemical characterization, done separately on inner and outer bark, showed that inner bark contained over 10 % stilbene glucosides that could be extracted already at 100 °C with aqueous acetone.
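
The set-point control described above can be pictured with a toy simulation; the numbers and the simple dosing rule below are purely illustrative assumptions, not the actual extraction hardware or its tuning.

```cpp
// Toy simulation of pH set-point control during hot-water extraction:
// acids released by the extraction lower the pH; when it falls below the
// set point, the controller doses alkali via the pump (illustrative only).
#include <cstdio>

int main() {
    const double setpoint = 4.5;      // hypothetical target pH
    double ph = 5.0;
    for (int minute = 0; minute < 60; ++minute) {
        ph -= 0.03;                   // acids released by the extraction
        if (ph < setpoint) {
            ph += 0.10;               // controller pumps in alkali
            std::printf("t=%2d min: pH %.2f -> alkali dosed\n", minute, ph);
        }
    }
    std::printf("final pH %.2f (held near set point %.2f)\n", ph, setpoint);
}
```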

Relevance: 100.00%

Abstract:

The shift towards a knowledge-based economy has inevitably prompted the evolution of patent exploitation. Nowadays, a patent is more than just a tool for a company to block its competitors from developing rival technologies; it lies at the very heart of the company's strategy for value creation and is therefore strategically exploited for economic profit and competitive advantage. Along with the evolution of patent exploitation, the demand for reliable and systematic patent valuation has also reached an unprecedented level. However, most of the quantitative approaches in use for assessing patents arguably fall into four categories, all based solely on conventional discounted cash flow analysis, whose usability and reliability in the context of patent valuation are greatly limited by five practical issues: market illiquidity, poor data availability, discriminatory cash-flow estimations, and an inability to account for changing risk and for managerial flexibility. This dissertation attempts to overcome these barriers by rationalizing the use of two techniques, namely fuzzy set theory (aimed at the first three issues) and real option analysis (aimed at the last two). It commences with an investigation into the nature of the uncertainties inherent in patent cash flow estimation and argues that two levels of uncertainty must be properly accounted for. Further investigation reveals that both levels fall under the category of subjective uncertainty, which differs from objective uncertainty originating from inherent randomness: uncertainties labelled as subjective are closely related to the behavioural aspects of decision making and are usually encountered whenever human judgement, evaluation or reasoning is crucial to the system under consideration and there is a lack of complete knowledge of its variables. Having clarified their nature, the application of fuzzy set theory to modelling patent-related uncertain quantities is readily justified. The application of real option analysis to patent valuation is prompted by the fact that both the patent application process and the subsequent patent exploitation (or commercialization) are subject to a wide range of decisions at multiple successive stages; in other words, both patent applicants and patentees face a large variety of courses of action as to how their patent applications and granted patents can be managed. Since they have the right to run their projects actively, this flexibility has value and must be properly accounted for. Accordingly, the dissertation explicitly identifies the types of managerial flexibility inherent in patent-related decision making and in patent valuation, and discusses how they can be interpreted in terms of real options. Additionally, the use of the proposed techniques in practical applications is demonstrated by three models based on fuzzy real option analysis. In particular, the pay-off method and the extended fuzzy Black-Scholes model are employed to investigate the profitability of a patent application project for a new process for the preparation of a gypsum-fibre composite and to justify the subsequent patent commercialization decision, respectively, while a fuzzy binomial model is designed to reveal the economic potential of a patent licensing opportunity.
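
For orientation, the fuzzy pay-off method mentioned above computes the real option value from a fuzzy net present value distribution A roughly as follows (the standard formulation from the literature, not reproduced from the dissertation itself):

\[
\mathit{ROV} \;=\; \frac{\int_{0}^{\infty} A(x)\,dx}{\int_{-\infty}^{\infty} A(x)\,dx}\;\times\; E\!\left[A^{+}\right],
\]

where the ratio is the share of the pay-off distribution lying on the positive side and \(E[A^{+}]\) is the possibilistic mean of that positive side.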

Relevance: 100.00%

Abstract:

Object detection is a fundamental task of computer vision that is used as a core part of a number of industrial and scientific applications, for example in robotics, where objects need to be correctly detected and localized prior to being grasped and manipulated. Existing object detectors vary in (i) the amount of supervision they need for training, (ii) the type of learning method adopted (generative or discriminative) and (iii) the amount of spatial information used in the object model (model-free, using no spatial information in the object model, or model-based, with an explicit spatial model of the object). Although some existing methods report good performance in the detection of certain objects, the results tend to be application specific, and no universal method has been found that clearly outperforms all others in all areas. This work proposes a novel generative part-based object detector. The generative learning procedure of the developed method allows learning from positive examples only. The detector is based on finding semantically meaningful parts of the object (i.e. a part detector) that can provide information in addition to the object location, for example its pose. The object class model, i.e. the appearance of the object parts and their spatial constellation, is explicitly modelled in a fully probabilistic manner. The appearance is based on bio-inspired complex-valued Gabor features that are transformed into part probabilities by an unsupervised Gaussian Mixture Model (GMM). The proposed novel randomized GMM enables learning from only a few training examples. The probabilistic spatial model of the part configurations is constructed with a mixture of 2D Gaussians. The appearance of the object parts is learned in an object canonical space that removes geometric variations from the part appearance model. Robustness to pose variations is achieved by object pose quantization, which is more efficient than the scale and orientation shifts in the Gabor feature space used previously. The performance of the resulting generative object detector is characterized by high recall with low precision, i.e. the generative detector produces a large number of false positive detections. A discriminative classifier is therefore used to prune the false positive candidate detections produced by the generative detector, improving its precision while keeping recall high. Using only a small number of positive examples, the developed object detector performs comparably to state-of-the-art discriminative methods.
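
For reference (the standard textbook form, not a detail specific to the thesis), a Gaussian mixture model assigns a feature vector \(\mathbf{x}\) the density

\[
p(\mathbf{x}) \;=\; \sum_{k=1}^{K} \pi_k\, \mathcal{N}\!\left(\mathbf{x}\mid \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k\right), \qquad \sum_{k=1}^{K}\pi_k = 1,\; \pi_k \ge 0,
\]

and the posterior responsibility of a mixture component is what turns a Gabor feature vector into the part probability mentioned above.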

Relevance: 100.00%

Abstract:

The aim of this study was to create an assessment model for analysing the economic viability of the district heating pipe heat-loss recovery solution examined in the study, both at a general level and in its potential application sites. The work examines a district heating pipe with a heat collection pipe placed inside it. The heat collection pipe is intended to collect heat from the casing of the district heating pipe and to keep the casing temperature below the ambient temperature, so that no heat is lost from the pipe to its surroundings. The assessment model was based on the general dimensioning principles of the heat pump process and the district heating network, together with cost data, representative of the time of the study, collected for the systems involved in the solution. The model was implemented as an Excel spreadsheet that can in the future be applied to site-specific assessment and dimensioning of the system. The calculated payback times turned out to be shorter than the estimated technical service life of the systems in all of the cases examined. In certain application sites, the system could also have strategic value by reducing the risks of the district heating business.
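
For reference, the payback times referred to above follow the usual simple (undiscounted) payback definition; the symbols below are generic, not figures from the study:

\[
t_{\text{payback}} \;=\; \frac{C_{\text{investment}}}{S_{\text{annual}}},
\]

where \(C_{\text{investment}}\) is the initial investment in the heat recovery system and \(S_{\text{annual}}\) the annual net savings it produces.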

Relevance: 40.00%

Abstract:

This thesis examines signal processor families from three different manufacturers. The aim is to study the technical suitability of the processors for a frequency converter product family under development. The first part of the thesis reviews the structure of a frequency converter and describes the most common control methods for induction motors. The operation of a digital signal processor and its integrated peripherals is also explained. The emphasis of the work is on comparing the technical characteristics of the processors, including their internal architecture, instruction set features, interrupt service latency and peripheral features. The correct operation of the peripherals, in particular the analog-to-digital converter, is important for the motor control software. The processor families included in the work were scored on the examined characteristics, and as the result of the comparison the processor family and processor type technically best suited to the intended purpose are presented. The work cannot, however, give a general ranking of the processors studied.

Relevance: 40.00%

Abstract:

This thesis deals with a hardware-accelerated Java virtual machine, named REALJava. The REALJava virtual machine is targeted at resource-constrained embedded systems. The goal is to attain increased computational performance with reduced power consumption. While these objectives are often seen as trade-offs, in this context both can be attained simultaneously by using dedicated hardware. The computational performance target for the REALJava virtual machine is initially set to match the currently available full custom ASIC Java processors. As a secondary goal, all of the components of the virtual machine are designed so that the resulting system can be scaled to support multiple co-processor cores. The virtual machine is designed using the hardware/software co-design paradigm. The partitioning between the two domains is flexible, allowing customization of the resulting system; for instance, floating point support can be omitted from the hardware in order to decrease the size of the co-processor core. The communication between the hardware and software domains is encapsulated in modules, which allows the REALJava virtual machine to be easily integrated into any system simply by redesigning the communication modules. Besides the virtual machine and the related co-processor architecture, several performance enhancing techniques are presented, including techniques related to instruction folding, stack handling, method invocation, constant loading and control in the time domain. The REALJava virtual machine is prototyped on three different FPGA platforms, with the original pipeline structure modified to suit the FPGA environment. The performance of the resulting Java virtual machine is evaluated against existing Java solutions in the embedded systems field. The results show that the goals are attained, both in terms of computational performance and power consumption. The computational performance in particular is evaluated thoroughly, and the results show that REALJava is more than twice as fast as the fastest full custom ASIC Java processor. In addition to standard Java virtual machine benchmarks, several new Java applications are designed both to verify the results and to broaden the spectrum of the tests.

Relevance: 40.00%

Abstract:

The dissertation deals with the theme of territorial autonomy from a global perspective. The aim is partly to map the territorial autonomies of the world and partly to show how a new method such as fuzzy-set analysis can be used within the field of comparative politics. The research problem is to identify the background factors that explain the emergence of territorial autonomy as such. Territorial autonomies are viewed as special arrangements within states. These regions have been given a special status in relation to other regions within the state in question and also in relation to the central government. The regions can therefore be seen as exceptions within the otherwise uniform federal, regional or decentralized system of a given state. A survey shows that there are 65 special regions distributed across 25 states in the world, most of them islands. The results show that there are two paths leading to territorial autonomy in general: one is a combination of ethnic distinctiveness and a small population, while the other is the combination of historical reasons and geographical distance. Both paths are equally valid, and a democratic environment is a precondition.
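
In the Boolean notation customary for (fuzzy-set) QCA, the two reported paths can be summarised roughly as follows; the condition labels are my shorthand, not the study's own variable names:

ETHNIC*SMALLPOP + HISTORY*DISTANCE → TERRITORIAL AUTONOMY

where "*" denotes logical AND and "+" logical OR, with a democratic environment as the stated precondition.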

Relevance: 40.00%

Abstract:

With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the currently popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform its calculation, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is very natural within this field: digital filters are typically described with boxes and arrows even in textbooks. Dataflow is also becoming more interesting in other domains, and in principle any application working on an information stream fits the dataflow paradigm. Such applications include, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used. The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independence of the nodes also implies that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes run concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable models, with minimal scheduling overhead, to dynamic models where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run time, while most decisions are pre-calculated. The result is then a set of static schedules, as small as possible, that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information needed to generate a minimal but complete model to be used for model checking.

The model must describe everything that may affect scheduling of the application while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications. The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, in the context of design space exploration, the program representation is optimized by the development tools to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
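
To make the firing rule concrete, here is a deliberately tiny C++ sketch of a dataflow node with token queues; it is my own illustration of the general principle, not RVC-CAL code or part of the thesis' compiler infrastructure.

```cpp
// A bare-bones dataflow node: it may fire only when its input queues hold
// enough tokens, and the queues are the only communication between nodes.
#include <deque>
#include <iostream>

struct AddNode {
    std::deque<int> inA, inB;   // input queues (edges of the dataflow graph)
    std::deque<int> out;        // output queue

    bool canFire() const { return !inA.empty() && !inB.empty(); }

    void fire() {               // consume one token per input, produce one output
        int a = inA.front(); inA.pop_front();
        int b = inB.front(); inB.pop_front();
        out.push_back(a + b);
    }
};

int main() {
    AddNode add;
    add.inA = {1, 2, 3};
    add.inB = {10, 20};                     // only two tokens here
    while (add.canFire()) add.fire();       // a (trivial) dynamic scheduler
    for (int v : add.out) std::cout << v << ' ';   // prints: 11 22
    std::cout << '\n';
}
```

A quasi-static scheduler would replace the while loop with precomputed firing sequences, leaving only those token-availability checks that genuinely depend on run-time data.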