102 results for NOC


Relevance:

10.00%

Publisher:

Abstract:

The increasing complexity of integrated circuits has boosted the development of communication architectures such as Networks-on-Chip (NoCs), an architectural alternative for the interconnection of Systems-on-Chip (SoCs). Networks-on-Chip favor component reuse, parallelism and scalability, enhancing reusability in projects for dedicated applications. Many proposals in the literature suggest different configurations for network-on-chip architectures. Among them, the IPNoSys architecture is an unconventional one, since it executes operations while the communication process is performed. This study evaluates the execution of dataflow-based applications on IPNoSys, focusing on their adaptation to its design constraints. Dataflow-based applications are characterized by a continuous stream of data on which operations are executed. We expect this type of application to perform well on IPNoSys, because its programming model is similar to the execution model of this network. By observing the behavior of these applications when running on IPNoSys, changes were made to the IPNoSys execution model, allowing the implementation of instruction-level parallelism. For this purpose, implementations of dataflow applications were analyzed and compared.
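
As a rough illustration of the execution model described above, the following Python sketch shows a packet that carries a sequence of operations and an intermediate result, with each router along the path consuming the next operation before forwarding the packet. All class and function names are hypothetical, not taken from the IPNoSys sources.

```python
# Minimal sketch of an IPNoSys-style "execute while routing" model.
# All names are illustrative assumptions, not the real architecture's API.

OPS = {
    "add": lambda a, b: a + b,
    "mul": lambda a, b: a * b,
}

class Packet:
    """Carries pending operations and the intermediate result."""
    def __init__(self, operations, initial_value):
        self.operations = list(operations)   # e.g. [("add", 3), ("mul", 2)]
        self.result = initial_value

class Router:
    def __init__(self, name):
        self.name = name

    def process_and_forward(self, packet):
        # Execute one operation from the packet while it traverses this router.
        if packet.operations:
            op, operand = packet.operations.pop(0)
            packet.result = OPS[op](packet.result, operand)
        return packet

# A packet computing (5 + 3) * 2 while crossing two routers:
path = [Router("r0"), Router("r1")]
pkt = Packet([("add", 3), ("mul", 2)], initial_value=5)
for router in path:
    pkt = router.process_and_forward(pkt)
print(pkt.result)  # 16
```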

Relevance:

10.00%

Publisher:

Abstract:

Alongside technological advances, embedded systems are increasingly present in our everyday lives. Due to the increasing demand for functionality, many tasks are split among processors, requiring more efficient communication architectures, such as networks-on-chip (NoCs). NoCs are structures whose routers, connected by point-to-point channels, interconnect the cores of a system-on-chip (SoC), providing communication. Several networks-on-chip are described in the literature, each with its specific characteristics. Among these, the Integrated Processing System NoC (IPNoSyS) was chosen for this work as a network-on-chip with characteristics that differ from general NoCs, because its routing components also perform processing, i.e., they contain functional units able to execute instructions. With this model, packets are processed and routed by the routers of the architecture. This work aims at improving the performance of applications that contain loops, since these applications spend most of their execution time in the repeated execution of their instructions. Thus, this work proposes to reduce the runtime of these structures by employing an instruction-level parallelism technique, in order to better exploit the resources offered by the architecture. The applications are tested on a dedicated simulator and the results are compared with the original version of the architecture, which implements only packet-level parallelism.
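
The instruction-level parallelism mentioned above can be pictured as issuing independent instructions of a loop body together instead of one per cycle. The sketch below uses hypothetical names (it is not the IPNoSyS simulator's API) and groups instructions into issue slots whenever their source operands do not depend on results still being produced in the same slot.

```python
# Sketch: grouping independent instructions for parallel issue.
# Instruction = (destination, source operands); names are illustrative only.

def schedule(instructions):
    """Greedy grouping: an instruction joins the current slot only if none
    of its sources is written by an instruction already in that slot."""
    slots, current, written = [], [], set()
    for dest, sources in instructions:
        if any(src in written for src in sources):
            slots.append(current)            # close the slot, start a new one
            current, written = [], set()
        current.append((dest, sources))
        written.add(dest)
    if current:
        slots.append(current)
    return slots

loop_body = [
    ("t0", ["a", "b"]),   # t0 = a op b
    ("t1", ["c", "d"]),   # independent of t0 -> same slot
    ("t2", ["t0", "t1"]), # depends on both -> next slot
]
for cycle, slot in enumerate(schedule(loop_body)):
    print(cycle, [dest for dest, _ in slot])
# cycle 0 issues t0 and t1 together; cycle 1 issues t2
```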

Relevance:

10.00%

Publisher:

Abstract:

We have recently proposed an extension to Petri nets in order to be able to directly deal with all aspects of embedded digital systems. This extension is meant to be used as an internal model of our co-design environment. After analyzing relevant related work and presenting a short introduction to our extension as background material, we describe the details of the timing model used in our approach, which is mainly based on Merlin's time model. We conclude the paper by discussing an example of its usage. © 2004 IEEE.
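
For readers unfamiliar with Merlin's time model referenced above, each transition carries a static firing interval [min, max]: once continuously enabled, the transition may fire no earlier than min and must fire no later than max time units after enabling. The following is a minimal sketch of that firing rule with hypothetical names, not the paper's extension itself.

```python
# Sketch of a Merlin-style time Petri net firing rule (names are illustrative).

class TimedTransition:
    def __init__(self, name, inputs, outputs, t_min, t_max):
        self.name, self.inputs, self.outputs = name, inputs, outputs
        self.t_min, self.t_max = t_min, t_max   # static firing interval

def enabled(marking, t):
    return all(marking.get(p, 0) >= 1 for p in t.inputs)

def may_fire(t, elapsed):
    """True if the time elapsed since enabling lies inside [t_min, t_max]."""
    return t.t_min <= elapsed <= t.t_max

def fire(marking, t):
    m = dict(marking)
    for p in t.inputs:
        m[p] -= 1
    for p in t.outputs:
        m[p] = m.get(p, 0) + 1
    return m

t1 = TimedTransition("t1", inputs=["p0"], outputs=["p1"], t_min=2, t_max=5)
marking = {"p0": 1}
if enabled(marking, t1) and may_fire(t1, elapsed=3):
    marking = fire(marking, t1)
print(marking)  # {'p0': 0, 'p1': 1}
```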

Relevance:

10.00%

Publisher:

Abstract:

Fear and anxiety are emotions that originate in the defense reactions animals display when facing threats that may compromise their physical integrity or their very survival, such as confrontations with a predator or with animals of the same species. In humans, these elicited defensive responses would correspond to the occurrence of anxiety disorders, and the search for their understanding has led to the development of animal models of anxiety, among which the elevated plus maze (EPM), based on the natural aversion of rodents to open spaces, stands out. Regarding the neural substrates involved in these manifestations, the periaqueductal gray should be highlighted, as well as forebrain structures such as the medial prefrontal cortex (mPFC), a limbic structure frequently described as relevant to the neurobiology of anxiety. Nitric oxide (NO) has been investigated in different rodent brain structures in which pro-aversive responses have been demonstrated. Since the mPFC is a structure containing nitrergic neurons, this study aimed to investigate the effect of nitrergic facilitation, through intra-mPFC injection of an NO donor, NOC-9 [6-(hydroxy-1-methyl-2-nitrosohydrazino)-N-methyl-1-hexanamine], on the behavior of mice exposed to the elevated plus maze (EPM). Methods and Results: Male Swiss mice (25-35 g, n = 53) received guide cannula implants in the mPFC. Five days later, the animals received a microinjection of vehicle or NOC-9 at doses of 1.875 nmol, 18.75 nmol, 37.5 nmol or 75 nmol and, five minutes later, were exposed... (Full abstract: click the electronic access link below)

Relevance:

10.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

10.00%

Publisher:

Abstract:

This volume collects the work carried out during a three-year PhD focused on the analysis of Central and Southern Adriatic marine sediments, deriving from the collection of a borehole and many cores, achieved thanks to the good seismic-stratigraphic knowledge of the study area. The work was carried out within the European projects EC-EURODELTA (coordinated by Fabio Trincardi, ISMAR-CNR), EC-EUROSTRATAFORM (coordinated by Phil P. E. Weaver, NOC, UK), and PROMESS1 (coordinated by Serge Bernè, IFREMER, France). The analysed sedimentary successions presented highly expanded stratigraphic intervals, particularly for the last 400 kyr, 60 kyr and 6 kyr BP. These three different time intervals resulted in a tri-partition of the PhD thesis. The study consisted of the analysis of planktic and benthic foraminifers' assemblages (more than 560 samples analysed), as well as of preparing the material for oxygen and carbon stable isotope analyses, and of interpreting and discussing the obtained dataset. The chronologic framework of the last 400 kyr was achieved for borehole PRAD1-2 (within work-package WP6 of the PROMESS1 project), collected in 186.5 m water depth. The proposed chronology derives from a multi-disciplinary approach, consisting of the integration of numerous independent proxies, some of which were analysed by other specialists within the project. The final framework is based on: micropaleontology (calcareous nannofossil and foraminifer bioevents), climatic cyclicity (foraminifers' assemblages), geochemistry (oxygen stable isotopes measured on planktic and benthic records), paleomagnetism, radiometric ages (14C AMS), tephrochronology, and the identification of sapropel-equivalent levels (Se). It is worth noting the good consistency between the oxygen stable isotope curve obtained for borehole PRAD1-2 and other, deeper Mediterranean records. The studied proxies allowed the recognition of all the isotopic intervals from MIS10 to MIS1 in the PRAD1-2 record, and the base of the borehole has been ascribed to the early MIS11. The glacial and interglacial intervals identified in the Central Adriatic record have been analysed in detail for the paleo-environmental reconstruction as well. For instance, glacial stages MIS6, MIS8 and MIS10 present peculiar foraminifers' assemblages, composed of benthic species typical of polar regions and no longer living in the Central Adriatic today. Moreover, a deepening trend in the paleo-bathymetry during glacial intervals was observed, from MIS10 (inner-shelf environment) to MIS4 (mid-shelf environment). Ten sapropel-equivalent levels have been recognised in the PRAD1-2 Central Adriatic record. They showed different planktic foraminifers' assemblages, which allowed the first distinction of events that occurred during warm-climate (Se5, Se7), cold-climate (Se4, Se6 and Se8) and temperate-intermediate-climate (Se1, Se3, Se9, Se', Se10) conditions, consistently with the literature. Cold-climate sapropel equivalents are characterised by the absence of an oligotrophic phase, whereas warm-temperate-climate sapropel equivalents present both the oligotrophic and the eutrophic phases (except for Se1). Sea-floor conditions vary, according to benthic foraminifers' assemblages, from relatively well oxygenated (Se1, Se3), to dysoxic (Se9, Se', Se10), to highly dysoxic (Se4, Se6, Se8), to events during which benthic foraminifers are absent (Se5, Se7). These two latter levels are also characterised by lamination of the sediment, a feature never previously reported in the literature for such shallow records.
The enhanced stratification of the water column during events Se8, Se7, Se6, Se5 and Se4, together with the concurrent strong dilution of shallow water indicated by the isotope record, led to the hypothesis of a period of intense precipitation in the Central Adriatic region, possibly due to a northward shift of the African Monsoon. Finally, the expression of the Central Adriatic PRAD1-2 Se5 equivalent was compared with the same event as registered in other Eastern Mediterranean areas. The sequence of substantially the same planktic foraminifers' bioevents has been consistently recognised, indicating a similar evolution of the water column all over the Eastern Mediterranean; yet the synchronism of these events cannot be demonstrated. A high-resolution analysis of late Holocene (last 6000 years BP) climate change was carried out for the Adriatic area through the recognition of planktic and benthic foraminifers' bioevents. In particular, peaks of planktic Globigerinoides sacculifer (four during the last 5500 years BP in the most expanded core) have been interpreted, based on the ecological requirements of this species, as warm-climate, arid intervals corresponding to periods of relative climatic optimum, such as, for instance, the Medieval Warm Period, the Roman Age, the Late Bronze Age and the Copper Age. Consequently, the minima in the abundance of this biomarker could correspond to relatively cooler and rainier periods. These conclusions are in good agreement with the isotopic and pollen data. The Last Occurrence (LO) of G. sacculifer has been dated in this work at an average age of 550 years BP, and it is the best bioevent approximating the base of the Little Ice Age in the Adriatic. Recent literature reports the same bioevent in the Levantine Basin, showing a rather consistent age. Therefore, the LO of G. sacculifer has the potential to be extended to the whole Eastern Mediterranean. Within the Little Ice Age, the benthic foraminifer V. complanata shows two distinct peaks in the shallower Adriatic cores analysed, collected hundreds of kilometres apart, inside the mud-belt environment. Based on the ecological requirements of this species, these two peaks have been interpreted as the most intense (cold and rainy) oscillations within the LIA. The chronologic framework of the analysed cores is robust, being based on several range-finding 14C AMS ages, on estimates of the secular variation of the magnetic field, and on geochemical estimates of the activity depth of the short-lived radionuclide 210Pb (for the core-top ages), and it is in good agreement with the tephrochronologic, pollen and foraminiferal data. The intra-Holocene climate oscillations identified in the Adriatic have been compared with those reported in the literature from other records of the Northern Hemisphere, and the chronologic agreement seems quite good. Finally, the sedimentary successions analysed allowed the review and update of the foraminifers' ecobiostratigraphy available in the literature for the Adriatic region, through the definition of 16 ecobiozones for the last 60 kyr BP. Some bioevents are restricted to the Central Adriatic (for instance the LO of benthic Hyalinea balthica, approximating the MIS3/MIS2 boundary), while others occur all over the Adriatic basin (for instance the LO of planktic Globorotalia inflata during MIS3, marking Dansgaard-Oeschger cycle 8 (Denekamp)).

Relevance:

10.00%

Publisher:

Abstract:

The sustained demand for faster, more powerful chips has been met by the availability of chip manufacturing processes allowing for the integration of increasing numbers of computation units onto a single die. The resulting outcome, especially in the embedded domain, has often been called System-on-Chip (SoC) or Multi-Processor System-on-Chip (MPSoC). MPSoC design brings to the foreground a large number of challenges, one of the most prominent of which is the design of the chip interconnection. With a number of on-chip blocks presently ranging in the tens, and quickly approaching the hundreds, the novel issue of how to best provide on-chip communication resources is clearly felt. Networks-on-Chip (NoCs) are the most comprehensive and scalable answer to this design concern. By bringing large-scale networking concepts to the on-chip domain, they guarantee a structured answer to present and future communication requirements. The point-to-point connection and packet switching paradigms they involve are also of great help in minimizing wiring overhead and physical routing issues. However, as with any technology of recent inception, NoC design is still an evolving discipline. Several main areas of interest require deep investigation for NoCs to become viable solutions:
• The design of the NoC architecture needs to strike the best tradeoff among performance, features and the tight area and power constraints of the on-chip domain.
• Simulation and verification infrastructure must be put in place to explore, validate and optimize the NoC performance.
• NoCs offer a huge design space, thanks to their extreme customizability in terms of topology and architectural parameters. Design tools are needed to prune this space and pick the best solutions.
• Even more so given their global, distributed nature, it is essential to evaluate the physical implementation of NoCs to evaluate their suitability for next-generation designs and their area and power costs.
This dissertation focuses on all of the above points, by describing a NoC architectural implementation called ×pipes; a NoC simulation environment within a cycle-accurate MPSoC emulator called MPARM; a NoC design flow consisting of a front-end tool for optimal NoC instantiation, called SunFloor, and a set of back-end facilities for the study of NoC physical implementations. This dissertation proves the viability of NoCs for current and upcoming designs, by outlining their advantages (along with a few tradeoffs) and by providing a full NoC implementation framework. It also presents some examples of additional extensions of NoCs, allowing e.g. for increased fault tolerance, and outlines where NoCs may find further application scenarios, such as in stacked chips.
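
The design-space pruning mentioned in the list above can be pictured as a toy exhaustive search over topology parameters, scoring each candidate with entirely made-up area and latency estimators and keeping the best one under a constraint. This is only a sketch of the idea, not the SunFloor tool or its models.

```python
# Toy NoC design-space exploration: exhaustive scan over mesh sizes and
# link widths, keeping the lowest-latency candidate under an area budget.
# The cost models below are placeholders, not real technology data.
from itertools import product

def estimate(rows, cols, link_width):
    routers = rows * cols
    area = routers * (0.05 + 0.001 * link_width)        # mm^2, fake model
    latency = (rows + cols) * 2 + 64 / link_width       # cycles, fake model
    return area, latency

def explore(area_budget):
    best = None
    for rows, cols, width in product([2, 3, 4], [2, 3, 4], [32, 64, 128]):
        area, latency = estimate(rows, cols, width)
        if area <= area_budget and (best is None or latency < best[0]):
            best = (latency, (rows, cols, width), area)
    return best

print(explore(area_budget=1.0))   # -> (8.5, (2, 2, 128), 0.712)
```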

Relevance:

10.00%

Publisher:

Abstract:

The scaling down of transistor technology allows microelectronics manufacturers such as Intel and IBM to build ever more sophisticated systems on a single microchip. The classical interconnection solutions based on shared buses or direct connections between the modules of the chip are becoming obsolete as they struggle to sustain the increasingly tight bandwidth and latency constraints that these systems demand. The most promising solution for future chip interconnects are the Networks on Chip (NoC). NoCs are networks composed of routers and channels used to interconnect the different components installed on a single microchip. Examples of advanced processors based on NoC interconnects are the IBM Cell processor, composed of eight CPUs, which is installed in the Sony PlayStation 3, and the Intel Teraflops project, composed of 80 independent (simple) microprocessors. On-chip integration is becoming popular not only in the Chip Multi Processor (CMP) research area but also in the wider and more heterogeneous world of Systems on Chip (SoC). SoCs comprise all the electronic devices that surround us, such as cell phones, smartphones, home embedded systems, automotive systems, set-top boxes, etc. SoC manufacturers such as ST Microelectronics, Samsung and Philips, and also universities such as Bologna University, M.I.T., Berkeley and more, are all proposing proprietary frameworks based on NoC interconnects. These frameworks help engineers in the switch of design methodology and speed up the development of new NoC-based systems on chip. In this Thesis we propose an introduction to CMP and SoC interconnection networks. Then, focusing on SoC systems, we propose:
• a detailed analysis, based on simulation, of the Spidergon NoC, an ST Microelectronics solution for SoC interconnects. The Spidergon NoC differs from many classical solutions inherited from the parallel computing world. Here we propose a detailed analysis of this NoC topology and its routing algorithms. Furthermore we propose Equalized, a new routing algorithm designed to optimize the use of the resources of the network while also increasing its performance;
• a methodology flow based on modified publicly available tools that, combined, can be used to design, model and analyze any kind of System on Chip;
• a detailed analysis of an ST Microelectronics-proprietary transport-level protocol that the author of this Thesis helped develop;
• a simulation-based comprehensive comparison of different network interface designs proposed by the author and the researchers at the AST lab, in order to integrate shared-memory and message-passing based components on a single System on Chip;
• a powerful and flexible solution to address the timing closure exception issue in the design of synchronous Networks on Chip. Our solution is based on relay-station repeaters and allows the power and area demands of NoC interconnects to be reduced while also reducing their buffer needs;
• a solution to simplify the design of NoCs while also increasing their performance and reducing their power and area consumption. We propose to replace complex and slow virtual channel-based routers with multiple, flexible, small Multi Plane ones. This solution allows us to reduce the area and power dissipation of any NoC while also increasing its performance, especially when resources are reduced.
This Thesis has been written in collaboration with the Advanced System Technology laboratory in Grenoble, France, and the Computer Science Department at Columbia University in the City of New York.
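
To make the Spidergon topology mentioned above concrete: each of the N nodes has a clockwise, a counterclockwise and an "across" link to the diametrically opposite node, and the usual shortest-path scheme (often called Across-First) takes the across link only when the ring distance exceeds roughly N/4. The sketch below is a simplified next-hop function written from that textbook description; it is not code from the thesis and does not model the Equalized algorithm.

```python
# Simplified Spidergon "Across-First" next-hop selection (illustrative only).
def spidergon_next_hop(current, dest, n):
    """n = number of nodes (even). Returns the neighbour to forward to."""
    if current == dest:
        return current
    delta = (dest - current) % n           # clockwise distance to destination
    if n // 4 < delta < n - n // 4:
        return (current + n // 2) % n      # across link: destination is "far"
    if delta <= n // 4:
        return (current + 1) % n           # clockwise link
    return (current - 1) % n               # counterclockwise link

# Route from node 0 to node 9 on a 16-node Spidergon:
node, path = 0, [0]
while node != 9:
    node = spidergon_next_hop(node, 9, 16)
    path.append(node)
print(path)  # [0, 8, 9]: one hop across, one hop along the ring
```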

Relevance:

10.00%

Publisher:

Abstract:

The development of the digital electronics market is founded on the continuous reduction of transistor size, in order to reduce the area, power and cost and increase the computational performance of integrated circuits. This trend, known as technology scaling, is approaching the nanometer scale. The lithographic process in the manufacturing stage is becoming more uncertain as transistor sizes scale down, resulting in larger parameter variation in future technology generations. Furthermore, the exponential relationship between the leakage current and the threshold voltage is limiting threshold and supply voltage scaling, increasing the power density and creating local thermal issues such as hot spots, thermal runaway and thermal cycles. In addition, the introduction of new materials and the smaller device dimensions are reducing transistor robustness, which, combined with high temperatures and frequent thermal cycles, speeds up wear-out processes. These effects can no longer be addressed only at the process level. Consequently, deep sub-micron devices will require solutions involving several design levels, such as system and logic, and new approaches called Design For Manufacturability (DFM) and Design For Reliability (DFR). The purpose of these approaches is to bring awareness of device reliability and manufacturability into the early design stages, in order to introduce logic and system techniques able to cope with yield and reliability loss. The ITRS roadmap suggests the following research steps to integrate design for manufacturability and reliability into the standard automated CAD design flow: i) the implementation of new analysis algorithms able to predict the system's thermal behavior and its impact on power and speed performance; ii) high-level wear-out models able to predict the mean time to failure (MTTF) of the system; iii) statistical performance analysis able to predict the impact of process variation, both random and systematic. The new analysis tools have to be developed alongside new logic and system strategies to cope with future challenges, for instance: i) a thermal management strategy that increases the reliability and lifetime of the devices by acting on some tunable parameter, such as supply voltage or body bias; ii) error detection logic able to interact with compensation techniques such as Adaptive Supply Voltage (ASV), Adaptive Body Bias (ABB) and error recovery, in order to increase yield and reliability; iii) architectures that are fundamentally resistant to variability, including locally asynchronous designs, redundancy, and error-correcting signal encodings (ECC). The literature already features works addressing the prediction of the MTTF, papers focusing on thermal management in general-purpose chips, and publications on statistical performance analysis. In my PhD research activity, I investigated the need for thermal management in future embedded low-power Network-on-Chip (NoC) devices. I developed a thermal analysis library that has been integrated in a NoC cycle-accurate simulator and in an FPGA-based NoC simulator. The results have shown that an accurate layout distribution can avoid the onset of hot spots in a NoC chip. Furthermore, the application of thermal management can reduce the temperature and the number of thermal cycles, increasing system reliability. Therefore the thesis advocates the need to integrate thermal analysis in the first stages of embedded NoC design.
Later on, I focused my research on the development of a statistical process variation analysis tool able to address both random and systematic variations. The tool was used to analyze the impact of self-timed asynchronous logic stages in an embedded microprocessor. As a result, we confirmed the capability of self-timed logic to increase manufacturability and reliability. Furthermore, we used the tool to investigate the suitability of low-swing techniques for NoC system communication under process variations. In this case we discovered the superior robustness of low-swing links to systematic process variation, together with a good response to compensation techniques such as ASV and ABB. Hence low-swing signaling is a good alternative to standard CMOS communication in terms of power, speed, reliability and manufacturability. In summary, my work proves the advantage of integrating a statistical process variation analysis tool in the first stages of the design flow.
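
The threshold-based thermal management discussed above can be sketched as a simple control loop that lowers the supply voltage when a temperature sensor exceeds a limit and restores it once the hot spot cools down. The temperatures, thresholds and voltage values below are illustrative assumptions, not figures from the thesis.

```python
# Sketch of a reactive thermal management loop for a NoC tile.
# Thresholds, voltages and the fake temperature trace are placeholders.

T_HOT, T_SAFE = 85.0, 70.0          # degrees Celsius (hypothetical limits)
V_NOMINAL, V_THROTTLED = 1.0, 0.8   # supply voltages (hypothetical)

def thermal_step(temperature, vdd):
    """Return the supply voltage to apply for the next control interval."""
    if temperature >= T_HOT:
        return V_THROTTLED          # throttle: dynamic power scales ~ V^2
    if temperature <= T_SAFE and vdd < V_NOMINAL:
        return V_NOMINAL            # hot spot cleared: restore performance
    return vdd                      # keep the current operating point

# Replay a fake temperature trace through the controller:
trace = [72, 80, 86, 88, 79, 69, 68]
vdd = V_NOMINAL
for t in trace:
    vdd = thermal_step(t, vdd)
    print(f"T={t:5.1f}C  Vdd={vdd}")
```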

Relevance:

10.00%

Publisher:

Abstract:

The document mainly covers the principal flow control mechanisms for NoCs. Several switching schemes are discussed, the same schemes combined with the introduction of Virtual Channels, some low-level flow control mechanisms, and two solutions for end-to-end flow control: Credit Based and CTC (STMicroelectronics). In the course of the discussion, some possible modifications to CTC are presented to increase its performance while retaining the scalability that distinguishes it: the "back-to-back requests" and "multiple incoming connections". Finally, some solutions for implementing quality of service in networks-on-chip are introduced. Precisely to support QoS, CTTC is introduced: a version of CTC with support for Time Division Multiplexing on the Spidergon network.
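
Credit-based flow control, one of the end-to-end schemes compared above, can be sketched in a few lines: the sender holds a credit counter equal to the free buffer slots at the receiver, decrements it for every flit sent, and stalls at zero until the receiver returns credits as it drains its buffer. The sketch below is a generic textbook version with invented names, not the CTC or CTTC protocol.

```python
# Minimal credit-based flow control between one sender and one receiver.
from collections import deque

class Receiver:
    def __init__(self, buffer_slots):
        self.buffer = deque()
        self.capacity = buffer_slots

    def accept(self, flit):
        assert len(self.buffer) < self.capacity, "buffer overflow"
        self.buffer.append(flit)

    def drain(self):
        """Consume one flit and return one credit to the sender."""
        if self.buffer:
            self.buffer.popleft()
            return 1
        return 0

class Sender:
    def __init__(self, receiver):
        self.receiver = receiver
        self.credits = receiver.capacity    # initial credits = free slots

    def try_send(self, flit):
        if self.credits == 0:
            return False                    # stall: no room downstream
        self.credits -= 1
        self.receiver.accept(flit)
        return True

rx = Receiver(buffer_slots=2)
tx = Sender(rx)
print([tx.try_send(f) for f in "abc"])   # [True, True, False] -> stalled
tx.credits += rx.drain()                 # receiver drains, credit returned
print(tx.try_send("c"))                  # True
```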

Relevance:

10.00%

Publisher:

Abstract:

The work has been divided into three macro-areas. The first concerns a theoretical analysis of how intrusions work, of which software tools are used to carry them out, and of how to protect against them (using the devices generically known as firewalls). The second macro-area analyzes an intrusion carried out from the outside against sensitive servers of a LAN. This analysis is conducted on the files captured by two network interfaces configured in promiscuous mode on a probe located in the LAN. Two interfaces are needed in order to connect to two LAN segments with different subnet masks. The attack is analyzed with various software tools. A third part of the work can indeed be identified: the part where the files captured by the two interfaces are analyzed, first with software that handles full-content data, such as Wireshark, then with software that handles session data, which were processed with Argus, and finally the statistical data, which were processed with Ntop. The penultimate chapter, before the conclusions, covers the installation of Nagios and its configuration for monitoring, through plugins, the remaining disk space on a remote agent machine and the MySQL and DNS services. Of course, Nagios can be configured to monitor any kind of service offered on the network.
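
The disk-space monitoring described above relies on the standard Nagios plugin convention: a check prints one status line and exits with 0 (OK), 1 (WARNING) or 2 (CRITICAL). The sketch below is a generic check written against that convention; the path and thresholds are illustrative, not the exact plugin or settings used in the thesis.

```python
#!/usr/bin/env python3
# Nagios-style disk space check: exit code 0=OK, 1=WARNING, 2=CRITICAL.
# Monitored path and thresholds are illustrative defaults.
import shutil
import sys

def check_disk(path="/", warn_pct=20.0, crit_pct=10.0):
    usage = shutil.disk_usage(path)
    free_pct = usage.free / usage.total * 100
    if free_pct < crit_pct:
        print(f"DISK CRITICAL - {free_pct:.1f}% free on {path}")
        return 2
    if free_pct < warn_pct:
        print(f"DISK WARNING - {free_pct:.1f}% free on {path}")
        return 1
    print(f"DISK OK - {free_pct:.1f}% free on {path}")
    return 0

if __name__ == "__main__":
    sys.exit(check_disk())
```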

Relevance:

10.00%

Publisher:

Abstract:

Continuous advances in integrated circuit manufacturing have brought frequent upheavals in the design, implementation and scalability of electronic devices, as well as in the way they are used. Although Moore's law has anticipated and characterized this trend over the last decades, it is now facing enormous limitations, which can only be overcome through a different approach to chip production, consisting in practice of vertically stacking several layers electrically connected through special vias. On the single layer, networks-on-chip have been suggested to overcome the severe limitations due to the scaling of shared communication structures. This thesis is mainly set in the context of the emerging high-performance multicore platforms based on 3D NoCs, in which the network-on-chip is extended in all three directions. The goal of this work is to provide a set of tools and techniques to build and characterize a three-dimensional platform, as demonstrated by the realization of the 3D NoC test chip fabricated at the IMEC foundry. The first contribution consists of an accurate characterization of the vertical interconnects (TSVs), i.e. the special vias that cross the entire substrate of the die, of the characterization of 3D routers (in which one or more ports are extended in the vertical direction), and finally of the setup of a 3D design flow using entirely 2D CAD tools. This first step allowed us to carry out detailed analyses of both the cost and the various implications. The second contribution consists of the development of some functional blocks needed to guarantee the correct operation of the 3D NoC in the presence of both faults in the TSVs (fault-tolerant links) and thermal drift among the clock trees of the different dies (independent clock trees). This second contribution comprises the development of the following circuit solutions: a 3D fault-tolerant link, reconfigurable Look-Up Tables, and a mesochronous synchronizer. The first is essentially a vertical bus equipped with spare TSVs to be used to replace faulty vias, plus the control logic to perform testing and reconfiguration. The second is a reconfigurable, high-performance, low-cost Look-Up Table, needed both to balance traffic in the NoC and to bypass unrepairable links. Finally, the third circuit solution is a mesochronous synchronizer needed to guarantee synchronization when transferring data from one layer to another in the 3D NoC. The third contribution of this thesis is the realization of a high-performance multicore interface for 3D memories (stacked 3D DRAM), and the architectural exploration of the benefits and cost of this new system in which the main memory is no longer the bottleneck of the whole system. The fourth and last contribution is the realization of a 3D NoC test chip at the IMEC foundry, and of a full-custom circuit for the characterization of the variability of the RC parameters of the vertical interconnects.
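
The fault-tolerant vertical link described above (a bus with spare TSVs plus test-and-reconfigure logic) can be pictured as a remapping step run after the TSV test: each faulty data via is rerouted to the next free working via. The following sketch uses invented names and a simple left-to-right shift policy, which is only one possible reconfiguration strategy, not the circuit designed in the thesis.

```python
# Sketch: remap faulty TSVs of a vertical link onto spare TSVs after test.
# Names and the remapping policy are illustrative assumptions.

def remap_tsvs(n_data, n_spares, faulty):
    """Return a mapping data_bit -> physical TSV index, or None if the
    faults exceed the available spares (link unrepairable)."""
    good = [i for i in range(n_data + n_spares) if i not in set(faulty)]
    if len(good) < n_data:
        return None                      # not enough working vias
    return {bit: good[bit] for bit in range(n_data)}

mapping = remap_tsvs(n_data=8, n_spares=2, faulty={3, 9})
print(mapping)
# {0: 0, 1: 1, 2: 2, 3: 4, 4: 5, 5: 6, 6: 7, 7: 8}: bit 3 shifts past the
# faulty via and spare TSV 8 absorbs the shift; spare 9 is itself faulty.
print(remap_tsvs(n_data=8, n_spares=1, faulty={0, 1}))  # None: unrepairable
```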

Relevance:

10.00%

Publisher:

Abstract:

Multi-Processor SoC (MPSoC) design brings to the foreground a large number of challenges, one of the most prominent of which is the design of the chip interconnection. With a number of on-chip blocks presently ranging in the tens, and quickly approaching the hundreds, the novel issue of how best to provide on-chip communication resources is clearly felt. The scaling down of process technologies has increased process and dynamic variations as well as transistor wearout. Because of this, delay variations increase and impact the performance of MPSoCs. The interconnect architecture in MPSoCs becomes a single point of failure, as it connects all other components of the system together. A faulty processing element may be shut down entirely, but the interconnect architecture must be able to tolerate partial failure and variations and keep operating, with a performance, power or latency overhead. This dissertation focuses on techniques at different levels of abstraction to address the reliability and variability issues in on-chip interconnection networks. By showing the test results of a GALS NoC testchip, this dissertation motivates the need for techniques to detect and work around manufacturing faults and process variations in the interconnection infrastructure of MPSoCs. As a physical design technique, we propose the bundle routing framework as an effective way to route the global links of Networks-on-Chip. For architecture-level design, two cases are addressed: (i) intra-cluster communication, for which we propose a low-latency interconnect with robustness to variability; (ii) inter-cluster communication, for which online functional testing and a reliable NoC configuration are proposed. We also propose dual-Vdd as an orthogonal way of compensating variability at the post-fabrication stage. This is an alternative strategy with respect to the design techniques, since it enforces compensation at the post-silicon stage.
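
The post-fabrication dual-Vdd compensation mentioned above can be sketched as a per-tile assignment: tiles whose measured delay already meets the timing target at the low supply stay there to save power, while slow tiles are bumped to the high supply. The delay figures, supply values and scaling factor below are invented for illustration and are not results from the dissertation.

```python
# Sketch of post-silicon dual-Vdd assignment for variability compensation.
# Delays, voltages and the speedup factor are illustrative placeholders.

VDD_LOW, VDD_HIGH = 0.9, 1.1    # available supplies (hypothetical)
SPEEDUP_AT_HIGH = 0.85          # assumed delay scaling from low to high Vdd

def assign_vdd(measured_delays_ns, target_ns):
    """Per-tile choice: low Vdd if it already meets timing, else high Vdd."""
    assignment = {}
    for tile, delay in measured_delays_ns.items():
        if delay <= target_ns:
            assignment[tile] = VDD_LOW
        elif delay * SPEEDUP_AT_HIGH <= target_ns:
            assignment[tile] = VDD_HIGH
        else:
            assignment[tile] = None      # cannot be compensated, flag it
    return assignment

# Fabricated tiles with process-variation-induced delay spread:
delays = {"r0": 0.95, "r1": 1.08, "r2": 1.30}
print(assign_vdd(delays, target_ns=1.0))
# {'r0': 0.9, 'r1': 1.1, 'r2': None}
```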