36 results for Networks on chip (NoC)
Abstract:
Multiprocessor system-on-chip (MPSoC) designs utilize the available technology and communication architectures to meet the requirements of upcoming applications. In an MPSoC, the communication platform is both the key enabler and the key differentiator for realizing efficient designs. It provides product differentiation to meet a diverse, multi-dimensional set of design constraints, including performance, power, energy, reconfigurability, scalability, cost, reliability and time-to-market. The communication resources of a single interconnection platform cannot be fully utilized by all kinds of applications; for example, providing high communication bandwidth to applications that are computation-intensive but not data-intensive is often infeasible in practical implementations. This thesis performs architecture-level design space exploration towards efficient and scalable resource utilization for MPSoC communication architectures. In order to meet the performance requirements within the design constraints, careful selection of the MPSoC communication platform and resource-aware partitioning and mapping of the application play an important role. To enhance the utilization of communication resources, a variety of techniques can be used, such as resource sharing, multicast to avoid re-transmission of identical data, and adaptive routing. For implementation, these techniques should be customized to the platform architecture. To address the resource utilization of MPSoC communication platforms, a variety of architectures with different design parameters and performance levels, namely the Segmented bus (SegBus), Network-on-Chip (NoC) and Three-Dimensional NoC (3D-NoC), are selected. Average packet latency and power consumption are the evaluation parameters for the proposed techniques. In conventional computing architectures, a fault on a component renders the connected fault-free components inoperative.
A resource sharing approach can utilize these fault-free components to retain system performance and reduce the impact of faults. Design space exploration also helps narrow down the selection of an MPSoC architecture that can meet the performance requirements within the design constraints.
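Of the techniques mentioned above, routing is the easiest to make concrete. The sketch below shows dimension-ordered (XY) routing on a 2D mesh NoC, the deterministic baseline that adaptive routing schemes are typically compared against; it is a generic textbook sketch under assumed mesh coordinates, not the implementation evaluated in the thesis.

```python
def xy_route(src, dst):
    """Dimension-ordered (XY) routing on a 2D mesh NoC:
    travel fully along the X dimension first, then along Y.
    Deterministic and deadlock-free on a mesh, but unable to route
    around congestion the way adaptive routing can."""
    x, y = src
    dx, dy = dst
    hops = [(x, y)]
    while x != dx:                  # X dimension first
        x += 1 if dx > x else -1
        hops.append((x, y))
    while y != dy:                  # then Y dimension
        y += 1 if dy > y else -1
        hops.append((x, y))
    return hops
```

Because every packet between the same pair of nodes takes the same path, hop count (and hence zero-load latency) is easy to compute, which is one reason XY routing is a common reference point in latency and power evaluations.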
Abstract:
Many-core systems are emerging from the need for more computational power and power efficiency. However, many issues still surround many-core systems. These systems need specialized software before they can be fully utilized, and the hardware itself may differ from conventional computing systems. To gain efficiency from a many-core system, programs need to be parallelized. In many-core systems the cores are small and less powerful than the cores used in traditional computing, so running a conventional program is not an efficient option. Moreover, in Network-on-Chip-based processors the network may become congested and the cores may run at different speeds. In this thesis, a dynamic load balancing method is proposed and tested on the Intel 48-core Single-Chip Cloud Computer by parallelizing a fault simulator. The maximum speedup is difficult to obtain due to severe bottlenecks in the system. In order to exploit all the available parallelism of the Single-Chip Cloud Computer, a runtime approach capable of dynamically balancing the load during the fault simulation process is used. The proposed dynamic fault simulation approach on the Single-Chip Cloud Computer shows up to a 45X speedup compared to a serial fault simulation approach. Many-core systems can draw enormous amounts of power, and if this power is not controlled properly, the system might get damaged. One way to manage power is to set a power budget for the system. But if this power is drawn by just a few of the many cores, those few cores become extremely hot and might get damaged. Due to the increase in power density, multiple thermal sensors are deployed across the chip area to provide real-time temperature feedback for thermal management techniques. Thermal sensor accuracy is highly susceptible to intra-die process variation and aging phenomena. These factors lead to a situation where thermal sensor values drift from their nominal values.
This necessitates efficient calibration techniques to be applied before the sensor values are used. In addition, cores in modern many-core systems support dynamic voltage and frequency scaling. Thermal sensors located on the cores are sensitive to the core's current voltage level, meaning that dedicated calibration is needed for each voltage level. In this thesis, a general-purpose software-based auto-calibration approach is therefore also proposed to calibrate thermal sensors across a range of voltage levels.
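The dynamic load balancing idea, i.e. cores pulling the next fault-simulation job as soon as they finish the previous one, so that faster cores naturally take on more work, can be sketched as below. This is an illustrative shared-queue version in Python; the actual SCC implementation relies on on-chip message passing between cores, and `simulate_fault` is a hypothetical placeholder for the per-fault simulation work.

```python
import queue
import threading

def simulate_fault(fault_id):
    # Hypothetical placeholder for simulating one fault;
    # pretend every seventh fault is detected.
    return fault_id % 7 == 0

def dynamic_fault_simulation(fault_ids, n_workers=4):
    """Distribute fault-simulation jobs over workers via a shared queue.

    No static partitioning: each worker pulls the next job as soon as it
    finishes the previous one, so workers running at different speeds
    (as on a congested NoC) stay busy until the queue is empty."""
    jobs = queue.Queue()
    for fid in fault_ids:
        jobs.put(fid)

    results = []
    lock = threading.Lock()

    def worker():
        while True:
            try:
                fid = jobs.get_nowait()
            except queue.Empty:
                return                      # no work left: worker exits
            detected = simulate_fault(fid)
            with lock:
                results.append((fid, detected))

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

The contrast with a static split is that a slow core holding a fixed share of the fault list would stall the whole run; with the queue, its share simply shrinks.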
Abstract:
Today's networked systems are becoming increasingly complex and diverse. Current simulation and runtime verification techniques do not support developing such systems efficiently; moreover, the reliability of the simulated/verified systems is not thoroughly ensured. To address these challenges, the use of formal techniques to reason about network system development is growing, while at the same time the mathematical background necessary for using formal techniques is a barrier that keeps network designers from employing them efficiently. Thus, these techniques are not widely used for developing networked systems. The objective of this thesis is to propose formal approaches for the development of reliable networked systems, taking efficiency into account. With respect to reliability, we propose the architectural development of correct-by-construction networked system models. With respect to efficiency, we propose reusable network architectures as well as reusable development steps. At the core of our development methodology, we employ abstraction and refinement techniques for the development and analysis of networked systems. We evaluate our proposal by applying the proposed architectures to a pervasive class of dynamic networks, i.e., wireless sensor network architectures, as well as to a pervasive class of static networks, i.e., network-on-chip architectures. The ultimate goal of our research is to put forward the idea of building libraries of pre-proved rules for the efficient modelling, development, and analysis of networked systems. We take into account both qualitative and quantitative analysis of networks via varied formal tool support, using a theorem prover (the Rodin platform) and a statistical model checker (SMC-Uppaal).
Abstract:
Advancements in IC processing technology have led to the innovation and growth happening in the consumer electronics sector and to the evolution of the IT infrastructure supporting this exponential growth. One of the most difficult obstacles to this growth is the removal of the large amount of heat generated by the processing and communicating nodes on the system. The scaling down of technology and the increase in power density have a direct and consequential effect on the rise in temperature. This has resulted in increased cooling budgets and affects both the lifetime reliability and the performance of the system. Hence, reducing on-chip temperatures has become a major design concern for modern microprocessors. This dissertation addresses the thermal challenges at different levels for both 2D planar and 3D stacked systems. It proposes a self-timed thermal monitoring strategy based on the liberal use of on-chip thermal sensors. This makes use of noise-variation-tolerant, leakage-current-based thermal sensing for monitoring purposes. In order to study thermal management issues from the early design stages, accurate thermal modeling and analysis at design time is essential. In this regard, the spatial temperature profile of global Cu nanowires for on-chip interconnects has been analyzed. The dissertation presents a 3D thermal model of a multicore system in order to investigate the effects of hotspots, and of the placement of silicon die layers, on the thermal performance of a modern flip-chip package. For a 3D stacked system, the primary design goal is to maximise performance within the given power and thermal envelopes. Hence, a thermally efficient routing strategy for 3D NoC-Bus hybrid architectures has been proposed to mitigate on-chip temperatures by herding most of the switching activity to the die closest to the heat sink. Finally, an exploration of various thermal-aware placement approaches for both 2D and 3D stacked systems is presented.
Various thermal models have been developed and thermal control metrics have been extracted. An efficient thermal-aware application mapping algorithm for a 2D NoC has been presented. It has been shown that the proposed mapping algorithm reduces the effective area subjected to high temperatures when compared to the state of the art.
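As an illustration of the flavour of thermal-aware mapping discussed above, the sketch below greedily assigns the most active tasks to the mesh nodes assumed to dissipate heat best, here crudely modelled as the nodes nearest the chip edge. The activity model, the dissipation proxy, and all names are assumptions made for illustration only, not the algorithm proposed in the dissertation.

```python
def thermal_aware_map(task_activity, mesh_w, mesh_h):
    """Greedy thermal-aware mapping sketch: place the most active tasks
    (highest switching activity, hence most heat) on the nodes assumed
    to dissipate heat best, approximated here as proximity to the chip
    edge. Returns a dict: task -> (x, y) mesh coordinate."""
    assert len(task_activity) <= mesh_w * mesh_h, "more tasks than nodes"

    nodes = [(x, y) for x in range(mesh_w) for y in range(mesh_h)]

    def edge_distance(node):
        # Smaller value = closer to the chip edge (assumed cooler spot).
        x, y = node
        return min(x, y, mesh_w - 1 - x, mesh_h - 1 - y)

    nodes.sort(key=edge_distance)

    # Hottest (most active) tasks first, onto the best-dissipating nodes.
    tasks = sorted(task_activity, key=task_activity.get, reverse=True)
    return {task: node for task, node in zip(tasks, nodes)}
```

A real mapping algorithm would also weigh communication distance between tasks; this sketch isolates only the thermal-ranking idea.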
Abstract:
This thesis presents a novel design paradigm, called Virtual Runtime Application Partitions (VRAP), to judiciously utilize the on-chip resources. As the dark silicon era approaches, where power considerations will allow only a fraction of the chip to be powered on, judicious resource management will become a key consideration in future designs. Most work on resource management treats only the physical components (i.e. computation, communication, and memory blocks) as resources and manipulates the component-to-application mapping to optimize various parameters (e.g. energy efficiency). To further enhance the optimization potential, in addition to the physical resources we propose to manipulate abstract resources (i.e. the voltage/frequency operating point, the fault-tolerance strength, the degree of parallelism, and the configuration architecture). The proposed framework (i.e. VRAP) encapsulates methods, algorithms, and hardware blocks to provide each application with the abstract resources tailored to its needs. To test the efficacy of this concept, we have developed three distinct self-adaptive environments: (i) the Private Operating Environment (POE), (ii) the Private Reliability Environment (PRE), and (iii) the Private Configuration Environment (PCE), which collectively ensure that each application meets its deadlines using minimal platform resources. In this work several novel architectural enhancements, algorithms and policies are presented to realize the virtual runtime application partitions efficiently. Considering future design trends, we have chosen Coarse-Grained Reconfigurable Architectures (CGRAs) and Networks on Chip (NoCs) to test the feasibility of our approach. Specifically, we have chosen the Dynamically Reconfigurable Resource Array (DRRA) and McNoC as the representative CGRA and NoC platforms. The proposed techniques are compared and evaluated using a variety of quantitative experiments.
Synthesis and simulation results demonstrate that VRAP significantly enhances energy and power efficiency compared to the state of the art.
Abstract:
This master's thesis presents methods for detecting a broken rotor bar. The purpose of the work is to study rotor faults by means of the stator current. The work is roughly divided into three areas: faults of the squirrel-cage induction motor, identification of rotor faults, and the signal processing methods by which a broken bar is detected. Faults of the squirrel-cage motor comprise stator winding faults and rotor faults. Rotor winding faults include the cracking of rotor bars and the detachment of a rotor bar from the short-circuit end ring. Methods for identifying rotor faults include parameter estimation and current spectrum analysis. The first part of the thesis presents the structure and operation of squirrel-cage induction motors, introduces the faults that affect the motor, and surveys solution methods for identifying rotor faults. Finally, it is studied how the results obtained from the stator measurement data can be processed with the FFT algorithm, and how the FFT algorithm can be implemented as an embedded system on a SHARC processor. The work uses the ADSP-21062 EZ-LAB development environment, which allows programs to be run from a RAM chip that interacts with the devices on the SHARC board.
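A broken rotor bar shows up in the stator current spectrum as sidebands at f(1 ± 2s) around the supply frequency f, where s is the slip. The sketch below illustrates this current spectrum analysis on a synthetic signal in Python, with a naive DFT standing in for the FFT that runs on the SHARC DSP; all signal parameters are invented for illustration.

```python
import math

def dft_magnitude(signal):
    """Naive DFT magnitude spectrum, normalized by the sample count
    (a readable stand-in for the FFT used on the DSP)."""
    n = len(signal)
    spectrum = []
    for k in range(n // 2):
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        spectrum.append(math.hypot(re, im) / n)
    return spectrum

# Synthetic stator current: 50 Hz supply plus small sidebands at 46 and
# 54 Hz, i.e. f(1 +/- 2s) for an assumed slip s = 0.04, as a broken
# rotor bar would produce.
fs, n = 200, 200  # 1 s of data at 200 Hz -> 1 Hz per spectral bin
current = [math.sin(2 * math.pi * 50 * t / fs)
           + 0.05 * math.sin(2 * math.pi * 46 * t / fs)
           + 0.05 * math.sin(2 * math.pi * 54 * t / fs)
           for t in range(n)]
spectrum = dft_magnitude(current)
```

A diagnosis routine would then compare the sideband magnitudes against the supply-frequency peak; a sideband level above some threshold relative to the fundamental indicates rotor damage.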
Abstract:
Today's world relies on networks. Computer networks and wireless phones are already quite common for a large number of people. A new type of network has emerged to further ease people's networked lives. Ad hoc networks enable flexible network formation between wireless terminal devices without any pre-existing infrastructure. This master's thesis presents a new simulation tool for simulating wireless ad hoc networks at the protocol level. It also presents the principles and theory behind such networks. The medium access protocols of the OSI link layer in ad hoc networks, and their implementation in the simulator, are examined more closely. In addition, a set of simulation runs is presented as an example of the simulator's operation and of its possible uses.
Abstract:
The reform of the Finnish Electricity Market Act in autumn 2013 obliged distribution network companies to improve their networks so that they meet the interruption time requirements of the new act. The act allows an electricity distribution interruption of at most 6 hours in urban plan areas and 36 hours in rural areas. In this master's thesis, a service model is developed for ElMil Oy with the aim of improving the weatherproofing of distribution networks and of assessing the economic impact of the improvements on the network company's profitability through the regulatory model of network business. The theory part of the work reviews the changes brought by the new Electricity Market Act from the supply security point of view, and explains the components of the network business regulatory model and how they have been utilized in this work. This information is applied in a case study that tests the developed service model on an area the size of the distribution networks of two substations in the distribution network of Järvi-Suomen Energia Oy. A major-disturbance model is built for the distribution network of the study area, and on its basis the required share of weatherproof network is estimated so that the requirements of the Electricity Market Act are met. The investment targets of the area are optimized on the basis of profitability, yielding a cost-effective investment programme within certain boundary conditions. In addition, the parameters of the major-disturbance model are varied in sensitivity analyses. As the final result of the work, a service model is developed for ElMil Oy. The case studies show that investment costs rise significantly. A good view is obtained of the incentive effects of the network business regulatory model and of the allowed return. The sensitivity analyses show that the major-disturbance model is highly dependent on the chosen parameters, so attention must be paid to their selection.
Abstract:
Due to various advantages such as flexibility, scalability and updatability, software-intensive systems are increasingly embedded in everyday life. The constantly growing number of functions executed by these systems requires a high level of performance from the underlying platform. The main approach to increasing performance has been to raise the operating frequency of the chip. However, this has led to the problem of power dissipation, which has shifted the focus of research to parallel and distributed computing. Parallel many-core platforms can provide the required level of computational power along with low power consumption. On the one hand, this enables parallel execution of highly intensive applications. With their computational power, these platforms are likely to be used in various application domains: from home electronics (e.g., video processing) to complex critical control systems. On the other hand, the utilization of the resources has to be efficient in terms of performance and power consumption. However, the high level of on-chip integration increases the probability of various faults and the creation of hotspots leading to thermal problems. Additionally, radiation, which is frequent in space but becomes an issue also at ground level, can cause transient faults. This can eventually induce faulty execution of applications. Therefore, it is crucial to develop methods that enable efficient as well as resilient execution of applications. The main objective of the thesis is to propose an approach to design agent-based systems for many-core platforms in a rigorous manner. When designing such a system, we explore and integrate various dynamic reconfiguration mechanisms into the agents' functionality. The use of these mechanisms enhances the resilience of the underlying platform whilst maintaining performance at an acceptable level.
The design of the system proceeds according to a formal refinement approach, which allows us to ensure correct behaviour of the system with respect to postulated properties. To enable analysis of the proposed system in terms of area overhead as well as performance, we explore an approach where the developed rigorous models are transformed into a high-level implementation language. Specifically, we investigate methods for deriving fault-free implementations from these models in, e.g., a hardware description language, namely VHDL.
Abstract:
In modern society, personal health is a very important issue for everyone. With the development of science and technology, new and improved health monitoring devices and technologies will play a key role in daily medical activities. This thesis focuses on making progress in the design of a wearable vital sign system. A vital sign monitoring system has been proposed and designed. The whole detection system is composed of a signal collection subsystem, a signal processing subsystem, a short-range wireless communication subsystem and a user interface subsystem. The signal collection subsystem is composed of a light source and a photodiode: after light of two different wavelengths is emitted, the photodiode collects the light signal reflected by human body tissue. The signal processing subsystem is based on the AFE4490 analog front end and peripheral circuits; the collected analog signal is filtered and converted into a digital signal at this stage. After this processing, the signal is transmitted over SPI to the short-range wireless communication subsystem, which is based on the Bluetooth 4.0 protocol and the ultra-low-power System on Chip (SoC) nRF51822. Finally, the signal is transmitted to the user end. After proposing and building the system, the thesis focuses on the key component of the system, namely the photodetector. Based on a study of perovskite materials, a low-temperature-processed photodetector has been proposed, designed and characterized. The device is made up of a light-absorbing layer, an electron-transporting and hole-blocking layer, a hole-transporting and electron-blocking layer, a conductive substrate layer and a metal electrode layer. The light-absorbing layer is the key part of the whole device, and it is fabricated from perovskite materials.
After absorbing light, electron-hole pairs are produced in this layer; due to the energy level differences, the electrons and holes are transported to the metal electrode and the conductive substrate electrode through the electron transport layer and the hole transport layer, respectively. In this way the response current is produced. Based on this structure, the fabrication procedure comprises substrate cleaning; PEDOT:PSS layer preparation; perovskite layer preparation; PCBM layer preparation; and C60, BCP, and Ag electrode layer preparation. After device fabrication, a series of morphological characterizations and performance tests was carried out, including film-forming quality inspection, response current versus light wavelength analysis, linearity and response time measurements, and other optical and electrical property tests. The results show that the film is uniform; the device produces a clear response current to incident light with wavelengths from 350 nm to 800 nm, and the response current varies with the wavelength. When the wavelength is kept constant, there is a good linear relationship between the intensity of the response current and the power of the incident light, so the device can be used as a photodetector to collect light information. The response time of the device to a changing light signal is a few microseconds, which is acceptable for a photodetector in our system. The test results show that the device has good electronic and optical properties, the fabrication procedure is repeatable, and the properties of the devices have good uniformity, which demonstrates that the fabrication method and procedure can be used to build the photodetector in our wearable system.
Based on the series of test results, the thesis draws the conclusion that the fabricated photodetector can be integrated on a flexible substrate and is suitable for the proposed monitoring system, thus making some progress in the research on wearable monitoring systems and devices. Finally, some future prospects in system design and in device design and fabrication are proposed.
Abstract:
The aim of this study was to find out how an entrepreneur's networks are formed and what their structure is like. A further aim was to gain information on the significance of the entrepreneur's social networks to the entrepreneur. The study was carried out as a qualitative study. The research material was collected through structured interviews; alongside the interview material, drawings made by the entrepreneurs as well as scientific publications and literature were used. The interviewees were both female and male entrepreneurs from different localities in southern Finland. According to the results, different levels can be observed in entrepreneurs' networks; these levels are built in different ways and differ in their significance for the entrepreneur. People important to the entrepreneur are placed on the different levels of the network according to the benefit and value they offer the entrepreneur. The results show that the building of networks is affected by the nature of the business, the motives, and the entrepreneur's personality. The results are in line with the theories on which this study is based. There are also indications that the entrepreneur's gender affects how social networks are built, both in personal life and in business life. The benefits obtained from social networks are manifold. Further research should be carried out to develop effective tools for identifying the resources of the personnel and of the entrepreneur. In this way, entrepreneurship education and business advisory services could also be made more effective.
Abstract:
The parameter setting of a differential evolution algorithm must meet several requirements: efficiency, effectiveness, and reliability. Problems vary, and the solution of a particular problem can be represented in different ways. An algorithm most efficient in dealing with a particular representation may be less efficient in dealing with other representations. The development of differential evolution-based methods contributes substantially to research on evolutionary computing and global optimization in general. The objective of this study is to investigate the differential evolution algorithm, the intelligent adjustment of its control parameters, and its applications. In the thesis, the differential evolution algorithm is first examined using different parameter settings and test functions. Fuzzy control is then employed to make the control parameters adaptive, based on the optimization process and expert knowledge. The developed algorithms are applied to training radial basis function networks for function approximation, with the optimized variables including the centers, widths, and weights of the basis functions, and with the control parameters both kept fixed and adjusted by the fuzzy controller. After the influence of the control variables on the performance of the differential evolution algorithm was explored, an adaptive version of the differential evolution algorithm was developed and differential evolution-based radial basis function network training approaches were proposed. Experimental results showed that the performance of the differential evolution algorithm is sensitive to the parameter setting, and the best setting was found to be problem-dependent. The fuzzy adaptive differential evolution algorithm relieves the user of the burden of parameter setting and performs better than versions with all parameters kept fixed. Differential evolution-based approaches are effective for training Gaussian radial basis function networks.
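The classic DE/rand/1/bin scheme underlying such studies can be sketched as follows; F (the differential weight) and CR (the crossover rate) are exactly the kind of control parameters whose fixed versus fuzzy-adapted setting the thesis investigates. This is a generic textbook version under assumed box constraints, not the thesis code.

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.5, CR=0.9,
                           generations=200, seed=1):
    """Minimize f over a box via the DE/rand/1/bin scheme.
    F scales the difference vector, CR is the crossover rate: the two
    control parameters to which DE performance is known to be sensitive."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    costs = [f(x) for x in pop]

    for _ in range(generations):
        for i in range(pop_size):
            # Three distinct random individuals, none equal to the target.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)   # guarantees >= 1 mutated gene
            trial = []
            for j in range(dim):
                if j == jrand or rng.random() < CR:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])  # mutation
                else:
                    v = pop[i][j]        # binomial crossover keeps parent gene
                lo, hi = bounds[j]
                trial.append(min(max(v, lo), hi))  # clip to the box
            cost = f(trial)
            if cost <= costs[i]:         # greedy one-to-one selection
                pop[i], costs[i] = trial, cost

    best = min(range(pop_size), key=costs.__getitem__)
    return pop[best], costs[best]
```

A fuzzy-adaptive variant would replace the fixed F and CR with values updated each generation from observations of the search progress, which is the direction the thesis takes.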
Abstract:
This work is based on the utilisation of sawdust and wood chip screenings for different purposes. A substantial amount of these byproducts is readily available in the Finnish forest industry. A black liquor impregnation study showed that sawdust-like wood material behaves differently from normal chips. Furthermore, the fractionation and removal of the smallest size fractions did not have a significant effect on the impregnation of sawdust-like wood material. Sawdust kraft cooking equipped with an impregnation stage increases the cooking yield and decreases the lignin content of the produced pulp. Impregnation also increases the viscosity of the pulp and decreases chlorine dioxide consumption in bleaching. In addition, impregnation improves certain pulp properties after refining. Hydrotropic extraction showed that more lignin can be extracted from hardwood than from softwood. However, the particle size had a major influence on the lignin extraction: it was possible to extract more lignin from spruce sawdust than from spruce chips. Wood chip screenings are usually combusted to generate energy. They can also be used in the production of kraft pulp, ethanol and chemicals. It is not economical to produce ethanol from wood chip screenings because of the expensive wood material. Instead, they should be used for the production of steam and energy, kraft pulp and higher value-added chemicals. Bleached sawdust kraft pulp can be used to replace softwood kraft pulp in mechanical pulp based papers because it can improve certain physical properties. It is economically more feasible to use bleached sawdust kraft pulp instead of softwood kraft pulp, especially when the reinforcement power requirement is moderate.
Abstract:
The focus of this thesis is to study both the technical and the economic possibilities of novel on-line condition monitoring techniques in underground low voltage distribution cable networks. The thesis consists of a literature study of fault progression mechanisms in modern low voltage cables, laboratory measurements to determine the basis of and the restrictions on novel on-line condition monitoring methods, and an economic evaluation based on fault statistics and information gathered from Finnish distribution system operators. This thesis is closely related to the master's thesis "Channel Estimation and On-line Diagnosis of LV Distribution Cabling", which focuses more on the actual condition monitoring methods and the signal theory behind them.
Abstract:
The Finnish electricity distribution sector, rural areas in particular, is facing major challenges because of economic regulation, tightening supply security requirements and ageing network assets. Therefore, the target in distribution network planning and asset management is to develop and renovate the networks to meet these challenges, in compliance with the regulations, in an economically feasible way. Concerning supply security, the new Finnish Electricity Market Act limits the maximum duration of electricity supply interruptions to six hours in urban areas and 36 hours in rural areas. This has a significant impact on distribution network planning, especially in rural areas, where the distribution networks typically require extensive modifications and renovations to meet the supply security requirements. This doctoral thesis introduces a methodology to analyse electricity distribution system development. The methodology is based on and combines elements of reliability analysis, asset management and economic regulation. The analysis results can be applied, for instance, to evaluate the development of distribution reliability and to consider actions to meet the tightening regulatory requirements. Thus, the methodology produces information for strategic decision-making so that DSOs can respond to the challenges arising in the electricity distribution sector. The key contributions of the thesis are a network renovation concept for rural areas, an analysis to assess supply security, and an evaluation of the effects of economic regulation on strategic network planning. In addition, the thesis demonstrates how the reliability aspect affects the placement of automation devices and how reserve power can be arranged in a rural area network.