30 results for bandwidth AMSC: 11T71, 94A15, 14G50
Abstract:
Through advances in technology, System-on-Chip design is moving towards integrating tens to hundreds of intellectual property blocks into a single chip. In such a many-core system, on-chip communication becomes a performance bottleneck for high performance designs. Network-on-Chip (NoC) has emerged as a viable solution to the communication challenges in highly complex chips. The NoC architecture paradigm, based on a modular packet-switched mechanism, can address many of the on-chip communication challenges such as wiring complexity, communication latency, and bandwidth. Furthermore, the combined benefits of 3D IC and NoC schemes provide the possibility of designing a high performance system in a limited chip area. The major advantages of 3D NoCs are the considerable reductions in average latency and power consumption. Several factors degrade the performance of NoCs. In this thesis, we investigate three main performance-limiting factors: network congestion, faults, and the lack of efficient multicast support. We address these issues by means of routing algorithms. Congestion of data packets may lead to increased network latency and power consumption. Thus, we propose three different approaches for alleviating such congestion in the network. The first approach is based on measuring the congestion information in different regions of the network, distributing the information over the network, and utilizing this information when making a routing decision. The second approach employs a learning method to dynamically find the less congested routes according to the underlying traffic. The third approach is based on a fuzzy-logic technique to make better routing decisions when traffic information for different routes is available. Faults degrade performance significantly, as packets must take longer paths in order to be routed around the faults, which in turn increases congestion around the faulty regions. We propose four methods to tolerate faults at the link and switch level by using only the shortest paths as long as such paths exist. The unique characteristic of these methods is that they tolerate faults while also maintaining the performance of NoCs. To the best of our knowledge, these algorithms are the first approaches to bypass faults prior to reaching them while avoiding unnecessary misrouting of packets. Current implementations of multicast communication result in a significant performance loss for unicast traffic. This is due to the fact that the routing rules of multicast packets limit the adaptivity of unicast packets. We present an approach in which both unicast and multicast packets can be efficiently routed within the network. While providing more efficient multicast support, the proposed approach does not affect the performance of unicast routing at all. In addition, in order to reduce the overall path length of multicast packets, we present several partitioning methods along with their analytical models for latency measurement. This approach is discussed in the context of 3D mesh networks.
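As an illustration of the congestion-aware routing idea summarized above, the following sketch selects an output port in a 2D mesh by restricting the choice to minimal-path directions and picking the one whose downstream router reports the lowest congestion. This is a minimal sketch, not the thesis's actual algorithms; the congestion metric, function names, and values are assumptions.

    # Minimal sketch of congestion-aware minimal-path routing in a 2D mesh NoC.
    # Assumption: each router knows a congestion score (e.g. buffer occupancy)
    # reported by its four neighbours. Names and the metric are hypothetical.

    def minimal_directions(cur, dst):
        """Directions that bring the packet closer to its destination."""
        (cx, cy), (dx, dy) = cur, dst
        dirs = []
        if dx > cx: dirs.append("E")
        if dx < cx: dirs.append("W")
        if dy > cy: dirs.append("N")
        if dy < cy: dirs.append("S")
        return dirs

    def route(cur, dst, congestion):
        """Pick the minimal-path output port whose neighbour is least congested."""
        candidates = minimal_directions(cur, dst)
        if not candidates:
            return None  # packet has reached its destination
        return min(candidates, key=lambda d: congestion[d])

    # A packet at (1, 1) heading to (3, 2): the east neighbour is congested,
    # so the packet is forwarded north first.
    print(route((1, 1), (3, 2), {"E": 0.9, "W": 0.1, "N": 0.2, "S": 0.4}))  # -> N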
Abstract:
Capacitive measurement technology is based on the change in capacitance between the sensor and the target: when the capacitance changes, the impedance of the sensor changes as well. By exploiting this relationship, a measurement signal can be produced from a varying parameter. This thesis briefly presents techniques used for accurate short-range position measurement and examines the basic properties of capacitive position sensors as well as the requirements of a practical implementation, based on literature sources and simulation. In addition, current commercial measurement systems based on different techniques are compared with each other. Based on the comparison, capacitive measurement systems offer the highest measurement accuracy over a short measurement range when the measurement environment and the target are suitable for a capacitive sensor. Inductive sensors offer a larger measurement bandwidth and are better suited to dirty environments than capacitive sensors. Optical systems, in turn, enable a larger measurement range.
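As a minimal illustration of the capacitance-impedance relation the abstract refers to (standard textbook physics, not material from the thesis itself), an ideal parallel-plate sensor with plate area A and gap d to the target has

    C = \frac{\varepsilon_0 \varepsilon_r A}{d},
    \qquad
    Z(\omega) = \frac{1}{j\omega C} = \frac{d}{j\omega\,\varepsilon_0 \varepsilon_r A},

so a change in the gap d changes the capacitance and hence the sensor impedance, which the measurement electronics converts into a position signal.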
Abstract:
Multiprocessor system-on-chip (MPSoC) designs utilize the available technology and communication architectures to meet the requirements of upcoming applications. In an MPSoC, the communication platform is both the key enabler and the key differentiator for realizing efficient MPSoCs. It provides product differentiation to meet a diverse, multi-dimensional set of design constraints, including performance, power, energy, reconfigurability, scalability, cost, reliability and time-to-market. The communication resources of a single interconnection platform cannot be fully utilized by all kinds of applications; for example, providing high communication bandwidth for applications that are computation-intensive but not data-intensive is often unfeasible in a practical implementation. This thesis aims to perform architecture-level design space exploration towards efficient and scalable resource utilization for MPSoC communication architectures. In order to meet the performance requirements within the design constraints, careful selection of the MPSoC communication platform and resource-aware partitioning and mapping of the application play an important role. To enhance the utilization of communication resources, a variety of techniques can be used, such as resource sharing, multicast to avoid re-transmission of identical data, and adaptive routing. For implementation, these techniques should be customized according to the platform architecture. To address the resource utilization of MPSoC communication platforms, a variety of architectures with different design parameters and performance levels, namely the Segmented bus (SegBus), Network-on-Chip (NoC) and Three-Dimensional NoC (3D-NoC), are selected. Average packet latency and power consumption are the evaluation parameters for the proposed techniques. In conventional computing architectures, a fault in a component makes the connected fault-free components inoperative. A resource-sharing approach can utilize these fault-free components to retain the system performance by reducing the impact of faults. Design space exploration also helps narrow down the selection of an MPSoC architecture that can meet the performance requirements within the design constraints.
Abstract:
There is much enthusiasm about developing eLearning courses in Nigeria, but the majority of the eLearning platforms introduced to Nigeria from developed countries have hardly produced the desired outcome. The reasons proposed include a lack of infrastructure such as stable electricity, an inadequate rate of internet penetration, low bandwidth, and undergraduates' limited access to sophisticated devices. These seem valid initially, but the findings of this study proved otherwise. This study carried out a deeper evaluation of the scenarios and made viable discoveries that deviate from earlier findings. First, the former attempts to introduce eLearning for students in Nigeria were implemented with a rural mindset. Secondly, the undergraduate students' technical readiness was not properly studied, their technology user acceptance was not properly checked, and the eLearning platforms were not localized. This study conducted interviews among tertiary students at Yaba College of Technology and gathered valuable information about their readiness for eLearning.
Abstract:
In this work, the feasibility of the floating-gate technology in analog computing platforms in a scaled-down general-purpose CMOS technology is considered. When the technology is scaled down, the performance of analog circuits tends to get worse because the process parameters are optimized for digital transistors and the scaling involves the reduction of supply voltages. Generally, the challenge in analog circuit design is that all salient design metrics such as power, area, bandwidth and accuracy are interrelated. Furthermore, poor flexibility, i.e. the lack of reconfigurability, the reuse of IP, etc., can be considered the most severe weakness of analog hardware. On this account, digital calibration schemes are often required for improved performance or yield enhancement, whereas high flexibility/reconfigurability cannot be easily achieved. Here, it is discussed whether it is possible to work around these obstacles by using floating-gate transistors (FGTs), and the problems associated with a practical implementation are analyzed. FGT technology is attractive because it is electrically programmable and also features a charge-based built-in non-volatile memory. Apart from being ideal for canceling circuit non-idealities due to process variations, FGTs can also be used as computational or adaptive elements in analog circuits. The nominal gate oxide thickness in deep sub-micron (DSM) processes is too thin to support robust charge retention, and consequently the FGT becomes leaky. In principle, non-leaky FGTs can be implemented in a scaled-down process without any special masks by using “double”-oxide transistors intended for providing devices that operate with higher supply voltages than general-purpose devices. However, in practice the technology scaling poses several challenges, which are addressed in this thesis. To provide a sufficiently wide-ranging survey, six prototype chips of varying complexity were implemented in four different DSM process nodes and investigated from this perspective. The focus is on non-leaky FGTs, but the presented autozeroing floating-gate amplifier (AFGA) demonstrates that leaky FGTs may also find a use. The simplest test structures contain only a few transistors, whereas the most complex experimental chip is an implementation of a spiking neural network (SNN) which comprises thousands of active and passive devices. More precisely, it is a fully connected (256 FGT synapses) two-layer spiking neural network (SNN), in which the adaptive properties of FGTs are exploited. A compact realization of Spike Timing Dependent Plasticity (STDP) within the SNN is one of the key contributions of this thesis. Finally, the considerations in this thesis extend beyond CMOS to emerging nanodevices. To this end, one promising emerging nanoscale circuit element, the memristor, is reviewed and its applicability for analog processing is considered. Furthermore, it is discussed how the FGT technology can be used to prototype computation paradigms compatible with these emerging two-terminal nanoscale devices in a mature and widely available CMOS technology.
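The abstract mentions a compact realization of Spike Timing Dependent Plasticity (STDP) within the FGT-based SNN. For reference only, the sketch below shows the standard pair-based exponential STDP weight update from the literature, not the thesis's circuit-level implementation; the parameter values are arbitrary placeholders.

    import math

    # Standard pair-based exponential STDP rule (textbook form): potentiation
    # when the presynaptic spike precedes the postsynaptic one, depression
    # otherwise. Parameter values below are arbitrary placeholders.
    A_PLUS, A_MINUS = 0.01, 0.012      # learning rates
    TAU_PLUS, TAU_MINUS = 20.0, 20.0   # time constants in ms

    def stdp_dw(t_pre, t_post):
        """Synaptic weight change for one pre/post spike pair (times in ms)."""
        dt = t_post - t_pre
        if dt > 0:    # pre before post -> strengthen the synapse
            return A_PLUS * math.exp(-dt / TAU_PLUS)
        if dt < 0:    # post before pre -> weaken the synapse
            return -A_MINUS * math.exp(dt / TAU_MINUS)
        return 0.0

    print(stdp_dw(10.0, 15.0))   # positive update (potentiation)
    print(stdp_dw(15.0, 10.0))   # negative update (depression)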
Abstract:
Identification of low-dimensional structures and main sources of variation from multivariate data are fundamental tasks in data analysis. Many methods aimed at these tasks involve the solution of an optimization problem. Thus, the objective of this thesis is to develop computationally efficient and theoretically justified methods for solving such problems. Most of the thesis is based on a statistical model where ridges of the density estimated from the data are considered as relevant features. Finding ridges, which are generalized maxima, necessitates the development of advanced optimization methods. An efficient and convergent trust region Newton method for projecting a point onto a ridge of the underlying density is developed for this purpose. The method is utilized in a differential equation-based approach for tracing ridges and computing projection coordinates along them. The density estimation is done nonparametrically by using Gaussian kernels. This allows the application of ridge-based methods with only mild assumptions on the underlying structure of the data. The statistical model and the ridge finding methods are adapted to two different applications. The first one is the extraction of curvilinear structures from noisy data mixed with background clutter. The second one is a novel nonlinear generalization of principal component analysis (PCA) and its extension to time series data. The methods have a wide range of potential applications where most of the earlier approaches are inadequate. Examples include the identification of faults from seismic data and the identification of filaments from cosmological data. The applicability of the nonlinear PCA to climate analysis and to the reconstruction of periodic patterns from noisy time series data is also demonstrated. Other contributions of the thesis include the development of an efficient semidefinite optimization method for embedding graphs into the Euclidean space. The method produces structure-preserving embeddings that maximize interpoint distances. It is primarily developed for dimensionality reduction, but it also has potential applications in graph theory and various areas of physics, chemistry and engineering. The asymptotic behaviour of ridges and maxima of Gaussian kernel densities is also investigated when the kernel bandwidth approaches infinity. The results are applied to the nonlinear PCA and to finding significant maxima of such densities, which is a typical problem in visual object tracking.
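To make the ridge notion above concrete, the sketch below evaluates a Gaussian kernel density estimate together with its gradient and Hessian and tests the standard one-dimensional ridge condition (the d-1 smallest Hessian eigenvalues are negative and the gradient is orthogonal to their eigenvectors). It is a minimal numerical sketch, not the thesis's trust region Newton method; names and the tolerance are assumptions.

    import numpy as np

    def kde_grad_hess(x, data, h):
        """Gaussian KDE with bandwidth h at point x: value, gradient, Hessian."""
        d = x.shape[0]
        diffs = data - x                                             # (n, d)
        norm = (2.0 * np.pi * h**2) ** (-d / 2.0)
        w = norm * np.exp(-np.sum(diffs**2, axis=1) / (2.0 * h**2))  # kernel values
        f = w.mean()
        grad = (w[:, None] * diffs).mean(axis=0) / h**2
        outer = np.einsum('n,ni,nj->ij', w, diffs, diffs) / len(data)
        hess = outer / h**4 - f * np.eye(d) / h**2
        return f, grad, hess

    def on_ridge(x, data, h, tol=1e-6):
        """1-D ridge test: the d-1 smallest Hessian eigenvalues are negative
        and the gradient has no component along their eigenvectors."""
        _, g, H = kde_grad_hess(x, data, h)
        vals, vecs = np.linalg.eigh(H)      # eigenvalues in ascending order
        across = vecs[:, :-1]               # directions across the ridge
        return bool(np.all(vals[:-1] < 0) and np.linalg.norm(across.T @ g) < tol)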
Abstract:
Distributed storage systems are studied. Interest in such systems has grown considerably due to the increasing amount of information that needs to be stored in data centers or different kinds of cloud systems. There are many kinds of solutions for storing the information on distributed devices, depending on the needs of the system designer. This thesis studies the question of designing such storage systems as well as their fundamental limits. Namely, the subjects of interest of this thesis include heterogeneous distributed storage systems, distributed storage systems with the exact repair property, and locally repairable codes. For distributed storage systems with either functional or exact repair, capacity results are proved. In the case of locally repairable codes, the minimum distance is studied. Constructions of exact-repairing codes between the minimum bandwidth regenerating (MBR) and minimum storage regenerating (MSR) points are given. In many cases, these codes exceed the time-sharing line between the extremal points. Other properties of exact-regenerating codes are also studied. For the heterogeneous setup, the main result is that the capacity of such systems is always smaller than or equal to the capacity of a homogeneous system with symmetric repair, with average node size and average repair bandwidth. A randomized construction of a locally repairable code with good minimum distance is given. It is shown that a random linear code of a certain natural type has a good minimum distance with high probability. Other properties of locally repairable codes are also studied.
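For orientation, the MBR and MSR points mentioned above are the extreme points of the standard storage versus repair-bandwidth trade-off for regenerating codes (the general cut-set-bound formulas from the literature, not results specific to this thesis). For a file of size M stored on n nodes such that any k nodes suffice for reconstruction and a failed node is repaired from d helpers, with per-node storage \alpha and total repair bandwidth \gamma:

    (\alpha_{\mathrm{MSR}}, \gamma_{\mathrm{MSR}})
      = \left(\frac{M}{k},\; \frac{Md}{k(d-k+1)}\right),
    \qquad
    (\alpha_{\mathrm{MBR}}, \gamma_{\mathrm{MBR}})
      = \left(\frac{2Md}{k(2d-k+1)},\; \frac{2Md}{k(2d-k+1)}\right).

The exact-repair constructions discussed in the abstract operate between these two points and, in many cases, beat simple time-sharing between them.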
Abstract:
Reducing energy consumption and studying it are subjects of growing interest. Measuring the generated heat is one way of measuring energy transfer. Temperature measurement is common, although it is often more relevant to determine where and how heat energy has been transferred. For this reason, heat flux sensors are needed that respond directly to heat flux, i.e. the transfer of heat energy. In this study, the measurement electronics of a heat flux sensor is designed and implemented for a demanding operating environment. The voltage signal produced by the gradient heat flux sensor used in this work is in the microvolt range, and the noise caused by the environment can be considerably larger. The sensor signal must therefore be amplified so that it can be measured reliably. The study focuses on the design of the amplifier, but the whole system must be taken into account in the design. The electrical properties of the sensor and the environment impose constraints on the amplifier. The aim is to find out how a microvolt-level voltage signal can be measured over as wide a frequency band as possible in a demanding operating environment. The result of the work was a measurement device that can be used for measuring heat flux in a demanding environment. The designed device achieved the gain, passband and offset-voltage drift specified by the design parameters, but the offset voltage and noise were slightly larger than designed. With the measurement device and the heat flux sensor, changes in heat flux were clearly detected using artificial stimuli.
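As a back-of-the-envelope illustration of the amplification task described above (the numbers are assumptions, not the thesis's design values): mapping a ±100 µV sensor signal onto a ±1 V ADC input requires a gain of

    G = \frac{1\,\mathrm{V}}{100\,\mu\mathrm{V}} = 10^{4},
    \qquad 20\log_{10}(10^{4}) = 80\,\mathrm{dB},

and at such a gain the amplifier's own input-referred noise and offset drift must stay well below the microvolt level for the heat flux signal to remain resolvable.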
Abstract:
The energy consumption of IT equipment is becoming an issue of increasing importance. In particular, network equipment such as routers and switches are major contributors to the energy consumption of the Internet. Therefore it is important to understand the relationship between input parameters, such as bandwidth, number of active ports, traffic load and hibernation mode, and their impact on the energy consumption of a switch. In this paper, the energy consumption of a switch is analyzed in extensive experiments. A fuzzy rule-based model of the energy consumption of a switch is proposed based on the results of the experiments. The model can be used to predict the energy savings when deploying new switches, by controlling the parameters to achieve the desired energy consumption and subsequent performance. Furthermore, the model can also be used for further research on energy-saving techniques such as energy-efficient routing protocols, dynamic link shutdown, etc.
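The paper's actual fuzzy rule base and membership functions are not reproduced here; the sketch below only illustrates the general shape of a fuzzy rule-based power model with one input (link load), two rules, and weighted-average defuzzification. All membership functions, rules and wattages are invented for illustration.

    # Minimal sketch of a fuzzy rule-based power estimate for a switch.
    # Membership functions, rules and output levels are invented; they are
    # not the rule base derived from the paper's experiments.

    def tri(x, a, b, c):
        """Triangular membership: rises from a, peaks at b, falls to c."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def estimate_power(load_percent):
        low  = tri(load_percent, -1, 0, 60)     # membership of "load is low"
        high = tri(load_percent, 40, 100, 101)  # membership of "load is high"
        # Rule 1: IF load is low  THEN power is about 20 W.
        # Rule 2: IF load is high THEN power is about 30 W.
        rules = [(low, 20.0), (high, 30.0)]
        num = sum(weight * watts for weight, watts in rules)
        den = sum(weight for weight, _ in rules)
        return num / den if den else None

    print(estimate_power(30))   # 20.0 W - the low-load rule dominates
    print(estimate_power(70))   # 30.0 W - the high-load rule dominates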
Abstract:
At present, one of the main concerns of green networking is to minimize the power consumption of network infrastructure. Surveys show that the highest amount of power is consumed by network devices during their runtime. However, to control this power consumption it is important to know which factors have the highest impact on it. This paper focuses on measuring and modeling the power consumption of an Ethernet switch during its runtime, considering various types of input parameters in all possible combinations. For the experiment, three input parameters are chosen: bandwidth, link load and number of connections. The output to be measured is the power consumption of the Ethernet switch. Due to the uncertain power consumption pattern of the Ethernet switch, a fully comprehensive experimental evaluation would require an unfeasible and cumbersome experimental phase. Because of that, the design of experiments (DoE) method has been applied to obtain adequate information on the effects of each input parameter on the power consumption. The whole work consists of three parts. In the first part, a test bed is planned with the input parameters and the power consumption of the switch is measured. The second part is about generating a mathematical model with the help of design-of-experiments tools. This model can be used for estimating the power consumption precisely in different scenarios and also for pinpointing the parameters with the greatest influence on power consumption. In the last part, the mathematical model is evaluated by comparing it with the experimental values.
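To make the modeling step concrete, the sketch below shows the general shape of the first-order model with two-factor interactions that a 2^3 factorial design yields, fitted by ordinary least squares. The coded levels and the power readings are hypothetical; the paper's measured values and coefficients are not reproduced.

    import numpy as np

    # First-order DoE model with interactions for switch power consumption:
    #   P = b0 + b1*bw + b2*load + b3*conn
    #         + b12*bw*load + b13*bw*conn + b23*load*conn
    # Factors are in coded units (-1 = low level, +1 = high level).

    def design_matrix(bw, load, conn):
        return np.column_stack([
            np.ones_like(bw), bw, load, conn,      # intercept and main effects
            bw * load, bw * conn, load * conn,     # two-factor interactions
        ])

    # The 8 runs of a full 2^3 factorial design.
    runs = np.array([[x, y, z] for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)],
                    dtype=float)
    bw, load, conn = runs.T
    power = np.array([20.1, 22.3, 20.8, 23.0, 21.5, 24.0, 22.2, 24.9])  # hypothetical watts

    X = design_matrix(bw, load, conn)
    coeffs, *_ = np.linalg.lstsq(X, power, rcond=None)
    print(dict(zip(["b0", "bw", "load", "conn", "bw*load", "bw*conn", "load*conn"],
                   coeffs.round(3))))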
Abstract:
Recent advances in Information and Communication Technology (ICT), especially those related to the Internet of Things (IoT), are facilitating smart regions. Among the many services that a smart region can offer, remote health monitoring is a typical application of the IoT paradigm. It offers the ability to continuously monitor and collect health-related data from a person and transmit the data to a remote entity (for example, a healthcare service provider) for further processing and knowledge extraction. An IoT-based remote health monitoring system can be beneficial in rural areas of the smart region, where people have limited access to regular healthcare services. The same system can be beneficial in urban areas, where hospitals can be overcrowded and where it may take substantial time to obtain healthcare. However, such a system may generate a large amount of data. In order to realize an efficient IoT-based remote health monitoring system, it is imperative to study the network communication needs of such a system, in particular the bandwidth requirements and the volume of generated data. The thesis studies a commercial product for remote health monitoring in Skellefteå, Sweden. Based on the results obtained with the commercial product, the thesis identified the key network-related requirements of a typical remote health monitoring system in terms of real-time event updates, bandwidth requirements and data generation. Furthermore, the thesis proposed an architecture called IReHMo - an IoT-based remote health monitoring architecture. This architecture allows users to incorporate several types of IoT devices to extend the sensing capabilities of the system. Using IReHMo, several IoT communication protocols, namely HTTP, MQTT and CoAP, have been evaluated and compared against each other. The results showed that CoAP is the most efficient protocol for transmitting small healthcare data to remote servers. The combination of IReHMo and CoAP significantly reduced the required bandwidth as well as the volume of generated data (by up to 56 percent) compared to the commercial product. Finally, the thesis conducted a scalability analysis to determine the feasibility of deploying the combination of IReHMo and CoAP at large scale in regions of northern Sweden.
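As a rough illustration of how protocol overhead drives the bandwidth and data-volume figures discussed above, the sketch below estimates the daily data volume per sensor for periodic updates. The per-message overhead values are ballpark assumptions (CoAP over UDP has a 4-byte fixed header plus token and options, while HTTP/TCP headers typically run to hundreds of bytes); they are not the values measured in the thesis.

    # Rough per-sensor data-volume estimate for periodic health-monitoring updates.
    # Overheads below are ballpark assumptions, not measurements from the thesis.
    OVERHEAD_BYTES = {"CoAP/UDP": 30, "MQTT/TCP": 60, "HTTP/TCP": 350}

    def daily_volume(payload_bytes, interval_s, overhead_bytes):
        """Bytes per sensor per day for fixed-size readings sent every interval_s."""
        messages_per_day = 24 * 3600 // interval_s
        return messages_per_day * (payload_bytes + overhead_bytes)

    for proto, overhead in OVERHEAD_BYTES.items():
        kib = daily_volume(payload_bytes=50, interval_s=30, overhead_bytes=overhead) / 1024
        print(f"{proto}: {kib:.0f} KiB/day per sensor")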
Abstract:
Rare-earth based upconverting nanoparticles (UCNPs) have attracted much attention due to their unique luminescent properties. The ability to convert multiple photons of lower energy into one of higher energy through an upconversion (UC) process offers a wide range of applications for UCNPs. The emission intensities and wavelengths of UCNPs are important performance characteristics, which determine the appropriate applications. However, insufficient intensities still limit the use of UCNPs; in particular, the efficient emission of blue and ultraviolet (UV) light via upconversion remains challenging, as these events require three or more near-infrared (NIR) photons. The aim of the study was to enhance the blue and UV upconversion emission intensities of Tm3+ doped NaYF4 nanoparticles and to demonstrate their utility in in vitro diagnostics. As the distance between the sensitizer and the activator significantly affects the energy transfer efficiency, different strategies were explored to change the local symmetry around the doped lanthanides. One important strategy is the intentional co-doping of active (participating in energy transfer) or passive (not participating in energy transfer) impurities into the host matrix. The roles of doped passive impurities (K+ and Sc3+) in enhancing the blue and UV upconversion, as well as the role of excess sensitization (an active impurity) in producing intense UV upconversion emission, were studied. Additionally, the effects of both active and passive impurity doping on the morphological and optical performance of the UCNPs were investigated. The applicability of UV-emitting UCNPs as an internal light source for glucose sensing in a dry chemistry test strip was demonstrated. The measurements were in agreement with the traditional method based on reflectance measurements using an external UV light source. The use of UCNPs in the glucose test strip offers an alternative detection method with advantages such as control signals for minimizing errors and the high penetration of the NIR excitation through the blood sample, which gives more freedom in designing the optical setup. In bioimaging, the excitation of the UCNPs in the transparent IR region of the tissue permits measurements that are free of background fluorescence and have a high signal-to-background ratio. In addition, the narrow emission bandwidth of the UCNPs enables multiplexed detection. An array-in-well immunoassay was developed using two different UC emission colours. The differentiation between different viral infections and the classification of antibody responses were achieved based on both the position and the colour of the signal. The study demonstrates the potential of spectral and spatial multiplexing in imaging-based array-in-well assays.
Abstract:
The number of services based on geographic information and maps has grown considerably in recent years. One enabler of this is likely the European Union's INSPIRE directive, which aims to make the use of spatial datasets more efficient, expand cooperation between authorities, and promote the emergence of versatile citizen services. Another factor is likely the growth in the data transfer capacity of mobile networks. This master's thesis focuses on the development of a generic map component. The thesis reviews various data transfer techniques used for delivering spatial data and the tools that enable the visual presentation of spatial data. After introducing the techniques, it is clarified, with the help of literature sources, what usability and genericity mean. In the next phase of the work, a prototype of a reusable map component is implemented, and a suitable storage solution for different kinds of annotations made on the map is determined. In the final phase, the implementation and the chosen solutions are analyzed. The results of the work are a prototype of a reusable map component and a storage solution for map annotations that fits the company's products. In addition, the results obtained from analyzing the prototype will aid in developing a better production version.
Abstract:
This bachelor's thesis surveys, as a literature study, the general distribution prices of electricity in Finland in 2015. The thesis compares electricity distribution prices for different household consumers and examines the factors affecting the pricing of electricity distribution. It also presents distribution tariffs that may be used in the future. The thesis shows the shares of the individual price components in the total price of electricity distribution and visually illustrates the geographical variation of these components. The variation of distribution charges with different consumption levels is illustrated using three example consumers and by comparing their distribution charges with each other. The geographical variation in the total cost of distribution charges is illustrated with figures. The effects of the regulation by the Energy Authority (Energiavirasto) on distribution prices are examined by means of the network companies' surpluses and deficits. Among future distribution tariffs, attention is focused on band pricing (a capacity-based tariff). Band pricing is examined from the consumer's point of view, and its advantages over the distribution tariffs currently in use are highlighted. Based on the work, it can be concluded that as electricity consumption grows, the distribution charge per unit of energy consumed decreases. In other words, the share of the fixed basic charge is significant at low consumption levels, and its relative share of the distribution charges decreases as consumption grows.
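A small worked example of the abstract's conclusion (with purely hypothetical tariff values, not the 2015 prices surveyed in the thesis): with a fixed basic charge of 10 EUR per month and an energy-based distribution fee of 3 cents/kWh, the effective distribution price per kWh is

    \frac{10\,\text{EUR}}{200\,\text{kWh}} + 3\,\text{c/kWh} = 8\,\text{c/kWh}
    \quad\text{at 200 kWh/month,}
    \qquad
    \frac{10\,\text{EUR}}{1000\,\text{kWh}} + 3\,\text{c/kWh} = 4\,\text{c/kWh}
    \quad\text{at 1000 kWh/month,}

so the per-kWh cost falls as consumption grows because the fixed charge is spread over more energy.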
Abstract:
The subject of this bachelor's thesis is the distortion and noise of a class-D audio amplifier. The purpose is to determine the most significant distortion and noise mechanisms of this amplifier class and to assess whether the disturbances can be reduced with an output filter. The research methods are a literature review and simulation. The material used consists of scientific articles published by the IEEE, application notes from different manufacturers, and books on the subject. The main results were that the most significant distortion mechanism is the distortion caused by the dead time of the transistors, and that the most significant noise originates from the carrier wave used in the modulation. The visibility of the carrier in the load can be influenced with an output low-pass filter. The total harmonic distortion caused by the dead time falls within the bandwidth of music, so it cannot be removed by filtering.
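To illustrate why the carrier can be filtered but the dead-time distortion cannot (component values are assumptions, not taken from the thesis): a second-order LC output low-pass filter with, say, L = 22 µH and C = 680 nF has a cutoff of

    f_c = \frac{1}{2\pi\sqrt{LC}}
        = \frac{1}{2\pi\sqrt{22\,\mu\mathrm{H}\cdot 680\,\mathrm{nF}}}
        \approx 41\,\mathrm{kHz},

which lies above the roughly 20 kHz audio band but far below typical switching-carrier frequencies of several hundred kHz, so the carrier is attenuated while harmonic distortion that falls inside the audio band passes through unchanged.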