68 results for Armer, Chip
Abstract:
Through advances in technology, System-on-Chip design is moving towards integrating tens to hundreds of intellectual property blocks into a single chip. In such a many-core system, on-chip communication becomes a performance bottleneck for high performance designs. Network-on-Chip (NoC) has emerged as a viable solution to the communication challenges in highly complex chips. The NoC architecture paradigm, based on a modular packet-switched mechanism, can address many of the on-chip communication challenges such as wiring complexity, communication latency, and bandwidth. Furthermore, the combined benefits of 3D IC and NoC schemes make it possible to design a high-performance system in a limited chip area. The major advantages of 3D NoCs are the considerable reductions in average latency and power consumption. Several factors degrade the performance of NoCs. In this thesis, we investigate three main performance-limiting factors: network congestion, faults, and the lack of efficient multicast support. We address these issues by means of routing algorithms. Congestion of data packets may lead to increased network latency and power consumption; thus, we propose three different approaches for alleviating such congestion in the network. The first approach is based on measuring the congestion information in different regions of the network, distributing this information over the network, and utilizing it when making routing decisions. The second approach employs a learning method to dynamically find less congested routes according to the underlying traffic. The third approach is based on a fuzzy-logic technique to make better routing decisions when traffic information for different routes is available. Faults affect performance significantly, as packets must then take longer paths to be routed around the faults, which in turn increases congestion around the faulty regions. We propose four methods to tolerate faults at the link and switch level by using only the shortest paths as long as such a path exists. The distinguishing characteristic of these methods is that they tolerate faults while maintaining the performance of the NoC. To the best of our knowledge, these algorithms are the first approaches to bypass faults before reaching them while avoiding unnecessary misrouting of packets. Current implementations of multicast communication result in a significant performance loss for unicast traffic, because the routing rules of multicast packets limit the adaptivity of unicast packets. We present an approach in which both unicast and multicast packets can be efficiently routed within the network. While offering more efficient multicast support, the proposed approach does not affect the performance of unicast routing at all. In addition, in order to reduce the overall path length of multicast packets, we present several partitioning methods along with analytical models for estimating their latency. This approach is discussed in the context of 3D mesh networks.
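As a rough illustration of the first congestion-aware approach described above (steering packets onto the less congested of the available minimal paths), the following Python sketch models the routing decision at a single switch in a 2D mesh. The direction names, the congestion metric and the mesh assumptions are illustrative and are not taken from the thesis.

```python
# Minimal sketch of congestion-aware minimal routing in a 2D mesh NoC.
# All names and the congestion metric are illustrative assumptions,
# not taken from the thesis.

def minimal_directions(src, dst):
    """Return the output directions that keep the packet on a shortest path."""
    (sx, sy), (dx, dy) = src, dst
    dirs = []
    if dx > sx: dirs.append("E")
    if dx < sx: dirs.append("W")
    if dy > sy: dirs.append("N")
    if dy < sy: dirs.append("S")
    return dirs

def route(src, dst, congestion):
    """Pick the admissible minimal direction whose downstream region
    reports the lowest congestion (e.g. buffer occupancy)."""
    candidates = minimal_directions(src, dst)
    if not candidates:          # already at the destination
        return None
    return min(candidates, key=lambda d: congestion.get(d, 0.0))

# Example: two minimal choices, east side of the network is busier.
print(route((1, 1), (3, 3), {"E": 0.8, "N": 0.2, "W": 0.1, "S": 0.5}))  # -> "N"
```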
Abstract:
A new district heating plant fuelled with wood chips is planned to be built in Lempäälä. At present, district heat in Lempäälä is produced with natural gas, and the aim is to move from natural gas to biofuel available from the surrounding region. This work investigates the benefits that could be obtained from mechanical drying of wood chips. The second objective is to plan and consider the construction of a biofuel terminal and to discuss the storage of wood chips in general. Wood chips are examined on the basis of the literature on the subject. The benefits of drying for the heating value and energy density of the chips are also calculated. Particular attention is paid to forest residue chips, stem wood chips, bark chips and sawdust. The calculations show that the greatest improvement in energy density is obtained when the chips are dried to a moisture content of 35%; beyond that point, the energy density improves more slowly. Drying also brings benefits other than improved energy density: dry chips have been found to be easier to handle and store than wet chips. The biofuel terminal and the power plant should be located next to each other so that chip drying is as cost-effective as possible, which would also yield savings in chip transport. According to preliminary calculations, about one hectare of land would be needed for the biofuel terminal. The construction costs of the biofuel terminal and the logistics costs of chip transport are also calculated, and possible locations for the chip terminal and the power plant in Lempäälä are surveyed.
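The kind of calculation the abstract refers to can be sketched with the standard moisture correction for the net calorific value of a fuel, q_net,ar = q_net,dry * (1 - M) - 2.443 * M (MJ/kg, with M the moisture fraction of total mass). The Python sketch below assumes a typical dry-basis heating value for wood rather than the thesis's actual figures:

```python
# Rough sketch of the relationship between wood-chip moisture content and
# energy content, used only to illustrate the kind of calculation the thesis
# performs. The dry-basis heating value and the constants are assumptions
# (typical textbook values), not figures from the thesis.

LATENT_HEAT = 2.443       # MJ/kg, evaporation heat of water (approx.)
Q_NET_DRY = 19.0          # MJ/kg, assumed net calorific value of dry wood

def net_heating_value_ar(moisture):
    """Net calorific value of wood chips as received (MJ/kg),
    for a moisture fraction 0..1 of total mass."""
    return Q_NET_DRY * (1.0 - moisture) - LATENT_HEAT * moisture

for m in (0.55, 0.45, 0.35, 0.25):
    q = net_heating_value_ar(m)
    print(f"moisture {m:.0%}: {q:.2f} MJ/kg = {q / 3.6:.2f} MWh/t")
```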
Abstract:
In accordance with Moore's law, the increasing number of on-chip integrated transistors has enabled modern computing platforms with not only higher processing power but also more affordable prices. As a result, these platforms, including portable devices, workstations and data centres, are becoming an inevitable part of human society. However, with the demand for portability and the rising cost of power, energy efficiency has emerged as a major concern for modern computing platforms. As the complexity of on-chip systems increases, Network-on-Chip (NoC) has proved to be an efficient communication architecture which can further improve system performance and scalability while reducing the design cost. Therefore, in this thesis, we study and propose energy optimization approaches based on the NoC architecture, with a special focus on the following aspects. As the architectural trend of future computing platforms, 3D systems have many benefits including higher integration density, smaller footprint, heterogeneous integration, etc. Moreover, 3D technology can significantly improve network communication and effectively avoid long wires, and therefore provide higher system performance and energy efficiency. Given the dynamic nature of on-chip communication in large-scale NoC-based systems, run-time system optimization is of crucial importance in order to achieve higher system reliability and, ultimately, energy efficiency. In this thesis, we propose an agent-based system design approach in which agents are on-chip components that monitor and control system parameters such as supply voltage, operating frequency, etc. With this approach, we have analysed the implementation alternatives for dynamic voltage and frequency scaling and power gating techniques at different granularities, which reduce both dynamic and leakage energy consumption. Topologies, being one of the key factors for NoCs, are also explored for energy-saving purposes. A Honeycomb NoC architecture is proposed in this thesis with turn-model based deadlock-free routing algorithms. Our analysis and simulation-based evaluation show that Honeycomb NoCs outperform their Mesh-based counterparts in terms of network cost, system performance and energy efficiency.
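The dynamic voltage and frequency scaling mentioned above rests on the relation P_dyn ≈ α·C·V²·f: lowering the supply voltage together with the frequency gives a roughly cubic reduction in dynamic power for a linear loss of throughput. Below is a minimal sketch of the kind of decision an on-chip agent might make; the voltage/frequency pairs and switching parameters are made up for the illustration.

```python
# Illustrative sketch of the energy trade-off behind DVFS, the kind of
# run-time knob an on-chip agent could control. The voltage/frequency pairs
# and the activity/capacitance numbers are made-up assumptions.

OPERATING_POINTS = [            # (frequency in GHz, supply voltage in V)
    (1.0, 1.10),
    (0.8, 1.00),
    (0.6, 0.90),
    (0.4, 0.80),
]

def dynamic_power(freq_ghz, vdd, activity=0.2, cap_nf=1.0):
    """P_dyn ~ activity * C * Vdd^2 * f (result in watts for these units)."""
    return activity * cap_nf * 1e-9 * vdd ** 2 * freq_ghz * 1e9

def pick_operating_point(required_ghz):
    """Lowest-power point that still delivers the required throughput."""
    feasible = [(f, v) for f, v in OPERATING_POINTS if f >= required_ghz]
    return min(feasible, key=lambda fv: dynamic_power(*fv))

f, v = pick_operating_point(0.55)
print(f, v, f"{dynamic_power(f, v):.3f} W")   # -> 0.6 GHz at 0.90 V
```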
Abstract:
Today's networked systems are becoming increasingly complex and diverse. Current simulation and runtime verification techniques do not provide support for developing such systems efficiently; moreover, the reliability of the simulated/verified systems is not thoroughly ensured. To address these challenges, the use of formal techniques to reason about network system development is growing, while at the same time the mathematical background necessary for using formal techniques is a barrier that prevents network designers from employing them efficiently. Thus, these techniques are not widely used for developing networked systems. The objective of this thesis is to propose formal approaches for the development of reliable networked systems while taking efficiency into account. With respect to reliability, we propose the architectural development of correct-by-construction networked system models. With respect to efficiency, we propose reusable network architectures as well as reusable network developments. At the core of our development methodology, we employ abstraction and refinement techniques for the development and analysis of networked systems. We evaluate our proposal by applying the proposed architectures to a pervasive class of dynamic networks, i.e., wireless sensor network architectures, as well as to a pervasive class of static networks, i.e., network-on-chip architectures. The ultimate goal of our research is to put forward the idea of building libraries of pre-proved rules for the efficient modelling, development, and analysis of networked systems. We take into account both the qualitative and quantitative analysis of networks via varied formal tool support, using a theorem prover (the Rodin platform) and a statistical model checker (SMC-Uppaal).
Abstract:
M-treated steels with improved machinability have been used in industry for over 20 years. Thanks to their properties, M-steels have made it possible to reduce machining costs and improve competitiveness. In recent years, however, cutting tools and machine tools have developed, and the gap between M-steels and conventional steels may have narrowed. The aim of this master's thesis was to study whether the use of M-treated steel provides economic benefits under modern machine-shop conditions. The study compared the machining of M-treated and conventional 42CrMo4 steel. The machining trials examined tool wear, chip form and surface quality. The test piece was a shouldered hexagon nut with an M64 thread. More than 500 parts were produced and about 2000 kg of material was removed. Based on the test results, machining costs for the parts were calculated for a hypothetical company. The difference between the materials was greatest in operations where cutting was continuous. In internal and external turning, the tool life achieved when cutting M-treated steel was roughly double, and in thread turning roughly four times, that achieved with conventional steel. In interrupted cutting, tool life was the same for both materials. Based on the experiments and cost calculations carried out in this work, using M-treated steel can reduce manufacturing costs. The difference between the materials is emphasized when there is little interrupted cutting, batch sizes are large and production is unmanned.
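A per-part machining cost model of the kind referred to above can be sketched as machine time plus the share of insert-edge and tool-change cost attributable to the part. The Python sketch below uses invented rates, times and tool prices purely to show how a doubled tool life feeds into the cost difference:

```python
# A minimal, hypothetical per-part machining cost model, illustrating how
# doubled tool life translates into cost savings. All rates, times and tool
# prices below are assumptions, not figures from the thesis.

def cost_per_part(cut_time_min, tool_life_min, machine_rate=1.0,
                  edge_price=10.0, tool_change_min=2.0):
    """Machining cost of one part: machine time plus the share of
    insert-edge wear and tool-change time attributable to the part."""
    edges_used = cut_time_min / tool_life_min
    return (cut_time_min * machine_rate
            + edges_used * (edge_price + tool_change_min * machine_rate))

conventional = cost_per_part(cut_time_min=6.0, tool_life_min=30.0)
m_treated = cost_per_part(cut_time_min=6.0, tool_life_min=60.0)  # ~2x tool life
print(f"conventional: {conventional:.2f} EUR, M-treated: {m_treated:.2f} EUR")
```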
Abstract:
The European Organization for Nuclear Research (CERN) operates the largest particle collider in the world. This particle collider is called the Large Hadron Collider (LHC), and it will undergo a maintenance break sometime in 2017 or 2018. During the break, the particle detectors that operate around the collider will be serviced and upgraded. Following the improvement in the collider's performance, the requirements for the detector electronics will become more demanding. In particular, the high amount of radiation during the operation of the collider sets requirements for the electronics that are uncommon in commercial electronics. Electronics built to function in the challenging environment of the collider have been designed at CERN. In order to meet the future challenges of data transmission, a GigaBit Transceiver data transmission module and an E-Link data bus have been developed. The next generation of readout electronics is designed to benefit from these technologies. However, the current readout electronics chips are not compatible with them. As a result, in addition to new Gas Electron Multiplier (GEM) detectors and other technology, a new compatible chip is being developed to function within the GEMs for the Compact Muon Solenoid (CMS) project. The objective of this thesis was to study a data transmission interface that will be located on the readout chip between the E-Link bus and the control logic of the chip. The function of the module is to handle data transmission between the chip and the E-Link. In the study, a model of the interface was implemented in the Verilog hardware description language and simulated using chip design software from Cadence. The state machines and operating principles, together with alternative implementation possibilities, are introduced as part of the E-Link interface design procedure. The functionality of the designed logic is demonstrated in the simulation results, in which the implemented model is shown to be suitable for its task. Finally, suggestions for improving the design are presented.
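To give a flavour of the receive-side state-machine logic discussed above, the following Python sketch models a trivial serial interface that hunts for a synchronization pattern and then collects fixed-size frames. The states, the sync pattern and the 8-bit frame size are invented for the sketch and do not reflect the actual E-Link protocol or the Verilog implementation.

```python
# A highly simplified, hypothetical state machine for a serial data-bus
# interface, only to illustrate the kind of receive logic described above.
# The states, the sync pattern and the 8-bit frame size are invented.

SYNC_PATTERN = [1, 0, 1, 0, 1, 0, 1, 0]

def receive(bits):
    """Collect 8-bit frames that follow a sync pattern on a serial line."""
    state, shift, frames = "HUNT", [], []
    for b in bits:
        if state == "HUNT":
            shift = (shift + [b])[-8:]          # sliding window over the line
            if shift == SYNC_PATTERN:
                state, shift = "DATA", []
        elif state == "DATA":
            shift.append(b)
            if len(shift) == 8:
                frames.append(shift)
                state, shift = "HUNT", []
    return frames

print(receive(SYNC_PATTERN + [1, 1, 0, 0, 1, 0, 1, 1]))  # -> one 8-bit frame
```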
Abstract:
Multiprocessor system-on-chip (MPSoC) designs utilize the available technology and communication architectures to meet the requirements of upcoming applications. In an MPSoC, the communication platform is both the key enabler and the key differentiator for realizing efficient designs. It provides product differentiation to meet a diverse, multi-dimensional set of design constraints, including performance, power, energy, reconfigurability, scalability, cost, reliability and time-to-market. The communication resources of a single interconnection platform cannot be fully utilized by all kinds of applications; for example, providing high communication bandwidth to applications that are computation-intensive but not data-intensive is often not feasible in a practical implementation. This thesis aims to perform architecture-level design space exploration towards efficient and scalable resource utilization for MPSoC communication architectures. In order to meet the performance requirements within the design constraints, careful selection of the MPSoC communication platform and resource-aware partitioning and mapping of the application play an important role. To enhance the utilization of communication resources, a variety of techniques can be used, such as resource sharing, multicast to avoid re-transmission of identical data, and adaptive routing. For implementation, these techniques should be customized according to the platform architecture. To address the resource utilization of MPSoC communication platforms, a variety of architectures with different design parameters and performance levels, namely the Segmented bus (SegBus), Network-on-Chip (NoC) and Three-Dimensional NoC (3D-NoC), are selected. Average packet latency and power consumption are the evaluation parameters for the proposed techniques. In conventional computing architectures, a fault in a component makes the connected fault-free components inoperative. A resource-sharing approach can utilize the fault-free components to retain system performance by reducing the impact of faults. Design space exploration also helps narrow down the selection of the MPSoC architecture that can meet the performance requirements within the design constraints.
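Average packet latency, one of the two evaluation parameters mentioned above, is simply the mean time from packet injection to delivery. A trivial Python sketch with invented timestamps:

```python
# Trivial sketch of the "average packet latency" metric used to evaluate the
# interconnect alternatives; the packet records below are invented.

def average_latency(packets):
    """Mean of (arrival_cycle - injection_cycle) over all delivered packets."""
    latencies = [arrival - injection for injection, arrival in packets]
    return sum(latencies) / len(latencies)

trace = [(0, 14), (3, 21), (5, 17), (9, 40)]   # (injection, arrival) cycles
print(f"average packet latency: {average_latency(trace):.1f} cycles")
```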
Abstract:
Activated T helper (Th) cells have the ability to differentiate into functionally distinct Th1, Th2 and Th17 subsets through a series of overlapping networks, including signaling, transcriptional control and epigenetic mechanisms, that direct immune responses. However, inappropriate execution of the differentiation process and abnormal function of these Th cells can lead to the development of several immune-mediated diseases. Therefore, this thesis aimed at identifying genes and gene regulatory mechanisms responsible for Th17 differentiation and at studying epigenetic changes associated with the early stage of Th1/Th2 cell differentiation. Genome-wide transcriptional profiling during the early stages of human Th17 cell differentiation demonstrated differential regulation of several novel and currently known genes associated with Th17 differentiation. Selected candidate genes were further validated at the protein level, and their specificity for Th17 cells compared to other T helper subsets was analyzed. Moreover, a combination of RNA interference-mediated downregulation of gene expression, genome-wide transcriptome profiling and chromatin immunoprecipitation followed by massively parallel sequencing (ChIP-seq), combined with computational data integration, led to the identification of the direct and indirect target genes of STAT3, a pivotal upstream transcription factor for Th17 cell polarization. The results indicated that STAT3 directly regulates the expression of several genes known to play a role in the activation, differentiation, proliferation and survival of Th17 cells. These results provide a basis for constructing a network regulating gene expression during early human Th17 differentiation. Th1 and Th2 lineage-specific enhancers were identified from genome-wide maps of histone modifications generated from cells differentiating towards the Th1 and Th2 lineages at 72 h. Further analysis of the lineage-specific enhancers revealed known and novel transcription factors that potentially control lineage-specific gene expression. Finally, we found an overlap between a subset of enhancers and SNPs associated with autoimmune diseases in GWASs, suggesting a potential role for enhancer elements in disease development. In conclusion, the results obtained have extended our knowledge of Th differentiation and provided new mechanistic insights into the dysregulation of Th cell differentiation in human immune-mediated diseases.
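The integration step described above (combining STAT3 ChIP-seq binding with expression changes after STAT3 knockdown) can be caricatured as a set operation: genes that are both bound and responsive are candidate direct targets, while responsive but unbound genes are candidate indirect targets. The gene lists and the classification rule in this Python sketch are illustrative only, not results from the thesis:

```python
# Simplified sketch of separating "direct" from "indirect" transcription-factor
# targets by integrating ChIP-seq binding with knockdown expression data.
# Gene names and the classification rule are illustrative assumptions only.

stat3_bound = {"IL23R", "BATF", "RORC", "IL17F"}          # genes with a ChIP-seq peak
changed_on_knockdown = {"RORC", "IL17F", "IL21", "CCR6"}  # differentially expressed

direct_targets = stat3_bound & changed_on_knockdown       # bound AND responsive
indirect_targets = changed_on_knockdown - stat3_bound     # responsive, not bound

print("direct:", sorted(direct_targets))
print("indirect:", sorted(indirect_targets))
```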
Abstract:
Advancements in IC processing technology have led to the innovation and growth in the consumer electronics sector and to the evolution of the IT infrastructure supporting this exponential growth. One of the most difficult obstacles to this growth is the removal of the large amount of heat generated by the processing and communicating nodes of the system. The scaling down of technology and the increase in power density have a direct and consequential effect on the rise in temperature. This has resulted in increased cooling budgets, and affects both the lifetime reliability and the performance of the system. Hence, reducing on-chip temperatures has become a major design concern for modern microprocessors. This dissertation addresses the thermal challenges at different levels for both 2D planar and 3D stacked systems. It proposes a self-timed thermal monitoring strategy based on the liberal use of on-chip thermal sensors, which makes use of noise-variation-tolerant and leakage-current-based thermal sensing for monitoring purposes. In order to study thermal management issues from the early design stages, accurate thermal modeling and analysis at design time is essential. In this regard, the spatial temperature profile of global Cu nanowires for on-chip interconnects has been analyzed. A 3D thermal model of a multicore system is presented in order to investigate the effects of hotspots and of the placement of silicon die layers on the thermal performance of a modern flip-chip package. For a 3D stacked system, the primary design goal is to maximise performance within the given power and thermal envelopes. Hence, a thermally efficient routing strategy for 3D NoC-Bus hybrid architectures has been proposed to mitigate on-chip temperatures by herding most of the switching activity to the die closest to the heat sink. Finally, an exploration of various thermal-aware placement approaches for both 2D and 3D stacked systems is presented. Various thermal models have been developed and thermal control metrics extracted. An efficient thermal-aware application mapping algorithm for a 2D NoC is presented, and the proposed mapping algorithm is shown to reduce the effective chip area subjected to high temperatures compared to the state of the art.
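The thermally efficient routing idea above, herding switching activity onto the die closest to the heat sink, can be sketched as a layer-selection heuristic: the long in-plane portion of a 3D route is carried on the coolest admissible layer. The layer numbering, thermal weights and temperatures below are assumptions made for this illustration, not the dissertation's model:

```python
# A toy sketch of the "route on the die closest to the heat sink" idea:
# the in-plane part of a 3D route is carried out on the layer with the
# lowest thermally weighted cost. Layer numbering and weights are assumed.

THERMAL_WEIGHT = {0: 1.0, 1: 1.6, 2: 2.3}   # layer 0 assumed nearest the sink

def pick_routing_layer(src_layer, dst_layer, temps):
    """Among the layers the packet must traverse anyway (between source and
    destination layers), prefer the cheapest one for the XY hops."""
    lo, hi = sorted((src_layer, dst_layer))
    candidates = range(lo, hi + 1)
    return min(candidates, key=lambda l: THERMAL_WEIGHT[l] * temps[l])

temps = {0: 55.0, 1: 68.0, 2: 74.0}          # current layer temperatures (°C)
print(pick_routing_layer(src_layer=2, dst_layer=0, temps=temps))  # -> 0
```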
Abstract:
Heat shock factors (HSFs) are an evolutionarily well-conserved family of transcription factors that coordinate stress-induced gene expression and direct versatile physiological processes in eukaryotic organisms. The essentiality of HSFs for cellular homeostasis has been well demonstrated, mainly through HSF1-induced transcription of heat shock protein (HSP) genes. HSFs are important regulators of many fundamental processes such as gametogenesis, metabolic control and aging, and are involved in pathological conditions including cancer progression and neurodegenerative diseases. In each of the HSF-mediated processes, however, the detailed mechanisms of the HSF family members and their complete sets of target genes have remained unknown. Recently, rapid advances in chromatin studies have enabled genome-wide characterization of protein binding sites at high resolution and in an unbiased manner. In this PhD thesis, these novel methods, which are based on chromatin immunoprecipitation (ChIP), are utilized, and the genome-wide target loci of HSF1 and HSF2 are identified in cellular stress responses and in developmental processes. The thesis and its original publications characterize the individual and shared target genes of HSF1 and HSF2, describe HSF1 as a potent transactivator, and identify HSF2 as an epigenetic regulator that coordinates gene expression throughout cell cycle progression. In male gametogenesis, novel physiological functions for HSF1 and HSF2 are revealed, and HSFs are shown to control the expression of X- and Y-chromosomal multicopy genes in a silenced chromatin environment. In stressed human cells, HSF1 and HSF2 are shown to coordinate the expression of a wide variety of genes, including genes for the chaperone machinery, ubiquitin, and regulators of cell cycle progression and signaling. These results highlight the importance of cell type and cell cycle phase in transcriptional responses, reveal the myriad of processes that are adjusted in a stressed cell, and describe novel mechanisms that maintain transcriptional memory in mitotic cell division.
Abstract:
Today, society strives for renewable and environmentally friendly energy production. The use of biofuels reduces the share of fossil fuels in energy production. For biofuels to replace fossil fuels, they must be refined. The purpose of this master's thesis is to examine the significance of wood-chip refining for the use and profitability of wood chips. Drying and screening can improve the handling and combustion properties of wood chips. The importance of moisture content and homogeneity increases when chips are used in small boilers: in small boilers, the efficiency of heat production decreases significantly as the moisture content increases, so fuel consumption and energy production costs rise. In larger boilers, good-quality chips can replace more expensive reserve and peak-load fuels such as oil, thereby reducing the share of fossil fuels. In addition, drying and screening are inexpensive refining processes compared with, for example, pellet production.
Abstract:
The objective of this work was to study the effects of partial removal of wood hemicelluloses on the properties of kraft pulp. The work was conducted by extracting hemicelluloses (1) by a softwood chip pretreatment process prior to kraft pulping, (2) by alkaline extraction from bleached birch kraft pulp, and (3) by enzymatic treatment, xylanase treatment in particular, of bleached birch kraft pulp. The qualitative and quantitative changes in fibers and paper properties were evaluated. In addition, the applicability of the extraction concepts and of hemicellulose-extracted birch kraft pulp as a raw material in papermaking was evaluated in a pilot-scale papermaking environment. The results showed that each examined hemicellulose extraction method has its characteristic effects on fiber properties, seen as differences in both the physical and chemical nature of the fibers. A prehydrolysis process prior to kraft pulping offered reductions in cooking time and bleaching chemical consumption and produced fibers with low hemicellulose content that are more susceptible to mechanically induced damage and dislocations. Softwood chip pretreatment for hemicellulose recovery prior to cooking, whether acidic or alkaline, had an impact on the physical properties of both non-refined and refined pulp. In addition, all the pretreated pulps exhibited a slower beating response than the unhydrolyzed reference pulp. Both alkaline extraction and enzymatic (xylanase) treatment of bleached birch kraft pulp fibers showed very selective hemicellulose removal, particularly xylan removal. Furthermore, these two hemicellulose-extracted birch kraft pulps were utilized in a pilot-scale papermaking environment in order to evaluate the upscalability of the extraction concepts. Pilot paper machine trials revealed that some amount of alkaline-extracted birch kraft pulp, with a 24.9% reduction in the total amount of xylan, could be used in the papermaking stock as a mixture with non-extracted pulp when producing 75 g/m2 paper. For xylanase-treated fibers there were no reductions in the mechanical properties of the 180 g/m2 paper produced compared to paper made from the control pulp, although there was a 14.2% reduction in the total amount of xylan in the xylanase-treated pulp compared to the control birch kraft pulp. This work emphasized the importance of the hemicellulose extraction method in providing new solutions for creating functional fibers and a valuable hemicellulose co-product stream. The hemicellulose removal concept therefore plays an important role in the integrated forest biorefinery scenario, where the target is the co-production of hemicellulose-extracted pulp and hemicellulose-based chemicals or fuels.
Abstract:
In this work, the feasibility of floating-gate technology in analog computing platforms in a scaled-down general-purpose CMOS technology is considered. When the technology is scaled down, the performance of analog circuits tends to degrade because the process parameters are optimized for digital transistors and the scaling involves a reduction of supply voltages. Generally, the challenge in analog circuit design is that all salient design metrics, such as power, area, bandwidth and accuracy, are interrelated. Furthermore, poor flexibility, i.e. the lack of reconfigurability, IP reuse, etc., can be considered the most severe weakness of analog hardware. On this account, digital calibration schemes are often required for improved performance or yield enhancement, whereas high flexibility/reconfigurability cannot be easily achieved. Here, it is discussed whether it is possible to work around these obstacles by using floating-gate transistors (FGTs), and the problems associated with a practical implementation are analyzed. FGT technology is attractive because it is electrically programmable and also features a charge-based built-in non-volatile memory. Apart from being ideal for canceling circuit non-idealities due to process variations, FGTs can also be used as computational or adaptive elements in analog circuits. The nominal gate oxide thickness in deep sub-micron (DSM) processes is too thin to support robust charge retention, and consequently the FGT becomes leaky. In principle, non-leaky FGTs can be implemented in a scaled-down process without any special masks by using “double”-oxide transistors intended for providing devices that operate with higher supply voltages than general-purpose devices. In practice, however, technology scaling poses several challenges, which are addressed in this thesis. To provide a sufficiently wide-ranging survey, six prototype chips of varying complexity were implemented in four different DSM process nodes and investigated from this perspective. The focus is on non-leaky FGTs, but the presented autozeroing floating-gate amplifier (AFGA) demonstrates that leaky FGTs may also find a use. The simplest test structures contain only a few transistors, whereas the most complex experimental chip is an implementation of a spiking neural network (SNN) comprising thousands of active and passive devices. More precisely, it is a fully connected (256 FGT synapses) two-layer SNN in which the adaptive properties of the FGT are taken advantage of. A compact realization of Spike-Timing-Dependent Plasticity (STDP) within the SNN is one of the key contributions of this thesis. Finally, the considerations in this thesis extend beyond CMOS to emerging nanodevices. To this end, one promising emerging nanoscale circuit element, the memristor, is reviewed and its applicability to analog processing is considered. Furthermore, it is discussed how FGT technology can be used to prototype computation paradigms compatible with these emerging two-terminal nanoscale devices in a mature and widely available CMOS technology.
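The pair-based form of Spike-Timing-Dependent Plasticity mentioned above strengthens a synapse when the presynaptic spike precedes the postsynaptic one and weakens it otherwise, with an exponential dependence on the spike-time difference. A minimal Python sketch with generic textbook constants (not the FGT circuit's parameters):

```python
# Minimal sketch of pair-based Spike-Timing-Dependent Plasticity (STDP), the
# learning rule whose compact FGT realization is highlighted above. The time
# constants and amplitudes are generic textbook values.
import math

TAU_PLUS, TAU_MINUS = 20.0, 20.0     # ms
A_PLUS, A_MINUS = 0.01, 0.012        # potentiation / depression amplitudes

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt >= 0:                                   # pre before post -> potentiate
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    return -A_MINUS * math.exp(dt / TAU_MINUS)    # post before pre -> depress

print(f"{stdp_dw(10.0, 15.0):+.4f}")   # causal pair, small potentiation
print(f"{stdp_dw(15.0, 10.0):+.4f}")   # anti-causal pair, small depression
```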
Abstract:
This thesis presents a novel design paradigm, called Virtual Runtime Application Partitions (VRAP), to judiciously utilize on-chip resources. As the dark silicon era approaches, in which power considerations will allow only a fraction of the chip to be powered on, judicious resource management will become a key consideration in future designs. Most works on resource management treat only the physical components (i.e. computation, communication, and memory blocks) as resources and manipulate the component-to-application mapping to optimize various parameters (e.g. energy efficiency). To further enhance the optimization potential, in addition to the physical resources we propose to manipulate abstract resources (i.e. the voltage/frequency operating point, fault-tolerance strength, degree of parallelism, and configuration architecture). The proposed framework (i.e. VRAP) encapsulates methods, algorithms, and hardware blocks to provide each application with the abstract resources tailored to its needs. To test the efficacy of this concept, we have developed three distinct self-adaptive environments: (i) the Private Operating Environment (POE), (ii) the Private Reliability Environment (PRE), and (iii) the Private Configuration Environment (PCE), which collectively ensure that each application meets its deadlines using minimal platform resources. In this work several novel architectural enhancements, algorithms and policies are presented to realize the virtual runtime application partitions efficiently. Considering future design trends, we have chosen Coarse-Grained Reconfigurable Architectures (CGRAs) and Networks on Chip (NoCs) to test the feasibility of our approach. Specifically, we have chosen the Dynamically Reconfigurable Resource Array (DRRA) and McNoC as the representative CGRA and NoC platforms. The proposed techniques are compared and evaluated using a variety of quantitative experiments. Synthesis and simulation results demonstrate that VRAP significantly enhances energy and power efficiency compared to the state of the art.
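The resource-allocation problem VRAP targets can be caricatured as giving each application just enough of the platform to meet its deadline so that the remaining resources can stay dark. The workloads, deadlines and core budget in this Python sketch are invented for illustration:

```python
# A toy sketch of the dark-silicon style resource problem described above:
# give every application just enough of the platform to meet its deadline,
# so the rest can stay powered off. All numbers are invented.
import math

APPS = {                      # name: (work in core-ms, deadline in ms)
    "video":  (80.0, 10.0),
    "audio":  (12.0,  8.0),
    "sensor": ( 3.0, 20.0),
}
CORE_BUDGET = 12              # cores that may be powered on simultaneously

def minimal_cores(work, deadline):
    """Smallest core count whose ideal parallel runtime meets the deadline."""
    return math.ceil(work / deadline)

allocation = {app: minimal_cores(w, d) for app, (w, d) in APPS.items()}
used = sum(allocation.values())
print(allocation, f"-> {used}/{CORE_BUDGET} cores powered on")
```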
Abstract:
The Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN) will have a Long Shutdown sometime during 2017 or 2018. During this time there will be maintenance and an opportunity to install new detectors. After the shutdown, the LHC will have a higher luminosity. A promising new type of detector for this high-luminosity phase is the Triple-GEM detector. During the shutdown, these detectors will be installed at the Compact Muon Solenoid (CMS) experiment. The Triple-GEM detectors are now being developed at CERN and, alongside them, a readout ASIC chip for the detector. In this thesis, a simulation model was developed for the ASIC's analog front end. The model will help to carry out more extensive simulations and to simulate the whole chip before the design is finished. The proper functioning of the model was tested with simulations, which are also presented in the thesis.
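As an illustration of what a behavioural model of a readout chip's analog front end can look like, the sketch below evaluates the classic peak-normalized CR-RC ("semi-Gaussian") shaper response v(t) = Q·G·(t/τ)·e^(1 - t/τ). The abstract does not specify the real front end's topology or parameters, so the shaping time and gain here are pure assumptions:

```python
# A generic behavioural model of a charge-sensitive front end followed by a
# first-order CR-RC ("semi-Gaussian") shaper, often used as a starting point
# when modelling readout-chip analog front ends. The shaping time and gain
# below are pure assumptions, not the real chip's parameters.
import math

TAU_NS = 25.0            # assumed shaping time constant (ns)
GAIN_MV_PER_FC = 10.0    # assumed charge gain (mV per fC)

def shaper_output(t_ns, charge_fc):
    """Peak-normalized CR-RC impulse response scaled by input charge:
    v(t) = Q * gain * (t/tau) * exp(1 - t/tau), peaking at t = tau."""
    if t_ns <= 0:
        return 0.0
    x = t_ns / TAU_NS
    return charge_fc * GAIN_MV_PER_FC * x * math.exp(1.0 - x)

for t in (5, 25, 50, 100):
    print(f"t = {t:3d} ns: {shaper_output(t, charge_fc=2.0):6.2f} mV")
```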