921 results for ChIP-chip
Abstract:
Modern society is pursuing renewable and environmentally sound energy production. The use of biofuels reduces the share of fossil fuels in energy production, but for biofuels to replace fossil fuels they must be upgraded. The purpose of this Master's thesis is to examine the significance of upgrading for the use and profitability of wood chips. Drying and screening can improve the handling and combustion properties of chips. The importance of moisture and quality uniformity grows when chips are burned in small boilers, where the efficiency of heat production falls significantly as moisture increases, raising fuel consumption and energy production costs. In larger boilers, good-quality chips can replace more expensive reserve and peak-load fuels such as oil, reducing the share of fossil fuels. Moreover, drying and screening are inexpensive upgrading processes compared with, for example, pellet production.
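To make the moisture effect concrete, the sketch below applies the standard relation for the net calorific value of fuel as received (following the EN 14918 convention, with 2.443 MJ/kg accounting for the vaporization of water); the dry-basis heating value of 19 MJ/kg is an assumed typical figure, not a number from the thesis.

```python
# Illustrative sketch (not from the thesis): how moisture content reduces
# the net calorific value of wood chips as received. Uses the standard
# relation q_net,ar = q_net,dry * (1 - w) - 2.443 * w, where 2.443 MJ/kg
# is the latent heat of vaporization of water (EN 14918 convention).

def net_cv_as_received(q_net_dry_mj_per_kg: float, moisture_fraction: float) -> float:
    """Net calorific value of the fuel as received (MJ/kg)."""
    w = moisture_fraction
    return q_net_dry_mj_per_kg * (1.0 - w) - 2.443 * w

Q_DRY = 19.0  # typical net CV of dry wood, MJ/kg (assumed value)

for w in (0.25, 0.40, 0.55):
    print(f"moisture {w:.0%}: {net_cv_as_received(Q_DRY, w):.2f} MJ/kg")
```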
Abstract:
The objective of this work was to study the effects of partial removal of wood hemicelluloses on the properties of kraft pulp. The work was conducted by extracting hemicelluloses (1) by a softwood chip pretreatment process prior to kraft pulping, (2) by alkaline extraction from bleached birch kraft pulp, and (3) by enzymatic (in particular xylanase) treatment of bleached birch kraft pulp. The qualitative and quantitative changes in fibers and paper properties were evaluated. In addition, the applicability of the extraction concepts and of hemicellulose-extracted birch kraft pulp as a papermaking raw material was evaluated in a pilot-scale papermaking environment. The results showed that each examined hemicellulose extraction method has characteristic effects on fiber properties, seen as differences in both the physical and chemical nature of the fibers. A prehydrolysis process prior to kraft pulping reduced cooking time and bleaching chemical consumption, and produced fibers with low hemicellulose content that are more susceptible to mechanically induced damage and dislocations. Softwood chip pretreatment for hemicellulose recovery prior to cooking, whether acidic or alkaline, affected the physical properties of both non-refined and refined pulp. In addition, all the pretreated pulps exhibited a slower beating response than the unhydrolyzed reference pulp. Both alkaline extraction and enzymatic (xylanase) treatment of bleached birch kraft pulp fibers showed very selective hemicellulose removal, particularly of xylan. These two hemicellulose-extracted birch kraft pulps were then used in a pilot-scale papermaking environment to evaluate the upscalability of the extraction concepts. Pilot paper machine trials revealed that some amount of alkaline-extracted birch kraft pulp, with a 24.9% reduction in the total amount of xylan, could be used in the papermaking stock as a mixture with non-extracted pulp when producing 75 g/m² paper. For xylanase-treated fibers, there was no reduction in the mechanical properties of the 180 g/m² paper produced compared with paper made from the control pulp, even though the xylanase-treated pulp contained 14.2% less xylan than the control birch kraft pulp. This work emphasized the importance of the hemicellulose extraction method in creating functional fibers and in providing a valuable hemicellulose co-product stream. The hemicellulose removal concept therefore plays an important role in the integrated forest biorefinery scenario, where the target is the co-production of hemicellulose-extracted pulp and hemicellulose-based chemicals or fuels.
Abstract:
In this work, the feasibility of floating-gate technology for analog computing platforms in a scaled-down general-purpose CMOS technology is considered. When the technology is scaled down, the performance of analog circuits tends to degrade because the process parameters are optimized for digital transistors and scaling entails reduced supply voltages. The general challenge in analog circuit design is that all salient design metrics, such as power, area, bandwidth and accuracy, are interrelated. Furthermore, poor flexibility, i.e. lack of reconfigurability, reuse of IP, etc., can be considered the most severe weakness of analog hardware. On this account, digital calibration schemes are often required for improved performance or yield enhancement, whereas high flexibility/reconfigurability cannot be easily achieved. Here, it is discussed whether these obstacles can be worked around by using floating-gate transistors (FGTs), and the problems associated with practical implementation are analyzed. FGT technology is attractive because it is electrically programmable and features a charge-based built-in non-volatile memory. Apart from being ideal for canceling circuit non-idealities due to process variations, FGTs can also be used as computational or adaptive elements in analog circuits. The nominal gate oxide in deep sub-micron (DSM) processes is too thin to support robust charge retention, and consequently the FGT becomes leaky. In principle, non-leaky FGTs can be implemented in a scaled-down process without any special masks by using “double”-oxide transistors intended for devices that operate at higher supply voltages than general-purpose devices. In practice, however, technology scaling poses several challenges, which are addressed in this thesis. To provide a sufficiently wide-ranging survey, six prototype chips of varying complexity were implemented in four different DSM process nodes and investigated from this perspective. The focus is on non-leaky FGTs, but the presented autozeroing floating-gate amplifier (AFGA) demonstrates that leaky FGTs may also find a use. The simplest test structures contain only a few transistors, whereas the most complex experimental chip is a spiking neural network (SNN) comprising thousands of active and passive devices. More precisely, it is a fully connected (256 FGT synapses) two-layer spiking neural network in which the adaptive properties of the FGT are exploited. A compact realization of Spike-Timing-Dependent Plasticity (STDP) within the SNN is one of the key contributions of this thesis. Finally, the considerations in this thesis extend beyond CMOS to emerging nanodevices. To this end, one promising emerging nanoscale circuit element, the memristor, is reviewed and its applicability to analog processing is considered. Furthermore, it is discussed how FGT technology can be used to prototype computation paradigms compatible with these emerging two-terminal nanoscale devices in a mature and widely available CMOS technology.
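As a rough software illustration of the learning rule named above, the sketch below implements a textbook pair-based STDP window; the amplitudes and time constants are assumed placeholder values, and the thesis realizes STDP in analog floating-gate hardware rather than in code like this.

```python
import numpy as np

# Minimal pair-based STDP sketch (illustrative only): the weight change
# depends exponentially on the spike-time difference dt = t_post - t_pre.
A_PLUS, A_MINUS = 0.01, 0.012     # learning-rate amplitudes (assumed)
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # time constants in ms (assumed)

def stdp_dw(dt_ms: float) -> float:
    """Weight change for a spike-time difference dt = t_post - t_pre."""
    if dt_ms > 0:   # pre fires before post: potentiation
        return A_PLUS * np.exp(-dt_ms / TAU_PLUS)
    else:           # post fires before pre: depression
        return -A_MINUS * np.exp(dt_ms / TAU_MINUS)

for dt in (-40, -10, 10, 40):
    print(f"dt = {dt:+d} ms -> dw = {stdp_dw(dt):+.5f}")
```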
Abstract:
This thesis presents a novel design paradigm, called Virtual Runtime Application Partitions (VRAP), to judiciously utilize on-chip resources. As the dark silicon era approaches, where power considerations will allow only a fraction of the chip to be powered on, judicious resource management will become a key consideration in future designs. Most work on resource management treats only the physical components (i.e. computation, communication, and memory blocks) as resources and manipulates the component-to-application mapping to optimize various parameters (e.g. energy efficiency). To further enhance the optimization potential, we propose to manipulate abstract resources in addition to the physical ones (i.e. the voltage/frequency operating point, the fault-tolerance strength, the degree of parallelism, and the configuration architecture). The proposed framework (i.e. VRAP) encapsulates methods, algorithms, and hardware blocks to provide each application with the abstract resources tailored to its needs. To test the efficacy of this concept, we have developed three distinct self-adaptive environments: (i) the Private Operating Environment (POE), (ii) the Private Reliability Environment (PRE), and (iii) the Private Configuration Environment (PCE), which collectively ensure that each application meets its deadlines using minimal platform resources. Several novel architectural enhancements, algorithms and policies are presented to realize the virtual runtime application partitions efficiently. Considering future design trends, we have chosen Coarse-Grained Reconfigurable Architectures (CGRAs) and Networks on Chip (NoCs) to test the feasibility of our approach; specifically, the Dynamically Reconfigurable Resource Array (DRRA) and McNoC serve as the representative CGRA and NoC platforms. The proposed techniques are compared and evaluated using a variety of quantitative experiments. Synthesis and simulation results demonstrate that VRAP significantly enhances energy and power efficiency compared with the state of the art.
Abstract:
The Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN) will undergo a Long Shutdown sometime during 2017 or 2018. During this time there will be maintenance and an opportunity to install new detectors, and after the shutdown the LHC will run at higher luminosity. A promising detector type for this high-luminosity phase is the Triple-GEM detector; during the shutdown such detectors will be installed in the Compact Muon Solenoid (CMS) experiment. The Triple-GEM detectors are currently being developed at CERN, along with a readout ASIC for the detector. In this thesis, a simulation model was developed for the ASIC's analog front end. The model makes it possible to carry out more extensive simulations and to simulate the whole chip before the design is finished. The proper functioning of the model was verified with simulations, which are also presented in the thesis.
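As a hedged sketch of what a behavioral model of such an analog front end can look like, the snippet below shapes an input charge with an idealized CR-RC^n semi-Gaussian filter, a common front-end abstraction; the time constant, filter order and input charge are assumed values, not parameters of the actual ASIC.

```python
import numpy as np

# Hedged behavioral sketch of a charge-readout analog front end: a
# delta-like input charge shaped by an idealized CR-RC^n semi-Gaussian
# filter, h(t) ~ (t/tau)^n * exp(-t/tau). All parameters are assumed.
def semi_gaussian(t_ns: np.ndarray, tau_ns: float = 25.0, n: int = 2) -> np.ndarray:
    x = np.clip(t_ns / tau_ns, 0.0, None)
    return (x ** n) * np.exp(-x)

t = np.linspace(0.0, 300.0, 601)   # time axis in ns
q_in = 2.0                         # input charge in fC (assumed)
pulse = q_in * semi_gaussian(t)    # shaper output in arbitrary gain units
print(f"peaking time ~ {t[np.argmax(pulse)]:.0f} ns, peak = {pulse.max():.3f}")
```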
Abstract:
The interpretation of oligonucleotide array experiments depends on the quality of the target cRNA used. cRNA target quality is assessed by quantitative analysis of the representation of 5' and 3' sequences of control genes using commercially available Test arrays. The Test array provides an economically priced means of determining the quality of labeled target prior to analysis on whole genome expression arrays. This manuscript validates the use of a duplex RT-PCR assay as a faster (6 h) and less expensive (
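As a hedged illustration of the arithmetic behind such quality assessment, the snippet below estimates a 3'/5' ratio for a control transcript from threshold-cycle values, assuming roughly 100% amplification efficiency; the Ct numbers are invented, and the exact readout of the manuscript's duplex assay may differ.

```python
# Illustrative calculation (not from the manuscript): estimating a 3'/5'
# signal ratio for a control gene from RT-PCR threshold cycles, assuming
# ~100% amplification efficiency (a factor of 2 per cycle).
def three_prime_five_prime_ratio(ct_5prime: float, ct_3prime: float) -> float:
    return 2.0 ** (ct_5prime - ct_3prime)

ct5, ct3 = 24.8, 23.3  # made-up example Ct values
ratio = three_prime_five_prime_ratio(ct5, ct3)
print(f"3'/5' ratio ~ {ratio:.2f}")  # ratios of roughly 3 or less are
                                     # commonly taken to indicate intact target
```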
Abstract:
Feature extraction is the part of pattern recognition in which the sensor data is transformed into a form more suitable for the machine to interpret. The purpose of this step is also to reduce the amount of information passed to the later stages of the system, while preserving the information essential for discriminating the data into different classes. For instance, in image analysis the raw image intensities are vulnerable to various environmental effects, such as lighting changes, and feature extraction can serve as a means of extracting features that are invariant to certain types of illumination change. Finally, classification makes decisions based on the transformed data. The main focus of this thesis is on developing new methods for embedded feature extraction based on local non-parametric image descriptors, together with feature analysis of the selected image features. Low-level Local Binary Pattern (LBP) based features play the main role in the analysis. In the embedded domain, a pattern recognition system must usually meet strict performance constraints, such as high speed, compact size and low power consumption. The characteristics of the final system are a trade-off between these metrics, largely determined by the decisions made during the implementation phase. The implementation alternatives of LBP-based feature extraction are explored in the embedded domain in the context of focal-plane vision processors. In particular, the thesis demonstrates LBP extraction on the MIPA4k massively parallel focal-plane processor IC. Higher-level processing is also incorporated through a framework for implementing a single-chip face recognition system. Furthermore, a new method for determining optical flow based on LBPs, designed specifically for the embedded domain, is presented. Inspired by principles observed in the feature analysis of Local Binary Patterns, an extension to the well-known non-parametric rank transform is proposed and its performance is evaluated in face recognition experiments on a standard dataset. Finally, an a priori model in which LBPs are seen as combinations of n-tuples is presented.
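Since the LBP operator is central here, the following minimal sketch shows the basic 3×3 LBP (8 neighbors at radius 1, without interpolation); in practice the descriptor is usually the histogram of these codes over an image region.

```python
import numpy as np

# Minimal sketch of the basic 3x3 LBP operator: each pixel is encoded by
# thresholding its 8 neighbors against the center value and packing the
# binary results, in a fixed neighbor order, into an 8-bit code.
def lbp_3x3(img: np.ndarray) -> np.ndarray:
    img = img.astype(np.int32)
    c = img[1:-1, 1:-1]                       # center pixels (border skipped)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]  # clockwise neighbor order
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy: img.shape[0] - 1 + dy, 1 + dx: img.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.int32) << bit)
    return code

rng = np.random.default_rng(0)
codes = lbp_3x3(rng.integers(0, 256, size=(6, 6)))
print(codes)  # (4, 4) array of LBP codes in [0, 255]
```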
Abstract:
This work examined quality changes in wood fuel stored at a fuel terminal. The study looked at changes in the moisture content and calorific value of unchipped stemwood and stemwood chips. Dry-matter losses were also examined on the basis of previous studies. The research material was collected from the fuel terminals of Etelä-Savon Energia. Moisture contents were measured with a Hydromette M2050 rapid moisture meter and by the oven-drying method according to standard SFS-EN 14774. The rapid moisture meter was found to perform reliably enough for stemwood measurements, but it proved unsuitable for measuring chips. Storage time did not affect the calorific values of the fuels, but moisture content varied widely. The results indicate that stemwood dries in terminal storage while the moisture content of chips remains constant. In terms of energy content, stemwood can be stored for more than two years, whereas the storage time of chips should be kept as short as possible.
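For reference, the oven-drying method of SFS-EN 14774 reports moisture on a wet basis from the mass loss during drying at 105 °C; the sketch below shows that calculation with invented sample masses (container tare handling omitted).

```python
# Hedged sketch of the wet-basis moisture calculation behind oven drying
# per SFS-EN 14774 (drying at 105 °C to constant mass). The sample masses
# below are invented example values.
def moisture_wet_basis(mass_wet_g: float, mass_dry_g: float) -> float:
    """Moisture content as a percentage of the as-received (wet) mass."""
    return 100.0 * (mass_wet_g - mass_dry_g) / mass_wet_g

print(f"{moisture_wet_basis(500.0, 305.0):.1f} %")  # -> 39.0 %
```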
Abstract:
Owing to advantages such as flexibility, scalability and updatability, software-intensive systems are increasingly embedded in everyday life. The constantly growing number of functions executed by these systems requires a high level of performance from the underlying platform. The main approach to increasing performance has been raising the operating frequency of the chip. However, this has led to the problem of power dissipation, which has shifted the focus of research to parallel and distributed computing. Parallel many-core platforms can provide the required computational power along with low power consumption. On the one hand, this enables parallel execution of highly intensive applications; with their computational power, these platforms are likely to be used in various application domains, from home electronics (e.g., video processing) to complex critical control systems. On the other hand, the resources have to be utilized efficiently in terms of performance and power consumption. However, the high level of on-chip integration increases the probability of various faults and creates hotspots leading to thermal problems. Additionally, radiation, which is frequent in space but is becoming an issue at ground level as well, can cause transient faults that may eventually induce faulty execution of applications. It is therefore crucial to develop methods that enable efficient as well as resilient execution of applications. The main objective of the thesis is to propose an approach for designing agent-based systems for many-core platforms in a rigorous manner. When designing such a system, we explore and integrate various dynamic reconfiguration mechanisms into the agents' functionality. These mechanisms enhance the resilience of the underlying platform while maintaining performance at an acceptable level. The design of the system proceeds according to a formal refinement approach, which allows us to ensure correct behaviour of the system with respect to postulated properties. To enable analysis of the proposed system in terms of area overhead as well as performance, we explore an approach where the developed rigorous models are transformed into a high-level implementation language. Specifically, we investigate methods for deriving fault-free implementations from these models in a hardware description language, namely VHDL.
Abstract:
The original contribution of this thesis is a set of novel digital readout architectures for hybrid pixel readout chips: an asynchronous bus-based architecture, a data-node-based column architecture, and a network-based pixel matrix architecture for data transportation. It is shown that the data-node architecture achieves a readout efficiency of 99% at half the output rate of a bus-based system. The network-based solution avoids “broken” columns caused by certain manufacturing errors and distributes internal data traffic more evenly across the pixel matrix than column-based architectures; an improvement of > 10% in efficiency is achieved with both uniform and non-uniform hit occupancies. Architectural design was done using transaction-level modeling (TLM) and sequential high-level design techniques to reduce design and simulation time, making it possible to simulate tens of column and full-chip architectures. A decrease of more than 10× in run-time is observed with these techniques compared to register-transfer-level (RTL) design, and a 50% reduction in lines of code (LoC) is achieved for the high-level models compared to the RTL description. Two of the architectures are then demonstrated in two hybrid pixel readout chips. The first chip, Timepix3, was designed for the Medipix3 collaboration. According to the measurements, it consumes < 1 W/cm² while delivering up to 40 Mhits/s/cm² with 10-bit time-over-threshold (ToT) and 18-bit time-of-arrival (ToA) with 1.5625 ns resolution. The chip uses a token-arbitrated, asynchronous two-phase-handshake column bus for internal data transfer, and it has been used successfully in a multi-chip particle tracking telescope. The second chip, VeloPix, is a readout chip being designed for the upgrade of the Vertex Locator (VELO) of the LHCb experiment at CERN. Based on the simulations, it consumes < 1.5 W/cm² while delivering up to 320 Mpackets/s/cm², each packet containing up to 8 pixels. VeloPix uses a node-based data fabric to achieve a throughput of 13.3 Mpackets/s from the column to the end of column (EoC). By combining Monte Carlo physics data with high-level simulations, it has been demonstrated that the architecture meets the requirements of the VELO (260 Mpackets/s/cm² with an efficiency of 99%).
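In the spirit of the high-level models used in the thesis, though far simpler than them, the toy simulation below estimates the readout efficiency of a single column whose derandomizing FIFO feeds a bus that drains more slowly than hits can arrive; the FIFO depth, rates and cycle counts are all assumed, not the chips' parameters.

```python
import random

# Toy high-level column model: pixels generate hits at random, a fixed-depth
# FIFO buffers them, the column bus drains one hit every `drain_period`
# cycles, and efficiency = accepted hits / generated hits.
def column_efficiency(hit_prob: float, fifo_depth: int,
                      drain_period: int = 2, cycles: int = 200_000,
                      seed: int = 1) -> float:
    rng = random.Random(seed)
    fifo = generated = accepted = 0
    for t in range(cycles):
        if rng.random() < hit_prob:        # a new hit arrives this cycle
            generated += 1
            if fifo < fifo_depth:
                fifo += 1                  # buffered for readout
                accepted += 1
            # else: FIFO full, hit is lost
        if t % drain_period == 0 and fifo > 0:
            fifo -= 1                      # bus reads out one hit
    return accepted / max(generated, 1)

for p in (0.2, 0.4, 0.6):                  # drain capacity is 0.5 hits/cycle
    print(f"hit rate {p:.1f}/cycle -> efficiency {column_efficiency(p, 4):.3f}")
```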
Abstract:
Rough turning is an important process for manufacturing cylindrically symmetric parts. Thus far, increasing the level of automation in rough turning has relied on process monitoring methods or adaptive turning control methods that aim to keep the process conditions constant. However, to improve process safety, quality and efficiency, adaptive turning control should be transformed into an intelligent machining system that optimizes cutting values to match the process conditions, or that actively seeks to improve them. In this study, primary and secondary chatter and chip formation are studied to understand how the effect of these phenomena on the process conditions can be measured and how undesired cutting conditions can be avoided. The concept of cutting state is used to capture the combination of these phenomena together with the current use of the power capacity of the lathe. The measures for these phenomena are not derived from physical quantities; instead, their severity is modelled against expert opinion. Based on the concept of cutting state, an expert-system-style fuzzy control system capable of optimizing the cutting process was created. Important aspects of the system include the ability to adapt to several cutting phenomena appearing at once, even when those phenomena would potentially require conflicting control actions.
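As a hedged sketch of what expert-system-style fuzzy control can look like, the snippet below maps two expert-scaled inputs (chatter severity and power utilization) to a feed-rate correction through three fuzzy rules; the membership functions, rules and outputs are invented placeholders, not the thesis's rule base.

```python
# Toy fuzzy-control sketch: chatter severity and spindle-power utilization,
# both scaled to [0, 1] by expert-defined measures, drive a feed correction.
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def feed_correction(chatter: float, power_use: float) -> float:
    """Multiplicative feed-rate correction; rules AND'ed with min."""
    r_reduce = tri(chatter, 0.4, 0.8, 1.2)                           # severe chatter -> cut feed
    r_hold   = min(tri(chatter, 0.1, 0.4, 0.7), tri(power_use, 0.5, 0.8, 1.1))
    r_raise  = min(tri(chatter, -0.3, 0.0, 0.3), tri(power_use, -0.2, 0.2, 0.6))
    strengths, outputs = [r_reduce, r_hold, r_raise], [0.5, 1.0, 1.2]
    total = sum(strengths) or 1.0                                     # guard: no rule fired
    return sum(s * o for s, o in zip(strengths, outputs)) / total     # weighted-average defuzzification

print(f"correction = {feed_correction(chatter=0.7, power_use=0.6):.2f}")
```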
Abstract:
Potato pulp waste (PPW) drying was investigated under different experimental conditions (temperatures from 50 to 70 °C and air flows from 0.06 to 0.092 m³ m⁻² s⁻¹) as a possible way to recover the waste generated by the potato chip industry and to select the model that best fits the experimental PPW drying results. As the criterion for evaluating the fit of the mathematical models, a method based on the sum of the scores assigned to four statistical parameters was used: the regression coefficient (R²), the relative mean error P (%), the root mean square error (RMSE), and the reduced chi-square (χ²). The results revealed that temperature and air velocity are important parameters for reducing PPW drying time. The Midilli and Diffusion models had the lowest sum values, i.e., the best fit to the drying data, satisfactorily representing the drying kinetics of PPW.
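Since the Midilli model is named as one of the best-fitting models, the sketch below shows how such a fit and the associated statistics could be computed; the drying data points are synthetic, not the PPW measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hedged sketch of fitting the Midilli model, MR = a*exp(-k*t**n) + b*t,
# to drying data and scoring it with R², RMSE and reduced chi-square.
def midilli(t, a, k, n, b):
    return a * np.exp(-k * t**n) + b * t

t = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0, 6.0, 8.0])           # time, h (synthetic)
mr = np.array([1.00, 0.78, 0.62, 0.40, 0.27, 0.18, 0.09, 0.05])  # moisture ratio (synthetic)

popt, _ = curve_fit(midilli, t, mr, p0=(1.0, 0.5, 1.0, 0.0),
                    bounds=([0, 0, 0.1, -1], [2, 5, 3, 1]))
resid = mr - midilli(t, *popt)
ss_res = float(resid @ resid)
ss_tot = float(((mr - mr.mean()) ** 2).sum())
r2 = 1.0 - ss_res / ss_tot
rmse = np.sqrt(ss_res / len(t))
chi2_red = ss_res / (len(t) - len(popt))   # reduced chi-square
print(f"a,k,n,b = {np.round(popt, 3)}; R²={r2:.4f}, RMSE={rmse:.4f}, χ²={chi2_red:.2e}")
```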
Abstract:
The aerodynamic development of heavy-duty vehicles has been going in the wrong direction for about a century: their shape is dictated by maximizing cargo space and by functionality. A new EU directive that permits additional mass for aerodynamic add-on devices is about to enter into force. The purpose of this work is to study possibilities for improving the aerodynamics of a wood-chip trailer with simple add-on devices. The work is limited to the front, the sides and the underbody of the trailer; further constraints are set by Finnish legislation and EU directives. The work examines how aerodynamic drag arises on heavy-duty vehicles and reviews the most significant factors affecting drag forces, as well as the influence of the different shapes and parts of a truck trailer. The effects and operation of air deflectors mounted on the trailer are also examined. The key to improving the aerodynamics of a chip trailer is preventing airflow under and in front of the trailer; modifying the flow behind the trailer is also beneficial. The greatest benefits come from solutions that prevent air from flowing under the trailer and onto the wheels, namely side skirts and enclosures for the underbody and wheels. Preventing airflow at the front of the trailer also produced a significant benefit: a plate in the gap between the truck and the trailer, or covering the gap entirely, yields a considerable reduction in aerodynamic drag.
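To give a feel for the magnitudes involved, the sketch below applies the standard drag equation F_d = ½ρv²C_dA and converts the force into power demand at highway speed; the frontal area and drag coefficients are assumed round numbers, not measured values from this work.

```python
# Back-of-the-envelope illustration (assumed numbers, not measured results):
# how a drag-coefficient reduction from add-on devices affects power demand.
RHO = 1.225        # air density, kg/m^3
A = 10.0           # frontal area of a truck-trailer combination, m^2 (assumed)
V = 80.0 / 3.6     # driving speed of 80 km/h expressed in m/s

def drag_power_kw(cd: float) -> float:
    force = 0.5 * RHO * V**2 * cd * A   # drag force F_d, N
    return force * V / 1000.0           # power to overcome drag, kW

for cd in (0.70, 0.60):                 # e.g., before/after side skirts (assumed)
    print(f"Cd = {cd:.2f}: {drag_power_kw(cd):.1f} kW")
```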
Abstract:
The magnitude of the pre-camber of a chip trailer frame had not previously been determined computationally at Konepaja Antti Ranta Oy. The chip trailer frame is made of steel. The calculation models created in this study were used to determine the vertical deflection of the loaded frame of a five-axle chip trailer. The study was carried out by calculating the vertical displacement along the length of the trailer frame under total-mass loads of 42 tonnes and 36 tonnes. The hand-calculation method used in this study is the conjugate beam method. FE analysis was applied with two different calculation models: one for comparison against the hand calculation and one for comparison against the actual trailer frame. The results of the different calculation methods agreed with each other under both the 42-tonne and the 36-tonne total-mass loads. The pre-camber was determined from the vertical deflection of the 3D model corresponding to the actual trailer frame loaded with a total mass of 42 tonnes, from which a pre-cambered 3D model of the frame was created. The calculation models developed in this study can be used in the company's future product development. Pre-camber can compensate for vertical deflection, provided that it does not harm the structure itself.
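As a hedged illustration of the principle (the real five-axle frame calls for the conjugate beam method or FE analysis, as described above), the snippet below sizes a pre-camber for the textbook special case of a simply supported beam under a uniform load, where the conjugate beam method reduces to δ_max = 5wL⁴/(384EI); every number is an assumed placeholder, not the trailer's data.

```python
# Hedged illustration of pre-camber sizing for a simply supported beam
# under a uniform load: the pre-camber is set to cancel the midspan
# deflection delta_max = 5*w*L**4 / (384*E*I). All numbers are assumed.
E = 210e9          # steel Young's modulus, Pa
I = 8.0e-4         # second moment of area of the frame section, m^4 (assumed)
L = 10.0           # span, m (assumed)
MASS_KG = 42_000   # 42 t total mass

w = MASS_KG * 9.81 / L                   # uniform line load, N/m (crude assumption)
delta_max = 5 * w * L**4 / (384 * E * I)
print(f"midspan deflection ~ {delta_max * 1000:.1f} mm "
      f"-> pre-camber ~ {delta_max * 1000:.1f} mm")
```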
Abstract:
Finland's target is to raise the use of forest chips to 25 TWh by 2020. Reaching this target requires that the use and production of forest chips be economically attractive. A large share of Finland's energy wood comes from logging residues collected during roundwood harvesting, which are chipped to meet the needs of heating plants. The aim of this work was to examine the value chain of forest chip production at a Finnish forest industry company and to identify the parts of the process where the most value is created and where value may be lost. In addition to the value chain analysis, the aim was to investigate possibilities for improving the profitability of logging-residue chips in the case company. Based on the value chain analysis, the greatest value of forest energy was found to arise from the drying of logging residues. An excessively long storage time, by contrast, causes loss of value in the supply chain through dry-matter losses. The greatest effect on the profitability of logging-residue chips comes from low chip moisture content. The study also found that long-distance transport costs have a particularly large effect on the profitability of chip production.
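As a hedged back-of-the-envelope illustration of why moisture dominates profitability, the snippet below converts a fixed-cost truckload into euros per delivered megawatt-hour at two moisture levels; all figures are invented, not the case company's data.

```python
# Illustrative arithmetic (assumed figures): a truckload carries a fixed
# mass, but the delivered energy depends on moisture, so the transport
# cost per MWh rises with moisture content.
Q_DRY = 19.0           # net CV of dry wood, MJ/kg (assumed)
LOAD_T = 40.0          # payload per truckload, tonnes (assumed)
COST_PER_LOAD = 300.0  # long-distance transport cost per load, EUR (assumed)

def delivered_mwh(moisture: float) -> float:
    q_ar = Q_DRY * (1 - moisture) - 2.443 * moisture  # MJ/kg as received (EN 14918 convention)
    return LOAD_T * 1000 * q_ar / 3600                # convert MJ per load to MWh

for w in (0.30, 0.45):
    mwh = delivered_mwh(w)
    print(f"moisture {w:.0%}: {mwh:.1f} MWh/load, "
          f"transport {COST_PER_LOAD / mwh:.2f} EUR/MWh")
```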