27 results for Moving average
Abstract:
COD discharges from processes have increased in line with rising brightness demands for mechanical pulps and papers. Lignin-like substances account on average for 75% of COD discharges. In this thesis, a plant dynamic model was created and validated as a means to predict the COD loading and discharges out of a mill. The assays were carried out in an integrated paper mill producing mechanical printing papers. The objective of the plant dynamics modeling was to predict day averages of the COD load and the discharges out of the mill. Online data, such as 1) the levels of large storage towers for pulp and white water, 2) pulp dosages, 3) production rates, and 4) internal white water flows and discharges, were used to create transients in the balances of solids and white water, referred to as "plant dynamics". A conversion coefficient between TOC and COD was verified and used to convert the predicted TOC flows to the waste water treatment plant into COD. The COD load was modeled with an uncertainty similar to that of the reference TOC sampling. The water balance of the waste water treatment plant was validated against reference COD concentrations. The difference between the COD predictions and the references was within the same deviation as that of the TOC predictions. The modeled yield losses and retention values of TOC in the pulping and bleaching processes, and the modeled fixing of colloidal TOC to solids between the pulping plant and the aeration basin of the waste water treatment plant, were similar to values reported in the literature. The validated water balances of the waste water treatment plant and the reduction model for lignin-like substances together produced a valid prediction of COD discharges out of the mill. During production problems, a 30% increase in the release of lignin-like substances was observed in the pulping and bleaching processes, and the same increase was observed in the COD discharges out of the waste water treatment. In predicting the annual COD discharge, it was noticed that the reduction of lignin varies widely from year to year and from one mill to another. This made it difficult to compare the COD discharge parameters validated in the plant dynamic simulation with those of another mill producing mechanical printing papers. However, the trend in COD discharges when moving from unbleached towards high-brightness TMP was valid.
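As a rough illustration of the prediction step described above, the sketch below applies a TOC-to-COD conversion coefficient to modeled daily TOC loads. This is a minimal sketch only: the coefficient value, function name, and data are illustrative assumptions, not values taken from the thesis.

```python
# Minimal sketch: predicting daily COD load from modeled TOC load via a
# verified conversion coefficient, as described in the abstract.
# K_COD_TOC and the data below are illustrative; the thesis verifies its own value.

K_COD_TOC = 3.0  # hypothetical g COD per g TOC

def predict_cod_load(toc_loads_kg_per_day):
    """Convert modeled daily TOC loads (kg/d) to predicted COD loads (kg/d)."""
    return [K_COD_TOC * toc for toc in toc_loads_kg_per_day]

# Example: three day-average TOC loads entering the waste water treatment plant
toc_day_averages = [1200.0, 1350.0, 980.0]  # kg TOC/d, made-up values
print(predict_cod_load(toc_day_averages))   # -> predicted kg COD/d
```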
Abstract:
The purpose of this thesis was to study the design of demand forecasting processes. A literature review in the field of forecasting was conducted, covering general forecasting process design, forecasting methods and techniques, the role of human judgment in forecasting, and forecasting performance measurement. The purpose of the literature review was to identify the important design choices that an organization aiming to design or re-design its demand forecasting process would have to make. In the empirical part of the study, these choices and the existing knowledge behind them were assessed in a case study in which a demand forecasting process was re-designed for a company in the fast-moving consumer goods business. The new target process is described, as well as the reasoning behind the design choices made during the re-design. As a result, the most important design choices are highlighted, together with their immediate effects on other processes directly tied to the demand forecasting process. Additionally, some new insights into the organizational aspects of demand forecasting processes are explored. The preliminary results indicate that in this case the new process did improve forecasting accuracy, although organizational issues related to the process proved more challenging than anticipated.
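Given the search topic, a simple moving average is a natural example of the baseline forecasting techniques such reviews cover. The sketch below is illustrative only; the window length and demand history are assumptions, not data from the case study.

```python
# Minimal sketch of a simple moving-average demand forecast, one of the
# baseline techniques in the forecasting literature. Window length and
# demand history are illustrative, not from the case study.

def moving_average_forecast(history, window=3):
    """Forecast the next period as the mean of the last `window` observations."""
    if len(history) < window:
        raise ValueError("history shorter than window")
    return sum(history[-window:]) / window

weekly_demand = [120, 135, 128, 140, 150, 145]  # made-up unit sales
print(moving_average_forecast(weekly_demand, window=3))  # -> 145.0
```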
Abstract:
Through advances in technology, System-on-Chip design is moving towards integrating tens to hundreds of intellectual property blocks into a single chip. In such a many-core system, on-chip communication becomes a performance bottleneck for high-performance designs. Network-on-Chip (NoC) has emerged as a viable solution to the communication challenges in highly complex chips. The NoC architecture paradigm, based on a modular packet-switched mechanism, can address many on-chip communication challenges such as wiring complexity, communication latency, and bandwidth. Furthermore, the combined benefits of 3D IC and NoC schemes make it possible to design a high-performance system in a limited chip area. The major advantages of 3D NoCs are considerable reductions in average latency and power consumption. Several factors degrade the performance of NoCs. In this thesis, we investigate three main performance-limiting factors: network congestion, faults, and the lack of efficient multicast support, and we address these issues by means of routing algorithms. Congestion of data packets may lead to increased network latency and power consumption, so we propose three different approaches for alleviating congestion in the network. The first approach is based on measuring the congestion information in different regions of the network, distributing the information over the network, and utilizing this information when making routing decisions. The second approach employs a learning method to dynamically find less congested routes according to the underlying traffic. The third approach is based on a fuzzy-logic technique that makes better routing decisions when traffic information for different routes is available. Faults affect performance significantly, as packets must then take longer paths to be routed around the faults, which in turn increases congestion around the faulty regions. We propose four methods to tolerate faults at the link and switch level by using only the shortest paths as long as such a path exists. The unique characteristic of these methods is that they tolerate faults while also maintaining the performance of the NoC. To the best of our knowledge, these algorithms are the first approaches to bypass faults before reaching them while avoiding unnecessary misrouting of packets. Current implementations of multicast communication result in a significant performance loss for unicast traffic, because the routing rules of multicast packets limit the adaptivity of unicast packets. We present an approach in which both unicast and multicast packets can be routed efficiently within the network. While providing more efficient multicast support, the proposed approach does not affect the performance of unicast routing at all. In addition, to reduce the overall path length of multicast packets, we present several partitioning methods along with analytical models for latency measurement. This approach is discussed in the context of 3D mesh networks.
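The first congestion-aware approach, choosing among minimal-path output ports using propagated congestion information, might look roughly like the sketch below. The mesh coordinates, the congestion metric (buffer occupancy), and the selection rule are assumptions for illustration, not the thesis's actual algorithms.

```python
# Illustrative sketch of a congestion-aware adaptive routing decision in a
# 2D mesh NoC: among the minimal-path output ports, pick the one whose
# downstream router reports the lowest congestion (e.g., buffer occupancy).
# The metric and structure are assumptions, not the thesis's design.

def minimal_output_ports(cur, dst):
    """Output ports (as unit steps) that lie on a shortest path to dst."""
    (cx, cy), (dx, dy) = cur, dst
    ports = []
    if dx != cx:
        ports.append((1, 0) if dx > cx else (-1, 0))
    if dy != cy:
        ports.append((0, 1) if dy > cy else (0, -1))
    return ports  # empty list means the packet has arrived

def route(cur, dst, congestion):
    """Pick the minimal-path port with the least congested downstream router."""
    candidates = minimal_output_ports(cur, dst)
    if not candidates:
        return None  # deliver locally
    return min(candidates,
               key=lambda p: congestion[(cur[0] + p[0], cur[1] + p[1])])

# Example: congestion values reported by the two downstream neighbors
congestion = {(1, 2): 0.8, (2, 1): 0.3}
print(route((1, 1), (3, 3), congestion))  # -> (1, 0): go east, less congested
```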
Abstract:
Developing bioimage informatics – from microscopy to software solutions – with α2β1 integrin as an application example. When the human genome was sequenced in 2003, the main task of the life sciences became to determine the functions of individual genes, and various bioimaging techniques became central research methods. Technological advances led to an explosive growth in the popularity of fluorescence-based light microscopy techniques in particular, but microscopy had to transform from a qualitative science into a quantitative one. This transformation gave rise to a new discipline, bioimage informatics, which has been said to have the potential to revolutionize the life sciences. This thesis presents a broad, interdisciplinary body of work in the field of bioimage informatics. The first aim of the thesis was to develop protocols for four-dimensional confocal microscopy of living cells, which was one of the fastest-growing bioimaging methods. The human collagen receptor α2β1 integrin, an important molecule in many physiological and pathological processes, served as the application example. The work yielded clear visualizations of integrin movement, clustering, and internalization, but tools for quantitative analysis of the image information were lacking. The second aim of the thesis therefore became the development of software suitable for such analysis. At the same time, bioimage informatics emerged as a field, and what the new discipline needed most was specialized software. The most important outcome of this thesis work thus became BioImageXD, a novel open-source software package for the visualization, processing, and analysis of multidimensional bioimages. BioImageXD grew into one of the largest and most versatile tools of its kind. It was published in a special issue of Nature Methods on bioimage informatics, and it became well known and widely used. The third aim of the thesis was to apply the developed methods to something more practical. Synthetic silica nanoparticles were produced, carrying antibodies recognizing α2β1 integrin as "address labels". Using BioImageXD, it was shown that the nanoparticles have potential in targeted drug delivery applications. One fundamental aim of this thesis work was to advance the new and unknown discipline of bioimage informatics, and this aim was achieved especially through BioImageXD and its numerous published applications. The thesis work has significant future potential, but bioimage informatics faces serious challenges. The field is too complex for the average biomedical researcher to master, and its most central element, open-source software development, is undervalued. Several improvements are needed in these respects.
Abstract:
The purpose of this study is to examine the variation in the Finnish language skills of immigrant pupils in the sixth grade of comprehensive school. The study also aims to determine how background variables (gender, mother tongue, age at immigration, reason for immigration, length of residence, and parents' educational background) and instructional arrangements, such as preparatory instruction for basic education, Finnish as a second language instruction, and mother tongue instruction, are related to the level of Finnish proficiency. In addition, the study examines the relationship between the languages the pupil uses (Finnish and the mother tongue) and the level of Finnish proficiency. The research method was a mixed methods study, combining a quantitative survey and qualitative content analysis. The study involved 219 immigrant pupils from 20 schools in Turku. The research data were collected using a language test package prepared by special education teachers and Finnish as a second language teachers in Turku. The pupils' oral and written production was assessed by their teachers using the proficiency level criteria of the Common European Framework of Reference. The pupils assessed their own mother tongue and Finnish skills. In addition, the pupils and their parents filled in background information forms prepared by the researcher. According to the language test results, more than half of the pupils had satisfactory Finnish skills. Of the four language sections, the pupils with immigrant backgrounds performed best in the structure test and dictation, whereas the results in listening and reading comprehension were weaker. Based on the teachers' assessments, the pupils' oral skills corresponded on average to the independent user level B2 and their writing skills to the threshold level B1. The pupils' average grade in Finnish as a second language was 7.26. Length of residence in Finland, reason for immigration, mother tongue, age at immigration, and parents' educational background had a statistically significant relationship with the level of Finnish proficiency. The longer the pupils had lived in Finland and the younger they were when they arrived, the better they performed in the language tests. Returnee migrants performed best in the language tasks and refugees the weakest. Somali speakers stood out from the other language groups as having the weakest Finnish proficiency, while Russian and Vietnamese speakers achieved the best results on all measures. In particular, a mother's higher educational level was associated with a higher level of Finnish proficiency in the pupils. The pupils rated their Finnish skills as better than their mother tongue skills in speaking, reading, and writing. The pupils who performed best on the various measures were those who had not participated in preparatory instruction for basic education or in separate Finnish as a second language instruction. Pupils who had studied their mother tongue for a longer time performed better in the language tasks than those who had studied it only briefly, but equally well as those who had not studied their mother tongue at all. Pupils who spoke both their mother tongue and Finnish with their friends proved to have higher proficiency in both the language tests and the teachers' assessments.
Abstract:
In Mobile Ad-hoc Networks (MANETs) the participating nodes take on several roles, such as sender, receiver, and router, so the nodes consume a great deal of energy in the normal operation of the network. In addition, the nodes in a MANET move constantly, which also consumes energy. Since the battery capacity of these nodes is limited, it cannot meet this high energy demand, and the scarcity of energy makes energy conservation in mobile ad-hoc networks an important concern. Considerable research has been carried out on the energy consumption of mobile ad-hoc networks, suggesting techniques such as sleep modes, transmission power control, and load balancing. In this thesis, we compare various proposed energy-efficient models for two ad-hoc protocols: Optimised Link State Routing (OLSR) and Ad-hoc On-Demand Distance Vector (AODV). The routing protocols are compared on parameters such as average remaining energy, number of nodes alive, payload data received, and performance at different mobility speeds. The simulation results help in benchmarking the various energy-efficient routing models for the OLSR and AODV protocols. Benchmarking routing protocols can be based on many factors, but this thesis concentrates on benchmarking the MANET routing protocols mainly on energy efficiency and increased network lifetime.
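The comparison metrics named above are straightforward to compute from simulation snapshots. The sketch below shows two of them under assumed conventions; the trace format (node name to remaining energy in joules) and the alive threshold are illustrative, not the thesis's actual simulation setup.

```python
# Minimal sketch of two benchmarking metrics from the abstract:
# average remaining energy and number of nodes alive. The trace format
# and the depletion threshold are assumptions.

def average_remaining_energy(energies):
    """Mean remaining battery energy (J) over all nodes at a snapshot."""
    return sum(energies.values()) / len(energies)

def nodes_alive(energies, threshold=0.0):
    """Count nodes whose remaining energy is above the depletion threshold."""
    return sum(1 for e in energies.values() if e > threshold)

# Example snapshot at the end of a simulation run (made-up values)
snapshot = {"n1": 42.0, "n2": 0.0, "n3": 13.5, "n4": 27.8}
print(average_remaining_energy(snapshot))  # -> 20.825 J
print(nodes_alive(snapshot))               # -> 3
```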
Abstract:
Lipid movement in cells occurs by a variety of mechanisms. Lipids diffuse freely along the lateral plane of a membrane and can translocate between the lipid leaflets, either spontaneously or with the help of enzymes. Lipid movement between the different cellular compartments predominantly takes place through vesicular transport. Specialized lipid transport proteins (LTPs) have also emerged as important players in lipid movement, as well as in other cellular processes. In this thesis we have studied the glycolipid transfer protein (GLTP), a protein that transports glycosphingolipids (GSLs). While the in vitro properties of GLTP have been well characterized, its cell biological role remains elusive. By altering GSL and GLTP levels in cells, we have extracted clues towards the protein's function. Based on the results presented in this thesis and in previous works, we hypothesize that GLTP is involved in GSL homeostasis in cells. GLTP most likely functions as a transporter or sensor of newly synthesized glucosylceramide (GlcCer), at or near the site of GlcCer synthesis. GLTP also seems to be involved in the synthesis of globotriaosylceramide, perhaps in a manner similar to that of the four-phosphate adaptor protein 2, another GlcCer-transporting LTP. Additionally, we have developed and studied a novel, solvent-free method of introducing ceramides to cells. Ceramides are important lipids implicated in several cellular functions; their role as proapoptotic molecules is particularly evident. Ceramides form stable bilayer structures when complexed with cholesterol phosphocholine (CholPC), a large-headgroup sterol. By adding ceramide/CholPC complexes to the growth medium, ceramides of various chain lengths were successfully delivered to cells in culture. The uptake rate depended on the chain length of the ceramide, with shorter lipids internalized more quickly. The rate of uptake also determined how the cells metabolised the ceramides: faster uptake favored conversion of ceramide to GlcCer, whereas slower delivery resulted mainly in breakdown of the lipid.
Abstract:
Laser scribing is currently a growing material processing method in industry. The benefits of laser scribing technology are being studied, for example, for improving the efficiency of solar cells. Due to the high quality requirements of the fast scribing process, it is important to monitor the process in real time to detect possible defects as they occur. However, there is a lack of studies on real-time monitoring of laser scribing. Commonly used monitoring methods developed for other laser processes, such as laser welding, are too slow, and existing applications cannot be used for monitoring fast laser scribing. The aim of this thesis is to find a method for monitoring laser scribing with a high-speed camera and to evaluate the reliability and performance of the developed monitoring system experimentally. The laser used in the experiments is an IPG ytterbium pulsed fiber laser with a maximum average power of 20 W, and the scan head is Scanlab's Hurryscan 14 II with an f100 telecentric lens. The camera was connected to the laser scanner with a camera adapter to follow the laser process. A powerful, fully programmable industrial computer was chosen for executing the image processing and analysis. Algorithms for defect analysis, based on particle analysis, were developed using LabVIEW system design software. The performance of the algorithms was evaluated by analyzing a non-moving image of the scribing line with a resolution of 960×20 pixels; the maximum analysis speed was 560 frames per second. The reliability of the algorithms was evaluated by imaging a scribing path with a variable number of defects at 2000 mm/s with the laser turned off, at an image analysis speed of 430 frames per second. The experiment was successful: the algorithms detected all defects in the scribing path. The final monitoring experiment was performed during a laser process. However, it was challenging to make the active laser illumination work with the laser scanner due to the physical dimensions of the laser lens and the scanner. For reliable defect detection, the illumination system needs to be replaced.
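The thesis implemented its defect-analysis algorithms in LabVIEW, but the underlying particle-analysis idea translates to any image-processing toolkit. The sketch below shows an analogous approach in Python with OpenCV (binarization followed by connected-component "particle" analysis on a 960×20 frame); it is not a reproduction of the thesis's algorithms, and the threshold and size limits are illustrative assumptions.

```python
# Hedged sketch of particle-analysis-based defect detection on a scribing-line
# frame, analogous in spirit to the LabVIEW algorithms described above.
# Threshold and minimum particle area are illustrative assumptions.
import numpy as np
import cv2

def detect_defects(frame, thresh=128, min_area=4):
    """Return bounding boxes (x, y, w, h) of dark candidate defects in a frame."""
    # Binarize: defects are assumed darker than the surrounding scribed line
    _, binary = cv2.threshold(frame, thresh, 255, cv2.THRESH_BINARY_INV)
    # Connected-component (particle) analysis
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    # Skip label 0 (background); keep components above the minimum area
    return [tuple(int(v) for v in stats[i, :4]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]

# Example: a synthetic 960x20-pixel grayscale frame with one dark "defect" blob
frame = np.full((20, 960), 200, dtype=np.uint8)
frame[8:12, 300:310] = 50
print(detect_defects(frame))  # -> [(300, 8, 10, 4)]
```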