917 results for post-processing method
Abstract:
Introduction: The field of connectomic research is growing rapidly, driven by methodological advances in structural neuroimaging on many spatial scales. In particular, progress in diffusion MRI data acquisition and processing has made macroscopic structural connectivity maps available in vivo through connectome mapping pipelines (Hagmann et al, 2008), yielding so-called connectomes (Hagmann 2005, Sporns et al, 2005). They exhibit both spatial and topological information that constrains functional imaging studies and is relevant to their interpretation. The need for a special-purpose software tool that supports investigations of such connectome data by both clinical researchers and neuroscientists has grown. Methods: We developed the ConnectomeViewer, a powerful, extensible software tool for visualization and analysis in connectomic research. It uses the newly defined container-like Connectome File Format, specifying networks (GraphML), surfaces (Gifti), volumes (Nifti), track data (TrackVis) and metadata. Using Python as the programming language allows the tool to be cross-platform and gives it access to a multitude of scientific libraries. Results: Thanks to a flexible plugin architecture, functionality can easily be extended for specific purposes. The following features are already implemented:
* Ready use of libraries, e.g. for complex network analysis (NetworkX) and data plotting (Matplotlib). More brain connectivity measures will be implemented in a future release (Rubinov et al, 2009).
* 3D view of networks with node positioning based on the corresponding ROI surface patch. Other layouts are possible.
* Picking functionality to select nodes, select edges, retrieve more node information (ConnectomeWiki) and toggle surface representations.
* Interactive thresholding and modality selection of edge properties using filters.
* Arbitrary metadata can be stored for networks, allowing e.g. group-based analysis or meta-analysis.
* Python shell for scripting. Application data is exposed and can be modified or used for further post-processing.
* Visualization pipelines using filters and modules can be composed with Mayavi (Ramachandran et al, 2008).
* Interface to TrackVis to visualize track data. Selected nodes are converted to ROIs for fiber filtering.
The Connectome Mapping Pipeline (Hagmann et al, 2008) was used to process 20 healthy subjects into an average connectome dataset. The figures show the ConnectomeViewer user interface using this dataset; connections are shown that occur in all 20 subjects. The dataset is freely available from the homepage (connectomeviewer.org). Conclusions: The ConnectomeViewer is a cross-platform, open-source software tool that provides extensive visualization and analysis capabilities for connectomic research. It has a modular architecture, integrates the relevant datatypes and is completely scriptable. Visit www.connectomics.org to get involved as a user or developer.
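The abstract names NetworkX as the complex-network library exposed through the Python shell. As a minimal illustrative sketch (not code from ConnectomeViewer itself; the GraphML file name and the "density" edge attribute are assumptions), the kind of scripted edge thresholding and analysis described above could look like this in plain NetworkX:

import networkx as nx

# Hypothetical connectome network stored as GraphML (as in the Connectome File Format).
G = nx.read_graphml("average_connectome.graphml")

# Interactive-style thresholding: keep only edges whose 'density' attribute exceeds a cutoff.
# The attribute name 'density' and the cutoff are assumptions for illustration.
threshold = 0.1
strong = [(u, v) for u, v, d in G.edges(data=True) if float(d.get("density", 0)) >= threshold]
H = G.edge_subgraph(strong).copy()

# Basic complex-network measures of the thresholded graph.
print("nodes:", H.number_of_nodes(), "edges:", H.number_of_edges())
print("mean clustering:", nx.average_clustering(H))
print("highest node degree:", max(dict(H.degree()).values()))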
Abstract:
The Iowa DOT has been using the AASHTO Present Serviceability Index (PSI) rating procedure since 1968 to rate the condition of pavement sections. A ride factor and a cracking and patching factor make up the PSI value. Crack and patch surveys have been done by sending crews out to measure and record the distress. Advances in video equipment and computers make it practical to videotape roads and do the crack and patch measurements in the office. The objective of the study was to determine the feasibility of converting the crack and patch survey operation to a video recording system with manual post processing. The summary and conclusions are as follows: Video crack and patch surveying is a feasible alternative to the current crack and patch procedure. The cost per mile should be about 25 percent less than the current procedure. More importantly, the risk of accidents is reduced by getting the people and vehicles off the roadway and shoulder. Another benefit is the elimination of the negative public perceptions of the survey crew on the shoulder.
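For context, the ride factor and the cracking-and-patching factor mentioned above enter the classic AASHO serviceability equation, commonly cited for flexible pavements in roughly the following form (the exact coefficients used by the Iowa DOT are not stated in the abstract), where SV is the slope variance, RD the rut depth, and C + P the cracked plus patched area:

PSI = 5.03 - 1.91 \log_{10}(1 + \overline{SV}) - 1.38\,\overline{RD}^{2} - 0.01 \sqrt{C + P}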
Abstract:
Purpose: Cross-sectional imaging techniques have broken new ground in forensic medicine. The involvement of radiographers and the training of "forensic radiographers" improve the quality of radiological examinations and facilitate the implementation of techniques such as sample collection and post-mortem angiography. Methods and Materials: Over a period of three months, five radiographers with clinical experience underwent special training to learn procedures dedicated to forensic imaging. These procedures involved: I) acquisition of MDCT data, II) sample collection for toxicological or histological analyses by performing CT-guided biopsies and liquid sampling, III) post-mortem angiography and IV) post-processing of all acquired data. To perform the post-mortem angiography, the radiographers were in charge of preparing the perfusion device and the investigated body. To this end, cannulas were inserted into the femoral vessels and connected to the machine. During angiography, the radiographers had to synchronize the perfusion with the CT acquisitions. Results: All five radiographers acquired the new skills needed to become "forensic radiographers". They were able to perform post-mortem MDCT, sample collection, post-mortem angiography and post-processing of the acquired data entirely by themselves. Most problems were observed in the preparation of the body for post-mortem angiography. Conclusion: Our experience shows that radiographers are able to perform high-quality examinations after a short period of training. Their collaboration is well accepted by the forensic team, and given the increasing number of radiological examinations in forensic departments, it would make little sense to exclude radiographers from the forensic-radiological team.
Abstract:
Forecasting coal resources and reserves is critical for coal mine development. Thickness maps are commonly used for assessing coal resources and reserves; however, they are limited in their ability to capture coal-splitting effects in thick and heterogeneous coal zones. As an alternative, three-dimensional geostatistical methods are used to populate the facies distribution within a densely drilled heterogeneous coal zone in the As Pontes Basin (NW Spain). Coal distribution in this zone is mainly characterized by coal-dominated areas in the central parts of the basin interfingering with terrigenous-dominated alluvial fan zones at the margins. The three-dimensional models obtained are applied to forecast coal resources and reserves. Predictions using subsets of the entire dataset are also generated to understand the performance of the methods under limited data constraints. Three-dimensional facies interpolation methods tend to overestimate coal resources and reserves due to interpolation smoothing. Facies simulation methods yield resource predictions similar to those of conventional thickness map approximations. Reserves predicted by facies simulation methods are mainly influenced by: a) the specific coal proportion threshold used to determine whether a block can be recovered, and b) the capability of the modelling strategy to reproduce areal trends in coal proportions and the splitting between coal-dominated and terrigenous-dominated areas of the basin. Reserve predictions differ between the simulation methods, even with dense conditioning datasets. The simulation methods can be ranked according to the correlation of their outputs with predictions from the directly interpolated coal proportion maps: a) with low-density datasets, sequential indicator simulation with trends yields the best correlation; b) with high-density datasets, sequential indicator simulation with post-processing yields the best correlation, because the areal trends are provided implicitly by the dense conditioning data.
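As a rough sketch of the reserve-thresholding step described above (not the authors' workflow; the grid dimensions, block size and 50% cutoff are assumptions for illustration), a block contributes to reserves only if its simulated coal proportion exceeds the chosen threshold:

import numpy as np

# Hypothetical simulated coal proportions on a 3D block grid (nx, ny, nz),
# e.g. one realization from sequential indicator simulation.
rng = np.random.default_rng(0)
coal_proportion = rng.beta(2, 2, size=(50, 40, 20))

block_volume = 50.0 * 50.0 * 5.0      # m^3 per block (assumed block size)
cutoff = 0.5                          # coal-proportion threshold for recoverability (assumed)

recoverable = coal_proportion >= cutoff
resources = coal_proportion.sum() * block_volume                # total in-place coal volume
reserves = coal_proportion[recoverable].sum() * block_volume    # coal volume in recoverable blocks

print(f"resources: {resources:.3e} m^3, reserves: {reserves:.3e} m^3 "
      f"({recoverable.mean():.0%} of blocks recoverable)")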
Abstract:
The objective of industrial crystallization is to obtain a crystalline product that has the desired crystal size distribution, mean crystal size, crystal shape, purity, and polymorphic and pseudopolymorphic form. Effective control of product quality requires an understanding of the thermodynamics of the crystallizing system and of the effects of the operating parameters on the crystalline product properties. Therefore, obtaining reliable in-line information about crystal properties and supersaturation, which is the driving force of crystallization, would be very advantageous. Advanced techniques, such as Raman spectroscopy, attenuated total reflection Fourier transform infrared (ATR FTIR) spectroscopy, and in-line imaging, offer great potential for obtaining reliable information during crystallization, and thus a better understanding of the fundamental mechanisms (nucleation and crystal growth) involved. In the present work, the relative stability of anhydrate and dihydrate carbamazepine in mixed solvents containing water and ethanol was investigated. The kinetics of the solvent-mediated phase transformation of the anhydrate to the hydrate in the mixed solvents was studied using an in-line Raman immersion probe. The effects of the operating parameters, in terms of solvent composition, temperature and the use of certain additives, on the phase transformation kinetics were explored. Comparison of the off-line measured solute concentration and the solid-phase composition measured by in-line Raman spectroscopy allowed the identification of the fundamental processes during the phase transformation. The effects of thermodynamic and kinetic factors on the anhydrate/hydrate phase of carbamazepine crystals during cooling crystallization were also investigated. The effect of certain additives on the batch cooling crystallization of potassium dihydrogen phosphate (KDP) was investigated. The crystal growth rate of a certain crystal face was determined from images taken with an in-line video microscope. An in-line image processing method was developed to characterize the size and shape of the crystals. ATR FTIR spectroscopy and a laser reflection particle size analyzer were used to study the effects of cooling modes and seeding parameters on the final crystal size distribution of an organic compound, C15. Based on the results obtained, an operating condition was proposed that gives improved product properties in terms of increased mean crystal size and narrower size distribution.
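The in-line image-processing method is not detailed in the abstract; a minimal sketch of the general approach, assuming OpenCV and a hypothetical grayscale frame from the in-line video microscope, might look as follows:

import cv2

# Hypothetical frame from the in-line video microscope (grayscale).
frame = cv2.imread("crystal_frame.png", cv2.IMREAD_GRAYSCALE)

# Segment crystals from the background (Otsu threshold; polarity is an assumption).
_, binary = cv2.threshold(frame, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Find crystal outlines and compute simple size/shape descriptors per crystal.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    area = cv2.contourArea(c)
    if area < 20:            # ignore noise specks (arbitrary cutoff)
        continue
    (_, _), (w, h), _ = cv2.minAreaRect(c)           # tight bounding box of the crystal
    aspect_ratio = max(w, h) / max(min(w, h), 1e-6)  # simple shape descriptor
    print(f"area={area:.1f} px^2, length={max(w, h):.1f} px, aspect ratio={aspect_ratio:.2f}")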
Abstract:
The topic of this master's thesis was to implement software for collecting signal-quality indicators from the exchange terminal equipment of the Nokia GSM transmission network. The collection software complemented the statistics system of the base station controller so that signal-quality indicators can now be collected centrally from the transmission network of the entire base station system. The signal-quality indicators are based on the bit errors observed on the connection between two transmission devices. The terminal devices from which the indicators are collected can be located either in the base station controller or in a second-generation transcoder submultiplexer. A separate measurement type has been implemented in the statistics system for each case. The statistics system of the base station controller consists of a measurement management interface, a centralized measurement part and a distributed part. The collected data is transferred from the centralized part of the statistics system to the network management system for post-processing. This thesis also presents the sub-areas of post-processing, which are: removal of corrupted samples, aggregation, forecasting, estimation of missing samples, and presentation of the measurement results.
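The post-processing steps listed above (removal of corrupted samples, aggregation and estimation of missing samples) are generic time-series operations; a minimal illustrative sketch with pandas, using invented file, column and interval names, could be:

import pandas as pd

# Hypothetical bit-error counts collected per 15-minute measurement interval.
samples = pd.read_csv("bit_error_samples.csv", parse_dates=["interval_start"],
                      index_col="interval_start")

# 1) Remove corrupted samples, here flagged by a 'suspect' marker column (assumed name).
clean = samples[samples["suspect"] == 0]["errored_seconds"]

# 2) Estimate missing samples by interpolating over short gaps in the 15-minute grid.
clean = clean.asfreq("15min").interpolate(limit=4)

# 3) Aggregate the data into hourly and daily summaries for reporting.
hourly = clean.resample("1h").sum()
daily = clean.resample("1D").agg(["sum", "max"])
print(daily.head())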
Abstract:
The aim of this master's thesis was to investigate various further-processing options for increasing the value of the product range of a large pine sawmill. In addition, the integration of further processing into the sawmill's production process was to be examined. The background of the work is, on the one hand, the changing wood product markets and, on the other hand, the declining quality of the raw material. Both factors negatively affect the profitability of a traditional pine sawmill. Integrating further processing into the sawn timber process yields savings in production costs when the whole process from log to finished product is considered. The yield from the raw material can also be increased through integration. Further processing can be integrated into the sawn timber process by sorting the raw material at different stages of the process, by on-line processing, and economically. Raw material sorting within the sawn timber process can be performed on logs and on sawn timber. The sorting criteria can be the properties of the wood and the dimensions and grade of the sawn timber. Of current technologies, X-ray scanning and machine vision can be used for sorting. On-line processing means processing that is directly coupled to the sawn timber process, eliminating unnecessary process steps and generating savings. A prerequisite for on-line processing is some degree of raw material sorting, e.g. by length. Economically integrated processing means that the processing plant aims at a zero result and the added value of processing is returned to the price of the raw material supplied by the sawmill. Within the company, such an arrangement removes unnecessary debate about raw material transfer prices and thus frees resources for more productive work. Screening of different forms of further processing and of ways to exploit the properties of the wood identified one form of processing that can raise the value of the pine sawmill's product range. The result of the work was an investment proposal for the production of blank bars for the window industry. The exploitable properties of the raw material are the natural durability of pine heartwood and the average knot spacing. Sorting is done from the middle logs, which are the least profitable part of the pine stem to saw. The blank bar process makes use of machine vision and finger-jointing technology. The integration of further processing into the sawn timber process is implemented by building an on-line processing plant and by applying X-ray technology to raw material selection.
Abstract:
This thesis examines how to carry out design work for welding-related product development and how to develop welding in a workshop. Companies know their own manufacturing problems, but starting development work is often difficult, or it is never started at all. The smallest companies often lack both the resources and the expertise to develop new production methods and products. The thesis gives advice on how to lower the threshold for starting development work and helps to structure the design task. Product design is a demanding task, because it fixes the functionality of the product and most of its costs. Designers must be familiar with different manufacturing methods and with manufacturing-friendly design so that they can take manufacturability and assembly aspects into account. A good welded product contains as little weld as possible, makes use of modularization and standardization, and uses methods that replace welding. The thesis presents a "Design For Welding" model developed for manufacturing-friendly design, which goes through the special characteristics of a welded structure from the design point of view. The purpose of manufacturing-friendly design is to reduce order-specific design, welding, post-weld treatment and machining.
Abstract:
The dissertation is based on four articles dealing with the purification of recalcitrant lignin-containing waters. Lignin, a complicated substance that is recalcitrant to most treatment technologies, seriously hinders waste management in the pulp and paper industry. Therefore, lignin is studied here using wet oxidation (WO) as a process method for its degradation. Special attention is paid to the improvement in biodegradability and the reduction of lignin content, since these are of particular importance for any subsequent biological treatment. In most cases wet oxidation is used not as a complete mineralization method but as a pre-treatment in order to eliminate toxic components and to reduce the high level of organics produced. The combination of wet oxidation with a biological treatment can be a good option due to its effectiveness and its relatively low technology cost. The literature part gives an overview of Advanced Oxidation Processes (AOPs). A hot oxidation process, wet oxidation, is investigated in detail and is the AOP used in the research. The background and main principles of wet oxidation, its industrial applications, the combination of wet oxidation with other water treatment technologies, the principal reactions in WO, and key aspects of modelling and reaction kinetics are presented. Also given are the composition of wood and a characterization of lignin (chemical composition, structure and origin), lignin-containing waters, lignin degradation and reuse possibilities, and purification practices for lignin-containing waters. The aim of the research was to investigate the effect of the operating conditions of WO, such as temperature, partial pressure of oxygen, pH and initial concentration of the wastewater, on the efficiency, and to enhance the process and estimate optimal conditions for WO of recalcitrant lignin waters. Two different waters were studied (a lignin water model solution and debarking water from the paper industry) to give conditions as appropriate as possible. Because of the great importance of reusing and minimizing industrial residues, further research was carried out using the residual ash of an Estonian power plant as a catalyst in wet oxidation of lignin-containing water. Developing a kinetic model that includes parameters such as TOC in the prediction makes it possible to estimate the amount of emerging inorganic substances (the degradation rate of the waste) and not only the decrease of COD and BOD. The degradation target compound, lignin, is included in the model through its COD value (COD_lignin). Such a kinetic model can be valuable in developing WO treatment processes for lignin-containing waters, or for other wastewaters containing one or more target compounds. In the first article, wet oxidation of "pure" lignin water was investigated as a model case with the aim of degrading lignin and enhancing the biodegradability of the water. The experiments were performed at various temperatures (110-190°C), partial oxygen pressures (0.5-1.5 MPa) and pH values (5, 9 and 12). The experiments showed that increasing the temperature notably improved the process efficiency: 75% lignin reduction was detected at the lowest temperature tested, and lignin removal improved to 100% at 190°C. The effect of temperature on the COD removal rate was smaller but clearly detectable; 53% of the organics were oxidized at 190°C. The effect of pH was seen mostly in lignin removal: increasing the pH enhanced the lignin removal efficiency from 60% to nearly 100%. A good biodegradability ratio (over 0.5) was generally achieved.
The aim of the second article was to develop a mathematical model for "pure" lignin wet oxidation using lumped characteristics of the water (COD, BOD, TOC) and the lignin concentration. The model agreed well with the experimental data (R2 = 0.93 at pH 5 and 12), and the concentration changes during wet oxidation followed the experimental results adequately. The model also correctly showed the trend of the biodegradability (BOD/COD) changes. In the third article, the purpose of the research was to estimate optimal conditions for wet oxidation of debarking water from the paper industry. The WO experiments were performed at various temperatures, partial oxygen pressures and pH values. The experiments showed that lignin degradation and organics removal are affected remarkably by temperature and pH. 78-97% lignin reduction was detected under the different WO conditions. An initial pH of 12 caused faster removal of the tannin/lignin content, but an initial pH of 5 was more effective for the removal of total organics, represented by COD and TOC. Most of the decrease in the organic substance concentrations occurred in the first 60 minutes. The aim of the fourth article was to compare the behaviour of two reaction kinetic models, based on experiments of wet oxidation of industrial debarking water under different conditions. The simpler model took into account only the changes in COD, BOD and TOC; the advanced model was similar to the model used in the second article. Comparing the results of the models, the second model was found to be more suitable for describing the kinetics of wet oxidation of debarking water. The significance of the reactions involved was compared on the basis of the model: for instance, lignin degraded first to other chemically oxidizable compounds rather than directly to biodegradable products. Catalytic wet oxidation (CWO) of lignin-containing waters is briefly presented at the end of the dissertation. Two completely different catalysts were used: a commercial Pt catalyst and waste power plant ash. CWO showed good performance: using 1 g/L of residual ash gave lignin removal of 86% and COD removal of 39% at 150°C (a lower temperature and pressure than with WO). It was noted that the ash catalyst caused a remarkable lignin removal rate already during the pre-heating, at 'zero' time, when 58% of the lignin was degraded. In general, wet oxidation is recommended not as a complete mineralization method but as a pre-treatment phase to eliminate toxic or poorly biodegradable components and to reduce the high level of organics. Biological treatment is an appropriate post-treatment method, since easily biodegradable organic matter remains after the WO process. The combination of wet oxidation with subsequent biological treatment can be an effective option for the treatment of lignin-containing waters.
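The lumped kinetic model itself is not reproduced in the abstract; the sketch below only illustrates the general form of such a model: coupled first-order rate equations for the lignin COD, the other oxidizable COD and the biodegradable (BOD) fraction, with invented rate constants and initial values, integrated with SciPy.

import numpy as np
from scipy.integrate import solve_ivp

# Illustrative lumped wet-oxidation scheme (not the thesis model):
# lignin COD -> other oxidizable COD -> biodegradable fraction (BOD) -> end products.
k1, k2, k3 = 0.030, 0.015, 0.008   # invented first-order rate constants, 1/min

def rates(t, y):
    cod_lignin, cod_other, bod = y
    return [-k1 * cod_lignin,
            k1 * cod_lignin - k2 * cod_other,
            k2 * cod_other - k3 * bod]

y0 = [800.0, 400.0, 100.0]          # assumed initial concentrations, mg O2 / L
sol = solve_ivp(rates, (0.0, 120.0), y0, t_eval=np.linspace(0, 120, 7))

for t, lig, other, bod in zip(sol.t, *sol.y):
    total_cod = lig + other + bod
    print(f"t={t:5.1f} min  COD={total_cod:6.1f}  BOD/COD={bod / total_cod:.2f}")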
Abstract:
In this master's thesis, the fatigue strength of a longitudinally HF-welded structural hollow section was determined. The aim of the work was to compare the effect of different post-treatments and of two different materials on the fatigue strength. The test materials were Ruukki double grade and Optim 700 Plus MH. Eurocode 3 and the IIW specify a fatigue class of 125-140 for the attachment welds, with an S-N curve slope of m = 3. The test results were compared with these predefined fatigue classes. Ruukki Metals Oy supplied the test material for two kinds of fatigue tests, with which the fatigue strength could be determined. The first test was performed by vibration, exploiting the natural frequency of the structural hollow section, which gives a stress ratio of -1. The second fatigue test was performed on specimens cut from the hollow sections, with the longitudinal HF weld in the middle of the specimen. This test was performed under pulsating tension at a stress ratio of 0.1. The fatigue tests showed that the weld fulfils the requirements set for it, because the fractures occurred in the base material. In that case the test results are compared with the Eurocode 3 base material fatigue class 160, with an S-N curve slope of m = 5. Based on all the test results, a mean fatigue class of 185 was obtained at a 95% probability level. The study produced valuable information on the fatigue strength of the two materials; however, the material was not found to have an effect on the strength. The different post-treatments, i.e. planing off the metal flash on the root side or annealing the longitudinal weld seam, did not affect the fatigue strength either. In the tests, the fatigue fracture nucleated from initial defects on the surface of the hollow section, so the handling of the tubes after manufacturing could be improved.
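For context, the fatigue classes quoted above follow the Eurocode 3 / IIW convention in which the FAT value is the characteristic stress range at two million cycles, so the corresponding S-N line can be written roughly as

N = 2 \times 10^{6} \left( \frac{\Delta\sigma_{C}}{\Delta\sigma} \right)^{m}

with m = 3 for the welded detail classes 125-140 and m = 5 for the base-material class 160 used for comparison here.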
Abstract:
Ionic liquids (ILs) have recently been studied with accelerating interest for use as a deconstruction/fractionation, dissolution or pretreatment processing method for lignocellulosic biomass. ILs are usually utilized in combination with heat. Regarding lignocellulosic recalcitrance toward fractionation and IL utilization, most studies concern IL utilization in the biomass fermentation process prior to the enzymatic hydrolysis step. It has been demonstrated that IL pretreatment gives more efficient hydrolysis of the biomass polysaccharides than enzymatic hydrolysis alone. Both cellulose and lignin are very resistant towards fractionation and even dissolution methods. As an example, softwood, hardwood and grass-type plant species have different types of lignin structures, with the result that softwood lignin (in which guaiacyl lignin dominates) is the most difficult to solubilize or chemically disrupt. In addition to the known conventional biomass processing methods, several ILs have also been found to efficiently dissolve either cellulose and/or wood samples; different ILs are suitable for different purposes. An IL treatment of wood usually results in a non-fibrous pulp, in which lignin is not efficiently separated and the wood components are selectively precipitated, as cellulose is not soluble or degradable in ionic liquids under mild conditions. Nevertheless, new ILs capable of rather good fractionation performance have recently emerged. The capability of an IL to dissolve or deconstruct wood or cellulose depends on several factors (e.g. sample origin, the particle size of the biomass, mechanical treatments such as pulverization, the initial biomass-to-IL ratio, the water content of the biomass, possible impurities of the IL, reaction conditions, temperature, etc.). The aim of this study was to obtain (fermentable) saccharides and other valuable chemicals from wood by a combined heat and IL treatment. Thermal treatments alone contribute to the degradation of polysaccharides (150 °C alone is said to cause polysaccharide degradation), so temperatures below that should be used if the research interest lies in the effectiveness of the IL. On the other hand, the efficiency of the IL treatment can also be enhanced by combining it with other treatment methods (e.g. microwave heating). Samples of spruce, pine and birch sawdust were treated with either 1-ethyl-3-methylimidazolium chloride, Emim Cl, or 1-ethyl-3-methylimidazolium acetate, Emim Ac (or with deionized water for comparison) at various temperatures (with the focus between 80 and 120 °C). Samples were withdrawn at fixed time intervals (the treatment times of main interest lay between 0 and 100 hours). Duplicate experiments were performed. The selected mono- and disaccharides, as well as their known degradation products 5-hydroxymethylfurfural (5-HMF) and furfural, were analyzed by capillary electrophoresis (CE) and high-performance liquid chromatography (HPLC). Initially, GC and GC-MS were also utilized. Galactose, glucose, mannose and xylose were the main monosaccharides present in the wood samples exposed to ILs at elevated temperatures; in addition, furfural and 5-HMF were detected, and the quantities of the latter two naturally increased with heating time and with the IL:wood ratio.
Abstract:
The ash generated in energy production can vary greatly in quality, and it is challenging to find an unambiguous cause-and-effect relationship for the quality variation. Environmental protection legislation and economic interests drive ash producers to look for suitable utilization applications for the ash, and therefore the factors affecting ash quality and its suitability for utilization need to be investigated. This master's thesis studied ash generated in small combustion plants of less than 50 MW. The aim was to find out how the suitability of the ash for utilization can be influenced. The study examined the effect of fuel composition, combustion conditions and ash post-treatment on the concentrations and leachabilities of the harmful substances in the ash. The work also included a review of previously performed ash analyses as well as ash tests at two plants belonging to the target group. It was found that the concentrations and leachabilities of harmful substances are on average higher in fly ash than in bottom ash, and that the harmful substances in grate boiler ashes typically exceed the utilization limit values more often than those in bubbling fluidized bed boiler ashes. In addition, ash from forest residue chips was found to be suitable for utilization more often than ash from stemwood chips.
Abstract:
In this thesis, the sludges of Southeastern Finland, the companies that produce sludge, and the treatment methods currently in use or still under development were surveyed. 85% of industrial waste sludge comes from the forest industry. The sludge from municipal wastewater treatment plants is mostly used as a raw material for bioplants, while the sludge from the forest industry is mostly incinerated. New circular methods increase the recycling value of the waste by creating new products, but they still lack a full-scale plant. The political pressure from Europe and the policies pursued by the government of Finland will drive the circular economy forward and thus improve the processing options for waste slurries. This work is divided into two parts: the first contains the survey, and the second the experimental work and the operational methods proposed on the basis of the survey. In the experimental part, wet hard sheet waste sludge was dewatered with a shaking filter, and applications for the waste sludge of a cellulose factory were considered. The results show that the wet hard sheet waste sludge can be dewatered to a total solids content high enough for the intended use. Also, the cellulose waste sludge has too high a Cd content in almost all of the batches for it to be used for land improvement.
Abstract:
Currently, laser scribing is a growing material processing method in industry. The benefits of laser scribing technology are being studied, for example, for improving the efficiency of solar cells. Because of the high quality requirements of the fast scribing process, it is important to monitor the process in real time to detect possible defects as they occur. However, there is a lack of studies on real-time monitoring of laser scribing. Commonly used monitoring methods developed for other laser processes, such as laser welding, are too slow, and existing applications cannot be transferred to fast laser scribing monitoring. The aim of this thesis is to find a method for monitoring laser scribing with a high-speed camera and to evaluate the reliability and performance of the developed monitoring system with experiments. The laser used in the experiments was an IPG ytterbium pulsed fiber laser with a maximum average power of 20 W, and the scan head optics were a Scanlab Hurryscan 14 II with an f100 telecentric lens. The camera was connected to the laser scanner with a camera adapter to follow the laser process. A powerful, fully programmable industrial computer was chosen for executing the image processing and analysis. Algorithms for defect analysis, based on particle analysis, were developed using LabVIEW system design software. The performance of the algorithms was evaluated by analyzing a static image of the scribing line with a resolution of 960×20 pixels; the maximum analysis speed achieved was 560 frames per second. The reliability of the algorithm was evaluated by imaging a scribing path containing a variable number of defects at 2000 mm/s with the laser turned off, at an image analysis speed of 430 frames per second. The experiment was successful: the algorithms detected all defects on the scribing path. The final monitoring experiment was performed during a laser process. However, it was challenging to get the active laser illumination to work with the laser scanner due to the physical dimensions of the laser lens and the scanner. For reliable defect detection, the illumination system needs to be replaced.
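The LabVIEW particle-analysis algorithms are not listed in the abstract. As a rough illustration of the same idea (thresholding a 960×20 frame of the scribing line and treating small isolated blobs as defect candidates; not the monitoring system described here), a comparable step could be written with OpenCV in Python:

import cv2

# Hypothetical 960x20 grayscale frame of the scribing line from the high-speed camera.
frame = cv2.imread("scribe_frame.png", cv2.IMREAD_GRAYSCALE)

# An intact scribe should appear as one continuous track.
# Threshold polarity and value are assumptions for illustration.
_, mask = cv2.threshold(frame, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Particle analysis: label connected regions and flag small, isolated blobs as defect candidates.
n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
min_track_area = 200                       # assumed minimum area of the continuous track
defects = [i for i in range(1, n_labels)   # label 0 is the background
           if stats[i, cv2.CC_STAT_AREA] < min_track_area]

print(f"{len(defects)} defect candidate(s) found in this frame")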
Abstract:
Complex networks have recently attracted a significant amount of research attention due to their ability to model real-world phenomena. One important problem often encountered is to limit diffusive processes spreading over the network, for example mitigating the spread of a pandemic disease or a computer virus. A number of problem formulations have been proposed that aim to solve such problems based on desired network characteristics, such as maintaining the largest network component after node removal. The recently formulated critical node detection problem aims to remove a small subset of vertices from the network such that the residual network has minimum pairwise connectivity. Unfortunately, the problem is NP-hard, and the number of constraints is cubic in the number of vertices, making very large-scale problems impossible to solve with traditional mathematical programming techniques. Many approximation strategies, such as dynamic programming and evolutionary algorithms, are likewise unusable for networks that contain thousands to millions of vertices. A computationally efficient and simple approach is required in such circumstances, but none currently exists. In this thesis, such an algorithm is proposed. The methodology is based on a depth-first search traversal of the network and a specially designed ranking function that considers information local to each vertex. Due to the variety of network structures, a number of characteristics must be taken into consideration and combined into a single rank that measures the utility of removing each vertex. Since removing vertices sequentially changes the network structure, an efficient post-processing algorithm is also proposed to quickly re-rank vertices. Experiments on a range of common complex network models with varying numbers of vertices are considered, in addition to real-world networks. The proposed algorithm, DFSH, is shown to be highly competitive and often outperforms existing strategies such as Google PageRank for minimizing pairwise connectivity.
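The DFSH ranking function is not specified in the abstract; the sketch below is only a simplified stand-in showing the overall shape of such a heuristic: a depth-first enumeration of the vertices, a local rank (here just degree and clustering, which are assumptions, not the thesis rank), removal of the top-ranked vertices, and evaluation of the residual pairwise connectivity.

import networkx as nx

def pairwise_connectivity(G):
    """Number of connected vertex pairs in the (residual) network."""
    return sum(n * (n - 1) // 2 for n in (len(c) for c in nx.connected_components(G)))

def rank_vertices(G):
    """Simplified local rank over vertices enumerated in DFS order (not the DFSH rank)."""
    clustering = nx.clustering(G)
    dfs_order = list(nx.dfs_preorder_nodes(G))
    # Higher degree and lower clustering suggest a vertex that bridges many pairs.
    return sorted(dfs_order, key=lambda v: G.degree(v) * (1.0 - clustering[v]), reverse=True)

def remove_critical_nodes(G, k):
    H = G.copy()
    for v in rank_vertices(G)[:k]:
        H.remove_node(v)
    return H

if __name__ == "__main__":
    G = nx.barabasi_albert_graph(1000, 3, seed=1)   # example scale-free network
    residual = remove_critical_nodes(G, k=20)
    print("pairwise connectivity before:", pairwise_connectivity(G))
    print("pairwise connectivity after :", pairwise_connectivity(residual))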