19 results for A Modification of de la Escalera's Algorithm
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
Ceramics are widely used in industrial applications due to their advantageous thermal and mechanical stability. Corrosion of ceramics is a serious problem that results in significant costs. Coating is one method of mitigating the adverse effects of corrosion. Several thin film deposition processes are available, such as sol-gel, Physical Vapour Deposition (PVD) and Chemical Vapour Deposition (CVD). One of the CVD processes, Atomic Layer Deposition (ALD), stands out for its excellent controllability, accuracy and wide process capability. The most commonly mentioned disadvantage of this method is its slowness, which is partly compensated by its capability of processing large areas at once. Several factors affect the ALD process, including temperature, the grade of precursors, pulse-purge times and flux of precursors, as well as the substrate used. Wrongly chosen process factors may cause loss of self-limiting growth and thus non-uniformities in the deposited film. Porous substrates require longer pulse times than flat surfaces. The goal of this thesis was to examine the effects of ALD films on the surface properties of a porous ceramic material. The analyses applied covered permeability, bubble point pressure and isoelectric point. In addition, the effects of the films on the corrosion resistance of the substrate in an aqueous environment were investigated. After exposure to different corrosive media, the ceramics and the collected liquid samples were analysed both mechanically and chemically. Visual and compositional differences between the exposed, coated ceramics and the untreated, uncoated ones were analysed by scanning electron microscopy. Two ALD film materials, aluminium oxide and titanium dioxide, were deposited on the ceramic substrate using different pulse times. The results for both film materials indicated that the surface properties of the ceramic material can be modified to some extent by the ALD method.
The effect of the titanium oxide film on the corrosion resistance of the ceramic samples was observed to be fairly small regardless of the pulse time.
Abstract:
In the Russian Wholesale Market, electricity and capacity are traded separately. Capacity is a special good, the sale of which obliges suppliers to keep their generating equipment ready to produce the quantity of electricity indicated by the System Operator. Capacity trading was established to maintain reliable and uninterrupted delivery of electricity in the wholesale market. The price of capacity reflects the constant investments in the construction, modernization and maintenance of power plants. Capacity sales thus create favorable conditions for attracting investment in the energy sector, because they guarantee investors a return on their investments.
Abstract:
The first objective of this study was to find reliable laboratory methods for predicting the effect of enzymes on the specific energy consumption and fiber properties of TMP pulp. The second was to use interactive "Knowledge discovery in databases" software to find enzymes or other additives that could help reduce the energy consumption of the TMP process. The literature part of the work presents the chemical composition of wood and the enzymes that are active on its main components. The results of previous research on energy reduction in the TMP process with enzymes are also highlighted, and the main principles of knowledge discovery are included as well. The experimental part of the work describes the methods, in which standard-size chips, crushed chips and fiberized spruce chips (fiberized pulp) were used. Different types of enzymatic treatment with different dosages and treatment times were tested during the experiments. Pectinase, endoglucanase and a mixture of enzymes were used to evaluate the reliability of the methods. The fines content and fiber length of the pulp were measured and used as evidence of the enzymes' effect. The refining method with the "Bauer" laboratory disc refiner was evaluated as not highly reliable: it could not provide high repeatability of results because of uncontrolled feeding capacity and refining consistency. The refining method with the Valley refiner did not have many variables and showed stable, repeatable results in energy saving. The results of the experiments showed that efficient enzyme impregnation is probably the main requirement when applying enzymes for energy saving. During the work, the fiberized pulp showed high accessibility to enzymatic treatment and liquid penetration without special impregnating equipment, because fiberized pulp has a larger wood surface area, and thereby the contact area between the enzymatic solution and the wood is also larger.
Treatment of standard-size and crushed chips without special impregnation of the enzymatic solution was evaluated as not efficient and did not show visible, repeatable decreases in energy consumption. It was therefore concluded that using fiberized pulp and the Valley refiner to measure the effectiveness of enzymes in decreasing SEC is more suitable than using normal-size or crushed chips with the "Bauer" refiner. Endoglucanase at a 5 kg/t dosage decreased energy consumption by about 20%. A mixture of enzymes at a 1.5 kg/t dosage decreased energy consumption during refining by about 15%. Pectinase at different dosages and treatment times did not show a significant effect on energy consumption. The results of knowledge discovery in databases showed a blend of xylanase, cellulase and pectinase to be the most promising for energy reduction in the TMP process. Surfactants were determined to be effective additives for energy saving with enzymes.
Abstract:
Biorefining is defined as sustainable conversion of biomass into marketable products and energy. Forests cover almost one third of earth's land area, and account for approximately 40% of the total annual biomass production. In forest biorefining, the wood components are, in addition to the traditional paper and board products, converted into chemicals and biofuels. The major components in wood are cellulose, hemicelluloses, and lignin. The main hemicellulose in softwoods, which are of interest especially for the Nordic forest industry, is O-acetyl galactoglucomannan (GGM). GGM can be isolated on an industrial scale from the waste waters of the mechanical pulping process, but it is not yet utilized industrially. In order to attain desired properties of GGM for specific end-uses, chemical and enzymatic modifications can be performed. Regioselective modifications of GGM and other galactose-containing polysaccharides were carried out by oxidation, and by combining oxidation with subsequent derivatization of the formed carbonyl or carboxyl groups. Two different pathways were investigated: activation of the C-6 positions in different sugar units by TEMPO-mediated oxidation, and activation of the C-6 position of only the galactose units by oxidation catalyzed by the enzyme galactose oxidase (GO). The activated sites were then selectively derivatized: TEMPO-oxidized GGM by a carbodiimide-mediated reaction forming amides, and GO-oxidized GGM by indium-mediated allylation introducing double or triple bonds into the molecule. In order to better understand the reaction, and to develop a MALDI-TOF-MS method for characterization of regioselectively allylated GGM, α-D-galactopyranoside and raffinose were used as model compounds. All reactions were done in aqueous media. To investigate the applicability of the modified polysaccharides for, e.g., cellulose surface functionalization, their sorption onto pulp fibres was studied.
Carboxylation affects the sorption tendency significantly: a higher degree of oxidation leads to lower sorption. By controlling the degree of oxidation of the polysaccharides and the ionic strength of the sorption media, high degrees of sorption of carboxylated polysaccharides onto cellulose could nevertheless be obtained. Anionic polysaccharides were used as templates during laccase-catalyzed polymerization of aniline, offering a green, chemo-enzymatic route for the synthesis of conducting polyaniline (PANI) composite materials. Different polysaccharide templates, such as native GGM, TEMPO-oxidized GGM, naturally anionic κ-carrageenan (κ-CGN), and nanofibrillated cellulose produced by TEMPO-oxidation, were assessed. The conductivity of the synthesized polysaccharide/PANI biocomposites varies depending on the polysaccharide template; κ-CGN, the anionic polysaccharide with the lowest pKa value, produces the polysaccharide/PANI biocomposites with the highest conductivity. The presented derivatization, sorption, and polymerization procedures open new application windows for polysaccharides such as spruce GGM. The modified polysaccharides and the conducting biocomposites produced offer potential applications in biosensors, electronic devices, and tissue engineering.
Abstract:
Increasing demand for, and shortage of, energy resources and clean water due to rapid industrial development, population growth and long-term droughts have become a worldwide issue. As a result, global warming, long-term droughts and pollution-related diseases are becoming more and more serious. Traditional technologies, such as precipitation, neutralization, sedimentation, filtration and waste immobilization, cannot prevent pollution; they can only contain the waste chemicals after they have been emitted. Meanwhile, most of these treatments cannot thoroughly degrade the contaminants and may release toxic secondary pollutants into the ecosystem. Heterogeneous photocatalysis, as an innovative wastewater treatment technology, attracts much attention because it is able to generate highly reactive transitory species for the total degradation of organic compounds, water pathogens and disinfection by-products. Semiconductors as photocatalysts have demonstrated their efficiency in degrading a wide range of organics into readily biodegradable compounds, eventually mineralizing them to innocuous carbon dioxide and water. However, the efficiency of photocatalysis is limited, and it is therefore crucial to modify photocatalysts to enhance their photocatalytic activity. In this thesis, first of all, two literature reviews are conducted. A survey of materials for photocatalysis was carried out in order to summarize the properties and applications of the photocatalysts that have been developed in this field, and the strategies for improving photocatalytic activity are explicitly discussed. Furthermore, all the raw materials and chemicals used in this work are listed, and the specific experimental processes and characterization methods are described. The synthesis methods of the different photocatalysts are depicted step by step.
In these cases, different modification strategies were used to enhance the efficiency of the photocatalysts in degrading organic compounds (Methylene Blue or Phenol). For each case, photocatalytic experiments were carried out to determine the photocatalytic activity. The photocatalytic experiments were designed, and their procedures are explained and illustrated in detail. Moreover, the experimental results are presented and discussed; all the findings are demonstrated in detail and discussed case by case. Eventually, the mechanisms behind the improvements in photocatalytic activity are clarified by characterization of the samples and analysis of the results. In conclusion, the photocatalytic activities of the selected semiconductors were successfully enhanced by choosing appropriate strategies for the modification of the photocatalysts.
Abstract:
In this thesis, the basic structure and operational principles of single- and multi-junction solar cells are considered and discussed. The main properties and characteristics of solar cells are briefly described. Modified equipment for measuring the quantum efficiency of multi-junction solar cells is presented, and the results of experimental research on single- and multi-junction solar cells are described.
Abstract:
Poly-L-lactide (PLLA) is a widely used sustainable and biodegradable alternative to synthetic, non-degradable plastic materials in the packaging industry. However, its processing properties are not always optimal, e.g. its melt strength at higher temperatures is insufficient for extrusion coating processes. This thesis reports on research to improve the properties of a commercial PLLA grade (3051D from NatureWorks), to satisfy and extend end-use applications such as food packaging, by blending with modified PLLA. Adjustment of the processability by peroxide-initiated chain branching of commercial poly-L-lactide was evaluated. Several well-defined branched structures with four arms (sPLLA) were synthesized using pentaerythritol as a tetra-functional initiator. Finally, several block copolymers consisting of polyethylene glycol and PLLA (PEGLA) were produced to obtain a well-extruded material with improved heat sealing properties. Reactive extrusion of poly-L-lactide was carried out in the presence of 0.1, 0.3 and 0.5 wt% of various peroxides [tert-butyl-peroxybenzoate (TBPB), 2,5-dimethyl-2,5-(tert-butylperoxy)-hexane (Lupersol 101; LOL1) and benzoyl peroxide (BPO)] at 190 °C. The peroxide-treated PLLAs showed increased complex viscosity and storage modulus at lower frequencies, indicating the formation of branched or cross-linked architectures. The changes in material properties depended on the peroxide and on the concentration used. Gel fraction analysis showed that the peroxides afforded different gel contents; in particular, 0.5 wt% peroxide produced both an extremely high molar mass and a cross-linked structure, perhaps not well suited for further use in a blending step. The thermal behavior was somewhat unexpected, as the materials prepared with 0.5 wt% peroxide showed the highest ability for crystallization and cold crystallization despite substantial cross-linking. The peroxide-modified PLLA, i.e.
PLLA melt-extruded with 0.3 wt% of TBPB or LOL1 or with 0.5 wt% BPO, was added to linear PLLA in ratios of 5, 15 and 30 wt%. All blends showed increased zero shear viscosity, elastic nature (storage modulus) and shear sensitivity. All blends remained amorphous, although their annealability improved slightly. Extrusion coating on paperboard was conducted with PLLA and peroxide-modified PLLA blends (90:10). All blends were processable, but only PLLA with 0.3 wt% of LOL1 afforded a smooth, high-quality surface with improved line speed. Adhesion levels between fiber and plastic, as well as heat seal performance, were marginally reduced compared with pure 3051D. Water vapor transmission rate (WVTR) measurements of the blends containing LOL1 showed acceptable levels, only slightly lower than for comparable PLLA 3051D. A series of four-arm star-shaped poly-L-lactides (sPLLA) with different branch lengths was synthesized by ring opening polymerization (ROP) of L-lactide using pentaerythritol as initiator and stannous octoate as catalyst. The star-shaped polymers were blended with the linear resin and studied for their melt flow and thermal properties. Blends containing 30 wt% of low molecular weight sPLLA (Mw,total: 2500 g mol-1 and 15000 g mol-1) showed lower zero shear viscosity and significantly increased shear thinning, while slightly increasing the crystallization of the blend. The degree of crystallization increased significantly with the higher molecular weight sPLLA, so the star-shaped structure may play a role as a nucleating agent. PLLA-polyethylene glycol-PLLA triblock copolymers (PEGLA) with different PLLA block lengths were synthesized, and their applicability as blends with linear PLLA (3051D NatureWorks) was investigated with the intention of improving the heat-seal and adhesion properties of extrusion-coated paperboard.
PLLA-PEG-PLLA was obtained by ring opening polymerization (ROP) of L-lactide using PEG (molecular weight 6000 g mol-1) as initiator and stannous octoate as catalyst. The structures of the PEGLAs were characterized by proton nuclear magnetic resonance spectroscopy (1H-NMR). The melt flow and thermal properties of all PEGLAs and their blends were evaluated using dynamic rheology and differential scanning calorimetry (DSC). All blends containing 30 wt% of PEGLA showed slightly higher zero shear viscosity, higher shear thinning and increased melt elasticity (based on tan delta). Nevertheless, no significant changes in thermal properties were distinguished. High molecular weight PEGLAs were used on the extrusion coating line with 3051D without problems.
Abstract:
This thesis evaluates and verifies the Call Sequence Analysing Algorithm (CSA algorithm), designed for classifying calls. The goal of the algorithm is to group sufficiently similar calls for more detailed fault analysis. The thesis presents the main classes of machine learning algorithms and their typical differences, the data types characteristic of different classification processes, and the operating environments in which the implementation is designed to work. The CSA algorithm is fed message sequences consisting of network maintenance messages, and sequences with similar content are grouped together. The performance of the algorithm is evaluated with 94 hand-classified reference sequences, collected from an operational 3G network controller. By comparing two sequences, a mutual metric is formed for the pair: a distance describing the similarity of the sequences. This thesis focuses in particular on the Hamming distance. The sequences are grouped based on this distance. By varying the maximum accepted distance at which two sequences are counted as belonging to the same group, subgroups containing only similar sequences are produced. As the accepted distance grows, the number of misclassifications also grows. The hand-classified grouping serves as the reference for correct classification results. The accuracy of the CSA algorithm's classification is presented as a percentage of the target grouping, as a function of the maximum distance. The thesis shows that the Hamming distance, chosen as the distance attribute, is not suitable for classifying this data. Finally, a method and a tool are proposed that allow several different classifier algorithms to be tested with a fast development cycle.
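The grouping principle described above — compute a pairwise Hamming distance and place sequences in the same group when their distance stays below an accepted maximum — can be sketched as follows. This is a minimal illustration, not the CSA algorithm itself; the greedy representative-based grouping and all names are assumptions.

```python
def hamming(a, b):
    """Hamming distance: number of positions where two equal-length sequences differ."""
    if len(a) != len(b):
        raise ValueError("Hamming distance requires equal-length sequences")
    return sum(x != y for x, y in zip(a, b))

def group_by_distance(sequences, max_distance):
    """Greedily assign each sequence to the first group whose representative
    is within max_distance; otherwise start a new group."""
    groups = []  # list of (representative, members)
    for seq in sequences:
        for rep, members in groups:
            if len(seq) == len(rep) and hamming(seq, rep) <= max_distance:
                members.append(seq)
                break
        else:
            groups.append((seq, [seq]))
    return groups
```

Raising `max_distance` merges more sequences into the same group, which is exactly the trade-off the thesis measures: larger accepted distances produce fewer, coarser groups and more misclassifications.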
Abstract:
Coherent anti-Stokes Raman scattering (CARS) is a powerful method of laser spectroscopy with which significant successes have been achieved. However, the non-linear nature of CARS complicates the analysis of the measured spectra. The objective of this thesis is to develop a new phase retrieval algorithm for CARS. It utilizes the maximum entropy method and a new wavelet approach for the spectroscopic background correction of the phase function. The method was developed to be easily automated and usable on a large number of spectra of different substances. The algorithm was successfully tested on experimental data.
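The thesis's wavelet-based background correction is not reproduced here; as a stand-in, the SNIP-style clipping filter below illustrates the general idea of estimating a slowly varying spectroscopic background beneath sharp spectral features. The function name and parameters are illustrative only.

```python
def snip_baseline(y, iterations=24):
    """SNIP-style background estimate: at increasing half-widths p, clip each
    point down toward the mean of its two p-distant neighbours, so sharp peaks
    are flattened while the slowly varying background survives."""
    v = list(y)
    n = len(v)
    for p in range(1, iterations + 1):
        v = [
            min(v[i], (v[i - p] + v[i + p]) / 2) if p <= i < n - p else v[i]
            for i in range(n)
        ]
    return v
```

Subtracting the estimated baseline from the raw spectrum leaves the peak structure, which is the role background correction plays before phase retrieval.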
Abstract:
Simulation has traditionally been used for analyzing the behavior of complex real-world problems. Even though only some features of a problem are considered, simulation time tends to become quite high even for common simulation problems. Parallel and distributed simulation is a viable technique for accelerating such simulations. The success of parallel simulation depends heavily on the combination of the simulation application, the algorithm, and the simulation environment. In this thesis, a conservative parallel simulation algorithm is applied to the simulation of a cellular network application in a distributed workstation environment. The thesis presents a distributed simulation environment, Diworse, which is based on the use of networked workstations. The distributed environment is considered especially hard for conservative simulation algorithms due to the high cost of communication. In this thesis, however, the distributed environment is shown to be a viable alternative if the amount of communication is kept reasonable. The novel ideas of multiple message simulation and channel reduction enable efficient use of this environment for the simulation of a cellular network application. The distribution of the simulation is based on a modification of the well-known Chandy-Misra deadlock avoidance algorithm with null messages. The basic Chandy-Misra algorithm is modified by using the null message cancellation and multiple message simulation techniques. The modifications reduce the number of null messages and the time required for their execution, thus reducing the overall simulation time. The null message cancellation technique reduces the processing time of null messages, as an arriving null message cancels other non-processed null messages. The multiple message simulation technique forms groups of messages, as it simulates several messages before releasing the newly created messages.
If the message population in the simulation is sufficient, no additional delay is caused by this operation. A new technique for taking the simulation application into account is also presented: performance is improved by establishing a neighborhood for the simulation elements. The neighborhood concept is based on a channel reduction technique, where the properties of the application exclusively determine which connections are necessary when a certain accuracy of simulation results is required. The distributed simulation is also analyzed in order to determine the effect of the different elements in the implemented simulation environment. This analysis is performed using critical path analysis, which allows determination of a lower bound for the simulation time. In this thesis, critical times are computed for sequential and parallel traces. The analysis based on sequential traces reveals the parallel properties of the application, whereas the analysis based on parallel traces reveals the properties of the environment and the distribution.
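The null message cancellation described above can be sketched for a single input channel: a null message carries only a timestamp guarantee, so when a newer one arrives, older unprocessed null messages in the channel queue become redundant and can be dropped. This is a minimal sketch under an assumed message tuple `(timestamp, is_null, payload)`, not the Diworse implementation.

```python
from collections import deque

def receive(queue, msg):
    """Append msg to the channel queue; if msg is a null message, first drop
    unprocessed null messages with earlier-or-equal timestamps (cancellation).
    Real (non-null) messages are never dropped."""
    timestamp, is_null, payload = msg
    if is_null:
        # retain real messages, and any null message newer than the arrival
        retained = [m for m in queue if not m[1] or m[0] > timestamp]
        queue.clear()
        queue.extend(retained)
    queue.append(msg)
```

Because the newest null message subsumes the guarantee of all earlier ones, the receiver processes at most one pending null message per channel, which is the source of the time savings mentioned above.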
Abstract:
With the development of electronic devices, more and more mobile clients are connected to the Internet, and they generate massive amounts of data every day. We live in an age of "Big Data", generating data on the order of hundreds of millions of records every day. By analyzing this data and making predictions, we can produce better development plans. Unfortunately, traditional computation frameworks cannot meet this demand, which is why Hadoop was put forward. First, the thesis introduces the background and development status of Hadoop, compares MapReduce in Hadoop 1.0 with YARN in Hadoop 2.0, and analyzes their advantages and disadvantages. Because resource management is the core role of YARN, the thesis then studies the resource allocation module, including resource management, the resource allocation algorithm, the resource preemption model, and the whole resource scheduling process from requesting resources to completing the allocation. It also introduces and compares the FIFO Scheduler, Capacity Scheduler and Fair Scheduler. The main work done in this thesis is researching and analyzing the Dominant Resource Fairness (DRF) algorithm of YARN and putting forward a maximum-resource-utilization algorithm based on it; the thesis also provides a suggestion for improving unreasonable aspects of the resource preemption model. Emphasizing "fairness" during resource allocation is the core concept of the DRF algorithm of YARN. Because the cluster serves multiple users and multiple resource types, each user's resource request also spans multiple resources. The DRF algorithm divides a user's resources into the dominant resource and normal resources. For a user, the dominant resource is the one whose share of the cluster is highest among all requested resources; the others are normal resources. The DRF algorithm requires the dominant resource shares of all users to be equal.
But in cases where different users' dominant resource amounts differ greatly, emphasizing "fairness" is not suitable and cannot improve the resource utilization of the cluster. By analyzing these cases, this thesis puts forward a new allocation algorithm based on DRF. The new algorithm still takes "fairness" into consideration, but it is no longer the main principle; maximizing resource utilization is the main principle and goal. Comparing the results of DRF and of the new DRF-based algorithm shows that the new algorithm achieves higher resource utilization than DRF. In the last part of the thesis, a YARN environment is installed and the Scheduler Load Simulator (SLS) is used to simulate the cluster environment.
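The DRF policy described above — always give the next task to the user with the smallest dominant share — can be sketched as below. This follows the published DRF algorithm, not the thesis's modified version; the task-based allocation loop and all names are assumptions.

```python
def drf_allocate(capacity, demands, rounds):
    """DRF sketch: each round, allocate one task to the user with the
    smallest dominant share, until the cluster cannot fit that user's demand."""
    used = {r: 0 for r in capacity}          # resources consumed so far
    shares = {u: 0.0 for u in demands}       # dominant share per user
    alloc = {u: 0 for u in demands}          # tasks allocated per user
    for _ in range(rounds):
        u = min(shares, key=shares.get)      # user with the lowest dominant share
        d = demands[u]
        if any(used[r] + d[r] > capacity[r] for r in d):
            break                            # chosen user's demand no longer fits
        for r in d:
            used[r] += d[r]
        alloc[u] += 1
        # dominant share = largest fraction of any resource this user holds
        shares[u] = max(alloc[u] * d[r] / capacity[r] for r in d)
    return alloc
```

With a cluster of 9 CPUs and 18 GB of memory, and demands A = ⟨1 CPU, 4 GB⟩ and B = ⟨3 CPUs, 1 GB⟩, the loop reproduces the classic DRF outcome of 3 tasks for A and 2 for B, equalizing the dominant shares at 2/3. The thesis's modification replaces this equal-share objective with maximizing utilization.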
Abstract:
A rigorous unit operation model is developed for vapor membrane separation. The new model is able to describe temperature-, pressure-, and concentration-dependent permeation, as well as real fluid effects, in vapor and gas separation with hydrocarbon-selective rubbery polymeric membranes. Permeation through the membrane is described by a separate treatment of sorption and diffusion within the membrane. Chemical engineering thermodynamics is used to describe the equilibrium sorption of vapors and gases in rubbery membranes with equation of state models for polymeric systems; a new modification of the UNIFAC model is also proposed for this purpose. Various thermodynamic models are extensively compared in order to verify their ability to predict and correlate experimental vapor-liquid equilibrium data. The penetrant transport through the selective layer of the membrane is described with the generalized Maxwell-Stefan equations, which are able to account for the bulk flux contribution as well as the diffusive coupling effect. A method is described to compute and correlate binary penetrant-membrane diffusion coefficients from the experimental permeability coefficients at different temperatures and pressures. A fluid flow model for spiral-wound modules is derived from the conservation equations of mass, momentum, and energy. The conservation equations are presented in a discretized form using the control volume approach. A combination of the permeation model and the fluid flow model yields the desired rigorous model for vapor membrane separation. The model is implemented in an in-house process simulator, so vapor membrane separation may be evaluated as an integral part of a process flowsheet.
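The generalized Maxwell-Stefan description of penetrant transport mentioned above is commonly written in the following textbook form (the thesis's exact formulation may differ):

```latex
% Generalized Maxwell-Stefan equations: the driving force on species i
% (its chemical potential gradient) is balanced by friction against
% every other species j, which couples the diffusive fluxes.
-\frac{x_i}{RT}\,\nabla_{T,p}\,\mu_i
  \;=\; \sum_{\substack{j=1 \\ j \neq i}}^{n}
        \frac{x_j\,N_i \;-\; x_i\,N_j}{c_t\,D_{ij}}
```

Here \(x_i\) is the mole fraction, \(\mu_i\) the chemical potential, \(N_i\) the molar flux, \(c_t\) the total molar concentration, and \(D_{ij}\) the Maxwell-Stefan diffusivities; the membrane itself is treated as one of the \(n\) species. The cross terms \(x_i N_j\) are what capture the diffusive coupling effect referred to in the abstract.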
Abstract:
The parameter setting of a differential evolution algorithm must meet several requirements: efficiency, effectiveness, and reliability. Problems vary, and the solution of a particular problem can be represented in different ways. An algorithm most efficient in dealing with a particular representation may be less efficient with other representations. The development of differential evolution-based methods contributes substantially to research on evolutionary computing and global optimization in general. The objective of this study is to investigate the differential evolution algorithm, the intelligent adjustment of its control parameters, and its application. In the thesis, the differential evolution algorithm is first examined using different parameter settings and test functions. Fuzzy control is then employed to make the control parameters adaptive, based on the optimization process and expert knowledge. The developed algorithms are applied to training radial basis function networks for function approximation, with possible variables including the centers, widths, and weights of the basis functions, both with control parameters kept fixed and with parameters adjusted by a fuzzy controller. After the influence of the control variables on the performance of the differential evolution algorithm was explored, an adaptive version of the algorithm was developed, and differential evolution-based radial basis function network training approaches were proposed. Experimental results showed that the performance of the differential evolution algorithm is sensitive to parameter setting, and that the best setting is problem dependent. The fuzzy adaptive differential evolution algorithm relieves the user of the burden of parameter setting and performs better than versions using all-fixed parameters. Differential evolution-based approaches are effective for training Gaussian radial basis function networks.
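The non-adaptive starting point of such a study — a plain differential evolution loop with fixed control parameters F and CR — can be sketched as the common DE/rand/1/bin variant below. The fuzzy adaptation layer is not shown, and all parameter values are illustrative defaults, not the thesis's settings.

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=100, seed=0):
    """DE/rand/1/bin sketch: mutate with a scaled difference of two random
    vectors added to a third, apply binomial crossover, keep the trial
    vector only if it is no worse (greedy selection)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)          # guarantee one mutated component
            trial = [
                pop[a][j] + F * (pop[b][j] - pop[c][j])
                if (rng.random() < CR or j == jrand) else pop[i][j]
                for j in range(dim)
            ]
            trial = [min(max(v, lo), hi) for v, (lo, hi) in zip(trial, bounds)]
            fc = f(trial)
            if fc <= cost[i]:                   # greedy selection
                pop[i], cost[i] = trial, fc
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]
```

The sensitivity reported in the abstract shows up directly here: changing F and CR changes convergence behaviour per problem, which is what motivates making them adaptive via fuzzy control.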
Abstract:
The improvement of the dynamics of flexible manipulators such as log cranes often requires advanced control methods. This thesis discusses the vibration problems in the cranes used in commercial forestry machines. Two control methods, adaptive filtering and semi-active damping, are presented. The adaptive filter uses a fraction of the lowest natural frequency of the crane as its filtering frequency. The payload estimation algorithm, the filtering of the control signal, and the algorithm for calculating the lowest natural frequency of the crane are presented. The semi-active damping method is based on pressure feedback: the pressure vibration, scaled with a suitable gain, is added to the control signal of the valve of the lift cylinder to suppress vibrations. The adaptive filter cuts off high-frequency impulses coming from the operator, and semi-active damping suppresses the crane's oscillation, which is often caused by external disturbances. In field tests performed on the crane, a correctly tuned (25 % tuning) adaptive filter reduced pressure vibration by 14-17 %, and semi-active damping correspondingly by 21-43 %. Applying these methods requires auxiliary transducers, installed at specific points in the crane, and electronically controlled directional control valves.
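The adaptive filtering idea — set the filter cutoff to a tuned fraction of the estimated lowest natural frequency so operator commands near the resonance are attenuated — can be sketched as a discrete first-order low-pass filter. This is an illustrative stand-in, not the thesis's filter; the 25 % tuning factor is borrowed from the field-test description, and all names are assumptions.

```python
import math

def lowpass_filter(signal, natural_freq_hz, dt, tuning=0.25):
    """Discrete first-order low-pass filter whose cutoff frequency is a tuned
    fraction of the crane's estimated lowest natural frequency, so command
    components near the resonance are smoothed out."""
    fc = tuning * natural_freq_hz                   # cutoff below the resonance
    alpha = dt / (dt + 1.0 / (2 * math.pi * fc))    # RC discretization
    out, y = [], signal[0]
    for u in signal:
        y += alpha * (u - y)                        # exponential smoothing step
        out.append(y)
    return out
```

As the payload (and hence the estimated natural frequency) changes, `fc` changes with it, which is what makes the filter adaptive rather than fixed.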