51 results for Purification techniques
Abstract:
Formal software development processes and well-defined development methodologies are nowadays seen as the definitive way to produce high-quality software within time limits and budgets. The variety of such high-level methodologies is huge, ranging from rigorous process frameworks like CMMI and RUP to more lightweight agile methodologies. The need for managing this variety, and the fact that practically every software development organization has its own unique set of development processes and methods, have created the profession of software process engineer. Different kinds of informal and formal software process modeling languages are essential tools for process engineers. These are used to define processes in a way which allows easy management of processes, for example process dissemination, process tailoring and process enactment. The process modeling languages are usually used as a tool for process engineering, where the main focus is on the processes themselves. This dissertation has a different emphasis: it analyses modern software development process modeling from the software developers’ point of view. The goal of the dissertation is to investigate whether software process modeling and software process models aid software developers in their day-to-day work, and what the main mechanisms for this are. The focus of the work is on the Software Process Engineering Metamodel (SPEM) framework, which is currently one of the most influential process modeling notations in software engineering. The research theme is elaborated through six scientific articles which represent the dissertation research done on process modeling over an approximately five-year period. The research follows the classical engineering research discipline: the current situation is analyzed, a potentially better solution is developed, and finally its implications are analyzed.
The research applies a variety of research techniques, ranging from literature surveys to qualitative studies done amongst software practitioners. The key finding of the dissertation is that software process modeling notations and techniques are usually developed in process engineering terms. As a consequence, the connection between the process models and actual development work is loose. In addition, modeling standards like SPEM are partially incomplete when it comes to pragmatic process modeling needs, such as lightweight modeling and combining pre-defined process components. This leads to a situation where the full potential of process modeling techniques for aiding daily development activities cannot be achieved. Despite these difficulties, the dissertation shows that it is possible to use modeling standards like SPEM to aid software developers in their work. The dissertation presents a lightweight modeling technique that software development teams can use to quickly analyze their work practices in a more objective manner. The dissertation also shows how process modeling can be used to more easily compare different software development situations and to analyze their differences in a systematic way. Models also help to share this knowledge with others. A qualitative study done amongst Finnish software practitioners confirms the conclusions of the other studies in the dissertation. Although processes and development methodologies are seen as an essential part of software development, process modeling techniques are rarely used during daily development work. However, the potential of these techniques intrigues the practitioners. In conclusion, the dissertation shows that process modeling techniques, most commonly used as tools for process engineers, can also be used as tools for organizing daily software development work.
This work presents theoretical solutions for bringing process modeling closer to ground-level software development activities. These theories are shown to be feasible through several case studies in which the modeling techniques are used, e.g., to find differences in the work methods of the members of a software team and to share process knowledge with a wider audience.
Abstract:
Utilization of biomass-based raw materials for the production of chemicals and materials is gaining increasing interest. Due to the complex nature of biomass, a major challenge in its refining is the development of efficient fractionation and purification processes. Preparative chromatography and membrane filtration are selective, energy-efficient separation techniques which offer great potential for biorefinery applications. Both of these techniques have been widely studied; on the other hand, only a few process concepts that combine the two methods have been presented in the literature. The aim of this thesis was to find the possible synergistic effects provided by combining chromatographic and membrane separations, with a particular interest in biorefinery separation processes. Such knowledge could be used in the development of new, more efficient separation processes for isolating valuable compounds from the complex feed solutions that are typical of the biorefinery environment. Separation techniques can be combined in various ways, from simple sequential coupling arrangements to fully integrated hybrid processes. In this work, different types of combined separation processes as well as conventional chromatographic separation processes were studied for separating small molecules such as sugars and acids from biomass hydrolysates and spent pulping liquors. The combination of chromatographic and membrane separation was found capable of recovering high-purity products from complex solutions. For example, hydroxy acids of black liquor were successfully recovered using a novel multistep process based on ultrafiltration and size-exclusion chromatography. Unlike any other separation process earlier suggested for this challenging separation task, the new process concept does not require an acidification pretreatment, and thus it could be more readily integrated into a pulp-mill biorefinery.
In addition to the combined separation processes, steady-state recycling chromatography, which has earlier been studied only for small-scale separations of high-value compounds, was found to be a promising process alternative for biorefinery applications. In comparison to conventional batch chromatography, recycling chromatography provided higher product purity, increased the production rate and reduced the chemical consumption in the separation of monosaccharides from biomass hydrolysates. In addition, a significant further improvement in process performance was obtained when a membrane filtration unit was integrated with recycling chromatography. In light of the results of this work, separation processes based on combining membrane and chromatographic separations could be effectively applied to a range of biorefinery applications. The main challenge remains the development of inexpensive separation materials which are resistant to harsh process conditions and fouling.
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
Purification of hydrocarbon waste streams is needed to recycle valuable hydrocarbon products, reduce hazardous impacts on the environment, and save energy. To achieve these goals, research must focus on the search for effective and feasible purification and re-refining technologies. Hydrocarbon waste streams can contain both additives deliberately added to the original product and undesired contaminants accumulated during the operation cycle. Compounds may also have degenerated or cross-reacted; thus, the presence of unknown species causes additional challenges for the purification process. An adsorption process is most suitable for reducing impurities to very low concentrations. Its main advantages are the availability of selective commercial adsorbents and the option of regenerating and reusing the separation material. In the experimental part, a used hydrocarbon fraction was purified with various separation materials. First, a screening of suitable materials was carried out. In the second stage, temperature dependence and adsorption kinetics were studied. Finally, one fixed-bed experiment was done with the most suitable material. Additionally, FTIR measurements of the hydrocarbon samples were carried out to develop a model for monitoring the concentrations of three target impurities based on spectral data. The adsorption capacities of the tested separation materials were found to be too low to achieve sufficiently high removal efficiencies for the target impurities. Based on the obtained data, a batch process would be more suitable than a fixed-bed process, and operation at high temperatures is favorable. An additional pretreatment step is recommended to improve removal efficiency. FTIR measurement proved to be a reliable and fast analysis method for challenging hydrocarbon samples.
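Batch adsorption results of this kind are commonly summarized with a simple mass balance and an isotherm model. The sketch below uses the widely applied Langmuir form with invented numbers, purely as an illustration; the thesis does not specify which isotherm model, if any, was fitted:

```python
# Sketch: batch-adsorption mass balance and a Langmuir isotherm.
# All numbers are illustrative, not measurements from the thesis.

def langmuir(c_eq, q_max, k):
    """Equilibrium loading q [mg/g] at liquid-phase concentration c_eq [mg/L]."""
    return q_max * k * c_eq / (1.0 + k * c_eq)

def batch_loading(c0, c_eq, volume_l, mass_g):
    """Loading from a batch mass balance: q = V * (c0 - c_eq) / m."""
    return volume_l * (c0 - c_eq) / mass_g

# Example: 0.1 L of solution, 1 g of adsorbent, impurity drops from 50 to 20 mg/L
q = batch_loading(50.0, 20.0, 0.1, 1.0)   # loading in mg/g
```

Comparing such measured loadings with the isotherm's plateau (`q_max`) is one quick way to judge whether a material's capacity is, as found here, too low for the target removal efficiency.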
Abstract:
The amount of biological data has grown exponentially in recent decades. Modern biotechnologies, such as microarrays and next-generation sequencing, are capable of producing massive amounts of biomedical data in a single experiment. As the amount of data is rapidly growing, there is an urgent need for reliable computational methods for analyzing and visualizing it. This thesis addresses this need by studying how to efficiently and reliably analyze and visualize high-dimensional data, especially data obtained from gene expression microarray experiments. First, we study ways to improve the quality of microarray data by replacing (imputing) the missing data entries with estimated values. Missing value imputation is commonly used to make the original incomplete data complete, thus making it easier to analyze with statistical and computational methods. Our novel approach was to use curated external biological information as a guide for the missing value imputation. Secondly, we studied the effect of missing value imputation on downstream data analysis methods such as clustering. We compared multiple recent imputation algorithms on 8 publicly available microarray data sets. It was observed that missing value imputation is indeed a rational way to improve the quality of biological data. The research revealed differences between the clustering results obtained with different imputation methods. On most data sets, the simple and fast k-NN imputation was good enough, but there was also a need for more advanced imputation methods, such as Bayesian Principal Component Analysis (BPCA). Finally, we studied the visualization of biological network data. Biological interaction networks are a typical outcome of multiple biological experiments, such as gene microarray studies.
Such networks are typically very large and highly connected, so there is a need for fast algorithms that produce visually pleasing layouts. A computationally efficient way to produce layouts of large biological interaction networks was developed. The algorithm uses multilevel optimization within a regular force-directed graph layout algorithm.
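The k-NN imputation mentioned above can be sketched as follows. This is a generic illustration (row-wise Euclidean distance over the columns observed in both rows), not the thesis's exact implementation:

```python
# A minimal sketch of k-NN missing-value imputation for a gene-expression
# matrix (rows = genes, columns = samples); None marks a missing entry.

def knn_impute(matrix, k=2):
    def distance(a, b):
        # Euclidean distance over columns observed in both rows.
        pairs = [(x, y) for x, y in zip(a, b) if x is not None and y is not None]
        if not pairs:
            return float("inf")
        return (sum((x - y) ** 2 for x, y in pairs) / len(pairs)) ** 0.5

    result = [row[:] for row in matrix]
    for i, row in enumerate(matrix):
        for j, value in enumerate(row):
            if value is None:
                # Rank the other rows that observe column j by similarity to row i,
                # then average the k nearest values for that column.
                neighbours = sorted(
                    (distance(row, other), other[j])
                    for other in matrix
                    if other is not row and other[j] is not None
                )[:k]
                result[i][j] = sum(v for _, v in neighbours) / len(neighbours)
    return result

data = [[1.0, 2.0, 3.0],
        [1.1, None, 3.1],
        [0.9, 2.1, 2.9]]
filled = knn_impute(data, k=2)  # missing entry estimated from the two nearest genes
```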
Abstract:
Switching power supplies are usually implemented with control circuitry that turns the power semiconductor switches on and off at a constant clock frequency. A drawback of this customary operating principle is that the switching frequency and its harmonics are present in both the conducted and radiated EMI spectrum of the power converter. Various variable-frequency techniques have been introduced during the last decade to overcome this EMC problem. The main objective of this study was to compare the EMI and steady-state performance of a switch-mode power supply under different spread-spectrum/variable-frequency methods. Another goal was to find suitable tools for variable-frequency EMI analysis. This thesis can be divided into three main parts: first, some aspects of spectral estimation and measurement are presented; second, selected spread-spectrum generation techniques are presented with simulations and background information; finally, simulations and prototype measurements of the EMC and steady-state performance are carried out. A combination of the autocorrelation function, the Welch spectrum estimate and the spectrogram was used as a substitute for ordinary Fourier methods in the EMC analysis. It was also shown that the switching function can be used in preliminary EMC analysis of an SMPS, and that the spectrum and autocorrelation sequence of the switching function correlate with the final EMI spectrum. This work is based on numerous simulations and measurements made with a prototype; all of them use a boost DC/DC converter. Four variable-frequency modulation techniques in six different configurations were analyzed, and their EMI performance was compared to constant-frequency operation. Output voltage and input current waveforms were also analyzed in the time domain to see the effect of spread-spectrum operation on these quantities.
According to the results presented in this work, spread-spectrum modulation can be utilized in a power converter for EMI mitigation. The results from steady-state voltage measurements show that variable-frequency operation of the SMPS has an effect on the voltage ripple, but the ripple measured from the prototype is still acceptable for some applications. Both current and voltage ripple can be controlled with proper main circuit and controller design.
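The effect of spreading the switching frequency can be illustrated with a small simulation of the switching function, in the spirit of the preliminary EMC analysis described above. The sample rate, nominal frequency, and random frequency-modulation scheme below are illustrative choices, not the thesis's parameters:

```python
import numpy as np

# Sketch: spectrum of a constant-frequency switching function vs. a
# randomized (spread-spectrum) one. All parameters are illustrative.

rng = np.random.default_rng(0)
fs = 1_000_000          # sample rate, Hz
n = 2 ** 16
f0 = 50_000.0           # nominal switching frequency, Hz

def switching_wave(inst_freq):
    """Square-wave switching function from an instantaneous-frequency array."""
    phase = 2 * np.pi * np.cumsum(inst_freq) / fs
    return np.sign(np.sin(phase))

constant = switching_wave(np.full(n, f0))
# Random frequency modulation: +/-10 % jitter, held constant over 64-sample blocks
jitter = rng.uniform(-0.1, 0.1, n // 64).repeat(64)
spread = switching_wave(f0 * (1 + jitter))

def peak_db(x):
    """Peak magnitude of the windowed spectrum, in dB."""
    spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x)))) / len(x)
    return 20 * np.log10(spectrum.max())

attenuation = peak_db(constant) - peak_db(spread)  # dB reduction of the EMI peak
```

The randomized waveform carries the same power, but its spectral energy is smeared over a band around the nominal frequency, lowering the peak that dominates an EMI measurement.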
Abstract:
Graphene is a material with extraordinary properties. Its mechanical and electrical properties are unparalleled, but the difficulties in its production are hindering its breakthrough in applications. Graphene is a two-dimensional material made entirely of carbon atoms, and it is only a single atom thick. In this work, the properties of graphene and graphene-based materials are described, together with their common preparation techniques and related challenges. This Thesis concentrates on the top-down techniques, in which natural graphite is used as a precursor for graphene production. Graphite consists of graphene sheets stacked tightly together. In the top-down techniques, various physical or chemical routes are used to overcome the forces keeping the graphene sheets together, and many of them are described in the Thesis. The most common chemical method is the oxidation of graphite with strong oxidants, which creates water-soluble graphene oxide. The properties of graphene oxide differ significantly from those of pristine graphene and, therefore, graphene oxide is often reduced to form materials collectively known as reduced graphene oxide. In the experimental part, the main focus is on the chemical and electrochemical reduction of graphene oxide. A novel chemical route using vanadium is introduced and compared to other common chemical graphene oxide reduction methods. A strong emphasis is placed on the electrochemical reduction of graphene oxide in various solvents. Raman and infrared spectroscopy are both used in in situ spectroelectrochemistry to closely monitor the spectral changes during the reduction process. These in situ techniques allow precise control over the reduction process, and even small changes in the material can be detected. Graphene and few-layer graphene were also prepared using physical force to separate these materials from graphite.
Special adsorbate molecules in aqueous solutions, together with sonic treatment, produce stable dispersions of graphene and few-layer graphene sheets in water. This mechanical exfoliation method damages the graphene sheets considerably less than the chemical methods, although it suffers from a lower yield.
Abstract:
This work studied the possibility of enhancing municipal wastewater treatment using membrane technology. The theoretical part reviews the properties of municipal wastewater and the conventional treatment process. In addition, the general properties of membranes, the ability of different membrane types to separate the compounds present in wastewater, and various options for membrane filtration in connection with the conventional wastewater treatment process are discussed. The experimental part focused on improving the quality of the treated effluent of the Savitaipale treatment plant with a membrane-filtration-based post-treatment unit, i.e. tertiary filtration. Different pressure-driven membrane techniques (MF, UF, NF and RO) were studied for purifying the wastewater and for comparing their separation efficiencies. The aim was to find an efficient and practically workable way to remove phosphorus and nitrogen from the treated wastewater. In the filtrations carried out with flat-sheet membranes, the MF membranes removed phosphorus (97–98 %) and suspended solids (100 %) efficiently from the wastewater. The RO membranes reduced the phosphorus (100 %) and nitrogen (90–94 %) concentrations efficiently, also removing dissolved organic compounds (DOC, 90–94 %). With microfiltration-based hollow-fiber membranes, a stable capacity was achieved in addition to efficient phosphorus removal by optimizing the cleaning procedure used during filtration. The concentration reductions were also very high for phosphorus (97–99 %), suspended solids (100 %), turbidity (96–99 %) and COD (38–55 %). Based on the results, hollow-fiber filtration based on membrane technology would be an efficient and workable post-treatment process for phosphorus removal and for improving wastewater quality. The RO membrane performed best for nitrogen removal.
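Removal percentages of the kind reported for membrane processes are typically observed rejections computed from the feed and permeate concentrations. A minimal sketch with illustrative numbers (not measurements from the study):

```python
# Sketch: observed rejection (removal efficiency) of a membrane,
# R = (1 - c_permeate / c_feed) * 100 %. Numbers are illustrative.

def rejection_percent(c_feed, c_permeate):
    """Removal efficiency in %, both concentrations in the same units."""
    return (1.0 - c_permeate / c_feed) * 100.0

# Feed with 0.50 mg/L phosphorus, permeate with 0.01 mg/L
r = rejection_percent(0.50, 0.01)   # removal efficiency in %
```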
Abstract:
In this thesis, stepwise titration with hydrochloric acid was used to determine the chemical reactivities and dissolution rates of ground limestones and dolostones of varying geological backgrounds (sedimentary, metamorphic or magmatic). The calculations were carried out in two different ways: 1) a first-order mathematical model was used to calculate extrapolated initial reactivities (and dissolution rates) at pH 4, and 2) a second-order mathematical model was used to acquire integrated mean specific chemical reaction constants (and dissolution rates) at pH 5. The calculations of the reactivities and dissolution rates were based on the rate of change of pH and on the particle size distributions of the sample powders obtained by laser diffraction. The initial dissolution rates at pH 4 were repeatedly higher than previously reported literature values, whereas the dissolution rates at pH 5 were consistent with earlier observations. Reactivities and dissolution rates varied substantially for dolostones, whereas for limestones and calcareous rocks the variation can be primarily explained by relatively large sample standard deviations. In decreasing order of initial reactivity at pH 4, the dolostone samples rank as follows:
1) metamorphic dolostones with a calcite/dolomite ratio higher than about 6 %,
2) sedimentary dolostones without calcite,
3) metamorphic dolostones with a calcite/dolomite ratio lower than about 6 %.
The reactivity and dissolution-rate measurements were accompanied by a wide range of experimental techniques to characterise the samples, to reveal how different rocks changed during the dissolution process, and to find out which factors influenced their chemical reactivities. An emphasis was put on the chemical and morphological changes taking place at the surfaces of the particles, studied via X-ray Photoelectron Spectroscopy (XPS) and Scanning Electron Microscopy (SEM).
Supporting chemical information was obtained with X-Ray Fluorescence (XRF) measurements of the samples, and with Inductively Coupled Plasma-Mass Spectrometry (ICP-MS) and Inductively Coupled Plasma-Optical Emission Spectrometry (ICP-OES) measurements of the solutions used in the reactivity experiments. Information on mineral (modal) compositions and their occurrence was provided by X-Ray Diffraction (XRD), Energy Dispersive X-ray analysis (EDX) and by studying thin sections with a petrographic microscope. BET (Brunauer, Emmett, Teller) surface areas were determined from nitrogen physisorption data. The factors found to increase the chemical reactivity of dolostones and calcareous rocks were sedimentary origin, a higher calcite concentration and a smaller quartz concentration. It is also assumed that a finer grain size and a larger BET surface area increase the reactivity, although no clear correlation was found in this thesis. Atomic concentrations did not correlate with the reactivities. Sedimentary dolostones, unlike metamorphic ones, were found to have porous surface structures after dissolution. In addition, conventional (XPS) and synchrotron-based (HRXPS) X-ray Photoelectron Spectroscopy were used to study bonding environments on calcite and dolomite surfaces. Both samples are insulators, which is why neutralisation measures such as an electron flood gun and a conductive mask were used. Surface core level shifts of 0.7 ± 0.1 eV for the Ca 2p spectrum of calcite and 0.75 ± 0.05 eV for the Mg 2p and Ca 3s spectra of dolomite were obtained. Some satellite features of the Ca 2p, C 1s and O 1s spectra are suggested to be bulk plasmons. The origin of the carbide bonds was suggested to be beam-assisted interaction with hydrocarbons found on the surface. The results presented in this thesis are of particular importance for choosing raw materials for wet Flue Gas Desulphurisation (FGD) and for the construction industry.
Wet FGD benefits from high reactivity, whereas the construction industry can take advantage of the low reactivity of carbonate rocks, which are often used in the facades of fine buildings. Information on chemical bonding environments may help to create more accurate models for the water-rock interactions of carbonates.
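For orientation, first- and second-order kinetic treatments of this kind are often written in terms of the conversion X of the dissolving mineral. The generic forms below are an assumption for illustration; the exact formulations used in the thesis may differ:

```latex
% First order: rate proportional to the undissolved fraction
\frac{dX}{dt} = k_1\,(1 - X) \quad\Longrightarrow\quad X(t) = 1 - e^{-k_1 t}

% Second order: rate proportional to the square of the undissolved fraction
\frac{dX}{dt} = k_2\,(1 - X)^2 \quad\Longrightarrow\quad X(t) = \frac{k_2 t}{1 + k_2 t}
```

In both generic forms the rate at $t = 0$ equals the rate constant, which is one way an "extrapolated initial reactivity" can be read off from fitted data.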
Abstract:
The purpose of this work was to study the purification of animal fat for biodiesel production. Animal fat is generated as a by-product of the food industry and is also obtained from unsold foodstuffs. The fat contains impurities that must be removed before the biodiesel process. The impurities studied in this work are nitrogen, phosphorus, iron, sodium, calcium and magnesium. The purification methods used were precipitation with citric acid and adsorption with two different adsorbents. The aim was to determine a sufficient amount of acid and adsorbent and to investigate the purification mechanism. In addition, the effect of temperature during adsorption was examined.
Abstract:
This thesis considers optimization problems arising in printed circuit board assembly. In particular, the case in which the electronic components of a single circuit board are placed using a single placement machine is studied. Although there is a large number of different placement machines, the use of collect-and-place type gantry machines is discussed because of their flexibility and increasing popularity in the industry. Instead of solving the entire control optimization problem of a collect-and-place machine with a single application, the problem is divided into multiple subproblems because of its hard combinatorial nature. This dividing technique is called hierarchical decomposition. All the subproblems of the one PCB, one machine context are described, classified and reviewed. The derived subproblems are then either solved with exact methods, or new heuristic algorithms are developed and applied. The exact methods include, for example, a greedy algorithm and a solution based on dynamic programming. Some of the proposed heuristics contain constructive parts, while others utilize local search or are based on frequency calculations. Comprehensive experimental tests ensure that the heuristics are applicable and feasible. A number of quality functions are proposed for evaluation and applied to the subproblems. In the experimental tests, artificially generated data from Markov models and data from real-world PCB production are used. The thesis consists of an introduction and five publications where the developed and used solution methods are described in full detail. For all the problems stated in this thesis, the proposed methods are efficient enough to be used in practical PCB assembly production and are readily applicable in the PCB manufacturing industry.
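As a toy example of the kind of subproblem heuristic involved, a greedy nearest-neighbour rule can order placements so that the placement head travels a short path. The coordinates are hypothetical and this is not one of the thesis's actual algorithms, merely an illustration of a constructive heuristic:

```python
# Sketch: greedy nearest-neighbour ordering of placement points, so the
# placement head always moves to the closest unplaced component next.
# Coordinates are invented for illustration.

def placement_order(points, start=(0.0, 0.0)):
    """Visit every placement point, always moving to the nearest unvisited one."""
    remaining = list(points)
    order, position = [], start
    while remaining:
        nearest = min(
            remaining,
            key=lambda p: (p[0] - position[0]) ** 2 + (p[1] - position[1]) ** 2,
        )
        remaining.remove(nearest)
        order.append(nearest)
        position = nearest
    return order

pads = [(5.0, 5.0), (1.0, 0.0), (1.0, 1.0), (4.0, 5.0)]
tour = placement_order(pads)  # starts with the pad closest to the origin
```

Greedy construction like this is fast but myopic, which is why such heuristics are typically combined with local search or compared against exact methods, as described above.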
Abstract:
Acid sulfate (a.s.) soils constitute a major environmental issue. Severe ecological damage results from the considerable amounts of acidity and metals leached by these soils into the recipient watercourses. As even small hot spots may affect large areas of coastal waters, mapping represents a fundamental step in the management and mitigation of a.s. soil environmental risks (i.e. to target strategic areas). Traditional mapping in the field is time-consuming and therefore expensive. More cost-effective techniques thus have to be developed in order to narrow down and define in detail the areas of interest. The primary aim of this thesis was to assess different spatial modeling techniques for a.s. soil mapping and for the characterization of soil properties relevant for a.s. soil environmental risk management, using all available data: soil and water samples as well as datalayers (e.g. geological and geophysical). Different spatial modeling techniques were applied at catchment or regional scale. Two artificial neural networks were assessed on the Sirppujoki River catchment (c. 440 km2) in southwestern Finland, while fuzzy logic was assessed on several areas along the Finnish coast. Quaternary geology, aerogeophysics and slope data (derived from a digital elevation model) were utilized as evidential datalayers. The methods also required point datasets (i.e. soil profiles corresponding to known a.s. or non-a.s. soil occurrences) for training and/or validation within the modeling processes. Applying these methods, various maps were generated: probability maps for a.s. soil occurrence, as well as predictive maps for different soil properties (sulfur content, organic matter content and critical sulfide depth). The two assessed artificial neural networks (ANNs) demonstrated good classification abilities for a.s. soil probability mapping at catchment scale.
Slightly better results were achieved using a Radial Basis Function (RBF)-based ANN than a Radial Basis Functional Link Net (RBFLN) method, narrowing down more accurately the most probable areas for a.s. soil occurrence and defining the least probable areas more properly. The RBF-based ANN also demonstrated promising results for the characterization of different soil properties in the most probable a.s. soil areas at catchment scale. Since a.s. soil areas constitute highly productive lands for agricultural purposes, the combination of a probability map with more specific soil property predictive maps offers a valuable toolset to more precisely target strategic areas for subsequent environmental risk management. Notably, the use of laser scanning (i.e. Light Detection And Ranging, LiDAR) data enabled a more precise definition of a.s. soil probability areas, as well as of the soil property modeling classes for sulfur content and critical sulfide depth. Given suitable training/validation points, ANNs can be trained to yield a more precise modeling of the occurrence of a.s. soils and their properties. By contrast, fuzzy logic represents a simple, fast and objective alternative for carrying out preliminary surveys, at catchment or regional scale, in areas offering a limited amount of data. This method enables delimiting and prioritizing the most probable areas for a.s. soil occurrence, which can be particularly useful in the field. Being easily transferable from area to area, fuzzy logic modeling can be carried out at regional scale. Mapping at this scale would be extremely time-consuming through manual assessment. The use of spatial modeling techniques enables the creation of valid and comparable maps, which represents an important development within the a.s. soil mapping process. The a.s. soil mapping was also assessed using water chemistry data for 24 different catchments along the Finnish coast (in all, covering c.
21,300 km2) which were mapped with different methods (i.e. conventional mapping, fuzzy logic and an artificial neural network). Two a.s. soil related indicators measured in the river water (sulfate content and sulfate/chloride ratio) were compared to the extent of the most probable areas for a.s. soils in the surveyed catchments. High sulfate contents and sulfate/chloride ratios measured in most of the rivers demonstrated the presence of a.s. soils in the corresponding catchments. The calculated extent of the most probable a.s. soil areas is supported by independent data on water chemistry, suggesting that the a.s. soil probability maps created with different methods are reliable and comparable.
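As an illustration of the fuzzy-logic approach, evidential layers can be rescaled to [0, 1] membership values and combined per map cell, for instance with the classical fuzzy gamma operator. The membership functions, thresholds and cell values below are invented for illustration; the actual evidential layers and operators are described in the thesis:

```python
# Sketch: fuzzy-logic overlay of evidential layers for one map cell.
# Membership functions and all numbers are invented for illustration.

def membership(value, low, high):
    """Linear ramp membership: 0 below `low`, 1 above `high`."""
    if value <= low:
        return 0.0
    if value >= high:
        return 1.0
    return (value - low) / (high - low)

def fuzzy_and(*memberships):
    """Conservative combination: minimum of the memberships."""
    return min(memberships)

def fuzzy_gamma(memberships, gamma=0.5):
    """Compromise between fuzzy product (AND-like) and algebraic sum (OR-like)."""
    product = 1.0
    complement = 1.0
    for m in memberships:
        product *= m
        complement *= 1.0 - m
    return ((1.0 - complement) ** gamma) * (product ** (1.0 - gamma))

# One map cell: three evidential layers rescaled to memberships
layer_a = membership(2.0, 0.0, 5.0)        # e.g. a geophysical layer
score = fuzzy_gamma([layer_a, 0.9, 0.8])   # combined a.s. soil favourability
```

The gamma operator is popular in this kind of favourability mapping because it avoids the extremes of the pure AND (one weak layer vetoes the cell) and pure OR (one strong layer dominates).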
Abstract:
This study reviews the research on interaction techniques and methods that could be applied in mobile augmented reality scenarios. The review is focused on the most recent advances and considers especially the use of head-mounted displays. In the review process, we have followed a systematic approach, which makes the review transparent, repeatable, and less prone to human errors than if it was conducted in a more traditional manner. The main research subjects covered in the review are head orientation and gaze tracking, gestures and body-part tracking, and multimodality, as far as these subjects are related to human-computer interaction. Besides these, a number of other areas of interest will also be discussed.
Abstract:
A method to synthesize ethyl β-ᴅ-glucopyranoside (BEG) was sought, and the feasibility of different ion exchange resins for purifying the product from a synthetic binary solution of BEG and glucose was examined. The target was to produce at least 50 grams of 99 % pure BEG with a scaled-up process. Another target was to transfer the batch process into a steady-state recycling chromatography (SSR) process. BEG was synthesized enzymatically by reverse hydrolysis utilizing β-glucosidase as a catalyst; 65 % of the glucose reacted with ethanol into BEG during the synthesis. Different ion-exchange resins were examined for separating BEG from glucose. Based on batch chromatography experiments, the best adsorbent was chosen between styrene-based strong acid cation exchange resins (SAC) and acrylic-based weak acid cation exchange resins (WAC). The CA10GC WAC resin in Na+ form was chosen for the further separation studies. To produce greater amounts of the product, the batch process was scaled up. The adsorption isotherms of the components were linear. The target purity could be reached in batch mode, even without recycling, with a sufficiently low flow rate and small injection size, and 99 % pure product was produced with the scaled-up batch process. The batch process was then transferred to an SSR process utilizing data from design pulse chromatograms and Matlab simulations, and the optimal operating conditions for the system were determined. Comparing the batch and SSR separation results, SSR yielded 98 % pure products with 40 % higher productivity and 40 % lower eluent consumption than a batch process producing equally pure products.
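With linear isotherms, as found here, retention in the column is governed by each component's distribution (Henry) constant. The sketch below, with invented values for the void time, phase ratio and constants (not the measured CA10GC data), shows how retention times and selectivity follow:

```python
# Sketch: retention under a linear isotherm q = K * c, using the standard
# relation t_R = t0 * (1 + F * K). All numeric values are illustrative.

def retention_time(t0, phase_ratio, henry_k):
    """Retention time for a solute with Henry constant K: t_R = t0 * (1 + F * K)."""
    return t0 * (1.0 + phase_ratio * henry_k)

t0 = 10.0      # void (hold-up) time of the column, min
F = 1.5        # stationary/mobile phase ratio
t_glucose = retention_time(t0, F, 0.30)      # hypothetical K for glucose
t_beg = retention_time(t0, F, 0.55)          # hypothetical K for BEG
selectivity = (t_beg - t0) / (t_glucose - t0)  # equals the ratio of the K values
```

Because the isotherms are linear, the selectivity reduces to the ratio of the Henry constants and is independent of loading, which is what makes scale-up and the design of a recycling (SSR) scheme from pulse chromatograms tractable.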